/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.

(18.09 KB 480x360 LB5YkmjalDg.jpg)

Version 333 hydrus_dev 12/05/2018 (Wed) 23:13:16 Id: 52b785 No. 10909
https://www.youtube.com/watch?v=LB5YkmjalDg

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v333/Hydrus.Network.333.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v333/Hydrus.Network.333.-.Windows.-.Installer.exe
os x
app: https://github.com/hydrusnetwork/hydrus/releases/download/v333/Hydrus.Network.333.-.OS.X.-.App.dmg
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v333/Hydrus.Network.333.-.OS.X.-.Extract.only.tar.gz
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v333/Hydrus.Network.333.-.Linux.-.Executable.tar.gz
source
tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v333.tar.gz

🎉🎉 Merry v333! 🎉🎉

I was slightly short on time, but I still had a great week. There are some fixes, some speedups, and a bit of fun.

file viewing statistics

The client now records how often (and for how long) each file has been viewed in the preview window and main media viewer! You will see how many times you have viewed a file in its normal thumbnail and media right-click menus. You can customise how this info displays in the menu, including hiding it completely, under options->media. You can also sort by total views or viewtime under the normal file sort dropdown (which is now itself alphabetical and has improved labels).

This is only a first version, so it isn't perfect. The total viewtime isn't updated until you finish looking at a file, for instance, and counts will sometimes be temporarily desynchronised. I expect to revisit it, maybe adding the stats to the main media viewer's top-right window and making the current viewtime live, so the seconds count up correctly as you look at something. I'll add a system predicate for it as well, so you'll be able to search for things you've seen at least x times and so on. I will add an option to turn these stats off completely next week.

What do you get here? Do you go back to the same twenty files over and over, or are things more evenly distributed? Once we have fleshed this data out, I could draw some distribution graphs or do some more innovative 'show me some infrequent stuff' searching.

another cache

After the good file and tag work of the past few weeks, I've written a higher-level cache that significantly streamlines how gui-level media is stored in the client. There is now only ever one copy of each media object, with all thumbnails and media viewers pointing at that same one (see the sketch after this post). This reduces total memory usage and CPU in many situations, makes it possible to immediately show content changes after advanced updates like tag repository processing, and speeds up certain searches, as duplicate media objects do not have to be recreated from scratch at the db level.

I am quite pleased with this cache. I have been thinking about it for a long time. Please let me know if you have any problems with it (likely it would be some variation on "I changed a tag/rating/whatever, but the file doesn't show the change, even when I refresh the search").

tumblr madness

If you use tumblr for lewd purposes and missed the news this week, tumblr have gone nuts and decided to ban all nsfw content off their platform on Dec 17th. There's been some corporate drama related to the tumblr app, but no one knows what's really going on with this overbroad new decision. My assumption is Verizon haven't been able to find a workable business model, so they are seizing this chance to reduce liabilities rather than continue throwing money away. Maybe they'll try to find clean ads to run like 4chan are currently doing with 4channel, or maybe they expect to kill the whole thing softly over the next few years. And perhaps the outcry will convince them to reverse the decision, but don't plan on it.

So! If you had plans to download from some nsfw tumblr, get it going now. Go visit the creator's blog and see where they are migrating to so you can figure out new subscriptions. If you have a big tumblr yourself that is about to get semi-nuked due to your reblogs, there may also be a clever one-time way to get higher quality versions of what you reblogged, as per here: >>10905

full list

- added a first version of file viewing statistics! the client db now keeps track of how many times a file is loaded in the preview and full media viewers, and for how long!
- you can see the media and preview stats on any single media right-click menu. there are multiple options for how this displays, including hiding it completely, under options->media
- viewing stats update as they happen! (although viewtime typically only updates at the end of viewing. I'll likely make this more live, especially if I end up showing this info in the main media viewer)
- you can now sort files by total media views/viewtime!
- mr. bones's wild ride continues, as well
- deleted the old 'file list' way of updating in-ui media objects in favour of a long-planned global media cache. there is now only ever one active copy of any particular media, and all data-level updates need only occur once on that single copy. this saves a bunch of CPU, memory, and overall hassle behind the scenes! various search results/lookups for media already loaded elsewhere now load super fast!
- tag siblings refresh is quicker and less memory heavy thanks to this as well
- furthermore, the complicated tag changes from tag repository processing and advanced content updates are now reflected immediately in the gui on the job's completion! (as long as you have fewer than 10k files open, ha ha) previously, these required a search refresh to show the results
- the file sort choice dropdown on all pages is now sorted alphabetically. it has always been a mess picking what you want from here, so let's see if this helps!
- tag and rating sort options are now listed as 'tag:' and 'rating:' respectively
- fixed some misc file sort choice code, which was failing to keep certain defaults in certain situations
- fixed the tag import options' new 'load from defaults' button to correctly load the tag blacklist
- the keyboard icon on the media viewer's top hover window now permits activation of current/default shortcut sets under submenus. it now also omits these entries if no custom shortcut sets exist
- cleaned up some of the hover_window-canvas interaction code
- fixed some long-time sperg-out buffer-drawing when changing position in a long video
- the database->backup actions are now hidden if the current db has non-default file/thumbnail locations. for now, in these cases, only a custom backup is appropriate
- fixed some ancient repository admin code that fetches summary account info given an account key
- the filename tagging dialog now has a much shorter listctrl by default, so should fit better on smaller monitors
- fixed the 'review session cookies' dialog's clear button, which was not deleting sessions after clear. it now also wraps the operation in a yes/no confirmation

next week

I am still planning to take about four weeks over Christmas for a big conversion to python 3. This break starts next week, December 12th. I will use this last week just to tidy up, fix any stupid bugs, and write some more help so there is a clean 'final' v334 python 2 build.

The poll for the 'next big thing' is finished, and prototyping a new client API won out. I will start this in the new year and hope to make it an 8-12 week job before starting up a new poll (or possibly going straight on to OR searching, which was a very close second).
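[Editor's note: by way of illustration, here is a minimal Python sketch of the kind of global "one copy per media object" cache the 'another cache' section describes. The names (MediaResultCache, load_from_db, update_media) are hypothetical, not hydrus's actual API.]

```python
import weakref

class MediaResultCache:
    """Keep at most one live copy of each media object, keyed by file hash."""

    def __init__(self):
        # Weak values mean an entry disappears automatically once no
        # thumbnail or viewer holds the object any more.
        self._hash_to_media = weakref.WeakValueDictionary()

    def get_media(self, file_hash, load_from_db):
        # Reuse the shared copy if any view already holds it; otherwise
        # build it once from the db-level result.
        media = self._hash_to_media.get(file_hash)
        if media is None:
            media = load_from_db(file_hash)
            self._hash_to_media[file_hash] = media
        return media

    def process_content_update(self, file_hash, update_media):
        # Every view shares the same object, so applying an update here is
        # immediately visible everywhere, with no search refresh needed.
        media = self._hash_to_media.get(file_hash)
        if media is not None:
            update_media(media)
```

Weak references are one plausible way to get "only ever one active copy" without the cache itself pinning every object in memory.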
(406.18 KB 500x254 F6aEeh0[1].gif)

>>10909 Mr. Bones Got Boned
>>10910 Unfortunately, Mr. Bones is unable to understand any client that has yet to view any media. Please click some thumbs, and the ride will continue.
>>10909 >>10901 Won't this require you to mass reblog stuff to get it? Might be faster to make a throwaway account and mass reblog an artist's posts or whatever. Is there a script to automate this?
>>10909 Thanks for the update, looking forward to giving the new cache a whirl! >>10906 I'm guessing that when they became the main recognized center for the jihad against white males, either they took it upon themselves to diversify their workforce or the majority of their back-end devs just quietly packed up and moved on. Two acquisitions later and they need an app on the Apple store to stay relevant to their target audience, and Apple in a surprising turn of events actually upholds some sort of standard against the endless CP rings the admins were either participating in and helping hide or genuinely hilariously ignorant of, BEFORE being exposed and forced into it. So now with no time and a new set of programmers probably mostly from Yahoo or Verizon rather than original devs, Verizon adds in a shovelware "automated solution". What's hilarious is all of the articles talking about how Tumblr used to be a "champion of net neutrality and free expression". As for questionable business decisions, I'm not sure why Verizon bought Yahoo at all or even why it still exists when Charter/Spectrum/Time Warner/whatever is just that tiny nudge less shitty. Anyways, not ALL nsfw porn is gone, you can still see gender confirmation surgery pics to help brave wymyn celebrate their outer bodies matching their inner spirits!
>>10909 With 4chan, they lost Stripe, so they can't take credit cards for passes anymore, and no one will work with them, to the point you can only use crypto. For me, my bank was one of the ones that will cut every tie they have with you over crypto, or at least had that as a policy for a time. I can't risk that, and as much as I like posting on 4chan, I am not doing it if it's a five-captcha-long 'train our AI for Google' chain or potentially losing my ability to use a credit card. As for tumblr, look at every single porn business: it's well bought, it's well trafficked, but not a single major player will allow themselves to be tied to these sites. Everyone makes it as hard as possible to do business with anyone who is part of porn. Verizon doesn't see this as a hill worth dying on, and likely sees the rest of tumblr as autistic bullshit that will never cause them real damage. If it was real cp, why would you possibly stick your neck out for them?
>>10914 Was it actual cp? I'm not able to figure out if it was just loli or actual cp that got them shut down. I really don't want to be googling 'was there cp on tumblr', because that's kind of just asking for a party van to show up.
>>10919 Nah I'm sure it was legit cp, just like how twitter has legit cp rings and youtube has those elsa, spider-man, fingerpaint, etc. videos.
(47.38 KB 575x575 7-11.jpg)

>>10909
>tumblr commits suicide
Another perfect example of why you should download fucking everything, as if the removal of easy raws wasn't enough.
Is there a way to search through Notes, or to see which images have Notes written on them?
ok, downloaded some tumblrs for the eventual purge, and got 800 ignores on one and 300 on another. The only way to open the logs for them that I know is to highlight the query, then scroll down to the logs. Would there be a way to see the search logs and the file logs via a right-click menu, without needing to highlight them?
Is it possible to have an 'if getting from this domain, do not bother unless logged in' rule? I keep having to put in two hentai foundry links every time it logs me out. This time it grabbed pages mid-login: it found 15, then I resubmitted it once it had logged in and got 68 images.
>>10909 It's here. Happy 333!
Quite often, when I'm going to disconnect the encrypted drive I have Hydrus on, it says "warning: a program is still using the drive", and I check the task manager and find client.exe is still running even though the maintenance splash screen disappeared long ago. If I leave it, it will eventually close, sometimes hours after I actually closed the program. What's up with that?
Could you add a way to save a number of custom action presets in the duplicate filter? My problem is that normally for alternates I want to copy character tags over, but some artists really like to do the same image with a changed face or whatever, so it's a different character. In those cases I currently have to pick 'custom action' and go in and remove the copy-character-tags action, which takes too much effort when you have a couple of those in a row. It would be nice to be able to save a couple of custom action presets to make this easier.
>OR lost >for the fucking API I blame the black people.
>>10913 I am not an avid tumblr user, so I don't really know nuffing about it, I am afraid. I think retroactively reblogging content you want is probably too much work, but if you have a big blog now with lots of reblogs already, this export trick may be a good tool before the 17th when it all goes private.
>>10914 >>10919 >>10921 If the site dies in the next couple of years, or one of the old devs makes a big tell-all post earlier, I will be very interested to read about the behind-the-scenes of all this. I bet the top corporate guys don't really know what the site is, nor what to do with it, just that it is a headache from five different angles. I was reading some stuff about it and came across this interesting link of nsfw nightmare fuel generated by the Yahoo nsfw-detector set on inverse: ++++WARNING, HERETICAL CONTENT FOLLOWS++++ https://open_nsfw.gitlab.io/
>>10924 Yeah, I might put in a 'go straight to file import status' menu entry for advanced users. I'll check the code and make sure this makes sense. BTW, those 'ignores' are probably mostly posts without media content (i.e. text posts). So you probably aren't missing anything. But if you find it is failing to get stuff, let me know and I'll see if I can rush out a fix now.
>>10926 That is how it is supposed to work. It should say in the little network status control something like 'waiting on login'. Can you say more about your problem here, maybe the exact URLs or search queries, so I can try it my end and see what is causing those numbers? BTW, the HF 'real' login works much better than the default click-through, so if you have a throwaway HF account, switch the login script around and put that in. I think it'll then only have to log in every 30 days (or maybe only once, since I think new requests will bump your session time back to 30 days).
>>10932 I am not sure what's going on here. I might say the exe was hanging around to clean up some db file handle stuff, but hours is way too long. When this next happens, please go to install_dir/db: do the four client*.db files have .db-shm and .db-wal siblings, or have those been tidied up at that point? I assume it isn't doing any CPU/HDD work either, right? If the splash is gone and the program isn't doing any CPU/HDD work, it is almost certain that force-killing the process is no problem at all, btw.
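[Editor's note: for anyone wanting to run that check quickly, a small sketch that lists any leftover SQLite journal siblings. The db directory path is an assumption; point it at your own install.]

```python
from pathlib import Path

# Assumption: a default install layout; adjust to your actual location.
db_dir = Path("install_dir") / "db"

# List any write-ahead-log/shared-memory siblings that SQLite has not yet
# cleaned up for the client*.db files.
leftovers = sorted(db_dir.glob("client*.db-*"))

for path in leftovers:
    print(path.name)

if not leftovers:
    print("no .db-wal/.db-shm files: the db has been tidied up")
```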
>>10934 Yeah, I think I should change it so they are all customisable. Several people have had similar problems to yours. I will make a proper job for this, thank you.
>>10936 It was pretty close, and I have been asked for API and OR a lot over time, so I am overall happy with how it went. I am leaning more towards working on OR right after API, which means it should be ready for middle of 2019 or so. Please let me know how it works for you when I am ready to roll it out.
>>10940 >>10901 Well, I made a sub blog just to try it out. Did a test text post and 1 reblog but it just says "backup processing." Might have to do with me already having hit the button on my main, which is also still processing, but if the thing is going to take forever then you'll never make it before the 17th anyway.
>>10947 Someone on e621 said it takes a few days to make a backup and it doesn't depend on the amount of content.
>>10919 I spent hours looking through hundreds of NSFW blogs on Tumblr and never found any CP. They're just using that excuse so as not to piss off the people who make NSFW blogs. What really happened is that Apple banned them from the app store, since they don't want to host apps with pornographic content.
>>10942 Oh yeah, no need to rush; it's more a quality-of-life thing to get to the logs easier. And with HF, I use my account to log in; the problem is the download starts before the login is finished. conquistabear was the artist: the first lookup got 15 files, the second, once the login was done, got 68.
>>10954 The shitstorm that kicked everything off was said to be cp. My understanding is that there is a database of hashes they didn't keep up to date with, and if they had quickly applied a fix to everything before the deletion shitstorm happened, it may never have been noticed. Also, it's probably a good thing you never saw the real cp, as that would mean you were looking for it; but that's assuming that's what it was and not just loli. Keep in mind, the tumblr app was there for a long time before any of this happened, and the porn was there for likely longer. Something happened that forced a change.
>>10941 Yeah, I've seen that, it got posted around a lot back when neural imaging was really starting to gain traction (remember the procedural dog faces everywhere?) >>10954 The pedophiles who post pedo content and aren't immediately v& are often very intelligent and have long memories. They can exchange stuff nobody else will ever really be likely to find by using any of an impressive variety of code phrases and references coined twenty years ago on some murky usenet or something. It's like a game for a lot of them, many are /g/ types very big into cryptography and Linux and all that. Hell, I'm sure some of them just used tags like "the lovely woods" or whatever that we would all recognize but wouldn't think to type in on Tumblr because we're (assumedly) not pedophiles. Basically easy enough to find if you know what you're looking for once you find a publicly indexed tumblr or two that reposts from a web of interconnected unlisted ones, but you'd have to be looking first, and for a bit, at that. They're not just posting like "naked kid #pedo #pedophilia #underage #nambla #lovehasnoage"
>>10947 >>10953 Mine finally finished; I can confirm it gives you raw size files, both in size and dimensions. Unfortunately taking several days to go through isn't very helpful. And of course they don't tell you how big the zip you're downloading is.
>>10959 Can you IPFS/BitTorrent your files? Also how large is the collection? Thanks in advance.
>>10959 Wew, looks like it is a good thing I have saved every tumblr post url after they removed raw access. Time to mass reblog all those posts and export resulting blog.
>>10959 My sub blog's export just finished; despite me hitting the export button ~4 days ago, it picked up the files I've reblogged since then. So you may want to hit the button now and reblog stuff as you wait.
>>10963 Why would you want the files I've saved? >>10965 Do you know of any way to automate this? It's a pain in the ass to do it manually
The new mr. bones is busted on my end:

2018/12/10 22:34:07: TypeError: %d format: a number is required, not NoneType
File "include\ClientGUIMenus.py", line 156, in event_callable
    callable( *args, **kwargs )
File "include\ClientGUI.py", line 2129, in _HowBonedAmI
    panel = ClientGUIScrolledPanelsReview.ReviewHowBonedAmI( frame, boned_stats )
File "include\ClientGUIScrolledPanelsReview.py", line 1951, in __init__
    media_label = 'Total media views: ' + HydrusData.ToHumanInt( media_views ) + ', totalling ' + HydrusData.TimeDeltaToPrettyTimeDelta( media_viewtime )
File "include\HydrusData.py", line 1078, in ToHumanInt
    text = locale.format( '%d', num, grouping = True )
File "locale.py", line 198, in format
File "locale.py", line 204, in _format
Duplicate processing greatly inflates the view count, since you often flip back and forth between two images looking for which is better. I wish I could disable it just for the duplicate viewer.
>>10967 At first I thought about an app to automatically reblog stuff from a url list, but in the end I decided I will spend far less time manually reblogging ~700 urls.
>>10976 I was more just wondering if a script existed. Use Hydrus to gather a url list -> script auto-reblogs them. Or maybe that's impossible. I'm not a programmer.
>>10955 If you highlight the bad one and look at the file and gallery logs (where it lists the urls it visited), are either of them somehow not 'www.hentai-foundry.com'? Could the gallery urls be just 'hentai-foundry.com'? The system is pretty strict about jobs waiting on logging-in domains, but if the domains are not matching up until file urls come in or something, this may explain it.
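[Editor's note: a sketch of why strict hostname matching could cause the behaviour the dev is asking about, and the normalisation that would avoid it. This is a guess at the mechanism, not hydrus's actual code, and a real implementation would want a public suffix list.]

```python
from urllib.parse import urlparse

def second_level_domain(url):
    # 'www.hentai-foundry.com' and 'hentai-foundry.com' both normalise to
    # 'hentai-foundry.com'. A public suffix list would be needed to handle
    # domains like 'example.co.uk' correctly.
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

login_domain = "https://www.hentai-foundry.com"
gallery_url = "https://hentai-foundry.com/pictures/user/example"

# A strict hostname comparison misses, so a job might not wait on the login:
print(urlparse(gallery_url).hostname == urlparse(login_domain).hostname)  # False

# A normalised comparison matches:
print(second_level_domain(gallery_url) == second_level_domain(login_domain))  # True
```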
>>10970 Thanks. Look at some files and wait 60s and it should be fixed. I messed up the empty call, properly fixed tomorrow.
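[Editor's note: judging from the traceback above, ToHumanInt received None for a file with no recorded views yet. A hedged sketch of the kind of guard that fixes this class of crash; the actual fix in hydrus may differ, and the original py2 code used the now-deprecated locale.format.]

```python
import locale

locale.setlocale(locale.LC_ALL, "")  # use the system locale for digit grouping

def to_human_int(num):
    # Treat 'no views recorded yet' as zero instead of crashing on None.
    if num is None:
        num = 0
    return locale.format_string("%d", num, grouping=True)

print(to_human_int(None))     # '0'
print(to_human_int(1234567))  # e.g. '1,234,567' under an en_US locale
```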
>>10973 Thanks, this is a good point. Someone else mentioned having a 'don't register view if you looked at it less than 0.5s' or something so you don't register a fast scroll or quick thumb click. I will get to these in the new year. I'm rolling out a 'suspend file viewing tracking' menu checkbox tomorrow. I hope this sorts you out for the meantime.
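[Editor's note: a minimal sketch of a per-file viewtime tracker with the minimum-duration idea mentioned above; the class name and the 0.5s threshold are illustrative, not hydrus's implementation.]

```python
import time

MIN_VIEW_SECONDS = 0.5  # ignore fast scrolls and accidental thumbnail clicks

class FileViewingStats:
    def __init__(self):
        self.views = 0
        self.viewtime_seconds = 0.0
        self._view_started = None

    def start_view(self):
        self._view_started = time.monotonic()

    def end_view(self):
        if self._view_started is None:
            return
        elapsed = time.monotonic() - self._view_started
        self._view_started = None
        # Only register the view if the user actually looked at the file.
        if elapsed >= MIN_VIEW_SECONDS:
            self.views += 1
            self.viewtime_seconds += elapsed
```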
I haven't looked at the options for a while and they've been shuffled around in newer versions, so I'll just ask - where are the daily file cap options now? I just need to finish these subscriptions before Tumblr deletes them.
(158.71 KB 1631x587 client_2018-12-12_04-48-50.png)

(350.46 KB 1663x1549 client_2018-12-12_04-49-52.png)

>>10978 Not sure what I was supposed to look for. When these started, hydrus held off for a few seconds, but not until it was done logging in; this got the smaller gallery. Immediately after this, I pasted it in again, and this is what I got.
>>10985 For bandwidth? Go network->data->review bandwidth usage. For your specific problem, I think the easiest and best solution is: edit the default bandwidth rules to change your client-wide rules for all subscriptions, and just delete any rules in there. Then scroll the list on the main window for 'web domain:tumblr.com', double-click it, and similarly set no rules. This will make all your subs work as much as they want and also let any tumblr jobs work as much as they want.
>>10987 Thanks. I don't see anything obviously wrong here. If hydrus is waiting for the login, then everything should be working ok. Could HF maybe not apply your account's filters right after logging in? I am really not sure here. My test HF login on my dev machine lasts 30 days, and I think it keeps upping that back to 30 days every time I request, so I feel like a regular client that hit HF every few days would only ever log in once, or a higher hard limit like once a year–do you see your client logging in HF a lot? What does it say under your network->downloaders->manage logins? Does it say 'yeah, logged in for 29 days' there, or something else? I wonder if perhaps your credentials are incorrect or the login script is failing to log in right, so you are accidentally being given a filtered guest login at times.
>>10992 Here, this is a known link that doesn't pull images up. I tried going incognito to find more for the artist, but apparently it finds the images that way too, so I stuck with a known one that wouldn't work. https://mega.nz/#!KooGTIBC!aRlDdrI1XRA-iMDmkzye7J6Q-C3bTWnb3_74-8-0eDI Honestly, I think full-stopping downloads program-wide until a login finishes would be a good workaround for this. Or, for the downloaders these come from, a double tap when logging in: the first page or two of searches could get redone twice, because by then everything should be logged in, and anything that didn't catch would get picked up in the first two pages.
>>10991 I set everything to super high file and data limits there already though (like 100k files and 100s of TBs worth of bandwidth), and it's still "waiting on bandwidth" half the time. That change affected both my global and the specific tumblr rules. Also tumblr.com specifically (not its subdomains) has "yes" in the "blocked?" column. What does this mean?
>>11000 Oh, never mind, I had to edit the web domain rules as well as the subscription rules, seems like. All working now.

