/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.


(32.04 KB 480x360 r1nn-tp26KE.jpg)

Version 425 Anonymous 01/13/2021 (Wed) 22:34:36 Id: 6d169b No. 15109
https://www.youtube.com/watch?v=r1nn-tp26KE

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v425/Hydrus.Network.425.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v425/Hydrus.Network.425.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v425/Hydrus.Network.425.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v425/Hydrus.Network.425.-.Linux.-.Executable.tar.gz

I had a good week. I optimised and fixed several core systems.

faster

I messed up last week with one autocomplete query, and as a result, when searching the PTR in 'all known files', which typically happens in the 'manage tags' dialog, all queries had 2-6 seconds of lag! I figured out what went wrong, and now autocomplete should be working fast everywhere. My test situation went from 2.5 seconds to 58ms! Sorry for the trouble here, this was driving me nuts as well.

I also worked on tag processing. Thank you to the users who have sent in profiles and other info since the display cache came in. A great deal of overhead and inefficiency has been reduced, so tag processing should be faster in almost all situations.

The 'system:number of tags' query now has much better cancelability. It still wasn't great last week, so I gave it another go. If you do a bare 'system:num tags > 4' or something and it is taking ages, stopping or changing the search should now take just a couple of seconds. It also won't blat your memory as much if you go really big.

And lastly, the 'session' and 'bandwidth' objects in the network engine, formerly monolithic and sometimes laggy objects, are now broken into smaller pieces. When you get new cookies or some bandwidth is used, only the small piece that changed needs to be synced to the database. This is basically the same as the subscription breakup last year, but behind the scenes. It reduces some db activity and UI lag on older and network-heavy clients.

better

I have fixed more instances of 'ghost' tags, where committing certain pending tags, usually in combination with others that shared a sibling/parent implication, could still leave a 'pending' tag behind. The reasons behind it were quite complicated, but I managed to replicate the bug and fixed every instance I could find. Please let me know if you find any more of this behaviour.

While the display cache is working ok now, and with decent speed, some larger and more active clients will still have some ghost tags and inaccurate autocomplete counts hanging around. You won't notice or care about a count of 1,234,567 vs 1,234,588, but in some cases these will be very annoying. The only simple fixes available at the moment are the nuclear 'regen' jobs under the 'database' menu, which isn't good enough. I have planned maintenance routines for regenerating just particular files and tags, and I want these to be easy to fire off, right from a right-click menu, so if something wrong is staring at you on some favourite files or tags, please hang in there, fixes will come.

full list

- optimisations:
- I fixed the new tag cache's slow tag autocomplete when in the 'all known files' domain (which is usually in the manage tags dialog). what was taking about 2.5 seconds in 424 should now take about 58ms!!! for technical details, I was foolishly performing the pre-search exact match lookup (where exactly what you type appears before the full results fetch) on the new quick-text search tables, but it turns out this is unoptimised and was wasting a ton of CPU once the table got big. sorry for the trouble here, this was driving me nuts IRL.
- I have now fleshed out my dev machine's test client with many more millions of tag mappings so I can test these scales better in future before they go live
- internal autocomplete count fetches for single tags now have less overhead, which should add up for various rapid small checks across the program, mostly for tag processing, where the client frequently consults current counts on single tags for pre-processing analysis
- autocomplete count fetch requests for zero tags (lol) are also dealt with more efficiently
- thanks to the new tag definition cache, the 'num tags' service info cache is now updated and regenerated more efficiently. this speeds up all tag processing by a couple of percent
- tag update now quickly filters out redundant data before the main processing job. it is now significantly faster to process tag mappings that already exist, e.g. when a downloaded file pends tags that already exist, or repo processing gives you tags you already have, or you are filling in content gaps in reprocessing
- tag processing is now more efficient when checking membership in the display cache, which greatly speeds up processing on services with many siblings and parents. thank you to the users who have contributed profiles and other feedback regarding slower processing speeds since the display cache was added
- various tag filtering and display membership tests are now shunted to the top of the mappings update routine, reducing much other overhead, especially when the mappings being added are redundant
- .
- tag logic fixes:
- I explored the 'ghost tag' issue, where sometimes committing a pending tag still leaves a pending record. this has been happening in the new display system when two pending tags that imply the same tag through siblings or parents are committed at the same time. I fixed a previous instance of this, but more remained. I replicated the problem through a unit test, rewrote several update loops to remain in sync when needed, and have fixed potential ghost tag instances in the specific and 'all known files' domains, for 'add', 'pend', 'delete', and 'rescind pend' actions
- also tested and fixed are possible instances where both a tag and its implication tag are pend-committed at the same time, not just two that imply a shared other
- furthermore, in a complex counting issue, storage autocomplete count updates are no longer deferred when updating mappings; they are 'interleaved' into mappings updates so counts are always synchronised to tables. this unfortunately adds some processing overhead back in, but as a number of newer cache calculations rely on autocomplete numbers, this change improves counting and pre-processing logic
- fixed a 'commit pending to current' counting bug in the new autocomplete update routine for the 'all known files' domain
- while display tag logic is working increasingly ok and fast, most clients will have some miscounts and ghost tags here and there. I have yet to write efficient correction maintenance routines for particular files or tags, but this is planned and will come. at the moment, you just have the nuclear 'regen' maintenance calls, which are no good for little problems
- .
- network object breakup:
- the network session and bandwidth managers, which store your cookies and bandwidth history for all the different network contexts, are no longer monolithic objects. on updates to individual network contexts (which happen all the time during network activity), only the particular updated session or bandwidth tracker now needs to be saved to the database. this reduces CPU and UI lag on heavy clients. basically the same thing as the subscriptions breakup last year, but all behind the scenes
- your existing managers will be converted on update. all existing login and bandwidth log data should be preserved
- sessions will now keep delayed cookie changes that occurred in the final network request before client exit
- we won't go too crazy yet, but session and bandwidth data is now synced to the database every 5 minutes, instead of 10, so if the client crashes, you only lose 5 minutes of login/bandwidth data
- some session clearing logic is improved
- the bandwidth manager no longer considers future bandwidth in tests. if your computer clock goes haywire and your client records bandwidth in the future, it shouldn't bosh you _so much_ now
- .
- the rest:
- the 'system:number of tags' query now has greatly improved cancelability, even on gigantic result domains
- fixed a bad example in the client api help that mislabeled 'request_new_permissions' as 'request_access_permissions' (issue #780)
- the 'check and repair db' boot routine now runs _after_ version checks, so if you accidentally install a version behind, you now get the 'weird version m8' warning before the db goes bananas about missing tables or similar
- added some methods and optimised some access in Hydrus Tag Archives
- if you delete all the rules from a default bandwidth ruleset, it no longer disappears momentarily in the edit UI
- updated the python mpv bindings to 0.5.2 on windows, although the underlying dll is the same. this seems to fix at least one set of dll load problems. macOS is also updated, but not Linux (yet), because it broke there, hooray
- updated cloudscraper to 1.2.52 for all platforms

next week

Although this week's work was good, I got deep into logic and efficiency and couldn't find the time to do anything else. I'll catch up on regular work and finally get into my planned network updates.

It looks like 8kun is going off the clearnet, or at least expecting to. The site has been dying for a long time now, and it is well past time I moved. I simply procrastinated. I also regret being behind on the 8kun bug report and question threads for several weeks now. In the coming week I will clear out appropriate responses and lock the board in prep for deletion. I will update 426's hydrus links so Endchan is our primary board for the time being.
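The network object breakup described above, where only the changed piece of session/bandwidth state is written back to the database rather than one monolithic serialised object, is a common dirty-flag persistence pattern. This is not hydrus's actual code; it is a minimal sketch of the idea under hypothetical names (`SessionStore`, a `sessions` table keyed by network context):

```python
import sqlite3

class SessionStore:
    """Sketch of per-context persistence: each network context gets its own
    row, and a dirty set tracks which contexts changed since the last sync,
    so a sync writes back only those pieces instead of everything."""

    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn
        self._conn.execute(
            'CREATE TABLE IF NOT EXISTS sessions ( context TEXT PRIMARY KEY, cookies TEXT );'
        )
        self._sessions = {}  # context -> cookie data (in-memory working copy)
        self._dirty = set()  # contexts changed since the last database sync

    def set_cookies(self, context: str, cookies: str) -> None:
        # update memory and mark only this one piece as needing a save
        self._sessions[context] = cookies
        self._dirty.add(context)

    def sync(self) -> int:
        # write back only the dirty pieces; returns how many rows were saved
        for context in self._dirty:
            self._conn.execute(
                'REPLACE INTO sessions ( context, cookies ) VALUES ( ?, ? );',
                (context, self._sessions[context]),
            )
        num_synced = len(self._dirty)
        self._dirty.clear()
        self._conn.commit()
        return num_synced
```

With a store holding thousands of contexts, a cookie change on one site dirties one row, so the periodic sync (every 5 minutes in the new scheme) touches one row instead of rewriting a single giant blob, which is where the reduced db activity and UI lag come from.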
>8kun is kill Oh shit, had no idea. I've only come here for Hydrus anyway since the rebrand. Did they announce this somewhere?
>>15109 Have you considered 9chan?
Maybe it's time we moved to a generic forum. I hate the idea, but we've been jumping from one chan to the next so quickly lately. I wonder if anyone makes an accountless forum.
>>15111 They sent a message to all BOs about it, and there are some (I think new) little text banners here and there pointing to the TOR link on the main page and so on. My guess is relations with their host aren't super great, and with all the other deplatforming going on, they are just waiting for the email now. Sounds like the situation sucks. I don't browse here any more either, other than /hydrus/, and I should have moved a while ago, I was just putting it off. Although, funnily enough, having said I would 'lock' the board, I then couldn't find that option in the admin panel. I'll have another look today once I am caught up with the main threads, but if that just isn't possible, I'll make a sticky with an announced deletion date. Making an archive of the Q&A thread is probably worthwhile.

>>15114 I am not sure. I considered trying for 8chan.moe or somewhere in the webring, since that is where I browse these days personally, and they are opening up board creation in more places. But I think I'll just sit at endchan for now. It is a nice quiet corner. I have never been a comfortable board owner, and never good about running and checking a board for spam etc., so I'd probably be happier just being a janny on a board run by others, which is how the hydrus discord works, although I wouldn't want to ask people to do hotpocket work unless they are keen.

>>15121 Yeah, jumping around has been frustrating. My feeling is we will get another wave of hosts setting new rules over the next 1-3 months, taking out wrongthink, and then things will settle again. If you are comfortable with a more normie-friendly experience, the discord is at https://discord.gg/wPHPCUZ

The plan to move to Endchan as primary board has changed. Codexx here on 8chan.moe kindly offered to host me on /t/, so I will now be maintaining a Hydrus Network General thread there.
Edited last time by hydrus_dev on 01/20/2021 (Wed) 04:44:21.

