• 2 Posts
  • 16 Comments
Joined 11 days ago
Cake day: February 3rd, 2026

  • Well, it’s not that there’s a particular “problem” in the sense of a bug. It’s that if the device can be pushed further, and higher polling gets us lower effective input latency and slightly smoother input, then why wouldn’t we do it? It’s the same way gamers get higher refresh rate screens (and sometimes try to push even those further), or other devices.

    As for the implementation, my module is partially based on a patchset for an actual kernel module, but it’s unclear to me whether anyone tried to upstream it, or why it failed if they did. It clearly didn’t make it in, though, and there’s no sign of that changing any time soon. Maybe the kernel devs would consider it “unorthodox” to alter the descriptor parameters against what the manufacturer intended.

    But some devices do allow themselves to be polled faster and will simply sample their inputs more often, if their firmware and sensors are capable of it. In fact, many “gaming” mice come with proprietary software speaking a proprietary protocol (this often has a Linux equivalent, like Solaar for Logitech) to change on-device settings; the device then reconnects reporting a different bInterval (the requested polling interval) to the host based on what was set. And yet manufacturers will default to some “safe” setting like 125 or at most 250 Hz, just to avoid potential issues on some hosts and thus RMA costs, with opt-in options of 500 and 1000. Some manufacturers don’t bother offering such an option, or an app at all, and that’s where this module comes in. For controllers especially, such an option is much less common even when an app exists, even though a high proportion of non-Microsoft controllers do allow this kind of overclocking (Microsoft ones, locked at 125 Hz, are pathetic; you can feel the latency difference side-by-side between that and my 250 Hz controller overclocked to 500 Hz).
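    For reference, here’s a small sketch (mine, not from the module) of how bInterval maps to an effective polling rate per the USB 2.0 spec: on full/low speed it counts 1 ms frames, while on high speed it’s an exponent over 125 µs microframes.

```python
# Effective interrupt-endpoint polling rate from bInterval (USB 2.0 spec).
# Full/low speed: bInterval counts 1 ms frames.
# High speed: period = 2**(bInterval - 1) * 125 us microframes.

def polling_rate_hz(b_interval: int, high_speed: bool = False) -> float:
    if high_speed:
        period_us = (2 ** (b_interval - 1)) * 125
    else:
        period_us = b_interval * 1000
    return 1_000_000 / period_us

print(polling_rate_hz(8))        # full speed, 8 ms frames -> 125.0 Hz
print(polling_rate_hz(1))        # full speed, 1 ms frame  -> 1000.0 Hz
print(polling_rate_hz(4, True))  # high speed, 2^3 microframes (1 ms) -> 1000.0 Hz
```

    So “overclocking” here just means presenting a smaller effective interval to the host than the manufacturer’s default.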

    But the TL;DR is that it’s just a gamer optimization, and one that isn’t easily possible with the upstream kernel currently. Some kernel modules do have options for some degree of overclocking, but e.g. one of them has a bug where it doesn’t work with USB 3 devices, so yeah…
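    As an illustration of how that kind of USB 3 bug can happen (a hypothetical sketch, not the actual kernel code): an override value given in milliseconds can’t be written into bInterval verbatim on high-speed+ endpoints, because there bInterval is an exponent, not a frame count.

```python
# Hypothetical sketch: converting a polling-interval override (in ms)
# into the correct bInterval encoding per USB speed. Writing the ms
# value verbatim on a high-speed device (e.g. 1) would actually request
# 2**0 microframes = 125 us, i.e. 8000 Hz -- hence the breakage.
import math

def binterval_for_ms(ms: float, high_speed: bool) -> int:
    if not high_speed:
        return int(ms)  # full/low speed: bInterval is in 1 ms frames
    # high speed: period = 2**(bInterval - 1) * 125 us microframes
    microframes = max(1, round(ms * 1000 / 125))
    return int(math.log2(microframes)) + 1

print(binterval_for_ms(1, high_speed=False))  # 1 (1 ms -> 1000 Hz)
print(binterval_for_ms(1, high_speed=True))   # 4 (2^3 microframes = 1 ms)
```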


  • Lower effective input latency and higher input smoothness (the latter probably only perceivable on displays with higher refresh rates). That’s of course only for USB input devices (gamepads, mice, maybe keyboards); as for other types of devices, idk.

    But do note that only some devices will allow you to do this. For gamepads, the site gamepadla.com has a bunch of OC results from Windows gamers. For mice, I saw some threads on some forums at some point (my mouse is natively 1000 Hz, so I didn’t look into this much).

    EDIT: The difference can be really perceivable, it’s not a placebo. Especially on something like a 240 Hz screen, the difference between, say, 125 Hz and 1000 Hz polling is just jarring. But it’s rare that a 125 Hz mouse can be pushed up that much; usually its sensor wouldn’t even be precise enough if it shipped at such a low polling rate.

    For example, my controller could be overclocked from 250 to 1000, but 500 was the sweet spot in how it felt; at 1000 it was unstable, with occasional lags, while 500 worked perfectly and felt smoother.

    Also notably, the PS5’s DualSense can be overclocked from 250 to 1000 Hz (people claim 8000, but apparently that’s actually a lie).
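    For intuition on why the numbers above matter (my back-of-the-envelope math, not a measurement): polling alone adds, on average, half a polling period of latency, since an input event lands at a random point between two reports.

```python
# Average extra latency contributed by polling alone: an event waits on
# average half a polling period for the next report (a full period in
# the worst case).

def avg_poll_latency_ms(rate_hz: float) -> float:
    return 1000 / rate_hz / 2

for hz in (125, 250, 500, 1000):
    print(f"{hz} Hz -> ~{avg_poll_latency_ms(hz)} ms average added latency")
# 125 Hz -> 4.0 ms, 250 -> 2.0 ms, 500 -> 1.0 ms, 1000 -> 0.5 ms
```

    So going from 125 to 1000 Hz shaves several milliseconds off the average, which lines up with the side-by-side difference being perceivable on fast screens.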




  • And yet they nailed the latency down to be surprisingly low; it was much better than Parsec, which I used on LAN at the time, and that was with the NVIDIA datacenter at 25 ms instead of the 5 ms it’s at today (people in the city it’s located in get a sweet 1 ms).

    Of course there’s a lot to dislike about the service and the trend overall, such as the recently inflated, outrageous pricing, but from a technical standpoint I was surprised how well it worked, and I’m rather sensitive to latency. You’re probably right that there’s more latency between the mouse and the monitor already, but that also means the network doesn’t necessarily add that much on top…



  • I will say I have one funny regression with my HDMI monitor, where it sometimes goes blank for a bit when an app goes full-screen on another monitor, or right after wake-up. I laugh at this, because it’s still a superior experience, and the kernel version that introduced it also fixed another quirk. The problem isn’t with Linux here; this monitor has broken-ass firmware. It resets itself after waking from sleep or changing inputs. I had problems with this under Windows too, and other monitors don’t do this, so I’m not going to point fingers in the wrong direction, plus the current state of things doesn’t bother me. The same cannot be said about Windows, where another one of my monitors would randomly reset itself from time to time, which would cause the screen to remove itself from the system and give the whole system a 1-2 minute long aneurysm (hope you weren’t gaming during that, especially a multiplayer game…). Meanwhile, if that happens to this monitor on Linux, simply nothing happens and I don’t even notice it.

    Sooo maybe it’s dumb luck that shit works better, or just as well, on Linux. But it’s real. I didn’t buy anything specifically for Linux, other than always sticking to AMD and avoiding NVIDIA, because I’ve long despised the latter. My whole system works great, the laptop I randomly purchased (AMD-based) works great, my parents’ laptop works great, my grandma’s computer works great, my work machine works great (well, certainly much better than on Windows, though it’s not a powerful machine), my friend-with-NVIDIA’s computer works great (surprisingly), my other friend’s computer works great (after figuring out how to install Arch; also suffering from broken monitor firmware, btw), and his girlfriend’s computer also works great.

    Maybe it’s actually dumb misfortune for those who have problems or some terribly obscure hardware. Maybe I live in some great lucky bubble where things work for the most part around me. Hard to tell which group is a majority and which isn’t.

    I do have the fingerprint reader on my laptop not working; that’s unfortunate, but I forgot it’s even a thing, since I never had one on another machine anyway. That same poor laptop got a bunch of 1-star reviews on the store’s website for “poor work culture” just because Windows 11 would ramp its fans up to 100% for no reason at setup or idle; that never happens on Linux unless maybe I actually, intentionally hammer it with something. It’s crazy.

    Okay one thing I’ll have to admit, about one actual thing not working well, oh irony: my Steam Deck is the only device that has some huge problems with my Wi-Fi router. Just that device out of like 20 others. And just with that router. Drat. I’ll have to see if the next major OpenWrt version will improve it.

    Aaaaanyway, can you tell me more about the DP+HDMI problem? I’m actually somewhat curious. And what GPU do you have? I’m wondering if it’s related to anything I’ve ever seen, or something else entirely.




  • I would sooner commit sudoku than ever do anything Kubernetes, and yet shit basically just works for me. Nothing is perfect, but it’s 5x better than Windows, so I’m never going back. It seems the server and desktop Linux experiences don’t transfer much between each other.

    I’m not denying your experience, to be clear. But for some people it really does work well. Multi-monitor handling on KDE is so superior for me that I don’t know how I ever dealt with whatever Windows was doing.


  • p0358@lemmy.blahaj.zone to 196@lemmy.blahaj.zone · Free Palestine Rule
    2 days ago

    Thank you.

    And I consider these calls for defederation a good example of the problems the fediverse has at its core. Defederation should be a last resort, reserved for instances that are fully dedicated to or promote illegal content, or that are simply unmoderated or spammy.

    Suggesting that a whole instance should be defederated because they dared to ban people for the obnoxious hate speech you’ve cited definitely does not make feddit look like the bad ones here, whatsoever. That is my opinion here.