I am the journeyer from the valley of the dead Sega consoles. With the blessings of Sega Saturn, the gaming system of destruction, I am the Scout of Silence… Sailor Saturn.

  • 1 Post
  • 19 Comments
Joined 2 years ago
Cake day: June 29th, 2023

  • The good news is it generally isn’t necessary to reverse engineer browser behavior when writing a browser: the platform is mostly standardized, there’s a decent test suite, and the major browser engines are all open source.

    Though this comes with some caveats:

    • There are exceptions, like the CSS viewport spec, which was reverse engineered from the iPhone.
    • There are a lot of specifications: browsers have been around for decades and Chromium keeps implementing new things, so it’s hard to find enough programmers to implement everything and catch up from a fresh start.
    • This is a somewhat unstable situation; if we lose even a single major browser engine, it’s easy to imagine Chrome not bothering with standardization anymore and just telling people to read the blog posts and the code.
    • Web pages will do nonsense like break themselves if you provide a User-Agent string they don’t like. Mozilla has an ongoing compatibility effort where it sometimes has to override the UA string for specific pages. So less popular browsers are already playing at a disadvantage.

  • This doesn’t directly answer your question but I guess I had a rant in me so I might as well post it. Oops.


    It’s possible to write tools that make point changes or incremental changes with targeted algorithms in a well-understood problem space: changes that are safe (or probably safe) and still get reviewed by humans.

    Stuff like turning raw pointers into smart pointers, reducing string copying, eliminating certain classes of runtime crashes, etc. You can do a lot if you hand-code C++ AST transformations using the clang/LLVM tooling.


    Of course “let’s eliminate 100% of our C code with a chatbot” is… a whole other ballgame and sounds completely infeasible except in the happiest of happy paths.

    In my experience even simple LLM changes are wrong somewhere around half the time, often in disturbingly subtle ways that take an expert to spot. Also in my experience, people who review LLM code tend to just rubber-stamp it. Multiply that across thousands of changes and it’s a recipe for disaster.

    And what about third party libraries? Corporate code bases are built on mountains of MIT-licensed C and C++ code, and surely all of those projects won’t switch languages. That means companies will have a bunch of leaf code in C++ and will either need a C++-compatible target language, or have to call all the C++ code via subprocesses, the C ABI, or cross-language wrappers. The former is fine in theory, but I’m not aware of any suitable languages today. The latter can have a huge impact on performance if too much data needs to be serialized and deserialized across the boundary.

    Windows in particular also has decades of baked-in behavior that programs depend on. Any change in those assumptions and whoops, some of your favorite retro Windows games don’t work anymore!


    In the worst case they’d end up with a big pile of spaghetti that mostly works as it does today but that introduces some extra bugs, is full of code that no one understands, and is completely impossible to change or maintain.

    In the best case they’re mainly using “AI” for marketing purposes, will try to achieve their goals using more or less conventional means, will ultimately fall short (hopefully without wreaking too much havoc in the process), give up halfway, and declare the whole thing a glorious success.

    Either way, any kind of large-scale rearchitecting that isn’t seen through to the end will leave the codebase with layers. There’s the shiny new approach (never finished), the horrors that lie just beneath (also never finished), and the horrors that lie just beneath the horrors (probably written circa 2003). New employees start by being told about the shiny new parts. The company will keep a dwindling cohort of people in some dusty corner who have been around long enough to know how the decades of failed architecture attempts are duct-taped together.


  • Yep! These are the sorts of cursed horrors I come to awful systems for! Even if that video should have been 15 minutes at most.

    The misgendering / anti-woke nonsense seemed to be a huge elephant in the room in the video, so I’m a little disappointed that Karl Jobst didn’t address it or defend her or share her perspective*. It’s incredible that someone can spend 1.5 hours talking about an obviously anti-trans harassment campaign fueled by a bunch of gamergate types and not bring up the topic once.

    It’s almost as if he’s giving space to the idea that maybe all the transphobic losers just coincidentally took issue with her decently good aiming and her gender was completely unrelated. But Karl is terrified of forming or expressing any sort of subjective opinion, so I’m not too surprised.

    * The best of which is this quote:

    noticing that so many people with backgrounds in professional play are defending me, and a lot of 50 year old dads think im cheating

  • The documentation for “Turbo mode” in Google Antigravity:

    Turbo: Always auto-execute terminal commands (except those in a configurable Deny list)

    No warning. No paragraph telling the user why it might be a bad idea. No discussion of the long history of malformed scripts leading to data loss. No discussion of the risk of injection attacks. It’s not even named like the dangerous modes in other software (“force”, “yolo”, “danger”).

    Just a cool marketing name that makes users want to turn it on. Heck, if I’m using some software and I see any button called “turbo”, I’m pressing it.

    It’s hard not to give the user a hard time when they write:

    Bro, I didn’t know I needed a seatbelt for AI.

    But really they’re up against a big corporation that wants to make LLMs seem amazing, safe, and autonomous. One hand feeds the user the message that LLMs will do all their work for them, while the other hand tells them “well, somewhere in our small print we used the phrase ‘Gemini can make mistakes’, so why did you enable turbo mode??”