• CheesyFox@lemmy.sdf.org
    5 months ago

    Good luck reverse-engineering millions, if not billions, of seemingly random floating point numbers. It’s like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which for an image model is the number of pixels in the input image.
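    To put rough numbers on that, here is a back-of-the-envelope sketch. The sizes are illustrative assumptions (a common image-model input shape, a single dense output layer), not any specific model:

```python
# Sketch of why "just read the weights" doesn't scale: count the raw
# numbers involved in even a modest image classifier.
# All sizes here are illustrative assumptions, not a real model.

height, width, channels = 224, 224, 3   # a common input size for image models
input_dims = height * width * channels  # one dimension per pixel per channel
print(input_dims)                       # 150528 input dimensions

# A single fully connected layer from that input to 1000 output classes:
weights = input_dims * 1000             # one weight per (input, output) pair
biases = 1000                           # one bias per output class
print(weights + biases)                 # 150529000 floats in one layer alone
```

    And that is one layer; real networks stack dozens of them.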

    Under no circumstance should we accept a “black box” explanation.

    Go learn at least the basic principles of neural networks, because that sentence alone makes me want to slap you.

    • thecodeboss@lemmy.world
      5 months ago

      Don’t worry, researchers will just get an AI to interpret all those floating point numbers and come up with a human-readable explanation! What could go wrong? /s

    • petrol_sniff_king@lemmy.blahaj.zone
      5 months ago

      Hey look, this took me like 5 minutes to find.

      Censius guide to AI interpretability tools

      Here’s a good thing to wonder: if you don’t know how your black-box model works, how do you know it isn’t racist?

      Here’s what looks like a university paper on interpretability tools:

      As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.

      Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn’t get you in trouble with the EU.

      Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.
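      One family of tools those guides cover is attribution methods, which ask how much each input feature contributed to a decision. Here is a minimal, hypothetical sketch with a toy linear scorer (all numbers made up); for a linear model, gradient-times-input attribution is exact, since the score decomposes into one contribution per feature:

```python
import numpy as np

# Toy "model": a linear scorer over 4 input features (hypothetical weights).
w = np.array([0.5, -2.0, 0.0, 1.0])   # learned weights (made up)
x = np.array([1.0, 1.0, 3.0, 2.0])    # one input example (made up)

score = float(w @ x)                  # 0.5 - 2.0 + 0.0 + 2.0 = 0.5
attributions = w * x                  # per-feature contribution to the score
print(score)                          # 0.5
print(attributions.sum() == score)    # True: contributions sum to the score
```

      Real tools apply the same idea to nonlinear networks (with approximations), but the point stands: "it's a pile of floats" is where interpretability research starts, not where it ends.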

      Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do (who am I to take away mankind’s finer pleasures?), but this attitude of yours is profoundly stupid. It’s weak. You don’t want to know? It doesn’t make you curious? Why are you comfortable not knowing things? That’s not how science is propelled forward.

      • Tja@programming.dev
        5 months ago

        “Enough” is doing a fucking ton of heavy lifting there. You cannot explain a terabyte of floating point numbers, the same way you cannot guarantee a specific doctor or MRI technician isn’t racist.

        • petrol_sniff_king@lemmy.blahaj.zone
          5 months ago

          A single drop of water contains billions of molecules, and yet, we can explain a river. Maybe you should try applying yourself. The field of hydrology awaits you.

          • Tja@programming.dev
            5 months ago

            No, we cannot explain a river, or the atmosphere. That’s why weather forecasts are only good for a few days, and why, even after massive computer simulations, aircraft, cars, and ships still need wind-tunnel and real-life testing: we can only approximate the real thing in our models.

            • petrol_sniff_king@lemmy.blahaj.zone
              5 months ago

              You can’t explain a river? It flows downhill.

              I understand that complicated things frighten you, Tja, but I don’t understand what any of this has to do with being unsatisfied when an insurance company denies your claim and all they have to say is “the big robot said no… uh… leave now?”