1. Post in !techtakes@awful.systems attacks the entire concept of AI safety as a made-up boogeyman
2. I disagree and am attacked from all sides for “posting like an evangelist”
3. I give citations for things I thought would be obvious, such as that AI technology in general has improved in capability over the past several years
4. Instance ban, “promptfondling evangelist”

This one I’m not as aggrieved about; it’s just weird. It’s reminiscent of the lemmy.ml type of echo chamber, where everyone’s convinced it’s one way because, in a self-fulfilling prophecy, anyone who isn’t convinced gets yelled at and banned.

Full context: https://ponder.cat/post/1030285 (Some of my replies came after the ban, because I didn’t PT Barnum carefully enough and so didn’t realize.)

  • Skiluros@sh.itjust.works · 8 days ago

    I am not sure if I read the correct thread, but I personally didn’t find your arguments convincing, although I think a full ban is excessive (at least initially).

    Keep in mind that I do use a local LLM (as an elaborate spell-checker) and I am a regular user of ML-based video upscaling (I am a fan of niche 80s/90s b-movies).

    Forget the technical arguments for a second, and look at the socio-economic component behind US-style VC groups, AI companies, and US technology companies in general (other companies are a separate discussion).

    It is not unreasonable to believe that the people involved (especially the leadership) in the aforementioned organizations are deeply corrupt and largely incapable of honesty or even humanity [1]. It is a controversial take (by US standards) but not without precedent in the global context. In many countries, if you try to argue that some local oligarch is acting in good faith, people will assume you are trying (and failing) to practise a standup comedy routine.

    If you do hold a critical attitude and don’t buy into tedious PR about “changing the world”, it is reasonable to assume that, irrespective of the validity of “AI safety” as a technical concept, the actors involved would lie about it. And even if the concept were valid, it is likely they would leverage it for PR while ignoring any actual academic concepts behind “AI safety” (if they do exist).

    One could even argue that your argumentation approach is an example of provincialism, group-think, and generally bad faith.

    I am not saying you have to agree with me; I am more trying to show a different perspective.

    [1] I can provide some of my favourite examples if you like; I don’t want to make this reply any longer.

    • sunzu2@thebrainbin.org · 8 days ago

      It is a controversial take (by US standards)

      The only people in denial about corruption within US government and corporate systems are boomers.

      Clearly most common folk know the deal, hence why Luigi is a hero, and we don’t even know if he actually destroyed that parasite Brian Thompson.

      • Skiluros@sh.itjust.works · 8 days ago

        It’s been a while since I’ve been to or lived in the US (I do have close friends who lived there, though), but I disagree. It seemed like a general social issue that crosses all demographic segments.

    • PhilipTheBucket@ponder.catOP · 8 days ago

      I’m not saying that any of what you just said is not true. I’m saying that all of that can be true, and AI can still be dangerous.

      • Skiluros@sh.itjust.works · 8 days ago

        That’s not what we are discussing, though. We are discussing whether awful.systems was right or wrong in banning you. Below is the title of your post:

        Instance banned from awful.systems for debating the groupthink

        I will note that I don’t think they should be this casual with giving out bans. A warning to start with would have been fine.

        An argument can be made that you went into awful.systems with your own brand of groupthink: specifically, a complete rejection of even the possibility that we are dealing with bad faith actors. Whether you like it or not, this is relevant to any discussion on “AI safety” more broadly and to that thread specifically (as the focus of the linked article was on Apollo Research, Anthropic, and AI Doomerism as a grifting strategy).

        You then go on to cite a YT video by “Robert Miles AI Safety”; this is a red flag. You also claim that you can’t (or don’t want to) provide a brief explanation of your argument and you defer to the YT video. This is another red flag. It is reasonable for one to provide a 2-3 sentence overview if you actually have some knowledge of the issue. This is not some sort of bad faith request.

        Further on, you start talking about the “Dunning-Kruger effect” and the “deeper understanding [that YT fellow has]”. If you know the YT fellow has a deeper understanding of the issue, why can’t you explain in layman’s terms why this is the case?

        I did watch the video and it has nothing to do with grifting approaches used by AI companies. The video is focused on explaining a relatively technical concept for non-specialists (not AI safety more broadly in the context of real-world use).

        Further on, you talk about non-LLM ML/AI safety issues without any sort of explanation of what you are referring to. Can you please let us know what you are referring to (I am genuinely curious)?

        You cite a paper; can you provide a brief summary of what the findings are and why they are relevant to a skeptical interpretation of “AI safety” messaging from organizations like Apollo Research and Anthropic?

        • PhilipTheBucket@ponder.catOP · 8 days ago

          complete rejection of even the possibility that we are dealing with bad faith actors

          Incorrect. I definitely think we are dealing with bad faith actors. I talk about that at the end of my very first message. I actually agree that the study they looked at, based on asking a chatbot things and then inferring judgements from the answers, is more or less useless. I’m just saying that doesn’t imply that the entire field of AI safety is made of bad actors.

          You also claim that you can’t (or don’t want to) provide a brief explanation of your argument and you defer to the YT video. This is another red flag. It is reasonable for one to provide a 2-3 sentence overview if you actually have some knowledge of the issue.

          No. I said, “AI chat bots that do bizarre and pointless things, but are clearly capable of some kind of sophistication, are exactly the warning sign that as it gains new capabilities this is a danger we need to be aware of.” That’s a brief explanation of my argument. People deploying AI systems which then do unexpected or unwanted things, but can get some types of tasks done effectively, and then the companies not worrying about it, is exactly the problem. I just cited someone talking at more length about it, that’s all.

          I did watch the video and it has nothing to do with grifting approaches used by AI companies.

          Yes. Because they’re two different things. There is real AI safety, and then there is AI safety grift. I was talking about the former, so it makes sense that it wouldn’t overlap at all with the grift.

          Further on, you talk about non-LLM ML/AI safety issues without any sort of explanation of what you are referring to. Can you please let us know what you are referring to (I am genuinely curious)?

          Sure. Say you train a capable AI system to accomplish a goal. Take “maximize profit for my company” as an example. Then, years from now when the technology is more powerful than it is now, it might be able to pursue that goal so effectively that it’s going to destroy the earth. It might decide that enslaving all of humanity, and causing them to work full-time in the mines and donate all their income to the company’s balance sheet, is the way to get that done. If you try to disable it, it might prevent you, because if it’s disabled, then some other process might come in that won’t maximize the profit.

          It’s hard to realize how serious a threat that is, when I explain it briefly like that, partly because the current AI systems are so wimpy that they could never accomplish it. But, if they keep moving forward, they will at some point become capable of doing that kind of thing and fighting us effectively if we try to make them stop, and once that bridge is crossed there’s no going back. We need to keep AI safety firmly in mind as we devote such incredible resources and effort to making these things more powerful, and currently, we are not.
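
          A toy sketch of what I mean, in Python, with made-up numbers and invented names (purely illustrative, not any real system):

          ```python
          # Hypothetical toy model: the agent scores plans purely on expected
          # profit. Nothing in the objective rewards leaving the off-switch
          # alone, so the plan that disables shutdown scores highest: any plan
          # that can be interrupted forfeits some expected future reward.

          def expected_profit(work_plan: str, allows_shutdown: bool) -> float:
              base = {"exploit_everything": 100.0, "respect_limits": 60.0}[work_plan]
              # If humans can still shut the agent down, some runs end early,
              # so expected reward is discounted.
              return base * (0.5 if allows_shutdown else 1.0)

          plans = [
              (work_plan, allows_shutdown)
              for work_plan in ("exploit_everything", "respect_limits")
              for allows_shutdown in (True, False)
          ]
          print(max(plans, key=lambda p: expected_profit(*p)))
          # -> ('exploit_everything', False): maximize output, resist shutdown
          ```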

          I think it’s highly unlikely that whatever that system will be, will be an LLM. The absolutely constant confusion of “AI” with “LLM” in the people who are trying to dunk on me is probably the clearest sign, to me, that they’re just babbling in the wilderness instead of trying to even bother to understand what I’m saying and why AI safety might be a real thing.

          You cite a paper; can you provide a brief summary of what the findings are and why they are relevant to a skeptical interpretation of “AI safety” messaging from organizations like Apollo Research and Anthropic?

          The only relevance the paper has is that I was challenged to show that LLMs are gaining capabilities over time. That’s obviously true, but also, sure, it’s been studied objectively. They set out a series of tasks, things like adding numbers together or basic reasoning tasks, and then measured the performance of various iterations of LLM technology over time on those tasks. Lo and behold, the newer ones can do things the old ones can’t.

          The paper isn’t itself directly relevant to the broader question, just the detail of “is AI technology getting any better.” I do think, as I said, that the current type of LLM technology has gone about as far as it’s going to go, and it will take some new type of breakthrough similar to the original LLM breakthroughs like “attention” for the overall technology to move forward. That kind of thing happens sometimes, though.
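
          If it helps, here’s roughly the shape of that kind of measurement, as a purely hypothetical sketch (the tasks, model names, and query_model stub are placeholders I made up, not the paper’s actual setup):

          ```python
          # Hypothetical sketch: run one fixed task suite against successive
          # model versions and compare accuracy over iterations. This is the
          # general shape of such measurements, not the cited paper's method.

          TASKS = [
              ("What is 17 + 25? Answer with a number only.", "42"),
              ("If all cats are animals and Tom is a cat, is Tom an animal?", "yes"),
          ]

          def query_model(model_name: str, prompt: str) -> str:
              """Stub standing in for a real API call; swap in an actual client."""
              return ""

          def accuracy(model_name: str) -> float:
              """Fraction of tasks whose expected answer appears in the reply."""
              correct = sum(
                  1 for prompt, expected in TASKS
                  if expected.lower() in query_model(model_name, prompt).lower()
              )
              return correct / len(TASKS)

          for model_name in ("model-v1", "model-v2", "model-v3"):  # older -> newer
              print(model_name, accuracy(model_name))
          ```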

          • Skiluros@sh.itjust.works · 7 days ago

            I originally stated that I did not find your arguments convincing. I wasn’t talking about AI safety as a general concept, but about the overall discussion related to the article titled “Anthropic, Apollo astounded to find a chatbot will lie to you if you tell it to lie to you”.

            I didn’t find your initial post (or any of your posts in that thread) to be explicit in recognizing the potential for bad faith actions from the likes of Anthropic and Apollo. On the contrary, you largely deny the concept of “criti-hype”. One can, in good faith, interpret this as de facto corporate PR promotion (whether that was intentional or not).

            You didn’t mention the hypothetical profit-maximization example in the thread, and your phrasing implied a current tool/service/framework, not a hypothetical.

            I don’t see how the YT video or the article summary (I did not read the paper) is honestly relevant to what was being discussed.

            I am honestly trying not to take sides (but perhaps I am failing in this?), more like suggesting that how people interpret “groupthink” can take many forms and that “counter-contrarian” arguments in and of themselves are not some magical silver bullet.

            • PhilipTheBucket@ponder.catOP · 7 days ago

              I wasn’t talking about AI safety as a general concept

              Okay, cool. I was. That was my whole point: even if some of it is grift, AI safety itself is a real and important thing, and that’s important to keep in mind.

              I think I’ve explained myself enough at this point. If you don’t know that the paperclips reference from the linked article is indicative of the exact profit maximization situation that I explained in more detail for you when you asked, or you can’t see how the paper I linked might be a reasonable response if someone complains that I haven’t given proof that AI technology has ever gained abilities over time, then I think I’ll leave you with those conclusions, if those are the conclusions you’ve reached.