1. Post in !techtakes@awful.systems attacks the entire concept of AI safety as a made-up boogeyman
  2. I disagree and am attacked from all sides for “posting like an evangelist”
  3. I give citations for things I thought would be obvious, such as that AI technology in general has been improving in capability compared to several years ago
  4. Instance ban, “promptfondling evangelist”

This one I’m not aggrieved about as much; it’s just weird. It’s reminiscent of the lemmy.ml type of echo chamber where everyone’s convinced it’s one way because, in a self-fulfilling prophecy, anyone who is not convinced gets yelled at and receives a ban.

Full context: https://ponder.cat/post/1030285 (Some of my replies were after the ban because I didn’t PT Barnum carefully enough, so I didn’t realize.)

  • BomberMan9865@sh.itjust.works · +4/-1 · edited 2 days ago

    I’ve never posted there before, but it seems like they are a legitimately terrible instance; don’t blame yourself.

    Since you’re an admin, you’re in a better position than most people: if you really want to, you can use another admin account to ban and then unban yourself to remove their ban of you. It’s seen as sleazy to do that, but awful.systems is sleazy too, so maybe it’s justified?

  • mindbleach@sh.itjust.works · +10/-2 · 7 days ago

    Oh yeah, they’re cunts. My very first comment there went ‘how lovely, this sneer club nonsense made the jump from reddit,’ and that was a one-step permaban.

    Conservatism is not a political ideology. It’s tribalism. All they can do is perform loyalty to the ingroup. If you prove their reasons wrong, they will pick a different reason. This card-shuffling behavior will occasionally resemble a cogent argument… but they don’t mean it. It never predicts their future claims. The only consistent element is the conclusion: outgroup bad, ingroup good.

    ‘It’s not a debate club’ just means ‘we’re going to shout our opinions at you, shut up and take it.’ Fuck that. That’s naked bastardry before you even showed up. An entire instance for this enforced circlejerk does not deserve federation.

    • PhilipTheBucket@ponder.catOP · +5/-2 · 7 days ago

      And then, because they learn these maladaptive ways of interacting with anyone who disagrees with them, any time they spend outside of the little bubble will feature people being hostile to them, which they will interpret as being oppressed, which will reinforce the whole structure. Religion does the same thing, as does lemmy.ml.

      • mindbleach@sh.itjust.works · +6 · 7 days ago

        Lemmy.ml is the most “so you hate waffles?” forum I have ever seen. As you said - they’ve got a dummy in mind, and you’re just the face tacked onto it. Your own words are like 10% of the argument happening in their heads. Trying to pick apart “that’s not what I fucking wrote,” without wasting six paragraphs they’re also not going to read, or falling afoul of the blatantly one-sided “be nice or else” threats, is an endless psychic vampire attack.

        I was on reddit for fifteen years. I’ve been here for two. I am anything but averse to arguing, even with complete buttheads. But “be civil” is the biggest gift to trolls anyone has ever devised. It lets them spit whatever dishonest contrarian nonsense they want - and the obvious and necessary “oh fuck off” is what gets the boot. You will participate in legitimizing their hot take, because some cult of moderators thinks trolling means name-calling. Like nobody’s ever rude for a damn good reason. And also “this is abusive, I am leaving” counts as rude, because go fuck yourself.

        • PhilipTheBucket@ponder.catOP · +5 · edited 7 days ago

          Not that long ago, I got in a huge argument with someone on lemmy.ml, and they were furious that I refused to play by the “rules”: engaging at length with every one of the 3-5 new things they would bring up in each new comment, while they refused to provide sources for any of it and insisted that any of the sources I was citing needed to be “contextualized” and so, basically, didn’t count.

          Eventually, he tried to pull rank on me saying he teaches this stuff IRL and listed his number of students, as a way of saying why I needed to listen to him. As it happens, I was a teacher of teachers for a living, and when I pulled rank back on him, he wasn’t interested in the conversation anymore.

          It only ever goes one way. Always. It’s always that you need to play by the rules, but they do not.

          Edit: I should say, to the credit of the lemmy.ml mods, nothing I was saying got me deleted or banned, even though we were dealing with a hot-button topic. Maybe the moderation is improving. I was seriously a little surprised and impressed that they left it alone, I’m sure they got reports.

          • mindbleach@sh.itjust.works · +4 · 7 days ago

            and also saying that any of the sources I was citing needed to be “contextualized” and so basically, didn’t count.

            Usually while demanding you read three volumes on theory, like they’re owed a book report.

            It only ever goes one way. Always. It’s always that you need to play by the rules, but they do not.

            This is where I disagree with you: they’re being consistent. They think you’re doing what they’re doing. This is what it looks like, when you win their game. You beat this guy. But that doesn’t mean he switches teams. That’s not how games work. It’s how arguments work. And however argument-shaped his sentences were, he was never telling you why he went from premises to conclusion. He was just shuffling cards.

            • PhilipTheBucket@ponder.catOP · +4 · 7 days ago

              Hm… I think for this guy, it was a little more complicated than that. For most of the lemmy.ml people, I think you’re right. I think this guy sincerely believed what he was saying; he just had a sort of self-referential way of looking at reality, where anything that didn’t agree with him was CIA propaganda, so there was no way he could ever bootstrap his way out of what he believed. I didn’t get the vibe that he was just arguing in bad faith all around; I think he really believed it. That’s why I talked to him for as long as I did.

  • rickyrigatoni@lemm.ee · +5 · 7 days ago

    Every post from awful.systems is angry about something and half the time I have no idea what they’re even angry about and I’m afraid to ask.

    • PhilipTheBucket@ponder.catOP · +4 · 7 days ago

      There’s a guy in the friend group who is SUPER amped up about how hetero he is. He talks about it, and how he hates the gays, regularly.

      There’s a bank that runs CONSTANT billboards about how you’re not just a number to them, you’re an important human being, and they pay a smiling lady in the commercials to be super friendly so you’ll know they really care about you.

      There’s a community on Lemmy whose whole reason for being is that they are WAY smarter than all these people writing these articles. We don’t even have to debate, because they KNOW they’re way smarter, super smart, and everyone else is dumb. They can’t stop talking about how much smarter they are; it seems like telling you all about that is more important to them than the tech itself.

      • sunzu2@thebrainbin.org · +1/-1 · 3 days ago

        who is SUPER amped up about how hetero he is. He talks about it, and how he hates the gays, regularly.

        Hmmm, that’s like the first step haha

        Tell him to keep going

  • echolalia@lemmy.ml · +8/-2 · 7 days ago

    I think Skiluros is right on the money, but I’m just going to point out something. (Disclaimer: I am a regular reader of tech takes and I enjoy their snarky negativity.)

    You walked into a haters’ club with a rule of “no debates”, debated the regular posters, and got banned. Is it heavy-handed? Maybe, but it is low-effort moderation. I get the feeling that if they didn’t moderate like this, they wouldn’t be able to preserve the vibe of the place (and you are not obligated to like or agree with this vibe). They’re allowed to have their own corner of the internet.

    I think they’d probably reverse it if you asked them to. I base this on the idea that instance bans are easy to hand out, and asking politely for an unban is something most banned people don’t bother to do. I could be wrong.

    I bet they get absolutely flooded with folks who just want to debate instead of joining in on the sneering. It’s gotta be way lower effort to just ban people. It’s not like there are any large communities on that instance (look at their local front page: buttcoin, sneerclub, techtakes. All haters’ clubs, many posts months old on the “active” setting), so I don’t think they’re doing real harm, either. It’s not like you were instance banned from, like, lemmy.world or something.

    There’s plenty of other communities to discuss AI on lemmy. IMHO, you’re just missing the point of techtakes. You don’t have to agree with them, just like they aren’t required to refute your youtube video.

    • PhilipTheBucket@ponder.catOP · +4/-2 · 7 days ago

      That’s so weird, though. You can sneer at people who are wrong, without needing to mechanically censor anyone who might point out that you’re actually the wrong one. It feels like they want the bullying aspect without the fact-checking aspect. There’s plenty in tech that you can make fun of because it is wrong without needing to shield yourself from any possible criticism when you do that.

      I didn’t check the instance rules; I think most people don’t when something just occurs to them and they want to respond. I don’t care enough to beg for readmission. I’m just pointing out that they are being weird, checking myself a little bit, and wanting to continue the conversation with anyone who wants to, in a place where I won’t be silenced.

      • echolalia@lemmy.ml · +12/-3 · 7 days ago

        I didn’t check the instance rules

        Mistakes happen, but it is on you.

        I don’t care enough to beg for readmission.

        But you do care enough to type words and words and words somewhere else, no?

        I’m just pointing out that they are being weird

        I politely disagree. What you’re viewing as mechanical censorship is just community curation to them. “Power tripping” implies they are abusing power, and I don’t see them preventing you from participating in anything you appreciate. There are plenty of other AI communities on lemmy.

        In summary, a comic:

        • PhilipTheBucket@ponder.catOP · +5/-6 · 7 days ago

          @db0@lemmy.dbzer0.com I would like to officially request a new rule for this community: Anyone who makes the argument “Yes but censorship is okay, because the mods are the boss, they’re doing community curation” should be banned with the reason listed as “If you insist.”

          • echolalia@lemmy.ml · +6/-2 · 7 days ago

            I’d argue that would be a power trip, friend, because he’d be making a major change to the rules without his users’ permission. And many people outside that instance depend on db0’s communities, which are large and varied. Very unlike many communities at awful.systems, which are meant for venting, snark, and sneering down your nose at people, warranted or unwarranted.

            Everyone at awful.systems likely agrees with the moderation of the admins or they would not be there.

            • PhilipTheBucket@ponder.catOP · +3/-4 · 7 days ago

              All of a sudden it’s totally different lol.

              @db0@lemmy.dbzer0.com I was completely serious. I think it can form a good educational experience, leopards and faces and such.

              • echolalia@lemmy.ml · +6/-2 · 7 days ago

                I hardly think it’s suddenly different, it’s just actually different. It’s two different scenarios.

                By the way, my dad works at nintendo and can beat up your dad.

                • PhilipTheBucket@ponder.catOP · +2/-3 · 7 days ago

                  It’s actually covered by the existing TOS. There’s affirmative support for the standards of:

                  • Welcoming attitude and approach,
                  • Rational debate and discussion,
                  • Genuine exchanges of ideas,

                  And under “What is Unacceptable,” it lists “authoritarianism,” and advocating or encouraging “the spread of behavior that is designed to overturn the standards described so far.” I’d say this absolutely qualifies as advocacy for both authoritarianism in moderation, and overturning the ideas of welcoming participants to a rational discussion and genuine exchange of ideas. You might not have been aware of it, mistakes happen, but it is on you.

  • surph_ninja@lemmy.world · +7/-3 · edited 7 days ago

    The old school tech guys are super anti-AI. I think it’s the usual refusal to keep up with new tech.

    I’ll admit, I was in the same basket, until I heard a professor speak about it at my son’s university. They were talking about university concerns of students cheating with AI, and one progressive professor told us about how she encourages its use. She said now that pandora’s box has been opened, the best thing she could do to best prepare them for the real world was teach them to use it properly to improve their workflow, rather than try to ban its use.

    There was more to it, but at the end of the talk I realized I’d made the mistake of writing it off. I made the exact same mistake a lot of these tech guys are now, and underestimated how fast it’s advancing. I messed with AI a couple years prior, wasn’t impressed, and let that form my opinions. When I tried it again, I couldn’t believe how much more impressive it was than before. Then I stayed with it, and I couldn’t believe how fast I was watching it improve every single month. If you’re not working with it regularly, you really cannot understand how fast this is moving.

    Realizing this was similar to the invention of the digital calculator, I tried to spread the word to the old farts that the abacus would soon be dead. But none of them want to hear it. Saying anything positive about AI will get you slammed with downvotes and bans, and lots of lectures like ‘I tried it two years ago, and it was a joke.’ It was shocking to me, how many Luddites are in tech.

    Screw ’em. Let them get left behind. Can’t drag someone into the future who wants to be stuck in the past. It still has a long way to go, but I’ve started using it to speed up my workflow. Even with the mistakes it makes, it’s worth it for how fast I can now get through the blank page phase of a project. No more boilerplate work slowing me down. It probably won’t become sentient in my lifetime, but damn if it isn’t an incredibly useful tool.

    • Susaga@sh.itjust.works · +4/-3 · 7 days ago

      If the curtain catches fire, then pandora’s box has already been opened and you might as well start spraying gasoline around the room. No point trying to fix problems when we can just accept that the room is on fire and start preparing to be fireproof.

      AI is a shitty attempt at a shitty thing. If it improves your work, then your work was REALLY bad. If it gets better, then it will be a GOOD attempt at a shitty thing. Your work is STILL really bad, but now you have a machine to make things you claim credit for. It will never be a good thing.

      AI is a technological fire pit, and you are blindly walking into the flames so the other char-grilled victims don’t leave you behind. Let me put out the damn fire.

      • surph_ninja@lemmy.world · +4/-4 · 7 days ago

        When you realize later that the world has left you behind, I want you to think back on this nonsense you posted.

        • Susaga@sh.itjust.works · +4/-3 · 7 days ago

          I said this stuff about crypto. I say the same things to the same people with the same confidence. Why should it end any differently?

          • surph_ninja@lemmy.world · +5/-4 · 7 days ago

            Because this is new tech. Not a Ponzi scheme.

            Seems you’re still struggling to adequately assess emerging technology.

            • Susaga@sh.itjust.works · +5/-3 · 7 days ago

              It’s the same people picking up new technology and telling everyone else to get on board or be left behind. People with a good understanding of technology and society point out the obvious flaws. Then everyone who jumped on the bandwagon starts calling everyone who didn’t jump with them a Luddite who is going to be left behind.

              Meanwhile, you have people stealing the work from artists without compensation. You have a rampant misuse of computing power to meet the needs of the new technology. You have features forced on people who want nothing to do with it. You have countless people using the technology to get a cheap cash-grab, then hopping on to do it again. You have people using the technology to commit legitimate crimes, using the slow speed of legal definitions to get away with it.

              This is nothing new. We’ve been here before. I’d like to move on.

  • Tar_Alcaran@sh.itjust.works · +19/-5 · 8 days ago

    YDI

    They have literally 1 rule in the sidebar, and you did break it, so eh. Your source is also someone who says “my career is very real, please continue paying me”, which is exactly the wrong thing to post.

    • PhilipTheBucket@ponder.catOP · +6/-4 · 7 days ago

      One of my sources was a paper on arxiv, the other was an academic on YouTube. When I cited the paper on arxiv, sort of confused that I had to come up with a citation for the idea “AI is getting more powerful as time goes on,” the person who had asked for even a single example of an LLM gaining the ability to do something, as if that was some gotcha question, was replaced by a different person swearing “literally” the opposite of the paper I had just shown him.

      Maybe you have a point about the debate rule. It seems that community is not for that, it’s for being an echo chamber and they like it that way. I notice that none of the people who were debating against me got banned.

  • 9point6@lemmy.world · +14/-4 · 8 days ago

    PTB

    I had the unexpectedly bemusing experience of commenting on that instance recently.

    Same situation: they really don’t like AI over there and just go super hostile on anyone who dares say anything off script. Zero nuance.

    Treating that place as a zoo now if it shows up again

  • AwesomeLowlander@sh.itjust.works · +11/-2 · 7 days ago

    They’re very much an echo chamber, all disagreeing or insufficiently groupthink views are rejected. I had two discussions there and then gave up and blocked them.

  • Skiluros@sh.itjust.works · +9/-1 · edited 7 days ago

    I am not sure if I read the correct thread, but I personally didn’t find your arguments convincing, although I think a full ban is excessive (at least initially).

    Keep in mind that I do use a local LLM (as an elaborate spell-checker) and I am a regular user of ML-based video upscaling (I am a fan of niche 80s/90s b-movies).

    Forget the technical arguments for a second, and look at the socio-economic component behind US-style VC groups, AI companies, and US technology companies in general (other companies are a separate discussion).

    It is not unreasonable to believe that the people involved (especially the leadership) in the abovementioned organizations are deeply corrupt and largely incapable of honesty or even humanity [1]. It is a controversial take (by US standards) but not without precedent in the global context. In many countries, if you try and argue that some local oligarch is acting in good faith, people will assume you are trying (and failing) to practise a standup comedy routine.

    If you do hold a critical attitude and don’t buy into tedious PR about “changing the world”, it is reasonable to assume that irrespective of the validity of “AI safety” as a technical concept, the actors involved would lie about it. And even if the concept were valid, it is likely they would leverage it for PR while ignoring any actual academic concepts behind “AI safety” (if they do exist).

    One could even argue that your argumentation approach is an example of provincialism, group-think, and general bad faith.

    I am not saying you have to agree with me; I am more trying to show a different perspective.

    [1] I can provide some of my favourite examples if you like, I don’t want to make this reply any longer.

    • PhilipTheBucket@ponder.catOP · +4/-3 · 7 days ago

      I’m not saying that any of what you just said is not true. I’m saying that all of that can be true, and AI can still be dangerous.

      • Skiluros@sh.itjust.works · +7/-2 · 7 days ago

        That’s not what we are discussing, though. We are discussing whether awful.systems was right or wrong in banning you. Below is the title of your post:

        Instance banned from awful.systems for debating the groupthink

        I will note that I don’t think they should be this casual with giving out bans. A warning to start with would have been fine.

        An argument can be made that you went into awful.systems with your own brand of groupthink; specifically, complete rejection of even the possibility that we are dealing with bad faith actors. Whether you like it or not, this is relevant to any discussion on “AI safety” more broadly and that thread specifically (as the focus of the linked article was on Apollo Research and Anthropic and AI Doomerism as a grifting strategy).

        You then go on to cite a YT video by “Robert Miles AI Safety”; this is a red flag. You also claim that you can’t (or don’t want to) provide a brief explanation of your argument and defer to the YT video instead. This is another red flag. It is reasonable to provide a 2-3 sentence overview if you actually have some knowledge of the issue. This is not some sort of bad faith request.

        Further on you start talking about “Dunning-Kruger effect” and “deeper understanding [that YT fellow has]”. If you know the YT fellow has a deeper understanding of the issue, why can’t you explain in layman terms why this is the case?

        I did watch the video and it has nothing to do with grifting approaches used by AI companies. The video is focused on explaining a relatively technical concept for non-specialists (not AI safety more broadly in context of real world use).

        Further on you talk about non-LLM ML/AI safety issues without any sort of explanation what you are referring to. Can you please let us know what you are referring to (I am genuinely curious)?

        You cite a paper; can you provide a brief summary of what the findings are and why they are relevant to a skeptical interpretation of “AI safety” messaging from organizations like Apollo Research and Anthropic?

        • PhilipTheBucket@ponder.catOP · +3/-3 · 7 days ago

          complete rejection of even the possibility that we are dealing with bad faith actors

          Incorrect. I definitely think we are dealing with bad faith actors. I talk about that at the end of my very first message. I actually agree that the study they looked at, based on asking a chatbot things and then inferring judgements from the answers, is more or less useless. I’m just saying that doesn’t imply that the entire field of AI safety is made of bad actors.

          You also claim that you can’t (or don’t want to) provide a brief explanation of your argument and you defer to the YT video. This is another red flag. It is reasonable for one to provide a 2-3 sentence overview if you actually have some knowledge of the issue.

          No. I said, “AI chat bots that do bizarre and pointless things, but are clearly capable of some kind of sophistication, are exactly the warning sign that as it gains new capabilities this is a danger we need to be aware of.” That’s a brief explanation of my argument. People deploying AI systems which then do unexpected or unwanted things, but can get some types of tasks done effectively, and then the companies not worrying about it, is exactly the problem. I just cited someone talking at more length about it, that’s all.

          I did watch the video and it has nothing to do with grifting approaches used by AI companies.

          Yes. Because they’re two different things. There is real AI safety, and then there is AI safety grift. I was talking about the former, so it makes sense that it wouldn’t overlap at all with the grift.

          Further on you talk about non-LLM ML/AI safety issues without any sort of explanation what you are referring to. Can you please let us know what you are referring to (I am genuinely curious)?

          Sure. Say you train a capable AI system to accomplish a goal. Take “maximize profit for my company” as an example. Then, years from now when the technology is more powerful than it is now, it might be able to pursue that goal so effectively that it’s going to destroy the earth. It might decide that enslaving all of humanity, and causing them to work full-time in the mines and donate all their income to the company’s balance sheet, is the way to get that done. If you try to disable it, it might prevent you, because if it’s disabled, then some other process might come in that won’t maximize the profit.

          It’s hard to realize how serious a threat that is, when I explain it briefly like that, partly because the current AI systems are so wimpy that they could never accomplish it. But, if they keep moving forward, they will at some point become capable of doing that kind of thing and fighting us effectively if we try to make them stop, and once that bridge is crossed there’s no going back. We need to have AI safety firmly in mind as we devote such incredible resources and effort to making these things more powerful, and currently, we are not.

          I think it’s highly unlikely that whatever that system will be, will be an LLM. The absolutely constant confusion of “AI” with “LLM” in the people who are trying to dunk on me is probably the clearest sign, to me, that they’re just babbling in the wilderness instead of trying to even bother to understand what I’m saying and why AI safety might be a real thing.

          You cite a paper; can you provide a brief summary of what the findings are and why they are relevant to a skeptical interpretation of “AI safety” messaging from organization like Apollo Research and Anthropic?

          The only relevance the paper has is that I was challenged to show that LLMs are gaining capabilities over time. That’s obviously true, but also, sure, it’s been studied objectively. They set out a series of tasks, things like adding numbers together or basic reasoning tasks, and then measured the performance of various iterations of LLM technology over time on the tasks. Lo and behold, the newer ones can do things the old ones can’t do.

          The paper isn’t itself directly relevant to the broader question, just the detail of “is AI technology getting any better.” I do think, as I said, that the current type of LLM technology has gone about as far as it’s going to go, and it will take some new type of breakthrough similar to the original LLM breakthroughs like “attention” for the overall technology to move forward. That kind of thing happens sometimes, though.

          • Skiluros@sh.itjust.works · +3 · 7 days ago

            I originally stated that I did not find your arguments convincing. I wasn’t talking about AI safety as a general concept, but about the overall discussion related to the article titled “Anthropic, Apollo astounded to find a chatbot will lie to you if you tell it to lie to you”.

            I didn’t find your initial post (or any of your posts in that thread) to be explicit in recognizing the potential for bad faith actions from the likes of Anthropic and Apollo. On the contrary, you largely deny the concept of “criti-hype”. One can, in good faith, interpret this as de facto corporate PR promotion (whether that was the intention or not).

            You didn’t mention the hypothetical profit maximization example in the thread and your phrasing implied a current tool/service/framework, not a hypothetical.

            I don’t see how the YT video or the article summary (I did not read the paper) is honestly relevant to what was being discussed.

            I am honestly trying not to take sides (but perhaps I am failing in this?); I am more suggesting that how people interpret “groupthink” can take many forms and that “counter-contrarian” arguments in and of themselves are not some magical silver bullet.

            • PhilipTheBucket@ponder.catOP · +1/-2 · 7 days ago

              I wasn’t talking about AI safety as a general concept

              Okay, cool. I was. That was my whole point, that even if some is grift, AI safety itself is a real and important thing, and that’s an important thing to keep in mind.

              I think I’ve explained myself enough at this point. If you don’t know that the paperclips reference from the linked article is indicative of the exact profit maximization situation that I explained in more detail for you when you asked, or you can’t see how the paper I linked might be a reasonable response if someone complains that I haven’t given proof that AI technology has ever gained abilities over time, then I think I’ll leave you with those conclusions, if those are the conclusions you’ve reached.

    • sunzu2@thebrainbin.org · +1 · 7 days ago

      It is a controversial take (by US standards)

      The only people in denial about corruption within US government and corporate systems are boomers.

      Clearly most common folk know the deal, hence why Luigi is a hero, and we don’t even know if he actually destroyed that parasite Brian Thompson.

      • Skiluros@sh.itjust.works · +2 · 7 days ago

        It’s been a while since I’ve been/lived in the US (I do have close friends who lived there though), but I disagree. It seemed like a general social issue that crosses all demographic segments.

  • PugJesus@lemmy.world · +10/-2 · 8 days ago

    PTB, also major groupthink going on where they want to argue with you about LLMs when your point is explicitly broader.

    • PhilipTheBucket@ponder.catOP · +16/-2 · 8 days ago

      I feel like a lot of people, these ones included, have a ready-made idiot in their minds who is saying certain things which they love to dunk on, and the instant someone disagrees with them, they get to work on it with gusto, filling in the other side of the argument with the idiot beliefs so they can save some time and get to snarking.

  • PhilipTheBucket@ponder.catOP · +6/-3 · 8 days ago

    Is it “brigading” to ask someone to drop a polite note into the original post, inviting them to continue the conversation here? A couple of people said things I want to respond to, but of course I can’t.

    • Umbrias@beehaw.org · +4 · 6 days ago

      awful systems is full of toxicity but they are not wrong on this. your comments there specifically fail to address the context (“ai in general”, when the pivot to ai is very specifically about the current ml methods used by these companies, especially llm). this sort of off-topic posting is likely why you were perceived the way you were.

      In addition, AI safety (what you put forward) is conceptually a scapegoat to avoid realistic and immediate harms from ai hype and related tech industry nonsense. the sorts of what ifs you pose largely rely on magical thinking to the benefit of companies who continue grifting and rotting everything they touch in the meantime. It also serves to imply capabilities to these technologies that are unreasonable, often absurd, and feeds into the grift. this is the exact topic the article is about, in fact.

      Anyway, deprogramming the sci fi notion of ai singularity in these contexts is fraught and drawn out. if all you are looking for is to keep arguing with awful systems folks, go find a better use of your time.

  • infinite_ass@leminal.space · +2 · edited 7 days ago

    99% of people are dumb as rocks, and tribal and intensely conformist on top. And the moderators are the worst. There’s no getting around that. For this reason any public forum is invariably awful. Joy is only found on the edges.