I figured out how to remove most of the safeguards from some AI models. I don’t feel comfortable sharing that information with anyone. I have come across a few layers of obfuscation that make this type of alteration more difficult to find and sort out. This made me realize that a lot of you likely face similar dilemmas of responsibility, gatekeeping, and manipulating others for ethical reasons. How do you feel about this?

        • wewbull@feddit.uk
          3 months ago

          Yes. That’s research. Sometimes you don’t achieve what you set out to do.

          • KRAW@linux.community
            3 months ago

            Well, luckily, AI researchers have achieved plenty in over 60 years. We call the ideas and innovations resulting from this research “AI.”

    • DarkCloud@lemmy.world
      3 months ago

      How about, just as a flat benchmark: something autonomous that makes choices of its own will and performs long-term learning that influences the choices it makes?

      LLMs don’t qualify. They’re trained, they retain information within a conversation, and then they forget it once the conversation is closed. They do no long-term learning after their initial training, so they’re basically trapped forever regurgitating within the parameters set by the training data they were trained on.
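
      As a minimal sketch of that statelessness (assuming a hypothetical `generate()` stand-in for a frozen, already-trained model, not any real API): the only “memory” is the message list the caller resends each turn, and dropping that list leaves the model with no trace of the conversation.

      ```python
      # Sketch of LLM statelessness. `generate` is a hypothetical stand-in
      # for inference with frozen weights: the reply is a pure function of
      # the context it is handed, and nothing here ever updates the model.

      def generate(context: list[str]) -> str:
          return f"(reply conditioned on {len(context)} prior messages)"

      conversation: list[str] = []        # all "memory" lives here, client-side
      for user_msg in ["hi", "remember the number 7", "what number?"]:
          conversation.append(user_msg)
          reply = generate(conversation)  # the full history is resent every turn
          conversation.append(reply)
          print(reply)

      conversation.clear()                # closing the chat discards the history
      print(generate(["what number?"]))   # a fresh session: nothing persisted
      ```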

      That’s just a very fancy way to search and read out the training data. Definitely not an active intelligence in there.

      They also don’t have any autonomy: they’re not active of their own accord when they’re not being addressed. They’re not sitting there thinking, so they have no internal personal landscape of thought, no place in which a private intelligence can be at play.

      They’re inert.