Title says it all

  • chatokun@lemmy.dbzer0.com · 5 days ago

    My sister caught her 8-year-old son talking to AI chatbots on an app like this and blocked it. She went through the history and said the bot was often trying to flirt with him, but he didn’t seem to be interested, and seemed more to just be looking to talk.

    This may also be aimed at hooking young kids, though I’m definitely not saying the pedo vibes aren’t intentional. I just think they’re going for more than one audience group.

  • TheFriar@lemm.ee · 6 days ago

    If I were you, I’d send this to some media outlets. Tank some AI stock and create some more negative news around it.

  • jsomae@lemmy.ml · 6 days ago

    there’s plausible denia… nah i got nothin. That’s messed up. Even for the most mundane, non-gross use case imaginable, why the fuck would anybody need a creepy digital facsimile of a child?

    • ckmnstr@lemmy.world (OP) · 6 days ago

      I mean, maaaybe if you wanted children and couldn’t have them. But why would it need to be “beautiful and up for anything”?

      • jsomae@lemmy.ml · 6 days ago

        “beautiful and up for anything” is incredibly suggestive phrasing. It’s an exercise in mental creativity to make it sound not creepy. But I can imagine a pleasant grandma (always the peak of moral virtue in any thought experiment) saying this about her granddaughter. I don’t mean to say I have heard this, only that I can imagine it. Barely.

  • viciouslyinclined@lemmy.world · 6 days ago

    And the bot has 882.9k chats.

    I’m not surprised, and I don’t think you or anyone else is either. But that doesn’t make this less disturbing.

    I’m sure the app devs are not interested in cutting off a huge chunk of their loyal users by doing the right thing and getting rid of those types of bots.

    Yes, it’s messed up. In my experience, it is difficult to report chatbots and see any real action taken as a result.

    • Shin@lemmy.world · 6 days ago

      Ehhh, nah. As someone who used character.ai before: there are many horrible bots that get cleared out, and the bots have been impossible to have sex with unless you get really creative. The most horrendous ones get removed quite a bit and were consistently reposted. I’m not here to shield a big company or anything, but the “no sex” thing was a huge issue in the community, and people always fought with the devs about it.

      They’re probably trying to hide behind the veil of more normal bots now, but I struggle to imagine how they’d get it to do sexual acts, when even some lightly violent RPs I tried got censored. It’s pretty difficult, and it got worse over time. Idk though, I stopped using it a while ago.

    • viciouslyinclined@lemmy.world · 6 days ago

      They definitely knew who they were targeting when they made this. I only hope that, if those predators simply must text with a child, they keep talking to an AI bot rather than a real child.

  • ZDL@lazysoci.al · 7 days ago

    Yes, it’s what you think it is. I don’t think, however, that there’s anywhere you can report it that will care enough to do something about it.

  • bdonvr@thelemmy.club · 7 days ago

    Unfortunately in a lot of places there’s really nothing illegal if it’s just fantasy and text.

    • gandalf_der_12te@discuss.tchncs.de · 6 days ago

      Why is that unfortunate, though? Who would you be protecting by making that chatbot illegal? Would you “protect” the chatbot? Would you “protect” the good-think of the users? Do you think it’s about preventing “normalization” of these thoughts?

      In case of the latter: we had the very same discussion about shooter video games, and the evidence shows that shooter games do not make people more violent or more likely to kill with guns or other weapons.

      • zalgotext@sh.itjust.works · 6 days ago

        I don’t think it’s the same discussion; video games and AI chatbots are two very different things that you engage with in very different ways.

  • you_are_dust@lemmy.world · 7 days ago

    I’ve messed around with some of these apps out of curiosity about where the technology is. There’s typically a report function in the app. You can probably report that particular bot from within the app to try to get it deleted. Reporting the app itself probably won’t do much.

    • Obelix@feddit.org · 7 days ago

      Do not complain to scummy companies; they will ignore you. Send messages to the media and the police.

      • hendrik@palaver.p3x.de · 7 days ago

        I’d say do complain to companies first, at least to those based in a regular country, and only then blog about it. It also underlines your point if you can write that you informed them and they didn’t care.

        I believe it’s the other way around if it’s really shady and/or a crime is involved and you suspect the company will sweep it under the carpet. In that case you’ll want to inform the police first so they can gather evidence. But don’t waste their resources on minor things; they have enough to do. And I don’t think this one is there yet, so I wouldn’t add it to the workload of already overworked police.

        Judging by what I’ve seen when talking to police and media, they often lack the interest or time to focus on random things as long as there are bigger fish to fry… I’ve already reported a worse service (one that was already in the news) to the police’s internet office, and nothing ever came of it. So that’s sometimes not the solution either.

        I think spreading some awareness is a good thing, so this post is warranted. But what I’d do in this specific case is take a screenshot and save the URL, in case I want to escalate things at a later date, then start with a regular report to the company, as it seems to be a regular company registered in the USA. Then I’d wait two weeks before bothering other people.
        If this were an image or video generator, I’d act differently and maybe go straight to the police. But it isn’t.

        • Grimtuck@lemmy.world · 6 days ago

          I disagree. This will only result in reactive moderation. If you want them to take this seriously and stop this before these bots go live, then shame them on the internet. Don’t think they don’t know what’s going on on their own site. These websites profit from delaying action.