• comrade-bear@lemmygrad.ml · 10 points · 10 days ago

    I’ve actually heard from pretty respectable folks that one of NATO’s goals in Ukraine, and possibly quite an important one, is to gather training data for war- and propaganda-oriented AI via the Palantir company. And apparently the same goes for Israel, so for this purpose who wins is not relevant, just that the fighting rages on.

    And lastly, and quite concerningly, Palantir has apparently expanded into South Korea, which is alarming if the trend continues.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 8 points · 9 days ago

      There is no doubt in my mind that AIs are being trained on the data. We already know, for example, that this thing exists: https://www.theverge.com/2019/7/31/20746926/sentient-national-reconnaissance-office-spy-satellites-artificial-intelligence-ai

      That said, the tech is not unique to the US, which means Russia and China would be training predictive systems on such data as well. Russia might be somewhat behind in that regard, but China most certainly has military AIs that can rival the US’s.

    • amemorablename@lemmygrad.ml · 5 points · edited · 10 days ago

      Wouldn’t happen to have a source on that somewhere? I mean, with all the shit the west gets up to, I could def believe it. But I also wouldn’t be too quick to be scared by such a thing. On a fundamental level, one thing to remember is that even the best human generals can make tactical blunders, and AI is nowhere near human-level intelligence in the first place. I could see attempts to use it for statistical judgments, but such judgments are likely to be locked into a specific scenario and hard to generalize. At the end of the day, we’re still dealing with material conditions, and so is AI.

      And though it might not be exactly the same tech, based on what I’ve seen so far with generative AI, it’s a lot less effective at generalizing than people tend to think. To put it in specifics, one of the problems with generative AI is that if the dataset contains nothing about subject X, the model will likely struggle with anything in that subject. For example, if a text model was never trained on any data about Legos, it won’t somehow extrapolate that Legos exist in the world (which makes intuitive sense when you think about an example like that). The same is true of humans, but it’s worse with AI. Even we can only generalize so far beyond what we know for sure, and we overcome this by learning new things as we encounter them. But a lot of what’s getting called AI doesn’t learn a damn thing unless you explicitly make it, and training gets expensive fast. And if you try to make it self-learning of some kind, it can easily run off in a direction you don’t want, like the Microsoft Tay incident.

      So I mean, they can try, but colonialism has gone on as long as it has with AI not even in existence for the majority of that time. AI might change how the brutality is carried out, but the brutality has been going on for hundreds of years. And in spite of that, China is doing well, as are some other AES states, and BRICS is making progress. The empire can be resisted, and will be, until its war criminals are brought to justice.

      Edit: wording