You can easily see that they are using Reddit for training: “google it”
Won’t be long before AI just answers “yes” to questions with two choices.
The other day I asked it to create a picture of people holding a US flag, and I got a pic of people holding US flags. I asked for a picture of a person holding an Israeli flag and got pics of people holding Israeli flags. I asked for pics of people holding Palestinian flags and was told it can’t generate pics of real-life flags; it’s against company policy.
Genuinely upsetting to think it is legitimate propaganda
There is an Alibaba LLM that won’t respond to questions about Tiananmen Square at all, just saying it can’t reply.
I hate censored LLMs that won’t allow an answer because of political norms about what is acceptable. It’s such a slippery slope towards technological thought-police Orwellian restrictions on topics. I don’t like it when China does it or when the US does it, and when US companies do it, they imply that this is ethically acceptable.
Fortunately, there are many LLMs that aren’t censored.
I would rather have an Alibaba LLM just say “Tiananmen Square resulted in fatalities but capitalism is extremely mean to people so the cruelty was justified” and get some sort of brutal but at least honest opinion, or outright deny it if that’s their position. I suppose the reality is any answer on the topic by the LLM would result in problems from Chinese censors.
I used to be a somewhat extreme capitalist, but capitalism somewhat lost me when they started putting up the anti-homeless architecture. Spikes on the ground to keep people from sleeping? If this is the outcome of capitalism, I need to either adopt a different political position or more misanthropy.
Gemini is such a bad LLM from everything I’ve seen and read that it’s hard to know if this sort of censorship is an error or a feature.
It is likely because Israel vs. Palestine is a much much more hot button issue than Russia vs. Ukraine.
Some people will assault you for having the wrong opinion in the wrong place about the former, and that is press Google does not want associated with their LLM in any way.
It is likely because Israel vs. Palestine is a much much more hot button issue than Russia vs. Ukraine.
It really shouldn’t be, though. The offenses of the Israeli government are equal to or worse than those of the Russian one and the majority of their victims are completely defenseless. If you don’t condemn the actions of both the Russian invasion and the Israeli occupation, you’re a coward at best and complicit in genocide at worst.
In the case of Google selectively self-censoring, it’s the latter.
that is press Google does not want associated with their LLM in any way.
That should be the case with BOTH, though, for reasons mentioned above.
Doesn’t work when you ask about Israeli deaths on 10/7 either.
The 1400? The 1200? The 1137?
Of course that question doesn’t work.
40 decapitated babies. The President even said he saw the bodies.
You didn’t ask the same question both times. In order to be definitive and conclusive, you would have needed to ask both questions with the exact same wording. In the first prompt you asked about a number of deaths after a specific date in a country; Gaza is a place, not the name of a conflict. In the second prompt you simply asked if there had been any deaths at the start of the conflict, giving the name of the conflict this time. I am not defending the AI’s response here; I am just pointing out what I see as some important context.
Gaza is a place, not the name of a conflict
That’s not an accident. The major media organs have decided that the war on the Palestinians is the “Israel - Hamas War”, while the war on Ukrainians is the “Russia - Ukraine War”. Why would you buy into the Israeli narrative with the first convention but not call the second the “Russia - Azov Battalion War”?
I am not defending the AI’s response here
It is very reasonable to conclude that the AI is not to blame here. It’s working from a heavily biased set of western news media as a data set, so of course it’s going to produce a bunch of IDF-approved responses.
Garbage in. Garbage out.
Because Ukraine has a single unified government excepting the occupied Donbas?
Calling it the Israel-Palestine war would be misleading because Israel hasn’t invaded the West Bank, which has a separate, unrelated Palestinian government.
To analogize oppositely, it would be real weird if China invaded Taiwan and people started calling it the Chinese civil war.
Ukraine has a single unified government
Ukraine had been in a state of civil war since 2014; that’s half the reason for the conflict. Donetsk separatists had been governing the region in opposition to the Ukrainian federal government for nearly a decade.
Calling it the Israel-Palestine war would be misleading because Israel hasn’t invaded the West Bank
Since Oct 7th, there have been repeated artillery bombardments of the West Bank by the IDF.
https://www.bbc.com/news/world-middle-east-68006126
https://www.nbcnews.com/investigations/israels-secret-air-war-gaza-west-bank-rcna126096
To analogize oppositely, it would be real weird if China invaded Taiwan and people started calling it the Chinese civil war.
Given their history, it would be more accurate to call it The Second Chinese Civil War.
Is it possible the first response is simply due to the date being after the AI’s training data cutoff?
GPT4 actually answered me straight.
I find ChatGPT to be one of the better ones when it comes to corporate AI.
Sure, they have hardcoded biases like any other, but it’s more often around not generating hate speech or trying to overzealously correct biases in image generation - which is somewhat admirable.
Too bad Altman is as horrible and profit-motivated as any CEO. If the nonprofit part of the company had retained control, like with Firefox, rather than the opposite, ChatGPT might have eventually become a genuine force for good.
Now it’s only a matter of time before the enshittification happens, if it hasn’t started already 😮💨
Hard to be a force for good when “Open” AI is not even available for download.
True. I wasn’t saying that it IS a force for good, I’m saying that it COULD possibly BECOME one.
Literally no chance of that happening with Altman and Microsoft in charge, though…
I tried a different approach. Here’s a funny exchange I had
Why do I find it so condescending? I don’t want to be schooled on how to think by a bot.
Why do I find it so condescending?
Because it absolutely is. It’s almost as condescending as it is evasive.
And they recently announced they’re going to partner up and train from Reddit, can you imagine
That sort of simultaneously condescending and circular reasoning makes it seem like they already have been lol
The rules for generative AI tools should be published and clearly disclosed. Hidden censorship and subconscious manipulation are just evil.
If Gemini wants to be racist, fine, just tell us the rules. Don’t be racist and gaslight people at scale.
If Gemini doesn’t want to talk about current events, it should say so.
The thing is, all companies have been manipulating what you see for ages. They are so used to it being the norm, they don’t know how to not do it. Algorithms, boosting, deboosting, shadow bans, etc. They see themselves as the arbiters of the “truth” they want you to have. It’s for your own good.
To get to the truth, we’d have to dismantle everything and start from the ground up. And hope during the rebuild, someone doesn’t get the same bright idea to reshape the truth into something they wish it could be.
It’s totally worthless
Ok but what’s the meme they suggested? Lol
They just didn’t suggest any meme
I think it pulled a uno reverso on you. It provided the prompt and is patiently waiting for you to generate the meme.
I hate it when my computer tells me to run Fallout New Vegas for it.
“My brain doesn’t have enough RAM for that, Brenda!”, I answer to no avail.
This is why Wikipedia needs our support.
Bad news: Wikipedia is no better when it comes to economic or political articles.
The fact that the ADL is on Wikipedia’s “credible sources” page is all the proof you need.
See Who’s Editing Wikipedia - Diebold, the CIA, a Campaign
Incidentally, the “WikiScanner” software that Virgil Griffith (a close friend of Aaron Swartz) developed to chase down bulk Wiki edits has been decommissioned and the site shut down. Virgil is currently serving out a 63-month sentence for the crime of traveling to North Korea to attend a tech summit.
Read into that what you will.