

I think quote posts are undesirable for the reasons you mentioned but I have to accept that it will be huge for adoption, and the flip side (promoting others' work in a positive light) is also going to be really great.
You are the OP, you literally removed someone's tweet from its original context (or reposted without fact checking) and presented it here with an entirely different, false context. The fact that it's being misinterpreted is 100% on you for presenting it inaccurately, not the guy whose words you misrepresented.
I actually upvoted this before deciding to fact check which took me no more than ten seconds.
This guy made a joke and a bunch of Twitter users took it seriously. Context.
Oh yeah absolutely, but I also think the goal of the AI companies is not to actually create a functioning AI that could “do a job 20% as good as a human, but 90% cheaper”, but to sell fancy software, whether it works or not, and leave the smaller companies holding the bag after they lay off their workforce.
Right? It actually makes me feel insane that the topic of “humans working less” is never in the selling points of these products.
Honestly I suspect that rather than some nefarious capitalist plot to enslave humanity, it is just more evidence that the software can’t actually do what the people selling it to big corporations claim it can do.
This bit at the end, wow:
Gartner still expects that by 2028 about 15 percent of daily work decisions will be made autonomously by AI agents, up from 0 percent last year.
Agentic AI is wrong 70% of the time. Even if we assume a human employee is barely better than a coin flip and wrong 49% of the time, is it really still more efficient to replace them?
Very nice (and sad)
I like where your head’s at, but Mastodon’s system of verification seems much easier to me and doesn’t rely on a third party.
Also this is not a news article, it’s an opinion essay. It’s perfectly reasonable to read this at any time.
For YouTube tutorial videos I have no issue with relying on GPT, but I think it’s important to recognize that the translation of art is art. I don’t feel good about the idea of something without a soul or perspective interpolating a work of art from one culture and language into another that might be wildly different from where it started.
That all said, I think Crunchyroll and anyone else using AI art without disclosing it absolutely should be honest about it.
Because unlike Mastodon there is no one main "discussion forum" fedi app; we have Lemmy, mbin, kbin, and piefed. The term has nothing to do with Meta Threads!
Lemmy does feel more and more like 4chan every day…
The term "reasoning model" is as gaslighting a marketing term as "hallucination". When an LLM is "reasoning" it is just running the model multiple times. As this report implies, using more tokens appears to increase the probability of producing a factually accurate response, but the AI is not "reasoning", and the "steps" of its "thinking" are just bullshit approximations.
This is literally a drama article
It’s annoying to be treated that way isn’t it?
Please don’t sealion me.
I don’t like having to defend the Times, but the rumor is that they rushed this story out before Christopher Rufo could break the news (with what would almost certainly be a right-wing spin).
I doubt the Times would ever admit to publishing a story with the goal of hindering the formation of a right-wing narrative, but if that's what happened here, it might have been the right call, as opposed to waiting for Rufo to break the news and publishing a "fact check" of his reporting later.