Sounds like it could be the plot of a mystery novel akin to JK Rowling’s Cormoran Strike series.
The author is very much that type of guy:
Florida Man. Individualist. Free minds and free markets. Distrustful of ideologies, whether left or right.
Kelsey Piper bluechecks thusly:
James Damore was egregiously wronged.
I mean, maybe? But the amount of trust I put in a description from “GeekWire” written by “an investor at Madrona Venture Group and a former leader at Amazon Web Services” who uncritically declares that spicy autocomplete “achieved strong reasoning capabilities” is … appropriately small.
I’ve previously discussed the concept of model collapse: how feeding synthetic data (training data created by an AI rather than a human) to an AI model can end up teaching it bad habits. But it seems that DeepSeek succeeded in training its models on generated data, specifically for subjects (to quote GeekWire’s Jon Turow) “…like mathematics where correctness is unambiguous.”
That sound you hear is me pressing F to doubt. Checking the correctness of mathematics written as prose interspersed with equations is, shall we say, not easy to automate.
Wait, the splinter group from the cult whose leader wants to bomb datacenters might be violent?
I mean, “downvotes are proof that the commies are out to get me” is an occasion not just to touch grass, but to faceplant into an open field of wildflowers.
Enjoy your trip to the egress.
yeah, DeepSeek LLMs are probably still an environmental disaster for the same reason most supposedly more efficient blockchains are — perverse financial incentives across the entire industry.
the waste generation will expand to fill the available data centers
oops all data centers are full, we need to build more data centers
This is much more a TechTakes story than a NotAwfulTech one; let’s keep the discussion over on the other thread:
Perhaps the most successful “sequel to chess” is actually the genre of chess problems, i.e., the puzzles about how Black can achieve mate in 3 (or whatever) from a contrived starting position that couldn’t be seen in ordinary (“real”) gameplay.
There are also various ways of randomizing the starting positions in order to make the memorized knowledge of opening strategies irrelevant.
Oh, and Bughouse.
Pouring one out for the local-news reporters who have to figure out what the fuck “timeless decision theory” could possibly mean.
The big claim is that R1 was trained on far less computing power than OpenAI’s models at a fraction of the cost.
And people believe this … why? I mean, shouldn’t the default assumption about anything anyone in AI says be that it’s a lie?
Altman: Mr. President, we must not allow a bullshit gap!
Musk: I have a plan… Mein Führer, I can walk!
I would appreciate this too, frankly. The rabbit hole is deep, and full of wankers.
This seems like an apt point to share Maxwell Neely-Cohen’s “Century-Scale Storage”.
I asked ChatGPT, the modern apotheosis of unjustified self-confidence, to prove that .999… is less than 1. Its reply began “Here is a proof that .999… is less than 1.” It then proceeded to show (using familiar arguments) that .999… is equal to 1, before majestically concluding “But our goal was to show that .999… is less than 1. Hence the proof is complete.” This reply, as an example of brazen mathematical non sequitur, can scarcely be improved upon.
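(For the curious, the “familiar argument” the model dutifully reproduced before contradicting itself is the standard one; a sketch, not taken from the original reply:)

```latex
\begin{align*}
x       &= 0.999\ldots \\
10x     &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots \\
9x      &= 9 \\
x       &= 1
\end{align*}
```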
brb, saving copies of physics and math books before they go offline
(Michael Keaton bursts out of a grave) It’s sneer time!
Do the Zizians fit in the “rationalist/EA/risk community”? Gosh and golly gee.
Yuddites and Zizians are a better example of the “narcissism of small differences” than any of the ones that Siskind propped up.