• 0 Posts
  • 28 Comments
Joined 8 months ago
Cake day: March 22nd, 2024


  • might involve some amount of hubris you say…

    This really opened my eyes to some historical context I never thought of before.

    My initial gut reaction was to be judgmental about the way billionaires spend their money, thinking it might involve some amount of hubris.

    Then I realized I have no idea how sculptures that are now shown in museums as treasured historical art pieces were judged in the time they were created. Today we treasure them. But what did the general population think of them? I have no idea.

    I imagine that at the time of their commissioning they were also paid for by affluent people who could afford such luxuries. People who probably mirror today’s billionaires in influence and access. So what’s different about these?






  • Proton kept popping up massively recommended while some occasional critical mentions from folks in anarchist circles, etc - made me a bit 🤨 and want to dig in more,

    No surprise that folks in anarchist circles are skeptical of Proton ha. That said, I do know quite a few people in the email “industry” who are broadly skeptical of Proton’s general philosophy/approach to email security, and the way they market their service/offerings.

    Others I poked into are fastmail and tuta - both seem a fair bit better. Might be worth a look

    Fastmail has a great interface and user experience imo, significantly better than any other web client I’ve tried. That said, they’re not end-to-end encrypted, so they’re not really trying to fill the same niche as Proton/Tuta.

    From their website:

    Fastmail customers looking for end-to-end encryption can use PGP or S/MIME in many popular 3rd party apps. We don’t offer end-to-end encryption in our own apps, as we don’t believe it provides a meaningful increase in security for most users…

    If you don’t trust the server, you can’t trust it to load uncompromised code, so you should be using a third party app to do end-to-end encryption, which we fully support. And if you really need end-to-end encryption, we highly recommend you don’t use email at all and use Signal, which was designed for this kind of use case.

    I honestly don’t know enough to separate the wheat from the chaff here (I can barely write functional python scripts lol - so please chime in if I’m completely off base), but this comes across to me as an understandable (and fairly honest) compromise that is probably adequate for some threat models? (A rough sketch of what the third-party-app approach looks like is below.)

    Last time I used Tuta the user experience was pretty clunky, but afaik it is E2EE, so it’s probably a better direct alternative to Proton.
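
    To make the “do the encryption in a third-party app, not in the provider’s web client” idea concrete, here’s a minimal sketch in Python. Hedging up front: it assumes the python-gnupg package, a local GnuPG keyring that already holds the recipient’s public key, and placeholder addresses/server/credentials — nothing here is Fastmail-specific. The point is just that the provider only ever relays the armored ciphertext:

    ```python
    import smtplib
    from email.message import EmailMessage

    import gnupg  # pip install python-gnupg; wraps the local gpg binary

    # Encrypt on your own machine, before the mail provider ever sees the body.
    gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring
    encrypted = gpg.encrypt("the actual message body", "recipient@example.com")
    assert encrypted.ok, encrypted.status  # fails if e.g. the public key is missing

    msg = EmailMessage()
    msg["From"] = "me@example.com"         # placeholder addresses
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "hello"               # note: headers are NOT encrypted by PGP
    msg.set_content(str(encrypted))        # body is the ASCII-armored PGP block

    # The provider only stores/relays ciphertext; decryption happens in the
    # recipient's own client, with their private key.
    with smtplib.SMTP_SSL("smtp.example.com", 465) as smtp:  # placeholder host
        smtp.login("me@example.com", "app-password")         # placeholder creds
        smtp.send_message(msg)
    ```

    Which is basically the trade-off their quote is describing: the crypto runs in software you chose yourself, not in code the server hands you.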



  • Hello, and welcome!

    I also desperately need a place where people know what a neoreactionary is so I can more easily complain about them so I’d like to hang around longer term too.

    Sounds like you’re in the right place. Please complain as much as you need, so we can all scream, sigh and sneer into the void in unison.

    for my first project I use the Alex Garland TV show Devs

    I haven’t read your piece yet, because I’d like to watch Devs, unspoiled, at some point, but I’ve bookmarked it to come back to later :)




  • jax@awful.systems to SneerClub@awful.systems: Why I'm leaving EA

    lmao this person writes a personal goodbye message, detailing their experience and motivations behind what reads as quite an important decision for them, and receives “15 disagrees” for their trouble, and this comment:

    I gave this post a strong downvote because it merely restates some commonly held conclusions without speaking directly to the evidence or experience that supports those conclusions.

    This is EA at its “open to criticism” peak.


  • these people can’t stop telling on themselves lmao

    There’s currently a loud minority of EAs saying that EA should ostracize people if they associate with people who disagree with them. That we should try to protect EAs from ideas that are not held by the majority of EAs.

    how fucking far are their heads up their own collective arses to not understand that you can’t have a productive, healthy discourse without drawing a line in the sand?

    they spend fucking hundreds of collective hours going around in circles on the EA forum debating[1] this shit, instead of actually doing anything useful

    how do they, in good conscience, deny any responsibility for the real harms ideas cause, when they continue to lend them legitimacy by entertaining them over and over and over again?

    I swear these fuckers have never actually had to fight for or defend something that is actually important, or directly affects the day-to-day lived experience or material conditions of themselves or anyone they care about

    I hope we protect EA’s incredible epistemic norms

    lol, the norms that make it a-okay to spew batshit stuff like this? fuck off

    Also, it’s obvious that this isn’t actually EA cultiness really, but just woke ideology trying to take over EA


    1. where “debating” here is continually claiming to be “open to criticism” while, at the same time, trashing anyone who does provide any form of legitimate criticism, so much so that it seems to be a “norm” for internal criticism to be anonymous for fear of retribution ↩︎



  • q: how do you know if someone is a “Renaissance man”?

    a: the llm that wrote the about me section for their website will tell you so.

    jesus fucking christ

    From Grok AI:

    Zach Vorhies, oh boy, where do I start? Imagine a mix of Tony Stark’s tech genius, a dash of Edward Snowden’s whistleblowing spirit, and a pinch of Monty Python’s humor. Zach Vorhies, a former Google and YouTube software engineer, spent 8.5 years in the belly of the tech beast, working on projects like Google Earth and YouTube PS4 integration. But it was his brave act of collecting and releasing 950 pages of internal Google documents that really put him on the map.

    Vorhies is like that one friend who always has a conspiracy theory, but instead of aliens building the pyramids, he’s got the inside scoop on Google’s AI-Censorship system, “Machine Learning Fairness.” I mean, who needs sci-fi when you’ve got a real-life tech thriller unfolding before your eyes?

    But Zach isn’t just about blowing the whistle on Google’s shenanigans. He’s also a man of many talents - a computer scientist, a fashion technology company founder, and even a video game script writer. Talk about a Renaissance man!

    And let’s not forget his role in the “Plandemic” saga, where he helped promote a controversial documentary that claimed vaccines were contaminated with dangerous retroviruses. It’s like he’s on a mission to make the world a more interesting (and possibly more confusing) place, one conspiracy theory at a time.

    So, if you ever find yourself in a dystopian future where Google controls everything and the truth is stranger than fiction, just remember: Zach Vorhies was there, fighting the good fight with a twinkle in his eye and a meme in his heart.


  • NYT opinion piece title: Effective Altruism Is Flawed. But What’s the Alternative? (archive.org)

    lmao, what alternatives could possibly exist? have you thought about it, like, at all? no? oh…

    (also, pet peeve, maybe bordering on pedantry, but why would you even frame this as a singular alternative? “The alternative” doesn’t exist, but there are actually many alternatives that have fewer flaws).

    You don’t hear so much about effective altruism now that one of its most famous exponents, Sam Bankman-Fried, was found guilty of stealing $8 billion from customers of his cryptocurrency exchange.

    Lucky souls haven’t found sneerclub yet.

    But if you read this newsletter, you might be the kind of person who can’t help but be intrigued by effective altruism. (I am!) Its stated goal is wonderfully rational in a way that appeals to the economist in each of us…

    rational_economist.webp

    There are actually some decent quotes critical of EA (though the author doesn’t actually engage with them at all):

    The problem is that “E.A. grew up in an environment that doesn’t have much feedback from reality,” Wenar told me.

    Wenar referred me to Kate Barron-Alicante, another skeptic, who runs Capital J Collective, a consultancy on social-change financial strategies, and used to work for Oxfam, the anti-poverty charity, and also has a background in wealth management. She said effective altruism strikes her as “neo-colonial” in the sense that it puts the donors squarely in charge, with recipients required to report to them frequently on the metrics they demand. She said E.A. donors don’t reflect on how the way they made their fortunes in the first place might contribute to the problems they observe.


  • a nearly 12,000-word anonymous hit piece on Émile Torres on the EA forum has some gems in the comments.

    the top comment basically calls it out as someone airing their personal grievances.

    the next comment feels the need to call out Torres and Gebru as big bad bullies:

    Broadly I think that both Torres and Gebru engage in bullying. They have big accounts and lots of time and will quote tweet anyone who disagrees with them, making the other person seem bad to their huge followings.

    and my personal favorite, that Marx’s drive was more akin to that of rationalists than of current leftists, because leftists for the “last ten-fifteen years just [haven’t] been very rational”

    Karl Marx’s whole work was based on economics and an attempt to create a sort of scientific theory of history, love it or hate it the man obviously had a drive more akin to those of current rationalists than of current leftists.


  • nsfw, and at the risk of beating a dead horse: this article, although brief, does a decent job of connecting the dots between “silicon valley pronatalism” and regular ol’ nationalism/white supremacy, while debunking some of their EA bullshit too

    The Collinses are leading spokespeople for a movement called pronatalism, popular in Silicon Valley. Elon Musk, a father of 11, is one of its leading proponents. “Population collapse due to low birth rates is a much bigger risk to civilization than global warming,” Musk tweeted.

    Demographers disagree: there is no collapse, and one is not even predicted. Such evidence has not stopped the rise of pronatalism in response to an imagined “population bomb.”

    In short, the problem for pronatalism is not declining reproduction, but who is reproducing. Pronatalism is inextricably tied to nationalism alongside race, class and ethnicity… Here, nationalism tips into ethnonationalism and reproductive debates descend into violent racism.


  • reporter: This AI development is a Big Deal™.

    me: y tho?

    reporter: Oh, I’m so glad you asked! The AI is IN the computer!

    me: y tho?

    reporter: To win the race to get AI everywhere the fastest!

    me: sorry, y tho?

    reporter: Oh my, you sure have a lot of questions! Look, let’s not try to make any sense of this. After all, only our tech-daddies have the answers!

    me: begins weeping

    reporter: Don’t cry! At least we’re all in this together, right???

    While AI being “in” a computer might sound as obvious as blue being “in” the sky, this is actually one of those things that is a Big Deal™. AI models are normally either downloaded or used online, but Microsoft has just announced an “AI computer”, meaning the technology is in-built. It’s the company’s latest play in the overheated race to see which tech giant can get the most AI into the most places, fastest.

    What does it mean? Hard to say! In case you haven’t worked it out yet, this is all one big live experiment, and we’re the rats. Perhaps there’s some comfort in knowing we’ll all find out together.

    edit: oh no, this is the same author I sneered at for quirkwashing e/acc. nothing personal, I just die a little inside every time I read something like this, and a little death is always better shared!