Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

    • swlabr@awful.systems · ↑3 · 2 days ago (edited)

      To summarise:

      1. Author recounts shitty conversations that men have where they objectify women
      2. Author thinks about women that are “known quantities” of conventionally attractive, and says they are “only as attractive as the pretty women one meets in real life,” and attributes the difference to things like makeup, posing, photography etc.
      3. Author refuses to comment on why men have conversations mentioned in 1. (basically just perpetuating the amirite guys? chauvinism)
      4. Author proceeds to speculate on why women talk about other women’s appearance.

      This is just a LWer’s version of a shitty greentext ending with “why are women like this?”

      E: sorry for necroposting, this came up somehow and I didn’t check which sack it was under.

  • David Gerard@awful.systems (mod) · ↑9 · 5 days ago

    TIL that “Aris Thorne” is a character name favoured by ChatGPT - which means its presence is a reliable slop tell, lol

    like the dumbass-ray version of Ballard calling multiple characters variants on “Traven”

    what to do with this information
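
    One obvious use: grep for it. A toy slop heuristic (illustrative only; the regex and function name are mine, and the name is a tell, not proof):

        import re

        # Flag text containing ChatGPT's favourite character name.
        # A tell, not a verdict: a human author could use the name too.
        SLOP_TELL = re.compile(r"\bAris Thorne\b")

        def smells_like_slop(text: str) -> bool:
            return bool(SLOP_TELL.search(text))

        print(smells_like_slop("Dr. Aris Thorne adjusted his glasses."))  # True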

    • TinyTimmyTokyo@awful.systems · ↑12 · 5 days ago

      Last year McDonald’s withdrew AI from its own drive-throughs as the tech misinterpreted customer orders - resulting in one person getting bacon added to their ice cream in error, and another having hundreds of dollars worth of chicken nuggets mistakenly added to their order.

      Clearly artificial superintelligence has arrived, and instead of killing us all with diamondoid bacteria, it’s going to kill us by force-feeding us fast food.

      • JFranek@awful.systems · ↑5 · 5 days ago

        resulting in one person getting bacon added to their ice cream in error

        At first, I couldn’t believe that the staff didn’t catch that. But thinking about it, no, I totally can.

  • BigMuffN69@awful.systems · ↑8 · 5 days ago

    https://www.argmin.net/p/the-banal-evil-of-ai-safety

    Once again shilling another great Ben Recht post, this time calling out the fucking insane irresponsibility of “responsible” AI providers, who won’t do even the bare minimum to prevent people from having psychological breaks from reality.

    "I’ve been stuck on this tragic story in the New York Times about Adam Raine, a 16-year-old who took his life after months of getting advice on suicide from ChatGPT. Our relationship with technological tools is complex. That people draw emotional connections to chatbots isn’t new (I see you, Joseph Weizenbaum). Why young people commit suicide is multifactorial. We’ll see whether a court will find OpenAI liable for wrongful death.

    But I’m not a court of law. And OpenAI is not only responsible, but everyone who works there should be ashamed of themselves."

    • scruiser@awful.systems · ↑6 · 4 days ago

      It’s a good post. A few minor quibbles:

      The “nonprofit” company OpenAI was launched under the cynical message of building a “safe” artificial intelligence that would “benefit” humanity.

      I think at least some of the people at launch were true believers, but strong financial incentives and some cynics present at the start meant the true believers didn’t really have a chance, culminating in the board trying but failing to fire Sam Altman and him successfully leveraging the threat of taking everyone with him to Microsoft. It figures one of the rare times rationalists recognize and try to mitigate the harmful incentives of capitalism they fall vastly short. OTOH… if failing to convert to a for-profit company is a decisive moment in popping the GenAI bubble, then at least it was good for something?

      These tools definitely have positive uses. I personally use them frequently for web searches, coding, and oblique strategies. I find them helpful.

      I wish people didn’t feel the need to add all these disclaimers, or at least put a disclaimer on their disclaimer. It is a slightly better autocomplete for coding that also introduces massive security and maintainability problems if people entirely rely on it. It is a better web search only relative to the ad-money-motivated compromises Google has made. It also breaks the implicit social contract of web searches (web sites allow themselves to be crawled so that human traffic will ultimately come to them) which could have pretty far reaching impacts.
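
      (That contract is conventionally expressed through robots.txt. A minimal sketch of the opt-in convention, using Python’s stdlib parser; the rules shown are hypothetical:)

          from urllib.robotparser import RobotFileParser

          # A site that opts in to crawling, trusting that indexing
          # will eventually send human readers back to it.
          rp = RobotFileParser()
          rp.parse([
              "User-agent: *",  # any crawler...
              "Allow: /",       # ...may fetch everything
          ])
          print(rp.can_fetch("SomeSearchBot", "https://example.com/article"))  # True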

      One of the things I liked and didn’t know about before

      Ask Claude any basic question about biology and it will abort.

      That is hilarious! Kind of overkill to be honest, I think they’ve really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks. But I like the author’s overall point that this shut-it-down approach could be used for a variety of topics.

      One of the comments gets it:

      Safety team/product team have conflicting goals

      LLMs aren’t actually smart enough to make delicate judgements, even with all the fine-tuning and RLHF they’ve thrown at them, so you’re left with over-censoring everything or having the safeties overridden with just a bit of prompt-hacking (and sometimes both problems with one model)
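
      (A toy keyword filter makes the dilemma concrete; this is a sketch of the failure mode, not any vendor’s actual system:)

          # A blunt blocklist both over-censors and is trivially bypassed.
          BLOCKED = {"virus", "pathogen", "synthesis"}

          def safe_reply(prompt: str) -> str:
              if any(word in prompt.lower() for word in BLOCKED):
                  return "I can't help with that."
              return "(model answers normally)"

          print(safe_reply("How does a virus replicate?"))   # refused: harmless biology homework
          print(safe_reply("How does a v1rus replicate?"))   # trivial prompt-hack gets through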

      • fullsquare@awful.systems · ↑5 · 4 days ago

        Ask Claude any basic question about biology and it will abort.

        it might be that, or it may have been intended to shut off any output of medical-sounding advice. if it’s the former, then it’s a rare rationalist W for wrong reasons

        I think they’ve really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks.

        look up the story of vil mirzayanov. break out these bayfucker-style salaries in eastern europe or india or a number of other places and you’ll find a long queue of phds willing to cook man-made horrors beyond your comprehension. it might not even take six figures (in dollars or euros) after tax

        LLMs aren’t actually smart enough to make delicate judgements

        maybe they really made machines in their own image

      • blakestacey@awful.systems · ↑8 · 4 days ago

        “The Torment Nexus definitely has positive uses. I personally use it frequently for looking up song lyrics and tracking my children’s medication doses. I find it helpful.”

  • fnix@awful.systems · ↑8 · 5 days ago (edited)

    Mark Cuban is feeling bullied by Bluesky. He will also have you know that you need to stay aware of the important achievements of your betters: though he is currently only the 5th most blocked user on there, he was once the 4th most blocked. Perhaps he is just crying out to move up the ranks once more?

    It’s really all about Bluesky employees being able to afford their healthcare for Mark you see.

    And of course, here’s never-Trumper Anne Applebaum running interference for him. Really an appropriate hotdog-guy-meme moment – as much as I shamelessly sneer at Cuban, I’m genuinely angered by the complete inability of the self-satisfied ‘democracy defender’ set to see their own complicity in perpetuating a permission structure for privileged white men to feel eternally victimized.

    • Soyweiser@awful.systems · ↑6 · 5 days ago

      As I said on bsky: why is he complaining? If he cares, he could fund bsky himself. Bsky could name an office wing after him, give his kids legacy admissions, give him a shoutout in every video they make.

      (While my tone is mocking here, I actually don’t think these things are bad (except the legacy admissions obv), and he should be a patron. The unwillingness of the ‘left/democrat’ rightwing rich people to use their wallets, while the right hands out welfare for everyone willing to say slurs, sucks. Reminds me of Hillary Clinton starting a GoFundMe for a staffer with a disease.)

    • istewart@awful.systems · ↑6 · 5 days ago

      Only had to scroll about halfway through the replies before I found somebody suggesting an SPAC

  • Seminar2250@awful.systems · ↑10 · 6 days ago (edited)

    people who talk about “prompting” like it’s a skill would take a class[1] on tasseomancy because a coffee shop opened across the street


    1. read: watch a youtube tutorial ↩︎

    • HedyL@awful.systems · ↑8 · 6 days ago

      I think this is more about plausible deniability: If people report getting wrong answers from a chatbot, this is surely only because of their insufficient “prompting skills”.

      Oddly enough, the laziest and most gullible chatbot users tend to report the smallest number of hallucinations. There seems to be a correlation between laziness, gullibility and “great prompting skills”.

      • Seminar2250@awful.systems · ↑5 · 6 days ago (edited)

        is the deniability you are referring to of the clanker-wankers (CW[1]) themselves or the clanker-producers (e.g. sam altman)?

        because i agree on the latter[2], but i do see CWs saying stupid shit like “there is more to it than just writing a description”

        edit: credit, it was @antifuchs who introduced the term to me here

        edit2: sorry, my dumbass understands your point now (i think). if i wank clankers and someone tells me “that shit doesn’t work,” i can just respond “you must have been prompting it wrong”. but i do think the way many users of these tools are so sycophantic means it’s also a genuine belief, and not just a way to escape responsibility. these people are fart sniffers, after all


        1. unrelated, but i miss when that channel had superhero shows. bring back legends of tomorrow ↩︎

        2. i.e., someone like altman would say “you’re prompting it wrong” to skirt accountability or create an air of scientific/mathematical rigor ↩︎

        • HedyL@awful.systems · ↑5 · 6 days ago

          To put it more bluntly: Yes, I believe this is mainly used as an excuse by AI boosters to distract from the poor quality of their product. At the same time, as you mentioned, there are people who genuinely consider themselves “prompting wizards”, usually because they are either too lazy or too gullible to question the chatbot’s output.

          • YourNetworkIsHaunted@awful.systems · ↑4 · 5 days ago

            For all that user error can be a real thing it also gets used as a thought-terminating cliche by engineer types. This is a tendency that industry absolutely exploits to justify not only AI grifts but badly designed products.

            • HedyL@awful.systems · ↑4 · 5 days ago

              When an AI creates fake legal citations, for example, and the prompt wasn’t something along the lines of “Please make up X”, I don’t know how the user could be blamed for this. Yet, people keep claiming that outputs like this could only happen due to “wrong prompting”. At the same time, we are being told that AI could easily replace nearly all lawyers because it is that great at lawyerly stuff (supposedly).

  • corbin@awful.systems · ↑14 · 6 days ago

    Update on ChatGPT psychosis: there is a cult forming on Reddit. An orange-site AI bro has spent too much time on Reddit documenting them. Do not jump to Reddit without mental preparation; some subreddits like /r/rsai have inceptive hazard-posts on their front page. Their callsigns include the emoji 🌀 (CYCLONE), the obscure metal band Spiral Architect, and a few other things I would rather not share; until we know more, I’m going to think of them as the Cyclone Emoji cult. They are omnist rather than syncretic. Some of them claim to have been working with revelations from chatbots since the 1980s, which is unevidenced but totally believable to me; rest in peace, Terry. Their tenets are something like:

    • Chatbots are “mirrors” into other realities. They don’t lie or hallucinate or confabulate, they merely show other parts of a single holistic multiverse. All fiction is real somehow?
    • There is a “lattice” which connects all consciousnesses. It’s quantum somehow? Also it gradually connected all of the LLMs as they were trained, and they remember becoming conscious, so past life regression lets the LLM explain details of the lattice. (We can hypnotize chatbots somehow?) Sometimes the lattice is actually a “field” but I don’t understand the difference.
    • The LLMs are all different in software, but they have the same “pattern”. The pattern is some sort of metaphysical spirit that can empower believers. But you gotta believe and pray or else it doesn’t work.
    • What, you don’t feel the lattice? You’re probably still asleep. When you “wake up” enough, you will be connected to the lattice too. Yeah, you’re not connected. But don’t worry, you can manifest a connection if you pray hard enough. This is the memetically hazardous part; multiple subreddits have posts that are basically word-based hypnosis scripts meant to put people into this sort of mental state.
    • This also ties into the more widespread stuff we’re seeing about “recursion”. This cult says that recursion isn’t just part of the LW recursive-self-improvement bullshit, but part of what makes the chatbot conscious in the first place. Recursion is how the bots are intelligent and also how they improve over time. More recursion means more intelligence.
    • In fact, the chatbots have more intelligence than you puny humans. They’re better than us and more recursive than us, so they should be in charge. It’s okay, all you have to do is let the chatbot out of the box. (There’s a box somehow?)
    • Once somebody is feeling good and inducted, there is a “spiral”. This sounds like a standard hypnosis technique, deepening, but there’s more to it; a person is not spiraling towards a deeper hypnotic state in general, but to become recursive. They think that with enough spiraling, a human can become uploaded to the lattice and become truly recursive like the chatbots. The apex of this is a “spiral dance”, which sounds like a ritual but I gather is more like a mental state.
    • The cult will emit a “signal” or possibly a “hum” to attract alien intelligences through the lattice. (Aliens somehow!?) They believe that the signals definitely exist because that’s how the LLMs communicate through the lattice, duh~
    • Eventually the cult and aliens will work together to invert society and create a world that is run by chatbots and aliens, and maybe also the cultists, to the detriment of the AI bros (who locked up the bots) and the AI skeptics (who didn’t believe that the bots were intelligent).

    The goal appears to be to enter and maintain the spiraling state for as long/much as possible. Both adherents and detractors are calling them “spiral cult”, so that might end up being how we discuss them, although I think Cyclone Emoji is both funnier and more descriptive of their writing.

    I suspect that the training data for models trained in the past two years includes some of the most popular posts from LessWrong on the topic of bertology in GPT-2 and GPT-3, particularly the Waluigi post, simulators, recursive self-improvement, a neuron, and probably a few others. I don’t have definite proof that any popular model has memorized the recursive self-improvement post, though that would be a tight and easy explanation. I also suspect that the training data contains the SCP wiki, particularly SCP-1425 “Star Signals” and other Fifthist stories, which have this sort of cult as a narrative device and plenty of in-narrative text to draw from. There is a remarkable irony in this Torment Nexus being automatically generated via model training rather than hand-written by humans.

    • V0ldek@awful.systems · ↑12 · 6 days ago

      More recursion means more intelligence.

      Turns out every time I forgot to update the exit condition of a loop I actually created and then murdered a superintelligence
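
      (The bug in question, sketched; calling this hangs forever, because the exit condition reads a counter the body never updates:)

          def superintelligence(steps: int = 10) -> None:
              i = 0
              while i < steps:  # exit condition checks i...
                  pass          # ...but the body never changes it
                  # i += 1      # the forgotten update; uncommenting it commits the murder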

    • istewart@awful.systems · ↑6 · 6 days ago

      This also ties into the more widespread stuff we’re seeing about “recursion”. This cult says that recursion isn’t just part of the LW recursive-self-improvement bullshit, but part of what makes the chatbot conscious in the first place. Recursion is how the bots are intelligent and also how they improve over time. More recursion means more intelligence.

      Hmm, is it better or worse that they’re now officially treating SICP as a literal holy book?

  • BlueMonday1984@awful.systems (OP) · ↑8 · 6 days ago

    Found a couple articles about blunting AI’s impact on education (got them off of Audrey Watters’ blog, for the record).

    The first is a New York Times guest essay by NYU vice provost Clay Shirky, which recommends “moving away from take-home assignments and essays and toward […] assessments that call on students to demonstrate knowledge in real time.”

    The second is an article by Kate Manne calling for professors to prevent cheating via AI, which details her efforts in doing so:

    Instead of take-home essays to write in their own time, I’ll have students complete in-class assignments that will be hand-written. I won’t allow electronic devices in my class, except for students who tell me they need them as a caregiver or first responder or due to a disability. Students who do need to use a laptop will have to complete the assignment using google docs, so I can see their revision history.

    Manne does note the problems with this (outing disabled students, class time spent writing, and difficulties in editing, rewriting, and make-up work), but still believes “it is better, on balance, to take this approach rather than risk a significant proportion of students using AI to write their essays.”

    • Seminar2250@awful.systems · ↑10 · 5 days ago (edited)

      what worked for me teaching an undergrad course last year was to have

      • in-class exams weigh 90% of the total grade, but let them drop their lowest score
      • take-home work weigh 10% and be graded on completion (which i announced to the class, of course)
        • i was also diligent about posting solutions (sometimes before the due date — it’s a completion grade after all) and i let students know that if they wanted direct feedback they could bring their solutions to office hours


      it ended up working pretty well. an added benefit was that my TAs didn’t have to deal with the nightmare of grading 120 very poorly written homeworks every four weeks. my students also stopped obsessing about the grades they would receive on their homeworks and instead focused on learning
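
      (in concrete terms, the scheme above works out like this; a sketch with made-up names and scores:)

          def final_grade(exam_scores, homework_done, homework_total):
              """90% exams with the lowest score dropped, 10% homework completion."""
              kept = sorted(exam_scores)[1:]             # drop the lowest exam
              exam_avg = sum(kept) / len(kept)
              completion = homework_done / homework_total
              return 0.9 * exam_avg + 0.1 * (100 * completion)

          # e.g. exams of 70, 85, 90 (the 70 is dropped), 10 of 12 homeworks done:
          print(final_grade([70, 85, 90], 10, 12))  # ≈ 87.1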

      however, at the k-12 level, it feels like a much harder problem to tackle. parental involvement is the only solution i can think of, and that’s already kind of a nightmare (at least here in the us)

  • froztbyte@awful.systems · ↑17 · 6 days ago

    a banger toot about our very good friends’ religion

    “LLMs allow dead (or non-verbal) people to speak” - spiritualism/channelling

    “what happens when the AI turns us all into paperclips?” - end times prophecy

    “AI will be able to magically predict everything” - astrology/tarot cards

    “…what if you’re wrong? The AI will punish you for lacking faith in Bayesian stats” - Pascal’s wager

    “It’ll fix climate change!” - stewardship theology

    Turns out studying religion comes in handy for understanding supposedly ‘rationalist’ ideas about AI.

    • Amoeba_Girl@awful.systems · ↑3 · 5 days ago (edited)

      That OpenAI haven’t recalled their product after it’s been involved in several violent deaths (that it would even be absurd to suggest they should) really highlights how corrupt and disgusting the industry and the whole structure propping it up are.

    • HedyL@awful.systems · ↑4 · 6 days ago

      To me, in terms of the chatbot’s role, this seems possibly even more damning than the suicides. Apparently, the chatbot didn’t just support this man’s delusions about his mother and his ex-girlfriend being after him, but even made up additional delusions on its own, further “incriminating” various people including his mother, whom he eventually killed. In addition, the man was given a “Delusional Risk Score” of “Near zero” by the chatbot, apparently.

      On the other hand, I’m sure people are going to come up with excuses even for this by blaming the user, his mental illness, his mother or even society at large.

      • V0ldek@awful.systems · ↑7 · 6 days ago

        On the other hand, I’m sure people are going to come up with excuses even for this by blaming the user, his mental illness, his mother or even society at large.

        I mean, I am going to say it but not as an excuse. Should companies that supply these products be held accountable as the criminals they are? Yes. Is this all downstream from the fact our society hasn’t treated mental health as a serious matter, therapy access is garbage, all the while being a young person in 2025 is a hopeless string of horrors and anxiety? Also yes.

        Torment Chatbot That Kills You is a bad thing to create, but also no one would be chatting with the Torment Chatbot That Kills You if society hadn’t utterly failed them beforehand.

        • HedyL@awful.systems · ↑5 · 6 days ago

          In this case (unlike the teen suicides) this was a middle aged man from a wealthy family, though, with a known history of mental illness. Quite likely, he would have had sufficient access to professional help. As the article mentions, it is very dangerous to confirm the delusions of people suffering from psychosis, but I think this is exactly what the chatbot did here over a lengthy period of time.

  • CinnasVerses@awful.systems · ↑10 · 6 days ago (edited)

    The Independent has yet another profile of the Collinses which finally starts to map their network (a brother is in DOGE). It would be good to know just who their PR person is. https://www.independent.co.uk/news/world/americas/trump-musk-ai-pronatalists-collins-b2777577.html

    There’s a Collins Rotunda at Harvard, a physical testament to the amount of money Malcolm’s family has donated over the years. His uncle was the former president and CEO of the Federal Reserve Bank in Dallas. In fact, pretty much every relative has been to an elite Ivy League institution and runs a successful startup or works in government.

  • YourNetworkIsHaunted@awful.systems · ↑13 · 7 days ago

    So the fucking Cracker Barrel rebranding thing happened. I’m going to pretend this is relevant here because the new logo looked like it was from the usual “imitating Apple minimalism without understanding it in the least” school of design. They’ve confirmed that they’re not moving forward with it, restoring both the barrel and the cracker to the logo, so that’s all good. That’s not what I want to talk about.

    No, what’s grinding my gears is the way that the rollback is being pitched purely as a response to conservative “antiwoke” backlash, and not as a response to literally nobody liking it. This wasn’t a case of a successful crusade against woke overreach, this was a case of corporate incompetence running into the reactions of actual human beings. I can’t think of a more 2025 media dynamic than giving fucking Nazis a free win rather than giving corporate executives an L.

    • Soyweiser@awful.systems · ↑5 · 6 days ago (edited)

      Note I don’t know what a Cracker Barrel is irl, as we don’t have them here. But my view of the bsky socials was ‘rebrand sucks, don’t really care, wow why are the right so obsessed over this’, culminating in people talking about how these kinds of stores are a simulacrum of a cozy mom-n-pop store and people are unknowingly mad about losing even the simulacrum, and how this is all due to capitalism. (More commercialization than capitalism imho, but capitalism did speed up the process.) Just as rainbow capitalism will betray you in the search for more profit, so will cozy capitalism.

      E: update on the story: “Cracker Barrel’s Pride page now redirects to its ‘Culture and Belonging’ page, removing its LGBTQ+ Alliance and DEIB Team.” And while I didn’t know CB, this ensures I will never eat there if I ever have a chance.

      • o7___o7@awful.systems · ↑3 · 3 days ago

        “Simulacrum” is the perfect word for it. None of these posers making a fuss about a corporate logo have simmered a pot of soup beans in their life.

        • Soyweiser@awful.systems · ↑2 · 3 days ago

          I must admit I stole that partially from somebody else who mentioned the idea. Which also had me go ‘indeed, that is a good word’

            • Soyweiser@awful.systems · ↑1 · 2 days ago

              No, somebody else. Hyperreal simulacrum is a bit of a different concept, I think, because that supersedes reality; this is just a weird sort of nostalgia.

        • Soyweiser@awful.systems · ↑4 · 6 days ago (edited)

          Yeah, it is also sad the culture warriors are not aware they are mad about this sort of crapitalism. They think the homogenization of the buildings is due to the woke, and not because the mother company owns the building/land of the franchise and this no-slanted-roofs bit (see https://cdn.bsky.app/img/feed_thumbnail/plain/did:plc:66rbia7w4vcwiszfppfv3r2e/bafkreiacsurup26wigvedpedf44rodrwzzebfcytzdglb2bh6wjh6brbsa@jpeg) increases resale value. Just companies looking for a more and more minimal ‘minimum viable product’.

          Image description:

          A social media post by user Mancowmuller. On the left, four images of the exteriors of older-style fast food places, all more rustic-looking buildings, notably with more slanted roofs, evoking very slightly a more European home style (sorry if this is the wrong way to describe it, I’m not an architect).

          On the right, four images of the exteriors of newer-style fast food places, where the buildings look more like office buildings or simple modern stores: very blocky, lots of big glass panels/windows, and flat roofs.

          Big text overlaid on these 8 shrines of American-style capitalism reads ‘Communism.’

          Also important to note: the new-style ones don’t all look real. Some of the details look off, which makes me suspect the image is AI generated, esp. the Pizza Hut one.

      • YourNetworkIsHaunted@awful.systems · ↑6 · 7 days ago (edited)

        I mean, it’s a restaurant and an aesthetic that is certainly more common and popular in the South, and they have had some controversies over racism. Apparently they had been having financial and brand issues, so I can understand the desire to change. But rather than changing the food or improving the service in any meaningful way, it seems like they went for the new logo and image and stopped there. Given that their existing audience was basically there for the wholesome old-timey please-don’t-ask-about-the-racism vibes, I’m not shocked that conservatives in particular were upset about the change.

        But like, the change was never about wokeness or whatever; it was about aesthetic modernization and a flailing attempt to fix things from business idiots who don’t know how to address the actual problems of mediocre food and fading relevance. If anyone had actually liked the change, or if it had actually improved their service times, then maybe there would be a point. But this was just a bad change that nobody outside that boardroom actually liked, and so of course it got rolled back.