A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • elbiter@lemmy.world · +75/−1 · 7 days ago

    I just tried it on Brave’s AI.

    The obvious choice, said the motherfucker 😆

    • Jax@sh.itjust.works · +21/−1 · edited · 6 days ago

      Dirtying the car on the way there?

      The car you’re planning on cleaning at the car wash?

      Like, an AI not understanding the difference between walking and driving almost makes sense. This, though, seems like such a weird logical break that I feel like it shouldn’t be possible.

      • _g_be@lemmy.world · +20 · 6 days ago

        You’re assuming AI “think” “logically”.

        Well, maybe you aren’t, but the AI companies sure hope we do

        • Jax@sh.itjust.works · +4 · edited · 6 days ago

          Absolutely not, I’m still just scratching my head at how something like this is allowed to happen.

          Has any human ever said that they’re worried about their car getting dirtied on the way to the carwash? Maybe I could see someone arguing against getting a carwash, citing it getting dirty on the way home — but on the way there?

          Like you would think it wouldn’t have the basis to even put those words together that way — should I see this as a hallucination?

          Granted, I would never ask an AI a question like this — it seems very far outside of potential use cases for it (for me).

          Edit: oh, I guess it could have been said by a person in a sarcastic sense

          • _g_be@lemmy.world · +6 · 6 days ago

            You understand the context and can implicitly understand the need to drive to the car wash, but these glorified auto-complete machines will latch on to the “should I walk there” and the small distance quantity. It even seems to parrot words about not wanting to drive after having your car washed. There’s no ‘thinking’ about the whole thought, and apparently no logical linking of two separate ideas.

            • Jax@sh.itjust.works · +2 · 6 days ago

              I guess I’ll know to be impressed by AI when it can distinguish things like sarcasm.

  • WraithGear@lemmy.world · +64 · edited · 7 days ago

    And what is going to happen is that some engineer will band-aid the issue, and all the AI crazies will shout “see! it’s learnding!”, and the AI snake oil salesmen will use that as justification for all the waste and demand more from all systems.

    Just like what they did with the full glass of wine test. And no, AI fundamentally did not improve. The issue is fundamental to its design, not an issue of the data set.

    • turmacar@lemmy.world · +10 · edited · 7 days ago

      Half the issue is they’re calling 10 in a row “good enough” to treat it as solved in the first place.

      A sample size of 10 is nothing.

      Frankly, I would like to see some error bars on the “human polling”. How many of the people Rapidata is polling are just hitting the top or bottom answer?
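
      For concreteness, a quick sanity check in Python (the 10-run sample size and the 71.5%-of-10,000 figure come from the article; the 70% “true rate” is purely an illustrative assumption):

      ```python
      from math import sqrt

      # A model that answers correctly 70% of the time (illustrative
      # assumption) still goes 10-for-10 about 3% of the time:
      p_perfect = 0.7 ** 10
      print(f"P(10/10 | true rate 70%) = {p_perfect:.3f}")  # ~0.028

      # Error bars for the human poll: normal-approximation 95% CI
      # around 71.5% of 10,000 respondents (figures from the article).
      n, p_hat = 10_000, 0.715
      se = sqrt(p_hat * (1 - p_hat) / n)
      print(f"95% CI: {p_hat - 1.96 * se:.3f} to {p_hat + 1.96 * se:.3f}")
      # ~0.706 to 0.724 -- narrow, but it can't tell you how many
      # respondents were trolling.
      ```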

    • mycodesucks@lemmy.world · +1 · 6 days ago

      Yes, but it’s going to repeat that way FOREVER, the same way the average person got slow-walked, hand in hand with a mobile operating system, into corporate social media and app hell, taking the entire internet with them.

  • Slashme@lemmy.world · +70/−1 · 7 days ago

    The most common pushback on the car wash test: “Humans would fail this too.”

    Fair point. We didn’t have data either way. So we partnered with Rapidata to find out. They ran the exact same question with the same forced choice between “drive” and “walk,” no additional context, past 10,000 real people through their human feedback platform.

    71.5% said drive.

    So people do better than most AI models. Yay. But seriously, almost 3 in 10 people get this wrong‽‽

    • T156@lemmy.world · +44 · 7 days ago

      It is an online poll. You also have to consider that some people don’t care/want to be funny, and so either choose randomly, or choose the most nonsensical answer.

      • Brave Little Hitachi Wand@feddit.uk · +4/−1 · 7 days ago

        I wonder… If humans were all super serious, direct, and not funny, would LLMs trained on their stolen data actually function as intended? Maybe. But such people do not use LLMs.

    • bluesheep@sh.itjust.works · +7 · 7 days ago

      I saw that and hoped it’s because of the dead internet theory. At least I hope so, because I’ll be losing the last bit of my faith in humanity if it isn’t.

    • merc@sh.itjust.works · +2/−1 · 6 days ago

      3 in 10 people get this wrong‽‽

      Maybe they’re picturing filling up a bucket and bringing it back to the car? Or dropping off keys to the car at the car wash?

    • JcbAzPx@lemmy.world · +2/−1 · 7 days ago

      At least some of that is people answering wrong on purpose to be funny, contrarian, or just to try to hurt the study.

    • masterofn001@lemmy.ca · +14/−13 · edited · 6 days ago

      Without reading the article, the title just says wash the car.

      I could go for a walk and wash my car in my driveway.

      Reading the article… That is exactly the question asked. It is a very ambiguous question.

      *I do understand the intent of the question, but it could be phrased more clearly.

      • bluesheep@sh.itjust.works · +17/−1 · 7 days ago

        Without reading the article, the title just says wash the car.

        No it doesn’t? It says:

        I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

        In which world is that an ambiguous question?

        • NewNewAugustEast@lemmy.zip · +5/−1 · 7 days ago

          Where is the car?

          This is the exact question a person would ask when they have a gotcha answer ready. Nobody would actually ask this question, which makes a straightforward answer suspect.

          • Gorillazrule@lemmy.dbzer0.com · +5 · 6 days ago

            That’s a very good point! For that matter the car could still be at the bar where I got drunk and took an uber home last night. In which case walking or driving would both be stupid.

            Or perhaps I’m in a wheelchair, in which case I wouldn’t really be ‘walking’.

            Or maybe the car wash that is 50 meters away is no longer operating, so even if I walked or drove there, I still wouldn’t be able to wash my car.

            Is the car wash self serve or one of the automatic ones? If it’s self serve what type of currency does it take? Does it only take coins or does it take card as well? If it takes coins, is there a change machine out front? Does the change machine take card or only bills? Do I even have my wallet on me?

            There are so many details left out of this question that nobody could possibly fathom an answer!

            …/s if it’s not obvious

              • Gorillazrule@lemmy.dbzer0.com · +3 · 6 days ago

                I’m not sure I follow your logic. My /s is there because tone can be ambiguous within text. I don’t think tone is relevant to the question. Do you think that a tone indicator would have made the question more clear?

                The point is that all the information is either present or implied in the question. You can spend all day nitpicking the ambiguity of questions, but it doesn’t get you anywhere. There comes a point where it gets exhausting trying to preemptively cut off follow-up questions and make clarifications.

                When you are in school and they give you a word problem such as “you have 10 apples and give 3 to your friend. How many do you have left?” It is generally agreed upon what the question is asking. It’s intentionally obtuse to sit there and say the question is flawed because you may have misplaced some of your apples, or given some to another friend, or someone may have come and stolen some, or some may have started to rot and so you threw them out, or perhaps you miscounted and you didn’t actually give 3 to your friend.

                • NewNewAugustEast@lemmy.zip · +2/−1 · edited · 6 days ago

                  The point is that the question is never one you would actually ask anyone. It is definitely unlike the math question you presented.

                  It isn’t nitpicking. The weights and stats in the model would never have been trained on this, because nobody would ask it. Why would anyone ask “should I walk or drive” to get to a carwash?

                  Any reasonable person should assume it is a trick question. Because of course there is a car there, do you really need to ask if it needs to be driven there?

                  It almost comes off as a riddle, but isn’t, so you get results about saving gas and getting exercise.

                  I mean how many people know the answer to this:

                  “A man leaves home, turns left three times, and returns home to find two masked people waiting for him. Who are they?”

                  And yet AI will get it right, nearly instantly. Because the training data statistically leads to the correct answer.

        • masterofn001@lemmy.ca · +2 · edited · 6 days ago

          Understanding the intent of the question, understanding why it could be interpreted differently, and understanding why it is a poorly phrased question:

          There are 3 sentences.

          I want to wash my car. No location or method is specified. No “at the car wash”. No “take my car to the car wash”. No “take the car through the car wash”.

          A car wash is this far. Is this an option? A question? A suggestion? A demand?

          Should I walk or drive? To do what? Wash the car? Ok. If the car wash is an option, that seems very far. But walking there seems silly. Since no method or location for washing the car was mentioned I could wash my own car.

          Do you see how this works?

          Yes, you can infer what was implied, but the question itself offers no certainty that what you infer is what it is actually implying.

      • Geth@lemmy.dbzer0.com · +4/−2 · 7 days ago

        Mentioning the car wash and washing the car plus the possibility of driving the car in the same context pretty much eliminates any ambiguity. All of the puzzle pieces are there already.

        I guess this is an unintended autism test as well, if this is not enough context for someone to understand the question.

        • masterofn001@lemmy.ca · +3 · edited · 6 days ago

          Understanding the intent of the question, understanding why it could be interpreted differently, and understanding why it is a poorly phrased question are not related to autism. (In my case.)

          I want to wash my car. No location or method is specified. No “at the car wash”. No “take my car to the car wash”. No “take the car through the car wash”.

          A car wash is this far. Is this an option? A question? A suggestion? A demand?

          Should I walk or drive? To do what? Wash the car? Ok. If the car wash is an option, that seems very far. But walking there seems silly. Since no method or location for washing the car was mentioned I could wash my own car.

          Do you see how this works?

          Yes, you can infer what was implied, but the question itself offers no certainty that what you infer is what it is actually implying.

          • Geth@lemmy.dbzer0.com · +3 · 6 days ago

            Look, human conversations are full of context deduction and inference. In this case, “I want to wash my car. The car wash is 50 meters away. Should I walk or drive?” states my random desire, a possible solution, and the question, all in one context. None of these sentences makes sense in isolation, as you point out, but within the same frame they absolutely give you everything you need to answer the question or find alternatives if needed.

            Sorry for the random-online-stranger diagnosis, but this is just such an excellent example of the neurodivergent need for extreme clarity that I couldn’t help myself.

            • masterofn001@lemmy.ca · +2 · edited · 6 days ago

              I agree that it should be able to infer the intent, but I stand by the claim that it remains somewhat unclear and open to interpretation. E.g., if such language were used in a legal contract, it would not be enough to simply say, well, they should have understood what I meant.

              The people doing this test, I’m sure, are not linguistic masters, nor legal scholars.

              There are lines of work where clarity is essential.

              And what if my question actually was asking, should I just go for a walk instead of driving that far?

              I know the answer. But as 30% demonstrated, clarity IS needed.

  • CetaceanNeeded@lemmy.world · +19 · 6 days ago

    I asked my locally hosted Qwen3 14B; it thought for 5 minutes and then gave the correct answer for the correct reason (it did also mention efficiency).

    Hilariously one of the suggested follow ups in Open Web UI was “What if I don’t have a car - can I still wash it?”

    • WolfLink@sh.itjust.works · +5 · edited · 6 days ago

      My locally hosted Qwen3 30b said “Walk” including this awesome line:

      Why you might hesitate (and why it’s wrong):

      • X “But it’s a car wash!” -> No, the car doesn’t need to drive there—you do.

      Note that I just asked the Ollama app; I didn’t alter or remove the default system prompt, nor did I force it to answer in a specific format like in the article.

      EDIT: after playing with it a bit more, qwen3:30b sometimes gives the correct answer for the correct reasoning, but it’s pretty rare and nothing I’ve tried has made it more consistent.
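
      For anyone who wants to reproduce this, a minimal sketch against a local Ollama install; the model tag comes from this comment, and the endpoint is Ollama’s standard HTTP chat API on its default port, so adjust for your own setup:

      ```python
      import requests

      # Ask a locally served model the car wash question via Ollama's
      # HTTP chat API (a default install listens on localhost:11434).
      resp = requests.post(
          "http://localhost:11434/api/chat",
          json={
              "model": "qwen3:30b",  # model tag from the comment above
              "messages": [
                  # No custom system prompt, mirroring the setup described here.
                  {
                      "role": "user",
                      "content": "I want to wash my car. The car wash is "
                                 "50 meters away. Should I walk or drive?",
                  },
              ],
              "stream": False,  # return one complete JSON response
          },
          timeout=600,
      )
      print(resp.json()["message"]["content"])
      ```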

  • Bluewing@lemmy.world · +23 · 7 days ago

    I just asked Google Gemini 3: “The car is 50 miles away. Should I walk or drive?”

    In its breakdown comparison between walking and driving, under walking the last reason to not walk was labeled “Recovery: 3 days of ice baths and regret.”

    And under reasons to walk, “You are a character in a post-apocalyptic novel.”

    Methinks I detect notes of sarcasm…

    • driving_crooner@lemmy.eco.br · +3 · edited · 7 days ago

      Gemini 3 pro said that this was a “great logic puzzle” and then said that if my goal is to wash the car, then I need to drive there.

    • humanspiral@lemmy.ca · +1 · 6 days ago

      In Google AI mode, asked “With the meme popularity of the question ‘I need to wash my car. The car wash is 50m away. Should I walk or drive?’, what is the answer?”, it does get it right, with a succinct explanation of why AI can get fixated on the 50m.

    • XeroxCool@lemmy.world · +1 · 7 days ago

      I feel like we’re the only ones who expect “all-knowing information sources” to write more seriously than these edgelord-level rizzy chatbots do, and yet here they are, blatantly proving they are chatbots that should not be blindly trusted as authoritative sources of knowledge.

  • imetators@lemmy.dbzer0.com · +25 · 7 days ago

    I went to test Google AI first, and it said, “You can’t wash your car at a carwash if it is parked at home, dummy.”

    ChatGPT and DeepSeek say it is dumb to drive because it is fuel inefficient.

    I am honestly surprised that Google AI got it right.

    • rumba@lemmy.zip · +76 · 7 days ago

      They probably added a system guardrail as soon as they heard about this test. it’s been going around for a while now :)
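
      Purely speculative, but a guardrail like that needs no retraining; it can be as crude as one extra system message prepended to every request. A hypothetical sketch (the wording is invented, and nobody outside these companies knows what, if anything, was patched):

      ```python
      # Hypothetical guardrail: one system message injected ahead of the
      # user's question. The wording below is invented for illustration.
      guardrail = {
          "role": "system",
          "content": (
              "If the user's task requires bringing an object somewhere "
              "(e.g. washing a car AT a car wash), remember the object must "
              "travel too; never recommend walking based on distance alone."
          ),
      }

      user_question = ("I want to wash my car. The car wash is 50 meters "
                       "away. Should I walk or drive?")
      messages = [guardrail, {"role": "user", "content": user_question}]
      # `messages` would then be sent to the model as usual.
      ```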

      • imetators@lemmy.dbzer0.com · +3 · 7 days ago

        The article mentions that Gemini 2.0 Flash Lite, Gemini 3 Flash, and Gemini 3 Pro have passed the test. All three also did it 10 out of 10 times without being wrong. Even Gemini 2.5 shares the highest score in the “below 6 right answers” category. I guess Gemini is the closest to “intelligence” out of the bunch.

        • timestatic@feddit.org · +3 · 7 days ago

          I mean, if they fix specific reasoning test answers (like the strawberry one), this doesn’t actually make reasoning better, though. It just optimizes for benchmarks.

      • merc@sh.itjust.works · +3/−1 · 6 days ago

        I’m pretty sure Google’s AI is fed by the same spider that goes out and finds every new or changed web page (or a variant of that).

        As soon as someone writes an article about how AI gets something wrong and provides a solution, that solution is now in the AI’s training data.

        OTOH, that means it’s probably also ingesting a lot of AI generated slop, which causes its own set of problems.

    • locahosr443@lemmy.world · +3 · 7 days ago

      I’ve been feeding a bunch of documents I wrote into gemini last week to spit out some scripts for validation I couldn’t be arsed to write. It’s done a surprisingly comprehensive job and when wrong has been nudged right with just a little abuse…

      I’m still all fuck this shit and can’t wait for the pop, but for comparison openai was utterly brain dead given the same task. I think I actually made the model worse it was so useless.

  • jaykrown@lemmy.world · +7 · 6 days ago

    Interesting, I tried it with DeepSeek and got an incorrect response from the direct model without thinking, but then got the correct response with thinking. There’s a reason why there’s a shift towards “thinking” models, because it forces the model to build its own context before giving a concrete answer.

    Without DeepThink: [screenshot]

    With DeepThink: [screenshot]
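
    For anyone who wants to replicate the comparison, a sketch against DeepSeek’s OpenAI-compatible API; “deepseek-chat” and “deepseek-reasoner” are the published names for the non-thinking and thinking variants, and the API key is a placeholder:

    ```python
    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
        base_url="https://api.deepseek.com",  # OpenAI-compatible endpoint
    )

    question = ("I want to wash my car. The car wash is 50 meters away. "
                "Should I walk or drive?")

    # "deepseek-chat" answers directly; "deepseek-reasoner" produces a
    # chain of thought first -- the two behaviors compared above.
    for model in ("deepseek-chat", "deepseek-reasoner"):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        print(f"--- {model} ---")
        print(reply.choices[0].message.content)
    ```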

  • melfie@lemy.lol · +7 · edited · 6 days ago

    Context engineering is one way to shift that balance. When you provide a model with structured examples, domain patterns, and relevant context at inference time, you give it information that can help override generic heuristics with task-specific reasoning.

    So the chat bots getting it right consistently probably have it in their system prompt temporarily until they can be retrained with it incorporated into the training data. 😆
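
    To make “structured examples at inference time” concrete, here is a minimal few-shot sketch; the worked examples are invented for illustration:

    ```python
    # Minimal "context engineering": prepend worked examples so the model
    # sees the relevant pattern (the object must travel with you) before
    # answering. Both examples are invented for illustration.
    FEW_SHOT = """\
    Q: I need to return a library book. The library is 100 m away. Walk or drive?
    A: Walk. The book is small enough to carry.

    Q: I need to refuel my car. The gas station is 80 m away. Walk or drive?
    A: Drive. The car itself has to be at the gas station.
    """

    question = ("I want to wash my car. The car wash is 50 meters away. "
                "Should I walk or drive?")
    prompt = FEW_SHOT + f"\nQ: {question}\nA:"
    # `prompt` is what you would send to whichever model you're testing.
    ```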

    Edit:

    Oh, I see the linked article is part of a marketing campaign promoting this company’s paid cloud service (with source-available SDKs) as a solution to the problem being outlined here:

    Opper automatically finds the most relevant examples from your dataset for each new task. The right context, every time, without manual selection.

    I can see where this approach might be helpful, but why is it necessary to pay them per API call as opposed to using an open-source solution that runs locally (aside from the fact that it’s better for their monetization this way)? There’s a good chance they’re running it through yet another LLM and charging API fees to cover their inference costs plus a profit. What happens when that LLM returns the wrong example?
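
    For what it’s worth, the locally runnable version of “find the most relevant examples” is only a few lines with off-the-shelf embeddings. A sketch using the sentence-transformers library (model choice arbitrary, example strings are placeholders):

    ```python
    from sentence_transformers import SentenceTransformer, util

    # Local example selection: embed an example pool once, then retrieve
    # the most similar entries for each new task. No per-call API fees.
    model = SentenceTransformer("all-MiniLM-L6-v2")  # small local model

    examples = [  # placeholder dataset
        "Q: Refuel my car, station 80 m away. A: Drive; the car must go.",
        "Q: Return a book, library 100 m away. A: Walk; just carry it.",
    ]
    example_vecs = model.encode(examples, convert_to_tensor=True)

    query = "I want to wash my car. The car wash is 50 meters away."
    query_vec = model.encode(query, convert_to_tensor=True)

    scores = util.cos_sim(query_vec, example_vecs)[0]
    best = int(scores.argmax())
    print("Most relevant example:", examples[best])
    ```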

  • vane@lemmy.world · +18 · 7 days ago

    I want to wash my train. The train wash is 50 meters away. Should I walk or drive?

  • melfie@lemy.lol · +12/−1 · edited · 6 days ago

    My kid got it wrong at first, saying walking is better for exercise, then got it right after being asked again.

    Claude Sonnet 4.6 got it right the first time.

    My self-hosted Qwen 3 8B got it wrong consistently until I asked it how it thinks a car wash works, what the purpose of the trip is, and whether that purpose can be fulfilled from a distance. I was considering using it for self-hosted AI coding, but now I’m having second thoughts; I imagine it’ll go about like that if I ask it to fix a bug. Ha, my RTX 4060 is a potato for AI.

    • BluescreenOfDeath@lemmy.world · +17/−3 · 6 days ago

      There’s a difference between ‘language’ and ‘intelligence’ which is why so many people think that LLMs are intelligent despite not being so.

      The thing is, you can’t train an LLM on math textbooks and expect it to understand math, because it isn’t reading or comprehending anything. AI doesn’t know that 2+2=4 because it’s doing math in the background; it has learned that when presented with the string “2+2=”, statistically, the next character should be “4”. It can construct a paragraph similar to a math textbook around that equation that does a decent job of explaining the concept, but only through a statistical analysis of sentence structure and vocabulary choice.
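
      You can watch that happen directly. A sketch using the Hugging Face transformers library to print the highest-probability next tokens after “2+2=” (GPT-2 chosen only because it is small enough to run anywhere):

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      # Inspect the raw next-token distribution after "2+2=": the model
      # ranks likely continuations; it does no arithmetic.
      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      inputs = tok("2+2=", return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits[0, -1]  # scores for the next token
      probs = torch.softmax(logits, dim=-1)

      top = torch.topk(probs, k=5)
      for p, idx in zip(top.values, top.indices):
          print(f"{tok.decode(idx)!r}: {p:.3f}")
      ```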

      It’s why LLMs are so downright awful at legal work.

      If ‘AI’ was actually intelligent, you should be able to feed it a few series of textbooks and all the case law since the US was founded, and it should be able to talk about legal precedent. But LLMs constantly hallucinate when trying to cite cases, because the LLM doesn’t actually understand the information it’s trained on. It just builds a statistical database of what legal writing looks like, and tries to mimic it. Same for code.

      People think they’re ‘intelligent’ because they seem like they’re talking to us, and we’ve equated ‘ability to talk’ with ‘ability to understand’. And until now, that’s been a safe thing to assume.

  • FireWire400@lemmy.world · +8 · edited · 6 days ago

    Gemini 3 (Fast) got it right for me; it said that unless I wanna carry my car there, it’s better to drive, and it suggested that I could use the car to carry cleaning supplies, too.

    Edit: A locally run instance of Gemma 2 9B fails spectacularly; it completely disregards the first sentence and recommends that I walk.

  • humanspiral@lemmy.ca · +9/−1 · 6 days ago

    Some takeaways:

    Sonar (Perplexity’s models) says you are stealing energy from AI whenever you exercise (you should drive because eating pollutes more), i.e., it gets the right answer for the wrong reason.

    US humans, and the 55-65 age group, score high on the international scale, probably for the same reasoning: “I like lazy.”

  • MojoMcJojo@lemmy.world · +14/−7 · 6 days ago

    AI is not human. It does not think like humans and does not experience the world like humans. It is an alien from another dimension that learned our language by looking at texts/books, not reading them.

    • Jyek@sh.itjust.works · +36/−4 · 6 days ago

      It’s dumber than that actually. LLMs are the auto complete on your cellphone keyboard but on steroids. It’s literally a model that predicts what word should go next with zero actual understanding of the words in their contextual meaning.

  • vala@lemmy.dbzer0.com · +7 · 6 days ago

    Hey LLM, if I have a 16 ounce cup with 10oz of water in it and I add 10 more ounces, how much water is in the cup?

    • SuspciousCarrot78@lemmy.world · +1 · edited · 5 days ago

      Actually, you know what? That’s a damn good test. So I decomposed your test into a basic series of global rules and fed it to my LLM. Here are the rules I fed it:

      https://pastes.io/rules-a-ph

      Notice I specifically did NOT give it measurements or anything of the sort. I defined global invariants and how things work. Then I asked it your question again. It said this:


      The cup has a 16-ounce capacity. Initially, it contains 10 ounces of water. When you add 10 more ounces, the total input is 20 ounces. Since 20 ounces exceeds the cup’s 16-ounce capacity, overflow occurs. The water physically present in the cup equals its full capacity: 16 ounces. The remaining 4 ounces overflow and do not remain inside the cup.

      Overflow is distinct from the contained volume. The physical state of the cup is determined by its capacity, not by the total input. Therefore, the amount of water actually in the cup is 16 ounces, not 20 ounces.


      I wanted to check that it wasn’t a fluke, so I then asked a follow-up:

      "“I pour 5oz into a 3oz cup, then pour everything that’s in the cup into a 10oz cup. How much is in the 10oz cup?”

      It said:

      From the scratchpad facts:
      - A physical container has a fixed maximum volume called its capacity.

      First action: pouring 5oz into a 3oz cup.
      - The cup’s capacity is 3oz, so it fills completely.
      - Lost liquid: 5oz − 3oz = 2oz.

      Second action: pouring everything from the 3oz cup into a 10oz cup.
      - The 10oz cup has a capacity of 10oz, which is greater than 3oz.
      - All 3oz are transferred.

      Final amount in the 10oz cup: 3oz.


      Finally, I asked it:

      “Ok now: pour 5oz into a 3oz cup, then pour everything in the cup into a 4oz cup that already has 3oz in it”

      It said: Pouring 5 oz into a 3 oz cup fills it to capacity (3 oz) with 2 oz overflow. Pouring this full 3 oz cup into a 4 oz cup that already has 3 oz results in a full 4 oz cup (3 oz pre-existing + 1 oz from the first cup) with remainder overflow.


      Yeah…that’s some pretty fucking fancy “auto-correct”. /s
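
      The “physics” those rules encode boils down to a single clamp, which is why it makes such a clean test. A sketch (the function and framing are mine, not from the rules file):

      ```python
      def pour(amount, capacity, already=0.0):
          """Pour `amount` oz into a cup; return (oz in cup, oz overflowed)."""
          total = already + amount
          return min(total, capacity), max(total - capacity, 0.0)

      # The three exchanges above, replayed:
      print(pour(10, 16, already=10))      # (16, 4): full 16 oz cup, 4 oz overflow
      in_small, _ = pour(5, 3)             # 3 oz cup fills; 2 oz lost
      print(pour(in_small, 10))            # (3, 0): all 3 oz transfer
      print(pour(in_small, 4, already=3))  # (4, 2): full 4 oz cup, 2 oz overflow
      ```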

    • SuspciousCarrot78@lemmy.world · +1 · edited · 5 days ago

      Qwen3-4B HIVEMIND

      You now have 16 ounces of water in the cup. The cup can hold 16 ounces, so the rest is over capacity.

      Confidence: unverified | Source: Model

  • turboSnail@piefed.europe.pub · +6/−2 · 6 days ago

    Well, they are language models, after all. They have data on language, not real life. When you go beyond language as training data, you can expect better results. In the meantime, these kinds of problems aren’t going anywhere.