Hey everyone,
I have noticed that some recipes on the internet make no sense and have my suspicions that they may be AI slop. The ratios are off, the cook time is unlikely, the illustration is AI-made… But it’s hard to tell. And it will likely get harder to identify AI-generated recipes in the future.
It’s come to the point that I have a hard time trusting a recipe written after the AI craze started (let’s call it 2025). I’m so suspicious of everything, my go-to “authenticity” check is to not bother with recipes that have recent publication dates. But this isn’t exactly fair nor foolproof.
Do you have tips on how to spot an AI generated recipe?


I wonder if the first “AI generated” recipe was the birthday cake in Portal (actually written by humans, but recited by an AI that, it’s insinuated, made the recipe). That recipe includes broken glass, so you kinda know it’s not real.
At the end (not really a spoiler), they show a black forest cake, which is delicious. The recipe given obviously does not make a black forest cake, even setting aside the joke comments like the broken glass.
The problem I see is that sooner rather than later, the flaws will be so small that most home cooks won’t catch them until it’s too late.
The bigger problem is: if experienced cooks can be taught that something is wrong (e.g. swapping baking soda for baking powder), why can’t the AI, assuming the AI’s goal is actually to help you? I feel like we’re at a point with AI that if I ask for a recipe with very specific requirements, it should either be able to conjure up the recipe, or tell me why it can’t be done so I can adjust the parameters accordingly (e.g. you can’t ask for a steak to be vegan; that simply does not compute).
A simplified version of how LLM AI works is that it has “read” all the books in the world, and selects the most likely next word/sentence. This can look a lot like knowing, but there’s no reasoning or synthesis behind it, which is why it can’t be “taught” that baking powder leads to a more sour outcome than baking soda.
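To make that “most likely next word” idea concrete, here’s a toy sketch: a bigram model that just counts which word follows which in a tiny made-up recipe corpus and always picks the most frequent continuation. (This is a deliberately oversimplified illustration, not how real LLMs are built; the corpus and numbers are invented.)

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for "all the books in the world".
corpus = (
    "preheat the oven to 350 degrees . "
    "mix the flour and the sugar . "
    "bake the cake for 30 minutes . "
    "mix the butter and the sugar ."
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Pick the single most frequent continuation -- no reasoning, just counts."""
    return following[word].most_common(1)[0][0]

# In this corpus "the" is followed by "sugar" twice and everything else once,
# so the model always continues "the" with "sugar", whether or not that makes
# sense in context. That's the failure mode: plausible-looking, not understood.
print(most_likely_next("the"))
```

The point of the toy: the model never learns *why* a word follows another, so no amount of correction teaches it chemistry; it only ever shifts the counts.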
AI has no goals. The companies that program and release AI have the goals of keeping you engaged with the AI.
I’ve seen some with impossible cook times but I assume that’s just different ingredient expectations or something.
For example, I want to look up a time and temperature to cook chicken breasts, but the time is impossibly short. At that temperature it may take 2-3x the time. Are we already at the point of AI recipes with unnoticeable flaws, or does “chicken breasts” mean something very different in different places? Or maybe it’s a flawed conversion from metric?
No, it’s fundamental to the way LLMs (don’t) work. Take 10 random pages from a cookbook. Look at the cook times. I’m guessing the “impossible” times you’ve noticed will be within the range of times from the random cookbook.
The LLM doesn’t actually know anything about cooking; it’s just mashing together something plausible based on 1000 previous cookbooks.
Maybe it’s pressure cooker times?