One idea to always go back to is:
Extraordinary claims require extraordinary evidence
- Carl Sagan
This can be tough to evaluate sometimes, but it's a good general principle.
Does the claim sit outside the natural world as currently understood by scientific theory?
If yes, then there’s going to need to be a lot of evidence. If not, the level of evidence is lower.
Does the claim involve a low-probability event?
If yes, then more evidence of that event is needed.
Does the claimant have a stake in the claim?
For example, does the person gain money, fame, or other benefits by getting people to believe the claim? If so, more evidence should be required.
What type of evidence would you expect to see, if the claim were correct?
When things exist, they tend to leave evidence of their existence: bones, ruins, written records, etc. If someone says something exists, or used to exist, they should have archeological or anthropological evidence to back it up.
Sure, it's always going to be a bit subjective as to what requires proof. And for a lot of low-stakes things, there's no point in pressing it. If someone claims to be from Pitcairn, what's the point of questioning it? Just say, "huh, cool" and move on. But if someone is trying to convince you that a historical figure existed, and that belief should affect how you see the world, maybe ask for a bit more evidence.
Ya, about that "AI-driven endpoint security": it does a fantastic job of generating false positives and low-value alerts. I swear, I'm at the point where, when vendors start talking about the "AI-driven security" in their products, I mentally check out. It's almost universally crap. I'm sure it will be useful someday, but goddamn, I'm tired of running down alerts that come with almost zero supporting evidence, pointing to "something happened, maybe." AI for helping write queries in security tools? Ya, good stuff. But until models do a better job of explaining themselves and not going off on flights of fancy, they'll do more to increase alert fatigue than security.