I’m going to be that guy: I actually use LLMs to get more done as an IT technician. They’re a starting point, not a cure-all, and being honest and direct in your prompts makes a huge difference in the quality of the output.
But nobody wants to talk about how tokenized data and word groupings reduce false positives. It’s only going to replace the lowest-effort work.
ITT: People not understanding how LLMs are trained. They tokenize text into words and subword pieces (assign them integer IDs to index), learn the statistical relationships and distances between tokens, and then mimic the most probable continuations they’ve seen in training.
It’s not magic, it’s a parrot.
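A toy sketch of that pipeline, stripped way down (real LLMs use subword tokenizers and neural networks, not word-level bigram counts; all names here are illustrative):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tokenize: assign each unique word an integer ID (a "serial number")
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]

# Study relationships: count which token most often follows each token
follows = defaultdict(Counter)
for a, b in zip(ids, ids[1:]):
    follows[a][b] += 1

# Mimic: emit the most common token seen after "the" in training data
inv = {i: w for w, i in vocab.items()}
next_id = follows[vocab["the"]].most_common(1)[0][0]
print(inv[next_id])  # prints "cat" — it follows "the" most often here
```

The “parrot” point falls out of the last line: the model doesn’t know what a cat is, it just knows which token most frequently came next.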