- cross-posted to:
- linustechtips@lemmit.online
Businesses that rush to use AI to write content or computer code often have to pay humans to fix it.
Your co-worker is bad at his job and doesn’t understand programming.
LLMs are cool tech, but I’m gonna code review everything, whether it comes from a human or not.
That plays into the pattern I’ve been seeing.
The “AI prompting experts” are useless, as they don’t understand the fundamentals.
And doesn’t understand LLMs, which don’t “learn” a damn thing after training is completed. The only variation after that comes from random sampling and the input they receive.
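To illustrate that point, here’s a minimal sketch of temperature sampling in Python (the logits and function name are made up for illustration): once the weights are frozen, run-to-run variation comes only from this sampling step and the prompt.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick the next token from a model's output logits.

    With temperature == 0 the choice is greedy and fully deterministic;
    any run-to-run variation comes from this random sampling step
    (and the input), not from the frozen weights.
    """
    rng = rng or np.random.default_rng()
    if temperature == 0:
        return int(np.argmax(logits))          # deterministic, repeatable
    scaled = np.asarray(logits) / temperature  # flatten or sharpen the distribution
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Same logits, two temperatures: greedy is repeatable, sampling is not.
logits = [2.0, 1.5, 0.3]
print(sample_next_token(logits, temperature=0))    # always token 0
print(sample_next_token(logits, temperature=1.0))  # varies run to run
```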
That’s not true. There are other ways of influencing the numbers these tools use. Most of them have their own internal voting systems, so humans can give feedback that directly influences the LLM.
Diffusion models have LoRAs, and new models can be fine-tuned on top of the base model.
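For context, here’s a minimal sketch of what a LoRA update looks like, using PyTorch; the layer sizes and hyperparameters are illustrative, not from any specific model. The base weights stay frozen, and only a small low-rank update (the matrices A and B) is trained on top:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the base model stays untouched
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so the update is initially zero and the
        # layer behaves exactly like the base model before training.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Only A and B receive gradients; the base Linear's weights are frozen.
layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))
```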