Model Evaluation and Threat Research is an AI research charity that looks into the threat of AI agents! That sounds a bit like an AI doomsday cult, and they take funding from the AI doomsday cult organisat…
What do you mean by “LLMs only can run the program and guess what the error is based on the error messages and user input”? LLMs don’t run programs; they interpolate within similar code they’ve seen. If they appear to run it, it’s only because they interpolate runs from their training corpus.
PS: never mind the haters here, as anywhere else. If someone doesn’t engage with the arguments but takes it to the personal level, they’re not worth responding to.
this is not debate club, per the sidebar
apparently it isn’t, given the posts deleted for no reason whatsoever…
holy fuck please learn when to shut the fuck up