

I think the disagreement here is semantics around the meaning of the word “lie”. The word “lie” commonly carries an element of intent. An LLM can’t be said to have intent: it isn’t conscious and, therefore, cannot have intent. The developers may have intent and may have adjusted the LLM to output false information on certain topics, but the LLM itself isn’t making any decision and has no intent.
That just seems like good advice for any law enforcement interaction. If that is grounds for arrest, we’ve fallen even farther than I thought, and I thought we’d fallen pretty far already.