What is Overreliance?
Overreliance occurs when users accept LLM output as correct without verifying it, leading to errors, misinformation, or security breaches.
LLMs are prone to "hallucinations": confidently stating false information as if it were fact. When users rely on LLMs for critical tasks (such as legal advice, medical diagnosis, or code generation) without human oversight, the consequences can be severe.
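One concrete mitigation is to treat LLM output as untrusted input and gate it behind verification before it is used. Below is a minimal Python sketch for the code-generation case; the function names (generate, run_static_checks, require_human_approval, guarded_accept) are hypothetical and illustrative, not part of any real library.

```python
# Sketch of an "oversight gate" for LLM-generated code: output must pass
# an automated check AND explicit human review before it is accepted.
import ast

def run_static_checks(code: str) -> bool:
    """Reject LLM-generated Python that does not even parse."""
    try:
        ast.parse(code)
    except SyntaxError:
        return False
    return True

def require_human_approval(code: str) -> bool:
    """Force a human into the loop before anything is used or executed."""
    print("--- LLM-generated code: review before approving ---")
    print(code)
    return input("Approve? [y/N] ").strip().lower() == "y"

def guarded_accept(code: str) -> str | None:
    """Accept LLM output only after automated and human verification."""
    if not run_static_checks(code):
        return None   # fails the automated check: discard, never trust
    if not require_human_approval(code):
        return None   # human reviewer rejected it: discard
    return code       # passed both gates; still treat as low-trust input
```

The design point is not the specific checks but the structure: the LLM's answer is never routed directly into a critical path, and a rejection at either gate defaults to doing nothing rather than proceeding on unverified output.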