Why AI Keeps Making Stuff Up—And How to Fix It
In brief
- Hallucinations are structural, not glitches. OpenAI shows LLMs bluff because training rewards confidence over accuracy.
- A simple fix: reward "I don't know." Changing scoring rules to favor refusals could shift models toward honesty.
- Users can fight back. Ask for sources, frame prompts tightly, and use factuality settings to cut down on false answers.
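To make the scoring-rule idea in the second point concrete, here is a minimal sketch of a confidence-thresholded scorer: a correct answer earns +1, "I don't know" earns 0, and a wrong answer is penalized so that guessing only pays off above a chosen confidence threshold. The threshold value, the penalty formula, and the function itself are illustrative assumptions for this article, not the exact rule from OpenAI's paper.

```python
def expected_score(confidence: float, threshold: float = 0.75) -> dict:
    """Illustrative scoring rule (not OpenAI's exact benchmark metric).

    Correct answer: +1. Abstaining ("I don't know"): 0. Wrong answer:
    penalized by threshold / (1 - threshold), so guessing has positive
    expected value only when the model's confidence exceeds the threshold.
    """
    wrong_penalty = threshold / (1 - threshold)
    guess = confidence * 1.0 - (1 - confidence) * wrong_penalty
    abstain = 0.0
    return {
        "guess": round(guess, 3),
        "abstain": abstain,
        "better_to_abstain": guess < abstain,
    }

# A model that is only 60% sure should abstain; at 90% it should answer.
print(expected_score(0.60))  # guessing has negative expected score
print(expected_score(0.90))  # high confidence: guessing scores positive
```

Under today's typical binary grading (wrong answers simply score 0), guessing never does worse than abstaining, which is exactly the incentive to bluff that the article describes; a nonzero wrong-answer penalty flips that incentive for low-confidence questions.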