AI Hallucination


AI language models have absorbed extensive scientific literature and could streamline this consolidation process. However, they often write with convincing authority even when the evidence is thin, risking the introduction of incorrect details, a phenomenon known as hallucination. This tendency is frequently reinforced by Reinforcement Learning from Human Feedback (RLHF) training, where models are rewarded for sounding helpful and confident even when "I don't know" would be the more accurate answer. In microbiology, confident fabrications can be more harmful than no answer at all.
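One common way to quantify this mismatch between confidence and accuracy is a calibration check. The sketch below is a minimal, hypothetical illustration, not a method described in this article: it computes an expected calibration error (ECE) over invented (confidence, correctness) pairs, where a large gap between a model's average stated confidence and its observed accuracy signals exactly the overconfidence described above.

```python
# Minimal sketch: expected calibration error (ECE) over hypothetical model answers.
# The data below is invented for illustration; in practice each entry would pair a
# model answer's stated confidence with whether that answer was actually correct.

def expected_calibration_error(confidences, correct, n_bins=5):
    """Bin predictions by confidence, compare average confidence with observed
    accuracy in each bin, and return the weighted absolute gap."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # map 0.0-1.0 onto a bin index
        bins[idx].append((conf, ok))

    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# Hypothetical example: a model that claims ~90% confidence but is right only
# ~60% of the time in that range is poorly calibrated.
confs = [0.95, 0.90, 0.92, 0.88, 0.91, 0.60, 0.55]
correct = [True, False, True, False, False, True, True]
print(f"ECE: {expected_calibration_error(confs, correct):.3f}")
```

A well-calibrated model would show an ECE near zero; the larger the value, the more its confident tone overstates what it actually knows.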