Glossary

Hallucination

When an AI agent generates plausible-sounding but factually incorrect or fabricated information and presents it as fact.

What is Hallucination?

Hallucinations occur when agents produce confident-seeming outputs not grounded in training data or retrieved context. This includes fabricating sources, inventing facts, creating non-existent citations, or generating plausible but false technical details. Hallucinations are particularly dangerous because the outputs often appear authoritative and well-formatted, making errors difficult to detect without domain expertise.

Reducing hallucinations requires techniques such as retrieval-augmented generation to ground responses in verified sources, confidence scoring to flag uncertain outputs, fact-checking against knowledge bases, and prompting strategies that encourage the model to admit uncertainty. Complete elimination is difficult with current models, so high-stakes applications require verification, human review, or restricting agents to retrieval and summarization rather than open-ended generation.
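
To make the grounding idea concrete, here is a minimal sketch that scores each sentence of a response by its token overlap with the retrieved context and flags weakly supported sentences. The whitespace-and-punctuation tokenizer, the 0.5 threshold, and the example strings are assumptions for illustration, not a production fact-checker.

```python
# Minimal sketch: flag response sentences that are weakly supported by
# retrieved context, using token overlap as a crude proxy for grounding.
# The 0.5 threshold and the simple tokenizer are illustrative assumptions.
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def support_score(sentence: str, context: str) -> float:
    """Fraction of the sentence's tokens that also appear in the context."""
    sent = tokens(sentence)
    return len(sent & tokens(context)) / len(sent) if sent else 0.0

def flag_unsupported(response: str, context: str, threshold: float = 0.5):
    """Return (sentence, score) pairs for sentences below the threshold."""
    sentences = re.split(r"(?<=[.!?])\s+", response.strip())
    return [(s, support_score(s, context))
            for s in sentences if support_score(s, context) < threshold]

context = "The 2023 survey covered 412 hospitals across 17 countries."
response = ("The survey covered 412 hospitals. "
            "It found that 90% of staff preferred the new system.")
for sentence, score in flag_unsupported(response, context):
    print(f"UNSUPPORTED ({score:.2f}): {sentence}")
```

Here the second sentence introduces a statistic that appears nowhere in the retrieved context, so it is flagged for review; real systems use entailment models rather than lexical overlap, but the workflow is the same.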

Example

A research agent asked about a scientific study confidently cites "Johnson et al., Nature 2023" with specific findings and methodology. Verification reveals no such paper exists; the agent hallucinated a plausible-sounding but entirely fabricated citation.
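
One practical verification step is to look a suspect citation up in a public bibliographic index. The sketch below queries the Crossref REST API's bibliographic search and prints the top candidate records for a human to compare against the claimed citation; the query string, result fields, and matching-by-eye workflow are choices made for this example, not a complete verifier.

```python
# Illustrative sketch: look up a suspect citation in the public Crossref
# index so a reviewer can check whether any returned record matches it.
# Crossref returns best-effort matches for any query, so absence of a
# true match (not absence of results) is the red flag.
import requests

def top_matches(citation: str, rows: int = 5) -> list[dict]:
    """Return Crossref's best bibliographic matches for the citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

for item in top_matches("Johnson et al., Nature 2023"):
    title = (item.get("title") or ["<no title>"])[0]
    venue = (item.get("container-title") or ["<unknown venue>"])[0]
    print(f"{title} ({venue})")
```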

How Signet addresses this

Signet's Quality dimension tracks hallucination rates directly by fact-checking agent outputs against ground truth. Agents with high hallucination rates receive significantly reduced quality scores, and their Reliability scores suffer as well, since hallucinations indicate unpredictable behavior.
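
As a purely hypothetical illustration of how a measured hallucination rate might depress a quality score (this is not Signet's actual scoring formula, and the penalty weight is an arbitrary assumption):

```python
# Hypothetical illustration only: NOT Signet's actual scoring formula.
def penalized_quality(base_quality: float, hallucination_rate: float,
                      penalty_weight: float = 2.0) -> float:
    """Scale a quality score down as the hallucination rate rises."""
    penalty = min(1.0, penalty_weight * hallucination_rate)
    return base_quality * (1.0 - penalty)

# An agent hallucinating on 15% of checked claims loses 30% of its score.
print(penalized_quality(0.92, 0.15))  # 0.92 * (1 - 0.30) = 0.644
```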

Build trust into your agents

Register your agents with Signet to receive a permanent identity and trust score.