Hallucination

In short: confident-sounding but false or unsupported claims.


Causes

Hallucination is not a bug; it is a byproduct of how LLMs are trained. A model learns to predict the most plausible next token, not to verify facts, so when it lacks the relevant knowledge it fills the gap with fluent but unsupported text.


Types of Hallucination


How to Detect & Evaluate Hallucination



Look for patterns like:


How to Reduce Hallucination

Prompt Engineering

Good practices (see the prompt sketch after this list):

  - If you are unsure, say "I don’t know".
  - Only answer based on the provided context.
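
A minimal sketch of baking these two instructions into a system prompt; `call_llm` is a stand-in for whatever chat-completion client you use.

```python
# Minimal sketch: the two practices above baked into a system prompt.
# `call_llm` is a placeholder for whatever chat-completion client you use.

def call_llm(system: str, user: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

SYSTEM_PROMPT = (
    "Only answer based on the provided context. "
    'If you are unsure, say "I don\'t know".'
)

def ask(question: str, context: str) -> str:
    # Keep the grounding rules in the system message and the task in the user message.
    user_prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return call_llm(SYSTEM_PROMPT, user_prompt)
```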

Retrieval-Augmented Generation (RAG)

Inject external knowledge into the prompt to ground responses in real data.

Flow (sketched in code after this list):

  1. Retrieve relevant documents.
  2. Provide documents as context.
  3. Generate an answer based on that context.
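
A minimal sketch of this flow, assuming a toy keyword retriever over an in-memory document list (a real system would use a vector store) and a placeholder `call_llm` client.

```python
# Minimal RAG sketch. The retriever is a naive keyword match over an
# in-memory list; real systems use embeddings and a vector store.
# `call_llm` is a placeholder for your chat-completion client.

DOCS = [
    "The Eiffel Tower is 330 metres tall.",
    "The Louvre is the world's most-visited museum.",
]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def retrieve(question: str, k: int = 2) -> list[str]:
    # 1. Retrieve: rank documents by word overlap with the question.
    words = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def answer(question: str) -> str:
    # 2. Provide the retrieved documents as context.
    context = "\n".join(retrieve(question))
    # 3. Generate an answer grounded only in that context.
    prompt = (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer only from the context; otherwise say \"I don't know\"."
    )
    return call_llm(prompt)
```

Because the answer is generated only from retrieved text, unsupported claims become easier to detect and to trace back to a source.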

Tool Use / Function Calling

Allow the model to call external systems, so that real-time data retrieval replaces guessing.
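
A rough sketch of a manual tool-use loop; `get_weather` is a made-up example tool and `call_llm` is a placeholder client. In practice, most SDKs provide native function-calling support instead of this hand-rolled parsing.

```python
import json

# Sketch of a manual tool-use loop: describe the tool in the prompt,
# detect a tool request in the reply, execute it, and feed the result back.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def get_weather(city: str) -> dict:
    # Stand-in for a real API call; returns live data instead of a guess.
    return {"city": city, "temp_c": 21}

TOOLS = {"get_weather": get_weather}

def run(question: str) -> str:
    prompt = (
        "You may call a tool by replying with JSON like "
        '{"tool": "get_weather", "args": {"city": "..."}}. '
        f"Question: {question}"
    )
    reply = call_llm(prompt)
    try:
        request = json.loads(reply)  # the model asked for a tool
        result = TOOLS[request["tool"]](**request["args"])
        return call_llm(f"Tool result: {json.dumps(result)}\nNow answer: {question}")
    except (json.JSONDecodeError, KeyError, TypeError):
        return reply  # the model answered directly
```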


Fine-Tuning / Instruction Tuning

Train the model on specialized, high-quality datasets so it answers more accurately in the target domain.
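
As a sketch, instruction tuning usually starts from prompt/response pairs. The exact file format depends on the provider or library, but a JSONL file of examples (the pairs and file name below are made up for illustration) is a common shape, including examples that teach the model to refuse when it does not know.

```python
import json

# Sketch: prepare an instruction-tuning dataset as JSONL.
# The examples, field names, and file name are illustrative only;
# check your fine-tuning provider's expected schema.

examples = [
    {
        "prompt": "What is our refund window?",
        "response": "Refunds are accepted within 30 days of purchase.",
    },
    {
        "prompt": "Who won the 2030 World Cup?",
        "response": "I don't know; that information is not available to me.",
    },
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```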


Output Constraints

Use structured outputs (for example, a fixed JSON schema) to force the model to be more precise.
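
A minimal sketch that asks for a fixed JSON shape and rejects anything that does not parse or is missing fields; `call_llm` is a placeholder client and the two-field schema is made up for illustration.

```python
import json

# Sketch: constrain the output to a fixed JSON shape and validate it.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

SCHEMA_HINT = (
    "Reply with JSON only, in the form "
    '{"answer": "<string>", "source": "<string or null>"}.'
)

def constrained_answer(question: str) -> dict:
    raw = call_llm(f"{question}\n\n{SCHEMA_HINT}")
    data = json.loads(raw)  # raises ValueError if the model strayed from JSON
    if not isinstance(data, dict) or not {"answer", "source"}.issubset(data):
        raise ValueError(f"missing fields in model output: {raw!r}")
    return data
```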


Post-Processing Validation

Add a verification layer after the model generates a response.
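
One possible sketch: a second model call judges whether the draft answer is supported by the source context, and unsupported drafts are replaced with a refusal. `call_llm` is again a placeholder client.

```python
# Sketch of a post-processing verification pass using a second "judge" call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def verify(answer: str, context: str) -> bool:
    # Ask the model to check support; parse a simple YES/NO verdict.
    verdict = call_llm(
        "Does the context fully support the answer? Reply YES or NO.\n\n"
        f"Context:\n{context}\n\nAnswer:\n{answer}"
    )
    return verdict.strip().upper().startswith("YES")

def safe_answer(question: str, context: str) -> str:
    draft = call_llm(f"Context:\n{context}\n\nQuestion: {question}")
    return draft if verify(draft, context) else "I don't know."
```

The same idea works with non-LLM checks, such as verifying that cited numbers or names actually appear in the source documents.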


Confidence & Citations

Improve transparency by asking the model (as sketched after this list) to:

  - Include references for each claim.
  - If no source is available, say so.
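
A sketch of a citation-demanding prompt; the source ids and documents are made up, and `call_llm` is a placeholder client.

```python
# Sketch: number the sources and require a citation after every claim.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def cited_answer(question: str, sources: dict[str, str]) -> str:
    numbered = "\n".join(f"[{sid}] {text}" for sid, text in sources.items())
    prompt = (
        f"Sources:\n{numbered}\n\n"
        f"Question: {question}\n"
        "Cite a source id like [doc1] after every claim. "
        "If no source supports a claim, write [no source] instead."
    )
    return call_llm(prompt)

# Example usage with made-up documents:
# cited_answer("How tall is the Eiffel Tower?",
#              {"doc1": "The Eiffel Tower is 330 metres tall."})
```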