Understanding and Mitigating LLM Hallucinations

How can we detect and mitigate hallucinations for safer LLM deployment?

March 24, 2024 · 9 min · Olivier MF Martin

A Quick Guide to Explainable AI

Opening AI’s black box to understand how algorithms reason and to foster trust.

February 18, 2024 · 11 min · Olivier MF Martin