Tackling hallucinations: MIT spinout teaches AI to admit when it’s clueless
AI hallucinations are becoming more dangerous as models are increasingly trusted to surface information and make critical decisions. We've all got that know-it-all friend who can't admit when they don't know something, or who resorts to giving dodgy advice based on something they've read online. Hallucinations by AI models are like...