Harnessing Reasoning Trajectories for Hallucination Detection via Answer-agreement Representation Shaping
arXiv:2601.17467v1 Announce Type: new Abstract: Large reasoning models (LRMs) often generate long, seemingly coherent reasoning traces yet still produce incorrect answers, making hallucination detection challenging....
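Since the abstract is truncated, the following is only a minimal sketch of the answer-agreement idea named in the title, under assumptions not stated in the source: agreement is measured as majority-vote consistency of final answers across several sampled reasoning trajectories, and low agreement is used as a hallucination warning. The `sample_trajectory` helper is hypothetical and stands in for an LRM decoding call; the paper's representation-shaping component is not modeled here.

```python
# Hedged sketch (not the paper's method): answer agreement across sampled
# reasoning trajectories as a hallucination-detection signal.
from collections import Counter
import random


def sample_trajectory(question: str, rng: random.Random) -> tuple[str, str]:
    """Hypothetical stand-in for one sampled LRM reasoning trajectory.

    Returns a (reasoning_trace, final_answer) pair. Answers are drawn from a
    toy distribution purely so the script runs end to end.
    """
    answer = rng.choice(["42", "42", "42", "41"])  # mostly-consistent toy model
    trace = f"step-by-step reasoning leading to {answer}"
    return trace, answer


def answer_agreement(question: str, n_samples: int = 8, seed: int = 0) -> float:
    """Fraction of sampled trajectories whose answer matches the majority answer."""
    rng = random.Random(seed)
    answers = [sample_trajectory(question, rng)[1] for _ in range(n_samples)]
    majority_count = Counter(answers).most_common(1)[0][1]
    return majority_count / n_samples


if __name__ == "__main__":
    score = answer_agreement("What is 6 * 7?")
    # Low agreement across trajectories is treated as a hallucination warning.
    flag = "likely hallucination" if score < 0.5 else "likely consistent"
    print(f"agreement={score:.2f} -> {flag}")
```

In this reading, the agreement score is a scalar per question; a method that "shapes representations" would presumably go further and use it as a training or probing signal over internal states, but that detail is not recoverable from the truncated abstract.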