How generative AI undermines scientific accuracy and integrity
‘Hallucinated citations’ and ‘phantom references’ are citations and references invented by Large Language Models (LLMs) that look real but don’t exist.
They combine plausible author names, paper titles, journal names and publication years to mimic genuine publications, yet the works they point to are entirely fictional.
Why are ‘hallucinated citations’ and ‘phantom references’ a problem for scientific accuracy and integrity, including in attachment science? More in my latest Substack post.
