38C3 Lightningtalks

LLMs hallucinate graphs too!
29.12.2024, Stage HUFF
Language: English

LLMs hallucinate. Conveniently, they hallucinate graphs too, which allows for efficient comparisons between models using simple graph-library tools.


The beauty of large language models (LLMs) is that one can ask them
anything, which of course does not mean that they will answer
correctly. Such factually incorrect responses are called
"hallucinations". To quantify these hallucinations conveniently,
let's query the models for famous networks (which their training
consumed, as these networks were available on the Internet). A single
network encodes many facts at once, so this is much faster than
querying for facts one by one to quantify hallucination strength. We
can then compare LLMs to each other in a structured way, using
libraries such as the NetworkX Python library, to measure and display
the most saliently diverging models, as sketched below.
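Here is a minimal sketch of the idea (a hypothetical illustration, not the speaker's actual pipeline): assume a model was asked to list the edges of Zachary's Karate Club, a famous network that ships with NetworkX, and that llm_edges below stands in for its partly hallucinated answer.

import networkx as nx

# Hypothetical LLM answer: some edges are correct, some are hallucinated.
llm_edges = [(0, 1), (0, 2), (1, 2), (5, 30), (12, 33)]
llm_graph = nx.Graph(llm_edges)

# The ground-truth network ships with NetworkX.
truth = nx.karate_club_graph()

# Compare edge sets with Jaccard similarity: 1.0 means the model
# recalled the real network perfectly, 0.0 means pure hallucination.
true_edges = {frozenset(e) for e in truth.edges()}
guessed_edges = {frozenset(e) for e in llm_graph.edges()}
jaccard = len(true_edges & guessed_edges) / len(true_edges | guessed_edges)

print(f"Edge Jaccard similarity: {jaccard:.3f}")
print("Hallucinated edges:",
      sorted(tuple(sorted(e)) for e in guessed_edges - true_edges))

Running the same query against several models and comparing the resulting scores is then an ordinary data-analysis task, and NetworkX can additionally draw a hallucinated graph next to the real one.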

I am a researcher and a critical tech enthusiast.

This will be my fourth attendance at CCC!