
The best explanation of hallucination, and of why we will probably be able to correct it. (A toy sketch contrasting the two training objectives follows the quoted thread.)
RT @DrJimFan: How to build *TruthGPT*? I listened to a talk by the legendary @johnschulman2. It's densely packed with deep insights. Key takeaways:

- Supervised finetuning (or behavior cloning) makes the model prone to hallucination, while RL mitigates it.
- NLP is far from done!

1/🧵 t.co/uJ0NvAukSu
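As a rough illustration of the first takeaway (my own toy sketch, not from the talk): behavior cloning maximizes the likelihood of human demonstrations, which never say "I don't know", so the model learns to guess confidently; an RL objective that penalizes a confident wrong answer more than an abstention shifts probability mass onto "I don't know". The setup, the three-answer action space, and the reward values below are assumptions chosen purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
ACTIONS = ["A", "B", "IDK"]   # candidate answers to a question the model cannot know

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Behavior cloning: demonstrations always guess "A" or "B", never "IDK".
bc_logits = np.zeros(3)
for _ in range(5000):
    demo = rng.integers(0, 2)            # labeler picks a guess
    p = softmax(bc_logits)
    grad = -p
    grad[demo] += 1.0                    # gradient of log p(demo) w.r.t. logits
    bc_logits += 0.1 * grad

# REINFORCE with an assumed reward: +1 correct, -2 confidently wrong, 0 for "IDK".
rl_logits = np.zeros(3)
for _ in range(5000):
    truth = rng.integers(0, 2)           # the answer is unknowable in advance
    p = softmax(rl_logits)
    a = rng.choice(3, p=p)
    reward = 1.0 if a == truth else (0.0 if a == 2 else -2.0)
    grad = -p
    grad[a] += 1.0
    rl_logits += 0.1 * reward * grad     # policy-gradient step

print("behavior cloning:", dict(zip(ACTIONS, softmax(bc_logits).round(2))))
print("RL fine-tuning:  ", dict(zip(ACTIONS, softmax(rl_logits).round(2))))
# Cloning converges to confident guessing (~50/50 on A/B, ~0 on IDK);
# RL moves most of the probability mass onto "IDK".

The reward asymmetry (a wrong guess costs more than abstaining) is what changes the incentive; behavior cloning has no such signal, since it only ever sees the demonstrated guess.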

🐦🔗: n.respublicae.eu/lugaricano/st
