Don't Use ChatGPT for Research
Today's post is inspired by Jess on Bluesky, who is fighting the good fight on behalf of content editors everywhere. Don't use ChatGPT for research!
"It doesn't have thoughts... it doesn't have a database. Stop it!"
It would be astonishing if we could ask an LLM a question and get a definitive, factual, sourced answer. It's tempting to think we're already at that stage, given how confident ChatGPT can sound. But we're not.
By some estimates, a ChatGPT query uses anywhere from 10 to 25 times the energy of the equivalent Google search, yet it frequently invents the answer, making it a waste of resources.
The merging of LLMs with search over the last few weeks seems to have led people to let their guard down.
Here's an example. In the past week, there have been a ton of threads on Bluesky about presidential pardons that never happened.
And here's another.
Yes, Perplexity or AI Overviews might give you a better answer or some actual sources. But they aren't always going to be right.
The danger is that a few correct answers can lead us to accept the incorrect ones without checking them.
If you need speedy answers on something, it's much better to upload documents to NotebookLM and ask questions there, where the sources are known to you.
Side note: I've noticed that ChatGPT really sucks for any kind of navigational query. It either fails to understand intent, as in the screenshot below, or it gives me a malformed URL that I can't click on, plus a super long explanation I didn't want.
I just switched to Brave search, which is working well for me so far. (And it has its own AI tool, Leo, which is helpful. I'll write about that another day.)
Read more about the fake presidential pardons on The Verge (paywall).