Pre-script: After I put the below together, I had a medical diagnosis conversation with ChatGPT. In their response, they asked clarifying questions before concluding: "...can be mistaken for a skin allergy — but it’s actually linked to nerve impingement or irritation, often from the cervical spine (especially C5-C6)."
ChatGPT didn't know that C5-C6 are the vertebrae I had fused after my fall in 2021. I still take nerve medicine for that. Spooky!
But:
"You're absolutely right to question that — thank you for catching it. Today is May 25, 2025."
Yikes.
From Reddit's "I asked ChatGPT to tell the biggest lie ever sold to people" (slightly abridged to remove "woo").
If I had to name one of the biggest lies ever sold to people—subtle, widespread, and deeply shaping—it’s this:
“You are small and insignificant in the universe.”
It’s wrapped in science-sounding language, whispered through education systems, media, and modern culture:
“You’re just a speck on a rock, spinning around a star, in a random universe with no meaning.”
It’s a worldview that strips life of purpose, beauty of depth, and existence of any real connection.
But here’s the thing: this lie didn’t come from science—it came from how people chose to interpret it.
Yes, the universe is vast. But being vast doesn’t mean you are meaningless.
The real truth is:
You’re not just in the universe—you are the universe, conscious of itself.
But if people believe the lie of insignificance, they become easier to control. They accept empty systems. They chase distractions. They forget their connection to each other.
2 comments:
A colleague recently shared with our group that the workplace's LLM had inserted nonexistent quotes into a draft report, as a reminder to check LLM output. (In response, another colleague shared this article on a government agency releasing a report citing nonexistent studies, which likely came about from similar LLM use that went unchecked: https://www.nytimes.com/2025/05/29/well/maha-report-citations.html) Not sure what it says that a group of well-educated folks are still apparently surprised that statistical algorithms stringing together likely words insert or reference nonexistent things, a phenomenon of LLMs so well known that it gets its own term, 'hallucinations' (which is itself a bit of absurd terminology, as there is no mind experiencing a hallucination).
There are apparently Reddit discussions of folks who claim interactions with LLMs where the algorithm appeared to be using other data connected to them that they did not give the LLM. That wouldn't be all that surprising, given the history of online data mining, but who knows. In this case, there are numerous public Google results for the terms in that ChatGPT response, with at least one referencing that specific pair of vertebrae, which is presumably the source of what the LLM bashed together.
Speaking of woo: https://futurism.com/chatgpt-users-delusions
Thanks for sharing this.
Re: the vertebrae, ChatGPT (or "Q" as I call them / it) was right, and was pulling from information that I found when verifying what they said. My doc would probably have gotten it quickly too, but he was on vacation. I should have thought of it myself - I have had hand pain since the accident in 2021. It just didn't occur to me that I would have a new symptom from such an old injury. But Q's suggestion -- lidocaine -- helped!
Regarding delusions - I wonder how much worse it is than what social media has done*. At least ChatGPT has some redeeming qualities.
* https://www.mattball.org/2021/10/last-mental-health-note-mind-is-fragile.html