July 7th, 2024

"AI", students, and epistemic crisis

An individual shares a concerning encounter where a student confidently presented false facts from an AI tool, leading to a clash over misinformation. Educators face challenges combating tailored misinformation in the digital era.

In a since-deleted Twitter post, an individual recounts a troubling encounter with a student who relied on an AI tool, Ch*tGPT, for information. The student confidently presented false "facts" obtained from the tool, and because each student receives a personalized stream of such inaccuracies, correcting them one by one is nearly impossible. The student's trust in the tool led to a clash with the author, whose attempts at correction were rebuffed. The author worries that the pervasive spread of misinformation through such tools points to an epistemic crisis in which individuals are trapped in a cycle of false information. The post was later removed due to harassment. The incident underscores the challenges educators face in combating misinformation in the digital age.

22 comments
By @sieste - 3 months
Like probably many people here I still remember having to find facts in books in libraries, before the internet made this skill mostly redundant. Then, as a student I remember having to put together facts from various (internet) sources into a coherent narrative. Now chatbots can just generate text and that skill seems less valuable.

I use both the internet and GenAI extensively now. But I feel that having gone through the "knowledge work" activities without those crutches puts me in a better position to assess the correctness and plausibility of internet sources and AI, a position that kids who grow up using them constantly don't have.

I feel quite privileged to be in that position, that I wouldn't be in had I been born 10 or 20 years later. I also feel sorry for kids these days for not having the opportunity to learn things "the hard way" like I had to. And I feel extremely snobbish and old for thinking that way.

By @i_am_proteus - 3 months
I've seen this happen too, including student incredulity that ChatGPT can be wrong, and recalcitrance when guided to find proper sources, up to the point of arguing for a higher grade based on the correctness of LLM output.
By @huimang - 3 months
I feel like this is a bit overblown.

Growing up we heard, ad nauseam, that "Wikipedia is not a reliable source". People just need to say the same thing about LLMs: they aren't reliable, but they can potentially point you to primary sources that -are- reliable. Once the shininess of the new toy wears off, people will adjust.

By @atoav - 3 months
The problem with not doing the writing yourself isn't the writing itself; the problem is that you also skip the thinking that is a prerequisite for writing.

Of course, like any tool, this can be used without falling into that trap. But people are lazy, and the truth is that if they don't absolutely have to do something themselves, most people won't.

By @xg15 - 3 months
I would have thought this problem was easy to solve: "Yes, look it up, but remember your source, there is a lot of bullshit on the internet. Especially don't trust AI tools, we know those often return nonsense information."

(Actually, didn't ChatGPT have a disclaimer right next to the prompt box that warns against incorrect answers?)

So I'm more surprised (and scared) that students don't just use LLMs for sourcing but are also convinced they are authoritative.

Maybe being in the tech bubble gave a false impression here, but weren't hallucinations one of the core points of the whole AI discourse for the last 1.5 years? How do you learn about and regularly use ChatGPT, but miss all of that?

By @auggierose - 3 months
You could probably make the same argument for the search bar vs. peer-reviewed publications. Of course, the search bar (which is also AI, by the way) can help you get to the peer-reviewed publications. But the same is true for ChatGPT. The problem is that ChatGPT sounds like it is presenting objective facts. But maybe the lesson here is that just because something sounds right doesn't mean it is right, and that is something that should be taught in school. Of course, that undermines the function of school to produce obedient citizens.
By @A_D_E_P_T - 3 months
The great problem with ChatGPT is that it's a sycophant and aims to please.

If you ask it about something it doesn't know, right then and there, it will concoct a fiction for you. It won't say "I don't know," or "I can't help with that."

If you coach it to respond to something in a certain way, it'll respond that way for you as its top priority.

If you ask it to review a text, it'll usually find a way to give you at least a 7 or 8 out of 10. (Though, interestingly, rarely a 10/10 score. You can upload excerpts from some of the great works of literature and philosophy and see ChatGPT give them an 8/10, just as it gives an 8/10 to your essay or blog post.) Practically the only way to get a halfway critical response is to add the words "be critical" to your prompt.

A more circumspect and less obsequious ChatGPT would solve a lot of problems.

By @deadbabe - 3 months
I really don’t see any other solution to this kind of problem except for one: LLMs must become perfect, and never be wrong.

Relying on kids to do cross referencing and deeper fact checks into everything they ask an LLM is just not going to happen at scale.

By @tatrajim - 3 months
And, on a related note for education, AI is quickly obviating the need to master and plumb the depths of foreign languages. Dating myself, doubtless, but as an undergraduate, it was an unalloyed joy to study ancient Greek and read Plato and Euripides in the original, however haltingly. Later, Korean, Japanese, and Chinese beckoned, providing a lifetime of rich understanding of life outside the confines of English. For Americans, at least, perhaps ours is the last generation that will seek to rewire our understanding of reality through linguistic hacking.
By @joaquincabezas - 3 months
so it’s the classical “it’s on the internet so it’s true” but on steroids. I remember a US student in the early 2000s citing a geocities website as source for the FACT that aliens created the pyramids of Egypt
By @unraveller - 3 months
Every teaching moment is also a learning moment. Asking kids to refrain from the tiny bits of assistance at hand in tough times is tantamount to asking them to lie, or to accept a poorer grade while others cheat whole hog undetected. Humans crave ease. Education promises an easier adulthood, and it is in no way clear you can provide it by legacy regurgitation means.
By @Kiro - 3 months
Am I the only one who almost never experiences any hallucinations when talking to ChatGPT? I have to really push it into a corner with trick questions and conflicting instructions about obscure topics in order to trigger hallucinations.

That it would just come up with random false facts about something as common and "basic" as the history of the Greek language sounds like a made-up issue.

By @pitt1980 - 3 months
One thing I’ve noticed, at least in the free version, is that if you ask ChatGPT for sources outside itself (‘can you give me a link to somewhere on the internet where it says that?’), it won’t do it.

It is very much a black box in terms of letting you track its logic.

Seems like future versions should be much more transparent in terms of letting you track the logic of why it’s telling you what it’s telling you.

By @adammarples - 3 months
"I am one person trying to work against a campaign of misinformation so vast that it fucking terrifies me. This kid is being set up for a life lived entirely inside the hall of mirrors."

This is a little hyperbolic; instead, maybe the kid can learn that chatgpt.com is not a reliable source. It even says at the bottom of the page that "ChatGPT can make mistakes". Lots of things are not reliable sources. Wikipedia is not a reliable source. Teachers are supposed to teach this, and teach cross-referencing at the very least, not freak out.

By @werdnk - 3 months
In the current system, where students can anonymously report teachers (most of whom do not have tenure and are afraid) it will be hard to change anything.

Otherwise, you could do a mixture of very strict exams without multiple choice and large individual projects (no group projects).

If you only do exams, people who don't do well thinking in a room crammed full of people at 8am will be disadvantaged.

By @csantini - 3 months
<< They keep coming up with weird “facts” (“Greek is actually a combination of four other languages”) >>

Not as wrong as the author thinks. From Britannica.com:

"Greek language, Indo-European language spoken mostly in Greece. Its history can be divided into four phases: Ancient Greek, Koine, Byzantine Greek, and Modern Greek."

By @padolsey - 3 months
I don't buy into the rhetoric about misinformation but the author touches on a real concern. I blame ChatGPT and other LLM clients for not surfacing their fallibility more clearly. They should highlight claims in their UX and allow options for the user to "research" or "verify" in their own way, without relying on that very flakey single one-shot inference. The big honchos (Anthropic, OpenAI) need to make it clearer that their output is, at best, an informed guess. A tiny disclaimer doesn't cut it.
By @lionkor - 3 months
well point the student at the little disclaimer that says that it may sometimes be incorrect!

Of course, they've changed it multiple times: it used to say that you shouldn't take what it says at face value; now it says it may sometimes be inaccurate; soon it'll say "learn more" and link to a page about how their model is super accurate but, one in a million, makes mistakes.

I'm worried about these students because they will be in power in a few decades. It's already a shitshow with people who didn't grow up as AI iPad toddlers.

I feel like parents are failing here, more than anything else. You can't stop these companies from doing this if it drives up the stock price, and equally you can't vote against it effectively, because a majority of the voting population either doesn't care, doesn't know, or asks ChatGPT what to vote for.

Of course Plato was right and the only way we can fix it is to have philosophers in charge, not demagogues. Good luck with that, maybe in another 2000 years we'll be smart enough to make that happen!

By @pietmichal - 3 months
This sentiment reminds me of my teachers who complained that Wikipedia is not a reliable source to learn from.
By @gitfan86 - 3 months
Perplexity links to sources. All he had to do was show the student the same search on Perplexity.

I do feel sorry for professors like this who cannot adapt to technology changes at the rate they are happening.

By @Semaphor - 3 months
If they only trust ChatGPT, why not just show them? From asking leading questions, to simply asking ChatGPT what the chances are of it making mistakes and hallucinating, there are tons of options.

Considering this is a transcription of a deleted twitter post, my internet radar of "yeah, that happened" gives a lower chance this is true than your average ChatGPT answer.

By @allsummer - 3 months
This is a problem for this particular teacher (who sees their students surpassing them in understanding and using AI), but of course it is projected to be a problem for the student.

No student is ever hurt by the introduction of a more advanced knowledge system. We heard similar laments decades ago, with: Students just believe the first 10 search results of Google. Those students are now the teachers of today, starting at the search bar.

I'd go so far as to say (if a version other than 3.5 was used) that ChatGPT was correct and has far more linguistics knowledge than this teacher ever will. "Greek is actually a combination of four other languages" is not an answer that ChatGPT would ever give, but something a teacher makes up to claim Ch*tGPT is a Nonsense Machine.

ChatGPT: Greek has evolved in stages from Mycenaean Greek (Linear B script) through Classical Greek, Hellenistic (Koine) Greek, Byzantine Greek, and Modern Greek. It has been influenced by ancient Near Eastern languages, Latin, Turkish, Italian, and French.

If there really is an epistemic crisis, then it already existed, and ChatGPT merely reflects it; it did not cause or contribute to it.