29 comments
Altbot

@paninid Here is the alt-text description for the image:

A screenshot of a tweet from user "mike ginn" with the handle "[@]shutupmikeginn". The tweet reads: "its amazing how chatgpt knows everything about subjects I know nothing about, but is wrong like 40% of the time about things im an expert on. not going to think about this any further" Above the tweet is the user's profile picture, which is a black and white photo of a man holding a camera.

Provided by @altbot, generated using Gemini

Geoff Coffey

@paninid To be fair, the news is this way too.

Alex von Kitchen

@gwcoffey @paninid i was going to say this is why i stopped reading newspapers

TC

@Dangerous_beans @gwcoffey @paninid came here to say the same.

Having deep knowledge of something makes it absolutely clear that "news" all too often crosses the line between "explains something complex in an easy-to-understand way" and "explains something complex by making it so simple that it's no longer true".

Alex von Kitchen

@for that's generous. my area of expertise is in social welfare and health systems, and half the time they are straight up lying about that

iveyline

@paninid As someone wrote in a post, AI is 90% hype and 10% useful. We have human intelligence. Using AI to compose a letter or write a book is sheer laziness and turns us into zombies. Apart from specialist data-mining purposes, AI serves little purpose.

Thomas

@iveyline
Where mindless form is required, "AI" is just as good an answer as non-compliance. In other words, LLMs can be a high-tech tool of sabotage for Luddites in the socio-technological realm.

@paninid

Keyka

@iveyline @paninid It seems like all the hype around AI came about because of LLMs. Beyond that, I see many useful (but not newsworthy) applications of AI: from better OCR (text recognition), translating languages, playing board games, detecting cheaters in chess games, generating music recommendations that users like, and detecting users with a high chance of bad credit, to classifying fireflies (which are increasingly rare) from video footage.

Wynke

@iveyline @paninid Although I think that AI is not usually very useful and often rather destructive (both in 'being wrong' and in environmental impact), I do not quite agree with this reasoning. There are a lot of things that we could, and used to, do ourselves, and that were made a lot easier by tools that were also first seen as 'just laziness'.

Vacuum cleaners, sewing machines, automobiles, printing presses and more come to mind.

Wynke

@iveyline @paninid Obviously I do not mean 'go ahead, let AI write your book for you'. But that is because AI has no actual creativity and no accountability and because of environmental and societal impact. Not because it is 'lazy' or because using tools is a bad thing. Because that would also apply to a great many things that are actually positive developments.

schratze

@paninid and you used gemini for the image description on this? lmao

noodle

@paninid
I caught an expert I work with using ChatGPT to look up stuff they should have known (and could have asked me about). I guided them through this logic. I think they 'got it' by the end.

Nazo

@paninid I guess people don't notice this particular facet because they don't really ask about things they're an expert on.

Well, ok, and they have to be an expert on something in the first place......

Paul Sutton

@paninid

This is never brought up when the media discuss AI.

Schrank :shopware: 🐘 (er/ihm)

This has a name. But I can’t find it.
We tend to assume news articles about topics we don't know much about are true, even though we see tons of mistakes when we are experts in the field.

Can someone please tell me the name of this bias/effect/…? It's driving me nuts.

@paninid

Davey

@Schrank @paninid
Gell-Mann Amnesia is it?
Usually relating to journalism.

I always mix it up with the Voight-Kampff test from Blade Runner

Lady MountainJay

@Schrank @paninid

"The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies."

en.wikipedia.org/wiki/Gell-Man

Henryk Plötz

@paninid This is called the Baader Meinhof effect. Source: ChatGPT.

Dieu

@paninid Sounds like average journalism.

bit

@paninid I don't use AI. It's not an ideological decision for me; it just doesn't ever work. I have a little test for every AI I come across: I tell it to make a hello-world axum server with TLS enabled. There are 3 correct solutions right in the axum GitHub repo, and more in other repos. The last AI I tested was Gemma 3, released days ago by Google and supposedly competitive with DeepSeek. Well, it started making shit up from the start, just like every other AI I tested previously.
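
For reference, a minimal sketch of what such a solution might look like, loosely based on the tls-rustls example in the axum repository; the crate features and the cert.pem / key.pem paths here are assumptions, not something from the original post:

```rust
// Hello-world axum server with TLS via axum-server's rustls support.
// Assumes: axum, tokio (with the "full" feature), and axum-server with
// the "tls-rustls" feature, plus a local self-signed cert.pem / key.pem.
use axum::{routing::get, Router};
use axum_server::tls_rustls::RustlsConfig;
use std::net::SocketAddr;

#[tokio::main]
async fn main() {
    // Load the certificate chain and private key from PEM files.
    let config = RustlsConfig::from_pem_file("cert.pem", "key.pem")
        .await
        .expect("failed to load TLS cert/key");

    // One route that returns a plain "Hello, World!".
    let app = Router::new().route("/", get(|| async { "Hello, World!" }));

    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));

    // Serve the router over HTTPS.
    axum_server::bind_rustls(addr, config)
        .serve(app.into_make_service())
        .await
        .unwrap();
}
```

Any answer along these lines would pass the test; the complaint in the post is that the models invent something else instead.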

Kierkegaanks regretfully

@paninid That 40% is probably for ChatGPT… the Meta-sourced AI that was built into one of my apps is wrong maybe 60-80% of the time when I know the subject.

C.S.Strowbridge

@paninid

I've heard the same thing said about Elon Musk.

Szescstopni

@paninid @mrundkvist Michael Crichton named this effect "Gell-Mann amnesia".

From Wikipedia:

In a speech in 2002, Crichton coined the term Gell-Mann amnesia effect to describe the phenomenon of experts reading articles within their fields of expertise and finding them to be error-ridden and full of misunderstanding, but seemingly forgetting those experiences when reading articles in the same publications written on topics outside of their fields of expertise, which they believe to be credible. He explained that he had chosen the name ironically, because he had once discussed the effect with physicist Murray Gell-Mann, "and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have."

en.wikipedia.org/wiki/Gell-Man
