@Dangerous_beans @gwcoffey @paninid Came here to say the same. Having deep knowledge of something makes it absolutely clear that "news" all too often crosses the line between "explains something complex in an easy-to-understand way" and "explains something complex by making it so simple that it's no longer true".

@for That's generous. My area of expertise is in social welfare and health systems, and half the time they are straight up lying about that.

@iveyline @paninid It seems like all the hype around AI came about because of the LLM. Beyond that, I see many useful (but not newsworthy) applications of AI: from generating better OCR (text recognition), translating languages, playing board games, detecting cheaters in chess games, generating music recommendations that users like, and detecting users with a high chance of bad credit, to classifying fireflies (which are increasingly rare) from video footage.

@iveyline @paninid Although I think that AI is not usually very useful and is often rather destructive (both in being wrong and in environmental impact), I do not quite agree with this reasoning. There are a lot of things that humans could, and used to, do ourselves that are made a lot easier with the help of tools that were also first seen as "that's just laziness". Vacuum cleaners, sewing machines, automobiles, printing presses and more come to mind.

@iveyline @paninid Obviously I do not mean "go ahead, let AI write your book for you". But that is because AI has no actual creativity and no accountability, and because of its environmental and societal impact. Not because it is "lazy" or because using tools is a bad thing, since that would also apply to a great many things that are actually positive developments.

This has a name, but I can't find it. Can someone please tell me the name of this bias/effect/…? It's driving me nuts.
"The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies." @paninid I don't use AI, not an ideological decision for me, it just doesn't ever work. I have this little test for every AI I come across, where I tell it to make a hello world axum server with tls on. There are 3 correct solutions right in the axum github repo, and more in other repos. The last AI I tested was gemma3, it was released days ago, from Google, supposedly competes well with deepseek. Well, it started making shit up from the start, just like every other AI I tested previously. @paninid that 40% is probably for chatgpt… the meta sourced ai that was built into one of my apps is wrong maybe 60-80% of the time when i know the subject @paninid it sounds a lot like Gell-Mann amnesia does for the media: https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect?wprov=sfti1 |
@paninid Here is the alt-text description for the image:
A screenshot of a tweet from user "mike ginn" with the handle "[@]shutupmikeginn". The tweet reads: "its amazing how chatgpt knows everything about subjects I know nothing about, but is wrong like 40% of the time about things im an expert on. not going to think about this any further" Above the tweet is the user's profile picture, which is a black and white photo of a man holding a camera.
Provided by @altbot, generated using Gemini