While a liar subverts the truth intentionally, a bullshitter simply doesn't care about it. Some argue that Hicks et al.'s claim that ChatGPT is similarly indifferent to the truth falls short because GenAI tools lack intent. However, while the tools themselves are merely "probabilistic automation systems", their creators (companies like OpenAI, Google, Anthropic, and Microsoft) do have intent: they aim to tempt users into prolonged use and paid subscriptions by prioritizing engaging, pleasing output over accuracy. So the "bullshitter" comparison isn't far off.
ooooh I like the way everything in this article is written, in general, but this is also a point I sometimes feel isn't hammered enough: when you give up control of your voice and work to an LLM, you are giving control to a set of unknown individuals whose motives are probably in conflict with your own. And regardless of accuracy (which is low!), that is a very, very bad idea.

RE: https://toot.cafe/users/baldur/statuses/116130499944110898

Because misinformation and inaccuracies in LLM output are often subtle, the advice to use GenAI tools "critically" does not work well for summarization. Slight inaccuracies can be very damaging in academic work, but they can usually be caught only by a close reading of the original text and/or by an expert. And needing to read the full text closely to verify the AI output defeats the whole purpose of generating a summary.

