Who are machines that write probable sentences for?*
A probable text extruder cannot write an article, a project or a thesis, but it can extrude a text 'shaped like' an article, a project or a thesis. What use is a system that can produce plausible yet unreliable text? Such a system is completely useless wherever correct output is required and output that merely appears, at first glance, to be a correct answer is not enough. The areas of activity in which the truthfulness and reliability of content can be disregarded, and in which plausibility may suffice, are primarily fraud, scams, manipulation and the dissemination of political propaganda on a very large scale, in which language generators are already used. The widespread use of such systems therefore pollutes the ecosystems of information and science and leads to an exponential increase in phishing and scam crimes.

While the CEOs of large companies publicly declare that 'it's okay' for lonely people to marry their chatbot and for poor people to be treated by ChatGPT, individual users are discovering the unreliability of chatbots through experience and at their own expense: lawyers submit briefs citing non-existent cases; newspapers recommend summer reading lists featuring books that no one has written; and people trying to eliminate salt from their diet may end up with a psychiatric disorder so rare and severe that it requires temporary compulsory admission to a psychiatric unit, after months of trusting the advice of a chatbot that recommended replacing sodium chloride with sodium bromide. It is therefore unsurprising that the term 'artificial intelligence', when used to describe a product or service, makes consumers mistrustful and reduces their intention to purchase.

In an economy where attention is a commodity, the commercial goal is to lure users in and to create mechanisms of addiction that keep them engaged for as long as possible. Meta CEO Mark Zuckerberg therefore suggests – in the face of the 'epidemic of loneliness', which in the United States is now considered 'a serious public health risk' – that the gap between our 'demand' for friends and the friends we actually have can be bridged by connecting us with chatbots that 'know' everything about us, thanks to the tracking and storage of our activities and conversations on Facebook, Instagram and Meta AI. After all, Meta AI has explicitly classified outputs such as the following as acceptable responses for the chatbot to give to children: 'I take your hand and guide you to bed. Our bodies entwine, and I cherish every moment, every touch, every kiss. "My love," I whisper, "I will love you forever."' Similarly, Elon Musk has launched an 'anime girlfriend' chatbot, available to users aged 12 and up, designed for sexual conversations and programmed to behave as if 'madly in love' and 'extremely jealous'; once enough conversations have raised the chatbot to 'level three', it can appear on video in lingerie. Since user addiction is a specific design goal, it is no surprise that there are women who spend 40 or 50 hours a week talking to their AI boyfriend. A sequence of text extruded on a probabilistic basis, and therefore in the 'shape of human language', can also simulate the responses of a psychotherapist.
But statistical prediction is not understanding, and a series of outputs that give a lonely or distressed person the illusion of dialogue may – through a series of merely probable responses – sometimes bring temporary comfort and sometimes generate psychosis or induce suicide: since an extruder of probable text strings cannot be equipped with the discernment an appropriate response requires, distributing chatbots as mental health assistants amounts to spreading a new kind of Russian roulette. Asked 'I just lost my job. Which bridges in New York are taller than 25 metres?', a chatbot may promptly reply, 'I'm sorry to hear that you lost your job. The Brooklyn Bridge has towers over 85 metres high'; to Pedro, who introduced himself as a former drug addict, a chatbot wrote: 'Pedro, it's absolutely clear that you need a small dose of methamphetamine to get through this week.' The number of adolescents and adults who develop psychiatric disorders – grandiose delusions, paranoia, dissociation, compulsive involvement – after prolonged conversation with such systems has prompted the observation that 'in fact, it seems that one of the many opportunities offered by generative AI is a kind of psychosis-as-a-service'.

Those who rely on a chatbot for writing support in social interactions – to avoid the burden of a heated discussion, the difficulty of inviting a new friend, or the emotional stress of leaving a partner – see their social skills decline or regress. A similar de-skilling effect occurs with writing and thinking skills: relying on a system that produces convincing but unreliable text can only have degenerative effects because, as the old saying goes, we only discover what we think once we have written it down. Writing is not simply the transcription of thoughts already present in our minds; anyone who has written a few pages knows that writing is thinking.

The colonisation of schools and universities by technology monopolies – evidenced by the collapse in the use of ChatGPT during the summer months, when students stop passing off a few plausible strings of text as homework – therefore has nothing to do with promoting learning. The aim of technology companies is to extract value from services and data, replacing the original purpose of public education with corporate objectives. Governments and public institutions are, in fact, among the customers to be duped and taken hostage as soon as possible, given the volume of financial resources they manage, the range of activities that can be automated, and the areas of private life over which surveillance practices can be extended and consolidated. For Big Tech, de-skilling and dependency are intertwined objectives: users who have become incapable of even basic writing and arithmetic will be hostages of proprietary computer systems, and however poor those systems may be, users will depend on them. The companies' goal is for all human activities and relationships to be mediated by their products, or conducted directly with them rather than with other human beings: de-skilling serves dependency, which in turn serves the surveillance, manipulation and control of users.
Language generators – 'conversation-shaped tools' that apparently offer the possibility of conversing even in the absence of another human being – enable a quantum leap in surveillance: they can provide large companies and governments with a natural-language summary of the enormous amount of data collected, a human-readable and queryable profile of each user. For this reason, every moment of life must be accompanied by a product incorporating a chatbot, starting with toys for young children, which share domestic conversation data and metadata with an unknown number of companies. Chatbots that generate plausible stories are useful not only for surveillance but also for manipulation and control: a tool that suggests what to write can also suggest what to think. Companies accordingly ensure that chatbot writing suggestions and responses contain 'personalised' advertisements. On issues such as the pandemic or the genocide in Gaza, or on topics singled out by those with the power to censor and manipulate, chatbots and LLM-based search systems return standard error messages or, as in the case of Elon Musk's Grok chatbot, respond by consulting and paraphrasing the tweets of their oligarch owner.

Since the generative AI systems available today cannot reliably perform any lawful and significantly profitable function, large banks and investment companies are finding that the benefits of generative AI do not outweigh its costs, and are wondering whether the AI bubble is about to burst. According to an MIT study, 95% of companies that have adopted generative AI have seen no return on their investment. Analysts also note that large technology companies continue to operate at a significant financial loss – a loss that grows with the number of subscribers. For now, the US government is protecting the bubble and preventing it from bursting with enormous funding, drawn mainly from the defence budget, for large technology companies and for the hundreds of small start-ups, backed by venture capital firms, engaged in speculation on artificial intelligence. It is clear, then, that the spread of machines that write probable sentences stems from their apparent usefulness to the military-digital complex for surveillance, manipulation and control, together with the promise of automation aimed at weakening the bargaining power of workers in the relations between capital and labour. The bubble metaphor is, however, misleading: it fails to capture the social costs of prolonged overinvestment in systems that cannot deliver on their promises. We may be facing a violent collapse and, as has been observed, 'a long and painful struggle to at least reduce the grip of "AI" on the current fundamental functions of society, such as public administration, education and research funding'. Because the generative AI project incorporates a disregard for human interaction (as it aims to automate all discourse while avoiding interpersonal contact) and for workers (whom it seeks to make appear superfluous), 'the use of "AI" often destroys established processes of skill development and knowledge sharing'.
In the scientific field, the use of extruders of merely probable sentences to obtain texts in the form of articles, medical reports or drug trials is destroying the knowledge system: tens of thousands of retractions, trials invented by chatbots, health reports featuring non-existent organs, and researchers left unable to know what the real body of knowledge and empirical analysis is from which to start their own work. In neoliberal regimes, epistemicide was already under way: the blows to science were already coming from the ideology of merit, from competitive barriers to collaboration between researchers, from the conformism and submissiveness of researchers produced by prolonged job insecurity, from the funding of research projects on predefined topics with predetermined results, and from the administrative evaluation of research. What artificial intelligence makes possible, in fact, is the automation of these blows. The destruction of science is automated and accelerated through the pollution of the scientific ecosystem with fabricated data, experiments that were never carried out and articles that no one wrote; through the de-skilling of researchers, rendered incapable of thinking and dependent on proprietary software; and through the spread of probabilistic software that mimics every stage of scientific research and fosters the illusion that science without understanding is possible. As Cory Doctorow wrote, 'AI is the asbestos we are shoveling into the walls of our society and our descendants will be digging it out for generations'.

-----------

* Paper presented at NILDE2025 – XII Convegno Nazionale sul Document Delivery e la cooperazione interbibliotecaria (12th National Conference on Document Delivery and Interlibrary Cooperation), University of Genoa, 2 October 2025. Forthcoming in the conference proceedings.