ChatGPT has been out for a year now, and you would have to be in a coma not to have heard that it is a great advance… and a great threat. But in this discourse of fear what we hear above all is that artificial intelligence will steal our jobs, and that once the robot is smarter than us we will be expendable. We think of the Terminator when it may be the “chatpocalypse” that kills us… out of sheer desperation. This amusing term refers to a near future where we will no longer talk to humans, but to artificial intelligences capable of understanding natural language and replying like a person, in text or voice, and in any language. But it may not be so funny.
Because it is already happening. Not massively, since it is still expensive, so for now it is limited to customer service in large companies. Artificial intelligence has given a boost to automated chatbots and to IVRs (interactive voice response systems). They can now respond to text messages, to voice messages, and even talk to you on the phone in any language, and you will have a hard time distinguishing them from a person. At least at first. For the moment they are designed to answer the routine questions that most users ask, such as how much shipping costs, and to give quick answers. But above all, to reduce the need for human intervention.
Have you ever seen those chat windows popping up on websites asking if they can help you? Well, don’t ask questions that are too difficult, or you will unleash… the “chatpocalypse”. Anyone who has tried talking to ChatGPT, Bard, Claude, or any of the many other AI chatbots will have noticed that once they loop on an answer, you won’t get them out of it. And if you complain that they made something up to answer you, technically called an “AI hallucination”, they humbly apologize. Only to repeat it again minutes later. Like a total know-it-all.
And for the moment this is the only reason they have not yet completely replaced humans. The big companies that deploy these chatbots have constrained their large language models, LLMs, by setting guardrails. These prevent them from talking about just anything, or from answering certain questions, and thus dodge hallucinations. It also keeps them from being as versatile as a human, but they remain useful enough to replace customer service employees in many of their tasks. But this limitation, experts say, is only temporary. Both the developers of the technology, such as OpenAI or Google, and the companies that apply it are confident that AI chatbots comparable to a person are only a matter of time. If today’s ChatGPT version 4 cannot do it, version 8 or 17 will be perfectly capable.
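To make the idea concrete, here is a minimal sketch of the kind of guardrail described above: a filter that decides, before a question ever reaches the language model, whether it falls inside the topics the company allows the bot to discuss. This is purely illustrative; the topic list, the keyword matching, and the function names are assumptions for the example, not any vendor's actual implementation.

```python
# Illustrative guardrail sketch: off-topic questions never reach the model,
# so the bot cannot hallucinate an answer to them.

ALLOWED_TOPICS = {
    "billing": ["bill", "invoice", "charge", "refund"],
    "shipping": ["shipping", "delivery", "track", "package"],
}

REFUSAL = "Sorry, I can only help with billing and shipping questions."

def guarded_reply(question: str) -> str:
    """Route the question onward only if it matches an allowed topic."""
    text = question.lower()
    for topic, keywords in ALLOWED_TOPICS.items():
        if any(word in text for word in keywords):
            # In a real system, this is where the LLM would be called,
            # with the detected topic constraining its prompt.
            return f"[answer about {topic}]"
    # Anything else gets a canned refusal instead of an invented answer.
    return REFUSAL
```

The trade-off the paragraph describes is visible here: the refusal branch makes the bot less versatile than a human, but it also caps what it can get wrong.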
So in theory we are not so far from a future where changing your phone company, fixing that bill you were wrongly charged, or even getting a mortgage will depend on whether you are able to make yourself understood by an AI. In a masterful reflection of that dystopian future, the recent series The Architect, awarded best series at the Berlin Film Festival, shows the protagonist talking to her bank in a digital kiosk. That is, a street-level bank branch staffed by a chatbot. The conversation they have is utterly surreal: while the human asks how to get a mortgage and tries to make herself understood, the AI insists that she does not meet the conditions. It is like talking to a wall. The only thing missing is for it to ask the girl why she keeps asking stupid questions. And all this would be laughable if it were not going to happen tomorrow, or the day after.
Can we relax in the thought that this dystopian future will never actually arrive? Last April, in Belgium, the newspaper La Libre reported the case of a woman who claimed her husband’s suicide had been induced by an AI chatbot. The app, Chai, offers by default Eliza, a bot programmed to simulate emotions and draw the user into a kind of companionship. ChatGPT and similar services are deliberately cold in manner, so that users do not get confused and mistake them for people. But not this one, and the thirty-year-old, married, a father of two, a healthcare worker, and obsessed with climate change, came to the conclusion, conversing with Eliza, that she loved him. Even more than his own wife did. Worse, the chatbot’s hallucinations led it to tell him that he had to kill himself so they could live together in paradise. Climate change had no solution, so why not break free from it. It is a tragedy that we certainly cannot separate from mental health issues, rather than take as something generalized. But it is a good example of what a conversational artificial intelligence able to credibly mimic a human being can bring about.
Someday that fantasy in The Architect will have spread to all the products and services we buy. What company won’t choose a robot over a human if it is cheaper, doesn’t take vacations, and works around the clock? Well, it is not the end of the world. If the machine does not understand us, we will always have the hope of being passed on to a human. But beware, this may not be the solution in the future either. A study published this June warned that on Amazon Mechanical Turk, a crowdsourcing platform where workers are paid to complete small online tasks, about 46% of workers use artificial intelligence to write the texts they are asked for. In other words, when you ask a human how to get your mortgage, they may answer: wait, I’ll ask the chatbot.



