ALEXANDRU ENE
PM, Lodge Millenium No. 58, NGLR
While artificial intelligence (AI) is praised for its revolutionary potential, concerns are rising about what we might call “artificial stupidity” – the negative and unintended effects of hastily and recklessly implemented AI technologies. This essay explores the problematic aspects of current AI, drawing on recent studies and reports cited in various sources.
First point
Language debasement
A recent study from the prestigious University of Tübingen in Germany has revealed a worrying tendency in language use that can be partly attributed to AI language patterns. Researchers have observed a significant increase in the use of “surplus words”, many of them borrowed from corporate jargon.
The study shows that:
- The use of words such as “showcasing” and “underscore” increased almost tenfold in 2024 compared with 2023.
- The frequency of words such as “potential”, “findings” and “crucial” has also increased dramatically; these words are likewise associated with corporate jargon.
- Previously, similarly abrupt increases in the use of certain words had been seen only around major global events, such as the term “Ebola” in 2015 or “coronavirus”, “lockdown” and “pandemic” between 2020 and 2022.
This tendency suggests that generative AI models can contribute to flattening and impoverishing language by promoting a style of communication full of corporate clichés yet lacking substance and lexical value. The Tübingen researchers give the following example: “A comprehensive grasp of the intricate interplay between demand and supply is pivotal for effective strategies.” The Romanian translation reminds us of sketches by the great comedian Toma Caragiu: “The fully comprehensive and relationally superannuated demythization… The phenomenological and chromatically introspective incongruence…”
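To make the kind of comparison behind these figures concrete, here is a minimal Python sketch of a year-over-year word-frequency analysis. The sample corpora, the fivefold threshold and the function names are assumptions chosen purely for illustration; they are not the Tübingen researchers' actual data or method.

```python
# Illustrative sketch only: flag "surplus words" whose relative frequency
# grew sharply from one year's text sample to the next. The threshold and
# the sample texts are hypothetical, not the Tübingen study's actual method.
import re
from collections import Counter

def word_frequencies(text):
    """Count lowercase word occurrences in a text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def surplus_words(corpus_old, corpus_new, threshold=5.0):
    """Return words whose relative frequency grew at least `threshold`-fold."""
    freq_old = word_frequencies(corpus_old)
    freq_new = word_frequencies(corpus_new)
    total_old = sum(freq_old.values()) or 1
    total_new = sum(freq_new.values()) or 1
    flagged = {}
    for word, count in freq_new.items():
        old_rate = freq_old.get(word, 0) / total_old
        new_rate = count / total_new
        # Only flag words that already existed in the older sample.
        if old_rate > 0 and new_rate / old_rate >= threshold:
            flagged[word] = round(new_rate / old_rate, 1)
    return flagged

# Hypothetical usage: compare a 2023 text sample with a 2024 one and list
# the words whose relative usage grew the fastest.
# print(surplus_words(text_2023, text_2024))
```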
Second point
Security and confidentiality risks
An internal Google memo, recently leaked to the press, highlights serious concerns regarding the impact of generative AI on the internet and on society in general. According to this document:
- Generative AI is being used to produce false or plagiarized content on a large scale, flooding the internet with unreliable information.
- The main tactics in the malicious use of AI include manipulating human likeness and falsifying evidence.
- These techniques are being used deliberately to influence public opinion, facilitate political manipulation and generate illicit profits.
- The low technical barrier to entry for these AI systems amplifies the problem by allowing a wide range of people to produce fake content with ease.
The memo warns that the proliferation of low-quality synthetic content risks fostering general skepticism toward digital information and overburdening users, who must constantly fact-check what they find online.
Third point
Ethical and intellectual property issues
The hasty implementation of AI technologies also raises significant ethical problems, especially regarding intellectual property rights. A notable example is the case of Perplexity, a company that offers a new AI-based search engine.
Identified problems include:
- Compiling and summarizing content without citing original sources.
- Bypassing publications' paywalls and using images without authorization.
- Keeping advertising revenue without compensating the original sources of the content.
These practices are a clear breach of intellectual property rights and raise doubts about the ethics of using AI for searching and compiling information.
Fourth point
Economic impact and the speculative bubble
Contrary to optimistic expectations, the implementation of AI does not seem to be generating the anticipated economic benefits. A detailed analysis by Goldman Sachs shows that:
- Very few companies are currently obtaining profit by implementing AI.
- Many companies that have rushed to implement AI seem to perform worse than before.
- Only 5-6% of big companies in various fields have truly adopted AI technologies.
The main reasons for the slow implementation include:
- Errors and hallucinations of AI systems.
- Issues of privacy and security.
- Lack of transparency from companies developing AI models.
A report by the investment fund Sequoia Capital underlines the scale of investment required for AI infrastructure:
- Companies involved in AI would need to generate approximately 600 billion dollars in revenue each year just to cover infrastructure costs.
- Even in the most optimistic scenarios, big tech companies would generate only around 10 billion dollars in profit from AI, leaving a huge financial hole.
David Cahn, a renowned analyst at Sequoia, warns that:
- Expectations of quick profit from AI should be tempered.
- Current investments in AI are largely speculative.
- There is a significant risk that this AI bubble, already valued at hundreds of billions of dollars, will burst, potentially triggering a global economic crisis.
Fifth point
Conclusion
While AI's potential remains vast, its current implementation raises many issues. From the debasement of language to the spread of disinformation, ethical problems and economic risks, “artificial stupidity” seems to be a significant side effect of the current technological race. It is vital that the development and implementation of AI be guided by solid ethical principles and take into account their long-term impact on society and the economy. Only through a balanced and responsible approach can we hope to capitalize on the benefits of AI while minimizing its unintended negative effects.