The silence surrounding the impact of artificial intelligence on the education system
In this critical article, we discuss the impact of artificial intelligence at the university level and the difficulty of "ethical use" or "responsible use" of AI. We address the ethical implications of academic fraud and the impact on employment of the generation that is delegating the development of its intellectual competence to machines.
"RESPONSIBLE USE" OF ARTIFICIAL INTELLIGENCE
Humans are rationalizers rather than rational beings. Much of human behavior is mediated by the effect of Wittgensteinian language games. When a politician wants to raise taxes, they know they must use a language game to gain consent: they will say they are "expanding the welfare state." Voilà. The psychological impact of any objective fact depends more on the language game in which it is dressed than on the fact itself. Politicians understand this best, and their voters understand it least.

At one time, it was said that the impact of social media depended on its "responsible use." The problem with this reasoning is that technologies are not used according to our logic; they impose their own. Social media was designed around salient visual cues and forms of communication that promote primitive emotional reactions and impulsive interaction: gossip, laughter, envy, resentment, and so on. This is precisely the basis of its success. By design, its general use cannot be "responsible." The internet, for its part, has given everyone access to a universe of books and readings of great intellectual value. Yet this has not made people read more, nor led them to choose more substantial reading material. It has done exactly the opposite: reading has continued to decline, in parallel with human intelligence.
The same is true of artificial intelligence, particularly in education. Reference is made to a conditional "ethical and responsible use of AI." There is no such thing as "responsible use" in a general sense, because use is determined by the functional structure of the technology itself. Technology is not used according to our logic; it imposes its own logic of use. It is AI that propagates its own order of things, in the sense described by authors such as Jacques Ellul. Thus, it is the human being who becomes the instrument of technology, rather than the other way around. As a result, the education system has been subjugated by these technologies, whose logic is not ethical but intrinsically mechanistic and utilitarian, and no one knows what to do about it.
We grade hundreds of Master's theses every year, and we can describe the reality: few students use AI to expand their thinking; most use it to replace thinking and to avoid having to write. People tend to take the shortest route to their goals. As a result, cognitive skills such as conceptual elaboration, deep reading comprehension, organization of ideas through the writing process, synthesis and integration, and the interpretation and articulation of one's own criteria remain undeveloped. Everything the tool does is work the student does not do, which means the brain does not do it either. This will inevitably result in a loss of intelligence and other neurocognitive abilities (attention, memory, executive functions, etc.). The reverse Flynn effect will continue on its path.
Education becomes a process of obtaining a degree by the shortest possible route, not a process of developing skills. This makes assigning written work almost science fiction. As things stand, leading scientific journals are saying that what should now be evaluated are things such as the student's "empathy" or "creativity." There are quite a few problems with this view. First, mature moral decisions require precisely the capacity for deep reasoning, not mere emotionality.
Empathy, from a "pedagogical" point of view, so to speak, is a naively overused concept that is even psychologically problematic. The same is true of "creativity." Creative thinking is difficult without intelligence in the first place: intelligence is not a sufficient cause of creativity, but it is to a large extent a necessary one. You need to be particularly intelligent to see beyond what others see. What is quietly being conceded, perhaps inadvertently, is the end of assessment based on skills and knowledge in a field as the basis for a "higher qualification." It seems that in the near future, "assessment" will inevitably come to rest on a set of vague attributes far removed from the objective competence of the individual. Everything is beginning to move in the direction of Baudrillard's society of simulation. It is another matter that many prefer to pretend nothing is happening, just as they have done with the degradation of the education system for decades.
AI AND THE FUTURE OF WORK
Beyond the impact of AI use on their general intelligence and on the academic skills their degrees are supposed to certify, what students fail to understand, fascinated by the effort AI saves them, is that precisely those savings mean they are not acquiring any intellectual skill that AI cannot perform automatically in a fraction of the time. I think most students miss the irony: they apparently imagine themselves working comfortably assisted by AI, pressing a button for everything that requires effort or intellect. I have news for them: AI does not need anyone to press a button. The person has removed themselves from the equation. In other words, anyone who cannot do something that AI cannot do will be left out of the workforce. Only those who have developed sufficient skills to contribute the reasoning, context, and nuance that AI, in its brute-force processing, cannot grasp will survive in the workplace. Many intelligent and hard-working people will be left out of the job market. Those who are not particularly intelligent or hard-working, have developed no skills, and contribute nothing beyond the automatic response of AI will not even get started.

ETHICS IN THE CLASSROOM
Although slowly, some media outlets are beginning to report on how the university education system has become a haven for academic deception, and the universities themselves do not know how to stop it. Students feel entitled not to do any work and to have AI do their assignments for them. Teachers know this. Their least costly option is to look the other way and avoid all kinds of institutional, professional, and personal problems. The other option is to act as a police officer rather than a teacher and spend hours combing papers for signs of AI, about which they can do little given the legal vacuum surrounding academic fraud (which is nothing new). Meanwhile, there is evidence that healthcare professionals who cheat academically transfer that behavior to their treatment of patients.

Therefore, academic fraud is not harmless to society, and it is worrying, at least for the few of us who still care about these things. I myself have had to put my foot down with quite a few people: some with blatant incompetence that could have serious consequences for third parties, others who have simply normalized cheating as a way of life. Putting my foot down is not simply an academic matter; it is the ethical duty of every teacher. The responses of such people are often quite aggressive, and the teacher is basically alone in these situations. After all, they have spent years enjoying a system designed to let them do as they please, degrading everything in their path. This situation has not been accidental; it has been allowed, if not designed and actively legislated.
A few years ago, the main cause of failure was the submission of work containing plagiarized text. Today, that is very rare. Most of my own students now fail because they use AI: more specifically, because their work contains fabricated data that does not correspond to the scientific studies they are supposed to be evaluating (the famous hallucinations), and because they make gross errors derived from its use. If there are no errors, then even when a professor knows that a paper has been written with AI (I have students who do not even bother to remove the colored arrows the AI produced from their master's thesis), they cannot really fail it on objective grounds. The final twist is that universities are being flooded with "pseudo-legal" complaints written with AI, some of them quite amusing, in which students pose as lawyers and resort to all kinds of threats to coerce the approval of academic work done with AI. I insist: all of this has been allowed, if not actively designed. What has grown in the education system in recent decades is, above all, bureaucratization, which has dissolved the human nature of the teacher-student relationship and the value of the teaching process, leaving a contractual framework in which the student feels less like a student than like a customer who has paid for a service and is entitled to all kinds of demands.
FINAL WORDS
Ethics and intelligence are things that have ceased to have value. In our society, anything that does not bring immediate benefit to the individual matters little. Education is becoming a pure simulacrum, as Baudrillard would say. Public schools collect their public money, private schools collect their private money, teachers collect their paychecks, the government collects taxes from working people... and the wheel keeps turning. The rebels, those who still study, know that the system educates no one; it is the person who wants an education who educates themselves, more in spite of the system than thanks to it.

AI has positive uses, but that does not mean its general use is positive. What is true about the article in Nature is what its title says: university assessment needs to be rethought, simply because there is no other option. And to the rebels, and to those professors who have not yet given up, spending their nights grading papers until their eyes close and stopping those who need to be stopped, I offer my utmost respect.
