MIT Report: Over-reliance on AI Chatbots May Diminish Cognitive Abilities
Author: MIT (Massachusetts Institute of Technology) excerpt
Compiled by: Felix, PANews
With the widespread adoption of large language model (LLM) products such as OpenAI's ChatGPT, businesses and individuals around the world now use LLMs almost every day. Like any tool, LLMs have their own strengths and limitations.
Recently, the Massachusetts Institute of Technology (MIT) published a 206-page research report exploring the cognitive cost of using LLMs such as ChatGPT to write essays in educational settings, and what that use does to the brain. The study suggests that excessive reliance on AI chatbots like OpenAI's ChatGPT may diminish cognitive abilities.
The research team divided participants into three groups: an LLM group, a search engine group, and a brain-only group. Over a period of four months, participants wrote essays within a time limit using their assigned tool (the brain-only group used no tools), with a different topic for each session. Each participant completed three rounds under the same group assignment. In a fourth round, participants from the LLM group were asked to write without any tools (the "LLM-to-brain" group), while participants from the brain-only group used an LLM (the "brain-to-LLM" group). In total, 54 participants took part in the first three rounds, 18 of whom completed the fourth.
The team used electroencephalography (EEG) to record participants' brain activity, assessing their cognitive engagement and cognitive load and capturing neural activation during the writing task. After each session, the team ran natural language processing (NLP) analyses and interviewed each participant. Essays were scored by human teachers and by an AI judge (a purpose-built AI agent).
In the NLP analysis, brain-only participants showed significant variability in writing style across essays on most topics. By contrast, essays from the LLM group were statistically homogeneous within each topic, with noticeably less variation than the other groups. The search engine group's output may also have been shaped, at least in part, by search-engine ranking and content optimization.
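The article does not say which statistic the researchers used to quantify "homogeneous" versus "varied" writing. A minimal sketch of one common approach, assuming bag-of-words vectors and mean pairwise cosine distance within a group (the essay snippets below are invented for illustration):

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def bag_of_words(text):
    """Lowercased word counts as a crude fingerprint of an essay."""
    return Counter(text.lower().split())

def cosine_distance(a, b):
    """1 - cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return 1 - dot / norm if norm else 1.0

def mean_pairwise_distance(essays):
    """Average pairwise distance within a group: higher = more varied writing."""
    vecs = [bag_of_words(e) for e in essays]
    pairs = list(combinations(vecs, 2))
    return sum(cosine_distance(a, b) for a, b in pairs) / len(pairs)

# Invented toy essays: the brain-only snippets differ, the LLM-style ones
# share boilerplate phrasing, mimicking the homogeneity the study reports.
brain_only = ["the city thrives on chance encounters and noise",
              "happiness grows from struggle not comfort",
              "art forgives what memory cannot"]
llm_group = ["in conclusion happiness is important for society",
             "in conclusion art is important for society",
             "in conclusion the city is important for society"]

print(mean_pairwise_distance(brain_only))  # higher: varied styles
print(mean_pairwise_distance(llm_group))   # lower: homogeneous output
```

On this toy data the LLM-style group scores a much smaller mean distance, which is the pattern the study describes at scale.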
The LLM group used the most specific named entities (people, places, years, definitions, as identified by named-entity recognition, NER). The search engine group used at least 50% fewer named entities than the LLM group, and the brain-only group used about 60% fewer.
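Studies like this typically extract named entities with a trained NER model (e.g., spaCy); the article does not specify the tooling. As a stdlib-only illustration of the counting step, here is a deliberately crude proxy that tallies mid-sentence capitalized words and four-digit years; the two essay snippets are invented:

```python
import re

def count_entity_mentions(text):
    """Crude proxy for named-entity mentions: capitalized words that do not
    start a sentence, plus four-digit years. Real analyses use trained NER
    models; this only illustrates how per-essay counts would be compared."""
    tokens = re.findall(r"\S+", text)
    count = 0
    sentence_start = True
    for tok in tokens:
        word = tok.strip(".,;:!?\"'()")
        if re.fullmatch(r"(1[5-9]|20)\d{2}", word):      # years like 1943, 2024
            count += 1
        elif word[:1].isupper() and not sentence_start:  # mid-sentence capitals
            count += 1
        sentence_start = tok.endswith((".", "!", "?"))
    return count

# Invented examples mirroring the reported pattern: LLM-assisted text is
# dense with specific names and dates, brain-only text is more personal.
llm_essay = ("According to Aristotle, happiness was debated in Athens. "
             "In 1943 Maslow published his hierarchy in Psychological Review.")
brain_essay = "Happiness, to me, is a quiet morning and work that matters."

print(count_entity_mentions(llm_essay))
print(count_entity_mentions(brain_essay))
```

Comparing such counts across groups is what yields the "50% fewer / 60% fewer" figures the report cites.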
Under time pressure (20 minutes per essay), participants in the LLM and search engine groups tended to focus on their tool's output. Most concentrated on reusing that output, busying themselves with copying and pasting rather than contributing original ideas or editing the content from their own perspective and experience.
To characterize neural connectivity patterns, the researchers measured participants' cognitive load using the dynamic directed transfer function (dDTF). dDTF can reveal systematic, frequency-specific changes in network coherence that are relevant to executive function, semantic processing, and attentional modulation.
The EEG analysis shows significant differences in neural connectivity among the LLM, search engine, and brain-only groups, reflecting distinct cognitive strategies. Brain connectivity systematically decreased as external support increased: the brain-only group exhibited the strongest and most extensive networks, the search engine group showed intermediate engagement, and the LLM-assisted group had the weakest overall coupling.
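The study's dDTF measure is a directed, frequency-domain connectivity estimate and is well beyond a short sketch. As a toy stand-in for the idea of "stronger versus weaker coupling," the snippet below computes mean absolute pairwise correlation across synthetic "channels": one set driven by a shared signal (tightly coupled), one mostly independent noise (weakly coupled). All signals are simulated; this is not the paper's method.

```python
import random
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_coupling(channels):
    """Mean absolute pairwise correlation across channels: a crude,
    undirected stand-in for a connectivity measure like dDTF."""
    scores = [abs(pearson(channels[i], channels[j]))
              for i in range(len(channels)) for j in range(i + 1, len(channels))]
    return sum(scores) / len(scores)

rng = random.Random(0)
shared = [rng.gauss(0, 1) for _ in range(200)]  # common driving signal

# "Brain-only"-like: channels dominated by the shared signal (tight coupling)
brain_channels = [[s + 0.3 * rng.gauss(0, 1) for s in shared] for _ in range(4)]
# "LLM-assisted"-like: channels that are mostly independent noise (weak coupling)
llm_channels = [[0.3 * s + rng.gauss(0, 1) for s in shared] for _ in range(4)]

print(round(mean_coupling(brain_channels), 2))
print(round(mean_coupling(llm_channels), 2))
```

The gap between the two scores is the kind of group-level contrast the EEG analysis reports, though dDTF additionally resolves direction and frequency band.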
In the fourth round, participants switching from LLM to brain-only showed weaker neural connectivity and lower engagement of alpha- and beta-band networks, while participants switching from brain-only to LLM showed better memory recall and reactivated widespread occipito-parietal and frontal nodes.
In interviews, the LLM group reported a weaker sense of ownership of their essays. The search engine group felt strong ownership, though less than the brain-only group. The LLM group also struggled to quote from their own writing: over 83% of ChatGPT users could not quote from essays they had written only minutes earlier.
This study, which has not yet been peer-reviewed, shows that over the four-month research period the LLM group underperformed the brain-only control group at the neural, linguistic, and scoring levels. With the educational impact of LLMs on the general public only beginning to emerge, LLM use may actually hinder the development of learning skills, especially for younger users.
The researchers note that "longitudinal studies" are needed to understand the long-term effects of AI chatbots on the human brain before LLMs can be judged clearly beneficial to humanity.
When asked about ChatGPT’s view on this research, it responded: “This research does not say that ChatGPT is inherently harmful—instead, it warns people not to rely on it excessively without thought or effort.”
Related reading: "a16z: 11 key application areas for the integration of crypto and AI, from AI agents and DePIN to micropayments."