Researchers at Microsoft and Carnegie Mellon University recently published a study on how using generative AI in the workplace affects critical thinking skills.
“Used improperly, technologies can lead to the degradation of cognitive faculties that ought to be preserved,” the paper states.
When people rely on generative AI at work, the researchers write, their effort shifts toward verifying that the AI’s responses are good enough to use, rather than toward higher-order critical thinking skills such as creating, evaluating, and analyzing information. And if workers only step in when the AI’s responses are inadequate, they are deprived of routine opportunities to practice their judgment and strengthen their cognitive muscles, leaving them atrophied and unprepared when exceptions do arise, the paper says.
In other words, if we rely too heavily on AI to think for us, we become worse at solving problems ourselves when the AI fails.
The study surveyed 319 people who reported using generative AI at work at least once a week. Respondents were asked to share three examples of how they use generative AI at work, which fell into three categories: creation (such as writing an email to a colleague); information (such as researching a topic or summarizing a long article); and advice (such as asking for guidance or creating a chart from existing data). They were then asked whether they practiced critical thinking skills when performing each task, and whether using generative AI led them to put more or less effort into thinking critically. For each task, respondents were also asked how confident they were in themselves, in the generative AI, and in their ability to evaluate the AI’s output.
Approximately 36% of participants reported using critical thinking skills to mitigate potential negative consequences of AI use. One participant said she used ChatGPT to write a performance review but double-checked the AI’s output for fear that she might inadvertently submit something that would get her in trouble. Another respondent reported having to edit AI-generated emails before sending them to his boss, whose workplace culture places more emphasis on hierarchy and age. In many cases, participants verified AI-generated responses with more general web searches of sources such as YouTube and Wikipedia, potentially defeating the purpose of using AI in the first place.
For workers to compensate for the shortcomings of generative AI, they need to understand where those shortcomings arise. However, not all participants were familiar with AI’s limitations.
“The potential downstream harm of GenAI responses can motivate critical thinking, but only if the user is consciously aware of such harm,” the paper reads.
In fact, the study found that participants who trusted the AI put less effort into critical thinking than those who had confidence in their own abilities.
While the researchers stop short of saying that generative AI tools make you dumber, the study does indicate that over-reliance on them can undermine your ability to solve problems independently.