After the advent of ChatGPT, AI’s role is increasing exponentially in the education sector, with some positives but many concerns. Surveys have shown that many students are using AI to do their assignments. On the other hand, Universities are also being more cautious by adopting AI detection tools to catch this “cheating”.
How Does AI Plagiarism Affect A Student?
Here are some ways AI is being used by students nowadays:
- Write Assignments: AI can help students cheat by providing easy access to answers or rephrasing existing text to trick and bypass plagiarism checkers.
- Plagiarism: AI-powered tools may produce plagiarized material by providing easy access to pre-existing content, which will undermine students’ academic integrity and originality.
- Inaccurate Information: AI systems may propagate inaccurate or misleading educational content. This will lead to diminished trust in educational resources and blind belief in non-existent resources.
- Biased AI algorithms: AI systems used in education often inherit biases from their training data, causing inequalities and unfair treatment based on biased data or human prejudices.
- Over-reliance on technology: Excessive reliance on AI could lead to a decline in critical thinking skills and creativity, as students become reliant on ready-made solutions without understanding the underlying concepts.
- Diminished Critical Thinking: AI tools may discourage independent thinking and problem-solving skills, as students rely on recommendations provided by the AI rather than engaging in in-depth critical analysis.
- Equity issues: Unequal access to AI technology could worsen existing disparities in education, the people who can afford to pay for services like gpt4 have greater resources, widening the achievement and scope gap for students of different backgrounds.
- Privacy concerns: AI tools collect and analyze vast amounts of student data, and can build data profiles of each individual. This raises concerns about privacy breaches and the potential for misuse or exploitation, like AI worms.
- Decreased Emotions & Social Skills: Over-reliance on AI-driven tools may lead to reduced emotional engagement, as students interact less with teachers and peers.
What do Universities think of AI in Education?
A survey conducted by BestColleges in 2023 showed that 56% of college students admit completing assignments using AI tools.
Tilman Wolf, the Senior Vice Provost of Academic Affairs at UMass Amherst explained:
“We have seen that our library has received more requests for interlibrary loans, for books or journals that don’t even exist because somebody looks at a reference that was generated by a generative AI model that has, you know, journals and books in there that don’t exist.”
He further added:
“Generative AI models are not going to go away. And I think the important thing is that we create awareness on our campus, what they can and cannot do, and that we think about how we can be transparent about where we use them and where we don’t use them, and that we train our students so that they are prepared for the workforce where they can use these tools in an appropriate manner.”
Tilman Wolf
In October 2023, Forbes Advisor surveyed 500 teachers from around the U.S. about their experiences with AI in the classroom. Cheating using AI tops the list of teachers’ concerns about AI in education Teachers fear that the use of AI means that they will receive lesser human-generated content.
The following poll shows that 65% of faculty fear about plagiarism in essays and assignments:
According to another 2023 report by Europol Innovation Labs, by 2026 nearly 90% of all online content will be generated by AI systems. This is very concerning as it will be a massive problem if it turns out to be true.
Another issue that is being faced is the plagiarized content generated by an AI. There are several doubts regarding the originality of the content. Educational institutes across the world are questioning the authenticity of this content generated by AI.
All these concerns are constantly being raised and ways are being looked at to overcome them.
Institutions are trying to stop the usage of AI in education-based work like assignments, experiments, and essays by using different AI checkers. However, students are finding it easy to bypass these checkers by manipulating and rephrasing certain parts of the content generated by an AI.
In a study conducted by researchers from British University Vietnam and James Cook University Singapore, Generative AI text detection tools show significant weaknesses when presented with manipulated content.
The research assessed the performance of six prominent AI text detectors using 805 text samples. It revealed that the initial accuracy of these detectors, averaging 39.5% for non-manipulated content, significantly dropped to 17.4% when presented with manipulated content by including deliberately introduced spelling and grammar errors.
Let’s look at a few examples! We asked ChatGPT-3.5 to give us a short 120-word essay about the importance of fitness:
Here is the AI content checker report for the essay:
We then rephrased this essay and fed it to ZeroGPT to check the percentage of AI content:
Manipulating the text slightly by using synonyms and different sentence structures was enough to bypass the AI checker.
We ask ChatGPT to give us another short essay about the importance of project managers in software engineering:
First, check how much AI generated content it has:
We then rephrased some content just like the previous example and this again helped in fooling the AI checker tool.
Still, there are some ways teachers could adopt to help tackle this problem of AI in education. They should first use AI platforms to understand their capabilities. Test AI tools together and discuss their limitations. Also, do regular assessments to get snapshots of progress over time. This will ensure tracking students’ writing over a lengthy duration.
Another smart tip will be to include a “trojan horse” word or phrase in your assignment that won’t be visible to the student but you can use this keyword later on to see if the student pasted the prompt into an AI tool.
This will help catch cases of cheating. Above is a reel that shows how to use the Trojan Horse trick to catch AI-generated content.
Conclusion
While AI presents promising opportunities for learning, its misuse poses significant challenges such as plagiarized content, and decreased social skills. As universities aim to tackle this sudden AI boom, proactive measures and collaborative efforts are essential to safeguard academic integrity and student learning experiences.