AI is here – If we fight it, we’ll lose and so will our students!
Artificial intelligence (AI) is the simulation of human intelligence by machines. AI research deals with the question of how to create computers that are capable of intelligent behaviour.
In practical terms, AI applications can be deployed in a number of ways, including:
- Machine learning: This is a method of teaching computers to learn from data, without being explicitly programmed.
- Natural language processing: This involves teaching computers to understand human language and respond in a way that is natural for humans.
- Robotics: This involves the use of robots to carry out tasks that would otherwise be difficult or impossible for humans to do.
- Predictive analytics: This is a method of using artificial intelligence to make predictions about future events, trends, and behaviours.
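To make "learning from data, without being explicitly programmed" concrete, here is a minimal sketch of the idea: a tiny program that fits a straight line to example data and then makes a prediction. The numbers (hours studied vs. exam score) are invented purely for illustration, and real machine learning systems are vastly more sophisticated, but the principle is the same: the behaviour comes from the data, not from hand-written rules.

```python
# A minimal illustration of "learning from data": fit a straight line to
# example points with ordinary least squares, then predict a new value.
# No ML library needed; the data is made up purely for illustration.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": hours studied vs. exam score (invented numbers).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

slope, intercept = fit_line(hours, scores)
prediction = slope * 6 + intercept  # predicted score for 6 hours of study
```

Nobody told the program that more study tends to mean higher scores; it inferred that pattern from the examples.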
We are already using artificial intelligence frequently in our daily lives, often without even realising it. Some common examples include:
- Voice assistants such as Amazon Alexa and Apple Siri
- Google search and other search engines
- Spam filters
- Fraud detection
- Recommendation engines (e.g. Netflix)
- Self-driving cars
It’s used in education for lots of good stuff too, but AI has a more contentious side.
Increased Use of AI
Recent studies have highlighted that students are increasingly using artificial intelligence to cheat in assignments. They find that students are using AI to generate answers to questions, write essays and even create presentations, and more and more are turning to it to get ahead.
There are a number of reasons why students are turning to AI to cheat. Firstly, it is relatively easy to access, with many AI tools available online, such as OpenAI's, which we'll discuss later.
Secondly, AI can generate answers and essays that are very close to what a human would write, making them harder for teachers to detect. The content generated is 'original' in the sense that it has not been created in that way before.
Thirdly, the use of AI is not limited to the complete authoring of work; students are also using it to improve their grades. For instance, AI can help students to proofread their essays and identify errors that they may have missed. Additionally, AI can be used to create presentations and lecture content from scratch.
Lastly, and most importantly in my view, students turn to AI because they are increasingly desperate. They frequently have to commute long distances due to housing concerns; they often have to work north of 20 hours per week on top of their studies to make ends meet; and they face a heavy assessment burden in college. They're often over-assessed, with deadlines bunched across modules, and often in rigid assessment formats that are not flexible enough to cater for the needs of diverse students.
The use of AI in education is a controversial issue. Some argue that it is a valuable tool that can help students to improve their grades. However, others argue that the use of AI is out and out cheating and that it gives an unfair advantage to those who use it. Regardless of the debate, it is clear that the use of AI in education is on the rise. This is likely to continue as AI technology becomes more sophisticated and more accessible.
A Threat to Academic Integrity?
Clearly, we want students to be using ‘their own’ work, their own creativity, their own ingenuity, their own understanding, and academic integrity is an important value of the sector.
But often the discourse is heavily skewed towards pointing the finger at students, and using surveillance (and ironically AI) technologies to police their work, rather than examining the role of educators in creating the conditions for academic integrity to flourish. I worry that in the rush to ‘clamp down’, we will return to more rigid assessment modes like in-person exams, which the evidence shows strongly disadvantage disabled students and other cohorts.
What kind of message does that send? How can we ask for integrity from our students when we are actively using modes that we know are disadvantaging them? Both sides must play their part to create a trusting environment where dishonesty is minimised and fairness and equity are demonstrated.
Creating an Environment of Trust
There are several ways higher education institutions can promote good conditions for academic integrity. One way is by having a clear and concise policy on academic integrity, which should be easily accessible to students, staff and faculty. This policy should outline why integrity is important, what is considered to be academic dishonesty and the consequences for engaging in such behaviour. It should also highlight what institutions are doing to support students in difficulty and offer flexible solutions.
Another way to promote academic integrity is through education and training. Institutions can offer workshops and seminars on the topic, and make academic integrity a part of new student orientation programs. Faculty can also be encouraged to incorporate academic integrity into their courses.
Perhaps the most important way to promote academic integrity is to highlight the supports available to students who are struggling with their work, introduce more flexible assessment methods, and work across programmes to reduce and space out the assessment load. We should also, in my view, be working to create an environment which places far less weight on competitive grading, and far more emphasis on rewarding growing and learning, which disincentivises cheating.
Ultimately, the promotion of academic integrity is the responsibility of everyone in the higher education community – staff and students alike. By working together, we can create an environment in which academic integrity is valued and upheld.
It's my firm belief that the only way we can address the issue of integrity and AI is through developing a culture of trust, which places equal learning and development opportunities at the heart of decision-making. Technology will always be ahead of us, and the more we try to clamp down and police the issue, the more we will have to turn to disadvantaging modes of assessment and unethical, bias-ridden, surveillance technologies. This is a recipe for further mistrust between our staff and students, and will only breed greater levels of dishonesty in response. If we focus our energies on trying to fight the rise of AI in education in this way, we are destined to lose, and our students will suffer.
But Wait – Should We Be Fighting AI At All?
But there’s another reason we shouldn’t fight it. In fact, there are many arguments for why higher education institutions should teach their students how to use AI tools to create better work and to build assessments that develop their skills at critically analysing and applying AI outputs.
Firstly, as the world becomes more and more digitised, those who are able to use artificial intelligence tools in any capacity will have an obvious advantage in the job market.
Secondly, artificial intelligence can help professionals in a variety of fields do their jobs more efficiently and effectively. For example, doctors can use artificial intelligence to diagnose diseases more accurately, lawyers can use it to sort through mountains of evidence more quickly, and architects can use it to design buildings that are more energy-efficient. Imagine what creativity we could unleash if more fields knew how to leverage these kinds of technology.
Thirdly, artificial intelligence can help people to automate repetitive tasks. It's already doing that for many of us. For example, if we have to enter the same information into a database over and over again as part of our administration work, we can use an automation tool to do this task for us. This will not only save time, but it will also reduce the chances of errors being made. Wouldn't you like to have the AI literacy to put that to use?
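As a concrete sketch of the kind of automation meant here: rather than retyping each record into a database by hand, a short script can load them all at once. Strictly speaking this is plain scripting rather than AI (though AI assistants can now write scripts like this for you), and the table, column names and sample data below are invented for illustration.

```python
import csv
import io
import sqlite3

# Automating repetitive data entry: load every record from a CSV export
# into a database in one step, instead of typing each one in by hand.
# The data and column names here are hypothetical.
sample_csv = io.StringIO(
    "name,email\n"
    "Aoife,aoife@example.com\n"
    "Brian,brian@example.com\n"
)

conn = sqlite3.connect(":memory:")  # throwaway in-memory database for the demo
conn.execute("CREATE TABLE contacts (name TEXT, email TEXT)")

rows = [(r["name"], r["email"]) for r in csv.DictReader(sample_csv)]
conn.executemany("INSERT INTO contacts VALUES (?, ?)", rows)  # one call, no retyping
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
```

A few lines like these replace minutes of error-prone manual entry every time the task recurs.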
Overall, there are many reasons why higher education institutions should teach their students how to use artificial intelligence tools to create better work. Artificial intelligence is a growing field with immense potential, and those who are able to use these tools will be well-positioned for success in the future.
The Big Reveal
Over time, our education system has outlawed the use of many tools we now see as part of our everyday toolkit for being more effective and efficient in our work. Spelling and grammar checkers, and calculators, for example, are tools that were once deemed to somehow impinge on academic standards.
When new technologies emerge, our higher education systems tend to mistrust their use before they become normalised. We evolve to accept and rely on them, and when we do, they help us to be wildly more efficient, accurate and effective.
Perhaps we should begin to think now about how we can get ahead of this with AI, and think of the endless creativity and innovation possible if more of our students can master these tools in relation to their fields and their lives. In a world where knowledge is as ubiquitous, easily accessible, and easily adaptable as it is now, perhaps we can think of students' work with AI as a collaboration with that collective body of knowledge.
In this world, the line between plagiarism and intelligently and efficiently building on the existing ideas and knowledge of the community becomes blurry.
Take this article for example. To illustrate the point, I've written it in collaboration with AI. About 60-70% of it was written by an OpenAI algorithm. The text produced is completely unique, built by processing the huge body of knowledge the internet holds, using models of human intelligence to interpret the commands I gave it. I even used the AI art generator DALL-E to create the original imagery for the piece based on commands. If ye don't believe me, watch the video below – I filmed myself doing it.
I first thought through the ideas and structure of the article, then gave the OpenAI Playground tool commands to write various sections for me, and then joined the dots, edited, and added personal reflection. Is this work mine? Probably. Maybe a collaboration with the world? I'm not sure.
But I think the power of what these tools can do should be harnessed, not rejected. Hugely exciting developments in innovation, equity and inclusion could emerge.