Some argue that AI is just another tech bubble, but here I explain why things are definitely different this time round. I consider how students have become enthusiastic early adopters of AI tools to help them with their assignments and argue that the whole system of assessment in universities needs to be reinvented.
AI: We’ve been here before, but this time it’s different
My personal history with AI goes back to the early 1990s. For my Masters dissertation project I designed an interface for an expert system to support the demolition industry. At the time, expert systems – databases of specialist knowledge programmed to replicate the diagnostic skills of subject experts – were seen as the future of computing. But, as has often been the case with AI since its emergence in the 1950s, results didn’t quite match the hype and attention moved on to the next big thing – which turned out to be the Internet.
I have worked in and around technology for the whole of my career, but I have been as surprised as anyone by the recent explosion in AI. However, I am now sure, looking beyond the hype, that this time really is different. Why do I believe this?
1. AI has largely replaced programmed logic with probabilistic statistical inference, removing the need for bespoke programming for every subject area.
2. Algorithms have been developed that are supremely capable of predicting the most likely next character in a string or the next word in a sentence, and so of generating an answer to almost any question.
3. Where previous efforts to encode the underlying logic of language have foundered with the realisation that context is often key to meaning, the new generation of AI tools are able to expand outwards from words and sentences in order to take into account that wider context.
4. Computing hardware is available, in the form of graphics processing units – originally designed to provide the parallel processing capabilities required for modern graphical computer games – that enable the parallel processing requirements of AI algorithms to run extremely quickly.
5. The Internet now provides a virtually limitless set of training data.
6. The world’s biggest tech companies now possess the vast wealth to enable them to develop the huge computing infrastructure needed to run processor-hungry AI algorithms on an industrial scale.
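The next-word prediction described in point 2 can be illustrated with a toy bigram model. This is a minimal sketch only, using a tiny invented corpus: real models use neural networks trained on web-scale text rather than simple counts, but the principle of estimating the most probable next word is the same:

```python
from collections import Counter, defaultdict

# Tiny invented corpus for illustration; real systems train on web-scale text.
corpus = "the cat sat on the mat and the cat ran".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

# "cat" follows "the" in 2 of the 3 occurrences of "the" in the corpus.
print(predict_next("the"))
```

Modern large language models replace these raw counts with learned probabilities conditioned on a long preceding context, which is what allows them to take account of meaning beyond the immediately previous word (point 3).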
The results obtained from the new AI tools are so extraordinary that we can no longer think of computers in terms of logical calculators following our instructions – indeed, with the new algorithms it has become effectively impossible for us to work out how results have been obtained. With the ability to create answers that are wholly new and unexpected, it is not surprising that even their creators often refer to ‘magical’ powers. It is this generative aspect of the new wave of AI tools that has largely created the current excitement. Accompanying this, the idea of computers possessing ‘general intelligence’ is now being discussed more seriously than ever before.
Believing that this technology is truly world-changing, and desperate not to be left behind, since the initial release of ChatGPT in November 2022 the big tech companies (Google, Meta, Microsoft) have been releasing models at breakneck speed. In the history of humanity, we have never witnessed new technology delivered and disseminated so rapidly.
Universities are at the forefront of AI – but not in the way we might have expected
Like many revolutions, the initial impact of the new wave of AI has perhaps been most keenly felt in universities. However, with much of the theoretical and technological development of AI taking place in commercial settings, rather than on campuses, the key early adopters of this new technology have not been academics, but rather students, primarily those using AI to help them with their assessments.
While students have surely been trying to take short cuts in their assignments since the days of the earliest universities, the chances of getting caught likely made cheating not worth the risk for most students. Although the rise of the Internet offered more potential for wrongdoing, software such as Turnitin provided detection tools that proved to be fairly effective. Since the launch of generative AI tools, however, students now appear to hold the advantage.
AI and assessment – why universities cannot win
Following the introduction of generative AI, teachers and institutions briefly maintained the line that cheats would be caught. However, it has now become clear to most that detecting the use of AI tools in assignments is virtually impossible. My own experiences over the past couple of years bear this out.
While the incompetent use of AI is fairly easy to spot, I wondered how many, if any, of the submissions that pass also rely on AI. Although I know my students well enough to predict who will do well, there are always many students who are unknown quantities: they are either silent in class or rarely seen. In these cases, a good submission is not necessarily a surprise, but neither can I be confident that it is the student’s own work.
An experiment in the use of AI for assessments
So, I embarked upon a little experiment. First, I bought a subscription to one of the popular AI tools. I then submitted the scenario and the questions from the assignment that my students had just completed. The essay I got back was good: it answered the questions reasonably well and was fluently written. However, there were too many academic references, often from journal articles that would never appear on any undergraduate reading list, with the context of the references often not matching the actual content of the article and, most tellingly, some of the references entirely invented. Next, I tried to refine the output by instructing the tool to use only the references that I had specified. Not bad. Then, I told it to include some direct quotes from the sources.
I ended up with an answer that was excellent. The only problem was that it was really too good for a first-year undergraduate student with English as a second language. So, I tried to get the AI to dumb it down a bit. This took a couple of iterations, including asking it to add a few formatting and spelling errors. Finally, I ended up with an assignment that was good but not exceptional, one that would warrant a mark of 60-65%: exactly the type of average submission that would not arouse immediate suspicion.
It was all very easy. Much easier than actually going through the effort of reading sources, constructing answers and adding references. Of course, I know what a good assignment looks like, and, at least for their first attempt, students do not. But they learn quickly.
Efforts to deal with AI in assessments
Where we have students who never turn up to class or cannot answer questions orally on a subject, we cannot automatically assume that they are not capable of producing excellent assignments. Even in small cohorts where we know students individually, it is hard to definitively say that a student is incapable of producing a particular piece of work, especially given the potential consequences of failing a student.
In light of the capabilities of the current crop of AI tools, the various strategies that institutions have adopted seem to me to be somewhat naïve. At one extreme are total bans on AI. To work effectively, these rely on an ability to identify AI use unambiguously, which does not exist. They also mean that students are insulated from using the AI tools that they will undoubtedly require in the workplace.
At the other end of the spectrum are the ‘realists’ who welcome students using AI. The implication of this, though, is that in order to maintain the purpose of assessment – as a test of the acquisition of content and the understanding of concepts – we will need to completely rethink current methods. Some argue for a return to in-person examinations, others for more oral assessment, but these have their own issues.
Finally, there is the ‘pragmatic’ solution typified by ‘traffic light’ systems, where green means ‘anything goes’, red means ‘no AI’ and orange indicates that AI can be used for some parts of an assignment but not others. As with the first option, this assumes that misdemeanours can be identified, but also that both students and teachers can effectively distinguish between different uses of AI, such as helping to come up with ideas, providing an initial draft, and writing the whole thing. The latter seems to be quite problematic in practice. If a student is going to use AI for one task, are they really going to be able to resist asking it to do just one more thing, if it is the choice between a couple of prompts or staying up all night to finish the assignment?
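The ‘traffic light’ scheme described above can be sketched as a simple policy check. This is purely illustrative: the category names and the task lists for the orange case are my own assumptions, not any real institution’s rules, and the sketch deliberately exposes the scheme’s weakness, namely that someone still has to classify each use of AI correctly:

```python
from enum import Enum

class AIPolicy(Enum):
    """The three 'traffic light' levels described in the text."""
    GREEN = "any AI use permitted"
    ORANGE = "AI permitted for some tasks only"
    RED = "no AI use permitted"

# Hypothetical task lists for an ORANGE assignment (assumed, for illustration).
ORANGE_ALLOWED = {"brainstorming", "grammar_check"}

def is_permitted(policy: AIPolicy, task: str) -> bool:
    """Decide whether a given AI use is allowed under the stated policy."""
    if policy is AIPolicy.GREEN:
        return True
    if policy is AIPolicy.RED:
        return False
    # ORANGE: only explicitly listed tasks are allowed; everything else,
    # including drafting or writing the whole assignment, is not.
    return task in ORANGE_ALLOWED

print(is_permitted(AIPolicy.ORANGE, "brainstorming"))    # True
print(is_permitted(AIPolicy.ORANGE, "full_generation"))  # False
```

The hard part, of course, is not the check itself but verifying which `task` a student actually performed, which is exactly the detection problem the scheme was meant to avoid.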
Of course, we cannot separate assessment from other aspects of education. It is clear that the use of AI is itself a skill that graduating students will need in the workplace, and if they graduate without the ability to use it they will be at a disadvantage. However, if we allow students to graduate without understanding key concepts around their subject, we are effectively delegating the future to AI. Do we really want to be treated by doctors who are able to interrogate AI tools to identify what is wrong with us and prescribe medicines but who have no understanding of the human body? It is perhaps foolish to expect the tech companies to help us here. The revenue they get from students, whilst perhaps small in the scheme of their ultimate ambitions, is not insignificant. More importantly, however, they need the workforce of the future to be dependent on their tools and, following the accepted precedent of providing free or reduced cost educational licences, getting future workers hooked before they even leave college is a guaranteed way of securing future revenue.
A familiar claim from proponents of AI is that it will ultimately bring prosperity to all. However, we are already seeing that it can reinforce existing inequalities. As I found myself, free AI tools can help students with their assignments, but paying for a subscription gives you the edge. For many students in wealthier nations the current cost of such a subscription will be affordable, but for many, particularly in less wealthy countries, it will not be. We already know that economic inequalities feed through to education and beyond, and this may simply be another way of helping the rich to become richer at the expense of the rest.
Selected reading
ABDELWAHAB, H. R., RAUF, A. & CHEN, D. 2023. Business students’ perceptions of Dutch higher educational institutions in preparing them for artificial intelligence work environments. Industry and Higher Education, 37, 22-34.
AHMED, F. 2024. The digital divide and AI in education: Addressing equity and accessibility. AI EDIFY Journal, 1, 12-23.
ALWAQDANI, M. 2025. Investigating teachers’ perceptions of artificial intelligence tools in education: potential and difficulties. Education and Information Technologies, 30, 2737-2755.
BEARMAN, M. & AJJAWI, R. 2023. Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54, 1160-1173.
BOŽIĆ, V. 2023. Artificial intelligence as the reason and the solution of digital divide. Language Education and Technology, 3.
BRATIANU, C., HADAD, S. & BEJINARU, R. 2020. Paradigm shift in business education: a competence-based approach. Sustainability, 12, 1348.
CHARDONNENS, S. 2025. Adapting educational practices for Generation Z: integrating metacognitive strategies and artificial intelligence. Frontiers in Education, 1504726.
CONTRERAS, J. 2025. Transforming Business Education With AI. AACSB [Online]. [Accessed 30/12/2025].
DE FINE LICHT, K. 2024. Generative artificial intelligence in higher education: Why the ‘banning approach’ to student use is sometimes morally justified. Philosophy & Technology, 37, 113.
GILLANI, N., EYNON, R., CHIABAUT, C. & FINKEL, K. 2023. Unpacking the “Black Box” of AI in education. Educational Technology & Society, 26, 99-111.
HOLMES, W. & TUOMI, I. 2022. State of the art and practice in AI in education. European Journal of Education, 57, 542-570.
JOSE, B., CHERIAN, J., VERGHIS, A. M., VARGHISE, S. M., S, M. & JOSEPH, S. 2025. The cognitive paradox of AI in education: between enhancement and erosion. Frontiers in Psychology, 16, 1550621.
LEATON GRAY, S. 2020. Artificial intelligence in schools: Towards a democratic future. London Review of Education, 18, 163-177.
LECUN, Y., BENGIO, Y. & HINTON, G. 2015. Deep learning. Nature, 521, 436-444.
LUCKIN, R. 2025. Nurturing human intelligence in the age of AI: rethinking education for the future. Development and Learning in Organizations: An International Journal, 39, 1-4.
MARKAUSKAITE, L., MARRONE, R., POQUET, O., KNIGHT, S., MARTINEZ-MALDONADO, R., HOWARD, S., TONDEUR, J., DE LAAT, M., SHUM, S. B. & GAŠEVIĆ, D. 2022. Rethinking the entwinement between artificial intelligence and human learning: What capabilities do learners need for a world with AI? Computers and Education: Artificial Intelligence, 3, 100056.
MCCORD, B. 2026. The rise of the answer machines. Financial Times, 24/1/2026.
MILMO, D. 2025. ChatGPT launches study mode to encourage ‘responsible’ academic use. The Guardian, 29/7/2025.
MITCHELL, M. 2019. Artificial intelligence: A guide for thinking humans, Penguin UK.
PRATSCHKE, B. M. 2024. Generative AI and education: Digital pedagogies, teaching innovation and learning design, Springer.
RITTER, B. A., SMALL, E. E., MORTIMER, J. W. & DOLL, J. L. 2018. Designing management curriculum for workplace readiness: Developing students’ soft skills. Journal of Management Education, 42, 80-103.
SOLLOSY, M. & MCINERNEY, M. 2022. Artificial intelligence and business education: What should be taught. The International Journal of Management Education, 20, 100720.
STATON, B. 2026. The recruitment company training AI to do your job. 24/1/2026.
STINE, J., TRUMBORE, A., WOLL, T. & SAMBUCETTI, H. 2019. Implications of artificial intelligence on business schools and lifelong learning. Final Report at Academic Leadership Group, 19-2.
TEGMARK, M. 2018. Life 3.0: Being human in the age of artificial intelligence, Vintage.
VASWANI, A., SHAZEER, N., PARMAR, N., USZKOREIT, J., JONES, L., GOMEZ, A. N., KAISER, Ł. & POLOSUKHIN, I. 2017. Attention is all you need. Advances in neural information processing systems, 30.
XU, J. J. & BABAIAN, T. 2021. Artificial intelligence in business curriculum: The pedagogy and learning outcomes. The International Journal of Management Education, 19, 100550.
