AI – The shape of things

Image credit: prompt by JPxG; model by Boris Dayma; upscaler by Xintao Wang, Liangbin Xie et al. (Apache License 2.0 or BSD), via Wikimedia Commons

Artificial Intelligence systems are becoming ubiquitous and disruptive, nowhere more so than in the education sector. Here, Pete Chalk looks back at a whirlwind nine months since the release of ChatGPT

On 30 November 2022, the world woke up to the release of the ChatGPT website by OpenAI (49 per cent owned by Microsoft), promising free, open access to all and the ability to generate text or computer code in response to user prompts.

Almost immediately, in December, it was banned by Stack Overflow, the popular question-and-answer site for programmers, because “while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce” (Vice 5.12.22).

Other bans swiftly followed, particularly in the education sector. In January, “the New York City Department of Education has blocked access to ChatGPT on its networks and devices over fears the AI tool will harm students’ education” (The Verge 5.1.23). Australia announced a ban on its use in state schools. In February, Oxford, Cambridge and six other Russell Group universities announced bans on its use in assessment.

However, at the same time, other educationalists recognised that students were already using it to help write essays and that banning it would be futile. There might also be positive educational uses for ChatGPT and similar AI tools (such as Google Bard and the Midjourney and DALL-E image generators). International students, in particular, have reported that they find tools such as GrammarlyGO and Google Translate very useful when writing essays. Some students and staff report positive experiences of using ChatGPT to summarise a topic or draft a presentation or essay plan (subject to checking factual accuracy). The International Baccalaureate (IB) said that “schoolchildren will be allowed to use the chatbot in their essay [and that it] should be embraced as ‘an extraordinary opportunity’, [but] students must be clear when they were quoting its responses” (i-news 28.2.23).

UCL told its students that ‘we will support you in using them effectively, ethically and transparently’. Glasgow University advocated their use ‘responsibly’. Eventually, in July, the Russell Group published its five principles: “support students and staff to become AI-literate; to use generative AI tools effectively and appropriately; adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access; ensure academic rigour and integrity is upheld; share best practice as the technology evolves”. Most other universities in the UK, and worldwide, have now adopted similar policies (supported by quasi-governmental organisations such as the Quality Assurance Agency).

In April, Unesco published a report on ChatGPT and AI in higher education, urging universities to consider a wide range of concerns and challenges: ethics, integrity, regulation, privacy, bias, accuracy, accessibility and commercialisation. Similar reports followed from the EU, the QAA and the US President’s meeting with the big tech companies. Subsequently, most universities advised staff and students to refrain from submitting their work to an AI tool for copyright and data protection reasons, to always cite any use of such tools, and to understand the likelihood of inaccurate responses (‘hallucinations’), fake references and bias (for example, towards white, western culture).

Meanwhile, evidence emerged of widespread use of AI tools by students submitting assignments. In May, “Intelligent.com surveyed 1,223 current undergraduate and graduate students. Key findings from the survey include: 30% of college students used ChatGPT for schoolwork this past academic year. Of this group, 46% say they frequently used the tool to do their homework” (Intelligent 9.6.23). By June, “almost 400 students have faced investigations for using AI chatbots in a university-assessed piece of work and at least 146 have so far been found guilty, with dozens of investigations still ongoing. The figure was highest at the University of Kent where 47 students have been investigated for using ChatGPT or a similar AI chatbot” (The Tab 4.7.23). From my own experience, these figures are certainly an understatement of the true picture.

In April, Turnitin (the originality-checking software) released its AI detector worldwide, later reporting that during the “first three months of the detector’s operation, it was used on more than 65 million papers. Of these, 2.1 million – 3.3 per cent – were flagged as having at least 80 per cent AI writing present. Nearly 6.7 million – 10.3 per cent – had more than 20 per cent AI writing present” (THE 25.7.23). Most UK universities have been reluctant to switch on the Turnitin detector, due to its high rate of false positives and the lack of concrete evidence to confirm that flagged text really is AI writing. Instead they have focussed on changes to the assessment process, such as integrating AI tools into the assignment, with citation and critical analysis, or a return to traditional exams (the Tory government agenda). Anecdotally, colleagues report that the high rate of detection is not reflected in misconduct cases, and that even a low reported likelihood of AI text probably biases their marking.
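
The worry about false positives is easier to see with a rough back-of-the-envelope calculation. The Python sketch below is purely illustrative: the 65 million figure comes from the Turnitin report quoted above, but the prevalence and detection rates are assumptions chosen only to show the shape of the problem at this scale, not published Turnitin figures.

    # Back-of-the-envelope sketch: why a small false-positive rate still
    # means large numbers of wrongly flagged papers at this scale.
    # Only the 65 million total comes from the article; every rate below
    # is an illustrative assumption.
    total_papers = 65_000_000        # papers checked in the detector's first three months
    assumed_ai_share = 0.03          # assumption: share of papers substantially AI-written
    assumed_false_positive = 0.01    # assumption: chance a human-written paper is flagged
    assumed_true_positive = 0.90     # assumption: chance an AI-written paper is flagged

    ai_papers = total_papers * assumed_ai_share
    human_papers = total_papers - ai_papers

    correctly_flagged = ai_papers * assumed_true_positive
    wrongly_flagged = human_papers * assumed_false_positive
    false_alarm_share = wrongly_flagged / (correctly_flagged + wrongly_flagged)

    print(f"Total flagged: {correctly_flagged + wrongly_flagged:,.0f}")
    print(f"Wrongly flagged human-written papers: {wrongly_flagged:,.0f}")
    print(f"Share of flags that are false alarms: {false_alarm_share:.0%}")

On these assumed numbers, around 630,000 human-written papers would be flagged and roughly a quarter of all flags would be false alarms, which helps explain why universities treat a flag as grounds for further inquiry rather than proof of misconduct.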

Many education trade unions are concerned about the effect on jobs. While large-scale job losses are unlikely in the short term, given the limitations of the tools noted above, AI has almost certainly had a major effect on those working for online tutoring companies. One indicator came in May, when Chegg warned investors that ChatGPT was denting demand for its services: “Chegg’s share price fell nearly 50 per cent following the news. The warning spooked investors throughout the education and publishing sector as Pearson’s shares fell by over 12 per cent. Shares in US-listed rivals Udemy and Coursera also dropped” (CityAM 2.5.23). Chegg (which has been accused, in Australia, of being an essay mill supporting contract cheating) responded by going into partnership with OpenAI, launching a new AI service called CheggMate, and its shares recovered.

The medium to long-term threat to jobs in education is very real, though. Sam Altman, the CEO of OpenAI, announced: “I think US college education is nearer to collapsing than it appears. What a time to start an alternative to college! The world really needs it” (InsideHigherEd 21.6.23). He has joined up with Khan Academy to set up OpenAI Academy, which he says will be open access, and Khanmigo, a paid-for AI tutoring service run by Khan Academy. Meanwhile Microsoft has purchased LinkedIn, and could use its partnership with OpenAI to introduce more AI-supported online courses there. Pearson, which currently charges students $20 a month for help with homework, has also announced that it is expanding its use of AI to support tutoring.

Just as the Tories have run down the NHS to open the door to private health care, universities worldwide have been expanding at a dramatic rate without concomitant increases in funding or staffing. The effects are shown dramatically by this analysis of a lecturer’s workload in Australia: “In August 2020, the ABC reported that tutors at ten Australian tertiary institutions were effectively being encouraged to ‘skim read’ assessments” (Counterpunch 9.8.22).

As the start of the 2023/24 academic year approaches, all universities are asking staff to review and adjust their assessments, learn about AI tools, understand the new regulations on AI and teach students about the ethical and legal implications of AI, while at the same time remaining in dispute over pay and conditions and watching class sizes continue to rise. And students will undoubtedly be tempted by private companies offering ‘support for homework’ in return for payment. The parallels with the privatisation of the NHS are painfully obvious.

But what can an incoming Labour government do? It could address the ‘digital divide’ by offering free broadband, a free AI/human tutor and a free PC to all children on free school meals. It could invest in our universities, for example by creating a shared AI supercomputer, cloud and LLM (large language model) trained on open-access, non-copyrighted research journals and textbooks; this could be invaluable for generating both teaching material and research ideas. It could introduce legislation to protect copyright, ensure data protection and security, remove bias, and uphold principles of privacy, accessibility, open access and open source in LLMs and other emerging AI tools. Above all, it could uphold the Unesco principle that free education for all is a human right.

Pete Chalk
Pete Chalk is a Learning & Teaching Specialist at the University of Hertfordshire and a member of the London & SE Academic Integrity Network, but writes here in a personal capacity. He is a member of Chartist EB.
