Opinion: Generative AI Tools Like ChatGPT Can Revolutionise How Education Is Imparted


About a week ago (shout out Bobby Shmurda), artificial intelligence research firm/startup OpenAI launched their ChatGPT tool to the public. ChatGPT is a conversational AI tool which allows users to input prompts and get responses in a dialogue format. According to OpenAI, ChatGPT can answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. 

The tool has attracted a lot of fanfare, with users mesmerised by the quality of its output. Not everyone is impressed, though. Developer knowledge-sharing platform Stack Overflow has banned its users from posting answers generated by ChatGPT. The reason?

“…because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers,” Stack Overflow said in a statement.

Contrary to the belief in some quarters that the ban is really about ChatGPT being a potential disruptor of the platform, Stack Overflow's reasoning is, well, reasonable. To understand why, it is important to first understand how ChatGPT works.

According to OpenAI, ChatGPT “employs a statistical model about what bits of language go together under different contexts.” Combined with its Reinforcement Learning from Human Feedback (RLHF) training, in which human-generated feedback is used to fine-tune the model, it is able to produce plausible-looking and detailed results.
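To make the "what bits of language go together" idea concrete, here is a deliberately tiny, hypothetical sketch in Python. None of this is OpenAI's actual model or API; it is a toy illustration of next-word prediction, where the model samples whatever word is statistically likely to come next, with no notion of whether the resulting sentence is true.

```python
import random

# Toy illustration (not OpenAI's implementation): a language model assigns
# probabilities to candidate next words given the words so far, then samples one.
# The probabilities encode plausibility, not truth.
next_word_probs = {
    ("the", "capital", "of", "france", "is"): {"paris": 0.92, "lyon": 0.05, "nice": 0.03},
    ("two", "plus", "two", "equals"): {"four": 0.81, "five": 0.11, "twenty-two": 0.08},
}

def sample_next_word(context):
    probs = next_word_probs[tuple(context)]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(["two", "plus", "two", "equals"]))  # usually "four", occasionally not
```

In a real system the lookup table above is replaced by a neural network trained on vast amounts of internet text and then fine-tuned with RLHF, but the failure mode is the same: a fluent, confident answer that is only probably right.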

In layman's terms, ChatGPT does not actually compute, for example, the code snippets it outputs. Instead, it draws on patterns from the internet text it was trained on, refined by user feedback, to devise plausible-looking output. "Plausible-looking" is perhaps the most important part here. Although the snippets posted on social media over the last 10 days might make one think otherwise, much of ChatGPT's output is confidently wrong. For example, the tool fails at basic arithmetic:

[Screenshot: ChatGPT confidently giving an incorrect answer to a basic arithmetic prompt]

Also, like almost all other AI tools trained on racially unbalanced internet content, ChatGPT's output can sometimes be a tad racist. Here it assumes that, to be a senior executive, someone has to be white:

[Screenshot: ChatGPT response assuming a senior executive must be white]

Many other examples of ChatGPT's flawed output exist, but that is not what this blog post is about. Instead of dwelling on its shortfalls, the purpose here is to show alternative use cases in education for this still very nascent generative AI technology.

Using Generative AI To Revolutionise Education

A few days after OpenAI made ChatGPT available to the general public, tech blogger Ben Thompson posted this blog post, which got me thinking about alternative use cases for generative AI technologies. In it, he compared ChatGPT to a calculator.

The gist of the comparison is this: if a middle-school teacher wants to teach students how to multiply numbers, then instead of letting them simply punch the numbers and operators into a calculator and get the answer, the teacher walks them, step by step, through how multiplication works. Of course, by the time the students reach higher grades they no longer need the long method of multiplication because they already know the principles; only then are they allowed to use calculators.
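As a rough sketch of that "show your working" stage (the function below is my own illustration, not anything from Thompson's post), long multiplication can be laid out as the partial products a student would write by hand, rather than as a single calculator call:

```python
def long_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers the way it is taught on paper:
    one partial product per digit of b, shifted by its place value."""
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit * (10 ** place)  # the row a student would write down
        print(f"{a} x {digit} x 10^{place} = {partial}")
        total += partial
    print(f"sum of partial products = {total}")
    return total

long_multiply(123, 45)
# 123 x 5 x 10^0 = 615
# 123 x 4 x 10^1 = 4920
# sum of partial products = 5535
```

Only once a student can produce and check those intermediate rows themselves does handing them the one-line `123 * 45` make pedagogical sense.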

The same approach can be applied to a generative AI tool like ChatGPT when delivering educational content. Unfortunately, Web 2.0 platforms, from search engines like Google Search to social media platforms like Facebook and Twitter, are a cesspool of misinformation and disinformation presented as credible information, which, when you think about it, is exactly what ChatGPT outputs, just with a cooler conversational user interface.

Instead of treating the output of a tool like ChatGPT as the holy grail, why not use its practical flaw of outputting half-truths and falsities to significantly reduce the time it takes to deliver educational content? Think of it this way. Say a student is given an English literature assignment to write a sonnet. Instead of banning them from using an AI tool like ChatGPT, the student would be allowed to use it, on the condition that the bulk of their grade would come not from writing a sonnet from scratch but from their ability to edit and verify the correctness of the sonnet output by the generative AI tool.

The assumption here is that much of the output will be incorrect, which, in most cases and for the foreseeable future, it will be, as AI is still in its nascency. This would in effect achieve two things. First, it chucks out the window the futile attempt to prevent the inevitable: students using AI tools to "make things a bit easier" for themselves. Secondly, it revolutionises education in that the focus shifts from requiring students to cram, pass and forget concepts to requiring them to think intuitively about concepts in maths, language, arts, science and so on.

Just like in the calculator example, where a student has to determine whether the solution the device gives for a multiplication problem is correct, having students act as verifiers and editors of AI-generated output will allow them to complete assignments in a fraction of the time, accelerating and increasing the amount of knowledge they can consume in, say, a semester. In the sonnet example, during the semester the student is taught what makes up a sonnet; in the exam, they are asked to verify and edit an AI-generated draft into the perfect sonnet.

Another example could be the aforementioned case of ChatGPT getting basic arithmetic wrong. A student would be asked, in a test or exam setting, to verify and edit that output: correctly identify the error, then apply the right mathematical principles to fix it. This saves time compared to solving the problem from scratch while still requiring the student to apply what they have learnt throughout the semester.
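As a purely hypothetical example of such an exercise (the snippet and its bug are invented for illustration, not taken from an actual ChatGPT transcript), a student might be handed an "AI-generated" function like the one below and graded on spotting and fixing the error rather than on writing the function from scratch:

```python
# Hypothetical AI-generated snippet the student is asked to verify.
# It claims to compute the average of a list of numbers.
def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / (len(numbers) - 1)  # bug: should divide by len(numbers)

# The student's task: notice that average([2, 4, 6]) returns 6.0 instead of 4.0,
# explain why, and correct the denominator.
def average_fixed(numbers):
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))        # 6.0  (wrong)
print(average_fixed([2, 4, 6]))  # 4.0  (right)
```

The marking rubric would reward identifying the faulty denominator and justifying the fix, which is exactly the verify-and-edit skill the rest of this post argues for.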

Of course, arguments against this method of teaching can be made. Some might say it spoon-feeds students and makes their learning magnitudes easier, but isn't that what all technologies before it, including Web 2.0 tools like Google Search, have always done? Imagine how much the ability to simply Google a solution to a problem has disrupted the imparting of knowledge to students from the late 90s and early 2000s up to now. The same thing will happen with generative AI, and just like with Google Search, the education system will adapt around the technology to ensure that, although things have been made "easier for students", the quality of education is not eroded.

Generative AI tools are not going anywhere. As a matter of fact, this is just their dawn: VC investment in AI startups skyrocketed in 2022. Trying to keep these technologies out of classrooms is therefore futile. The best thing to do, just as was done with other revolutionary technologies in the past, is to find a way to incorporate them into the education system to make it better and more efficient.

NB: This article first appeared on Some Black Guy's Thoughts.

