Op-Ed: Perceptions of AI in education — The issue is trust.

Image: © JENS SCHLUETER/AFP via Getty Images

The negative perceptions of AI are so strong that positive perceptions struggle to get noticed. The perceived “threat” of AI still outweighs the benefits. The problem is that even very basic descriptions of AI generate negativity worldwide.

Education is a classic case in point. The perception of AI in studying arrives with a lot of baggage already on board. Every tier of education is under fire for understandable reasons, from cost to quality. Levels of basic literacy in core skills, for example, are among the issues that never go away.

The erosion of education’s credibility through massively inflated costs and overt elitism isn’t exactly a major asset, either. “Buy a degree” as a sector-wide slogan doesn’t help much. The quality of the actual education is either despised or derided.

Add AI to this deeply unimpressive image, and you get the impression that these privileged students aren’t even doing their own work. Great job with the public image there, guys.

That impression of students not doing their own work is wrong. It isn’t even a viable generalization, but when has that mattered to making headlines? The toxic fact is that it’s the default impression created by the image of the education system. A little targeted digging exposes a far more believable view of AI in education.

There’s a sort of cosmic irony at work here.

Consider this logic:

Students are hammered with the all-embracing “life and death are based on your grades” environment.

When they do assignments, those are the criteria for success. You live if your work is OK.

If they use AI, they’re likely to be punished for cheating, which equates to a failing grade.

AI-generated content therefore gets as much paranoid student scrutiny as the actual lessons. In effect, students are teaching themselves how AI works.

This situation leads to a guffaw-worthy outcome for those of us educated pre-digitally.

When using AI, it seems that the students scrupulously do their own editing and check to ensure that nothing wrong or weird is in it. They’re suspicious of any possible garbage the AI might include. They’re effectively also doing a teacher’s job on their own work before they submit it.

There’s a much less obvious danger in this sort of work. AI is perfectly capable of introducing terminology and levels of information the students haven’t encountered and may not even understand. That shows instantly, and there’s no getting around it. Erratic variations between PhD and grade school levels of information are a bit conspicuous.

If you do that suicidal type of AI work, your grades are in for a bumpy ride. You can see why this conscientious risk avoidance by students is very practical. Cheats also have no hope of survival under even slightly critical questioning. The paranoia is fully justified.

You can also see why the students are very wary of these built-in traps. This “indirect education” apparently works. Cheating rates overall are actually pretty low, around 5%.  

All of this, of course, is below the public radar. Specialist education sites mention it regularly, but the mainstream doesn’t seem to be trying too hard. The public perception of AI in education remains negative. The public definitely needs to be “educated”, excuse the expression.

That’s partly because parents are also subject to the “life and death are based on grades” image. The absurdity is that the only alternative image on offer, going back to quill pens and uncomfortably draconian old-style schooling, is equally unworkable.

Anyone in digital technology marketing can be forgiven for having a good laugh about what’s now happening with AI studies. Peripheral software, the often very junk-like curse of the 2000s, has come back to life as a new class of useful study aids for students using AI.

Unlike the old stuff, this software and support actually works. Unite AI’s very informative list of ten examples of the new support options for working with AI in education is well worth reading. These supports are baseline-level assistance for students and teachers.

The bottom line here is practical help with using AI, covering everything from “fill in the blanks” exercises to math problems and lesson plans. The obvious gaps are being filled pretty quickly.

“Somebody has noticed!” they marveled in the trenches of all levels of education. You could also argue that this sort of software and support should have been a top educational priority in the 1990s.

The good news for AI advocates, desperate students and besieged teachers is that the negative perceptions must inevitably change. The realities of AI in education are already so far from the public image it’s bordering on ridiculous.

The most obvious future major issue is actually something inherent in the human psyche: managing the morons when it comes to learning (or pretending to learn) anything. The inevitable cadre of lazy dummies and stupid cheats will be so far behind that they’ll need remedial education.

AI will have to do that remedial education. Irony sometimes includes justice.  

AI is like any tool. A tool that doesn’t kill you will be trusted.   

_______________________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
