AI: AI Tests the way we Teach and Train. RTZ #469

New technologies have always challenged the way we’ve built institutional habits, particularly in Education.

Early digital calculators raised concerns that kids' math skills would atrophy as they leaned on the new-fangled technology 'improvements'. It turned out not to have the negative impact that was feared:

“Thirty years ago calculators promised immense opportunity – opportunity, alas, that brought considerable controversy. The sceptics predicted students would not be able to compute even simple calculations mentally or on paper. Multiplication, basic facts, knowledge would disappear. Calculators would become a crutch.”

“The controversy has not dissipated over time. As recently as 2012, the UK government announced it intended to ban calculators from primary classrooms on the grounds that students use them too much and too soon.”

“Research conducted in response to this found little difference in performance tests whether students used calculators or not. An earlier US study had found the same: the calculator had no positive or negative effects on the attainment of basic maths skills.”

Of course, LLM/Generative AI technologies in this AI Tech Wave are causing similar concerns on a broader scale. They are yet another pressure point on what I've called the number one problem in 'Scaling AI': Trust in AI Technologies.

Axios explains in “Teachers still can’t trust AI text checkers”:

“As kids of all ages head back to school, educators are still struggling to spot students who are letting chatbots write their reports for them.”

“The big picture: Commercial AI text detection tools — even those claiming high accuracy — still have some big flaws.”

“Catch up quick: After the release of ChatGPT, teachers quickly realized that the plagiarism detection software they’d used before failed to work on student submissions that were generated by an AI system.”

  • “Academics, startups and even OpenAI itself began releasing genAI text detectors, but none of those tools were very effective either.”

  • “And the problem has gotten worse.”

  • “As the technology to detect machine-generated text advances, so does the technology used to evade detectors,” says University of Pennsylvania computer and information science professor Chris Callison-Burch. “It’s an arms race.”

Of course, the broader AI industry is hard at work on solutions, but they're still in development and have Scaling hurdles of their own:

“Between the lines: Earlier this year, Google announced a new technique for watermarking text so that it can later be identified as AI-generated, but there hasn’t been an update to the tool since then. Google did not respond to a request for comment.”

  • “Callison-Burch thinks watermarking is an “excellent idea,” but it’s an insufficient tool against student plagiarism since it requires widespread adoption by AI companies.”

  • “Sophisticated users could also download open-source AI software that will let them generate text without watermarks, Callison-Burch told Axios.”

  • “OpenAI has also developed a text watermarking method, but has not released it yet. A spokesperson told Axios its tool is “technically promising,” but also has “important risks” that the company is weighing while researching alternatives.”
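Neither Google nor OpenAI has published the details of the watermarking methods mentioned above. But the widely discussed "green list" scheme from the research literature gives a feel for how statistical text watermarking works: the generator is nudged toward a secretly chosen subset of the vocabulary at each step, and a detector later checks whether suspiciously many tokens fall in that subset. The sketch below is a toy illustration of that idea only, not either company's actual method; all function names are hypothetical.

```python
import hashlib
import math


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically select a 'green' subset of the vocabulary, seeded
    by the previous token. A toy stand-in for the keyed hash a real
    watermarker would use; without the key, the split looks random."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    ranked = sorted(vocab, key=lambda w: hashlib.sha256(f"{seed}:{w}".encode()).digest())
    return set(ranked[: int(len(ranked) * fraction)])


def watermark_z_score(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """Count how many tokens land in their green lists and return a z-score.
    Watermarked text (which favored green tokens) scores high; ordinary
    text hovers near zero, since green hits then occur at chance rate."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab, fraction)
    )
    n = len(tokens) - 1
    expected = fraction * n
    variance = fraction * (1 - fraction) * n
    return (hits - expected) / math.sqrt(variance)
```

This also illustrates Callison-Burch's two caveats quoted above: detection only works if the generator cooperated (it must have favored the green lists), and anyone running unwatermarked open-source models produces text that scores at chance.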

The long-term answers are likely going to involve adjustments beyond technology-based solutions:

“State of play: As teachers start their third year fearing ChatGPT-generated text, many are rethinking their genAI abstinence policies.”

  • “The popular AI writing assistant Grammarly is trying to solve the cheating problem by making it easier to disclose the use of genAI in writing.”

Over time, the way forward is likely to involve new ways to teach and test students that make LLM/Generative AI part of the curriculum, rather than just trying to detect it to preserve current ways of teaching and testing.

After all, the broader issue is that all of us, young and old, are going to want to learn how to use these emerging AI technologies to be better equipped to do things in society.

Not avoid using them to preserve the ways we’ve done things in the past. That of course takes time, and is an evolutionary process. These are but the earliest of days in figuring out the best ways forward. Stay tuned.

(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)




