Image: ChatGPT and artificial intelligence are changing education.
Artificial intelligence tools will have a large impact on learning.

Few innovations have landed on the public's collective radar over the past year as profoundly as artificial intelligence (AI) and generative technologies. These cutting-edge advancements have revolutionized how we interact with technology, transforming it from a tool into a creative collaborator.

At the heart of this transformation lies the concept of AI content generation. As we stand on the threshold of this creative revolution, Professor Helen Crompton, the executive director of ODU’s Research Institute for Digital Innovation in Learning (RIDIL) and professor of instructional technology, researches how AI impacts humanity. With a focus on education, she explores how these advancements can be harnessed to reshape learning experiences and enhance student outcomes.

Here, Crompton discusses the rise and impact of AI and ChatGPT. The following draws on conversations she had with experts in 2023, notably a chat with Sandro Galea, dean of the School of Public Health at Boston University. The responses have been edited for length and clarity.

Q: Can you define AI?  

HC: Artificial intelligence is technology performing tasks that typically require human intelligence, such as learning, reasoning, problem-solving, and decision-making. We had a lot of AI technology in the 1950s, but people were not ready; the funding wasn't there, and that's a huge reason it didn't progress. In recent years I started noticing changes, and in 2017 I had an epiphany about the way AI can be used for learning.  

Q: Can you explain the growth in public consciousness around ChatGPT?  

HC: ChatGPT is generative technology, meaning an AI that creates something. It is pre-trained on data, which allows it to, in a sense, think. The process is similar to teaching a child. ChatGPT was pre-trained on the internet, which brings issues of bias, but parents also bring their own biases when they teach their children.

There's been a fear in pop culture about artificial intelligence taking over the world, and it has taken a long process to get to the point where this might be an exciting technology for humanity. Look at the advancement of Google Search and, specifically, Google Translate. AI has progressed to the point where Google Translate can carry on a real-time conversation between two languages. Funding has increased, and programming and technology have improved.  

Microsoft gave OpenAI a lot of money to develop what became ChatGPT. We've hit a point of no return; you can't reel back what's already out there.  

Q: What is the downside to ChatGPT?  

HC: When you ask a very clever person something, they may not know the answer, but they might make up something that sounds plausible. ChatGPT sometimes does the same. People learning a new topic have a great deal to absorb quickly, and ChatGPT can help summarize it. But we know generative AI can provide responses that are not always accurate, so a generation of unthinking students who rely too heavily on AI could present a problem.

There are obvious problems with bias and cases of harmful use: people could search for things they're not supposed to. ChatGPT, Google's Bard, and Bing search are amazing tools, but the companies and trainers behind them must build safeguards to counteract some of the adverse outcomes. And they are doing that now.  

I advocate for regulations. We need some type of education and guidance for everyone. There are some regulations about AI, but we now need more on generative AI specifically. Those regulations need to be upheld by the organizations developing AI systems, and guidance is needed for end users to ensure the best use of these tools.

Q: As we learn more about these generative technologies, how will values be embedded into what they produce?   

HC: AI is all brain and no heart. Everything AI does is mimicry of what we want; it tries to achieve the objectives it is given. It can appear empathetic, but it's not.  

But that doesn't mean it can't help humans with empathy and values. For example, doctors often have to deliver bad news to patients, which is a challenging skill to master. AI can help here: it can render a virtual face that a doctor can interact with, practicing empathy with a simulated patient. The simulation can help a doctor hone skills, such as intonation, word choice, and tone, that are difficult to practice otherwise, and it is certainly preferable to practicing on real patients.