At Virginia Peninsula Community College’s United for Impact Series on June 18, the topic of discussion was artificial intelligence and its benefits and risks.
“It’s a complex issue,” Christa Krohn told the more than 60 people in attendance at the Peninsula Workforce Development Center.
Krohn, the senior director of learning systems at Ohio-based TIES (Teaching Institute for Excellence in STEM), was the keynote speaker and facilitator for the series, “AI Insights – Navigating the Future.” Stuart Henderson, the director of Jefferson Lab in Newport News, was the guest speaker. Five area leaders took part in a panel discussion at the end of the event, which featured breakout sessions on AI for Educators, AI for Business, AI for the Military, and AI Ethics and Policy.
While many think AI is a recent phenomenon, that is not the case. The first paper on machine learning was published in 1943, and the first chatbot was created in 1960, Krohn noted in her presentation.
“It’s not new,” she said.
In fact, the first use of the term “artificial intelligence” is traced to 1955 at Dartmouth College. With the prevalence of computers today, however, AI has become an everyday occurrence. As with all technology, the key is using it effectively while mitigating misuse, Henderson said.
That was the main topic of discussion by the panel, which consisted of Lisa Surles-Law, science education program manager at Jefferson Lab; Ian Taylor, chair of VPCC’s economics department; Jeff Corbett, an entrepreneur with a background in logistics; Edward Halper, the Air Combatant Command’s facility sustainment, renovation and modernization program manager; and Reeve Bull, director of Virginia’s Office of Regulatory Management.
Surles-Law said AI tools such as ChatGPT, DALL-E (text-to-image), and Sora (text-to-video) can help with mundane tasks, including creating drafts, summarizing, planning, scheduling and translating.
“It frees you up to be creative and do the things you want to do,” she said.
For Taylor, AI is great when he’s not available, such as at 2 a.m., if a student has a basic question such as when the next test is or how an assignment is weighted.
Halper uses AI for data analytics and said, “It’s a tool, not a crutch.” The user still must know how and why something was done.
Bull compared the current AI revolution to the industrial and information revolutions, with significant economic opportunities alongside workforce and ethical concerns.
Krohn referenced a quote from economist Richard Baldwin: “AI won’t take your job. It’s somebody using AI that will take your job.”
Bull added that there must be guardrails to promote responsible growth.
“The upsides outweigh the downsides, but you have to be mindful of both,” he said.
Also, don’t use AI just for the sake of using AI.
“There might be some areas where it’s not really useful,” Bull said.
The keys are embracing uncertainty, leading with ethical AI principles, and continuously learning and adapting.
The panel stressed three things: users should learn why AI made its changes or suggestions; parents should work with AI alongside their children to understand its benefits and risks; and a human must be kept in the loop and involved in the final decision.
“It’s still the human who is responsible,” Taylor said.
Krohn cited a study that showed many college students don’t use AI because they think of it as cheating. She disagrees.
“It’s not cheating,” she said.
But that doesn’t mean everything is great.
“We stand at the precipice of an extraordinary era, where artificial intelligence is poised to transform virtually every aspect of our lives,” she said.
It has already transformed the way we approach challenges and solve problems in almost every industry, as well as in the military.
“With this remarkable potential comes the responsibility to develop and deploy AI in ethical, transparent and inclusive manners, ensuring that it benefits humanity as a whole,” she said.