“Where’s my tablet?”
Seven-year-old voices can be full of wonder, but for Widmarck Francois, a cybersecurity student and father, this repeated question from his child carried the weight of a growing concern.
Francois, winner of an essay contest examining the ethical concerns of AI, shared his story and captured the audience’s attention.
“Basically, I have kids, and I see what technology does to my kids,” he said. “My seven-year-old is addicted to his tablet, and that’s the only thing he thinks about once he gets home.”
Francois’ words illustrated how AI’s rapid growth is altering relationships and societal norms, raising questions about accountability among its creators and users.
This reflection informed his winning essay, in which he advocated for the ethical design of AI systems that prioritize transparency and respect for human autonomy.
His remarks raised a question: what role do fairness and empathy play in a world shaped by AI? It was the topic of discussion at a recent Montgomery College event.
As part of MC’s 12th annual Humanities Days lineup, the Oct. 24 Ethics and AI guest lecture brought students, faculty, and experts together at the Takoma Park/Silver Spring campus. Philosophy professor Daniel Jenkins facilitated the event.
Keynote speaker Elissa Redmiles, a Georgetown University assistant professor and faculty associate at the Berkman Klein Center for Internet & Society, joined the discussion virtually and guided the audience through the ethical challenges of AI, including fairness perception, algorithmic bias, and non-consensual image abuse.
Francois’ story, shared toward the lecture’s conclusion, brought Redmiles’ earlier points on AI’s ethical challenges into focus and underscored the steps needed to ensure AI is developed with accountability and fairness at its core.
“Why don’t we predict whether a positive intervention will be effective? Let’s turn our predictions into positive actions that will have less negative consequences,” Redmiles said, inviting the audience to reimagine AI systems as tools to address societal challenges.
Redmiles outlined AI’s impact on sectors like healthcare, education, and criminal justice, calling for measures to advance equity and transparency, and described unchecked biases in algorithms as catalysts for cycles of inequality, leading to self-fulfilling prophecies.
Addressing bias in AI, she argued, is critical to creating systems that prioritize fairness and uphold ethical, transparent decision-making.
For Francois, that reflection began long before the lecture.
“It is challenging to hold someone responsible for AI-driven outcomes,” he said, stressing the need for ethical guidelines and laws to hold developers, companies, and users accountable.
He explained that predictive algorithms, while efficient, often overlook the nuance necessary in human-centered decision-making.
“That reduces the human element of discretion and empathy in AI systems,” Francois said, noting that hiring and judicial reviews are examples where understanding individual needs and societal context is essential.
This perspective aligns with a 2023 Pew Research Center study, which found that 71% of Americans oppose AI making final hiring decisions, citing concerns about fairness and an algorithm’s inability to evaluate traits like creativity or interpersonal skills.
Francois’ remarks highlight the importance of maintaining human oversight, particularly in areas where decisions can alter lives.
General studies student Sara Lopez Gomez, meanwhile, reflected on AI’s broader societal impact.
“If society keeps relying on AI, we would use AI to think for us, and that is something we should avoid,” Lopez Gomez said. “We shouldn’t have technology run our lives.”
Lopez Gomez’s thoughts mirrored the lecture’s focus on integrating AI thoughtfully into society: keeping human accountability central to decision-making and letting technology support societal progress rather than dictate it.
The lecture’s emphasis on fairness and accountability inspired attendees like cybersecurity student Nasrin Van Wyk, who considered how ethical concerns shape her field.
“In the context of cybersecurity, ethical concerns have significant implications, where algorithms would not only identify possible threats but also protect the privacy of the user and prevent misuse,” Van Wyk said.
Accountability issues prompted Jenkins to emphasize the role of education and dialogue in addressing AI’s ethical dilemmas.
“I think it’s actually going to be a long, uninterrupted conversation for the rest of our lives,” Jenkins said. “It’s the hope that intellectuals and educational communities can provide guidance to a society at large, policymakers, and everyday people to make sense out of the changes in the complexity of AI systems and their impact on society.”
The sentiment underscored the need for collaboration across disciplines to ensure that, as technology evolves, its development aligns with societal values.
Francois’ concerns, sparked by his seven-year-old’s question, remain a reminder of what is at stake in the AI conversation.
“As AI becomes increasingly embedded in our lives, it’s critical that these systems respect and preserve the inherent worth of every individual,” he said. “Human beings shouldn’t be reduced to statistical profiles or treated as a means to an end.”