“Where’s my tablet?”
Seven-year-old voices can be full of wonder, but for Widmarck Francois, a cybersecurity student and father, this repeated question from his child carried the weight of a growing concern.
Francois, the winner of an essay contest examining ethical concerns regarding AI, shared his story, capturing the audience’s attention.
“Basically, I have kids, and I see what technology does to my kids,” he said. “My seven-year-old is addicted to his tablet, and that’s the only thing he thinks about once he gets home.”
Francois’ words conveyed how AI’s rapid growth is altering relationships and societal norms, raising questions about accountability among its creators and users.
This reflection informed his winning essay, where he advocated for the ethical design of AI systems that prioritize transparency and respect for human autonomy.
His remarks raised a question: how do fairness and empathy figure in a world shaped by AI? The question was central to a recent Montgomery College event.
As part of MC’s 12th annual Humanities Days lineup, the Oct. 24 Ethics and AI guest lecture brought students, faculty, and experts together at the Takoma Park/Silver Spring campus. Philosophy professor Daniel Jenkins facilitated the event.
Keynote speaker Elissa Redmiles, a Georgetown University assistant professor and faculty associate at the Berkman Klein Center for Internet & Society, joined the discussion virtually and guided the audience through the ethical challenges of AI, including fairness perception, algorithmic bias, and non-consensual image abuse.
Francois’ story, shared toward the lecture’s conclusion, brought Redmiles’ earlier discussions on AI’s ethical challenges into focus and expanded on the necessary steps to ensure AI is developed with accountability and fairness at its core.
“Why don’t we predict whether a positive intervention will be effective? Let’s turn our predictions into positive actions that will have less negative consequences,” Redmiles said, inviting the audience to reimagine AI systems as tools to address societal challenges.
Redmiles outlined AI’s impact on sectors like healthcare, education, and criminal justice, calling for measures to advance equity and transparency, and described unchecked biases in algorithms as catalysts for cycles of inequality, leading to self-fulfilling prophecies.
Addressing bias in AI, she argued, is critical to creating systems that prioritize fairness and uphold ethical, transparent decision-making.
For Francois, this reflection began long before the lecture.
“It is challenging to hold someone responsible for AI-driven outcomes,” he said, stressing the need for ethical guidelines and laws to hold developers, companies, and users accountable.
He explained that predictive algorithms, while efficient, often overlook the nuance necessary in human-centered decision-making.
“That reduces the human element of discretion and empathy in AI systems,” Francois said, noting that hiring and judicial reviews are examples where understanding individual needs and societal context is essential.
This perspective aligns with a 2023 Pew Research Center study, which found that 71% of Americans oppose the use of AI in hiring decisions, citing concerns about fairness and an algorithm’s inability to evaluate traits like creativity or interpersonal skills.
Francois’ remarks highlight the importance of maintaining human oversight, particularly in areas where decisions can alter lives.
In parallel, general studies student Sara Lopez Gomez reflected on AI’s broader societal impact.
“If society keeps relying on AI, we would use AI to think for us, and that is something we should avoid,” Lopez Gomez said. “We shouldn’t have technology run our lives.”
Lopez Gomez’s thoughts mirrored the lecture’s focus on integrating AI thoughtfully into society: keeping human accountability central to decision-making, with technology supporting societal progress rather than dictating it.
The lecture’s emphasis on fairness and accountability inspired attendees like cybersecurity student Nasrin Van Wyk, who considered how ethical concerns shape her field.
“In the context of cybersecurity, ethical concerns have significant implications where algorithms designed would not only identify possible threats but to protect the privacy of the user and to prevent misuse,” Van Wyk said.
Accountability issues prompted Jenkins to emphasize the role of education and dialogue in addressing AI’s ethical dilemmas.
“I think it’s actually going to be a long, uninterrupted conversation for the rest of our lives,” Jenkins said. “It’s the hope that intellectuals and educational communities can provide guidance to a society at large, policymakers, and everyday people to make sense out of the changes in the complexity of AI systems and their impact on society.”
The sentiment underscored the need for collaboration across disciplines to ensure that, as technology evolves, its development aligns with societal values.
Francois’ concerns, sparked by his seven-year-old’s question, remain a reminder of what is at stake in the AI conversation.
“As AI becomes increasingly embedded in our lives, it’s critical that these systems respect and preserve the inherent worth of every individual,” he said. “Human beings shouldn’t be reduced to statistical profiles or treated as a means to an end.”
Samuel W. • Dec 5, 2024 at 11:43 am
Francois’ journey from a concerned father to a thought leader on AI ethics underscores the human stakes in technological advancement. His son’s fixation on a tablet symbolized the more extensive societal shift, where technology increasingly mediates human interactions, raising ethical concerns about autonomy and empathy. Through his winning essay and the Humanities Days lecture, Francois called for AI systems that prioritize fairness, transparency, and human oversight. His reflections remind us that technology should serve as a tool for empowerment, not as a substitute for human judgment, urging developers and policymakers to embed ethical considerations at every stage of AI design.
Clayton Avila • Dec 2, 2024 at 1:27 pm
The debate is important and reveals how much we don’t know about which path AI will take. We must focus on developing this technology in order to ensure that it is the most beneficial for society, but its evolution, I believe, will not be stopped by society’s legitimate interests, but rather by global pressures, where whoever is faster in presenting solutions that meet emerging demands will win this race.
rachel saidi • Nov 25, 2024 at 5:32 pm
I predict that as AI becomes more entrenched in our lives, it will actually blur the lines between humanities and STEM fields. Not only will we all need to understand AI and the ethical implications, but we will also be interacting with positive and negative results. I commend Prof. Jenkins for facilitating this event and Ash Ibasan for writing a wonderful article highlighting the presentations.
Viktoriia Lyon • Nov 25, 2024 at 1:55 pm
I am happy to see that people are concerned about AI issues. Thank you for reporting.
To Bede Continued... • Nov 22, 2024 at 3:27 pm
This is a very touching article. I loved the connection between Widmarck’s significance and the ethical concerns surrounding AI.
Especially in an evolving and advancing society, concerns about AI should be emphasized. People don’t like being told that there are perceptions that all AI users abuse it for cheating, daily overreliance, and even forging a romantic relationship, but rather, some use it as an assistive tool that is a boon to their lives and careers.
Hopefully, your article can reduce the perception that AI advancement is the devil’s advocate by outweighing its cons with the benefits of utilization, considering the current public stance and its sophistication when embedded into smart devices.