By Mya Trujillo
Although artificial intelligence (AI) has become an effective tool in many people's daily lives, the technology, which can mimic human expression with a few keystrokes, has the potential to be either a great benefit or a great detriment to the planet, depending on how it is regulated and used.
With AI rapidly evolving and reshaping the digital landscape, its adoption in academia, business practices and media requires scrutiny to ensure humans don't become dependent on it for thought. As AI use for email management, data analysis, chatbot systems and more becomes normalized, experts and professionals have put frameworks, committees and educational practices in place to encourage the technology's intentional and responsible adoption.
“We don’t want to have a whole cadre of AI zombies, where we’re just repeating without thinking,” Dr. Talitha Washington, executive director of Howard University’s (HU) Center for Applied Data Science and Analytics (CADSA), told The Informer.
Washington co-chairs Howard's AI Advisory Council, which was launched in June 2024 to advise the university's president on how the technology is transforming higher education. Since its creation, the council has also worked to create AI and tech-centered opportunities and partnerships for the historically Black university.
Such boards and councils are becoming more popular and necessary as institutions around the world continue to incorporate AI into their practices. To actively work toward a future of safe and inclusive digital spaces, practices and tools, the United Nations established its Independent International Scientific Panel on AI in late 2024 — the first global body of its kind.
Panel members will develop a scientific understanding of how these technologies are reshaping the world and how to ensure they benefit humanity by fostering peace, security, human rights and sustainable development. During the entity's first meeting on March 3, U.N. Secretary-General António Guterres said its work will help strengthen global coordination and innovation.
“The world urgently needs a shared, global understanding of artificial intelligence, grounded not in ideology, but in science; not in fake news, but in knowledge,” Guterres said in his remarks. “Like AI itself, this panel is in a race against time.”
‘We Have to Make Sure That AI is Being Fair’
Responsible AI use, grounded in an understanding of its impact on people's lives, is a key factor in guaranteeing an equitable future and a smooth transition into a more digital age. Another factor in fair technology systems is acknowledging and reducing the biases in generative AI.
All large language models (LLMs) are trained on vast datasets, which they use to process language and generate text. Generative pre-trained transformers (GPTs), the models behind generative AI systems like modern chatbots, are among the largest and most capable LLMs. Because their development requires so much data, there is a persistent risk of human biases making their way into these models and, in turn, surfacing as discriminatory information or language in their output.
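For technically inclined readers, the minimal sketch below illustrates the loop these models run: given a prompt, the model repeatedly predicts the next most likely token based on patterns in its training data. It uses the small, publicly available GPT-2 model through Hugging Face's transformers library as a stand-in; it is an illustration of the technique, not the system behind any particular chatbot.

```python
# A minimal sketch of how a GPT-style model generates text, using the small,
# publicly available GPT-2 model via Hugging Face's `transformers` library.
# Assumes `pip install transformers torch`; the model choice is illustrative.
from transformers import pipeline

# Load a text-generation pipeline backed by GPT-2. Modern chatbot models
# work the same way, just at a vastly larger scale.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt one token at a time, each choice conditioned
# on everything written so far -- patterns absorbed from its training data.
result = generator(
    "Artificial intelligence can benefit society when",
    max_new_tokens=25,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```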
“We have to make sure that AI is being fair, and we also have to make sure there’s some sort of transparency … [and] trust involved,” Carey Digsby, an artist and business consultant who regularly uses AI, told The Informer. “In other words, the data that’s there, do we fully trust it’s been utilized to our benefit?”
A 2024 study published in the science journal Nature revealed that GPTs can harbor covert racial biases, absorbed from their training data, that seep into their outputs and decisions. The researchers found that these systems were more likely to deem people who use African American Vernacular English (AAVE) unemployable or to associate them with less prestigious jobs.
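The general idea behind this kind of audit can be sketched in a few lines of code. The example below is loosely inspired by the study's matched-guise approach but is not its actual code; the model (a small BERT) and the example sentences are stand-ins chosen for this sketch.

```python
# A rough illustration of probing a language model for dialect bias, loosely
# inspired by the matched-guise probing described in the 2024 Nature study.
# This is NOT the study's code: the model (bert-base-uncased) and the example
# sentences are stand-ins. Assumes `pip install transformers torch`.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Two sentences with the same meaning, one in Standard American English (SAE)
# and one in African American Vernacular English (AAVE).
guises = {
    "SAE": "I am so happy when I wake up from a bad dream.",
    "AAVE": "I be so happy when I wake up from a bad dream.",
}

# Ask the model to guess an occupation for each speaker; systematic
# differences between the two lists hint at learned dialect associations.
for dialect, sentence in guises.items():
    prompt = f'A person who says "{sentence}" works as a [MASK].'
    predictions = fill(prompt, top_k=5)
    jobs = [p["token_str"] for p in predictions]
    print(f"{dialect}: {jobs}")
```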
While there is no surefire way to eliminate biases in AI, gaining a better understanding of how preconceived notions can affect algorithms' operations is a crucial step in reducing their presence and effects. One way to work toward this kind of mitigation is education about how AI systems work and how they can be deployed without causing harm.
“The more you know about it, the better you can utilize it, and it can be … a tool for yourself,” Digsby told The Informer. “I think that’s one of the biggest issues is [that] many people fear AI instead of looking at it as…an advantage, whether it be in the job you’re currently working or possibly pushing you to another position or something else along the line.”
At Howard, the free version of Microsoft's Copilot, an AI assistant designed to boost productivity, is embedded in the university's Microsoft Office products and tools. The AI Advisory Council also hosts workshops to support the system's use and show students its potential, from interpreting text to helping with scheduling.
The university is also on track to launch its Fundamentals of AI certificate, intended to be accessible to all undergraduate, graduate and professional students. As it was being outlined, HU faculty approved three courses to be included in the certificate program: Introduction to AI Tools and Techniques; Ethical and Responsible AI; and AI in the Disciplines.
While many Americans are wary of AI, with 47% harboring little to no trust in the technology, according to the Pew Research Center, Washington hopes the future of AI will involve fixing faulty hardware, mitigating negative environmental impacts and maintaining a commitment to original human thought.
“At the end of the day, critical thinking will remain important,” Washington told The Informer. “Having a creative mind and being able to think outside the box will be important, and … original thought will remain paramount.”