What’s hot right now? Generative artificial intelligence, ChatGPT, and other AI tools. AI is everywhere, automating anything associated with coding, content, and writing. Depending on whom you ask, this is an exciting or a frightening period, shaped by one’s frame of reference and history.
How quickly has AI been adopted? ChatGPT reached 100 million users just two months after launch, faster than any consumer application before it. About half of U.S. employees report using AI and ChatGPT for coding, hiring, customer service, content creation, and productivity automation. With such an accelerated start, the U.S. government and corporations must be mindful and intentional in ensuring that accessible, ethical, and inclusive practices cover all demographic groups.
I recently served as senior director of diversity and inclusion solutions at SilkRoad Technology. That role gave me a well-rounded perspective on technologies such as AI and ChatGPT: I helped build our own internal diversity, equity, inclusion, and belonging (DEIB) strategy and drove the adoption and direction of the talent management technology solutions we provided to our enterprise clients. From that experience, I identified several ways companies, innovators, and dominant groups can ensure that AI does not hurt DEI.
1. Learn from the mistakes of the past.
History has shown that technological innovations tend to widen the gap between wealthy and low-income communities because of unequal access, knowledge, and resources. The COVID-19 pandemic offered a stark example: students from low-income households fell behind in school because of poor infrastructure and limited access to the technologies remote learning required.
So, how can we avoid another digital divide? The way I see it, the U.S. government must intervene to guarantee access for all public schools in underserved communities. Suppose certain features of OpenAI’s tools remain restricted to premium users: if we do not genuinely open AI to everyone, its resources will be distributed inequitably, widening learning gaps in underserved communities.
2. Focus on the data.
Humans, not machines, make AI. That might sound like a cliché or a line from a movie such as “The Terminator,” but it is not. Solve the problem at the source by using diverse, inclusive data sets to produce more predictable outcomes. AI systems are fed data, and if the data set is biased, the output will be too.
Assume there will be flaws. Consistently evaluate and iterate when results show potentially biased outcomes. Embed representative data into applications from the beginning so that no demographic group is left behind or unintentionally marginalized.
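To make the evaluate-and-iterate idea concrete, here is a minimal sketch of a representation audit on a training set. The function, field names, and benchmark figures are hypothetical, not a reference to any particular company’s tooling; the point is simply that comparing each group’s share of the data against an expected share makes under-representation measurable before a model is trained.

```python
from collections import Counter

def representation_audit(records, group_key, benchmark):
    """Compare each group's share of a data set against a benchmark share.

    `records` is a list of dicts; `group_key` names the demographic field;
    `benchmark` maps group -> expected proportion (e.g. census figures).
    Returns the groups whose share falls short of the benchmark, with the gap.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in benchmark.items():
        actual = counts.get(group, 0) / total if total else 0.0
        if actual < expected:
            gaps[group] = round(expected - actual, 3)
    return gaps

# Hypothetical example: a training set that underrepresents one group.
data = [{"gender": "male"}] * 70 + [{"gender": "female"}] * 30
print(representation_audit(data, "gender", {"male": 0.5, "female": 0.5}))
# → {'female': 0.2}
```

An audit like this would run on every refresh of the data set, so drift toward a biased sample is caught early rather than discovered in the model’s outputs.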
3. Use AI to create equitable outcomes for underserved communities.
AI can increase equity and advance DEI in both the public and private sectors. Here are several ways AI can be used across these channels:
- AI can be used to reduce bias in the hiring process and advance promotional opportunities for underrepresented employees.
- AI can create a more inclusive workplace by conducting audits that surface gaps in representation across groups.
- For government roles, AI can help create equity by delivering quality services to underserved populations adversely affected by systemic socioeconomic factors such as crime, poverty, unemployment, and inadequate education and health care, as well as by racial disparities.
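One common way to reduce bias in hiring, alluded to in the first bullet above, is “blind” screening: stripping identity fields from an application before it reaches a reviewer or ranking model, so decisions rest on qualifications rather than demographic signals. The sketch below uses hypothetical field names and is an illustration of the general technique, not any specific vendor’s product.

```python
# Fields that carry demographic signals and should not reach the screener.
# This list is illustrative; a real system would be far more thorough.
IDENTITY_FIELDS = {"name", "gender", "age", "photo_url", "address"}

def anonymize(application):
    """Return a copy of the application with identity fields removed."""
    return {k: v for k, v in application.items() if k not in IDENTITY_FIELDS}

candidate = {
    "name": "Jordan Smith",
    "gender": "female",
    "years_experience": 7,
    "skills": ["Python", "SQL"],
}
print(anonymize(candidate))
# → {'years_experience': 7, 'skills': ['Python', 'SQL']}
```

Redaction alone does not remove proxies for demographics (such as school names or zip codes), which is why it is paired with the audits described above.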
It’s important to note that some companies already use AI to advance DEI. One example is neutralizing potential bias in job descriptions to attract a broader applicant pool. This is done by embedding gender-neutral language data sets in the core code and eliminating words and phrases that might deter candidates from underrepresented populations.
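The job-description example above can be sketched in a few lines. The replacement map here is a small hypothetical sample, not a real product’s data set; production tools use much larger curated lexicons and language models rather than simple substitution.

```python
import re

# Hypothetical map of gender-coded or exclusionary terms -> neutral alternatives.
NEUTRAL_TERMS = {
    "salesman": "salesperson",
    "chairman": "chairperson",
    "manpower": "workforce",
    "he/she": "they",
    "rockstar": "skilled professional",
}

def neutralize(text):
    """Replace gender-coded words in a job description with neutral terms."""
    def swap(match):
        word = match.group(0)
        replacement = NEUTRAL_TERMS[word.lower()]
        # Preserve the capitalization of the original word.
        return replacement.capitalize() if word[0].isupper() else replacement

    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, NEUTRAL_TERMS)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(swap, text)

print(neutralize("The ideal salesman is a rockstar with strong manpower skills."))
# → "The ideal salesperson is a skilled professional with strong workforce skills."
```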
Another example is using AI in simulated, behavior-based inclusion training to identify and address specific areas of unconscious bias within organizations. The public sector is also using AI as a communication tool, creating chatbots that improve the experience and accessibility of services for underserved communities.
By following these steps, society as a whole can ensure that AI serves as a conduit and enabler for DEI, creating equitable outcomes and inclusion within the U.S.