Why AI provides a fresh opportunity to neutralize bias
Humans develop biases over time; we aren't born with them. Yet examples of gender, economic, occupational and racial bias exist in communities, industries and social contexts around the world. And while people are leading initiatives to fundamentally change these phenomena in the physical world, they persist and manifest in new ways in the digital world.
In the tech world, bias permeates everything from startup culture to investment pitches during funding rounds to the technology itself. Innovations with world-changing potential don’t get necessary funding, or are completely overlooked, because of the demographic makeup or gender of their founders. People with non-traditional and extracurricular experiences that qualify them for coding jobs are being screened out of the recruitment process due to their varied backgrounds.
Now, I fear we’re headed down a similar path with Artificial Intelligence. AI technologies on the market are beginning to display intentional and unintentional biases – from talent search technology that groups candidate resumes by demographics or background to insensitive auto-fill search algorithms. It applies outside of the business world as well – from a social platform discerning ethnicity based on assumptions about someone’s likes and interests, to AI assistants being branded as female with gender-specific names and voices. The truth is that bias in AI will happen unless it’s built with inclusion in mind. The most critical step in creating inclusive AI is to recognize how bias infects the technology’s output and how it can make the ‘intelligence’ generated less objective.
We are at a crossroads.
The good news: it's not too late to build an AI platform that conquers these biases with a balanced data set from which AI can learn, and to develop virtual assistants that reflect the diversity of their users. This requires engineers to responsibly connect AI to diverse and trusted data sources to provide relevant answers, make decisions they can be accountable for and reward AI based on delivering the desired result.
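One small, concrete part of building on a balanced data set is auditing the training data before a model ever sees it. The sketch below is a minimal, hypothetical example of such an audit, assuming a simple tabular dataset of records with a demographic attribute; `audit_group_balance` and the toy resume-screening data are illustrative names, not an existing library or a real dataset.

```python
from collections import Counter

def audit_group_balance(records, group_key, tolerance=0.2):
    """Report each group's share of a dataset and flag any group whose
    share deviates from an even split by more than `tolerance`.

    This is a hypothetical helper for illustration, not a complete
    fairness audit: real audits also consider label balance,
    intersectional groups and proxy features."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    expected = 1.0 / len(counts)  # share each group would have in an even split
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 3),
            "flagged": abs(share - expected) > tolerance,
        }
    return report

# Toy example: a resume-screening training set skewed 80/20 by gender.
training_set = (
    [{"gender": "female"}] * 20 +
    [{"gender": "male"}] * 80
)
report = audit_group_balance(training_set, "gender")
```

On this skewed toy set, both groups are flagged because each deviates from the 50/50 split by 0.3, beyond the 0.2 tolerance. Checks like this don't remove bias on their own, but they make the imbalance visible before it is baked into the "intelligence" the model generates.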
Broadly speaking, attaching gendered personas to technology perpetuates stereotypical representations of gender roles. Today, we see female-presenting assistants (Amazon's Alexa, Microsoft's Cortana, Apple's Siri) being used chiefly for administrative work, shopping and household tasks. Meanwhile, male-presenting assistants (IBM's Watson, Salesforce's Einstein, Samsung's Bixby) are being used for grander business strategy and complex, vertical-specific work.
I believe AI developers should take gender out of the virtual assistant picture completely. Give virtual assistants a personality. Give them a purpose. But let’s not give them a gender. After all, people use virtual assistants to access vital, relevant and sometimes incredibly random information. Assigning a gender adds no value to the human benefits found in this brand of technology.
The most human step in taking bias out of the equation is hiring a diverse team to code the AI innovations of tomorrow. Homogeneity limits and dilutes innovation. It's absolutely vital for AI developers and innovators to hire talent from different cultures, backgrounds and educational pedigrees. AI engineers who build teams of people who approach challenges from different perspectives and embrace change will be more successful in creating AI that addresses real-world business and consumer issues. The central goal of the AI community should be to build technologies that truly achieve diversity, inclusion and, ultimately, full equity through utility.
Ultimately, I think that AI presents the world (no exaggeration) with an opportunity to correct the all-too-human tendency toward both intentional and unconscious biases. In the tech world, this extends to humans interacting with technology in daily life. It impacts markets embracing new innovations, companies hiring from a diverse talent pool and venture capitalists listening to early-stage investor pitches without prescreening who is delivering them. If humans can ethically and responsibly build, and continue to innovate upon, unbiased AI, they will play a small but significant role in using technology to shift society in the necessary direction of acceptance and equality.
Kriti Sharma is the vice president of AI at Sage Group, a global integrated accounting, payroll and payment systems provider. She is also the creator of Pegg, the world’s first AI assistant for accounting, with users in 135 countries.