The pandemic that has raged across the globe over the past year has shone a cold, hard light on many things—the varied levels of preparedness to respond; collective attitudes toward health, technology, and science; and vast financial and social inequities. As the world continues to navigate the covid-19 health crisis, and some places even begin a gradual return to work, school, travel, and recreation, it’s critical to resolve the competing priorities of protecting the public’s health equitably while ensuring privacy.

The extended crisis has led to rapid change in work and social behavior, as well as an increased reliance on technology. It’s now more critical than ever that companies, governments, and society exercise caution in applying technology and handling personal information. The expanded and rapid adoption of artificial intelligence (AI) demonstrates how adaptive technologies can intersect with people and social institutions in risky or inequitable ways.

“Our relationship with technology as a whole will have shifted dramatically post-pandemic,” says Yoav Schlesinger, principal of the ethical AI practice at Salesforce. “There will be a negotiation process between people, businesses, government, and technology; how their data flows between all of those parties will get renegotiated in a new social data contract.”

AI in action

As the covid-19 crisis began to unfold in early 2020, scientists looked to AI to support a variety of medical uses, such as identifying potential drug candidates for vaccines or treatment, helping detect potential covid-19 symptoms, and allocating scarce resources like intensive-care-unit beds and ventilators. Specifically, they leaned on the analytical power of AI-augmented systems to develop cutting-edge vaccines and treatments.

While advanced data analytics tools can help extract insights from a massive amount of data, the result has not always been more equitable outcomes. In fact, AI-driven tools and the data sets they work with can perpetuate inherent bias or systemic inequity. Throughout the pandemic, agencies like the Centers for Disease Control and Prevention and the World Health Organization have gathered tremendous amounts of data, but that data doesn’t necessarily represent accurately the populations that have been disproportionately and negatively affected—including Black, brown, and Indigenous people.


————

By: MIT Technology Review Insights
Title: Evolving to a more equitable AI
Sourced From: www.technologyreview.com/2021/05/18/1024910/evolving-to-a-more-equitable-ai/
Published Date: Tue, 18 May 2021 17:02:13 +0000
