AI Star Focuses on Heart and Soul
Dr. Rumman Chowdhury, a star in the field of artificial intelligence, is Global Lead for Responsible AI at Accenture Applied Intelligence. She emphasizes ethics and transparency in her cutting-edge research and search for solutions, providing a breath of fresh air in the charged discussions about AI. The MIT and Columbia grad has received wide media coverage and has been named by Forbes as one of ‘Five Who Are Shaping AI.’
What first drew you to the field of artificial intelligence?
What fascinates me about artificial intelligence, data science, and analytics is the ability to find broad patterns of human behavior, from which we can begin to understand aggregate movements. It’s possible to improve people's lives.
Why should one care about the ethics related to AI?
Models we use in AI are exactly that—models, a representation of the real world. In creating that representation, we make broad assumptions, and these assumptions about the data and the people behind the collection and curation of data are where bias can be introduced. Ethics is necessary at all stages of AI development: when conceptualizing an application or product, collecting and transforming data, selecting, training, and testing models, and using those models in the real world.
AI is already all around us, sometimes visibly but mostly invisibly. It impacts the movies we see, the songs we listen to, the jobs we are interviewed for, whether or not we receive the mortgages we apply for, and much more. Ethics matters because AI is applied to a nuanced world.
How close to reality is the gloom-and-doom scenario so often painted in the news headlines about AI?
Most of us who work in the space of ethics and tech are actually optimists about this technology—really! We imagine a world in which this advanced technology is being used to solve long-standing problems and improve the human condition, rather than perpetuate existing social, cultural, and institutional biases. Getting there requires raising awareness, and then doing the hard work involved to change course.
Why do we need plenty of nuance in this discussion?
AI, data science, and advanced analytics are used to find generalized patterns of human behavior using imperfect representations of the real world. However, our reality, our individual day-to-day, is anything but generalized; it is specific to our lives. Nuance and context are required to understand how a broad-brushstroke model is applied in a real-world setting that may be unfamiliar to those 'generalized' examples.
How can our readers help mold this discussion, or what can they do to help create a more equitable landscape?
There are so many ways to be engaged. First, I want to emphasize that non-technologists have a role to play as well.
If you work in a tech or tech-adjacent field and are able to raise the right kinds of questions, please do so! There are many groups out there for folks in all aspects of tech that have developed materials or rubrics to help guide these conversations.
If you're a consumer, be smart about what you choose to introduce into your life. Ask yourself what value you will receive, and what kind of information about you and your life the organization behind it will own. Learn more about transparency, privacy, and your rights.
How does one go about creating more ethical and transparent AI algorithms?
There are many ways. At Accenture Responsible AI, we shape our practice around four pillars: technical, organizational, operational, and reputational. "Technical" refers to our quantitative work in highlighting bias and discrimination in datasets and models. "Organizational" addresses the structure of the company, and how to have the right kind of structure to support the responsible and transparent use of AI. "Operational" refers to the governance of AI systems, not just on a project-by-project basis but overall within the organization. "Reputational" looks at the impact of the model when deployed in the real world, and also considers how best to communicate with the affected communities.
Poornima Apte, a widely published freelance writer, editor, and book reviewer, is based in the Boston area. This is her last Talk Time column, but she will continue to write for Khabar as an occasional contributor. Learn more at WordCumulus.com. Find her on Twitter @booksnfreshair.