Like any other professional field, academia suffers from social biases, i.e., preferences for certain social groups. In tech, gender and racial biases are especially prominent. Working in artificial intelligence, I often find myself the only woman in the room during project meetings. A few years ago, when I learned about the concept of implicit bias and took Harvard's Implicit Association Test, I began to realise why this was happening and why people (including myself) acted and reacted in certain ways.
Implicit biases are rooted in our unconscious stereotypes and shape our social preferences and behaviour. We want to believe that we are not racist or sexist. However, growing up in a particular cultural context makes unconscious biased reactions inevitable.
The deeper I dove into bias research, the more motivated I became to promote diversity and inclusion in STEMM. Although I'm not a social scientist, I started running interactive workshops on implicit bias for my university colleagues. Helping people become aware of their biases, and offering a safe space to discuss the topic, seemed a necessary first step.
Later on, I discovered the concept of targets and allies formulated by Valerie Aurora, a former Linux kernel developer. While most diversity initiatives aim to change the behaviour of those being discriminated against, Valerie focuses on those with privilege and teaches them how to act as allies. We all need to learn how to step up for each other, because each of us can be a target or an ally depending on the context. For example, in the gender context I'm a target, whereas in the LGBTQ or racial context I'm privileged.
These concepts and related practices are a powerful tool for changing our workplaces and communities. Implementing them in my work environment, and sharing them with scientists at other institutions, has already made a difference. I hope that my participation in Homeward Bound will give me further ideas on how to make academia more inclusive, and introduce me to a network of people motivated to work on it together.
HB4 participant Katja Ovchinnikova works on AI-driven precision medicine for cancer treatment at the Department for BioMedical Research of the University of Bern. On a volunteer basis, she carries out computational analysis of marine science data. Her application fields include language processing, robotics, computer vision, computational biology and, most recently, marine science and ecology. She has a deep interest in unconscious bias studies, social-political theories and ethics in science.