Excellence in STEM with Anaelia Ovalle

IEEE Computer Society Team
Published 07/05/2023

Concluding our Pride Month series of Excellence in STEM, we engage with Dr. Anaelia Ovalle as they provide a glimpse into the challenges faced by marginalized individuals in the field.

Discover firsthand accounts of barriers to inclusion, such as encounters with influential academics attempting to impede progress and instances of unconscious bias in collaborative environments.

Gain a deeper understanding of the importance of cultural empathy and the transformative potential of embracing diversity in technology.

 

What is your definition and meaning of equity, diversity, and inclusion in the context of computer science and engineering?


I think there are several domains in which to think about this. All in all, DEI means distributing power to those historically marginalized.

In producing research: my definition of DEI means engaging with and exercising sociotechnical vigilance across the task definition, the methodology I choose, the metrics I use, and which groups I evaluate with respect to, because I know that these decisions reflect an exercise of power. In the context of CS/engineering, it means I build systems mindful of the hegemonies that dominate my worldview. To do this, I try to challenge myself by pushing past the literal objective and asking how this technology provides a (dis)service to underrepresented communities.

Colleagues: it means looking out for colleagues by sending opportunities their way, even if those opportunities initially came to me. For example, I recognize that I’m in a position of privilege when a company reaches out to me for a position and I already have something going on. I can get really in my head about applying to opportunities. I can’t tell you the number of times I applied to something only because someone sent it my way and said, “Hey, I think you’d be really good for this.” So when I know someone is looking for internships and there’s solid skill alignment, I just politely tell recruiters my status and ask if they’d be open to a referral, which they usually are. It’s a win-win for both the colleague and the recruiter.

Scholarly production: it means sharing academic hard capital — papers — with collaborators in various ways and forms.

 

What barriers to inclusion have you experienced throughout your career?


I’ve had powerful academics make moves to try to block me from opportunities in both overt and covert ways. Barriers manifest not only in what is said but in what is left unsaid. It was ugly and scary, and it left me feeling completely dejected about my prospects as a Ph.D. student. I’m really lucky to have a strong community to lean on when things get rough, and it buoyed me throughout. At the end of the day, something to get very clear about is that these individuals are only a symptom of the broader harmful socialization academia instills if left unchecked: a hunt to concentrate power via the mantra “publish or perish.” These socializations, across academic institutions, are things we absolutely need to become and stay mindful of, and actively unlearn as we go through our day to day; otherwise, we researchers are doomed to perpetuate them. Yes, I have papers and a dissertation to write, but I have to remind myself to slow down and consistently choose to center my relationships with other researchers, not just what they bring to a paper.

As the lead author on a project and the only AFAB person (the others were AMAB), I’ve had colleagues look to an AMAB co-author for direction, even if he’d just repeat exactly what I had said. I don’t know if they were doing it intentionally or inadvertently, but it is what it is. And it’s the wildest thing. I know how to get things done, but let’s get something straight: there’s a PRICE TO PAY (mentally, physically, emotionally), regardless of how I respond, whether I choose to address it, escalate it, or ignore it. And it’s a price I didn’t consent to pay.

 

What are 1-2 ways the computing community can work together to prevent these experiences from occurring to future professionals?


I’ll share two things: one at the system level and one at the individual level.

System level: Academic institutions need to have very clear, codified incentives in place to support diverse students. Specifically, this means prioritizing the reduction of POC student attrition, not just maximizing the recruitment of diverse students. What does this look like, concretely? Chairs, deans, professors on tenure committees, and anyone else in a position of power across research universities can tie more student-centric metrics (e.g., POC attrition rate in the lab) to professor promotions and tenure. Institutions can also regularly gather critical feedback on the barriers that keep underserved students from self-actualizing.

Individual level: Dismantle the systems of power that serve you in STEM academia and industry by sharing that power with others. For example, the STEM field is predominantly male, White, and Asian. Per the Pew Research Center, “Asian and White students remain overrepresented among STEM college graduates compared with their share of all college graduates in 2018… Women earned less than one-quarter of bachelor’s degrees in engineering (22%) and computer science (19%) and no more than about three-in-ten master’s or research doctoral degrees in these fields as of 2018”. Now imagine if those same dominant groups proactively reached out to provide resources, knowledge, and training to underserved communities so that they are best equipped to navigate academia and industry. I’m not saying this doesn’t already happen at some level, but it needs to happen more, and at different professional resolutions (students, professors-to-be, engineers, engineering managers, etc.).

 

A lack of understanding of others’ experiences may sometimes lead to unintended consequences. What recommendations can you make to the community to help them increase their understanding of your culture and/or background that would help individuals feel more welcomed?


I recommend a few things.

1) Understand your power and its impact on others. Are you a GSR, a master’s or Ph.D. student, a lecturer, an assistant professor, an associate professor? Let’s say I’m a professor, at some level, at an R1 university, running a research lab in materials science. I must understand that I am solely responsible for setting the lab culture. My students look to me for how they should operate, and it’s my job to set an example that doesn’t end up forcing them to leave or feel exploited. I understand that my historically underrepresented and international students will have a different experience than those whom the STEM field mainly reflects. And I understand they may be less likely to push back because they depend on me to keep their funding/visa, so I highly encourage feedback and asking for help. Yes, I need to produce research. Yes, I need to apply for grants. And it is my job to ensure I show up for my students in the ways they uniquely need so they get the education they need to self-actualize and do the research they care about. I am committed to my students, and it is wise for me to do my best to minimize attrition, especially when my grants are DEI-based or include anything related to reporting on the diversity of my lab.

2) If you don’t know how power is inherently linked to DEI initiatives, aren’t aware of how you use your own power, or just want to learn more, learn about intersectionality. For example, Dr. Lisa Bowleg’s Intersectionality Training Institute was a game-changer for me. It equipped me with a new form of analytical thinking and vocabulary, allowing me to understand which specific experiences perpetuate harm across historically underserved populations. I didn’t know this training would help me throughout my career, in scholarly production and beyond. Put simply, gaining awareness helps me not do the thing.

3) Proactively expand your cultural competency. This can look like attending trainings and conferences led by non-dominant groups to learn how they operate and experience disempowerment. Again, gaining awareness helps me not do the thing. Even more so, it helps me educate others on how to best operate, too.

 

Can you share an example from your education or career experiences where diverse voices had, or could have had, a significant impact on a project?


Where it did have a significant impact on a project: I will interpret “diverse” voices as those voices coming from historically marginalized communities. A paper I worked on comes to mind – Queer In AI: A Case Study in Community-Led Participatory AI. This work emerges from diverse voices in operationalizing intersectionality, decentralized organizing, and community-led initiatives toward AI advocacy. I can sit here and confidently say that it wouldn’t have been what it turned out to be (Best Paper @ ACM FAccT 2023), had we not centered the empowerment and self-advocacy of marginalized communities.

Where it could have had a significant impact on a project: Beyond a single project, an active conversation is taking place right now across several countries, including the US, on how to regulate artificial intelligence (AI). Who should governments be protecting in the deployment of AI, and to what extent? The answer to this question, unsurprisingly, will depend on who you ask. If you ask me, based on my opinion and expertise, regulators must look to historically marginalized communities for these answers, as countless studies show that AI-driven technologies propagate negative societal attitudes, representational and allocational harms, and erasure throughout these communities [1-5].

 

Given the importance of computer science and engineering becoming and being a more diverse and inclusive community, we strive to hear the perspectives of persons from equity-seeking populations. What are 1 or 2 ways in which such diverse perspectives and experiences can be solicited and heard without making the persons who share them possibly feel tokenized or otherwise made uncomfortable?


This is such a good question. I think it’s really important to understand that soliciting these experiences is a form of work. I’ll speak for myself here. I need to not only put in the time but also tap into my own historically disempowered spaces to give solicitors what they want, which is also draining and can leave me running on empty. This “running on empty” while another party got the information they needed is, in my opinion, what leads to feeling tokenized. Solution? People should be compensated for their work (e.g., guaranteed exposure, capital, etc.). As long as the soliciting party makes it clear that this is a form of work and the person will be appropriately compensated, it feels much less tokenizing and more like we are operating on the same page.

 

Learn More About Dr. Anaelia Ovalle


Anaelia Ovalle (they/them) is an Afro-Caribbean, queer, and non-binary Ph.D. candidate in Computer Science at the University of California, Los Angeles, and a Eugene Cota-Robles Fellow. Anaelia researches AI-driven language technologies, who they best serve, and who they may end up leaving out. Advised by Prof. Kai-Wei Chang, Anaelia centers algorithmic fairness and AI ethics praxis. With particular emphasis on impacts on historically marginalized communities, they necessarily operate at two resolutions: (1) inclusive natural language processing and representation learning (e.g., what does nonbinary exclusion mean and look like in a language context?) and (2) expanding AI ethics through intersectionality and participatory design (e.g., whose voices speak loudest in the framing of a machine learning task?). Their research synergizes algorithmic fairness and critical social theory to guide approaches to mitigating AI-driven sociotechnical harms. Anaelia has previously interned with several Responsible AI research teams, including those at Meta, Amazon Prime Video, and Amazon Alexa. Prior to starting their Ph.D., they received a BS magna cum laude in Data Science from the University of San Francisco.

 

References


[1] Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P. S., … & Gabriel, I. (2021). Ethical and social risks of harm from language models. arXiv preprint.

[2] Abid, A., Farooqi, M., & Zou, J. (2021). Persistent anti-Muslim bias in large language models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.

[3] Dev, S., Monajatipoor, M., Ovalle, A., Subramonian, A., Phillips, J. M., & Chang, K. W. (2021). Harms of gender exclusivity and challenges in non-binary representation in language technologies. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.

[4] Sun, T., Gaut, A., Tang, S., Huang, Y., ElSherief, M., Zhao, J., … & Wang, W. Y. (2019). Mitigating gender bias in natural language processing: Literature review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.

[5] Ovalle, A., Goyal, P., Dhamala, J., Jaggers, Z., Chang, K.-W., Galstyan, A., … & Gupta, R. (2023). “I’m Fully Who I Am”: Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1246–1266. Chicago, IL, USA. doi:10.1145/3593013.3594078