
Welcome


I am a research scientist working at the intersection of human-computer interaction, accessibility, creativity, privacy, and computer vision.

I design and implement studies that guide the development of technologies that support non-visual access to images, video, graphics, and art. I am committed to cultivating inclusive design practices and technologies for people with disabilities. #a11y

I am on the job market for industry and academic research positions. Check out my CV | Google Scholar | LinkedIn | Twitter


UX Research Portfolio

Personalizable Privacy-Preserving Visual Assistance Technology

User-centered research, content analysis, and dataset creation for the development of privacy-preserving techniques that identify and remove private information from blind people’s images by using few-shot learning. Learn more here.

This image shows thumbnails of 19 types of private information found in images taken by people who are blind. Some of the information is text-based, while other information can be classified as objects (e.g., Face, Pregnancy Test, Tattoo, License Plate, Credit Card, Computer Screen, Pill Bottle, Letter with Address).
People who are blind take images and send or share them with remote sighted assistants to answer their visual questions. As shown here, some of these images contain private information that blind people do not want humans to see or companies to collect and use to train artificial intelligence.

Intelligent and Context-Aware Image and Video Description

Mixed-methods research to identify the content blind people want in image descriptions and new metrics for evaluating descriptions authored by humans and AI to make images and videos accessible. Learn more here.

Diagram showing an image as input, along with a variety of scenarios (shopping, traveling, sharing information, researching, social interactions), which are fed into machine learning systems to output context-aware image descriptions.
Image descriptions provide non-visual access to digital pictures. They are written by humans and artificial intelligence. Most descriptions are one-size-fits-all. In this process diagram, we show an approach for creating image descriptions that are context-aware, i.e., responsive to where the image is found.

Inclusive Tactile Media Design, Production & Distribution

Design-based implementation research to increase the availability of accessible, multimodal learning resources for blind people and to cultivate a community of practice focused on inclusive and accessible design. Learn more here.

An image of a tactile graphic of a house and a mouse, 3D printed with green filament. Two hands touching the page.
Tactile pictures are important learning resources for multimodal learners. They enable blind children to access picture books and develop literacy. Yet, few tactile picture books are available. This image shows the first 3D-printed tactile picture, a proof of concept and a design probe.

Background


My academic background is highly interdisciplinary.

I have a Ph.D. in Technology, Media and Society (Ph.D. TMS), a Master of Science in Information and Communication Technology for Development (M.S. ICTD), and a Bachelor of Environmental Design and Planning (B.ENVD). I am a Rotary International Ambassadorial Scholar alum.

My work is supported by a 2020 National Science Foundation/Computing Research Association Computing Innovation Fellowship. I am mentored by Dr. Leah Findlater at the University of Washington/Apple. We work to Safeguard Private Visual Information.

I was previously a Bullard Postdoctoral Fellow at the University of Texas at Austin (2019-2020), mentored by Dr. Danna Gurari. We worked on the development of next-generation image description technologies with the Microsoft Research Ability Initiative.

Dr. Tom Yeh advised me during my Ph.D. at ATLAS, University of Colorado (2014-2019). We founded the Build A Better Book Project to broaden inclusion in the design of accessible media.


News