I am a third-year Ph.D. student at the University of Southern California under the advisement of Dr. Jesse Thomason in the GLAMOR Lab. My work leverages linguistic theory to learn robust sign language models.
I have published computer science and linguistics research in the following areas:
I have taught computer science, from the elementary to the graduate level, since 2016. Topics include:
Outside of teaching, I enjoy sharing my research with peers in forums such as:
Understanding Knowledge and Intent
Understanding Face and Gesture
Understanding Emotion
I use state-of-the-art pose estimation and facial expression recognition to produce linguistically informed representations of American Sign Language phonemes, such as handshape and movement. I then use semi- and self-supervised learning techniques to approximate the meaning of phoneme combinations.
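As a rough sketch of the first step, the snippet below uses MediaPipe as a stand-in pose estimator to turn an image into per-hand landmark features. This is illustrative only; the estimator and the phonological representations in my actual pipeline may differ.

```python
import cv2
import mediapipe as mp
import numpy as np

# MediaPipe Hands as a stand-in pose estimator (assumption: any
# keypoint detector yielding 21 3-D hand landmarks would serve here).
hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=2)

def handshape_features(image_path: str) -> np.ndarray:
    """Flatten detected hand landmarks into one feature vector per hand."""
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    result = hands.process(image)
    if not result.multi_hand_landmarks:
        return np.empty((0, 63))  # no hands detected
    # Each hand yields 21 landmarks x (x, y, z) = 63 raw features.
    return np.stack([
        np.array([[lm.x, lm.y, lm.z] for lm in hand.landmark]).ravel()
        for hand in result.multi_hand_landmarks
    ])
```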
Many research papers in the social and behavioral sciences do not reproduce, constituting a crisis for the field. This NSF-funded project seeks to model and predict paper reproducibility. My contributions increase pipeline performance by leveraging psycholinguistic features such as emotionality and coherence, as well as pragmatic features like document structure.
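The sketch below illustrates only the general idea of feeding document-level features to a reproducibility classifier. The feature names, scores, and model are hypothetical placeholders, not the project's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature extractor: each paper is summarized by a few
# psycholinguistic and structural scores (placeholders for illustration).
def paper_features(emotionality: float, coherence: float,
                   n_sections: int, has_methods: bool) -> np.ndarray:
    return np.array([emotionality, coherence, n_sections, float(has_methods)])

# Toy training data: one row per paper, label 1 = reproduced.
X = np.stack([
    paper_features(0.2, 0.8, 7, True),
    paper_features(0.7, 0.4, 4, False),
    paper_features(0.3, 0.9, 8, True),
    paper_features(0.8, 0.3, 3, False),
])
y = np.array([1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)
# Probability that a new (toy) paper reproduces.
print(clf.predict_proba(paper_features(0.4, 0.7, 6, True).reshape(1, -1)))
```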
Pretrained language models like RoBERTa achieve state-of-the-art performance on comprehension, inference, and generation. But do they have the most basic level of common sense? This work operationalizes common sense via cloze testing and measures how disparate a language model's answers are. For example, asked to fill in "Leopards have ___ on their bodies.", a model supplies "scars" and "tattoos" before "spots", indicating significant confusion about what leopards look like.
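The cloze setup is easy to try with an off-the-shelf masked language model. Here is a minimal sketch using the Hugging Face transformers fill-mask pipeline; the paper's full evaluation harness is not shown.

```python
from transformers import pipeline

# Load RoBERTa as a masked language model for cloze testing.
unmasker = pipeline("fill-mask", model="roberta-base")

# RoBERTa marks the blank with its <mask> token; the pipeline returns
# the top fills with the model's confidence in each.
for prediction in unmasker("Leopards have <mask> on their bodies."):
    print(f"{prediction['token_str'].strip():>10s}  {prediction['score']:.3f}")
```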
I am currently looking for internship opportunities for Summer 2023. If my portfolio interests you, please email me at kezarlee[at]gmail[dot]com or click the button below.