WordGesture-GAN Paper Wins SIGCHI 2023 Award

The Association for Computing Machinery (ACM) CHI '23 Conference brings together researchers and practitioners from around the world with the goal of making the world a better place through interactive digital technologies. This year, CHI '23 received over three thousand paper submissions, of which only the top five percent were chosen to receive an award.

Jeremy Chu presenting at SIGCHI

The paper, WordGesture-GAN, written by Department of Computer Science PhD students Jeremy Chu, Dongsheng An (graduated), Yan Ma, and Wenzhe Cui (graduated) with professors David Gu and Xiaojun Bi, won an Honorable Mention Award.

The team detailed a Generative Adversarial Network (GAN)-based technique for synthesizing realistic word gestures to evaluate gesture typing keyboards. A word gesture is a continuous stroke a user draws on a virtual keyboard, connecting letters to spell out a word. Gesture input is becoming an increasingly popular way of typing on virtual keyboards, and WordGesture-GAN can both generate realistic word gestures and predict input performance. The work aims to advance the state of the art in generative modeling of human word-gesture production.

Gesture typing keyboard

Along with collaborator Shumin Zhai from Google, the authors explored deep learning as a tool for modeling human gesture-typing behavior. They created WordGesture-GAN, which can predict interaction behaviors beyond what lab tests and field studies reveal. Such models are crucial to the development of user interfaces, particularly for language input technologies such as word-gesture input.

Congratulations to all!

-Kimberly Xiao

Homepage photo credit: L. Azevedo/Shutterstock