Event Description
Do Natural Language Understanding Systems Learn to Understand or to Find Shortcuts? (Naoya Inoue, http://naoya-i.github.io/)
ABSTRACT: Recent studies have suggested that natural language understanding (NLU) systems learn to exploit superficial, task-unrelated cues (a.k.a. annotation artifacts) in current datasets. This prevents the community from reliably measuring the progress of NLU systems. In this talk, I will discuss two recent studies from our research team: (i) an analysis of annotation artifacts in commonsense causal reasoning and (ii) the creation of a benchmark for evaluating NLU systems' internal reasoning.
---------------------------------------------------------------------------------------------------------------------------------------------
Learning graph-structured sparse models (Baojian Zhou, https://baojianzhou.github.io/)
ABSTRACT: Learning graph-structured sparse models has recently received significant attention thanks to its broad applicability to many important real-world problems. However, such models, while more effective and more interpretable than their unstructured counterparts, are difficult to learn due to optimization challenges. In this talk, we will discuss how to learn graph-structured sparse models in stochastic and online learning settings. Some interesting related problems will also be discussed.
