Optimization and Machine Learning - presented by Yifan Sun

Abstract: Optimization is a growing topic of interest in the machine learning community. It starts out as an option to select in TensorFlow (SGD? Adam? Adagrad?), but as we dig into the how and why of these options, we uncover many fundamental principles relating to operations research, control theory, and dynamical systems, dating back as far as the Cold War era.

In this talk I will give a broad overview of some of the important optimization themes in machine learning. I will draw connections between the tools we are used to seeing in popular packages and fundamental optimization concepts such as duality, convexity, and contractive operators. While we cannot hope to cover this diverse research area completely, I hope to provide a glimpse of an exciting field that is permeating ever more of the machine learning world.
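As a concrete illustration of the "options to select" mentioned in the abstract, the sketch below implements plain SGD and Adam from scratch on a one-dimensional quadratic. This is a hypothetical example for intuition, not material from the talk; the learning rates, step counts, and the toy objective f(w) = (w - 3)^2 are all chosen here for illustration.

```python
# Minimal sketch (hypothetical, not from the talk): the optimizer choices
# exposed in frameworks like TensorFlow are all variants of the basic
# gradient step. Here we minimize f(w) = (w - 3)^2 with SGD and with Adam.

def grad(w):
    return 2.0 * (w - 3.0)  # derivative of (w - 3)^2

def sgd(w, lr=0.1, steps=100):
    """Plain gradient descent: w <- w - lr * grad(w)."""
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def adam(w, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Adam: adaptive steps from running moment estimates of the gradient."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g       # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g   # second-moment estimate
        m_hat = m / (1 - beta1 ** t)          # bias correction
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (v_hat ** 0.5 + eps)
    return w

print(sgd(0.0))   # converges near the minimizer w = 3
print(adam(0.0))  # also converges near w = 3, via adaptive step sizes
```

Both methods reach the same minimizer here; the point of the comparison is that Adam rescales each step by gradient statistics, which is what distinguishes the menu items in the framework's optimizer dropdown.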

Bio: Yifan Sun received her PhD in Electrical Engineering from the University of California, Los Angeles in 2015, with research focusing on convex optimization and semidefinite programming. She then worked at Technicolor Research and Innovation, focusing on machine learning and data science applications. More recently, she completed two postdocs focusing on optimization, at the University of British Columbia in Vancouver, Canada, and at INRIA in Paris, France.

The Center of Excellence in Wireless and Information Technology (CEWIT) will host the 16th International Conference on Emerging Technologies for a Smarter World (CEWIT2020) virtually on November 5, 2020. The conference will center on the four major fields which are penetrating our business and personal lives: Machine Learning, Artificial Intelligence, Blockchain and Computational Medicine. For more info visit: https://www.cewit.org/.

Virtual Job Fair for New Stony Brook Graduates & Experienced Alumni
Using a platform called Career Fair Plus, participants will be able to schedule 10-minute video meetings with participating employers of interest to them.
Recent graduates and alumni can register, and learn more about how the fair will be run, on Handshake.

Hieu Le presents Incorporating Physical Illumination Constraints into Deep Learning Shadow Detection and Removal (PhD Proposal)

Shadows provide useful cues for analyzing a scene but also hamper many computer vision algorithms such as image segmentation, object detection, and tracking. For these reasons, shadow detection and shadow removal are well-studied topics in computer vision. Early approaches focused on physical illumination models of shadows. These methods can express, identify, and remove shadows in a physically plausible manner, but they are often hard to optimize and slow at inference due to their reliance on hand-designed image features. Recent deep-learning approaches, on the other hand, have achieved breakthrough performance in both shadow detection and removal. They learn to extract useful features automatically through training while being extremely efficient computationally. However, these models are data-dependent and opaque, and they ignore the physical aspects of shadows.

We propose to incorporate physical illumination constraints into deep-learning frameworks, so that the mapping learned by the deep network closely follows the physics of shadows, enabling the network to systematically and realistically modify shadows in images. For shadow detection, we present a novel GAN framework in which the generator produces realistic images with attenuated shadows that can be used to train a shadow detector. For shadow removal, we propose a method that uses deep networks to estimate the unknown parameters of a shadow image formation model that removes shadows. The system outputs high-quality shadow-free images with no artifacts and achieves state-of-the-art shadow removal performance. Lastly, we propose a system trained without any shadow-free images, in which physical constraints play a pivotal role in enabling the training of the networks.
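To give a sense of what an illumination-based "shadow image formation model" with learnable parameters might look like, the sketch below uses a simple per-region affine relighting model, I_free = w * I_shadow + b. This specific form and its parameter values are assumptions for illustration; the proposal's actual model and the networks that estimate its parameters are not reproduced here.

```python
# Hypothetical sketch of an affine shadow formation model: each shadowed
# pixel intensity I_shadow maps to a shadow-free value via I_free = w * I_shadow + b.
# In a learning-based system, a deep network would regress (w, b) per image
# or per region; here they are fixed constants purely for illustration.

def remove_shadow(pixels, w, b):
    """Relight shadowed pixel intensities using estimated parameters (w, b)."""
    return [min(1.0, w * p + b) for p in pixels]  # clamp to the valid [0, 1] range

shadowed = [0.10, 0.15, 0.12]                 # dark pixel intensities in [0, 1]
relit = remove_shadow(shadowed, w=3.0, b=0.05)
print(relit)  # brightened values approximating the shadow-free region
```

The appeal of constraining a network to such a physical model is that the output is forced to be a plausible relighting of the input rather than an arbitrary image-to-image mapping.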

For Zoom information, please email events@cs.stonybrook.edu.

Do Natural Language Understanding Systems Learn to Understand or to Find Shortcuts? (Naoya Inoue, http://naoya-i.github.io/)

ABSTRACT: Recent studies have suggested that natural language understanding (NLU) systems learn to exploit superficial, task-unrelated cues (a.k.a. annotation artifacts) in current datasets. This prevents the community from reliably measuring the progress of NLU systems. In this talk, I will discuss two recent studies from our research team: (i) an analysis of annotation artifacts in commonsense causal reasoning and (ii) the creation of a benchmark for evaluating NLU systems' internal reasoning.
---------------------------------------------------------------------------------------------------------------------------------------------
Learning graph-structured sparse models (Baojian Zhou, https://baojianzhou.github.io/)

ABSTRACT: Learning graph-structured sparse models has recently received significant attention thanks to its broad applicability to many important real-world problems. However, such models, while more effective and more interpretable than their unstructured counterparts, are difficult to learn due to optimization challenges. In this talk, we will discuss how to learn graph-structured sparse models in stochastic and online learning settings. Some interesting related problems will also be discussed.

The overall purpose of this seminar is to bring together people with interests in Computer Vision theory and techniques and to examine current research issues. This course is appropriate for people who have already taken a graduate Computer Vision course or who have research experience in Computer Vision. To enroll in this course, you must either: (1) be in the PhD program or (2) receive permission from the instructors.

Each seminar will consist of several short talks (around 10 minutes each) by different speakers. Students can register for 1 credit of CSE 656. Registered students must attend and present a minimum of two or three talks. Everyone else is welcome to attend. Fill in https://forms.gle/pCVXovgfMfQwGqG38 to subscribe to our mailing list for further announcements.