Predictable Autonomy for Cyber-Physical Systems by Stanley Bak, Safe Sky Analytics

ABSTRACT: Cyber-physical systems combine complex physics with complex software. Although these systems offer significant potential in fields such as smart grid design, autonomous robotics and medical systems, verification of CPS designs remains challenging. Model-based design permits simulations to be used to explore potential system behaviors, but individual simulations do not provide full coverage of what the system can do. In particular, simulations cannot guarantee the absence of unsafe behaviors, which is unsettling as many CPS are safety-critical systems.

The goal of set-based analysis methods is to explore a system's behaviors using sets of states, rather than individual states. The usual downside of this approach is limited scalability: set-based analysis methods typically work only for very small models. This talk describes our recent progress on improving the scalability of set-based reachability computation for LTI hybrid automaton models, some of which can apply to very large systems (up to one billion continuous state variables!). Lastly, we'll discuss the significant overlap between the techniques used in our scalable reachability analysis methods and set-based input/output analysis of neural networks.
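As a rough illustration of the set-based idea (a hypothetical sketch, not the tools from the talk): a zonotope, a set of the form {c + Σ aᵢgᵢ : aᵢ ∈ [-1, 1]}, maps exactly through a linear update x ↦ Ax, so one reachability step for a discrete-time LTI system costs only a few matrix-vector products, no matter how many states the set contains:

```python
def matvec(A, v):
    """Multiply matrix A (given as a list of rows) by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def reach_step(A, center, gens):
    """One reachability step: the image of the zonotope
    {center + sum a_i * g_i : a_i in [-1, 1]} under x -> A x
    is again a zonotope, with mapped center and generators."""
    return matvec(A, center), [matvec(A, g) for g in gens]

def support(center, gens, d):
    """Max of d . x over the zonotope: d.c + sum |d.g_i|.
    Comparing this bound against a threshold checks safety along
    direction d without enumerating any individual states."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return dot(d, center) + sum(abs(dot(d, g)) for g in gens)

# a stable (contracting) system: every state shrinks toward the origin
A = [[0.9, 0.0], [0.0, 0.9]]
c, G = [1.0, 0.0], [[0.1, 0.0], [0.0, 0.1]]   # initial box around (1, 0)
bound0 = support(c, G, [1.0, 0.0])            # worst-case x1 initially: 1.1
c, G = reach_step(A, c, G)
bound1 = support(c, G, [1.0, 0.0])            # worst-case x1 after one step: 0.99
```

Here the set provably shrinks, so an unsafe region such as x1 ≥ 1.2 is never reached by any behavior. Scalable tools use far cleverer set representations, but the basic propagate-and-bound step is the same.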

BIO: Stanley Bak is a computer scientist investigating the predictable design of autonomous cyber-physical systems. He strives to develop practical formal methods that are both scalable and useful, which demands developing new theory, programming efficient tools and building experimental systems. He received a Bachelor's degree in Computer Science from Rensselaer Polytechnic Institute (RPI) in 2007 (summa cum laude), and a Master's degree in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2009. He completed his PhD in the Department of Computer Science at UIUC in 2013. He received the Founders Award of Excellence for his undergraduate research at RPI in 2004, the Debra and Ira Cohen Graduate Fellowship from UIUC twice, in 2008 and 2009, and was awarded the Science, Mathematics and Research for Transformation (SMART) Scholarship from 2009 to 2013. From 2013 to 2018, Stanley was a Research Computer Scientist at the US Air Force Research Lab (AFRL), both in the Information Directorate in Rome, NY, and in the Aerospace Systems Directorate in Dayton, OH. He currently helps run Safe Sky Analytics, a research consulting company investigating verification and autonomous systems, and teaches as an Adjunct Professor at Georgetown University.

Face Editing with Machine Learning presented by Zhixin Shu

ABSTRACT: The face is the most informative feature of humans and has been a long-standing research topic in Computer Vision and Graphics. Images of faces are also ubiquitous in photography and social media, and people have devoted significant resources to capturing and editing face images. Face editing can be broadly viewed as the encoding, manipulation, and decoding of representations of face images. The challenge is to manipulate an image in a controllable way and generate results that are both desirable and as realistic as possible. This thesis explores different Machine Learning-based face-editing approaches. I discuss the role of machine learning in achieving desirable edits by learning both the physical aspects and the statistical manifold of human faces. In my work on eye editing, I discuss the importance of understanding multiple physical elements of a face image, such as shape, illumination and pose. In a deep-learning-based approach, I introduce image-formation domain knowledge into the construction and training of a neural network. This network provides transparent access to disentangled representations of the aforementioned physical properties, with which we can achieve various face-editing tasks in the form of representation manipulation. After that, I introduce Deforming Autoencoders, a network that learns to disentangle shape and appearance in an unsupervised manner. This disentanglement benefits the learning of other factors of variation, such as illumination and facial expression. In an extension of Deforming Autoencoders, we incorporate non-rigid structure-from-motion to learn a 3D morphable model for faces that requires only an image set for training. Finally, I describe an image-to-image network for 3D face reconstruction, which also utilizes structure-from-motion in deep learning. Trained with real face images, this network not only reconstructs 3D faces more accurately than prior art but also generalizes better in real-life testing cases.

Communication-Efficient Heterogeneity-Aware Machine Learning System and Architecture by Xuehai Qian

ABSTRACT: A key to the success of deep learning is the increasing size of models, which enables high accuracy. At the same time, training these complex models on large data sets is difficult, so it is crucial to accelerate training with distributed systems and architectures, where communication and heterogeneity are two key challenges. In this talk, I will present two heterogeneity-aware decentralized training protocols that avoid the communication bottleneck. Specifically, Hop supports arbitrary iteration gaps between workers through a novel queue-based synchronization scheme that tolerates heterogeneity with system techniques. Prague uses randomized communication to tolerate heterogeneity with a new training algorithm based on partial reduce -- an efficient communication primitive. If time permits, I will also present systematic tensor partitioning for training on heterogeneous accelerator arrays (e.g., GPU/TPU). We believe that our principled approaches are crucial for achieving high-performance and efficient distributed training.
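To make the partial-reduce idea concrete, here is a toy sketch (hypothetical code; the names and details are not taken from Prague itself): instead of a full all-reduce across every worker, only a randomly chosen subset of workers averages its parameters each step, so a slow or failed worker outside the group cannot stall that step.

```python
def partial_reduce(params, group):
    """Average the parameter vectors of the workers in `group` only.
    Workers outside the group are untouched, so a straggler that is
    not in this step's group cannot block the reduction."""
    n = len(params[group[0]])
    avg = [sum(params[w][i] for w in group) / len(group) for i in range(n)]
    for w in group:
        params[w] = list(avg)

# three workers, one scalar parameter each; in the real protocol the
# group would be picked randomly each step
params = {0: [1.0], 1: [3.0], 2: [5.0]}
partial_reduce(params, [0, 1])
```

Repeated rounds with varying random groups mix information across all workers over time, which is what lets the training algorithm tolerate heterogeneity without a global synchronization barrier.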

BIO: Xuehai Qian is an assistant professor at the University of Southern California. His research interests include domain-specific systems and architectures, performance tuning and resource management of cloud systems, and parallel computer architectures. He received his PhD from the University of Illinois at Urbana-Champaign and was a postdoc at UC Berkeley. He is the recipient of the W. J. Poppelbaum Memorial Award at UIUC, NSF CRII and CAREER Awards, and the inaugural ACSIC (American Chinese Scholar In Computing) Rising Star Award.

Predicting Subjective Attributes in Visual Data by Zijun Wei

ABSTRACT: Recent progress in deep neural networks has revolutionized many computer vision tasks such as image classification, detection and segmentation. However, in addition to excelling in tasks that predict well-defined objective information, human-centered artificial intelligence systems should also be able to model subjective attributes, as defined by human perceptual behavior, that go beyond the pure physical content of visual data. Example subjective tasks are the prediction of spatial or temporal regions that are interesting to humans (e.g., attract attention or are visually pleasing) and the recognition of subjective attributes (e.g., visually elicited sentiments). Better models for these tasks will improve the human-computer interaction experience in various applications. This thesis investigates several approaches to the challenges of predicting subjective attributes in visual data over a diverse set of tasks. I first present a novel framework for real-time automatic photo composition. The framework consists of a cost-effective data collection workflow, an efficient model training pipeline and a lightweight module to account for personalized preferences. Then I develop a novel and general algorithm to detect interesting segments in sequential data, which can be naturally applied to video summarization tasks. Furthermore, I propose methods that learn to represent sentiments elicited by images, in an unsupervised manner, using linguistic features extracted from large-scale Web data. To conclude this thesis, I introduce a human-vision-inspired image classification algorithm that also predicts spatial visual attention even though no attention data was used in training it.
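As a point of reference for the segment-detection task, here is a classic baseline sketch (hypothetical code, not the algorithm from the thesis): given per-frame "interestingness" scores that have been mean-centered, the contiguous span with the largest total score can be found in a single linear pass.

```python
def best_segment(scores):
    """Return ((start, end), total) for the contiguous span with the
    highest summed score, found in one Kadane-style pass."""
    best_sum, best_span = float("-inf"), (0, 0)
    cur_sum, cur_start = 0.0, 0
    for i, s in enumerate(scores):
        if cur_sum <= 0:               # a non-positive prefix never helps: restart here
            cur_sum, cur_start = s, i
        else:
            cur_sum += s
        if cur_sum > best_sum:
            best_sum, best_span = cur_sum, (cur_start, i + 1)
    return best_span, best_sum

# mean-centered per-frame scores: frames 1-2 form the high-scoring segment
span, total = best_segment([-1.0, 2.0, 3.0, -5.0, 4.0])
```

A summarizer built on learned scores would then return the frames in `span` as the highlight; the thesis's algorithm is more general, but this shows why good per-frame scoring is the hard part rather than the search itself.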

Le Hou Dissertation Defense: Deep Learning for Digital Histopathology across Multiple Scales

ABSTRACT: Histopathology is the study of tissue changes caused by diseases such as cancer. It plays a crucial role in disease diagnosis, survival analysis and the development of new treatments. Using computer vision techniques, I focus on multiple tasks for automated analysis of digital histopathology images. These tasks are challenging because histopathology images are heterogeneous and complex, owing to the large variation among hundreds of cancer types at gigapixel resolution. In this thesis, I show how histopathology image analysis tasks can be viewed at three scales: Whole Slide Image (WSI)-level, patch-level and cellular-level, and present my contributions at each scale.

WSI-level analysis, such as classifying WSIs into cancer types, is challenging because conventional classification methods, such as off-the-shelf deep learning models, cannot be applied directly to gigapixel WSIs due to computational limitations. I contribute a patch-based deep learning method that classifies gigapixel WSIs into cancer types and subtypes with close-to-human performance. This method is useful for computer-aided diagnosis. At patch-level, I contribute a novel method for histopathology image patch classification. On the task of identifying Tumor Infiltrating Lymphocyte (TIL) regions, the prediction result of this method correlates with the survival rate of patients. At cellular-level, I contribute novel methods for nucleus classification and roundness regression, which yield interpretable features for histopathology studies. With these methods, I generated a large-scale dataset of segmented nuclei from the WSIs in a large publicly available digital histopathology image dataset, to help advance histopathology research.
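A much-simplified sketch of the patch-based idea (hypothetical code; the actual method trains a CNN on patches and uses a learned aggregation rather than plain averaging): classify each small patch independently, then fuse the per-patch class probabilities into one slide-level decision, so no model ever has to process the gigapixel image at once.

```python
def slide_label(patch_probs):
    """Fuse per-patch class-probability vectors into a slide-level
    prediction by averaging, then picking the most likely class."""
    n_cls = len(patch_probs[0])
    avg = [sum(p[c] for p in patch_probs) / len(patch_probs) for c in range(n_cls)]
    return max(range(n_cls), key=avg.__getitem__)

# three patches from one slide, two classes (e.g. tumor subtype A vs. B);
# the probabilities here are made up for illustration
label = slide_label([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])
```

The averaging step is where the real method differs most: learned aggregation can discount uninformative background patches instead of weighting every patch equally.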

How to Succeed in Language Design Without Really Trying presented by Professor Brian Kernighan

ABSTRACT: Why do some languages succeed while others fall by the wayside? I've helped create nearly a dozen languages (mostly small) over the years; a handful are still in widespread use, while others have languished or simply disappeared. I've also been present at the creation of several other languages, including some really major ones. In this talk I'll give my humble, but correct, opinion on factors that affect success and failure, and try to offer some insight into what to do if you're trying to design a new language yourself, and why that might be a good thing.

BIO: Brian Kernighan received a PhD in electrical engineering from Princeton in 1969. He joined the Computer Science department at Princeton in 2000, after many years at Bell Labs. He is a co-creator of several programming languages, including AWK and AMPL, and of a number of tools for document preparation. He is the co-author of a dozen books and a number of technical papers, and holds 5 patents. He is a member of the National Academy of Engineering and of the American Academy of Arts and Sciences. His research areas include programming languages, tools and interfaces that make computers easier to use, often for non-specialist users. He has also written two books on technology for non-technical audiences: Understanding the Digital World (2017) and Millions, Billions, Zillions: Defending Yourself in a World of Too Many Numbers (2018). His most recent book, Unix: A History and a Memoir, was published in October 2019.

CSE 600 Talk: Squeezing Software Performance via Eliminating Wasteful Operations presented by Xu Liu

ABSTRACT: Inefficiencies abound in complex, layered software. A variety of inefficiencies show up as wasteful memory operations, such as redundant or useless memory loads and stores. Aliasing, limited optimization scopes, and insensitivity to input and execution contexts act as severe deterrents to static program analysis. Microscopic observation of whole executions at instruction- and operand-level granularity breaks down abstractions and helps recognize redundancies that masquerade in complex programs. In this talk, I will describe various wasteful memory operations, which exist pervasively in modern software packages and expose great potential for optimization. I will discuss the design of a fine-grained instrumentation-based profiling framework that identifies wasteful operations in their calling contexts, guiding nontrivial performance improvements. Furthermore, I will show our recent improvement to the profiling framework: by abandoning instrumentation, we reduce the runtime overhead from 10x to 3% on average. Finally, I will show how our approach works for native binaries as well as managed languages such as Java, yielding new performance insights for optimization.
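One concrete instance of such waste is the "silent store": a store that writes back the value already present at that address. A toy detector over an operand-level store trace (a hypothetical sketch, not the profiler from the talk) makes the idea tangible:

```python
def find_silent_stores(trace):
    """trace: list of (address, value) stores in execution order.
    Returns the indices of stores that rewrite the value already at
    that address; these cost real memory traffic but change nothing."""
    last = {}
    silent = []
    for i, (addr, val) in enumerate(trace):
        if addr in last and last[addr] == val:
            silent.append(i)
        last[addr] = val
    return silent

# the second store to 0x10 writes the same value again: pure waste
waste = find_silent_stores([(0x10, 1), (0x10, 1), (0x20, 2), (0x10, 3)])
```

A real profiler observes every instruction's operands (via instrumentation or, in the lower-overhead follow-up work, hardware-assisted sampling) and attributes each such redundancy to its full calling context so developers know which source lines to fix.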

BIO: Xu Liu is an assistant professor in the Department of Computer Science at the College of William & Mary. He obtained his PhD from Rice University in 2014 and joined William & Mary in the same year. Prof. Liu works on building performance tools to pinpoint and optimize inefficiencies in HPC code bases. He has developed several open-source profiling tools, which are used worldwide at universities, DOE national laboratories and industrial companies. Prof. Liu has published a number of papers in high-quality venues. His papers received Best Paper Awards at SC'15, PPoPP'18 and PPoPP'19, were selected for ASPLOS'17 Highlights, and received a Distinguished Paper Award at ICSE'19. His ASPLOS'18 paper was selected as an ACM SIGPLAN Research Highlight in 2019 and nominated for CACM Research Highlights. Prof. Liu is the recipient of the 2019 IEEE TCHPC Early Career Researchers Award for Excellence in High Performance Computing. He has served on the program committees of conferences such as SC, PPoPP, IPDPS, CGO, HPCA and ASPLOS.