When: Thursday, 10/28/2021, 10:00 AM
Where: NCS Room 220, or
Zoom: https://stonybrook.zoom.us/j/97978463739?pwd=aVJFVERQa25jYjJrOFZEcWVuSzJLdz09

Title: Deep Surface Meshes

Speaker: Pascal Fua (EPFL)

Abstract: Geometric Deep Learning has recently made striking progress with the advent of Deep Implicit Fields (SDFs). They allow for detailed modeling of watertight surfaces of arbitrary topology while not relying on a 3D Euclidean grid, resulting in a learnable 3D surface parameterization that is not limited in resolution. Unfortunately, they have not yet reached their full potential for applications that require an explicit surface representation in terms of vertices and facets, because converting an SDF to such a 3D mesh representation requires a marching cubes algorithm, whose output cannot be easily differentiated with respect to the SDF parameters. In this talk, I will discuss our approach to overcoming this limitation and implementing convolutional neural nets that output complex 3D surface meshes while remaining fully differentiable and end-to-end trainable. I will also present applications to single-view reconstruction, physically driven shape optimization, and biomedical image segmentation.
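
As a rough illustration of the gap the abstract describes, the NumPy sketch below (an illustrative toy, not the speaker's method; the sphere SDF and grid size are made up) samples a signed distance function on a regular grid and finds the voxels that straddle the zero level set: the cells where a marching cubes step would emit facets. It is that discrete cell-selection step which makes the resulting mesh hard to differentiate with respect to the SDF parameters.

```python
import numpy as np

def sphere_sdf(pts, radius=0.5):
    """Signed distance to a sphere centered at the origin:
    negative inside, positive outside."""
    return np.linalg.norm(pts, axis=-1) - radius

# Sample the SDF on a 32^3 grid over [-1, 1]^3.
n = 32
axis = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
values = sphere_sdf(grid)

# A voxel straddles the surface if adjacent samples change sign along
# any axis; marching cubes would emit triangles exactly in such cells.
crossings = np.zeros((n - 1, n - 1, n - 1), dtype=bool)
for ax in range(3):
    sl_lo = [slice(0, n - 1)] * 3
    sl_hi = [slice(0, n - 1)] * 3
    sl_hi[ax] = slice(1, n)
    crossings |= np.sign(values[tuple(sl_lo)]) != np.sign(values[tuple(sl_hi)])

surface_cells = int(crossings.sum())  # number of mesh-generating cells
```

The per-cell sign test is a hard, non-differentiable decision, which is why gradients with respect to the SDF parameters do not flow through a naive mesh extraction.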



Bio:

Pascal Fua received an engineering degree from Ecole Polytechnique, Paris, in 1984 and a Ph.D. in Computer Science from the University of Orsay in 1989. He joined EPFL (Swiss Federal Institute of Technology) in 1996, where he is a Professor in the School of Computer and Communication Science and head of the Computer Vision Lab. Before that, he worked at SRI International and at INRIA Sophia-Antipolis as a Computer Scientist. His research interests include shape modeling and motion recovery from images, analysis of microscopy images, and Augmented Reality. He has (co)authored over 300 publications in refereed journals and conferences and has received several ERC grants. He is an IEEE Fellow and has been an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence. He often serves as a program committee member, area chair, and program chair of major vision conferences and has cofounded three spinoff companies.


Topic: AI Seminar: Stanley Bak
Time: Monday Nov 1, 2021 12:00 PM Eastern Time (US and Canada)

Join Zoom Meeting
https://stonybrook.zoom.us/j/91227496273?pwd=M3EyUDlzK3Vzd2pDOGpDU1ZjN0k1UT09

Abstract: The field of formal verification has traditionally focused on proving properties of finite state machines or software programs. The surge in deep learning has been accompanied by a surge of progress in applying mathematical and algorithmic techniques to prove properties of the function computed by a neural network.

This talk formalizes the neural network verification problem and describes technical methods for solving it based on reachability analysis. It presents improvements to analysis efficiency, as well as research directions for further exploration. We also include an objective comparison, performed last summer, of the best existing verification methods in terms of speed and network size. The competition ran on common hardware and involved twelve international teams (the tool authors) on a common set of benchmarks.
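
For readers unfamiliar with reachability analysis, the sketch below shows one of its simplest instantiations: interval bound propagation through a small ReLU network, in NumPy. This is an illustrative toy (the network and input box are made up, and the talk's methods are not necessarily interval-based). It propagates an input box through each layer soundly, so every concrete output is guaranteed to lie inside the returned bounds.

```python
import numpy as np

def interval_forward(layers, lo, hi):
    """Propagate an input box [lo, hi] through affine + ReLU layers,
    returning sound element-wise output bounds."""
    for i, (W, b) in enumerate(layers):
        # Split weights by sign so each bound uses its worst case.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = (W_pos @ lo + W_neg @ hi + b,
                  W_pos @ hi + W_neg @ lo + b)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# A random 2-8-1 ReLU network and the input box [-1, 1]^2.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 2)), rng.standard_normal(8)),
          (rng.standard_normal((1, 8)), rng.standard_normal(1))]
lo, hi = interval_forward(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```

The bounds are sound but can be loose; much of the research the talk surveys (and the competition it mentions) is about tightening such over-approximations while staying scalable.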

Biography: Stanley Bak is an assistant professor in the Department of Computer Science at Stony Brook University investigating the verification of autonomy, cyber-physical systems, and neural networks. He strives to develop practical formal methods that are both scalable and useful, which demands developing new theory, programming efficient tools and building experimental systems.
Stanley Bak received a Bachelor's degree in Computer Science from Rensselaer Polytechnic Institute (RPI) in 2007 (summa cum laude) and a Master's degree in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2009. He completed his PhD in the Department of Computer Science at UIUC in 2013. He received the Founders Award of Excellence for his undergraduate research at RPI in 2004, the Debra and Ira Cohen Graduate Fellowship from UIUC twice, in 2008 and 2009, and the Science, Mathematics and Research for Transformation (SMART) Scholarship from 2009 to 2013. From 2013 to 2018, Stanley was a Research Computer Scientist at the US Air Force Research Lab (AFRL), both in the Information Directorate in Rome, NY, and in the Aerospace Systems Directorate in Dayton, OH. He helped run Safe Sky Analytics, a research consulting company investigating verification and autonomous systems, and taught at Georgetown University before joining Stony Brook University as an assistant professor in Fall 2020.

Speaker: Gary Kazantsev (Head of Quant Technology Strategy in the Office of the CTO at Bloomberg)

 

Date/Time: Friday, October 15, 2021, 10:00 AM-11:00 AM EDT

 

Title: Machine Learning in Finance

Abstract: Machine learning is changing our world at an accelerating pace. In this talk, we will discuss recent developments in how machine learning and artificial intelligence are changing finance, from the perspective of a technology company that is a key participant in the financial markets. We will give an overview and discuss the evolution of selected flagship Bloomberg ML and AI projects, such as sentiment analysis, question answering, social media analysis, information extraction, and prediction of the market impact of news stories. We will discuss practical issues in delivering production machine learning solutions to problems in finance, highlighting issues such as interpretability, privacy, and nonstationarity. We will also discuss current research directions in machine learning for finance and conclude with a Q&A session.

Bio: (https://www.techatbloomberg.com/people/gary-kazantsev/) Gary is the Head of Quant Technology Strategy in the Office of the CTO at Bloomberg. Prior to taking on this role, he created and headed the company's Machine Learning Engineering group, leading projects at the intersection of computational linguistics, machine learning and finance, such as sentiment analysis of financial news, market impact indicators, statistical text classification, social media analytics, question answering, and predictive modeling of financial markets.

Prior to joining Bloomberg in 2007, Gary earned degrees in physics, mathematics, and computer science from Boston University.

He is engaged in advisory roles with FinTech and Machine Learning startups and has worked at a variety of technology and academic organizations over the last 20 years. In addition to speaking regularly at industry and academic events around the globe, he is a member of the KDD Data Science + Journalism workshop program committee and the advisory board for the AI & Data Science in Trading conference series. He is also a co-organizer of the annual Machine Learning in Finance conference at Columbia University.


Join Zoom Meeting
https://stonybrook.zoom.us/j/93374426887?pwd=cE9zeW51VXFEN2R0YnNPbHF1WFp0Zz09

Meeting ID: 933 7442 6887
Passcode: 330347
One tap mobile
+16468769923,,93374426887# US (New York)
+13126266799,,93374426887# US (Chicago)

Dial by your location
        +1 646 876 9923 US (New York)
        +1 312 626 6799 US (Chicago)
        +1 301 715 8592 US (Washington DC)
        +1 346 248 7799 US (Houston)
        +1 408 638 0968 US (San Jose)
        +1 669 900 6833 US (San Jose)
        +1 253 215 8782 US (Tacoma)
Meeting ID: 933 7442 6887



Time:
Sep 7, Tue, 11:00am EDT

Place:
NCS 220 or on Zoom (info below)

Title: Data-Driven Document Unwarping


Abstract:
Capturing document images is a common way to digitize and record physical documents, owing to the ubiquity of mobile cameras. To make text recognition easier, it is often desirable to digitally flatten a document image when the physical sheet is folded or curved. However, unwarping a document from a single image taken in a natural scene is very challenging due to the complexity of document sheet deformation, document texture, and environmental conditions. Previous model-driven approaches struggle with inefficiency and limited generalizability. In this thesis, I investigate several data-driven approaches to the document unwarping problem.

Data acquisition is the central challenge in data-driven methods. I first design an efficient data synthesis pipeline based on 2D image warping and train DocUNet, the pioneering data-driven document unwarping model, on the synthetic data. A benchmark dataset is also created to facilitate comprehensive evaluation and comparison. To improve unwarping performance by training on more realistic data, I introduce the Doc3D dataset and DewarpNet. Supervised by the 3D shape ground truth in Doc3D, DewarpNet performs significantly better than DocUNet. Both DocUNet and DewarpNet depend on synthetic data for ground-truth deformation annotations. To exploit real-world images, I propose PaperEdge, a weakly supervised model trained on in-the-wild document images with easy-to-obtain boundary information. PaperEdge surpasses DewarpNet by utilizing both the synthetic data and the weakly annotated real data in the Document In the Wild (DIW) dataset. Finally, I propose directly predicting the $uv$-parameterized 3D mesh of the document under 3D constraints, using accessible 3D representations, such as depth maps, as training targets. Predicting the 3D mesh of the document solves the unwarping task and also benefits VR/AR applications.
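
The backward-mapping formulation these models share can be sketched in a few lines of NumPy (a toy example, not code from the thesis): the model predicts, for every pixel of the flattened output, the normalized (u, v) location in the warped input that it should be sampled from, and the unwarped image is obtained by resampling. Nearest-neighbor sampling is used here for brevity; a trainable pipeline would use bilinear sampling so gradients flow through the predicted map.

```python
import numpy as np

def unwarp(image, uv):
    """Backward-map an image: uv[y, x] gives, in [0, 1]^2, the input
    location to sample for output pixel (y, x). Nearest-neighbor here;
    a learned model would use bilinear sampling for differentiability."""
    h, w = image.shape[:2]
    ys = np.clip(np.round(uv[..., 1] * (h - 1)).astype(int), 0, h - 1)
    xs = np.clip(np.round(uv[..., 0] * (w - 1)).astype(int), 0, w - 1)
    return image[ys, xs]

# Identity map: every output pixel samples its own input location,
# so the "unwarped" result reproduces the input exactly.
h, w = 4, 6
u, v = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
identity_uv = np.stack([u, v], axis=-1)
img = np.arange(h * w, dtype=float).reshape(h, w)
flat = unwarp(img, identity_uv)
```

A document unwarping model replaces the identity map with a predicted (u, v) field that undoes the sheet's fold or curl.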

Join Zoom Meeting
https://stonybrook.zoom.us/j/96440592912?pwd=ZU5waTdyUzRFNW5SRHM5ME84TWdFQT09

Meeting ID: 964 4059 2912
Passcode: 793149
One tap mobile
+16468769923,,96440592912# US (New York)
+13017158592,,96440592912# US (Washington DC)

Dial by your location
        +1 646 876 9923 US (New York)
        +1 301 715 8592 US (Washington DC)
        +1 312 626 6799 US (Chicago)
        +1 253 215 8782 US (Tacoma)
        +1 346 248 7799 US (Houston)
        +1 408 638 0968 US (San Jose)
        +1 669 900 6833 US (San Jose)
Meeting ID: 964 4059 2912
Find your local number: https://stonybrook.zoom.us/u/adxTt9ZbuJ

Subject: RADIOLOGY GRAND ROUNDS: CT Colonography: An Effective Test for Colorectal Cancer Screening - Judy Yee, M.D.
When: Wednesday, May 12, 2021 12:00 PM-1:00 PM (UTC-05:00) Eastern Time (US & Canada).
Where: JOIN ZOOM MEETING

 

Judy Yee, MD

Chair, Department of Radiology

Professor, Department of Radiology

Abdominal Imaging

 

Join Zoom Meeting

https://einsteinmed.zoom.us/j/97782190723?pwd=clMzMys2SlZjZzJId1hUNzMyVUQ2UT09

 

Meeting ID: 977 8219 0723

Passcode: 101083

Title: AI-Driven Target Selection Methods for Touch and Gaze Input

Abstract: Accurately selecting targets is an essential aspect of Human-Computer Interaction. Erroneous selections can cause tedious undo and redo actions. Additionally, some selection errors are non-reversible and can lead to undesirable consequences. However, high-accuracy target selection remains a challenge on touchscreen devices, due to small target sizes and imprecise touch input, and in gaze interaction, due to gaze-tracking noise and the lack of an easy-to-use selection action. We first propose ReLM, a Reinforcement Learning-based Method for touchscreen target selection. ReLM can automatically show suggestions and require a second touch when the input is ambiguous, and can directly select a target candidate when the input is certain. Our empirical evaluation shows that ReLM reduces the error rate from 6.92% to 1.63% and the selection time from 2.23 s to 1.59 s over Shift, an existing suggestion-based method. Compared to BayesianCommand, a direct selection-based method, ReLM reduces the error rate from 3.64% to 0.89% while increasing the selection time by only 200 ms. Second, we investigate how to improve target selection performance for gaze interaction. We propose BayesGaze, an eye-gaze-based target selection method. It uses Bayes' theorem to accumulate the evidence each gaze point provides for each target, and applies a threshold mechanism to determine the selection. Our investigation shows that BayesGaze improves target selection accuracy and speed over a dwell-based selection method and the Center of Gravity Mapping method.
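
The Bayesian accumulation idea behind gaze-based selection can be sketched as follows (a toy NumPy example with made-up target positions and noise model, not the authors' implementation): each gaze sample is modeled as the intended target center plus isotropic Gaussian noise, per-target log-likelihoods are accumulated over samples, and a target is selected once its posterior crosses a threshold.

```python
import numpy as np

def gaze_posteriors(gaze_pts, targets, sigma=20.0):
    """Posterior over targets given noisy gaze samples, assuming each
    sample is the intended target center plus isotropic Gaussian noise
    and a uniform prior over targets (a sketch, not BayesGaze itself)."""
    # Squared distance from every gaze sample to every target center.
    d2 = ((gaze_pts[:, None, :] - targets[None, :, :]) ** 2).sum(-1)
    log_lik = (-d2 / (2.0 * sigma ** 2)).sum(axis=0)  # accumulate evidence
    log_lik -= log_lik.max()                          # numerical stability
    post = np.exp(log_lik)
    return post / post.sum()

# Three hypothetical on-screen targets (pixel coordinates).
targets = np.array([[100.0, 100.0], [300.0, 100.0], [200.0, 250.0]])

# Simulate 30 noisy gaze samples fixating the second target.
rng = np.random.default_rng(0)
gaze = targets[1] + rng.normal(0.0, 15.0, size=(30, 2))

post = gaze_posteriors(gaze, targets)
# Threshold mechanism: select only once the posterior is confident.
selected = int(np.argmax(post)) if post.max() > 0.95 else None
```

The threshold trades speed for accuracy: a lower value selects sooner from fewer samples, while a higher value demands more accumulated evidence before committing.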

All are welcome. Here is the Zoom meeting link:
https://stonybrook.zoom.us/j/93130953411?pwd=Rm5IRlVPQ3M0cHJsTXpCVFljUlFGUT09

Meeting ID: 931 3095 3411
Passcode: 999413