Speaker: Gary Kazantsev (Head of Quant Technology Strategy in the Office of the CTO at Bloomberg)

 

Date/Time: Friday, October 15, 2021 10:00AM-11:00AM EDT

 

Title: Machine Learning in Finance

Abstract: Machine learning is changing our world at an accelerating pace. In this talk, we will discuss recent developments in how machine learning and artificial intelligence are changing finance, from the perspective of a technology company that is a key participant in the financial markets. We will give an overview and discuss the evolution of selected flagship Bloomberg ML and AI projects, such as sentiment analysis, question answering, social media analysis, information extraction, and prediction of the market impact of news stories. We will discuss practical issues in delivering production machine learning solutions to problems in finance, highlighting issues such as interpretability, privacy, and nonstationarity. We will also discuss current research directions in machine learning for finance. We will conclude with a Q&A session.

Bio: (https://www.techatbloomberg.com/people/gary-kazantsev/) Gary is the Head of Quant Technology Strategy in the Office of the CTO at Bloomberg. Prior to taking on this role, he created and headed the company's Machine Learning Engineering group, leading projects at the intersection of computational linguistics, machine learning and finance, such as sentiment analysis of financial news, market impact indicators, statistical text classification, social media analytics, question answering, and predictive modeling of financial markets.

Prior to joining Bloomberg in 2007, Gary earned degrees in physics, mathematics, and computer science from Boston University.

He is engaged in advisory roles with FinTech and Machine Learning startups and has worked at a variety of technology and academic organizations over the last 20 years. In addition to speaking regularly at industry and academic events around the globe, he is a member of the KDD Data Science + Journalism workshop program committee and the advisory board for the AI & Data Science in Trading conference series. He is also a co-organizer of the annual Machine Learning in Finance conference at Columbia University.


Join Zoom Meeting
https://stonybrook.zoom.us/j/93374426887?pwd=cE9zeW51VXFEN2R0YnNPbHF1WFp0Zz09

Meeting ID: 933 7442 6887
Passcode: 330347
One tap mobile
+16468769923,,93374426887# US (New York)
+13126266799,,93374426887# US (Chicago)

Dial by your location
        +1 646 876 9923 US (New York)
        +1 312 626 6799 US (Chicago)
        +1 301 715 8592 US (Washington DC)
        +1 346 248 7799 US (Houston)
        +1 408 638 0968 US (San Jose)
        +1 669 900 6833 US (San Jose)
        +1 253 215 8782 US (Tacoma)
Meeting ID: 933 7442 6887



Time:
Sep 7, Tue, 11:00am EDT

Place:
NCS 220 or on Zoom (info below)

Title: Data-Driven Document Unwarping


Abstract:
Capturing document images is a common way to digitize and record physical documents due to the ubiquity of mobile cameras. To make text recognition easier, it is often desirable to digitally flatten a document image when the physical document sheet is folded or curved. However, unwarping a document from a single image captured in natural scenes is very challenging due to the complexity of document sheet deformation, document texture, and environmental conditions. Previous model-driven approaches struggle with inefficiency and limited generalizability. In this thesis, I investigate several data-driven approaches to the document unwarping problem.

Data acquisition is the central challenge in data-driven methods. I first design an efficient data synthesis pipeline based on 2D image warping and train DocUNet, the pioneering data-driven document unwarping model, on the synthetic data. A benchmark dataset is also created to facilitate comprehensive evaluation and comparison. To improve unwarping performance by training on more realistic data, I introduce the Doc3D dataset and DewarpNet. Supervised by the 3D shape ground truth in Doc3D, DewarpNet significantly outperforms DocUNet. Both DocUNet and DewarpNet depend on synthetic data for ground-truth deformation annotations. To exploit real-world images, I propose PaperEdge, a weakly supervised model trained on in-the-wild document images with easy-to-obtain boundary annotations. PaperEdge surpasses DewarpNet by utilizing both the synthetic data and the weakly annotated real data in the Document In the Wild (DIW) dataset. Finally, I propose directly predicting the $uv$-parameterized 3D mesh of the document with 3D constraints, using accessible 3D representations such as depth maps as training targets. Predicting the 3D mesh of the document solves the unwarping task and also benefits VR/AR applications.
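To make the unwarping step concrete, here is a minimal sketch (not code from the thesis) of the final resampling operation shared by models such as DocUNet and DewarpNet: applying a predicted backward map to the warped photo with PyTorch's grid_sample. The network that predicts the map is omitted, and the identity map at the end is only a toy check.

```python
# Minimal sketch (not thesis code): unwarp a document image by resampling it
# with a predicted backward map, as data-driven unwarping models do in their
# final step. The map would normally come from a learned network (omitted here).
import torch
import torch.nn.functional as F

def unwarp(image: torch.Tensor, backward_map: torch.Tensor) -> torch.Tensor:
    """Resample a warped document photo into a flattened image.

    image:        (1, 3, H, W) warped input photo, values in [0, 1]
    backward_map: (1, H, W, 2) for each output pixel, the (x, y) location to
                  sample from the input, normalized to [-1, 1]
    """
    # grid_sample pulls, for every output pixel, the color found at the
    # predicted source coordinate in the warped input image.
    return F.grid_sample(image, backward_map, mode="bilinear", align_corners=True)

# Toy usage: an identity backward map leaves the image unchanged.
H, W = 64, 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
identity_map = torch.stack([xs, ys], dim=-1).unsqueeze(0)   # (1, H, W, 2)
image = torch.rand(1, 3, H, W)
flat = unwarp(image, identity_map)
assert torch.allclose(flat, image, atol=1e-5)
```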

Join Zoom Meeting
https://stonybrook.zoom.us/j/96440592912?pwd=ZU5waTdyUzRFNW5SRHM5ME84TWdFQT09

Meeting ID: 964 4059 2912
Passcode: 793149
One tap mobile
+16468769923,,96440592912# US (New York)
+13017158592,,96440592912# US (Washington DC)

Dial by your location
        +1 646 876 9923 US (New York)
        +1 301 715 8592 US (Washington DC)
        +1 312 626 6799 US (Chicago)
        +1 253 215 8782 US (Tacoma)
        +1 346 248 7799 US (Houston)
        +1 408 638 0968 US (San Jose)
        +1 669 900 6833 US (San Jose)
Meeting ID: 964 4059 2912
Find your local number: https://stonybrook.zoom.us/u/adxTt9ZbuJ

Subject: RADIOLOGY GRAND ROUNDS CT Colonography: An Effective Test for Colorectal Cancer Screening - Judy Yee, M.D.
When: Wednesday, May 12, 2021 12:00 PM-1:00 PM (UTC-05:00) Eastern Time (US & Canada).
Where: JOIN ZOOM MEETING

 

Judy Yee, MD

Chair, Department of Radiology

Professor, Department of Radiology

Abdominal Imaging

 

Join Zoom Meeting

https://einsteinmed.zoom.us/j/97782190723?pwd=clMzMys2SlZjZzJId1hUNzMyVUQ2UT09

 

Meeting ID: 977 8219 0723

Passcode: 101083

Title: AI-Driven Target Selection Methods for Touch and Gaze Input

Abstract: Accurately selecting targets is an essential aspect of Human-Computer Interaction. Erroneous selections can cause tedious undo and redo actions. Additionally, some selection errors are non-reversible and can lead to undesirable consequences. However, high-accuracy target selection remains a challenge on touchscreen devices due to small target sizes and imprecise touch input, and in gaze interaction because of gaze-tracking noise and the lack of an easy-to-use selection action. We first propose ReLM, a Reinforcement Learning-based Method for touchscreen target selection. ReLM can automatically show suggestions and require a second touch when the input is ambiguous, and can directly select a target candidate when the input is certain. Our empirical evaluation shows that ReLM reduces the error rate from 6.92% to 1.63%, and the selection time from 2.23s to 1.59s, over Shift, an existing suggestion-based method. Compared to BayesianCommand, a direct selection-based method, ReLM reduces the error rate from 3.64% to 0.89%, while increasing the selection time by only 200 ms. Second, we investigate how to improve target selection performance for gaze interaction. We propose BayesGaze, an eye-gaze-based target selection method. It accumulates the evidence each gaze point provides for selecting each target, computed with Bayes' theorem, and uses a threshold mechanism to determine target selection. Our investigation shows that BayesGaze improves target selection accuracy and speed over a dwell-based selection method and the Center of Gravity Mapping method.
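As an illustration of the accumulation idea behind BayesGaze, here is a minimal sketch assuming an isotropic Gaussian gaze-noise likelihood and a uniform prior over targets; the actual likelihood, prior, and threshold used by BayesGaze may differ.

```python
# Minimal sketch of Bayes-theorem-based evidence accumulation for gaze target
# selection, in the spirit of BayesGaze. Assumptions (not from the abstract):
# isotropic Gaussian gaze noise around each target center and a uniform prior.
import numpy as np

def select_target(gaze_points, target_centers, sigma=30.0, threshold=0.95):
    """Accumulate per-target evidence from gaze points; return the index of the
    target whose posterior first exceeds `threshold`, else None."""
    centers = np.asarray(target_centers, dtype=float)
    log_post = np.zeros(len(centers))                   # uniform prior (log scale)
    for g in gaze_points:
        d2 = np.sum((centers - g) ** 2, axis=1)
        log_post += -d2 / (2 * sigma ** 2)              # Gaussian log-likelihood
        post = np.exp(log_post - log_post.max())
        post /= post.sum()                              # normalize to a posterior
        if post.max() >= threshold:
            return int(post.argmax())
    return None

# Toy usage: noisy gaze samples near the second of three on-screen targets.
rng = np.random.default_rng(0)
targets = [(100, 100), (300, 100), (500, 100)]
gaze = rng.normal(loc=targets[1], scale=25.0, size=(50, 2))
print(select_target(gaze, targets))  # -> 1 once enough evidence accumulates
```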

All are welcome. Here is the Zoom meeting link:
https://stonybrook.zoom.us/j/93130953411?pwd=Rm5IRlVPQ3M0cHJsTXpCVFljUlFGUT09
Meeting ID: 931 3095 3411
Passcode: 999413





Time: Wednesday, April 28, 2021, 3:00pm-4:00pm

Remote Access
Join Zoom Meeting
https://stonybrook.zoom.us/j/95617197636?pwd=KytzZ2pVRG9SZGpKZUtpNXJISjNjZz09
Meeting ID: 956 1719 7636
Passcode: 924293

Title: Brain imaging genetics for Alzheimer's disease: integrated analysis and machine learning

Li Shen, Ph.D.
Professor of Informatics
Department of Biostatistics, Epidemiology and Informatics 
Perelman School of Medicine
University of Pennsylvania

Bio: Li Shen, Ph.D., is a Professor of Informatics in the Department of Biostatistics, Epidemiology and Informatics at the Perelman School of Medicine at the University of Pennsylvania. He is an elected fellow of the American Institute for Medical and Biological Engineering (AIMBE). He obtained his Ph.D. degree in Computer Science from Dartmouth College. The central theme of his lab is developing computational and informatics methods for integrative analysis of multimodal imaging data, high-throughput omics data, cognitive and other biomarker data, electronic health record (EHR) data, and rich biological knowledge such as pathways and networks, with applications to complex disorders. His research interests include medical image computing, biomedical informatics, machine learning, network science, imaging genomics, Alzheimer's disease, and big data science in biomedicine. He has authored over 280 peer-reviewed articles (h-index 57) in these fields. Dr. Shen's work has been continuously supported by the NIH and NSF, and he is presently the PI of multiple NIH and NSF grants on developing computational methods for various biomedical applications, including brain imaging genomics, genetics of Alzheimer's disease, genetics of the human connectome, mining drug effects from EHR data, and big data mining in brain science. He is co-leading the NIA Alzheimer's Disease Sequencing Project AI4AD Consortium and oversees the imaging genomics aspect of this landmark project. Dr. Shen served as the Executive Director of the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society Board of Directors during 2016-2019. He has chaired and co-chaired various professional meetings in medical image computing and bioinformatics. He is an Associate Editor of BioData Mining and Frontiers in Radiology (Section of AI in Radiology), and serves on the editorial boards of Medical Image Analysis and Brain Imaging and Behavior.

Abstract: Brain imaging genetics is an emerging data science field, where integrated analysis of brain imaging and genetics data, often combined with other biomarker, clinical and environmental data, is performed to gain new insights into the genetic, molecular and phenotypic characteristics of the brain as well as their impact on normal and disordered brain function and behavior. Many methodological advances in brain imaging genetics are attributed to large-scale landmark biobank projects such as the Alzheimer's Disease Sequencing Project, the Alzheimer's Disease Neuroimaging Initiative, and the UK Biobank. Using the study of Alzheimer's disease as an example, we will discuss fundamental concepts, state-of-the-art statistical and machine learning methods, and innovative applications in this rapidly evolving field. We show that the wide availability of brain imaging genetics data from various large-scale biobanks, coupled with advances in biomedical statistics, informatics and computing, provides enormous opportunities to contribute significantly to biomedical discoveries in brain science and to impact the development of new diagnostic, therapeutic and preventative approaches for complex brain disorders such as Alzheimer's disease.
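For readers new to the field, here is a minimal, purely illustrative sketch of one elementary imaging-genetics building block: a per-SNP linear regression of an imaging-derived quantitative trait on genotype dosage with covariate adjustment. The data are simulated, and the model is far simpler than the integrated statistical and machine learning methods discussed in the talk.

```python
# Minimal sketch of a basic imaging-genetics building block: a per-SNP linear
# regression of an imaging quantitative trait (e.g., a regional brain volume)
# on genotype dosage, adjusted for covariates such as age and sex.
# Illustrative only; data below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
age = rng.normal(70, 8, n)
sex = rng.integers(0, 2, n)
snp = rng.integers(0, 3, n)                     # genotype dosage: 0, 1, or 2
# Simulated imaging phenotype with a small additive SNP effect.
volume = 5.0 - 0.02 * age + 0.1 * sex - 0.15 * snp + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([snp, age, sex]))
fit = sm.OLS(volume, X).fit()
print("SNP beta = %.3f, p = %.2g" % (fit.params[1], fit.pvalues[1]))
```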

More details:
https://bmi.stonybrookmedicine.edu/sites/default/files/shen_li_04_28_flyer.pdf

Visual Analytics and Machine Learning for Biomedical Imaging Diagnosis

 

Arie Kaufman

 

We present an integrated approach using visual analytics and machine learning (ML) to diagnose abnormalities in 3D radiological imaging and biological microscopy. The primary example will involve 3D virtual pancreatography (VP), a novel visualization-ML procedure and application for non-invasive diagnosis and classification of pancreatic lesions, the precursors of pancreatic cancer. Currently, non-invasive screening of patients is performed through visual inspection of 2D axis-aligned CT images, though the relevant features are often neither clearly visible nor automatically detected. VP is an end-to-end visual diagnosis system that includes ML-based automatic segmentation of the pancreatic gland and the lesions, a semi-automatic approach to extract the primary pancreatic duct, ML-based automatic classification of lesions into four prominent types, and specialized 3D and 2D exploratory visualizations of the pancreas, lesions, and surrounding anatomy. We combine volume rendering with pancreas- and lesion-centric visualizations and measurements for effective diagnosis. We designed VP through close collaboration with, and feedback from, expert radiologists, and evaluated it on multiple real-world CT datasets with various pancreatic lesions and case studies examined by the expert radiologists. Other applications include virtual colonoscopy, COVID-19, pathology, brain neurites, etc.
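As a rough illustration of the segment-then-classify flow described above, here is a minimal sketch with stand-in components (a toy threshold segmentation, hand-crafted features, and a nearest-centroid classifier over four placeholder lesion types); VP's actual ML models, features, and lesion class definitions are not reproduced here.

```python
# Minimal sketch of a segment-then-classify pipeline like the one VP follows,
# with stand-in components only: a toy intensity-threshold "segmentation" and
# a nearest-centroid "classifier" over four hypothetical lesion types.
import numpy as np

LESION_TYPES = ["type_A", "type_B", "type_C", "type_D"]   # placeholder labels

def segment_lesion(ct_volume, threshold=0.6):
    """Toy segmentation: mark voxels above an intensity threshold."""
    return ct_volume > threshold

def extract_features(ct_volume, mask):
    """Simple hand-crafted features: lesion size and mean/std intensity."""
    voxels = ct_volume[mask]
    return np.array([mask.sum(), voxels.mean(), voxels.std()])

def classify(features, centroids):
    """Nearest-centroid stand-in for an ML classifier over lesion types."""
    dists = np.linalg.norm(centroids - features, axis=1)
    return LESION_TYPES[int(dists.argmin())]

# Toy usage with a random volume and made-up class centroids.
rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))
mask = segment_lesion(volume)
feats = extract_features(volume, mask)
centroids = rng.random((4, 3)) * np.array([mask.size, 1.0, 1.0])
print(classify(feats, centroids))
```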


Biography: Arie Kaufman is Distinguished Professor and former Chair of the Department of Computer Science at Stony Brook University, where he is also Director of the Center for Visual Computing (CVC), and Chief Scientist at the Center of Excellence in Wireless and Information Technology (CEWIT).

He received his PhD in Computer Science from Ben-Gurion University of the Negev in 1977. He is known for his work in visualization, graphics, virtual reality, user interfaces, multimedia, and their applications, especially in biomedicine. He is especially well known for his work on 3D virtual colonoscopy, a revolutionary low-risk technique for colon cancer screening, and for pioneering the use of Graphics Processing Units (GPUs) and GPU clusters. In 2012, he presided over the development and opening of the Reality Deck, the largest virtual reality display in the world, at Stony Brook University.

Kaufman was the founding Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics (TVCG), co-founded the IEEE Visualization Conference and the Volume Graphics series, and is currently the Director of the IEEE Computer Society Technical Committee on Visualization and Graphics. He is an IEEE Fellow, an ACM Fellow, a winner of many awards, including the IEEE Visualization Career Award, and a member of the European Academy of Sciences.



Steven Skiena is inviting you to a scheduled Zoom meeting.

Topic: AI Seminar: Arie Kaufman
Time: Apr 21, 2021 10:00 AM Eastern Time (US and Canada)

Join Zoom Meeting
https://stonybrook.zoom.us/j/96017498640?pwd=SE0rdHB6ZVlCM2ZpY2RnRUxyVnR3Zz09