Visual Analytics and Machine Learning for Biomedical Imaging Diagnosis

 

Arie Kaufman

 

We present an integrated approach using visual analytics and machine learning (ML) to diagnose abnormalities in 3D radiological imaging and biological microscopy. The primary example involves 3D virtual pancreatography (VP), a novel visualization-ML procedure and application for non-invasive diagnosis and classification of pancreatic lesions, the precursors of pancreatic cancer. Currently, non-invasive screening of patients is performed through visual inspection of 2D axis-aligned CT images, though the relevant features are often neither clearly visible nor automatically detected. VP is an end-to-end visual diagnosis system that includes ML-based automatic segmentation of the pancreatic gland and lesions, a semi-automatic approach to extract the primary pancreatic duct, ML-based automatic classification of lesions into four prominent types, and specialized 3D and 2D exploratory visualizations of the pancreas, lesions, and surrounding anatomy. We combine volume rendering with pancreas- and lesion-centric visualizations and measurements for effective diagnosis. We designed VP in close collaboration with expert radiologists and evaluated it on multiple real-world CT datasets with various pancreatic lesions, including case studies examined by the expert radiologists. Other applications include virtual colonoscopy, COVID-19, pathology, brain neurites, etc.
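
The VP system described above is a pipeline: ML-based segmentation of the gland and lesions, semi-automatic duct extraction, ML-based lesion classification, and 3D/2D visualization. As a rough, hypothetical sketch of how such stages could be chained (the function names, the VPResult container, and the four lesion-type labels are illustrative assumptions, not the actual VP implementation):

```python
# Hypothetical sketch of a VP-style diagnosis pipeline. All names here are
# illustrative assumptions; they do not describe the actual VP code.
from dataclasses import dataclass
from typing import Callable, List

import numpy as np

# Assumed set of the "four prominent types" mentioned in the abstract.
LESION_TYPES = ["IPMN", "MCN", "SCA", "SPN"]

@dataclass
class VPResult:
    pancreas_mask: np.ndarray        # binary mask from ML segmentation
    lesion_masks: List[np.ndarray]   # one binary mask per detected lesion
    duct_centerline: np.ndarray      # (N, 3) points from semi-automatic extraction
    lesion_types: List[str]          # one predicted label per lesion

def run_vp_pipeline(ct_volume: np.ndarray,
                    segmenter: Callable,
                    duct_extractor: Callable,
                    classifier: Callable) -> VPResult:
    """Chain the stages named in the abstract: segmentation -> duct -> classification."""
    pancreas_mask, lesion_masks = segmenter(ct_volume)           # ML-based segmentation
    duct_centerline = duct_extractor(ct_volume, pancreas_mask)   # semi-automatic step
    lesion_types = [LESION_TYPES[int(classifier(ct_volume, m))]  # ML-based classification
                    for m in lesion_masks]
    return VPResult(pancreas_mask, lesion_masks, duct_centerline, lesion_types)
```

The resulting masks, centerline, and labels would then feed the pancreas- and lesion-centric visualizations (volume rendering, measurements) described in the abstract.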


Biography: Arie Kaufman is a Distinguished Professor and former Chair of the Department of Computer Science at Stony Brook University, where he is also Director of the Center for Visual Computing (CVC) and Chief Scientist at the Center of Excellence in Wireless and Information Technology (CEWIT).

He received his PhD in Computer Science from Ben-Gurion University of the Negev in 1977. He is known for his work in visualization, graphics, virtual reality, user interfaces, multimedia, and their applications, especially in biomedicine. He is especially well known for his work on 3D virtual colonoscopy, a revolutionary low-risk technique for colon cancer screening, and for pioneering the use of Graphics Processing Units (GPUs) and GPU clusters. In 2012, he presided over the development and opening of the Reality Deck at Stony Brook University, the largest virtual reality display in the world.

Kaufman was the founding Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics (TVCG), co-founded the IEEE Visualization Conference and the Volume Graphics series, and is currently the Director of the IEEE Computer Society Technical Committee on Visualization and Graphics. He is an IEEE Fellow and an ACM Fellow, the recipient of many awards, including the IEEE Visualization Career Award, and a member of the European Academy of Sciences.



Steven Skiena is inviting you to a scheduled Zoom meeting.

Topic: AI Seminar: Arie Kaufman
Time: Apr 21, 2021 10:00 AM Eastern Time (US and Canada)

Join Zoom Meeting
https://stonybrook.zoom.us/j/96017498640?pwd=SE0rdHB6ZVlCM2ZpY2RnRUxyVnR3Zz09

All are welcome to attend the BMI Grand Rounds talk by Dr. Le Lu on 04/14.

Le Lu, Ph.D.
Executive Director, PAII Inc.
Johns Hopkins University
IEEE Fellow, MICCAI Board Member


Time: Wednesday, April 14, 2021 3:00 pm - 4:00 pm 

Zoom Meeting 
https://stonybrook.zoom.us/j/95617197636?pwd=KytzZ2pVRG9SZGpKZUtpNXJISjNjZz09 
Meeting ID: 956 1719 7636 Passcode: 924293

Title: 
In Search of Effective and Reproducible Clinical Imaging Biomarkers for Population Health and Oncology Applications of Screening, Diagnosis and Prognosis

Bio: 
Le Lu received his PhD in 2007 from Johns Hopkins University. During his first six years at Siemens, he made significant contributions to the company's CT colonography and lung CAD product lines. From 2013 to 2017, Dr. Lu served as a staff scientist in the Radiology and Imaging Sciences department of the National Institutes of Health Clinical Center. He then founded Nvidia's medical image analysis group and held the position of senior research manager until June 2018. Since then, he has been the Executive Director at PAII Inc.'s Bethesda Research Lab, Maryland, USA, which has become one of the leading industrial research labs in medical imaging. He was the main technical leader for two of the most impactful public radiology image dataset releases (NIH ChestX-ray14 and NIH DeepLesion 2018). He won the NIH Clinical Center Director Award in 2017 and the NIH Mentor of the Year Award in 2015, and has won numerous best paper awards at MICCAI and RSNA from 2016 to 2020 (over 10,000 citations). In 2021, he was elected an IEEE Fellow, cited for his contributions to machine learning for cancer detection and diagnosis, and became a MICCAI Society board member (MICCAI-Industry Workgroup Chair). He is currently an Associate Editor for IEEE Transactions on Pattern Analysis and Machine Intelligence and IEEE Signal Processing Letters, and has served 14 times as an Area Chair for recent MICCAI, AAAI, CVPR, WACV, ICIP, and ICHI conferences.

Abstract: 
This talk will first give an overview of work employing deep learning to enable novel clinical workflows in two population health tasks: liver steatosis screening and quantitative reporting using conventional ultrasound, and osteoporosis screening via conventional X-ray imaging and AI readers. These two tasks were generally considered infeasible for human readers, but as demonstrated by our scientific and clinical studies and peer-reviewed publications, they are suitable for AI readers. AI can be a useful supplementary tool that assists physicians toward cheaper, more convenient, and more precise patient management. Next, the main part of this talk describes a roadmap for three key problems in a pancreatic cancer imaging solution: early screening, precision differential diagnosis, and deep prognosis for patient survival prediction. (1) Based on a new self-learning framework, we train the pancreatic ductal adenocarcinoma (PDAC) segmentation model using a larger quantity of patients (≈1,000, four institutions), with a mix of annotated/unannotated venous or multi-phase CT images. Pseudo annotations are generated by combining two teacher models with different PDAC segmentation specialties on unannotated images, and can be further refined by a teaching-assistant model that identifies associated vessels around the pancreas. Our approach makes robust large-scale PDAC screening from multi-institutional, multi-phase, partially annotated CT scans technically feasible. (2) We propose a holistic segmentation-mesh classification network (SMCN) to provide patient-level diagnosis by fully utilizing geometry and location information. SMCN learns the pancreas and mass segmentation task and builds an anatomical correspondence-aware organ mesh model by progressively deforming a pancreas prototype on the raw segmentation mask. Our results are comparable to a multimodality clinical test that combines clinical, imaging, and molecular testing for the clinical management of patients with cysts. (3) Accurate preoperative prognosis of resectable PDACs for personalized treatment is highly desired in clinical practice. We present a novel deep neural network for the survival prediction of resectable PDAC patients, the 3D Contrast-Enhanced Convolutional Long Short-Term Memory network (CE-ConvLSTM), to derive tumor attenuation signatures from CE-CT imaging studies. Our framework significantly improves prediction performance over existing state-of-the-art survival analysis methods. This deep tumor signature adds clear value as a predictive biomarker to be combined with the existing clinical staging system.
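
The pseudo-annotation step in item (1) combines two teacher models with different PDAC segmentation specialties on unannotated scans. A minimal, hypothetical sketch of that idea is below (averaging the teachers' probability maps and keeping only confident voxels); the teacher modules, thresholds, and the -1 "ignore" convention are assumptions for illustration, not the authors' implementation:

```python
# Hypothetical sketch: pseudo-labels from two teacher segmentation models.
# Thresholds, tensor shapes, and the ignore convention are assumptions.
import torch

@torch.no_grad()
def make_pseudo_labels(ct_volume: torch.Tensor,
                       teacher_a: torch.nn.Module,
                       teacher_b: torch.nn.Module,
                       fg_thresh: float = 0.8,
                       bg_thresh: float = 0.2) -> torch.Tensor:
    """Per-voxel pseudo labels: 1 = PDAC, 0 = background, -1 = ignored."""
    prob_a = torch.sigmoid(teacher_a(ct_volume))   # teacher with one specialty
    prob_b = torch.sigmoid(teacher_b(ct_volume))   # teacher with another specialty
    prob = 0.5 * (prob_a + prob_b)                 # simple ensemble of the two teachers

    pseudo = torch.full_like(prob, -1.0)           # start with everything ignored
    pseudo[prob >= fg_thresh] = 1.0                # confident foreground voxels
    pseudo[prob <= bg_thresh] = 0.0                # confident background voxels
    return pseudo                                  # -1 voxels would be excluded from the student loss
```

A teaching-assistant model, as described in the abstract, could then further refine these labels using the vessels it detects around the pancreas.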

More information can be found at:
https://bmi.stonybrookmedicine.edu/sites/default/files/Lu_le_04_14.pdf

Topic: AI Seminar: Owen Rambow
Time: Mar 17, 2021 10:00 AM Eastern Time (US and Canada)
Join Zoom Meeting

https://stonybrook.zoom.us/j/93614644178?pwd=MzJtVDJYYmU5T1dtMzJiUFMxb0x4dz09
Meeting ID: 936 1464 4178    Passcode: 965936






Natural Language Understanding and Semantic Parsing

(Partly joint work with former colleagues at Elemental Cognition)

Semantic parsing refers to the task of determining the propositional content of language: who did what to whom.  It is part of the larger task of natural language understanding (NLU).  I will start out by discussing what full NLU means, and argue that we are still far away, as a field, from solving full NLU, or even from knowing how to evaluate it.

In the second part of the talk, I will situate semantic parsing in the context of several other NLU subtasks.  Typically, the target representation of semantic parsing uses an ontology (such as PropBank or FrameNet).  Semantic parsing includes the subtasks of word sense disambiguation, argument detection, and argument role labeling.  I will discuss choices among possible target ontologies.  I will justify why we created a new ontology, Hector, based on FrameNet and the lexical resource NOAD, and explain some of its characteristics.

In the third part of the talk, I will present experiments we performed using transformer models. We obtain the best results using a two-phase model, in which we first choose the frame and then, given the frame, choose the arguments. We encode the problem for both tasks using indices into the sentence. While we developed the parser for our new ontology, Hector, this approach also beats the state of the art for FrameNet and PropBank parsing. (An illustrative sketch of this two-phase decomposition appears after the biography below.)

Biography: I am a professor in the Department of Linguistics at Stony Brook University with a joint appointment in IACS.

Until recently, I was a research scientist at Elemental Cognition. Elemental Cognition is working on deep natural language understanding.

I got my PhD with Aravind Joshi at the University of Pennsylvania in 1994. I have worked at CoGenTex, and at AT&T Labs -- Research, and for many years I was a research scientist at Columbia University in the Center for Computational Learning Systems.
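
As a rough illustration of the two-phase model mentioned in the abstract above (choose the frame first, then the arguments given the frame, with both tasks encoded as indices into the sentence), here is a minimal, hypothetical sketch; the model callables and the toy example are assumptions for illustration, not the actual Hector-based parser:

```python
# Hypothetical two-phase semantic parsing sketch: pick a frame for a predicate,
# then label argument spans given that frame. The models are assumed black boxes.
from typing import Callable, Dict, List, Tuple

Span = Tuple[int, int]  # (start, end) token indices into the sentence

def parse_predicate(tokens: List[str],
                    predicate_index: int,
                    frame_model: Callable[[List[str], int], str],
                    argument_model: Callable[[List[str], int, str], Dict[str, Span]]
                    ) -> Tuple[str, Dict[str, Span]]:
    """Phase 1: choose the frame. Phase 2: given the frame, choose the arguments."""
    frame = frame_model(tokens, predicate_index)                 # e.g. "Commerce_buy"
    arguments = argument_model(tokens, predicate_index, frame)   # e.g. {"Buyer": (0, 1)}
    return frame, arguments

# Toy usage with stand-in models:
toy_frame_model = lambda toks, i: "Commerce_buy"
toy_argument_model = lambda toks, i, f: {"Buyer": (0, 1), "Goods": (2, 5)}
print(parse_predicate(["Kim", "bought", "a", "used", "car"], 1,
                      toy_frame_model, toy_argument_model))
```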



Dates: 

Wednesday, March 3, 2021 - 6:00pm to 7:30pm

Location: 

Zoom - contact events@cs.stonybrook.edu for Zoom info.

Event Description: 

Women in Computer Science (WiCS), the Society of Women Engineers (SWE), and the Stony Brook Robotics Team (SBRT) are collaborating to host an event called Inspiring Women in STEM Academia: A Community Dialogue to address the lack of female representation in STEM academia. 
 

All are invited to attend so they may gain a better understanding of the challenges faced by their female colleagues and hear perspectives on how they can offer support in the workplace. Given the shockingly disproportionate underrepresentation of women in STEM academia, we feel this event would be especially beneficial for male faculty, who can listen to and help amplify the voices of their female colleagues.

It will begin with a discussion panel of Stony Brook faculty and staff who will provide valuable insight into the issue. From there, we will split into smaller discussion groups where student and faculty attendees can voice their opinions, hear the thoughts and experiences of others, and take part in an engaging discussion with the panelists.

The event will be held on March 3rd from 6:00 - 7:30 PM on Zoom.
 

The following Stony Brook faculty will be panelists:

Dr. Aruna Balasubramanian - Computer Science Professor, WiCS Advisor, WPhD Advisor

Dr. Xinwei Mao - Civil Engineering Assistant Professor

Urszula Zalewski - Director of Experiential Learning, Career Center Advisor (Healthcare)

Dr. Heather Lynch - Ecology and Evolution Professor, Lynch Lab for Quantitative Ecology

Karen Kernan - URECA Director, Simons Summer Research Program Director

Dr. Eszter Boros - Chemistry Assistant Professor, Boros Lab

Dr. Maria Nagan - Chemistry Lecturer, Nagan Research Lab

CG Group member (and SBU faculty member) Chao Chen will speak on Friday, March 12, about the use of topological data analysis in machine learning for image analysis.
Chao has shared some of his research with the CG Group previously, and this will be a great opportunity to learn more about this exciting research area related to computational geometry and topology!

Time: Friday, March 12, 2pm-3pm
Place: Zoom
https://stonybrook.zoom.us/my/profweizhu?pwd=RjVIVXg3YUhudzZZQ3pheHUydTJBUT09



Title: Learning with Topological Information - Image Analysis and Label Noise
Speaker: Prof. Chao Chen (SBU)

Abstract: Modern machine learning faces new challenges: we are analyzing highly complex data with unknown noise. Topology provides novel structural information to model such data and noise. In this talk, we discuss two directions in which we use topological information in the learning context. In image analysis, we propose a topological loss to segment and to generate images with not only per-pixel accuracy but also topological accuracy. This is necessary in the analysis of images of fine-scale biomedical structures such as neurons and vessels, where extracting these structures with the correct topology is essential for the success of downstream analysis. We also discuss how to use topological information to train classifiers that are robust to label noise. This is important in practice, especially when using deep neural networks, which tend to overfit noise. These results have been published at NeurIPS, ECCV, ICML, and ICLR.
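
The abstract above argues for losses that reward topological correctness, not just per-pixel accuracy. As a rough, hedged illustration of the idea only (not the differentiable persistence-based loss from this line of work), the sketch below penalizes a mismatch in the number of connected components (Betti-0) between a thresholded prediction and the ground truth, using scipy.ndimage.label for component counting:

```python
# Crude stand-in for a topology-aware penalty: compare Betti-0 (number of
# connected components) of the thresholded prediction and the ground truth.
# Illustration only; it is not differentiable and is not the persistence-based
# loss described in the talk.
import numpy as np
from scipy import ndimage

def betti0(mask: np.ndarray) -> int:
    """Number of connected foreground components in a binary mask."""
    _, num_components = ndimage.label(mask)
    return num_components

def topology_penalty(pred_probs: np.ndarray, gt_mask: np.ndarray,
                     threshold: float = 0.5) -> float:
    """Extra penalty (on top of a per-pixel loss) when component counts disagree."""
    pred_mask = pred_probs >= threshold
    return float(abs(betti0(pred_mask) - betti0(gt_mask)))

# Example: the ground truth is one vessel-like component; the prediction breaks it in two.
gt = np.array([[1, 1, 1, 1]])
pred = np.array([[0.9, 0.2, 0.2, 0.9]])
print(topology_penalty(pred, gt))  # -> 1.0
```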