On September 20, 2019, Daniel Rubenstein, Class of 1877 Professor of Zoology, EEB; Director, Program in Environmental Studies; Behavioral Ecology and Conservation at Princeton University, kicked off our 2019/2020 Shutterstock Computer Science Distinguished Lecture Series (DLS). Rubenstein’s talk, “AI for Conservation: AI and Humans Combating Extinction Together,” was delivered in NCS, Room 120 and was followed by a Q&A session and a brief reception.
A selection of quotes from Rubenstein’s talk follows:
Populations of mammals, birds, fish, reptiles, and amphibians declined on average by 60% between 1970 and 2014.... Earth is estimated to have lost 50% of its shallow water corals in the past 30 years, largely due to bleaching. And 20% of the Amazon has disappeared in just 50 years. It's obviously increasing with the current regime that wants to have economic development. And this tension is very real, between improving people's livelihoods and impacting the landscape. And the species that live on that landscape with those people often suffer.
Urbanization is going to happen. Urbanization can leave the wild lands there, but can the connections of those urban areas to water, to the resources they need, to agriculture, be done in a way that doesn't harm species?
We don't have enough data, often, on the size of the populations nor their age structures to make informed decisions on whether they are in a sad state, stable state, increasing state, or one that really needs our assistance.... We need to get good data on numbers, we need to get good data on life histories. We need to know where vulnerabilities are.
Using available images from a number of sources, we can get metadata about the animals. Challenges to this include poor, blurry, or pixelated images; orientation issues; photobombing; the animal changing and growing; and photographer error.
The process has two stages: detection and identification. You have to take a photo, find every individual in it by putting a box around it, and then export it to the identification stage. We write software that optimizes the image for this process. We annotate the species so the computer separates it from the rest of the image, and it then classifies this as a good photo.
After we have a good photo, we ask: who is this? We use the HotSpotter algorithm to compare it to an existing database and identify it.... The algorithm gets it right 99.8% of the time, but we still have a human verify and say yes, that is correct.
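The two stages described above can be sketched in miniature. The code below is a hypothetical, simplified illustration: the `detect` and `identify` functions, the feature vectors, and the distance threshold are all assumptions for the sake of the example. HotSpotter itself matches local keypoint descriptors against a database of known animals, which a nearest-neighbor comparison of feature vectors only loosely approximates.

```python
# A minimal sketch of a two-stage detect-then-identify pipeline,
# loosely modeled on the talk's description. All names, data, and
# the threshold are hypothetical, not HotSpotter's actual API.

from math import dist  # Euclidean distance between feature vectors

def detect(photo):
    """Detection stage: return the features found inside each
    bounding box drawn around an individual animal (hypothetical)."""
    return photo["boxes"]

def identify(features, database, threshold=0.3):
    """Identification stage: nearest-neighbor match against the
    database of known individuals; below the threshold it is a
    match, otherwise it is treated as a new, unseen individual."""
    best_name, best_d = None, float("inf")
    for name, known in database.items():
        d = dist(features, known)
        if d < best_d:
            best_name, best_d = name, d
    return best_name if best_d <= threshold else "new individual"

# Toy data: each "photo" carries pre-extracted feature vectors per box.
database = {"zebra_A": (0.1, 0.9), "zebra_B": (0.8, 0.2)}
photo = {"boxes": [(0.12, 0.88), (0.5, 0.5)]}

for box_features in detect(photo):
    print(identify(box_features, database))
# The first box matches zebra_A; the second is far from every known
# individual, so it is flagged as a new individual.
```

In the real system, the human-in-the-loop step the talk mentions would sit after `identify`, confirming or rejecting each proposed match before the database is updated.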
For more information about Professor Rubenstein and his research, please visit his homepage.
Our next DLS talk will be on Friday, November 8, 2019. Professor Jason Nieh, Columbia University, will present, “Virtualization: Emerging Technologies and Trends.” For a complete schedule, please click here.