Talks and presentations

See a map of all the places I've given a talk!

Semantic Segmentation of Sea Ice Using Multi-scale Spatial Context

December 15, 2022

Talk, 2022 AGU Fall Meeting, Chicago, Illinois

Semantic segmentation generally benefits from combining local features with global-scale context and semantic information. This is especially true for remote sensing applications of image segmentation, where spatial contextual information can lead to better semantic identification. State-of-the-art semantic segmentation algorithms often incorporate local and global features at multiple scales, albeit with different approaches. However, the vast majority of these approaches have an inward-looking focus, meaning that they generate multi-scale features at different subscales, frequently using pooling, which can be interpreted as zooming in on the image’s prominent features. This may not be the optimal solution for semantic segmentation of sea ice type, as the operational sea ice charts used to generate ground-truth training samples often contain large polygons that can cover up to hundreds of square kilometers, or tens of thousands of pixels. Dividing the image into smaller regions may therefore yield little to no additional information. Instead, we hypothesize that incorporating a larger spatial context could increase the accuracy of semantic segmentation of sea ice type from Synthetic Aperture Radar (SAR) images.
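The contrast between zooming in via pooling and widening the spatial context can be made concrete with dilated convolutions, which grow the receptive field without downsampling. The sketch below is a minimal illustration of that general idea, not the architecture from the talk; the module name, channel counts, and dilation rates are all assumptions.

```python
# Minimal sketch (not the talk's model): growing the receptive field with
# dilated convolutions so features summarize a wider spatial context,
# instead of pooling down to sub-regions of the patch.
import torch
import torch.nn as nn

class WideContextBlock(nn.Module):
    """Stacks 3x3 convolutions with increasing dilation rates; the rates
    and channel counts here are illustrative assumptions."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        layers = []
        ch = in_ch
        for rate in (1, 2, 4, 8):  # receptive field grows, resolution does not shrink
            layers += [
                nn.Conv2d(ch, out_ch, kernel_size=3, padding=rate, dilation=rate),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ]
            ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Example: a two-channel SAR patch (e.g., HH and HV backscatter).
features = WideContextBlock(in_ch=2, out_ch=32)(torch.randn(1, 2, 256, 256))
print(features.shape)  # torch.Size([1, 32, 256, 256])
```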

Sea Ice Type Classification using Deep Convolutional Networks and Partial Label Learning (Invited)

December 12, 2022

Talk, 2022 AGU Fall Meeting, Chicago, Illinois

Deep Convolutional Neural Networks (DCNNs) have been used to automate the classification of sea ice properties, such as type, extent, and concentration, from remotely sensed images, using expert-generated operational sea ice charts as labels for training samples. These ice charts comprise a set of polygons, each containing up to three different ice types with corresponding partial concentration levels. This approach, which essentially assigns multiple labels (sea ice types) to each polygon, poses a challenge for conventional deep learning-based sea ice type classification algorithms, which are trained on single-label samples. Additionally, training datasets for sea ice classification, and environmental remote sensing more generally, often suffer from class imbalance. This skews the performance of deep learning algorithms toward higher accuracy on the majority classes, even when the minority classes matter more, as in sea ice classification, where the minority sea ice classes are of greater interest than the majority open-water class.
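One way to make the partial-label setting concrete: treat the polygon’s ice types as a candidate label set and charge each pixel only for its best-matching candidate. The sketch below illustrates that generic partial-label strategy; it is not the talk’s method, and the function name and tensor shapes are assumptions.

```python
# Hedged sketch of a generic partial-label loss, not the talk's method.
import torch
import torch.nn.functional as F

def min_candidate_loss(logits: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
    """logits: (N, C) per-pixel class scores.
    candidates: (N, C) binary mask, 1 where the class appears in the
    pixel's polygon label set. Shapes and the min rule are assumptions."""
    log_probs = F.log_softmax(logits, dim=1)
    # Per-class negative log-likelihood, with non-candidates masked out.
    per_class = (-log_probs).masked_fill(candidates == 0, float("inf"))
    # Each pixel is charged only for its lowest-loss candidate ice type.
    return per_class.min(dim=1).values.mean()

# Example: 4 pixels, 3 ice types; each pixel's polygon lists 2 candidates.
logits = torch.randn(4, 3)
candidates = torch.tensor([[1, 1, 0]] * 4)
print(min_candidate_loss(logits, candidates))
```

For the class-imbalance issue mentioned above, a loss like this combines naturally with per-class weights (e.g., inverse-frequency weighting) so that minority ice types are not drowned out by the open-water class.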

Sea Ice Type Classification using Deep Convolutional Networks and Partial Label Learning (Invited)

December 12, 2022

Talk, 2022 AGU Fall Meeting, Chicago, Illinois

Ice charts that map Arctic sea ice type serve tactical and strategic operational purposes in marine navigation while providing a reference for scientific studies in the Arctic region. Traditionally, ice charts are generated by human experts who interpret daily or weekly remote sensing imagery (e.g., Sentinel-1 Synthetic Aperture Radar (SAR) imagery) and annotate it by identifying areas/polygons with a relatively homogeneous distribution of sea ice types. For each polygon, the dominant ice types, as well as the corresponding partial ice concentrations, are then identified via annotation. While such expert-generated maps are useful, manual generation of ice charts is laborious, unscalable, and error-prone, limiting the coverage, recency, and accuracy of ice charts. In this study, we propose a novel multi-instance proportion-label sea ice classification model to automate the generation of ice charts. Many supervised learning methods have been deployed that leverage manually generated ice charts as labeled data to train models for automated sea ice classification. However, since most of these models classify each pixel in the imagery by learning from “pixel-level” labels in the training dataset, they resort to approximating pixel-level labels from the “polygon-level” labels available in ice charts. Such approximations reduce the accuracy of the trained sea ice classification models. To address the ill-posed problem of training “pixel-level” models from “polygon-level” labels, we instead propose a model, dubbed MIPL-Ice, that takes polygon-level labels (such as ice charts) directly as input for training and learns to generate both pixel-level and polygon-level class predictions as output. With extensive experimentation on a selection of ice charts, we show that our proposed model outperforms existing sea ice classification methods in terms of accuracy, while benefiting from more resource-efficient training and reduced training time.
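The proportion-label idea can be sketched as matching the polygon-level mix of per-pixel predictions to the chart’s partial concentrations. The code below is an illustrative toy, not the MIPL-Ice implementation; the KL objective, function name, and shapes are assumptions.

```python
# Toy sketch of learning from polygon-level proportion labels;
# this is NOT the MIPL-Ice implementation.
import torch
import torch.nn.functional as F

def polygon_proportion_loss(logits: torch.Tensor,
                            target_props: torch.Tensor) -> torch.Tensor:
    """logits: (P, C) scores for the P pixels of one polygon.
    target_props: (C,) partial concentrations from the ice chart,
    summing to 1. Shapes and the KL objective are assumptions."""
    probs = F.softmax(logits, dim=1)
    pred_props = probs.mean(dim=0).clamp_min(1e-8)  # predicted class mix
    # Push the polygon's predicted class mix toward the charted mix.
    return F.kl_div(pred_props.log(), target_props, reduction="sum")

# Example: a 500-pixel polygon charted as 70% first-year ice,
# 20% young ice, 10% open water (hypothetical numbers).
loss = polygon_proportion_loss(torch.randn(500, 3),
                               torch.tensor([0.7, 0.2, 0.1]))
print(loss)
```

Note that the per-pixel softmax outputs remain available at inference time, which is how a model of this shape can emit both pixel-level and polygon-level predictions.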

Sea Ice Type Classification from Sentinel-1 SAR Imagery Using Deep Neural Networks

December 15, 2021

Talk, 2021 AGU Fall Meeting, New Orleans, Louisiana

Sea ice plays an important role in climate change and marine navigation. Mapping of sea ice types, however, remains largely a manual effort. Automated classification of ice types in Synthetic Aperture Radar (SAR) imagery is a challenging task due to many factors, including, but not limited to: i) low ice type separability in SAR backscatter, and ii) spatial, temporal, and thematic inconsistencies in the quality of the Stage of Development (SoD) sea ice charts, which are often used as labels for training. When generating SoD charts, ice experts prioritize operational needs, mostly marine navigation, which generally leads to higher precision in delineating young and new ice polygon boundaries compared to multi-year or first-year ice. Furthermore, the sea ice charts include polygons containing mixed ice types, reflecting the partial concentrations of each ice type in the polygons, which poses a challenge in using the charts as training data, especially since the reported partial concentrations are approximate and, to some extent, subjective.

A Comparison Of Classic Deep Learning Architectures For Sea Ice Classification From SAR

December 15, 2021

Talk, 2021 AGU Fall Meeting, New Orleans, Louisiana

During the last decade, advances in state-of-the-art deep learning models, in particular convolutional neural networks, have driven significant improvements in image recognition tasks. In fact, on the benchmark ImageNet dataset, the state of the art is now recognized as performing better than humans. As a result, many adjacent tasks, including image recognition in remote sensing, have adopted these state-of-the-art models with little investigation into their transferability. For instance, the common image datasets from which pre-trained model weights are derived, and on which modern architectures are evaluated, contain RGB images of everyday items such as animals, symbols, and vehicles. Needless to say, this is very different from the contents of a standard optical or radar image acquired by a satellite.
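One concrete facet of this transferability gap: ImageNet-pretrained weights expect three RGB channels, while a Sentinel-1 scene typically offers two SAR polarization channels (HH/HV or VV/VH). The sketch below shows one common workaround, averaging the pretrained RGB kernels, rather than anything prescribed in the talk.

```python
# Hedged sketch: adapting an ImageNet-pretrained ResNet to 2-channel SAR
# input by averaging its RGB first-layer kernels. One common workaround,
# not the talk's recipe.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
old_conv = model.conv1  # Conv2d(3, 64, kernel_size=7, stride=2, ...)
new_conv = torch.nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    # Initialize the 2-channel kernels from the mean of the RGB kernels.
    new_conv.weight.copy_(
        old_conv.weight.mean(dim=1, keepdim=True).repeat(1, 2, 1, 1))
model.conv1 = new_conv

print(model(torch.randn(1, 2, 224, 224)).shape)  # torch.Size([1, 1000])
```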

Exploring the Notion of Spatial Data Lenses

September 15, 2016

Talk, 9th International Conference on Geographic Information Science, Montreal, Canada

We explore the idea of spatial lenses as pieces of software interpreting data sets in a particular spatial view of an environment. The lenses serve to prepare the data sets for subsequent analysis in that view. Examples include a network lens to view places in a literary text, or a field lens to interpret pharmacy sales in terms of seasonal allergy risks. The theory underlying these lenses is that of core concepts of spatial information, but here we exploit how these concepts enhance the usability of data rather than that of systems. Spatial lenses also supply transformations between multiple views of an environment, for example, between field and object views. They lift these transformations from the level of data format conversions to that of understanding an environment in multiple ways. In software engineering terms, spatial lenses are defined by constructors, generating instances of core concept representations from spatial data sets. Deployed as web services or libraries, spatial lenses would make larger varieties of data sets amenable to mapping and spatial analysis, compared to today’s situation, where file formats determine and limit what one can do. To illustrate and evaluate the idea of spatial lenses, we present a set of experimental lenses, implemented in a variety of languages, and test them with a variety of data sets, some of them non-spatial.
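As a toy illustration of a lens-as-constructor, the sketch below turns point-based pharmacy sales into a field view via inverse-distance weighting; all names and the interpolation choice are invented for illustration and do not reproduce the paper’s multi-language implementations.

```python
# Toy sketch of a spatial lens as a constructor: it builds a core-concept
# 'field' representation from a non-field data set. Names are invented.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

Point = Tuple[float, float]

@dataclass
class Field:
    """Core-concept 'field' view: a value at every point of some extent."""
    value_at: Callable[[Point], float]

def pharmacy_sales_lens(sales_at_stores: Dict[Point, float]) -> Field:
    """Interprets point sales records as a continuous allergy-risk field,
    here via inverse-distance weighting (the interpolation choice is ours)."""
    def value_at(p: Point) -> float:
        weights = {q: 1.0 / ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + 1e-9)
                   for q in sales_at_stores}
        total = sum(weights.values())
        return sum(w * sales_at_stores[q] for q, w in weights.items()) / total
    return Field(value_at)

risk = pharmacy_sales_lens({(0.0, 0.0): 120.0, (1.0, 1.0): 40.0})
print(risk.value_at((0.5, 0.5)))  # 80.0: midway between the two stores
```

Deployed behind a web service, a constructor like this is what would let mapping and analysis tools consume data sets whose file format alone does not announce a field view.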

Question Based Spatial Computing

June 15, 2016

Talk, 19th AGILE International Conference on Geographic Information Science, Helsinki, Finland

Geographic Information Systems (GIS) support spatial problem solving through large repositories of procedures, which mainly operate on map layers. These procedures and their parameters are often not easy to understand and use, especially for domain experts without extensive GIS training. This hinders a wider adoption of mapping and spatial analysis across disciplines. Building on the idea of core concepts of spatial information, and further developing the language for spatial computing based on them, we introduce an alternative approach to spatial analysis, based on the idea that users should be able to ask questions about the environment, rather than finding and executing procedures on map layers. We define such questions in terms of the core concepts of spatial information, and use data abstraction instead of procedural abstraction to structure command spaces for application programmers (and ultimately for end users). We sketch an implementation in Python that enables application programmers to dispatch computations to existing GIS capabilities. The gains in usability and conceptual clarity are illustrated through a case study from economics, comparing a traditional procedural solution with our declarative approach. The case study shows a reduction of computational steps by around 45%, as well as smaller and better organized command spaces.
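The shift from procedural to question-based interaction can be suggested with a small sketch: rather than locating and parameterizing a zonal-statistics procedure, the user asks a field a question. The class and method names below are invented and do not reflect the paper’s actual Python design.

```python
# Toy sketch of question-based spatial computing; names are invented
# and do not reflect the paper's actual Python implementation.
class FieldView:
    """A core-concept 'field': users ask questions of the environment
    instead of running procedures on map layers."""
    def __init__(self, raster):
        self.raster = raster  # 2-D list of values, e.g., elevations

    def value_at(self, row: int, col: int) -> float:
        """'What is the value here?'"""
        return self.raster[row][col]

    def average_within(self, cells) -> float:
        """'What is the average value over this zone?' This hides the
        zonal-statistics procedure a GIS would otherwise expose."""
        values = [self.raster[r][c] for r, c in cells]
        return sum(values) / len(values)

elevation = FieldView([[10.0, 12.0], [11.0, 15.0]])
print(elevation.average_within([(0, 0), (1, 1)]))  # 12.5
```

Structuring the command space around such data abstractions, rather than around procedure catalogs, is what the case study measures as the reduction in computational steps.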