BiDS’25 kicks off with two dedicated days of Satellite Events, comprising community-contributed Tutorials, Hackathons and Challenges, on Monday September 29th and Tuesday September 30th.
Additionally, formats such as Lightning Talks and Birds of a Feather sessions will run throughout the conference from October 1st to 3rd.
Notification of acceptance for Tutorials, Hackathons and Challenges
Registration Opening for Satellite Events
Final submission of materials and instructions for attendees
BiDS’25 Satellite Events days
Tutorials: instructor-led, half-day hands-on tutorials, where participants experiment and follow along using their own laptops;
Hackathons: in-depth coding events that can be initiated prior to BiDS’25, with the finalisation, such as the final coding round or final pitch, taking place at BiDS’25; alternatively, they can run entirely at BiDS’25 for up to two days.
EarthCODE (https://earthcode.esa.int/) proactively promotes open science and innovation by providing a federated ecosystem of FAIR and Open Science tools and platforms. EarthCODE integrates Earth Observation (EO) cloud-native platforms for big data analytics, enabling scalable data management, access, processing, and visualization of datasets as well as reusable data products and workflows.
This tutorial introduces participants to EarthCODE's capabilities, guiding them from finding and accessing EO datasets and workflows to publishing reproducible experiments. Through a hands-on demonstration with an integrated platform, Pangeo, this workshop will equip participants with the tools and knowledge to use EarthCODE for their own projects and (for ESA-funded projects) contribute to the future of open science.
During this 3-hour hands-on workshop, participants will: 1. Understand EarthCODE’s role in advancing FAIR and Open Science, 2. Learn how to find, access, interoperate and reuse data and workflows on EarthCODE, 3. Execute analysis pipelines using Pangeo on EDC, and 4. Publish reusable data and experiments to EarthCODE
This session is ideal for EO researchers, data scientists, and application developers seeking to benefit from and contribute to open science. At the end of the workshop, we will take time for discussion and feedback on how to make EarthCODE better for the community.
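To give a flavour of the discovery-and-access step covered in the session, below is a minimal sketch using the Pangeo stack (pystac-client, xarray, Dask). The catalogue URL, collection ID and asset key are placeholders, not confirmed EarthCODE endpoints; the tutorial materials will point to the actual services.

```python
# Hypothetical sketch: discover datasets via a STAC API and open a cloud-hosted
# Zarr asset lazily. URL, collection and asset names are placeholders.
import xarray as xr
from pystac_client import Client

catalog = Client.open("https://example.org/earthcode/stac")   # placeholder URL
search = catalog.search(
    collections=["example-eo-collection"],                    # placeholder ID
    bbox=[5.0, 45.0, 10.0, 50.0],
    datetime="2023-01-01/2023-12-31",
)
items = list(search.items())
print(f"Found {len(items)} items")

# Open one asset lazily with Dask-backed chunks for scalable analysis.
asset_href = items[0].assets["data"].href                     # asset key illustrative
ds = xr.open_dataset(asset_href, engine="zarr", chunks={})
print(ds)
```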
Organisers and expected contributors: Deyan Samardzhiev (Lampata), Anne Fouilloux (Simula Research Laboratory), Tina Odaka (IFREMER), Anca Anghelea (ESA), Ewelina Agnieszka Dobrowolska (Serco), Chandra Taposeea-Fisher (Telespazio UK), Garin Smith (Telespazio UK)
2. A coupled atmosphere-hydrosphere-lithosphere digital twin system: physics-based explainable AI paradigms [Tutorial, 3 hours]
Earth System models describe phenomena at scales of thousands of kilometers, over decades. However, climate change adaptation measures must be applied at the scales of human activities, from 10 m to 1 km and from days to months. The tutorial aims to promote the opportunities offered by the availability of Big EO Data, with a broad variety of sensing modalities, global coverage, and more than 40 years of observations, in synergy with the new resources of AI and HPC. The novelty is the presentation of a Digital Twin (DT) concept for the analysis of coupled atmosphere-hydrosphere-lithosphere phenomena, which provides for the first time an integrated view of how climate-change-stimulated phenomena can impact Earth structures and human activities. The DT uses the synergy of EO data such as Sentinel images, GNSS, infrasound, seismic, or lightning records. Firstly, the concept of physics-aware AI will be presented, using models of signal acquisition, scattering mechanisms, embedding domain knowledge, discovering causalities and explaining the observed processes. Secondly, a hands-on interactive session will be organised.
Participants will have access to a DT system and, in a collaborative session, will be supported in demonstrating use cases to monitor and forecast natural and human activity in support of sustainable development and resilience to climate change effects. During the session, participant feedback will be collected in a report to be openly published.
Organisers and expected contributors: Bogdan Antonescu (National Institute for Earth Physics, Bucharest and University of Bucharest), Mihai Datcu (POLITEHNICA Bucharest)
The Earth Observation Training Data Lab (EOTDL) addresses critical challenges in the development of AI for Earth Observation (EO) applications. A major barrier for leveraging AI in EO is the lack of accessible, high-quality training datasets. These datasets are costly and complex to create, requiring extensive manual labeling, expert input, and often in-situ validation, which limits innovation and hinders the growth of EO-based solutions. The EOTDL aims to tackle these challenges by providing an open, collaborative platform that offers a suite of tools for generating, curating, and utilizing AI-ready training datasets. This platform includes a cloud-based repository with over 100 datasets spanning multiple EO applications, from computer vision tasks like classification and object detection to advanced parameter estimation and 3D analysis. In addition to the repository of training datasets, the platform also includes a repository of pre-trained machine learning models, which accelerates the development process for users by providing a starting point for various EO tasks.
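As an indication of the kind of workflow the platform supports, here is a rough sketch using the eotdl Python package. The function and dataset names below are assumptions illustrating the browse-and-download pattern described above, not a verified API reference; consult the EOTDL documentation for the current interface.

```python
# Illustrative sketch of listing and downloading an EOTDL training dataset.
# Function names and the dataset identifier are assumptions, not verified API.
from eotdl.datasets import retrieve_datasets, download_dataset  # assumed helpers

datasets = retrieve_datasets()           # list the AI-ready datasets in the repository
print(datasets[:5])

path = download_dataset("EuroSAT-RGB")   # dataset name is illustrative
print(f"Dataset staged locally at {path}")
```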
Organisers and expected contributors: Juan Pedro (EarthPulse), Hector Rueda (EarthPulse)
Destination Earth is an initiative of the European Union to create a highly accurate digital model of Earth. At the heart of this initiative lies its Data Lake Services, which provide a powerful framework for storing, managing, and processing various datasets.
This tutorial will guide you through these services, focusing on Harmonised Data Access (HDA) and introducing EDGE services, which enable data fusion, processing in proximity to data sources and large-scale analytics.
You will learn how the HDA service streamlines data retrieval, ensuring seamless access to datasets from multiple sources including satellite imagery, climate models, and in-situ observations. You will also explore EDGE services, designed to bring computing closer to the data, reducing latency and enabling large-scale analytics:
STACK, a powerful environment featuring Jupyter Notebook and DASK, enabling interactive data analysis and distributed computing.
ISLET, an Infrastructure-as-a-Service (IaaS) solution providing scalable and distributed cloud-based computing resources to support intensive computational workloads.
HOOK, a workflow automation service that orchestrates data processing tasks, making it easier to manage complex workflows.
By the end of this tutorial, you will be equipped to use the Data Lake Services, work efficiently with the HDA service and leverage EDGE services for advanced analytics. Whether you are a scientist, developer, or policymaker, this guide will help you unlock the full potential of the Destination Earth Data Lake and turn data into actionable insights for a more sustainable future!
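To illustrate the style of data discovery involved, below is a hedged sketch of querying HDA through a STAC-style interface with pystac-client. The endpoint URL and collection identifier are assumptions, and HDA requires authentication that is omitted here; follow the Destination Earth Data Lake documentation for the actual workflow.

```python
# Hypothetical sketch of a Harmonised Data Access (HDA) search via a STAC API.
# Endpoint, collection ID and the (omitted) authentication step are assumptions.
from pystac_client import Client

hda = Client.open("https://hda.data.destination-earth.eu/stac")   # assumed endpoint
search = hda.search(
    collections=["EO.ESA.DAT.SENTINEL-2.MSI.L2A"],                # illustrative ID
    bbox=[12.3, 41.7, 12.7, 42.0],
    datetime="2024-07-01/2024-07-31",
)
for item in search.items():
    print(item.id, item.datetime)
```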
Organisers and expected contributors: Danaele Puechmaille (EUMETSAT), Michael Schick (EUMETSAT), Patryk Grzybowski (CloudFerro), Aubin Lambare (CS Sopra Steria)
EODAG is an open-source Python library and command-line interface client that federates and unifies access to cross-provider Earth Observation data. The library is already widely used at ESA, EUMETSAT, CNES, EUSPA and by other EO specialists including Airbus and Thales.
In this tutorial the attendees will learn to use the library efficiently as well as discover the latest features and improvements. Advanced users will also learn to implement their own plugin to be able to extend EODAG with their own data catalogue.
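For orientation, a minimal sketch of typical EODAG usage is shown below: searching a provider-agnostic product type over an area and date range, then downloading one product. Provider credentials are assumed to be configured separately (environment variables or the EODAG configuration file), and return types can differ slightly between EODAG versions.

```python
# Minimal EODAG example: search Sentinel-2 L2A products and download one.
# Assumes provider credentials are already configured for EODAG.
from eodag import EODataAccessGateway

dag = EODataAccessGateway()
products = dag.search(
    productType="S2_MSI_L2A",
    geom={"lonmin": 1.0, "latmin": 43.0, "lonmax": 2.0, "latmax": 44.0},
    start="2024-05-01",
    end="2024-05-31",
)  # note: older EODAG versions returned a (results, count) tuple
print(f"{len(products)} products found")

path = dag.download(products[0])   # downloads to the configured output directory
print(path)
```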
Organisers and expected contributors: Sylvain Brunato (CS GROUP), Arnaud Le Carvennec (CS GROUP), Aubin Lambaré (CS GROUP)
Recent advances in satellite technology have enabled frequent, high-resolution, and global-scale monitoring of the Earth. This explosion in the availability of Earth Observation (EO) data presents unprecedented opportunities to support critical societal challenges—from monitoring environmental change and improving agricultural practices to enhancing urban planning and disaster response. Yet, the potential of EO data remains underexploited, primarily due to its complexity, heterogeneity, and the technical expertise required to access and analyze it.
Artificial Intelligence, knowledge graphs, and semantic technologies are emerging as powerful enablers for bridging the gap between raw EO data and actionable insights. These technologies offer intuitive, flexible, and scalable approaches to organize, query, and interpret EO datasets, making them more accessible not only to domain experts but also to decision-makers, application developers, and the broader public.
This tutorial will walk participants through the full pipeline of building intelligent, user-friendly EO services—from ingesting raw satellite data to creating responsive, natural language-powered applications. Using a real-world use case focused on France, we demonstrate how raw EO data can be transformed into semi-structured knowledge and used for spatiotemporal analysis with interactive querying. The tutorial is hands-on and modular, enabling participants to learn by doing, while also offering a blueprint that can be replicated across other regions and domains.
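To make the semantic-querying idea concrete, here is an illustrative spatiotemporal query against an EO knowledge graph, issued with SPARQLWrapper. The endpoint URL, vocabulary and class names are assumptions for demonstration only and do not reflect the tutorial's actual resources.

```python
# Illustrative GeoSPARQL query over an EO knowledge graph.
# Endpoint, namespaces and class names are placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/eo-kg/sparql")   # placeholder endpoint
sparql.setQuery("""
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
PREFIX ex:   <http://example.org/eo#>

SELECT ?obs ?date WHERE {
  ?obs a ex:BurntAreaObservation ;            # illustrative class
       ex:acquisitionDate ?date ;
       geo:hasGeometry/geo:asWKT ?wkt .
  FILTER(geof:sfIntersects(?wkt,
         "POLYGON((-1 46, 8 46, 8 43, -1 43, -1 46))"^^geo:wktLiteral))
}
LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["obs"]["value"], row["date"]["value"])
```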
Organisers and expected contributors: Manolis Koubarakis (National and Kapodistrian University of Athens), Sergios Anestis Kefalidis (National and Kapodistrian University of Athens), Konstantinos Plas (National and Kapodistrian University of Athens), Despina-Athanasia Pantazi (National and Kapodistrian University of Athens), George Stamoulis (National and Kapodistrian University of Athens)
This hands-on tutorial introduces participants to JupyterGIS, a cloud-based platform for collaborative geospatial data analysis built on top of JupyterLab. Designed to support teamwork in spatial data workflows, JupyterGIS enables multiple users to interactively work with raster and vector datasets, apply symbology and filters, and perform spatial analysis using Python. The platform integrates seamlessly with QGIS and supports real-time collaboration features such as shared editing, live cursors, and geolocated comments.
During this 3-hour tutorial, participants will follow guided exercises to load and visualize geospatial data, apply styling rules, and script analyses using the JupyterGIS Python API. No local installation is required—participants will use a cloud instance via their browser.
This tutorial is ideal for researchers, analysts, and data scientists working with Earth Observation or spatial datasets who are interested in modern, collaborative, and open-source GIS tools. Participants should be familiar with basic GIS concepts and have some exposure to Jupyter Notebooks or Python.
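The snippet below gives a rough feel for scripting a map with the JupyterGIS Python API. The class and method names follow publicly shown JupyterGIS examples but may differ between versions; treat them as assumptions and rely on the tutorial's own notebooks for the authoritative usage.

```python
# Rough sketch of building a shareable map in a JupyterGIS notebook.
# Method names and parameters are assumptions based on published examples.
from jupytergis import GISDocument

doc = GISDocument()                          # a collaborative GIS document
doc.add_raster_layer(                        # assumed signature: tiled basemap
    url="https://tile.openstreetmap.org/{z}/{x}/{y}.png",
    name="OpenStreetMap",
)
doc.add_geojson_layer(                       # assumed signature: local vector layer
    path="observations.geojson",
    name="Field observations",
)
doc   # displaying the document renders the interactive, co-editable map
```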
Organisers and expected contributors: Anne Fouilloux (Simula Research Laboratory), Elifsu Filiz (Simula Research Laboratory)
This hands-on tutorial guides participants through the end-to-end process of working with Earth Observation (EO) data in cloud environments: from developing EO-based indicators, to FAIR workflow and data (indicator) publication in a STAC catalogue, to dissemination via engaging and interactive visual storytelling. Using open, cloud-optimized EO data from ESA, NASA, and JAXA, participants will build and publish simple indicators on EarthCODE-integrated platforms such as the openEO federation on CDSE, DeepESDL or Euro Data Cube. These platforms enable transparent, reproducible science through open processing environments.
Next, guided by the FAIR and open science practices recommended by EarthCODE, participants will describe and publish their data and methods to the EO Dashboard. The indicator publication will also guide them through the process of configuring draft visualizations and narrative structures based on EOxElements, ultimately transforming scientific data into accessible, public-facing stories.
Technologies like EOxHub Workspaces and eodash will be used throughout the session to explore federated data workflows and storytelling tools. This tutorial empowers participants to effectively communicate EO insights to a broad audience through clear, compelling, and interactive narratives.
Organisers and expected contributors: Anca Anghelea (European Space Agency), Daniel Santillan (EOX IT Services GmbH), Diego Moglioni (Starion Group)
In an era of unprecedented availability of Earth Observation (EO) data, the Copernicus Data Space Ecosystem (CDSE) plays a key role in bridging the gap between data accessibility and actionable insights. Despite the availability of freely accessible satellite data, widespread adoption of EO applications remains limited due to challenges in extracting meaningful information. Many EO-based projects struggle with non-repeatable, non-reusable workflows, mainly due to the lack of standardized, scalable solutions.
CDSE tackles these barriers by adopting common standards and patterns, most notably through openEO.
This open-source solution is a community-driven standard that simplifies remote sensing data access, processing, and analysis by offering a unified platform. It empowers developers, researchers, and data scientists to use cloud-based resources and distributed computing environments to tackle complex geospatial challenges. Adhering to the FAIR principles (Findable, Accessible, Interoperable, and Reusable), it supports the global sharing and reuse of algorithms, enhancing collaboration and scalability.
Therefore, in this tutorial session we aim to enhance Earth Observation data analysis skills using openEO. The session includes hands-on use of openEO to develop user-defined processes (UDPs) and share them with the user community as a practice of work reusability. Our training program combines theoretical explanation with practical application through hands-on exercises, ensuring a well-rounded learning experience. Real-world case studies will be presented, showcasing successful implementations of the API.
Furthermore, we will also present several large-scale use cases that used openEO in their workflows and are shared with the community through the openEO Algorithm Plaza.
We invite participants to join us for a detailed tutorial to explore openEO's capabilities for developing scalable and reusable workflows within the Copernicus Data Space Ecosystem. Participants will learn how to design a reusable algorithm that is scalable for varying remote sensing applications.
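As a taste of what the hands-on part involves, the sketch below builds a small NDVI workflow with the openEO Python client against the CDSE back-end. Collection name, bands and extents are illustrative; reusable workflows can afterwards be registered as UDPs (for example via the client's save_user_defined_process method), which is the sharing practice the session focuses on.

```python
# Minimal openEO example on the Copernicus Data Space Ecosystem back-end.
# Spatial/temporal extents are illustrative; authentication opens a browser window.
import openeo

connection = openeo.connect("openeo.dataspace.copernicus.eu").authenticate_oidc()

cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 5.0, "south": 51.2, "east": 5.1, "north": 51.3},
    temporal_extent=["2024-06-01", "2024-06-30"],
    bands=["B04", "B08"],
)
ndvi = cube.ndvi(red="B04", nir="B08").max_time()   # per-pixel NDVI, monthly maximum

# Run as a batch job on the back-end and download the result.
ndvi.execute_batch(outputfile="ndvi_max.tiff", out_format="GTiff")
```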
Organisers and expected contributors: Pratichhya Sharma (Vlaamse Instelling voor Technologisch Onderzoek (VITO)), Jeroen Dries (Vlaamse Instelling voor Technologisch Onderzoek (VITO)), Hans Vanrompay (Vlaamse Instelling voor Technologisch Onderzoek (VITO))
This tutorial is designed to guide participants through the end-to-end process of training AI models using SCANEO, a state-of-the-art, AI-powered labeling tool specifically designed for satellite imagery. SCANEO uses active learning to significantly accelerate and improve the accuracy of the data labeling process, which is typically a bottleneck when working with large volumes of Earth Observation (EO) data. Participants will learn how to leverage SCANEO to generate high-quality labeled datasets for training deep learning models, particularly in the context of satellite image analysis.
Participants will gain hands-on experience with SCANEO, starting from selecting areas of interest in satellite images, to generating labeled datasets for tasks like semantic segmentation and object detection. The tool's active learning capabilities will allow participants to iteratively improve model performance by minimizing labeling errors and efficiently labeling large datasets. In addition to SCANEO, we will explore how to integrate these labeled datasets into the Earth Observation Training Data Lab (EOTDL) platform for storage and management, facilitating seamless collaboration and data sharing.
This tutorial is aimed at technical professionals who are familiar with deep learning and are looking to improve their data labeling workflows in Earth Observation applications. By the end of the session, participants will be able to use SCANEO to label EO data efficiently, integrate it with data management platforms, and train AI models for tasks like image classification, change detection, and more.
Organisers and expected contributors: Fran Martin (EarthPulse), Juan B. Pedro (EarthPulse)
The Copernicus Data Space Ecosystem (CDSE) is the data sharing platform and cloud processing infrastructure for the Sentinel satellites. This platform integrates instant access to the satellite data archive, API-based streamlined integration into your code, free virtual machine resources and an open codebase, creating a transformative solution for Earth Observation. The Sentinel Hub APIs are a family of RESTful APIs allowing in-code access to the full repository of the Copernicus Data Space Ecosystem, supporting catalog, raster analysis and zonal statistics operations. CDSE hosts an intuitive web browser and JupyterLab environment, where users can develop and test code using a small virtual machine with kernels and dependencies already installed. This means scalable Earth Observation processing chains can be developed and applied on free public virtual machines, eliminating the need to create and maintain a local coding environment or powerful computing engines. Additionally, Copernicus Data Space Ecosystem data collections save much of the data preparation work normally required for large-scale data processing.
This tutorial will introduce you to data processing and analysis in JupyterLab using the Sentinel Hub APIs. We will start with a brief introduction to Copernicus Browser and Jupyter, and move on to learn about the various Sentinel Hub APIs and what they can do. Sentinel Hub API functionality includes catalog searches, raster processing with time-series analysis and data fusion from different sensors, raster-to-vector integration for zonal statistics, and raster data ingestion for standardized processing. The example notebooks will guide you from the first steps of coding in Jupyter to performing end-to-end analysis for environmental research. Finally, we will explore data visualization and end-to-end processing examples from the field of environmental sciences.
Before the course, you need to have basic Earth Observation and GIS skills, and some basic experience of coding. After the course, you will have the skills necessary to adapt the tutorials to your own use case, to make use of the existing documentation and to develop new applications in the Jupyter Lab. The notebooks shown at the training event will be published openly on the CDSE repository, providing resources for participants to continue experimenting after the course.
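For reference, a minimal sketch of a Catalog API search with the sentinelhub Python package is shown below. Client credentials and the CDSE endpoints are configured as described in the CDSE documentation; the credential values, area and dates here are placeholders.

```python
# Minimal Sentinel Hub Catalog API search against the CDSE deployment.
# Credentials are placeholders; endpoints follow the CDSE documentation.
from sentinelhub import SHConfig, SentinelHubCatalog, DataCollection, BBox, CRS

config = SHConfig()
config.sh_client_id = "<your-client-id>"
config.sh_client_secret = "<your-client-secret>"
config.sh_base_url = "https://sh.dataspace.copernicus.eu"
config.sh_token_url = (
    "https://identity.dataspace.copernicus.eu/auth/realms/CDSE"
    "/protocol/openid-connect/token"
)

catalog = SentinelHubCatalog(config=config)
aoi = BBox((14.4, 46.0, 14.6, 46.2), crs=CRS.WGS84)
results = catalog.search(
    DataCollection.SENTINEL2_L2A,
    bbox=aoi,
    time=("2024-06-01", "2024-06-30"),
)
for feature in results:
    print(feature["id"], feature["properties"]["datetime"])
```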
Organisers and expected contributors: András Zlinszky (Sinergise d.o.o), William Ray (Sinergise Solutions GmbH)
We need tools to efficiently analyse the increasing stream of available remote sensing data. Spatiotemporal data cubes are becoming ever more abundant for this purpose and are widely used in the Earth Observation community to handle geospatial raster data. Sophisticated frameworks in high-level programming languages like R and Python allow scientists to draft and run their data analysis pipelines and to scale them in HPC or cloud environments.
While many data cube frameworks can handle harmonized analysis-ready data cubes very well, we repeatedly experienced problems when running complex analyses on multi-source data that was not homogenized. The problems arise when different datasets need to be resampled on the fly to a common resolution and have non-aligning chunk boundaries, which leads to very complex and often unresolvable task graphs in frameworks like xarray+dask.
In this workshop we present the emerging ecosystem of large-scale geodata processing and visualisation in the Julia programming language. Julia is an interactive scientific programming language designed for HPC applications, with primitives for multi-threaded and distributed computations built into the language.
We will demonstrate an example analysis where data from different sources (Sentinel-1, Sentinel-2, ...), summing to multiple TBs of data, can interoperate on-the-fly and scale well when run on different computing environments. We will also show how to combine these raster data with vector data to derive vector data cubes.
Organisers and expected contributors: Felix Cremer (Max Planck Institute for Biogeochemistry), Fabian Gans (Max Planck Institute for Biogeochemistry), Anshul Singhvi (JuliaHub)
The EOPF Sentinel Zarr Samples Service by ESA provides Sentinel products in Zarr format, enabling scalable and efficient Earth Observation (EO) analysis. This tutorial offers a hands-on, cloud-native introduction to EO data workflows, focusing on the EOPF Sentinel Zarr Samples Service, the Zarr data format, and associated tools such as STAC, pystac-client, xarray, Dask and Julia.
Over the course of three interactive sessions, participants will gain practical experience discovering, accessing, and processing EO data using cloud-optimized formats. They will explore the EOPF STAC catalog for data discovery, learn to manipulate Zarr datasets, and implement scalable analysis workflows using modern libraries and frameworks.
Ideal for those new to cloud-based EO workflows, this tutorial will empower participants with the skills and knowledge to efficiently work with cloud-based Earth observation data. Following the tutorial, participants will have the opportunity to further develop their skills during a hands-on hackathon, applying what they’ve learned with the EOPF Sentinel Zarr Samples Service and Zarr.
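The following sketch shows the kind of discovery-and-access pattern the sessions build up: find an item in the EOPF STAC catalog, then lazily open part of its Zarr product with xarray and Dask. The endpoint URL, collection ID, asset key and group path are assumptions; the tutorial material will provide the current values.

```python
# Hypothetical EOPF Zarr access: STAC search, then lazy open with xarray + Dask.
# Endpoint, collection, asset key and group path are placeholders.
import xarray as xr
from pystac_client import Client

catalog = Client.open("https://stac.core.eopf.eodc.eu")        # assumed endpoint
search = catalog.search(
    collections=["sentinel-2-l2a"],                            # illustrative ID
    bbox=[10.0, 45.0, 11.0, 46.0],
    datetime="2025-06-01/2025-06-30",
)
item = next(search.items())

# EOPF products are hierarchical Zarr stores; open one group lazily.
zarr_href = item.assets["product"].href                        # asset key illustrative
ds = xr.open_dataset(zarr_href, engine="zarr", chunks={},
                     group="measurements/reflectance/r10m")    # group path illustrative
print(ds)
```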
Organisers and expected contributors: Gisela Romero Candanedo (thriveGEO GmbH), Anne Fouilloux (Simula), Tina Odaka (IFREMER), Konstantin Ntokas (Brockmann Consult), Julia Wagemann (thriveGEO GmbH)
In this tutorial, the participants will have access to harmonized, analysis-ready, gap-filled and complete Landsat global mosaics from 1997 onward in cloud-optimized GeoTIFF (COG) format (130 TB of data).
Spanning over 25 years and structured in 7 spectral bands (RGB, NIR, SWIR-1, SWIR-2 and thermal), this dataset is instrumental for long-term monitoring of land cover change, soil properties, vegetation productivity, land degradation, vegetation height and other environmental characteristics.
The global mosaics were produced via the Time-Series Iteration-free Reconstruction (TSIRF) framework over the entire Global Land Analysis and Discovery (GLAD) ARD Landsat archive (https://doi.org/10.7717/peerj.18585). The full dataset is publicly available as open data (CC-BY) via a SpatioTemporal Asset Catalog (https://stac.openlandmap.org) and the Copernicus Data Space Ecosystem (https://dataspace.copernicus.eu). Participants will learn about the methodological background and the various ways of accessing the dataset and running on-the-fly computations on it, including spatiotemporal machine learning using the scikit-map library (https://github.com/openlandmap/scikit-map). Basic knowledge of GIS and an intermediate level of programming (Python or R) are required. This product is part of the projects Open-Earth-Monitor Cyberinfrastructure (Horizon Europe - No. 101059548) and Global Pasture Watch (Land & Carbon Lab at the World Resources Institute).
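A possible access pattern is sketched below: locate an item in the catalog linked above and stream just an area of interest from a cloud-optimized GeoTIFF with rioxarray. The collection ID and asset key are assumptions, and the sketch assumes the catalog exposes a STAC API search endpoint; for a static catalog you would instead walk its collections and items.

```python
# Hypothetical access to one Landsat mosaic COG via STAC + rioxarray.
# Collection ID and asset key are placeholders; confirm them by browsing the catalog.
import rioxarray
from pystac_client import Client

catalog = Client.open("https://stac.openlandmap.org")
search = catalog.search(collections=["landsat-ard-mosaics"],   # illustrative ID
                        bbox=[6.0, 49.0, 7.0, 50.0])
item = next(search.items())

cog_href = next(iter(item.assets.values())).href               # first asset, illustrative
band = rioxarray.open_rasterio(cog_href, chunks=True)          # lazy, Dask-backed read
subset = band.rio.clip_box(minx=6.0, miny=49.0, maxx=6.2, maxy=49.2, crs="EPSG:4326")
print(subset)
```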
Organisers and expected contributors: Leandro Parente (OpenGeoHub Foundation), Davide Consoli (OpenGeoHub Foundation), Tomislav Hengl (OpenGeoHub Foundation), Yu-Feng Ho (OpenGeoHub Foundation)
EO data offers significant opportunities and challenges in environmental monitoring, climate change, and sustainable development. This hands-on tutorial, organized by Pangeo and EO4EU, will introduce participants to scalable tools and techniques for analyzing EO data. Using Pangeo and AI-powered workflows, attendees will learn to process and derive insights from large EO datasets, with a focus on European EO initiatives and the EO4EU project, which enhances EO data use for Europe’s environmental and policy needs.
Objectives:
Organisers and expected contributors: Vasileios Baousis (European Centre for Medium-Range Weather Forecasts (ECMWF)), Anne Fouilloux (Simula), Tina Odaka (ifremer), Mohanad Albughdadi (ECMWF), Claudio Pisa (ECMWF), Armagan Karatosun (EUMETSAT)
Track 1: EO Foundation Models – Explore large-scale EO models for diverse geospatial tasks such as land cover classification or biomass estimation. The focus is on experimentation with fine-tuning, adapting models across modalities and regions, and uncovering novel use cases or limitations.
Track 2: Weather Forecasting Models – This track centers on assessing and extending the capabilities of state-of-the-art weather models. Develop new evaluation frameworks and apply models to a variety of downstream forecasting tasks to identify strengths and gaps in current approaches.
Track 3: EO Vision-Language Models – Interact with EO data through natural language, using VLMs for tasks like image captioning, visual question answering, or EO data retrieval. The track emphasizes creative exploration of how text and vision can be combined to enhance interpretation and accessibility of EO imagery.
Each track supports creativity, collaborative exploration, and technical depth, with provided datasets, models, and example notebooks. Projects will be evaluated by a panel of experts based on effectiveness, innovation, clarity, and impact potential.
Organisers and expected contributors: Souzana Touloumtzi (National Observatory of Athens), Yi Wang (Technical University of Munich), Franziska Gerken (NVIDIA), Angelos Zavras (National Technical University of Athens), Esther Millet (Evenflow)
This hackathon invites participants—from newcomers to experts—to delve into the Earth Observation Processing Framework (EOPF) using cutting-edge, cloud-native formats and tools such as Zarr, STAC, and Dask or Julia. Focused on collaboration and innovation, the event aims to enhance EO data access and processing across all Copernicus Sentinel missions. Attendees will explore various themes, including optimizing chunking strategies, applying the Discrete Global Grid System (DGGS), trying out Parquet for metadata storage, discussing GeoZarr, CF-conventions, and leveraging the advanced capabilities of Zarr v3. Participants are also encouraged to bring their own data and unique challenges to the table.
The hackathon seeks to address real-world problems through cloud-native approaches, offering an opportunity to provide valuable feedback for the enhancement of the EOPF Zarr Samples Service and share insights gained during the event. For those unfamiliar with EOPF, a comprehensive full-day tutorial will be available the day before the hackathon, ensuring a solid foundation in the tools and technologies used.
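As a toy illustration of the chunking experiments mentioned above, the sketch below opens a Zarr store lazily, inspects its on-disk chunking, and rechunks it for a time-series access pattern. The store URL and variable name are placeholders.

```python
# Toy chunking experiment with xarray + Dask on a Zarr store (URL is a placeholder).
import xarray as xr

ds = xr.open_dataset("https://example.org/sample-product.zarr",
                     engine="zarr", chunks={})
print(ds["reflectance"].encoding.get("chunks"))   # on-disk chunks; variable name illustrative

# Favour long time series at few pixels: full time chunks, smaller spatial chunks.
ds_ts = ds.chunk({"time": -1, "y": 256, "x": 256})
print(ds_ts["reflectance"].chunks)                # Dask chunking after rechunking
```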
Organisers and expected contributors: Anne Fouilloux (Simula Research Laboratory), Tina Odaka (IFREMER), Konstantin Ntokas (Brockmann Consult GmbH), Jean-Marc Delouis (CNRS/IFREMER), Julia Wagemann (thriveGEO GmbH), Felix Cremer (Max Planck Institute for Biogeochemistry)
Saluts invites space operators, AI developers, and data scientists to a hackathon on AI-Driven Space Mission Management using our CHIRB + NEROnaut platform. Participants will build chat-based workflows that ingest ephemeris (TLE), query Telemetry & Telecommand (TM&TC) databases, schedule mission operations, and optimize communication using our Robot-on-Chip simulator. During the hackathon, teams will prototype end-to-end pipelines—from mission tasking and pass scheduling to real-time AI analytics on downlinked data. This event targets developers and mission planners with Python/API experience and an interest in space big data. All tooling runs in browser-based environments—no local installs required.
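To indicate the kind of ephemeris handling involved, here is a generic sketch of ingesting a two-line element set (TLE) and propagating it with the open-source sgp4 library. This is not the CHIRB + NEROnaut platform API; the TLE is a historical ISS example used purely for demonstration.

```python
# Generic TLE ingestion and propagation with the sgp4 library (not the hackathon platform API).
from sgp4.api import Satrec, jday

line1 = "1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9996"
line2 = "2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482"
sat = Satrec.twoline2rv(line1, line2)

jd, fr = jday(2019, 12, 9, 12, 0, 0)        # UTC epoch to propagate to
err, position, velocity = sat.sgp4(jd, fr)  # TEME frame, km and km/s; err == 0 means success
print(err, position, velocity)
```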
Organisers and expected contributors: Mohamed Sobhy Fouda (Saluts)
Questions regarding Satellite events organisation may be directed to events.organisation@esa.int (Subject: BiDS’25 - Satellite Event).