📋 Task
autoPET V defines a single unified task: interactive lesion segmentation in whole-body PET/CT.
Participants develop algorithms that:
- Generate an initial segmentation of tracer-avid tumor lesions
- Iteratively refine this segmentation based on sparse corrective input during inference
Unlike previous editions, this challenge focuses on adaptive human–AI interaction, rather than static, fully automated segmentation.
All publicly available data and (pre-trained) models, including foundation models, are allowed.
Participants may develop:
- novel model architectures
- interaction-aware or prompt-based methods
- data-centric pipelines (pre-/post-processing)
🎯 Goal
The goal is to develop algorithms that are:
- Accurate → high-quality lesion segmentation
- Robust → generalize across tracers, centers, and acquisition conditions
- Adaptive → efficiently incorporate human corrective input
The challenge explicitly evaluates how quickly and reliably models improve when given minimal guidance, reflecting real clinical workflows.
🔄 Interactive Segmentation
The task models a realistic clinical workflow in which automated segmentations are iteratively refined through human input.
Each method is evaluated in an interactive loop:
- The algorithm produces an initial lesion segmentation
- Corrective input is provided in the form of sparse scribbles targeting:
  - false-positive regions (over-segmentation)
  - false-negative regions (missed lesions)
- The algorithm updates its prediction based on all previously provided input
This interaction is performed at inference time, without retraining or parameter updates between steps.
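The inference-time loop described above can be sketched as follows. This is a minimal illustration with a toy update rule; a real submission would feed the scribbles into the network as prompts rather than pasting them into the mask. The function names and the scribble representation are assumptions for illustration, not the challenge API.

```python
import numpy as np

def refine(prediction, scribble_history):
    """Toy update rule: apply all accumulated scribble corrections.

    Hypothetical stand-in for an interaction-aware model; here a scribble
    is simply a (boolean mask, label) pair that overrides the prediction.
    """
    refined = prediction.copy()
    for scribble_mask, label in scribble_history:
        # label True = missed lesion (add), False = false positive (remove)
        refined[scribble_mask] = label
    return refined

def interactive_loop(initial_prediction, get_scribbles, n_steps=3):
    """Run n_steps of refinement at inference time; no retraining occurs.

    Each step conditions on ALL previously provided corrective input,
    as required by the challenge protocol.
    """
    prediction = initial_prediction
    history = []
    for _ in range(n_steps):
        history.append(get_scribbles(prediction))  # sparse corrective input
        prediction = refine(prediction, history)
    return prediction
```

Note that `get_scribbles` is the only place where the two evaluation regimes differ: in Category 1 it is an automatic error-based simulator, in Category 2 it is an expert reader.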
To balance reproducibility and clinical realism, all methods are evaluated under two complementary interaction regimes:
Category 1: Simulated Interaction
- Scribbles are generated automatically based on prediction errors
- Fully reproducible and standardized across submissions
- Fixed number of interaction steps per case
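One way to generate scribbles automatically from prediction errors is sketched below. This is an assumption-laden illustration: the official simulator's sampling strategy is not specified here, and real scribbles are strokes rather than single voxels.

```python
import numpy as np

def simulate_scribbles(prediction, ground_truth, max_points=5, rng=None):
    """Derive sparse corrective input from prediction errors.

    Illustrative only: samples up to max_points voxels from each error
    region (false positives and false negatives) and labels them.
    """
    rng = np.random.default_rng(rng)
    false_positives = prediction & ~ground_truth   # over-segmentation
    false_negatives = ground_truth & ~prediction   # missed lesions
    scribbles = []
    for error_mask, label in ((false_positives, 0), (false_negatives, 1)):
        coords = np.argwhere(error_mask)
        if len(coords):
            n = min(max_points, len(coords))
            picks = coords[rng.choice(len(coords), size=n, replace=False)]
            scribbles += [{"voxel": tuple(map(int, v)), "label": label}
                          for v in picks]
    return scribbles
```

Seeding the generator (as the `rng` parameter allows) is what makes this regime fully reproducible and standardized across submissions.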
Category 2: Clinician-Driven Interaction
- Scribbles are provided by expert readers
- Reflect realistic clinical correction behavior
- Variable number of interactions per case
Each submission is evaluated under both regimes and automatically participates in both award categories.
📥 Input
- CT image (MHA file)
- PET image (MHA file)
- Lesion click(s) in foreground and background (JSON file)
📤 Output
- Lesion segmentation mask (MHA file)
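Loading these inputs might look like the following sketch. The JSON schema shown (`"points"` entries with a `"point"` coordinate and a `"name"` of `"tumor"` or `"background"`) is an assumption; the authoritative schema is defined by the challenge platform. The MHA images would typically be read with SimpleITK (e.g. `sitk.ReadImage("pet.mha")`), omitted here to keep the example self-contained.

```python
import json

# Hypothetical clicks JSON -- field names are assumptions and may differ
# from the official challenge schema.
example_clicks = """
{
  "points": [
    {"point": [102, 188, 40], "name": "tumor"},
    {"point": [110, 190, 42], "name": "background"}
  ]
}
"""

def load_clicks(raw_json):
    """Split click coordinates into foreground (lesion) and background."""
    data = json.loads(raw_json)
    fg = [tuple(p["point"]) for p in data["points"] if p["name"] == "tumor"]
    bg = [tuple(p["point"]) for p in data["points"] if p["name"] == "background"]
    return fg, bg
```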