Release Note - Recently Published Apps - March 12, 2026

General

DeepVariant workflow is an analysis pipeline that uses a deep neural network to call genetic variants from next-generation sequencing (NGS) DNA data. It is a highly accurate variant caller for the ONT, PacBio, and multi-technology categories. It was updated from version 1.5.0 to 1.9.0.

Giraffe-DeepVariant workflow is a pipeline for calling small variants against a pangenome reference. The workflow starts with sequenced reads (FASTQ, CRAM) and produces small-variant calls (VCF). Reads are mapped to the pangenome with vg giraffe and pre-processed (e.g. indel realignment) before the variant-calling step, in which DeepVariant calls the small variants. It was updated from version 1.0 to 1.1.

Genotype GVCFs & Filter Variants workflow performs the main part of the GATK Joint Discovery analysis. It starts with GVCF files, performs joint genotyping, and returns filtered VCF files. All tools in the workflow were upgraded to GATK version 4.6.2.0.

nnUNet (no-new-U-Net) toolkit is a fully automated, self-configuring deep learning framework for biomedical image segmentation. It eliminates manual architecture engineering by automatically adapting itself to any new dataset. It analyzes dataset properties, configures all training components, and delivers strong baseline performance without human tuning.

Below is the list of tools included in this toolkit, arranged in an nnUNet-style workflow.

1. nnUNetv2 convert_MSD_dataset tool converts a Medical Segmentation Decathlon (MSD) dataset into the standardized nnUNet folder structure. It generates a valid dataset.json and restructures images and labels according to nnUNet conventions.
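As an illustration of the target format, the sketch below builds a minimal dataset.json of the kind nnUNetv2 expects after conversion. The field names follow the nnUNetv2 dataset format, but the modality, label names, and case count are made-up placeholders.

```python
import json

# Illustrative sketch of a minimal nnUNetv2 dataset.json; field names match
# the nnUNetv2 dataset format, values are placeholders for this example.
def build_dataset_json(channel_names, labels, num_training, file_ending=".nii.gz"):
    return {
        "channel_names": channel_names,  # maps channel index to modality, e.g. {"0": "T2"}
        "labels": labels,                # maps label name to integer value
        "numTraining": num_training,     # number of training cases
        "file_ending": file_ending,      # image file extension
    }

dataset_json = build_dataset_json(
    channel_names={"0": "T2"},
    labels={"background": 0, "prostate": 1},
    num_training=32,
)
serialized = json.dumps(dataset_json, indent=2)
```

The conversion tool writes this file at the root of the restructured dataset folder alongside the renamed images and labels.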

2. nnUNetv2 plan_and_preprocess tool analyzes dataset properties (voxel spacing, intensity statistics, modalities) and generates the experiment plan that determines patch size, architecture, and configuration. It performs preprocessing steps such as:

  • Resampling

  • Normalization

  • Cropping

  • Dataset caching
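As a toy illustration of one of these steps, the snippet below applies z-score intensity normalization to a handful of voxel values. The real preprocessing operates on full 3D volumes and chooses a normalization scheme per modality, so this only sketches the idea.

```python
from statistics import mean, pstdev

# Toy z-score intensity normalization on a flat list of voxel values;
# nnUNet applies the analogous operation to whole 3D volumes.
def zscore_normalize(voxels):
    mu = mean(voxels)
    sigma = pstdev(voxels) or 1.0  # guard against zero variance
    return [(v - mu) / sigma for v in voxels]

normalized = zscore_normalize([10.0, 12.0, 14.0, 16.0])
# After normalization the values have zero mean and unit variance.
```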

3. nnUNetv2 train tool initiates model training for a given configuration (e.g. 2d, 3d_fullres) and fold (0–4). The training includes data augmentation, optimization, checkpointing, and validation to produce trained model weights.
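The five folds can be pictured with a simple round-robin split, as sketched below; nnUNet itself generates and stores its own splits file, so this is only an illustration of the idea.

```python
# Toy sketch of partitioning cases into the five cross-validation folds
# (0-4) that training expects; nnUNet manages the real splits itself.
def five_fold_split(case_ids, fold):
    val = [c for i, c in enumerate(case_ids) if i % 5 == fold]
    train = [c for i, c in enumerate(case_ids) if i % 5 != fold]
    return train, val

cases = [f"case_{i:03d}" for i in range(10)]
train_set, val_set = five_fold_split(cases, fold=0)
# fold 0 validates on case_000 and case_005; the other eight cases train the model
```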

4. nnUNetv2 find_best_config tool aggregates cross-validation results and identifies the best-performing configuration and checkpoint. It is used to decide which model or ensemble should be deployed for test-time inference.
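A hypothetical illustration of the selection logic: pick the configuration with the highest mean cross-validation Dice. The scores below are invented, and the real tool also considers ensembles of configurations.

```python
# Invented per-fold cross-validation Dice scores for two configurations.
cv_dice = {
    "2d":         [0.85, 0.86, 0.84, 0.85, 0.87],
    "3d_fullres": [0.90, 0.91, 0.89, 0.90, 0.92],
}

# Select the configuration with the highest mean cross-validation Dice.
def best_configuration(scores):
    return max(scores, key=lambda cfg: sum(scores[cfg]) / len(scores[cfg]))

chosen = best_configuration(cv_dice)  # "3d_fullres" has the higher mean Dice
```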

5. nnUNetv2 predict tool runs inference on unseen images using the selected trained model and produces the following outputs:

  • Segmentation masks

  • Optional probability maps
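As a toy sketch of how probability maps become a segmentation mask, the snippet below takes a per-pixel argmax over two made-up class probability maps; real predictions are 3D volumes, flattened here to a short list of pixels for illustration.

```python
# Convert per-class probability maps into a segmentation mask by taking
# the most probable class at each pixel (argmax over classes).
def probabilities_to_mask(prob_maps):
    # prob_maps[c][p] = probability of class c at pixel p
    num_pixels = len(prob_maps[0])
    return [max(range(len(prob_maps)), key=lambda c: prob_maps[c][p])
            for p in range(num_pixels)]

background = [0.9, 0.2, 0.6]  # made-up background probabilities
foreground = [0.1, 0.8, 0.4]  # made-up foreground probabilities
mask = probabilities_to_mask([background, foreground])  # [0, 1, 0]
```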

6. nnUNetv2 determine_postprocessing tool evaluates several postprocessing heuristics (e.g. removing small connected components) on validation outputs. It automatically selects the rule set that improves performance metrics such as Dice and IoU.
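The "remove small connected components" heuristic can be sketched on a 2D binary mask as follows; the real tool evaluates such rules on 3D validation outputs, and the mask and size threshold here are illustrative.

```python
from collections import deque

# Keep only 4-connected components of a 2D binary mask that reach a
# minimum size; smaller components are treated as spurious and dropped.
def remove_small_components(mask, min_size):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # flood-fill one connected component with BFS
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:  # keep only sufficiently large components
                    for y, x in comp:
                        out[y][x] = 1
    return out

noisy = [[1, 1, 0, 0],
         [1, 1, 0, 1],   # the lone pixel at (1, 3) is a spurious component
         [0, 0, 0, 0]]
cleaned = remove_small_components(noisy, min_size=2)
```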

7. nnUNetv2 apply_postprocessing tool applies the chosen postprocessing operations to raw prediction outputs to refine segmentation masks and enhance overall accuracy.

8. nnUNetv2 ensemble tool combines predictions from multiple folds or configurations (e.g. 2D + 3D) by averaging their probability maps. Ensembling improves robustness and generally increases accuracy.
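A minimal sketch of the averaging step, with invented foreground probabilities from two hypothetical models:

```python
# Ensemble by averaging probability maps element-wise before thresholding.
def average_maps(*maps):
    return [sum(vals) / len(vals) for vals in zip(*maps)]

model_a_fg = [0.6, 0.4, 0.9]   # made-up foreground probabilities from a 2D model
model_b_fg = [0.8, 0.2, 0.7]   # made-up foreground probabilities from a 3D model
ensembled_fg = average_maps(model_a_fg, model_b_fg)  # ≈ [0.7, 0.3, 0.8]
mask = [1 if p > 0.5 else 0 for p in ensembled_fg]   # [1, 0, 1]
```

Averaging smooths out pixels where only one model is confidently wrong, which is why the ensemble is usually more robust than either member alone.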

9. nnUNetv2 evaluate_folder tool computes metrics (e.g. Dice, IoU, depending on configuration) by comparing a folder of predicted labels with ground-truth labels. It summarizes per-case and aggregate scores, enabling quick validation of model performance or comparison across checkpoints, folds, or ensembles.
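Dice and IoU for a single case can be sketched on flattened binary masks as follows; the real tool reads label files from folders and aggregates these scores across all cases.

```python
# Dice and IoU for one case, computed from flattened binary masks.
def dice_and_iou(pred, gt):
    tp = sum(p and g for p, g in zip(pred, gt))  # true-positive pixels
    pred_pos = sum(pred)
    gt_pos = sum(gt)
    union = pred_pos + gt_pos - tp
    dice = 2 * tp / (pred_pos + gt_pos) if pred_pos + gt_pos else 1.0
    iou = tp / union if union else 1.0
    return dice, iou

pred = [1, 1, 0, 1, 0]  # made-up predicted mask
gt   = [1, 0, 0, 1, 1]  # made-up ground-truth mask
dice, iou = dice_and_iou(pred, gt)  # dice = 4/6 ≈ 0.667, iou = 2/4 = 0.5
```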

Resources

If you need help with accessing controlled study details, please contact your Velsera Seven Bridges representative or email [email protected].