Status: Needs Review
This page has not been reviewed for accuracy and completeness. Content may be outdated or contain errors.
Preprocessing Nodes¶
Current preprocessing covers normalization, geometric transforms, ROI cropping, and synthetic occlusion helpers.
Normalization¶
normalization¶
Differentiable normalization nodes for BHWC hyperspectral data.
This module provides a collection of normalization nodes designed for hyperspectral imaging pipelines. All normalizers operate on BHWC format ([batch, height, width, channels]) and maintain gradient flow for end-to-end training.
Normalization strategies:
- MinMaxNormalizer: Scales data to [0, 1] range using min-max statistics
- ZScoreNormalizer: Standardizes data to zero mean and unit variance
- SigmoidNormalizer: Applies sigmoid transformation with median centering
- PerPixelUnitNorm: L2 normalization per pixel across channels
- IdentityNormalizer: No-op passthrough for testing or baseline comparisons
- SigmoidTransform: General-purpose sigmoid for logits→probabilities
Why Normalize?
Normalization is critical for stable anomaly detection and deep learning:
- Stable covariance estimation: RX detectors require well-conditioned covariance matrices
- Gradient stability: Prevents exploding/vanishing gradients during training
- Comparable scales: Ensures different spectral ranges contribute equally
- Faster convergence: Accelerates gradient-based optimization
BHWC Format Requirement
All normalizers expect BHWC input format. For HWC tensors, add batch dimension:
    hwc_tensor = torch.randn(256, 256, 61)  # [H, W, C]
    bhwc_tensor = hwc_tensor.unsqueeze(0)   # [1, H, W, C]
IdentityNormalizer¶
No-op passthrough normalizer for testing or baseline comparisons.
MinMaxNormalizer¶
Bases: _ScoreNormalizerBase
Min-max normalization per sample and channel (keeps gradients).
Scales data to [0, 1] range using (x - min) / (max - min) transformation. Can operate in two modes:
- Per-sample normalization (use_running_stats=False): min/max computed per batch
- Global normalization (use_running_stats=True): uses running statistics from statistical initialization
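The per-sample mode can be sketched in plain PyTorch. This is a minimal illustration of the (x - min) / (max - min) formula with statistics taken per sample and channel over the spatial dimensions; it is not the node's actual source:

```python
import torch

def minmax_per_sample(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # x: [B, H, W, C]; min/max per sample and channel over H, W
    mn = x.amin(dim=(1, 2), keepdim=True)  # [B, 1, 1, C]
    mx = x.amax(dim=(1, 2), keepdim=True)  # [B, 1, 1, C]
    return (x - mn) / (mx - mn + eps)
```

The `eps` in the denominator mirrors the node's `eps` parameter and keeps constant channels from producing a division by zero.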
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `eps` | `float` | Small constant for numerical stability; prevents division by zero. | `1e-06` |
| `use_running_stats` | `bool` | If True, use global min/max from `statistical_initialization()`. If False, compute min/max per batch during the forward pass. | `True` |
| `**kwargs` | `dict` | Additional arguments passed to the `Node` base class. | `{}` |
Attributes:

| Name | Type | Description |
|---|---|---|
| `running_min` | `Tensor` | Global minimum value computed during statistical initialization. |
| `running_max` | `Tensor` | Global maximum value computed during statistical initialization. |
Examples:
>>> from cuvis_ai.node.normalization import MinMaxNormalizer
>>> from cuvis_ai_core.training import StatisticalTrainer
>>> import torch
>>>
>>> # Mode 1: Global normalization with statistical initialization
>>> normalizer = MinMaxNormalizer(eps=1.0e-6, use_running_stats=True)
>>> stat_trainer = StatisticalTrainer(pipeline=pipeline, datamodule=datamodule)
>>> stat_trainer.fit() # Computes global min/max from training data
>>>
>>> # Inference uses global statistics
>>> output = normalizer.forward(data=hyperspectral_cube)
>>> normalized = output["normalized"] # [B, H, W, C], values in [0, 1]
>>>
>>> # Mode 2: Per-sample normalization (no initialization required)
>>> normalizer_local = MinMaxNormalizer(use_running_stats=False)
>>> output = normalizer_local.forward(data=hyperspectral_cube)
>>> # Each sample normalized independently using its own min/max
See Also
ZScoreNormalizer : Z-score standardization
SigmoidNormalizer : Sigmoid-based normalization
docs/tutorials/rx-statistical.md : RX pipeline with MinMaxNormalizer
Notes
Global normalization (use_running_stats=True) is recommended for RX detectors to ensure consistent scaling between training and inference. Per-sample normalization can be useful for real-time processing when training data is unavailable.
Source code in cuvis_ai/node/normalization.py
statistical_initialization¶
Compute global min/max from data iterator.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_stream` | `InputStream` | Iterator yielding dicts matching `INPUT_SPECS` (port-based format). Expected format: `{"data": tensor}`, where tensor is the scores/data. | required |
Source code in cuvis_ai/node/normalization.py
SigmoidNormalizer¶
Bases: _ScoreNormalizerBase
Median-centered sigmoid squashing per sample and channel.
Applies sigmoid transformation centered at the median with standard deviation scaling:
sigmoid((x - median) / std)
Produces values in [0, 1] range with median mapped to 0.5.
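The transformation can be sketched in plain PyTorch, assuming the median and standard deviation are computed per sample and channel over the spatial pixels (a sketch of the formula above, not the node's source):

```python
import torch

def sigmoid_normalize(x: torch.Tensor, std_floor: float = 1e-6) -> torch.Tensor:
    # x: [B, H, W, C]; median/std per sample and channel over all H*W pixels
    b, h, w, c = x.shape
    flat = x.reshape(b, h * w, c)
    median = flat.median(dim=1, keepdim=True).values          # [B, 1, C]
    std = flat.std(dim=1, keepdim=True).clamp_min(std_floor)  # floor avoids /0
    return torch.sigmoid((flat - median) / std).reshape(b, h, w, c)
```

Because sigmoid is strictly bounded, every output lands in (0, 1), with the median element mapping to exactly 0.5.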
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `std_floor` | `float` | Minimum standard deviation threshold to prevent division by zero. | `1e-06` |
| `**kwargs` | `dict` | Additional arguments passed to the `Node` base class. | `{}` |
Examples:
>>> from cuvis_ai.node.normalization import SigmoidNormalizer
>>> import torch
>>>
>>> # Create sigmoid normalizer
>>> normalizer = SigmoidNormalizer(std_floor=1.0e-6)
>>>
>>> # Apply to hyperspectral data
>>> data = torch.randn(4, 256, 256, 61) # [B, H, W, C]
>>> output = normalizer.forward(data=data)
>>> normalized = output["normalized"] # [4, 256, 256, 61], values in [0, 1]
See Also
MinMaxNormalizer : Min-max scaling to [0, 1]
ZScoreNormalizer : Z-score standardization
Notes
Sigmoid normalization is robust to outliers because extreme values are squashed asymptotically to 0 or 1. This makes it suitable for data with heavy-tailed distributions or sporadic anomalies.
Source code in cuvis_ai/node/normalization.py
ZScoreNormalizer¶
Bases: _ScoreNormalizerBase
Z-score (standardization) normalization along specified dimensions.
Computes: (x - mean) / (std + eps) along specified dims. Per-sample normalization with no statistical initialization required.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dims` | `list[int]` | Dimensions to compute statistics over; defaults to `[1, 2]` (H, W in BHWC format). | `None` |
| `eps` | `float` | Small constant for numerical stability. | `1e-06` |
| `keepdim` | `bool` | Whether to keep reduced dimensions. | `True` |
Examples:
>>> # Normalize over spatial dimensions (H, W)
>>> zscore = ZScoreNormalizer(dims=[1, 2])
>>>
>>> # Normalize over all spatial and channel dimensions
>>> zscore_all = ZScoreNormalizer(dims=[1, 2, 3])
Source code in cuvis_ai/node/normalization.py
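The computation reduces to a few lines of PyTorch (a sketch of the (x - mean) / (std + eps) formula above, not the node's source):

```python
import torch

def zscore(x: torch.Tensor, dims: tuple[int, ...] = (1, 2),
           eps: float = 1e-6) -> torch.Tensor:
    # Standardize along `dims`, keeping reduced dims so the result broadcasts
    mean = x.mean(dim=dims, keepdim=True)
    std = x.std(dim=dims, keepdim=True)
    return (x - mean) / (std + eps)
```

With the default `dims=(1, 2)` on a BHWC tensor, each sample and channel ends up with approximately zero mean and unit variance over its spatial extent.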
SigmoidTransform¶
Bases: Node
Applies sigmoid transformation to convert logits to probabilities [0,1].
General-purpose sigmoid node for converting raw scores/logits to probability space. Useful for visualization or downstream nodes that expect bounded [0,1] values.
Examples:
>>> sigmoid = SigmoidTransform()
>>> # Route logits to both loss (raw) and visualization (sigmoid)
>>> graph.connect(
... (rx.scores, loss_node.predictions), # Raw logits to loss
... (rx.scores, sigmoid.data), # Logits to sigmoid
... (sigmoid.transformed, viz.scores), # Probabilities to viz
... )
Source code in cuvis_ai/node/normalization.py
forward¶
Apply sigmoid transformation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `data` | `Tensor` | Input tensor. | required |

Returns:

| Type | Description |
|---|---|
| `dict[str, Tensor]` | Dictionary with "transformed" key containing the sigmoid output. |
Source code in cuvis_ai/node/normalization.py
PerPixelUnitNorm¶
Bases: _ScoreNormalizerBase
Per-pixel mean-centering and L2 normalization across channels.
Source code in cuvis_ai/node/normalization.py
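The one-line description above can be unpacked into a short PyTorch sketch. The epsilon floor is an assumption added here for numerical safety; the node's actual implementation may handle degenerate pixels differently:

```python
import torch

def per_pixel_unit_norm(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # x: [B, H, W, C]; center each pixel's spectrum, then L2-normalize it
    centered = x - x.mean(dim=-1, keepdim=True)
    return centered / centered.norm(dim=-1, keepdim=True).clamp_min(eps)
```

After this transform every pixel's spectral vector has (approximately) unit L2 norm, which removes per-pixel brightness variation while preserving spectral shape.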
Preprocessors¶
preprocessors¶
Preprocessing Nodes.
This module provides nodes for preprocessing hyperspectral data, including wavelength-based band selection and filtering. These nodes help reduce dimensionality and focus analysis on specific spectral regions of interest.
See Also
cuvis_ai.node.channel_selector : Advanced channel selection methods
cuvis_ai.node.normalization : Normalization and standardization nodes
BandpassByWavelength¶
Bases: Node
Select channels by wavelength interval from BHWC tensors.
This node filters hyperspectral data by keeping only channels within a specified wavelength range. Wavelengths must be provided via the input port.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `min_wavelength_nm` | `float` | Minimum wavelength (inclusive) to keep, in nanometers. | required |
| `max_wavelength_nm` | `float \| None` | Maximum wavelength (inclusive) to keep. If None, no upper bound is applied. | `None` |
Examples:
>>> # Create bandpass node
>>> bandpass = BandpassByWavelength(
... min_wavelength_nm=500.0,
... max_wavelength_nm=700.0,
... )
>>> # Filter cube in BHWC format with wavelengths from input port
>>> wavelengths_tensor = torch.from_numpy(wavelengths).float()
>>> filtered = bandpass.forward(data=cube_bhwc, wavelengths=wavelengths_tensor)["filtered"]
>>>
>>> # For single HWC images, add a batch dimension first:
>>> # filtered = bandpass.forward(data=cube_hwc.unsqueeze(0), wavelengths=wavelengths_tensor)["filtered"]
>>>
>>> # Use with wavelengths from upstream node
>>> pipeline.connect(
... (data_node.outputs.cube, bandpass.data),
... (data_node.outputs.wavelengths, bandpass.wavelengths),
... )
Source code in cuvis_ai/node/preprocessors.py
forward¶
Filter cube by wavelength range.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `data` | `Tensor` | Input hyperspectral cube [B, H, W, C]. | required |
| `wavelengths` | `Tensor` | Wavelengths tensor [C] in nanometers. | required |
| `**kwargs` | `Any` | Additional keyword arguments (unused). | `{}` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Tensor]` | Dictionary with "filtered" key containing the filtered cube [B, H, W, C_filtered]. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If no channels are selected by the provided wavelength range. |
Source code in cuvis_ai/node/preprocessors.py
SpatialRotateNode¶
Bases: Node
Rotate spatial dimensions of cubes, masks, and RGB images.
Applies a fixed rotation (90, -90, or 180 degrees) to the H and W dimensions of all provided inputs. Wavelengths pass through unchanged.
Place immediately after a data node so all downstream consumers see correctly oriented data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `rotation` | `int \| None` | Rotation in degrees. Supported: 90, -90, 180 (and aliases 270, -270, -180). None or 0 means passthrough. | `None` |
Source code in cuvis_ai/node/preprocessors.py
forward¶
Apply the configured rotation to the cube, mask, and rgb_image tensors.
Source code in cuvis_ai/node/preprocessors.py
BBoxRoiCropNode¶
Bases: Node
Differentiable bbox cropping via torchvision roi_align.
Accepts BHWC images and xyxy bboxes, outputs NCHW crops resized to a
fixed output_size. Padding rows (all coords <= 0) are filtered out,
so the output N equals the number of valid detections.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `output_size` | `tuple[int, int]` | Target crop size. | `(256, 128)` |
| `aligned` | `bool` | Use sub-pixel aligned `roi_align` (recommended). | `True` |
Source code in cuvis_ai/node/preprocessors.py
forward¶
Crop and resize bounding-box regions from images.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `images` | `Tensor` | Input images in BHWC format. | required |
| `bboxes` | `Tensor` | Bounding boxes in xyxy format. | required |

Returns:

| Type | Description |
|---|---|
| `dict` | Dictionary containing the NCHW crops resized to `output_size`. |
Source code in cuvis_ai/node/preprocessors.py
ChannelNormalizeNode¶
Bases: Node
Per-channel mean/std normalization for NCHW tensors.
Defaults to ImageNet statistics but accepts any per-channel values.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `mean` | `tuple[float, ...]` | Per-channel mean. | `IMAGENET_MEAN` |
| `std` | `tuple[float, ...]` | Per-channel std. | `IMAGENET_STD` |
Source code in cuvis_ai/node/preprocessors.py
forward¶
Normalize images per channel.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `images` | `Tensor` | Input NCHW images. | required |

Returns:

| Type | Description |
|---|---|
| `dict` | Dictionary containing the normalized images. |
Source code in cuvis_ai/node/preprocessors.py
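The operation is the usual broadcasted (x - mean) / std over the channel axis. A minimal sketch, using the standard ImageNet constants (the module's `IMAGENET_MEAN`/`IMAGENET_STD` are assumed to hold the same values):

```python
import torch

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def channel_normalize(images: torch.Tensor,
                      mean=IMAGENET_MEAN, std=IMAGENET_STD) -> torch.Tensor:
    # images: [N, C, H, W]; reshape statistics to broadcast per channel
    m = torch.tensor(mean, dtype=images.dtype).view(1, -1, 1, 1)
    s = torch.tensor(std, dtype=images.dtype).view(1, -1, 1, 1)
    return (images - m) / s
```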
Occlusion Nodes¶
occlusion¶
Synthetic occlusion nodes for tracking evaluation (pure PyTorch).
OcclusionNodeBase¶
OcclusionNodeBase(
tracking_json_path,
track_ids,
occlusion_start_frame,
occlusion_end_frame,
**kwargs,
)
Bases: Node, ABC
Base class for synthetic occlusion from tracking masks.
Source code in cuvis_ai/node/occlusion.py
forward¶
Conditionally occlude an RGB batch using tracking-derived masks.
Source code in cuvis_ai/node/occlusion.py
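At its core, conditional occlusion replaces masked pixels in frames that fall inside the occlusion window. The sketch below shows the simplest solid-fill variant; the Poisson blending performed by the subclasses is considerably more involved, and the function name and signature here are illustrative only:

```python
import torch

def occlude_rgb(frames: torch.Tensor, masks: torch.Tensor,
                fill=(0.0, 0.0, 0.0)) -> torch.Tensor:
    # frames: [B, H, W, 3]; masks: [B, H, W] bool, True where occluded
    fill_t = torch.tensor(fill, dtype=frames.dtype)
    # Broadcast the fill color over all masked pixels
    return torch.where(masks.unsqueeze(-1), fill_t, frames)
```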
PoissonOcclusionNode¶
PoissonOcclusionNode(
tracking_json_path,
track_ids,
occlusion_start_frame,
occlusion_end_frame,
fill_color="poisson",
*,
input_key=None,
max_iter=1000,
tol=1e-06,
occlusion_shape="bbox",
bbox_mode="static",
static_bbox_scale=1.2,
static_bbox_padding_px=0,
static_full_width_x=False,
**kwargs,
)
Bases: OcclusionNodeBase
Pure-PyTorch occlusion node for either RGB frames or hyperspectral cubes.
Source code in cuvis_ai/node/occlusion.py
forward¶
Occlude either the provided RGB batch or cube batch for the current frame.
Source code in cuvis_ai/node/occlusion.py
SolidOcclusionNode¶
SolidOcclusionNode(
tracking_json_path,
track_ids,
occlusion_start_frame,
occlusion_end_frame,
fill_color="poisson",
*,
input_key=None,
max_iter=1000,
tol=1e-06,
occlusion_shape="bbox",
bbox_mode="static",
static_bbox_scale=1.2,
static_bbox_padding_px=0,
static_full_width_x=False,
**kwargs,
)
Bases: PoissonOcclusionNode
Deprecated alias of PoissonOcclusionNode.
Source code in cuvis_ai/node/occlusion.py
PoissonCubeOcclusionNode¶
PoissonCubeOcclusionNode(
tracking_json_path,
track_ids,
occlusion_start_frame,
occlusion_end_frame,
fill_color="poisson",
*,
input_key=None,
max_iter=1000,
tol=1e-06,
occlusion_shape="bbox",
bbox_mode="static",
static_bbox_scale=1.2,
static_bbox_padding_px=0,
static_full_width_x=False,
**kwargs,
)
Bases: PoissonOcclusionNode
Deprecated alias of PoissonOcclusionNode with cube-only ports.
Source code in cuvis_ai/node/occlusion.py
forward¶
Apply cube-only occlusion using the parent implementation.