Status: Needs Review
This page has not been reviewed for accuracy and completeness. Content may be outdated or contain errors.
Data Nodes¶
This category covers source nodes, readers, and frame iterators.
Included Surface¶
- cuvis_ai.node.data
- cuvis_ai.node.json_file
- cuvis_ai.node.numpy_reader
- cuvis_ai.node.video
Data Loaders¶
data
¶
Data preparation nodes for CU3S hyperspectral pipelines.
CU3SDataNode
¶
Bases: Node
General-purpose data node for CU3S hyperspectral sequences.
This node normalizes common CU3S batch inputs for pipelines:
- converts `cube` from uint16 to float32
- passes an optional `mask` through unchanged
- extracts 1D `wavelengths` from batched input
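The normalization steps above can be sketched as follows. This is a minimal illustration, not the node's implementation: plain NumPy arrays stand in for tensors, and the helper name `normalize_cu3s_batch` is hypothetical.

```python
import numpy as np

def normalize_cu3s_batch(batch):
    """Illustrative sketch of the CU3SDataNode normalization steps."""
    out = dict(batch)
    # cube: uint16 -> float32
    out["cube"] = batch["cube"].astype(np.float32)
    # wavelengths: reduce batched [B, C] input to a 1D [C] vector
    if "wavelengths" in batch:
        wl = np.asarray(batch["wavelengths"])
        out["wavelengths"] = wl[0] if wl.ndim > 1 else wl
    # mask (if present) is carried through unchanged by the dict copy
    return out

batch = {
    "cube": np.zeros((1, 4, 8, 8), dtype=np.uint16),
    "mask": np.ones((1, 8, 8), dtype=np.int64),
    "wavelengths": np.tile(np.linspace(450, 900, 4), (1, 1)),
}
norm = normalize_cu3s_batch(batch)
```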
forward
¶
Normalize CU3S batch data for pipeline consumption.
Source code in cuvis_ai/node/data.py
LentilsAnomalyDataNode
¶
Bases: CU3SDataNode
Lentils-specific CU3S data node with binary anomaly label mapping.
Inherits shared CU3S normalization (cube + wavelengths) and additionally maps multi-class masks to binary anomaly masks.
Source code in cuvis_ai/node/data.py
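The multi-class-to-binary mapping can be sketched like this. The choice of class 0 as "normal" is an assumption for illustration; the actual label mapping lives in the node's source.

```python
import numpy as np

# Hypothetical convention: class 0 is normal, every other class is an anomaly.
def to_binary_anomaly_mask(mask, normal_classes=(0,)):
    normal = np.isin(mask, normal_classes)
    return (~normal).astype(np.int64)

multi = np.array([[0, 1], [2, 0]])
binary_mask = to_binary_anomaly_mask(multi)
```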
forward
¶
Apply CU3S normalization and optional Lentils binary mask mapping.
Source code in cuvis_ai/node/data.py
Tracking And Detection JSON Readers¶
DetectionJsonReader
¶
Bases: Node
Read COCO detection JSON and emit tensors per frame.
Outputs per call:
- frame_id: int64 [1]
- bboxes: float32 [1, N, 4] (xyxy)
- category_ids: int64 [1, N]
- confidences: float32 [1, N]
- orig_hw: int64 [1, 2]
Source code in cuvis_ai/node/json_file.py
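The per-frame grouping and box conversion implied by this output contract can be sketched as follows. This is an assumption-laden sketch, not the reader's code: NumPy arrays stand in for tensors, the inline `coco` dict is illustrative, and `frame_tensors` is a hypothetical helper. Note that COCO stores boxes as xywh, so emitting xyxy requires a conversion.

```python
import numpy as np
from collections import defaultdict

coco = {
    "images": [{"id": 0, "height": 480, "width": 640}],
    "annotations": [
        {"image_id": 0, "bbox": [10, 20, 30, 40], "category_id": 1, "score": 0.9},
    ],
}

# Group annotations by image so each call can emit one frame's detections.
by_frame = defaultdict(list)
for ann in coco["annotations"]:
    by_frame[ann["image_id"]].append(ann)

def frame_tensors(frame_id):
    anns = by_frame[frame_id]
    xywh = np.array([a["bbox"] for a in anns], dtype=np.float32)
    # COCO xywh -> xyxy: bottom-right corner is top-left plus width/height
    xyxy = np.concatenate([xywh[:, :2], xywh[:, :2] + xywh[:, 2:]], axis=1)
    img = coco["images"][frame_id]
    return {
        "frame_id": np.array([frame_id], dtype=np.int64),
        "bboxes": xyxy[None],  # [1, N, 4]
        "category_ids": np.array([[a["category_id"] for a in anns]], dtype=np.int64),
        "confidences": np.array([[a.get("score", 1.0) for a in anns]], dtype=np.float32),
        "orig_hw": np.array([[img["height"], img["width"]]], dtype=np.int64),
    }
```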
reset
¶
forward
¶
Emit detections for the next frame in the detection JSON stream.
Source code in cuvis_ai/node/json_file.py
TrackingResultsReader
¶
Bases: Node
Read tracking results JSON (bbox or mask format) and emit per-frame tensors.
Supports two JSON formats:
- COCO bbox tracking — `images` + `annotations` with `bbox` and `track_id` fields. Emits `bboxes`, `category_ids`, `confidences`, `track_ids`.
- Video COCO — `videos` + `annotations` with a `segmentations` list of RLE dicts. Emits a `mask` label map and `object_ids`.
Optional outputs are None when the format doesn't provide them.
Frame synchronization: When the optional frame_id input is connected
(e.g. from CU3SDataNode.mesu_index), the reader looks up detections for
that specific frame instead of cursor-advancing. This guarantees that the
emitted bboxes/masks correspond to the same frame as the cube data. When
frame_id is not connected, the reader uses the internal cursor (legacy
behavior).
Source code in cuvis_ai/node/json_file.py
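The two selection modes described above (explicit frame lookup when `frame_id` is connected, internal cursor advance otherwise) can be sketched as follows. The class and names are illustrative, not the reader's API.

```python
class FrameCursor:
    """Sketch of explicit-lookup vs. cursor-advance frame selection."""

    def __init__(self, frames):
        self.frames = frames  # dict: frame_id -> detections for that frame
        self.cursor = 0

    def next(self, frame_id=None):
        if frame_id is not None:
            # Synchronized mode: fetch exactly the requested frame,
            # leaving the cursor untouched.
            return self.frames.get(frame_id, [])
        # Legacy mode: emit the frame at the cursor, then advance.
        frame = self.frames.get(self.cursor, [])
        self.cursor += 1
        return frame
```

In synchronized mode the caller (e.g. wiring `CU3SDataNode.mesu_index` into `frame_id`) guarantees detections and cube data refer to the same frame, even if upstream nodes skip or repeat frames.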
reset
¶
forward
¶
Emit tracking tensors for an explicit frame or the next cursor frame.
Source code in cuvis_ai/node/json_file.py
NumPy Readers¶
numpy_reader
¶
Numpy-backed constant source node.
NpyReader
¶
Bases: Node
Load a .npy file once and return the same tensor every forward call.
Source code in cuvis_ai/node/numpy_reader.py
forward
¶
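The load-once behaviour can be sketched like this: the file is read from disk on the first call and the cached array is returned on every call after that. The class name is illustrative, not `NpyReader` itself.

```python
import os
import tempfile
import numpy as np

class ConstantNpySource:
    """Sketch of a load-once .npy source (not the actual NpyReader)."""

    def __init__(self, path):
        self.path = path
        self._data = None

    def forward(self):
        if self._data is None:
            self._data = np.load(self.path)  # single disk read, then cached
        return self._data

path = os.path.join(tempfile.mkdtemp(), "const.npy")
np.save(path, np.arange(3))
src = ConstantNpySource(path)
a, b = src.forward(), src.forward()
```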
Video Frame Sources¶
video
¶
Video utilities: frame iteration, datasets, Lightning DataModule, and export nodes.
ToVideoNode
¶
ToVideoNode(
output_video_path,
frame_rate=10.0,
frame_rotation=None,
codec="mp4v",
overlay_title=None,
**kwargs,
)
Bases: Node
Write incoming RGB frames directly to a video file.
This node opens a single OpenCV VideoWriter and appends frames on each
forward call. It is intended for streaming pipelines where frames arrive
incrementally.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `output_video_path` | `str` | Output path for the generated video file. | *required* |
| `frame_rate` | `float` | Video frame rate in frames per second. Must be positive. | `10.0` |
| `frame_rotation` | `int \| None` | Optional frame rotation in degrees. | `None` |
| `codec` | `str` | FourCC codec string (length 4). | `'mp4v'` |
| `overlay_title` | `str \| None` | Optional static title rendered at the top center with its own slim darkened background block. | `None` |
Source code in cuvis_ai/node/video.py
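The constraints in the parameter table can be sketched as simple validation. This is an illustrative helper, not the node's code; in particular, the set of supported rotation values is an assumption, since the original table truncates the list.

```python
# Assumed set of supported rotations; the source table does not state them.
SUPPORTED_ROTATIONS = {None, 90, 180, 270}

def validate_video_params(frame_rate, codec, frame_rotation):
    """Hypothetical pre-flight checks mirroring the documented constraints."""
    if frame_rate <= 0:
        raise ValueError("frame_rate must be positive")
    if len(codec) != 4:
        raise ValueError("codec must be a 4-character FourCC string")
    if frame_rotation not in SUPPORTED_ROTATIONS:
        raise ValueError(f"unsupported rotation: {frame_rotation}")
```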
forward
¶
Append incoming RGB frames to the configured video file.
Source code in cuvis_ai/node/video.py
VideoFrameNode
¶
Bases: Node
Passthrough source node that receives RGB frames from the batch.
forward
¶
Pass through RGB frames and optional frame IDs from the batch.