Status: Needs Review
This page has not been reviewed for accuracy and completeness. Content may be outdated or contain errors.
Ports API Reference¶
Complete API reference for the Typed I/O port system in CUVIS.AI.
Overview¶
The port system provides typed input/output interfaces for all nodes, enabling type-safe connections, runtime validation, and flexible pipeline construction. Each node defines its input and output ports using PortSpec objects.
Core Components¶
PortSpec¶
The PortSpec class defines the specification for a port, including its type, shape constraints, and metadata.
Attributes:
- name: Port identifier
- port_type: "input" or "output"
- shape: Expected tensor shape with dimension constraints
- dtype: Expected data type (optional)
- description: Human-readable description
- stage: Execution stage ("train", "eval", "both")
Example:
```python
from cuvis_ai_schemas.pipeline import PortSpec

# Define an input port for hyperspectral data
data_port = PortSpec(
    name="data",
    port_type="input",
    shape=(-1, -1, -1, -1),  # (batch, height, width, channels)
    description="Raw hyperspectral cube input",
)

# Define an output port for normalized data
normalized_port = PortSpec(
    name="normalized",
    port_type="output",
    shape=(-1, -1, -1, -1),
    description="Normalized hyperspectral cube",
)
```
InputPort / OutputPort¶
Port instances that are attached to nodes and used for connections.
Creating Ports:
```python
from cuvis_ai_schemas.pipeline import InputPort, OutputPort

# Create port instances bound to an existing node
# (per the API reference below, ports take the owning node, a name, and a spec)
input_port = InputPort(node=normalizer, name="data", spec=data_port)
output_port = OutputPort(node=normalizer, name="normalized", spec=normalized_port)
```
Port Compatibility Rules¶
Ports can be connected if they satisfy compatibility rules:
Shape Compatibility¶
- Fixed dimensions must match exactly
- Variable dimensions (-1) can match any size
- Batch dimensions are typically variable
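The shape rules above can be sketched as a small standalone checker. This is a simplified model of the compatibility logic, not the library's actual implementation: fixed dimensions must be equal, while `-1` acts as a wildcard.

```python
def shapes_compatible(out_shape, in_shape):
    """Simplified port shape check: -1 matches anything, fixed dims must match."""
    if len(out_shape) != len(in_shape):
        return False  # ranks must agree
    for a, b in zip(out_shape, in_shape):
        if a != -1 and b != -1 and a != b:
            return False  # two fixed dimensions that disagree
    return True

# Variable batch/height/width dims, matching fixed channel counts
print(shapes_compatible((-1, -1, -1, 100), (-1, -1, -1, 100)))  # True
# Fixed channel dims that disagree are rejected
print(shapes_compatible((-1, -1, -1, 100), (-1, -1, -1, 3)))    # False
```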
Type Compatibility¶
- Input ports can only connect to output ports
- Ports must have compatible data types
- Stage constraints must be satisfied
Connection Validation¶
```python
# is_compatible_with lives on PortSpec and returns (is_compatible, error_message)
compatible, reason = output_port.spec.is_compatible_with(
    input_port.spec,
    source_node=output_port.node,
    target_node=input_port.node,
)
if compatible:
    pipeline.connect(output_port, input_port)
else:
    print(f"Ports are incompatible: {reason}")
```
Node Port Declarations¶
Nodes declare their ports using INPUT_SPECS and OUTPUT_SPECS class attributes.
Example Node Implementation¶
```python
import torch

from cuvis_ai_core.node.node import Node
from cuvis_ai_schemas.pipeline import PortSpec


class MinMaxNormalizer(Node):
    """Min-max normalization node."""

    # Input port specifications
    INPUT_SPECS = [
        PortSpec(
            name="data",
            port_type="input",
            shape=(-1, -1, -1, -1),
            description="Raw hyperspectral cube",
        )
    ]

    # Output port specifications
    OUTPUT_SPECS = [
        PortSpec(
            name="normalized",
            port_type="output",
            shape=(-1, -1, -1, -1),
            description="Normalized cube [0, 1]",
        )
    ]

    def __init__(self, eps=1e-6, use_running_stats=True):
        super().__init__()
        self.eps = eps
        self.use_running_stats = use_running_stats
        # Placeholder statistics; in practice these are updated during training
        self.running_min = torch.zeros(1)
        self.running_max = torch.ones(1)

    def forward(self, **inputs):
        data = inputs["data"]
        # Scale into [0, 1] using the tracked running statistics
        normalized = (data - self.running_min) / (
            self.running_max - self.running_min + self.eps
        )
        return {"normalized": normalized}
```
Port-Based Connections¶
Basic Connection¶
A basic connection wires a single output port to a single input port: `pipeline.connect(normalizer.normalized, selector.data)`.
Multiple Connections¶
```python
# Fan-in multiple outputs to a single input (e.g., monitoring artifacts)
pipeline.connect(
    (viz_mask.artifacts, tensorboard_node.artifacts),
    (viz_rgb.artifacts, tensorboard_node.artifacts),
)
```
Stage-Aware Connections¶
```python
# Connect nodes for specific execution stages
pipeline.connect(normalizer.normalized, selector.data, stage="train")
pipeline.connect(selector.selected, pca.features, stage="both")
```
Loss Nodes Without an Aggregator¶
LossAggregator has been removed; the trainer now collects individual loss nodes directly. Register every loss/regularizer node with the GradientTrainer (or any custom trainer) and feed their inputs through standard port connections, as shown in examples/03_channel_selector.py.
```python
pipeline.connect(
    (logit_head.logits, bce_loss.predictions),
    (data_node.mask, bce_loss.targets),
    (selector.weights, entropy_loss.weights),
    (selector.weights, diversity_loss.weights),
)

grad_trainer = GradientTrainer(
    pipeline=pipeline,
    datamodule=datamodule,
    loss_nodes=[bce_loss, entropy_loss, diversity_loss],
    metric_nodes=[metrics_anomaly],
    trainer_config=training_cfg.trainer,
    optimizer_config=training_cfg.optimizer,
)
```
Batch Distribution¶
The port system enables explicit batch distribution to specific input ports.
Single Input¶
```python
# Distribute the batch to a specific input port
outputs = pipeline.forward(batch={f"{normalizer.id}.data": input_data})
```
Multiple Inputs¶
```python
# Distribute different data to different input ports
outputs = pipeline.forward(batch={
    f"{node1.id}.data1": data1,
    f"{node2.id}.data2": data2,
    f"{node3.id}.features": features,
})
```
Batch Key Format¶
Batch keys follow the pattern `{node_id}.{port_name}`.
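Composing and decomposing these keys is straightforward string handling. The sketch below assumes node IDs contain no further meaning in their dots, so the split happens at the last dot; this is an illustration of the key pattern, not the library's own parser.

```python
def make_batch_key(node_id, port_name):
    """Compose a batch key following the {node_id}.{port_name} pattern."""
    return f"{node_id}.{port_name}"

def split_batch_key(key):
    """Split a batch key back into (node_id, port_name) at the last dot."""
    node_id, _, port_name = key.rpartition(".")
    return node_id, port_name

key = make_batch_key("normalizer_0", "data")
print(key)                   # normalizer_0.data
print(split_batch_key(key))  # ('normalizer_0', 'data')
```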
Dimension Resolution¶
The port system automatically resolves variable dimensions during execution.
Dynamic Shape Resolution¶
```python
# Port with variable dimensions
port_spec = PortSpec(
    name="features",
    port_type="input",
    shape=(-1, -1, -1, -1),  # all dimensions variable
)

# During execution, dimensions are resolved from the input data:
# input shape (32, 64, 64, 100) → output shape (32, 64, 64, n_components)
```
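The API reference further down documents a `DimensionResolver` that also supports symbolic (string) dimensions looked up on the node. The function below is an illustrative stand-in for that behavior, not the real implementation: `-1` takes the runtime size, a string reads a node attribute, and a fixed integer must match.

```python
def resolve_shape(spec_shape, runtime_shape, node=None):
    """Toy resolver for flexible (-1), symbolic (str), and fixed (int) dims."""
    resolved = []
    for spec_dim, actual in zip(spec_shape, runtime_shape):
        if spec_dim == -1:
            resolved.append(actual)                   # variable: take runtime size
        elif isinstance(spec_dim, str):
            resolved.append(getattr(node, spec_dim))  # symbolic: read node attribute
        else:
            if spec_dim != actual:
                raise ValueError(f"dim mismatch: {spec_dim} != {actual}")
            resolved.append(spec_dim)                 # fixed: must match exactly
    return tuple(resolved)

class FakePCA:  # hypothetical node exposing a symbolic dimension
    n_components = 8

print(resolve_shape((-1, -1, -1, "n_components"), (32, 64, 64, 8), FakePCA()))
# (32, 64, 64, 8)
```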
Constraint Validation¶
```python
# Port with a fixed channel dimension
port_spec = PortSpec(
    name="features",
    port_type="input",
    shape=(-1, -1, -1, 100),  # fixed channel dimension
)

# The connection will fail if the channel dimension doesn't match
```
Common Port Patterns¶
Normalization Nodes¶
Input Ports:
- data: Raw hyperspectral cube
Output Ports:
- normalized: Normalized data
Feature Extraction¶
Input Ports:
- features: Input features for transformation
Output Ports:
- projected: Transformed features
- explained_variance: Statistical metrics
Anomaly Detection¶
Input Ports:
- data: Features for anomaly scoring
Output Ports:
- scores: Anomaly detection scores
- logits: Logit-transformed scores
Loss Nodes¶
Input Ports:
- Variadic inputs for loss computation
Output Ports:
- loss: Computed loss value
Error Handling¶
Port Not Found¶
```python
try:
    pipeline.connect(normalizer.nonexistent, selector.data)
except AttributeError as e:
    print(f"Port error: {e}")
    # Error: 'MinMaxNormalizer' object has no attribute 'nonexistent'
```
Incompatible Ports¶
```python
try:
    pipeline.connect(normalizer.normalized, pca.features)
except ValueError as e:
    print(f"Compatibility error: {e}")
    # Error: Port shapes are incompatible: (-1, -1, -1, 100) vs (-1, -1, -1, 3)
```
Missing Batch Distribution¶
```python
try:
    outputs = pipeline.forward(batch=input_data)
except KeyError as e:
    print(f"Batch error: {e}")
    # Error: Unable to find input port for batch key
```
Advanced Usage¶
Custom Port Specifications¶
```python
import torch

# Create a custom port with specific constraints
custom_port = PortSpec(
    name="embedding",
    port_type="output",
    shape=(-1, 512),  # fixed embedding dimension
    dtype=torch.float32,
    description="Feature embeddings",
)
```
Port Inspection¶
```python
# Inspect port properties
port = normalizer.normalized
print(f"Port name: {port.name}")
print(f"Port type: {port.port_type}")
print(f"Expected shape: {port.shape}")
print(f"Description: {port.description}")
```
Connection Graph¶
```python
# Get all connections in the pipeline
connections = pipeline.get_connections()
for source, target in connections:
    print(f"{source.node.name}.{source.name} → {target.node.name}.{target.name}")
```
Best Practices¶
- Use Descriptive Port Names: Choose names that clearly indicate the port's purpose
- Define Shape Constraints: Use fixed dimensions when possible for early error detection
- Document Ports: Provide clear descriptions for each port
- Test Port Compatibility: Validate connections during development
- Use Stage Filtering: Leverage stage-aware execution for performance
API Reference¶
pipeline¶
Pipeline structure schemas.
ConnectionConfig¶
Bases: _BaseConfig
Connection between two nodes.
Attributes:

| Name | Type | Description |
|---|---|---|
| `from_node` | `str` | Source node ID |
| `from_port` | `str` | Source port name |
| `to_node` | `str` | Target node ID |
| `to_port` | `str` | Target port name |
NodeConfig¶
Bases: _BaseConfig
Node configuration within a pipeline.
Attributes:

| Name | Type | Description |
|---|---|---|
| `id` | `str` | Unique node identifier |
| `class_name` | `str` | Fully-qualified class name (e.g., `'my_package.MyNode'`). Alias: `'class'` for backward compatibility |
| `params` | `dict[str, Any]` | Node parameters/hyperparameters. Alias: `'hparams'` for backward compatibility |
PipelineConfig¶
Bases: _BaseConfig
Pipeline structure configuration.
Attributes:

| Name | Type | Description |
|---|---|---|
| `name` | `str` | Pipeline name |
| `nodes` | `list[NodeConfig] \| list[dict[str, Any]]` | Node definitions (can be NodeConfig or dict for flexibility) |
| `connections` | `list[ConnectionConfig] \| list[dict[str, Any]]` | Node connections (can be ConnectionConfig or dict for flexibility) |
| `frozen_nodes` | `list[str]` | Node IDs to keep frozen during training |
| `metadata` | `PipelineMetadata \| None` | Optional pipeline metadata |
to_proto¶
Convert to proto message.
Requires cuvis-ai-schemas[proto] to be installed.
Returns:

| Type | Description |
|---|---|
| `PipelineConfig` | Proto message representation |
Source code in .venv/lib/python3.11/site-packages/cuvis_ai_schemas/pipeline/config.py
from_proto classmethod¶
Load from proto message.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `proto_config` | `PipelineConfig` | Proto message to deserialize | required |

Returns:

| Type | Description |
|---|---|
| `PipelineConfig` | Loaded configuration |
Source code in .venv/lib/python3.11/site-packages/cuvis_ai_schemas/pipeline/config.py
to_json¶
from_json classmethod¶
to_dict¶
from_dict classmethod¶
save_to_file¶
Save pipeline configuration to YAML file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str \| Path` | Output file path | required |
Source code in .venv/lib/python3.11/site-packages/cuvis_ai_schemas/pipeline/config.py
load_from_file classmethod¶
Load pipeline configuration from YAML file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str \| Path` | Input file path | required |

Returns:

| Type | Description |
|---|---|
| `PipelineConfig` | Loaded configuration |
Source code in .venv/lib/python3.11/site-packages/cuvis_ai_schemas/pipeline/config.py
PipelineMetadata¶
Bases: _BaseConfig
Pipeline metadata for documentation and discovery.
Attributes:

| Name | Type | Description |
|---|---|---|
| `name` | `str` | Pipeline name |
| `description` | `str` | Human-readable description |
| `created` | `str` | Creation timestamp (ISO format) |
| `tags` | `list[str]` | Tags for categorization and search |
| `author` | `str` | Author name or email |
| `cuvis_ai_version` | `str` | Version of cuvis-ai-schemas used |
to_dict¶
from_dict classmethod¶
to_proto¶
Convert to proto message.
Requires cuvis-ai-schemas[proto] to be installed.
Returns:

| Type | Description |
|---|---|
| `PipelineMetadata` | Proto message representation |
Source code in .venv/lib/python3.11/site-packages/cuvis_ai_schemas/pipeline/config.py
PortCompatibilityError¶
Bases: Exception
Raised when attempting to connect incompatible ports.
DimensionResolver¶
Utility class for resolving symbolic dimensions in port shapes.
resolve staticmethod¶
Resolve symbolic dimensions to concrete values.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `shape` | `tuple[int \| str, ...]` | Shape specification with flexible (-1), fixed (int), or symbolic (str) dims. | required |
| `node` | `Any \| None` | Node instance to resolve symbolic dimensions from. | required |

Returns:

| Type | Description |
|---|---|
| `tuple[int, ...]` | Resolved shape with concrete integer values. |

Raises:

| Type | Description |
|---|---|
| `AttributeError` | If symbolic dimension references non-existent node attribute. |
Source code in .venv/lib/python3.11/site-packages/cuvis_ai_schemas/pipeline/ports.py
InputPort¶
Proxy object representing a node's input port.
Initialize an input port proxy.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `node` | `Any` | The node instance that owns this port. | required |
| `name` | `str` | The name of the port on the node. | required |
| `spec` | `PortSpec` | The port specification defining type and shape constraints. | required |
Source code in .venv/lib/python3.11/site-packages/cuvis_ai_schemas/pipeline/ports.py
OutputPort¶
Proxy object representing a node's output port.
Initialize an output port proxy.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `node` | `Any` | The node instance that owns this port. | required |
| `name` | `str` | The name of the port on the node. | required |
| `spec` | `PortSpec` | The port specification defining type and shape constraints. | required |
Source code in .venv/lib/python3.11/site-packages/cuvis_ai_schemas/pipeline/ports.py
PortSpec dataclass¶
Specification for a node input or output port.
resolve_shape¶
Resolve symbolic dimensions in shape using node attributes.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `node` | `Any` | Node instance to resolve symbolic dimensions from. | required |

Returns:

| Type | Description |
|---|---|
| `tuple[int, ...]` | Resolved shape with all symbolic dimensions replaced by concrete integer values. |
See Also
DimensionResolver.resolve : Underlying resolution logic.
Source code in .venv/lib/python3.11/site-packages/cuvis_ai_schemas/pipeline/ports.py
is_compatible_with¶
Check if this port can connect to another port.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `other` | `PortSpec \| list[PortSpec]` | Target port spec. If a list, it's a variadic port - extract the spec. | required |
| `source_node` | `Any \| None` | Source node for dimension resolution | required |
| `target_node` | `Any \| None` | Target node for dimension resolution | required |

Returns:

| Type | Description |
|---|---|
| `tuple[bool, str]` | (is_compatible, error_message) |
Source code in .venv/lib/python3.11/site-packages/cuvis_ai_schemas/pipeline/ports.py
See Also¶
- Nodes API: Node implementations with port specifications
- Pipeline API: Pipeline and connection management
- Core Concepts: Understand the architecture
- Quickstart: Practical port usage examples