User Guide
This guide describes how to configure NTT (the TCP version of the Mango BoostX NVMe-over-Fabrics Target) and NTI (the TCP version of the Mango BoostX NVMe-over-Fabrics Initiator) through the OPI bridge interface. The guide assumes the following configuration; adjust the commands to match your environment.
- Target Node: 100.0.99.2
- Initiator Node: 100.0.99.3
- SoC Linux: 192.168.1.2
NTT Setup with OPI APIs
You can refer to the NTT User Guide for more detailed information on setting up NTT. This document omits those details and focuses on how NTT can be configured with the OPI API.
Start OPI bridge for NTT
(target) ~$ sudo mango-ctl ntt start
mango-nvmf@0 started
For ease of use, you can set the bridge address as an environment variable. The command below assumes that calls to the OPI bridge will be made from the same node where the bridge is running.
(target) ~$ export BRIDGE_ADDR=127.0.0.1:50051
(target) ~$ ./opi-mangoboost-bridge
2025/06/20 14:19:51 Connection to SPDK will be via: unix detected from /var/tmp/spdk.sock
2025/06/20 14:19:51 gRPC server listening at [::]:50051
2025/06/20 14:19:51 HTTP Server listening at 8082
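If you want to confirm the bridge is reachable before issuing configuration calls, grpc_cli's `ls` subcommand lists the services the server exposes. Note this relies on gRPC server reflection being enabled on the bridge; if reflection is off, the command fails even though the bridge is running.

```shell
# List the gRPC services the OPI bridge exposes (requires server reflection).
grpc_cli ls $BRIDGE_ADDR
```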
Create NVMe subsystem
(target) ~$ grpc_cli call --json_input --json_output $BRIDGE_ADDR CreateNvmeSubsystem \
'{
"nvme_subsystem": {
"spec": {
"nqn": "nqn.2019-07.io.spdk:cnode0",
"serial_number": "MB_NVMF",
"model_number": "MangoBoost NVMF",
"max_namespaces": 256
}
},
"nvme_subsystem_id": "subsystem0"
}'
connecting to 127.0.0.1:50051
{
"name": "nvmeSubsystems/subsystem0",
"spec": {
"nqn": "nqn.2019-07.io.spdk:cnode0",
"serialNumber": "MB_NVMF",
"modelNumber": "MangoBoost NVMF",
"maxNamespaces": "256"
},
"status": {
"firmwareRevision": "SPDK v25.05 git sha1 71aa511"
}
}
Rpc succeeded with OK status
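The nqn value must follow the NVMe qualified-name convention, nqn.&lt;yyyy-mm&gt;.&lt;reverse-domain&gt;[:identifier]. Before issuing the call, you can run a quick format check; the regex below is a simplified approximation of the convention, not a full spec validator.

```shell
# Simplified NQN format check: nqn.<yyyy-mm>.<reverse-domain>[:identifier]
nqn="nqn.2019-07.io.spdk:cnode0"
if echo "$nqn" | grep -Eq '^nqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-z0-9.-]+(:.+)?$'; then
    echo "NQN format OK"
else
    echo "NQN format invalid" >&2
fi
```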
Create NVMe remote controller
(target) ~$ grpc_cli call --json_input --json_output $BRIDGE_ADDR CreateNvmeRemoteController \
'{
"nvme_remote_controller": {
"multipath": "NVME_MULTIPATH_DISABLE"
},
"nvme_remote_controller_id": "nvme0"
}'
connecting to 127.0.0.1:50051
{
"name": "nvmeRemoteControllers/nvme0",
"multipath": "NVME_MULTIPATH_DISABLE"
}
Rpc succeeded with OK status
Create NVMe path
Before creating the NVMe path, find the PCIe address of the NVMe device and make sure the vfio-pci driver is bound to it.
(target) ~$ sudo mango-ctl dev show nvme
PCI NVMe Devices
0000:4f:00.0
Vendor ID: 144d
Device ID: a824
Revision ID: 0
Description: NVMe SSD Controller PM173X
BAR[0]: 0xbb110000 [size=0x8000]
NUMA Node: 0
PCIe MSI-X: 64
PCIe Link: Speed 16GT/s, Width x8
Kernel module: nvme
(target) ~$ sudo virsh nodedev-detach pci_0000_4f_00_0
Device pci_0000_4f_00_0 detached
(target) ~$ sudo mango-ctl dev show nvme
PCI NVMe Devices
0000:4f:00.0
Vendor ID: 144d
Device ID: a824
Revision ID: 0
Description: NVMe SSD Controller PM173X
BAR[0]: 0xbb110000 [size=0x8000]
NUMA Node: 0
PCIe MSI-X: 64
PCIe Link: Speed 16GT/s, Width x8
Kernel module: vfio-pci
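If virsh is not available on your system, the same rebind can be done directly through sysfs using the kernel's driver_override mechanism. This is a sketch of the standard Linux procedure, not a MangoBoost-specific tool; it requires root, and you should adjust the PCI address to your device.

```shell
# Bind vfio-pci to the NVMe device via sysfs (alternative to virsh nodedev-detach).
BDF=0000:4f:00.0                                        # adjust to your device
modprobe vfio-pci                                       # ensure the driver is loaded
echo "$BDF"   > /sys/bus/pci/devices/$BDF/driver/unbind # detach current driver
echo vfio-pci > /sys/bus/pci/devices/$BDF/driver_override
echo "$BDF"   > /sys/bus/pci/drivers_probe              # rebind using the override
```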
After binding the vfio-pci driver to the NVMe device, you can create an NVMe path. The NVMe path is created under the remote controller created in the previous step.
(target) ~$ grpc_cli call --json_input --json_output $BRIDGE_ADDR CreateNvmePath \
'{
"parent": "nvmeRemoteControllers/nvme0",
"nvme_path": {
"trtype": "NVME_TRANSPORT_TYPE_PCIE",
"traddr": "0000:4f:00.0"
},
"nvme_path_id": "nvmepciepath0"
}'
connecting to 127.0.0.1:50051
{
"name": "nvmeRemoteControllers/nvme0/nvmePaths/nvmepciepath0",
"trtype": "NVME_TRANSPORT_TYPE_PCIE",
"traddr": "0000:4f:00.0"
}
Rpc succeeded with OK status
Create NVMe namespace
At this stage, the new namespace is expected to have STATE_DISABLED as its state.
It will change to STATE_ENABLED after CreateNvmeController is called.
(target) ~$ grpc_cli call --json_input --json_output $BRIDGE_ADDR CreateNvmeNamespace \
'{
"parent": "nvmeSubsystems/subsystem0",
"nvme_namespace": {
"spec": {
"volume_name_ref": "nvme0n1"
}
},
"nvme_namespace_id": "namespace0"
}'
connecting to 127.0.0.1:50051
{
"name": "nvmeSubsystems/subsystem0/nvmeNamespaces/namespace0",
"spec": {
"hostNsid": 1,
"volumeNameRef": "nvme0n1"
},
"status": {
"state": "STATE_DISABLED",
"operState": "OPER_STATE_OFFLINE"
}
}
Rpc succeeded with OK status
Create NVMe Controller
(target) ~$ grpc_cli call --json_input --json_output $BRIDGE_ADDR CreateNvmeController \
'{
"parent": "nvmeSubsystems/subsystem0",
"nvme_controller": {
"spec": {
"nvme_controller_id": 0,
"trtype": "NVME_TRANSPORT_TYPE_TCP",
"fabrics_id": {
"traddr": "100.0.99.2",
"trsvcid": "4420",
"adrfam": "NVME_ADDRESS_FAMILY_IPV4"
}
}
},
"nvme_controller_id": "controller0"
}'
connecting to 127.0.0.1:50051
{
"name": "nvmeSubsystems/subsystem0/nvmeControllers/controller0",
"spec": {
"nvmeControllerId": -1,
"trtype": "NVME_TRANSPORT_TYPE_TCP",
"fabricsId": {
"traddr": "100.0.99.2",
"trsvcid": "4420",
"adrfam": "NVME_ADDRESS_FAMILY_IPV4"
}
},
"status": {
"active": true
}
}
Rpc succeeded with OK status
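At this point the target side is fully configured. You can cross-check what the bridge knows about with the corresponding List calls from the OPI storage API; the exact response shape may vary between bridge versions, but the configured subsystem and controller should appear, and the namespace should now report STATE_ENABLED.

```shell
# Verify the objects configured on the target bridge.
grpc_cli call --json_input --json_output $BRIDGE_ADDR ListNvmeSubsystems '{}'
grpc_cli call --json_input --json_output $BRIDGE_ADDR ListNvmeNamespaces \
'{ "parent": "nvmeSubsystems/subsystem0" }'
```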
NTI Setup with OPI APIs
You can refer to the NTI User Guide for more detailed information on setting up NTI. This document omits those details and focuses on how NTI can be configured with the OPI API.
Start OPI bridge for NTI
The initial configuration should be done on the DPU SoC Linux. Start the NTI service and OPI bridge on the DPU.
(soc) ~# mango-ctl nvme start
mango-nvme started
(soc) ~# ./opi-mangoboost-bridge
OPI API calls can be made from any server on the same network as the DPU SoC.
This guide assumes the initiator node (100.0.99.3) directly sends gRPC calls to the OPI bridge (192.168.1.2) to access the target node (100.0.99.2).
(initiator) ~# export BRIDGE_ADDR=192.168.1.2:50051 # adjust to DPU SoC OOB IP address
Create NVMe subsystem
(initiator) ~$ grpc_cli call --json_input --json_output $BRIDGE_ADDR CreateNvmeSubsystem \
'{
"nvme_subsystem": {
"spec": {
"nqn": "nqn.2019-07.io.spdk:cnode0",
"serial_number": "MB_NVMF",
"model_number": "MangoBoost NVMF",
"max_namespaces": 256
}
},
"nvme_subsystem_id": "subsystem0"
}'
connecting to 192.168.1.2:50051
{
"name": "nvmeSubsystems/subsystem0",
"spec": {
"nqn": "nqn.2019-07.io.spdk:cnode0",
"serialNumber": "MB_NVMF",
"modelNumber": "MangoBoost NVMF",
"maxNamespaces": "256"
},
"status": {
"firmwareRevision": "SPDK v25.05 git sha1 71aa511"
}
}
Rpc succeeded with OK status
Create PCIe NVMe controller
(initiator) ~$ grpc_cli call --json_input --json_output $BRIDGE_ADDR CreateNvmeController \
'{
"parent": "nvmeSubsystems/subsystem0",
"nvme_controller": {
"spec": {
"nvme_controller_id": -1,
"pcie_id": {
"physical_function": 0,
"virtual_function": 0,
"port_id": 0
},
"max_nsq": 32,
"max_ncq": 32,
"trtype": "NVME_TRANSPORT_TYPE_PCIE"
}
},
"nvme_controller_id": "controller1"
}'
connecting to 192.168.1.2:50051
{
"name": "nvmeSubsystems/subsystem0/nvmeControllers/controller1",
"spec": {
"nvmeControllerId": -1,
"trtype": "NVME_TRANSPORT_TYPE_PCIE",
"pcieId": {
"portId": 0,
"physicalFunction": 0,
"virtualFunction": 0
},
"maxNsq": 32,
"maxNcq": 32
},
"status": {
"active": true
}
}
Rpc succeeded with OK status
Create NVMe remote controller
(initiator) ~$ grpc_cli call --json_input --json_output $BRIDGE_ADDR CreateNvmeRemoteController \
'{
"nvme_remote_controller": {
"multipath": "NVME_MULTIPATH_DISABLE",
"tcp": {
"hdgst": false,
"ddgst": false
}
},
"nvme_remote_controller_id": "nvme0"
}'
connecting to 192.168.1.2:50051
{
"name": "nvmeRemoteControllers/nvme0",
"multipath": "NVME_MULTIPATH_DISABLE",
"tcp": {}
}
Rpc succeeded with OK status
Create NVMe TCP path
(initiator) ~$ grpc_cli call --json_input --json_output $BRIDGE_ADDR CreateNvmePath \
'{
"parent": "nvmeRemoteControllers/nvme0",
"nvme_path": {
"trtype": "NVME_TRANSPORT_TYPE_TCP",
"traddr": "100.0.99.2",
"fabrics": {
"subnqn": "nqn.2019-07.io.spdk:cnode0",
"trsvcid": "4420",
"adrfam": "NVME_ADDRESS_FAMILY_IPV4"
}
},
"nvme_path_id": "nvme0path"
}'
connecting to 192.168.1.2:50051
{
"name": "nvmeRemoteControllers/nvme0/nvmePaths/nvme0path",
"trtype": "NVME_TRANSPORT_TYPE_TCP",
"traddr": "100.0.99.2",
"fabrics": {
"trsvcid": "4420",
"subnqn": "nqn.2019-07.io.spdk:cnode0",
"adrfam": "NVME_ADDRESS_FAMILY_IPV4"
}
}
Rpc succeeded with OK status
Create NVMe namespace
(initiator) ~$ grpc_cli call --json_input --json_output $BRIDGE_ADDR CreateNvmeNamespace \
'{
"parent": "nvmeSubsystems/subsystem0",
"nvme_namespace": {
"spec": {
"volume_name_ref": "nvme0n1"
}
}
}'
connecting to 192.168.1.2:50051
{
"name": "nvmeSubsystems/subsystem0/nvmeNamespaces/72c1f9e7-df2d-479a-aecd-5f272f5e230d",
"spec": {
"hostNsid": 1,
"volumeNameRef": "nvme0n1"
},
"status": {
"state": "STATE_ENABLED",
"operState": "OPER_STATE_ONLINE"
}
}
Rpc succeeded with OK status
Bind NVMe device on host
As noted above, gRPC calls to the OPI bridge can be made from other systems, or even from inside the DPU SoC Linux. On the host, however, the DPU must be bound to the NVMe driver before it appears as an NVMe device.
(initiator) ~# echo 0000:41:00.0 | sudo tee /sys/bus/pci/drivers/nvme/bind
(initiator) ~$ sudo nvme list
Node SN Model Namespace Usage Format FW Rev
-------------- --------------- --------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 MB_TOE_Nvme0 MangoBoost NVMe Virtual Device 1 137.44 GB / 137.44 GB 512 B + 0 B 23.09
Now, you can see the MangoBoost NVMe Virtual Device listed on the host.
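As a final sanity check, you can query the controller's identify data with nvme-cli and issue a small non-destructive read against the new namespace. The device names below assume the DPU enumerated as nvme0, as in the listing above; adjust them if your host already has other NVMe devices.

```shell
# Query controller identify data and read the first 4 KiB (non-destructive).
sudo nvme id-ctrl /dev/nvme0
sudo dd if=/dev/nvme0n1 of=/dev/null bs=4k count=1
```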