Version: 1.0.0

User Guide

info

Before reading this guide, refer to SDK Installation for the SDK package download and common configurations.

This guide describes the basic steps for NTI users to install the MangoBoost package and evaluate it in storage virtualization.

Prerequisites

Because the Mango BoostX NVMe-over-Fabrics Initiator (NTI refers to the TCP version) provides virtualized NVMe devices to the host, only the CLI tool needs to be installed. The package can be installed with the package manager of each Linux distribution; this document uses apt as an example.

~$ sudo apt install mango-cli

The mango-cli package contains mango-ctl, a CLI tool for managing MangoBoost devices.

Verifying Card Installation

If an NTI card is installed correctly, the mango-ctl dev command shows the two MangoBoost NTI PFs (03:00.0, 04:00.0) and the one management unit (04:00.1) on the card.

~$ mango-ctl dev
PCI FPGA Devices
0000:03:00.0
Vendor ID: 1f52
Device ID: 1001
Description: MangoBoost NTI NVMe PF
BAR[0]: 0xef400000 [size=0x40000]
NUMA Node: 0
Kernel module:
0000:04:00.0
Vendor ID: 1f52
Device ID: 1001
Description: MangoBoost NTI NVMe PF
BAR[0]: 0xef300000 [size=0x40000]
NUMA Node: 0
Kernel module:
0000:04:00.1
Vendor ID: 1f52
Device ID: 8019
Description: MangoBoost In-Band Channel PF
BAR[0]: 0xef340000 [size=0x40000]
BAR[2]: 0xef380000 [size=0x1000]
NUMA Node: 0
Kernel module:

NVMe/TCP Target Setup

Before starting with NVMe/TCP initiator setup, please ensure that the NVMe-oF target is running correctly. The following is an example of setting up the SPDK NVMe-oF target.

Check and configure the IP addresses of the 100Gbps NIC in the target node.

info

To achieve 2x100G bandwidth, use both NIC ports and configure an SPDK target on each port.

(target) ~$ sudo lshw -c network -businfo
Bus info Device Class Description
==========================================================
pci@0000:4f:00.0 enp79s0f0np0 network MT42822 BlueField-2 integrated ConnectX-6 Dx network controller
pci@0000:4f:00.1 enp79s0f1np1 network MT42822 BlueField-2 integrated ConnectX-6 Dx network controller
(target) ~$ sudo ifconfig enp79s0f0np0 100.0.3.2/24 up
(target) ~$ sudo ifconfig enp79s0f1np1 100.1.3.2/24 up

Configure sysctl to avoid ARP flux.

(target) ~$ sudo sysctl -w net.ipv4.conf.all.arp_ignore=1
(target) ~$ sudo sysctl -w net.ipv4.conf.all.arp_announce=2

Download and build the SPDK source code manually.

(target) ~$ git clone https://github.com/spdk/spdk.git -b LTS
(target) ~$ cd spdk
(target) spdk$ git submodule update --init
(target) spdk$ ./scripts/pkgdep.sh
(target) spdk$ ./configure
(target) spdk$ make
info

We strongly recommend using the SPDK LTS version if you use open-source SPDK for the target; we have seen several bugs in earlier versions.

Note that SPDK apps require hugepages. The commands below allocate 16384 2 MB hugepages (32 GB) on each of NUMA nodes 0 and 1.

(target) $ echo 16384 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
(target) $ echo 16384 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

We recommend using the sample SPDK scripts in the deliverables for easy NVMe-oF target setup. Also, check the NUMA node of your NIC and assign proper core affinity to the SPDK target. For example, if your 100G NIC (e.g., ConnectX-5) is in NUMA node 1, provide enough SPDK cores (e.g., 64-127) from NUMA node 1.
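
For instance, you can read the NIC's NUMA node directly from sysfs (using the interface name from the lshw output above; the value printed here is only illustrative):

(target) ~$ cat /sys/class/net/enp79s0f0np0/device/numa_node
1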

Null device (to remove the NVMe bandwidth limits)
(target) spdk$ sudo ./build/bin/nvmf_tgt -m [64-127] --json nvmf_tgt_tcp_null.json
NVMe device (the device must be bound to the vfio-pci driver)
(target) spdk$ lspci -nn | grep "Non-Volatile"
75:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980 PRO [144d:a80a]
76:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980 PRO [144d:a80a]
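# Note (assumption): if the vfio-pci module is not loaded yet, load it first so that
# /sys/bus/pci/drivers/vfio-pci/new_id exists before the unbind/new_id steps below.
(target) spdk$ sudo modprobe vfio-pci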
(target) spdk$ echo 0000:75:00.0 | sudo tee /sys/bus/pci/drivers/nvme/unbind
(target) spdk$ echo 0000:76:00.0 | sudo tee /sys/bus/pci/drivers/nvme/unbind
(target) spdk$ echo 144d a80a | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id
(target) spdk$ sudo ./build/bin/nvmf_tgt -m [64-127] --json nvmf_tgt_tcp_nvme.json
info

You may have to change the IP address or NVMe PCIe BDF in the scripts.

Multiple namespaces with null devices
(target) spdk$ sudo ./build/bin/nvmf_tgt -m [64-127] --json nvmf_tgt_tcp_null_namespace.json
info

Users can add multiple namespaces to a target subsystem by using nvmf_subsystem_add_ns in SPDK. The MB NVMe/TCP solution automatically exposes all namespaces that the target subsystem provides.
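
As a minimal sketch, assuming the SPDK target is already running with nqn.2019-07.io.spdk:cnode0 (as in the provided scripts) and the RPC socket is at its default path, a null bdev can be created and attached as an additional namespace; the bdev name and size are illustrative:

(target) spdk$ sudo ./scripts/rpc.py bdev_null_create Null1 131072 512
(target) spdk$ sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode0 Null1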

NVMe/TCP Initiator Setup

The Mango NVMe/TCP initiator (NTI) agent on the NIC automatically sets up the Mango NVMe/TCP service. To enable the NTI agent, you only need to perform the following two steps.

  • Write the configuration information in the file
  • Enable the NTI agent

NTI JSON Configuration

The NTI agent takes a JSON-formatted file as input. All NTI features are enabled by writing this JSON file. The NTI JSON configuration fields are listed in the table below.

Field Name                               Description          Type          Default
SoC Ethernet (soc ethernet)
  Interface Name (TOE_port_0)
    address                              Port0 IP Address     string
    mtu                                  Port0 MTU Size       integer       1500
  Interface Name (TOE_port_1)
    address                              Port1 IP Address     string
    mtu                                  Port1 MTU Size       integer       1500
Target Subsystems (targets)
  Target Subsystem Information (PF<n>)
    subnqn                               Target NQN           string        Required
    devname                              NVMe Dev Name        string        Required
    hostnqn                              Host NQN             string
    traddr                               Transport Addr       string list   Required
    trsvcid                              Service ID (Port)    string list   Required
    trtype                               Transport Type       string list   Required
    VF                                   Virtual Functions    object list

For detailed information on the JSON schema, please refer to 4.3. traddr, trsvcid, and trtype are in array format for the upcoming support of NVMe/TCP multi-path capabilities.

Example JSON
{
  "soc ethernet": {
    "TOE_port_0": {
      "address": "100.0.3.3/24"
    },
    "TOE_port_1": {
      "address": "100.1.3.3/24"
    }
  },
  "targets": [
    {
      "subnqn": "nqn.2019-07.io.spdk:cnode0",
      "devname": "Nvme0",
      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d605796e-615a-46f4-9d3f-82e5d44fa618",
      "traddr": ["100.0.3.2"],
      "trsvcid": ["4420"],
      "trtype": ["TCP"]
    },
    {
      "subnqn": "nqn.2019-07.io.spdk:cnode1",
      "devname": "Nvme1",
      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d605796e-615a-46f4-9d3f-82e5d44fa618",
      "traddr": ["100.1.3.2"],
      "trsvcid": ["4420"],
      "trtype": ["TCP"]
    }
  ]
}

SoC Ethernet Information

You can set Mango network IP addresses using the configuration file. The NTI agent takes the IP information (address) for each network interface name, adjusts the netplan configuration, and applies netplan. The NTI agent provides options for static IP address assignment (e.g., "address": "100.0.3.3/24") or DHCP configuration (e.g., "address": "dhcp").
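
For example, a soc ethernet fragment that uses DHCP on port 0 and a static address with an explicit (default) MTU on port 1 could look like the following sketch; the address and MTU values are illustrative:

"soc ethernet": {
  "TOE_port_0": {
    "address": "dhcp"
  },
  "TOE_port_1": {
    "address": "100.1.3.3/24",
    "mtu": 1500
  }
}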

Target Subsystem Information

You must describe the NVMe/TCP target subsystem information in the configuration file; this target information is mandatory for using the NTI agent. The NTI agent processes the target information (targets), which consists of PF<n> entries, i.e., Physical Functions mapped to port_<n>.

  • Physical Function (PF<n>)
    • Subsystem NQN (subnqn) of the NVMe/TCP target subsystem (must be unique).
    • Type (type) of the target backend.
    • Device name (devname) specified by the user (must be unique).
    • Host NQN (hostnqn) (optional, user-defined).
    • Transport address (traddr).
    • Transport service ID (trsvcid).
    • Transport type (trtype); only TCP is supported.
    • Virtual Functions (VF): each VF entry carries the same target subsystem fields, except VF itself (optional).

The Mango NVMe/TCP service supports up to 18 NVMe subsystems (2 PFs, plus 8 VFs per PF), so the targets list can contain at most two PF entries, each of which can carry additional target information for its virtual functions, as sketched below. The first subsystem entry (PF) is enabled as the first host NVMe device, and the second subsystem as the second host NVMe device. The Mango NVMe/TCP service supports only TCP as a transport type. Please refer to 3 for the advanced features, PFC and ECN.
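
Below is a minimal sketch of a targets entry that also defines one virtual function. The exact VF object layout is an assumption based on the field list above (each VF entry carries the same subsystem fields, minus VF); the NQNs, names, and addresses are illustrative:

"targets": [
  {
    "subnqn": "nqn.2019-07.io.spdk:cnode0",
    "devname": "Nvme0",
    "traddr": ["100.0.3.2"],
    "trsvcid": ["4420"],
    "trtype": ["TCP"],
    "VF": [
      {
        "subnqn": "nqn.2019-07.io.spdk:cnode2",
        "devname": "Nvme0vf0",
        "traddr": ["100.0.3.2"],
        "trsvcid": ["4420"],
        "trtype": ["TCP"]
      }
    ]
  }
]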

info

The order of the host NVMe devices can change because the U45N board uses PCIe bifurcation. Fortunately, there is a way to identify the NVMe device of the first subsystem entry you set: the PCI bus of that NVMe device also carries the in-band channel function.
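
For example, listing the MangoBoost functions by vendor ID (1f52, taken from the mango-ctl dev output above) shows which NVMe PF shares its PCI bus with the in-band channel function; device descriptions are omitted here:

(host) ~$ lspci -D -d 1f52:
0000:03:00.0 ...
0000:04:00.0 ...
0000:04:00.1 ...

Here the NVMe PF at 0000:04:00.0 shares bus 04 with the in-band channel function at 0000:04:00.1, while 0000:03:00.0 is alone on bus 03.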

Enable NTI Agent on the Host

We provide a fully host-driven approach to enable the NTI agent, so you do not need to access the ARM SoC. Please prepare the NTI JSON configuration file.

(host) ~$ sudo mango-ctl soc push <the json file to push> <bdf (e.g., 0000:04:00.1)>
Host file push success.

You can check the installed file on the NTI agent by pulling the file.

(host) ~$ sudo mango-ctl soc pull <destination path (e.g., ./pulled_here.json)> <bdf (e.g., 0000:04:00.1)>
SoC file pull success.

(host) ~$ cat <pulled file>
{
  "soc ethernet": {
    "TOE_port_0": {
      "address": "100.0.3.3/24"
    },
    "TOE_port_1": {
      "address": "100.1.3.3/24"
    }
  },
  "targets": [
    {
      "subnqn": "nqn.2019-07.io.spdk:cnode0",
      "devname": "Nvme0",
      "traddr": ["100.0.3.2"],
      "trsvcid": ["4420"],
      "trtype": ["TCP"]
    },
    {
      "subnqn": "nqn.2019-07.io.spdk:cnode1",
      "devname": "Nvme1",
      "traddr": ["100.1.3.2"],
      "trsvcid": ["4420"],
      "trtype": ["TCP"]
    }
  ]
}

Then, start the agent.

(host) ~$ sudo mango-ctl soc service start mango-nti <bdf>

If you want the agent to start automatically when the server reboots, or if you plug the card into a different server, enable the agent.

(host) ~$ sudo mango-ctl soc service enable mango-nti <bdf>
Created symlink /etc/systemd/system/multi-user.target.wants/mango-nti.service → /lib/systemd/system/mango-nti.service.

It can take up to 2 minutes to initialize NTI. You can check that setup is done with the Mango CLI tool: setup is complete once INFO:root:Mango NVMe/TCP Initiator setup completed appears in the service log. You will also see the service log periodically printing INFO:root:Periodic health-checker.

(host) ~$ sudo mango-ctl soc service status mango-nti <bdf>
mango-nti.service - MangoBoost NVME/TCP Initiator
Loaded: loaded (/lib/systemd/system/mango-nti.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2024-03-22 07:02:18 UTC; 20s ago
Main PID: 887 (python3)
Tasks: 1 (limit: 4234)
Memory: 8.5M
CGroup: /system.slice/mango-nti.service
└─887 /usr/bin/python3 /etc/mango-cli/scripts/mango_nti.py start

Mar 11 14:02:21 lx2162a mango-nti[6164]: INFO:root:Mango NVMe/TCP Initiator setup completed.
Mar 11 14:02:31 lx2162a mango-nti[6164]: INFO:root:Periodic health-checker
Mar 11 14:02:41 lx2162a mango-nti[6164]: INFO:root:Periodic health-checker

Enable NVMe Device on Host

The host NVMe devices become available after the NTI agent setup process is done. The host NVMe device is fully compatible with the standard Linux NVMe driver and CLI tool. Bind the NVMe PFs to the NVMe driver on the host; no system power cycle is required.

(host) ~$ sudo mango-ctl dev show
PCI FPGA Devices
0000:04:00.0
Vendor ID: 1f52
Device ID: 1001
Description: MangoBoost FVM/NVMf PF
BAR[0]: 0xef300000 [size=0x40000]
NUMA Node: 0
PCIe MSI-X: 32
PCIe Link: Speed 16GT/s, Width x8
Kernel module: nvme
0000:03:00.0
Vendor ID: 1f52
Device ID: 1001
Description: MangoBoost FVM/NVMf PF
BAR[0]: 0xef400000 [size=0x40000]
NUMA Node: 0
PCIe MSI-X: 32
PCIe Link: Speed 16GT/s, Width x8
Kernel module: nvme
0000:04:00.1
Vendor ID: 1f52
Device ID: 8019
Description: MangoBoost In-Band Channel PF
BAR[0]: 0xef340000 [size=0x40000]
BAR[2]: 0xef380000 [size=0x1000]
NUMA Node: 0
Kernel module:


(host) ~$ sudo modprobe nvme
(host) ~$ sudo su
(host) # echo <bdf (e.g., 0000:03:00.0)> > /sys/bus/pci/drivers/nvme/bind
(host) # echo <bdf (e.g., 0000:04:00.0)> > /sys/bus/pci/drivers/nvme/bind

(host) ~$ sudo nvme list
Node SN Model Namespace Usage Format FW Rev
-------------- --------------- --------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 MB_TOE_Nvme0 MangoBoost NVMe Virtual Device 1 137.44 GB / 137.44 GB 512 B + 0 B 23.09
/dev/nvme1n1 MB_TOE_Nvme1 MangoBoost NVMe Virtual Device 1 137.44 GB / 137.44 GB 512 B + 0 B 23.09

If multiple namespaces are enabled, nvme list shows them as below. (This example uses the provided nvmf_tgt_tcp_null_namespace.json.)

(host) ~$ sudo nvme list
Node SN Model Namespace Usage Format FW Rev
-------------- --------------- --------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 MB_TOE_Nvme0 MangoBoost NVMe Virtual Device 1 137.44 GB / 137.44 GB 512 B + 0 B 23.09
/dev/nvme0n2 MB_TOE_Nvme0 MangoBoost NVMe Virtual Device 2 137.44 GB / 137.44 GB 512 B + 0 B 23.09

A system power cycle (cold reboot) binds the NVMe devices at boot time, so you do not have to bind them manually after a power cycle.
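
As an optional, quick sanity check of a virtual device (not part of the setup steps above; the fio parameters are only illustrative and assume fio with the libaio engine is installed):

(host) ~$ sudo fio --name=randread --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 --time_based --runtime=30 --group_reporting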

How to Change NTI Configuration

To change the NTI configuration, there are two options.

  • Restart the NTI agent manually
# Unbind nvme first on host
(host) ~$ echo 0000:04:00.0 | sudo tee /sys/bus/pci/drivers/nvme/unbind
(host) ~$ echo 0000:03:00.0 | sudo tee /sys/bus/pci/drivers/nvme/unbind

# Change the config file
(host) ~$ vi your_config.json
...(change the file)...
(host) ~$ sudo mango-ctl soc push your_config.json 0000:04:00.1

# Restart service
(host) ~$ sudo mango-ctl soc restart 0000:04:00.1
...(NTI Setup Done)...

# Bind nvme on host after setup done
(host) ~$ echo 0000:04:00.0 | sudo tee /sys/bus/pci/drivers/nvme/bind
(host) ~$ echo 0000:03:00.0 | sudo tee /sys/bus/pci/drivers/nvme/bind
  • Change the configuration file and reboot.
(host) ~$ vi your_config.json
...(change the file)...
(host) ~$ sudo mango-ctl soc push your_config.json 0000:04:00.1
Host file push success.
(host) ~$ sudo reboot
...(reboot)...

Importing a configuration file over the network

The NTI configuration file can be imported from a server connected via the card's 100Gbps network interface. You can configure the NTI JSON configuration file as follows:

{
  "soc ethernet": {
    "TOE_port_0": {
      "address": "100.0.3.3/24"
    },
    "TOE_port_1": {
      "address": "100.1.3.3/24"
    }
  },
  "targets": [
    {
      "subnqn": "nqn.2019-07.io.spdk:cnode0",
      "devname": "Nvme0",
      "traddr": ["100.0.3.2"],
      "trsvcid": ["4420"],
      "trtype": ["TCP"]
    }
  ],
  "import config": [
    {
      "scp": {
        "addr": "100.1.3.101",
        "id": "user",
        "pwd": "password",
        "path": "/home/user/nti.json"
      }
    },
    {
      "scp": {
        "addr": "100.1.3.102",
        "id": "user",
        "pwd": "password",
        "path": "/home/user/nti_secondary.json"
      }
    }
  ]
}

In the soc ethernet section, configure TOE_port_0 or TOE_port_1 with the appropriate IP address and subnet mask. Store the NTI JSON configuration file on a server accessible via the configured network. Specify the server's IP address, user credentials (username and password), and the path of the configuration file to download.

The import config field is a list of SCP transfer instructions. The NTI card attempts to import the configuration files in the order specified: if transferring the file in the first element fails, it proceeds to the next element. Specify at least one target in targets in case all of the files in import config are inaccessible. The current version does not support passwords that require escaping.

NVMe/TCP Clean-up

Please do the following to disconnect NVMe-oF and stop MB NVMe/TCP.

(host) ~$ echo 0000:04:00.0 | sudo tee /sys/bus/pci/drivers/nvme/unbind
(host) ~$ echo 0000:03:00.0 | sudo tee /sys/bus/pci/drivers/nvme/unbind
(host) ~$ sudo mango-ctl soc service disable mango-nti 0000:04:00.1
(host) ~$ sudo mango-ctl soc restart 0000:04:00.1

If NVMe/TCP Does Not Work Properly

You can restart the MB NVMe/TCP service by running the following command:

(host) ~$ sudo mango-ctl soc restart 0000:04:00.1
info

If the issue persists even after executing the command above, please perform a cold reboot of the system to restore functionality.