Install the dependencies:
apt install make coreutils gcc gcc-multilib build-essential linux-headers-$(uname -r) git libnuma-dev python
Now we need to set the environment variables to indicate where DPDK is located and what type of environment we are running on.
The easiest way of doing this is to add the following lines to the root .bashrc file (or the equivalent if you are using something other than bash):
export RTE_SDK=<path to dpdk>
export RTE_TARGET=x86_64-native-linuxapp-gcc
Apply the changes:
source ~/.bashrc
Now we compile DPDK. Go to the DPDK directory (download the repository if needed) and generate the configuration:
make config O=$RTE_TARGET T=$RTE_TARGET
Now compile DPDK using this configuration:
make O=$RTE_TARGET -j $(nproc)
This builds DPDK into the $RTE_TARGET directory.
Before running a DPDK app you should bind the NIC to the uio_pci_generic kernel module:
modprobe uio_pci_generic
$RTE_SDK/usertools/dpdk-devbind.py --bind=uio_pci_generic <PCI address>
When running a DPDK app, you should probably use the following command-line options to specify the cores (-l), the number of memory channels (-n), and the PCI address of the NIC you want to use (-w), e.g.,
<dpdk command> -l 7-15 -n 4 -w 82:00.0
Many of the following suggestions were adapted from the DPDK documentation.
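As a concrete example, these options could be used to start testpmd, the test application shipped with DPDK (the path below assumes the make-based build described earlier; the core range, channel count, and PCI address are placeholders for your own system):

```shell
# Hypothetical testpmd invocation; adjust cores, channels, and PCI address.
#   -l 7-15   : cores to run on
#   -n 4      : number of memory channels
#   -w 82:00.0: PCI address of the NIC
# Options after "--" go to testpmd itself; -i starts its interactive shell.
sudo $RTE_SDK/$RTE_TARGET/app/testpmd -l 7-15 -n 4 -w 82:00.0 -- -i
```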
Before continuing, make sure that DPDK supports the NIC you are planning to use (see the DPDK Supported Hardware list). Also ensure that this NIC has the latest NVM/firmware.
Reboot your system and enter the BIOS. Make sure to apply all of the following changes that are applicable to your system:
- Disable hyper-threading;
- Disable C-state and P-state transitions;
- Disable uncore power scaling;
- Make sure all options are optimized for performance;
- Consider activating Turbo Boost -- although it may be a good idea to test if it improves performance in your scenario;
- Disable all virtualization options.
Save your changes and reboot the system.
Make sure you have at least one memory DIMM inserted in every memory channel. To check this, run the following command:
dmidecode -t memory | grep 'Locator\|Size'
You will see locators following the format DIMM_C2, where C is the channel and 2 is the DIMM. When a module is installed in a given location, its size is shown; otherwise, there is a notice saying that no module is installed. Make sure you have a DIMM in every channel (i.e., every letter has at least one module installed).
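This check can also be scripted. The sketch below runs the same logic over sample dmidecode output standing in for real hardware (the sample assumes trimmed lines; the real command prints leading whitespace) and lists the channels that have at least one populated DIMM:

```shell
# Sample lines in the shape produced by:
#   dmidecode -t memory | grep 'Locator\|Size'
# Replace with the real output on your machine.
sample='Size: 16384 MB
Locator: DIMM_A1
Size: No Module Installed
Locator: DIMM_B1
Size: 16384 MB
Locator: DIMM_C1'

# Each Size line precedes its Locator line; remember whether a module is
# present and, if so, print the channel letter (the character after DIMM_).
printf '%s\n' "$sample" | awk '
    /^Size:/    { populated = ($2 != "No") }
    /^Locator:/ { if (populated) print substr($2, 6, 1) }' | sort -u
```

With the sample above this prints A and C, so channel B would need a DIMM.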
You will need to know the PCIe code of the network interface you would like to use. There are many ways of finding it out. If the NIC is currently mapped to a Linux network interface, you can retrieve the PCIe code using ethtool.
If you don't already have ethtool installed, install it with:
apt install ethtool
Then, use it to determine the PCIe code of the network interface you would like to use (the interface name is the one you retrieve using the ip a command):
ethtool -i <interface name> | grep bus-info
If you know the NIC model, you can also check the PCIe code using the lspci command.
If the lspci command is not available install it with:
apt install pciutils
Finally, check all the available Ethernet devices with:
lspci | grep Eth
Look for your NIC model; its code is in the left column.
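Note that ethtool reports the address with a leading PCI domain (0000:), while lspci and the DPDK -w option use the short form. A small sketch of the conversion, using a stand-in string instead of real ethtool output:

```shell
# Stand-in for: ethtool -i <interface name> | grep bus-info
bus_info='bus-info: 0000:82:00.0'

# Drop the label and the 0000: domain prefix to get the short PCI code.
pci=$(printf '%s\n' "$bus_info" | sed 's/^bus-info: //; s/^0000://')
echo "$pci"   # 82:00.0
```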
If your system has multiple CPU sockets, you should ensure that the cores you use are in the same NUMA node as the NIC.
To do this, first check which NUMA node the NIC you want to use is attached to. That is achievable with a command similar to the following; replace 0000:82:00.0 with the PCIe code of the network interface you would like to check.
cat /sys/bus/pci/devices/0000\:82\:00.0/numa_node
The output is the NUMA node where the NIC is located.
Now you want to find out which NUMA node each CPU core belongs to. To do this, use the following command:
lscpu | grep NUMA
This command will tell you how many NUMA nodes there are in the system and, more importantly, which cores are in each node.
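Putting the two checks together: given the NIC's NUMA node (from the sysfs file above) and lscpu's per-node CPU lists, a sketch like the following prints the cores that are local to the NIC. The values below are stand-ins for real sysfs and lscpu output:

```shell
# Stand-ins for `cat /sys/bus/pci/devices/.../numa_node` and `lscpu | grep NUMA`.
nic_node=1
numa_lines='NUMA node(s): 2
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15'

# Print the CPU list of the node the NIC is attached to (8-15 here).
printf '%s\n' "$numa_lines" | awk -v n="$nic_node" \
    '$0 ~ ("NUMA node" n " CPU") { sub(/.*: */, ""); print }'
```

These are the cores you would then pass to the -l option and to isolcpus below.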
You should make sure that Linux initializes hugepages as it boots. Moreover, to avoid unnecessary context switches, you should isolate the cores you plan to use from the Linux scheduler, so that it does not try to schedule other processes on these cores.
Both of these can be accomplished by editing the Linux kernel parameters in the GRUB configuration file:
vim /etc/default/grub
Add the following parameters to GRUB_CMDLINE_LINUX_DEFAULT, changing the isolcpus option to include all the cores you plan to use:
default_hugepagesz=1G hugepagesz=1G hugepages=8 isolcpus=2,3,4,5,6,7,8
Update GRUB to use the new configuration:
update-grub
To make hugepages available to DPDK, they should be mounted at /mnt/huge_1GB (create this directory if it does not exist). To make the mount persist across reboots, add the following line to /etc/fstab:
nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0
Reboot the system to activate the changes:
reboot
Make sure the Linux scaling governor is set to performance:
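After the reboot, it is worth verifying that the hugepages were actually reserved before going further:

```shell
# HugePages_Total should match the hugepages= value from the GRUB line,
# and Hugepagesize should reflect the configured default page size.
grep -i huge /proc/meminfo
```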
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
Check the frequency that the cores are currently running at:
cpufreq-info
To disable ASLR run the following command:
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
During the experiments it may be worthwhile to stop cron, so that periodic tasks do not influence running experiments:
sudo systemctl stop cron