# Tips and Tricks for Jetson AGX Xavier

This document contains useful software and links to documentation that we have found useful when working with the Jetson AGX Xavier.

Copied from: https://git.its.aau.dk/WW82ZE/docs_xavier/src/branch/master

# Links

- Nvidia Developer forum, Jetson AGX Xavier topic: https://forums.developer.nvidia.com/c/agx-autonomous-machines/jetson-embedded-systems/jetson-agx-xavier/75
- Jetson Community Projects: https://developer.nvidia.com/embedded/community/jetson-projects
- Embedded Linux wiki with many further links: https://elinux.org/Jetson_AGX_Xavier
- Further links to resources: https://forums.developer.nvidia.com/t/links-to-jetson-agx-xavier-resources-wiki/64659
- Jetson Nano tips: https://github.com/sulpub/Jetson_nano/blob/master/README.md
- Build Jetson AGX kernel and modules: https://github.com/jetsonhacks/buildJetsonXavierKernel
- NVIDIA Container Runtime on Jetson: https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-Container-Runtime-on-Jetson
- Utils: https://tsl0922.github.io/ttyd/

# Software packages

## JetPack

JetPack 4.6 Production Release with L4T 32.6.1: https://forums.developer.nvidia.com/t/jetpack-4-6-production-release-with-l4t-32-6-1/185596/7

Nvidia's Jetson-specific components are collected in JetPack. They can be installed as Debian packages, which enables, for example, installing all JetPack components via the `nvidia-jetpack` metapackage. Running these commands on the developer kit results in a full JetPack install:

```bash
sudo apt update
sudo apt install nvidia-jetpack
```

To view the individual packages which are part of the `nvidia-jetpack` metapackage, enter:

```bash
sudo apt-cache show nvidia-jetpack
```

## Boot from NVMe

```
git clone https://github.com/jetsonhacks/rootOnNVMe.git
cd rootOnNVMe
./copy-rootfs-ssd.sh
./setup-service.sh
sudo reboot
# If you want to boot from the SD card again, remove the file /etc/setssdroot.conf from the SD card.
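# After rebooting, check which device the root filesystem is mounted from;
# it should now be the NVMe drive (typically /dev/nvme0n1p1, though the
# device name may differ on your system):
findmnt -n -o SOURCE /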
```

## Remove unused packages

```
sudo apt remove --purge libreoffice*
sudo apt remove --purge thunderbird*
sudo apt clean
sudo apt autoremove
```

## Remove Docker

If you don't use Docker containers, remove Docker. The Docker daemon uses some system resources.

```
sudo apt-get remove docker docker-engine docker.io containerd runc
```

## Docker Default Runtime

https://github.com/dusty-nv/jetson-containers/#docker-default-runtime

To enable access to the CUDA compiler (nvcc) during `docker build` operations, add `"default-runtime": "nvidia"` to your `/etc/docker/daemon.json` configuration file before attempting to build the containers:

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```

You will then want to restart the Docker service or reboot your system before proceeding.

## NVIDIA Container Runtime on Jetson

```
# Allow containers to communicate with Xorg
$ sudo xhost +si:localuser:root
$ sudo docker run --runtime nvidia --network host -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-base:r32.3.1

root@nano:/# apt-get update && apt-get install -y --no-install-recommends make g++
root@nano:/# cp -r /usr/local/cuda/samples /tmp
root@nano:/# cd /tmp/samples/5_Simulations/nbody
root@nano:/# make
root@nano:/# ./nbody
```

## Building CUDA in Containers on Jetson

Docker gives you the ability to build containers using the `docker build` command. Let's start with an example of how to do that on your Jetson device:

```
$ mkdir /tmp/docker-build && cd /tmp/docker-build
$ cp -r /usr/local/cuda/samples/ ./
$ tee ./Dockerfile <<EOF
FROM nvcr.io/nvidia/l4t-base:r32.3.1
RUN apt-get update && apt-get install -y --no-install-recommends make g++
COPY ./samples /tmp/samples
WORKDIR /tmp/samples/1_Utilities/deviceQuery
RUN make clean && make
CMD ["./deviceQuery"]
EOF
$ sudo docker build -t devicequery .
$ sudo docker run -it --runtime nvidia devicequery
```

## Enable desktop sharing (VNC)

1. Open the Vino schema file `/usr/share/glib-2.0/schemas/org.gnome.Vino.gschema.xml` in an editor.
2. Add the following key to the schema:

```xml
<key name='enabled' type='b'>
  <summary>Enable remote access to the desktop</summary>
  <description>
    If true, allows remote access to the desktop via the RFB protocol. Users on remote machines may then connect to the desktop using a VNC viewer.
  </description>
  <default>false</default>
</key>
```

3. Compile the Gnome schema with: **sudo glib-compile-schemas /usr/share/glib-2.0/schemas**
4. After this, you can enable desktop sharing.
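If you prefer not to click through the GUI, desktop sharing can usually be toggled from a terminal through Vino's gsettings keys. A sketch, assuming the stock `org.gnome.Vino` schema is installed and compiled; the password is illustrative:

```shell
# Enable Vino desktop sharing and allow plain VNC clients to connect
gsettings set org.gnome.Vino enabled true
gsettings set org.gnome.Vino prompt-enabled false
gsettings set org.gnome.Vino require-encryption false
# Require a VNC password (Vino stores it base64-encoded)
gsettings set org.gnome.Vino authentication-methods "['vnc']"
gsettings set org.gnome.Vino vnc-password "$(echo -n 'mypassword' | base64)"
```

Note that `require-encryption false` is often needed because many VNC viewers do not support Vino's TLS encryption.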
![Jetson enable desktop sharing](https://github.com/sulpub/Jetson_nano/blob/master/images/jetson_nano_desktop_sharing.png)

Source information: http://bit.ly/2onnLIc

## Auto SSH login + no sudo passwd

```
# https://superuser.com/questions/8077/how-do-i-set-up-ssh-so-i-dont-have-to-type-my-password
# on host:
ssh-keygen -t rsa

# upload the public key to the remote server:
ssh-copy-id -i ~/.ssh/id_rsa.pub remote-user@remote-host

# https://askubuntu.com/questions/147241/execute-sudo-without-password
# open the sudoers file and add the following line:
sudo visudo
username ALL=(ALL) NOPASSWD:ALL
```

## Docker

https://ngc.nvidia.com/catalog/containers/nvidia:l4t-base

In case of a Docker error:

```
# https://github.com/moby/moby/issues/13008
# [1.6.0][graphdriver] prior storage driver "devicemapper" failed: error intializing graphdriver
sudo rm -rf /var/lib/docker
sudo reboot
```

## OpenCV

- https://github.com/raspberry-pi-maker/NVIDIA-Jetson/tree/master/useful_scripts
- https://forums.developer.nvidia.com/t/installing-opencv4-on-xavier-solved/65436
- https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.5.0_Jetson.sh

## Samba

```bash
sudo apt-get install samba
sudo nano /etc/samba/smb.conf
```

Configure like:

```
#======================= Share Definitions =======================

# Un-comment the following (and tweak the other settings below to suit)
# to enable the default home directory shares. This will share each
# user's home directory as \\server\username
[homes]
   comment = Home Directories
   browseable = yes

# By default, the home directories are exported read-only. Change the
# next parameter to 'no' if you want to be able to write to them.
   read only = no

# File creation mask is set to 0700 for security reasons. If you want to
# create files with group=rw permissions, set next parameter to 0775.
   create mask = 0755

# Directory creation mask is set to 0700 for security reasons. If you want to
# create dirs. with group=rw permissions, set next parameter to 0775.
   directory mask = 0755

# By default, \\server\username shares can be connected to by anyone
# with access to the samba server.
# Un-comment the following parameter to make sure that only "username"
# can connect to \\server\username
# This might need tweaking when using external authentication schemes
   valid users = %S
```

Then:

```bash
sudo service smbd restart
sudo smbpasswd -a username
```

## Python and PyTorch

Install useful Python packages:

```bash
sudo apt install python3-pip libfreetype6-dev libffi-dev -y
pip3 install cython
pip3 install numpy jupyter jupyterlab matplotlib pandas
```

## ROS2

- https://forums.developer.nvidia.com/t/ros2-on-agx-xavier/121836/9
- https://github.com/dusty-nv/jetson-containers/blob/master/Dockerfile.ros.foxy
- https://github.com/AndreV84/jetson-containers/blob/master/ros2_realsense/Dockerfile

## PyTorch

https://github.com/AastaNV/JEP/blob/master/script/install_pyTorch_Xavier.sh

Install PyTorch following the instructions from the [Nvidia forums](https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-6-0-now-available/72048):

```bash
wget https://nvidia.box.com/shared/static/9eptse6jyly1ggt9axbja2yrmj6pbarc.whl -O torch-1.6.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
pip3 install Cython
pip3 install numpy torch-1.6.0-cp36-cp36m-linux_aarch64.whl
```

Install torchvision:

```bash
sudo apt-get install libjpeg-dev zlib1g-dev
git clone --branch release/0.7 https://github.com/pytorch/vision torchvision
cd torchvision
sudo python3 setup.py install
```

## Tools

### jetson_stats

Command line resource monitor [jetson_stats](https://github.com/rbonghi/jetson_stats). Works well over SSH.
```
sudo apt install python-pip python3-pip
sudo -H pip install -U jetson-stats
# start the monitor with:
jtop
```

### Visual Studio Code

https://github.com/JetsonHacksNano/installVSCode

```
VERSION=latest
wget -N -O vscode-linux-deb.arm64.deb https://update.code.visualstudio.com/$VERSION/linux-deb-arm64/stable
sudo apt install ./vscode-linux-deb.arm64.deb
```

### Arduino IDE

https://github.com/JetsonHacksNano/installArduinoIDE

```
INSTALL_DIR=${HOME}
# Direct Jetson support starts at 1.8.15
ARDUINO_VERSION=1.8.15
# Only download if a newer version exists
wget -N https://downloads.arduino.cc/arduino-$ARDUINO_VERSION-linuxaarch64.tar.xz
tar -C $INSTALL_DIR/ -xvf arduino-${ARDUINO_VERSION}-linuxaarch64.tar.xz
cd $INSTALL_DIR/arduino-${ARDUINO_VERSION}
sudo ./install.sh
./arduino-linux-setup.sh "$USER"
echo "You can delete the tar file if desired: arduino-${ARDUINO_VERSION}-linuxaarch64.tar.xz"
```

### Netdata

Web-based resource monitor [Netdata](https://github.com/netdata/netdata).

```
sudo apt install curl
bash <(curl -Ss https://my-netdata.io/kickstart.sh)
```

Open your browser to http://localhost:19999 to see the Netdata interface.
To disable Netdata, run:

```
sudo systemctl mask netdata
sudo systemctl stop netdata
```

To enable it again, use:

```
sudo systemctl unmask netdata
sudo systemctl start netdata
```

### GPIO pins

The GPIO pins can be controlled in Python using [jetson-gpio](https://github.com/NVIDIA/jetson-gpio).

Install Jetson.GPIO:

```
git clone https://github.com/NVIDIA/jetson-gpio.git
cd jetson-gpio/
sudo python3 setup.py install
```

Setting user permissions (see https://github.com/JetsonHacksNano/ServoKit/blob/master/scripts/setPermissions.sh):

```
sudo groupadd -f -r gpio
sudo usermod -a -G gpio $USER
sudo usermod -aG i2c $USER
sudo cp lib/python/Jetson/GPIO/99-gpio.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules && sudo udevadm trigger
```

### Synergy

https://github.com/symless/synergy-core/wiki/Compiling#linux

```
sudo apt install xorg-dev libssl-dev qtbase5-dev libgdk-pixbuf2.0-dev libnotify-dev qttools5-dev-tools qttools5-dev
git clone https://github.com/symless/synergy-core.git
cd synergy-core
mkdir build
cd build
cmake ..
make
# https://github.com/symless/synergy-core/wiki/Command-Line
# ./bin/synergyc $SYNERGY_SERVER_IP
```

### RealSense SDK

https://github.com/JetsonHacksNano/installLibrealsense
https://dev.intelrealsense.com/docs/nvidia-jetson-tx2-installation

```
sudo apt-key adv --keyserver keys.gnupg.net --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE
sudo add-apt-repository "deb https://librealsense.intel.com/Debian/apt-repo bionic main" -u
sudo apt-get install librealsense2-utils librealsense2-dev
```

With the `librealsense2-dev` package installed, you can compile an application against librealsense with `g++ -std=c++11 filename.cpp -lrealsense2`, or use an IDE of your choice.
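Once the packages are installed, `rs-enumerate-devices` (shipped with `librealsense2-utils`) gives a quick command-line check that the SDK can see a camera, without needing a display:

```shell
# Lists every connected RealSense device with serial number, firmware and
# stream profiles; prints no device entries if no camera is attached
rs-enumerate-devices | head -n 20
```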
To get started with RealSense using CMake, check out `librealsense/examples/cmake`.

Reconnect the RealSense device and run `realsense-viewer` to verify the installation.

# Tech specs

### Jetson AGX Xavier module

The module itself contains the SoC and acts as the brains of the operation. Its specifications are shown in the table.

|                    | Jetson AGX Xavier module |
|--------------------|--------------------------|
| GPU                | 512-core Volta GPU with 64 Tensor Cores |
| CPU                | 8-core Nvidia Carmel CPU (ARMv8.2 64-bit, 8MB L2 + 4MB L3, up to 2.265GHz) |
| Memory             | 16GB 256-bit LPDDR4x, 137GB/s |
| Storage            | 32GB eMMC 5.1 |
| DL Accelerator     | (2x) NVDLA engines |
| Vision Accelerator | 7-way VLIW vision processor |
| Encoder/Decoder    | (2x) 4Kp60 HEVC encode / (2x) 4Kp60 decode, 12-bit support |
| Size               | 105 mm x 105 mm x 65 mm |
| Deployment         | Module (Jetson AGX Xavier) |

### Developer Kit interface board

In-depth specification of the port capabilities, as well as pinouts for the GPIO pins on the Developer Kit board, can be found in the [Carrier Board Specification](https://static5.arrow.com/pdfs/2018/12/12/12/23/1/848262/nvda_/manual/jetson_xavier_developer_kit_carrier_board_specification.pdf). A short summary of the ports is shown in the table.
|                        | Jetson AGX Xavier Developer Kit Interface Board |
|------------------------|-------------------------------------------------|
| PCIe x16               | x8 PCIe Gen4 / x8 SLVS-EC |
| RJ45                   | Gigabit Ethernet |
| USB-C                  | 2x USB 3.1, DP (optional), PD (optional); close-system debug and flashing support on one port |
| Camera connector       | (16x) CSI-2 lanes |
| M.2 Key M              | NVMe |
| M.2 Key E              | PCIe x1 + USB 2.0 + UART (for Wi-Fi/LTE) / I2S / PCM |
| 40-pin header          | UART + SPI + CAN + I2C + I2S + DMIC + GPIOs |
| HD Audio header        | High-definition audio |
| eSATAp + USB 3.0 Type A | SATA through PCIe x1 bridge (PD + data for 2.5-inch SATA) + USB 3.0 |
| HDMI Type A            | HDMI 2.0 |
| uSD/UFS card socket    | SD/UFS |

## Disk Speed

The disk speed of the AGX Xavier is around 116MB/s write and 293MB/s read. If you measure a read speed much above that (e.g. 360MB/s), you are probably reading from the page cache rather than the disk.

You can perform a disk speed test using `dd` by first creating a test file and then copying it to `/dev/null`. Adding `iflag=direct` to the read bypasses the page cache, so the figure reflects the disk itself:

```
dd if=/dev/zero of=testfile bs=2G count=1 oflag=direct
dd if=testfile of=/dev/null bs=1M iflag=direct
```

## Performance Comparison

In a small AI training test which was mildly CPU-intensive, the Jetson Xavier took 17 sec per epoch, while an Intel i7-4790K with an Nvidia GTX 960 took around 11 sec per epoch. This gives a rough idea of the Jetson's training performance; however, your results may differ depending on how CPU- or GPU-intensive the workload is.
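The write half of the `dd` disk test above can also be scripted so the throughput figure is captured instead of read off the screen. A minimal sketch, using an illustrative 64MB file and `conv=fsync` so the page cache does not inflate the number:

```shell
# Write 64 MB of zeros, force a flush to disk, and keep dd's summary line,
# which ends with the transfer rate (e.g. "... copied, 0.58 s, 116 MB/s")
summary=$(dd if=/dev/zero of=testfile bs=1M count=64 conv=fsync 2>&1 | tail -n 1)
echo "dd summary: $summary"
rm testfile
```

GNU `dd` prints the summary on stderr, hence the `2>&1` redirect before piping to `tail`.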