# Start with the Jetson AGX Orin Developer Kit

## 1. SDK Manager

#### Flash the latest version of JetPack to NVMe

Ubuntu 22.04 is compatible with JetPack 6.0.
**Note that the NVMe drive must use a GPT partition table and an ext4 partition.**
With JetPack 6, choose the complete installation. Optional: DeepStream.

## Build your kernel

### The install path is your own path; check it with `pwd`

```bash
sudo apt-get install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison

cd ~/
mkdir Projects && cd Projects
wget https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v3.0/release/jetson_linux_r36.3.0_aarch64.tbz2
wget https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v3.0/release/tegra_linux_sample-root-filesystem_r36.3.0_aarch64.tbz2
tar -xvjf jetson_linux_r36.3.0_aarch64.tbz2
wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.8.9.tar.xz
tar -xvjf tegra_linux_sample-root-filesystem_r36.3.0_aarch64.tbz2 # -C ./Linux_for_Tegra/

xz -d linux-6.8.9.tar.xz
tar -xf linux-6.8.9.tar
mv linux-6.8.9 kernel-jammy-src

cd Linux_for_Tegra/source/
./source_sync.sh -k -t jetson_36.3
cd kernel && rm -rf kernel-jammy-src
cd ~/Projects
mv kernel-jammy-src Linux_for_Tegra/source/kernel/
cd Linux_for_Tegra/source/

# Modify the defconfig
cd ~/Projects/Linux_for_Tegra/source/kernel/kernel-jammy-src/arch/arm64/configs
# Edit defconfig and add:
CONFIG_ARM64_PMEM=y
CONFIG_PCIE_TEGRA194=y
CONFIG_PCIE_TEGRA194_HOST=y
CONFIG_BLK_DEV_NVME=y
CONFIG_NVME_CORE=y
CONFIG_FB_SIMPLE=y

# or run `make defconfig` and then:
scripts/config --file .config --enable ARM64_PMEM
scripts/config --file .config --enable PCIE_TEGRA194
scripts/config --file .config --enable PCIE_TEGRA194_HOST
scripts/config --file .config --enable BLK_DEV_NVME
scripts/config --file .config --enable NVME_CORE
scripts/config --file .config --enable FB_SIMPLE

# Build the kernel (run from Linux_for_Tegra/source)
cd ~/Projects/Linux_for_Tegra/source
make -j $(nproc) -C kernel
export INSTALL_MOD_PATH=/home/johnny/Projects/Linux_for_Tegra/rootfs
sudo -E make install -C kernel
cp kernel/kernel-jammy-src/arch/arm64/boot/Image \
   ~/Projects/Linux_for_Tegra/kernel/Image

# Build modules
export KERNEL_HEADERS=$PWD/kernel/kernel-jammy-src
export INSTALL_MOD_PATH=~/Projects/Linux_for_Tegra/rootfs
make modules
sudo -E make modules_install

# Out-of-tree modules
cd ~/Projects/Linux_for_Tegra/source
KERNEL_HEADERS=$PWD/kernel/kernel-jammy-src make modules
sudo -E make modules_install

# Edit the boot configuration
sudo gedit /boot/extlinux/extlinux.conf

# Build the device trees
export KERNEL_HEADERS=$PWD/kernel/kernel-jammy-src
make dtbs
sudo cp nvidia-oot/device-tree/platform/generic-dts/dtbs/* /boot/dtb
```
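The original notes jump straight to the library setup; as a sanity check (my addition, not part of the original flow), once the target has been flashed and rebooted it is quick to confirm that the rebuilt kernel and its modules are actually in use. The `/proc/config.gz` check only works if `CONFIG_IKCONFIG_PROC` is enabled in your config.

```bash
# Run on the Jetson once it has booted the new kernel
uname -r                      # should report the rebuilt kernel version
ls /lib/modules/$(uname -r)   # the matching module directory must exist

# Optional: confirm the NVMe options were baked in (requires CONFIG_IKCONFIG_PROC)
zcat /proc/config.gz | grep -E 'CONFIG_NVME_CORE|CONFIG_BLK_DEV_NVME' || true
```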
## Recommended libraries

```bash
# Update package lists
sudo apt-get -y update

# MANDATORY: install the set of tools and libraries required for building and development
sudo apt-get install -f -y --no-install-recommends \
    ninja-build \
    libopenblas-dev \
    libopenmpi-dev \
    openmpi-bin \
    openmpi-common \
    libomp-dev \
    autoconf \
    bc \
    build-essential \
    cmake \
    ffmpeg \
    g++ \
    gcc \
    gettext-base \
    git \
    gfortran \
    hdf5-tools \
    iputils-ping \
    libatlas-base-dev \
    libavcodec-dev \
    libavdevice-dev \
    libavfilter-dev \
    libavformat-dev \
    libavutil-dev \
    libblas-dev \
    libbz2-dev \
    libc++-dev \
    libcgal-dev \
    libeigen3-dev \
    libffi-dev \
    libfreeimage-dev \
    libfreetype6-dev \
    libglew-dev \
    libgflags-dev \
    libgoogle-glog-dev \
    libgtk-3-dev \
    libgtk2.0-dev \
    libhdf5-dev \
    libjpeg-dev \
    libjpeg-turbo8-dev \
    libjpeg8-dev \
    liblapack-dev \
    liblapacke-dev \
    liblzma-dev \
    libncurses5-dev \
    libncursesw5-dev \
    libomp-dev \
    libopenblas-dev \
    libopenblas-base \
    libopenexr-dev \
    libopenjp2-7 \
    libopenjp2-7-dev \
    libopenmpi-dev \
    libpng-dev \
    libprotobuf-dev \
    libreadline-dev \
    libsndfile1 \
    libsqlite3-dev \
    libssl-dev \
    libswresample-dev \
    libswscale-dev \
    libtbb-dev \
    libtbb2 \
    libtesseract-dev \
    libtiff-dev \
    libv4l-dev \
    libx264-dev \
    libxine2-dev \
    libxslt1-dev \
    libxvidcore-dev \
    libxml2-dev \
    locales \
    moreutils \
    openssl \
    pkg-config \
    python3-dev \
    python3-numpy \
    python3-pip \
    python3-matplotlib \
    qv4l2 \
    rsync \
    scons \
    v4l-utils \
    zlib1g-dev \
    zip \
    nvidia-l4t-gstreamer \
    ubuntu-restricted-extras \
    libsoup2.4-dev \
    libjson-glib-dev
    # libwebp-dev
    # libpostproc-dev

# GStreamer
sudo apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio gstreamer1.0-rtsp libgstrtspserver-1.0-dev
```

```bash
# Install ccache
sudo apt install -y ccache

# Update symlinks
sudo /usr/sbin/update-ccache-symlinks

# Prepend ccache to the PATH
echo 'export PATH="/usr/lib/ccache:$PATH"' | tee -a ~/.bashrc

# Source bashrc to test the new PATH
source ~/.bashrc && echo $PATH
```

# Install the latest version of CUDA, cuDNN, TensorRT (Optional)

```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-8 cuda-compat-12-8
sudo apt-get install cudnn python3-libnvinfer python3-libnvinfer-dev tensorrt
```

## Install cmake

```bash
VERSION="3.31.7"
sudo apt remove cmake
wget https://cmake.org/files/v3.31/cmake-${VERSION}.tar.gz
tar xf cmake-${VERSION}.tar.gz
cd cmake-${VERSION}
./configure
make -j $(nproc)
sudo make install
cmake --version
```

## Install clang-18

```bash
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
sudo add-apt-repository "deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy-18 main"
sudo apt update
sudo apt install clang-18 lldb-18 lld-18
sudo update-alternatives --install /usr/bin/clang clang /usr/bin/clang-18 100
sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-18 100
```

## Build OpenBLAS

```bash
git clone https://github.com/xianyi/OpenBLAS.git
cd OpenBLAS/
export OMP_NUM_THREADS=12
make TARGET=ARMV8 USE_OPENMP=1
sudo make PREFIX=/usr/local install
sudo ldconfig
```

```bash
grep OPENBLAS_VERSION /usr/local/include/openblas_config.h
```

## 3.1 Install PyTorch (native)

```bash
export TORCH_INSTALL=https://developer.download.nvidia.com/compute/redist/jp/v60dp/pytorch/torch-2.2.0a0+81ea7a4.nv24.01-cp310-cp310-linux_aarch64.whl
```

```bash
python3 -m pip install --upgrade pip
python3 -m pip install --upgrade --no-cache $TORCH_INSTALL
```

## 3.2 Install TensorFlow (native)

```bash
sudo apt-get -y update
```

```bash
sudo pip3 install https://developer.download.nvidia.com/compute/redist/jp/v60dp/tensorflow/tensorflow-2.14.0+nv24.01-cp310-cp310-linux_aarch64.whl
```

## 4. Install Miniconda (ARM)

```bash
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
chmod +x Miniconda3-latest-Linux-aarch64.sh
bash Miniconda3-latest-Linux-aarch64.sh
```

After installing Miniconda, I recommend creating a Python 3.12 environment:

```bash
conda create -n py312 python=3.12
```
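Before moving on to the source builds, it is worth a quick check that the CUDA toolkit is on the PATH and that the native wheels installed above actually see the GPU. This is my addition; run it in the system Python 3.10, not the new conda environment.

```bash
# CUDA toolkit visible?
nvcc --version

# NVIDIA PyTorch wheel sees the GPU?
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# NVIDIA TensorFlow wheel sees the GPU?
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```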
## Build PyTorch from source with Miniconda, CUDA 12.6 and Python 3.12, or download it [here](https://ubarcelona-my.sharepoint.com/:u:/g/personal/jnunezca11_alumnes_ub_edu/ESIIS5vgF21KtSxeDb9uUs0Bsnbphv9qPwxd-rSHzH6ipQ?e=2YSvaL)

### Requires the upgraded CUDA

```bash
sudo nvpmodel -m 0   # this reboots the system
sudo jetson_clocks
```

```bash
git clone --recursive --branch v2.6.0 https://github.com/pytorch/pytorch

export PYTORCH_BUILD_NUMBER=1
export PYTORCH_BUILD_VERSION=2.6.0
export TORCH_CXX_FLAGS="-D_GLIBCXX_USE_CXX11_ABI=0"
export USE_NCCL=0
export USE_QNNPACK=0
export USE_PYTORCH_QNNPACK=0
export USE_NATIVE_ARCH=1
export USE_DISTRIBUTED=1
export USE_TENSORRT=0
export TORCH_CUDA_ARCH_LIST="8.7"
export MAX_JOBS=$(nproc)

cd pytorch
pip3 install --no-cache-dir -r requirements.txt
pip3 install --no-cache-dir scikit-build ninja
conda install -c conda-forge libstdcxx-ng=12
sudo apt remove cmake -y
pip install cmake --upgrade
python setup.py bdist_wheel
```

## Build TorchVision or download it [here](https://ubarcelona-my.sharepoint.com/:u:/g/personal/jnunezca11_alumnes_ub_edu/Ea29cyNJlLxDnrahZa-FZ8oBVyvCpghVcq6uqPPnp8-JJQ?e=kpRk38)

```bash
git clone --branch v0.21.0 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.21.0   # the torchvision version being built
export TORCH_CUDA_ARCH_LIST="8.7"
python setup.py bdist_wheel
```

## Build TorchAudio or download it [here](https://ubarcelona-my.sharepoint.com/:u:/g/personal/jnunezca11_alumnes_ub_edu/EW_ayOzz2PlJqdry6QUeoL4BmkbBZNo1MMvptfbDXD7JSg?e=qjIxXv)

```bash
git clone --branch v2.6.0 https://github.com/pytorch/audio torchaudio
cd torchaudio
export BUILD_VERSION=2.6.0   # the torchaudio version being built
export TORCH_CUDA_ARCH_LIST="8.7"
python setup.py bdist_wheel
```

## Build TorchText or download it [here](https://ubarcelona-my.sharepoint.com/:u:/g/personal/jnunezca11_alumnes_ub_edu/EWiYlWdgZRxFm0MA3UtW5lEBg340Ge2Spdu6kyiblQAc4Q?e=SmEBWt)

```bash
git clone --branch v0.19.1 https://github.com/pytorch/text.git torchtext
cd torchtext
export BUILD_VERSION=0.19.1   # the torchtext version being built
export TORCH_CUDA_ARCH_LIST="8.7"
python setup.py build_ext -j $(nproc) bdist_wheel
```
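Each of the builds above leaves a wheel in that repo's `dist/` directory; the original notes do not show the install step, so here is a sketch (my addition). It assumes the `py312` environment is active, that you run it from the directory holding the four checkouts, and that the exact wheel filenames match your build.

```bash
# Install the freshly built wheels into the active environment
pip install pytorch/dist/torch-*.whl
pip install torchvision/dist/torchvision-*.whl
pip install torchaudio/dist/torchaudio-*.whl
pip install torchtext/dist/torchtext-*.whl

# Quick smoke test: imports succeed and CUDA is visible
python -c "import torch, torchvision, torchaudio, torchtext; print(torch.__version__, torch.cuda.is_available())"
```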
## Build TensorRT or download it [here](https://ubarcelona-my.sharepoint.com/:u:/g/personal/jnunezca11_alumnes_ub_edu/EccP4FiKIUFKsix8xRFywSoBpBF1mFFEfk_mWC4cie5ZFA?e=41dBcK), or for C++ [here](https://ubarcelona-my.sharepoint.com/:f:/g/personal/jnunezca11_alumnes_ub_edu/EivHsg55wClBsmL0EOSjgsMBim_CiE68i5IO7JMo_M9_hA?e=48lVqb)

#### Python 3.12

```bash
export EXT_PATH=~/external
export TRT_OSSPATH=~/external/TensorRT
mkdir -p $EXT_PATH && cd $EXT_PATH
git clone https://github.com/pybind/pybind11.git

wget https://www.python.org/ftp/python/3.12.8/Python-3.12.8.tgz
tar -xvf Python-3.12.8.tgz
mkdir -p $EXT_PATH/python3.12/include
cp -r Python-3.12.8/Include/* $EXT_PATH/python3.12/include

wget http://http.us.debian.org/debian/pool/main/p/python3.12/libpython3.12-dev_3.12.8-5_arm64.deb
ar x libpython3.12-dev*.deb
mkdir debian && tar -xf data.tar.zst -C debian
cp debian/usr/include/aarch64-linux-gnu/python3.12/pyconfig.h python3.12/include/

git clone --branch release/10.7 --recursive https://github.com/NVIDIA/TensorRT.git
cd TensorRT
mkdir -p build && cd build
export TRT_LIBPATH=/usr/lib/aarch64-linux-gnu/
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64 -DCUDA_VERSION=12.6 -DCUDNN_VERSION=9.6 -DGPU_ARCHS="87"
CC=/usr/bin/gcc make -j$(nproc)

cd ../python
TENSORRT_MODULE=tensorrt PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=12 TARGET_ARCHITECTURE=aarch64 TRT_OSSPATH=~/external/TensorRT ./build.sh
pip install ./build/bindings_wheel/dist/tensorrt-*.whl
```

#### Python 3.11

```bash
export EXT_PATH=~/external
export TRT_OSSPATH=~/external/TensorRT
mkdir -p $EXT_PATH && cd $EXT_PATH
git clone https://github.com/pybind/pybind11.git

wget https://www.python.org/ftp/python/3.11.9/Python-3.11.9.tgz
tar -xvf Python-3.11.9.tgz
mkdir -p $EXT_PATH/python3.11/include
cp -r Python-3.11.9/Include/* $EXT_PATH/python3.11/include

wget http://http.us.debian.org/debian/pool/main/p/python3.11/libpython3.11-dev_3.11.9-1_arm64.deb
ar x libpython3.11-dev*.deb
mkdir debian && tar -xf data.tar.xz -C debian
cp debian/usr/include/aarch64-linux-gnu/python3.11/pyconfig.h python3.11/include/

git clone --branch release/10.7 --recursive https://github.com/NVIDIA/TensorRT.git
cd TensorRT
mkdir -p build && cd build
export TRT_LIBPATH=/usr/lib/aarch64-linux-gnu/
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64 -DCUDA_VERSION=12.6 -DCUDNN_VERSION=9.6 -DGPU_ARCHS="87"
CC=/usr/bin/gcc make -j$(nproc)

cd ../python
TENSORRT_MODULE=tensorrt PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=11 TARGET_ARCHITECTURE=aarch64 TRT_OSSPATH=~/external/TensorRT ./build.sh
pip install ./build/bindings_wheel/dist/tensorrt-*.whl
```

```bash
pip3 install -U tensorrt-10.7-cp311-none-linux_aarch64.whl
```
#### Python 3.10

```bash
export EXT_PATH=~/external
export TRT_OSSPATH=~/external/TensorRT
mkdir -p $EXT_PATH && cd $EXT_PATH
git clone https://github.com/pybind/pybind11.git

wget https://www.python.org/ftp/python/3.10.11/Python-3.10.11.tgz
tar -xvf Python-3.10.11.tgz
mkdir -p $EXT_PATH/python3.10/include
cp -r Python-3.10.11/Include/* $EXT_PATH/python3.10/include

wget http://http.us.debian.org/debian/pool/main/p/python3.10/libpython3.10-dev_3.10.12-1_arm64.deb
ar x libpython3.10-dev*.deb
mkdir debian && tar -xf data.tar.xz -C debian
cp debian/usr/include/aarch64-linux-gnu/python3.10/pyconfig.h python3.10/include/

git clone --branch release/10.7 --recursive https://github.com/NVIDIA/TensorRT.git
cd TensorRT
mkdir -p build && cd build
export TRT_LIBPATH=/usr/lib/aarch64-linux-gnu/
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64 -DCUDA_VERSION=12.6 -DCUDNN_VERSION=9.6 -DGPU_ARCHS="87"
CC=/usr/bin/gcc make -j$(nproc)

cd ../python
TENSORRT_MODULE=tensorrt PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=10 TARGET_ARCHITECTURE=aarch64 TRT_OSSPATH=~/external/TensorRT ./build.sh
pip install ./build/bindings_wheel/dist/tensorrt-*.whl
```

```bash
pip3 install -U tensorrt-10.4-cp311-none-linux_aarch64.whl
```
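Not part of the original notes, but before moving on to OpenCV it is worth checking that the TensorRT Python bindings you just installed load and report the expected version:

```bash
# Confirm the Python bindings import and print their version
python3 -c "import tensorrt; print(tensorrt.__version__)"

# List the installed libnvinfer packages for comparison with the bindings version
dpkg -l | grep -i nvinfer || true
```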
## Build OpenCV with CUDA

```bash
conda create -n py311 python=3.11
conda activate py311
conda install cmake numpy --yes

cd Projects
wget -O opencv.zip https://github.com/opencv/opencv/archive/4.x.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.x.zip
unzip opencv.zip
unzip opencv_contrib.zip

mkdir -p build && cd build
export ENABLE_CONTRIB=1
export CMAKE_BUILD_PARALLEL_LEVEL=$(nproc)
cmake -DCMAKE_BUILD_TYPE=RELEASE \
      -DCMAKE_PREFIX_PATH="/usr/lib/aarch64-linux-gnu;/usr/include" \
      -DCPACK_BINARY_DEB=ON \
      -DBUILD_EXAMPLES=OFF \
      -DBUILD_opencv_python2=OFF \
      -DBUILD_opencv_python3=ON \
      -DBUILD_opencv_java=OFF \
      -DCMAKE_INSTALL_PREFIX=/usr/local \
      -DCUDA_ARCH_BIN=8.7 \
      -DCUDA_ARCH_PTX= \
      -DCUDA_FAST_MATH=ON \
      -DCUDNN_INCLUDE_DIR=/usr/include/ \
      -DEIGEN_INCLUDE_PATH=/usr/include/eigen3 \
      -DWITH_EIGEN=ON \
      -DENABLE_NEON=ON \
      -DOPENCV_DNN_CUDA=ON \
      -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-12.6 \
      -DOPENCV_ENABLE_NONFREE=ON \
      -DOPENCV_EXTRA_MODULES_PATH=../opencv_contrib-4.x/modules \
      -DOPENCV_GENERATE_PKGCONFIG=ON \
      -DOpenGL_GL_PREFERENCE=GLVND \
      -DWITH_CUBLAS=ON \
      -DWITH_CUDA=ON \
      -DWITH_CUDNN=ON \
      -DWITH_GSTREAMER=ON \
      -DWITH_LIBV4L=ON \
      -DWITH_GTK=ON \
      -DWITH_OPENGL=OFF \
      -DWITH_OPENCL=OFF \
      -DWITH_IPP=OFF \
      -DWITH_TBB=ON \
      -DBUILD_TIFF=ON \
      -DBUILD_PERF_TESTS=OFF \
      -DBUILD_TESTS=OFF \
      -DBUILD_NEW_PYTHON_SUPPORT=ON \
      -DHAVE_opencv_python3=ON \
      ../opencv-4.x
cmake --build .
sudo make install
```

## Build OpenCV Python or download it [here](https://ubarcelona-my.sharepoint.com/:u:/g/personal/jnunezca11_alumnes_ub_edu/EeYJgPoinetLo6VAxIlyfA8BJC-uI9EsLQm0xeGz897wWA?e=UrwiUa)

```bash
git clone --recursive https://github.com/opencv/opencv-python.git
cd opencv-python
```

```bash
wget https://raw.githubusercontent.com/dusty-nv/jetson-containers/master/packages/opencv/patches.diff
git apply patches.diff || echo "failed to apply git patches"
sed -i 's|weight != 1.0|(float)weight != 1.0f|' opencv/modules/dnn/src/cuda4dnn/primitives/normalize_bbox.hpp
sed -i 's|nms_iou_threshold > 0|(float)nms_iou_threshold > 0.0f|' opencv/modules/dnn/src/cuda4dnn/primitives/region.hpp
grep 'weight' opencv/modules/dnn/src/cuda4dnn/primitives/normalize_bbox.hpp
grep 'nms_iou_threshold' opencv/modules/dnn/src/cuda4dnn/primitives/region.hpp

export ENABLE_CONTRIB=1
export CMAKE_BUILD_PARALLEL_LEVEL=$(nproc)
export CMAKE_ARGS="\
    -DCPACK_BINARY_DEB=ON \
    -DBUILD_EXAMPLES=OFF \
    -DBUILD_opencv_python2=OFF \
    -DBUILD_opencv_python3=ON \
    -DBUILD_opencv_java=OFF \
    -DCMAKE_BUILD_TYPE=RELEASE \
    -DCMAKE_INSTALL_PREFIX=/usr/local \
    -DCUDA_ARCH_BIN=8.7 \
    -DCUDA_ARCH_PTX= \
    -DCUDA_FAST_MATH=ON \
    -DCUDNN_INCLUDE_DIR=/usr/include/ \
    -DEIGEN_INCLUDE_PATH=/usr/include/eigen3 \
    -DWITH_EIGEN=ON \
    -DENABLE_NEON=ON \
    -DOPENCV_DNN_CUDA=ON \
    -DOPENCV_ENABLE_NONFREE=ON \
    -DOPENCV_EXTRA_MODULES_PATH=$(pwd)/opencv_contrib/modules \
    -DOPENCV_GENERATE_PKGCONFIG=ON \
    -DOpenGL_GL_PREFERENCE=GLVND \
    -DWITH_CUBLAS=ON \
    -DWITH_CUDA=ON \
    -DWITH_CUDNN=ON \
    -DWITH_GSTREAMER=ON \
    -DWITH_LIBV4L=ON \
    -DWITH_GTK=ON \
    -DWITH_OPENGL=OFF \
    -DWITH_OPENCL=OFF \
    -DWITH_IPP=OFF \
    -DWITH_TBB=ON \
    -DBUILD_TIFF=ON \
    -DBUILD_PERF_TESTS=OFF \
    -DBUILD_TESTS=OFF"

pip3 wheel --verbose .
```

Install OpenCV:

```bash
pip install -U opencv_contrib_python-4.9.0.80-cp312-cp312-linux_aarch64.whl
```

## Build OnnxRuntime-GPU

```bash
git clone --recursive https://github.com/microsoft/onnxruntime
cd onnxruntime
export PATH="/usr/local/cuda/bin:${PATH}"
export CUDACXX="/usr/local/cuda/bin/nvcc"
pip3 install -U packaging

./build.sh --config Release --update --parallel --build --build_wheel --build_shared_lib --skip_tests \
    --use_tensorrt --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
    --tensorrt_home /usr/lib/aarch64-linux-gnu \
    --cmake_extra_defines CMAKE_CXX_FLAGS="-Wno-unused-variable -I/usr/local/cuda/include" \
    --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES="87"
```

# Install torch2trt

```bash
git clone --recursive https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
python setup.py install
# Alternative: cmake -B build . && cmake --build build --target install && sudo ldconfig
cd scripts
bash build_contrib.sh
```

# Install trt-Pose

```bash
git clone --recursive https://github.com/NVIDIA-AI-IOT/trt_pose
cd trt_pose
python3 setup.py develop --user
```

# Example: nanoSAM

```bash
git clone https://github.com/NVIDIA-AI-IOT/nanosam.git
cd nanosam
```

Download the data folder (models) and unzip it in the nanosam folder: [Data Folder](https://drive.google.com/file/d/1CHaxy_4eP3mSjkgb9ViStjszMQkj_jrN/view?usp=sharing)

```bash
pip install -U pillow transformers
```

Run:

```bash
python3 examples/demo_click_segment_track.py
```
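If the demo fails to start, a quick sanity check (my addition, not part of the nanoSAM instructions) is to confirm that the CUDA-enabled OpenCV build is the one Python imports and that a camera frame can be grabbed at all:

```bash
# A CUDA-enabled OpenCV build exposes cv2.cuda; the device count should be >= 1
python3 -c "import cv2; print(cv2.__version__, cv2.cuda.getCudaEnabledDeviceCount())"

# Grab a single frame from the default camera to confirm capture works
python3 -c "import cv2; cap = cv2.VideoCapture(0); ok, _ = cap.read(); print('camera OK' if ok else 'camera FAILED'); cap.release()"
```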
If your external camera doesn't work:

```bash
sudo apt update
sudo apt upgrade
sudo apt install libffi-dev
sudo apt install libglib2.0-0
sudo apt install --reinstall libffi7
sudo apt install v4l-utils

export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libffi.so.7   # recommended to put in ~/.bashrc
# LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1
# export LD_PRELOAD='/home/jetson/miniconda3/lib/python3.12/site-packages/scikit_learn.libs/libgomp-d22c30c5.so.1.0.0'
```

```bash
v4l2-ctl --list-devices
```

![Terminal](https://hackmd.io/_uploads/rJa8H8nx6.png)

Check that your camera works:

```bash
sudo apt install guvcview
```

Now update the code in `demo_click_segment_track.py`. Original code:

```python
cap = cv2.VideoCapture(0)
```

Change it to:

```python
camera_id = "/dev/video0"
cap = cv2.VideoCapture(camera_id, cv2.CAP_V4L2)
```

![](https://hackmd.io/_uploads/SkQg7r6gT.jpg)

# Example: llamaspeak

1. **Open a terminal** and configure the NGC CLI.

2. Download [riva](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/riva/resources/riva_quickstart_arm64).

3. Add `"default-runtime": "nvidia"` to /etc/docker/daemon.json:

    ```bash
    sudo gedit /etc/docker/daemon.json
    ```

    ```json
    {
        "default-runtime": "nvidia",
        "runtimes": {
            "nvidia": {
                "args": [],
                "path": "nvidia-container-runtime"
            }
        }
    }
    ```

    Save it and restart Docker:

    ```bash
    sudo systemctl restart docker
    ```

4. Initialize Riva to download the models:

    ```bash
    bash riva_init.sh
    bash riva_start.sh
    ```

5. **Open another terminal** and clone jetson-containers:

    ```bash
    git clone --recursive https://github.com/dusty-nv/jetson-containers.git
    cd jetson-containers
    ```

6. Create a folder and download the model (.bin models are deprecated):

    ```bash
    mkdir data/models/text-generation-webui && cd $_
    wget https://huggingface.co/TheBloke/Llama-2-13B-GGUF/resolve/main/llama-2-13b.Q4_0.gguf
    ```

7. Move back to the jetson-containers folder:

    ```bash
    cd /path/to/your/jetson-containers/
    ```

8. Run text-generation-webui:

    ```bash
    ./run.sh --workdir /opt/text-generation-webui $(./autotag text-generation-webui) \
        python3 server.py --listen --verbose --api \
        --model-dir=/data/models/text-generation-webui \
        --model=llama-2-13b.Q4_0.gguf \
        --loader=llamacpp \
        --n-gpu-layers=128 \
        --n_ctx=4096 \
        --n_batch=4096 \
        --threads=$(($(nproc) - 2))
    ```

9. **Open another terminal**, move to the jetson-containers data folder, and create a self-signed certificate:

    ```bash
    cd /path/to/your/jetson-containers/data
    openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365 -nodes -subj '/CN=localhost'
    ```

10. Run it and enjoy!

    ```bash
    cd /path/to/your/jetson-containers
    ./run.sh --workdir=/opt/llamaspeak \
        --env SSL_CERT=/data/cert.pem \
        --env SSL_KEY=/data/key.pem \
        $(./autotag llamaspeak) \
        python3 chat.py --verbose
    ```

11. Open the browser.

    ```bash
    ```

Optional: you can try other models such as CodeLlama, Mistral, etc.

![](https://hackmd.io/_uploads/r1-YmHpep.png)
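As a sketch of that optional step (my addition, not from the original notes): download another GGUF into the same models folder and point `--model` at it. The Hugging Face URL and quantization below are illustrative only; check the model page for the exact filename.

```bash
# Illustrative only: repo, filename and quantization may differ on the model page
cd /path/to/your/jetson-containers/data/models/text-generation-webui
wget https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_0.gguf

# Relaunch the web UI pointing at the new file (same flags as above)
cd /path/to/your/jetson-containers
./run.sh --workdir /opt/text-generation-webui $(./autotag text-generation-webui) \
    python3 server.py --listen --verbose --api \
    --model-dir=/data/models/text-generation-webui \
    --model=mistral-7b-instruct-v0.2.Q4_0.gguf \
    --loader=llamacpp \
    --n-gpu-layers=128
```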
## 6. Install Docker with CUDA support to use the [Jetson Generative AI Playground](http://jetson-ai-playground.com/)

```bash
sudo apt install -y curl
sudo apt-get update
```

```bash
curl https://get.docker.com | sh \
  && sudo systemctl --now enable docker
```

```bash
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```

```bash
sudo apt-get update
# Install (or upgrade) the toolkit from the repo added above
sudo apt-get install -y nvidia-container-toolkit
```

```bash
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
sudo nvidia-ctk runtime configure --runtime=containerd
sudo systemctl restart containerd
sudo nvidia-ctk runtime configure --runtime=crio
sudo systemctl restart crio
```

### Add user permissions so you don't need `sudo docker` every time

```bash
sudo chmod 666 /var/run/docker.sock
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
```

### Add `"default-runtime": "nvidia"` to /etc/docker/daemon.json

```bash
sudo gedit /etc/docker/daemon.json
```

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
```

```bash
sudo systemctl restart docker
```

# Build FFmpeg

```bash
wget https://www.ffmpeg.org/releases/ffmpeg-7.0.2.tar.gz && \
  tar -xvzf ffmpeg-7.0.2.tar.gz
```

```bash
cd ffmpeg-7.0.2
mkdir -p ./libaom && \
  cd ./libaom && \
  git clone https://aomedia.googlesource.com/aom && \
  mkdir -p aom_build && \
  cd aom_build && \
  PATH="$HOME/bin:$PATH" cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="$HOME/ffmpeg_build" -DENABLE_TESTS=OFF -DENABLE_NASM=on ../aom && \
  PATH="$HOME/bin:$PATH" make && \
  sudo make install
```

```bash
cd ffmpeg-7.0.2
git -C SVT-AV1 pull 2> /dev/null || git clone https://gitlab.com/AOMediaCodec/SVT-AV1.git && \
  mkdir -p SVT-AV1/build && \
  cd SVT-AV1/build && \
  PATH="$HOME/bin:$PATH" cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="$HOME/ffmpeg_build" -DCMAKE_BUILD_TYPE=Release -DBUILD_DEC=OFF -DBUILD_SHARED_LIBS=OFF .. && \
  PATH="$HOME/bin:$PATH" make && \
  sudo make install
```

```bash
cd ffmpeg-7.0.2
sudo apt-get update && sudo apt-get install -y --no-install-recommends libdav1d-dev
```

```bash
cd ffmpeg-7.0.2
PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
  --prefix="$HOME/ffmpeg_build" \
  --extra-cflags="-I$HOME/ffmpeg_build/include" \
  --extra-cflags="-I/usr/src/jetson_multimedia_api/include/" \
  --extra-ldflags="-L$HOME/ffmpeg_build/lib -L/usr/lib/aarch64-linux-gnu/tegra" \
  --extra-libs="-lpthread -lm -lnvbufsurface -lnvbufsurftransform" \
  --ld="g++" \
  --bindir="$HOME/bin" \
  --enable-shared --disable-doc \
  --enable-libaom --enable-libsvtav1 --enable-libdav1d \
  --enable-nvv4l2dec --enable-libv4l2 && \
make && \
make install && \
ldconfig
```

```bash
ffmpeg -version
ffmpeg -decoders
ffmpeg -decoders | grep av1 || true
ffmpeg -decoders | grep h264_nvv4l2dec || true
```

# Extra

## To add more models, please go to jetson-containers, thanks to Dusty and JetsonHacks:

https://github.com/dusty-nv/jetson-containers.git
