Assessing the potential of autonomous AI devices such as Nvidia's Jetson Xavier and Nano to develop portable real-time DNA sequencers and other deployable sensors. The idea is to create a truly portable real-time sequencing device that can easily be taken into the 'field', with results reported as the sequencing is running. This will use the Nanopore MinION DNA sequencer alongside a cheap single-board computer (Nvidia Jetson based) which can be powered by off-the-shelf rechargeable batteries. The Nvidia-powered technology will allow real-time basecalling of DNA, making direct detection/identification in the field a real possibility.
Additionally, we envisage this as a totally modular device, not just limited to DNA sequencing. A wide range of sensors can be attached (e.g. temperature, humidity, water flow, cameras, …) which can report back in real-time. This, alongside the ability to run off portable batteries, with potential solar charging options, makes for an extremely versatile base unit.
The whole package is extremely cost effective and has potential use cases across ESR's business (Environmental, Forensics, Health), as well as in other areas such as community outreach and education. Example use cases:
This started off as a bunch of notes, then began to morph into a blog-post-like structure. I now think it's gotten so long that I might have to cut it into several different blog-style posts. Something like:
I will however leave this 'collection' available on my HackMD account.
NVIDIA® Jetson™ Xavier is the latest addition to the Jetson platform. A powerful AI computer for autonomous machines, it gives you the performance of a GPU workstation in an embedded module under 30 W. With multiple operating modes at 10 W, 15 W, and 30 W, Jetson Xavier delivers more than 20X the performance and 10X the energy efficiency of its predecessor, the NVIDIA Jetson™ TX2.
Look what got delivered to my desk… shiny new package!
NVIDIA 512-Core Volta GPU with 64 Tensor Cores
8-Core ARM v8.2 64-Bit CPU, 8 MB L2 + 4 MB L3
16 GB 256-Bit LPDDR4x Memory
32 GB eMMC 5.1 Flash Storage
(2x) NVDLA DL Accelerator Engines
7-Way VLIW Vision Processor
(2x) 4Kp60 | HEVC Video Encoder
(2x) 4Kp60 | 12-bit Video Decoder
Look at that power!
I've been quite impressed with the packaging of the Jetson products: nothing flashy, but nice, clean and practical. The Xavier comes well protected.
Check out that heatsink! Big and shiny. There is a bit of heft to this device, but it's still tiny when you consider how much power is contained in such a small footprint.
Here's another angle - yep, that's a full PCI Express connection! The Xavier is able to take PCIe devices such as SSD drives, and Nvidia has confirmed that in the future you will be able to connect GPUs - very cool! You can also see the GPIO pins on the bottom edge. There is a pin breakout adapter included in the kit to make these more accessible.
This side of the Xavier has the bulk of the connectivity. From left to right:
As mentioned above there is also an additional usb-c port on the 'front' side, as well as a set of GPIO pins.
Other things included in the box (I'll update with a pic or two as well):
The Xavier has an m.2 NVMe slot located under the large heatsink. To access this you have to remove the 4 screws located in the 'feet' of the device (2 in each foot).
Once you have removed the screws and the feet it should look like the below. Bonus image of the selected SSD.
Close up of the Western Digital Black 500GB NVMe SSD. We selected this drive as it has been tested and confirmed working previously on Jetson devices. Spoiler: it was fully supported out of the box - plug and play.
After the screws and feet are removed you are able to detach the heatsink/GPU module from the module board. You will need to apply a decent amount of pressure to pull these two components apart. WARNING: there is a fan(?) cable that connects the heatsink module to the board. Care should be taken as this is fragile. You can either detach the cable using something like a set of tweezers, or you can do as I did and place something under the heat sink to raise it up so there is no stress on the cable. This worked well for me, I figured that the cable wasn't in the way of the m.2 slot… but you have been warned.
To install the SSD, first remove the screw that will hold the drive in place (see above). Next, line up the notch in the end of the SSD with the key in the m.2 slot. Insert the card at a 45° angle (as below).
Close up of drive insertion below. Ensure that the drive is inserted far enough so that when pushed down you are able to insert and tighten the screw to hold it in place. If the screw doesn't fit, gently push the card further into the slot until it can be secured down.
Final installed ssd below.
Once the SSD has been installed you can reattach the heatsink module, being mindful of the tiny, fragile cable (ensure this doesn't get caught). There should be a nice 'click' as the heatsink seats back into place. Turn the Xavier over, put the two feet back in place and screw them down. Done.
The Xavier has an m.2 Key E slot on the bottom of the board to install a compatible wifi module.
Example wifi module. Intel AX200NGW.
NOTE: this wifi card is only supported in kernel 5.1+. I'm currently trying to build a kernel module against the 4.9.15 kernel that comes with Jetpack 4.2.2, without success as yet.
Until we are able to either build a kernel module or purchase a compatible wifi module we are using a WiPi usb wifi dongle (this was plug and play).
Below is a close up of the m.2 socket on the bottom of the Jetson Xavier.
The wifi module is installed in the same way as the m.2 NVMe SSD. Line the slot up correctly, insert the module at a 45° angle and push down. If properly inserted, the screw should locate and fasten to hold the module in place.
Example of installed wifi module.
Close up of installed wifi module.
Note: remember to attach the antennae.
There are two main methods for getting the Xavier up and running with OS and software. The first one I tried was using L4T:
Jetson modules run Linux with NVIDIA® Tegra® Linux Driver Package (L4T), which provides the Linux kernel, bootloader, NVIDIA drivers, flashing utilities, sample filesystem, and more for the Jetson platform.
To do this I connected a mouse and keyboard, HDMI (to a TV in this case) and then the provided power supply, then pressed the power button (far left in the picture below).
NOTE: The button layout shown in the below image:
Above image borrowed from: http://linuxgizmos.com/tiny-carrier-unleashes-nvidia-xavier-power-for-robotics-and-ai/
Connected Xavier showing the Nvidia boot logo (below).
The power LED is located next to the 'front' usb-c port (below).
NOTE: this front usb-c port is used for flashing the device when using the Nvidia SDK manager. It can also be used as an additional usb device connection port. There is a usb-c to standard usb converter provided with the development kit.
Kernel modules being loaded on first boot (below).
Setup screen for 'manual' OS install (below).
Following the provided prompts you will be able to boot into an Ubuntu GUI using a default user and password (both are 'nvidia').
The above setup works but is lacking all the fun bells and whistles that make the Jetson family so exciting. They could probably be cobbled together manually, however there is an 'easier' approach that will provide a fully functioning environment with tools such as Cuda, Tensorflow, etc. already installed and operational.
The other, generally more appropriate, method to get up and running is to use the SDK manager made available by Nvidia to flash the Xavier.
Ideally this is as simple as:
WARNING: this was not so easy…
Nvidia provide the Jetson family SDK manager on their website. It's a Linux native piece of software (nice), which is stated to require Ubuntu 16.10 or 18.04. OK, Ubuntu is Debian-based, right? So I downloaded the SDK manager and it installed just fine on my personal laptop running Debian Unstable (Siduction). It also ran just fine. So far so good.
DANGER: unless you know exactly what you are doing and you want dev tools etc locally, make sure to deselect the option to install 'host machine' (local computer) drivers/CUDA. If you select this option you run the risk of installing video drivers/CUDA libs that aren't compatible with your system and then experience issues.
The problem arose when selecting the Jetpack version for the selected device, see below image:
A nice helpful image saying "not supported on Linux"… sigh. After a bit of forum searching it turns out that the host OS MUST be Ubuntu for the tool to work… bigger sigh.
For those of you thinking of spinning up an Ubuntu VM: nope, that doesn't work either. I considered it but saw the failed attempts in the forums. It turns out VMs don't handle USB device connections in a way that is suitable for flashing Jetson devices with this SDK manager.
In the forums I noted a couple of people that were able to 'trick' the SDK by editing their OS release files. Really Nvidia?!
Running Debian Sid, I had to 'trick' the SDK manager by editing `/usr/lib/os-release` so that the OS reports itself as Ubuntu.

NOTE: you could instead try keeping a second file, such as `/usr/lib/os-release-sdk`, and pointing the SDK manager at it with `export LSB_OS_RELEASE=/usr/lib/os-release-sdk` - it might work?
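For reference, a rough sketch of the kind of edit involved - the idea is simply to make the file look like the one an Ubuntu 18.04 machine ships (back up the original first; the field values below are illustrative, so check them against a real Ubuntu install):

```bash
# back up the original before touching it
sudo cp /usr/lib/os-release /usr/lib/os-release.backup

# overwrite it with Ubuntu 18.04-style contents so the SDK manager is happy
# (values are illustrative - an actual Ubuntu 18.04 machine is the best reference)
sudo tee /usr/lib/os-release > /dev/null <<'EOF'
NAME="Ubuntu"
VERSION="18.04 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04 LTS"
VERSION_ID="18.04"
EOF
```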
[section under construction]
- you need an online Nvidia dev account and to log in once the SDK manager is installed (and the above 'tweak' is applied if you're not on Ubuntu)
- you should then be able to select a Jetpack image in the SDK
- choose the most recent image
- change download paths if needed
- I chose to download and flash later
- confirm that all packages and the OS download
[section under construction]
- the initial connection needs to be in recovery mode:
  - push and hold the middle 'recovery' button
  - push and hold the power button
  - release both buttons
- check to see if an entry is present using `lsusb` - look for an Nvidia entry
WARNING: ensure there are no other usb devices attached!! There seems to be an issue that pops up around an initial oversaturation of the USB bus.
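Something along these lines should confirm the Xavier is showing up in recovery mode (the exact device ID will vary, so just look for the Nvidia entry):

```bash
# list connected USB devices and look for the Nvidia recovery-mode entry
lsusb | grep -i nvidia
```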
[section under construction]
Navigate to the dir containing `flash.sh` - this will be where you downloaded the OS and packages to using the SDK manager.
NOTE: if the flash doesn't work and you get an error about probing board and not found, try moving the usb cable to a different usb port. This worked for me. Also try a different usb cable if it still doesn't work.
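The flash itself should then be something along these lines - the download directory name below is only an example (it depends on the Jetpack version the SDK manager pulled down), but `jetson-xavier mmcblk0p1` is the standard target for the AGX Xavier:

```bash
# the L4T directory the SDK manager downloads typically looks something like this
cd ~/nvidia/nvidia_sdk/JetPack_4.2.2_Linux_P2888/Linux_for_Tegra/

# flash the Xavier (connected via the front usb-c port and sitting in recovery mode)
sudo ./flash.sh jetson-xavier mmcblk0p1
```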
[section under construction]
You should now be able to boot out of recovery by pushing the 'reset' button. The Xavier will load into the Ubuntu GUI; go through the standard setup (i.e. locale, user, password). In the SDK manager you can now use the 'automatic' option to flash/install packages such as Cuda and Tensorflow (you shouldn't have to change any of the settings).
There are a few different means to monitor various aspects of the system on Jetson boards. These range from the simple through to the more complex and very detailed:
- `tegrastats` - basic polling info
- netdata - `bash <(curl -Ss https://my-netdata.io/kickstart.sh)`
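For a quick look at what the board is doing, tegrastats on its own is usually enough; something like the below polls every second and keeps a copy for later (the interval flag is in milliseconds):

```bash
# poll CPU/GPU/memory/temperature stats every 1000 ms and also save them to a file
sudo tegrastats --interval 1000 | tee tegrastats.log
```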
[section under construction]
DANGER: these are notes that I quickly jotted down for future reference. The general flow is sound but some of the drive locations etc might change across different devices. Please ensure you are comfortable working with either gparted or CLI disk tools before proceeding.
If you'd like a refresher this guide is quite good.
installed gparted:
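gparted is in the standard Ubuntu repos, so something like the below does it:

```bash
# install the gparted partitioning GUI from the Ubuntu repos
sudo apt-get update
sudo apt-get install -y gparted
```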
Created a 'gpt' partition on the ssd (`/dev/nvme0n1p1`)
Formatted to 'ext4'
Create a root directory for mounting:
Mount and make writable:
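Something along these lines covers both steps - note the mount point name used here (`/xavier_ssd`) is just an example, use whatever suits:

```bash
# create a mount point for the SSD (name is just an example)
sudo mkdir /xavier_ssd

# mount the ext4 partition created above and hand ownership to the current user
sudo mount /dev/nvme0n1p1 /xavier_ssd
sudo chown -R $USER:$USER /xavier_ssd
```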
WARNING: the below is making an assumption that the drive and mount point match those used above (`/dev/nvme0n1p1` mounted at the directory you just created) - adjust the UUID and paths to suit your own setup.
Edit `/etc/fstab` to allow automount.
first create a backup:
install nano to edit text files
edit fstab:
Note: a drive's UUID is assigned when you format and partition it. Once this is done you can find it in gparted by selecting the correct drive, right clicking on it and selecting 'Information', from where you can copy the UUID. This can then be pasted into `/etc/fstab` as per below.
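Putting those steps together, it looks roughly like the below - the UUID shown is a placeholder and the mount point is the example one from earlier:

```bash
# back up fstab before editing it
sudo cp /etc/fstab /etc/fstab.backup

# install nano and open fstab for editing
sudo apt-get install -y nano
sudo nano /etc/fstab

# then add a line like this to the end of /etc/fstab (swap in the real UUID):
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /xavier_ssd  ext4  defaults  0  2
```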
unmount:
test mount using fstab
try creating a test file
If you run `ls` you should see the created file on the newly mounted SSD:
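Those last few steps, roughly (again assuming the example mount point):

```bash
# unmount the drive, then remount everything in fstab to check the new entry works
sudo umount /xavier_ssd
sudo mount -a

# create a test file and list the mount point to confirm it is writable
touch /xavier_ssd/test_file.txt
ls -lh /xavier_ssd
```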
As we're wanting to run software such as Guppy from ONT we'll need to ensure Cuda and Tensorflow are installed and working as expected.
Navigate to ssd and copy cuda samples dir across:
Compile the devicequery tool:
Run this:
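Roughly what that looks like - the samples path is the default install location and `/xavier_ssd` is the example mount point from the SSD setup, so adjust as needed:

```bash
# copy the bundled Cuda samples onto the SSD so they can be built there
cp -r /usr/local/cuda/samples /xavier_ssd/cuda_samples

# compile the deviceQuery utility
cd /xavier_ssd/cuda_samples/1_Utilities/deviceQuery
make

# run it - it should report the Xavier's Volta GPU and finish with Result = PASS
./deviceQuery
```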
Cuda looks like it's installed and running correctly.
Open a Python terminal and enter the following lines of code:
Some further typing:
To verify your installation just type:
…or, from the shell…
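For the shell version, a one-liner like the below should do the job (assuming a python3 TensorFlow 1.x install - adjust the interpreter and API call to whatever was actually installed):

```bash
# print the TensorFlow version and check whether it can see the Xavier's GPU
python3 -c "import tensorflow as tf; print(tf.__version__); print(tf.test.is_gpu_available())"
```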
Again, Tensorflow installation looks good (apart from a lot of 'spammy' messages that get printed!).
UPDATE (13th Dec 2019): I have been doing some more extensive benchmarking with the Xavier and guppy basecalling. This is documented in this gist: https://gist.github.com/sirselim/2ebe2807112fae93809aa18f096dbb94
In the above I have identified a series of parameters that appear to provide the optimal speed increase.
We initially had a lot of issues trying to get a version of Guppy compiled for an arm-based processor. ONT offer binaries in their community portal, however arm-based versions aren't among them. Finally, after several attempts, we obtained a version from ONT that could be installed directly on the Xavier, allowing us to demonstrate the ability of this device to provide a significant boost in basecalling speed.
The test data set was generated on a flongle flow cell, which arrived with very few active pores. It still managed to generate ~500 Mbp of data, which is more than enough to run some tests with.
- `--compress_fastq` - to compress the output sequence data
- `-c dna_r9.4.1_450bps_fast.cfg` - the fast basecaller model config file
- `-x 'auto'` - automatically identify the GPU device
- `--recursive` - recursive search through the input dir for all fast5 files

Note: we'll look at recalling with the high accuracy model at a later date. Here I wanted to compare apples with apples in terms of the work that was performed on the production server.
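Put together, the Guppy call looks something like the below (the input/output directory names are just placeholders):

```bash
# basecall all fast5 files found under fast5/ with the fast model,
# writing compressed fastq to basecalled/
guppy_basecaller \
  --input_path fast5/ \
  --save_path basecalled/ \
  -c dna_r9.4.1_450bps_fast.cfg \
  -x 'auto' \
  --recursive \
  --compress_fastq
```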
Basecalling up and running on the Xavier!
Check out that 100% GPU usage:
Confirming that it is indeed Nvidia Jetson Xavier hardware:
…and the results are in:
So the Xavier finished basecalling the data from this flongle run in ~478 seconds or a hair under 8 mins. To put this into context for those that aren't excited straight off the bat:
This means that the Jetson Xavier was able to basecall this data at only half the speed of 2 extremely powerful and expensive cards - that's insane!
IMPORTANT: there is a lot more testing and potential optimisation to be done on both the Xavier and our other GPU platforms. We haven't looked at modifying the default parameters etc, and I'm not convinced that both V100 GPUs were really working at full capacity. Watch this space.
I'm not sure if there is a direct way to control the fan speed, but I noticed in my testing that when under 100% GPU load the fan didn't seem to speed up beyond ~30%. The GPU temp didn't seem to rise above ~60°C, but it also looked like it may have throttled a little at times.
Some searching turned up the below as a way to 'force' the fan speed by adjusting pwm between 0 (off) and 255 (100%, fully on).
To check the current fan pwm:
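On Jetpack the fan pwm is exposed through debugfs - the path below is the one commonly reported for the Xavier, so treat it as a starting point rather than gospel:

```bash
# read the current fan pwm value (0 = off, 255 = fully on)
sudo cat /sys/kernel/debug/tegra_fan/target_pwm

# force the fan to 100%
echo 255 | sudo tee /sys/kernel/debug/tegra_fan/target_pwm
```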
I decided to see what would happen if we locked the fan at 100% for a Guppy run.
Here's a screen shot of jtop about a third of the way through:
You'll notice that the GPU temp is ~51°C in that shot. The highest I saw it spike was 56°C, and it actually leveled off around the 47-48°C mark for the majority of the run. That's much better than the ~60°C cap that seemed to be enforced when the fan was being automatically controlled and wouldn't ramp past 30% speed. I also noted that there was no apparent thermal throttling of the GPU when running the fan at 100%.
Here are the run stats:
Again we come in around the 479 second mark for this data set - not bad at all!
ONT have a newish compression available for fast5 files, vbz. This is implemented in `ont-fast5-api` and will be supported in future versions of MinKNOW (it's supported in Guppy >= 3.4).
The Jetson Xavier doesn't have pip2 installed by default and it seems this api will only work on python2. So we first need to install pip2 (warning: this may cause issues, but there is no arm conda yet so…):
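On Ubuntu 18.04 that's roughly:

```bash
# install pip for python2 from the Ubuntu repos
sudo apt-get install -y python-pip
```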
Once installed you should be able to install the ONT api:
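Which should just be a pip install:

```bash
# install ONT's fast5 API
pip2 install --user ont-fast5-api
```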
To test compression:
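Recent versions of the api include a `compress_fast5` script for this; a sketch of what a test run might look like is below - the paths and thread count are placeholders, and the exact flags are worth checking against `compress_fast5 --help` for the version installed:

```bash
# recompress a directory of fast5 files with vbz, writing the output to a new directory
compress_fast5 -i fast5/ -s fast5_vbz/ -c vbz -t 8 --recursive
```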
Warning: Can't currently get this working on the Xavier. Hence this is a work in progress.
[section under construction]