We have previously measured the performance of three different packet-filtering approaches: user-level filtering, kernel-level filtering, and driver-level filtering. However, the latency was too high because all of those experiments were run on virtual machines. Today we'll retest them on my PC.
The platform is Ubuntu 20.04 Desktop, installed on an AMD Ryzen 5 3600 CPU (6 cores, 3.6 GHz).
We'll first set up the whole structure below:
We chose libpcap for user-level filtering, iptables for kernel-level filtering, and xdp-filter for driver-level filtering. In user space, a Python program records timestamp_1, i.e. the time when the packet arrives in user space. An xdpdump program is also loaded at the driver level to record timestamp_0, which is the time the packet arrives at the NIC. So the total latency is timestamp_1 - timestamp_0.
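The user-space program is not shown in detail here; a minimal sketch of a receiver that records timestamp_1, assuming a plain UDP socket and a hypothetical test port, could look like this:

# Minimal sketch of the user-space receiver that records timestamp_1.
# The port number is hypothetical; the clock should be comparable to the
# one used for timestamp_0 on the capture side.
import socket
import time

LISTEN_PORT = 9999  # hypothetical test port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LISTEN_PORT))

data, addr = sock.recvfrom(2048)
timestamp_1 = time.clock_gettime(time.CLOCK_REALTIME)  # arrival time in user space
print(f"received {len(data)} bytes from {addr} at timestamp_1={timestamp_1:.9f}")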
Let's start with a simple test. We can use:
$ sudo ./xdpdump -i enp4s0 --rx-capture entry
to capture all packets coming into the NIC, along with their timestamps. Then we can use a simple Python program on another computer to send a UDP packet to the PC, as sketched below.
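The sender can be as simple as the following sketch; the destination address and port are placeholders for the test PC:

# Minimal sketch of the UDP sender run on the other computer.
# DEST_IP and DEST_PORT are hypothetical; use the test PC's address
# and the port the receiver listens on.
import socket

DEST_IP = "192.168.69.5"   # hypothetical address of the test PC
DEST_PORT = 9999           # hypothetical test port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"latency-test", (DEST_IP, DEST_PORT))
sock.close()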
Now we can start the experiment.
We can use:
$ sudo ./xdp-filter load enp4s0 -f ipv4 -m skb
to load the filter into the driver, and then use:
$ sudo ./xdp-filter ip 192.168.69.9 -m src
to add a rule filtering packets from 192.168.69.9, which is a redundant rule (it won't match any incoming packets).
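With the capture and the filter in place, the per-packet latency is just the difference of the two timestamps. A minimal sketch, assuming both timestamps have been converted to seconds on a comparable clock:

# Hypothetical helper: timestamp_0 is taken from the xdpdump capture,
# timestamp_1 from the user-space receiver shown earlier.
def latency_us(timestamp_0: float, timestamp_1: float) -> float:
    """Latency from NIC arrival to user-space arrival, in microseconds."""
    return (timestamp_1 - timestamp_0) * 1e6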