
2022q1 Homework3 (fibdrv)

contributed by <yaohwang99>

Objective

  • Write programs suitable for the Linux kernel.
    • Learn core APIs such as ktime and copy_to_user.
  • Review number systems and bitwise operations in C.
  • Numerical analysis and arithmetic improvement strategies
  • Brief look at Linux VFS
  • Automatic testing mechanism
  • Performance evaluation

Environment

$ lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   39 bits physical, 48 bits virtual
CPU(s):                          8
On-line CPU(s) list:             0-7
Thread(s) per core:              2
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           158
Model name:                      Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
Stepping:                        9
CPU MHz:                         2800.000
CPU max MHz:                     3800.0000
CPU min MHz:                     800.0000
BogoMIPS:                        5599.85
Virtualization:                  VT-x
L1d cache:                       128 KiB
L1i cache:                       128 KiB
L2 cache:                        1 MiB
L3 cache:                        6 MiB
NUMA node0 CPU(s):               0-7

Time measurement

Refer to KYG-yaya573142's report and the homework description.

First, create another client program (client_stat) for measuring the time.
Here, the sample count is set to 10 so that the result is more stable after post-processing the data.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>
#define FIB_DEV "/dev/fibonacci"
#define SAMPLE_TIME 10
#define TIME_BUF_SIZE 2048
int main()
{
    char buf[1];
    
    char write_buf[] = "testing writing";
    int offset = 100; /* TODO: try test something bigger than the limit */
    FILE *fp = fopen("scripts/stat.txt", "w");
    struct timespec t1, t2;
    int fd = open(FIB_DEV, O_RDWR);
    if (fd < 0) {
        perror("Failed to open character device");
        exit(1);
    }
    for (int i = 0; i <= offset; i++) {
        char time_buf[TIME_BUF_SIZE];
        int used = 0;
        for (int j = 0; j < SAMPLE_TIME; j++) {
            
            long long sz, sz2;
            lseek(fd, i, SEEK_SET);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            sz = read(fd, buf, 1);
            clock_gettime(CLOCK_MONOTONIC, &t2);
            sz2 = write(fd, write_buf, 0);
            snprintf(&time_buf[used],TIME_BUF_SIZE - used, 
                    "%ld %lld ", (long int)(t2.tv_nsec - t1.tv_nsec), sz2);
            used = strnlen(time_buf, TIME_BUF_SIZE);
        }
        fprintf(fp, "%d %s\n", i, time_buf);
    }
    close(fd);
    fclose(fp);
    return 0;
}

As suggested in the homework description, I use write() to measure the time at the kernel level.

static ssize_t fib_write(struct file *file,
                         const char *buf,
                         size_t size,
                         loff_t *offset)
{
    ktime_t kt = ktime_get();
    fib_sequence(*offset);
    kt = ktime_sub(ktime_get(), kt);
    if (unlikely(size == 1))
        return 1;
    return (ssize_t) ktime_to_ns(kt);
}

Because many other processes are running, the result is unstable.


Following the guide in the homework description, create a shell script that pins client_stat to CPU 7 and restores the original parameters afterwards.

CPUID=7
ORIG_ASLR=`cat /proc/sys/kernel/randomize_va_space`
ORIG_GOV=`cat /sys/devices/system/cpu/cpu$CPUID/cpufreq/scaling_governor`
ORIG_TURBO=`cat /sys/devices/system/cpu/intel_pstate/no_turbo`

sudo bash -c "echo 0 > /proc/sys/kernel/randomize_va_space"
sudo bash -c "echo performance > /sys/devices/system/cpu/cpu$CPUID/cpufreq/scaling_governor"
sudo bash -c "echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo"

sudo insmod fibdrv.ko
sudo taskset -c 7 ./client_stat
sudo rmmod fibdrv
gnuplot plot.gp
# restore the original system settings
sudo bash -c "echo $ORIG_ASLR >  /proc/sys/kernel/randomize_va_space"
sudo bash -c "echo $ORIG_GOV > /sys/devices/system/cpu/cpu$CPUID/cpufreq/scaling_governor"
sudo bash -c "echo $ORIG_TURBO > /sys/devices/system/cpu/intel_pstate/no_turbo"

Iteration result

The result is now more acceptable.


Fibonacci sequence

C99 variable-length arrays (VLAs) are not allowed in the Linux kernel; we can solve the problem by using a fixed-size array (or three variables).

static long long fib_sequence(long long k)
{
    
    long long f[3];
    if (unlikely(k < 2))
        return (long long) k;
    f[0] = 0;
    f[1] = 1;
    for (int i = 2; i <= k; i++) {
        f[2] = f[1] + f[0];
        f[0] = f[1];
        f[1] = f[2];
    }

    return f[2];
}

Fast Doubling

The important part is: after $F(2k)$ and $F(2k+1)$ are calculated, which one is used as $F(k)$ in the next iteration?

  • If the current bit is 1, then we use $F(2k+1)$ as the new $F(k)$ for the next iteration.
  • If the current bit is 0, then we use $F(2k)$ as the new $F(k)$ for the next iteration.

For example, $21 = 10101_2$, so $F(21)$ is calculated as follows:

$$F(0) \xrightarrow{2k+1} F(1) \xrightarrow{2k} F(2) \xrightarrow{2k+1} F(5) \xrightarrow{2k} F(10) \xrightarrow{2k+1} F(21)$$
static long long fib_sequence_fdouble(long long k)
{
    if (unlikely(k < 2))
        return k;
    long long fk = 0, fk1 = 1, f2k = 0, f2k1 = 0;
    long long m = 1LL << (63 - __builtin_clzll(k)); /* mask of the highest set bit of k */

    while (m) {
        f2k = fk * (2 * fk1 - fk);
        f2k1 = fk * fk + fk1 * fk1;
        if (k & m) {
            fk = f2k1;
            fk1 = f2k + f2k1;
        }
        else {
            fk = f2k;
            fk1 = f2k1;
        }
        m >>= 1;
    }
    return fk;
}

Ignoring the outliers, the result is much better than iteration.

Big Number

Create a custom data structure for big numbers.
Because we need to allocate memory dynamically, I develop it with malloc in user mode first to make sure there is no memory leak.

#ifndef BIG_NUM_H_
#define BIG_NUM_H_
#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>
typedef u_int32_t u32;
typedef u_int64_t u64;
typedef struct {
    u32 *block;
    size_t block_num;
} big_num_t;
big_num_t *big_num_create(size_t, u32);
big_num_t *big_num_add(big_num_t *, big_num_t *);
big_num_t *big_num_dup(big_num_t *);
void big_num_to_string(big_num_t *);
void big_num_free(big_num_t *);
#endif /* BIG_NUM_H_*/
big_num.c
#include "big_num.h"
big_num_t *big_num_add(big_num_t *a, big_num_t *b)
{
    big_num_t *big, *small;
    big = a->block_num >= b->block_num ? a : b;
    small = a->block_num < b->block_num ? a : b;
    big_num_t *c = big_num_create(big->block_num, 0);
    u32 cy = 0;
    for (size_t i = 0; i < small->block_num; ++i) {
        /* keep the carry-in inside the 64-bit sum so it is never lost */
        u64 sum = (u64) a->block[i] + b->block[i] + cy;
        c->block[i] = (u32) sum;
        cy = (u32) (sum >> 32);
    }
    for (size_t i = small->block_num; i < big->block_num; ++i) {
        c->block[i] = big->block[i] + cy;
        cy = (u32)(((u64) big->block[i] + cy) >>
                         32);
    }
    if (cy) {
        c->block_num += 1;
        c->block = realloc(c->block, sizeof(u32) * c->block_num);
        c->block[c->block_num - 1] = cy;
    }
    return c;
}
big_num_t *big_num_dup(big_num_t *a)
{
    big_num_t *b = big_num_create(a->block_num, 0);
    for (size_t i = 0; i < a->block_num; ++i)
        b->block[i] = a->block[i];
    return b;
}
big_num_t *big_num_create(size_t num, u32 init)
{
    big_num_t *a = malloc(sizeof(big_num_t));
    a->block = malloc(sizeof(u32) * num);
    a->block_num = num;
    for (size_t i = 1; i < num; ++i)
        a->block[i] = 0;
    a->block[0] = init;
    return a;
}
void big_num_to_string(big_num_t *a)
{
    size_t len = (a->block_num * sizeof(u32) * 8) / 3 + 2;
    char *ret = malloc(len * sizeof(char));
    for (size_t i = 0; i < len - 1; ++i){
        ret[i] = '0';
    }
    
    ret[len - 1] = '\0';
    for (int i = a->block_num - 1; i >= 0; --i) {
        for (u32 m = 1u << 31; m; m >>= 1) {
            int cy = (a->block[i] & m) != 0;
            for (int j = len - 2; j >= 0; --j) {
                ret[j] = (ret[j] - '0') * 2 + cy + '0';
                if ((cy = ret[j] > '9'))
                    ret[j] -= 10;
            }
        }
    }
    char *p = ret;
    for (; *p == '0' && *(p + 1); ++p); /* skip leading zeros but keep the last '0' */
    printf("%s\n", p);
    big_num_free(a);
    free(ret);
}
void big_num_free(big_num_t *a)
{
    if(!a)
        return;
    free(a->block);
    free(a);
}

Convert big number to string

Converting a big number to a string is a little tricky:

  1. Allocate sufficient space for the string.
    For a given number $X$, the number of decimal digits needed is about $\log_{10}X$, and
    $$\log_{10}X = \frac{\log_2 X}{\log_2 10} \le \frac{\log_2 X}{3} \le \frac{\text{number of blocks} \times \text{block size}}{3} \le \left\lfloor\frac{\text{number of blocks} \times \text{block size}}{3}\right\rfloor + 1$$

    We need 1 more byte for '\0', therefore size_t len = (a->block_num * sizeof(u32) * 8) / 3 + 2; is sufficient.
  2. Convert binary to decimal.
    We traverse the binary number from MSB to LSB; at each step we multiply the current decimal number by 2 and add 1 if the current bit is 1.
    For example, if the number is $53 = 110101_2$:

    bit (MSB to LSB):  1    1    0    1    0    1
    plus 1:            yes  yes  no   yes  no   yes
    running value:     1    3    6    13   26   53
void big_num_to_string(big_num_t *a)
{
    size_t len = (a->block_num * sizeof(u32) * 8) / 3 + 2;
    char *ret = malloc(len * sizeof(char));
    for (size_t i = 0; i < len - 1; ++i){
        ret[i] = '0';
    }
    
    ret[len - 1] = '\0';
    for (int i = a->block_num - 1; i >= 0; --i) {
        for (u32 m = 1u << 31; m; m >>= 1) {
            int cy = (a->block[i] & m) != 0;
            for (int j = len - 2; j >= 0; --j) {
                ret[j] = (ret[j] - '0') * 2 + cy + '0';
                if ((cy = ret[j] > '9'))
                    ret[j] -= 10;
            }
        }
    }
    char *p = ret;
    for (; *p == '0' && *(p + 1); ++p); /* skip leading zeros but keep the last '0' */
    printf("%s\n", p);
    big_num_free(a);
    free(ret);
}

Test the result in user mode, using the basic iterative approach.
Free the unused big numbers so that there is no memory leak.

#include "big_num.h"

void fib_seq(int k)
{
    big_num_t *a = big_num_create(1, 0);
    big_num_t *b = big_num_create(1, 1);
    big_num_t *c = NULL;
    for (int i = 2; i <= k; ++i) {
        big_num_free(c);
        c = big_num_add(a, b);
        big_num_free(a);
        a = b;
        b = big_num_dup(c);
    }
    big_num_free(a);
    big_num_free(b);
    big_num_to_string(c);
}

int main()
{
    for (int i = 90; i <= 100; i++)
        fib_seq(i);
    return 0;
}

Verify that there is no memory leak with valgrind.

$ valgrind ./client
==35535== Memcheck, a memory error detector
==35535== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==35535== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
==35535== Command: ./client
==35535== 
2880067194370816120
4660046610375530309
7540113804746346429
12200160415121876738
19740274219868223167
31940434634990099905
51680708854858323072
83621143489848422977
135301852344706746049
218922995834555169026
354224848179261915075
==35535== 
==35535== HEAP SUMMARY:
==35535==     in use at exit: 0 bytes in 0 blocks
==35535==   total heap usage: 4,210 allocs, 4,210 frees, 47,702 bytes allocated
==35535== 
==35535== All heap blocks were freed -- no leaks are possible
==35535== 
==35535== For lists of detected and suppressed errors, rerun with: -s
==35535== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Allocate memory in kernel space

kvmalloc()

kvmalloc() first tries kmalloc(); if that fails (for example, because there is not enough contiguous physical memory), it falls back to vmalloc().
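
For instance, big_num_create() from the user-space code above could be ported to kernel space roughly as follows. This is only a sketch under that assumption, not the exact fibdrv code:

#include <linux/mm.h>     /* kvmalloc(), kvfree() */
#include <linux/slab.h>   /* GFP_KERNEL */
#include <linux/string.h> /* memset() */

/* Kernel-space sketch of big_num_create(): kvmalloc() tries kmalloc() first
 * and falls back to vmalloc() when a large contiguous allocation fails. */
big_num_t *big_num_create(size_t num, u32 init)
{
    big_num_t *a = kvmalloc(sizeof(big_num_t), GFP_KERNEL);
    if (!a)
        return NULL;
    a->block = kvmalloc(sizeof(u32) * num, GFP_KERNEL);
    if (!a->block) {
        kvfree(a);
        return NULL;
    }
    memset(a->block, 0, sizeof(u32) * num);
    a->block[0] = init;
    a->block_num = num;
    return a;
}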

kvrealloc() and kvfree()

kvrealloc() and kvfree() are implemented in linux/mm/util.c; however, they are not exposed through the headers this module can include, so I defined them myself based on linux/mm/util.c.

void *kvrealloc(void *p, size_t oldsize, size_t newsize, gfp_t flags)
{
    /* follow linux/mm/util.c: shrinking keeps the original allocation */
    if (oldsize >= newsize)
        return p;
    void *newp = kvmalloc(newsize, flags);
    if (!newp)
        return NULL;
    memcpy(newp, p, oldsize);
    kvfree(p);
    return newp;
}

void kvfree(const void *addr)
{
    if (is_vmalloc_addr(addr))
        vfree(addr);
    else
        kfree(addr);
}

Output big number

In kernel space, eliminate the leading zeros, copy the char array back to user space, then release the memory.

case 2:
    p = fib_sequence_big_num(*offset);
    r = p;
    for (; *r == '0' && *(r + 1); ++r);
    len = strlen(r) + 1;
    sz = copy_to_user(buf, r, len);
    kvfree(p);
    return sz;

In user space, the last argument of read() is used to choose which case to enter.
In this case, big_num_buf is filled by copy_to_user() from kernel space.
If big_num_buf is too small, copy_to_user() returns the number of characters that could not be copied, so a non-zero return value indicates truncation.

long long sz3 = read(fd, big_num_buf, 2);
printf("f_big_num(%d): %s\n", i, big_num_buf);
if (sz3)
    printf("f_big_num(%d) is truncated\n", i);

In client.c, I print out the result in the following format and it successfully passes verify.py.

Reading from /dev/fibonacci at offset 90, returned the sequence 2880067194370816120.
Reading from /dev/fibonacci at offset 91, returned the sequence 4660046610375530309.
Reading from /dev/fibonacci at offset 92, returned the sequence 7540113804746346429.
Reading from /dev/fibonacci at offset 93, returned the sequence 12200160415121876738.
Reading from /dev/fibonacci at offset 94, returned the sequence 19740274219868223167.
Reading from /dev/fibonacci at offset 95, returned the sequence 31940434634990099905.
Reading from /dev/fibonacci at offset 96, returned the sequence 51680708854858323072.
Reading from /dev/fibonacci at offset 97, returned the sequence 83621143489848422977.
Reading from /dev/fibonacci at offset 98, returned the sequence 135301852344706746049.
Reading from /dev/fibonacci at offset 99, returned the sequence 218922995834555169026.
Reading from /dev/fibonacci at offset 100, returned the sequence 354224848179261915075.

Time measurement with median

Calculate each Fibonacci number several times.
Plotting the median of the samples produces a stable result.


Notice that there is a big jump at $F(48)$ and $F(94)$; that is because one more block of memory is required to store the result:

$$2^{32} = 4294967296,\quad F(47) = 2971215073,\quad F(48) = 4807526976$$

$$2^{64} \approx 1.84\times10^{19},\quad F(93) \approx 1.22\times10^{19},\quad F(94) \approx 1.97\times10^{19}$$

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>
#define FIB_DEV "/dev/fibonacci"
#define SAMPLE_TIME 21
#define OFFSET 100
int cmpfunc(const void *a, const void *b)
{
    return (*(int *) a - *(int *) b);
}
int main()
{
    char big_num_buf[100];
    char write_buf[] = "testing writing";
    // int OFFSET = 100; /* TODO: try test something bigger than the limit */
    int **sample_kernel = malloc((OFFSET + 1) * sizeof(int *));
    int **sample_user = malloc((OFFSET + 1) * sizeof(int *));
    FILE *fpm = fopen("scripts/stat_med.txt", "w");
    struct timespec t1, t2;
    int fd = open(FIB_DEV, O_RDWR);
    if (fd < 0) {
        perror("Failed to open character device");
        exit(1);
    }
    for (int i = 0; i <= OFFSET; i++) {
        sample_user[i] = malloc(SAMPLE_TIME * sizeof(int));
        sample_kernel[i] = malloc(SAMPLE_TIME * sizeof(int));
    }
    for (int j = 0; j < SAMPLE_TIME; j++) {
        for (int i = 0; i <= OFFSET; i++) {
            lseek(fd, i, SEEK_SET);
            long long sz, sz2;
            clock_gettime(CLOCK_MONOTONIC, &t1);
            sz = read(fd, big_num_buf, 2);
            clock_gettime(CLOCK_MONOTONIC, &t2);
            sz2 = write(fd, write_buf, 2);
            sample_kernel[i][j] = (int) sz2;
            sample_user[i][j] = (int) (t2.tv_nsec - t1.tv_nsec);
        }
    }
    for (int i = 0; i <= OFFSET; i++) {
        lseek(fd, i, SEEK_SET);
        long long sz3 = read(fd, big_num_buf, 2);
        printf("f_big_num(%d): %s\n", i, big_num_buf);
        if (sz3)
            printf("f_big_num(%d) is truncated\n", i);
        qsort(sample_kernel[i], SAMPLE_TIME, sizeof(int), cmpfunc);
        qsort(sample_user[i], SAMPLE_TIME, sizeof(int), cmpfunc);
        fprintf(fpm, "%d %d %d\n", i, sample_user[i][SAMPLE_TIME / 2],
                sample_kernel[i][SAMPLE_TIME / 2]);
    }
    close(fd);
    fclose(fpm);
    for (int i = 0; i <= OFFSET; i++) {
        free(sample_user[i]);
        free(sample_kernel[i]);
    }
    free(sample_kernel);
    free(sample_user);
    return 0;
}

Notice that the calculation sequence is

$$F(0) \to F(1) \to \dots \to F(100) \to F(0) \to F(1) \to \dots$$

instead of

$$F(0) \to F(0) \to \dots \to F(0) \to F(1) \to F(1) \to \dots$$

This way, the samples for each $F(x)$ are taken at well-separated times, so the measurement is less biased.

Kernel to user time


$$F(785) < 2^{544} < F(786)$$

$$F(370) < 2^{256} < F(371)$$

Iterative


A big jump occurs whenever a new block of memory needs to be allocated.

Fast doubling with big number

To calculate Fibonacci numbers using fast doubling, we need multiplication and subtraction of big numbers.

For subtraction, calculate the 2's complement, add, and discard the overflow. Here we assume a is greater than b.

big_num_t *big_num_sub(big_num_t *a, big_num_t *b)
{
    // assume a > b
    if (!a || !b)
        return NULL;
    if (big_num_is_zero(b))
        return big_num_dup(a);
    big_num_t *d = big_num_2comp(b);
    d->block = kvrealloc(d->block, sizeof(u32) * d->block_num,
                         sizeof(u32) * a->block_num, GFP_KERNEL);
    while (d->block_num < a->block_num) {
        d->block_num += 1;
        d->block[d->block_num - 1] = 0;
    }

    // big_num_t *c = big_num_add(a, d);
    big_num_t *c = big_num_create(a->block_num, 0);
    if (!c)
        return NULL;
    u32 cy = 0;
    for (size_t i = 0; i < a->block_num; ++i) {
        /* keep the carry-in inside the 64-bit sum so it is never lost */
        u64 sum = (u64) a->block[i] + d->block[i] + cy;
        c->block[i] = (u32) sum;
        cy = (u32) (sum >> 32);
    }
    big_num_free(d);
    return c;
}

For multiplication, we use the standard way of multiplying two binary numbers:
iterate through all the bits of a, add b to the result if the current bit is 1, then shift b left by 1 bit for the next iteration.

big_num_t *big_num_mul(big_num_t *a, big_num_t *b)
{
    if (!a || !b)
        return NULL;
    big_num_t *c = big_num_create(1, 0);
    if (big_num_is_zero(a) || big_num_is_zero(b)) {
        return c;
    }
    if (!c)
        return NULL;
    big_num_t *b2 = big_num_dup(b);
    for (size_t i = 0; i < a->block_num; ++i) {
        for (int k = 0; k < 32; ++k) {
            u32 m = 1u << k;
            if (a->block[i] & m) {
                big_num_t *c2 = big_num_dup(c);
                big_num_free(c);
                c = big_num_add(c2, b2);
                big_num_free(c2);
            }
            big_num_lshift(b2, 1);
        }
    }
    big_num_free(b2);
    return c;
}

Version 0

Implement fast doubling with the above functions.
big_num_square() duplicates the argument and then uses big_num_mul(), as sketched below.
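
A sketch of that big_num_square() (the actual fibdrv version may differ slightly):

/* Version 0: square by duplicating the argument and reusing big_num_mul(). */
big_num_t *big_num_square(big_num_t *a)
{
    big_num_t *a2 = big_num_dup(a);
    big_num_t *c = big_num_mul(a, a2);
    big_num_free(a2);
    return c;
}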

char *fib_sequence_big_num_fdouble(long long k)
{
    big_num_t *fk = big_num_create(1, 0);
    if (unlikely(!k))
        return big_num_to_string(fk);
    big_num_t *fk1 = big_num_create(1, 1);
    big_num_t *f2k = big_num_create(1, 0);
    big_num_t *f2k1 = NULL;
    // test
    // big_num_free(fk);
    // fk = big_num_sub(fk1, f2k);

    // test
    long long m = 1LL << (63 - __builtin_clzll(k));
    while (m) {
        // f2k = fk * (2 * fk1 - fk);
        big_num_t *t1 = big_num_dup(fk1);
        big_num_t *t2 = big_num_add(fk1, t1);
        big_num_t *t3 = big_num_sub(t2, fk);
        big_num_free(f2k);
        f2k = big_num_mul(fk, t3);
        // f2k1 = fk * fk + fk1 * fk1;
        big_num_t *t4 = big_num_square(fk);
        big_num_t *t5 = big_num_square(fk1);
        big_num_free(f2k1);
        f2k1 = big_num_add(t4, t5);
        big_num_free(fk);
        big_num_free(fk1);
        if (k & m) {
            fk = big_num_dup(f2k1);
            fk1 = big_num_add(f2k, f2k1);
        } else {
            fk = big_num_dup(f2k);
            fk1 = big_num_dup(f2k1);
        }
        m >>= 1;
        big_num_free(t1);
        big_num_free(t2);
        big_num_free(t3);
        big_num_free(t4);
        big_num_free(t5);
    }
    big_num_free(fk1);
    big_num_free(f2k);
    big_num_free(f2k1);
    return big_num_to_string(fk);
}


The time measurement result is worse than the iterative approach.
Referring to KYG-yaya573142's report, the reasons may include the following:

  1. In fib_sequence_big_num_fdouble(), I use a lot of temporary pointers and memory copies, which could be avoided.
  2. Use a different Q-matrix identity for fast doubling; this avoids the subtraction and saves one iteration:

$$\begin{bmatrix} F(2n-1) \\ F(2n) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}^{2n} \begin{bmatrix} F(1) \\ F(0) \end{bmatrix} = \begin{bmatrix} F(n-1) & F(n) \\ F(n) & F(n+1) \end{bmatrix} \begin{bmatrix} F(n-1) & F(n) \\ F(n) & F(n+1) \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} F(n)^2 + F(n-1)^2 \\ F(n+1)F(n) + F(n)F(n-1) \end{bmatrix}$$

resulting in:

$$F(2k-1) = F(k)^2 + F(k-1)^2$$
$$F(2k) = F(k)\big[\,2F(k-1) + F(k)\,\big]$$

  3. Take advantage of the 64-bit CPU (the word size is 64 bits): use u64 for each block, and use gcc's __int128 if needed (see the sketch after this list).
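
A minimal sketch of the idea in item 3 (the names word_t and word_add() are placeholders, not fibdrv's API): with 64-bit blocks, gcc's unsigned __int128 can hold the full sum, so the carry-out is simply the upper 64 bits.

typedef unsigned long long word_t; /* hypothetical 64-bit block type */

/* Add two 64-bit blocks plus a carry-in; return the carry-out. */
word_t word_add(word_t *c, word_t a, word_t b, word_t cy)
{
    unsigned __int128 t = (unsigned __int128) a + b + cy;
    *c = (word_t) t;           /* low 64 bits */
    return (word_t) (t >> 64); /* carry out */
}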

perf

Use perf to check the program's hot spots.

+   99.96%     0.01%  client_stat  client_stat        [.] main                                                                                                                                                    ◆
+   99.96%     0.00%  client_stat  client_stat        [.] _start                                                                                                                                                  ▒
+   99.96%     0.00%  client_stat  libc-2.31.so       [.] __libc_start_main                                                                                                                                       ▒
+   99.75%     0.01%  client_stat  [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe                                                                                                                          ▒
+   99.71%     0.00%  client_stat  [kernel.kallsyms]  [k] do_syscall_64                                                                                                                                           ▒
+   99.68%     0.01%  client_stat  libc-2.31.so       [.] __GI___libc_write                                                                                                                                       ▒
+   99.48%     0.00%  client_stat  [kernel.kallsyms]  [k] __x64_sys_write                                                                                                                                         ▒
+   99.48%     0.00%  client_stat  [kernel.kallsyms]  [k] ksys_write                                                                                                                                              ▒
+   99.47%     0.00%  client_stat  [kernel.kallsyms]  [k] vfs_write                                                                                                                                               ▒
+   99.44%     0.00%  client_stat  [kernel.kallsyms]  [k] fib_write                                                                                                                                               ▒
+   99.39%     0.04%  client_stat  [kernel.kallsyms]  [k] fib_sequence_big_num_fdouble                                                                                                                            ▒
+   57.68%     2.46%  client_stat  [kernel.kallsyms]  [k] big_num_mul                                                                                                                                             ▒
+   39.12%     0.01%  client_stat  [kernel.kallsyms]  [k] big_num_square                                                                                                                                          ▒
+   36.94%    36.90%  client_stat  [kernel.kallsyms]  [k] big_num_to_string                                                                                                                                       ▒
+   29.19%     6.18%  client_stat  [kernel.kallsyms]  [k] big_num_create                                                                                                                                          ▒
+   24.08%     2.09%  client_stat  [kernel.kallsyms]  [k] kvmalloc_node                                                                                                                                           ▒
+   17.28%     1.49%  client_stat  [kernel.kallsyms]  [k] big_num_dup                                                                                                                                             ▒
+   16.90%     0.59%  client_stat  [kernel.kallsyms]  [k] big_num_free                                                                                                                                            ▒
+   16.63%     2.97%  client_stat  [kernel.kallsyms]  [k] big_num_add.part.0                                                                                                                                      ▒
+   16.48%     1.34%  client_stat  [kernel.kallsyms]  [k] kvfree                                                                                                                                                  ▒
+   15.68%    11.35%  client_stat  [kernel.kallsyms]  [k] __kmalloc_node                                                                                                                                          ▒
+   14.66%     9.70%  client_stat  [kernel.kallsyms]  [k] kfree                                                                                                                                                   ▒
+    8.93%     7.55%  client_stat  [kernel.kallsyms]  [k] big_num_lshift                                                                                                                                          ▒
+    6.43%     6.17%  client_stat  [kernel.kallsyms]  [k] memset_erms                                                                                                                                             ▒
+    5.42%     5.13%  client_stat  [kernel.kallsyms]  [k] memcg_slab_free_hook                                                                                                                                    ▒
+    2.72%     2.51%  client_stat  [kernel.kallsyms]  [k] kmalloc_slab                                                                                                                                            ▒
+    1.40%     0.79%  client_stat  [kernel.kallsyms]  [k] __cond_resched                                                                                                                                          ▒
+    1.35%     0.08%  client_stat  [kernel.kallsyms]  [k] kvrealloc                                                                                                                                               ▒
+    1.31%     0.03%  client_stat  [kernel.kallsyms]  [k] big_num_sub                                                                                                                                             ▒
+    0.87%     0.64%  client_stat  [kernel.kallsyms]  [k] is_vmalloc_addr                                                                                                                                         ▒
+    0.84%     0.04%  client_stat  [kernel.kallsyms]  [k] big_num_2comp                                                                                                                                           ▒
+    0.82%     0.59%  client_stat  [kernel.kallsyms]  [k] rcu_all_qs                                                                                                                                              ▒
+    0.61%     0.01%  client_stat  [kernel.kallsyms]  [k] big_num_add                                                                                                                                             ▒
     0.58%     0.26%  client_stat  [kernel.kallsyms]  [k] memset                                                                                                                                                  ▒
     0.50%     0.25%  client_stat  [kernel.kallsyms]  [k] should_failslab                                                                                                                                         ▒
     0.23%     0.22%  client_stat  [kernel.kallsyms]  [k] memcpy_erms  

At first glance, we might think that multiplication is slow. However, if we look at the self-time distribution below, kfree() and kmalloc() consume a lot of resources because big_num_mul() and other functions call them very frequently.

+   37.22%  client_stat  [kernel.kallsyms]  [k] big_num_to_string                                                                                                                                                 ◆
+   11.25%  client_stat  [kernel.kallsyms]  [k] __kmalloc_node                                                                                                                                                    ▒
+   10.19%  client_stat  [kernel.kallsyms]  [k] kfree                                                                                                                                                             ▒
+    7.54%  client_stat  [kernel.kallsyms]  [k] memset_erms                                                                                                                                                       ▒
+    7.34%  client_stat  [kernel.kallsyms]  [k] big_num_lshift                                                                                                                                                    ▒
+    5.39%  client_stat  [kernel.kallsyms]  [k] memcg_slab_free_hook                                                                                                                                              ▒
+    3.84%  client_stat  [kernel.kallsyms]  [k] big_num_add.part.0                                                                                                                                                ▒
+    2.62%  client_stat  [kernel.kallsyms]  [k] kmalloc_slab                                                                                                                                                      ▒
+    2.40%  client_stat  [kernel.kallsyms]  [k] kvmalloc_node                                                                                                                                                     ▒
+    2.36%  client_stat  [kernel.kallsyms]  [k] big_num_mul                                                                                                                                                       ▒
+    1.59%  client_stat  [kernel.kallsyms]  [k] memcpy_erms                                                                                                                                                       ▒
+    1.42%  client_stat  [kernel.kallsyms]  [k] kvfree                                                                                                                                                            ▒
+    1.22%  client_stat  [kernel.kallsyms]  [k] big_num_dup                                                                                                                                                       ▒
+    1.08%  client_stat  [kernel.kallsyms]  [k] big_num_create                                                                                                                                                    ▒
+    1.07%  client_stat  [kernel.kallsyms]  [k] big_num_free                                                                                                                                                      ▒
+    0.84%  client_stat  [kernel.kallsyms]  [k] __cond_resched                                                                                                                                                    ▒
+    0.76%  client_stat  [kernel.kallsyms]  [k] is_vmalloc_addr                                                                                                                                                   ▒
     0.45%  client_stat  [kernel.kallsyms]  [k] rcu_all_qs                                                                                                                                                        ▒
     0.32%  client_stat  [kernel.kallsyms]  [k] memset                                                                                                                                                            ▒
     0.18%  client_stat  [kernel.kallsyms]  [k] should_failslab    

We can save a lot of resources by reusing the existing memory blocks, for example, void big_num_add(big_num_t *c, big_num_t *a, big_num_t *b) instead of big_num_t *big_num_add(big_num_t *a, big_num_t *b), because the latter requires c to be freed before receiving the return value.

-    big_num_free(c);
-    c = big_num_add(a, b);
+    big_num_add(c, a, b);
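
A sketch of what the in-place interface could look like, assuming a grow-on-demand helper such as the big_num_resize() that appears later; the details of the real fibdrv version may differ:

/* c = a + b; the caller owns c, so no per-call allocation or free is needed.
 * Assumes c does not alias a or b. */
void big_num_add(big_num_t *c, big_num_t *a, big_num_t *b)
{
    big_num_t *big = a->block_num >= b->block_num ? a : b;
    big_num_t *small = a->block_num < b->block_num ? a : b;

    big_num_resize(c, big->block_num);
    u32 cy = 0;
    for (size_t i = 0; i < big->block_num; ++i) {
        u64 sum = (u64) big->block[i] + cy;
        if (i < small->block_num)
            sum += small->block[i];
        c->block[i] = (u32) sum;
        cy = (u32) (sum >> 32);
    }
    if (cy) {
        big_num_resize(c, big->block_num + 1);
        c->block[big->block_num] = cy;
    }
}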

Version 1

char *fib_sequence_big_num_fdouble(long long k)
{
    big_num_t *fk = big_num_create(1, 0);
    if (unlikely(!k))
        return big_num_to_string(fk);
    big_num_t *fk1 = big_num_create(1, 1);
    big_num_t *f2k = big_num_create(1, 0);
    big_num_t *f2k1 = big_num_create(1, 0);

    big_num_t *t1 = big_num_create(1, 0);
    big_num_t *t2 = big_num_create(1, 0);

    long long m = 1LL << (63 - __builtin_clzll(k));
    while (m) {
        // f2k = fk * (2 * fk1 - fk);
        big_num_cpy(t1, fk1);
        big_num_add(t2, fk1, t1);
        big_num_sub(t1, t2, fk);
        big_num_mul(f2k, fk, t1);
        // f2k1 = fk * fk + fk1 * fk1;
        big_num_square(t1, fk);
        big_num_square(t2, fk1);
        big_num_add(f2k1, t1, t2);
        if (k & m) {
            big_num_cpy(fk, f2k1);
            big_num_add(fk1, f2k, f2k1);
        } else {
            big_num_cpy(fk, f2k);
            big_num_cpy(fk1, f2k1);
        }
        m >>= 1;

    }
    big_num_free(fk1);
    big_num_free(f2k);
    big_num_free(f2k1);
    big_num_free(t1);
    big_num_free(t2);
    return big_num_to_string(fk);
}

The result is much better with a 40% improvement, from 790 ms to 475 ms.


Check again with perf

+   68.28%  client_stat  [kernel.kallsyms]  [k] big_num_to_string                                                                                                                                                 ▒
+   11.25%  client_stat  [kernel.kallsyms]  [k] big_num_lshift                                                                                                                                                    ◆
+    5.19%  client_stat  [kernel.kallsyms]  [k] big_num_mul_add                                                                                                                                                   ▒
+    3.30%  client_stat  [kernel.kallsyms]  [k] big_num_mul                                                                                                                                                       ▒
+    2.18%  client_stat  [kernel.kallsyms]  [k] kfree                                                                                                                                                             ▒
+    2.08%  client_stat  [kernel.kallsyms]  [k] __kmalloc_node                                                                                                                                                    ▒
+    1.78%  client_stat  [kernel.kallsyms]  [k] memset_erms                                                                                                                                                       ▒
+    0.88%  client_stat  [kernel.kallsyms]  [k] memcg_slab_free_hook                                                                                                                                              ▒
+    0.79%  client_stat  [kernel.kallsyms]  [k] memcpy_erms                                                                                                                                                       ▒
+    0.63%  client_stat  [kernel.kallsyms]  [k] kmalloc_slab                                                                                                                                                      ▒
+    0.57%  client_stat  [kernel.kallsyms]  [k] kvmalloc_node                                                                                                                                                     ▒
     0.36%  client_stat  [kernel.kallsyms]  [k] big_num_resize                                                                                                                                                    ▒
     0.26%  client_stat  [kernel.kallsyms]  [k] kvfree  

Next, I try to reduce the cost of big_num_lshift().
By using cnt to record how many bits to shift and only shifting when the current bit is 1, we don't have to shift on every iteration.

void big_num_mul(big_num_t *c, big_num_t *a, big_num_t *b)
{
    if (!a || !b)
        return;
    big_num_reset(c);
    if (big_num_is_zero(a) || big_num_is_zero(b)) {
        return;
    }
    if (!c)
        return;
    big_num_t *b2 = big_num_dup(b);
+   int cnt = 0;
    for (size_t i = 0; i < a->block_num; ++i) {
        for (int k = 0; k < 32; ++k) {
            u32 m = 1u << k;
            if (a->block[i] & m) {
+               big_num_lshift(b2, cnt);
                big_num_mul_add(c, b2);
+               cnt = 0;
            }
-            big_num_lshift(b2, 1);
+           ++cnt;
        }
    }
    big_num_free(b2);
}

The result is finally better than the iterative approach.


However, if we exclude the string conversion from the time measurement, the result is still not great and the fast doubling method is still not faster.

That is because the multiplication iterates through every bit and performs an addition each time.
If we change the multiplication function to process 32 bits (one block) at a time, the result is much more reasonable and faster.

void big_num_mul(big_num_t *c, big_num_t *a, big_num_t *b)
{
    if (!a || !b)
        return;
    big_num_reset(c);
    if (big_num_is_zero(a) || big_num_is_zero(b)) {
        return;
    }
    if (!c)
        return;
    for (size_t shift = 0; shift < b->block_num; ++shift) {
        u32 cy = 0;
        size_t i = 0;
        for (; i < a->block_num; i++) {
            u64 t1 = (u64) a->block[i] * (u64) b->block[shift] + cy;
            cy = (u32)(t1 >> 32);
            if (i + 1 + shift > c->block_num)
                big_num_resize(c, i + 1 + shift);
            u64 t2 = ((u64) c->block[i + shift]) + (u32) t1;
            cy += (u32)(t2 >> 32);
            c->block[i + shift] += (u32) t1;
        }
        if (cy) {
            if (i + 1 + shift > c->block_num)
                big_num_resize(c, i + 1 + shift);
            c->block[i + shift] += cy;
        }
    }
}


The reference is bignum.

+   15.80%  client_stat  [kernel.kallsyms]  [k] memset_erms                    ◆
+   15.49%  client_stat  [kernel.kallsyms]  [k] __kmalloc_node                 ▒
+   14.02%  client_stat  [kernel.kallsyms]  [k] kfree                          ▒
+    7.68%  client_stat  [kernel.kallsyms]  [k] big_num_mul                    ▒
+    7.14%  client_stat  [kernel.kallsyms]  [k] memcg_slab_free_hook           ▒
+    4.65%  client_stat  [kernel.kallsyms]  [k] kmalloc_slab                   ▒
+    4.24%  client_stat  [kernel.kallsyms]  [k] memcpy_erms                    ▒
+    2.80%  client_stat  [kernel.kallsyms]  [k] kvmalloc_node                  ▒
+    2.72%  client_stat  [kernel.kallsyms]  [k] syscall_exit_to_user_mode      ▒
+    2.48%  client_stat  [kernel.kallsyms]  [k] big_num_add.part.0             ▒
+    2.17%  client_stat  [kernel.kallsyms]  [k] kvfree                         ▒
+    1.93%  client_stat  [kernel.kallsyms]  [k] big_num_create                 ▒
+    1.60%  client_stat  [kernel.kallsyms]  [k] syscall_return_via_sysret      ▒
+    1.44%  client_stat  [kernel.kallsyms]  [k] kvrealloc                      ▒
+    1.43%  client_stat  [kernel.kallsyms]  [k] big_num_resize                 ▒
+    1.28%  client_stat  [kernel.kallsyms]  [k] big_num_dup                    ▒
+    1.20%  client_stat  [kernel.kallsyms]  [k] is_vmalloc_addr                ▒
+    0.96%  client_stat  [kernel.kallsyms]  [k] rcu_all_qs                     ▒
+    0.96%  client_stat  [kernel.kallsyms]  [k] __cond_resched                 ▒
+    0.88%  client_stat  [kernel.kallsyms]  [k] big_num_reset                  ▒
+    0.88%  client_stat  [kernel.kallsyms]  [k] __entry_text_start             ▒
+    0.80%  client_stat  [kernel.kallsyms]  [k] memset                         ▒
+    0.80%  client_stat  [kernel.kallsyms]  [k] big_num_sub                    ▒
+    0.72%  client_stat  [kernel.kallsyms]  [k] big_num_2comp                  ▒
+    0.56%  client_stat  [kernel.kallsyms]  [k] big_num_cpy                    ▒
+    0.56%  client_stat  [kernel.kallsyms]  [k] fib_sequence_big_num_fdouble

Now the bottleneck is that I perform too many unnecessary memory operations.
Take big_num_sub() as an example: we don't need to build a separate 2's-complement big number or reset c to 0.

// c = a - b, assume a > b
void big_num_sub(big_num_t *c, big_num_t *a, big_num_t *b)
{
    if (!a || !b)
        return;
    big_num_resize(b, a->block_num);
    big_num_resize(c, a->block_num);
    u32 cy = 1; /* carry-in of 1 completes the 2's complement: a + ~b + 1 */
    for (size_t i = 0; i < a->block_num; ++i) {
        u32 t = ~b->block[i];
        u64 sum = (u64) a->block[i] + t + cy;
        c->block[i] = (u32) sum;
        cy = (u32) (sum >> 32);
    }

}


We can improve big_num_square() by symmetry.
[Reference]
Squaring can be at most 2x faster than a regular multiplication of two arbitrary numbers, because each cross product appears twice. For example, calculate the square of 1011 and try to spot the pattern that we can exploit; u0 to u3 represent the bits of the number from the most significant to the least significant.

    1011 //                               u3 * u0 : u3 * u1 : u3 * u2 : u3 * u3
   1011  //                     u2 * u0 : u2 * u1 : u2 * u2 : u2 * u3       
  0000   //           u1 * u0 : u1 * u1 : u1 * u2 : u1 * u3                 
 1011    // u0 * u0 : u0 * u1 : u0 * u2 : u0 * u3                           
void big_num_square(big_num_t *c, big_num_t *a)
{
    if (!a)
        return;
    big_num_resize(c, 2 * a->block_num);
    big_num_reset(c);
    if (big_num_is_zero(a)) {
        big_num_trim(c);
        return;
    }
    
    if (!c)
        return;
    for (size_t shift = 0; shift < a->block_num; ++shift) {
        u32 cy = 0;
        size_t i = shift + 1;
        for (; i < a->block_num; ++i) {
            u64 t1 = (u64) a->block[i] * (u64) a->block[shift] + cy;
            cy = (u32)(t1 >> 32);
            u64 t2 = ((u64) c->block[i + shift]) + (u32) t1;
            cy += (u32)(t2 >> 32);
            c->block[i + shift] = (u32) t2;
        }   
        for (int j = 0; cy != 0; ++j) {
            u64 t = (u64)c->block[i + j + shift] + (u64)cy;
            c->block[i + j + shift] = (u32)t;
            cy = (u32)(t >> 32);
        }
    }
    big_num_lshift(c, 1);
    u32 cy = 0;
    for (size_t shift = 0; shift < a->block_num; ++shift) {
        u64 t1 = (u64) a->block[shift] * (u64) a->block[shift] + cy;
        cy = (u32)(t1 >> 32);
        u64 t2 = ((u64) c->block[2 * shift]) + (u32) t1;
        cy += (u32)(t2 >> 32);
        c->block[2 * shift] = (u32) t2;
        for (int j = 1; cy != 0; ++j) {
            u64 t = (u64)c->block[j + 2 * shift] + (u64)cy;
            c->block[j + 2 * shift] = (u32)t;
            cy = (u32)(t >> 32);
        }
    }
    big_num_trim(c);
}

The result is not improved much because of the addition cost.


Now F(n) for n less than 1000 can be computed in about 10 microseconds.
The bottleneck is still memory allocation and freeing.

+   15.10%  client_stat  [kernel.kallsyms]  [k] kfree                          ◆
+   12.09%  client_stat  [kernel.kallsyms]  [k] __kmalloc_node                 ▒
+   10.47%  client_stat  [kernel.kallsyms]  [k] memset_erms                    ▒
+   10.33%  client_stat  [kernel.kallsyms]  [k] memcpy_erms                    ▒
+    5.47%  client_stat  [kernel.kallsyms]  [k] memcg_slab_free_hook           ▒
+    5.42%  client_stat  [kernel.kallsyms]  [k] big_num_square                 ▒
+    3.53%  client_stat  [kernel.kallsyms]  [k] kmalloc_slab                   ▒
+    3.49%  client_stat  [kernel.kallsyms]  [k] big_num_resize                 ▒
+    3.22%  client_stat  [kernel.kallsyms]  [k] syscall_exit_to_user_mode      ▒
+    2.83%  client_stat  [kernel.kallsyms]  [k] kvrealloc                      ▒
+    2.72%  client_stat  [kernel.kallsyms]  [k] big_num_mul                    ▒
+    2.63%  client_stat  [kernel.kallsyms]  [k] kvfree                         ▒
+    2.58%  client_stat  [kernel.kallsyms]  [k] big_num_add                    ▒
+    2.19%  client_stat  [kernel.kallsyms]  [k] kvmalloc_node                  ▒
+    1.94%  client_stat  [kernel.kallsyms]  [k] big_num_lshift                 ▒
+    1.87%  client_stat  [kernel.kallsyms]  [k] syscall_return_via_sysret      ▒
+    1.40%  client_stat  [kernel.kallsyms]  [k] is_vmalloc_addr                ▒
+    1.29%  client_stat  [kernel.kallsyms]  [k] fib_sequence_big_num_fdouble   ▒
+    1.07%  client_stat  [kernel.kallsyms]  [k] big_num_sub                    ▒
+    1.06%  client_stat  [kernel.kallsyms]  [k] __entry_text_start             ▒
+    1.05%  client_stat  [kernel.kallsyms]  [k] big_num_trim  

Pre-allocate memory

We can modify the structure to record both the current block count (the number of blocks in use, i.e. non-zero) and the allocated block count.

typedef struct {
    u32 *block;
    size_t block_num;
    size_t true_block_num;
} big_num_t;

In the resize function, we only allocate new memory when num > true_block_num, which will not happen if we pre-allocate sufficient memory.

void big_num_resize(big_num_t *a, int num)
{
    // decrease size
    if (a->true_block_num >= num) {
        a->block_num = num;
    } else {  // num > true block num >= block num
        a->block = kvrealloc(a->block, sizeof(u32) * a->true_block_num,
                             sizeof(u32) * num, GFP_KERNEL);

        memset(&a->block[a->true_block_num], 0,
               sizeof(u32) * (num - a->true_block_num));
        a->true_block_num = a->block_num = num;
    }
}

Based on observation, the number of blocks needed increases by 1 roughly every 46 Fibonacci numbers.
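This matches the growth rate of the sequence: consecutive Fibonacci numbers grow by a factor of roughly $\varphi \approx 1.618$, so one extra 32-bit block is needed about every
$$\frac{32}{\log_2 \varphi} \approx \frac{32}{0.694} \approx 46 \text{ terms.}$$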

big_num_t *fib_sequence_big_num_fdouble(long long k)
{
    size_t block_num = k/46 + 1;
    big_num_t *fk = big_num_create(block_num, 0);
    if (unlikely(!k))
        return fk;
    big_num_t *fk1 = big_num_create(block_num, 1);
    big_num_t *f2k = big_num_create(block_num, 0);
    big_num_t *f2k1 = big_num_create(block_num, 0);

    big_num_t *t1 = big_num_create(block_num, 0);
    big_num_t *t2 = big_num_create(block_num, 0);
    ...
}

After pre-allocating memory, the result is much closer to the reference.


If we measure the time for fib(10000), it is clear that we still need lots of improvement.

Block size

A 64-bit CPU provides instructions for 64-bit addition and multiplication.
By changing the block size of the big number to 64 bits, we can be roughly twice as fast as the previous version.
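
The helpers in the next section operate on a base_t word; the change presumably amounts to something like the following (the exact typedef and struct layout are assumptions based on the earlier definitions):

typedef u64 base_t; /* one block is now a 64-bit word */

typedef struct {
    base_t *block;
    size_t block_num;      /* blocks currently in use */
    size_t true_block_num; /* blocks actually allocated */
} big_num_t;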

Shortcut and asm

  1. We don't need to iterate through every block when the number can be stored in 64 bits.
void big_num_mul(big_num_t *c, big_num_t *a, big_num_t *b)
{
    ...
    // short cut
    if (a->block_num == 1 && b->block_num == 1) {
        base_t cy = 0;
        cy = base_mul(&c->block[0], a->block[0], b->block[0], cy);
        c->block[1] = cy;
        big_num_trim(c);
        return;
    }
    ...
}
  2. Use __asm__ instead of a 128-bit integer.
base_t base_mul(base_t *c, base_t a, base_t b, base_t cy)
{
    base_t hi;
    __asm__("mulq %3" : "=a"(*c), "=d"(hi) : "%0"(a), "rm"(b));
    cy = ((*c += cy) < cy) + hi;
    return cy;
}
  3. Use a comparison to detect overflow instead of a 128-bit integer.
base_t base_add(base_t *c, base_t a, base_t b, base_t cy)
{
    cy = (a += cy) < cy;
    cy += (*c = a + b) < a;
    return cy;
}

Result: