
Q&A - Linux related

tags: 2022/09 linuxQ&A linux

(2022/8/14) A collection of Q&A related to Linux. The goal is to save the time and energy otherwise spent wading through incorrect or outdated articles in Google searches, by providing the most effective resolutions from my own experience. The content is intentionally brief, so that readers get a clear direction to get things solved, with reference links for those who want to dive deeper.
(latest update on 2022/10/9)


Table of Contents


Q: Linux boot sequence - magic address 0x7c00

A: MBR 載入位址 0x7C00 的來源與意義的調查結果, or From the bootloader to the kernel


Q: Linux Kernel official documents

A: The official Linux kernel documentation: https://www.kernel.org/doc/html/latest/ (also available at https://docs.kernel.org).


Q: Build Linux kernel - Prerequisite - Toolchains

A: (Mainly from Compiling Linux kernel, with reference from linuxquestions.org and How to compile and install Linux Kernel 5.16.9 from source code)

# All distributions in the Debian family (such as Ubuntu) have a package called build-essential, which contains everything you need for most builds. 
# such as gcc, g++, cpp, etc.
sudo apt update
sudo apt install build-essential

# In addition to the basic build tools, the kernel configuration dialogue requires the development package (headers) for the ncurses library, and modern kernels require the following build dependencies as well
sudo apt install libncurses-dev bison flex libssl-dev libelf-dev

# for an ARM aarch64 (cross) build, the following extra packages are also needed
sudo apt install rsync bc
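
If the goal is to cross-compile the kernel for AArch64 on an x86_64 host, a cross toolchain is also needed. A minimal sketch using the stock Ubuntu cross-compiler package (an assumption; the referenced articles do not mention it):

# cross compiler (and binutils) targeting aarch64-linux-gnu
sudo apt install gcc-aarch64-linux-gnu
# later, build the kernel with e.g.:
# make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
# make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j $(nproc)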

Q: Build Linux kernel - Using The Traditional Way by 'make'

A: (Reference: How to compile and install Linux Kernel from source code)

Follow the 12 steps below (based on Ubuntu 16.04, kernel 4.15.0-112):

  1. Follow Prerequisite first
  2. Download desired kernel to /usr/src. Go to www.kernel.org and select the kernel you want to install, e.g. linux-4.19.257.tar.xz
cd /usr/src
# sudo is required for shell commands under /usr/ directory
sudo wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.19.257.tar.xz
  3. (Skip signature check for now, and) Untar/unzip the kernel source file
# assume still under /usr/src/ directory
sudo tar xvf linux-4.19.257.tar.xz
# the source tarball can be removed after it is successfully extracted
sudo rm linux-4.19.257.tar.xz
  4. Preparing to configure the Linux kernel features and modules

Normally we re-use the existing kernel configuration (features and modules) for the new kernel; it is saved under the boot directory as /boot/config-$(uname -r)

# assume still under /usr/src/ directory
cd linux-4.19.257
# copy current kernel config file
sudo cp -v /boot/config-$(uname -r) .config
  5. Configuring the kernel

Now we can start the kernel configuration by typing any one of the following commands in the source code directory. This step is optional, since we already copied the existing kernel configuration; it is shown here to demonstrate how different kernel options can be selected.

$ make menuconfig # Text based color menus, radiolists & dialogs. This option is also useful on a remote server if you want to compile the kernel remotely.
$ make xconfig # X windows (Qt) based configuration tool, works best under the KDE desktop.
$ make gconfig # X windows (Gtk) based configuration tool, works best under the GNOME desktop.

Most people select make menuconfig, as it is text based and requires no desktop environment.

# recommend to use this method to configure kernel
sudo make menuconfig
# if keeping existing configuration without any change, 
# can just select 'exit' and 'Yes' to save
  6. Compile the Linux Kernel

Start compiling to create a compressed kernel image, using as many processors as possible.

# get thread or cpu core count using nproc command
sudo make -j $(nproc)
# or we can add 'time' shell command to know how long it takes to compile the kernel
# it could easily take more than 30mins even running at 4 threads, also depending on CPU frequency
sudo time make -j $(nproc)
  7. Install the Linux kernel modules

This will only take a few minutes.

sudo make modules_install
  8. Install the Linux kernel
sudo make install

It will install three files into the /boot directory, as well as modify your GRUB configuration:

  • initramfs-4.19.257.img
  • System.map-4.19.257
  • vmlinuz-4.19.257
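
A quick sanity check (a sketch) that the new files really landed in /boot:

# list the freshly installed kernel files
ls -lh /boot | grep 4.19.257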
  9. Update grub config
sudo update-initramfs -c -k 4.19.257
sudo update-grub
  10. Reboot the computer into new kernel
sudo reboot now
  11. Check if system runs at new kernel
# after system complete reboot
uname -r
# to check if it shows '4.19.257'
  12. Congratulations, you have successfully compiled and installed the Linux kernel.

Q: Build Linux kernel - Using The Debian Way by 'make bindeb-pkg'

A: (Following Debian.net/Building a custom kernel from Debian kernel source, with reference to The Debian Administrator's Handbook and How to Compile A Kernel - The Debian Way, which is outdated.)

Personally, I recommend the Debian way to build the kernel on Debian/Ubuntu based Linux, as I found the resulting files under /boot are similar in size to those of the version prior to the new build, whereas with the traditional way (or maybe I did something wrong there) the files are much larger.

Follow the 10 steps below (upgrading Ubuntu 16.04 from kernel 4.15.0-112 to 4.19.257):

  1. Follow Prerequisite first

  2. Download desired kernel to /usr/src. Go to www.kernel.org and select the kernel you want to install, e.g. linux-4.19.257.tar.xz

cd /usr/src
# sudo is required for shell commands under /usr/ directory
sudo wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.19.257.tar.xz
  3. (Skip signature check for now, and) Untar/unzip the kernel source file
# assume still under /usr/src/ directory
sudo tar xvf linux-4.19.257.tar.xz
# the source tarball can be removed after it is successfully extracted
sudo rm linux-4.19.257.tar.xz
  4. Preparing to configure the Linux kernel features and modules

Normally we re-use the existing kernel configuration (features and modules) for the new kernel; it is saved under the boot directory as /boot/config-$(uname -r)

# assume still under /usr/src/ directory
cd linux-4.19.257
# copy current kernel config file
sudo cp -v /boot/config-$(uname -r) .config
  5. Configuring the kernel

From this point on, follow this article from the '$ make nconfig' section onwards.

Now we can start the kernel configuration by typing any one of the following commands in the source code directory. This step is optional, since we already copied the existing kernel configuration; it is shown here to demonstrate how different kernel options can be selected.

$ make menuconfig # Text based color menus, radiolists & dialogs. This option is also useful on a remote server if you want to compile the kernel remotely.
$ make xconfig # X windows (Qt) based configuration tool, works best under the KDE desktop.
$ make gconfig # X windows (Gtk) based configuration tool, works best under the GNOME desktop.

Most people select make menuconfig, as it is text based and requires no desktop environment. Debian suggests make nconfig; the nconfig frontend is also text based and requires the ncurses library, which is provided by the libncurses-dev package.

# Debian recommends to use this method to configure kernel
sudo make nconfig
# if keeping existing configuration without any change, 
# Press 'F9' to exit
  6. Build the package

After the configuration process is finished, the new or updated kernel configuration will be stored in the .config file in the top-level directory. The build is started using the commands below:

sudo make clean
sudo make bindeb-pkg -j $(nproc)
  7. Unpack and install the kernel package

As a result of the build, a custom kernel package linux-image-4.19.257_4.19.257-1_amd64.deb (name will reflect the version of the kernel and build number) will be created in the directory one level above the top of the tree. It may be installed using dpkg just as any other package:

sudo dpkg -i ../linux-image-4.19.257_4.19.257-1_amd64.deb

This command will unpack the kernel, generate the initrd if necessary (see Chapter 7, Managing the initial ramfs (initramfs) archive for details), and configure the bootloader to make the newly installed kernel the default one.
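
To confirm the package was registered with dpkg (a quick sketch; the exact package name depends on your build):

# list installed kernel image packages and look for the new one
dpkg -l | grep linux-image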

  8. Reboot the computer into new kernel
sudo reboot now
  9. Check if system runs at new kernel
# after system complete reboot
uname -r
# to check if it shows '4.19.257'
  10. Congratulations, you have successfully compiled and installed the Linux kernel.

Q: Build Linux kernel - Remove kernel

A:

Don't try this unless you are confident that there is an older Linux version to boot.

Remove the .deb under /boot, and update grub with update-grub

cd /boot
sudo rm -rf linux-4.19.257*
sudo rm -rf linux-headers-4.19.257_4.19.257-1_amd64.deb
sudo rm -rf linux-image-4.19.257_4.19.257-1_amd64.deb
sudo rm -rf linux-libc-dev_4.19.257-1_amd64.deb
sudo update-grub
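
If the kernel was installed as a .deb (the Debian way above), an alternative sketch is to let the package manager remove it, which also cleans up /boot and the GRUB entries (the package name below is an example; check it first with dpkg -l | grep linux-image):

sudo dpkg -r linux-image-4.19.257
sudo update-grub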

Q: Buildroot - What is buildroot?

A: Buildroot is a tool that simplifies and automates building a complete embedded Linux system through cross-compilation: it can generate the cross toolchain, the root filesystem, the kernel image and the bootloader. See https://buildroot.org for details.
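
A minimal sketch of the typical Buildroot flow (the release version and defconfig name below are just examples):

# download and unpack a Buildroot release
wget https://buildroot.org/downloads/buildroot-2022.08.tar.gz
tar xzf buildroot-2022.08.tar.gz
cd buildroot-2022.08
# pick a board defconfig (list them with 'make list-defconfigs'), e.g. QEMU ARM
make qemu_arm_versatile_defconfig
make menuconfig     # optional: tweak packages/toolchain
make                # builds the toolchain, kernel and rootfs under output/images/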


Q: Suddenly no network in VMware - Ubuntu 18.04 network icon disappeared and no internet access in VMware?

A: (Answer from askubuntu) solved the problem by command

nmcli network on

Q: gcc, the GNU Compiler Collection

A: GCC, the GNU Compiler Collection


Q: gdb - basic commands

A: from 用Open Source工具開發軟體: 新軟體開發關念 - Chapter 6. 除錯工具 gdb 基本命令


Q: gdb - commonly used commands

A: from gdb 常用指令


Q: gdb - YouTube - gdb Tutorial

A: from Debuggin - gdb Tutorial


Q: kernel vs user space - xv6 RISC-V privilege modes and mode switching

A: from 淺談特權模式與模式切換 - xv6 RISC-V :

User Mode and Kernel Mode in the dinosaur book (Operating System Concepts)

The dinosaur book mentions that an operating system generally switches between User Mode and Kernel Mode. Kernel Mode has higher control over the system and manages most of the hardware resources, while User Mode is normally used to run user applications. When a user application invokes a system call, the system switches to Kernel Mode to handle it, and returns to User Mode once the handling is done.

kernel vs user space

What the dinosaur book does not teach: privileged instructions

RISC-V privilege levels
The figure above shows the privilege modes defined by the RISC-V spec; from highest to lowest privilege they are Machine Mode, Supervisor Mode and User Mode.
Machine Mode must run absolutely trusted code to keep the system secure.
Each mode has its own interrupt/exception registers and interrupt vector table, so each mode can maintain its own interrupt records and handling strategy.
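
A quick way to observe the user-mode-to-kernel-mode switch from the shell (a small sketch, not from the article above): strace prints every system call a program makes, i.e. every point where execution traps into the kernel.

# trace only the write() system calls issued by 'echo'
strace -e trace=write echo hello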


Q: kernel vs user space - What is difference between User space and Kernel space?

A: Extracted from unix.stackexchange - what is difference between user space and kernel space, check that link for more details.

CPU rings are the most clear distinction

In x86 protected mode, the CPU is always in one of 4 rings. The Linux kernel only uses 0 and 3:

  • 0 for kernel
  • 3 for users

This is the most hard and fast definition of kernel vs userland.

Why Linux does not use rings 1 and 2: https://stackoverflow.com/questions/6710040/cpu-privilege-rings-why-rings-1-and-2-arent-used

How is the current ring determined?

The current ring is selected by a combination of:

  • global descriptor table: an in-memory table of GDT entries, and each entry has a field Privl which encodes the ring.

    The LGDT instruction sets the address to the current descriptor table.

    See also: http://wiki.osdev.org/Global_Descriptor_Table

  • the segment registers CS, DS, etc., which point to the index of an entry in the GDT.

    For example, CS = 0 means the first entry of the GDT is currently active for the executing code.

What can each ring do?

The CPU chip is physically built so that:

  • ring 0 can do anything

  • ring 3 cannot run several instructions and write to several registers, most notably:

    • cannot change its own ring! Otherwise, it could set itself to ring 0 and rings would be useless.

      In other words, cannot modify the current segment descriptor, which determines the current ring.

    • cannot modify the page tables: https://stackoverflow.com/questions/18431261/how-does-x86-paging-work

      In other words, cannot modify the CR3 register, and paging itself prevents modification of the page tables.

      This prevents one process from seeing the memory of other processes for security / ease of programming reasons.

    • cannot register interrupt handlers. Those are configured by writing to memory locations, which is also prevented by paging.

      Handlers run in ring 0, and would break the security model.

      In other words, cannot use the LGDT and LIDT instructions.

    • cannot do IO instructions like in and out, and thus have arbitrary hardware accesses.

      Otherwise, for example, file permissions would be useless if any program could directly read from disk.

      More precisely thanks to Michael Petch: it is actually possible for the OS to allow IO instructions on ring 3, this is actually controlled by the Task state segment.

      What is not possible is for ring 3 to give itself permission to do so if it didn't have it in the first place.

      Linux always disallows it. See also: https://stackoverflow.com/questions/2711044/why-doesnt-linux-use-the-hardware-context-switch-via-the-tss

How do programs and operating systems transition between rings?

  • when the CPU is turned on, it starts running the initial program in ring 0 (well, kind of, but it is a good approximation). You can think of this initial program as being the kernel (but it is normally a bootloader that then calls the kernel, still in ring 0).

  • when a userland process wants the kernel to do something for it like write to a file, it uses an instruction that generates an interrupt such as int 0x80 or syscall to signal the kernel. x86-64 Linux syscall hello world example:

    .data
    hello_world:
        .ascii "hello world\n"
        hello_world_len = . - hello_world
    .text
    .global _start
    _start:
        /* write */
        mov $1, %rax
        mov $1, %rdi
        mov $hello_world, %rsi
        mov $hello_world_len, %rdx
        syscall

        /* exit */
        mov $60, %rax
        mov $0, %rdi
        syscall

    compile and run:

    as -o hello_world.o hello_world.S
    ld -o hello_world.out hello_world.o
    ./hello_world.out

    GitHub upstream.

    When this happens, the CPU calls an interrupt callback handler which the kernel registered at boot time. Here is a concrete baremetal example that registers a handler and uses it.

    This handler runs in ring 0, which decides if the kernel will allow this action, do the action, and restart the userland program in ring 3.

  • when the exec system call is used (or when the kernel will start /init), the kernel prepares the registers and memory of the new userland process, then it jumps to the entry point and switches the CPU to ring 3

  • If the program tries to do something naughty like write to a forbidden register or memory address (because of paging), the CPU also calls some kernel callback handler in ring 0.

    But since the userland was naughty, the kernel might kill the process this time, or give it a warning with a signal.

  • When the kernel boots, it sets up a hardware clock with some fixed frequency, which generates interrupts periodically.

    This hardware clock generates interrupts that run in ring 0, and allow it to schedule which userland processes to wake up.

    This way, scheduling can happen even if the processes are not making any system calls.

What is the point of having multiple rings?

There are two major advantages of separating kernel and userland:

  • it is easier to make programs as you are more certain one won't interfere with the other. E.g., one userland process does not have to worry about overwriting the memory of another program because of paging, nor about putting hardware in an invalid state for another process.
  • it is more secure. E.g. file permissions and memory separation could prevent a hacking app from reading your bank data. This supposes, of course, that you trust the kernel.

How to play around with it?

I ('I' here, and in the following sentences, means the author Ciro Santilli) have created a bare metal setup that should be a good way to manipulate rings directly: https://github.com/cirosantilli/x86-bare-metal-examples

I didn't have the patience to make a userland example unfortunately, but I did go as far as paging setup, so userland should be feasible. I'd love to see a pull request.

Alternatively, Linux kernel modules run in ring 0, so you can use them to try out privileged operations, e.g. read the control registers: https://stackoverflow.com/questions/7415515/how-to-access-the-control-registers-cr0-cr2-cr3-from-a-program-getting-segmenta/7419306#7419306

Here is a convenient QEMU + Buildroot setup to try it out without killing your host.

The downside of kernel modules is that other kthreads are running and could interfere with your experiments. But in theory you can take over all interrupt handlers with your kernel module and own the system, that would be an interesting project actually.


Q: kernel vs user space - CPU Privilege Rings: Why rings 1 and 2 aren't used?

A: from previous article Why Linux does not use rings 1 and 2


Q: Linux Shell - Nohup Command in Linux?

A: Check the link

  • To redirect both standard output and standard error to a file, use the > filename 2>&1 redirection as shown
  • To start a process in the background use the & symbol at the end of the command.
  • To check the process when resuming the shell use the pgrep command
nohup ./hello.sh > myoutput.txt 2>&1 &
pgrep -a hello.sh

Q: Linux Shell - Cursor disappears when using the Ubuntu terminal?

A: from Stackexchange, use the following command to send the VT320 "unhide" command sequence:

echo -en "\e[?25h"

Q: Linux Shell - grep usage

A: One common grep usage below as an example, more from How to Grep for Text in Files

# find the text "getpid" within all the file names with ".c" or ".h"
# under current directory
# '-n' to display line number within that file
grep -n "getpid" *.[ch]

Q: Linux Shell - find usage

A1 : Examples of find command usage below.

# find all the file names with ".python-version" under current directory
find . -name ".python-version"

A2 : Examples of find command using wildcard *.

# find all the file names with ".py" Python files under current directory
find . -name "*.py"

Q: Linux Shell - find and print usage

A : Find all the files with specific name and print it out.

# find all the file names with ".python-version" under current directory and print it out
find . -name ".python-version" -exec sh -c 'for f; do echo "==> $f <=="; cat "$f"; done' _ {} +

Below is an explanation of how the above command works, starting from the generic form:

-exec sh -c 'SCRIPT' ARG0 {} +

What does this mean?

  • exec: For each found file, execute a command.
  • sh -c 'SCRIPT': Run a new shell (sh) and execute the 'SCRIPT' inside it.
  • for f; do ...; done: This is the shell script being executed.
  • ARG0 or _: This is the $0 (script name placeholder); it's conventionally ignored here by using _.
  • {}: These are the placeholders for the found filenames, passed as arguments to the shell script.
  • +: Tells find to group as many filenames as possible into a single sh execution (more efficient than running one shell per file).

Below is a small shell script run by sh -c, and it uses a for loop to handle multiple files.

'for f; do echo "==> $f <=="; cat "$f"; done'

Let’s go line by line:

for f; do
    # Loops through all the positional parameters $1, $2, ..., which were passed
    # in from the find command (i.e., the .python-version files found).
    # This is the shorthand form of 'for f in "$@"; do ...; done':
    #   - the shorthand 'for f; do ...' is cleaner and quicker when you are just
    #     looping through script or function arguments,
    #   - the long form 'for f in "$@"' is more explicit and may be clearer to
    #     beginners or in more complex scripts.
    # The 'f' can be replaced by any name, like 'file' or 'filename', as long as
    # it is consistent with the '$f' (or '$file', '$filename') used below.

    echo "==> $f <=="
    # Prints a header line showing which file is being printed (useful for readability).

    cat "$f"
    # Outputs the contents of each file.

done
# Ends the loop.

Q: Linux git - git set up for github

A: from Stackoverflow

cd ~
git config --global user.name "myname"
git config --global user.email emailid@yahoo.com
ssh-keygen -t ed25519 -C "email@yahoo.com"
cd ~/.ssh
# copy ~/.ssh/id_ed25519.pub content (removing email in the end of the string) 
# to https://github.com/ - Settings - SSH and GPG keys - New SSH key - Key 
# (Title can be any string defined by user)
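
After adding the key on GitHub, a quick way to verify the setup (a sketch; on success GitHub replies with a greeting that contains your username):

# test the SSH connection to GitHub
ssh -T git@github.com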

Q: Makefile - 跟我一起写Makefile

A: Check out 跟我一起写Makefile


Q: Makefile - GNU make manual

A: Check it out gnu.org


Q: Makefile - One page introduction to Makefile - I

A: This article 一文入门Makefile provides a relatively good introduction to the usage of Makefile.


Q: Makefile - One page introduction to Makefile - II

A: This article Makefile入门(超详细一文读懂) is another detailed one-page introduction to Makefile.


Q: Makefile - Automatic Variables

A: from gnu.org/Automatic-Variables.html

Here is a table of automatic variables:

$@ (the target file to be generated)
The file name of the target of the rule. If the target is an archive member, then ‘$@’ is the name of the archive file. In a pattern rule that has multiple targets (see Introduction to Pattern Rules), ‘$@’ is the name of whichever target caused the rule’s recipe to be run.

$%
The target member name, when the target is an archive member. See Archives. For example, if the target is foo.a(bar.o) then ‘$%’ is bar.o and ‘$@’ is foo.a. ‘$%’ is empty when the target is not an archive member.

$< (the first prerequisite file)
The name of the first prerequisite. If the target got its recipe from an implicit rule, this will be the first prerequisite added by the implicit rule (see Implicit Rules).

$?
The names of all the prerequisites that are newer than the target, with spaces between them. If the target does not exist, all prerequisites will be included. For prerequisites which are archive members, only the named member is used (see Archives).

‘$?’ is useful even in explicit rules when you wish to operate on only the prerequisites that have changed. For example, suppose that an archive named lib is supposed to contain copies of several object files. This rule copies just the changed object files into the archive:

lib: foo.o bar.o lose.o win.o
	ar r lib $?

$^ (all the prerequisite files)
The names of all the prerequisites, with spaces between them. For prerequisites which are archive members, only the named member is used (see Archives). A target has only one prerequisite on each other file it depends on, no matter how many times each file is listed as a prerequisite. So if you list a prerequisite more than once for a target, the value of $^ contains just one copy of the name. This list does not contain any of the order-only prerequisites; for those see the ‘$|’ variable, below.

$+
This is like ‘$^’, but prerequisites listed more than once are duplicated in the order they were listed in the makefile. This is primarily useful for use in linking commands where it is meaningful to repeat library file names in a particular order.

$|
The names of all the order-only prerequisites, with spaces between them.

$*
The stem with which an implicit rule matches (see How Patterns Match). If the target is dir/a.foo.b and the target pattern is a.%.b then the stem is dir/foo. The stem is useful for constructing names of related files.

In a static pattern rule, the stem is part of the file name that matched the ‘%’ in the target pattern.

In an explicit rule, there is no stem; so ‘$*’ cannot be determined in that way. Instead, if the target name ends with a recognized suffix (see Old-Fashioned Suffix Rules), ‘$*’ is set to the target name minus the suffix. For example, if the target name is ‘foo.c’, then ‘$*’ is set to ‘foo’, since ‘.c’ is a suffix. GNU make does this bizarre thing only for compatibility with other implementations of make. You should generally avoid using ‘$*’ except in implicit rules or static pattern rules.

If the target name in an explicit rule does not end with a recognized suffix, ‘$*’ is set to the empty string for that rule.

Below are a few examples describing the usage of automatic variables.

from Makefile入门(超详细一文读懂)

SRC = $(wildcard *.c)
OBJ = $(patsubst %.c, %.o, $(SRC))

ALL: hello.out

hello.out: $(OBJ)
	gcc $< -o $@

$(OBJ): $(SRC)
	gcc -c $< -o $@

Q: Makefile - Functions : wildcard and patsubst

A: from 一文入门Makefile, more functions can be found at gnu.org/Functions for Transforming Text section Text Functions

About makefile functions

Makefiles also provide a large number of functions; the two that are used most often are the following. Note that every makefile function must have a return value. In the examples below, assume the directory contains three files: main.c, func1.c and func2.c.

wildcard:

Used to find files of a specified type under a specified directory; its argument is the directory plus a file pattern, for example:

src = $(wildcard ./src/*.c)

This line means: find all files ending in .c under the ./src directory and assign them to the variable src.

After the command executes, the value of src is: main.c func1.c func2.c.

patsubst:

Pattern substitution. In the following example it is used to take all the .c files found in src, replace them with .o files, and assign the result to obj.

obj = $(patsubst %.c ,%.o ,$(src))

Replace every file with the .c suffix in the src variable with .o.

After the command executes, the value of obj is: main.o func1.o func2.o.

In particular, if you want to place all the .o files under the obj directory, the following can be used:

obj = $(patsubst ./src/%.c, ./obj/%.o, $(src))

Q: Makefile - Export usage in Makefile

A: from stackoverflow

The problem is that export exports the variable to the subshells used by the commands; it is not available for expansion in other assignments.

$ cat make_export_ex_1.mk
export TEST
VARIABLE=echo $$TEST
.PHONY: all
all:
    $(VARIABLE)
$ make -f make_export_ex_1.mk TEST=test
echo $TEST
test
$ echo $TEST

$ 

I cannot reproduce the workaround in the example below yet.
There is a workaround, however: explicitly pass the exported variable as an environment variable to the shell function:

update := $(shell somevar='$(somevar)')

By prepending the shell command with <var>=<val>, that definition is added as an environment variable to the environment that the command sees - this is a generic shell feature.


from Hacking makefile variables to $(shell) environment

It's possible to hack something with .VARIABLES:

# Makefile

# get only the variables with plain names
MAKE_ENV := $(shell echo '$(.VARIABLES)' | awk -v RS=' ' '/^[a-zA-Z0-9]+$$/')
SHELL_EXPORT := $(foreach v,$(MAKE_ENV),$(v)='$($(v))')

export SUFFIX
TARGETS1:=$(shell ./targets.sh)
TARGETS2:=$(shell $(SHELL_EXPORT) ./targets.sh)

.PHONY: all $(TARGETS1) $(TARGETS2)
all: $(TARGETS1) $(TARGETS2)

$(TARGETS1):
	echo TARGET1: $@

$(TARGETS2):
	echo TARGET2: $@

Q: Makefile - Pass arguments to Makefile

A: from How to pass argument to Makefile from command line

$(filter-out $@,$(MAKECMDGOALS))

$(MAKECMDGOALS) is the list of "targets" spelled out on the command line, e.g. "action value1 value2".

$@ is an automatic variable for the name of the target of the rule, in this case "action".

filter-out is a function that removes some elements from a list. So $(filter-out bar, foo bar baz) returns foo baz (it can be more subtle, but we don't need subtlety here).

Put these together and $(filter-out $@,$(MAKECMDGOALS)) returns the list of targets specified on the command line other than the target itself ("action" above, or "pass-arguments" in the example below).

$ cat Makefile
pass-arguments:
    @echo $(filter-out $@,$(MAKECMDGOALS))
$ make pass-arguments arg1 arg2
arg1 arg2

Not yet tested:
Another article shows the dark magic below.

args = $(foreach a,$($(subst -,_,$1)_args),$(if $(value $a),$a="$($a)"))

Q: Makefile - Differences of =, := and ?= in Makefile

A: from stackoverflow,

  • Simple assignment :=
    A simple assignment expression is evaluated only once, at the very first occurrence. For example, if CC :=${GCC} ${FLAGS} during the first encounter is evaluated to gcc -W then each time ${CC} occurs it will be replaced with gcc -W.

  • Recursive assignment =
    A recursive assignment expression is evaluated every time the variable is encountered in the code. For example, a statement like CC = ${GCC} ${FLAGS} will be evaluated only when an action like ${CC} file.c is executed. However, if the variable GCC is reassigned, i.e. GCC=c++, then ${CC} will be converted to c++ -W after the reassignment.

  • Conditional assignment ?=
    Conditional assignment assigns a value to a variable only if it does not have a value

  • Appending +=
    Assume that CC = gcc; then the appending operator is used like CC += -W,
    and CC now has the value gcc -W (see the sketch after this list).
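
A small makefile sketch (file and variable names below are made up) that makes the difference between =, := and ?= visible, in the same transcript style as the export example above:

$ cat assign_ex.mk
# ':=' expands immediately: B is still empty at this point
B_FIRST := $(B)
# '=' expands when the variable is used, so it picks up the later value of B
B_LATER = $(B)
B = gcc -W
# '?=' only assigns if the variable is still unset
B ?= ignored
C ?= default

all:
	@echo "B_FIRST='$(B_FIRST)' B_LATER='$(B_LATER)' C='$(C)'"
$ make -f assign_ex.mk
B_FIRST='' B_LATER='gcc -W' C='default'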

For more check out these tutorials


Q: QEMU - What is QEMU?

A: from How To Emulate Firmware With QEMU - Hardware Hacking Tutorial#07 and June 2022 FreeBSD Developer Summit: QEMU User Mode

QEMU is a processor emulator and supports emulation of ARM, PowerPC, SPARC, x86, x86-64 and more. It supports three types of applications:

  1. A user mode emulator: QEMU can launch Linux processes compiled for one CPU on another CPU, translating syscalls on the fly. This QEMU mode is faster than full system emulation, but is not a perfect abstraction. For instance, if a program reads /proc/cpuinfo, the contents will be returned by the host kernel and so will describe the host CPU instead of the emulated CPU.

  2. A full system emulator: QEMU emulates a full system (virtual machine), including a processor and various peripherals such as disk, ethernet controller etc.

  3. A virtualization environment: used by the KVM and Xen virtualization environments. Not my interest, as there are alternatives such as VMware or VirtualBox.

References:


Q: QEMU - Differences between qemu-arm and qemu-system-arm ?

A: Reference 用Qemu模擬ARM and ubuntu 20.04 安装qemu

The difference between qemu-arm and qemu-system-arm corresponds to items 1 and 2 of the previous question.

  1. qemu-arm is the user-mode emulator (more precisely, a system-call emulator). qemu-arm can only run individual binaries, so you can cross-compile, for example, a hello world program and hand it to qemu-arm to run it, which is simple and efficient.
sudo apt install qemu            # install qemu-arch, qemu-system-arch, and qemu-img, etc.
# or
sudo apt install qemu-user       # install qemu-arch only
  2. qemu-system-arm, on the other hand, is the system emulator: it emulates a whole machine and runs an operating system. With qemu-system-arm you need to put the hello world program onto a disk that the guest operating system can access before it can be run.
sudo apt install qemu-system      # install qemu for all architectures
# or
sudo apt install qemu-system-arm  # install qemu system only for arm

Q: QEMU - A Fast and Portable Dynamic Translator

A: Suggested reading, a very good article: QEMU, a Fast and Portable Dynamic Translator by Fabrice Bellard, from the USENIX '05 Technical Program, and its Simplified Chinese translation QEMU:一个高速、可移植的动态翻译器

QEMU is made of several subsystems:

  • CPU emulator (currently x86, PowerPC, ARM and Sparc)
  • Emulated devices (e.g. VGA display, 16450 serial port, PS/2 mouse and keyboard, IDE hard disk, NE2000 network card)
  • Generic devices (e.g. block devices, character devices, network devices) used to connect the emulated devices to the corresponding host devices
  • Machine descriptions (e.g. PC, PowerMac, Sun4m) instantiating the emulated devices
  • Debugger
  • User interface

A CPU emulator also faces other more classical but difficult problems:

  • Management of the translated code cache
  • Register allocation
  • Condition code optimizations
  • Direct block chaining
  • Memory management
  • Self-modifying code support
  • Exception support
  • Hardware interrupts
  • User mode emulation

Q: QEMU - how to install qemu-system-* (per-architecture system emulators)

A:

# x86 32 bits
sudo apt install qemu-system-i386
# x86 64 bits
sudo apt install qemu-system-x86
# RISC-V 32 bits
sudo apt install qemu-system-riscv32
# RISC-V 64 bits
sudo apt install qemu-system-riscv64

Q: QEMU - a branch of QEMU to support STM32 board - Olimex STM32 P103 Development Kit.

A: QEMU with STM32 Microcontroller Implementation is a branch of QEMU based on v2.1.0. It requires re-compiling/re-installing this STM32-customized QEMU version. See the details in this example.


Q: QEMU - nothing shows up after QEMU VNC server running

After launching QEMU, it launches a VNC server but nothing shows up:

# qemu-launch is a script running qemu-system-arm with Raspberry Pi image
# The script does not contain option of `-nographic`
$ ./qemu-launch.sh
pulseaudio: set_sink_input_volume() failed
pulseaudio: Reason: Invalid argument
pulseaudio: set_sink_input_mute() failed
pulseaudio: Reason: Invalid argument
qemu-system-arm: warning: hub 0 with no nics
VNC server running on 127.0.0.1:5900
vpb_sic_write: Bad register offset 0x2c

A: You can first validate that qemu-system-arm (or another qemu-system-<architecture>) works by adding the -nographic option. This article focuses on running qemu without the -nographic option, where nothing shows up after qemu launches the VNC server.

According to this Stackoverflow article, the message says that this QEMU is using the VNC protocol for graphics output. You can connect a VNC client to the 127.0.0.1:5900 port that it tells you about to see the graphics output.
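
For example, with a VNC viewer installed on the host (the package name below is one common option, an assumption rather than something from the quoted answer):

# install a VNC client and connect to the display QEMU advertised
sudo apt install tigervnc-viewer
vncviewer 127.0.0.1:5900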

If what you wanted was a native X11 window (GTK), then the problem is probably that you didn't have the necessary libraries installed to build the GTK support. QEMU's configure script's default behaviour is "build all the optional features that this host has the libraries installed for, and omit the features where the libraries aren't present". So if you don't have any of the GTK/SDL etc. libraries when you build QEMU, the only thing you will get in the resulting QEMU binary is the lowest-common-denominator VNC support. If you want configure to report an error for a missing feature then you need to pass it the appropriate --enable-whatever option to force the feature to be enabled (in this case, --enable-gtk).

So, let me reinstall QEMU as follows:

# Install libgtk-3-dev for qemu --enable-gtk
$ sudo apt-get install libgtk-3-dev

# (Re-)Install qemu
# I use qemu version 5.0.0 on Ubuntu 16.04, as the latest qemu version 7.1.0
# requires a newer Python than the default Python 3.5 shipped with Ubuntu 16.04.
$ wget https://download.qemu.org/qemu-5.0.0.tar.xz
$ tar xvJf qemu-5.0.0.tar.xz
$ cd qemu-5.0.0
$ ./configure --enable-gtk
$ sudo make install 

Q: QEMU - Examples

A: Suggest Marconi Jiang - QEMU for Beginner


Q: sudo vs su - sudo is preferred over su

A: This article The Difference Between sudo and su Explained gives clear guidelines.

Linux discourages working as root as it may cause unwanted system-wide changes and suggests using sudo instead.

  • su command stands for substitute user

    • The command su - changes to the superuser (root)
    • su [user_name] switches to another user, and the user environment remains the same
    • su - [user_name] switches to another user and loads that target user's environment
    • To switch to the root user and acquire the root environment: sudo -i
    • su can also function like sudo and run a single command as root: su -c [command]
    • While installing an Ubuntu OS, you create a user automatically labeled as part of the sudoers group. However, there is no root account set up; to enable the root user, you need to activate it manually. On the other hand, other Linux distributions, such as Fedora, create both a root and a user account upon installation.
  • sudo is used as a prefix to Linux commands, which allows the logged in user to execute commands that require root privileges.

    • For a user to execute a command that requires the sudo prefix, it has to be part of the sudoers group.
    • To add a user to the sudoers group, run the following command (as root or from an account that already has sudo privileges): # usermod -aG sudo [user_name]
    • To see a list of accounts that belong to the sudoers group run: sudo getent group sudo
  • su requires the password of the target account, while sudo requires the password of the current user.


Q: sudo !! - Re-do the previous shell command with sudo privilege

A: History Substitution in the C Shell
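
A typical use in bash (a small sketch): '!!' is history expansion for the previous command line, so 'sudo !!' re-runs the command you just forgot to prefix with sudo.

apt upgrade        # fails: a normal user is not allowed to upgrade packages
sudo !!            # expands to 'sudo apt upgrade' and re-runs it with root privilege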


Q: Toolchains : Arm - How to install cross compiler for arm on (Ubuntu) Linux

A: There are 2 ways of installation

  1. Installation using packages
# AArch32 bare-metal target (arm-none-eabi)
sudo apt-get install gcc-arm-none-eabi

# AArch32 GNU/Linux target with hard float (arm-none-linux-gnueabihf)
sudo apt install gcc-arm-linux-gnueabihf
  2. Download from official web site Arm GNU Toolchain Downloads and check Release note
# Install on Linux on x86_64 host CPU
tar xJf arm-gnu-toolchain-11.3.rel1-x86_64-<TRIPLE>.tar.xz -C /path/to/install/dir
# set the $PATH, and there you go
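
For example (a sketch; the install directory and the toolchain triple below are placeholders matching whatever tarball was downloaded):

# make the toolchain visible in the current shell
export PATH=/path/to/install/dir/arm-gnu-toolchain-11.3.rel1-x86_64-arm-none-eabi/bin:$PATH
# quick sanity check
arm-none-eabi-gcc --version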

Q: Toolchains : Arm- Difference between arm-none-eabi and arm-linux-gnueabi?

A: (Answer from stackoverflow) Maybe this Linaro link to a description will help.

  • Probably the biggest difference:

    • "The bare-metal ABI will assume a different C library (newlib for example, or even no C library) to the Linux ABI (which assumes glibc). Therefore, the compiler may make different function calls depending on what it believes is available above and beyond the Standard C library."
  • What is the differences between “arm-none-eabi-” and “arm-linux-gnueabihf”? Can I use “arm-linux-gnueabihf” tool chain in bare-metal environment? How do you know which toolchain binary to use where?
    The general form of compiler/linker prefix is as follows:

    • A-B-C
      Where:
      A indicates the target (arm for AArch32 little-endian, aarch64 for AArch64 little-endian).
      B indicates the vendor (none or unknown for generic) . Note that this is optional (Eg: not present in arm-linux-gnueabihf)
      C indicates the ABI in use (linux-gnu* for Linux, linux-android* for Android, elf or eabi for ELF based bare-metal).
      C has values which seem odd until you understand the history behind it (basically AArch32 used to have a linux-gnu ABI which got changed so needed a new name so we have linux-gnueabi). For AArch32 we have linux-gnueabi and linux-gnueabihf which indicate soft float, and hard float respectively.
      The bare-metal ABI will assume a different C library (newlib for example, or even no C library) to the Linux ABI (which assumes glibc). Therefore, the compiler may make different function calls depending on what it believes is available above and beyond the Standard C library.
      Also the bare-metal ABI and Linux ABI for the 32-bit Instruction sets make different assumptions about the storage size of enums and wchar_t which you have to be careful of (not a complete list). And the difference between the 32-bit and 64-bit ABIs are also numerous and subtle (the obvious example being pointer sizes).
      Where can I get compiler to build

Q: Toolchains : Arm - Differences of Compilers between arm-none-eabi- and arm-none-linux-eabi- ? (Mandarin)

A: from 用Qemu模擬ARM

In the Arm compiler world there are two toolchains, arm-none-eabi- and arm-none-linux-eabi-, which are easy to confuse. Refer to the toolchain naming rule:

arch (architecture) - vendor (vendor name) - (os (operating system) -) abi (Application Binary Interface)

Some examples:

  • x86_64-w64-mingw32 = x86_64 is the "arch" field (=AMD64), w64 (=mingw-w64) is the "vendor" field, mingw32 (=the win32 API as seen by GCC)
  • i686-unknown-linux-gnu = 32-bit GNU/Linux toolchain
  • arm-none-linux-gnueabi = ARM architecture, no vendor field, Linux OS, gnueabi ABI.
  • arm-none-eabi = ARM architecture, no vendor, eabi ABI (embedded ABI)

The main difference between the two toolchains lies in the libraries: the former ships far fewer libraries than the latter, and the latter is mainly used to build applications when an operating system is present. The former lacks much of the C standard library, including standard input/output, and is therefore suited to hardware-oriented, microcontroller-style bare-metal development. Consequently, compiling hello.c with arm-none-eabi-gcc will result in linker errors.
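
A small experiment that shows the difference (a sketch; it assumes the gcc-arm-linux-gnueabihf and gcc-arm-none-eabi packages from the installation question above):

# Linux toolchain: links against glibc and produces a Linux user-space binary
arm-linux-gnueabihf-gcc hello.c -o hello
# bare-metal toolchain: the default link usually fails with undefined references
# (e.g. _exit, _write), because newlib expects you to provide the low-level
# syscall stubs (or to pass something like --specs=nosys.specs)
arm-none-eabi-gcc hello.c -o hello.elf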


Q: Toolchains : Arm - Error: Unable to locate package arm-none-eabi-gcc

A: from stackoverflow

I found the solution based on the discussion available at https://unix.stackexchange.com/questions/377345/installing-arm-none-eabi-gcc and the documentation available on https://mynewt.apache.org/latest/get_started/native_install/cross_tools.html#installing-the-arm-cross-toolchain.
The name and structure of the software changed over time. The arm-none-eabi-gcc is gcc-arm-none-eabi now, and so on.

# remove previous gcc-arm-none-eabi, etc., if it was installed before
sudo apt-get remove binutils-arm-none-eabi gcc-arm-none-eabi
# Add apt repository then install
sudo add-apt-repository ppa:team-gcc-arm-embedded/ppa
sudo apt-get update
sudo apt-get install gcc-arm-none-eabi
sudo apt-get install gdb-arm-none-eabi 

Q: What does 'self-hosting' mean?

A: In computer programming, a self-hosting program is one that can modify, and interpret or compile, its own source code. The purpose of a self-hosting program is to create new versions of itself. For example, operating system kernels qualify as self-hosting if they contain and can compile their own source code. See original link


Q: VMware - How to increase HD size of Linux running on VMware?

A: From Stackoverflow

The easiest way is a 2-step approach

  1. Increase Hard Disk Size Setting at VMware on host
    # make sure VM is shutdown
    # VMware - Edit Virtual Machine Setting - Hard Disk (IDE) - Expand

  2. Adjust VM HDD partition - using gparted

sudo apt-get install gparted
sudo gparted
# Resize/Move - Resize - Check


Q: pyenv, pyenv-virtualenv, virtualenv, venv and the differences

A: The Stackoverflow post - What is the difference between venv, pyvenv, pyenv, virtualenv, virtualenvwrapper, pipenv, etc? - lists all the relevant packages, both from the Python standard library and from third parties. Here I just give my own takeaways.

  • venv (Python standard library) - recommended, suitable for beginners; built into Python. See "Recommended for beginners: venv" below.
  • pyvenv (Python standard library) - not recommended; Python 3.8 and later no longer ship it, and its name is easily confused with pyenv.
  • virtualenv (third party) - suitable for experienced users. See "virtualenv" below.
  • pyenv (third party) - recommended for users with some experience. Install with: brew install pyenv pyenv-virtualenv, then source ~/.bashrc. Usage: when you switch into a configured directory, it automatically switches to the configured Python and package versions. See "Recommended: pyenv and pyenv-virtualenv" below.
  • pyenv-virtualenv (third party) - a plug-in of pyenv; see pyenv.

virtualenv

Recommended for beginners: venv
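
A minimal venv quick start (a sketch using the standard-library module from the table above):

# create and use an isolated environment inside the current project
python3 -m venv .venv
source .venv/bin/activate
pip install requests      # example package
deactivate                # leave the environment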

Recommended: pyenv and pyenv-virtualenv
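
A typical pyenv + pyenv-virtualenv flow (a sketch; the Python version and environment name are just examples):

# build a Python version and create a named virtualenv from it
pyenv install 3.10.6
pyenv virtualenv 3.10.6 myproj-3.10
# inside the project directory, pin it via a .python-version file
cd ~/myproj
pyenv local myproj-3.10     # auto-activated on every 'cd' into this directory
python -V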


References


Next article - 操作系統原型-xv6分析與實驗
back to marconi's blog