# RHEL EX200
## Bash basic
#### Command execution path
```shell=
$ which hello
~/bin/hello
```
#### PATH environment variable
```shell=
echo $PATH
```
#### Strong vs. weak quoting (single vs. double quotes)
```shell=
$ echo '# not a comment #' # strong (single) quotes: special characters lose their meaning
$ var=$(hostname -s); echo $var
MacBook-Pro
$ echo "hello $(hostname -s)"
hello MacBook-Pro
$ echo 'hello $(hostname -s)'
hello $(hostname -s)
$ echo '"Hello, world"'
"Hello, world"
```
#### Output in Shell Script
```shell=
$ cat hello
#!/bin/bash
echo "Hello, world"
echo "ERROR: Houston, we have a problem." >&2
$ bash hello 2> hello.log
Hello, world
$ cat hello.log
ERROR: Houston, we have a problem.
```
#### append or overwrite
```shell=
$ echo "hello1" > aaa.txt && cat aaa.txt
hello1
$ echo "hello2" > aaa.txt && cat aaa.txt
hello2
$ echo "hello3" >> aaa.txt && cat aaa.txt
hello2
hello3
```
#### error code
```shell=
$ test 1 -gt 0; echo $?
0
$ test 1 -gt 2; echo $?
1
# newer syntax: [ ... ]
$ [ 1 -eq 1 ]; echo $?
0
$ [ 1 -eq 3 ]; echo $?
1
# unary operators
# empty-string check
$ STRING=''; [ -z "$STRING" ]; echo $?
0
# non-empty (nonzero-length) string check
$ STRING='abc'; [ -n "$STRING" ]; echo $?
0
```
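The exit status is what makes these tests useful in scripts. A minimal sketch of the usual patterns (the path `/etc/hosts` is just an example):
```shell=
# run a command only when the test succeeds (exit status 0)
if [ -f /etc/hosts ]; then
    echo "file exists"
fi
# short-circuit forms do the same thing in one line
[ -f /etc/hosts ] && echo "file exists"
[ -f /etc/missing ] || echo "file missing"
```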
#### File descriptors
```shell=
# 0 = standard input (keyboard)
# 1 = standard output (screen)
# 2 = standard error
$ systemctl is-active psacct > /dev/null 2>&1
# send stdout to /dev/null, then point stderr at the same place, discarding everything
```
> Note: when in doubt, `man test` is the quick rescue.
>
# Scheduling Future Tasks
||一次性工作 |週期性工作|
|--------| -------- | -------- |
|**Service**| atd.service|crond.service|
|**Path**|/var/spool/at| /var/spool/cron|
|**Command**|at, atq, atrm|crontab|
|**Output**|Mail to Job owner|Mail to Job owner|
### at
```shell=
echo "date" >> ~/myjob.txt | at now +3min # generate
atq # query schedule job
# assign a queue
at -q g teatime # submit a job to queue g, to run at teatime
at -c 2 # print the contents of job 2
atrm 2 # remove index 2 job
```
### crontab
```shell=
crontab -l #list
crontab -e #edit
crontab -r #remove all
crontab -u <user> [-l|-e|-r] # operate on another user's crontab (as root)
```
> crontab fields are straightforward:
```shell=
# minute hour day-of-month month day-of-week command
```
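A sketch of a complete entry (the script path is a made-up example):
```shell=
# minute hour day-of-month month day-of-week command
# run /usr/local/bin/backup.sh at 02:30 on weekdays (Mon-Fri)
30 2 * * 1-5 /usr/local/bin/backup.sh
```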
# Tuning
```shell=
yum install -y tuned
systemctl enable --now tuned
tuned-adm active
tuned-adm profile <mode>
tuned-adm off
tuned-adm active
```
# Process scheduling

```shell=
renice -n <level> <PID>
```
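A short sketch showing how `nice` (start) and `renice` (adjust) fit together; the workload is an arbitrary example:
```shell=
# start a CPU-heavy job with reduced priority (higher nice value)
nice -n 10 sha1sum /dev/zero &
ps -o pid,nice,comm -p $!   # inspect the nice value of the background job
renice -n 19 $!             # lower its priority further while it runs
kill $!                     # clean up the demo job
```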
# SELinux
#### display SELinux contexts
```shell=
ps axZ # -Z option
ps -ZC httpd #specify service
ls -Z /var/www # specify path
```
#### changing mode
```shell=
getenforce
setenforce
setenforce [0 | 1] # 1 = Enforcing, 0 = Permissive
getenforce
```
> we can also configure settings from **/etc/selinux/config**
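The persistent setting lives in that file; the relevant keys look roughly like this (only one SELINUX value at a time, and a change takes effect at the next reboot):
```shell=
# /etc/selinux/config
SELINUX=enforcing       # enforcing | permissive | disabled
SELINUXTYPE=targeted
```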
#### controlling selinux file contexts
```shell=
# changing context of a file
mkdir /virtual
ls -Zd /virtual
chcon -t httpd_sys_content_t /virtual # label the directory directly (not stored in the policy)
ls -Zd /virtual
restorecon -v /virtual
# define rules
semanage fcontext [-a|-d|-l|-m]
semanage fcontext -a -t httpd_sys_content_t '/virtual(/.*)?' # add rule
restorecon -Rv /virtual # apply the rule above
# adjusting selinux policy w/ boolean
getsebool -a
getsebool httpd_enable_homedirs
setsebool httpd_enable_homedirs on # -P for persistent
semanage boolean -l | grep 'httpd_enable_homedirs'
getsebool httpd_enable_homedirs
# investigating SELinux issues
tail /var/log/audit/audit.log
tail /var/log/messages
sealert -l <uuid>
ausearch -m AVC -ts recent # -m message type (AVC), -ts start time
```
# Manage basic storage
Workflow: create the partition => create the file system => mount the file system
### managing partition w/ parted
```shell=
# display partition table
parted <disk_name> print # sizes are reported in powers of 10 (SI units)
# writing partition table
parted <disk_name> mklabel [msdos|gpt]
# creating an MBR partition (interactive)
parted <disk_name>
mkpart
primary
xfs # parted /dev/vdb help mkpart
2048s
1000MB
# size = 1000MB-2048s
quit
udevadm settle
# delete partition
parted <disk_name>
print # check the partition number
rm <partition_number>
# MBR partition in one shot
parted /dev/vdb mkpart primary xfs 2048s 1000MB
# GPT partition in one shot
parted /dev/vdb mkpart userdata xfs 2048s 1000MB
```
```shell=
# creating FS
mkfs.xfs <partition_name>
# mounting FS
mount <partition_name> /mnt
# display mounting status
mount | grep 'nvme0n1p3'
# mounting FS persistently (preferred UUID)
## get uuid
lsblk --fs
"UUID=<uuid> /mnt xfs defaults 0 0" >> /etc/fstab
```
### Managing Swap space
```shell=
# creating a swap space
# 1. define size
# 2. set to linux-swap
parted <disk_name>
print
mkpart
swap1 # partition name
linux-swap # FS type
1000MB
1257MB
print
q
udevadm settle
# formatting the device (applies a swap signature)
mkswap <partition_name>
# Inspect
free
swapon --show
# Activating a swaping space
swapon <partition_name>
swapoff <partition_name>
# Swap partition in one shot
parted /dev/vdb mkpart myswap linux-swap 1001MB 1501MB
"UUID=<uuid> swap swap defaults 0 0" >> /etc/fstab
# swap space priority **(larger number first)**
"UUID=<uuid> swap swap pri=4 0 0" >> /etc/fstab
"UUID=<uuid> swap swap pri=10 0 0" >> /etc/fstab
> wiping
> dd if = /dev/zeor bs=1M count=20 of=/dev/vdb
```
# LVM
```shell=
# prepare physical device
parted -s /dev/vdb mkpart primary 1MiB 769MiB
parted -s /dev/vdb mkpart primary 770MiB 1026MiB
parted -s /dev/vdb set 1 lvm on
parted -s /dev/vdb set 2 lvm on
# create pv
pvcreate /dev/vdb2 /dev/vdb1
# create vg
vgcreate vg01 /dev/vdb2 /dev/vdb1 # 2pv in 1vg
# create lv
lvcreate -n lv01 -L 700M vg01
# add FS
mkfs.xfs /dev/vg01/lv01
# mount
mkdir /mnt/data
"/dev/vg01/lv01 /mnt/data xfs defaults 1 2" >> /etc/fstab
mount /mnt/data
# removing LVM
# need to comment the mount point on /etc/fstab
umount /mnt/data
lvremove /dev/vg01/lv01
vgremove vg01
pvremove /dev/vdb2 /dev/vdb1
```
#### Inspect LVM
```shell=
pvdisplay /dev/vdb1
vgdisplay vg01
lvdisplay /dev/mapper/vg01-lv01
```
#### Extending VG
```shell=
parted -s /dev/vdb mkpart primary 1027MiB 1539MiB
parted -s /dev/vdb set 3 lvm on
pvcreate /dev/vdb3
vgextend vg01 /dev/vdb3
```
#### Extending LV & FS
```shell=
vgdisplay vg01 # check free space
lvextend -L +300M /dev/vg01/lv01
xfs_growfs /mnt/data # XFS: grow through the mount point
# resize2fs /dev/vg01/lv01 # ext4 equivalent
df -h /mnt/data
```
```shell=
# extending swap space
vgdisplay vgname
swapoff -v /dev/vgname/lvname
lvextend -l +extents /dev/vgname/lvname
mkswap /dev/vgname/lvname
swapon -v /dev/vgname/lvname
```
#### Reducing VG
```shell=
# move the physical extents off the PV first
pvmove /dev/vdb3
vgreduce vg01 /dev/vdb3
```
# Advanced Storage features
```shell=
yum install -y stratis-cli stratisd
systemctl enable --now stratisd
```
#### Assembling Block Storage into Stratis Pools
```shell=
stratis pool create pool1 /dev/vdb
stratis pool list
stratis pool add-data pool1 /dev/vdc
stratis blockdev list pool1
```
#### Managing Stratis FS
```shell=
stratis filesystem create pool1 fs1
stratis filesystem list
stratis filesystem snapshot pool1 fs1 snapshot1 # appears under /stratis/<pool>/<snapshot>
stratis filesystem destroy pool1 fs1
```
#### Persistently Mounting Stratis File Systems
```shell=
lsblk --output=UUID /stratis/pool1/fs1
echo "UUID=<uuid> /dir xfs defaults,x-systemd.requires=stratisd.service 0 0" >> /etc/fstab
```
### VDO
```shell=
# enabling VDO
yum install vdo kmod-kvdo
# creating a VDO Volume
vdo create --name=vdo1 --device=/dev/vdd --vdoLogicalSize=50G
# verify
vdo list
# Analyzing VDO volume
vdo status --name=vdo1
# VDO statistics
vdostats --human-readable
# create a file system on vdo1 (-K skips discards at mkfs time, faster on VDO)
mkfs.xfs -K /dev/mapper/vdo1
mkdir /mnt/vdo1
mount /dev/mapper/vdo1 /mnt/vdo1
```
> Tip: if the root filesystem comes up read-only, remount it read-write: mount -o remount,rw /
# Accessing NAS
1. Identify the export
```shell=
sudo mkdir mountpoint
sudo mount serverb:/ mountpoint
sudo ls mountpoint
```
2. Create the mount point
```shell=
mkdir -p mountpoint
```
3. Mount
```shell=
sudo mount -t nfs -o rw,sync serverb:/share mountpoint
```
```shell=
# mount persistently
"serverb:/share /mountpoint nfs rw,sync 0 0" >> /etc/fstab
```
### AutoFS
1. Install
```shell=
yum install autofs
```
2. Modify setting
```shell=
# add a master map file
vim /etc/auto.master.d/demo.autofs
>> /shares /etc/auto.demo (auto.demo holds the map entries; naming convention: auto.xxx)
vim /etc/auto.demo
>> work -rw,sync serverb:/shares/work
sudo systemctl enable --now autofs
```
#### Direct maps
```shell=
# master map file
>> /- /etc/auto.direct # /- as the base directory
# /etc/auto.direct
>> /mnt/docs -rw,sync serverb:/shares/docs
```
#### Indirect wildcard maps
```shell=
# /etc/auto.demo
>> * -rw,sync serverb:/shares/&
```
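With the wildcard map in place, simply accessing a matching subdirectory triggers the mount; a quick check, assuming the `work` export from the earlier example:
```shell=
ls /shares/work            # autofs mounts serverb:/shares/work on demand
mount | grep /shares/work  # confirm the NFS mount appeared
```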
# Controlling the Boot Process
```shell=
# list all boot target
systemctl list-dependencies graphical.target | grep target
systemctl list-units --type=target --all
# Selecting a target at runtime
systemctl isolate multi-user.target
systemctl cat graphical.target
systemctl cat cryptsetup.target
# Setting a default target
systemctl get-default
systemctl set-default graphical.target
```
### Get all systemctl dependencies list
```shell=
systemctl list-dependencies graphical.target | grep target
```
##### Selecting a different target at boot time
1. reboot
2. interrupt the boot loader menu by pressing any key
3. move the cursor to the kernel entry
4. e to edit
5. move the cursor to the line starting with linux
6. append systemd.unit=rescue.target
7. CTRL+X to boot
### Emergency boot
1. Interrupt the boot loader, press e, and append the following to the line starting with linux (^linux):
```shell=
systemd.unit=emergency.target # kernel parameter appended at boot
mount # check all mount points
mount -o rw,remount /
mount -a
# edit /etc/fstab to fix
systemctl daemon-reload
mount -a
init 6
```
### PASSWORD RESETTING
1. reboot
2. on the select kernel page
3. press e to edit
4. on the line starting with linux (^linux), append "rd.break" at the end
5. ctrl+x
```shell=
mount -o rw,remount /sysroot
chroot /sysroot
passwd root
touch /.autorelabel # trigger a full SELinux relabel on the next boot
```
### Reparing FS
1. reboot
2. on the select kernel page
3. press e to edit
4. on the line starting with linux (^linux), append "systemd.unit=emergency.target" at the end
5. ctrl+x
```shell=
mount -o rw,remount /
mount -a
vim /etc/fstab # fix the broken entry
systemctl daemon-reload
reboot
```
### Inspect Logs
> system journals are volatile by default => /run/log/journal
```shell=
vim /etc/systemd/journald.conf
# Storage=persistent
systemctl restart systemd-journald.service
journalctl -b -1 -p err # errors from the previous boot
```
# Managing Network security
```shell=
# set-default zone to dmz
firewall-cmd --set-default-zone=dmz
firewall-cmd --permanent --zone=internal --add-source=192.168.0.0/24
firewall-cmd --permanent --zone=internal --add-service=mysql
firewall-cmd --permanent --zone=public --add-port=82/tcp
firewall-cmd --reload
```
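A few query commands to double-check the result afterwards:
```shell=
firewall-cmd --get-default-zone
firewall-cmd --get-active-zones
firewall-cmd --zone=internal --list-all
firewall-cmd --zone=public --list-ports
```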
### service check
```shell=
systemctl status nftables
systemctl mask nftables
systemctl status firewalld
```
## Controlling SELinux Port Labeling
```shell=
# Listing port labels
semanage port -l
# Managing Port Labels
semanage port -a -t gopher_port_t -p tcp 71
# check local change
semanage port -l -C
# Removing port labels
semanage port -d -t gopher_port_t -p tcp 71
# Modifying port bindings (gopher_port_t=>http_port_t)
semanage port -m -t http_port_t -p tcp 71
# sealert
sealert -a /var/log/audit/audit.log
```
# Containers
### Basic commands
```shell=
# Install podman
yum module install container-tools
# info
podman info
# Image
podman images
podman login
podman search
podman pull
podman inspect
podman rmi
# Container
podman ps
podman run -d --name <name> -p <hostPort>:<containerPort> -e <key>=<value> -v <hostDir>:<containerDir>:Z <image>
podman port -a
# firewall-cmd --add-port=8000/tcp
podman stop | start | restart | kill
podman exec -it
podman rm <container> # -af removes all containers, forced
# Inspect Container images
podman images
podman inspect <repo_name>
skopeo inspect docker://registry.lab.example.com/rhel8/httpd-24
```
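Putting the placeholder options together, a hedged example using the httpd-24 image from the registry above (container name, published port, environment variable, and host directory are all made up for illustration):
```shell=
podman run -d --name web \
  -p 8080:8080 \
  -e HTTPD_MPM=event \
  -v ~/web_content:/var/www/html:Z \
  registry.lab.example.com/rhel8/httpd-24
podman ps        # verify the container is running
podman port web  # show the published port mapping
```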
### Managing containers as Services
```shell=
loginctl enable-linger
loginctl show-user user
# Linger=yes
loginctl disable-linger
loginctl show-user user
# Linger=no
```

### Creating the Systemd Unit File
```shell=
mkdir -p ~/.config/systemd/user/
cd ~/.config/systemd/user/
podman generate systemd --name web --files --new
```
### Creating & Managing Systemd User services
```shell=
ls ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable myapp.service
systemctl --user start myapp.service
```
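`podman generate systemd --files --new` names the unit `container-<name>.service`, so for the `web` container generated above the check would look like:
```shell=
systemctl --user status container-web.service
podman ps   # the container should be running inside the user session
```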
### LAB example
```shell=
podman run -d --name inventorydb -p 13306:3306 \
  -v /home/podsvc/db_data:/var/lib/mysql/data:Z \
  -e MYSQL_USER=operator1 -e MYSQL_PASSWORD=redhat \
  -e MYSQL_DATABASE=inventory -e MYSQL_ROOT_PASSWORD=redhat <image>
```
### Network settings
```shell=
hostnamectl set-hostname serverb.lab.example.com
nmcli connection show
#NAME UUID TYPE DEVICE
#Wired connection 1 81b08161-8925-3cb9-a94c-e3c827e2adc3 ethernet eth0
nmcli con add con-name exam ifname eth0 type ethernet ipv4.method manual ipv4.addresses 172.25.250.11/24 ipv4.gateway 172.25.250.254 ipv4.dns 172.25.250.254
nmcli connection up exam
nmcli connection delete Wired\ connection\ 1
nmcli connection show
# NAME UUID TYPE DEVICE
# exam 6e17956e-1085-456d-b33c-92b1198ff833
```
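A couple of commands to confirm the addressing took effect:
```shell=
nmcli connection show exam | grep ipv4
ip addr show eth0
ip route
cat /etc/resolv.conf   # DNS server pushed by the connection
```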
# Past Exam Questions
## Resetting the root Password (RH134 Chapter 10)
[student@workstation ~]$ lab boot-resetting start
Set the root password for servera to redhat.
```shell=
# 1. Reboot and interrupt at the boot menu
# 2. Press e to edit
# 3. On the line starting with linux, append rd.break
# 4. Ctrl+x to boot
mount -o rw,remount /sysroot
chroot /sysroot
passwd root
touch /.autorelabel
```
## Creating a User Account (RH124 Chapter 6)
Create a user totoro with a user-id of 3179. The password for this user should be password.
```shell=
useradd -u 3179 totoro
echo "password" | passwd totoro --stdin
```
## Managing Password Policy (RH124 Chapter 6)
All new local users should have passwords that expire after 168 days.
```shell=
vim /etc/login.defs
# PASS_MAX_DAYS 99999 -> 168
```
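`/etc/login.defs` only affects accounts created afterwards; a quick check, plus the `chage` command if an existing user also needs the limit:
```shell=
grep ^PASS_MAX_DAYS /etc/login.defs
chage -M 168 <existing_user>   # only if the task covers existing users
chage -l <existing_user>
```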
## Configuring sudo Privileges (RH124 Chapter 6)
Configure sudo privileges for group devops such that its members can execute administrative commands without providing a password.
```shell=
vim /etc/sudoers.d/devops
# %devops ALL=(ALL) NOPASSWD:ALL
```
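Worth validating the drop-in syntax before logging out; `visudo` can check a specific file:
```shell=
visudo -cf /etc/sudoers.d/devops
sudo -l   # run as a devops member to confirm NOPASSWD: ALL
```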
## Managing Users and Groups (RH124 Chapter 6)
Create the following users, groups, and group memberships:
A group named sysadms, GID 1400
A user andy who belongs to sysadms as a secondary group, with the home directory set to /netusers/andy
A user bill who also belongs to sysadms as a secondary group
A user cindy who does not have access to an interactive shell on the system, and who is not a member of sysadms
andy, bill, and cindy should all have the password of password
```shell=
groupadd -g 1400 sysadms
useradd andy -G sysadms -d /netusers/andy
useradd bill -G sysadms
useradd cindy -s /sbin/nologin
# check
cat /etc/passwd
echo "password" | passwd andy --stdin
echo "password" | passwd bill --stdin
echo "password" | passwd cindy --stdin
```
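Quick verification of the memberships, shells, and home directories:
```shell=
id andy; id bill; id cindy
getent group sysadms
getent passwd andy bill cindy
```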
## Creating a Collaborative Directory (RH124 Chapter 7)
Create a collaborative directory /home/sysadms with the following characteristics:
Group ownership of /home/sysadms is sysadms
The directory should be readable, writable, and accessible to members of sysadms , but not to any other user. (It is understood that root has access to all files and directories on the system)
Files created in /home/sysadms automatically have group ownership set to the sysadms group
```shell=
mkdir /home/sysadms
chown :sysadms /home/sysadms
chmod 770 /home/sysadms
chmod g+s /home/sysadms
```
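The result should show the setgid bit and the sysadms group ownership:
```shell=
ls -ld /home/sysadms
# expected: drwxrws--- ... root sysadms ... /home/sysadms
```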
## Configuring Default Permissions (RH124 Chapter 7)
Configure permissions for user bill such that:
All newly created files for user bill should have -rw-r----- as the default permission
All newly created directories for the same user should have drwxr-x--- as the default permission
```shell=
su - bill
vim .bashrc
# umask 027
```
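To confirm, log in as bill and create a test file and directory (names are arbitrary):
```shell=
su - bill
umask                       # should print 0027
touch testfile && mkdir testdir
ls -ld testfile testdir     # -rw-r----- and drwxr-x---
```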
## Configuring NTP (RH124 Chapter 11)
Configure servera as an NTP client of classroom.example.com
```shell=
vim /etc/chrony.conf
server classroom.example.com iburst
systemctl restart chronyd.service
```
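`chronyc` can confirm the new time source is in use:
```shell=
chronyc sources -v
```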
## Creating a Backup Archive (RH124 Chapter 13)
Create a tar archive named /root/archive.tar.bz2 which contains the content of /usr/local. This file must be compressed using bzip2.
```shell=
tar -cjf /root/archive.tar.bz2 /usr/local
```
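Listing the archive confirms the compression and contents without extracting anything:
```shell=
tar -tjf /root/archive.tar.bz2 | head
file /root/archive.tar.bz2   # should report bzip2 compressed data
```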
## Configuring Software Repositories (RH124 Chapter 14)
[root@servera ~]# rm -rf /etc/yum.repos.d/*
Configure servera to use these locations as default repositories.
http://content.example.com/rhel8.2/x86_64/dvd/BaseOS
http://content.example.com/rhel8.2/x86_64/dvd/AppStream
```shell=
rm -rf /etc/yum.repos.d/*
vim /etc/yum.repos.d/ex200.repo
[BaseOS]
name=BaseOS
baseurl=http://content.example.com/rhel8.2/x86_64/dvd/BaseOS
enabled=1
gpgcheck=0
[AppStream]
name=AppStream
baseurl=http://content.example.com/rhel8.2/x86_64/dvd/AppStream
enabled=1
gpgcheck=0
```
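Verify that both repositories are picked up:
```shell=
yum clean all
yum repolist
```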
## Finding Files (RH124 Chapter 15)
Locate all the files owned by cindy and place a copy of them in the /root/findresults directory.
```shell=
mkdir -p /root/findresults
find / -user cindy -type f -exec cp {} /root/findresults/ \;
```
## Searching for Specific Text (RH134 Chapter 1)
Find all lines in the file /usr/share/mime/packages/freedesktop.org.xml that contain the string ich. Put a copy of all these lines in the original order in the file /root/lines. /root/lines should contain no empty lines and all lines must be exact copies of the original lines in the original file.
```shell=
grep 'ich' /usr/share/mime/packages/freedesktop.org.xml > /root/lines
```
## Scheduling Jobs (RH134 Chapter 2)
The user bill must configure a cron job that runs at 23:58 local time every day and executes: /usr/bin/echo Hello
Configure a cron job that runs every 2 minutes and executes: logger "EX200 is so easy" as the user cindy
```shell=
crontab -e -u bill
58 23 * * * /usr/bin/echo Hello
vim /etc/cron.d/cindy
*/2 * * * * cindy logger "EX200 is so easy"
```
## Configuring System Tuning (RH134 Chapter 3)
Choose the recommended tuned profile for your servera and set it as the default.
```shell=
yum install -y tuned
systemctl enable --now tuned
tuned-adm recommend
# >> virtual-guest
tuned-adm profile virtual-guest
tuned-adm active
```
## Configuring ACL Permissions (RH134 Chapter 4)
Copy the file /etc/fstab to /var/tmp/fstab. Configure the permissions of /var/tmp/fstab:
the file /var/tmp/fstab is owned by the root user
the file /var/tmp/fstab belongs to the group root
the file /var/tmp/fstab should not be executable by anyone
the user andy is able to read and write /var/tmp/fstab
the user bill can neither write nor read /var/tmp/fstab
all other users (current or future) have the ability to read /var/tmp/fstab
```shell=
cp /etc/fstab /var/tmp/fstab
chown root:root /var/tmp/fstab
chmod a-x /var/tmp/fstab
getfacl /var/tmp/fstab
setfacl -m u:andy:rw- /var/tmp/fstab
setfacl -m u:bill:--- /var/tmp/fstab
getfacl /var/tmp/fstab
```
## Automounting User Home Directories (RH134 Chapter 9)
[student@workstation ~]$ lab netstorage-nfs start
Configure autofs to automount the home directories of remote users as follows:
serverb.lab.example.com NFS-exports /shares to your system. This filesystem contains a pre-configured home directory for the user andy
andy's home directory is serverb.lab.example.com:/shares/public
andy's home directory should be automounted locally beneath /netusers as /netusers/andy
home directories must be writable by their users
andy's password is password
```shell=
yum install -y autofs
# set master file
vim /etc/auto.master.d/andy.autofs
# /netusers /etc/auto.andy
vim /etc/auto.andy
# andy -rw,sync,fstype=nfs4 serverb.lab.example.com:/shares/public
systemctl enable --now autofs
```
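Logging in as andy should trigger the automount of the home directory:
```shell=
su - andy            # password: password
pwd                  # /netusers/andy
touch ~/writetest    # the home directory must be writable
```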
## SELinux and Firewall (RH134 Chapter 11)
[student@workstation ~]$ lab netsecurity-ports start
A web server running on non-standard port 82 is having issues serving content.
Debug and fix the issues as necessary so that:
The web server on your system can serve all the existing HTML files from /var/www/html(Note: Do not remove or otherwise alter the existing file content)
The web server serves this content on port 82
The web server starts automatically at system boot time
```shell=
# investigate the failure
cat /var/log/messages
cat /var/log/audit/audit.log
sealert -a /var/log/audit/audit.log
# it looks like port 82 is being blocked
# check SELinux first
semanage port -l | grep http
# >> http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000
semanage port -a -t http_port_t -p tcp 82
systemctl enable httpd
systemctl restart httpd
firewall-cmd --permanent --add-port=82/tcp
firewall-cmd --reload
```
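A quick functional check from the same host (the exact HTML file name depends on the existing content, so adjust the URL):
```shell=
systemctl status httpd
curl http://localhost:82/
```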
## Configuring and Managing Containers (RH134 Chapter 13)
[student@workstation ~]$ lab containers-basic start
Create a container named logsrv using the httpd-24(registry.lab.example.com/rhel8/httpd-24) image
that is available from your registry server
Configure it to run as a systemd service that should run for the existing user student
The service should be named container-logsrv and should automatically start after a system reboot
Configure persistent storage for a container from the previous task:
Create a directory on the container host named journal_local under /home/student
Configure the container service to automatically mount the directory /home/student/journal_local under /var/log/journal on the container
When the command: logger -p user.info "EX200 is so easy"
is run on the container the message "EX200 is so easy" should appear in both /var/log/journal/test.log on the container as well as in /home/student/journal_local/test.log on the container host.
```shell=
yum module -y install container-tools
podman login registry.lab.example.com
podman pull registry.lab.example.com/rhel8/httpd-24
podman images
mkdir /home/student/journal_local
podman run -d --name logsrv -v /home/student/journal_local:/var/log/journal:Z registry.lab.example.com/rhel8/httpd-24
# testing
###############################################################
podman exec -it -u root logsrv /bin/bash
vim /etc/systemd/journald.conf
# Storage=auto -> Storage=persistent
exit
podman restart logsrv
podman exec -it -u root logsrv /bin/bash
echo 'EX200 is so easy' > /var/log/journal/test.log
exit
###############################################################
mkdir -p ~/.config/systemd/user
cd ~/.config/systemd/user
podman generate systemd --name logsrv --files --new
systemctl --user daemon-reload
systemctl --user enable --now container-logsrv.service
loginctl enable-linger student
loginctl show-user student
```
## Configuring the Network (RH124 Chapter 12)
Configure serverb to have the following network configuration:
Hostname: serverb.lab.example.com
IP address: 172.25.250.11
Netmask: 255.255.255.0
Gateway: 172.25.250.254
Name server: 172.25.250.254
You can add a new connection and then delete the old one, or modify the existing connection directly.
```shell=
hostnamectl set-hostname serverb.lab.example.com
nmcli connection show
nmcli con add con-name exam ifname eth0 type ethernet ipv4.method manual ipv4.addresses 172.25.250.11/24 ipv4.gateway 172.25.250.254 ipv4.dns 172.25.250.254
nmcli connection up exam
nmcli connection delete Wired\ connection\ 1
nmcli connection show
```
## Configuring a Simple Script (RH134 Chapter 1)
Configure the application HAHAHA.sh such that when HAHAHA.sh is run as user student it displays the message
EX200 is so easy
```bash=
#!/bin/bash
if [ "$(id -un)" == 'student' ]; then
    echo "EX200 is so easy"
fi
# then make it executable:
chmod a+x /usr/local/bin/HAHAHA.sh
```
## Creating a File-Search Script (RH134 Chapter 1)
Create a script named catchme.sh to locate files under /usr
The script should locate all files under /usr which are larger than 30k and smaller than 10M in size and have set user ID (SETUID) permissions
Place the script catchme.sh under /usr/local/bin
When executed, the script should save the list of found files into /root/find_list
```bash=
vim /usr/local/bin/catchme.sh
# #!/bin/bash
# find /usr -size +30k -size -10M -perm /4000 > /root/find_list
chmod a+x /usr/local/bin/catchme.sh
```
## Creating Swap Space (RH134 Chapter 6)
[root@serverb ~]#
Run all the commands in http://content.example.com/pub/disk.txt first.
Add an additional swap partition of 256MiB on serverb.
The swap partition should be activated automatically when your system boots.
Do not remove or alter any existing partitions and file system.
```shell=
lsblk -f # /dev/vdb is ok
parted -s /dev/vdb mkpart swap2 linux-swap 1025MiB 1281MiB # mkpart (not mkswap); GPT partition named swap2
mkswap /dev/vdb2
vim /etc/fstab
# UUID=XXXXX swap swap defaults 0 0
free -h
swapon -a
free -h
```
## Extending a Logical Volume (RH134 Chapter 7)
Resize the logical volume alien and its filesystem to 600 MiB. Make sure that the filesystem contents remain intact. Note: partitions are seldom exactly the same size requested, so a size within the range of 540 MiB to 660 MiB is acceptable.
```shell=
lvs
vgs # check whether the VG has enough free space
lvextend -L 600MiB -r /dev/predator/alien
```
## Creating a Logical Volume (RH134 Chapter 7)
Create a new logical volume according to the following requirements:
The logical volume is named avatar and belongs to the fox volume group and has a size of 50 extents
Logical volumes in the fox volume group should have an extent size of 16 MiB
Format the new logical volume with a vfat filesystem. The logical volume should be automatically mounted under /mnt/avatar at system boot time
```shell=
# 50 extents x 16 MiB = 800 MiB, so the partition must be at least that big
parted /dev/vdb unit MiB print
parted /dev/vdb mkpart pv2 1281MiB 2305MiB
parted /dev/vdb set 3 lvm on
pvcreate /dev/vdb3
vgcreate -s 16m fox /dev/vdb3
lvcreate -l 50 -n avatar fox
mkfs.vfat /dev/fox/avatar
mkdir /mnt/avatar
vim /etc/fstab
# /dev/fox/avatar /mnt/avatar vfat defaults 1 2
mount -a
df -h
```
## Creating a VDO Volume (RH134 Chapter 8)
Create a new VDO volume according to the following requirements:
Use the unpartitioned disk(/dev/vdc)
The volume is named starwar
The volume has a logical size of 50G
The volume is formatted with the xfs filesystem
The volume is mounted (at boot time) under /mnt/startrek
```shell=
yum install -y vdo
systemctl enable --now vdo.service
vdo create --name=starwar --device=/dev/vdc --vdoLogicalSize=50G
mkfs.xfs -K /dev/mapper/starwar
lsblk -fs /dev/mapper/starwar
mkdir /mnt/startrek
vim /etc/fstab
# UUID=<uuid> /mnt/startrek xfs defaults,x-systemd.requires=vdo.service 0 0
mount -a
df -h
```