###### tags: `Linux kernel`

# An investigation of copy_to_user() in Linux

I recently worked through the examples in The Linux Kernel Module Programming Guide. Looking at the copy_to_user() and copy_from_user() APIs, I grew more and more curious: how are these two APIs actually implemented? How does kernel mode reach into user-mode memory?

Let's follow the source on [bootlin](https://elixir.bootlin.com/linux/latest/source) all the way down:

```
copy_to_user(void __user *to, const void *from, unsigned long n)
{
	if (likely(check_copy_size(from, n, true)))
		n = _copy_to_user(to, from, n);
	return n;
}
```

```
unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
{
	might_fault();
	if (should_fail_usercopy())
		return n;
	if (likely(access_ok(to, n))) {
		instrument_copy_to_user(to, from, n);
		n = raw_copy_to_user(to, from, n);
	}
	return n;
}
```

At raw_copy_to_user() the code branches down different paths per architecture. I'm most familiar with ARM, so let's look at the ARM implementation:

```
raw_copy_to_user(void __user *to, const void *from, unsigned long n)
{
#ifndef CONFIG_UACCESS_WITH_MEMCPY
	unsigned int __ua_flags;
	__ua_flags = uaccess_save_and_enable();
	n = arm_copy_to_user(to, from, n);
	uaccess_restore(__ua_flags);
	return n;
#else
	return arm_copy_to_user(to, from, n);
#endif
}
```

```
arm_copy_to_user(void __user *to, const void *from, unsigned long n)
{
	if (n < 64) {
		unsigned long ua_flags = uaccess_save_and_enable();
		n = __copy_to_user_std(to, from, n);
		uaccess_restore(ua_flags);
	} else {
		n = __copy_to_user_memcpy(uaccess_mask_range_ptr(to, n),
					  from, n);
	}
	return n;
}
```

Some of the branches further down aren't entirely clear to me, but eventually the familiar memcpy shows up:

```
__copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
{
	...
	ua_flags = uaccess_save_and_enable();
	memcpy((void *)to, from, tocopy);
	uaccess_restore(ua_flags);
	...
}
```

How can simply sandwiching a memcpy() between uaccess_save_and_enable() and uaccess_restore() make everything work, as if nothing special were going on?
```
static __always_inline unsigned int uaccess_save_and_enable(void)
{
#ifdef CONFIG_CPU_SW_DOMAIN_PAN
	unsigned int old_domain = get_domain();

	/* Set the current domain access to permit user accesses */
	set_domain((old_domain & ~domain_mask(DOMAIN_USER)) |
		   domain_val(DOMAIN_USER, DOMAIN_CLIENT));

	return old_domain;
#else
	return 0;
#endif
}
```

What this does:
1. Read out the current DACR (Domain Access Control Register) value
2. Clear DOMAIN_USER's existing access bits
3. Set DOMAIN_USER's access bits to "client"

I spent a while working with the ARM MMU before, so this part is all familiar. In short, it opens up the access rights of every L1 page table entry whose domain is DOMAIN_USER. In this case, kernel and user are actually looking at the same memory; only the access rights differ.

![](https://i.imgur.com/dYucNfN.png)

It also appears that Linux on ARM only uses four domains: DOMAIN_IO, DOMAIN_KERNEL, DOMAIN_USER, and DOMAIN_VECTORS.

But why use domains to change access rights at all? Doesn't every L2 page table entry already have AP bits, which can be set as fine-grained as a 1 KB range? Domains can't possibly get that fine-grained.

In a previous job I built two MMU-related mechanisms: per-task stack protection by setting page-table AP bits, and protection of peripheral-mapped memory space by setting domains. Having done both, I can answer this question, but a user on Stack Overflow already [answered it very well](https://stackoverflow.com/questions/36613000/domain-in-arm-architecture-means-what), so I'll be lazy and quote him:

> Compare work with the DACR versus updating the MMU tables.
> 1. Change at least the L1 page tables to map correct profile.
> 2. Clean/invalidate the L1 table and others in page table update
> 3. Invalidate the TLB entries (most likely the whole thing for simplicity).
> 4. Invalidate the cache entries in MMU table; probably the whole thing again.

Make no mistake: doing this through page-table AP bits really is that tedious. After changing the AP bits you have to flush the TLB and the caches, whereas with domains you only touch a single register, the DACR, which is far more convenient. On the other hand, the per-task stack protection from my old job could not have been implemented with domains, because ARM has only 16 of them.