[TOC]

# NS-3.41

### Download ns-3
[ns-3.41 release](https://www.nsnam.org/releases/ns-3-41/download/), [download tutorial video](https://www.youtube.com/watch?v=b-HhXjXb4dI)

- It is recommended to download and extract the tarball from the official website; the GitLab repository below does not ship the NetAnim visualization tool.

```shell=
# Download and extract from the official website
$ wget https://www.nsnam.org/releases/ns-allinone-3.41.tar.bz2
$ tar xvf ns-allinone-3.41.tar.bz2
$ cd ns-allinone-3.41
$ ./build.py --enable-examples --enable-tests
# Test that it runs
$ cd ns-3.41
$ ./ns3 build
$ ./ns3 run hello-simulator
$ ./ns3 run first.cc
```

### Download the NR module
```shell=
cd contrib
git clone https://gitlab.com/cttc-lena/nr.git
cd nr
git checkout -b 5g-lena-v3.0.y origin/5g-lena-v3.0.y
cd ../..
./ns3 configure --enable-examples --enable-tests
```
![image](https://hackmd.io/_uploads/Skla4BOpp.png)
```shell=
./ns3 build
```
- Running an example
```shell=
./ns3 show targets | grep nr
```
![image](https://hackmd.io/_uploads/SypoNSupT.png)
```shell=
./ns3 run cttc-nr-demo
```
![image](https://hackmd.io/_uploads/S13s4rOpa.png)

# ns3-ai
###### tags: `Construction`

## Using AI/ML algorithms with ns-3
If we want to use AI algorithms in a custom network protocol or application under ns-3, the direct way is to incorporate an existing AI/ML/DL framework into it. A C++-based framework can be linked against the C++ code of ns-3, but the existing AI/ML frameworks are huge and improving rapidly, and the most popular ones are Python-based, which makes them hard to merge with ns-3 natively. So instead of directly incorporating such huge frameworks into ns-3, ns3-ai provides a way for an ns-3 simulation to communicate with AI/ML frameworks that run as independent processes.

![image](https://hackmd.io/_uploads/SyLy3xPgA.png)

## Features of ns3-ai
ns3-ai does not include any AI algorithms itself, nor does it rely on one particular framework. It provides the means to communicate with an AI framework that is installed separately on the system.
* High-performance data interaction module (using shared memory).
* High-level interfaces for different AI algorithms.
* Easy to integrate with other AI frameworks.

## Requirements
```shell=
# Boost C++ libraries
$ sudo apt install libboost-all-dev
# Protocol buffers
$ sudo apt install libprotobuf-dev protobuf-compiler
# pybind11
$ sudo apt install pybind11-dev
```

## Installation of ns3-ai Module
```shell=
# 1. Clone this repository at contrib/ai
$ cd ns-allinone-3.41/ns-3.41
$ git clone https://github.com/hust-diangroup/ns3-ai.git contrib/ai

# 2. Configure and build the ai library
$ cd ns-allinone-3.41/ns-3.41
$ ./ns3 configure --enable-examples
$ ./ns3 build ai

# 3. Set up the Python interfaces. It's recommended to use a separate Conda environment for ns3-ai.
$ pip3 install -e contrib/ai/python_utils
$ pip3 install -e contrib/ai/model/gym-interface/py
```

* Build the examples (optional). All targets named `ns3ai_*` can be built separately.
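Before building any of the examples, it helps to see the shape of the Python side of the ns3-ai message interface: every example follows the same receive/compute/send loop over shared memory. The sketch below reuses the `Experiment`/`msgInterface` calls from the LTE-CQI example shown later in these notes; the binding module name (`ns3ai_ltecqi_py`) and the struct fields (`wbCqi`, `new_wbCqi`) are specific to that example and will differ for other targets.

```python=
# Minimal ns3-ai message-interface loop (sketch based on the LTE-CQI example).
import ns3ai_ltecqi_py as py_binding
from ns3ai_utils import Experiment

exp = Experiment("ns3ai_ltecqi_msg", "../../../../../", py_binding, handleFinish=True)
msgInterface = exp.run(show_output=True)

while True:
    msgInterface.PyRecvBegin()            # block until the C++ side hands over data
    if msgInterface.PyGetFinished():      # the simulation has ended
        break
    cqi = msgInterface.GetCpp2PyStruct().wbCqi
    msgInterface.PyRecvEnd()

    new_cqi = cqi                         # <-- run the AI/ML algorithm here

    msgInterface.PySendBegin()            # hand the result back to ns-3
    msgInterface.GetPy2CppStruct().new_wbCqi = new_cqi
    msgInterface.PySendEnd()

del exp                                   # tears down the experiment and the shared memory
```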
#### build all examples in all versions
```shell=
./ns3 build ns3ai_apb_gym ns3ai_apb_msg_stru ns3ai_apb_msg_vec ns3ai_multibss ns3ai_rltcp_gym ns3ai_rltcp_msg ns3ai_ratecontrol_constant ns3ai_ratecontrol_ts ns3ai_ltecqi_msg
```

## SmartMME
Sample log lines and the statements that produce them (note: the UE statement is missing a `<< ","` separator between the time and `UE_id`, which is why those two fields run together in the sample line):
```cpp=
// Base Station Power,1.7856e-05,3,0,0.0662636
log_file_bs << "Base Station Power," << Simulator::Now().GetSeconds() << "," << cell_id << ","
            << totaloldEnergyConsumption << "," << totalnewEnergyConsumption << std::endl;

// UE Power,0.00023214114,0,5.22317e-05
log_file_ue << "UE Power," << Simulator::Now().GetSeconds() << UE_id << ","
            << totaloldEnergyConsumption << "," << totalnewEnergyConsumption << std::endl;
```

## Implementation
```shell=
$ git clone https://github.com/Molianoxo/SmartMME.git
$ git clone https://github.com/Molianoxo/LSTM.git
$ cd SmartMME/
$ ./ns3 configure --disable-python --enable-examples && ./ns3 build
$ ./ns3 run scratch/UE_pos.cc
```

### NetAnim in NS3 (visualization GUI)
[Reference](https://blog.csdn.net/Lee_0808/article/details/131225197), [tutorial](https://www.nsnam.org/wiki/NetAnim_3.108)
- To run NetAnim, add the following two lines to the .cc file you wish to execute.
```cpp=
#include "ns3/netanim-module.h"   //add header
...
AnimationInterface anim ("filename.xml");   //before Simulator::Run();
```
- NetAnim example
- The .xml file will be generated under the ns-3.41 directory.
```shell=
./ns3 run wireless-animation
```
- Open NetAnim
```shell=
cd ns-allinone-3.41/netanim-3.109
./NetAnim
```
- Finally, open the .xml file in NetAnim.
- **If you want to run your own file without encountering errors, place it under the scratch directory.**

### Flow monitor (#`Д´)ノ
- Records base-station traffic.
- Add header
```cpp=
#include "ns3/flow-monitor-helper.h"
```
- Typical use
```cpp=
Ptr<FlowMonitor> flowMonitor;
FlowMonitorHelper flowHelper;
flowMonitor = flowHelper.InstallAll();

Simulator::Stop (Seconds (simTime));
Simulator::Run();

flowMonitor->SerializeToXmlFile("NameOfFile.xml", true, true);
```

### cell traffic log
- before main
```cpp=
// bs_traffic (new)
void UpdateEnbTraffic(uint16_t cellId, uint64_t txPackets, uint64_t rxPackets, std::string fileName)
{
    std::ofstream logFile;
    logFile.open(fileName, std::ios::out | std::ios::app);
    logFile << Simulator::Now().GetSeconds() << ", " << cellId << ", " << txPackets << ", " << rxPackets << std::endl;
    logFile.close();
}

void TxRxTrace(std::string context, Ptr<const Packet> packet, const Address& address)
{
    uint32_t nodeId = Names::Find<Node>(context)->GetId();
    Ptr<Node> node = NodeList::GetNode(nodeId);
    Ptr<MmWaveEnbNetDevice> enbNetDevice = node->GetDevice(0)->GetObject<MmWaveEnbNetDevice>();
    uint16_t cellId = enbNetDevice->GetCellId();

    static std::map<uint16_t, uint64_t> txPackets;
    static std::map<uint16_t, uint64_t> rxPackets;

    if (context.find("/Tx") != std::string::npos)
    {
        txPackets[cellId]++;
    }
    else if (context.find("/Rx") != std::string::npos)
    {
        rxPackets[cellId]++;
    }

    // Append the per-cell counters to the log file (this runs on every traced packet)
    UpdateEnbTraffic(cellId, txPackets[cellId], rxPackets[cellId], "enb_traffic_log.txt");
}
```
- main
```cpp=
for (uint32_t i = 0; i < mmWaveEnbNodes.GetN(); i++)
{
    Ptr<MmWaveEnbNetDevice> mmdev = DynamicCast<MmWaveEnbNetDevice>(mmWaveEnbNodes.Get(i)->GetDevice(0));
    if (mmdev)
    {
        // Connect Tx trace source
        mmdev->GetPhy()->TraceConnectWithoutContext("DlPhyTransmission", MakeCallback(&TxRxTrace));
        // Connect Rx trace source
        mmdev->GetPhy()->TraceConnectWithoutContext("UlPhyReception", MakeCallback(&TxRxTrace));
        // Log the traffic updates
        Simulator::Schedule(Seconds(0.1), &UpdateEnbTraffic, mmdev->GetCellId(), 0, 0, "enb_traffic_log.txt");
    }
}
```
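`TxRxTrace` above appends a line on every traced packet. If the intent is a fixed 100 ms logging period instead, the usual ns-3 pattern is a self-rescheduling event. The sketch below assumes the `txPackets`/`rxPackets` counters are moved out of `TxRxTrace` to file scope (here renamed `g_txPackets`/`g_rxPackets`) and reuses the `UpdateEnbTraffic` helper defined above; it is meant to be dropped into the same scenario file.

```cpp=
// Sketch: snapshot the per-cell counters every 100 ms instead of per packet.
std::map<uint16_t, uint64_t> g_txPackets;   // incremented by TxRxTrace
std::map<uint16_t, uint64_t> g_rxPackets;   // incremented by TxRxTrace

void PeriodicEnbTrafficLog(std::string fileName)
{
    for (const auto& kv : g_txPackets)
    {
        uint16_t cellId = kv.first;
        UpdateEnbTraffic(cellId, g_txPackets[cellId], g_rxPackets[cellId], fileName);
    }
    // Re-schedule the next snapshot 100 ms from now.
    Simulator::Schedule(MilliSeconds(100), &PeriodicEnbTrafficLog, fileName);
}

// In main(), before Simulator::Run():
//   Simulator::Schedule(MilliSeconds(100), &PeriodicEnbTrafficLog, std::string("enb_traffic_log.txt"));
```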
### RxPacketTrace
![image](https://hackmd.io/_uploads/ryHiUPFxA.png)
```cpp=
void
MmWavePhyTrace::RxPacketTraceUeCallback (Ptr<MmWavePhyTrace> phyStats, std::string path, RxPacketTraceParams params)
{
  if (!m_rxPacketTraceFile.is_open ())
    {
      m_rxPacketTraceFile.open (m_rxPacketTraceFilename.c_str ());
      m_rxPacketTraceFile << "DL/UL\ttime\tcellId\ttbSize" << std::endl;
      if (!m_rxPacketTraceFile.is_open ())
        {
          NS_FATAL_ERROR ("Could not open tracefile");
        }
    }
  m_rxPacketTraceFile << "DL\t" << Simulator::Now ().GetSeconds () << "\t"
                      << params.m_cellId << "\t" << params.m_tbSize << std::endl;
  if (params.m_corrupt)
    {
      NS_LOG_DEBUG ("DL TB error\t" << params.m_frameNum << "\t" << +params.m_sfNum << "\t"
                                    << +params.m_slotNum << "\t" << +params.m_symStart << "\t"
                                    << +params.m_numSym << "\t" << params.m_rnti << "\t"
                                    << +params.m_ccId << "\t" << params.m_tbSize << "\t"
                                    << +params.m_mcs << "\t" << +params.m_rv << "\t"
                                    << 10 * std::log10 (params.m_sinr) << "\t"
                                    << params.m_tbler << "\t" << params.m_corrupt);
    }
}
```
![image](https://hackmd.io/_uploads/ByKnLwFlC.png)
```cpp=
void
MmWavePhyTrace::RxPacketTraceEnbCallback (Ptr<MmWavePhyTrace> phyStats, std::string path, RxPacketTraceParams params)
{
  if (!m_rxPacketTraceFile.is_open ())
    {
      m_rxPacketTraceFile.open (m_rxPacketTraceFilename.c_str ());
      m_rxPacketTraceFile << "DL/UL\ttime\tcellId\ttbSize" << std::endl;
      if (!m_rxPacketTraceFile.is_open ())
        {
          NS_FATAL_ERROR ("Could not open tracefile");
        }
    }
  m_rxPacketTraceFile << "UL\t" << Simulator::Now ().GetSeconds () << "\t"
                      << params.m_cellId << "\t" << params.m_tbSize << std::endl;
  if (params.m_corrupt)
    {
      NS_LOG_DEBUG ("UL TB error\t" << params.m_frameNum << "\t" << +params.m_sfNum << "\t"
                                    << +params.m_slotNum << "\t" << +params.m_symStart << "\t"
                                    << +params.m_numSym << "\t" << params.m_rnti << "\t"
                                    << +params.m_ccId << "\t" << params.m_tbSize << "\t"
                                    << +params.m_mcs << "\t" << +params.m_rv << "\t"
                                    << 10 * std::log10 (params.m_sinr) << "\t"
                                    << params.m_tbler << "\t" << params.m_corrupt << "\t"
                                    << params.m_sinrMin);
    }
}
```
These two callbacks in SmartMME/src/mmwave/helper/mmwave-phy-trace.cc were modified to trim the trace down to DL/UL, time, cellId, and tbSize. After running UE_pos.cc, once random access (RA) starts, RxPacketTrace.txt is updated continuously.
![image](https://hackmd.io/_uploads/S1ysvDFl0.png)
Note that tbSize is not the same as the packet size; this still needs further investigation.

## Using C++ based ML Frameworks in ns-3
### TensorFlow C API
* libtensorflow installation
* If the model/libtensorflow directory exists, targets using the TensorFlow C API are automatically enabled.
### For x86-64-based operating systems
* Download the prebuilt library from the [TensorFlow official website](https://www.tensorflow.org/install/lang_c?hl=zh-tw).
### Check that TensorFlow is installed
* Example code
```shell=
$ touch hello_tf.c
$ vim hello_tf.c
# hello_tf.c:
#include <stdio.h>
#include <tensorflow/c/c_api.h>
int main() {
  printf("Hello from TensorFlow C library version %s\n", TF_Version());
  return 0;
}
```
* Before
```shell=
$ gcc hello_tf.c -ltensorflow -o hello_tf
hello_tf.c:2:10: fatal error: tensorflow/c/c_api.h: No such file or directory
    2 | #include <tensorflow/c/c_api.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
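# The header is not found because libtensorflow has not been installed yet; the next step extracts it under /usr/local.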
``` * After ```shell= $ sudo tar -C /usr/local -xzf libtensorflow-gpu-linux-x86_64-2.6.0.tar.gz $ sudo ldconfig $ gcc hello_tf.c -ltensorflow -o hello_tf /usr/bin/ld: /usr/local/lib/libtensorflow.so: .dynsym local symbol at index 3 (>= sh_info of 3) $ ./hello_tf Hello from TensorFlow C library version 2.6.0-dev20210809 ``` ### run_online_lstm.py ```shell= import tensorflow as tf from tensorflow import keras import numpy as np import tensorflow as tf import keras from keras.layers import * import sys import gc import keras.backend as K import ns3ai_ltecqi_py as py_binding from ns3ai_utils import Experiment import traceback from tensorflow.keras.layers import Layer class ExpandDimsLayer(Layer): def __init__(self, axis, **kwargs): super(ExpandDimsLayer, self).__init__(**kwargs) self.axis = axis def call(self, inputs): return tf.expand_dims(inputs, axis=self.axis) # MWNL # delta for prediction delta = int(sys.argv[1]) MAX_RBG_NUM = 32 def new_print(filename="log", print_screen=False): old_print = print def print_fun(s): if print_screen: old_print(s) with open(filename, "a+") as f: f.write(s) f.write('\n') return print_fun old_print = print print = new_print(filename="log_" + str(delta), print_screen=False) tf.random.set_seed(0) np.random.seed(1) input_len = 200 pred_len = 40 batch_size = 20 alpha = 0.6 not_train = False lstm_input_vec = Input(shape=(input_len, 1), name="input_vec") dense1 = Dense(30, activation='selu', kernel_regularizer='l1',)( lstm_input_vec[:, :, 0]) old_print(dense1) expand_dims_layer = ExpandDimsLayer(axis=-1) lstm_l1_mse = expand_dims_layer(dense1) lstm_mse = LSTM(20)(lstm_l1_mse) predict_lstm_mse = Dense(1)(lstm_mse) lstm_model_mse = keras.Model(inputs=lstm_input_vec, outputs=predict_lstm_mse) lstm_model_mse.compile(optimizer="adam", loss="MSE") def simple_MSE(y_pred, y_true): return (((y_pred - y_true)**2)).mean() def weighted_MSE(y_pred, y_true): return (((y_pred - y_true)**2) * (1 + np.arange(len(y_pred))) / len(y_pred)).mean() cqi_queue = [] prediction = [] last = [] right = [] corrected_predict = [] target = [] train_data = [] is_train = True CQI = 0 delay_queue = [] exp = Experiment("ns3ai_ltecqi_msg", "../../../../../", py_binding, handleFinish=True) msgInterface = exp.run(show_output=True) try: while True: msgInterface.PyRecvBegin() if msgInterface.PyGetFinished(): break gc.collect() # Get CQI CQI = msgInterface.GetCpp2PyStruct().wbCqi msgInterface.PyRecvEnd() if CQI > 15: break old_print("get: %d" % CQI) # CQI = next(get_CQI) delay_queue.append(CQI) if len(delay_queue) < delta: CQI = delay_queue[-1] else: CQI = delay_queue[-delta] if not_train: msgInterface.PySendBegin() msgInterface.GetPy2CppStruct().new_wbCqi = CQI msgInterface.PySendEnd() continue cqi_queue.append(CQI) if len(cqi_queue) >= input_len + delta: target.append(CQI) if len(cqi_queue) >= input_len: one_data = cqi_queue[-input_len:] train_data.append(one_data) else: msgInterface.PySendBegin() msgInterface.GetPy2CppStruct().new_wbCqi = CQI msgInterface.PySendEnd() old_print("set: %d" % CQI) continue data_to_pred = np.array(one_data).reshape(-1, input_len, 1) / 10 _predict_cqi = lstm_model_mse.predict(data_to_pred) old_print(_predict_cqi) del data_to_pred prediction.append(int(_predict_cqi[0, 0] + 0.49995)) last.append(one_data[-1]) corrected_predict.append(int(_predict_cqi[0, 0] + 0.49995)) del one_data if len(train_data) >= pred_len + delta: err_t = weighted_MSE( np.array(last[(-pred_len - delta):-delta]), np.array(target[-pred_len:])) err_p = weighted_MSE( np.array(prediction[(-pred_len - 
delta):-delta]), np.array(target[-pred_len:])) if err_p <= err_t * alpha: if err_t < 1e-6: corrected_predict[-1] = last[-1] print(" ") print("OK %d %f %f" % ((len(cqi_queue)), err_t, err_p)) right.append(1) pass else: corrected_predict[-1] = last[-1] if err_t <= 1e-6: msgInterface.PySendBegin() msgInterface.GetPy2CppStruct().new_wbCqi = CQI msgInterface.PySendEnd() print("set: %d" % CQI) continue else: print("train %d" % (len(cqi_queue))) right.append(0) lstm_model_mse.fit(x=np.array( train_data[-delta - batch_size:-delta]).reshape( batch_size, input_len, 1) / 10, y=np.array(target[-batch_size:]), batch_size=batch_size, epochs=1, verbose=0) else: corrected_predict[-1] = last[-1] # sm.Set(corrected_predict[-1]) msgInterface.PySendBegin() msgInterface.GetPy2CppStruct().new_wbCqi = CQI msgInterface.PySendEnd() print("set: %d" % corrected_predict[-1]) except Exception as e: exc_type, exc_value, exc_traceback = sys.exc_info() print("Exception occurred: {}".format(e)) print("Traceback:") traceback.print_tb(exc_traceback) exit(1) else: with open("log_" + str(delta), "a+") as f: f.write("\n") if len(right): f.write("rate = %f %%\n" % (sum(right) / len(right))) f.write("MSE_T = %f %%\n" % (simple_MSE(np.array(target[delta:]), np.array(target[:-delta])))) f.write("MSE_p = %f %%\n" % (simple_MSE( np.array(corrected_predict[delta:]), np.array(target[:delta])))) finally: print("Finally exiting...") del exp ``` * Extract tarball under model. ```shell= $ cd YOUR_NS3_DIRECTORY $ mkdir contrib/ai/model/libtensorflow $ cd contrib/ai/model/libtensorflow $ wget https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-linux-x86_64-2.6.0.tar.gz $ tar -xf PATH_TO_TARBALL -C contrib/ai/model/libtensorflow e.q. tar -xf ../../libtensorflow-gpu-linux-x86_64-2.6.0.tar.gz -C contrib/ai/model/libtensorflow ``` * Execute run_online_lstm.py ```shell= python3 run_online_lstm.py 1 ## The parameter `1` is the delta for prediction. ``` ### Simulation process 1. The user reports the CQI to the BS. 2. BS send the CQI to the online training module. 3. Using LSTM to predict the CQI and shipping back to ns-3. 4. Using the feedback value for the next scheduling decision. The scheduler type is Round-Robin, which means that every user has an equal number of times being scheduled. We mainly concern the total throughput as the performance metric. ### Simulation scenario This scenario is implemented to test the performance of high-speed situations, and multi-users are attached to the base station to test the downlink scheduling performance. 
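The heart of run_online_lstm.py above is the rule that decides, for each new sample, whether to feed back the LSTM prediction or fall back to the delayed CQI. Pulled out of the streaming loop it looks roughly like the sketch below (a simplification that reuses the script's `weighted_MSE`, `alpha`, and `delta`; the special case for a near-zero baseline error is omitted).

```python=
import numpy as np

def weighted_MSE(y_pred, y_true):
    # Later samples in the window weigh more, exactly as in run_online_lstm.py.
    return (((y_pred - y_true) ** 2) * (1 + np.arange(len(y_pred))) / len(y_pred)).mean()

def choose_cqi(prediction, last, target, pred_len, delta, alpha=0.6):
    """Return (cqi_to_feed_back, need_retrain) for the newest sample.

    prediction -- LSTM predictions so far
    last       -- last-observed CQIs (the naive "repeat last value" predictor)
    target     -- true CQIs, which become known delta samples later
    """
    err_t = weighted_MSE(np.array(last[-pred_len - delta:-delta]),
                         np.array(target[-pred_len:]))
    err_p = weighted_MSE(np.array(prediction[-pred_len - delta:-delta]),
                         np.array(target[-pred_len:]))
    if err_p <= err_t * alpha:
        # The LSTM clearly beats the naive predictor: trust its prediction.
        return prediction[-1], False
    # Otherwise fall back to the delayed CQI and retrain on recent samples.
    return last[-1], True
```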
In short: the goal is CQI prediction, i.e. improving the transmission performance of a 5G system in high-mobility scenarios by predicting the Channel Quality Indicator (CQI). The motivation is that a rapidly changing channel severely limits 5G throughput; in particular, the adaptive modulation and coding (AMC) performance degrades sharply under high mobility, so reliable channel prediction is necessary. The simulation scenario, simulation process, and Round-Robin scheduler are as described above. To run the example: set up the ns3-ai environment, build the C++ executable and the Python bindings, then run the Python script; the argument `1` is the prediction delta.

## SmartMME + ns3-ai
```shell=
git clone https://github.com/Molianoxo/ns3-mmwave.git
cd ns3-mmwave
./ns3 configure --enable-python --enable-examples && ./ns3 build
# Add the ai module
git clone https://github.com/hust-diangroup/ns3-ai.git contrib/ai
./ns3 configure --enable-examples   # this reports the error below
```
### CMakeLists error
![image](https://hackmd.io/_uploads/HybrJesx0.png)
- Add the following at the very top of ns3-mmwave/scratch/CMakeLists.txt:
```shell=
find_package(Boost REQUIRED COMPONENTS program_options)
find_package(Protobuf REQUIRED)
```
```shell=
./ns3 build ai
```
- `./ns3 run scratch/ue_pos.cc` reports the following error:
![image](https://hackmd.io/_uploads/S1R0NZieA.png)
- Go to ns3-mmwave/contrib/ai/examples/multi-bss/vr-app/model/vr-burst-generator.cc and apply the fix shown below:
![image](https://hackmd.io/_uploads/BJ2brbjlR.png)
- After that, ue_pos.cc runs.

### Create AI model
* Add the name of the new example folder to contrib/ai/examples/CMakeLists.txt:
![截圖 2024-04-16 凌晨4.49.18](https://hackmd.io/_uploads/BkKCzzsgC.png)
```shell=
$ pip3 install cppyy
# A CMakeLists.txt must also be created in contrib/ai/examples/nes for the build to succeed
$ cd ns3-mmwave
$ ./ns3 build ai
```
* Build nes
![截圖 2024-04-16 清晨5.43.29](https://hackmd.io/_uploads/r1Jcymjx0.png)
```shell=
./ns3 build ns3ai_nes_msg_stru
[0/2] Re-checking globbed directories...
[6/7] Linking CXX shared module ....cpython-310-x86_64-linux-gnu.so
lto-wrapper: warning: using serial compilation of 4 LTRANS jobs
[7/7] Linking CXX executable /ho...8.rc1-ns3ai_nes_msg_stru-default
Finished executing the following commands:
cd cmake-cache; /home/eric/.local/bin/cmake --build . -j 19 --target ns3ai_nes_msg_stru ; cd ..
```
![截圖 2024-04-16 清晨5.44.33](https://hackmd.io/_uploads/Hk1AJ7ieA.png)
* Change the binding to PYBIND11_MODULE(ns3ai_nes_py_stru, m)
![截圖 2024-04-16 下午1.28.31](https://hackmd.io/_uploads/H12YntsgR.png)
```shell=
$ ./ns3 build ns3ai_nes_msg_stru
$ cd contrib/ai/examples/nes/use-msg-nes
$ python3 nes.py
set: 4,1; get: 5
set: 1,9; get: 10
set: 1,6; get: 7
set: 8,5; get: 13
set: 1,1; get: 2
set: 9,10; get: 19
set: 2,5; get: 7
```

### change duration time in DlRlc/UlRlc
```shell=
# File: ns3-mmwave/src/mmwave/helper/mmwave-bearer-stats-calculator.cc
```
![image](https://hackmd.io/_uploads/HJnVV3igC.png)
In the same file, the ShowResults function further down adjusts the output format.
![image](https://hackmd.io/_uploads/SkoPPnseR.png)
The UL header is changed to time / cellId / RxPDU / RxBytes and the DL header to time / cellId / TxPDU / TxBytes, since we mainly care about the base-station throughput.
![image](https://hackmd.io/_uploads/rylJuhsgC.png)
![image](https://hackmd.io/_uploads/ryvbu2seR.png)
The actual output is then written by WriteUlResults and WriteDlResults.
![image](https://hackmd.io/_uploads/HkLNOnolR.png)
![image](https://hackmd.io/_uploads/SkTHdhjxR.png)
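Since the goal of these changes is the per-cell downlink throughput, the logged TxBytes per epoch can be converted directly. A small sketch is below; the file name, the column layout (Time, CellId, TxPDU, TxBytes), and the epoch length are assumptions that have to match whatever the modified WriteDlResults actually writes.

```python=
import pandas as pd

# Assumed epoch length of the RLC stats calculator, in seconds (must match the
# duration configured in mmwave-bearer-stats-calculator.cc).
EPOCH = 0.25

# Assumed output of the modified WriteDlResults: one row per (Time, CellId).
df = pd.read_csv("DlRlcStats.txt", sep=r"\s+",
                 names=["Time", "CellId", "TxPDU", "TxBytes"])

# Per-cell, per-epoch downlink throughput in Mbit/s.
df["ThroughputMbps"] = df["TxBytes"] * 8 / (EPOCH * 1e6)

print(df.groupby("CellId")["ThroughputMbps"].mean())
```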
## Data preprocess
```shell=
import pandas as pd

# Read the Excel file into a pandas DataFrame.
traffic_df = pd.read_excel("ns3_traffic.xlsx")

# Group by 'Time' and 'CellId' and sum the 'TxBytes', then reset the index.
grouped_df = traffic_df.groupby(['Time', 'CellId']).sum().reset_index()

# Pivot the table to get 'CellId' as columns and 'Time' as rows.
# Fill values with the sum of 'TxBytes', and fill missing entries with zeros.
pivot_df = grouped_df.pivot(index='Time', columns='CellId', values='TxBytes').fillna(0)

# Reindex the DataFrame to ensure all expected cell columns are present,
# filling any missing ones with zeros.
max_cell_id = pivot_df.columns.max()
pivot_df = pivot_df.reindex(columns=range(1, max_cell_id + 1), fill_value=0)

# Rename the columns to 'cell1', 'cell2', etc.
column_names = ['cell{}'.format(i) for i in range(1, max_cell_id + 1)]
pivot_df.columns = column_names

# Save the transformed DataFrame to a new Excel file.
pivot_df.to_excel("preprocess_ns3_traffic.xlsx", index=True)
```

## New Idea
### LSTM prediction
* Once the LSTM model has predicted the traffic of each upcoming second, the transmit power for each second can be set to the corresponding value.
* Data preparation and model training phase:
    * Use historical data to predict the traffic pattern at a given time and location.
    * Data collection: the training data come from the NS-3 simulation scenario.
    * Data preprocessing
    * Design the LSTM model
    * Train the LSTM model
* Prediction and optimized-parameter generation phase:
    * Generate predictions: use the trained LSTM model and the latest available data to predict the traffic of each upcoming second.
    * Compute the optimized power: the higher the predicted traffic, the higher the power setting; or use a more elaborate function that also accounts for energy saving and UE QoS.
* NS-3 pre-configuration phase:
    * Configure the simulation: set up the base-station nodes in NS-3 and initialize their transmit power to the value the LSTM model predicts for the first second.
    * Set a timer: schedule a function every second that reads the LSTM-predicted power for the next second and updates the base-station transmit power.
* NS-3 simulation run phase:
    * Run the simulation: driven by that timer, the base-station transmit power is adjusted dynamically during the simulation according to the LSTM predictions.

## New Idea
* The base-station positions can be fixed or not; the coverage area must be checked.
* With fixed UE positions, handover is not considered.
* If the traffic demand in an area drops, the corresponding base stations can be switched off completely to save power.
* For a given traffic load, use only as many base stations as needed; only the on/off state of the base stations is controlled.
* Area-focused service: based on where the UEs are concentrated, focus the signal on densely populated areas and reduce coverage of suburban areas to save energy.
* With moving UEs (to make the case to MediaTek and Qualcomm), handover is still not considered.
    * Analyze the UE trajectories to predict which base stations will be needed, and dynamically adjust the on/off state of neighboring base stations.
* When the average traffic of a base station over the past hour is below a threshold (e.g. 10% of its maximum capacity), the base station is switched off automatically.
* When the traffic exceeds a high threshold (e.g. 80% of maximum capacity), make sure the relevant base stations are on, or increase the power of neighboring base stations.
* Threshold decision: based on the collected data and the configured thresholds, decide whether a base station should be switched on or off.
* Define the traffic thresholds: design a threshold-based mathematical model that decides whether a base station should be on or off.
* ![截圖 2024-04-17 凌晨12.27.47](https://hackmd.io/_uploads/SJ3HPX2gR.png)
* ![截圖 2024-04-17 凌晨12.27.38](https://hackmd.io/_uploads/SkMIDQ3gC.png)
- [x] UEs move randomly with RandomWalk2dMobilityModel
- [ ] Test the maximum load a base station can carry
- [ ] Compute the base-station traffic threshold
- [x] Draw the flow chart
- [x] Algorithm
- [ ] Define how much simulated time corresponds to one day
- [ ] Define a QoE criterion
- [ ] Do not judge only on the current instant: observe over a window of delta t so the current period is not misjudged, compare the windowed average against two thresholds (a high and a low level), and allow different thresholds for switching off-to-on and on-to-off.
- [ ] If switching on by mistake is the concern, use a larger default window so a cell is not assumed to be suddenly empty; when trying to save power, a short window is enough for the decision. The tunable parameters are the prediction window and the thresholds it is compared against, and each of them can be predicted.
- [ ] Derive the data rate from the UE SINR and compute the bandwidth part needed to satisfy it; there is a capacity formula that depends on the UE SINR.
* Simulation scenario:
    * In NS-3, place base stations and UEs according to a business district and a residential district, and simulate one day of traffic for both districts according to the UE movement.
* Proposed method:
    * NS-3 generates time series of base-station traffic, which are predicted with an LSTM.
    * For base stations in different areas and at different times, dynamically adjust the traffic threshold used for the switch on/off decision. If the traffic falls below the threshold, the base station becomes a switch-off candidate; if in addition the UE SINR is below its threshold, i.e. both conditions hold, the base station is switched off, always subject to satisfying the UE QoE.
* Algorithm:
    * It combines a traffic threshold with the UE signal-to-interference-plus-noise ratio (SINR) to decide whether to switch a base station off, while still satisfying the UE quality of experience (QoE).
    * Initialize all base stations and UEs, and set the traffic and SINR thresholds.
    * Continuously monitor the traffic of every base station; if it falls below the threshold, mark the base station as a switch-off candidate.
    * If the SINR of the UEs attached to a candidate base station is below the threshold, switch that base station off.
    * Each UE keeps checking its SINR; if it drops below the threshold, the UE tries to connect to the nearest base station.
    * If a UE loses its connection or its SINR stays below the threshold, re-enable a switched-off base station and connect to it so the UE's QoE is preserved.

## UE position log file
```cpp=
void UpdateNodeVelocity(Ptr<Node> node, Vector velocity)
{
    Ptr<ConstantVelocityMobilityModel> mobility = node->GetObject<ConstantVelocityMobilityModel>();
    if (mobility)
    {
        mobility->SetVelocity(velocity);
    }
}

ofstream positionLogFile;

void PositionChangeCallback(uint32_t nodeId, Ptr<const MobilityModel> model)
{
    Vector pos = model->GetPosition();
    double timeInSeconds = Simulator::Now().GetSeconds();
    // positionLogFile << Simulator::Now() << " Node " << nodeId << ": Position: " << pos.x << ", " << pos.y << ", " << pos.z << std::endl;
    positionLogFile << timeInSeconds << " Node " << nodeId << ": Position: " << pos.x << ", " << pos.y << ", " << pos.z << std::endl;
}

void InstallPositionChangeCallback(NodeContainer &nodes)
{
    for (NodeContainer::Iterator i = nodes.Begin(); i != nodes.End(); ++i)
    {
        Ptr<Node> node = *i;
        Ptr<MobilityModel> mobility = node->GetObject<MobilityModel>();
        if (mobility)
        {
            mobility->TraceConnectWithoutContext("CourseChange", MakeBoundCallback(&PositionChangeCallback, node->GetId()));
        }
    }
}
```
Call `InstallPositionChangeCallback(ueNodes);` right after `uemobility.Install(ueNodes);`.

### ConstantVelocityMobilityModel
```cpp=
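// The snippet below installs a ConstantVelocityMobilityModel on every UE and
// schedules UpdatePosition() per node: the UEs move with velocity (30, -50, 0) m/s,
// stop moving at stopTime, reverse direction after startTime, and UpdatePosition
// keeps re-scheduling itself every 5 ms (see its definition further below).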
uemobility.SetMobilityModel("ns3::ConstantVelocityMobilityModel"); ... Vector velocity(30.0, -50.0, 0.0); // 每秒移动的距离 Time stopTime = Seconds(0.1); // 在2秒后停止 Time startTime = Seconds(0.15); // 在5秒后再次开始移动 // 调度每个UE的位置更新,开始时刻为0.1秒,停止时间为2.0秒,重新开始时间为5.0秒 for (uint32_t i = 0; i < ueNodes.GetN(); ++i) { Simulator::Schedule(Seconds(0.0), &UpdatePosition, ueNodes.Get(i), velocity, stopTime, startTime, false); } ``` - UpdatePosition放在main之前 ```cpp= void UpdatePosition(Ptr<Node> node, Vector velocity, Time stopTime, Time startTime, bool isMovingBack) { Time now = Simulator::Now(); Ptr<ConstantVelocityMobilityModel> mobility = node->GetObject<ConstantVelocityMobilityModel>(); if (mobility == nullptr) return; // 如果没有获取到移动模型,直接返回 // 判断是否到达开始反向移动的时间,若是,则反转速度向量 if (now >= startTime && !isMovingBack) { velocity.x = -velocity.x; velocity.y = -velocity.y; isMovingBack = true; } // 更新位置,如果当前时间小于停止时间或者是反向移动状态 if (now < stopTime || isMovingBack) { Vector pos = mobility->GetPosition(); Vector newPos = Vector(pos.x + velocity.x * 0.1, pos.y + velocity.y * 0.1, pos.z); mobility->SetPosition(newPos); } // 继续调度更新 Simulator::Schedule(Seconds(0.005), &UpdatePosition, node, velocity, stopTime, startTime, isMovingBack); } ``` #### 位置生成 1 | 1 7 | 1 ```cpp= MobilityHelper uemobility; uemobility.SetMobilityModel ("ns3::ConstantVelocityMobilityModel"); Ptr<ListPositionAllocator> uePositionAlloc = CreateObject<ListPositionAllocator> (); Ptr<UniformRandomVariable> randomX = CreateObject<UniformRandomVariable>(); Ptr<UniformRandomVariable> randomY = CreateObject<UniformRandomVariable>(); randomX->SetAttribute("Min", DoubleValue(0)); randomX->SetAttribute("Max", DoubleValue(500)); randomY->SetAttribute("Min", DoubleValue(500)); randomY->SetAttribute("Max", DoubleValue(1000)); Ptr<ListPositionAllocator> initialUePositionAlloc = CreateObject<ListPositionAllocator>(); for (uint32_t i = 0; i < 7; ++i) { double x = randomX->GetValue(); double y = randomY->GetValue(); initialUePositionAlloc->Add(Vector(x, y, 1.5)); } //固定的UE (左下 右上 右下) // 添加左下角1个UE的位置 Ptr<UniformRandomVariable> randomXLB = CreateObject<UniformRandomVariable>(); randomXLB->SetAttribute("Min", DoubleValue(0)); randomXLB->SetAttribute("Max", DoubleValue(500)); Ptr<UniformRandomVariable> randomYLB = CreateObject<UniformRandomVariable>(); randomYLB->SetAttribute("Min", DoubleValue(0)); randomYLB->SetAttribute("Max", DoubleValue(500)); initialUePositionAlloc->Add(Vector(randomXLB->GetValue(), randomYLB->GetValue(), 1.5)); // 添加右上角1个UE的位置 Ptr<UniformRandomVariable> randomXRT = CreateObject<UniformRandomVariable>(); randomXRT->SetAttribute("Min", DoubleValue(500)); randomXRT->SetAttribute("Max", DoubleValue(1000)); Ptr<UniformRandomVariable> randomYRT = CreateObject<UniformRandomVariable>(); randomYRT->SetAttribute("Min", DoubleValue(500)); randomYRT->SetAttribute("Max", DoubleValue(1000)); initialUePositionAlloc->Add(Vector(randomXRT->GetValue(), randomYRT->GetValue(), 1.5)); // 添加右下角1个UE的位置 Ptr<UniformRandomVariable> randomXRB = CreateObject<UniformRandomVariable>(); randomXRB->SetAttribute("Min", DoubleValue(500)); randomXRB->SetAttribute("Max", DoubleValue(1000)); Ptr<UniformRandomVariable> randomYRB = CreateObject<UniformRandomVariable>(); randomYRB->SetAttribute("Min", DoubleValue(0)); randomYRB->SetAttribute("Max", DoubleValue(500)); initialUePositionAlloc->Add(Vector(randomXRB->GetValue(), randomYRB->GetValue(), 1.5)); randomX->SetAttribute("Min", DoubleValue(0)); randomX->SetAttribute("Max", DoubleValue(500)); randomY->SetAttribute("Min", DoubleValue(500)); 
randomY->SetAttribute("Max", DoubleValue(1000)); ![Uploading file..._ep1oz077l]() ``` ![image](https://hackmd.io/_uploads/BJE0qw0eR.png) ## mmWaveOutputFilename Log #### MmWaveSinrTime.txt * m_mmWaveSinrOutFile << Simulator::Now ().GetNanoSeconds () / 1.0e9 << " " << imsi << " " << cellId << " " << 10 * std::log10 (sinr) << std::endl; #### MmWaveSwitchStats.txt * m_mmWaveOutFile << "SwitchToMmWave " << Simulator::Now ().GetNanoSeconds () / 1.0e9 << " " << imsi << " " << cellId << " " << rnti << " " << std::endl; #### void SwitchOffmmwaveBS ```cpp= std::map<uint32_t, uint64_t> mmWaveBsTraffic; // 基站ID映射到流量(bytes) void InitializeTrafficForBaseStations(const NodeContainer& mmWaveEnbNodes) { Ptr<UniformRandomVariable> randomTraffic = CreateObject<UniformRandomVariable>(); randomTraffic->SetAttribute("Min", DoubleValue(0)); // 最小流量值 randomTraffic->SetAttribute("Max", DoubleValue(100000)); // 最大流量值 for (uint32_t i = 0; i < mmWaveEnbNodes.GetN(); i++) { mmWaveBsTraffic[i] = static_cast<uint64_t>(randomTraffic->GetValue()); } } void SwitchOffmmwaveBS(Vector UE_pos , int u_id , Ptr<const MobilityModel> model) { // std::cout << "Checking for mmWave BS to switch off at " << Simulator::Now().GetSeconds() << " seconds\n"; for (uint32_t i = 0; i < mmWaveEnbNodes.GetN(); i++) { if (total_UEs_connected[i] == 0 && Bs_status[i]) { std::cout << "Switching off mmWave BS id: " << i << '\n'; Ptr<MmWaveEnbPhy> enbPhy = mmWaveEnbNodes.Get(i)->GetDevice(0)->GetObject<MmWaveEnbNetDevice>()->GetPhy(); Ptr<MmWaveSpectrumPhy> enbdl = enbPhy->GetDlSpectrumPhy(); Ptr<MmWaveSpectrumPhy> enbul = enbPhy->GetUlSpectrumPhy(); Bs_status[i] = false; // mmWave BS to switch off } else { uint64_t traffic = mmWaveBsTraffic[i]; // 假设流量 uint64_t trafficThreshold = 100000; // 流量阈值 if (traffic < trafficThreshold) { std::cout << "Traffic for mmWave BS id " << i << " is below threshold: " << traffic << " bytes\n"; // 你可以在这里添加关闭基站的代码,如果需要的话 } else { std::cout << "Traffic for mmWave BS id " << i << " is above threshold: " << traffic << " bytes\n"; // 你可以在这里添加打开基站的代码,如果需要的话 } } } } void PeriodicSwitchOffmmwaveBS(uint32_t uid, NodeContainer ueNodes) { Ptr<MobilityModel> mModel = ueNodes.Get(uid)->GetObject<MobilityModel>(); Vector UE_pos = mModel->GetPosition(); // std::cout << "Periodic call at simulation time: " << Simulator::Now().GetSeconds() << " seconds\n"; // UE有移動才會觸發SwitchOffmmwaveBS SwitchOffmmwaveBS(UE_pos, uid, mModel); // Call the function directly Simulator::Schedule(Seconds(0.05), &PeriodicSwitchOffmmwaveBS, uid, ueNodes); // Schedule next call } int main(int argc, char *argv[]) { for(uint32_t uid = 0 ; uid < ueNodes.GetN(); uid++) { Simulator::Schedule(Seconds(0.2), &PeriodicSwitchOffmmwaveBS, uid, ueNodes); // Initialize the periodic function call } // 初始化基站的流量 InitializeTrafficForBaseStations(mmWaveEnbNodes); } ``` #### NotifyMmWaveSinr ```cpp= ==================mmwave-bearer-stats-connector.h========================= static void NotifyMmWaveSinr (MmWaveBearerStatsConnector* c, std::string context, uint64_t imsi, uint16_t cellId, long double sinr); =====================ue_pos.cc==================================== #include "ns3/mmwave-bearer-stats-calculator.h" void NotifyMmWaveSinr (std::string context, uint64_t imsi, uint16_t cellId, long double sinr) { std::cout << "* " << Simulator::Now().GetSeconds() << " " << context << " CellId " << cellId << " IMSI " << imsi << " SINR " << sinr << "\n"; } int main(int argc, char *argv[]) { Config::ConnectFailSafe ("/NodeList/*/DeviceList/*/LteEnbRrc/NotifyMmWaveSinr", 
MakeBoundCallback (&NotifyMmWaveSinr)); } ``` #### gNB positions ```cpp= // Install Mobility Model Ptr<ListPositionAllocator> enbPositionAlloc = CreateObject<ListPositionAllocator>(); // Define grid quarters double centerX = 500; // X coordinate of the grid center double centerY = 500; // Y coordinate of the grid center double quarterWidth = 500 / 2; // Half width of the grid quarter double quarterHeight = 500 / 2; // Half height of the grid quarter // Place base stations in the quarters // Left-Top Quarter: 3 base stations enbPositionAlloc->Add(Vector(centerX - 3.5 * quarterWidth / 4, centerY + 2.5 * quarterHeight / 4, 10)); enbPositionAlloc->Add(Vector(centerX - quarterWidth / 3.5, centerY + quarterHeight / 4, 10)); enbPositionAlloc->Add(Vector(centerX - quarterWidth / 2.5, centerY + 3 * quarterHeight / 4, 10)); // Right-Bottom Quarter: 4 base stations enbPositionAlloc->Add(Vector(centerX + quarterWidth / 4, centerY - quarterHeight / 4, 10)); enbPositionAlloc->Add(Vector(centerX + 3 * quarterWidth / 4, centerY - 3 * quarterHeight / 4, 10)); enbPositionAlloc->Add(Vector(centerX + quarterWidth / 2, centerY - quarterHeight / 2, 10)); enbPositionAlloc->Add(Vector(centerX + 3 * quarterWidth / 4, centerY - quarterHeight / 4, 10)); // Left-Bottom Quarter: 2 base stations enbPositionAlloc->Add(Vector(centerX - 3 * quarterWidth / 4, centerY - quarterHeight / 4, 10)); enbPositionAlloc->Add(Vector(centerX - quarterWidth / 4, centerY - 3 * quarterHeight / 4, 10)); // Right-Top Quarter: 1 base station enbPositionAlloc->Add(Vector(centerX + 3 * quarterWidth / 4, centerY + quarterHeight / 4, 10)); // Set mobility model MobilityHelper enbmobility; enbmobility.SetMobilityModel("ns3::ConstantPositionMobilityModel"); enbmobility.SetPositionAllocator(enbPositionAlloc); enbmobility.Install(allEnbNodes); ``` #### Ue positions ```cpp= // RngSeedManager::SetSeed(time(NULL)); // RngSeedManager::SetRun(0); MobilityHelper uemobility; uemobility.SetMobilityModel("ns3::ConstantVelocityMobilityModel"); Ptr<ListPositionAllocator> uePositionAlloc = CreateObject<ListPositionAllocator> (); Ptr<UniformRandomVariable> randomX = CreateObject<UniformRandomVariable>(); Ptr<UniformRandomVariable> randomY = CreateObject<UniformRandomVariable>(); //------------------Fixed UE in specical area-------------- randomX->SetAttribute("Min", DoubleValue(250)); // Min X value randomX->SetAttribute("Max", DoubleValue(500)); // Max X value randomY->SetAttribute("Min", DoubleValue(500)); // Min Y value randomY->SetAttribute("Max", DoubleValue(750)); // Max Y value // 为左下角的4个UE随机分配位置 for (int i = 0; i < 4; ++i) { double x = randomX->GetValue(); double y = randomY->GetValue(); uePositionAlloc->Add(Vector(x, y, 1.5)); } // 为另外两个UE固定分配位置 uePositionAlloc->Add(Vector(250.5, 570.5, 1.5)); // 固定位置1 uePositionAlloc->Add(Vector(270.5, 450.5, 1.5)); // 固定位置2 // for (int i = 0; i < 6; ++i) { // uePositionAlloc->Add(Vector(300 + i * 10, 700 + i * 5, 1.5)); // } uePositionAlloc->Add(Vector(600.5, 350.5, 1.5)); uePositionAlloc->Add(Vector(660.5, 340.5, 1.5)); uePositionAlloc->Add(Vector(450.5, 350.5, 1.5)); uePositionAlloc->Add(Vector(600.5, 600.5, 1.5)); std::vector<ns3::Vector> initialPositions; uemobility.SetPositionAllocator(uePositionAlloc); uemobility.Install(ueNodes); PrintUEPositions(ueNodes); // Vector velocity(30.0, -50.0, 0.0); // Vector velocity(150.0, -250.0, 0.0); // Time downtown_stopTime = Seconds(0.35); // Time startTime = Seconds(0.4); // // Time reverseStartTime = Seconds(0.25); Time reverseStartTime; for (uint32_t i = 0; i < 
ueNodes.GetN(); ++i) { if (i < 4) { // 只有左下角的前4個UE會移動 Simulator::Schedule(Seconds(0.25), &UpdatePosition, ueNodes.Get(i), velocity, downtown_stopTime, startTime, false, reverseStartTime); } } Simulator::Schedule(Seconds(0.25), &RecordPositions, ueNodes, std::ref(positionLogFile)); void PrintUEPositions(NodeContainer& ueNodes) { // std::ofstream outFile; // outFile.open("UEPositions.txt"); // Open a file to store the positions for (uint32_t i = 0; i < ueNodes.GetN(); ++i) { Ptr<Node> node = ueNodes.Get(i); Ptr<MobilityModel> mobility = node->GetObject<MobilityModel>(); Vector pos = mobility->GetPosition(); std::cout << "UE " << i << ": Position(" << pos.x << ", " << pos.y << ", " << pos.z << ")" << std::endl; // outFile << "UE " << i << ": Position(" << pos.x << ", " << pos.y << ", " << pos.z << ")" << std::endl; } // outFile.close(); // Close the file after writing } ofstream positionLogFile; void UpdatePosition(Ptr<Node> node, Vector velocity, Time downtown_stopTime, Time startTime, bool& isMovingBack, Time& reverseStartTime) { Time now = Simulator::Now(); Ptr<ConstantVelocityMobilityModel> mobility = node->GetObject<ConstantVelocityMobilityModel>(); if (!mobility) return; // 如果沒有移動模型,則直接返回 // 如果到達開始反向移動的時間,且當前不是反向狀態 if (now >= startTime && !isMovingBack) { velocity.x = -velocity.x; velocity.y = -velocity.y; isMovingBack = true; reverseStartTime = now; // 設定折返開始的時間 } // 更新位置,如果當前時間小於停止時間或者是反向移動狀態 if (now < downtown_stopTime || isMovingBack) { Vector pos = mobility->GetPosition(); Vector newPos = Vector(pos.x + velocity.x * 0.1, pos.y + velocity.y * 0.1, pos.z); mobility->SetPosition(newPos); } // 無條件調度下一次更新,除非在反向移動0.15秒後 // if (isMovingBack && (now - reverseStartTime) >= Seconds(0.05)) { if (isMovingBack && (now - reverseStartTime) >= Seconds(0.05)) { return; // 如果已經折返0.15秒,則不再調度 } Simulator::Schedule(Seconds(0.005), &UpdatePosition, node, velocity, downtown_stopTime, startTime, isMovingBack, reverseStartTime); } void RecordPositions(NodeContainer &nodes, std::ofstream &logFile) { double timeInSeconds = Simulator::Now().GetSeconds(); for (NodeContainer::Iterator i = nodes.Begin(); i != nodes.End(); ++i) { Ptr<Node> node = *i; Ptr<MobilityModel> mobility = node->GetObject<MobilityModel>(); if (mobility) { Vector pos = mobility->GetPosition(); uint32_t nodeId = node->GetId(); logFile << timeInSeconds << " Node " << nodeId << ": Position: " << pos.x << ", " << pos.y << ", " << pos.z << std::endl; } } // 重新调度RecordPositions函数以保持每0.005秒记录一次位置 Simulator::Schedule(MilliSeconds(5), &RecordPositions, nodes, std::ref(logFile)); } int main(int argc, char *argv[]) { positionLogFile.open("positionLog.txt"); PrintUEPositions(ueNodes); Simulator::Stop (Seconds (simTime)); AnimationInterface anim ("ue_pos.xml"); Simulator::Run (); positionLogFile.close(); 加這一行 } ``` ### ue_pos.cc ```cpp= ```
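To tie the New Idea section to the SwitchOffmmwaveBS code above, the windowed two-threshold (hysteresis) on/off decision could be factored into a small helper like the sketch below. This is a standalone illustration with placeholder names and constants (the window length and both thresholds are assumptions), not code taken from ue_pos.cc.

```cpp=
// Sketch: hysteresis decision over a sliding window of per-BS traffic samples.
#include <cstddef>
#include <deque>
#include <numeric>

struct BsSwitchDecider
{
    std::deque<double> window;           // recent traffic samples (bytes per period)
    std::size_t windowLen = 20;          // observation window length (placeholder)
    double lowThreshold = 10000.0;       // windowed average below this -> switch off (placeholder)
    double highThreshold = 80000.0;      // windowed average above this -> switch back on (placeholder)
    bool isOn = true;

    // Feed one traffic sample and return the new on/off state.
    bool Update(double trafficSample)
    {
        window.push_back(trafficSample);
        if (window.size() > windowLen)
        {
            window.pop_front();
        }
        double avg = std::accumulate(window.begin(), window.end(), 0.0) / window.size();

        if (isOn && window.size() == windowLen && avg < lowThreshold)
        {
            isOn = false;   // a whole quiet window: candidate confirmed, switch off
        }
        else if (!isOn && avg > highThreshold)
        {
            isOn = true;    // demand is back: switch on again
        }
        return isOn;
    }
};
```

In the scenario code, `Update()` would be fed from the per-cell traffic log once per period, and the UE SINR condition from the New Idea section would still be checked before actually switching a cell off.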