# HPDS Group Meeting Presentation
[TOC]
## Schedule and Selected Titles
Please keep the Date column sorted from newest to oldest.
Paper links should preferably be DOI URLs.
Default order: 柏叡→承瀚→守維→俊凱→冠宏
| Date | Speaker | Minute Taker | Presentation Title | Keywords |
| ------------- | ------- | ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 7/13 | 承瀚 | 守維 | Thesis | (None) |
| 7/13 | 柏叡 | 守維 | Thesis | (None) |
| 7/13 | 冠宏 | 守維 | Thesis | (None) |
| 7/6 | 俊凱 | 承瀚 | Thesis | (None) |
| 7/6 | 守維 | 承瀚 | Thesis | (None) |
| 6/29 | 承瀚 | 俊凱 | Thesis | (None) |
| 6/29 | 柏叡 | 俊凱 | Thesis | (None) |
| 6/29 | 冠宏 | 俊凱 | Thesis | (None) |
| 6/22 | 俊凱 | 冠宏 | Thesis | (None) |
| 6/22 | 守維 | 冠宏 | Thesis | (None) |
| 6/15 | 承瀚 | 守維 | Thesis | (None) |
| 6/15 | 柏叡 | 守維 | Thesis | (None) |
| 6/15 | 冠宏 | 守維 | Thesis | (None) |
| 6/8 | 俊凱 | 柏叡 | Working Report | (None) |
| 6/8 | 守維 | 柏叡 | Working Report | (None) |
| 6/1 | 承瀚 | 俊凱 | Thesis: Improving Self-training for Botnet Detection under Unlabeled Class Imbalance via HDBSCAN-based Zonal Partitioning | (None) |
| 6/1 | 柏叡 | 俊凱 | Thesis: Optimizing Ceph for Deep Learning via Configuration Auto-Tuning | (None) |
| 6/1 | 冠宏 | 俊凱 | Working Report | (None) |
| 5/25 | 俊凱 | 承瀚 | Working Report | (None) |
| 5/25 | 守維 | 承瀚 | Thesis: ONPL | (None) |
| 5/18 | 承瀚 | 守維 | Working Report | (None) |
| 5/18 | 柏叡 | 守維 | Working Report: Thesis Introduction and Experiments | (None) |
| 5/18 | 冠宏 | 守維 | Working Report | (None) |
| 5/11 | 俊凱 | 冠宏 | Working Report | (None) |
| 5/11 | 守維 | 冠宏 | Working Report | (None) |
| 5/4 | 冠宏 | 俊凱 | Working Report | (None) |
| 4/27 | 俊凱 | 柏叡 | Working Report | (None) |
| 4/27 | 守維 | 柏叡 | Working Report | (None) |
| 4/20 | 承瀚 | 守維 | Working Report | (None) |
| 4/20 | 柏叡 | 守維 | [Magpie: Automatically Tuning Static Parameters for Distributed File Systems using Deep Reinforcement Learning](https://doi.org/10.1109/IC2E55432.2022.00023) | Performance optimization,<br>Parameter tuning,<br>Reinforcement learning,<br>Distributed storage systems,<br>Cluster configuration |
| 4/20 | 冠宏 | 守維 | Working Report | (None) |
| 4/13 | 俊凱 | 承瀚 | Working Report | (None) |
| 4/13 | 守維 | 承瀚 | Working Report | (None) |
| 3/29 | 承瀚 | 俊凱 | Working Report | (None) |
| 3/29 | 冠宏 | 俊凱 | Working Report | (None) |
| 3/23 | 俊凱 | 冠宏 | Working Report | (None) |
| 3/23 | 守維 | 冠宏 | Working Report | (None) |
| 3/16 | 承瀚 | 守維 | Working Report | (None) |
| 3/16 | 柏叡 | 守維 | Working Report: Thesis Introduction and Related Work | (None) |
| 3/16 | 冠宏 | 守維 | Working Report | (None) |
| 3/9 | 俊凱 | 柏叡 | Working Report | (None) |
| 3/9 | 守維 | 柏叡 | Working Report: RPL Evaluation and Proposed Enhancement Method | (None) |
| 3/2 | 承瀚 | 俊凱 | [DroidEvolver: Self-Evolving Android Malware Detection System](https://doi.org/10.1109/EuroSP.2019.00014) | Feature extraction,<br>Malware,<br>Aging,<br>Training,<br>Adaptation models,<br>Manuals,<br>Labeling |
| 3/2 | 柏叡 | 俊凱 | Working Report and [Storage Benchmarking with Deep Learning Workloads [PDF]](https://newtraell.cs.uchicago.edu/files/tr_authentic/TR-2021-01.pdf) | Storage systems,<br>Object storage,<br>Database,<br>DL,<br>Performance evaluation |
| 3/2 | 冠宏 | 俊凱 | Working Report | (None) |
| 2/23 | 俊凱 | 承瀚 | Working Report | (None) |
| 2/23 | 守維 | 承瀚 | Working Report | (None) |
| 2/16 | 承瀚 | 守維 | Working Report | (None) |
| 2/16 | 柏叡 | 守維 | Working Report: Disk Benchmark Summary | (None) |
| 2/16 | 冠宏 | 守維 | Working Report | (None) |
| 2/9 | 守維 | 柏叡 | Working Report: Experimental Insights: Problems and Solutions | (None) |
| 1/26 | 承瀚 | 俊凱 | Working Report | (None) |
| 1/26 | 柏叡 | 俊凱 | Working Report: Degree Thesis Proposal | (None) |
| 1/26 | 冠宏 | 俊凱 | Working Report | (None) |
| 1/26 | 俊凱 | 冠宏 | Working Report | (None) |
| 1/26 | 守維 | 冠宏 | Working Report | (None) |
| 1/12 | 柏叡18 | 守維 | Working Report: Degree Thesis Proposals | (None) |
| 1/12 | 冠宏16 | 守維 | Working Report | (None) |
| 1/5 | 守維17 | 承瀚 | Working Report: TabNet Implementation & Evaluation | (None) |
| 12/29 | 承瀚17 | 俊凱 | Working Report | (None) |
| 12/29 | 柏叡17 | 俊凱 | Working Report: A Survey of Metrics for Distributed System Benchmarks | (None) |
| 12/28 | 冠宏15 | 俊凱 | Working Report | (None) |
| 12/22 | 俊凱16 | 柏叡 | [A Lean and Modular Two-Stage Network Intrusion Detection System for IoT Traffic](https://doi.org/10.1109/LATINCOM62985.2024.10770670) | Training,<br>Measurement,<br>Network intrusion detection,<br>Complexity theory,<br>Proposals,<br>Internet of Things,<br>Security,<br>Computer crime,<br>Optimization,<br>Open source software |
| 12/22 | 守維16 | 柏叡 | Working Report: BERT NIDS Implementation | (None) |
| 12/15 | 承瀚16 | 守維 | Working Report | (None) |
| 12/15 | 柏叡16 | 守維 | Working Report: Disk & Compression Benchmark | (None) |
| 12/15 | 冠宏14 | 守維 | Working Report | (None) |
| 12/8 | 俊凱15 | 冠宏 | Working Report | (None) |
| 12/8 | 守維15 | 冠宏 | [Unknown Network Attack Detection Based on Open-Set Recognition and Active Learning in Drone Network](https://doi.org/10.1002/ett.4212) | (Unknown) |
| 12/1 | 承瀚15 | 俊凱 | Working Report | (None) |
| 12/1 | 柏叡15 | 俊凱 | Working Report: Grafana and Gzip | (None) |
| 12/1 | 冠宏13 | 俊凱 | Working Report | (None) |
| 11/24 | 俊凱14 | 承瀚 | Working Report | (None) |
| 11/24 | 守維14 | 承瀚 | Working Report | (None) |
| 11/17 | 承瀚14 | 俊凱 | Working Report | (None) |
| 11/17 | 柏叡14 | 俊凱 | Working Report: Disk Benchmark & Environment Setup | (None) |
| 11/17 | 冠宏12 | 俊凱 | [Predicting Network Attacks with CNN by Constructing Images from Netflow Data](https://doi.org/10.1109/BigDataSecurity-HPSC-IDS.2019.00022) | NetFlow data,<br>Intrusion detection,<br>Deep Learning,<br>CNN,<br>ResNet |
| 11/21 | 俊凱13 | (self) | Working Report | (None) |
| 11/10 | 守維13 | 柏叡 | Working Report: Open Set Recognition (OSR) Task | (None) |
| 11/3 | 承瀚13 | 守維 | [Confidence May Cheat: Self-Training on Graph Neural Networks under Distribution Shift](https://doi.org/10.1145/3485447.3512172) | Graph Neural Networks,<br>Self-Training,<br>Information Gain |
| 11/3 | 柏叡13 | 守維 | Working Report: DFS Choices for Our Lab: Weighing Pros and Cons | (None) |
| 11/3 | 冠宏11 | 守維 | Working Report | (None) |
| 10/20 | 俊凱12 | 冠宏 | Working Report: Transformer-LSTM | (None) |
| 10/20 | 守維12 | 冠宏 | Working Report: Open Set Recognition (OSR) Task | (None) |
| 10/13 | 承瀚12 | 守維 | Working Report | (None) |
| 10/13 | 冠宏10 | 守維 | Working Report | (None) |
| 10/13 | 柏叡12 | 守維 | Handover from Tse-An Lin (see below) | (None) |
| 10/6 | 守維11 | 承瀚 | Working Report | (None) |
| 10/6 | 俊凱11 | 承瀚 | [An Intrusion Detection Method Based on Transformer-LSTM Model](https://doi.org/10.1109/NNICE58320.2023.10105733) | Intrusion detection,<br>Transformer,<br>LSTM,<br>Deep learning |
| 9/29 | 柏叡11 | 俊凱 | Handover from Tse-An Lin (see below) | (None) |
| 9/29 | 承瀚11 | 俊凱 | [Energy-based Out-of-distribution Detection](https://proceedings.neurips.cc/paper/2020/hash/f5496252609c43eb8a3d147ab9b9c006-Abstract.html) | (None) |
| 9/29 | 冠宏9 | 俊凱 | [Considerably Improving Clustering Algorithms Using UMAP Dimensionality Reduction Technique: A Comparative Study](https://doi.org/10.1007/978-3-030-51935-3_34) | Dimensionality reduction,<br>UMAP,<br>Clustering,<br>Embedding manifold,<br>Big data analytics,<br>ML,<br>Comparative study |
| 9/22 | 俊凱10 | 柏叡 | [Network Intrusion Detection via Flow-to-Image Conversion and Vision Transformer Classification](https://doi.org/10.1109/ACCESS.2022.3200034) | NIDS,<br>Flow-to-image conversion,<br>CNN,<br>Vision transformers,<br>Image classification |
| 9/22 | 守維10 | 柏叡 | [A Hybrid Approach to Network Intrusion Detection Based on Graph Neural Networks and Transformer Architectures](https://openreview.net/forum?id=0a7OXKwmw9&noteId=QFLMpJgTu0) | Graph neural network,<br>GraphSAGE,<br>Transformer,<br>NIDS |
| 9/15 | 柏叡10 | 守維 | Handover from Tse-An Lin (see below) | (None) |
| 9/15 | 承瀚10 | 守維 | Working Report | (None) |
| 9/8 | 柏叡9 | 冠宏 | Handover from Tse-An Lin: A MLOps Framework for Enhancing Self-Training utilizing In-Memory Caching of Pseudo-Labels | (None) |
| 9/8 | 俊凱9 | 冠宏 | Using Consumer Lag for Autoscaling on Kafka-Centric Model Serving | (None) |
| 9/8 | 守維9 | 冠宏 | Working Report: Implementation of FlowTransformer | (None) |
| 8/25 | 承瀚9 | 俊凱 | Working Report | (None) |
| 8/25 | 冠宏8 | 俊凱 | [Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results](https://proceedings.neurips.cc/paper/2017/hash/68053af2923e00204c3ca7c6a3150cf7-Abstract.html) | (None) |
| 8/18 | 俊凱8 | 柏叡 | [An Attention-Based Convolutional Neural Network for Intrusion Detection Model](https://doi.org/10.1109/ACCESS.2023.3271408) | NIDS,<br>CNN,<br>Computational complexity,<br>Security,<br>Network systems,<br>Image synthesis,<br>Computational efficiency |
| 8/18 | 守維8 | 柏叡 | [FlowTransformer: A Transformer Framework for Flow-Based Network Intrusion Detection Systems](https://doi.org/10.1016/j.eswa.2023.122564) | Transformers,<br>NIDS,<br>Machine learning,<br>Generative pre-trained transformer,<br>Network flow |
| 8/11 | 承瀚8 | 守維 | Handover: Enhancing Self-Training by Feature-Fusion and Similarity-Based Weighting for Botnet Detection | (None) |
| 8/11 | 柏叡8 | 守維 | [A Cost-Efficient Container Orchestration Strategy in Kubernetes-Based Cloud Computing Infrastructures with Heterogeneous Resources](https://doi.org/10.1145/3378447) | Cluster management,<br>Container orchestration,<br>Resource heterogeneity,<br>Cost efficiency |
| 8/11 | 冠宏7 | 守維 | Handover: Using a Hierarchical Clustering Approach with a Distribution Similarity Strategy for Multi-Class Botnet Labeling in Real-World Traffic | (None) |
| 8/4 | 俊凱7 | 柏叡 | [Sentiment Analysis Using Pre-Trained Language Model With No Fine-Tuning and Less Resource](https://doi.org/10.1109/ACCESS.2022.3212367) | Sentiment analysis,<br>Natural language processing |
| 8/4 | 守維7 | 柏叡 | [RTIDS: A Robust Transformer-Based Approach for Intrusion Detection System](https://doi.org/10.1109/ACCESS.2022.3182333) | NIDS,<br>Feature representation,<br>Self-attention mechanism,<br>Transformer |
| 6/2 | 承瀚7 | 守維 | [Feature extraction for machine learning-based intrusion detection in IoT networks](https://doi.org/10.1016/j.dcan.2022.08.012) | Feature extraction,<br>Machine learning,<br>NIDS,<br>IoT |
| 5/26 | 柏叡7 | 承瀚 | [Heats: Heterogeneity-and Energy-Aware Task-Based Scheduling](https://doi.org/10.1109/EMPDP.2019.8671554) | (None) |
| 5/19 | 冠宏6 | 柏叡 | [Adaptive Intrusion Detection in the Networking of Large Scale LANs with Segmented Federated Learning](https://doi.org/10.1109/OJCOMS.2020.3044323) | Cybersecurity,<br>Deep learning,<br>NIDS,<br>Segmented-federated learning,<br>LAN,<br>CNN |
| 5/12 | 俊凱6 | 冠宏 | Local LLM + RAG | (None) |
| 5/5 | 守維6 | 俊凱 | [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://doi.org/10.48550/arXiv.2312.00752) | (None) |
| 4/28 | 承瀚6 | 守維 | Self-organizing Maps (SOM) | (None) |
| 4/21 | 柏叡6 | 承瀚 | [KubeShare: A Framework to Manage GPUs as First-Class and Shared Resources in Container Cloud](https://doi.org/10.1145/3369583.3392679) | Cloud computing,<br>GPU,<br>Container,<br>Scheduling |
| 4/14 | 冠宏5 | 柏叡 | [Federated Deep Learning for Zero-Day Botnet Attack Detection in IoT-Edge Devices](https://doi.org/10.1109/JIOT.2021.3100755) | Botnet detection,<br>Cybersecurity,<br>Deep learning,<br>Deep neural network,<br>Federated learning,<br>IoT |
| 3/31 | 守維5 | 冠宏 | [Efficiently Modeling Long Sequences with Structured State Spaces](https://doi.org/10.48550/arXiv.2111.00396) | (None) |
| 3/24 | 承瀚5 | 守維 | [BotChase: Graph-Based Bot Detection Using Machine Learning](https://doi.org/10.1109/TNSM.2020.2972405) | Security management,<br>Botnet detection,<br>Machine learning |
| 3/17 | 柏叡5 | 承瀚 | [NetMARKS: Network Metrics-AwaRe Kubernetes Scheduler Powered by Service Mesh](https://doi.org/10.1109/INFOCOM42981.2021.9488670) | K8s,<br>Network statistics,<br>Scheduling,<br>Service mesh,<br>Latency,<br>Interoperability,<br>5G,<br>Containerized network |
| 3/10 | 俊凱5 | 柏叡 | Azure OpenAI | (None) |
| 3/3 | 冠宏4 | 俊凱 | [A Deep Learning Model for Network Intrusion Detection with Imbalanced Data](https://doi.org/10.3390/electronics11060898) | NIDS,<br>Bi-LSTM,<br>Attention mechanism,<br>NSL-KDD |
| 2/4 | 俊凱4 | 冠宏 | Azure OpenAI | (None) |
| 1/28 | 承瀚4 | 俊凱 | [BotGM: Unsupervised Graph Mining to Detect Botnets in Traffic Flows](https://doi.org/10.1109/CSNET.2017.8241990) | IP networks,<br>Ports (Computers),<br>Security,<br>Windows,<br>Silicon,<br>Focusing |
| 1/21 | 守維4 | 承瀚 | [Hunt for Unseen Intrusion: Multi-Head Self-Attention Neural Detector](https://doi.org/10.1109/ACCESS.2021.3113124) | Deep neural network,<br>NIDS,<br>Multi-head attention,<br>Realistic prediction performance evaluation,<br>Self-attention |
| 2024/<br>1/14 | 柏叡4 | 守維 | [A Novel Flow-vector Generation Approach for Malicious Traffic Detection](https://doi.org/10.1016/j.jpdc.2022.06.004) | Deep learning,<br>Malicious traffic,<br>Embedding,<br>Attention mechanism |
| 12/24 | 冠宏3 | 柏叡 | [E-GraphSAGE: A Graph Neural Network based Intrusion Detection System for IoT](https://doi.org/10.1109/NOMS54207.2022.9789878) | Graph Neural Networks,<br>NIDS,<br>IoT |
| 12/17 | 俊凱3 | 冠宏 | [A Fuzzy Logic based feature engineering approach for Botnet detection using ANN](https://doi.org/10.1016/j.jksuci.2021.06.018) | Artificial Neural Network,<br>Botnet,<br>Fuzzy Logic,<br>Fuzzy rules,<br>CTU-13,<br>Cyber Security |
| 12/10 | 守維3 | 俊凱 | [A Neural Attention Model for Real-Time Network Intrusion Detection](https://doi.org/10.1109/LCN44214.2019.8990890) | NIDS,<br>Deep learning,<br>Attention model,<br>Network security |
| 12/3 | 承瀚3 | 守維 | [Evading Machine Learning Botnet Detection Models via Deep Reinforcement Learning](https://doi.org/10.1109/ICC.2019.8761337) | Botnet,<br>Adversarial,<br>Reinforcement learning |
| 11/26 | 冠宏2 | 承瀚 | [IoT Malware Network Traffic Classification using Visual Representation and Deep Learning](https://doi.org/10.1109/NetSoft48620.2020.9165381) | Network traffic,<br>Machine learning,<br>Security,<br>NIDS |
| 11/19 | 柏叡3 | 冠宏 | [Unsupervised Detection of Botnet Activities using Frequent Pattern Tree Mining](https://doi.org/10.1007/s40747-021-00281-5) | Botnet detection,<br>Internet security,<br>Frequent pattern tree,<br>Data mining |
| 11/12 | 俊凱2 | 柏叡 | [An Advanced Computing Approach for IoT-Botnet Detection in Industrial Internet of Things](https://doi.org/10.1109/TII.2022.3152814) | Botnet,<br>Malware,<br>Feature extraction,<br>IIoT,<br>Static analysis,<br>Informatics,<br>Heuristic algorithms |
| 11/5 | 守維2 | 俊凱 | [A Unified Approach to Interpreting Model Predictions [PDF]](https://proceedings.neurips.cc/paper_files/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf) | (None) |
| 10/29 | 承瀚2 | 守維 | [Evading Machine Learning Botnet Detection Models via Deep Reinforcement Learning](https://doi.org/10.1109/ICC.2019.8761337) | Botnet,<br>Adversarial,<br>Reinforcement learning |
| 10/15 | 冠宏1 | 柏叡 | [Train Without Label: A Self-supervised One-Class Classification Approach for IoT Anomaly Detection](https://doi.org/10.1007/978-3-031-43792-2_8) | IoT,<br>Self-supervised learning,<br>One-class classification,<br>Cyber-attacks,<br>NIDS |
| 10/8 | 俊凱1 | 冠宏 | [A Hybrid Model for Botnet Detection using Machine Learning](https://doi.org/10.1109/ICBATS57792.2023.10111161) | Botnet detection,<br>kmeans,<br>Rule-based system,<br>decision tree,<br>CTU-13 dataset |
| 9/24 | 守維1 | 俊凱 | [BotStop : Packet-based efficient and explainable IoT botnet detection using machine learning](https://doi.org/10.1016/j.comcom.2022.06.039) | IoT,<br>Botnet,<br>NIDS,<br>Explainable machine learning |
| 9/17 | 承瀚1 | 守維 | [Botnet Detection in the Internet of Things using Deep Learning Approaches](https://doi.org/10.1109/IJCNN.2018.8489489) | Botnet,<br>Computer crime,<br>Machine learning,<br>Servers,<br>IoT |
| 2023/<br>9/10 | 柏叡1 | 承瀚 | [A KNN-Based Intrusion Detection Model for Smart Cities Security](https://doi.org/10.1007/978-981-19-3679-1_20) | Smart cities,<br>IoT,<br>Security,<br>AI,<br>Machine learning,<br>Classification |
Keyword abbreviations:
1. **NIDS**: Network intrusion detection, network intrusion detection system, intrusion detection system, intrusion detection
1. **ML**: Machine Learning
1. **IoT**: Internet of Things
1. **IIoT**: Industrial Internet of Things
1. **CNN**: Convolutional neural network, convolutional neural networks
1. **K8s**: Kubernetes
1. **[UMAP](https://umap-learn.readthedocs.io/en/latest/)**: Uniform Manifold Approximation and Projection for Dimension Reduction
1. **LSTM**: Long short-term memory
## Notes and Guidelines
<style>
span.comment {
color: gray;
font-size: 70%;
}
</style>
### 1. In Preparation
Recommended outline for presenting a paper (including its slides):
1. **Abstract (摘要)**: Write your own English version, read it aloud word by word, and then briefly introduce it in Chinese.
1. **Introduction (介紹含背景)**
1. **Related Works (相關研究成果)**
1. **Proposed Approach (或Methodology,提出的解決方案)**
1. **Experimentation (或Evaluation,實驗及結果)**
1. **Conclusion and Future Work (結論和可改善的點)**
Related to choosing a paper to present:
1. Avoid papers whose contribution is merely applying existing approaches. Such work is not research but an experiment report. <span class=comment>(2023-09-10 謝老師給柏叡)</span>
1. The proposed approach should be able to train on or handle newer botnet datasets (from the last ten years to the present). <span class=comment>(2023-09-10 俊又給柏叡)</span>
1. Figure out which aspects are challenging and forward-looking for our research, especially for session-based approaches (as opposed to packet-based ones). <span class=comment>(2023-09-10 謝老師給柏叡)</span> For example, RNNs and attention are impractical in our research because these techniques require packet-level time sequences, which NetFlow does not provide. <span class=comment>(2023-09-17 謝老師給承瀚)</span>
1. Avoid outdated approaches, specifically rule-based machine learning (RBML). <span class=comment>(2023-11-19 謝老師給柏叡)</span>
1. Understand the implementation details well enough that you could reproduce the work: where and how each procedure runs, how initial parameters are set and updated, how each dataset is generated and preprocessed, and how each metric is retrieved or calculated. Partial or superficial understanding is not acceptable. <span class=comment>(2023-09-17 張老師給承瀚、2023-09-24 謝老師和張老師給守維、2023-10-08 謝老師和張老師給冠宏、2023-10-08 謝老師給以薰、2023-10-15 謝老師和張老師給冠宏)</span>
1. Avoid presenting papers whose claims and experimental results are suspicious (possibly fabricated). <span class=comment>(2023-10-08 謝老師給冠宏)</span>
1. If some related work performs comparably to the proposed approach, the proposal is less valuable. <span class=comment>(2023-09-24 謝老師給守維)</span> <span class=comment>(2023-10-15 張老師給冠宏)</span>
Related to slides, strongly recommended:
1. Tell the audience only the bottom line (重點、重要的事實). <span class=comment>(2023-09-10 俊又給柏叡)</span> You can highlight the key points within paragraphs. <span class=comment>(2023-09-17 張老師給承瀚)</span>
1. Replace ambiguous adjectives with precise numbers or practical solutions, or add those after the adjectives, because we are researchers. <span class=comment>(2023-09-10 俊又給柏叡)</span>
1. Avoid typos and grammar errors. <span class=comment>(2023-09-17 張老師給承瀚)</span> Only humans can "propose" an approach; a thesis cannot. <span class=comment>(2023-10-08 謝老師給冠宏)</span> Machines are infected **before** attacking. [Self-supervised learning vs self learning](https://chat.openai.com/share/20591c24-7a1e-46ee-91c0-c967cb51e907): self-supervised learning focuses on learning representations from data, while self-learning focuses on learning behaviors or policies through interaction with an environment. <span class=comment>(2023-10-15 謝老師給冠宏)</span>
Related to slides, also recommended:
1. Distinguish different approaches by preprocessing time, system-design effort, training time, testing time, testing memory usage, and testing F$_1$ score (or testing accuracy, precision, and recall). <span class=comment>(2023-09-10 俊又給柏叡)</span>
1. Think about how the proposed approach can be applied in the real world. <span class=comment>(2023-10-15 謝老師給冠宏)</span>
Related to working reports:
1. A good direction is to improve seniors' work (standing on the shoulders of giants): identify potential improvements, make the work complete, and apply it in the real world after researching related work. <span class=comment>(2023-09-10 謝老師給碩星、2023-09-17 謝老師給哲賢、2023-10-08 謝老師和張老師給碩星、2023-10-08 謝老師給以薰)</span>
1. Some directions, such as federated learning, are hard for us to research. <span class=comment>(2023-09-17 謝老師給子豪)</span> <span class=comment>(2023-09-17 張老師給子豪)</span>
1. Figure out the bottlenecks of the proposed approach before looking for solutions. <span class=comment>(2023-09-24 張老師給澤安)</span>
1. For migration, gradually transfer (redirect) workloads from the old system to the new one. <span class=comment>(2023-09-10 俊又給碩星)</span>
1. Consider data dependency and parallelism when designing the architecture and system. <span class=comment>(2023-09-10 俊又給紘維)</span>
1. Testing with more datasets, ideally the latest or real-world ones, demonstrates the viability of a proposed approach; old datasets and studies cannot reflect today's network traffic. <span class=comment>(2023-09-17 謝老師給俊昇)</span> <span class=comment>(2023-09-17 張老師給哲賢)</span>
1. Try to derive applications from your experiments as contributions. <span class=comment>(2023-09-17 謝老師給俊昇)</span>
1. For implementation, you can request source code from the authors of related work to compare against. <span class=comment>(2023-09-17 張老師給俊昇)</span>
1. For experiments, you have to label real-world traffic before evaluation. <span class=comment>(2023-09-17 張老師給哲賢)</span>
1. Generalize your ideas and policies instead of exhaustively experimenting over all cases. <span class=comment>(2023-09-24 謝老師給紘維、2023-10-15 張老師給哲賢)</span>
TODO: add notes from 2023-10-15 (哲賢, Da-Jun).
### 2. Two Days before Meeting
When you are about to give a paper presentation (two days before), please **send the paper title to the thread [報論文的時間和主題](https://discord.com/channels/990556058449764352/1147769705982074880)**. If possible, also update the schedule section of this note to prevent others from choosing the same paper.
### 3. During Meeting
#### Speaker (報告人)
(todo)
#### Minutes Taker (作會議記錄的人)
The next scheduled speaker* takes the meeting minutes, which must include the following for **each** speaker:
1. Name of the speaker.
1. Title of the presentation.
1. Questions and suggestions from teachers Shieh and Chang and from the senior Jun-You. (Answers from the speaker may also be included.)
The following text blocks are sample meeting minutes. (Thanks to 守維 and 柏叡.)
Chinese version:
```
2023/09/17 會議紀錄
林俊昇:QLD-IDS working report
謝老師:
* 多找幾個dataset試看看
* 做完實驗後要思考如何應用,發揮最大效果
張老師:
* 可以嘗試向原作者詢問原始碼,看和自己重現的部分有哪些差異
------------------------------
(Minutes in the same format as the above for the other speakers.)
```
English version:
```
Meeting Minutes 2023-09-17
SPEAKER:
Title: TODO
Teacher Shieh:
1. TODO
Teacher Chang:
1. TODO
Senior Jun-You:
1. TODO
------------------------------
(Minutes in the same format as the above for the other speakers.)
```
\* The next scheduled speaker: for example, when 柏叡 is presenting, the next speaker is 承瀚, per the default order in the schedule.
### 4. After Meeting
After the meeting ends, send the meeting minutes to the channel [會議記錄](https://discord.com/channels/990556058449764352/1038696376135057479) in Discord.
## Pronunciation Correction