# CIM meeting
## 2024/07/18
### Ginger
Resnet18 CIM version
floating 75.5% => CIM_train 71.1%
Conv weight scaling =>
S = max
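The "S = max" note above suggests per-tensor weight scaling by the maximum absolute weight. A minimal sketch, assuming symmetric 8-bit quantization (the function names and bit width are illustrative, not the actual CIM flow):

```python
import numpy as np

def quantize_weights(w, n_bits=8):
    """Symmetric per-tensor quantization: scale factor S = max|w|."""
    s = np.abs(w).max()                      # S = max, as in the notes
    qmax = 2 ** (n_bits - 1) - 1             # 127 for 8 bits
    w_q = np.round(w / s * qmax).astype(np.int8)
    return w_q, s

def dequantize(w_q, s, n_bits=8):
    qmax = 2 ** (n_bits - 1) - 1
    return w_q.astype(np.float32) * s / qmax

w = np.random.randn(64, 3, 3, 3).astype(np.float32)  # a conv weight tensor
w_q, s = quantize_weights(w)
err = np.abs(dequantize(w_q, s) - w).max()  # rounding error bounded by s/254
```

With S = max|w|, no weight clips, but one outlier weight can inflate S and coarsen the grid for all other weights, which may explain part of the floating-to-CIM accuracy drop.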
Advisor recommends:
How small does the LLM have to be?
Start with classification tasks first
### Xin-You
Discuss new experiment analysis => measure end-to-end energy consumption
using the FPGA's ARM core
### 宗叡
DDPM and CIM-DDPM model evaluation on CIFAR-10
### 嘉政
1. Prepare the 3D hand-object pose slides
2. Study the endoscope design drawings
### 佑鑫
1. Supplement thesis experiments => Lightweight Model
2. Complete defense slides and the first draft of the thesis
Comparison with RegTR parameters:
Feature Encoder => CART parameters +134%
(because the input has twice as many features)
Transformer Cross Encoder => CART parameters -84.3%
Conclusion => although the network grows by 30%, performance improves by 70%
### 定芳
Survey papers related to depth estimation:
1. Mind the Edge => predicts edges
2. The Devil is in the Edge => trains by combining edge and RGB data
## 2024/07/23
### V3 chip parameters
1. Embedding dimension : 512
2. supported number of heads: (8 heads, dim 64) or (16 heads, dim 32)
3. fully connected = 512 * 512
4. activation and weight: 8 bits
5. Layer Norm : 512
6. Still unknown: when is quantization applied if an accumulation overflows?
7. Some softmax implementations use a lookup table; the exact values are still unknown
8. What is our pruning method: structured or unstructured pruning?
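Item 7 mentions a lookup-table softmax. A hedged sketch of the usual integer-domain approach; the table size, fixed-point scale, and bit widths here are assumptions, since the notes say the chip's actual values are unknown:

```python
import numpy as np

# Hypothetical parameters: the real chip's table size/scale are unknown (item 7).
LUT_BITS = 8                                        # assumed index precision
LUT = np.exp(-np.arange(2 ** LUT_BITS) / 16.0)      # exp(-x/16) for x in [0, 255]

def lut_softmax(logits_q):
    """Softmax over integer logits using a precomputed exp lookup table."""
    z = logits_q.max() - logits_q            # non-negative offsets from the max
    z = np.clip(z, 0, 2 ** LUT_BITS - 1)     # saturate to the table range
    e = LUT[z]                               # table lookup replaces exp()
    return e / e.sum()

p = lut_softmax(np.array([10, 20, 30], dtype=np.int32))  # largest logit wins
```

Subtracting the max keeps every table index non-negative and bounded, so only a decaying-exponential table is needed; hardware would typically also replace the final division with a fixed-point reciprocal.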
## 2024/07/26
### 君哲
Post-training MobileNet: CIM 61.8%, floating 67.5%
ViT on CIFAR-100: accuracy => 43.1%
ImageNet-21K => too large
solution =>
1. model pruning
2. checking other methods
input 32x32, patch 4x4, reduced dim, accuracy: 43.1%
The other results are also poor.
Next: find and modify a pretrained model
### Xin-You
HPCA Version Editing update
Comparison with small models:
CIM: 1~2 s pure hardware, 5~10 s (including preemption)
Raspberry Pi 3: 10.5s
ARM FPGA: 163.5s
sending draft
### 廷剛
read paper
1. DeepCache: Accelerating Diffusion Models For Free
2. Cache Me if You Can: Accelerating Diffusion Models through Block Caching
### 宗叡
On leave
### 定芳
Sensors mounted on glasses
Comparison of VR-headset sensors:
1. Apple Vision Pro
1. 2 high resolution camera
2. Lidar Scanner
2. Meta Quest Pro
1. additional hand controller
Data from ARKit:
1. hand tracking =>( resolution ? distance ?)
2. world sensing =>( resolution ? distance ?)
survey: ZED1 specs
=> accuracy <2% up to 3m (~6cm)
=> accuracy 4% up to 15m
Advisor: what data can we obtain from those VR sensors (raw data)?
Next: 1. find dataset metadata 2. improve the model based on 佑鑫's work => more precision with the camera
### 嘉政
Study implementation of HOISDF
### 佑鑫
Discuss the professors' advice from the defense
## 2024/8/1
### 君哲
Train ViT
input 32x32 dim 768: 85.6%
input 32x32 dim 512: 60.6%
survey pruning paper
RACS24 paper
### Xin-You
HPCA editing
### 廷剛
paper survey: one-step image generation
SnapFusion
MobileDiffusion
### 宗叡
survey evaluation metrics for image generation
training a conditional DDPM on CIFAR-10 (in progress)
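For the metrics survey above, FID is the most common choice for generated-image quality. A minimal numpy sketch of the Frechet distance between two feature sets; in practice the features come from an Inception network, while here random vectors stand in purely to exercise the formula:

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(a)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T

def fid(feat1, feat2):
    """Frechet distance between Gaussians fitted to two feature sets."""
    mu1, mu2 = feat1.mean(0), feat2.mean(0)
    c1 = np.cov(feat1, rowvar=False)
    c2 = np.cov(feat2, rowvar=False)
    s1 = _sqrtm_psd(c1)
    covmean = _sqrtm_psd(s1 @ c2 @ s1)   # tr(sqrt(c1 c2)) = tr(sqrt(s1 c2 s1))
    diff = mu1 - mu2
    return diff @ diff + np.trace(c1) + np.trace(c2) - 2 * np.trace(covmean)

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 8))
score_same = fid(a, a)        # ~0 for identical feature sets
score_diff = fid(a, a + 3.0)  # shifted mean => much larger distance
```

The `s1 @ c2 @ s1` trick keeps the matrix symmetric PSD so plain `eigh` suffices, avoiding a general (and numerically touchier) matrix square root.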
### 定芳
1. VR-headset depth data (Apple: not open; Meta Quest: has an API)
=> requires an IR camera and an IR light source
2. find other related work
### 嘉政
### 佑鑫
1. Revised the master's thesis
### 貴鴻
1. study model pruning
2. survey and implement model pruning
DepGraph
## 2024/8/8
### 君哲
1. train from a high-precision model
2. activation bit width has little influence
3. weight bit width has a large influence
4. Quantized ViT: 85.6% -> 83.4%
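The weight-vs-activation sensitivity above can be measured by sweeping fake-quantization bit widths independently. A minimal sketch on a toy matrix-vector layer (random data, so it only illustrates the measurement method, not the ViT numbers):

```python
import numpy as np

def fake_quant(x, n_bits):
    """Symmetric uniform fake quantization (quantize then dequantize)."""
    s = np.abs(x).max()
    qmax = 2 ** (n_bits - 1) - 1
    return np.round(x / s * qmax) * s / qmax

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))    # toy layer weights
x = rng.normal(size=(64,))       # toy activations
ref = w @ x                      # full-precision reference output

# Sweep weight bits with activations fixed at 8 bits, and vice versa.
err_w = {b: np.abs(fake_quant(w, b) @ fake_quant(x, 8) - ref).mean() for b in (2, 4, 8)}
err_a = {b: np.abs(fake_quant(w, 8) @ fake_quant(x, b) - ref).mean() for b in (2, 4, 8)}
```

Comparing `err_w` against `err_a` at matched bit widths is one simple way to quantify the "weight bits matter more" observation before committing to a chip configuration.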
### Xin-You
### 廷剛
### 宗叡
### 定芳
### 嘉政
### 佑鑫
### 貴鴻
## 2024/9/19
### Xin-You
1. collect path planning on image dataset
2. DATE 2025 editing
3. CIM V2 platform building
### 廷剛
1. prepare poster for ECCV
2. SlimFlow paper
### 宗叡
1. solve the DDIM sampled-image quality problem
### 洪敏
1. calculate DeepCache size
2. survey dd
### 貴鴻
1. cosine similarity v2 result
2. survey paper (2016 ~ 2021)
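The "cosine similarity v2 result" item presumably compares feature vectors between model versions. A minimal sketch of the metric itself (the flattening and epsilon guard are my assumptions about how it would be applied to feature tensors):

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """Cosine similarity between two flattened feature tensors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

a = np.array([1.0, 2.0, 3.0])
cosine_similarity(a, a)    # close to 1.0: identical directions
cosine_similarity(a, -a)   # close to -1.0: opposite directions
```

Because it ignores magnitude, cosine similarity is a common way to check whether a quantized or pruned layer still produces activations pointing in the same direction as the original.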
### 嘉政
modify code for HOISDF