# 🌟 **Vision Transformer (ViT) Tutorial – Part 5: Efficient Vision Transformers – MobileViT, TinyViT & Edge Deployment** **#MobileViT #TinyViT #EfficientViT #EdgeAI #ModelOptimization #ONNX #TensorRT #TorchServe #DeepLearning #ComputerVision #Transformers** --- ## πŸ”Ή **Table of Contents** 1. [Recap of Part 4](#recap-of-part-4) 2. [Why Efficiency Matters: From Cloud to Edge](#why-efficiency-matters-from-cloud-to-edge) 3. [The Cost of Vision Transformers: Memory, Latency, Power](#the-cost-of-vision-transformers-memory-latency-power) 4. [MobileViT: Lightweight Hybrid Architecture for Mobile Devices](#mobilevit-lightweight-hybrid-architecture-for-mobile-devices) 5. [TinyViT: Distilled, Fast, and Accurate](#tinyvit-distilled-fast-and-accurate) 6. [Other Efficient ViT Variants: PVT, Swin-T, LeViT](#other-efficient-vit-variants-pvt-swin-t-levit) 7. [Knowledge Distillation: Training Small Models from Large Teachers](#knowledge-distillation-training-small-models-from-large-teachers) 8. [Quantization: FP32 β†’ FP16 β†’ INT8](#quantization-fp32--fp16--int8) 9. [Pruning: Removing Unimportant Weights](#pruning-removing-unimportant-weights) 10. [Model Compression & Sparsity](#model-compression--sparsity) 11. [Exporting ViT to ONNX for Cross-Platform Deployment](#exporting-vit-to-onnx-for-cross-platform-deployment) 12. [Accelerating Inference with TensorRT](#accelerating-inference-with-tensorrt) 13. [Deploying ViT with TorchServe & FastAPI](#deploying-vit-with-torchserve--fastapi) 14. [Benchmarking: Accuracy vs Latency vs Size](#benchmarking-accuracy-vs-latency-vs-size) 15. [Common Pitfalls in Optimization](#common-pitfalls-in-optimization) 16. [Visualizing Efficient ViT Architectures (Diagrams)](#visualizing-efficient-vit-architectures-diagrams) 17. [Summary & What’s Next in Part 6](#summary--whats-next-in-part-6) --- ## πŸ” **1. Recap of Part 4** In **Part 4**, we explored **advanced Vision Transformer applications** beyond classification: - **DETR**: End-to-end object detection using Transformers. - **Segmenter**: Semantic segmentation with ViT and mask transformers. - **Video Swin Transformer**: Spatio-temporal modeling for video understanding. - **MAE (Masked Autoencoders)**: Self-supervised pretraining without labels. - **CLIP & Flamingo**: Multimodal models connecting vision and language. We saw how Transformers are **replacing CNNs** across **detection, segmentation, video, and self-supervised learning**. Now, in **Part 5 β€” the longest and most practical yet** β€” we shift focus from **capability** to **efficiency**. You’ll learn how to make ViT **small, fast, and power-efficient** β€” so it can run on: - πŸ“± **Smartphones** - πŸ–₯️ **Edge devices** - πŸš— **Autonomous vehicles** - πŸ₯ **Medical devices** Let’s dive into the world of **efficient deep learning**. --- ## πŸš€ **2. Why Efficiency Matters: From Cloud to Edge** For years, AI lived in the **cloud** β€” big servers with GPUs, unlimited power, and high bandwidth. But the future is **on-device AI**: - Real-time inference (no latency). - Privacy (data stays on device). - Offline operation. - Lower cost at scale. > πŸ’‘ **"The most powerful AI is the one that runs where the data is born."** But **standard ViT is too heavy** for edge devices: - ViT-Base: **86M parameters**, **17G FLOPs**, **1GB+ memory**. - Runs at **< 5 FPS** on mobile. We need **efficient Vision Transformers**. --- ## πŸ’Έ **3. The Cost of Vision Transformers: Memory, Latency, Power** Let’s break down the **three key constraints** on edge devices. ### πŸ”Ή **1. 
Memory (RAM & Storage)**

| Resource | Constraint |
|----------|------------|
| **RAM** | Mobile phones: 4–8GB (shared with OS, apps) |
| **Storage** | App size limits (e.g., 100MB for mobile) |
| **Model Size** | >100MB → long download, high storage cost |

> ❌ ViT-Base: ~300MB (FP32) → too big.

---

### 🔹 **2. Latency (Inference Speed)**

| Use Case | Max Latency |
|----------|-------------|
| **Real-time video** | < 100ms per frame |
| **AR/VR** | < 20ms |
| **Autonomous driving** | < 50ms |

> ❌ ViT-Base: ~200ms on mobile GPU → too slow.

---

### 🔹 **3. Power Consumption**

Mobile GPUs drain battery fast.

| Operation | Power Draw |
|-----------|------------|
| CPU inference | Low |
| GPU inference | Medium |
| NPU (Neural Processing Unit) | Very Low |

> ✅ Goal: run on the **NPU** with minimal power.

---

## 📱 **4. MobileViT: Lightweight Hybrid Architecture for Mobile Devices**

**MobileViT** (Apple, 2021) combines the best of **CNNs and Transformers**.

> 📘 Paper: *"MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer"*

### 🔹 Key Idea

Use **CNNs for local features** and **Transformers for global context**.

```
Input
  ↓
MobileNet-like blocks → Local Processing
  ↓
Patch Embedding
  ↓
Transformer (Global)
  ↓
Fuse with CNN features
  ↓
Classification
```

This reduces computation while preserving accuracy.

---

### ✅ MobileViT Block Structure

For each stage:

1. **Local Convolution** (3×3 conv) → extract local patterns.
2. **Global Transformer** → model long-range dependencies.
3. **Feature Fusion** → combine local and global.

```python
class MobileViTBlock(nn.Module):
    def __init__(self, in_channels, hidden_dim, out_channels, patch_size):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, hidden_dim, 1)                               # pointwise
        self.conv2 = nn.Conv2d(hidden_dim, hidden_dim, 3, padding=1, groups=hidden_dim)  # depthwise
        self.transformer = TransformerBlock(dim=hidden_dim, depth=2, heads=4)            # encoder block from earlier parts
        self.conv3 = nn.Conv2d(hidden_dim, out_channels, 1)
        self.patch_size = patch_size

    def forward(self, x):
        p = self.patch_size

        # Local processing
        y = self.conv1(x)
        y = self.conv2(y)

        # Unfold into non-overlapping p×p patches and flatten to tokens
        # (assumes H and W are divisible by patch_size)
        B, C, H, W = y.shape
        patches = y.unfold(2, p, p).unfold(3, p, p)               # (B, C, H/p, W/p, p, p)
        patches = patches.reshape(B, C, -1, p * p)                # (B, C, N, p*p)
        patches = patches.permute(0, 2, 3, 1).reshape(B, -1, C)   # (B, N*p*p, C)

        # Global processing
        patches = self.transformer(patches)

        # Fold tokens back into a feature map
        patches = patches.reshape(B, H // p, W // p, p, p, C)
        patches = patches.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)

        # Fuse global and local features
        out = self.conv3(patches + y)
        return out
```

> ✅ MobileViT matches or exceeds **ResNet-50 accuracy** with a fraction of the parameters and FLOPs.

---

### 📊 **MobileViT Performance (ImageNet)**

| Model | Top-1 Acc | FLOPs | Params | Latency (iPhone, approx.) |
|-------|-----------|-------|--------|---------------------------|
| **MobileViT-XXS** | 69.0% | ~0.4G | 1.3M | 18ms |
| **MobileViT-XS** | 74.8% | ~1.0G | 2.3M | 24ms |
| **MobileViT-S** | 78.4% | ~2.0G | 5.6M | 38ms |
| **ResNet-50** | 76.1% | 4.1G | 25.6M | 60ms |

> ✅ MobileViT-S is **smaller and faster** than ResNet-50 with **better accuracy**.
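If you want to experiment with MobileViT without implementing the block yourself, pretrained weights are available through the `timm` library. The model name below (`mobilevit_s`) and the 256×256 input resolution are assumptions about the installed timm version, so verify them with `timm.list_models("*mobilevit*")` first; this is a minimal sketch, not part of the original MobileViT release.

```python
# Minimal sketch: load a pretrained MobileViT from timm and run one forward pass.
# Assumes `pip install timm torch`; the model name may differ across timm versions.
import timm
import torch

model = timm.create_model("mobilevit_s", pretrained=True)
model.eval()

n_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {n_params / 1e6:.1f}M")   # roughly 5-6M for MobileViT-S

# Dummy input at the resolution MobileViT was trained on (256x256 in timm)
x = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    logits = model(x)
print(logits.shape)                            # torch.Size([1, 1000])
```

Counting parameters this way is also a quick sanity check before and after the compression steps covered later in this part.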
---

## 🔬 **5. TinyViT: Distilled, Fast, and Accurate**

**TinyViT** (Microsoft, 2022) is a family of **compact Vision Transformers** designed for **real-time applications**.

> 📘 Paper: *"TinyViT: Fast Pretraining Distillation for Small Vision Transformers"*

### 🔹 Key Innovations

| Feature | Benefit |
|---------|---------|
| **Pretraining Distillation** | Train a small student from a large teacher |
| **Architecture Search** | Optimize layer depth, width, heads |
| **Efficient Attention** | Reduce complexity |
| **Progressive Training** | Start small, grow during training |

---

### ✅ TinyViT Variants

| Model | Depth | Embed Dim | Heads | Params | FLOPs |
|-------|-------|-----------|-------|--------|-------|
| **TinyViT-5M** | 8 | 128 | 4 | 5.1M | 1.2G |
| **TinyViT-11M** | 10 | 192 | 6 | 11.1M | 2.3G |
| **TinyViT-21M** | 12 | 320 | 8 | 21.3M | 4.6G |

> ✅ TinyViT-21M matches **DeiT-B** accuracy at a fraction of the FLOPs, with roughly **3x faster inference**.

---

### ✅ Knowledge Distillation in TinyViT

Train the student to match the teacher's:

- **Logits** (output)
- **Attention maps**
- **Hidden states**

```python
# Schematic form of the combined objective
loss = α * CE(y_pred, y_true) + β * MSE(z_student, z_teacher)
```

> ✅ Distillation transfers **dark knowledge** from the teacher.

---

## 🔧 **6. Other Efficient ViT Variants: PVT, Swin-T, LeViT**

### 🔹 **PVT (Pyramid Vision Transformer)**

- Hierarchical feature maps (like CNNs).
- Spatial-reduction attention lowers the cost of self-attention.
- Good for detection and segmentation.

> ✅ PVT-Tiny: ~13M params, ~1.9G FLOPs.

---

### 🔹 **Swin-T (Tiny Swin Transformer)**

- Shifted windows → local attention.
- No global $N^2$ complexity.
- Strong accuracy/efficiency trade-off.

> ✅ Swin-T: 28M params, 4.5G FLOPs; a popular backbone for detection and segmentation.

---

### 🔹 **LeViT (Lightweight Vision Transformer)**

- Convolutional token embedding.
- Attention bias instead of explicit positional encoding.
- Optimized for speed.

> ✅ LeViT-128: runs at **>100 FPS** on a desktop GPU.

---

### 📊 **Efficient ViT Comparison (ImageNet)**

| Model | Top-1 Acc | FLOPs | Speed (FPS) | Use Case |
|-------|-----------|-------|-------------|----------|
| **MobileViT-XS** | 74.8% | 1.0G | 42 | Mobile apps |
| **TinyViT-11M** | 77.9% | 2.3G | 38 | Edge devices |
| **Swin-T** | 81.3% | 4.5G | 28 | Desktop/Server |
| **LeViT-128** | 76.6% | 1.6G | 105 | Real-time video |
| **ViT-Base** | 77.9% | 17G | 8 | Cloud only |

> ✅ You can **match ViT accuracy** with **1/10 the compute**.

---

## 🎓 **7. Knowledge Distillation: Training Small Models from Large Teachers**

**Knowledge distillation** trains a **small student model** to mimic a **large teacher model**.

### ✅ Why It Works

Large models produce **soft labels** (probabilities), not just hard labels.

Example:

- Teacher says: `"cat": 0.85, "dog": 0.10, "car": 0.05`
- More informative than `"cat": 1.0`

---

### ✅ Distillation Loss

```python
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4, alpha=0.7):
    # Soft targets: match the teacher's temperature-smoothed distribution
    soft_loss = nn.KLDivLoss(reduction='batchmean')(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1)
    ) * (T * T)  # scale by T^2 so gradients stay comparable to the hard loss
    # Hard targets: standard cross-entropy against the ground-truth labels
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

- $T$: Temperature (smooths probabilities)
- $\alpha$: Weight of the soft loss

> ✅ Used in **TinyViT, DistilBERT, MobileNet**.

---

## 🔢 **8. Quantization: FP32 → FP16 → INT8**

Quantization reduces the **precision** of weights and activations.

### 🔹 Types

| Type | Precision | Size per Weight | Speed | Accuracy Drop |
|------|-----------|-----------------|-------|---------------|
| **FP32** | 32-bit float | 4 bytes | Baseline | 0% |
| **FP16** | 16-bit float | 2 bytes | 2x faster | <1% |
| **INT8** | 8-bit integer | 1 byte | 3–4x faster | 1–3% |
| **Binary** | 1-bit | 1/32 byte | up to 32x (theoretical) | >10% |

> ✅ **FP16 and INT8** are production-ready.
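On GPU targets, FP16 is usually the first win because it needs no retraining or calibration. Here is a minimal sketch using PyTorch's automatic mixed precision at inference time; it assumes a CUDA device and any ViT-style `model` from the earlier parts.

```python
import torch

model = model.cuda().eval()                 # any ViT model from earlier parts (assumption)
x = torch.randn(1, 3, 224, 224, device="cuda")

# Autocast: weights stay FP32, but matmuls/convolutions run in half precision.
with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    logits = model(x)

# Alternatively, convert the weights themselves to FP16 to halve the model size.
model_fp16 = model.half()
with torch.no_grad():
    logits = model_fp16(x.half())
```

Check accuracy on a validation split after either change; for most ViT variants the FP16 drop is well under 1%.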
---

### ✅ PyTorch Quantization (Post-Training)

```python
model.eval()

# Dynamic post-training quantization: nn.Linear weights become INT8,
# activations are quantized on the fly at inference time.
model_q = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```

Or **quantization-aware training (QAT)**:

```python
model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
model_prepared = torch.quantization.prepare_qat(model)

# ... fine-tune model_prepared for a few epochs ...

model_prepared.eval()   # switch to eval mode before converting
model_quantized = torch.quantization.convert(model_prepared)
```

> ✅ INT8 can reduce model size by **75%** with minimal accuracy loss.

---

## ✂️ **9. Pruning: Removing Unimportant Weights**

**Pruning** removes **redundant weights** (e.g., near-zero).

### 🔹 Types

| Type | Description |
|------|-------------|
| **Weight Pruning** | Remove individual weights |
| **Neuron Pruning** | Remove entire neurons |
| **Structured Pruning** | Remove filters/channels (hardware-friendly) |

---

### ✅ PyTorch Pruning Example

```python
from torch.nn.utils import prune

# Prune 20% of the weights (by L1 magnitude) in the classifier head
prune.l1_unstructured(module=model.classifier, name='weight', amount=0.2)

# Make the pruning permanent (removes the mask and re-parametrization)
prune.remove(model.classifier, 'weight')
```

> ✅ Can reduce parameters by **50–90%**.

---

## 📦 **10. Model Compression & Sparsity**

Combine techniques for **maximum compression**.

| Technique | Size Reduction | Speedup |
|-----------|----------------|---------|
| **Pruning** | 2–5x | 1.5–2x |
| **Quantization** | 2–4x | 2–4x |
| **Distillation** | 2–10x | 2–5x |
| **All Three** | 10–20x | 5–10x |

> ✅ Used in **on-device models** (e.g., Google Lens, Apple Photos).

---

## 🔄 **11. Exporting ViT to ONNX for Cross-Platform Deployment**

**ONNX (Open Neural Network Exchange)** allows model portability.

### ✅ Export ViT to ONNX

```python
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "vit_model.onnx",
    export_params=True,
    opset_version=13,
    do_constant_folding=True,
    input_names=['input'],
    output_names=['output'],
    dynamic_axes={
        'input': {0: 'batch_size'},
        'output': {0: 'batch_size'}
    }
)
```

> ✅ Now run on **Windows, Linux, Android, iOS, Web**.

---

### ✅ Load and Run ONNX Model

```python
import onnxruntime as ort

session = ort.InferenceSession("vit_model.onnx")
# input_numpy: float32 NumPy array of shape (N, 3, 224, 224)
outputs = session.run(None, {'input': input_numpy})
```

> ✅ Supported by **TensorRT, OpenVINO, Core ML, TensorFlow.js**.

---

## ⚡ **12. Accelerating Inference with TensorRT**

**NVIDIA TensorRT** optimizes models for **NVIDIA GPUs**.

### ✅ Steps

1. Convert ViT to ONNX.
2. Use **TensorRT** to optimize:
   - Layer fusion
   - FP16/INT8 quantization
   - Kernel auto-tuning

```bash
trtexec --onnx=vit_model.onnx --saveEngine=vit_model.trt --fp16
```

### ✅ Benefits

| Optimization | Speedup vs PyTorch |
|--------------|--------------------|
| **FP16** | 2x |
| **INT8** | 3–4x |
| **Layer Fusion** | 1.5x |
| **All** | **5–8x** |

> ✅ Used in **autonomous vehicles, robotics, video analytics**.
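If you prefer not to manage TensorRT engines by hand, ONNX Runtime can delegate supported subgraphs to TensorRT through its execution providers. The sketch below is an assumption-laden convenience path, not the official workflow above: it requires the `onnxruntime-gpu` build with TensorRT support, and which providers actually load depends on your environment.

```python
import numpy as np
import onnxruntime as ort

# Prefer TensorRT, then fall back to CUDA, then CPU, depending on availability.
providers = [
    "TensorrtExecutionProvider",
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]
session = ort.InferenceSession("vit_model.onnx", providers=providers)
print("Loaded providers:", session.get_providers())

x = np.random.randn(1, 3, 224, 224).astype(np.float32)
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```

This keeps a single ONNX artifact while still getting most of the TensorRT speedup for supported operators.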
---

## 🛠️ **13. Deploying ViT with TorchServe & FastAPI**

### ✅ Option 1: TorchServe (Official PyTorch Server)

```bash
# Install
pip install torchserve torch-model-archiver

# Archive model
torch-model-archiver --model-name vit_model --version 1.0 \
  --model-file model.py --serialized-file vit_quantized.pth \
  --handler handler.py --export-path model_store

# Start server
torchserve --start --model-store model_store --models vit_model=vit_model.mar
```

Access via REST:

```bash
curl -X POST http://localhost:8080/predictions/vit_model -T image.jpg
```

---

### ✅ Option 2: FastAPI (Lightweight & Flexible)

```python
from fastapi import FastAPI, UploadFile, File
from PIL import Image
import torch
from torchvision import transforms

app = FastAPI()

# Same preprocessing as used during training/export
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = torch.load('vit_quantized.pth')   # assumes the full model object was saved
model.eval()

@app.post("/predict/")
async def predict(file: UploadFile = File(...)):
    image = Image.open(file.file).convert('RGB')
    tensor = transform(image).unsqueeze(0)
    with torch.no_grad():
        logits = model(tensor)
    # imagenet_classes: list of 1,000 class-name strings, loaded elsewhere
    return {"class": imagenet_classes[logits.argmax().item()]}
```

Run with:

```bash
uvicorn api:app --reload
```

> ✅ FastAPI is lightweight, async-friendly, and generally faster than Flask for I/O-bound serving.

---

## 📊 **14. Benchmarking: Accuracy vs Latency vs Size**

Always measure the **trade-offs**.

### ✅ Benchmark Suite

| Metric | Tool |
|--------|------|
| **Accuracy** | ImageNet Top-1/Top-5 |
| **Latency** | `time.perf_counter()` or `torch.utils.benchmark` |
| **Model Size** | `os.path.getsize()` |
| **Memory Usage** | `nvidia-smi` or `psutil` |
| **Power** | Mobile profiling tools |

---

### ✅ Example: MobileViT vs ResNet-50

| Model | Acc (%) | Size (MB) | Latency (ms) | Energy (mJ) |
|-------|---------|-----------|--------------|-------------|
| **MobileViT-XS** | 74.8 | 10.2 | 24 | 85 |
| **ResNet-50** | 76.1 | 98.5 | 60 | 210 |

> ✅ MobileViT wins on **size, speed, and energy**.

---

## ⚠️ **15. Common Pitfalls in Optimization**

### ❌ **Pitfall 1: Over-Pruning**

Removing too many weights makes accuracy collapse.

✅ **Fix**: Prune gradually (10% at a time) and retrain between steps.

---

### ❌ **Pitfall 2: Ignoring Calibration for INT8**

Static INT8 quantization needs calibration data.

✅ **Fix**: Use a representative dataset for calibration.

---

### ❌ **Pitfall 3: Not Testing on Target Hardware**

A model that runs fast on a desktop GPU can be slow on mobile.

✅ **Fix**: Benchmark on the **actual target device**.

---

### ❌ **Pitfall 4: Using Dynamic Axes in ONNX without Backend Support**

Some backends do not support a dynamic batch size.

✅ **Fix**: Export with a fixed batch size if needed.

---

### ✅ **Best Practices**

- Start with **MobileViT or TinyViT**.
- Apply **distillation → quantization → pruning**.
- Export to **ONNX** for portability.
- Use **TensorRT or Core ML** for acceleration.
- Monitor the **accuracy drop** at every step.

---

## 🖼️ **16. Visualizing Efficient ViT Architectures (Diagrams)**

### 📱 **MobileViT Block**

```
Input
  ↓
1×1 Conv → Local Features
  ↓
3×3 Conv (Depthwise) → Spatial Mixing
  ↓
Patchify → Tokens
  ↓
Transformer Encoder → Global Context
  ↓
Reconstruct → Fuse with Local
  ↓
1×1 Conv → Output
```

> ✅ Hybrid design balances local and global.

---

### 🔬 **TinyViT with Distillation**

```
Large Teacher ViT
  ↓
Soft Labels + Features
  ↓
Small Student TinyViT
  ↓
Distillation Loss
  ↓
Optimized Small Model
```

> ✅ Transfers knowledge, not just labels.

---

### 🔄 **ONNX Deployment Pipeline**

```
PyTorch ViT
  ↓
Export to ONNX
  ↓
TensorRT / OpenVINO / Core ML
  ↓
Optimized Engine
  ↓
Mobile / Web / Edge Inference
```

> ✅ One model, everywhere.
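Before handing the exported model to TensorRT or another backend, time the plain ONNX Runtime session as a baseline so every later optimization can be measured against it. A minimal latency sketch, reusing the `vit_model.onnx` file exported in section 11 (warm-up and iteration counts are arbitrary choices):

```python
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("vit_model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
x = np.random.randn(1, 3, 224, 224).astype(np.float32)

# Warm-up runs so one-time allocation costs do not distort the measurement
for _ in range(10):
    session.run(None, {input_name: x})

# Timed runs
n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    session.run(None, {input_name: x})
latency_ms = (time.perf_counter() - start) / n_runs * 1000
print(f"Average latency: {latency_ms:.1f} ms")
```

Record this number alongside accuracy and file size, then repeat the measurement after each optimization stage in the pipeline above.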
--- ### ⚑ **TensorRT Optimization** ``` ONNX Model ↓ Layer Fusion (Conv + BN + ReLU) ↓ FP16 / INT8 Quantization ↓ Kernel Auto-Tuning ↓ High-Performance TRT Engine ``` > βœ… Maximizes GPU utilization. --- ## 🏁 **17. Summary & What’s Next in Part 6** ### βœ… **What You’ve Learned in Part 5** - Why **efficiency** is critical for edge AI. - **MobileViT**: Hybrid CNN-Transformer for mobile. - **TinyViT**: Distilled, fast, and accurate. - Compared **PVT, Swin-T, LeViT**. - Mastered **knowledge distillation, quantization, pruning**. - Exported ViT to **ONNX**. - Accelerated inference with **TensorRT**. - Deployed models with **TorchServe and FastAPI**. - Benchmarked **accuracy vs speed vs size**. --- ### πŸ”œ **What’s Coming in Part 6: Vision Transformers in Production – Monitoring, CI/CD, MLOps** In the next part, we’ll explore: - πŸ“Š **Monitoring ViT in production** (drift, accuracy, latency). - πŸ”„ **CI/CD for ML models** (testing, versioning, rollback). - πŸ§ͺ **A/B testing** of vision models. - πŸ› οΈ **MLOps with MLflow, Weights & Biases, Kubeflow**. - 🧩 **Model Registry & Lineage**. - 🚨 **Anomaly Detection in Predictions**. - 🌐 **Multi-Model Serving & Canary Rollouts**. > πŸ“Œ **#MLOps #ModelMonitoring #CIforML #MLflow #WandB #Kubeflow #ProductionAI** --- ## πŸ™Œ Final Words You’ve just completed the **most comprehensive guide to efficient Vision Transformers** ever written. > πŸ’¬ **"The future of AI isn’t just smart β€” it’s fast, small, and everywhere."** You now know how to take a **heavy ViT model** and turn it into a **lean, mean, edge-ready machine**. In **Part 6**, we’ll bring ViT into **production systems** β€” where reliability, monitoring, and automation are king. --- πŸ“Œ **Pro Tip**: Always profile before optimizing. Don’t optimize what isn’t slow. πŸ” **Share this epic guide** with your team β€” it’s the ultimate resource for **efficient computer vision**. --- βœ… **You're now ready for Part 6!** We're entering the world of **MLOps and production-grade AI systems**. #MobileViT #TinyViT #EfficientViT #EdgeAI #ModelOptimization #ONNX #TensorRT #TorchServe #FastAPI #DeepLearning #ComputerVision