Multi-repo:
- AI
- Backend
- Front
- DeepStream

- Externalize environment variables
- API documentation
- Project management with Jira
- Set up CI/CD
- Set up unit tests (taking into account NVR splitting per channel)
- **Set up IoT load balancing**

![](https://hackmd.io/_uploads/rkSb2IP92.png)
![](https://hackmd.io/_uploads/BJ__0ZAcn.png)

## 1 - Infrastructure

We'll need a robust and scalable server infrastructure. Depending on the processing needs, this could range from a single high-powered server to a cluster of machines. Each machine should have a powerful CPU and a large amount of RAM (at least 16 GB, preferably more). We should also plan for sufficient storage space for the video and frame data.

### 1.1 - Installing Kubernetes

We will use VMware as our virtualization platform and install Kubernetes with [kubeadm](https://kubernetes.io/fr/docs/setup/production-environment/tools/kubeadm/install-kubeadm). When setting up Kubernetes, one node will act as the master node controlling the rest of the cluster, while the other nodes will be worker nodes where the containers will run. A good practice is to have more than one master node to ensure high availability.

### 1.2 - Setting Up Networking

Networking is a crucial part of a Kubernetes setup. We'll use the [Weave](https://www.weave.works/oss/net/) network plugin for this. It will ensure that all nodes in the cluster can communicate with each other, and it will also manage ingress and egress traffic for the containers.

### 1.3 - Installing a Container Runtime

Kubernetes supports several [container runtimes](https://kubernetes.io/docs/setup/production-environment/container-runtimes/). Docker is the most commonly used, so we'll go with it.

### 1.4 - Monitoring and Logging

Monitoring the health and performance of the cluster is essential. Kubernetes supports a range of monitoring tools, such as [Prometheus](https://prometheus.io) for monitoring and [Grafana](https://grafana.com) for visualization.
For logging, we can use tools like [Fluentd](https://www.fluentd.org) and the [Elastic Stack](https://www.elastic.co/fr/elastic-stack/).

### 1.5 - Security

We should follow best practices for Kubernetes security. This includes keeping Kubernetes up to date, restricting access to the Kubernetes API, using role-based access control (RBAC), scanning our containers for vulnerabilities, and encrypting sensitive data.

This is a high-level plan, and we would need to adapt it to the specific requirements of our project. Using a cloud provider like [AWS](https://aws.amazon.com/) should be our first option.

## 2 - Surveillance Video Acquisition

This involves fetching video feeds from the surveillance system and bringing them into our system. We should ensure secure connections to maintain the privacy and integrity of the video feeds.

### 2.1 - Identifying Video Sources

The first task is to identify all the sources of the video feeds. They could be cameras installed at various points, or video feeds coming from a centralized video surveillance system.

### 2.2 - Choosing a Suitable Protocol

The chosen protocol should be compatible with the surveillance system. For IP cameras, the Real-Time Streaming Protocol (RTSP) is frequently used to stream video.

### 2.3 - Accessing Video Feeds

We need the right credentials and addresses (URLs) to access the video feeds from the sources. This may require configuring the sources to enable streaming and obtaining the correct URLs.

### 2.4 - Fetching Video Streams

Once we have the URLs, we can start fetching the video feeds into our system. We can use a variety of libraries for this task, such as [OpenCV](https://opencv.org).

### 2.5 - Securing Video Feeds

The video feeds should be secured in transit by setting up encrypted connections, for example with [OpenSSL](https://www.openssl.org), to maintain their privacy and integrity.
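The access steps above (identify a source, attach credentials, obtain a URL) can be sketched in a few lines. This is a minimal illustration with a hypothetical `CameraSource` record; the field names and default stream path are assumptions, since RTSP paths vary by camera vendor. Credentials are percent-encoded so special characters don't break the URL.

```python
from dataclasses import dataclass
from urllib.parse import quote

@dataclass
class CameraSource:
    """One video source in the surveillance system (hypothetical fields)."""
    host: str
    username: str
    password: str
    port: int = 554          # default RTSP port
    path: str = "stream1"    # stream path varies by camera vendor

def rtsp_url(cam: CameraSource) -> str:
    """Build a credentialed RTSP URL, percent-encoding the credentials."""
    user = quote(cam.username, safe="")
    pwd = quote(cam.password, safe="")
    return f"rtsp://{user}:{pwd}@{cam.host}:{cam.port}/{cam.path}"

cam = CameraSource(host="10.0.0.42", username="viewer", password="p@ss/word")
print(rtsp_url(cam))  # rtsp://viewer:p%40ss%2Fword@10.0.0.42:554/stream1
```

With OpenCV, the resulting URL can then be handed to `cv2.VideoCapture(url)` to start pulling frames.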
### 2.6 - Decoding and Preprocessing Video Data

The incoming video data needs to be decoded into frames, which might need to be preprocessed (e.g., resized or normalized) before they are input to the computer vision models.

:::info
<small>**:bulb: This is handled by the box :package:**</small>
:::

### 2.7 - Managing Multiple Video Feeds

If we're dealing with multiple video feeds, the system should be designed to handle them concurrently. This might involve running multiple threads or processes in parallel.

:::info
<small>**:bulb: This is handled by the box :package:**</small>
:::

### 2.8 - Video Storage (Optional)

We may also need to store the video data for future use or backtracking. We should consider the storage requirements and choose an appropriate video codec and container format to balance video quality against storage size. [FFmpeg](https://www.ffmpeg.org) can be used for this task.

## 3 - Preprocessing Pipeline

Convert the video into frames to be processed individually. This might involve resizing the frames, adjusting color balance, or normalizing the images to improve the computer vision model's results.

:::info
<small>**:bulb: This is handled by the box :package:**</small>
:::

## 4 - Computer Vision Processing

Implement deep learning models to analyze the video frames. These could be convolutional neural networks (CNNs) for object detection, tracking, or other tasks. Open-source libraries like OpenCV, TensorFlow, or PyTorch can be used for this purpose.

:::info
<small>**:bulb: This is handled by the box :package:**</small>
:::

## 5 - Postprocessing and Analysis

Once the computer vision model has processed the frames, the outputs can be post-processed and analyzed. This might involve marking or tagging objects of interest in the video, aggregating results over time, or detecting patterns/anomalies.
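As a sketch of "aggregating results over time", the class below keeps a sliding window of per-frame detections and flags a class once it has been seen often enough within the window. The detection format (plain label strings) and the threshold rule are assumptions for illustration, not the box's actual output format.

```python
from collections import Counter, deque

class DetectionAggregator:
    """Aggregate per-frame detections over a sliding window of frames and
    flag classes seen unusually often (hypothetical anomaly rule)."""

    def __init__(self, window_frames: int = 30, threshold: int = 10):
        self.window = deque(maxlen=window_frames)  # one Counter per frame
        self.threshold = threshold

    def add_frame(self, labels: list) -> list:
        """Record the labels detected in one frame; return the classes whose
        count over the window has reached the threshold."""
        self.window.append(Counter(labels))
        totals = sum(self.window, Counter())
        return [cls for cls, n in totals.items() if n >= self.threshold]

agg = DetectionAggregator(window_frames=5, threshold=3)
alerts = []
for frame_labels in [["person"], ["person", "car"], ["person"], ["car"]]:
    alerts = agg.add_frame(frame_labels)
print(alerts)  # ['person'] -- three "person" sightings within the window
```

Because the window is bounded (`deque(maxlen=...)`), old frames age out automatically, so a brief burst of detections does not trigger alerts forever.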
:::info
<small>**:bulb: This is handled by the box :package:**</small>
:::

## 6 - Data Storage

Depending on the requirements, we may want to store raw video, preprocessed frames, model outputs, or a combination of these. We should have a scalable database or file system to handle high data volumes. In our basic use case, this involves storing the results, particularly the alerts detected by the analysis.

### 6.1 - Data Definition

The first step is to define the data to store for each detected alert. This could include details such as the time of detection, the location in the video frame, the type of object detected, confidence scores, and potentially other metadata. We might also want to store a snapshot or clip of the video from when the alert was triggered.

### 6.2 - Database Setup

To store the alert data, we can use a [PostgreSQL](https://www.postgresql.org) database (which [Supabase](https://supabase.com/docs) extends with real-time capabilities and a user-friendly API).

### 6.3 - Database Schema

### 6.4 - Data Insertion

Data is inserted into the database each time an alert is detected. This could be done directly from the computer vision processing code **(the box :package:)**, or a separate service or thread could handle database operations. One of the benefits of Supabase is its ability to handle real-time updates: the user interface, or other parts of the system, can be notified in real time when new alert data is inserted into the database. For a more complex system, we might want to implement a publish-subscribe (pub/sub) system, which allows different parts of the system to publish messages (e.g., new alert data) and other parts to subscribe to them.

:::info
<small>**:bulb: The pub/sub system will be interesting for the multiple-box use case**</small>
:::

### 6.5 - Data Backup and Recovery

Finally, we should consider how to back up the data and recover it in case of any issues.
This could involve regular database backups, replication, or other strategies. **Supabase** includes built-in tools for backup and recovery.

## 7 - Alerting and Notification System

Depending on the nature of the surveillance, an alerting system can be put in place. For instance, if the system is meant to detect intruders, an alert can be issued when an anomaly is detected.

### 7.1 - Notification Delivery

We'll need to deliver notifications to the users. This could be through various channels such as email, SMS, push notifications, or in-app notifications.

### 7.2 - User Preferences

We might allow users to customize their notification preferences, such as which types of alerts they receive, or how they receive them. User preferences can be stored in the Supabase database, and custom logic in the notification code can respect these preferences.

### 7.3 - Alert Review and Management

Finally, users should be able to review past alerts, mark them as handled, or otherwise manage their alerts. This functionality can be built into the user interface. The UI could provide a view of the alerts table in the database, allow users to filter and sort alerts, and provide actions to manage them.

## 8 - User Interface

The user interface allows users to interact with the system. This might include live video feeds with overlays of the analysis, a dashboard showing statistics or patterns, and settings for adjusting parameters. **React**, **Vue**, or **Angular** could be used for frontend development.

### 8.1 - Requirements Analysis

First, we need to understand the requirements of the users. This might involve displaying live video feeds, a dashboard with metrics and statistics, alert notifications, or settings to adjust parameters. Each project will have unique requirements, so gathering them effectively is crucial.

### 8.2 - Designing the User Interface

Based on the requirements, we should design the user interface.
This will typically involve sketching wireframes and deciding on the layout and components of the interface. [Figma](https://www.figma.com/fr/) should be great for this.

### 8.3 - Frontend Development

We'll need to implement the design in code. We should use the [Atomic Design Methodology](https://atomicdesign.bradfrost.com/chapter-2/) so that components can be reused in other projects.

### 8.4 - Connecting with the Backend

The frontend will need to interact with the backend to fetch data (like alert data from the database) and potentially send data back (like changes to settings). Supabase provides client libraries that make interacting with the backend services straightforward. Axios or the Fetch API can also be used for making HTTP requests from the frontend to the backend.

### 8.5 - Displaying Video Feeds

If the UI includes displaying video feeds, we'll need a way to stream video data to the frontend and display it in the browser. [Video.js](https://videojs.com) is an open-source library for working with video on the web. It can handle a variety of video formats and streaming protocols.

### 8.6 - Displaying Alerts and Updates

For displaying real-time alerts and updates, we'll need a way to push data from the backend to the frontend. Supabase's real-time subscriptions can be used to listen for changes in the database and update the frontend in real time.

### 8.7 - User Authentication

If the system requires user accounts, we'll need a way to handle user authentication, including sign-up, login, and session management. Supabase includes user authentication functionality, including OAuth integration and secure role-based access control.

### 8.8 - User Interface Testing

Finally, we should test the UI thoroughly. This could involve unit tests, integration tests, and end-to-end tests, as well as manual testing. Tools like [Jest](https://jestjs.io/fr/) (for unit testing) and [Cypress](https://www.cypress.io) (for end-to-end testing) can be used in this step.
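To make the session-management idea concrete, here is a toy illustration of how a signed token binds a payload to a server-side secret, in the same spirit as the JWTs Supabase Auth issues. Everything here (the secret, the `body.signature` layout) is a simplified assumption; in production we would rely on Supabase Auth or a proven JWT library rather than rolling our own.

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

SECRET = b"server-side-secret"  # hypothetical; never hard-code in production

def sign_token(payload: dict) -> str:
    """Serialize the payload and append an HMAC-SHA256 signature,
    JWT-style ("body.signature"), both base64url-encoded."""
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, body, hashlib.sha256).digest())
    return f"{body.decode()}.{sig.decode()}"

def verify_token(token: str) -> Optional[dict]:
    """Return the payload if the signature matches, else None."""
    body, _, sig = token.partition(".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, body.encode(), hashlib.sha256).digest()
    ).decode()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))

token = sign_token({"user": "alice", "role": "viewer"})
print(verify_token(token))              # {'role': 'viewer', 'user': 'alice'}
print(verify_token(token[:-2] + "xx"))  # None -- tampered signature is rejected
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` check can leak signature information through timing differences.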
## 9 - Security

Security is a critical aspect of the system, as it involves protecting the data and ensuring the integrity of the system.

### 9.1 - Secure Communication

The communication between the different components of the system and any external interfaces must be secure. Use encryption protocols like HTTPS/TLS for the API and web interfaces. OpenSSL is an open-source software library that provides secure communication.

### 9.2 - User Authentication and Authorization

Ensure that only authorized users can access the system. Implement strong user authentication and make sure each user has the right permissions to perform operations in the system. Supabase Auth provides built-in user authentication with secure JSON Web Tokens (JWTs) and role-based access control.

### 9.3 - Secure Database Access

Database access should be restricted to minimize potential data leakage. Only authorized and necessary services should be able to connect to and interact with the database. PostgreSQL (part of Supabase) has robust built-in security features for user management and access control.

### 9.4 - Secure Data Storage

Ensure that the stored data, including videos, is kept secure. Use encryption for data at rest. OpenSSL can also be used for data encryption, and PostgreSQL supports encryption of data at rest.

### 9.5 - Security of Video Feeds

The video feeds from the surveillance system should be secure, ensuring that no unauthorized party can tap into them. OpenSSL can be used to set up secure connections for the video streams.

### 9.6 - Vulnerability Scanning and Patch Management

Regularly scan the system for potential vulnerabilities and apply the necessary patches and updates to the system components. [OpenVAS](https://openvas.org) is an open-source tool for vulnerability scanning. For containerized applications, tools like [Clair](https://github.com/quay/clair) and [Anchore](https://anchore.com) can be used.
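The role-based access control mentioned in 9.2 boils down to mapping roles to sets of permissions and checking membership before each operation. The role names and permission strings below are hypothetical; in practice this policy would live in Supabase/PostgreSQL rather than in application code.

```python
# Hypothetical role/permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "viewer":   {"alerts:read"},
    "operator": {"alerts:read", "alerts:update"},
    "admin":    {"alerts:read", "alerts:update", "alerts:delete", "settings:write"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Check whether a role grants a given permission.
    Unknown roles get an empty permission set (deny by default)."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("viewer", "alerts:read"))    # True
print(is_allowed("viewer", "alerts:delete"))  # False
print(is_allowed("ghost", "alerts:read"))     # False -- unknown role denied
```

Denying by default for unknown roles is the important design choice: a missing configuration entry should never silently grant access.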
### 9.7 - Intrusion Detection

Use intrusion detection systems to monitor the system and network for malicious activities or policy violations. [Snort](https://www.snort.org) is an open-source intrusion detection system.

## 10 - DevOps

DevOps is a methodology that combines software development (Dev) and IT operations (Ops) to shorten the development lifecycle and provide continuous delivery of high-quality software.

### 10.1 - Source Control

Source control (or version control) is the practice of tracking and managing changes to code. It is essential for collaborating on software projects and maintaining a history of changes. GitHub and GitLab are popular platforms for hosting Git repositories, with GitLab offering self-hosted options.

### 10.2 - Continuous Integration/Continuous Deployment (CI/CD)

CI/CD practices involve automatically building and testing the code whenever changes are made, and then automatically deploying it to production if it passes all tests. GitLab has robust built-in CI/CD features.

### 10.3 - Infrastructure as Code (IaC)

IaC involves managing and provisioning computing infrastructure through machine-readable definition files, rather than manual hardware configuration or interactive configuration tools. Tools like [Terraform](https://www.terraform.io) and [Ansible](https://www.ansible.com) are widely used for implementing IaC.

### 10.4 - Containerization

Containers are a lightweight alternative to virtual machines that bundle an application with its dependencies, so it can run reliably across different computing environments. Even the developer environment should be containerized. Docker is a widely used tool for creating and managing containers.

### 10.5 - Orchestration

Orchestration involves managing the lifecycles of containers, especially in large, dynamic environments. Kubernetes, which we are already planning to use, is the most popular open-source orchestration tool.
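The gating logic of a CI/CD pipeline (10.2) can be reduced to a few lines: run the stages in order and refuse to continue past the first failure, so a broken build or a failing test never reaches the deploy stage. The stage names below are illustrative; a real pipeline would be declared in GitLab CI configuration, not in Python.

```python
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run stages in order (build, test, deploy, ...), stopping at the first
    failure. Returns the names of the stages that completed successfully."""
    passed = []
    for name, stage in stages:
        if not stage():
            break
        passed.append(name)
    return passed

result = run_pipeline([
    ("build",  lambda: True),
    ("test",   lambda: False),   # a failing test gate blocks deployment
    ("deploy", lambda: True),
])
print(result)  # ['build'] -- deploy never runs because the tests failed
```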
### 10.6 - Monitoring and Logging

Monitoring the application and keeping logs helps us spot issues before they become problems, understand the application's performance, and debug issues when they occur. Prometheus is a popular open-source monitoring tool, and Grafana is an open-source tool for visualizing metrics. For logging, we can use the Elastic Stack (Elasticsearch, Logstash, Kibana).

### 10.7 - Configuration Management

Configuration management involves keeping track of the system's configuration and ensuring it remains in its desired state. Ansible and [Puppet](https://www.puppet.com) are popular open-source tools for configuration management.

### 10.8 - Testing

Automated testing is a key part of DevOps. This could involve unit tests, integration tests, end-to-end tests, and other types of tests. The tools depend on the programming language and the types of tests we want to write: for example, pytest and unittest for Python, or Jest for JavaScript.

### 10.9 - Security

DevOps combined with security forms DevSecOps, which emphasizes incorporating security early in the lifecycle. [OpenSCAP](https://www.open-scap.org) is an open-source tool for automated vulnerability management, and [OWASP ZAP](https://www.zaproxy.org) is a free security tool for finding vulnerabilities in web applications.

## 11 - Scalability and Future-Proofing

Scalability and future-proofing are essential aspects of system architecture, as they ensure that the system can handle increased demand and remain relevant as technologies evolve.

### 11.1 - Horizontal and Vertical Scaling

Plan for both horizontal scaling (adding more machines to the pool of resources) and vertical scaling (adding more power to an existing machine). Kubernetes excels in this aspect: it supports both horizontal and vertical auto-scaling, and services can scale automatically based on CPU utilization or custom metrics.
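For intuition on the horizontal auto-scaling just described: the Kubernetes Horizontal Pod Autoscaler computes its target as `ceil(currentReplicas * currentMetric / targetMetric)`. The sketch below applies that formula with min/max bounds; the bound values are assumptions for the example.

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float, min_r: int = 1, max_r: int = 10) -> int:
    """Replica count the Kubernetes HPA formula would aim for:
    ceil(currentReplicas * currentMetric / targetMetric), clamped to bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_r, min(max_r, desired))

# 4 pods at 90% average CPU with a 60% target -> scale out to 6
print(desired_replicas(4, 90.0, 60.0))  # 6
# 4 pods at 20% average CPU -> scale in to 2
print(desired_replicas(4, 20.0, 60.0))  # 2
```

In a real cluster, the HPA controller runs this calculation on a loop against live metrics (from the metrics server or Prometheus adapters); nothing needs to be coded by hand.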
### 11.2 - Load Balancing

A load balancer distributes network traffic across multiple servers so that no single server bears too much demand. Kubernetes has built-in service types (like NodePort and LoadBalancer) that support load balancing.

### 11.3 - Statelessness

Whenever possible, design the services to be stateless. Stateless applications are easier to scale because any request can be served by any instance of the application.

### 11.4 - Database Scalability

The database is often the hardest part to scale. Choose a database that supports replication and sharding. PostgreSQL (part of Supabase) supports replication and partitioning.

### 11.5 - Use of Microservices

Splitting the application into microservices can improve scalability because each service can be scaled independently.

### 11.6 - Asynchronous Processing

For tasks that can be delayed, such as SMS or email notifications, we should use an asynchronous processing model. This offloads heavy tasks from the main application and improves scalability. Apache Kafka and Redis are open-source tools that provide asynchronous messaging capabilities.

### 11.7 - Future-Proofing

We should design the application with a modular architecture. It makes the system more flexible to change, making it easier to swap out one component for another if needed.
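The asynchronous model of 11.6 can be sketched with a standard-library queue and a background worker: the detection pipeline enqueues a notification task and returns immediately, while the worker drains the queue at its own pace. The `sent` list stands in for a hypothetical SMS/email gateway; in production the queue would be Kafka or Redis rather than in-process.

```python
import queue
import threading

sent = []  # stands in for an SMS/email gateway (hypothetical)

def notification_worker(tasks: queue.Queue) -> None:
    """Drain notification tasks in the background so the main application
    never blocks on slow delivery channels. A None task shuts the worker down."""
    while True:
        task = tasks.get()
        if task is None:
            break
        sent.append(f"{task['channel']}:{task['to']}")  # deliver (simulated)
        tasks.task_done()

tasks: queue.Queue = queue.Queue()
worker = threading.Thread(target=notification_worker, args=(tasks,), daemon=True)
worker.start()

# The detection pipeline just enqueues and moves on.
tasks.put({"channel": "sms", "to": "+33600000000"})
tasks.put({"channel": "email", "to": "ops@example.com"})
tasks.join()          # wait for in-flight notifications (for the demo only)
tasks.put(None)       # stop the worker
worker.join()
print(sent)  # ['sms:+33600000000', 'email:ops@example.com']
```

Swapping the in-process queue for a broker like Kafka or Redis keeps the same shape (producer, queue, consumer) while letting the workers scale independently of the main application, which is exactly the microservices argument from 11.5.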