
Image Recognition on a multi-server architecture

Finn Jensen (fjensen@bu.edu)
Abhinav Srivastava (sabhinav@bu.edu)

GitHub Link : https://github.com/AbhinavMir/miniproject
GENI Link: urn:publicid:IDN+ch.geni.net:CS655-Fall2022+slice+Miniproject
Slice: MiniProject

Introduction / Problem Statement

ImageNet is a large-scale image dataset containing millions of labeled images. It is used by researchers and developers to train machine learning models for computer vision tasks such as image classification, object detection, and segmentation.

ImageNet contains 14 million labeled images in over 22 thousand categories. The dataset is organized into a hierarchy of concepts, such as animals, plants, objects, scenes, and so on. Each image is labeled with a unique identifier and associated with a set of labels that describe the image. The dataset is divided into a training set for training machine learning models and a validation set for evaluating the accuracy of the model.

ImageNet is an invaluable resource for machine learning researchers, as it provides a large set of labeled images that can be used to train models quickly and accurately. It is also useful to developers who need to create applications that can recognize objects in images.

Experimental Methodology

We used three servers, a router, and a client. The client connects to the router, which in turn connects to the three servers. The topology of the network is as follows.

We configured the systems using prior experience; we initially considered ngrok for exposing endpoints but ultimately used native solutions.

A load balancer increases the capacity, reliability, and performance of applications by distributing incoming traffic across multiple servers or resources, a process known as load balancing. By spreading the workload among multiple resources, a load balancer improves the overall efficiency of an application and reduces the risk of downtime due to resource overload. Additionally, a load balancer provides a single point of access to multiple servers, making applications more secure and easier to manage.

From the client, all traffic is routed to the load balancer on the router node. The load balancer pings all three servers and checks the status code; a 503 means the server is busy and the request must be sent to another server. We use a mempool-like folder on the router to store uploaded images, and a background service picks up each image and sends it to a free server. Once processed, the image is deleted. All data regarding each request is stored in a CSV file; we initially used a database (SQLite) but ran into concurrency errors.
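
A minimal sketch of this dispatch loop is shown below. The internal server addresses, the /status and /classify routes, the mempool folder name, and the CSV columns are illustrative assumptions; the actual background service in the repository may be wired differently.

import csv
import os
import time

import requests

# Server addresses, routes, and file names below are placeholders, not the real configuration.
SERVERS = ["http://10.10.1.1:5000", "http://10.10.1.2:5000", "http://10.10.1.3:5000"]
MEMPOOL_DIR = "mempool"      # folder on the router where uploaded images wait
LOG_CSV = "requests.csv"     # per-request metadata, used instead of SQLite

def find_free_server():
    """Return the first server that does not answer 503 (busy), or None."""
    for base in SERVERS:
        try:
            if requests.get(base + "/status", timeout=1).status_code != 503:
                return base
        except requests.RequestException:
            continue  # unreachable server, try the next one
    return None

def dispatch_once():
    """Send every image waiting in the mempool folder to a free server."""
    for name in os.listdir(MEMPOOL_DIR):
        path = os.path.join(MEMPOOL_DIR, name)
        server = find_free_server()
        if server is None:
            return  # every server is busy; try again on the next pass
        start = time.time()
        with open(path, "rb") as f:
            resp = requests.post(server + "/classify", files={"image": f})
        with open(LOG_CSV, "a", newline="") as log:
            csv.writer(log).writerow(
                [name, os.path.getsize(path), server, time.time() - start, resp.status_code]
            )
        os.remove(path)  # the image is deleted once it has been processed

if __name__ == "__main__":
    while True:
        dispatch_once()
        time.sleep(1)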

The user can navigate to http://128.95.190.66:5000/results to view the results for any image uploaded from their IP address.

Model Selection and Optimization

The model selected for this project is the Vision Transformer, specifically the base-patch16-224 version available on Hugging Face. This Vision Transformer (ViT) model is pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224 and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at the same resolution. This model was not the first choice; it was chosen because of the constraints imposed by the GENI server hardware. Larger models with a more diverse set of possible predictions resulted in out-of-memory errors that led the server to kill the process, so ViT was chosen for its much smaller size even though it is less accurate than other models.
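
Below is a minimal sketch of how this checkpoint can be loaded and run with the Hugging Face transformers library; the image path is a placeholder, and the exact wiring inside app.py may differ.

from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

# Load the pre-trained and fine-tuned checkpoint from Hugging Face.
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

image = Image.open("example.jpg").convert("RGB")         # placeholder image path
inputs = processor(images=image, return_tensors="pt")    # resizes and normalizes to 224x224
logits = model(**inputs).logits                          # one logit per ImageNet-1k class
print(model.config.id2label[logits.argmax(-1).item()])   # human-readable label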

The size of an image affects ImageNet computation time in a few ways. A larger image has more pixels, so there is more data to process, which increases the time it takes to run inference. Additionally, a very large image must be resized to fit the constraints of the model, and this resizing step adds to the computation time. The impact of image size on computation time can vary, however, depending on the specific hardware and software being used.
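
As a rough illustration (the file names are placeholders), the decode-and-resize step can be timed separately to see how much of the end-to-end latency scales with the input file size:

import os
import time

from PIL import Image

for path in ["small.jpg", "large.jpg"]:      # placeholder files of different sizes
    start = time.time()
    img = Image.open(path).convert("RGB")
    img = img.resize((224, 224))             # every input ends up at the model's 224x224 resolution
    elapsed = time.time() - start
    print(path, os.path.getsize(path), "bytes,", round(elapsed, 4), "s to decode and resize")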

Results

Usage Instructions

If you're storing your private key in ~/.ssh, it makes sense to chmod 400 ~/.ssh/id_geni_ssh_rsa to prevent others from reading it. This also prevents ssh from complaining about the key being world-readable.

Public addresses are provided below. The client can be accessed at http://128.95.190.66:5000

server-0 (128.95.190.67:5000)
server-1 (128.95.190.68:5000)
server-2 (128.95.190.69:5000)
router (128.95.190.66:5000)
client (128.95.190.64:5000)

We only exposed public IPs on the servers for ease of testing; internally, we used private IPs for security. Ideally, we would have no public ports, but we kept them open for Postman and other testing tools.

To launch the ImageNet server:
Connect to server-N, on which Anaconda, the required Python packages, and app.py are located.

Launch Anaconda:

source ~/anaconda3/bin/activate

Then launch the Flask server:

python -m flask run --host=0.0.0.0
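
Once a server is running, an image can also be submitted programmatically. The sketch below assumes an upload route and form-field name, which are illustrative only; the web interface at the client address is the supported path for uploads.

import requests

# The /upload route and the "image" field name are assumptions for illustration.
with open("cat.jpg", "rb") as f:             # placeholder image
    r = requests.post("http://128.95.190.66:5000/upload", files={"image": f})
print(r.status_code, r.text)

# Results for images uploaded from this IP are listed on the results page.
print(requests.get("http://128.95.190.66:5000/results").text)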

Screenshot of the web-based user interface

Image fed to ImageNet

Screenshot of the load balancer handling the image posted by the user

Results of image processing and extra data from the load balancer

Analysis

The collected dataset contains, for each request, the size of the image in bytes and the time it took for the image to be recognized by a server and the result returned to the client. This information is useful for understanding the performance of the system and identifying potential bottlenecks or issues.

One possible analysis of this dataset is to look at the relationship between the size of the image and the time it took to be recognized and returned. A scatter plot can visualize this relationship, with image size on the x-axis and recognition time on the y-axis, showing whether the two variables are correlated and whether larger images tend to take longer to be recognized and sent back.
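
A minimal sketch of that plot is shown below; the CSV path and column names are assumptions about how the router logs each request and should be adjusted to match the actual log.

import matplotlib.pyplot as plt
import pandas as pd

# Column names are assumed; adjust them to match the router's CSV log.
df = pd.read_csv("requests.csv",
                 names=["image", "size_bytes", "server", "latency_s", "status"])

plt.scatter(df["size_bytes"], df["latency_s"])
plt.xlabel("Image size (bytes)")
plt.ylabel("Recognition time (seconds)")
plt.title("Image size vs. end-to-end recognition time")
plt.show()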

Conclusion

In this project, we implemented a system for routing traffic from a client to one of three servers, where the servers use an ImageNet model to recognize images and send the results back to the client. The system was tested using a variety of images of different sizes, and the results showed that the system was able to accurately recognize images and return the results to the client in a timely manner. The result was as expected: larger images required more time. We ranked images by both dimensions and byte size, but since dimensions did not yield a useful result, we focused on the byte size of the file.

Overall, the system performed well and met the project's goals of providing efficient image recognition capabilities. The use of multiple servers and a router allowed for efficient routing of traffic and distributed processing, which helped to improve the overall performance of the system.

In the future, there are several potential areas for improvement and further development. For example, the system could be expanded to support more servers and handle a larger volume of traffic. Additionally, the recognition models used by the servers could be refined and optimized to improve the accuracy and speed of image recognition. Much of the inspiration for this project came from OpenAI's recent experience of its GPT systems being met with enormous traffic as they scaled.

Overall, this project has demonstrated the feasibility and effectiveness of using ImageNet, multiple servers, and a router to provide efficient image recognition capabilities.

Division of Labor

Finn handled the creation of the ImageNet inference service as well as the server it is hosted on, which can be reached both internally from GENI and externally. The server uses Flask, since the inference code is written in Python and keeping both in the same language reduced the workload.

Abhinav handled the router and the load balancer. He also contributed to the overall backend and helped create the client.
