# Essay 4
*Beets* is a media management tool that mainly runs on PCs. Scalability is one of the important aspects to consider when analyzing and evaluating *Beets*. In general, there are several ways of measuring the scalability of software. In this essay, we measure the time and space consumption of *Beets* to see whether it scales well as the task size increases.
## Scalability Challenges
The two key usages of *Beets* are importing collections and querying for tracks and albums. Here we focus on the scalability of collection imports, since querying mainly relies on the SQLite interface, where SQLite's own scalability is the dominant factor.
When importing music and collections, there are two scenarios that may challenge the scalability of *Beets*. The first is when there are many music files in a single directory: *Beets* has to import and tag each file in turn. The second is when the music files are stored in a deeply nested folder structure, as shown below, which requires *Beets* to traverse all subdirectories to import the tracks.
```
Musics/
-- Singer 1/
---- Album 1/
------ music 1
------ music 2
------ music 3
------ ...
---- Album 2/
------ music 4
------ music 5
------ music 6
------ ...
-- Singer 2/
---- Album 3/
------ music 7
------ music 8
------ music 9
------ ...
---- Album 4/
------ music 10
------ music 11
------ music 12
------ ...
---- Album ../
-- Singer ../
```
Both of these situations could limit the scalability of *Beets*; we will look into one of them to analyze the scalability of *Beets* quantitatively.
## Quantitative Analysis of *Beets*' Performance
We analyze the first situation mentioned in the previous section, where many files sit in the same directory, to get better insight into the scalability of *Beets*. We vary the number of tracks in a single directory and observe the time and memory performance of importing in *Beets*. To obtain more general results, we enable the *autotagger* in the experiments; it can be turned off if the user specifies a command-line argument.
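For reference, the measurement can be scripted along the following lines. The `test_dirs/<n>` layout is our own experimental setup, not part of *Beets*, while `beet import -q` (non-interactive import) and the `-A` flag (which disables the autotagger) are documented command-line options.
```python
# Sketch of our measurement loop; test_dirs/<n> is an assumed layout
# where each directory holds n copies of a short audio file.
import resource
import subprocess
import time

for n in (10, 50, 100, 200, 500):
    start = time.perf_counter()
    # -q runs the import non-interactively with the autotagger on;
    # adding -A would skip autotagging instead.
    subprocess.run(["beet", "import", "-q", f"test_dirs/{n}"], check=True)
    elapsed = time.perf_counter() - start
    # Peak resident set size among all child processes so far
    # (reported in kilobytes on Linux).
    peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    print(f"{n} files: {elapsed:.1f} s, peak memory ~{peak_kb} KB")
```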
The results are shown in the figure below. The import time is roughly proportional to the number of files in the folder: as the number of tracks increases, the import time grows linearly. This poses a threat to the scalability of *Beets*, as importing hundreds of music files takes a very long time. On the other hand, *Beets* assumes that tracks in the same folder belong to the same album. Since an album usually contains only twenty or thirty tracks, this threat is mitigated in practice.
{{<image file="response_time.png" caption="The response time when the number of musics varies">}}
We also measure the memory usage of importing music files in *Beets*. The results are shown below. The measurements are noisy, but the general trend is that memory usage remains constant as the number of files grows. We believe that the major consumer of memory is the *autotagger*. Since the *autotagger* tags only one item at a time, it is reasonable that the memory usage stays constant.
{{<image file="space_performance.png" caption="The memory usage when the number of musics varies">}}
## Architectural Decisions that Affect Scalability
In the previous sections, we analyzed the scenarios that challenge the scalability of *Beets* and conducted experiments to quantify them. In this section, we show which architectural decisions lead to these results.
Although the import process is time-consuming, one architectural decision improves scalability: the autotagger and many plugins can be enabled or disabled by editing the configuration file. Users can choose to disable the *autotagger* to get better *import* performance.
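For instance, both switches live in the YAML configuration file. The sketch below assumes the documented `plugins` list and `import.autotag` option; the particular plugin names are only examples.
```yaml
# Sketch of a beets config.yaml; plugin names are examples only.
plugins: fetchart lyrics   # enable only the plugins you need

import:
  autotag: no              # disable the autotagger for faster imports
```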
However, another architectural decision limits scalability: the *autotag* module. The decision to implement it in pure Python restricts performance. During autotagging, pairs of strings have to be compared by computing an edit distance in order to find the correct tag for a given item; the strings are lower-cased ASCII[^1] and the distance is normalized by string length. Since every value in Python is a heap-allocated object, in contrast to the fixed-size primitive types of C or C++, the interpreter spends considerable time identifying the types of operands before computing with their values.
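To make the cost concrete, the following is a simplified sketch of this kind of normalized string distance; it mirrors the idea rather than *Beets*' actual implementation.
```python
# Simplified sketch of a normalized string distance of the kind the
# autotagger computes; not beets' actual implementation.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def string_dist(s1: str, s2: str) -> float:
    """Lower-case both strings and normalize the distance by length."""
    s1, s2 = s1.lower(), s2.lower()
    if not s1 and not s2:
        return 0.0
    return levenshtein(s1, s2) / max(len(s1), len(s2))

print(string_dist("The Beatles", "Beatles"))  # 4 edits / 11 chars = 0.36
```
Every addition and comparison in the inner loop operates on boxed Python objects, which is exactly the overhead a C implementation would avoid.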
## Proposals for Architectural Changes
As mentioned before, the whole *Beets* system is written in Python. Admittedly, Python has attractive features, such as abundant libraries and clear, concise syntax. However, it is not efficient enough. Python is an interpreted language rather than a compiled one: code is interpreted at run time instead of being compiled to native code, which makes it hard to optimize and leads to lower efficiency.
Therefore, it might be effective to implement some modules in another programming language to address the issues identified above. Statically typed, compiled languages such as C, C++, or Java could be a good choice for the performance-demanding scenarios mentioned before. There are three main steps[^2] to carry out. First, rewrite the targeted functions or modules in one of these languages. Second, compile these modules into static or dynamic libraries. Third, link them into the Python runtime. In this way, we could reduce the execution time of time-consuming modules or functions like *autotag* and *plugin*.
## Architecture Designs: Present and Future
As mentioned above, the whole *Beets* system is implemented in Python, including the high-level APIs and the low-level libraries (dbcore, autotag, etc.).
To discuss the potential alterations we could make to the architecture for the sake of performance, we take the classes and key methods called during *import* as an example. The image *current architecture* presents a UML diagram of *Beets*'s present architecture for the *import* procedure. In the diagram, solid lines represent inheritance relationships between classes, while dashed lines represent calling dependencies.
{{<image file="beets_UML_before.png" caption="current architecture">}}
The low-level operations in `dbcore.query` and `autotag.mb` can be very time-consuming due to the aforementioned drawbacks of Python. As mentioned previously, one feasible way to achieve higher performance is to implement the low-level libraries in a compiled language like Java, C, or C++. For example, we could replace the libraries written in Python with pre-compiled .so files (shared libraries), generated from either C or C++. If we implement the low-level libraries in C, we can use *Cython* to wrap the C functions, declare them with `cpdef` to expose the interface (the C API) to Python, and compile the code into a .so file with *gcc*. After that, we can dynamically load the shared libraries with `CDLL` in Python code. The architecture after the alteration is shown in the image *architecture after alteration*.
{{<image file="beets_UML_after.png" caption="architecture after alternation">}}
## How the Proposed Change Addresses the Scalability Issue
It is, however, not practical for us to implement this alteration in a short time, since it requires a lot of code to be changed. Hence, we will not conduct experiments to verify the effect of the change. Instead, we analyze it from a theoretical point of view and refer to experiments from other projects to support the feasibility of our proposed architectural change.
Although we are not going to implement and test the C API ourselves, it is easy to find successful examples of this practice in other projects. *Numpy* is a classic instance of achieving high performance by applying Cython and a C API in a Python project: C makes up 35.1% of its code base. Because *Numpy* arrays are of homogeneous type, they are densely packed in memory and can be freed quickly. As a result, tasks performed with *Numpy* are typically 5 to 100 times faster than with standard Python lists, which is a significant speedup[^3].
An experiment[^4] also supports this point of view. The tester starts by implementing a *Cython* module in pure Python; in this case, *Cython* still does exactly what the Python interpreter does, and the resulting performance is 1.41 s per loop.
In the second iteration, the tester uses custom *Cython* syntax to add types, thereby breaking Python source compatibility. In this scenario, the running time drops to 828 ms per loop.
In the final iteration, they use efficient indexing to access the data buffer directly at C speed. This further breaks the performance bottleneck and reaches 11.6 ms per loop.
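As an illustration of what "adding types" means, the sketch below uses Cython's pure-Python mode, where type declarations are written as ordinary Python annotations; it mirrors the tutorial's idea rather than reproducing its code. `@cython.ccall` is the pure-Python spelling of `cpdef`.
```python
# Sketch of the "add types" step in Cython's pure-Python mode: the
# file runs unchanged under CPython, and once compiled by Cython the
# annotated loop executes as native C code.
import cython

@cython.ccall                      # pure-Python equivalent of cpdef
def mean(data: list) -> cython.double:
    total: cython.double = 0.0
    count: cython.int = 0
    for x in data:
        total += x
        count += 1
    return total / count if count else 0.0

print(mean([1.0, 2.0, 3.0]))  # works interpreted; faster once compiled
```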
Similar examples can be found in *Pandas* and *Scipy*. We therefore expect that replacing the low-level libraries with a C API would reduce both the response time and the space usage, significantly improving the scalability of *Beets*.
## References
[^1]: https://github.com/beetbox/beets/blob/master/beets/importer.py
[^2]: https://towardsdatascience.com/write-your-own-c-extension-to-speed-up-python-x100-626bb9d166e7
[^3]: https://towardsdatascience.com/how-fast-numpy-really-is-e9111df44347
[^4]: https://cython.readthedocs.io/en/latest/src/tutorial/numpy.html