# App Find – Machine Learning
## Find Android application alternatives that require fewer permissions.
This recommendation engine was developed to help privacy-conscious users find alternatives to their Android applications.
## How does it work?
Topic modeling was applied to the application descriptions of about 571k Android applications to create a model that we used to assign topic IDs to applications. These topic IDs group similar applications together.
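The recommendation idea can be illustrated with a toy sketch: applications sharing a topic ID are considered similar, and alternatives are ranked by permission count. All names and data below are hypothetical and not taken from the project.

```python
# Toy sketch of the recommendation idea: applications that share a topic ID
# are considered similar, and alternatives are ranked by permission count.
from collections import defaultdict

# Hypothetical example data; the real metadata comes from the backend database.
apps = [
    {"name": "FlashlightPro", "topic_id": 7, "permissions": 9},
    {"name": "SimpleTorch",   "topic_id": 7, "permissions": 2},
    {"name": "TorchLite",     "topic_id": 7, "permissions": 4},
    {"name": "WeatherNow",    "topic_id": 3, "permissions": 5},
]

# Group applications by their assigned topic ID.
by_topic = defaultdict(list)
for app in apps:
    by_topic[app["topic_id"]].append(app)

def alternatives(app_name):
    """Return apps with the same topic ID but fewer permissions, fewest first."""
    app = next(a for a in apps if a["name"] == app_name)
    candidates = [a for a in by_topic[app["topic_id"]]
                  if a["permissions"] < app["permissions"]]
    return sorted(candidates, key=lambda a: a["permissions"])

print([a["name"] for a in alternatives("FlashlightPro")])  # ['SimpleTorch', 'TorchLite']
```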
A single page application (SPA) was developed based on React and Redux, which consumes the metadata and recommendations from the backend. The backend itself is developed in Python and fetches data from a PostgreSQL database that contains all the data needed.
## Functions
- [x] Search and filter functions to browse for Google Play Store applications
- [x] Find alternatives on the application detail page that require fewer permissions
- [x] Compare our recommendation with Google's filtered application recommendations
## Repository Description
This repository contains the machine learning part of the application. The other parts can be found in the following repositories:
App Find Frontend: https://github.zhaw.ch/neut/appfind-frontend
App Find Backend: https://github.zhaw.ch/neut/appfind-backend
App Find Deployment: https://github.zhaw.ch/neut/appfind-deployment
## Machine Learning Setup
Our implementation works with Python v3.7. Please make sure to use the same Python version.
To keep the dependencies of different projects separate, we recommend using a Python virtual environment. There are many tools for creating one, such as venv, conda, or pipenv; in this example, we use pipenv.
Pipenv has to be installed first, as described in the documentation: https://github.com/pypa/pipenv
Then, open a shell in the root folder, which contains a file called `Pipfile`. Install all dependencies into the virtual environment by running: `pipenv install`
## Data
The machine learning process requires data from the Google Play Store. Unfortunately, we cannot provide the metadata for download due to legal reasons.
However, an API key with limited access can be obtained from the data provider "AppMonsta". The data is free of charge for small datasets; the plan can be upgraded at any time to a subscription for larger ones.
A free API key can be obtained here: https://appmonsta.com/dashboard/get_api_key
More information about the datasets and the API can be found here: https://appmonsta.com/dashboard/api-documentation
To download the data and save it to a file (here `details.json`), execute this command in the command line:
```
curl --compressed -u "{API_KEY}:X" \
"https://api.appmonsta.com/v1/stores/android/details.json?date=2020-05-30&country=US" \
-o details.json
```
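For those who prefer Python, the same request can be sketched with the standard library. The output filename, key placeholder, and download logic below are assumptions for illustration, not part of the project:

```python
# Hypothetical Python equivalent of the curl command above, using only the
# standard library. The API key placeholder and output path are assumptions.
import base64
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; obtain a real key from AppMonsta
URL = ("https://api.appmonsta.com/v1/stores/android/details.json"
       "?date=2020-05-30&country=US")

def build_request(url, api_key):
    """Build a request with HTTP basic auth ("{API_KEY}:X"), as curl -u does."""
    token = base64.b64encode(f"{api_key}:X".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    return req

if __name__ == "__main__":
    # Stream the response to a local file in chunks.
    req = build_request(URL, API_KEY)
    with urllib.request.urlopen(req) as resp, open("details.json", "wb") as out:
        for chunk in iter(lambda: resp.read(64 * 1024), b""):
            out.write(chunk)
```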
## Scripts
For the machine learning part of the application, many scripts were created to automate topic modeling, topic inference, cross-validation, scoring, and more. The table below gives an overview of the scripts with a brief description of each.
| Script | Description |
|---------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1_clean_descriptions.py | This script is used to preprocess and clean the raw data in the data preparation process. It removes, for example, emails, links, and phone numbers from the application description texts used to create the topic models. |
| 2_hdp_tomotopy_topic_modelling.py | This script applies topic modeling with the Hierarchical Dirichlet Process (HDP) implemented in the Tomotopy library to the application descriptions. |
| 2_lda_gensim_topic_modelling.py | This script applies topic modeling with Latent Dirichlet Allocation (LDA) implemented in the Gensim library to the application descriptions. |
| 2_lda_tomotopy_topic_modelling.py | This script applies topic modeling with Latent Dirichlet Allocation (LDA) implemented in the Tomotopy library to the application descriptions. |
| 2_minibatchkmeans_sklearn_topic_modelling.py | This script applies topic modeling with Mini-Batch K-Means implemented in the Scikit-learn library to the application descriptions. |
| 2_nmf_gensim_topic_modelling.py | This script applies topic modeling with Non-Negative Matrix Factorization (NMF) implemented in the Gensim library to the application descriptions. |
| 2_nmf_sklearn_topic_modelling.py | This script applies topic modeling with Non-Negative Matrix Factorization (NMF) implemented in the Scikit-learn library to the application descriptions. |
| 3_hdp_tomotopy_topic_mapping.py | This script assigns a topic ID to each application description, using the topic model generated with the Hierarchical Dirichlet Process (HDP) implemented in the Tomotopy library. |
| 3_lda_gensim_topic_mapping.py | This script assigns a topic ID to each application description, using the topic model generated with Latent Dirichlet Allocation (LDA) implemented in the Gensim library. |
| 3_lda_tomotopy_topic_mapping.py | This script assigns a topic ID to each application description, using the topic model generated with Latent Dirichlet Allocation (LDA) implemented in the Tomotopy library. |
| 3_minibatchkmeans_sklearn_topic_mapping.py | This script assigns a topic ID to each application description, using the topic model generated with Mini-Batch K-Means implemented in the Scikit-learn library. |
| 3_nmf_gensim_topic_mapping.py | This script assigns a topic ID to each application description, using the topic model generated with Non-Negative Matrix Factorization (NMF) implemented in the Gensim library. |
| 3_nmf_sklearn_topic_mapping.py | This script assigns a topic ID to each application description, using the topic model generated with Non-Negative Matrix Factorization (NMF) implemented in the Scikit-learn library. |
| 4_hdp_tomotopy_topic_mapping_loop.py | This script automatically executes "3_hdp_tomotopy_topic_mapping.py" for multiple numbers of topics. |
| 4_lda_gensim_topic_mapping_loop.py | This script automatically executes "3_lda_gensim_topic_mapping.py" for multiple numbers of topics. |
| 4_lda_gensim_topic_modelling_loop.py | This script automatically executes "2_lda_gensim_topic_modelling.py" for multiple numbers of topics. |
| 4_lda_tomotopy_topic_mapping_loop.py | This script automatically executes "3_lda_tomotopy_topic_mapping.py" for multiple numbers of topics. |
| 4_minibatchkmeans_sklearn_topic_mapping_loop.py | This script automatically executes "3_minibatchkmeans_sklearn_topic_mapping.py" for multiple numbers of topics. |
| 4_minibatchkmeans_sklearn_topic_modelling_loop.py | This script automatically executes "2_minibatchkmeans_sklearn_topic_modelling.py" for multiple numbers of topics. |
| 4_nmf_gensim_topic_mapping_loop.py | This script automatically executes "3_nmf_gensim_topic_mapping.py" for multiple numbers of topics. |
| 4_nmf_gensim_topic_modelling_loop.py | This script automatically executes "2_nmf_gensim_topic_modelling.py" for multiple numbers of topics. |
| 4_nmf_sklearn_topic_mapping_loop.py | This script automatically executes "3_nmf_sklearn_topic_mapping.py" for multiple numbers of topics. |
| 4_nmf_sklearn_topic_modelling_loop.py | This script automatically executes "2_nmf_sklearn_topic_modelling.py" for multiple numbers of topics. |
| 5_analayze_coherence.py | This script selects the generated topic models, calculates the coherence values, and plots the results. |
| 5_create_cross_validation_sets.py | This script creates the sub-datasets required for cross-validation. It is used to create K folds for Stratified Cross-Validation and a train-test split for Hold-Out Cross-Validation. |
| 5_execute_all.py | This script executes all topic modeling, mapping, and coherence calculation processes with all the algorithms and libraries. |
| 5_run_cross_validation.py | This script runs cross validation with datasets created with "5_create_cross_validation_sets.py" automatically. |
| 5_run_cross_validation_coherence.py | This script calculates the coherence values of the models created with "5_run_cross_validation.py" automatically. |
| 6_distribution_analysis.py | This script analyzes the following distributions: genre, word count, topic, permissions, free/paid, price, downloads, ads, rating, and release year. |
| 6_scoring_manual_groups.py | This script ranks the models using manually defined sets of applications that we believe should share the same topic ID. |
| 6_scoring_related_apps.py | This script ranks the models by comparing the topic IDs generated by the models against Google's recommendations. |
| 7_infer_final_model.py | This script executes the script "3_hdp_tomotopy_topic_mapping.py" with the data of the final model. |
| 7_label_final.model.py | This script labels the topics of the final model. |
| 7_train_final_model.py | The models can be trained further, even if the training was stopped. This script trains the final model further with more iterations. |
| 8_benchmark_hdp.py | This script extracts information from a Tomotopy HDP model, such as: topic training time, inference time, model size. |
| 8_benchmark_hdp_analyze.py | This script analyzes the data generated with the script "8_benchmark_hdp.py" and plots them. |
| 9_download_app_images.py | This script downloads the images of all applications from the Google Play Store. The images are cached locally so that Google's servers are not contacted each time the recommendation engine is used. |
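The cleaning step performed by "1_clean_descriptions.py" can be illustrated with a minimal regex-based sketch. The patterns below are assumptions for illustration; the actual script's rules may differ:

```python
import re

# Hypothetical minimal version of the description cleaning done by
# 1_clean_descriptions.py: strip emails, links, and phone numbers.
EMAIL_RE = re.compile(r"\S+@\S+\.\S+")
URL_RE = re.compile(r"https?://\S+|www\.\S+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def clean_description(text):
    """Remove emails, links, and phone numbers, then collapse whitespace."""
    for pattern in (URL_RE, EMAIL_RE, PHONE_RE):
        text = pattern.sub(" ", text)
    return " ".join(text.split())

print(clean_description(
    "Contact support@example.com or visit https://example.com today."))
```

Real descriptions contain many more artifacts (emoji, markup, localized phone formats), so a production cleaner would need broader patterns than these.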