# 1e100 IBU prompt
2022-02-11 13:00 CET
Who will advance the slides? Can I do this on my own, please?
## Slide 0 (Title)
HELLO, EVERYBODY
I'm Maciej Malicki, and together with Matthew Leonowicz we form a team called *ten to the power of one hundred IBU*.
So the main project motivation was to pass off a meme as a Machine Learning project.
The vague TL;DR of what we achieved: predicting star ratings for beers and assigning sentences of beer reviews to the aspects that beer drinkers rate.
*Next slide, please.*
## Slide 1 (Data)
So we came up with our project idea after finding the RateBeer dataset in the Stanford Large Network Dataset Collection. Unfortunately, the dataset had been removed at the request of RateBeer.com, but we managed to get it anyway. Later on we excitedly scraped a similar Polish website too!
We work with multi-aspect reviews, meaning that a single review consists of a text and numerical scores for a few aspects: aroma, taste, appearance, palate (taste in a broader sense, connected to mouthfeel) and, last but not least, an overall score.
*Next slide, please.*
## Slide 2 (Research Questions)
Along with the dataset we found a paper from 2012 introducing a model that predicts the most probable aspect of a given sentence. So we look at a sentence of a review and predict, for example, that it's about beer aroma.
As a side effect, the model produces word clouds, which were quite common back in the 2010s. The trained model weights can also be used to predict the values of the aspect ratings.
So we enthusiastically thought of implementing and assessing it.
*Next slide, please.*
## Slide 3 (Tools and Techniques)
On the left you can see the paper I just mentioned.
It learns sentence-aspect assignment by unsupervised gradient descent on the negative log-likelihood.
Then we can use the trained model weights for other tasks too.
We thought we could do something more as a bonus.
That's why we also mapped our reviews to word or sentence embeddings and then used K-nearest neighbours to predict stars for aroma, smell, overall rating and so on.
We used PyTorch for easy gradients and textual data representation, and the spaCy Natural Language Processing toolkit for tokenisation and sentence splitting.
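A minimal sketch of the spaCy step, assuming the small English pipeline `en_core_web_sm` is installed (the review text is made up):

```python
import spacy

# Assumes `python -m spacy download en_core_web_sm` has been run.
nlp = spacy.load("en_core_web_sm")

review_text = "Pours a hazy amber. The aroma is full of citrus and pine. Taste follows the nose."
doc = nlp(review_text)

# Split the review into sentences, then into lower-cased word tokens.
sentences = [[token.text.lower() for token in sent if not token.is_punct]
             for sent in doc.sents]
print(sentences)
```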
We played with Facebook's FAISS library to index 1M of our encoded reviews for fast search.
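A rough sketch of that indexing plus the kNN rating idea. The embeddings and ratings below are random placeholders (and far fewer than our 1M reviews); in our case the vectors came from SBERT or fastText, and the dimensionality is just an assumption:

```python
import numpy as np
import faiss

d = 384                                      # embedding dimensionality (assumption)
n = 100_000                                  # placeholder corpus size
embeddings = np.random.rand(n, d).astype("float32")
ratings = np.random.randint(1, 11, size=n)   # e.g. aroma on a 1-10 scale

index = faiss.IndexFlatL2(d)                 # exact L2 search; IVF indexes scale better
index.add(embeddings)

query = np.random.rand(1, d).astype("float32")
distances, neighbour_ids = index.search(query, 15)

# Simple kNN regressor: average the neighbours' ratings.
print(ratings[neighbour_ids[0]].mean())
```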
*Next slide, please.*
## Slide 4 (NLL Minimization)
Let's dive deeper into how our aspect assignment model is trained.
We associate every word with a set of weights.
Two types of weights are introduced.
Thetas describe which aspect a word is about.
Phis measure how positive the sentiment of a word is with respect to a given aspect.
Using these weights we can calculate numerical probability scores for all sentences we have.
Then we softmax them to pick the most probable aspect for a given sentence.
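A minimal sketch of that scoring step (simplified relative to the paper's exact formulation; the vocabulary and sentence are made up): sum the theta vectors of the words in a sentence and softmax over aspects.

```python
import torch

aspects = ["appearance", "aroma", "palate", "taste", "overall"]
vocab = {"golden": 0, "citrus": 1, "bitter": 2, "appearance": 3}

# theta[word, aspect]: how strongly a word indicates each aspect (learned).
theta = torch.randn(len(vocab), len(aspects), requires_grad=True)

sentence = ["golden", "appearance"]
word_ids = torch.tensor([vocab[w] for w in sentence])

scores = theta[word_ids].sum(dim=0)      # one score per aspect
probs = torch.softmax(scores, dim=0)     # distribution over aspects for this sentence
print(aspects[int(probs.argmax())])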
*Next slide, please.*
## Slide 5 (How Can This Thing Even Be Unsupervised?)
We have no data about real aspect assignment for any sentence.
So how can we know which aspects are described by sentences?
If a sentence mentions 'appearance', then it's about 'appearance'. That's why, for each aspect, we initialise the weights of words that literally name that aspect to one, and then proceed with training.
At first the model will promote words co-occurring with the word 'appearance' as discussing 'appearance', for example.
And it's trained for some time with the objective of minimizing the negative log-likelihood of the observed aspect assignments.
So we require the model to be more certain about its aspect predictions.
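A hedged sketch of that unsupervised setup, simplified relative to the paper: seed the thetas of words that name an aspect, then repeatedly minimise the negative log-likelihood of the model's own hard assignments (the toy vocabulary and sentences are made up).

```python
import torch

aspects = ["appearance", "aroma", "palate", "taste", "overall"]
vocab = {"appearance": 0, "aroma": 1, "golden": 2, "citrus": 3}

theta = torch.zeros(len(vocab), len(aspects))
for word, w_id in vocab.items():
    if word in aspects:                        # seed: 'appearance' -> aspect 'appearance'
        theta[w_id, aspects.index(word)] = 1.0
theta.requires_grad_(True)

optimizer = torch.optim.SGD([theta], lr=0.1)
sentences = [["golden", "appearance"], ["citrus", "aroma"]]

for _ in range(100):
    optimizer.zero_grad()
    loss = torch.tensor(0.0)
    for sent in sentences:
        ids = torch.tensor([vocab[w] for w in sent])
        log_probs = torch.log_softmax(theta[ids].sum(dim=0), dim=0)
        assigned = log_probs.argmax()          # current most probable aspect
        loss = loss - log_probs[assigned]      # NLL of that assignment
    loss.backward()
    optimizer.step()
```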
## Slide 5 (Cloud pictures)
*Next slide, please.*
<!-- ## Slide 6 (Actual Polish cloud)
Let's maybe skip this one for now, as we can possibly run out of time.
## Slide 7 (English cloud)
what the cloud is like, everyone can see -->
## Slide 8 (Lessons Learned)
ON ASPECT RATING:
The first approach was to examine the rating distribution and correlations, which you can see in these graphs. It turned out that people are pretty predictable. So we decided to exploit this information and create a model that focuses on the probability of a given score. Our accuracy was 35% and our MSE was 1.4, which was pretty good for a naive model but rather unsatisfying, as the predictions were all similar to each other.
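A hedged sketch of that naive baseline, assuming it boils down to always predicting the most frequent rating from the training distribution; the data here is random and only illustrates the accuracy/MSE computation.

```python
import numpy as np

train_ratings = np.random.randint(1, 11, size=10_000)   # e.g. aroma on a 1-10 scale
test_ratings = np.random.randint(1, 11, size=2_000)

# Pick the most frequent rating seen in training.
values, counts = np.unique(train_ratings, return_counts=True)
most_frequent = values[counts.argmax()]

predictions = np.full_like(test_ratings, most_frequent)
accuracy = (predictions == test_ratings).mean()
mse = ((predictions - test_ratings) ** 2).mean()
print(f"accuracy={accuracy:.2f}, MSE={mse:.2f}")
```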
We also used the phis from the earlier aspect model, which describe word sentiment. That helps to add some noise to the predictions, yet the results were still unsatisfying.
So we decided to try a different approach and use word embeddings.
KNN: a big k removes extreme ratings; accuracy 45% and MSE 1.3.
Still pretty unsatisfying, but we agreed to end it here.
After a month and a half we reached our first milestone, a representative word cloud. It was pretty satisfying to look at, but at the same time really depressing, as it was at this moment that we realised it was already January. The clock was ticking and we had covered only one of the research questions. So after this period of struggle we decided to take it more seriously and come up with a model for predicting aspect ratings. We had some initial problems with understanding and recreating the paper, and even wondered whether it makes any sense.
We tried different methods and came up with our own modified model in the end. It was a simple idea based on the fact that we can exploit the rating distribution and use it to infer ratings for reviews outside of our dataset.
It proved really effective, yet again, depressing. Because extreme ratings were rarely seen, our model wasn't really able to predict such outcomes. It couldn't handle some abysmal beer or the best IPA you've ever seen.
So we tried a different approach: word embeddings. I personally don't know much about those models except the fact that they do work. We focused on SBERT and fastText with KNN and tried them out. The results were shocking (positively). SBERT was able to come up with a rating for a review with 46% accuracy (remember that there are 10 * 20 * 10 * 5 * 5 = 50000 possible rating combinations). After closer examination it turned out that, unfortunately, it has the same weakness: it can't predict extreme ratings (probably because we use k bigger than 15 in KNN). Nevertheless, the results were better and we were close to the deadline.
So to sum up:
We succeeded in providing an answer to every research question that we posed, we are satisfied with the models, and we are proud of the progress that we've made. Our results are easily interpretable and explainable. We created some fancy graphs.
Still, there are many paths we could take to improve our work.
Weight initialisation plays a vital role in training.
*Next slide, please.*
## Slide 9 (Further work possible)
* Datasets from other domains (wines, toys and games, ebooks, pubs, ...)
* Use LDA (Latent Dirichlet Allocation) as a baseline
* Explore SVM extension paper
* Experiment with meta parameters, in particular weight decay (penalize high weights to regularize; see the sketch after this list)
* Get rid of PyTorch and find derivatives analytically, which shouldn't be hard
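A tiny sketch of the weight-decay experiment mentioned above: the same kind of SGD optimiser as before, with an L2 penalty on the theta/phi weights via PyTorch's built-in `weight_decay` argument (the value 1e-4 is just an example, and the tensor shapes are placeholders).

```python
import torch

theta = torch.randn(1000, 5, requires_grad=True)  # per-word aspect weights
phi = torch.randn(1000, 5, requires_grad=True)    # per-word sentiment weights

# weight_decay adds an L2 penalty on the weights at every update step.
optimizer = torch.optim.SGD([theta, phi], lr=0.1, weight_decay=1e-4)
```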
## Slide n
As we ran out of time, we encourage everybody to take a look at the word clouds in our GitHub repo.
## Backup 1
## Backup 2
Both our segmentation and summarization tasks can be expressed as a weighted bipartite graph cover.
Each of the sentences on the left (from a BeerAdvocate review) must be matched to an aspect.
The optimal cover is highlighted using bold edges. For the segmentation task (left graph), five nodes are constrained to match to each of the five aspects, ensuring that each aspect appears at least once in the segmentation (the remaining two unconstrained aspects are both ‘smell’ in this case).
The summarization task (right graph) includes precisely one node for each aspect, so that each aspect is summarized using the sentence that most closely aligns with that aspect's rating.
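A hedged sketch of the summarization task as bipartite matching: pick, for each aspect, one distinct sentence whose predicted rating best aligns with that aspect's rating. This uses SciPy's Hungarian-algorithm solver rather than the paper's own solver, and the cost matrix here is random, standing in for the real alignment costs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

aspects = ["appearance", "smell", "palate", "taste", "overall"]
n_sentences = 7

# cost[sentence, aspect]: how badly the sentence fits the aspect's rating.
cost = np.random.rand(n_sentences, len(aspects))

# With more sentences than aspects, each aspect gets exactly one distinct sentence.
sentence_ids, aspect_ids = linear_sum_assignment(cost)
for s, a in zip(sentence_ids, aspect_ids):
    print(f"aspect '{aspects[a]}' summarized by sentence {s}")
```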
*Cheers* :) :beers: