@def title = "FastAI.jl Tabular"
<!-- @def reeval = true -->
# Working with Tabular Data using [FastAI.jl](https://www.github.com/FluxML/FastAI.jl) [GSoC]
\toc
## Introduction
This summer, I worked on FastAI.jl as part of Google Summer of Code 2021 under The Julia Language. My work involved adding tabular data support to the package.
If you are unfamiliar with FastAI.jl, it is a package inspired by [`fastai`](https://github.com/fastai/fastai)[^1] and is a repository of best practices for deep learning in Julia. It offers a layered API: the higher-level functionality lets you perform various learning tasks in around five lines of code, while the middle- and lower-level APIs can be mixed and matched to create new approaches.
The GSOC project page can be found [here](https://summerofcode.withgoogle.com/projects/#5088642453733376).
## Implemented Functionalities
We'll go through an end-to-end task to understand the added functionalities. Here, we will be working with the `adult` dataset and performing tabular classification on the `salary` column, predicting whether it is `<50k` or `>=50k`.
### Containers
[Link to PR](https://github.com/FluxML/FastAI.jl/pull/26)
The first thing I worked on was adding an index-based data container suitable for tabular data, which follows the interface defined by [MLDataPattern.jl](https://github.com/JuliaML/MLDataPattern.jl).
`TableDataset` accepts any type satisfying the [Tables.jl](https://github.com/JuliaData/Tables.jl) interface and allows querying for any row using `getobs` and the total number of observations using `nobs`.
On top of this, the container lets you lazily apply arbitrary functions to the observations, split containers, and create a `DataLoader` suitable for training, among other things.
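As a quick sketch, wrapping an in-memory `DataFrame` (a hypothetical toy table) works just as well as loading a table from disk:
```julia
using FastAI, DataFrames

df = DataFrame(a = [1, 2, 3], b = ["x", "y", "z"])
tbl = TableDataset(df)  # wrap any Tables.jl-compatible table
getobs(tbl, 2)          # the second row
nobs(tbl)               # 3
```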
```julia:container_dummy
#hideall
using FastAI
ENV["DATADEPS_ALWAYS_ACCEPT"] = "true"
data = TableDataset(joinpath(datasetpath("adult_sample"), "adult.csv"));
```
```julia:container
using FastAI
data = TableDataset(joinpath(datasetpath("adult_sample"), "adult.csv"));
getobs(data, 1), nobs(data)
```
\show{container}
We'll now split off our target column from the data container using `mapobs`.
```julia:split
splitdata = mapobs(row -> (row, row[:salary]), data)
```
### Transformations
[Link to PR](https://github.com/lorenzoh/DataAugmentation.jl/pull/45)
Once we have our container, we'll pre-process it to make it usable for training, using the tabular transformations added to DataAugmentation.jl:
* **Normalization** - Normalizes a row of data using the column means and standard deviations.
* **Fill Missing** - Fills any `missing` values in the row.
* **Categorify** - Label encodes the categorical columns.
As these transformations are applied to individual rows (or `TabularItem`s), we collect all the needed dataset statistics beforehand. This is done by creating an indexable collection (such as a `Dict`) with the columns as keys and the required statistics as values. For example, the normalization dictionary maps each continuous column to a tuple of its mean and standard deviation.
There are various helper methods which make this process very easy if a `TableDataset` has already been created, but it is still possible to prepare everything manually and pass in the required statistics for maximum flexibility.
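For instance, a hand-built normalization dictionary could look like the following (column names from the `adult` dataset, statistics hypothetical):
```julia
# Map each continuous column to a (mean, standard deviation) tuple
normdict_manual = Dict(
    :age    => (38.6, 13.7),
    :fnlwgt => (1.9e5, 1.05e5),
)
```
Using the helpers instead: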
```julia:transforms
using DataAugmentation
using FastAI # hide
catcols, contcols = FastAI.getcoltypes(data)
normdict = FastAI.gettransformdict(data, DataAugmentation.NormalizeRow, contcols)
```
\show{transforms}
Now we can create the transform by passing in the constructed collections.
```julia:norm
normalize = DataAugmentation.NormalizeRow(normdict, contcols)
```
\show{norm}
To use the transformation, we wrap a row of data in a `TabularItem` and then call `apply` on it.
```julia:item
using Tables
item = TabularItem(getobs(data, 1), Tables.columnnames(data.table))
```
\show{item}
```julia:applytrans
apply(normalize, item).data
```
\show{applytrans}
The other transforms work similarly, and multiple transforms can be composed into a single pipeline.
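For instance, here is a sketch of composing all three transforms, assuming `FillMissing` and `Categorify` follow the same `(statistics, columns)` constructor pattern as `NormalizeRow` (DataAugmentation.jl transforms compose with `|>`):
```julia
fmdict  = FastAI.gettransformdict(data, DataAugmentation.FillMissing, contcols)
catdict = FastAI.gettransformdict(data, DataAugmentation.Categorify, catcols)

# fill missing values, then normalize, then label-encode
tfms = DataAugmentation.FillMissing(fmdict, contcols) |>
       normalize |>
       DataAugmentation.Categorify(catdict, catcols)
apply(tfms, item).data
```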
### Model
[Link to Model PR](https://github.com/FluxML/FastAI.jl/pull/124)\\
[Link to Embedding PR](https://github.com/FluxML/Flux.jl/pull/1656)
We created a deep learning model suitable for tabular data by re-engineering the model present in `fastai`[^2].
The backbone of the model is structured with a categorical and a continuous component.
It accepts a 2-tuple of categorical (label- or one-hot-encoded) and continuous values, with each backbone applied to the corresponding element of the tuple.
The outputs of these backbones are then concatenated and passed through a series of linear-batchnorm-dropout layers before a `finalclassifier` block, whose size can be task-dependent.
The categorical backbone consists of an embedding layer for each categorical column and makes use of the concept of entity embeddings[^3]. Instead of just one-hot encoding the categorical columns, representing their values in an "embedding space" allows similar values to lie closer to each other. This can reveal the intrinsic properties of the categorical variables and even reduce memory usage.
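As a sketch of the idea (assuming Flux's `Embedding` layer from the PR linked above), a categorical column with, say, 10 classes can be embedded into a 4-dimensional space:
```julia
using Flux

emb = Flux.Embedding(10, 4)  # 10 classes, 4-dimensional embedding vectors
emb(3)                       # dense 4-element vector for class 3
```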
There are two main methods for creating a `TabularModel`:
* by passing required metadata
* by passing custom backbones
The simplest way of constructing a `TabularModel` is the first method, which involves passing the number of continuous columns, the output size, and a collection of cardinalities (numbers of unique classes) for the categorical columns.
```julia:simp_model
num_cont = length(contcols)
outsize = 2
catdict = FastAI.gettransformdict(data, DataAugmentation.Categorify, catcols)
cardinalities = [length(catdict[col]) for col in catcols]
FastAI.TabularModel(num_cont, outsize; cardinalities=cardinalities)
```
\show{simp_model}
When using the second method for creating the model, we'll have to define our categorical backbone, continuous backbone, and an optional `finalclassifier` layer.
To create a categorical backbone with entity embeddings, we'll have to decide on the embedding dimensions. By default, these are calculated using `fastai`'s rule of thumb (see the `emb_sz_rule` function), but individual sizes can be overridden easily using `size_overrides`. We can even pass in an arbitrary vector of embedding sizes if we prefer not to use `get_emb_sz`.
Check out the docs for more information about the different methods available for `get_emb_sz`.
```julia:embedsize
embedszs = FastAI.Models.get_emb_sz(cardinalities, catcols, Dict(:workclass => 20))
```
\show{embedsize}
```julia:backs
contback = FastAI.Models.tabular_continuous_backbone(num_cont)     # num_cont == 6 continuous columns
catback = FastAI.Models.tabular_embedding_backbone(embedszs, 0.2)  # embeddings with a dropout rate of 0.2
```
\show{backs}
And now we can pass these to `TabularModel`, along with a number of other optional keyword arguments, to get our model.
```julia:comp_model
FastAI.TabularModel(
    catback,
    contback,
    Chain(Dense(100, 2), x -> FastAI.Models.sigmoidrange(x, 2, 5));
    layersizes=(200, 100, 100),
    dropout_rates=[0.1, 0.2, 0.1],
    activation=Flux.sigmoid
)
```
\show{comp_model}
The `layersizes` and `size_overrides` keyword arguments are also available in the first method if needed.
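For example, a sketch reusing the names from earlier (assuming these keywords behave the same as in the second method):
```julia
FastAI.TabularModel(num_cont, outsize;
                    cardinalities=cardinalities,
                    layersizes=(200, 100))
```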
### Learning Methods
[Link to PR](https://github.com/FluxML/FastAI.jl/pull/141)
A learning method can be thought of as a concrete approach for solving a "learning task" (e.g. tabular classification, tabular regression, etc.)[^4].
To implement a learning method in FastAI.jl, we have two main options:
* Implement a learning method from scratch satisfying the DLPipelines.jl interface.
* Use the DataBlock API to create a `BlockMethod`, or use an implemented high-level wrapper.
Using the DataBlock API should be sufficient for most learning tasks.
The DataBlock API lets us put together a learning method using "data blocks" which represent the type of data. For example, a `TableRow` data block carries all the information about the row, like which columns are categorical and continuous, and what classes a particular categorical column can have. In addition to `TableRow`, a `Continuous` block was also added which can help in regression tasks.
One important thing to keep in mind is that these data blocks don't actually carry the data; they just contain metadata about the underlying data.
In addition to specifying the data blocks, we can also choose to apply any processing step which might be required to make the data suitable for training. For example `TabularPreprocessing` allows us to use the transformations specified [previously](#transformations) to pre-process the rows.
Since our target column is a categorical value and we want to perform classification, we'll use the `Label` block to represent the target.
```julia:classblock
method = BlockMethod(
    (
        TableRow(catcols, contcols, catdict),
        Label(unique(data.table[:, :salary]))
    ),
    (FastAI.TabularPreprocessing(data), FastAI.OneHot())
)
```
\show{classblock}
The DataBlock API is very flexible and we can put together an arbitrary number of blocks for any kind of learning task.
Creating a learning method for regression tasks is as easy as substituting the target block with a `Continuous` block where the size would represent the number of target columns.
```julia:contblock
method2 = BlockMethod(
    (
        TableRow(catcols, contcols, catdict),
        Continuous(3)
    ),
    (FastAI.TabularPreprocessing(data),)
)
```
\show{contblock}
If our learning task is either tabular classification or regression, we can even use the high-level wrappers defined for these tasks to get the `method` easily.
```julia:tabclas
method = TabularClassificationSingle(
    catcols,
    contcols,
    unique(data.table[:, :salary]);
    data
)
```
\show{tabclas}
With this method, it is possible for us to encode our data, get a model and loss function suitable for the data, and even directly create a `Learner` which can be used for training.
To get a quick summary of the steps which will be performed while encoding, we can use the `describemethod` function.
```julia:describe
describemethod(method)
```
\show{describe}
Let's use this method to encode a row of data.
The input here is a tuple of all row values and the target value.
```julia:getrow
row = getobs(splitdata, 1000)
```
\show{getrow}
On encoding, we get back a tuple where the input values have been normalized, missing values filled with the column median, and categorical values label encoded. The output value has been one-hot encoded.
```julia:encode
encode(method, Training(), row)
```
\show{encode}
To get a model suitable for this learning method, we can use the `methodmodel` function.
```julia:defmod
methodmodel(method, NamedTuple())
```
\show{defmod}
Here the second parameter is a `NamedTuple` of backbones. The keys can be
* `:categorical`
* `:continuous`
* `:finalclassifier`
corresponding to the specific backbones, and we can choose to specify any combination of these to customize our model.
```julia:catmod
methodmodel(method, (categorical=catback,))
```
\show{catmod}
To get iterators over our data, we can use the `methoddataloaders` function
```julia:datalod
traindl, valdl = methoddataloaders(
    splitdata, method, 128;
    pctgval=0.2, shuffle=true, buffered=false
)
```
\show{datalod}
and for getting a suitable loss function, `methodlossfn` comes in handy.
```julia:loss
methodlossfn(method)
```
\show{loss}
All of these steps can be customized according to requirements and can be put together to create a `Learner` for training.
The `methodlearner` function abstracts these functions to directly get a `Learner`.
```julia:learn
learner = methodlearner(
    method, splitdata, (categorical=catback,), Metrics(accuracy);
    batchsize=128, dlkwargs=(buffered=false,)
)
```
\show{learn}
Now, we can use this for training our model by calling the `fit!` function (provided by `FluxTraining`) or `fitonecycle!` on it.
```julia:fit
fit!(learner, 1)
```
\show{fit}
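`fitonecycle!` works the same way but trains with a one-cycle learning-rate schedule; the epoch count and maximum learning rate below are arbitrary choices:
```julia
fitonecycle!(learner, 5, 0.01)  # 5 epochs, max learning rate of 0.01
```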
## Summary of PRs
| Link to PR/commit | Description | Status |
|-------------------|-------------|--------|
| [FastAI.jl #26](https://github.com/FluxML/FastAI.jl/pull/26) | Adds TableContainer | Merged |
| [DataAugmentation.jl #45](https://github.com/lorenzoh/DataAugmentation.jl/pull/45) | Adds table transforms and item | Merged |
| [FastAI.jl #124](https://github.com/FluxML/FastAI.jl/pull/124) | Adds table model | Merged |
| [Flux.jl #1656](https://github.com/FluxML/Flux.jl/pull/1656) | Updates `Embedding` layer, adds `Flux.outputsize` support for `Embedding` layer, and doc updates | Waiting for a decision on the best way to handle this case for `Flux.outputsize` |
| [FastAI.jl #141](https://github.com/FluxML/FastAI.jl/pull/141) | Adds tabular blocks and encodings, fixes some bugs, adds hash for `adult_sample` dataset. | Merged |
| [fluxml.github.io #94](https://github.com/FluxML/fluxml.github.io/pull/94) | Blog post showing some of the implemented functionalities.| Will be moved to the new FastAI.jl website (under construction) |
## Future Work
More comprehensive documentation and notebooks can be added to demonstrate the various features. Out-of-the-box support for additional learning methods, like multi-column classification or a combination of regression and classification tasks, would also be nice to have. Implementing complex tabular models like SAINT[^5] will further improve the capabilities of the package.
## Acknowledgement
None of this would have been possible without the continuous support and teachings of my mentors Kyle Daruwalla, Brian Chen, and Lorenz Ohly. Going from knowing nothing about the Julia language to completing this project shows just how much effort they put into helping me through every step.
The whole community has been really helpful as well, with their constant support and suggestions. From the informative references Ari provided about what's happening in the ecosystem, to the critiques and suggestions from Dhairya, and the helpful reviews from Michael, Logan, and the rest of the community, it has been a fun and great learning experience.
I would also like to thank Google and the whole Summer of Code team for creating a really wonderful program and providing us with this opportunity.
## References
[^1]: [fastai: A Layered API for Deep Learning](https://arxiv.org/abs/2002.04688)
[^2]: [fastai's TabularModel](https://docs.fast.ai/tabular.model.html#TabularModel)
[^3]: [Entity Embeddings of Categorical Variables](https://arxiv.org/abs/1604.06737)
[^4]: [From the DLPipelines.jl docs](https://lorenzoh.github.io/DLPipelines.jl/dev/docstrings/DLPipelines.LearningMethod.html)
[^5]: [SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training](https://arxiv.org/abs/2106.01342)