<h1 style="text-align: center"> Phase 2 Challenge</h1>
<div style="text-align: center">Nguyen Truong Phat, Nguyen Trong Tung</div>
<div style="text-align: center">March 2020</div>
# 1. Define Problem
Invoice and receipt images store a massive amount of valuable information that is useful in many areas, ranging from finance and accounting to taxation. It is therefore a crucial task to extract this information automatically with the help of advanced machine intelligence.
This problem is challenging, however, because most invoices and receipts are unstructured data that do not follow a pre-defined data model.
In this phase of Cinnamon's bootcamp, we implement and deploy a system composed of three main modules to solve the above problem. Our tasks are to:
- classify the text inside the bill image into one of these 5 categories:
    - company
    - address
    - date
    - price
    - others
- integrate an explainer into the system, to increase its reliability, which helps the customer visualize the evidence that led our system to its final decision
# 2. Pipeline
Concretely, text locations, defined by bounding boxes, are extracted by our first component, text line detection. The text inside each bounding box is then recognized by the second module, optical character recognition (OCR). With the help of the third module, key-value matching, a final decision is made by classifying each bounding-box text inside the bill into its correct label. Finally, the explainer helps the user visualize the final results and explains them by tracing back to the text regions inside the bill that led the system to the predicted label.
Below is our system pipeline:
![](https://i.imgur.com/w3SsnO7.png)
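To make the interfaces between modules concrete, here is a minimal sketch of how the pipeline could be wired together. The class and method names (`detector.detect`, `ocr.recognize`, `matcher.classify`, `explainer.explain`) are hypothetical placeholders for illustration, not our exact implementation.

```python
# Hypothetical module interfaces illustrating the data flow of the pipeline.
# All class and method names below are placeholders, not the real implementation.

def run_pipeline(image, detector, ocr, matcher, explainer, node_to_explain=None):
    # 1. Text line detection: image -> list of bounding boxes (x1, y1, x2, y2)
    boxes = detector.detect(image)

    # 2. OCR: crop each box and recognize its text
    texts = [ocr.recognize(image, box) for box in boxes]

    # 3. Key-value matching: classify every (box, text) pair into
    #    {company, address, date, price, others}
    labels = matcher.classify(boxes, texts)

    # 4. Optional explanation: subgraph of text lines that influenced
    #    the prediction for one chosen node
    explanation = None
    if node_to_explain is not None:
        explanation = explainer.explain(boxes, texts, node_to_explain)

    return list(zip(boxes, texts, labels)), explanation
```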
Detailed information about each module is discussed in the following sections.
## 2.1. Text line detection
Text line detection involves finding and locating bounding boxes that cover whole text lines. Normally, pieces of text that lie on the same line and are not separated by too many spaces are grouped into one bounding box in the bill.
![img](https://i.imgur.com/36tbwa8.png =300x)
### 2.1.1. Define problem
The problem is defined as follows:
![](https://i.imgur.com/sZvarMI.png)
### 2.1.2. Architecture
To tackle the text line detection problem in the receipt and invoice domain, the learning model we plug in must be able to handle multi-scale and multi-language text. Moreover, since the output of this module is used as input for OCR in the next phase, any post-processing steps in the integrated learning model should be avoided to reduce latency.
In this module, we utilize CTPN (Connectionist Text Proposal Network) because it satisfies all of the criteria mentioned above.
CTPN consists of three main components:
- Fine-scale text proposals are extracted by sliding a dense 3x3 spatial window over the last feature maps of a VGG16 backbone. CTPN focuses on predicting the vertical coordinates of each proposal rather than the horizontal ones (the text pieces have a fixed 16-pixel width), which would be more difficult to predict.
- A sequential learning model, specifically a Bi-LSTM, exploits the rich dependencies of the textual domain. The sequence of W features (W being the width of the last VGG feature maps) produced by the convolutional sliding window is fed through a recurrent network to learn sequential information.
- A side-refinement process is learned directly inside the model instead of being performed as an additional post-processing step.
CTPN is trained in an end-to-end manner with a loss function constructed by summing three smaller losses for multi-task learning: text/non-text classification, coordinate regression, and side-refinement error. The CTPN architecture is summarized in the pipeline below, proposed by the authors:
![](https://i.imgur.com/0T8erCB.png)
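As an illustration of the multi-task loss described above, here is a minimal PyTorch sketch of the combined objective. The tensor shapes, masking scheme, and lambda weights are assumptions for this sketch, not necessarily the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def ctpn_loss(cls_logits, cls_targets,
              v_pred, v_target, v_mask,
              o_pred, o_target, o_mask,
              lambda_v=1.0, lambda_o=2.0):
    """Sum of the three CTPN losses: text/non-text classification,
    vertical coordinate regression, and side-refinement regression.
    Assumed shapes: cls_logits (N, 2); cls_targets (N,); v_* (N, 2); o_* (N,);
    the boolean masks select the anchors contributing to each regression term."""
    loss_cls = F.cross_entropy(cls_logits, cls_targets)          # text / non-text
    loss_v = F.smooth_l1_loss(v_pred[v_mask], v_target[v_mask])  # y-center, height
    loss_o = F.smooth_l1_loss(o_pred[o_mask], o_target[o_mask])  # side offset
    return loss_cls + lambda_v * loss_v + lambda_o * loss_o
```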
### 2.1.3. Evaluation
Because our Flax system is evaluated as a full end-to-end flow, we record the metric only at the last module, key-value matching.
### 2.1.4. Demo
After being processed by the text line detection module, a bill image yields results such as the following:
![](https://i.imgur.com/ZlOohr3.png)
## 2.2. Optical Character Recognition
### 2.2.1. Define problem
Optical character recognition (OCR) involves recognizing the text located inside pre-defined bounding boxes. This problem can also be extended to a higher, more general problem called image-based sequence recognition.
Image-based sequence recognition tackles scene text recognition by combining the advantages of both advanced visual and textual techniques. Under this view, an image contains not only spatial information but also sequential information, where the occurrence of earlier and later regions inside the image can affect the occurrence of the current region.
Many methods have been proposed for image-based sequence recognition, differing in their preprocessing phases or choices of hand-crafted learning features. In our system:
- We use a convolutional recurrent neural network (CRNN) thanks to its ability to handle images of varying dimensions, produce predictions of different lengths, and achieve high performance in terms of both effectiveness and efficiency.
- Image and text are unified in the CRNN and trained in an end-to-end manner instead of being trained and tuned separately.
### 2.2.2. Architecture
As mentioned above, we employ the CRNN architecture for the OCR module in our system.
A CRNN consists of three main layers:
- a convolutional layer for visual feature extraction,
- a recurrent neural network for character generation (utilizing LSTMs to handle long-term dependencies),
- a transcription layer to refine the probability of the sequence prediction.
CRNN also uses CTC as its loss function to handle the varying lengths of the text inside images.
<center>
<img src="https://i.imgur.com/D5PSxyJ.png">
Figure x. The architecture of CRNN
</center>
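To make the three layers concrete, here is a minimal PyTorch sketch of a CRNN trained with CTC. The layer sizes are illustrative assumptions and much smaller than the VGG-style backbone used in the original paper.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal CRNN sketch: CNN features -> BiLSTM -> per-frame class logits."""
    def __init__(self, num_classes, img_height=32, hidden=256):
        super().__init__()
        # Two conv blocks; each max-pool halves the spatial resolution.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        feat_h = img_height // 4
        self.rnn = nn.LSTM(128 * feat_h, hidden, bidirectional=True,
                           batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)  # num_classes includes blank

    def forward(self, x):              # x: (B, 1, H, W) grayscale text-line crop
        f = self.cnn(x)                # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # one feature per column
        out, _ = self.rnn(f)           # (B, W/4, 2*hidden)
        return self.fc(out)            # per-timestep logits for CTC

# CTC training step (shapes only; targets are sequences of label indices):
# logits = model(images).log_softmax(2).permute(1, 0, 2)  # (T, B, C)
# loss = nn.CTCLoss(blank=0)(logits, targets, input_lengths, target_lengths)
```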
### 2.2.3. Evaluation
Because our Flax system is evaluated as a full end-to-end flow, we record the metric only at the last module, key-value matching.
### 2.2.4. Demo
After a bill image passes through the text line detection module to produce text bounding boxes, the OCR module scans each box to produce recognized text. Below are the results of the OCR module:
Bill image: ![](https://i.imgur.com/lBJKGe3.png)
Results: ![](https://i.imgur.com/wTNnaWz.png)
## 2.3. Key-value matching
### 2.3.1. Define problem
**Input**: The OCR module produces a file containing the text recognized inside each bounding box, in the following format:
![](https://i.imgur.com/GItzlWM.png)
**Output**: Key-value matching receives this file as input and classifies the label of each bounding box inside the receipt, based on the textual information and the geometric locations of all bounding boxes in the image. The labels considered in this problem are company, price, date, and address.
### 2.3.2. Architecture
To tackle this problem, we use a Graph Convolutional Network (GCN) to classify each text line of a given receipt image; we use the one introduced in [1]. A great advantage of the GCN is that it is very lightweight for node classification.
A graph is represented by two components: the adjacency tensor $A$, which indicates the connections between nodes, and the feature matrix $V$, which contains the feature vector of each node in the graph.
The adjacency tensor consists of $L$ adjacency matrices, each one a slice along the third dimension of the tensor. Each adjacency matrix represents the connections between nodes in a particular context (e.g. above, left of), as sketched below.
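As an illustration, here is a hedged sketch of how such an adjacency tensor might be built from text-line bounding boxes, with two relation slices (left of, above). The relation set and overlap tests are our assumptions for this sketch, not necessarily the exact construction used in [1].

```python
import numpy as np

def build_adjacency(boxes):
    """Build an adjacency tensor A of shape (L, N, N) for N text-line boxes
    given as (x1, y1, x2, y2). Two illustrative relation slices are used:
    'left of' (slice 0) and 'above' (slice 1)."""
    n = len(boxes)
    A = np.zeros((2, n, n), dtype=np.float32)
    for i, (x1a, y1a, x2a, y2a) in enumerate(boxes):
        for j, (x1b, y1b, x2b, y2b) in enumerate(boxes):
            if i == j:
                continue
            # Slice 0: box i is left of box j on roughly the same line
            if x2a <= x1b and min(y2a, y2b) > max(y1a, y1b):
                A[0, i, j] = 1.0
            # Slice 1: box i is above box j with some horizontal overlap
            if y2a <= y1b and min(x2a, x2b) > max(x1a, x1b):
                A[1, i, j] = 1.0
    return A
```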
A filter can be constructed by taking the sum of all adjacency matrices, with a corresponding weight for each one:
$$H^{(c)} = \sum _ {l=1}^{L}h_l^{(c)}A_l$$
To pass an input $V_{in}$ through the filter, we perform a matrix multiplication and sum over every $c$-th component of the feature vectors as follows:
$$V_{out} = \sum_{c=1}^CH^{(c)}V_{in}^{(c)} + b $$
We also adopt the linear embedding operation to perform a linear transformation within each node:
$$V_{out} = V_{in} \theta_W + \theta_b $$
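Below is a minimal NumPy sketch of these three equations, assuming a single output filter; the variable names mirror the symbols above.

```python
import numpy as np

def gcn_filter_forward(A, V_in, h, b):
    """One spatial graph-convolution filter, following the equations above.
    A:    (L, N, N) adjacency tensor
    V_in: (N, C)    input node features
    h:    (C, L)    mixing weights h_l^(c), one row per input channel
    b:    scalar (or (N,)) bias
    Returns V_out of shape (N,) for this single output filter."""
    L, N, _ = A.shape
    C = V_in.shape[1]
    V_out = np.zeros(N, dtype=A.dtype)
    for c in range(C):
        # H^(c) = sum_l h_l^(c) A_l  -> (N, N)
        H_c = np.tensordot(h[c], A, axes=(0, 0))
        # accumulate H^(c) V_in^(c)
        V_out += H_c @ V_in[:, c]
    return V_out + b

def linear_embedding(V_in, theta_W, theta_b):
    """Node-wise linear transformation: V_out = V_in @ theta_W + theta_b."""
    return V_in @ theta_W + theta_b
```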
### 2.3.3. Evaluation
For evaluation, we use the F1 score to measure the performance of our full-flow Flax system; the F1 score measured on the test set is **90%**.
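For reference, here is a toy sketch of how such a score could be computed with scikit-learn. The macro averaging is an assumption of this sketch; micro or per-class F1 are equally plausible reporting choices.

```python
from sklearn.metrics import f1_score

# One predicted label per text line; toy example values.
y_true = ["company", "address", "date", "price", "others", "others"]
y_pred = ["company", "address", "date", "others", "others", "others"]

# Macro-average the per-class F1 scores over the label set.
print(f1_score(y_true, y_pred, average="macro"))
```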
### 2.3.4. Demo
The examples below show how our KV module works:
![](https://i.imgur.com/nhUKlkY.png)
![](https://i.imgur.com/Z4v122k.png)
![](https://i.imgur.com/PSUiAdG.png)
## 2.4. Explaining Graph Neural Network
### 2.4.1. Define problem
We define the problem by its inputs and outputs:
- Input: a trained GCN, a sample, and a chosen node to explain
- Output: a subgraph indicating the nodes that are important for explaining the chosen node
### 2.4.2. Architecture
To tackle this problem, we use the GNN Explainer introduced in [2]. The key idea of the paper is a way to find an important subgraph, with a minimal number of nodes, that best explains a chosen node.
As in Section 2.3.2, a graph is represented by the adjacency tensor $A$ (the connections between nodes) and the feature matrix $V$ (a feature vector for each node).
Following [2], given a node $i$, our goal is to find a subgraph $G_S \subseteq G_C$ (where $G_C$ is the whole graph) that is important for the GNN's prediction $\hat{y}_{Ci}$ on the chosen node $i$.
We first construct a raw mask $M$ with the same shape as our adjacency tensor $A$. To constrain the mask to the range from 0 to 1, we apply the sigmoid function $\sigma$; the final mask then becomes $\sigma(M)$.
Our new adjacency tensor after masking is
$$A' = A\odot \sigma(M) $$
Therefore, the new prediction for the given node $i$ becomes:
$$\hat{y}_{Si} = \text{GNN}(A',V,i)$$
We are interested in finding:
- A subgraph that changes the output of the model as little as possible; we use a prediction loss based on cross-entropy to match the two probability distributions:
$$\mathcal{L}_{pred}(\hat{y}_{Ci},\hat{y}_{Si}) = -\sum \hat{y}_{Ci}\log(\hat{y}_{Si})$$
- A subgraph with a minimal number of nodes; we use a size loss:
$$\mathcal{L}_{size}(M) = \sum \sigma(M) $$
- A subgraph in which every node has a high certainty of contributing to the output; we use an entropy loss on the mask:
$$\mathcal{L}_{ent} (M) = -\sum \sigma(M) \log (\sigma(M)) $$
- We combine the three losses into a single-valued objective function:
$$\mathcal{L} = \alpha \, \mathcal{L}_{pred} + \beta \, \mathcal{L}_{size} + \gamma \, \mathcal{L}_{ent}$$
where $\alpha$, $\beta$, and $\gamma$ are hyperparameters that control the contribution of each loss to $\mathcal{L}$. Empirically, we found that $\alpha=1$, $\beta=0.0005$, $\gamma=1$ worked best for our problem.
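Putting the pieces together, here is a minimal PyTorch sketch of the mask optimization. The interface `gnn(A, V)` returning per-node class probabilities is an assumption of this sketch, not the exact API of our implementation.

```python
import torch

def explain_node(gnn, A, V, node_idx, steps=300, lr=0.01,
                 alpha=1.0, beta=0.0005, gamma=1.0):
    """Optimize a mask M so that A' = A * sigmoid(M) preserves the GNN's
    prediction on node_idx while keeping the subgraph small and near-binary."""
    with torch.no_grad():
        y_c = gnn(A, V)[node_idx]                # original prediction, held fixed

    M = torch.randn_like(A, requires_grad=True)  # raw mask, same shape as A
    optimizer = torch.optim.Adam([M], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        mask = torch.sigmoid(M)                  # constrain mask to (0, 1)
        y_s = gnn(A * mask, V)[node_idx]         # prediction on masked graph

        loss_pred = -(y_c * torch.log(y_s + 1e-12)).sum()    # L_pred
        loss_size = mask.sum()                               # L_size
        loss_ent = -(mask * torch.log(mask + 1e-12)).sum()   # L_ent

        loss = alpha * loss_pred + beta * loss_size + gamma * loss_ent
        loss.backward()
        optimizer.step()

    return torch.sigmoid(M).detach()             # final soft mask over edges
```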
### 2.4.3. Evaluation
Unfortunately, we have no ground truth for how the explainer should explain. However, our implemented explainer gives reasonable explanations for most chosen nodes.
### 2.4.4. Demo
![](https://i.imgur.com/mwI3fTZ.png)
![](https://i.imgur.com/YU0VkEs.png)
![](https://i.imgur.com/MBmUxZN.png)
### 2.4.5. Interactive Explainer
We have also built an interactive explainer in which you can choose a text line to explain simply by clicking on it. The notebook can be accessed at
![](https://i.imgur.com/mTFLpdv.png)
# 3. References
[1] [Felipe Petroski Such, Shagan Sah, et al. - Robust Spatial Filtering with Graph Convolutional Neural Networks](https://ieeexplore.ieee.org/document/7979525)
[2] [Rex Ying et al. - GNNExplainer: Generating Explanations for Graph Neural Networks](https://arxiv.org/abs/1903.03894)