# Newton for KimCNN
[toc]
## Members
Yun-Ang Wu, YuMeng Tang
## 6.8-6.14
### Loss Convergence (Tang)
##### Summary
We want to test the correctness of our optimizer. We use small portions of the SMS Spam dataset ($|D| = 10, 30, 50$) for training and observe the loss. If our optimizer is correct, the loss should converge to $0$ after some training (a minimal sketch of this check appears after the tables below).
##### Results
- The loss converges to $0$ after some epochs.
- $|D| = 10$:

- $|D| = 30$:

| epoch | loss |
| ----- | --------- |
| 0 | 0.0335 |
| 1 | 0.0180 |
| 2 | 0.0105 |
| 3 | 0.0065 |
| 4 | 0.0042 |
| 5 | 0.0028 |
| 6 | 0.0019 |
| 7 | 0.0013 |
| 8 | 0.0009 |
| 9 | 0.0007 |
- $|D| = 50$:

| epoch | loss |
| ----- | --------- |
| 0 | 0.1159 |
| 1 | 0.0628 |
| 2 | 0.0445 |
| 3 | 0.0359 |
| 4 | 0.0306 |
| 5 | 0.0264 |
| 6 | 0.0224 |
| 7 | 0.0179 |
| 8 | 0.0131 |
| 9 | 0.0082 |
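
As a reference, the sanity check is roughly the following loop (a minimal sketch; `model`, `optimizer`, `loss_fn`, and the small subset `x_small`, `y_small` are placeholders, and the optimizer is assumed to expose a standard `torch.optim`-style interface, which our Newton optimizer may not match exactly):
```python
import torch

def overfit_check(model, optimizer, loss_fn, x_small, y_small, epochs=10):
    """Train on a tiny subset and print the loss; it should approach 0."""
    for epoch in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x_small), y_small)
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss = {loss.item():.4f}")
```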
### Matlab jvp and vjp (Wu)
##### Summary
We already know that `Jv` is much more expensive than `JTv` in PyTorch due to the double reverse mode trick. We want to confirm this by running SimpleNN-MATLAB, since SimpleNN-MATLAB supports forward mode for `Jv`.
##### Results
- The runtimes of `Jv` and `JTv` are similar in MATLAB.
```
profile on
sample("-max_iter 100 -Jacobian 0");
profile viewer
```

### Forward mode AD in PyTorch (Wu)
##### Summary
Forward mode AD is available in PyTorch >= 1.11. We figured out
- How to use forward mode AD in PyTorch.

Suppose we have a model $f(x, \theta)$ with $\theta$ as the parameters and $x$ as the input. We conducted some experiments:
- $J_{\theta}v$ ($\theta$: parameters) is faster using forward mode AD.
- $J_{x}v$ ($x$: input) is somehow slower using forward mode AD (needs more experiments).
##### Dual Number
Dual numbers form a number system; a dual number is written as
$$
a + b\epsilon\qquad(a,b\in\mathbb{R})
$$
where $\epsilon$ is a symbol satisfying $\epsilon^2 = 0$ with $\epsilon \neq 0$. For example,
$$
(a+b\epsilon)(c+d\epsilon) = ac + (ad+bc)\epsilon
$$
This number system is widely used in forward mode AD. Consider a polynomial
$$
P(x) = qx^2 + rx + s
$$
If we plug $a+ b\epsilon$ into $P(x)$, we get
$$
P(a+b\epsilon) = q(a^2+2ab\epsilon) + r(a + b\epsilon) + s = P(a) + P^{\prime}(a)b\epsilon
$$
For convenience, we use the pair notation $\langle a,b \rangle$ to represent $a + b\epsilon$.
In general, for any (analytic) real function $f:\mathbb{R} \to \mathbb{R}$, we obtain the product of the derivative at a point $a$, $f^{\prime}(a)$, with $b$ by passing $\langle a,b \rangle$ through the function:
$$
f(\langle a,b \rangle) = \langle f(a),f^{\prime}(a)b \rangle
$$
We can extend this to multivariate functions $f:\mathbb{R}^{n} \to \mathbb{R}^{m}$. For a point $x \in \mathbb{R}^{n}$ and a direction $v \in \mathbb{R}^{n}$, we can calculate the Jacobian-vector product $J_{f}(x)v$ by
$$
f(\langle x_{1},v_{1} \rangle, \langle x_{2},v_{2} \rangle, \ldots ,\langle x_{n},v_{n} \rangle) = (\langle y_{1},y^{\prime}_{1} \rangle, \langle y_{2},y^{\prime}_{2} \rangle, \ldots ,\langle y_{m},y^{\prime}_{m} \rangle)
$$
where
$$
(y_1, \ldots, y_m) = y = f(x) \\ (y^{\prime}_1, \ldots, y^{\prime}_m) = y^{\prime} = J_{f}(x)v
$$
##### PyTorch implementation of dual number
We use `torch.autograd.forward_ad` to do forward mode AD in PyTorch.
The following are the two main resources on this topic: the first is the official documentation on forward mode AD, and the second is the GitHub issue tracking forward mode AD.
- [Forward-mode Automatic Differentiation (Beta)](https://pytorch.org/tutorials/intermediate/forward_ad_usage.html)
- [[feature request] Forward-mode automatic differentiation#10223](https://github.com/pytorch/pytorch/issues/10223)
Let's look at a simple example:
```python
import torch
import torch.autograd.forward_ad as fwAD

primal = torch.tensor([2, 5], dtype=float)
tangent = torch.tensor([1, 0], dtype=float)

def fn(x):
    return torch.log(x[0]) + x[0] * x[1] - torch.sin(x[1])

with fwAD.dual_level():  # context manager for forward mode
    dual_input = fwAD.make_dual(primal, tangent)  # make dual number
    dual_output = fn(dual_input)  # forward mode pass
    jvp = fwAD.unpack_dual(dual_output)  # unpack dual number
    print(jvp.primal)
    print(jvp.tangent)
```
This returns
```
tensor(11.6521, dtype=torch.float64)
tensor(5.5000, dtype=torch.float64)
```
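For reference, the same JVP can also be computed with the higher-level `torch.func.jvp` API in newer PyTorch releases (formerly part of functorch):
```python
import torch
from torch.func import jvp

primal = torch.tensor([2.0, 5.0], dtype=torch.float64)
tangent = torch.tensor([1.0, 0.0], dtype=torch.float64)

def fn(x):
    return torch.log(x[0]) + x[0] * x[1] - torch.sin(x[1])

# jvp returns (f(primal), J_f(primal) @ tangent)
out, jvp_val = jvp(fn, (primal,), (tangent,))
print(out, jvp_val)  # tensor(11.6521) and tensor(5.5000)
```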
##### PyTorch forward mode AD profiling
Suppose we have a model $f(\theta, x) = z$, where $\theta$ is the parameter and $x, z$ are the input and output. In our PyTorch implementation, we use reverse mode AD twice to calculate $J_{\theta}v$. We want to know whether forward mode AD is faster. Luckily, this was already discussed in the original issue.
Link: https://github.com/pytorch/pytorch/issues/10223#issuecomment-950213842
> [@albanD](https://github.com/albanD) Thanks for the wonderful work of forward-mode AD!
>
> As mentioned in the [first comment of this issue](https://github.com/pytorch/pytorch/issues/10223#issue-347538989), I think the main use case of the forward-mode AD is computing JVP for the Jacobian of the model outputs **w.r.t. model parameters** (not w.r.t. model input).
It appears our use case aligns with the intended use of `torch.autograd.forward_ad`.
They also provide some profiling results. I will first show how to do forward mode AD in PyTorch:
```python
import torch
import torch.autograd.forward_ad as forward_ad
from torch.nn import Sequential, Linear
from torch.nn.utils._stateless import functional_call  # torch.func.functional_call in newer versions

# setup matching the profiling configuration below
device = 'cpu'
batch_size = 8
dim = 1024
n_layers = 100

model = Sequential().to(device)
for i in range(n_layers):
    model.add_module(f'fc{i}', Linear(dim, dim).to(device))

x = torch.randn(batch_size, dim).to(device)            # model input
v = [torch.randn_like(p) for p in model.parameters()]  # v of JVP

def jvp_by_forward_ad_2():
    with torch.no_grad():
        with forward_ad.dual_level():  # enable fwAD context manager
            params = {}
            for i, (name, p) in enumerate(model.named_parameters()):
                params[name] = forward_ad.make_dual(p, v[i])  # dual number for params
            rst = functional_call(model, params, x)  # forward pass w.r.t. params
            _, jvp = forward_ad.unpack_dual(rst)  # unpack dual number
            return jvp
```
This code is written by the author of `torch.autograd.forward_ad` **[@albanD](https://github.com/albanD)**.
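For comparison, the reverse-mode baseline in that benchmark computes the same $J_{\theta}v$ with the double-backward trick. A minimal sketch of that idea, reusing `model`, `x`, and `v` from the block above (my own reconstruction, not necessarily the author's exact `jvp_by_reverse_ad`):
```python
def jvp_by_reverse_ad_sketch():
    out = model(x)
    # dummy cotangent u: grad(out, params, u) gives J^T u;
    # differentiating that result w.r.t. u in direction v gives J v
    u = torch.zeros_like(out, requires_grad=True)
    vjp = torch.autograd.grad(out, list(model.parameters()),
                              grad_outputs=u, create_graph=True)
    jvp = torch.autograd.grad(vjp, u, grad_outputs=v)[0]
    return jvp
```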
The results provided by the author are:
```
batch_size: 8
dim: 1024
n_layers: 100
device: cpu
-------------
jvp_by_reverse_ad: 0.651s
jvp_by_forward_ad_2: 0.208s
```
I have also run the code on my machine; the results are similar, with forward mode about 2x faster.
```
batch_size: 8
dim: 1024
n_layers: 100
device: cpu
-------------
jvp_by_reverse_ad: 0.054228s (max memory allocated: 0.00GB)
jvp_by_forward_ad_2: 0.022454s (max memory allocated: 0.00GB)
```
```
batch_size: 8
dim: 1024
n_layers: 100
device: cuda
cuda device: NVIDIA GeForce RTX 4080
-------------
jvp_by_reverse_ad: 0.007386s (max memory allocated: 2.37GB)
jvp_by_forward_ad_2: 0.004058s (max memory allocated: 1.97GB)
```
```
batch_size: 64
dim: 1024
n_layers: 4
device: cuda
cuda device: NVIDIA GeForce RTX 4080
-------------
jvp_by_reverse_ad: 0.000390s (max memory allocated: 0.11GB)
jvp_by_forward_ad_2: 0.000221s (max memory allocated: 0.10GB)
```
```
batch_size: 64
dim: 1024
n_layers: 64
device: cuda
cuda device: NVIDIA GeForce RTX 4080
-------------
jvp_by_reverse_ad: 0.006047s (max memory allocated: 1.55GB)
jvp_by_forward_ad_2: 0.003298s (max memory allocated: 1.27GB)
```
```
batch_size: 64
dim: 1024
n_layers: 256
device: cuda
cuda device: NVIDIA GeForce RTX 4080
-------------
jvp_by_reverse_ad: 0.022146s (max memory allocated: 6.15GB)
jvp_by_forward_ad_2: 0.012105s (max memory allocated: 5.02GB)
```
##### Forward mode AD w.r.t. input
Surprisingly, forward mode AD is slower when differentiating w.r.t. the input.
> `autograd.functional.jvp` computes the jvp by using the backward of the backward (sometimes called the double backwards trick). This is not the most performant way of computing the jvp. Please consider using [`torch.func.jvp()`](https://pytorch.org/docs/stable/generated/torch.func.jvp.html#torch.func.jvp) or the [low-level forward-mode AD API](https://pytorch.org/docs/stable/autograd.html#forward-mode-ad) instead.
I have done some experiments with `autograd.functional.jvp` and `torch.func.jvp` (the high-level API for the method introduced in the previous section), but `torch.func.jvp` is always slower when computing the JVP w.r.t. the input. Why?
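A minimal version of this comparison (my own setup; the layer sizes and repetition count are illustrative):
```python
import time
import torch
from torch.autograd.functional import jvp as jvp_double_backward
from torch.func import jvp as jvp_forward

model = torch.nn.Sequential(*[torch.nn.Linear(1024, 1024) for _ in range(16)])
x = torch.randn(8, 1024)   # model input
v = torch.randn_like(x)    # direction for J_x v

def f(inp):
    return model(inp)

for name, fn in [("double backward", lambda: jvp_double_backward(f, x, v)),
                 ("forward mode", lambda: jvp_forward(f, (x,), (v,)))]:
    start = time.time()
    for _ in range(10):
        fn()
    print(f"{name}: {time.time() - start:.4f}s")
```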
## Meeting 6.16
### summary
- Forward mode AD does not need to be pushed much further for now.
- The key point is to first demonstrate that Newton is useful on the text classification task.
- Optimizing forward mode AD only makes sense if Newton actually turns out to be useful.
- So the current focus should be on demonstrating the effectiveness of Newton.
- First goal: show that Newton is competitive on text classification.
### next goal
- Add forward mode AD to our current code and see whether performance improves
- Polish our current code so that it can run somewhat larger experiments
- sub gradient
- more datasets
- some refactoring
## 6.19-6.27
### Summary
- Implemented forward mode AD for the Gauss-Newton matrix-vector product
- About 2 times faster than reverse mode AD
- Implemented subgradient; we can now run LEDGAR and other datasets
- Some experiments on datasets other than spam
### Implement forward mode AD in Gv (Wu)
##### Implementation
In our original code, we calculate `BJv` in a single pass rather than computing `Jv` first and then `BJv`.
```python
def Gv_legacy(loss, outputs, v, damping, model):
    grads_outputs = torch.autograd.grad(loss, outputs, create_graph=True)
    BJv = Rop(grads_outputs, model.parameters(), v)
    JBJv = torch.autograd.grad(
        outputs, model.parameters(), grad_outputs=BJv.reshape_as(outputs), retain_graph=True)
    return parameters_to_vector(JBJv).detach() + damping * v
```
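`Rop` here is presumably implemented with the double reverse mode trick mentioned earlier (the "backward of the backward"). A minimal sketch of that trick for a single output tensor, with `params` and `v` as matching lists of tensors (illustrative helper, not our exact `Rop`):
```python
import torch

def rop_sketch(y, params, v):
    """Compute J v for y = f(params) using only reverse-mode AD (double backward)."""
    u = torch.zeros_like(y, requires_grad=True)  # dummy cotangent
    vjp = torch.autograd.grad(y, params, grad_outputs=u, create_graph=True)  # J^T u
    return torch.autograd.grad(vjp, u, grad_outputs=v)[0]  # d(v . J^T u)/du = J v
```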
The problem is that this combined `BJv` computation does not carry over to forward mode. When using forward mode AD, we need to first calculate `Jv` and then `BJv`.
Splitting the `BJv` calculation into two parts is also doable in reverse mode AD:
```python
def Gv_reverse(outputs, v, damping, model):
    Jv = Rop2(outputs, model.parameters(), v)
    BJv = 2 * Jv / outputs.numel()
    JBJv = torch.autograd.grad(
        outputs, model.parameters(), grad_outputs=BJv.reshape_as(outputs), retain_graph=True)
    return parameters_to_vector(JBJv).detach() + damping * v
```
We take a closer look at this line:
```python
BJv = 2 * Jv / outputs.numel()
```
This is equal to
```python
BJv = 2 * Jv / (batch_size * num_labels)
```
In the original paper, the loss
$$
\lVert z^{L-1} - y \rVert^{2}
$$
is used. The Hessian matrix of this loss function is
$$
2I = \begin{bmatrix} 2 & \dots & 0\\ \vdots & \ddots & \vdots\\ 0 & \dots & 2 \end{bmatrix}
$$
The PyTorch implementation of this loss function is
$$
\frac{1}{M} \lVert z^{L-1} - y \rVert^{2}
$$
where $M$ is the dimension of $y$ and $z^{L-1}$.
The Hessian matrix will then be
$$
\frac{2}{M}I = \begin{bmatrix}
\frac{2}{M} & \dots & 0\\
\vdots & \ddots & \vdots\\
0 & \dots & \frac{2}{M}
\end{bmatrix}
$$
Since we also average over every data point in the batch, the Hessian matrix should be
$$
\frac{2}{Ml}I = \begin{bmatrix}
\frac{2}{Ml} & \dots & 0\\
\vdots & \ddots & \vdots\\
0 & \dots & \frac{2}{Ml}
\end{bmatrix}
$$
where $l$ is the batch size.
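A quick check of this scaling (a standalone sketch, not part of our training code): the Hessian of `nn.MSELoss()` with the default mean reduction, taken w.r.t. the predictions, should be $\frac{2}{Ml}I$.
```python
import torch
import torch.nn as nn

l, M = 4, 3                    # batch size and output dimension
y = torch.randn(l, M)
loss_fn = nn.MSELoss()         # reduction='mean' divides by M * l

def loss_wrt_z(z_flat):
    return loss_fn(z_flat.view(l, M), y)

z = torch.randn(l * M)
H = torch.autograd.functional.hessian(loss_wrt_z, z)
print(torch.allclose(H, (2.0 / (M * l)) * torch.eye(l * M)))  # expected: True
```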
We only need to change the calculation of `Jv` into forward mode:
```python
def Gv_forward(x, outputs, v, damping, model):
    _, Jv = Jv_forward(x, v, model)
    BJv = 2 * Jv / outputs.numel()
    JBJv = torch.autograd.grad(
        outputs, model.parameters(), grad_outputs=BJv.reshape_as(outputs), retain_graph=True)
    return parameters_to_vector(JBJv).detach() + damping * v
```
```python
def Jv_forward(x, v, model):
    with torch.no_grad():
        with forward_ad.dual_level():  # enable fwAD context manager
            params = {}
            pos = 0
            for i, (name, p) in enumerate(model.named_parameters()):
                num_elems = p.numel()
                v_part = v[pos:pos + num_elems]
                v_part = v_part.view(p.shape)
                params[name] = forward_ad.make_dual(p, v_part)  # dual number for params
                pos += num_elems
            rst = functional_call(model, params, x)  # forward pass w.r.t. params
            opts, Jv = forward_ad.unpack_dual(rst)  # unpack dual number
            return opts, parameters_to_vector(Jv)
```
We ran the experiment and recorded the runtimes of the different implementations on `spam` using an `RTX 4080`:
- Three experiments at each subsampling rate (`Gv_forward`, `Gv_reverse`, `Gv_legacy`)
- The range of each bar is $[\mu - \sigma, \mu + \sigma]$

We get similar results on CPU (forward mode is 1-2x faster).

##### Implementation: Cross Entropy Loss
In the original paper and our current code, we use `MSELoss` as the loss function. We might also want to use `CrossEntropyLoss`. The problem is that the Hessian matrix of `CrossEntropyLoss` is not diagonal. Luckily, the Hessian still has enough structure to let us avoid a matrix-matrix product when calculating `BJv`.

If we set $\hat{y} = \mathrm{Softmax}(z^{L-1})$, the Hessian matrix of `CrossEntropyLoss` is
$$
diag(\hat{y}) - \hat{y}\hat{y}^{\intercal}
$$
This is essentially a diagonal-matrix times matrix product plus two vector-matrix products, which is much cheaper than a general matrix-matrix product.
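A small sketch of this Hessian-vector product (hypothetical helper, not yet in our code), applied row-wise over a batch of logits:
```python
import torch

def ce_Bv(logits, Jv):
    """Multiply B = diag(y_hat) - y_hat y_hat^T by Jv, per sample, without forming B."""
    y_hat = torch.softmax(logits, dim=-1)
    return y_hat * Jv - y_hat * (y_hat * Jv).sum(dim=-1, keepdim=True)

logits, Jv = torch.randn(2, 5), torch.randn(2, 5)
BJv = ce_Bv(logits, Jv)

# check the first sample against the explicit Hessian
y_hat = torch.softmax(logits[0], dim=-1)
B0 = torch.diag(y_hat) - torch.outer(y_hat, y_hat)
print(torch.allclose(BJv[0], B0 @ Jv[0]))  # expected: True
```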
##### Numerical differences
We created a small model and ran some experiments on it to verify that these three methods produce the same result:
```python
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.fc1 = nn.Linear(in_features=4, out_features=4)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(in_features=4, out_features=4)

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return x
```
```python
#Gv1: legacy, Gv2: reverse, Gv3: forward
print(Gv1 - Gv3)
print(Gv2 - Gv3)
print(Gv1 - Gv2)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
grad_fn=<SubBackward0>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
grad_fn=<SubBackward0>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
```
The outputs of these three methods are also identical when running on spam.
### Implement subgradient (Wu)
To run huge datasets like LEDGAR, the subsampled Hessian alone is not enough: function and gradient evaluations consume a lot of memory if we use the full batch. If we split the dataset into small batches and accumulate the results, we can cut memory consumption significantly (a minimal sketch of the idea follows).
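The accumulation idea in a minimal sketch (illustrative helper; the batching interface here is made up, not our actual code):
```python
import torch

def full_loss_and_grad(model, loss_fn, batches):
    """Accumulate loss and gradient over small batches instead of one full batch."""
    params = list(model.parameters())
    grad_sum = [torch.zeros_like(p) for p in params]
    loss_sum, n = 0.0, 0
    for x, y in batches:                      # iterate over small batches
        loss = loss_fn(model(x), y) * len(x)  # undo the per-batch averaging
        grads = torch.autograd.grad(loss, params)
        loss_sum += loss.item()
        for g_sum, g in zip(grad_sum, grads):
            g_sum.add_(g)
        n += len(x)
    return loss_sum / n, [g / n for g in grad_sum]  # re-average over all data
```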
We can now run LEDGAR on our code:
```
python main.py -ep 50 -mb 0.02 -cg 50 -ls 10 \
    -gv forward -ds LEDGAR -bs 32 -nf 256 -fs 2 4 6
[*] evaluation
Test Accuracy: 83.9100%
Test Micro F1: 83.9100%
Test Macro F1: 72.4343%
```
This ran on an `A100` for about 40 minutes (~40 GB of VRAM).
The Micro-F1 of BERT from [Chalkidis et al. (2022)]() is 87.6%.
If we train longer and try more model sizes, this score might be reachable.
The Macro-F1 is also much lower. Why?
### Add datasets + run some experiments (Tang)
##### Summary
We made the DataLoader generic rather than tied to the original dataset.
We added an early-stopping method.
We added 3 new datasets: `Trec`, `Ecomm`, and `20news`.
Here are some experimental results using forward mode.
- ecomm
```
Newton:
Test Accuracy: 93.4259%
Test Micro F1: 93.4259%
Test Macro F1: 93.5371%
```
- spam (less than 30 mins)
```
sgd:
Test Accuracy: 91.2029%
Test Micro F1: 91.2029%
Test Macro F1: 73.7902%
Adam:
Test Accuracy: 98.7433%
Test Micro F1: 98.7433%
Test Macro F1: 97.1751%
Newton:
Test Accuracy: 98.0251%
Test Micro F1: 98.0251%
Test Macro F1: 95.4791%
```
- trec (less than 1 hour)
```
sgd:
Test Accuracy: 62.4750%
Test Micro F1: 62.4750%
Test Macro F1: 24.8341%
Adam:
Test Accuracy: 79.6407%
Test Micro F1: 79.6407%
Test Macro F1: 59.9765%
Newton:
Test Accuracy: 76.4471%
Test Micro F1: 76.4471%
Test Macro F1: 47.6093%
```
- 20news
still running (too slow: about 3 epochs per hour)
## meeting 7.1
### summary
- Datasets cannot be chosen arbitrarily; we need a proper point of comparison (news20 is only comparable if we follow Yu-Chen's split).
### next goal
- prepare code review
- Together with Yu-Chen, add KimCNN to his LibMultiLabel-like framework, try running SGD with momentum, Adam, etc., and draw validation accuracy curves
- Building on the previous item, add the Newton method so we can compare and identify its advantages
- Long term: Newton on BERT rather than KimCNN
- `accumulate_grad_batches`
## meeting 7.5 (discuss experiment settings)
### Plans
- **Target:** [Text Classification Baseline Table 1 (Yu-Chen, et al.)](https://www.csie.ntu.edu.tw/~cjlin/papers/text_classification_baseline/text_classification_baseline.pdf)

- **Datasets & Configuration Files**: [SCOTUS](https://github.com/JamesLYC88/long_documents_project/blob/main/config/scotus/bert_tune.yml), [20News](https://github.com/JamesLYC88/long_documents_project/blob/main/config/20news/bert_tune.yml), [LEDGAR](https://github.com/JamesLYC88/long_documents_project/blob/main/config/ledgar/bert_tune.yml)
- **Action items**
- check if accum_grad can be used in pl.lightning: Yun-An
- set configuration file (SCOTUS): Tonmo
- learning rate scheduler: 紹軒
- run exp on SCOTUS using the config
- config, full batch (GPU?, test memory usage)
...
## 7.5-7.15
### Cross entropy loss is better? (Yun-An)



### set configuration file for SCOTUS (Tonmo)
Initial configuration:
- max seq length: 512
- learning_rate: 0.1, 0.03, 0.01, 0.003, 0.001
- weight_decay: 0
- val_metric: Micro-F1
- batch_size: 16
- loss function: cross entropy
- optimizer: adam
- network_config: see example_config/EUR-Lex/kim_cnn_tune.yml
- grid search: about 100 configurations
- lr_scheduler: ReduceLROnPlateau
  - scheduler_config:
    - factor: 0.9
    - patience: 9
    - min_lr: 0.0001
- network_config:
  - activation: relu
  - embed_dropout: [0, 0.2, 0.4]
  - encoder_dropout: [0, 0.2, 0.4]
  - filter_sizes: [2, 4, 8]
  - num_filter_per_size: [128, 256, 384, 512, 1024]
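
For reference, the scheduler settings above correspond roughly to PyTorch's built-in `ReduceLROnPlateau` as follows (standalone sketch with placeholder model, learning rate, and metric; the actual runs go through the configuration file):
```python
import torch

model = torch.nn.Linear(16, 4)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=0.003)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max",      # maximize validation Micro-F1
    factor=0.9, patience=9, min_lr=0.0001)

for epoch in range(100):
    val_micro_f1 = 0.0          # placeholder validation metric
    scheduler.step(val_micro_f1)  # reduce lr when the metric plateaus
```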
### SCOTUS results
| Model + Optimizer/Loss | Macro-F1 | Micro-F1 |
| ----------------------- | -------- | -------- |
| **KimCNN + SGD/MSE loss** (best val) | 0.5426 | 0.6750 |
| **KimCNN + SGD/MSE loss** (test) | 0.4608 | **0.6407** |
| **KimCNN + Adam/Cross Entropy loss** (best val) | 0.6637 | 0.7407 |
| **KimCNN + Adam/Cross Entropy loss** (test) | 0.5919 | **0.6864** |
| **KimCNN + Newton/MSE loss (our code)** | 0.4101 | 0.6429 |
| **KimCNN + Adam/MSE loss (our code)** | 0.4834 | 0.6657 |
| **BERT (tuned) + Adam/Cross Entropy loss** | 0.559 | 0.671 |
| **Linear** | 0.689 | **0.781** |
Increasing the subsampling rate does not necessarily yield better results.
### End goal (11 ??)

### meeting 7.15
- check if the gradient norm goes to zero when running Newton
- run more datasets
### Check Gradient (Wu)






## 7.15-8.15
### news20 results
<!--
```
newton
Test Accuracy: 82.7934%
Test Micro F1: 82.7934%
Test Macro F1: 82.1395%
newton with early stopping
Test Accuracy: 82.3022%
Test Micro F1: 82.3022%
Test Macro F1: 81.6102%
adam
Test Accuracy: 85.3691%
Test Micro F1: 85.3691%
Test Macro F1: 84.6662%
```
-->
| Model + Optimizer/Loss | Macro-F1 | Micro-F1 |
| ----------------------- | ---------------- | --------------- |
| **KimCNN + SGD/MSE loss**| 0.7962 | 0.8035 |
| **KimCNN + Adam/Cross Entropy loss**| 0.8381 | 0.8435 |
| **KimCNN + Newton/MSE loss (our code)** | 0.8214 | 0.8279 |
| **KimCNN + Adam/MSE loss (our code)** | **0.8467** | **0.8537** |
| **BERT (tuned) + Adam/Cross Entropy loss** | **0.849** | **0.856** |
| **Linear** | 0.846 | 0.853 |
### LEDGAR results
<!--
```
newton
Test Accuracy: 82.3300%
Test Micro F1: 82.3300%
Test Macro F1: 69.8297%
newton 100 epoch
Test Accuracy: 83.7800%
Test Micro F1: 83.7800%
Test Macro F1: 72.0632%
adam
Test Accuracy: 85.1900%
Test Micro F1: 85.1900%
Test Macro F1: 78.0785%
```
-->
| Model + Optimizer/Loss | Macro-F1 | Micro-F1 |
| ----------------------- | ---------------- | --------------- |
| **KimCNN + SGD/MSE loss**| **0.8128** | **0.8702** |
| **KimCNN + Adam/Cross Entropy loss**| 0.7705 | 0.8409 |
| **KimCNN + Newton/MSE loss (our code)** | 0.7206 | 0.8378 |
| **KimCNN + Adam/MSE loss (our code)** | 0.7808 | 0.8519 |
| **BERT (tuned) + Adam/Cross Entropy loss** | **0.807** | **0.870** |
| **Linear** | 0.800 | 0.864 |
### SCOTUS results
| Model + Optimizer/Loss | Macro-F1 | Micro-F1 |
| ----------------------- | ---------------- | --------------- |
| **KimCNN + SGD/MSE loss**| 0.4608 | 0.6407 |
| **KimCNN + Adam/Cross Entropy loss**| 0.5919 | **0.6864** |
| **KimCNN + Newton/MSE loss (our code)** | 0.4101| 0.6429|
| **KimCNN + Adam/MSE loss (our code)** | 0.4834|0.6657|
| **BERT (tuned) + Adam/Cross Entropy loss** | 0.559 | 0.671 |
| **Linear** | 0.689 | **0.781** |
## 8.16 - Wrap-up!
Added a regularization parameter for KimCNN:
```
ssh tonmoregulus@peanuts.csie.ntu.edu.tw
mlgroup
cd KimCNN
cd KimCNN-regularization
source KimCNN/bin/activate
python main.py -ds news20 -cg 50 -ls 10 -mb 0.05 -bs 256 -ep 100 -gv forward -nf 256 -fs 2 4 8 -reg 100
```
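For reference, my reading of the `-reg` flag (an assumption, to be confirmed against the code) is the usual L2-regularized objective, so that `loss(reg)` and `loss(Gv)` in the tables below correspond to the two terms
$$
f(\theta) = \underbrace{\frac{1}{2C}\lVert \theta \rVert^{2}}_{\text{loss(reg)}} + \underbrace{\frac{1}{l}\sum_{i=1}^{l}\xi(\theta; x_{i}, y_{i})}_{\text{loss(Gv)}}
$$
where a larger $C$ means weaker regularization.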

### news20 results
| lg( C ) | Macro-F1 | Micro-F1 | loss(Gv) | loss(reg) | order of magnitude of loss(reg)/loss(Gv) |
| ----------------------- | ---------------- | --------------- | - | - | - |
| **None**| **0.8117** | **0.8184** | —— | —— | —— |
| 0 | 0.0050 | 0.0527 | 0.0472 | 0.0002 | e-2 |
| 1 | 0.0050 | 0.0527 | 0.0458 | 0.0006 | e-2 |
| 2 | 0.0050 | 0.0527 | 0.0452 | 0.0002 | e-2 |
| 3 | 0.1216 | 0.1922 | 0.0421 | 0.0029 | e-1 |
| 4 | 0.0177 | 0.0584 | 0.7959 | 57.6422 | e2 |
| 5 | 0.7879 | 0.7848 | 0.0123 | 10.4114 | e3 |
| 6 | 0.8103 | 0.8161 | 0.0006 | 1.1359 | e4 |
| 7 | 0.8131 | 0.8194 | 0.0017 | 0.1142 | e2 |
| 8 | 0.8103 | 0.8176 | 0.0016 | 0.0114 | e1 |
| 9 | 0.8134 | 0.8198 | 0.0021 | 0.0011 | e0 |
| 10 | 0.8088 | 0.8156 | 0.0017 | 0.0001 | e-1 |
| 11 | 0.8155 | 0.8228 | 0.0020 | 1.1426e-5 | e-2 |
| 12 | 0.8124 | 0.8194 | 0.0023 | 1.1426e-6 | e-3 |
| 13 | 0.8150 | 0.8222 | 0.0016 | 1.1426e-7 | e-4 |
| 14 | 0.8120 | 0.8186 | 0.0018 | 1.1426e-8 | e-5 |
| 15 | 0.8163 | 0.8229 | 0.0018 | 1.1426e-9 | e-6 |
#### Three-run averages for lg(C) values where loss(reg)/loss(Gv) lies between $10^{2}$ and $10^{-2}$
| lg( C ) | Macro-F1(avg) | Micro-F1(avg) | loss(Gv)(avg) | loss(reg)(avg) |
| ----------------------- | ---------------- | --------------- | - | - |
| 7 | 0.8144 | **0.8212** | 0.0017 | 0.1142 |
| 8 | 0.8116 | 0.8189 | 0.0020 | 0.0114 |
| 9 | 0.8133 | 0.8204 | 0.0018 | 0.0011 |
| 10 | 0.8138 | 0.8204 | 0.0018 | 0.0001 |
| 11 | 0.8142 | **0.8212** | 0.0018 | 1.1426e−5 |
| 15 (approximately the unregularized case) | 0.8117 | **0.8184** | 0.0019 | 1.1426e-9 |
#### Detailed experiment data
attempt 1 (stored in `results_1`)
| lg( C ) | Micro-F1 | Macro-F1 | loss(Gv) | loss(reg) | loss(reg)/loss(Gv) |
| ----------------------- | ---------------- | --------------- | - | - | - |
| 7 | 0.8200 | 0.8126 | 0.0015 | 0.1142 | |
| 8 | 0.8172 | 0.8096 | 0.0021 | 0.0114 | |
| 9 | 0.8177 | 0.8108 | 0.0018 | 0.0011 | |
| 10 | 0.8176 | 0.8115 | 0.0017 | 0.0001 | |
| 11 | 0.8214 | 0.8139 | 0.0018 | 1.1426e-5 | |
| 15 | 0.8153 | 0.8082 | 0.0021 | 1.1426e-9 | |
attempt 2 (stored in `results_2`)
| lg( C ) | Micro-F1 | Macro-F1 | loss(Gv) | loss(reg) | loss(reg)/loss(Gv) |
| ----------------------- | ---------------- | --------------- | - | - | - |
| 7 | 0.8230 | 0.8168 | 0.0018 | 0.1142 | |
| 8 | 0.8210 | 0.8140 | 0.0020 | 0.0114 | |
| 9 | 0.8214 | 0.8144 | 0.0019 | 0.0011 | |
| 10 | 0.8186 | 0.8118 | 0.0016 | 0.0001 | |
| 11 | 0.8213 | 0.8141 | 0.0018 | 1.1426e-5 | |
| 15 | 0.8213 | 0.8149 | 0.0018 | 1.1426e-9 | |
attempt 3 (stored in `results_3`)
| lg( C ) | Micro-F1 | Macro-F1 | loss(Gv) | loss(reg) | loss(reg)/loss(Gv) |
| ----------------------- | ---------------- | --------------- | - | - | - |
| 7 | 0.8206 | 0.8137 | 0.0017 | 0.1142 | |
| 8 | 0.8185 | 0.8111 | 0.0018 | 0.0114 | |
| 9 | 0.8220 | 0.8148 | 0.0016 | 0.0011 | |
| 10 | 0.8249 | 0.8182 | 0.0022 | 0.0001 | |
| 11 | 0.8208 | 0.8145 | 0.0020 | 1.1426e-5 | |
| 15 | 0.8186 | 0.8121 | 0.0017 | 1.1426e-9 | |
