# 孟潔 English Notes

###### tags: `others`

## Introduction

> adding only a small set of feature-maps to the “collective knowledge” of the network and keep the remaining feature-maps unchanged—and the final classifier makes a decision based on all feature-maps in the network.

- "adding only a small set of feature-maps to the 'collective knowledge'" means each layer contributes only a small set of feature-maps to the collective knowledge
- "keep the remaining feature-maps unchanged" means the other feature-maps are left as they are
- "the final classifier makes a decision based on all feature-maps in the network": the final classifier decides based on the feature-maps of the entire network (see the first sketch at the end of this note)

## Growth rate

> If each function $H_\ell$ produces $k$ feature-maps, it follows that the $\ell^{th}$ layer has $k_0 + k \times (\ell - 1)$ input feature-maps, where $k_0$ is the number of channels in the input layer.

- "it follows that the $\ell^{th}$ layer has $k_0 + k \times (\ell - 1)$ input feature-maps" spells out how the $k$ feature-maps from the previous clause accumulate layer by layer
- "where $k_0$ is the number of channels in the input layer" simply defines $k_0$ as the channel count of the input layer

> One explanation for this is that each layer has access to all the preceding feature-maps in its block and, therefore, to the network’s “collective knowledge”. One can view the feature-maps as the global state of the network

- Split the first sentence into two parts at "that":
  - "One explanation for this is" introduces the explanation
  - each layer can access all the preceding feature-maps in its block, and therefore the network's "collective knowledge"
- One can view the feature-maps as the global state of the network

## Compression

> If a dense block contains $m$ feature-maps, we let the following transition layer generate $\lfloor \theta m \rfloor$ output feature-maps, where $0 < \theta \le 1$ is referred to as the compression factor. When $\theta = 1$, the number of feature-maps across transition layers remains unchanged.

- "where $0 < \theta \le 1$ is referred to as the compression factor" modifies the preceding clause
- Translation: if a dense block contains $m$ feature-maps, we let the transition layer that follows it generate $\lfloor \theta m \rfloor$ output feature-maps, where $0 < \theta \le 1$ is called the compression factor. When $\theta = 1$, the number of feature-maps passing through transition layers stays unchanged. (A transition-layer sketch appears at the end of this note.)

## Implementation Details

> On all datasets except ImageNet, the DenseNet used in our experiments has three dense blocks that each has an equal number of layers. Before entering the first dense block, a convolution with 16 (or twice the growth rate for DenseNet-BC) output channels is performed on the input images

- Translation: on all datasets except ImageNet, the DenseNet used in our experiments has three dense blocks, each with the same number of layers. Before entering the first dense block, a convolution with 16 output channels (or twice the growth rate, for DenseNet-BC) is applied to the input images.

> In our experiments on ImageNet, we use a DenseNet-BC structure with 4 dense blocks on 224×224 input images. The initial convolution layer comprises $2k$ convolutions of size 7×7 with stride 2; the number of feature-maps in all other layers also follows from setting $k$.

- Translation: in our ImageNet experiments, we use a DenseNet-BC structure with 4 dense blocks on 224×224 input images. The initial convolution layer comprises $2k$ convolutions of size 7×7 with stride 2; the number of feature-maps in all other layers also follows from setting $k$.

<mark>DenseNet and DenseNet-BC are not the same architecture!</mark>

- Plain DenseNet layer: BN-ReLU-Conv(3×3)
- DenseNet-BC layer: BN-ReLU-Conv(1×1)-BN-ReLU-Conv(3×3)

(Confirmed against the paper: the "B" stands for the 1×1 bottleneck convolution and the "C" for compression with $\theta < 1$. Sketches of both layer types appear at the end of this note.)

----

:::success
Small tip: break long sentences apart at connective words, e.g. "and", "what", "that", ...
:::
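----

## Code sketches

A runnable check of the growth-rate arithmetic above, as a minimal sketch assuming PyTorch (the names `DenseLayer`, `k0`, and `growth_rate` are illustrative, not from the paper). Each layer adds only $k$ new feature-maps to the concatenated "collective knowledge", so layer $\ell$ receives $k_0 + k \times (\ell - 1)$ input channels:

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One H_l of plain DenseNet: BN-ReLU-Conv(3x3) producing k feature-maps."""
    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(self.relu(self.norm(x)))

k0, k = 16, 12                    # input channels and growth rate (example values)
x = torch.randn(1, k0, 32, 32)    # the "collective knowledge" so far
for l in range(1, 5):
    # layer l receives k0 + k*(l-1) input feature-maps
    assert x.shape[1] == k0 + k * (l - 1)
    new_maps = DenseLayer(x.shape[1], k)(x)   # only k new feature-maps produced
    x = torch.cat([x, new_maps], dim=1)       # earlier maps stay unchanged
```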
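The compression factor $\theta$ can be sketched the same way. The paper describes the transition layer as batch normalization, a 1×1 convolution, and 2×2 average pooling; `TransitionLayer` below is an illustrative name, and $\lfloor \theta m \rfloor$ is taken literally:

```python
import math
import torch
import torch.nn as nn

class TransitionLayer(nn.Module):
    """BN -> Conv(1x1) -> AvgPool(2x2), emitting floor(theta * m) feature-maps."""
    def __init__(self, m: int, theta: float = 0.5):
        super().__init__()
        assert 0.0 < theta <= 1.0          # theta is the compression factor
        self.norm = nn.BatchNorm2d(m)
        self.conv = nn.Conv2d(m, math.floor(theta * m),
                              kernel_size=1, bias=False)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.conv(self.norm(x)))

x = torch.randn(1, 64, 32, 32)                  # a dense block ended with m = 64 maps
print(TransitionLayer(64, theta=0.5)(x).shape)  # torch.Size([1, 32, 16, 16])
print(TransitionLayer(64, theta=1.0)(x).shape)  # theta = 1 keeps all 64 maps
```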
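The two initial convolutions from the Implementation Details quotes, again as a sketch (3 input channels assumed for RGB images; the quoted text fixes only the channel counts, and the 3×3 kernel for the non-ImageNet stem follows common implementations rather than the quote itself):

```python
import torch.nn as nn

k = 32  # example growth rate

# Before the first dense block (all datasets except ImageNet):
# 16 output channels, or 2k for DenseNet-BC
stem_cifar = nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False)

# ImageNet: 2k convolutions of size 7x7 with stride 2
stem_imagenet = nn.Conv2d(3, 2 * k, kernel_size=7, stride=2,
                          padding=3, bias=False)
```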
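Finally, the two composite functions behind the DenseNet vs. DenseNet-BC note, side by side. The $4k$ width of the bottleneck's 1×1 convolution is the value used in the paper:

```python
import torch.nn as nn

def basic_layer(in_ch: int, k: int) -> nn.Sequential:
    """Plain DenseNet H_l: BN-ReLU-Conv(3x3)."""
    return nn.Sequential(
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, k, kernel_size=3, padding=1, bias=False),
    )

def bottleneck_layer(in_ch: int, k: int) -> nn.Sequential:
    """DenseNet-B(C) H_l: BN-ReLU-Conv(1x1)-BN-ReLU-Conv(3x3).
    The 1x1 conv first reduces the input to 4k feature-maps."""
    return nn.Sequential(
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, 4 * k, kernel_size=1, bias=False),
        nn.BatchNorm2d(4 * k),
        nn.ReLU(inplace=True),
        nn.Conv2d(4 * k, k, kernel_size=3, padding=1, bias=False),
    )
```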