Posted March 16, 2024 at 21:51 by mulkan syarif (Keymaster)
Part 10: Visualizing the Simple CNN, VGG16, and MobileNetV1 Models
Simple CNN Model
[Figure: Simple CNN]
In more detail:
Net(
  (conv1): Conv2d(3, 10, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(10, 20, kernel_size=(5, 5), stride=(1, 1))
  (conv2_drop): Dropout2d(p=0.5, inplace=False)
  (fc1): Linear(in_features=320, out_features=50, bias=True)
  (fc2): Linear(in_features=50, out_features=133, bias=True)
  (log): LogSoftmax(dim=1)
)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 10, 24, 24]             760
            Conv2d-2             [-1, 20, 8, 8]           5,020
         Dropout2d-3             [-1, 20, 8, 8]               0
            Linear-4                   [-1, 50]          16,050
            Linear-5                  [-1, 133]           6,783
        LogSoftmax-6                  [-1, 133]               0
================================================================
Total params: 28,613
Trainable params: 28,613
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 0.07
Params size (MB): 0.11
Estimated Total Size (MB): 0.18
----------------------------------------------------------------
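The per-layer parameter counts in the summary above can be checked by hand: a Conv2d layer with bias has (k*k*in + 1)*out parameters, and a Linear layer has in*out + out. A minimal sketch (the helper names are illustrative, not part of the model code):

```python
# Verify the parameter counts torchsummary reports for the simple CNN.
def conv2d_params(c_in, c_out, k, bias=True):
    # Each of the c_out filters has k*k*c_in weights plus one bias term.
    return (k * k * c_in + (1 if bias else 0)) * c_out

def linear_params(f_in, f_out, bias=True):
    return f_in * f_out + (f_out if bias else 0)

counts = {
    "conv1": conv2d_params(3, 10, 5),   # 760
    "conv2": conv2d_params(10, 20, 5),  # 5,020
    "fc1":   linear_params(320, 50),    # 16,050
    "fc2":   linear_params(50, 133),    # 6,783
}
print(counts, sum(counts.values()))     # total: 28,613
```

The dropout and LogSoftmax layers contribute 0 parameters, so the four entries sum exactly to the reported total.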
VGG16 Model
[Figure: VGG16]
In more detail:
VGG16(
  (layer1): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer2): Sequential(
    (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (layer3): Sequential(
    (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer4): Sequential(
    (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (layer5): Sequential(
    (0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer6): Sequential(
    (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer7): Sequential(
    (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (layer8): Sequential(
    (0): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer9): Sequential(
    (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer10): Sequential(
    (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (layer11): Sequential(
    (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer12): Sequential(
    (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer13): Sequential(
    (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (fc): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=4608, out_features=4096, bias=True)
    (2): ReLU()
  )
  (fc1): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=4096, out_features=1024, bias=True)
    (2): ReLU()
  )
  (fc2): Sequential(
    (0): Linear(in_features=1024, out_features=512, bias=True)
    (1): ReLU()
  )
  (fc3): Sequential(
    (0): Linear(in_features=512, out_features=256, bias=True)
    (1): ReLU()
  )
  (fc4): Sequential(
    (0): Linear(in_features=256, out_features=128, bias=True)
    (1): ReLU()
  )
  (fc5): Sequential(
    (0): Linear(in_features=128, out_features=64, bias=True)
    (1): ReLU()
  )
  (fc6): Sequential(
    (0): Linear(in_features=64, out_features=32, bias=True)
    (1): ReLU()
  )
  (fc7): Sequential(
    (0): Linear(in_features=32, out_features=133, bias=True)
    (1): LogSoftmax(dim=1)
  )
)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1          [-1, 64, 120, 120]           1,792
       BatchNorm2d-2          [-1, 64, 120, 120]             128
              ReLU-3          [-1, 64, 120, 120]               0
            Conv2d-4          [-1, 64, 120, 120]          36,928
       BatchNorm2d-5          [-1, 64, 120, 120]             128
              ReLU-6          [-1, 64, 120, 120]               0
         MaxPool2d-7            [-1, 64, 60, 60]               0
            Conv2d-8           [-1, 128, 60, 60]          73,856
       BatchNorm2d-9           [-1, 128, 60, 60]             256
             ReLU-10           [-1, 128, 60, 60]               0
           Conv2d-11           [-1, 128, 60, 60]         147,584
      BatchNorm2d-12           [-1, 128, 60, 60]             256
             ReLU-13           [-1, 128, 60, 60]               0
        MaxPool2d-14           [-1, 128, 30, 30]               0
           Conv2d-15           [-1, 256, 30, 30]         295,168
      BatchNorm2d-16           [-1, 256, 30, 30]             512
             ReLU-17           [-1, 256, 30, 30]               0
           Conv2d-18           [-1, 256, 30, 30]         590,080
      BatchNorm2d-19           [-1, 256, 30, 30]             512
             ReLU-20           [-1, 256, 30, 30]               0
           Conv2d-21           [-1, 256, 30, 30]         590,080
      BatchNorm2d-22           [-1, 256, 30, 30]             512
             ReLU-23           [-1, 256, 30, 30]               0
        MaxPool2d-24           [-1, 256, 15, 15]               0
           Conv2d-25           [-1, 512, 15, 15]       1,180,160
      BatchNorm2d-26           [-1, 512, 15, 15]           1,024
             ReLU-27           [-1, 512, 15, 15]               0
           Conv2d-28           [-1, 512, 15, 15]       2,359,808
      BatchNorm2d-29           [-1, 512, 15, 15]           1,024
             ReLU-30           [-1, 512, 15, 15]               0
           Conv2d-31           [-1, 512, 15, 15]       2,359,808
      BatchNorm2d-32           [-1, 512, 15, 15]           1,024
             ReLU-33           [-1, 512, 15, 15]               0
        MaxPool2d-34             [-1, 512, 7, 7]               0
           Conv2d-35             [-1, 512, 7, 7]       2,359,808
      BatchNorm2d-36             [-1, 512, 7, 7]           1,024
             ReLU-37             [-1, 512, 7, 7]               0
           Conv2d-38             [-1, 512, 7, 7]       2,359,808
      BatchNorm2d-39             [-1, 512, 7, 7]           1,024
             ReLU-40             [-1, 512, 7, 7]               0
           Conv2d-41             [-1, 512, 7, 7]       2,359,808
      BatchNorm2d-42             [-1, 512, 7, 7]           1,024
             ReLU-43             [-1, 512, 7, 7]               0
        MaxPool2d-44             [-1, 512, 3, 3]               0
          Dropout-45                 [-1, 4608]               0
           Linear-46                 [-1, 4096]      18,878,464
             ReLU-47                 [-1, 4096]               0
          Dropout-48                 [-1, 4096]               0
           Linear-49                 [-1, 1024]       4,195,328
             ReLU-50                 [-1, 1024]               0
           Linear-51                  [-1, 512]         524,800
             ReLU-52                  [-1, 512]               0
           Linear-53                  [-1, 256]         131,328
             ReLU-54                  [-1, 256]               0
           Linear-55                  [-1, 128]          32,896
             ReLU-56                  [-1, 128]               0
           Linear-57                   [-1, 64]           8,256
             ReLU-58                   [-1, 64]               0
           Linear-59                   [-1, 32]           2,080
             ReLU-60                   [-1, 32]               0
           Linear-61                  [-1, 133]           4,389
       LogSoftmax-62                  [-1, 133]               0
================================================================
Total params: 38,500,677
Trainable params: 38,500,677
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.16
Forward/backward pass size (MB): 92.20
Params size (MB): 146.87
Estimated Total Size (MB): 239.23
----------------------------------------------------------------
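The in_features=4608 of the first Linear layer follows from the input resolution used in this summary (120x120, consistent with the reported 0.16 MB input size): the 3x3 convolutions with padding=1 preserve spatial size, so only the five 2x2 max pools shrink it, each flooring odd sizes. A quick sketch of that trace:

```python
# Trace a 120x120 input through VGG16's five MaxPool2d(kernel_size=2, stride=2)
# layers; padded 3x3 convs keep the spatial size, so only pooling shrinks it.
size = 120
for _ in range(5):
    size //= 2            # floor division: MaxPool2d floors odd sizes (ceil_mode=False)
print(size)               # 3
print(512 * size * size)  # 4608: channels * height * width fed into the first Linear
```

This is why the classifier head here starts at 4608 rather than the 25088 of a standard 224-class VGG16: the smaller input leaves only a 3x3 feature map.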
Revised VGG16 Model
Below is the revised VGG16 model, with the following details:
VGG16(
  (layer1): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer2): Sequential(
    (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (layer3): Sequential(
    (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer4): Sequential(
    (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (layer5): Sequential(
    (0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer6): Sequential(
    (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer7): Sequential(
    (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (layer8): Sequential(
    (0): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer9): Sequential(
    (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer10): Sequential(
    (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (layer11): Sequential(
    (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer12): Sequential(
    (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (layer13): Sequential(
    (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (fc): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=25088, out_features=4096, bias=True)
    (2): ReLU()
  )
  (fc1): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=4096, out_features=4096, bias=True)
    (2): ReLU()
  )
  (fc2): Sequential(
    (0): Linear(in_features=4096, out_features=133, bias=True)
  )
)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1          [-1, 64, 227, 227]           1,792
       BatchNorm2d-2          [-1, 64, 227, 227]             128
              ReLU-3          [-1, 64, 227, 227]               0
            Conv2d-4          [-1, 64, 227, 227]          36,928
       BatchNorm2d-5          [-1, 64, 227, 227]             128
              ReLU-6          [-1, 64, 227, 227]               0
         MaxPool2d-7          [-1, 64, 113, 113]               0
            Conv2d-8         [-1, 128, 113, 113]          73,856
       BatchNorm2d-9         [-1, 128, 113, 113]             256
             ReLU-10         [-1, 128, 113, 113]               0
           Conv2d-11         [-1, 128, 113, 113]         147,584
      BatchNorm2d-12         [-1, 128, 113, 113]             256
             ReLU-13         [-1, 128, 113, 113]               0
        MaxPool2d-14           [-1, 128, 56, 56]               0
           Conv2d-15           [-1, 256, 56, 56]         295,168
      BatchNorm2d-16           [-1, 256, 56, 56]             512
             ReLU-17           [-1, 256, 56, 56]               0
           Conv2d-18           [-1, 256, 56, 56]         590,080
      BatchNorm2d-19           [-1, 256, 56, 56]             512
             ReLU-20           [-1, 256, 56, 56]               0
           Conv2d-21           [-1, 256, 56, 56]         590,080
      BatchNorm2d-22           [-1, 256, 56, 56]             512
             ReLU-23           [-1, 256, 56, 56]               0
        MaxPool2d-24           [-1, 256, 28, 28]               0
           Conv2d-25           [-1, 512, 28, 28]       1,180,160
      BatchNorm2d-26           [-1, 512, 28, 28]           1,024
             ReLU-27           [-1, 512, 28, 28]               0
           Conv2d-28           [-1, 512, 28, 28]       2,359,808
      BatchNorm2d-29           [-1, 512, 28, 28]           1,024
             ReLU-30           [-1, 512, 28, 28]               0
           Conv2d-31           [-1, 512, 28, 28]       2,359,808
      BatchNorm2d-32           [-1, 512, 28, 28]           1,024
             ReLU-33           [-1, 512, 28, 28]               0
        MaxPool2d-34           [-1, 512, 14, 14]               0
           Conv2d-35           [-1, 512, 14, 14]       2,359,808
      BatchNorm2d-36           [-1, 512, 14, 14]           1,024
             ReLU-37           [-1, 512, 14, 14]               0
           Conv2d-38           [-1, 512, 14, 14]       2,359,808
      BatchNorm2d-39           [-1, 512, 14, 14]           1,024
             ReLU-40           [-1, 512, 14, 14]               0
           Conv2d-41           [-1, 512, 14, 14]       2,359,808
      BatchNorm2d-42           [-1, 512, 14, 14]           1,024
             ReLU-43           [-1, 512, 14, 14]               0
        MaxPool2d-44             [-1, 512, 7, 7]               0
          Dropout-45                [-1, 25088]               0
           Linear-46                 [-1, 4096]     102,764,544
             ReLU-47                 [-1, 4096]               0
          Dropout-48                 [-1, 4096]               0
           Linear-49                 [-1, 4096]      16,781,312
             ReLU-50                 [-1, 4096]               0
           Linear-51                  [-1, 133]         544,901
================================================================
Total params: 134,813,893
Trainable params: 134,813,893
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.59
Forward/backward pass size (MB): 327.49
Params size (MB): 514.27
Estimated Total Size (MB): 842.36
----------------------------------------------------------------
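Two of the revised summary's figures can be cross-checked with simple arithmetic: the 227x227 input shrinks to 7x7 after five max pools (giving the 25088 classifier input), and the "Params size (MB)" line is just the parameter count times 4 bytes per float32 weight. A sketch:

```python
# Sanity-check the revised VGG16 summary: classifier input width and params size.
size = 227
for _ in range(5):
    size //= 2                 # 227 -> 113 -> 56 -> 28 -> 14 -> 7
flat = 512 * size * size
print(flat)                    # 25088, the in_features of Linear-46

total_params = 134_813_893     # total from the summary above
mb = total_params * 4 / 2**20  # float32: 4 bytes per parameter, 2**20 bytes per MB
print(round(mb, 2))            # 514.27, matching "Params size (MB)"
```

Note that the revision trades the original deep stack of small fully connected layers for the standard VGG head (4096, 4096, classes), which is where almost all of the 134.8M parameters live.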
MobileNetV1 Model
[Figure: MobileNetV1]
In more detail:
MobileNetV1(
  (model): Sequential(
    (0): Sequential(
      (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (1): Sequential(
      (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (2): Sequential(
      (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64, bias=False)
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (3): Sequential(
      (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128, bias=False)
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (4): Sequential(
      (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=128, bias=False)
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (5): Sequential(
      (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, bias=False)
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (6): Sequential(
      (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=256, bias=False)
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (7): Sequential(
      (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512, bias=False)
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (8): Sequential(
      (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512, bias=False)
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (9): Sequential(
      (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512, bias=False)
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (10): Sequential(
      (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512, bias=False)
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (11): Sequential(
      (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512, bias=False)
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (12): Sequential(
      (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=512, bias=False)
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (13): Sequential(
      (0): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1024, bias=False)
      (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (4): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
    )
    (14): AdaptiveAvgPool2d(output_size=1)
  )
  (fc): Linear(in_features=1024, out_features=133, bias=True)
)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 32, 55, 55]             864
       BatchNorm2d-2           [-1, 32, 55, 55]              64
              ReLU-3           [-1, 32, 55, 55]               0
            Conv2d-4           [-1, 32, 55, 55]             288
       BatchNorm2d-5           [-1, 32, 55, 55]              64
              ReLU-6           [-1, 32, 55, 55]               0
            Conv2d-7           [-1, 64, 55, 55]           2,048
       BatchNorm2d-8           [-1, 64, 55, 55]             128
              ReLU-9           [-1, 64, 55, 55]               0
           Conv2d-10           [-1, 64, 28, 28]             576
      BatchNorm2d-11           [-1, 64, 28, 28]             128
             ReLU-12           [-1, 64, 28, 28]               0
           Conv2d-13          [-1, 128, 28, 28]           8,192
      BatchNorm2d-14          [-1, 128, 28, 28]             256
             ReLU-15          [-1, 128, 28, 28]               0
           Conv2d-16          [-1, 128, 28, 28]           1,152
      BatchNorm2d-17          [-1, 128, 28, 28]             256
             ReLU-18          [-1, 128, 28, 28]               0
           Conv2d-19          [-1, 128, 28, 28]          16,384
      BatchNorm2d-20          [-1, 128, 28, 28]             256
             ReLU-21          [-1, 128, 28, 28]               0
           Conv2d-22          [-1, 128, 14, 14]           1,152
      BatchNorm2d-23          [-1, 128, 14, 14]             256
             ReLU-24          [-1, 128, 14, 14]               0
           Conv2d-25          [-1, 256, 14, 14]          32,768
      BatchNorm2d-26          [-1, 256, 14, 14]             512
             ReLU-27          [-1, 256, 14, 14]               0
           Conv2d-28          [-1, 256, 14, 14]           2,304
      BatchNorm2d-29          [-1, 256, 14, 14]             512
             ReLU-30          [-1, 256, 14, 14]               0
           Conv2d-31          [-1, 256, 14, 14]          65,536
      BatchNorm2d-32          [-1, 256, 14, 14]             512
             ReLU-33          [-1, 256, 14, 14]               0
           Conv2d-34            [-1, 256, 7, 7]           2,304
      BatchNorm2d-35            [-1, 256, 7, 7]             512
             ReLU-36            [-1, 256, 7, 7]               0
           Conv2d-37            [-1, 512, 7, 7]         131,072
      BatchNorm2d-38            [-1, 512, 7, 7]           1,024
             ReLU-39            [-1, 512, 7, 7]               0
           Conv2d-40            [-1, 512, 7, 7]           4,608
      BatchNorm2d-41            [-1, 512, 7, 7]           1,024
             ReLU-42            [-1, 512, 7, 7]               0
           Conv2d-43            [-1, 512, 7, 7]         262,144
      BatchNorm2d-44            [-1, 512, 7, 7]           1,024
             ReLU-45            [-1, 512, 7, 7]               0
           Conv2d-46            [-1, 512, 7, 7]           4,608
      BatchNorm2d-47            [-1, 512, 7, 7]           1,024
             ReLU-48            [-1, 512, 7, 7]               0
           Conv2d-49            [-1, 512, 7, 7]         262,144
      BatchNorm2d-50            [-1, 512, 7, 7]           1,024
             ReLU-51            [-1, 512, 7, 7]               0
           Conv2d-52            [-1, 512, 7, 7]           4,608
      BatchNorm2d-53            [-1, 512, 7, 7]           1,024
             ReLU-54            [-1, 512, 7, 7]               0
           Conv2d-55            [-1, 512, 7, 7]         262,144
      BatchNorm2d-56            [-1, 512, 7, 7]           1,024
             ReLU-57            [-1, 512, 7, 7]               0
           Conv2d-58            [-1, 512, 7, 7]           4,608
      BatchNorm2d-59            [-1, 512, 7, 7]           1,024
             ReLU-60            [-1, 512, 7, 7]               0
           Conv2d-61            [-1, 512, 7, 7]         262,144
      BatchNorm2d-62            [-1, 512, 7, 7]           1,024
             ReLU-63            [-1, 512, 7, 7]               0
           Conv2d-64            [-1, 512, 7, 7]           4,608
      BatchNorm2d-65            [-1, 512, 7, 7]           1,024
             ReLU-66            [-1, 512, 7, 7]               0
           Conv2d-67            [-1, 512, 7, 7]         262,144
      BatchNorm2d-68            [-1, 512, 7, 7]           1,024
             ReLU-69            [-1, 512, 7, 7]               0
           Conv2d-70            [-1, 512, 4, 4]           4,608
      BatchNorm2d-71            [-1, 512, 4, 4]           1,024
             ReLU-72            [-1, 512, 4, 4]               0
           Conv2d-73           [-1, 1024, 4, 4]         524,288
      BatchNorm2d-74           [-1, 1024, 4, 4]           2,048
             ReLU-75           [-1, 1024, 4, 4]               0
           Conv2d-76           [-1, 1024, 4, 4]           9,216
      BatchNorm2d-77           [-1, 1024, 4, 4]           2,048
             ReLU-78           [-1, 1024, 4, 4]               0
           Conv2d-79           [-1, 1024, 4, 4]       1,048,576
      BatchNorm2d-80           [-1, 1024, 4, 4]           2,048
             ReLU-81           [-1, 1024, 4, 4]               0
AdaptiveAvgPool2d-82           [-1, 1024, 1, 1]               0
           Linear-83                  [-1, 133]         136,325
================================================================
Total params: 3,343,301
Trainable params: 3,343,301
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.14
Forward/backward pass size (MB): 28.85
Params size (MB): 12.75
Estimated Total Size (MB): 41.74
----------------------------------------------------------------
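The reason MobileNetV1 needs only 3.3M parameters is visible in the table: each block replaces one standard 3x3 convolution with a depthwise 3x3 conv (groups equal to the channel count, one filter per channel) followed by a 1x1 pointwise conv. A sketch of the parameter comparison for the first separable block (bias=False everywhere, as in the dump; BatchNorm parameters omitted):

```python
# Parameter cost of a depthwise-separable block vs a standard 3x3 convolution.
def separable_params(c_in, c_out, k=3):
    depthwise = k * k * c_in   # groups=c_in: a single kxk filter per input channel
    pointwise = c_in * c_out   # 1x1 conv that mixes channels
    return depthwise, pointwise

def standard_params(c_in, c_out, k=3):
    return k * k * c_in * c_out

dw, pw = separable_params(32, 64)
print(dw, pw)                  # 288 and 2048, matching rows Conv2d-4 and Conv2d-7
print(standard_params(32, 64)) # 18432 for a standard conv, roughly 8x more
```

The same split explains every small/large pair in the table (e.g. 4,608 depthwise followed by 262,144 pointwise at 512 channels).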