nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
in_channels: number of input channels
out_channels: number of output channels
kernel_size: size of the convolution kernel
stride: stride of the convolution, default 1
padding: number of layers of zeros added to each side of the input, default 0
dilation: spacing between kernel elements, default 1
groups: number of groups the input channels are split into; each group is convolved
with its own out_channels/groups filters. Both in_channels and out_channels must be
divisible by groups, default 1
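The effect of groups on the weight tensor can be checked directly. The sketch below (parameter values are illustrative, not from the original) shows that with groups=3 each input channel gets its own out_channels/groups filters, so the weight's second dimension shrinks to in_channels // groups:

```python
import torch
import torch.nn as nn

# groups=3 splits the 3 input channels into 3 groups; each group is
# convolved with 6/3 = 2 filters of its own (a depthwise-style conv)
conv = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=3, groups=3)

# Weight shape is (out_channels, in_channels // groups, kH, kW)
print(conv.weight.shape)  # torch.Size([6, 1, 3, 3])
```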
The output size is:
H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)
(and likewise for W_out).
import torch
import torch.nn as nn

# 3 input channels, 4 output channels, 2x2 kernel, no padding
a = nn.Conv2d(in_channels=3, out_channels=4, kernel_size=2, padding=0)
print(a)
print(list(a.parameters())[0].shape)  # weight shape: (out, in, kH, kW)
X = torch.rand((1, 3, 3, 3))
print(X)
print(a(X))
The output is:
Conv2d(3, 4, kernel_size=(2, 2), stride=(1, 1))
torch.Size([4, 3, 2, 2])
tensor([[[[0.2443, 0.9249, 0.4239],
[0.3013, 0.4009, 0.5954],
[0.6660, 0.8274, 0.7630]],
[[0.3257, 0.9352, 0.8601],
[0.2355, 0.1280, 0.5163],
[0.0043, 0.2029, 0.5315]],
[[0.7454, 0.9867, 0.6015],
[0.1719, 0.9849, 0.5443],
[0.3487, 0.0569, 0.1718]]]])
tensor([[[[-0.5957, -0.3891],
[-0.6881, -0.7487]],
[[ 0.5373, 0.6412],
[-0.0632, 0.3590]],
[[ 0.6515, 0.5075],
[ 0.5245, 0.6769]],
[[ 0.2223, 0.4225],
[ 0.2579, 0.1483]]]], grad_fn=<ConvolutionBackward0>)
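The shape of that result matches the formula above: with H_in = 3, kernel_size = 2, stride = 1, padding = 0 and dilation = 1 we get floor((3 + 0 - 1*(2-1) - 1)/1 + 1) = 2, so the output is 4 channels of 2x2. A small check (helper name `conv2d_out_size` is ours, not part of PyTorch):

```python
import math
import torch
import torch.nn as nn

def conv2d_out_size(h_in, kernel_size, stride=1, padding=0, dilation=1):
    # Output-size formula from the PyTorch Conv2d docs
    return math.floor((h_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

conv = nn.Conv2d(in_channels=3, out_channels=4, kernel_size=2, padding=0)
x = torch.rand(1, 3, 3, 3)
y = conv(x)

print(conv2d_out_size(3, kernel_size=2))  # 2
print(y.shape)                            # torch.Size([1, 4, 2, 2])
```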