Concatenating my PyTorch models raises an error

I am trying to merge two models: an image model (ResNet) and a model for categorical and numerical tabular data. I bring in the categorical and numerical columns, which I have extracted from a dataframe and converted to tensors. For the image side I attach a modified ResNet-50 whose final output layer produces 2 classes instead of 1000.
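The image branch is set up roughly like this (a minimal sketch; the pretrained-weight, freezing and transform details are omitted):

import torch.nn as nn
from torchvision import models

# ResNet-50 with its final fully connected layer replaced so that it
# outputs 2 classes instead of the original 1000
cnnmodel = models.resnet50(pretrained=True)
cnnmodel.fc = nn.Linear(cnnmodel.fc.in_features, 2)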

I defined a model for the categorical and numerical data and tried to connect it to the ResNet. I get the following error:

NotImplementedError                       Traceback (most recent call last)
in
     20
     21
---> 22 y_pred = combined_model(image, categorical_data, numerical_data)
     23 single_loss = loss_function(y_pred, label)
     24 aggregated_losses.append(single_loss)

C:\programdata\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    539             result = self._slow_forward(*input, **kwargs)
    540         else:
--> 541             result = self.forward(*input, **kwargs)
    542         for hook in self._forward_hooks.values():
    543             hook_result = hook(self, input, result)

C:\programdata\Anaconda3\lib\site-packages\torch\nn\modules\module.py in forward(self, *input)
     95             registered hooks while the latter silently ignores them.
     96         """
---> 97         raise NotImplementedError
     98
     99     def register_buffer(self, name, tensor):

NotImplementedError:

Here is how I define the model, the loss, and the training loop:

class Image_Embedd(nn.Module):
    def __init__(self,model,embedding_size,num_numerical_cols,output_size,layers,p = 0.4):
        '''
        Args
        ---------------------------
        model: image model (the modified ResNet-50) whose output is combined with the tabular branch
        embedding_size: Contains the embedding size for the categorical columns
        num_numerical_cols: Stores the total number of numerical columns
        output_size: The size of the output layer or the number of possible outputs.
        layers: List which contains number of neurons for all the layers.
        p: Dropout with the default value of 0.4
        '''
        super().__init__()

        self.model = model
        #list of ModuleList objects for all categorical columns
        self.all_embeddings = nn.ModuleList([nn.Embedding(ni,nf) for ni,nf in embedding_size])

        #drop out value for all layers
        self.embedding_dropout = nn.Dropout(p)

        #list of 1 dimension batch normalization objects for all numerical columns
        self.batch_norm_num = nn.BatchNorm1d(num_numerical_cols)

        #the number of categorical and numerical columns are added together and stored in input_size
        all_layers = []
        num_categorical_cols = sum((nf for ni,nf in embedding_size))
        input_size = num_categorical_cols + num_numerical_cols

        #loop iterates to add corresponding layers to all_layers list above
        for i in layers:
            all_layers.append(nn.Linear(input_size,i))
            all_layers.append(nn.ReLU(inplace=True))
            all_layers.append(nn.BatchNorm1d(i))
            all_layers.append(nn.Dropout(p))
            input_size = i

        #append output layer to list of layers    
        all_layers.append(nn.Linear(layers[-1],output_size))

        #pass all layers to the sequential class
        self.layers = nn.Sequential(*all_layers)


        #define the forward method

    def forward(self,x_categorical,x_numerical):
        #this starts the embedding of categorical columns
        embeddings = []
        for i,e in enumerate(self.all_embeddings):
            embeddings.append(e(x_categorical[:,i]))
        x = torch.cat(embeddings,1)
        x = self.embedding_dropout(x)

        #normalizing numerical columns
        x_numerical = self.batch_norm_num(x_numerical)

        #concatenating numerical and categorical columns
        x = torch.cat([x,x_numerical],1)
        x = self.layers(x)

        x2 = model(x2)
        x_final = torch.concat(x,x2)
        x_final = F.softmax(x_final,dim = 1)
        return x

Instantiating the model

combined_model = Image_Embedd(model=cnnmodel, embedding_size=categorical_embedding_sizes, num_numerical_cols=numerical_data.shape[1], output_size=2, layers=[256, 128, 64, 32, 2], p=0.4)

Loss and optimizer

torch.manual_seed(101)
criterion = nn.CrossEntropyLoss().cuda()
optimizer = torch.optim.Adam(combined_model.parameters(), lr=0.001)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
combined_model = combined_model.cuda()

Training loop

epochs = 1
aggregated_losses = []

max_trn_batch = 25

for i in range(epochs):
    for b, (image, label, policy) in enumerate(train_loader):
        image = image.cuda()
        label = label.cuda()
        categorical_data = categorical_data.cuda()
        numerical_data = numerical_data.cuda()
        #print(image, numerical_data)

        #count batches
        b += 1

        #throttle the batches
        if b == max_trn_batch:
            break

        y_pred = combined_model(image, categorical_data, numerical_data)
        single_loss = loss_function(y_pred, label)
        aggregated_losses.append(single_loss)

        # statistics
        running_loss += single_loss.item() * image.size(0)
        running_corrects += torch.sum(y_pred == label.data)

        print(f'train-epoch: {i}, train-batch: {b}')

        optimizer.zero_grad()
        single_loss.backward()
        optimizer.step()
I am not sure where I went wrong in this process. The goal is to use the categorical embeddings and the numerical data alongside the image to help with the image classification.
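To make the intent concrete, here is a tiny standalone sketch of the concatenation I am after (dummy tensors with made-up sizes, just to show the shapes; not the actual model code):

import torch

batch = 4
tabular_out = torch.randn(batch, 2)   # output of the tabular branch (self.layers)
image_out = torch.randn(batch, 2)     # output of the modified ResNet-50
# join the two feature vectors along the feature dimension
combined = torch.cat([tabular_out, image_out], dim=1)
print(combined.shape)  # torch.Size([4, 4])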
