LSTM input_shape in Keras

I am implementing an LSTM network in Keras (working in Spyder), following a Udemy tutorial. In the tutorial video the code runs fine, but when I run it myself, with exactly the same steps as in the video, I get the following error:


Error when checking target: expected dense_26 to have 3 dimensions, but got array with shape (1198, 1)

Does it have something to do with the Spyder version? x_train has shape (1198, 60, 1) and y_train has shape (1198,).

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

#Importing the training set
dataset_train = pd.read_csv('E:\\downloads_1\\Recurrent_Neural_Networks\\Google_Stock_Price_Train.csv')
training_set = dataset_train.iloc[:,1:2].values
#Feature Scaling 
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range=(0,1))
training_set_scaled = sc.fit_transform(training_set)
#Creating a Data Structure with 60 timesteps and 1 output
x_train = []
y_train = []
for i in range(60,1258):
    x_train.append(training_set_scaled[i-60:i,0])
    y_train.append(training_set_scaled[i,0])
x_train,y_train = np.array(x_train),np.array(y_train)

#Reshaping
x_train = np.reshape(x_train,(x_train.shape[0],x_train.shape[1],1))



#Part 2 - Building the RNN

#importing the keras libraries and packages
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM


#initializing the RNN
regressor = Sequential()

#Adding the first layer LSTM and some Dropout Regularisations
regressor.add(LSTM(units = 50,return_sequences= True,input_shape = (x_train.shape[1],1)))
regressor.add(Dropout(0.2))

#Adding the second layer LSTM and some Dropout Regularisations
regressor.add(LSTM(units = 50,return_sequences= True))
regressor.add(Dropout(0.2))

#Adding the third layer LSTM and some Dropout Regularisations
regressor.add(LSTM(units = 50,return_sequences= True))
regressor.add(Dropout(0.2))

#Adding the fourth layer LSTM and some Dropout Regularisations
regressor.add(LSTM(units = 50,return_sequences= True))
regressor.add(Dropout(0.2))

#Adding the output layer
regressor.add(Dense(units = 1))

#compiling the RNN
regressor.compile(optimizer= 'adam',loss = 'mean_squared_error')

#Fitting the RNN to the training set
regressor.fit(x_train,y_train,epochs = 100,batch_size = 32)
gxl6330395 answered:

Your LSTM layers return sequences (i.e. return_sequences=True). Your last LSTM layer therefore outputs a 3-D tensor of shape (batch_size, timesteps, 50), and the Dense layer after it produces a 3-D prediction array of shape (batch_size, timesteps, 1).

But it looks like you are feeding a 2-D array as the target (i.e. (1198, 1)). For the model to be able to use those labels, they would have to be a (1198, 60, 1) array.
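You can see where the 3-D output comes from by printing the layer output shapes. A minimal sketch with the same layer sizes as your model (the name check is just for illustration, and the dense_* name in the summary will differ on your machine):

#Minimal sketch: stacked LSTMs with return_sequences=True followed by Dense(1)
from keras.models import Sequential
from keras.layers import LSTM, Dense

check = Sequential()
check.add(LSTM(50, return_sequences = True, input_shape = (60, 1)))
check.add(LSTM(50, return_sequences = True))
check.add(Dense(1))
#The summary shows the final Dense output shape as (None, 60, 1),
#which is why Keras expects 3-D targets instead of your (1198, 1) array.
check.summary()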

Which of the two is right really depends on your problem. The other option is that, if each data point actually has only one output value, you use return_sequences=False on the last LSTM layer:

#initializing the RNN
regressor = Sequential()

#Adding the first layer LSTM and some Dropout Regularisations
regressor.add(LSTM(units = 50,return_sequences= True,input_shape = (x_train.shape[1],1)))
regressor.add(Dropout(0.2))

#Adding the second layer LSTM and some Dropout Regularisations
regressor.add(LSTM(units = 50,return_sequences= True))
regressor.add(Dropout(0.2))

#Adding the third layer LSTM and some Dropout Regularisations
regressor.add(LSTM(units = 50,return_sequences= True))
regressor.add(Dropout(0.2))

#Adding the fourth layer LSTM and some Dropout Regularisations
regressor.add(LSTM(units = 50,return_sequences= False))
regressor.add(Dropout(0.2))
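With that change the last LSTM returns a 2-D (batch_size, 50) tensor, so the Dense output layer produces (batch_size, 1) predictions and your (1198,) y_train fits without any reshaping. The rest of the model stays exactly as in your code; sketched below for completeness:

#Adding the output layer (now applied to a 2-D tensor)
regressor.add(Dense(units = 1))

#compiling the RNN
regressor.compile(optimizer = 'adam',loss = 'mean_squared_error')

#Fitting the RNN to the training set
regressor.fit(x_train,y_train,epochs = 100,batch_size = 32)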

PS: Also note that your Dense layer is effectively applied to a 3-D input. If that is what you intend, that is fine. However, the typical way to use a Dense layer on a sequence output is to wrap it in a TimeDistributed layer, like this:

regressor.add(TimeDistributed(Dense(units = 1)))
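For completeness, a minimal sketch of that sequence-to-sequence variant (the name seq_model is just for illustration). Note that the targets for such a model must themselves be 3-D, shaped (samples, timesteps, 1), which the stock-price setup above does not provide:

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, TimeDistributed

seq_model = Sequential()
seq_model.add(LSTM(units = 50, return_sequences = True, input_shape = (60, 1)))
seq_model.add(Dropout(0.2))
#one Dense prediction per timestep -> output shape (batch_size, 60, 1)
seq_model.add(TimeDistributed(Dense(units = 1)))
seq_model.compile(optimizer = 'adam', loss = 'mean_squared_error')
#targets for this model must have shape (samples, 60, 1)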

