"List index out of range" error in Python when predicting with XGBoost

I ran into the error in the title while predicting the target variable. What does this error mean, and how can I fix it?

import pandas as pd
import xgboost as xgb
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler

data_set_train=pd.read_csv("train.csv")
data_set_testing=pd.read_csv("test.csv")

target_train_Y = data_set_train['target'].values
train_X = data_set_train.drop(['target','ID_code'],axis=1)
test_X = data_set_testing.drop(['ID_code'],axis=1)

sc=StandardScaler()
train=sc.fit_transform(train_X)
test=sc.transform(test_X)

xgb_prediction = []
K = 5
kf = KFold(n_splits = K,random_state = 3228,shuffle = True)
for train_index,test_index in kf.split(train):
    train_X,valid_X = train[train_index],train[test_index]
    train_y,valid_y = target_train_Y[train_index],target_train_Y[test_index]
    xgb_params = {'max_depth': 8,'objective': 'binary:logistic','eval_metric':'auc'}


d_train = xgb.DMatrix(train_X,train_y)
d_valid = xgb.DMatrix(valid_X,valid_y)
d_test = xgb.DMatrix(test)
watchlist = [(d_train,'train'),(d_valid,'valid')]
model = xgb.train(xgb_params,d_train,500,watchlist,early_stopping_rounds=20) 

xgb_pred = model.predict(d_test)
xgb_prediction.append(list(xgb_pred))

fig,ax = plt.subplots(1,1,figsize=(10,12))
xgb.plot_importance(model,max_num_features=30,ax=ax,importance_type="cover",xlabel="Cover")
print("---- drawing features by importance in descending order---")
plt.show()

preds=[]
for i in range(len(xgb_prediction[0])):
    sum=0
    for j in range(K):
       sum+=xgb_prediction[j][i]
    preds.append(sum / K)

Error:

IndexError                                Traceback (most recent call last)
<ipython-input-17-70a24f3309e5> in <module>
      3     sum=0
      4     for j in range(K):
----> 5        sum+=xgb_prediction[j][i]
      6     preds.append(sum / K)

IndexError: list index out of range
ci123042908 replied:

No good solution for now. If you have a good solution, please email: iooj@foxmail.com
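
In Python, IndexError: list index out of range means the code asked for a position that does not exist in a list. Here the traceback points at xgb_prediction[j]: the averaging loop assumes xgb_prediction holds K = 5 per-fold prediction lists, but as posted, the xgb.DMatrix / xgb.train / model.predict / append block is not indented under the KFold for loop, so it runs only once (with the split from the last fold) and xgb_prediction contains a single element; xgb_prediction[1] then raises the error. Below is a minimal sketch of the re-indented loop, reusing the file names, column names, and parameters from the question; it reflects an assumption about the intended per-fold logic rather than a confirmed fix.

# Sketch of the intended K-fold training loop (assumes the same train.csv /
# test.csv layout as in the question, with 'target' and 'ID_code' columns).
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler

data_set_train = pd.read_csv("train.csv")
data_set_testing = pd.read_csv("test.csv")

target_train_Y = data_set_train['target'].values
train_X = data_set_train.drop(['target', 'ID_code'], axis=1)
test_X = data_set_testing.drop(['ID_code'], axis=1)

# Scale features: fit on the training data, then reuse the scaler on test.
sc = StandardScaler()
train = sc.fit_transform(train_X)
test = sc.transform(test_X)

xgb_params = {'max_depth': 8, 'objective': 'binary:logistic', 'eval_metric': 'auc'}
d_test = xgb.DMatrix(test)

xgb_prediction = []
K = 5
kf = KFold(n_splits=K, random_state=3228, shuffle=True)
for train_index, valid_index in kf.split(train):
    fold_train_X, fold_valid_X = train[train_index], train[valid_index]
    fold_train_y, fold_valid_y = target_train_Y[train_index], target_train_Y[valid_index]

    d_train = xgb.DMatrix(fold_train_X, fold_train_y)
    d_valid = xgb.DMatrix(fold_valid_X, fold_valid_y)
    watchlist = [(d_train, 'train'), (d_valid, 'valid')]

    # Train, predict and append inside the loop, once per fold,
    # so xgb_prediction ends up with exactly K entries.
    model = xgb.train(xgb_params, d_train, 500, watchlist, early_stopping_rounds=20)
    xgb_prediction.append(model.predict(d_test))

# Element-wise average of the K per-fold test predictions,
# equivalent to the original sum/K loop but without manual indexing.
preds = np.mean(xgb_prediction, axis=0)

With the training block inside the loop, xgb_prediction has exactly K entries, so both the original sum/K averaging loop and the np.mean version above run without an IndexError.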
