python-3.x – Keras (TensorFlow, CPU): Sequential model in a training loop eats memory

I am trying to train a Sequential model 1000 times in a loop. On each iteration my program leaks memory, until it runs out and I get an OOM exception.

I have asked a similar question before
(Training multiple Sequential models in a row slows down)

and have seen other similar questions (Keras: Out of memory when doing hyper parameter grid search),

and the solution has always been to add K.clear_session() to the code once you are done with a model. I did that after my previous question, and I am still leaking memory.

Here is the code to reproduce the problem:

    import random
    import time
    import numpy
    from keras.models import Sequential
    from keras.layers import Dense
    from keras import backend as K
    import tracemalloc


    def run():
        tracemalloc.start()
        num_input_nodes = 12
        num_hidden_nodes = 8
        num_output_nodes = 1

        random_numbers = random.sample(range(1000), 50)
        train_x, train_y = create_training_dataset(random_numbers, num_input_nodes)

        for i in range(100):
            snapshot = tracemalloc.take_snapshot()
            for j in range(10):
                start_time = time.time()
                nn = Sequential()
                nn.add(Dense(num_hidden_nodes, input_dim=num_input_nodes, activation='relu'))
                nn.add(Dense(num_output_nodes))
                nn.compile(loss='mean_squared_error', optimizer='adam')
                nn.fit(train_x, train_y, nb_epoch=300, batch_size=2, verbose=0)
                K.clear_session()
                print("Iteration {iter}. Current time {t}. Took {elapsed} seconds".
                      format(iter=i*10 + j + 1, t=time.strftime('%H:%M:%S'), elapsed=int(time.time() - start_time)))

            top_stats = tracemalloc.take_snapshot().compare_to(snapshot, 'lineno')

            print("[ Top 5 differences ]")
            for stat in top_stats[:5]:
                print(stat)


    def create_training_dataset(dataset, input_nodes):
        """
        Outputs a training dataset (train_x, train_y) as numpy arrays.
        Each item in train_x has 'input_nodes' number of items while train_y items are of size 1
        :param dataset: list of ints
        :param input_nodes:
        :return: (numpy array, numpy array), train_x, train_y
        """
        data_x, data_y = [], []
        for i in range(len(dataset) - input_nodes - 1):
            a = dataset[i:(i + input_nodes)]
            data_x.append(a)
            data_y.append(dataset[i + input_nodes])
        return numpy.array(data_x), numpy.array(data_y)


    run()
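As an aside, the sliding-window helper can be checked in isolation (a standalone sketch; the input list(range(20)) is a hypothetical example, not from the original code):

```python
import numpy

def create_training_dataset(dataset, input_nodes):
    # Slide a window of length input_nodes over the list; the value
    # just after each window becomes the regression target.
    data_x, data_y = [], []
    for i in range(len(dataset) - input_nodes - 1):
        data_x.append(dataset[i:(i + input_nodes)])
        data_y.append(dataset[i + input_nodes])
    return numpy.array(data_x), numpy.array(data_y)

train_x, train_y = create_training_dataset(list(range(20)), 12)
print(train_x.shape)  # (7, 12)
print(train_y)        # [12 13 14 15 16 17 18]
```

So with 50 random samples and 12 input nodes, as in the question, fit() is called on 37 tiny training rows per model.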

This is the output I get from the first memory-debugging print:

/tensorflow/python/framework/ops.py:121: size=3485 KiB (+3485 KiB), count=42343 (+42343)
/tensorflow/python/framework/ops.py:1400: size=998 KiB (+998 KiB), count=8413 (+8413)
/tensorflow/python/framework/ops.py:116: size=888 KiB (+888 KiB), count=32468 (+32468)
/tensorflow/python/framework/ops.py:1185: size=795 KiB (+795 KiB), count=3179 (+3179)
/tensorflow/python/framework/ops.py:2354: size=599 KiB (+599 KiB), count=5886 (+5886)

System information:

> python 3.5
> keras(1.2.2)
> tensorflow(1.0.0)

Solution

The memory leak stems from the fact that Keras and TensorFlow use a single "default graph" to store the network structure, and it grows with every iteration of the inner for loop.

Calling K.clear_session() frees some of the (backend) state associated with the default graph between iterations, but an additional call to tf.reset_default_graph() is needed to clear the Python-side state as well.
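Applied to the inner loop of the question, the fix can be sketched as follows (a sketch, assuming Keras 1.x with the TensorFlow 1.x backend to match the question's versions; in TensorFlow 2 this call was moved to tf.compat.v1.reset_default_graph):

```python
import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

def train_throwaway_model(train_x, train_y, num_input_nodes=12,
                          num_hidden_nodes=8, num_output_nodes=1):
    # Build, compile and train a model that is only needed temporarily.
    nn = Sequential()
    nn.add(Dense(num_hidden_nodes, input_dim=num_input_nodes, activation='relu'))
    nn.add(Dense(num_output_nodes))
    nn.compile(loss='mean_squared_error', optimizer='adam')
    nn.fit(train_x, train_y, nb_epoch=300, batch_size=2, verbose=0)

    # Release the backend (session) state tied to the default graph...
    K.clear_session()
    # ...and replace the default graph itself, so the Python-side op
    # objects (the ops.py allocations shown in the tracemalloc output)
    # can actually be garbage-collected.
    tf.reset_default_graph()
```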

Note that there may be a more efficient solution: since nn does not depend on either loop variable, it can be defined outside the loop and the same instance reused inside it. Done that way, there is no need to clear the session or reset the default graph, and performance should improve because you benefit from caching between iterations.
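A sketch of that restructuring (the get_weights/set_weights reset is my addition, using standard Keras 1.x API, for the case where each run must start from fresh weights; the random training data is a hypothetical stand-in with the question's shapes):

```python
import numpy
from keras.models import Sequential
from keras.layers import Dense

# Hypothetical stand-in data with the same shapes as in the question.
train_x = numpy.random.rand(37, 12)
train_y = numpy.random.rand(37)

# Build and compile the model once, outside the training loop.
nn = Sequential()
nn.add(Dense(8, input_dim=12, activation='relu'))
nn.add(Dense(1))
nn.compile(loss='mean_squared_error', optimizer='adam')

# Snapshot the freshly initialized weights so each run can start from
# scratch without rebuilding the graph.
initial_weights = nn.get_weights()

for i in range(1000):
    nn.set_weights(initial_weights)  # reset weights instead of re-creating the model
    nn.fit(train_x, train_y, nb_epoch=300, batch_size=2, verbose=0)
```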
