Multi-GPU training on TensorFlow is slower than a single GPU

I created 3 virtual GPUs (on a machine with 1 physical GPU) and tried to speed up vectorization of images. However, when using the code below with manual device placement from the documentation (here), I get a strange result: running on all GPUs is twice as slow as running on a single GPU. I also checked this code on a machine with 3 physical GPUs (removing the virtual device initialization) and got the same result.

Environment: Python 3.6, Ubuntu 18.04.3, tensorflow-gpu 1.14.0.

Code (this example creates 3 virtual devices, so you can test it on a PC with a single GPU):

import os
import time
import numpy as np
import tensorflow as tf
from PIL import Image  # needed for Image.open below

start = time.time()

def load_graph(frozen_graph_filename):
    # We load the protobuf file from the disk and parse it to retrieve the
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename,"rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Then, we import the graph_def into a new Graph and return it
    with tf.Graph().as_default() as graph:
        # The name var will prefix every op/nodes in your graph
        # Since we load everything in a new graph,this is not needed
        tf.import_graph_def(graph_def,name="")
    return graph

path_to_graph = '/imagenet/'  # Path to imagenet folder where graph file is placed
GRAPH = load_graph(os.path.join(path_to_graph,'classify_image_graph_def.pb'))

# Create Session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9
config.gpu_options.allow_growth = True
session = tf.Session(graph=GRAPH,config=config)

output_dir = '/vectors/'  # where to save vectors extracted from images

image_list = ['1.jpg','2.jpg','3.jpg']  # list of images to vectorize (tested on 100 and 1000 examples)

# Single GPU vectorization
for image_index,image in enumerate(image_list):
    with Image.open(image) as f:
        image_data = f.convert('RGB')
        feature_tensor = session.graph.get_tensor_by_name('pool_3:0')
        feature_vector = session.run(feature_tensor,{'DecodeJpeg:0': image_data})
        feature_vector = np.squeeze(feature_vector)
        outfile_name = os.path.basename(image) + ".vc"
        out_path = os.path.join(output_dir,outfile_name)
        # Save vector
        np.savetxt(out_path,feature_vector,delimiter=',')

print(f"Single GPU: {time.time() - start}")
start = time.time()

print("Start calculation on multiple GPU")
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  # Create 3 virtual GPUs with 1GB memory each
  try:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
         tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
         tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print(len(gpus),"Physical GPU,",len(logical_gpus),"Logical GPUs")
  except RuntimeError as e:
    # Virtual devices must be set before GPUs have been initialized
    print(e)

print("Create prepared ops")
start1 = time.time()
gpus = logical_gpus  # comment this line to use physical GPU devices for calculations

# image_list is defined above, before the single-GPU run
# Assign chunk of list to each GPU
# image_list1,image_list2,image_list3 = image_list[:len(image_list)//3],\
#                                         image_list[len(image_list)//3:2*len(image_list)//3],\
#                                         image_list[2*len(image_list)//3:]
selected_list = image_list  # comment this line out if you want to assign a chunk of the list to each GPU manually
output_vectors = []
if gpus:
  # Replicate your computation on multiple GPUs
  feature_vectors = []
  for gpu in gpus:  # iterating over virtual GPU devices, not physical ones
    with tf.device(gpu.name):
      print(f"Assign list of images to {gpu.name.split(':',4)[-1]}")
      # Assigning a chunk of the image list to each GPU took about the same time as a single GPU
      # if gpu.name.split(':',4)[-1] == "GPU:0":
      #     selected_list = image_list1
      # if gpu.name.split(':',4)[-1] == "GPU:1":
      #     selected_list = image_list2
      # if gpu.name.split(':',4)[-1] == "GPU:2":
      #     selected_list = image_list3
      for image_index,image in enumerate(selected_list):
          with Image.open(image) as f:
            image_data = f.convert('RGB')
            feature_tensor = session.graph.get_tensor_by_name('pool_3:0')
            feature_vector = session.run(feature_tensor,{'DecodeJpeg:0': image_data})
            feature_vectors.append(feature_vector)

print("All images has been assigned to GPU's")
print(f"Time spend on prep ops: {time.time() - start1}")
print("Start calculation on multiple GPU")
start1 = time.time()
for image_index,image in enumerate(image_list):
  feature_vector = np.squeeze(feature_vectors[image_index])
  outfile_name = os.path.basename(image) + ".vc"
  out_path = os.path.join(output_dir,outfile_name)
  # Save vector
  np.savetxt(out_path,feature_vector,delimiter=',')

# Close session
session.close()
print(f"Calc on GPU's spend: {time.time() - start1}")
print(f"All time,spend on multiple GPU: {time.time() - start}")

Sample output (with a list of 100 images):

1 Physical GPU, 3 Logical GPUs
Single GPU: 18.76301646232605
Start calculation on multiple GPUs
Create prepared ops
Assign list of images to GPU:0
Assign list of images to GPU:1
Assign list of images to GPU:2
All images have been assigned to the GPUs
Time spent on prep ops: 18.263537883758545
Start calculation on multiple GPUs
Calc on GPUs took: 11.697082042694092
Total time on multiple GPUs: 29.960679531097412

What I tried: splitting the list of images into 3 chunks and assigning one chunk to each GPU (see the commented-out lines in the code). This reduces the multi-GPU time to 17 seconds, slightly faster (~5%) than the 18-second single-GPU run.

Expected result: the multi-GPU version is faster than the single-GPU version (at least a 1.5x speedup).

My guess at why this happens: I have structured the computation the wrong way.

zhangyukexin answered: Multi-GPU training on TensorFlow is slower than a single GPU

Two basic misunderstandings are causing your trouble:

  1. with tf.device(...): applies to the graph nodes created inside its scope, not to Session.run calls.

  2. Session.run is a blocking call; separate calls do not run in parallel. TensorFlow can only parallelize the contents of a single Session.run. A minimal sketch demonstrating both points follows.
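To make this concrete, here is a minimal TF 1.x sketch (the matmul workload and device strings are illustrative assumptions, not from the question): the ops are pinned to devices because they are created inside tf.device, and one blocking Session.run dispatches all three in parallel:

import tensorflow as tf

g = tf.Graph()
with g.as_default():
    results = []
    for i in range(3):
        # Placement happens here, at graph-construction time
        with tf.device(f'/GPU:{i}'):  # assumes 3 (logical) GPUs are visible
            a = tf.random.uniform((1000, 1000))
            results.append(tf.matmul(a, a))

with tf.Session(graph=g) as sess:
    # One blocking call; the three independent matmuls
    # can execute in parallel, one per device
    outs = sess.run(results)
    print([o.shape for o in outs])

Wrapping each sess.run in tf.device instead would change nothing, because placement is already fixed once the graph is built.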

Modern TF can make this much easier.

Mainly, you can stop using tf.Session and tf.Graph. Use @tf.function instead; I believe this basic structure will work:

@tf.function
def my_function(inputs,gpus,model):
  results = []
  for input,gpu in zip(inputs,gpus):
    with tf.device(gpu):
      results.append(model(input))
  return results
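For example, a hedged usage sketch (TF 2.x assumed; the model and batch shapes are placeholders, not from the answer): split a batch across the logical GPUs and hand the pieces to my_function:

import tensorflow as tf

gpus = [g.name for g in tf.config.experimental.list_logical_devices('GPU')]
model = tf.keras.applications.MobileNetV2(weights=None)  # any per-image model

batch = tf.random.uniform((12, 224, 224, 3))
# One shard per GPU; assumes the batch divides evenly across devices
inputs = tf.split(batch, num_or_size_splits=len(gpus))

results = my_function(inputs, gpus, model)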

But you will want to try a more realistic test: with only 3 images you cannot measure real performance at all.

Also note:

  1. The tf.distribute.Strategy classes can help simplify some of this by separating the device specification from the @tf.function being run: strategy.experimental_run_v2(my_function, args=(dataset_inputs,))
  2. Using a tf.data.Dataset input pipeline will help you overlap loading/preprocessing with model execution. A sketch combining both notes follows.
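A minimal sketch of both ideas together, assuming TF 2.0-era APIs (experimental_run_v2 was renamed to strategy.run in later releases; the model, preprocessing, and file list are placeholders):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # replicates across visible GPUs

with strategy.scope():
    model = tf.keras.applications.MobileNetV2(weights=None)

@tf.function
def vectorize_step(images):
    return model(images, training=False)

def load(path):
    # Decoding and resizing run in the CPU input pipeline,
    # overlapping with GPU execution of previous batches
    img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    return tf.image.resize(img, (224, 224)) / 255.0

paths = ['1.jpg', '2.jpg', '3.jpg']  # placeholder list, as in the question
dataset = (tf.data.Dataset.from_tensor_slices(paths)
           .map(load, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(8)
           .prefetch(tf.data.experimental.AUTOTUNE))

for batch in strategy.experimental_distribute_dataset(dataset):
    per_replica = strategy.experimental_run_v2(vectorize_step, args=(batch,))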

But if you really intend to do this with tf.Graph and tf.Session, I think you basically need to reorganize your code from this:

# Your code:
# Builds a graph
graph = build_graph()

for gpu in gpus:
  with tf.device(gpu):
    # Calls session.run separately inside each device scope;
    # the scope has no effect on the already-built graph
    session.run(...)

To this:

g = tf.Graph()
with g.as_default():
  results = []
  for gpu in gpus:
    # Build the graph nodes on each device
    input = iterator.get_next()
    with tf.device(gpu):
      results.append(my_function(input))

# Use a single `Session.run` call
np_result = session.run(results,feed_dict={inputs: my_inputs})
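Applied to your frozen Inception graph, that reorganization might look like the following sketch (untested, assuming TF 1.14; the pool_3 and DecodeJpeg tensor names come from your code, while the per-tower import via input_map/return_elements is my suggestion): import the graph once per device, each copy with its own input placeholder, then fetch every pool_3 tensor in a single Session.run:

import numpy as np
import tensorflow as tf
from PIL import Image

graph_def = tf.GraphDef()
with tf.gfile.GFile('/imagenet/classify_image_graph_def.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

gpus = tf.config.experimental.list_logical_devices('GPU')

g = tf.Graph()
with g.as_default():
    inputs, features = [], []
    for i, gpu in enumerate(gpus):
        with tf.device(gpu.name):
            # One copy of the network per device, each with its own input
            image_in = tf.placeholder(tf.uint8, shape=(None, None, 3))
            feature, = tf.import_graph_def(
                graph_def,
                input_map={'DecodeJpeg:0': image_in},
                return_elements=['pool_3:0'],
                name=f'tower_{i}')
            inputs.append(image_in)
            features.append(feature)

with tf.Session(graph=g) as sess:
    paths = ['1.jpg', '2.jpg', '3.jpg']  # one image per GPU per step
    feed = {}
    for image_in, path in zip(inputs, paths):
        with Image.open(path) as img:
            feed[image_in] = np.asarray(img.convert('RGB'))
    # A single blocking run executes all towers, one per device
    vectors = sess.run(features, feed_dict=feed)
    print([np.squeeze(v).shape for v in vectors])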
