This is the API I'm using, from the official Keras multi-GPU guide.
I'm using MirroredStrategy:
cross_device_ops = tf.distribute.HierarchicalCopyAllReduce()
strategy = tf.distribute.MirroredStrategy(
    ["device:GPU:%d" % i for i in range(2)],  # same as ["/gpu:0", "/gpu:1"]
    cross_device_ops=cross_device_ops)
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))  # out: 2

# Open a strategy scope: model creation and compile must happen inside it.
with strategy.scope():
    model = UNet()
    model.summary()
    # learning_rate replaces the deprecated lr argument
    model.compile(optimizer=Adam(learning_rate=0.0001),
                  loss=dice_coef_loss,
                  metrics=[dice_coef])
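For reference, here is a minimal self-contained sketch of the same pattern. `UNet()`, `dice_coef_loss`, and `dice_coef` are the poster's own code, so a tiny stand-in model and a built-in loss are used instead; with no device list, `MirroredStrategy` picks up every visible GPU, or falls back to a single CPU replica on a machine without GPUs:

```python
import tensorflow as tf

# Hypothetical stand-in for the poster's UNet().
def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# No device list: uses all visible GPUs, or one CPU replica if none exist.
strategy = tf.distribute.MirroredStrategy()
print("Number of devices:", strategy.num_replicas_in_sync)

# Model construction and compile() must both happen inside the scope.
with strategy.scope():
    model = build_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy")

# fit() splits each global batch across replicas automatically:
# per-replica batch = batch_size // num_replicas_in_sync.
x = tf.random.normal((32, 8))
y = tf.random.uniform((32, 1))
model.fit(x, y, batch_size=16, epochs=1, verbose=0)
```

Note that the batch size passed to `fit()` is the global batch size; each of the two GPUs receives half of it per step, which is why people often scale the batch size up with the replica count.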