How do I get correct output predictions from unet_learner (fastai)?

I'm working on an image segmentation project using the fastai library (specifically unet_learner). I've already trained the model; here is the training-phase code:

#codes = np.loadtxt('codes.txt', dtype=str)
codes = np.array(['bg', 'edge'], dtype='<U4')  # bg = background
get_y_fn = lambda x: path_lbl/f'{x.stem}{x.suffix}'

# fastai data block
data = (SegmentationItemList.from_folder(path_img)
    .split_by_rand_pct()
    .label_from_func(get_y_fn, classes=codes)
    #.add_test_folder()
    #.transform(get_transforms(), tfm_y=True, size=384)
    .databunch(bs=2, path=dataset)  # bs = mini-batch size
    .normalize(imagenet_stats))

learn = unet_learner(data, models.resnet34, wd=1e-2)

learn.lr_find()        # find a learning rate
learn.recorder.plot()  # plot the learning-rate graph

lr = 1e-02  # pick a lr
learn.fit_one_cycle(3, slice(lr), pct_start=0.3)  # train model ---- epochs=3

learn.unfreeze()  # unfreeze all layers

# find and plot lr again
learn.lr_find()
learn.recorder.plot()

learn.fit_one_cycle(10, slice(lr/400, lr/4), pct_start=0.3)

learn.save('model-stage-1')  # save model
learn.load('model-stage-1');

learn.export()

My problem is that when I try to make predictions with the trained model, the output is always a black image. Here is the prediction-phase code:

img = open_image('/content/generated_samples_masks/545.png')
prediction = learn.predict(img)
prediction[0].show(figsize=(8,8))


Any ideas on how to solve this? Thanks.

Answer from chenxiaodan19881018:

I think the prediction is actually fine. Is this the kind of result you were expecting?

[image: prediction result]

This result was produced from the prediction image you posted.

To check how well the model is doing, try the following:

interp = SegmentationInterpretation.from_learner(learn)
mean_cm, single_img_cm = interp._generate_confusion()
df = interp._plot_intersect_cm(mean_cm, "Mean of Ratio of Intersection given True Label")

i = 0  # some image index
df = interp._plot_intersect_cm(single_img_cm[i], f"Ratio of Intersection given True Label, Image:{i}")
interp.show_xyz(i)

Based on fast.ai docs

As for your prediction result, it is an image whose pixel values are your class indices. If you read the (r, g, b) values from that image, the background is (r, g, b) == (0, 0, 0) and the edge is (r, g, b) == (1, 1, 1). If you had more classes, the next one would be (r, g, b) == (2, 2, 2), and so on. That is why the image looks black: the values are tiny compared to the 0-255 display range.
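For example, if you just want to confirm which values the prediction actually contains, a quick check on the saved mask works. A minimal sketch: "yourPredictionHere.png" is only a placeholder for your saved prediction, and the array may be single-channel or RGB depending on how it was saved, but the values are class indices either way.

import numpy as np
from PIL import Image

# Load the saved prediction mask; its values are class indices, not colors
mask = np.array(Image.open("yourPredictionHere.png"))

# For a bg/edge model you should see something like [0 1]
print(np.unique(mask))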

So you just need to colorize the prediction result. I did it with OpenCV, like this:

import cv2

frame = cv2.imread("yourPredictionHere.png", 1)
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
for y in range(384):        # height, based on the size of your image
    for x in range(384):    # width, based on the size of your image
        r, g, b = frame[y, x]
        if (r, g, b) == (0, 0, 0):      # background
            frame[y, x] = (0, 0, 0)
        elif (r, g, b) == (1, 1, 1):    # edges
            frame[y, x] = (85, 85, 255)

cv2.imwrite("result.png", cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
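As a side note, the per-pixel loop gets slow on larger images; a vectorized NumPy lookup does the same colorization in one shot. This is only a sketch under the assumption that the saved mask holds class indices 0 and 1 (the palette colors are arbitrary examples):

import cv2
import numpy as np

# Read the mask as a single channel of class indices (0 = background, 1 = edges)
mask = cv2.imread("yourPredictionHere.png", 0)

# One BGR color per class index; extend this table if you add more classes
palette = np.array([[0, 0, 0],        # background -> black
                    [85, 85, 255]],   # edges -> example color (BGR)
                   dtype=np.uint8)

colored = palette[mask]               # fancy indexing colors every pixel at once
cv2.imwrite("result_vectorized.png", colored)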

Best regards!

Another answer:

To see the unet_learner prediction overlaid on the original image, you can do the following:

img = open_image("your_test_image.png")
prediction = learn.predict(img)
img.show(y=prediction[0])
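If you want the raw mask rather than the overlay: in fastai v1, if I remember right, the first element returned by learn.predict is an ImageSegment whose .data attribute holds the class-index tensor, so you can inspect or save it directly. A rough sketch, not a guaranteed recipe for your exact setup:

import numpy as np
from PIL import Image

prediction = learn.predict(img)
mask = prediction[0].data              # class-index tensor, shape (1, H, W)
print(mask.unique())                   # e.g. tensor([0, 1]) for bg / edge

# Save the indices as an 8-bit PNG for post-processing elsewhere
Image.fromarray(mask.squeeze().numpy().astype(np.uint8)).save("mask.png")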

Here is an example from the fastai docs: https://docs.fast.ai/tutorial.inference.html#A-segmentation-example
