I am trying to compute, for a stereo camera setup, the distance of each of two corresponding pixels to its respective epipolar line. The code I implemented for this purpose looks as follows:
#include <cmath>
#include <iostream>
#include <opencv2/calib3d/calib3d.hpp>
float calculateDistanceToEpiLinesum(const cv::Mat2f& left_candidate,
                                    const cv::Mat2f& right_candidate,
                                    const cv::Matx33f& fundamental_mat) {
    // Epipolar line in the right image induced by the left point (whichImage = 1)
    // and epipolar line in the left image induced by the right point (whichImage = 2).
    // computeCorrespondEpilines allocates the 3-channel output itself.
    cv::Mat3f epiLineRight, epiLineLeft;
    cv::computeCorrespondEpilines(left_candidate, 1, fundamental_mat, epiLineRight);
    cv::computeCorrespondEpilines(right_candidate, 2, fundamental_mat, epiLineLeft);
    const cv::Vec3f line_left = epiLineLeft(0, 0);   // line a*x + b*y + c = 0 in the left image
    const cv::Vec3f line_right = epiLineRight(0, 0); // line in the right image
    const cv::Vec2f pl = left_candidate(0, 0);
    const cv::Vec2f pr = right_candidate(0, 0);
    // Point-to-line distance: |a*x + b*y + c| / sqrt(a^2 + b^2)
    float distance_left_im = std::abs(line_left[0] * pl[0] + line_left[1] * pl[1] + line_left[2]) /
                             std::sqrt(line_left[0] * line_left[0] + line_left[1] * line_left[1]);
    float distance_right_im = std::abs(line_right[0] * pr[0] + line_right[1] * pr[1] + line_right[2]) /
                              std::sqrt(line_right[0] * line_right[0] + line_right[1] * line_right[1]);
    return distance_left_im + distance_right_im;
}
int main()
{
cv::Matx33f fundamental_mat{-0.000000234008931f, -0.000013193232976f,  0.010025275471910f,
                            -0.000017896532640f,  0.000009948056751f,  0.414125924093639f,
                             0.006296743991557f, -0.411007947095269f, -4.695511356888332f};
cv::Mat2f left_candidate(1, 1, cv::Vec2f(135.f, 289.f));
cv::Mat2f right_candidate(1, 1, cv::Vec2f(205.f, 311.f));
float distance_sum=calculateDistanceToEpiLinesum(left_candidate,right_candidate,fundamental_mat);
std::cout<<"The sum of the distances equals "<<distance_sum<<" pixels\n";
return 0;
}
The problem I am facing is that I will have to perform this operation several thousand times per second. I know that the first input of cv::computeCorrespondEpilines can be a vector of pixels, which allows a more vectorized approach and would probably speed things up. The problem is that I cannot use this, because I am not working with regular cameras but with event-based sensors, so I receive pixels asynchronously (instead of receiving frames).
Now, I would like to understand the following: