OpenCV for Unity 2.6.4
Enox Software / Please refer to the official OpenCV documentation ( http://docs.opencv.org/4.10.0/index.html ) for details of each method's arguments.
Static Public Member Functions | |
static void | amFilter (Mat joint, Mat src, Mat dst, double sigma_s, double sigma_r) |
Simple one-line Adaptive Manifold Filter call. | |
static void | amFilter (Mat joint, Mat src, Mat dst, double sigma_s, double sigma_r, bool adjust_outliers) |
Simple one-line Adaptive Manifold Filter call. | |
static void | anisotropicDiffusion (Mat src, Mat dst, float alpha, float K, int niters) |
Performs anisotropic diffusion on an image. | |
static void | bilateralTextureFilter (Mat src, Mat dst) |
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see [Cho2014]. | |
static void | bilateralTextureFilter (Mat src, Mat dst, int fr) |
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see [Cho2014]. | |
static void | bilateralTextureFilter (Mat src, Mat dst, int fr, int numIter) |
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see [Cho2014]. | |
static void | bilateralTextureFilter (Mat src, Mat dst, int fr, int numIter, double sigmaAlpha) |
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see [Cho2014]. | |
static void | bilateralTextureFilter (Mat src, Mat dst, int fr, int numIter, double sigmaAlpha, double sigmaAvg) |
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see [Cho2014]. | |
static void | colorMatchTemplate (Mat img, Mat templ, Mat result) |
Compares a color template against overlapped color image regions. | |
static double | computeBadPixelPercent (Mat GT, Mat src, in Vec4i ROI) |
Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold) | |
static double | computeBadPixelPercent (Mat GT, Mat src, in Vec4i ROI, int thresh) |
Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold) | |
static double | computeBadPixelPercent (Mat GT, Mat src, in(int x, int y, int width, int height) ROI) |
Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold) | |
static double | computeBadPixelPercent (Mat GT, Mat src, in(int x, int y, int width, int height) ROI, int thresh) |
Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold) | |
static double | computeBadPixelPercent (Mat GT, Mat src, Rect ROI) |
Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold) | |
static double | computeBadPixelPercent (Mat GT, Mat src, Rect ROI, int thresh) |
Function for computing the percent of "bad" pixels in the disparity map (pixels where error is higher than a specified threshold) | |
static double | computeMSE (Mat GT, Mat src, in Vec4i ROI) |
Function for computing mean square error for disparity maps. | |
static double | computeMSE (Mat GT, Mat src, in(int x, int y, int width, int height) ROI) |
Function for computing mean square error for disparity maps. | |
static double | computeMSE (Mat GT, Mat src, Rect ROI) |
Function for computing mean square error for disparity maps. | |
static void | contourSampling (Mat src, Mat _out, int nbElt) |
Contour sampling. | |
static void | covarianceEstimation (Mat src, Mat dst, int windowRows, int windowCols) |
Computes the estimated covariance matrix of an image using the sliding window formulation. | |
static AdaptiveManifoldFilter | createAMFilter (double sigma_s, double sigma_r) |
Factory method, create instance of AdaptiveManifoldFilter and produce some initialization routines. | |
static AdaptiveManifoldFilter | createAMFilter (double sigma_s, double sigma_r, bool adjust_outliers) |
Factory method, create instance of AdaptiveManifoldFilter and produce some initialization routines. | |
static ContourFitting | createContourFitting () |
create ContourFitting algorithm object | |
static ContourFitting | createContourFitting (int ctr) |
create ContourFitting algorithm object | |
static ContourFitting | createContourFitting (int ctr, int fd) |
create ContourFitting algorithm object | |
static DisparityWLSFilter | createDisparityWLSFilter (StereoMatcher matcher_left) |
Convenience factory method that creates an instance of DisparityWLSFilter and sets up all the relevant filter parameters automatically based on the matcher instance. Currently supports only StereoBM and StereoSGBM. | |
static DisparityWLSFilter | createDisparityWLSFilterGeneric (bool use_confidence) |
More generic factory method, create instance of DisparityWLSFilter and execute basic initialization routines. When using this method you will need to set up the ROI, matchers and other parameters yourself. | |
static DTFilter | createDTFilter (Mat guide, double sigmaSpatial, double sigmaColor) |
Factory method, create instance of DTFilter and produce initialization routines. | |
static DTFilter | createDTFilter (Mat guide, double sigmaSpatial, double sigmaColor, int mode) |
Factory method, create instance of DTFilter and produce initialization routines. | |
static DTFilter | createDTFilter (Mat guide, double sigmaSpatial, double sigmaColor, int mode, int numIters) |
Factory method, create instance of DTFilter and produce initialization routines. | |
static EdgeAwareInterpolator | createEdgeAwareInterpolator () |
Factory method that creates an instance of the EdgeAwareInterpolator. | |
static EdgeBoxes | createEdgeBoxes () |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha) |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha, float beta) |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha, float beta, float eta) |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha, float beta, float eta, float minScore) |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha, float beta, float eta, float minScore, int maxBoxes) |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag) |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr) |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag) |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio) |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio, float minBoxArea) |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio, float minBoxArea, float gamma) |
Creates an EdgeBoxes object. | |
static EdgeBoxes | createEdgeBoxes (float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio, float minBoxArea, float gamma, float kappa) |
Creates an EdgeBoxes object. | |
static EdgeDrawing | createEdgeDrawing () |
Creates a smart pointer to an EdgeDrawing object and initializes it. | |
static FastBilateralSolverFilter | createFastBilateralSolverFilter (Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma) |
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines. | |
static FastBilateralSolverFilter | createFastBilateralSolverFilter (Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda) |
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines. | |
static FastBilateralSolverFilter | createFastBilateralSolverFilter (Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter) |
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines. | |
static FastBilateralSolverFilter | createFastBilateralSolverFilter (Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter, double max_tol) |
Factory method, create instance of FastBilateralSolverFilter and execute the initialization routines. | |
static FastGlobalSmootherFilter | createFastGlobalSmootherFilter (Mat guide, double lambda, double sigma_color) |
Factory method, create instance of FastGlobalSmootherFilter and execute the initialization routines. | |
static FastGlobalSmootherFilter | createFastGlobalSmootherFilter (Mat guide, double lambda, double sigma_color, double lambda_attenuation) |
Factory method, create instance of FastGlobalSmootherFilter and execute the initialization routines. | |
static FastGlobalSmootherFilter | createFastGlobalSmootherFilter (Mat guide, double lambda, double sigma_color, double lambda_attenuation, int num_iter) |
Factory method, create instance of FastGlobalSmootherFilter and execute the initialization routines. | |
static FastLineDetector | createFastLineDetector () |
Creates a smart pointer to a FastLineDetector object and initializes it. | |
static FastLineDetector | createFastLineDetector (int length_threshold) |
Creates a smart pointer to a FastLineDetector object and initializes it. | |
static FastLineDetector | createFastLineDetector (int length_threshold, float distance_threshold) |
Creates a smart pointer to a FastLineDetector object and initializes it. | |
static FastLineDetector | createFastLineDetector (int length_threshold, float distance_threshold, double canny_th1) |
Creates a smart pointer to a FastLineDetector object and initializes it. | |
static FastLineDetector | createFastLineDetector (int length_threshold, float distance_threshold, double canny_th1, double canny_th2) |
Creates a smart pointer to a FastLineDetector object and initializes it. | |
static FastLineDetector | createFastLineDetector (int length_threshold, float distance_threshold, double canny_th1, double canny_th2, int canny_aperture_size) |
Creates a smart pointer to a FastLineDetector object and initializes it. | |
static FastLineDetector | createFastLineDetector (int length_threshold, float distance_threshold, double canny_th1, double canny_th2, int canny_aperture_size, bool do_merge) |
Creates a smart pointer to a FastLineDetector object and initializes it. | |
static GraphSegmentation | createGraphSegmentation () |
Creates a graph-based segmentor. | |
static GraphSegmentation | createGraphSegmentation (double sigma) |
Creates a graph-based segmentor. | |
static GraphSegmentation | createGraphSegmentation (double sigma, float k) |
Creates a graph-based segmentor. | |
static GraphSegmentation | createGraphSegmentation (double sigma, float k, int min_size) |
Creates a graph-based segmentor. | |
static GuidedFilter | createGuidedFilter (Mat guide, int radius, double eps) |
Factory method, create instance of GuidedFilter and produce initialization routines. | |
static GuidedFilter | createGuidedFilter (Mat guide, int radius, double eps, double scale) |
Factory method, create instance of GuidedFilter and produce initialization routines. | |
static void | createQuaternionImage (Mat img, Mat qimg) |
creates a quaternion image. | |
static RFFeatureGetter | createRFFeatureGetter () |
static RICInterpolator | createRICInterpolator () |
Factory method that creates an instance of the RICInterpolator. | |
static StereoMatcher | createRightMatcher (StereoMatcher matcher_left) |
Convenience method to set up the matcher for computing the right-view disparity map that is required in case of filtering with confidence. | |
static ScanSegment | createScanSegment (int image_width, int image_height, int num_superpixels) |
Initializes a ScanSegment object. | |
static ScanSegment | createScanSegment (int image_width, int image_height, int num_superpixels, int slices) |
Initializes a ScanSegment object. | |
static ScanSegment | createScanSegment (int image_width, int image_height, int num_superpixels, int slices, bool merge_small) |
Initializes a ScanSegment object. | |
static SelectiveSearchSegmentation | createSelectiveSearchSegmentation () |
Create a new SelectiveSearchSegmentation class. | |
static SelectiveSearchSegmentationStrategyColor | createSelectiveSearchSegmentationStrategyColor () |
Create a new color-based strategy. | |
static SelectiveSearchSegmentationStrategyFill | createSelectiveSearchSegmentationStrategyFill () |
Create a new fill-based strategy. | |
static SelectiveSearchSegmentationStrategyMultiple | createSelectiveSearchSegmentationStrategyMultiple () |
Create a new multiple strategy. | |
static SelectiveSearchSegmentationStrategyMultiple | createSelectiveSearchSegmentationStrategyMultiple (SelectiveSearchSegmentationStrategy s1) |
Create a new multiple strategy and set one substrategy. | |
static SelectiveSearchSegmentationStrategyMultiple | createSelectiveSearchSegmentationStrategyMultiple (SelectiveSearchSegmentationStrategy s1, SelectiveSearchSegmentationStrategy s2) |
Create a new multiple strategy and set two substrategies, with equal weights. | |
static SelectiveSearchSegmentationStrategyMultiple | createSelectiveSearchSegmentationStrategyMultiple (SelectiveSearchSegmentationStrategy s1, SelectiveSearchSegmentationStrategy s2, SelectiveSearchSegmentationStrategy s3) |
Create a new multiple strategy and set three substrategies, with equal weights. | |
static SelectiveSearchSegmentationStrategyMultiple | createSelectiveSearchSegmentationStrategyMultiple (SelectiveSearchSegmentationStrategy s1, SelectiveSearchSegmentationStrategy s2, SelectiveSearchSegmentationStrategy s3, SelectiveSearchSegmentationStrategy s4) |
Create a new multiple strategy and set four substrategies, with equal weights. | |
static SelectiveSearchSegmentationStrategySize | createSelectiveSearchSegmentationStrategySize () |
Create a new size-based strategy. | |
static SelectiveSearchSegmentationStrategyTexture | createSelectiveSearchSegmentationStrategyTexture () |
Create a new texture-based strategy. | |
static StructuredEdgeDetection | createStructuredEdgeDetection (string model) |
static StructuredEdgeDetection | createStructuredEdgeDetection (string model, RFFeatureGetter howToGetFeatures) |
static SuperpixelLSC | createSuperpixelLSC (Mat image) |
Creates a SuperpixelLSC object (LSC, Linear Spectral Clustering, superpixels algorithm). | |
static SuperpixelLSC | createSuperpixelLSC (Mat image, int region_size) |
Creates a SuperpixelLSC object (LSC, Linear Spectral Clustering, superpixels algorithm). | |
static SuperpixelLSC | createSuperpixelLSC (Mat image, int region_size, float ratio) |
Creates a SuperpixelLSC object (LSC, Linear Spectral Clustering, superpixels algorithm). | |
static SuperpixelSEEDS | createSuperpixelSEEDS (int image_width, int image_height, int image_channels, int num_superpixels, int num_levels) |
Initializes a SuperpixelSEEDS object. | |
static SuperpixelSEEDS | createSuperpixelSEEDS (int image_width, int image_height, int image_channels, int num_superpixels, int num_levels, int prior) |
Initializes a SuperpixelSEEDS object. | |
static SuperpixelSEEDS | createSuperpixelSEEDS (int image_width, int image_height, int image_channels, int num_superpixels, int num_levels, int prior, int histogram_bins) |
Initializes a SuperpixelSEEDS object. | |
static SuperpixelSEEDS | createSuperpixelSEEDS (int image_width, int image_height, int image_channels, int num_superpixels, int num_levels, int prior, int histogram_bins, bool double_step) |
Initializes a SuperpixelSEEDS object. | |
static SuperpixelSLIC | createSuperpixelSLIC (Mat image) |
Initialize a SuperpixelSLIC object. | |
static SuperpixelSLIC | createSuperpixelSLIC (Mat image, int algorithm) |
Initialize a SuperpixelSLIC object. | |
static SuperpixelSLIC | createSuperpixelSLIC (Mat image, int algorithm, int region_size) |
Initialize a SuperpixelSLIC object. | |
static SuperpixelSLIC | createSuperpixelSLIC (Mat image, int algorithm, int region_size, float ruler) |
Initialize a SuperpixelSLIC object. | |
static void | dtFilter (Mat guide, Mat src, Mat dst, double sigmaSpatial, double sigmaColor) |
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guide image, use the DTFilter interface to avoid extra computations at the initialization stage. | |
static void | dtFilter (Mat guide, Mat src, Mat dst, double sigmaSpatial, double sigmaColor, int mode) |
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guide image, use the DTFilter interface to avoid extra computations at the initialization stage. | |
static void | dtFilter (Mat guide, Mat src, Mat dst, double sigmaSpatial, double sigmaColor, int mode, int numIters) |
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guide image, use the DTFilter interface to avoid extra computations at the initialization stage. | |
static void | edgePreservingFilter (Mat src, Mat dst, int d, double threshold) |
Smoothes an image using the Edge-Preserving filter. | |
static void | fastBilateralSolverFilter (Mat guide, Mat src, Mat confidence, Mat dst) |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations. | |
static void | fastBilateralSolverFilter (Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial) |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations. | |
static void | fastBilateralSolverFilter (Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma) |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations. | |
static void | fastBilateralSolverFilter (Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma) |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations. | |
static void | fastBilateralSolverFilter (Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda) |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations. | |
static void | fastBilateralSolverFilter (Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter) |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations. | |
static void | fastBilateralSolverFilter (Mat guide, Mat src, Mat confidence, Mat dst, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter, double max_tol) |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations. | |
static void | fastGlobalSmootherFilter (Mat guide, Mat src, Mat dst, double lambda, double sigma_color) |
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations. | |
static void | fastGlobalSmootherFilter (Mat guide, Mat src, Mat dst, double lambda, double sigma_color, double lambda_attenuation) |
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations. | |
static void | fastGlobalSmootherFilter (Mat guide, Mat src, Mat dst, double lambda, double sigma_color, double lambda_attenuation, int num_iter) |
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations. | |
static void | FastHoughTransform (Mat src, Mat dst, int dstMatDepth) |
Calculates 2D Fast Hough transform of an image. | |
static void | FastHoughTransform (Mat src, Mat dst, int dstMatDepth, int angleRange) |
Calculates 2D Fast Hough transform of an image. | |
static void | FastHoughTransform (Mat src, Mat dst, int dstMatDepth, int angleRange, int op) |
Calculates 2D Fast Hough transform of an image. | |
static void | FastHoughTransform (Mat src, Mat dst, int dstMatDepth, int angleRange, int op, int makeSkew) |
Calculates 2D Fast Hough transform of an image. | |
static void | findEllipses (Mat image, Mat ellipses) |
Quickly finds ellipses in an image using projective invariant pruning. | |
static void | findEllipses (Mat image, Mat ellipses, float scoreThreshold) |
Quickly finds ellipses in an image using projective invariant pruning. | |
static void | findEllipses (Mat image, Mat ellipses, float scoreThreshold, float reliabilityThreshold) |
Quickly finds ellipses in an image using projective invariant pruning. | |
static void | findEllipses (Mat image, Mat ellipses, float scoreThreshold, float reliabilityThreshold, float centerDistanceThreshold) |
Quickly finds ellipses in an image using projective invariant pruning. | |
static void | fourierDescriptor (Mat src, Mat dst) |
Fourier descriptors for planar closed curves. | |
static void | fourierDescriptor (Mat src, Mat dst, int nbElt) |
Fourier descriptors for planar closed curves. | |
static void | fourierDescriptor (Mat src, Mat dst, int nbElt, int nbFD) |
Fourier descriptors for planar closed curves. | |
static void | getDisparityVis (Mat src, Mat dst) |
Function for creating a disparity map visualization (clamped CV_8U image) | |
static void | getDisparityVis (Mat src, Mat dst, double scale) |
Function for creating a disparity map visualization (clamped CV_8U image) | |
static void | GradientDericheX (Mat op, Mat dst, double alpha, double omega) |
Applies X Deriche filter to an image. | |
static void | GradientDericheY (Mat op, Mat dst, double alpha, double omega) |
Applies Y Deriche filter to an image. | |
static void | guidedFilter (Mat guide, Mat src, Mat dst, int radius, double eps) |
Simple one-line (Fast) Guided Filter call. | |
static void | guidedFilter (Mat guide, Mat src, Mat dst, int radius, double eps, int dDepth) |
Simple one-line (Fast) Guided Filter call. | |
static void | guidedFilter (Mat guide, Mat src, Mat dst, int radius, double eps, int dDepth, double scale) |
Simple one-line (Fast) Guided Filter call. | |
static void | jointBilateralFilter (Mat joint, Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace) |
Applies the joint bilateral filter to an image. | |
static void | jointBilateralFilter (Mat joint, Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace, int borderType) |
Applies the joint bilateral filter to an image. | |
static void | l0Smooth (Mat src, Mat dst) |
Global image smoothing via L0 gradient minimization. | |
static void | l0Smooth (Mat src, Mat dst, double lambda) |
Global image smoothing via L0 gradient minimization. | |
static void | l0Smooth (Mat src, Mat dst, double lambda, double kappa) |
Global image smoothing via L0 gradient minimization. | |
static void | niBlackThreshold (Mat _src, Mat _dst, double maxValue, int type, int blockSize, double k) |
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired. | |
static void | niBlackThreshold (Mat _src, Mat _dst, double maxValue, int type, int blockSize, double k, int binarizationMethod) |
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired. | |
static void | niBlackThreshold (Mat _src, Mat _dst, double maxValue, int type, int blockSize, double k, int binarizationMethod, double r) |
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired. | |
static void | PeiLinNormalization (Mat I, Mat T) |
static void | qconj (Mat qimg, Mat qcimg) |
calculates conjugate of a quaternion image. | |
static void | qdft (Mat img, Mat qimg, int flags, bool sideLeft) |
Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array. | |
static void | qmultiply (Mat src1, Mat src2, Mat dst) |
Calculates the per-element quaternion product of two arrays. | |
static void | qunitary (Mat qimg, Mat qnimg) |
divides each element by its modulus. | |
static void | RadonTransform (Mat src, Mat dst) |
Calculate Radon Transform of an image. | |
static void | RadonTransform (Mat src, Mat dst, double theta) |
Calculate Radon Transform of an image. | |
static void | RadonTransform (Mat src, Mat dst, double theta, double start_angle) |
Calculate Radon Transform of an image. | |
static void | RadonTransform (Mat src, Mat dst, double theta, double start_angle, double end_angle) |
Calculate Radon Transform of an image. | |
static void | RadonTransform (Mat src, Mat dst, double theta, double start_angle, double end_angle, bool crop) |
Calculate Radon Transform of an image. | |
static void | RadonTransform (Mat src, Mat dst, double theta, double start_angle, double end_angle, bool crop, bool norm) |
Calculate Radon Transform of an image. | |
static int | readGT (string src_path, Mat dst) |
Function for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16. | |
static void | rollingGuidanceFilter (Mat src, Mat dst) |
Applies the rolling guidance filter to an image. | |
static void | rollingGuidanceFilter (Mat src, Mat dst, int d) |
Applies the rolling guidance filter to an image. | |
static void | rollingGuidanceFilter (Mat src, Mat dst, int d, double sigmaColor) |
Applies the rolling guidance filter to an image. | |
static void | rollingGuidanceFilter (Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace) |
Applies the rolling guidance filter to an image. | |
static void | rollingGuidanceFilter (Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace, int numOfIter) |
Applies the rolling guidance filter to an image. | |
static void | rollingGuidanceFilter (Mat src, Mat dst, int d, double sigmaColor, double sigmaSpace, int numOfIter, int borderType) |
Applies the rolling guidance filter to an image. | |
static void | thinning (Mat src, Mat dst) |
Applies a binary blob thinning operation to achieve a skeletonization of the input image. | |
static void | thinning (Mat src, Mat dst, int thinningType) |
Applies a binary blob thinning operation to achieve a skeletonization of the input image. | |
static void | transformFD (Mat src, Mat t, Mat dst) |
transform a contour | |
static void | transformFD (Mat src, Mat t, Mat dst, bool fdContour) |
transform a contour | |
static void | weightedMedianFilter (Mat joint, Mat src, Mat dst, int r) |
Applies weighted median filter to an image. | |
static void | weightedMedianFilter (Mat joint, Mat src, Mat dst, int r, double sigma) |
Applies weighted median filter to an image. | |
static void | weightedMedianFilter (Mat joint, Mat src, Mat dst, int r, double sigma, int weightType) |
Applies weighted median filter to an image. | |
static void | weightedMedianFilter (Mat joint, Mat src, Mat dst, int r, double sigma, int weightType, Mat mask) |
Applies weighted median filter to an image. | |
Static Public Attributes | |
const int | AM_FILTER = 4 |
const int | ARO_0_45 = 0 |
const int | ARO_315_0 = 3 |
const int | ARO_315_135 = 6 |
const int | ARO_315_45 = 4 |
const int | ARO_45_135 = 5 |
const int | ARO_45_90 = 1 |
const int | ARO_90_135 = 2 |
const int | ARO_CTR_HOR = 7 |
const int | ARO_CTR_VER = 8 |
const int | BINARIZATION_NIBLACK = 0 |
const int | BINARIZATION_NICK = 3 |
const int | BINARIZATION_SAUVOLA = 1 |
const int | BINARIZATION_WOLF = 2 |
const int | DTF_IC = 1 |
const int | DTF_NC = 0 |
const int | DTF_RF = 2 |
const int | FHT_ADD = 2 |
const int | FHT_AVE = 3 |
const int | FHT_MAX = 1 |
const int | FHT_MIN = 0 |
const int | GUIDED_FILTER = 3 |
const int | HDO_DESKEW = 1 |
const int | HDO_RAW = 0 |
const int | MSLIC = 102 |
const int | SLIC = 100 |
const int | SLICO = 101 |
const int | THINNING_GUOHALL = 1 |
const int | THINNING_ZHANGSUEN = 0 |
const int | WMF_COS = 1 << 3 |
const int | WMF_EXP = 1 |
const int | WMF_IV1 = 1 << 1 |
const int | WMF_IV2 = 1 << 2 |
const int | WMF_JAC = 1 << 4 |
const int | WMF_OFF = 1 << 5 |

static void amFilter (Mat joint, Mat src, Mat dst, double sigma_s, double sigma_r, bool adjust_outliers)
Simple one-line Adaptive Manifold Filter call.
joint | joint (also called guided) image or array of images with any number of channels. |
src | filtering image with any number of channels. |
dst | output image. |
sigma_s | spatial standard deviation. |
sigma_r | color space standard deviation, similar to the sigma in the color space of bilateralFilter. |
adjust_outliers | optional; specifies whether to perform the outlier adjustment operation (Eq. 9 in the original paper). |
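
A minimal C# sketch of the one-line call, assuming the usual OpenCVForUnity namespaces and an already-loaded 8-bit color Mat named src; here the image serves as its own joint/guide image, and the parameter values are illustrative only:

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XimgprocModule;

// Edge-preserving smoothing with the adaptive manifold filter;
// src is used as its own guide (joint == src).
Mat dst = new Mat();
Ximgproc.amFilter(src, src, dst, 16.0, 0.2, false);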

static void anisotropicDiffusion (Mat src, Mat dst, float alpha, float K, int niters)
Performs anisotropic diffusion on an image.
The function applies Perona-Malik anisotropic diffusion to an image. This is the solution to the partial differential equation:
\[{\frac {\partial I}{\partial t}}={\mathrm {div}}\left(c(x,y,t)\nabla I\right)=\nabla c\cdot \nabla I+c(x,y,t)\Delta I\]
Suggested functions for c(x,y,t) are:
\[c\left(\|\nabla I\|\right)=e^{{-\left(\|\nabla I\|/K\right)^{2}}}\]
or
\[ c\left(\|\nabla I\|\right)={\frac {1}{1+\left({\frac {\|\nabla I\|}{K}}\right)^{2}}} \]
src | Source image with 3 channels. |
dst | Destination image of the same size and the same number of channels as src. |
alpha | The amount of time to step forward by on each iteration (normally between 0 and 1). |
K | Sensitivity to the edges. |
niters | The number of iterations. |
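
A minimal usage sketch, assuming the usual OpenCVForUnity namespaces and a 3-channel 8-bit Mat named src; the alpha, K and niters values are illustrative and should be tuned for the data:

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XimgprocModule;

// Perona-Malik diffusion: small time step, a handful of iterations.
Mat diffused = new Mat();
Ximgproc.anisotropicDiffusion(src, diffused, 0.15f, 30f, 10);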

static void bilateralTextureFilter (Mat src, Mat dst, int fr, int numIter, double sigmaAlpha, double sigmaAvg)
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see [Cho2014].
src | Source image whose depth is 8-bit UINT or 32-bit FLOAT. |
dst | Destination image of the same size and type as src. |
fr | Radius of the kernel to be used for filtering. It should be a positive integer. |
numIter | Number of iterations of the algorithm. It should be a positive integer. |
sigmaAlpha | Controls the sharpness of the weight transition from edges to smooth/texture regions, where a bigger value means a sharper transition. When the value is negative, it is automatically calculated. |
sigmaAvg | Range blur parameter for texture blurring. A larger value makes the result more blurred. When the value is negative, it is automatically calculated as described in the paper. |
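
A short sketch of a typical call, assuming an 8-bit (or 32-bit float) Mat named src is already available; negative sigmas request the automatic values described above:

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XimgprocModule;

// Kernel radius 5, 3 iterations, sigmas chosen automatically.
Mat filtered = new Mat();
Ximgproc.bilateralTextureFilter(src, filtered, 5, 3, -1.0, -1.0);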

static void colorMatchTemplate (Mat img, Mat templ, Mat result)
Compares a color template against overlapped color image regions.
img | Image where the search is running. It must be a 3-channel image. |
templ | Searched template. It must be no greater than the source image and have 3 channels. |
result | Map of comparison results. It must be single-channel, 64-bit floating-point. |
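
A minimal sketch of how the result map might be used, assuming img and templ are 3-channel Mats that are already loaded; Core.minMaxLoc is the usual way to locate the extrema of a single-channel map:

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XimgprocModule;

// result is a single-channel 64-bit floating-point comparison map.
Mat result = new Mat();
Ximgproc.colorMatchTemplate(img, templ, result);

// Locate the extrema of the comparison map to find the best match.
Core.MinMaxLocResult mm = Core.minMaxLoc(result);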

static double computeBadPixelPercent (Mat GT, Mat src, Rect ROI, int thresh)
Function for computing the percent of "bad" pixels in the disparity map (pixels where the error is higher than a specified threshold). The ROI may also be passed as a Vec4i or as an (int x, int y, int width, int height) tuple, as listed above.
GT | ground truth disparity map |
src | disparity map to evaluate |
ROI | region of interest |
thresh | threshold used to determine "bad" pixels |
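
For context, a sketch of how the evaluation helpers on this page fit together; gtPath is a hypothetical path to a Middlebury ground-truth file, and disparity is assumed to be a map produced by a stereo matcher (both scaled by 16, as noted for readGT):

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XimgprocModule;

// Read the ground truth, then score the computed disparity over the full frame.
Mat gt = new Mat();
int ret = Ximgproc.readGT(gtPath, gt);

Rect roi = new Rect(0, 0, gt.cols(), gt.rows());
double mse = Ximgproc.computeMSE(gt, disparity, roi);
double badPercent = Ximgproc.computeBadPixelPercent(gt, disparity, roi, 24);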

static double computeMSE (Mat GT, Mat src, Rect ROI)
Function for computing mean square error for disparity maps. The ROI may also be passed as a Vec4i or as an (int x, int y, int width, int height) tuple, as listed above.
GT | ground truth disparity map |
src | disparity map to evaluate |
ROI | region of interest |

static void contourSampling (Mat src, Mat _out, int nbElt)
Contour sampling.
src | contour, of type vector<Point>, vector<Point2f> or vector<Point2d> |
out | Mat of type CV_64FC2 with nbElt rows |
nbElt | number of points in the output contour |
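
A sketch of one way to feed a contour into this function, assuming a binary Mat named binary is already prepared; findContours returns MatOfPoint, which stores a vector<Point> and can be passed directly as a Mat:

using System.Collections.Generic;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ImgprocModule;
using OpenCVForUnity.XimgprocModule;

// Extract contours, then resample the first one to 256 points.
List<MatOfPoint> contours = new List<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(binary, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);

Mat sampled = new Mat();   // CV_64FC2 with 256 rows
Ximgproc.contourSampling(contours[0], sampled, 256);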

static void covarianceEstimation (Mat src, Mat dst, int windowRows, int windowCols)
Computes the estimated covariance matrix of an image using the sliding window formulation.
src | The source image. The input image must be of a complex type. |
dst | The destination estimated covariance matrix. The output matrix will be of size (windowRows*windowCols, windowRows*windowCols). |
windowRows | The number of rows in the window. |
windowCols | The number of columns in the window. The window size parameters control the accuracy of the estimation. The sliding window moves over the entire image from the top-left corner to the bottom-right corner. Each location of the window represents a sample. If the window is the size of the image, then this gives the exact covariance matrix. For all other cases, the size of the window will affect the number of samples and the number of elements in the estimated covariance matrix. |
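
Because the input must be of a complex type, a real image has to be packed into a two-channel Mat first. A minimal sketch, assuming gray is a single-channel Mat that is already loaded:

using System.Collections.Generic;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XimgprocModule;

// Build a complex (2-channel float) image with a zero imaginary part.
Mat realPart = new Mat();
gray.convertTo(realPart, CvType.CV_32F);
Mat complexSrc = new Mat();
Core.merge(new List<Mat> { realPart, Mat.zeros(realPart.size(), CvType.CV_32F) }, complexSrc);

// 7x7 window -> a 49x49 estimated covariance matrix.
Mat covar = new Mat();
Ximgproc.covarianceEstimation(complexSrc, covar, 7, 7);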

static AdaptiveManifoldFilter createAMFilter (double sigma_s, double sigma_r, bool adjust_outliers)
Factory method; creates an instance of AdaptiveManifoldFilter and performs some initialization routines.
sigma_s | spatial standard deviation. |
sigma_r | color space standard deviation, similar to the sigma in the color space of bilateralFilter. |
adjust_outliers | optional; specifies whether to perform the outlier adjustment operation (Eq. 9 in the original paper). |
For more details about Adaptive Manifold Filter parameters, see the original article [Gastal12].

static ContourFitting createContourFitting (int ctr, int fd)
Create a ContourFitting algorithm object.
ctr | number of Fourier descriptors, equal to the number of contour points after resampling. |
fd | Contour defining the second shape (target). |

static DisparityWLSFilter createDisparityWLSFilter (StereoMatcher matcher_left)
Convenience factory method that creates an instance of DisparityWLSFilter and sets up all the relevant filter parameters automatically based on the matcher instance. Currently supports only StereoBM and StereoSGBM.
matcher_left | stereo matcher instance that will be used with the filter |
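
A sketch of the confidence-based filtering pipeline this factory is intended for. It assumes leftGray and rightGray are rectified CV_8UC1 views; StereoBM lives in the Calib3dModule, and the DisparityWLSFilter.filter overload used here is assumed from the underlying OpenCV API rather than from this page:

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.Calib3dModule;
using OpenCVForUnity.XimgprocModule;

// Left matcher, matching right matcher, and a WLS filter tuned to the matcher.
StereoBM leftMatcher = StereoBM.create(128, 15);
StereoMatcher rightMatcher = Ximgproc.createRightMatcher(leftMatcher);
DisparityWLSFilter wls = Ximgproc.createDisparityWLSFilter(leftMatcher);

Mat leftDisp = new Mat(), rightDisp = new Mat(), filteredDisp = new Mat();
leftMatcher.compute(leftGray, rightGray, leftDisp);
rightMatcher.compute(rightGray, leftGray, rightDisp);
wls.filter(leftDisp, leftGray, filteredDisp, rightDisp);

// getDisparityVis (listed above) turns the result into a displayable CV_8U image.
Mat vis = new Mat();
Ximgproc.getDisparityVis(filteredDisp, vis, 1.0);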

static DisparityWLSFilter createDisparityWLSFilterGeneric (bool use_confidence)
More generic factory method; creates an instance of DisparityWLSFilter and executes basic initialization routines. When using this method you will need to set up the ROI, matchers and other parameters yourself.
use_confidence | filtering with confidence requires two disparity maps (for the left and right views) and is approximately two times slower. However, quality is typically significantly better. |

static DTFilter createDTFilter (Mat guide, double sigmaSpatial, double sigmaColor, int mode, int numIters)
Factory method; creates an instance of DTFilter and performs initialization routines.
guide | guided image (used to build the transformed distance, which describes the edge structure of the guided image). |
sigmaSpatial | \({\sigma}_H\) parameter in the original article; similar to the sigma in the coordinate space of bilateralFilter. |
sigmaColor | \({\sigma}_r\) parameter in the original article; similar to the sigma in the color space of bilateralFilter. |
mode | one of three modes, DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article. |
numIters | optional number of iterations used for filtering; 3 is usually enough. |
For more details about Domain Transform filter parameters, see the original article [Gastal11] and the Domain Transform filter homepage.
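
A short sketch contrasting the factory with the one-line dtFilter call listed above; guide and frame are assumed to be already-loaded Mats, and the DTFilter.filter instance method is assumed from the underlying OpenCV API:

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XimgprocModule;

// Reuse one DTFilter instance when several frames share the same guide image.
DTFilter dt = Ximgproc.createDTFilter(guide, 40.0, 30.0, Ximgproc.DTF_NC, 3);
Mat dst = new Mat();
dt.filter(frame, dst);

// Equivalent one-off call (documented above) that rebuilds the filter each time.
Ximgproc.dtFilter(guide, frame, dst, 40.0, 30.0, Ximgproc.DTF_NC, 3);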

static EdgeAwareInterpolator createEdgeAwareInterpolator ()
Factory method that creates an instance of the EdgeAwareInterpolator.

static EdgeBoxes createEdgeBoxes (float alpha, float beta, float eta, float minScore, int maxBoxes, float edgeMinMag, float edgeMergeThr, float clusterMinMag, float maxAspectRatio, float minBoxArea, float gamma, float kappa)
Creates an EdgeBoxes object.
alpha | step size of the sliding window search. |
beta | NMS threshold for object proposals. |
eta | adaptation rate for the NMS threshold. |
minScore | min score of boxes to detect. |
maxBoxes | max number of boxes to detect. |
edgeMinMag | edge min magnitude. Increase to trade off accuracy for speed. |
edgeMergeThr | edge merge threshold. Increase to trade off accuracy for speed. |
clusterMinMag | cluster min magnitude. Increase to trade off accuracy for speed. |
maxAspectRatio | max aspect ratio of boxes. |
minBoxArea | minimum area of boxes. |
gamma | affinity sensitivity. |
kappa | scale sensitivity. |
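
A sketch of a typical proposal pipeline built around this factory. "model.yml.gz" is a hypothetical path to a trained structured-edge model, rgb is an assumed 3-channel 8-bit Mat, and the instance methods detectEdges, computeOrientation, setMaxBoxes and getBoundingBoxes are assumed from the underlying OpenCV ximgproc API rather than from this page:

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XimgprocModule;

// Edge map and orientations from a trained structured-edge model.
StructuredEdgeDetection sed = Ximgproc.createStructuredEdgeDetection("model.yml.gz");
Mat rgbFloat = new Mat();
rgb.convertTo(rgbFloat, CvType.CV_32FC3, 1.0 / 255.0);   // 3-channel float in [0, 1]
Mat edges = new Mat(), orientation = new Mat();
sed.detectEdges(rgbFloat, edges);
sed.computeOrientation(edges, orientation);

// Propose up to 30 boxes from the edge map.
EdgeBoxes eb = Ximgproc.createEdgeBoxes();
eb.setMaxBoxes(30);
MatOfRect boxes = new MatOfRect();
eb.getBoundingBoxes(edges, orientation, boxes);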

static EdgeDrawing createEdgeDrawing ()
Creates a smart pointer to an EdgeDrawing object and initializes it.
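
A minimal sketch, assuming gray is an already-loaded CV_8UC1 Mat; detectEdges, detectLines and detectEllipses are EdgeDrawing instance methods assumed from the underlying OpenCV API:

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XimgprocModule;

EdgeDrawing ed = Ximgproc.createEdgeDrawing();
ed.detectEdges(gray);

Mat lines = new Mat();      // each row: x1, y1, x2, y2
ed.detectLines(lines);

Mat ellipses = new Mat();
ed.detectEllipses(ellipses);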

static FastBilateralSolverFilter createFastBilateralSolverFilter (Mat guide, double sigma_spatial, double sigma_luma, double sigma_chroma, double lambda, int num_iter, double max_tol)
Factory method; creates an instance of FastBilateralSolverFilter and executes the initialization routines.
guide | image serving as the guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
sigma_spatial | parameter similar to the spatial space sigma (bandwidth) in bilateralFilter. |
sigma_luma | parameter similar to the luma space sigma (bandwidth) in bilateralFilter. |
sigma_chroma | parameter similar to the chroma space sigma (bandwidth) in bilateralFilter. |
lambda | smoothness strength parameter for the solver. |
num_iter | number of iterations used by the solver; 25 is usually enough. |
max_tol | convergence tolerance used by the solver. |
For more details about the Fast Bilateral Solver parameters, see the original paper [BarronPoole2016].
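
The one-line wrapper listed above takes the same parameters. A minimal sketch, assuming guide, src and confidence are already-prepared Mats (confidence marks the trusted pixels of src); the numeric values are illustrative only:

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XimgprocModule;

// Solve for a smoothed dst that respects the guide's edges and the confidence map.
Mat dst = new Mat();
Ximgproc.fastBilateralSolverFilter(guide, src, confidence, dst, 8.0, 8.0, 8.0, 128.0, 25, 1e-5);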

static FastGlobalSmootherFilter createFastGlobalSmootherFilter (Mat guide, double lambda, double sigma_color, double lambda_attenuation, int num_iter)
Factory method; creates an instance of FastGlobalSmootherFilter and executes the initialization routines.
guide | image serving as the guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
lambda | parameter defining the amount of regularization. |
sigma_color | parameter similar to the color space sigma in bilateralFilter. |
lambda_attenuation | internal parameter defining how much lambda decreases after each iteration. Normally it should be 0.25. Setting it to 1.0 may lead to streaking artifacts. |
num_iter | number of iterations used for filtering; 3 is usually enough. |
For more details about Fast Global Smoother parameters, see the original paper [Min2014]. However, please note that there are several differences. Lambda attenuation described in the paper is implemented a bit differently, so do not expect the results to be identical to those from the paper; sigma_color values from the paper should be multiplied by 255.0 to achieve the same effect. Also, in the case of image filtering where the source and guide image are the same, the authors propose to dynamically update the guide image after each iteration. To maximize performance, this feature was not implemented here.
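
For a single image, the one-line wrapper listed above can be used instead of the factory. A minimal sketch, assuming guide and src are already-loaded Mats; the lambda and sigma_color values are illustrative only and should be tuned for the data:

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XimgprocModule;

// Edge-preserving global smoothing of src, guided by guide.
Mat smoothed = new Mat();
Ximgproc.fastGlobalSmootherFilter(guide, src, smoothed, 100.0, 5.0);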
|
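A short C# sketch (assuming the wrapper exposes the same factory as the OpenCV Java binding; guide and src are placeholder Mats):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    // One filter instance can smooth many images that share the same guide.
    FastGlobalSmootherFilter fgs = Ximgproc.createFastGlobalSmootherFilter(
        guide, 100.0, 5.0, 0.25, 3);   // lambda, sigma_color, lambda_attenuation, num_iter
    Mat dst = new Mat();
    fgs.filter(src, dst);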
static |
Creates a smart pointer to a FastLineDetector object and initializes it.
length_threshold | Segment shorter than this will be discarded |
distance_threshold | A point placed from a hypothesis line segment farther than this will be regarded as an outlier |
canny_th1 | First threshold for hysteresis procedure in Canny() |
canny_th2 | Second threshold for hysteresis procedure in Canny() |
canny_aperture_size | Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image. |
do_merge | If true, incremental merging of segments will be performed |
|
static |
Creates a smart pointer to a FastLineDetector object and initializes it.
length_threshold | Segment shorter than this will be discarded |
distance_threshold | A point placed from a hypothesis line segment farther than this will be regarded as an outlier |
canny_th1 | First threshold for hysteresis procedure in Canny() |
canny_th2 | Second threshold for hysteresis procedure in Canny() |
canny_aperture_size | Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image. |
do_merge | If true, incremental merging of segments will be performed |
|
static |
Creates a smart pointer to a FastLineDetector object and initializes it.
length_threshold | Segment shorter than this will be discarded |
distance_threshold | A point placed from a hypothesis line segment farther than this will be regarded as an outlier |
canny_th1 | First threshold for hysteresis procedure in Canny() |
canny_th2 | Second threshold for hysteresis procedure in Canny() |
canny_aperture_size | Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image. |
do_merge | If true, incremental merging of segments will be performed |
|
static |
Creates a smart pointer to a FastLineDetector object and initializes it.
length_threshold | Segment shorter than this will be discarded |
distance_threshold | A point placed from a hypothesis line segment farther than this will be regarded as an outlier |
canny_th1 | First threshold for hysteresis procedure in Canny() |
canny_th2 | Second threshold for hysteresis procedure in Canny() |
canny_aperture_size | Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image. |
do_merge | If true, incremental merging of segments will be performed |
|
static |
Creates a smart pointer to a FastLineDetector object and initializes it.
length_threshold | Segment shorter than this will be discarded |
distance_threshold | A point placed from a hypothesis line segment farther than this will be regarded as an outlier |
canny_th1 | First threshold for hysteresis procedure in Canny() |
canny_th2 | Second threshold for hysteresis procedure in Canny() |
canny_aperture_size | Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image. |
do_merge | If true, incremental merging of segments will be performed |
|
static |
Creates a smart pointer to a FastLineDetector object and initializes it.
length_threshold | Segment shorter than this will be discarded |
distance_threshold | A point placed from a hypothesis line segment farther than this will be regarded as an outlier |
canny_th1 | First threshold for hysteresis procedure in Canny() |
canny_th2 | Second threshold for hysteresis procedure in Canny() |
canny_aperture_size | Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image. |
do_merge | If true, incremental merging of segments will be performed |
|
static |
Creates a smart pointer to a FastLineDetector object and initializes it.
length_threshold | Segment shorter than this will be discarded |
distance_threshold | A point placed from a hypothesis line segment farther than this will be regarded as an outlier |
canny_th1 | First threshold for hysteresis procedure in Canny() |
canny_th2 | Second threshold for hysteresis procedure in Canny() |
canny_aperture_size | Aperture size for the Sobel operator in Canny(). If zero, Canny() is not applied and the input image is taken as an edge image. |
do_merge | If true, incremental merging of segments will be performed |
|
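For illustration, a hedged C# sketch of creating and running the detector (gray is a placeholder 8-bit single-channel Mat; parameter values are illustrative):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    FastLineDetector fld = Ximgproc.createFastLineDetector(
        10, 1.414213562f, 50.0, 50.0, 3, true);   // length, distance, canny_th1, canny_th2, aperture, do_merge
    Mat lines = new Mat();                        // each detected segment is stored as x1, y1, x2, y2
    fld.detect(gray, lines);
    fld.drawSegments(gray, lines);                // optional visualization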
static |
Creates a graph based segmentor.
sigma | The sigma parameter, used to smooth image |
k | The k parameter of the algorithm |
min_size | The minimum size of segments |
|
static |
Creates a graph based segmentor.
sigma | The sigma parameter, used to smooth image |
k | The k parameter of the algorithm |
min_size | The minimum size of segments |
|
static |
Creates a graph based segmentor.
sigma | The sigma parameter, used to smooth image |
k | The k parameter of the algorithm |
min_size | The minimum size of segments |
|
static |
Creates a graph based segmentor.
sigma | The sigma parameter, used to smooth image |
k | The k parameter of the algorithm |
min_size | The minimum size of segments |
|
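A minimal C# sketch (assuming the usual Java-style wrapper; bgr is a placeholder color Mat):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    GraphSegmentation gs = Ximgproc.createGraphSegmentation(0.5, 300f, 100);  // sigma, k, min_size
    Mat labels = new Mat();          // CV_32S image, one region id per pixel
    gs.processImage(bgr, labels);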
static |
Factory method, create instance of GuidedFilter and execute the initialization routines.
guide | guide image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used. |
radius | radius of Guided Filter. |
eps | regularization term of Guided Filter. \({eps}^2\) is similar to the sigma in the color space in bilateralFilter. |
scale | subsample factor of Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale==0.5 shrinks the image by 2x inside the filter). |
For more details about (Fast) Guided Filter parameters, see the original articles [Kaiming10] [Kaiming15] .
|
static |
Factory method, create instance of GuidedFilter and execute the initialization routines.
guide | guide image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used. |
radius | radius of Guided Filter. |
eps | regularization term of Guided Filter. \({eps}^2\) is similar to the sigma in the color space in bilateralFilter. |
scale | subsample factor of Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale==0.5 shrinks the image by 2x inside the filter). |
For more details about (Fast) Guided Filter parameters, see the original articles [Kaiming10] [Kaiming15] .
|
static |
creates a quaternion image.
img | Source 8-bit, 32-bit or 64-bit, 3-channel image. |
qimg | result CV_64FC4 quaternion image (4 channels: zero channel and B, G, R). |
|
static |
|
static |
Factory method that creates an instance of the RICInterpolator.
|
static |
Convenience method to set up the matcher for computing the right-view disparity map that is required in case of filtering with confidence.
matcher_left | main stereo matcher instance that will be used with the filter |
|
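A hedged C# sketch of the typical confidence-based disparity filtering pipeline this method is meant for (leftGray/rightGray are placeholder rectified grayscale Mats; class and method names follow the OpenCV Java-style binding):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.Calib3dModule;
    using OpenCVForUnity.XimgprocModule;

    StereoBM matcherLeft = StereoBM.create(128, 21);
    StereoMatcher matcherRight = Ximgproc.createRightMatcher(matcherLeft);
    DisparityWLSFilter wls = Ximgproc.createDisparityWLSFilter(matcherLeft);
    Mat dispLeft = new Mat(), dispRight = new Mat(), filtered = new Mat();
    matcherLeft.compute(leftGray, rightGray, dispLeft);
    matcherRight.compute(rightGray, leftGray, dispRight);
    wls.filter(dispLeft, leftGray, filtered, dispRight);   // right-view disparity enables confidence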
static |
Initializes a ScanSegment object.
The function initializes a ScanSegment object for the input image. It stores the parameters of the image: image_width and image_height. It also sets the parameters of the F-DBSCAN superpixel algorithm, which are: num_superpixels, threads, and merge_small.
image_width | Image width. |
image_height | Image height. |
num_superpixels | Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size). Use getNumberOfSuperpixels() to get the actual number. |
slices | Number of processing threads for parallelisation. Setting -1 uses the maximum number of threads. In practice, four threads is enough for smaller images and eight threads for larger ones. |
merge_small | merge small segments to give the desired number of superpixels. Processing is much faster without merging, but many small segments will be left in the image. |
|
static |
Initializes a ScanSegment object.
The function initializes a ScanSegment object for the input image. It stores the parameters of the image: image_width and image_height. It also sets the parameters of the F-DBSCAN superpixel algorithm, which are: num_superpixels, threads, and merge_small.
image_width | Image width. |
image_height | Image height. |
num_superpixels | Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size). Use getNumberOfSuperpixels() to get the actual number. |
slices | Number of processing threads for parallelisation. Setting -1 uses the maximum number of threads. In practice, four threads is enough for smaller images and eight threads for larger ones. |
merge_small | merge small segments to give the desired number of superpixels. Processing is much faster without merging, but many small segments will be left in the image. |
|
static |
Initializes a ScanSegment object.
The function initializes a ScanSegment object for the input image. It stores the parameters of the image: image_width and image_height. It also sets the parameters of the F-DBSCAN superpixel algorithm, which are: num_superpixels, threads, and merge_small.
image_width | Image width. |
image_height | Image height. |
num_superpixels | Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size). Use getNumberOfSuperpixels() to get the actual number. |
slices | Number of processing threads for parallelisation. Setting -1 uses the maximum number of threads. In practice, four threads is enough for smaller images and eight threads for larger ones. |
merge_small | merge small segments to give the desired number of superpixels. Processing is much faster without merging, but many small segments will be left in the image. |
|
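A short C# sketch of initializing and running the F-DBSCAN superpixels (img is a placeholder 8-bit 3-channel Mat; the create/iterate/getLabels names are assumed from the OpenCV Java-style binding):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    ScanSegment ss = Ximgproc.createScanSegment(img.width(), img.height(), 400, 8, true);
    ss.iterate(img);
    Mat labels = new Mat();
    ss.getLabels(labels);                              // CV_32S superpixel labels
    int actual = ss.getNumberOfSuperpixels();          // may be fewer than requested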
static |
Create a new SelectiveSearchSegmentation class.
|
static |
Create a new color-based strategy.
|
static |
Create a new fill-based strategy.
|
static |
Create a new multiple strategy.
|
static |
Create a new multiple strategy and set one substrategy.
s1 | The first strategy |
|
static |
Create a new multiple strategy and set two substrategies, with equal weights.
s1 | The first strategy |
s2 | The second strategy |
|
static |
Create a new multiple strategy and set three substrategies, with equal weights.
s1 | The first strategy |
s2 | The second strategy |
s3 | The third strategy |
|
static |
Create a new multiple strategy and set four substrategies, with equal weights.
s1 | The first strategy |
s2 | The second strategy |
s3 | The third strategy |
s4 | The fourth strategy |
|
static |
Create a new size-based strategy.
|
static |
Create a new texture-based strategy.
|
static |
|
static |
|
static |
Class implementing the LSC (Linear Spectral Clustering) superpixels.
image | Image to segment |
region_size | Chooses an average superpixel size measured in pixels |
ratio | Chooses the enforcement of the superpixel compactness factor |
The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future computing iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results it is recommended to preprocess color images with a little Gaussian blur using a small 3 x 3 kernel and additional conversion into CIELab color space.
|
static |
Class implementing the LSC (Linear Spectral Clustering) superpixels.
image | Image to segment |
region_size | Chooses an average superpixel size measured in pixels |
ratio | Chooses the enforcement of the superpixel compactness factor |
The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future computing iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results it is recommended to preprocess color images with a little Gaussian blur using a small 3 x 3 kernel and additional conversion into CIELab color space.
|
static |
Class implementing the LSC (Linear Spectral Clustering) superpixels.
image | Image to segment |
region_size | Chooses an average superpixel size measured in pixels |
ratio | Chooses the enforcement of the superpixel compactness factor |
The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future computing iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results it is recommended to preprocess color images with a little Gaussian blur using a small 3 x 3 kernel and additional conversion into CIELab color space.
|
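A hedged C# sketch that follows the preprocessing advice above (light Gaussian blur plus Lab conversion) before creating the LSC object; bgr is a placeholder color Mat:
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.ImgprocModule;
    using OpenCVForUnity.XimgprocModule;

    Mat blurred = new Mat();
    Imgproc.GaussianBlur(bgr, blurred, new Size(3, 3), 0);
    Mat lab = new Mat();
    Imgproc.cvtColor(blurred, lab, Imgproc.COLOR_BGR2Lab);
    SuperpixelLSC lsc = Ximgproc.createSuperpixelLSC(lab, 20, 0.075f);  // region_size, ratio
    lsc.iterate(10);
    Mat contours = new Mat();
    lsc.getLabelContourMask(contours, true);           // superpixel boundaries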
static |
Initializes a SuperpixelSEEDS object.
image_width | Image width. |
image_height | Image height. |
image_channels | Number of channels of the image. |
num_superpixels | Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number. |
num_levels | Number of block levels. The more levels, the more accurate the segmentation, but more memory and CPU time are needed. |
prior | enable 3x3 shape smoothing term if >0. A larger value leads to smoother shapes. prior must be in the range [0, 5]. |
histogram_bins | Number of histogram bins. |
double_step | If true, iterate each block level twice for higher accuracy. |
The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step.
The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively down to the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure.
|
static |
Initializes a SuperpixelSEEDS object.
image_width | Image width. |
image_height | Image height. |
image_channels | Number of channels of the image. |
num_superpixels | Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number. |
num_levels | Number of block levels. The more levels, the more accurate the segmentation, but more memory and CPU time are needed. |
prior | enable 3x3 shape smoothing term if >0. A larger value leads to smoother shapes. prior must be in the range [0, 5]. |
histogram_bins | Number of histogram bins. |
double_step | If true, iterate each block level twice for higher accuracy. |
The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step.
The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively down to the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure.
|
static |
Initializes a SuperpixelSEEDS object.
image_width | Image width. |
image_height | Image height. |
image_channels | Number of channels of the image. |
num_superpixels | Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number. |
num_levels | Number of block levels. The more levels, the more accurate the segmentation, but more memory and CPU time are needed. |
prior | enable 3x3 shape smoothing term if >0. A larger value leads to smoother shapes. prior must be in the range [0, 5]. |
histogram_bins | Number of histogram bins. |
double_step | If true, iterate each block level twice for higher accuracy. |
The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step.
The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively down to the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure.
|
static |
Initializes a SuperpixelSEEDS object.
image_width | Image width. |
image_height | Image height. |
image_channels | Number of channels of the image. |
num_superpixels | Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number. |
num_levels | Number of block levels. The more levels, the more accurate the segmentation, but more memory and CPU time are needed. |
prior | enable 3x3 shape smoothing term if >0. A larger value leads to smoother shapes. prior must be in the range [0, 5]. |
histogram_bins | Number of histogram bins. |
double_step | If true, iterate each block level twice for higher accuracy. |
The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step.
The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively down to the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure.
|
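A minimal C# sketch (img is a placeholder 3-channel Mat; argument order follows the parameter list above):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    SuperpixelSEEDS seeds = Ximgproc.createSuperpixelSEEDS(
        img.width(), img.height(), img.channels(), 400, 4, 2, 5, false);
    seeds.iterate(img, 10);        // more iterations: higher accuracy, more CPU time
    Mat labels = new Mat();
    seeds.getLabels(labels);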
static |
Initialize a SuperpixelSLIC object.
image | Image to segment |
algorithm | Chooses the algorithm variant to use: SLIC segments image using a desired region_size, and in addition SLICO will optimize using adaptive compactness factor, while MSLIC will optimize using manifold methods resulting in more content-sensitive superpixels. |
region_size | Chooses an average superpixel size measured in pixels |
ruler | Chooses the enforcement of the superpixel smoothness factor |
The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended to preprocess color images with a little Gaussian blur using a small 3 x 3 kernel and additional conversion into CIELab color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.
|
static |
Initialize a SuperpixelSLIC object.
image | Image to segment |
algorithm | Chooses the algorithm variant to use: SLIC segments image using a desired region_size, and in addition SLICO will optimize using adaptive compactness factor, while MSLIC will optimize using manifold methods resulting in more content-sensitive superpixels. |
region_size | Chooses an average superpixel size measured in pixels |
ruler | Chooses the enforcement of the superpixel smoothness factor |
The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended to preprocess color images with a little Gaussian blur using a small 3 x 3 kernel and additional conversion into CIELab color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.
|
static |
Initialize a SuperpixelSLIC object.
image | Image to segment |
algorithm | Chooses the algorithm variant to use: SLIC segments image using a desired region_size, and in addition SLICO will optimize using adaptive compactness factor, while MSLIC will optimize using manifold methods resulting in more content-sensitive superpixels. |
region_size | Chooses an average superpixel size measured in pixels |
ruler | Chooses the enforcement of the superpixel smoothness factor |
The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended to preprocess color images with a little Gaussian blur using a small 3 x 3 kernel and additional conversion into CIELab color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.
|
static |
Initialize a SuperpixelSLIC object.
image | Image to segment |
algorithm | Chooses the algorithm variant to use: SLIC segments image using a desired region_size, and in addition SLICO will optimize using adaptive compactness factor, while MSLIC will optimize using manifold methods resulting in more content-sensitive superpixels. |
region_size | Chooses an average superpixel size measured in pixels |
ruler | Chooses the enforcement of the superpixel smoothness factor |
The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended to preprocess color images with a little Gaussian blur using a small 3 x 3 kernel and additional conversion into CIELab color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.
|
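A short C# sketch (img is a placeholder color Mat; the SLIC/SLICO/MSLIC constants are assumed to be exposed on the Ximgproc class as in the Java binding):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    SuperpixelSLIC slic = Ximgproc.createSuperpixelSLIC(img, Ximgproc.SLICO, 25, 10.0f);
    slic.iterate(10);
    slic.enforceLabelConnectivity(25);        // optional clean-up of tiny orphaned labels
    Mat contours = new Mat();
    slic.getLabelContourMask(contours, true);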
static |
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use DTFilter interface to avoid extra computations on initialization stage.
guide | guide image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels. |
src | filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels. |
dst | destination image |
sigmaSpatial | \({\sigma}_H\) parameter in the original article; it is similar to the sigma in the coordinate space in bilateralFilter. |
sigmaColor | \({\sigma}_r\) parameter in the original article; it is similar to the sigma in the color space in bilateralFilter. |
mode | one of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article. |
numIters | optional number of iterations used for filtering, 3 is quite enough. |
bilateralFilter, guidedFilter, amFilter
|
static |
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use DTFilter interface to avoid extra computations on initialization stage.
guide | guide image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels. |
src | filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels. |
dst | destination image |
sigmaSpatial | \({\sigma}_H\) parameter in the original article; it is similar to the sigma in the coordinate space in bilateralFilter. |
sigmaColor | \({\sigma}_r\) parameter in the original article; it is similar to the sigma in the color space in bilateralFilter. |
mode | one of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article. |
numIters | optional number of iterations used for filtering, 3 is quite enough. |
bilateralFilter, guidedFilter, amFilter
|
static |
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use DTFilter interface to avoid extra computations on initialization stage.
guide | guide image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels. |
src | filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels. |
dst | destination image |
sigmaSpatial | \({\sigma}_H\) parameter in the original article; it is similar to the sigma in the coordinate space in bilateralFilter. |
sigmaColor | \({\sigma}_r\) parameter in the original article; it is similar to the sigma in the color space in bilateralFilter. |
mode | one of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article. |
numIters | optional number of iterations used for filtering, 3 is quite enough. |
bilateralFilter, guidedFilter, amFilter
|
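A one-call C# sketch (guide and src are placeholder Mats; DTF_NC is assumed to be exposed as a constant on Ximgproc):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    Mat dst = new Mat();
    Ximgproc.dtFilter(guide, src, dst, 10.0, 25.0, Ximgproc.DTF_NC, 3);  // sigmaSpatial, sigmaColor, mode, numIters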
static |
Smoothes an image using the Edge-Preserving filter.
The function smoothes Gaussian noise as well as salt & pepper noise. For more details about this implementation, please see [ReiWoe18] Reich, S. and Wörgötter, F. and Dellen, B. (2018). A Real-Time Edge-Preserving Denoising Filter. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp, 85-94, 4. DOI: 10.5220/0006509000850094.
src | Source 8-bit 3-channel image. |
dst | Destination image of the same size and type as src. |
d | Diameter of each pixel neighborhood that is used during filtering. Must be greater than or equal to 3. |
threshold | Threshold, which distinguishes between noise, outliers, and data. |
|
static |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.
guide | image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels. |
confidence | confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel. |
dst | destination image. |
sigma_spatial | parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter. |
sigma_luma | parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter. |
sigma_chroma | parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter. |
lambda | smoothness strength parameter for solver. |
num_iter | number of iterations used for solver, 25 is usually enough. |
max_tol | convergence tolerance used for solver. |
For more details about the Fast Bilateral Solver parameters, see the original paper [BarronPoole2016].
|
static |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.
guide | image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels. |
confidence | confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel. |
dst | destination image. |
sigma_spatial | parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter. |
sigma_luma | parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter. |
sigma_chroma | parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter. |
lambda | smoothness strength parameter for solver. |
num_iter | number of iterations used for solver, 25 is usually enough. |
max_tol | convergence tolerance used for solver. |
For more details about the Fast Bilateral Solver parameters, see the original paper [BarronPoole2016].
|
static |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.
guide | image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels. |
confidence | confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel. |
dst | destination image. |
sigma_spatial | parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter. |
sigma_luma | parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter. |
sigma_chroma | parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter. |
lambda | smoothness strength parameter for solver. |
num_iter | number of iterations used for solver, 25 is usually enough. |
max_tol | convergence tolerance used for solver. |
For more details about the Fast Bilateral Solver parameters, see the original paper [BarronPoole2016].
|
static |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.
guide | image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels. |
confidence | confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel. |
dst | destination image. |
sigma_spatial | parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter. |
sigma_luma | parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter. |
sigma_chroma | parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter. |
lambda | smoothness strength parameter for solver. |
num_iter | number of iterations used for solver, 25 is usually enough. |
max_tol | convergence tolerance used for solver. |
For more details about the Fast Bilateral Solver parameters, see the original paper [BarronPoole2016].
|
static |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.
guide | image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels. |
confidence | confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel. |
dst | destination image. |
sigma_spatial | parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter. |
sigma_luma | parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter. |
sigma_chroma | parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter. |
lambda | smoothness strength parameter for solver. |
num_iter | number of iterations used for solver, 25 is usually enough. |
max_tol | convergence tolerance used for solver. |
For more details about the Fast Bilateral Solver parameters, see the original paper [BarronPoole2016].
|
static |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.
guide | image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels. |
confidence | confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel. |
dst | destination image. |
sigma_spatial | parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter. |
sigma_luma | parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter. |
sigma_chroma | parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter. |
lambda | smoothness strength parameter for solver. |
num_iter | number of iterations used for solver, 25 is usually enough. |
max_tol | convergence tolerance used for solver. |
For more details about the Fast Bilateral Solver parameters, see the original paper [BarronPoole2016].
|
static |
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.
guide | image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels. |
confidence | confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel. |
dst | destination image. |
sigma_spatial | parameter, that is similar to spatial space sigma (bandwidth) in bilateralFilter. |
sigma_luma | parameter, that is similar to luma space sigma (bandwidth) in bilateralFilter. |
sigma_chroma | parameter, that is similar to chroma space sigma (bandwidth) in bilateralFilter. |
lambda | smoothness strength parameter for solver. |
num_iter | number of iterations used for solver, 25 is usually enough. |
max_tol | convergence tolerance used for solver. |
For more details about the Fast Bilateral Solver parameters, see the original paper [BarronPoole2016].
|
static |
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.
guide | image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels. |
dst | destination image. |
lambda | parameter defining the amount of regularization |
sigma_color | parameter, that is similar to color space sigma in bilateralFilter. |
lambda_attenuation | internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts. |
num_iter | number of iterations used for filtering, 3 is usually enough. |
|
static |
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.
guide | image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels. |
dst | destination image. |
lambda | parameter defining the amount of regularization |
sigma_color | parameter, that is similar to color space sigma in bilateralFilter. |
lambda_attenuation | internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts. |
num_iter | number of iterations used for filtering, 3 is usually enough. |
|
static |
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.
guide | image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels. |
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels. |
dst | destination image. |
lambda | parameter defining the amount of regularization |
sigma_color | parameter, that is similar to color space sigma in bilateralFilter. |
lambda_attenuation | internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts. |
num_iter | number of iterations used for filtering, 3 is usually enough. |
|
static |
Calculates 2D Fast Hough transform of an image.
dst | The destination image, result of transformation. |
src | The source (input) image. |
dstMatDepth | The depth of destination image |
op | The operation to be applied, see cv::HoughOp |
angleRange | The part of Hough space to calculate, see cv::AngleRangeOption |
makeSkew | Specifies whether or not to do image skewing, see cv::HoughDeskewOption |
The function calculates the fast Hough transform for full, half or quarter range of angles.
|
static |
Calculates 2D Fast Hough transform of an image.
dst | The destination image, result of transformation. |
src | The source (input) image. |
dstMatDepth | The depth of destination image |
op | The operation to be applied, see cv::HoughOp |
angleRange | The part of Hough space to calculate, see cv::AngleRangeOption |
makeSkew | Specifies whether or not to do image skewing, see cv::HoughDeskewOption |
The function calculates the fast Hough transform for full, half or quarter range of angles.
|
static |
Calculates 2D Fast Hough transform of an image.
dst | The destination image, result of transformation. |
src | The source (input) image. |
dstMatDepth | The depth of destination image |
op | The operation to be applied, see cv::HoughOp |
angleRange | The part of Hough space to calculate, see cv::AngleRangeOption |
makeSkew | Specifies whether or not to do image skewing, see cv::HoughDeskewOption |
The function calculates the fast Hough transform for full, half or quarter range of angles.
|
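A hedged C# sketch (edges is a placeholder binary edge Mat, e.g. a Canny output; the ARO_/FHT_/HDO_ constants are assumed to be exposed on Ximgproc as in the Java binding):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    Mat hough = new Mat();
    Ximgproc.FastHoughTransform(edges, hough, CvType.CV_32S,
        Ximgproc.ARO_315_135, Ximgproc.FHT_ADD, Ximgproc.HDO_DESKEW);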
static |
Calculates 2D Fast Hough transform of an image.
dst | The destination image, result of transformation. |
src | The source (input) image. |
dstMatDepth | The depth of destination image |
op | The operation to be applied, see cv::HoughOp |
angleRange | The part of Hough space to calculate, see cv::AngleRangeOption |
makeSkew | Specifies whether or not to do image skewing, see cv::HoughDeskewOption |
The function calculates the fast Hough transform for full, half or quarter range of angles.
|
static |
Finds ellipses rapidly in an image using projective invariant pruning.
The function detects ellipses in images using projective invariant pruning. For more details about this implementation, please see [jia2017fast] Jia, Qi et al, (2017). A Fast Ellipse Detector using Projective Invariant Pruning. IEEE Transactions on Image Processing.
image | input image; can be grayscale or color. |
ellipses | output vector of found ellipses; each ellipse is encoded as the floats $x, y, a, b, radius, score$. |
scoreThreshold | float, the threshold of ellipse score. |
reliabilityThreshold | float, the threshold of reliability. |
centerDistanceThreshold | float, the threshold of center distance. |
|
static |
Finds ellipses rapidly in an image using projective invariant pruning.
The function detects ellipses in images using projective invariant pruning. For more details about this implementation, please see [jia2017fast] Jia, Qi et al, (2017). A Fast Ellipse Detector using Projective Invariant Pruning. IEEE Transactions on Image Processing.
image | input image; can be grayscale or color. |
ellipses | output vector of found ellipses; each ellipse is encoded as the floats $x, y, a, b, radius, score$. |
scoreThreshold | float, the threshold of ellipse score. |
reliabilityThreshold | float, the threshold of reliability. |
centerDistanceThreshold | float, the threshold of center distance. |
|
static |
Finds ellipses rapidly in an image using projective invariant pruning.
The function detects ellipses in images using projective invariant pruning. For more details about this implementation, please see [jia2017fast] Jia, Qi et al, (2017). A Fast Ellipse Detector using Projective Invariant Pruning. IEEE Transactions on Image Processing.
image | input image; can be grayscale or color. |
ellipses | output vector of found ellipses; each ellipse is encoded as the floats $x, y, a, b, radius, score$. |
scoreThreshold | float, the threshold of ellipse score. |
reliabilityThreshold | float, the threshold of reliability. |
centerDistanceThreshold | float, the threshold of center distance. |
|
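A minimal C# sketch using the default thresholds from the parameter list above (img is a placeholder Mat; the method name is assumed to mirror the OpenCV Java binding):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    Mat ellipses = new Mat();   // one row per ellipse: x, y, a, b, radius, score
    Ximgproc.findEllipses(img, ellipses, 0.7f, 0.5f, 0.05f);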
static |
Finds ellipses rapidly in an image using projective invariant pruning.
The function detects ellipses in images using projective invariant pruning. For more details about this implementation, please see [jia2017fast] Jia, Qi et al, (2017). A Fast Ellipse Detector using Projective Invariant Pruning. IEEE Transactions on Image Processing.
image | input image; can be grayscale or color. |
ellipses | output vector of found ellipses; each ellipse is encoded as the floats $x, y, a, b, radius, score$. |
scoreThreshold | float, the threshold of ellipse score. |
reliabilityThreshold | float, the threshold of reliability. |
centerDistanceThreshold | float, the threshold of center distance. |
|
static |
Fourier descriptors for planar closed curves.
For more details about this implementation, please see [PersoonFu1977]
src | contour of type vector<Point>, vector<Point2f> or vector<Point2d> |
dst | Mat of type CV_64FC2 with nbElt rows (to be verified) |
nbElt | number of rows in dst, or getOptimalDFTSize rows if nbElt=-1 |
nbFD | number of FDs returned in dst: dst = [FD(1...nbFD/2) FD(nbFD/2-nbElt+1...nbElt)] |
|
static |
Fourier descriptors for planar closed curves.
For more details about this implementation, please see [PersoonFu1977]
src | contour of type vector<Point>, vector<Point2f> or vector<Point2d> |
dst | Mat of type CV_64FC2 with nbElt rows (to be verified) |
nbElt | number of rows in dst, or getOptimalDFTSize rows if nbElt=-1 |
nbFD | number of FDs returned in dst: dst = [FD(1...nbFD/2) FD(nbFD/2-nbElt+1...nbElt)] |
|
static |
Fourier descriptors for planar closed curves.
For more details about this implementation, please see [PersoonFu1977]
src | contour of type vector<Point>, vector<Point2f> or vector<Point2d> |
dst | Mat of type CV_64FC2 with nbElt rows (to be verified) |
nbElt | number of rows in dst, or getOptimalDFTSize rows if nbElt=-1 |
nbFD | number of FDs returned in dst: dst = [FD(1...nbFD/2) FD(nbFD/2-nbElt+1...nbElt)] |
|
static |
Function for creating a disparity map visualization (clamped CV_8U image)
src | input disparity map (CV_16S depth) |
dst | output visualization |
scale | disparity map will be multiplied by this value for visualization |
|
static |
Function for creating a disparity map visualization (clamped CV_8U image)
src | input disparity map (CV_16S depth) |
dst | output visualization |
scale | disparity map will be multiplied by this value for visualization |
|
static |
Applies X Deriche filter to an image.
For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf
op | Source 8-bit or 16-bit, 1-channel or 3-channel image. |
dst | result CV_32FC image with the same number of channels as op. |
alpha | double, see the paper |
omega | double, see the paper |
|
static |
Applies Y Deriche filter to an image.
For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf
op | Source 8-bit or 16-bit, 1-channel or 3-channel image. |
dst | result CV_32FC image with the same number of channels as op. |
alpha | double, see the paper |
omega | double, see the paper |
|
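A short C# sketch computing both Deriche gradients (img is a placeholder 8-bit Mat; alpha and omega values are illustrative only):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    Mat gx = new Mat(), gy = new Mat();      // CV_32F gradient images
    Ximgproc.GradientDericheX(img, gx, 1.0, 0.05);
    Ximgproc.GradientDericheY(img, gy, 1.0, 0.05);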
static |
Simple one-line (Fast) Guided Filter call.
If you have multiple images to filter with the same guided image then use GuidedFilter interface to avoid extra computations on initialization stage.
guide | guide image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used. |
src | filtering image with any number of channels. |
dst | output image. |
radius | radius of Guided Filter. |
eps | regularization term of Guided Filter. \({eps}^2\) is similar to the sigma in the color space in bilateralFilter. |
dDepth | optional depth of the output image. |
scale | subsample factor of Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale==0.5 shrinks the image by 2x inside the filter). |
|
static |
Simple one-line (Fast) Guided Filter call.
If you have multiple images to filter with the same guided image then use GuidedFilter interface to avoid extra computations on initialization stage.
guide | guide image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used. |
src | filtering image with any number of channels. |
dst | output image. |
radius | radius of Guided Filter. |
eps | regularization term of Guided Filter. \({eps}^2\) is similar to the sigma in the color space in bilateralFilter. |
dDepth | optional depth of the output image. |
scale | subsample factor of Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale==0.5 shrinks the image by 2x inside the filter). |
|
static |
Simple one-line (Fast) Guided Filter call.
If you have multiple images to filter with the same guided image then use GuidedFilter interface to avoid extra computations on initialization stage.
guide | guide image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used. |
src | filtering image with any number of channels. |
dst | output image. |
radius | radius of Guided Filter. |
eps | regularization term of Guided Filter. \({eps}^2\) is similar to the sigma in the color space in bilateralFilter. |
dDepth | optional depth of the output image. |
scale | subsample factor of Fast Guided Filter; use a scale less than 1 to speed up computation with almost no visible degradation (e.g. scale==0.5 shrinks the image by 2x inside the filter). |
|
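A one-call C# sketch (guide and src are placeholder Mats; the eps value is illustrative, roughly 10^2 for 8-bit data):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    Mat dst = new Mat();
    Ximgproc.guidedFilter(guide, src, dst, 8, 100.0);   // radius = 8, eps = 100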
static |
Applies the joint bilateral filter to an image.
joint | Joint 8-bit or floating-point, 1-channel or 3-channel image. |
src | Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as joint image. |
dst | Destination image of the same size and type as src . |
d | Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace . |
sigmaColor | Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color. |
sigmaSpace | Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace . |
borderType |
|
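A minimal C# sketch (joint and src are placeholder Mats of the same size and depth):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    Mat dst = new Mat();
    Ximgproc.jointBilateralFilter(joint, src, dst, -1, 25.0, 10.0);  // d <= 0: neighborhood derived from sigmaSpace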
static |
Applies the joint bilateral filter to an image.
joint | Joint 8-bit or floating-point, 1-channel or 3-channel image. |
src | Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as joint image. |
dst | Destination image of the same size and type as src . |
d | Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace . |
sigmaColor | Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color. |
sigmaSpace | Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace . |
borderType |
|
static |
Global image smoothing via L0 gradient minimization.
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth. |
dst | destination image. |
lambda | parameter defining the smooth term weight. |
kappa | parameter defining the increasing factor of the weight of the gradient data term. |
For more details about L0 Smoother, see the original paper [xu2011image].
|
static |
Global image smoothing via L0 gradient minimization.
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth. |
dst | destination image. |
lambda | parameter defining the smooth term weight. |
kappa | parameter defining the increasing factor of the weight of the gradient data term. |
For more details about L0 Smoother, see the original paper [xu2011image].
|
static |
Global image smoothing via L0 gradient minimization.
src | source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth. |
dst | destination image. |
lambda | parameter defining the smooth term weight. |
kappa | parameter defining the increasing factor of the weight of the gradient data term. |
For more details about L0 Smoother, see the original paper [xu2011image].
|
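A one-call C# sketch using the default lambda and kappa from the paper (src is a placeholder Mat):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    Mat dst = new Mat();
    Ximgproc.l0Smooth(src, dst, 0.02, 2.0);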
static |
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.
The function transforms a grayscale image to a binary image according to the formulae:
\[dst(x,y) = \fork{\texttt{maxValue}}{if \(src(x,y) > T(x,y)\)}{0}{otherwise}\]
\[dst(x,y) = \fork{0}{if \(src(x,y) > T(x,y)\)}{\texttt{maxValue}}{otherwise}\]
where \(T(x,y)\) is a threshold calculated individually for each pixel. The threshold value \(T(x, y)\) is determined based on the binarization method chosen. For classic Niblack, it is the mean minus \( k \) times the standard deviation of the \(\texttt{blockSize} \times \texttt{blockSize}\) neighborhood of \((x, y)\).
The function can't process the image in-place.
_src | Source 8-bit single-channel image. |
_dst | Destination image of the same size and the same type as src. |
maxValue | Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types. |
type | Thresholding type, see cv::ThresholdTypes. |
blockSize | Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on. |
k | The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean. |
binarizationMethod | Binarization method to use. By default, Niblack's technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods. |
r | The user-adjustable parameter used by Sauvola's technique. This is the dynamic range of standard deviation. |
threshold, adaptiveThreshold
|
static |
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.
The function transforms a grayscale image to a binary image according to the formulae:
\[dst(x,y) = \fork{\texttt{maxValue}}{if \(src(x,y) > T(x,y)\)}{0}{otherwise}\]
\[dst(x,y) = \fork{0}{if \(src(x,y) > T(x,y)\)}{\texttt{maxValue}}{otherwise}\]
where \(T(x,y)\) is a threshold calculated individually for each pixel. The threshold value \(T(x, y)\) is determined based on the binarization method chosen. For classic Niblack, it is the mean minus \( k \) times the standard deviation of the \(\texttt{blockSize} \times \texttt{blockSize}\) neighborhood of \((x, y)\).
The function can't process the image in-place.
_src | Source 8-bit single-channel image. |
_dst | Destination image of the same size and the same type as src. |
maxValue | Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types. |
type | Thresholding type, see cv::ThresholdTypes. |
blockSize | Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on. |
k | The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean. |
binarizationMethod | Binarization method to use. By default, Niblack's technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods. |
r | The user-adjustable parameter used by Sauvola's technique. This is the dynamic range of standard deviation. |
threshold, adaptiveThreshold
|
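A short C# sketch of classic Niblack binarization (gray is a placeholder 8-bit single-channel Mat; the BINARIZATION_* constants are assumed to be exposed on Ximgproc):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.ImgprocModule;
    using OpenCVForUnity.XimgprocModule;

    Mat bin = new Mat();
    Ximgproc.niBlackThreshold(gray, bin, 255, Imgproc.THRESH_BINARY, 25, -0.2,
        Ximgproc.BINARIZATION_NIBLACK);   // blockSize = 25, k = -0.2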
static |
Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.
The function transforms a grayscale image to a binary image according to the formulae:
\[dst(x,y) = \fork{\texttt{maxValue}}{if \(src(x,y) > T(x,y)\)}{0}{otherwise}\]
\[dst(x,y) = \fork{0}{if \(src(x,y) > T(x,y)\)}{\texttt{maxValue}}{otherwise}\]
where \(T(x,y)\) is a threshold calculated individually for each pixel. The threshold value \(T(x, y)\) is determined based on the binarization method chosen. For classic Niblack, it is the mean minus \( k \) times the standard deviation of the \(\texttt{blockSize} \times \texttt{blockSize}\) neighborhood of \((x, y)\).
The function can't process the image in-place.
_src | Source 8-bit single-channel image. |
_dst | Destination image of the same size and the same type as src. |
maxValue | Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types. |
type | Thresholding type, see cv::ThresholdTypes. |
blockSize | Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on. |
k | The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean. |
binarizationMethod | Binarization method to use. By default, Niblack's technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods. |
r | The user-adjustable parameter used by Sauvola's technique. This is the dynamic range of standard deviation. |
threshold, adaptiveThreshold
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
Calculates the conjugate of a quaternion image.
qimg | quaternion image. |
qcimg | conjugate of qimg |
|
static |
Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.
img | quaternion image. |
qimg | quaternion image in dual space. |
flags | Operation flags; only the DFT_INVERSE flag is supported (omit it for the forward transform). |
sideLeft | true the hypercomplex exponential is to be multiplied on the left (false on the right ). |
Calculates the per-element quaternion product of two arrays.
src1 | quaternion image. |
src2 | quaternion image. |
dst | product dst(I)=src1(I) . src2(I) |
Divides each element of a quaternion image by its modulus.
qimg | quaternion image. |
qnimg | quaternion image with each element divided by its modulus (unit-modulus output). |
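A rough sketch of how these quaternion-image helpers might be chained (assuming bindings named Ximgproc.qdft, Ximgproc.qconj, Ximgproc.qmultiply and Ximgproc.qunitary, and assuming the quaternion image is a 4-channel CV_64FC4 Mat; these assumptions are not stated in this section):

    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    public static class QuaternionFilteringExample
    {
        public static void Run(Mat qimg) // qimg: quaternion image (assumed CV_64FC4)
        {
            // Forward quaternion DFT (flags = 0); pass Core.DFT_INVERSE for the inverse transform.
            Mat qspectrum = new Mat();
            Ximgproc.qdft(qimg, qspectrum, 0, true);

            // Per-element product of the image with its conjugate...
            Mat qconjImg = new Mat();
            Ximgproc.qconj(qimg, qconjImg);

            Mat qprod = new Mat();
            Ximgproc.qmultiply(qimg, qconjImg, qprod);

            // ...then normalize each element to unit modulus.
            Mat qunit = new Mat();
            Ximgproc.qunitary(qprod, qunit);
        }
    }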
Calculate Radon Transform of an image.
src | The source (input) image. |
dst | The destination image, result of transformation. |
theta | Angle resolution of the transform in degrees. |
start_angle | Start angle of the transform in degrees. |
end_angle | End angle of the transform in degrees. |
crop | Crop the source image into a circle. |
norm | Normalize the output Mat to grayscale and convert type to CV_8U |
This function calculates the Radon Transform of a given image in any angular range. See https://engineering.purdue.edu/~malcolm/pct/CTI_Ch03.pdf for details. If the input type is CV_8U, the output will be CV_32S; if the input type is CV_32F or CV_64F, the output will be CV_64F. The output size will be num_of_integral x src_diagonal_length. If crop is selected, the input image is cropped to a square and then to a circle, and the output size will be num_of_integral x min_edge.
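A usage sketch (assuming the binding Ximgproc.RadonTransform with the (src, dst, theta, start_angle, end_angle, crop, norm) argument order of cv::ximgproc::RadonTransform; the path is illustrative):

    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.ImgcodecsModule;
    using OpenCVForUnity.XimgprocModule;

    public static class RadonTransformExample
    {
        public static Mat Sinogram(string path)
        {
            // CV_8U input; with norm = true the output is normalized and converted to CV_8U.
            Mat src = Imgcodecs.imread(path, Imgcodecs.IMREAD_GRAYSCALE);
            Mat sinogram = new Mat();

            // 1-degree angular resolution over 0..180 degrees, cropping the input to a circle.
            Ximgproc.RadonTransform(src, sinogram, 1, 0, 180, true, true);

            return sinogram;
        }
    }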
|
static |
Function for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16.
src_path | path to the image, containing ground-truth disparity map |
dst | output disparity map, CV_16S depth |
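A small sketch (assuming the binding Ximgproc.readGT(string, Mat) returns 0 on success, as the underlying cv::ximgproc::readGT does; the path is illustrative):

    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    public static class GroundTruthExample
    {
        public static Mat LoadDisparity(string gtPath)
        {
            Mat gt = new Mat();
            if (Ximgproc.readGT(gtPath, gt) != 0)
                return null; // file missing or unsupported format

            // The loaded CV_16S map is scaled by 16; undo the scaling for real-valued disparities.
            Mat disparity = new Mat();
            gt.convertTo(disparity, CvType.CV_32F, 1.0 / 16.0);
            return disparity;
        }
    }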
|
static |
Applies the rolling guidance filter to an image.
For more details, please see [zhang2014rolling]
src | Source 8-bit or floating-point, 1-channel or 3-channel image. |
dst | Destination image of the same size and type as src. |
d | Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace . |
sigmaColor | Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color. |
sigmaSpace | Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace . |
numOfIter | Number of iterations of joint edge-preserving filtering applied on the source image. |
borderType | Pixel extrapolation method used at the image borders, see cv::BorderTypes. |
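A minimal sketch (assuming the binding Ximgproc.rollingGuidanceFilter and the OpenCV default values d = -1, sigmaColor = 25, sigmaSpace = 3, numOfIter = 4):

    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    public static class RollingGuidanceExample
    {
        public static Mat Smooth(Mat src)
        {
            Mat dst = new Mat();

            // d = -1 derives the neighborhood size from sigmaSpace; four iterations of
            // joint edge-preserving filtering remove small-scale texture while keeping edges.
            Ximgproc.rollingGuidanceFilter(src, dst, -1, 25, 3, 4);

            return dst;
        }
    }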
Applies a binary blob thinning operation, to achieve a skeletization of the input image.
The function transforms a binary blob image into a skeletized form using the technique of Zhang-Suen.
src | Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values. |
dst | Destination image of the same size and the same type as src. The function can work in-place. |
thinningType | Value that defines which thinning algorithm should be used. See cv::ximgproc::ThinningTypes |
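A usage sketch (assuming the binding Ximgproc.thinning and the constant Ximgproc.THINNING_ZHANGSUEN, mirroring cv::ximgproc::thinning and cv::ximgproc::ThinningTypes):

    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.ImgprocModule;
    using OpenCVForUnity.XimgprocModule;

    public static class ThinningExample
    {
        public static Mat Skeletonize(Mat gray)
        {
            // The input must be an 8-bit binary image with blobs set to 255.
            Mat binary = new Mat();
            Imgproc.threshold(gray, binary, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);

            // Zhang-Suen thinning (the default); THINNING_GUOHALL selects the alternative algorithm.
            Mat skeleton = new Mat();
            Ximgproc.thinning(binary, skeleton, Ximgproc.THINNING_ZHANGSUEN);

            return skeleton;
        }
    }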
Transforms a contour.
src | contour or Fourier Descriptors if fd is true |
t | transform Mat given by estimateTransformation |
dst | Mat of type CV_64FC2 and nbElt rows |
fdContour | If true, src contains Fourier descriptors; if false, src is a contour. |
|
static |
Applies weighted median filter to an image.
For more details about this implementation, please see [zhang2014100+]
joint | Joint 8-bit, 1-channel or 3-channel image. |
src | Source 8-bit or floating-point, 1-channel or 3-channel image. |
dst | Destination image. |
r | Radius of filtering kernel, should be a positive integer. |
sigma | Filter range standard deviation for the joint image. |
weightType | The type of weight definition, see WMFWeightType. |
mask | A 0-1 mask that has the same size as src. This mask is used to ignore the effect of some pixels. If the pixel value on the mask is 0, the pixel will be ignored when maintaining the joint-histogram. This is useful for applications like optical flow occlusion handling. |
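A brief sketch (assuming the binding Ximgproc.weightedMedianFilter with an overload taking (joint, src, dst, r, sigma); the parameter values are illustrative):

    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.XimgprocModule;

    public static class WeightedMedianExample
    {
        public static Mat Filter(Mat guide, Mat src)
        {
            Mat dst = new Mat();

            // Radius-7 weighted median guided by 'guide'; sigma is the range standard
            // deviation computed on the joint (guide) image.
            Ximgproc.weightedMedianFilter(guide, src, dst, 7, 25.5);

            return dst;
        }
    }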