OpenCV for Unity 2.6.5
Enox Software / Please refer to the official OpenCV documentation ( http://docs.opencv.org/4.10.0/index.html ) for details of each method's arguments.
►CApplicationException | |
COpenCVForUnity.CoreModule.CvException | The exception that is thrown by OpenCVForUnity |
COpenCVForUnity.ArucoModule.Aruco | |
COpenCVForUnity.UnityUtils.ARUtils | AR utilities |
COpenCVForUnity.BgsegmModule.Bgsegm | |
COpenCVForUnity.BioinspiredModule.Bioinspired | |
COpenCVForUnity.UnityUtils.MOT.ByteTrack.BYTETracker | A C# implementation of ByteTrack that does not include an object detection algorithm. The implementation is based on "ByteTrack-cpp" (https://github.com/derpda/ByteTrack-cpp/). Only the tracking algorithm is implemented; any object detection algorithm can easily be combined with it. Some code has been modified to obtain the same processing results as the original code below: https://github.com/ifzhang/ByteTrack/tree/main/deploy/ncnn/cpp https://github.com/ifzhang/ByteTrack/tree/main/yolox/tracker |
COpenCVForUnity.Calib3dModule.Calib3d | |
COpenCVForUnity.UtilsModule.Converters | |
COpenCVForUnity.CoreModule.Core | |
COpenCVForUnity.CoreModule.CvType | |
COpenCVForUnity.UnityUtils.DebugMatUtils | |
COpenCVForUnity.DnnModule.Dnn | |
COpenCVForUnity.Dnn_superresModule.Dnn_superres | |
COpenCVForUnity.FaceModule.Face | |
COpenCVForUnity.Features2dModule.Features2d | |
COpenCVForUnity.UnityUtils.Helper.FpsManager | |
►CIComparable | |
COpenCVForUnity.UnityUtils.Vec10d | 10-Vector struct of double [CvType.CV_64FC10, Moments] |
COpenCVForUnity.UnityUtils.Vec2b | 2-Vector struct of byte [CvType.CV_8UC2] |
COpenCVForUnity.UnityUtils.Vec2c | 2-Vector struct of sbyte [CvType.CV_8SC2] |
COpenCVForUnity.UnityUtils.Vec2d | 2-Vector struct of double [CvType.CV_64FC2, Point] |
COpenCVForUnity.UnityUtils.Vec2f | 2-Vector struct of float [CvType.CV_32FC2] |
COpenCVForUnity.UnityUtils.Vec2i | 2-Vector struct of int [CvType.CV_32SC2, Range] |
COpenCVForUnity.UnityUtils.Vec2s | 2-Vector struct of short [CvType.CV_16SC2] |
COpenCVForUnity.UnityUtils.Vec2w | 2-Vector struct of ushort [CvType.CV_16UC2, Size] |
COpenCVForUnity.UnityUtils.Vec3b | 3-Vector struct of byte [CvType.CV_8UC3] |
COpenCVForUnity.UnityUtils.Vec3c | 3-Vector struct of sbyte [CvType.CV_8SC3] |
COpenCVForUnity.UnityUtils.Vec3d | 3-Vector struct of double [CvType.CV_64FC3, Point3] |
COpenCVForUnity.UnityUtils.Vec3f | 3-Vector struct of float [CvType.CV_32FC3] |
COpenCVForUnity.UnityUtils.Vec3i | 3-Vector struct of int [CvType.CV_32SC3] |
COpenCVForUnity.UnityUtils.Vec3s | 3-Vector struct of short [CvType.CV_16SC3] |
COpenCVForUnity.UnityUtils.Vec3w | 3-Vector struct of ushort [CvType.CV_16UC3] |
COpenCVForUnity.UnityUtils.Vec4b | 4-Vector struct of byte [CvType.CV_8UC4] |
COpenCVForUnity.UnityUtils.Vec4c | 4-Vector struct of sbyte [CvType.CV_8SC4] |
COpenCVForUnity.UnityUtils.Vec4d | 4-Vector struct of double [CvType.CV_64FC4] |
COpenCVForUnity.UnityUtils.Vec4f | 4-Vector struct of float [CvType.CV_32FC4, DMatch] |
COpenCVForUnity.UnityUtils.Vec4i | 4-Vector struct of int [CvType.CV_32SC4, Rect] |
COpenCVForUnity.UnityUtils.Vec4s | 4-Vector struct of short [CvType.CV_16SC4] |
COpenCVForUnity.UnityUtils.Vec4w | 4-Vector struct of ushort [CvType.CV_16UC4, Rect2d, Scalar] |
COpenCVForUnity.UnityUtils.Vec5d | 5-Vector struct of double [CvType.CV_64FC5, RotatedRect] |
COpenCVForUnity.UnityUtils.Vec5f | 5-Vector struct of float [CvType.CV_32FC5, MatOfRotatedRect] |
COpenCVForUnity.UnityUtils.Vec6f | 6-Vector struct of float [CvType.CV_32FC6, KeyPoint] |
COpenCVForUnity.UnityUtils.Vec7d | 7-Vector struct of double [CvType.CV_64FC7] |
COpenCVForUnity.UnityUtils.Vec7f | 7-Vector struct of float [CvType.CV_32FC7, KeyPoint] |
►COpenCVForUnity.UnityUtils.MOT.ByteTrack.IDetectionBase | |
COpenCVForUnity.UnityUtils.MOT.ByteTrack.Detection | |
►CIDisposable | |
►COpenCVForUnity.DisposableObject | |
►COpenCVForUnity.DisposableOpenCVObject | |
COpenCVForUnity.ArucoModule.EstimateParameters | Pose estimation parameters |
COpenCVForUnity.BgsegmModule.BackgroundSubtractorLSBPDesc | This is for calculation of the LSBP descriptors |
COpenCVForUnity.Calib3dModule.UsacParams | |
►COpenCVForUnity.CoreModule.Algorithm | This is a base class for all more or less complex algorithms in OpenCV |
COpenCVForUnity.BgsegmModule.SyntheticSequenceGenerator | Synthetic frame sequence generator for testing background subtraction algorithms |
COpenCVForUnity.BioinspiredModule.Retina | Class which allows the Gipsa/Listic Labs model to be used with OpenCV |
COpenCVForUnity.BioinspiredModule.RetinaFastToneMapping | Wrapper class which allows the tone mapping algorithm of Meylan&al(2007) to be used with OpenCV |
COpenCVForUnity.BioinspiredModule.TransientAreasSegmentationModule | Class which provides a transient/moving areas segmentation module |
►COpenCVForUnity.Calib3dModule.StereoMatcher | The base class for stereo correspondence algorithms |
COpenCVForUnity.Calib3dModule.StereoBM | Class for computing stereo correspondence using the block matching algorithm, introduced and contributed to OpenCV by K. Konolige |
COpenCVForUnity.Calib3dModule.StereoSGBM | The class implements the modified H. Hirschmuller algorithm [HH08] that differs from the original one as follows: |
COpenCVForUnity.DnnModule.Layer | This interface class allows building new Layers, which are the building blocks of networks |
COpenCVForUnity.FaceModule.BIF | |
►COpenCVForUnity.FaceModule.FaceRecognizer | Abstract base class for all face recognition models |
►COpenCVForUnity.FaceModule.BasicFaceRecognizer | |
COpenCVForUnity.FaceModule.EigenFaceRecognizer | |
COpenCVForUnity.FaceModule.FisherFaceRecognizer | |
COpenCVForUnity.FaceModule.LBPHFaceRecognizer | |
►COpenCVForUnity.FaceModule.Facemark | Abstract base class for all facemark models |
COpenCVForUnity.FaceModule.FacemarkKazemi | |
►COpenCVForUnity.FaceModule.FacemarkTrain | Abstract base class for trainable facemark models |
COpenCVForUnity.FaceModule.FacemarkAAM | |
COpenCVForUnity.FaceModule.FacemarkLBF | |
COpenCVForUnity.FaceModule.MACE | Minimum Average Correlation Energy Filter useful for authentication with (cancellable) biometrical features. (does not need many positives to train (10-50), and no negatives at all, also robust to noise/salting) |
►COpenCVForUnity.Features2dModule.DescriptorMatcher | Abstract base class for matching keypoint descriptors |
COpenCVForUnity.Features2dModule.BFMatcher | Brute-force descriptor matcher |
COpenCVForUnity.Features2dModule.FlannBasedMatcher | Flann-based descriptor matcher |
►COpenCVForUnity.Features2dModule.Feature2D | Abstract base class for 2D image feature detectors and descriptor extractors |
COpenCVForUnity.Features2dModule.AKAZE | Class implementing the AKAZE keypoint detector and descriptor extractor, described in [ANB13] |
COpenCVForUnity.Features2dModule.AffineFeature | Class for implementing the wrapper which makes detectors and extractors to be affine invariant, described as ASIFT in [YM11] |
COpenCVForUnity.Features2dModule.AgastFeatureDetector | Wrapping class for feature detection using the AGAST method. : |
COpenCVForUnity.Features2dModule.BRISK | Class implementing the BRISK keypoint detector and descriptor extractor, described in [LCS11] |
COpenCVForUnity.Features2dModule.FastFeatureDetector | Wrapping class for feature detection using the FAST method. : |
COpenCVForUnity.Features2dModule.GFTTDetector | Wrapping class for feature detection using the goodFeaturesToTrack function. : |
COpenCVForUnity.Features2dModule.KAZE | Class implementing the KAZE keypoint detector and descriptor extractor, described in [ABD12] |
COpenCVForUnity.Features2dModule.MSER | Maximally stable extremal region extractor |
COpenCVForUnity.Features2dModule.ORB | Class implementing the ORB (oriented BRIEF) keypoint detector and descriptor extractor |
COpenCVForUnity.Features2dModule.SIFT | Class for extracting keypoints and computing descriptors using the Scale Invariant Feature Transform (SIFT) algorithm by D. Lowe [Lowe04] |
COpenCVForUnity.Features2dModule.SimpleBlobDetector | Class for extracting blobs from an image. : |
►COpenCVForUnity.Xfeatures2dModule.AffineFeature2D | Class implementing affine adaptation for key points |
COpenCVForUnity.Xfeatures2dModule.TBMR | Class implementing the Tree Based Morse Regions (TBMR) as described in [Najman2014] extended with scaled extraction ability |
COpenCVForUnity.Xfeatures2dModule.BEBLID | Class implementing BEBLID (Boosted Efficient Binary Local Image Descriptor), described in [Suarez2020BEBLID] |
COpenCVForUnity.Xfeatures2dModule.BoostDesc | Class implementing BoostDesc (Learning Image Descriptors with Boosting), described in [Trzcinski13a] and [Trzcinski13b] |
COpenCVForUnity.Xfeatures2dModule.BriefDescriptorExtractor | Class for computing BRIEF descriptors described in [calonder2010] |
COpenCVForUnity.Xfeatures2dModule.DAISY | Class implementing DAISY descriptor, described in [Tola10] |
COpenCVForUnity.Xfeatures2dModule.FREAK | Class implementing the FREAK (Fast Retina Keypoint) keypoint descriptor, described in [AOV12] |
COpenCVForUnity.Xfeatures2dModule.HarrisLaplaceFeatureDetector | Class implementing the Harris-Laplace feature detector as described in [Mikolajczyk2004] |
COpenCVForUnity.Xfeatures2dModule.LATCH | |
COpenCVForUnity.Xfeatures2dModule.LUCID | Class implementing the locally uniform comparison image descriptor, described in [LUCID] |
COpenCVForUnity.Xfeatures2dModule.MSDDetector | Class implementing the MSD (Maximal Self-Dissimilarity) keypoint detector, described in [Tombari14] |
COpenCVForUnity.Xfeatures2dModule.StarDetector | The class implements the keypoint detector introduced by [Agrawal08], synonym of StarDetector. : |
COpenCVForUnity.Xfeatures2dModule.TEBLID | Class implementing TEBLID (Triplet-based Efficient Binary Local Image Descriptor), described in [Suarez2021TEBLID] |
COpenCVForUnity.Xfeatures2dModule.VGG | Class implementing VGG (Oxford Visual Geometry Group) descriptor trained end to end using "Descriptor Learning Using Convex Optimisation" (DLCO) apparatus described in [Simonyan14] |
►COpenCVForUnity.Img_hashModule.ImgHashBase | The base class for image hash algorithms |
COpenCVForUnity.Img_hashModule.AverageHash | Computes average hash value of the input image |
COpenCVForUnity.Img_hashModule.BlockMeanHash | Image hash based on block mean |
COpenCVForUnity.Img_hashModule.ColorMomentHash | Image hash based on color moments |
COpenCVForUnity.Img_hashModule.MarrHildrethHash | Marr-Hildreth Operator Based Hash, slowest but more discriminative |
COpenCVForUnity.Img_hashModule.PHash | PHash |
COpenCVForUnity.Img_hashModule.RadialVarianceHash | Image hash based on Radon transform |
COpenCVForUnity.ImgprocModule.CLAHE | Base class for Contrast Limited Adaptive Histogram Equalization |
►COpenCVForUnity.ImgprocModule.GeneralizedHough | Finds arbitrary template in the grayscale image using Generalized Hough Transform |
COpenCVForUnity.ImgprocModule.GeneralizedHoughBallard | Finds arbitrary template in the grayscale image using Generalized Hough Transform |
COpenCVForUnity.ImgprocModule.GeneralizedHoughGuil | Finds arbitrary template in the grayscale image using Generalized Hough Transform |
COpenCVForUnity.ImgprocModule.LineSegmentDetector | Line segment detector class |
►COpenCVForUnity.MlModule.StatModel | Base class for statistical models in OpenCV ML |
COpenCVForUnity.MlModule.ANN_MLP | Artificial Neural Networks - Multi-Layer Perceptrons |
►COpenCVForUnity.MlModule.DTrees | The class represents a single decision tree or a collection of decision trees |
COpenCVForUnity.MlModule.Boost | Boosted tree classifier derived from DTrees |
COpenCVForUnity.MlModule.RTrees | The class implements the random forest predictor |
COpenCVForUnity.MlModule.EM | The class implements the Expectation Maximization algorithm |
COpenCVForUnity.MlModule.KNearest | The class implements K-Nearest Neighbors model |
COpenCVForUnity.MlModule.LogisticRegression | Implements Logistic Regression classifier |
COpenCVForUnity.MlModule.NormalBayesClassifier | Bayes classifier for normally distributed data |
COpenCVForUnity.MlModule.SVM | Support Vector Machines |
COpenCVForUnity.MlModule.SVMSGD | |
COpenCVForUnity.ObjdetectModule.ArucoDetector | The main functionality of the ArucoDetector class is the detection of markers in an image using the detectMarkers() method |
COpenCVForUnity.ObjdetectModule.BaseCascadeClassifier | |
COpenCVForUnity.ObjdetectModule.CharucoDetector | |
►COpenCVForUnity.Phase_unwrappingModule.PhaseUnwrapping | Abstract base class for phase unwrapping |
COpenCVForUnity.Phase_unwrappingModule.HistogramPhaseUnwrapping | Class implementing two-dimensional phase unwrapping based on [histogramUnwrapping]. This algorithm belongs to the quality-guided phase unwrapping methods. First, it computes a reliability map from second differences between a pixel and its eight neighbours. Reliability values lie between 0 and 16*pi*pi. Then, this reliability map is used to compute the reliabilities of "edges". An edge is an entity defined by two pixels that are connected horizontally or vertically. Its reliability is found by adding the reliabilities of the two pixels connected through it. Edges are sorted in a histogram based on their reliability values. This histogram is then used to unwrap pixels, starting from the highest-quality pixel |
►COpenCVForUnity.PhotoModule.AlignExposures | The base class for algorithms that align images of the same scene with different exposures |
COpenCVForUnity.PhotoModule.AlignMTB | This algorithm converts images to median threshold bitmaps (1 for pixels brighter than median luminance and 0 otherwise) and then aligns the resulting bitmaps using bit operations |
►COpenCVForUnity.PhotoModule.CalibrateCRF | The base class for camera response calibration algorithms |
COpenCVForUnity.PhotoModule.CalibrateDebevec | The inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. The objective function is constructed using pixel values at the same position in all images; an extra term is added to make the result smoother |
COpenCVForUnity.PhotoModule.CalibrateRobertson | The inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. This algorithm uses all image pixels |
►COpenCVForUnity.PhotoModule.MergeExposures | The base class for algorithms that merge an exposure sequence into a single image |
COpenCVForUnity.PhotoModule.MergeDebevec | The resulting HDR image is calculated as a weighted average of the exposures, considering exposure values and camera response |
COpenCVForUnity.PhotoModule.MergeMertens | Pixels are weighted using contrast, saturation and well-exposedness measures, then images are combined using Laplacian pyramids |
COpenCVForUnity.PhotoModule.MergeRobertson | The resulting HDR image is calculated as a weighted average of the exposures, considering exposure values and camera response |
►COpenCVForUnity.PhotoModule.Tonemap | Base class for tonemapping algorithms - tools that are used to map an HDR image to the 8-bit range |
COpenCVForUnity.PhotoModule.TonemapDrago | Adaptive logarithmic mapping is a fast global tonemapping algorithm that scales the image in logarithmic domain |
COpenCVForUnity.PhotoModule.TonemapMantiuk | This algorithm transforms the image to contrast using gradients on all levels of a Gaussian pyramid, transforms contrast values to HVS response and scales the response. After this, the image is reconstructed from the new contrast values |
COpenCVForUnity.PhotoModule.TonemapReinhard | This is a global tonemapping operator that models the human visual system |
COpenCVForUnity.XphotoModule.TonemapDurand | This algorithm decomposes the image into two layers, a base layer and a detail layer, using a bilateral filter, and compresses the contrast of the base layer, thus preserving all the details |
COpenCVForUnity.PlotModule.Plot2d | |
►COpenCVForUnity.Structured_lightModule.StructuredLightPattern | Abstract base class for generating and decoding structured light patterns |
COpenCVForUnity.Structured_lightModule.GrayCodePattern | Class implementing the Gray-code pattern, based on [UNDERWORLD] |
COpenCVForUnity.Structured_lightModule.SinusoidalPattern | Class implementing Fourier transform profilometry (FTP) , phase-shifting profilometry (PSP) and Fourier-assisted phase-shifting profilometry (FAPS) based on [faps] |
COpenCVForUnity.TextModule.ERFilter | Base class for 1st and 2nd stages of Neumann and Matas scene text detection algorithm [Neumann12]. : |
COpenCVForUnity.TrackingModule.legacy_MultiTracker | This class is used to track multiple objects using the specified tracker algorithm |
►COpenCVForUnity.TrackingModule.legacy_Tracker | Base abstract class for the long-term tracker: |
COpenCVForUnity.TrackingModule.legacy_TrackerBoosting | Boosting tracker |
COpenCVForUnity.TrackingModule.legacy_TrackerCSRT | CSRT tracker |
COpenCVForUnity.TrackingModule.legacy_TrackerKCF | KCF (Kernelized Correlation Filter) tracker |
COpenCVForUnity.TrackingModule.legacy_TrackerMIL | The MIL algorithm trains a classifier in an online manner to separate the object from the background |
COpenCVForUnity.TrackingModule.legacy_TrackerMOSSE | MOSSE (Minimum Output Sum of Squared Error) tracker |
COpenCVForUnity.TrackingModule.legacy_TrackerMedianFlow | Median Flow tracker |
COpenCVForUnity.TrackingModule.legacy_TrackerTLD | TLD (Tracking, learning and detection) tracker |
►COpenCVForUnity.VideoModule.BackgroundSubtractor | Base class for background/foreground segmentation. : |
COpenCVForUnity.BgsegmModule.BackgroundSubtractorCNT | Background subtraction based on counting |
COpenCVForUnity.BgsegmModule.BackgroundSubtractorGMG | Background Subtractor module based on the algorithm given in [Gold2012] |
COpenCVForUnity.BgsegmModule.BackgroundSubtractorGSOC | Implementation of a different yet better algorithm, called GSOC because it was implemented during GSoC and did not originate from any paper |
COpenCVForUnity.BgsegmModule.BackgroundSubtractorLSBP | Background Subtraction using Local SVD Binary Pattern. More details about the algorithm can be found at [LGuo2016] |
COpenCVForUnity.BgsegmModule.BackgroundSubtractorMOG | Gaussian Mixture-based Background/Foreground Segmentation Algorithm |
COpenCVForUnity.VideoModule.BackgroundSubtractorKNN | K-nearest neighbours-based Background/Foreground Segmentation Algorithm |
COpenCVForUnity.VideoModule.BackgroundSubtractorMOG2 | Gaussian Mixture-based Background/Foreground Segmentation Algorithm |
►COpenCVForUnity.VideoModule.DenseOpticalFlow | |
COpenCVForUnity.VideoModule.DISOpticalFlow | DIS optical flow algorithm |
COpenCVForUnity.VideoModule.FarnebackOpticalFlow | Class computing a dense optical flow using the Gunnar Farneback's algorithm |
COpenCVForUnity.VideoModule.VariationalRefinement | Variational optical flow refinement |
►COpenCVForUnity.VideoModule.SparseOpticalFlow | Base interface for sparse optical flow algorithms |
COpenCVForUnity.VideoModule.SparsePyrLKOpticalFlow | Class used for calculating a sparse optical flow |
COpenCVForUnity.Xfeatures2dModule.PCTSignatures | Class implementing PCT (position-color-texture) signature extraction as described in [KrulisLS16]. The algorithm is divided into a feature sampler and a clusterizer. The feature sampler produces samples at a given set of coordinates. The clusterizer then produces clusters of these samples using the k-means algorithm. The resulting set of clusters is the signature of the input image |
COpenCVForUnity.Xfeatures2dModule.PCTSignaturesSQFD | Class implementing Signature Quadratic Form Distance (SQFD) |
COpenCVForUnity.XimgprocModule.AdaptiveManifoldFilter | Interface for Adaptive Manifold Filter realizations |
COpenCVForUnity.XimgprocModule.ContourFitting | Class for ContourFitting algorithms. ContourFitting matches two contours \(z_a\) and \(z_b\), minimizing distance |
COpenCVForUnity.XimgprocModule.DTFilter | Interface for realizations of Domain Transform filter |
►COpenCVForUnity.XimgprocModule.DisparityFilter | Main interface for all disparity map filters |
COpenCVForUnity.XimgprocModule.DisparityWLSFilter | Disparity map filter based on Weighted Least Squares filter (in form of Fast Global Smoother that is a lot faster than traditional Weighted Least Squares filter implementations) and optional use of left-right-consistency-based confidence to refine the results in half-occlusions and uniform areas |
COpenCVForUnity.XimgprocModule.EdgeBoxes | Class implementing EdgeBoxes algorithm from [ZitnickECCV14edgeBoxes] : |
COpenCVForUnity.XimgprocModule.EdgeDrawing | Class implementing the ED (EdgeDrawing) [topal2012edge], EDLines [akinlar2011edlines], EDPF [akinlar2012edpf] and EDCircles [akinlar2013edcircles] algorithms |
COpenCVForUnity.XimgprocModule.FastBilateralSolverFilter | Interface for implementations of Fast Bilateral Solver |
COpenCVForUnity.XimgprocModule.FastGlobalSmootherFilter | Interface for implementations of Fast Global Smoother filter |
COpenCVForUnity.XimgprocModule.FastLineDetector | Class implementing the FLD (Fast Line Detector) algorithm described in [Lee14] |
COpenCVForUnity.XimgprocModule.GraphSegmentation | Graph Based Segmentation Algorithm. The class implements the algorithm described in [PFF2004] |
COpenCVForUnity.XimgprocModule.GuidedFilter | Interface for realizations of (Fast) Guided Filter |
COpenCVForUnity.XimgprocModule.RFFeatureGetter | |
COpenCVForUnity.XimgprocModule.RidgeDetectionFilter | Applies a Ridge Detection Filter to an input image. Implements ridge detection similar to the one in Mathematica, using eigenvalues of the Hessian matrix of the input image computed with Sobel derivatives. Additional refinement can be done using skeletonization and binarization. Adapted from [segleafvein] and [M_RF] |
COpenCVForUnity.XimgprocModule.ScanSegment | Class implementing the F-DBSCAN (Accelerated superpixel image segmentation with a parallelized DBSCAN algorithm) superpixels algorithm by Loke SC, et al.; see [loke2021accelerated] for the original paper |
COpenCVForUnity.XimgprocModule.SelectiveSearchSegmentation | Selective search segmentation algorithm. The class implements the algorithm described in [uijlings2013selective] |
►COpenCVForUnity.XimgprocModule.SelectiveSearchSegmentationStrategy | Strategy for the selective search segmentation algorithm. The class implements a generic strategy for the algorithm described in [uijlings2013selective] |
COpenCVForUnity.XimgprocModule.SelectiveSearchSegmentationStrategyColor | Color-based strategy for the selective search segmentation algorithm. The class is implemented from the algorithm described in [uijlings2013selective] |
COpenCVForUnity.XimgprocModule.SelectiveSearchSegmentationStrategyFill | Fill-based strategy for the selective search segmentation algorithm. The class is implemented from the algorithm described in [uijlings2013selective] |
COpenCVForUnity.XimgprocModule.SelectiveSearchSegmentationStrategyMultiple | Regroups multiple strategies for the selective search segmentation algorithm |
COpenCVForUnity.XimgprocModule.SelectiveSearchSegmentationStrategySize | Size-based strategy for the selective search segmentation algorithm. The class is implemented from the algorithm described in [uijlings2013selective] |
COpenCVForUnity.XimgprocModule.SelectiveSearchSegmentationStrategyTexture | Texture-based strategy for the selective search segmentation algorithm. The class is implemented from the algorithm described in [uijlings2013selective] |
►COpenCVForUnity.XimgprocModule.SparseMatchInterpolator | Main interface for all filters that take sparse matches as input and produce a dense per-pixel matching (optical flow) as output |
COpenCVForUnity.XimgprocModule.EdgeAwareInterpolator | Sparse match interpolation algorithm based on modified locally-weighted affine estimator from [Revaud2015] and Fast Global Smoother as post-processing filter |
COpenCVForUnity.XimgprocModule.RICInterpolator | Sparse match interpolation algorithm based on modified piecewise locally-weighted affine estimator called Robust Interpolation method of Correspondences or RIC from [Hu2017] and Variational and Fast Global Smoother as post-processing filter. The RICInterpolator is a extension of the EdgeAwareInterpolator. Main concept of this extension is an piece-wise affine model based on over-segmentation via SLIC superpixel estimation. The method contains an efficient propagation mechanism to estimate among the pieces-wise models |
COpenCVForUnity.XimgprocModule.StructuredEdgeDetection | Class implementing edge detection algorithm from [Dollar2013] : |
COpenCVForUnity.XimgprocModule.SuperpixelLSC | Class implementing the LSC (Linear Spectral Clustering) superpixels algorithm described in [LiCVPR2015LSC] |
COpenCVForUnity.XimgprocModule.SuperpixelSEEDS | Class implementing the SEEDS (Superpixels Extracted via Energy-Driven Sampling) superpixels algorithm described in [VBRV14] |
COpenCVForUnity.XimgprocModule.SuperpixelSLIC | Class implementing the SLIC (Simple Linear Iterative Clustering) superpixels algorithm described in [Achanta2012] |
►COpenCVForUnity.XphotoModule.WhiteBalancer | The base class for auto white balance algorithms |
COpenCVForUnity.XphotoModule.GrayworldWB | Gray-world white balance algorithm |
COpenCVForUnity.XphotoModule.LearningBasedWB | More sophisticated learning-based automatic white balance algorithm |
COpenCVForUnity.XphotoModule.SimpleWB | A simple white balance algorithm that works by independently stretching each of the input image channels to the specified range. For increased robustness it ignores the top and bottom p% of pixel values |
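As a quick illustration of the white-balance classes above, here is a minimal C# sketch. The input Mat is synthetic for self-containment; in practice it would come from a loaded image or camera frame:

```csharp
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.XphotoModule;

// Synthetic 8-bit, 3-channel input; replace with a real image in practice.
Mat src = new Mat(240, 320, CvType.CV_8UC3, new Scalar(100, 150, 200));
Mat balanced = new Mat();

// Gray-world assumption: the average scene color is gray.
GrayworldWB wb = Xphoto.createGrayworldWB();
wb.balanceWhite(src, balanced);

// SimpleWB and LearningBasedWB follow the same create-then-balanceWhite pattern:
// Xphoto.createSimpleWB(), Xphoto.createLearningBasedWB().
```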
►COpenCVForUnity.CoreModule.Mat | N-dimensional dense array class |
COpenCVForUnity.CoreModule.MatOfByte | A specialized Mat class for storing single-channel byte data (CV_8UC1) |
COpenCVForUnity.CoreModule.MatOfDMatch | A specialized Mat class for storing DMatch objects with 32-bit floating-point attributes (CV_32FC4) |
COpenCVForUnity.CoreModule.MatOfDouble | A specialized Mat class for storing single-channel double-precision floating-point data (CV_64FC1) |
COpenCVForUnity.CoreModule.MatOfFloat | A specialized Mat class for storing single-channel floating-point data (CV_32FC1) |
COpenCVForUnity.CoreModule.MatOfFloat4 | A specialized Mat class for storing 4-channel floating-point data (CV_32FC4) |
COpenCVForUnity.CoreModule.MatOfFloat6 | A specialized Mat class for storing 6-channel floating-point data (CV_32FC6) |
COpenCVForUnity.CoreModule.MatOfInt | A specialized Mat class for storing single-channel integer data (CV_32SC1) |
COpenCVForUnity.CoreModule.MatOfInt4 | A specialized Mat class for storing 4-channel integer data (CV_32SC4) |
COpenCVForUnity.CoreModule.MatOfKeyPoint | A specialized Mat class for storing keypoints with 32-bit floating-point attributes (CV_32FC7) |
COpenCVForUnity.CoreModule.MatOfPoint | A specialized Mat class for storing 2D points with 32-bit integer coordinates (CV_32SC2) |
COpenCVForUnity.CoreModule.MatOfPoint2f | A specialized Mat class for storing 2D points with floating-point coordinates (CV_32FC2) |
COpenCVForUnity.CoreModule.MatOfPoint3 | A specialized Mat class for storing 3D points with 32-bit integer coordinates (CV_32SC3) |
COpenCVForUnity.CoreModule.MatOfPoint3f | A specialized Mat class for storing 3D points with 32-bit floating-point coordinates (CV_32FC3) |
COpenCVForUnity.CoreModule.MatOfRect | A specialized Mat class for storing rectangles with 32-bit integer coordinates (CV_32SC4) |
COpenCVForUnity.CoreModule.MatOfRect2d | A specialized Mat class for storing rectangles with 64-bit floating-point coordinates (CV_64FC4) |
COpenCVForUnity.CoreModule.MatOfRotatedRect | A specialized Mat class for storing rotated rectangles with 32-bit floating-point coordinates (CV_32FC5) |
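Mat and its specialized MatOf* subclasses are the core containers used throughout the package. A minimal sketch of creating a Mat and displaying it on a Unity texture (the attached Renderer and texture format are illustrative assumptions):

```csharp
using UnityEngine;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ImgprocModule;
using OpenCVForUnity.UnityUtils;

public class MatToTextureExample : MonoBehaviour
{
    void Start()
    {
        // 8-bit, 3-channel image filled with a solid color.
        Mat img = new Mat(240, 320, CvType.CV_8UC3, new Scalar(30, 60, 200));
        Imgproc.putText(img, "Mat", new Point(20, 120),
            Imgproc.FONT_HERSHEY_SIMPLEX, 2.0, new Scalar(255, 255, 255), 3);

        // Copy the Mat pixels into a Texture2D of matching size.
        Texture2D tex = new Texture2D(img.cols(), img.rows(), TextureFormat.RGB24, false);
        Utils.matToTexture2D(img, tex);
        GetComponent<Renderer>().material.mainTexture = tex;
    }
}
```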
COpenCVForUnity.CoreModule.TickMeter | Class to measure passing time |
COpenCVForUnity.DnnModule.DictValue | This struct stores the scalar value (or array) of one of the following type: double, cv::String or int64 |
COpenCVForUnity.DnnModule.Image2BlobParams | Processing params of image to blob |
►COpenCVForUnity.DnnModule.Model | This class presents a high-level API for neural networks |
COpenCVForUnity.DnnModule.ClassificationModel | This class represents a high-level API for classification models |
COpenCVForUnity.DnnModule.DetectionModel | This class represents a high-level API for object detection networks |
COpenCVForUnity.DnnModule.KeypointsModel | This class represents a high-level API for keypoints models |
COpenCVForUnity.DnnModule.SegmentationModel | This class represents a high-level API for segmentation models |
►COpenCVForUnity.DnnModule.TextDetectionModel | Base class for text detection networks |
COpenCVForUnity.DnnModule.TextDetectionModel_DB | This class represents a high-level API for text detection DL networks compatible with the DB model |
COpenCVForUnity.DnnModule.TextDetectionModel_EAST | This class represents a high-level API for text detection DL networks compatible with the EAST model |
COpenCVForUnity.DnnModule.TextRecognitionModel | This class represents a high-level API for text recognition networks |
COpenCVForUnity.DnnModule.Net | This class allows creating and manipulating comprehensive artificial neural networks |
COpenCVForUnity.Dnn_superresModule.DnnSuperResImpl | A class to upscale images via convolutional neural networks. Four models are implemented: EDSR, ESPCN, FSRCNN and LapSRN |
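The dnn high-level Model classes above wrap a Net with pre/post-processing. A minimal classification sketch; the ONNX file name, input size, and scale values are placeholders for whatever network you actually use:

```csharp
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.DnnModule;

// "classifier.onnx" is a placeholder; supply any classification network.
Net net = Dnn.readNet("classifier.onnx");
ClassificationModel model = new ClassificationModel(net);

// Preprocessing: scale factor, input size, mean subtraction, swapRB, crop.
model.setInputParams(1.0 / 255.0, new Size(224, 224), new Scalar(0, 0, 0), true, false);

// 'frame' is an input Mat (e.g., from a camera helper).
int[] classIds = new int[1];
float[] confidences = new float[1];
model.classify(frame, classIds, confidences);
// classIds[0] / confidences[0] now hold the top prediction.
```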
►COpenCVForUnity.FaceModule.PredictCollector | Abstract base class for all strategies of prediction result handling |
COpenCVForUnity.FaceModule.StandardCollector | Default predict collector |
COpenCVForUnity.Features2dModule.BOWImgDescriptorExtractor | Class to compute an image descriptor using the bag of visual words |
►COpenCVForUnity.Features2dModule.BOWTrainer | Abstract base class for training the bag of visual words vocabulary from a set of descriptors |
COpenCVForUnity.Features2dModule.BOWKMeansTrainer | KMeans-based class to train visual vocabulary using the bag of visual words approach |
COpenCVForUnity.Features2dModule.SimpleBlobDetector_Params | |
COpenCVForUnity.ImgprocModule.IntelligentScissorsMB | Intelligent Scissors image segmentation |
COpenCVForUnity.ImgprocModule.Subdiv2D | |
COpenCVForUnity.MlModule.ParamGrid | The structure represents the logarithmic grid range of statmodel parameters |
COpenCVForUnity.MlModule.TrainData | Class encapsulating training data |
►COpenCVForUnity.ObjdetectModule.Board | Board of ArUco markers |
COpenCVForUnity.ObjdetectModule.CharucoBoard | ChArUco board is a planar chessboard where the markers are placed inside the white squares of a chessboard |
COpenCVForUnity.ObjdetectModule.GridBoard | Planar board with grid arrangement of markers |
COpenCVForUnity.ObjdetectModule.CascadeClassifier | Cascade classifier class for object detection |
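A minimal CascadeClassifier detection sketch. The cascade file path is a placeholder (OpenCV ships several Haar cascades), and `rgbaFrame` stands in for a frame obtained elsewhere:

```csharp
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ImgprocModule;
using OpenCVForUnity.ObjdetectModule;

// Placeholder path to a Haar cascade file on disk.
CascadeClassifier cascade = new CascadeClassifier("haarcascade_frontalface_default.xml");

// Detection runs on a grayscale, histogram-equalized image.
Mat gray = new Mat();
Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);
Imgproc.equalizeHist(gray, gray);

MatOfRect detections = new MatOfRect();
cascade.detectMultiScale(gray, detections, 1.1, 3, 0, new Size(30, 30), new Size());

foreach (Rect r in detections.toArray())
    Imgproc.rectangle(rgbaFrame, r.tl(), r.br(), new Scalar(0, 255, 0, 255), 2);
```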
COpenCVForUnity.ObjdetectModule.CharucoParameters | |
COpenCVForUnity.ObjdetectModule.DetectorParameters | Struct DetectorParameters is used by ArucoDetector |
COpenCVForUnity.ObjdetectModule.Dictionary | Dictionary is a set of unique ArUco markers of the same size |
COpenCVForUnity.ObjdetectModule.FaceDetectorYN | DNN-based face detector |
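FaceDetectorYN usage follows a create/detect pattern. A short sketch; the ONNX model file name is a placeholder (a YuNet model is assumed) and `bgrFrame` stands in for an input image:

```csharp
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ObjdetectModule;

// Placeholder model file; the input size should match the frame being passed in.
FaceDetectorYN detector = FaceDetectorYN.create("face_detection_yunet.onnx", "",
    new Size(bgrFrame.cols(), bgrFrame.rows()));

Mat faces = new Mat();
detector.detect(bgrFrame, faces);
// Each row of 'faces' holds: x, y, w, h, five landmark point pairs, and a score.
```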
COpenCVForUnity.ObjdetectModule.FaceRecognizerSF | DNN-based face recognizer |
►COpenCVForUnity.ObjdetectModule.GraphicalCodeDetector | |
COpenCVForUnity.ObjdetectModule.BarcodeDetector | |
COpenCVForUnity.ObjdetectModule.QRCodeDetector | |
COpenCVForUnity.ObjdetectModule.QRCodeDetectorAruco | |
COpenCVForUnity.ObjdetectModule.HOGDescriptor | Implementation of HOG (Histogram of Oriented Gradients) descriptor and object detector |
COpenCVForUnity.ObjdetectModule.QRCodeDetectorAruco_Params | |
COpenCVForUnity.ObjdetectModule.QRCodeEncoder | QR code encoder |
COpenCVForUnity.ObjdetectModule.QRCodeEncoder_Params | QR code encoder parameters |
COpenCVForUnity.ObjdetectModule.RefineParameters | Struct RefineParameters is used by ArucoDetector |
COpenCVForUnity.Phase_unwrappingModule.HistogramPhaseUnwrapping_Params | Parameters of phaseUnwrapping constructor |
COpenCVForUnity.Structured_lightModule.SinusoidalPattern_Params | Parameters of the SinusoidalPattern constructor: width - projector's width; height - projector's height; nbrOfPeriods - number of periods along the pattern direction; shiftValue - phase shift between two consecutive patterns; methodId - selects between FTP, PSP and FAPS; nbrOfPixelsBetweenMarkers - number of pixels between two consecutive markers on the same row; setMarkers - allows setting markers on the patterns; markersLocation - vector used to store marker locations on the patterns |
►COpenCVForUnity.TextModule.BaseOCR | |
COpenCVForUnity.TextModule.OCRBeamSearchDecoder | OCRBeamSearchDecoder class provides an interface for OCR using Beam Search algorithm |
COpenCVForUnity.TextModule.OCRHMMDecoder | OCRHMMDecoder class provides an interface for OCR using Hidden Markov Models |
COpenCVForUnity.TextModule.ERFilter_Callback | Callback with the classifier, wrapped in a class |
COpenCVForUnity.TextModule.OCRBeamSearchDecoder_ClassifierCallback | Callback with the character classifier, wrapped in a class |
COpenCVForUnity.TextModule.OCRHMMDecoder_ClassifierCallback | Callback with the character classifier, wrapped in a class |
►COpenCVForUnity.TextModule.TextDetector | An abstract class providing interface for text detection algorithms |
COpenCVForUnity.TextModule.TextDetectorCNN | TextDetectorCNN class provides the functionality of text bounding box detection. It finds bounding boxes of text words given an input image, using the OpenCV dnn module to load a pre-trained model described in [LiaoSBWL17]. The original repository with the modified SSD Caffe version: https://github.com/MhLiao/TextBoxes. The model can be downloaded from DropBox. A modified .prototxt file with the model description can be found in opencv_contrib/modules/text/samples/textbox.prototxt |
COpenCVForUnity.TrackingModule.TrackerCSRT_Params | |
COpenCVForUnity.TrackingModule.TrackerKCF_Params | |
COpenCVForUnity.VideoModule.KalmanFilter | Kalman filter class |
►COpenCVForUnity.VideoModule.Tracker | Base abstract class for the long-term tracker |
COpenCVForUnity.TrackingModule.TrackerCSRT | CSRT tracker |
COpenCVForUnity.TrackingModule.TrackerKCF | KCF (Kernelized Correlation Filter) tracker |
COpenCVForUnity.VideoModule.TrackerDaSiamRPN | |
COpenCVForUnity.VideoModule.TrackerGOTURN | GOTURN (Generic Object Tracking Using Regression Networks) tracker |
COpenCVForUnity.VideoModule.TrackerMIL | The MIL algorithm trains a classifier in an online manner to separate the object from the background |
COpenCVForUnity.VideoModule.TrackerNano | Nano tracker is a super lightweight dnn-based general object tracker |
COpenCVForUnity.VideoModule.TrackerVit | VIT tracker is a super lightweight dnn-based general object tracker |
COpenCVForUnity.VideoModule.TrackerDaSiamRPN_Params | |
COpenCVForUnity.VideoModule.TrackerGOTURN_Params | |
COpenCVForUnity.VideoModule.TrackerMIL_Params | |
COpenCVForUnity.VideoModule.TrackerNano_Params | |
COpenCVForUnity.VideoModule.TrackerVit_Params | |
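All the long-term trackers above share the Tracker base interface: init once with a bounding box, then update per frame. A minimal sketch using TrackerMIL (the initial box coordinates and frame Mats are illustrative):

```csharp
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.VideoModule;

// Initialize on the first frame with a known bounding box (values illustrative).
TrackerMIL tracker = TrackerMIL.create();
tracker.init(firstFrame, new Rect(100, 80, 64, 64));

// Then, on each subsequent frame:
Rect box = new Rect();
bool found = tracker.update(nextFrame, box);
if (found)
{
    // 'box' now holds the tracked object's location in nextFrame.
}
```

TrackerCSRT, TrackerKCF, TrackerNano, TrackerVit, etc. are drop-in replacements via their own `create()` factories (the dnn-based ones additionally take model paths through their `_Params` structs).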
COpenCVForUnity.VideoioModule.VideoCapture | Class for video capturing from video files, image sequences or cameras |
COpenCVForUnity.VideoioModule.VideoWriter | Video writer class |
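VideoCapture and VideoWriter pair naturally in a read/process/write loop. A minimal sketch; the file paths, codec, FPS, and frame size are placeholder assumptions:

```csharp
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.VideoioModule;

// Placeholder paths; the writer's Size must match the frames written to it.
VideoCapture cap = new VideoCapture("input.mp4");
VideoWriter writer = new VideoWriter("output.avi",
    VideoWriter.fourcc('M', 'J', 'P', 'G'), 30.0, new Size(640, 480));

if (cap.isOpened())
{
    Mat frame = new Mat();
    while (cap.read(frame))   // read() returns false at end of stream
    {
        // ... process 'frame' here ...
        writer.write(frame);
    }
}
cap.release();
writer.release();
```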
COpenCVForUnity.Wechat_qrcodeModule.WeChatQRCode | WeChat QRCode includes two CNN-based models: an object detection model and a super-resolution model. The object detection model is applied to detect a QR code with its bounding box; the super-resolution model is applied to zoom in on the QR code when it is small |
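A minimal WeChatQRCode decoding sketch. Constructing it without model file paths is assumed here for self-containment (the CNN model files can also be passed to the constructor); `inputImage` stands in for a frame obtained elsewhere:

```csharp
using System.Collections.Generic;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.Wechat_qrcodeModule;

// Constructed without model files here; model paths can be supplied instead.
WeChatQRCode qrDetector = new WeChatQRCode();

List<Mat> points = new List<Mat>();
List<string> texts = qrDetector.detectAndDecode(inputImage, points);

for (int i = 0; i < texts.Count; i++)
{
    // texts[i] is a decoded payload; points[i] holds its corner points.
}
```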
COpenCVForUnity.XimgprocModule.EdgeDrawing_Params | |
►CIEquatable | |
COpenCVForUnity.CoreModule.DMatch | Class for matching keypoint descriptors |
COpenCVForUnity.CoreModule.KeyPoint | Data structure for salient point detectors |
COpenCVForUnity.CoreModule.Point | Template class for 2D points specified by its coordinates x and y |
COpenCVForUnity.CoreModule.Point3 | Template class for 3D points specified by its coordinates x, y and z |
COpenCVForUnity.CoreModule.Range | Template class specifying a continuous subsequence (slice) of a sequence |
COpenCVForUnity.CoreModule.Rect | Template class for 2D rectangles |
COpenCVForUnity.CoreModule.Rect2d | Template class for 2D rectangles |
COpenCVForUnity.CoreModule.RotatedRect | The class represents rotated (i.e. not up-right) rectangles on a plane |
COpenCVForUnity.CoreModule.Scalar | Template class for a 4-element vector derived from Vec |
COpenCVForUnity.CoreModule.Size | Template class for specifying the size of an image or rectangle |
COpenCVForUnity.CoreModule.TermCriteria | The class defining termination criteria for iterative algorithms |
COpenCVForUnity.ImgprocModule.Moments | Template class for moments |
COpenCVForUnity.UnityUtils.Vec10d | 10-Vector struct of double [CvType.CV_64FC10, Moments] |
COpenCVForUnity.UnityUtils.Vec2b | 2-Vector struct of byte [CvType.CV_8UC2] |
COpenCVForUnity.UnityUtils.Vec2c | 2-Vector struct of sbyte [CvType.CV_8SC2] |
COpenCVForUnity.UnityUtils.Vec2d | 2-Vector struct of double [CvType.CV_64FC2, Point] |
COpenCVForUnity.UnityUtils.Vec2f | 2-Vector struct of float [CvType.CV_32FC2] |
COpenCVForUnity.UnityUtils.Vec2i | 2-Vector struct of int [CvType.CV_32SC2, Range] |
COpenCVForUnity.UnityUtils.Vec2s | 2-Vector struct of short [CvType.CV_16SC2] |
COpenCVForUnity.UnityUtils.Vec2w | 2-Vector struct of ushort [CvType.CV_16UC2, Size] |
COpenCVForUnity.UnityUtils.Vec3b | 3-Vector struct of byte [CvType.CV_8UC3] |
COpenCVForUnity.UnityUtils.Vec3c | 3-Vector struct of sbyte [CvType.CV_8SC3] |
COpenCVForUnity.UnityUtils.Vec3d | 3-Vector struct of double [CvType.CV_64FC3, Point3] |
COpenCVForUnity.UnityUtils.Vec3f | 3-Vector struct of float [CvType.CV_32FC3] |
COpenCVForUnity.UnityUtils.Vec3i | 3-Vector struct of int [CvType.CV_32SC3] |
COpenCVForUnity.UnityUtils.Vec3s | 3-Vector struct of short [CvType.CV_16SC3] |
COpenCVForUnity.UnityUtils.Vec3w | 3-Vector struct of ushort [CvType.CV_16UC3] |
COpenCVForUnity.UnityUtils.Vec4b | 4-Vector struct of byte [CvType.CV_8UC4] |
COpenCVForUnity.UnityUtils.Vec4c | 4-Vector struct of sbyte [CvType.CV_8SC4] |
COpenCVForUnity.UnityUtils.Vec4d | 4-Vector struct of double [CvType.CV_64FC4] |
COpenCVForUnity.UnityUtils.Vec4f | 4-Vector struct of float [CvType.CV_32FC4, DMatch] |
COpenCVForUnity.UnityUtils.Vec4i | 4-Vector struct of int [CvType.CV_32SC4, Rect] |
COpenCVForUnity.UnityUtils.Vec4s | 4-Vector struct of short [CvType.CV_16SC4] |
COpenCVForUnity.UnityUtils.Vec4w | 4-Vector struct of ushort [CvType.CV_16UC4, Rect2d, Scalar] |
COpenCVForUnity.UnityUtils.Vec5d | 5-Vector struct of double [CvType.CV_64FC5, RotatedRect] |
COpenCVForUnity.UnityUtils.Vec5f | 5-Vector struct of float [CvType.CV_32FC5, MatOfRotatedRect] |
COpenCVForUnity.UnityUtils.Vec6f | 6-Vector struct of float [CvType.CV_32FC6, KeyPoint] |
COpenCVForUnity.UnityUtils.Vec7d | 7-Vector struct of double [CvType.CV_32FC7] |
COpenCVForUnity.UnityUtils.Vec7f | 7-Vector struct of float [CvType.CV_32FC7, KeyPoint] |
►COpenCVForUnity.UnityUtils.Helper.IMatUpdateFPSProvider | |
COpenCVForUnity.UnityUtils.Helper.AsyncGPUReadback2MatHelper | A helper component class for efficiently converting Unity Texture objects, such as RenderTexture and external texture format Texture2D , to OpenCV Mat format using AsyncGPUReadback |
COpenCVForUnity.UnityUtils.Helper.Image2MatHelper | A helper component class for loading an image file using OpenCV's Imgcodecs.imread method and converting it to an OpenCV Mat format |
COpenCVForUnity.UnityUtils.Helper.WebCamTexture2MatAsyncGPUHelper | A helper component class for efficiently obtaining camera frames from WebCamTexture and converting them to OpenCV Mat format in real-time using AsyncGPUReadback |
COpenCVForUnity.Img_hashModule.Img_hash | |
COpenCVForUnity.ImgcodecsModule.Imgcodecs | |
COpenCVForUnity.ImgprocModule.Imgproc | |
►COpenCVForUnity.UnityUtils.MOT.ByteTrack.IRectBase | |
COpenCVForUnity.UnityUtils.MOT.ByteTrack.TlwhRect | |
►COpenCVForUnity.UnityUtils.Helper.ISource2MatHelper | |
►COpenCVForUnity.UnityUtils.Helper.ICameraSource2MatHelper | |
►COpenCVForUnity.UnityUtils.Helper.WebCamTexture2MatHelper | A helper component class for obtaining camera frames from WebCamTexture and converting them to OpenCV Mat format in real-time |
COpenCVForUnity.UnityUtils.Helper.WebCamTexture2MatAsyncGPUHelper | A helper component class for efficiently obtaining camera frames from WebCamTexture and converting them to OpenCV Mat format in real-time using AsyncGPUReadback |
►COpenCVForUnity.UnityUtils.Helper.IImageSource2MatHelper | |
COpenCVForUnity.UnityUtils.Helper.Image2MatHelper | A helper component class for loading an image file using OpenCV's Imgcodecs.imread method and converting it to an OpenCV Mat format |
►COpenCVForUnity.UnityUtils.Helper.ITextureSource2MatHelper | |
COpenCVForUnity.UnityUtils.Helper.AsyncGPUReadback2MatHelper | A helper component class for efficiently converting Unity Texture objects, such as RenderTexture and external texture format Texture2D , to OpenCV Mat format using AsyncGPUReadback |
►COpenCVForUnity.UnityUtils.Helper.IVideoSource2MatHelper | |
COpenCVForUnity.UnityUtils.Helper.UnityVideoPlayer2MatHelper | A helper component class for obtaining video frames from a file using Unity's VideoPlayer and converting them to OpenCV Mat format |
COpenCVForUnity.UnityUtils.Helper.VideoCapture2MatHelper | A helper component class for obtaining video frames from a file using OpenCV's VideoCapture and converting them to OpenCV Mat format |
COpenCVForUnity.UnityUtils.Helper.MultiSource2MatHelper | A versatile helper component class for obtaining frames as OpenCV Mat objects from multiple sources, allowing dynamic switching between different ISource2MatHelper classes |
►CIStructuralComparable | |
COpenCVForUnity.UnityUtils.Vec10d | 10-Vector struct of double [CvType.CV_64FC10, Moments] |
COpenCVForUnity.UnityUtils.Vec2b | 2-Vector struct of byte [CvType.CV_8UC2] |
COpenCVForUnity.UnityUtils.Vec2c | 2-Vector struct of sbyte [CvType.CV_8SC2] |
COpenCVForUnity.UnityUtils.Vec2d | 2-Vector struct of double [CvType.CV_64FC2, Point] |
COpenCVForUnity.UnityUtils.Vec2f | 2-Vector struct of float [CvType.CV_32FC2] |
COpenCVForUnity.UnityUtils.Vec2i | 2-Vector struct of int [CvType.CV_32SC2, Range] |
COpenCVForUnity.UnityUtils.Vec2s | 2-Vector struct of short [CvType.CV_16SC2] |
COpenCVForUnity.UnityUtils.Vec2w | 2-Vector struct of ushort [CvType.CV_16UC2, Size] |
COpenCVForUnity.UnityUtils.Vec3b | 3-Vector struct of byte [CvType.CV_8UC3] |
COpenCVForUnity.UnityUtils.Vec3c | 3-Vector struct of sbyte [CvType.CV_8SC3] |
COpenCVForUnity.UnityUtils.Vec3d | 3-Vector struct of double [CvType.CV_64FC3, Point3] |
COpenCVForUnity.UnityUtils.Vec3f | 3-Vector struct of float [CvType.CV_32FC3] |
COpenCVForUnity.UnityUtils.Vec3i | 3-Vector struct of int [CvType.CV_32SC3] |
COpenCVForUnity.UnityUtils.Vec3s | 3-Vector struct of short [CvType.CV_16SC3] |
COpenCVForUnity.UnityUtils.Vec3w | 3-Vector struct of ushort [CvType.CV_16UC3] |
COpenCVForUnity.UnityUtils.Vec4b | 4-Vector struct of byte [CvType.CV_8UC4] |
COpenCVForUnity.UnityUtils.Vec4c | 4-Vector struct of sbyte [CvType.CV_8SC4] |
COpenCVForUnity.UnityUtils.Vec4d | 4-Vector struct of double [CvType.CV_64FC4] |
COpenCVForUnity.UnityUtils.Vec4f | 4-Vector struct of float [CvType.CV_32FC4, DMatch] |
COpenCVForUnity.UnityUtils.Vec4i | 4-Vector struct of int [CvType.CV_32SC4, Rect] |
COpenCVForUnity.UnityUtils.Vec4s | 4-Vector struct of short [CvType.CV_16SC4] |
COpenCVForUnity.UnityUtils.Vec4w | 4-Vector struct of ushort [CvType.CV_16UC4, Rect2d, Scalar] |
COpenCVForUnity.UnityUtils.Vec5d | 5-Vector struct of double [CvType.CV_64FC5, RotatedRect] |
COpenCVForUnity.UnityUtils.Vec5f | 5-Vector struct of float [CvType.CV_32FC5, MatOfRotatedRect] |
COpenCVForUnity.UnityUtils.Vec6f | 6-Vector struct of float [CvType.CV_32FC6, KeyPoint] |
COpenCVForUnity.UnityUtils.Vec7d | 7-Vector struct of double [CvType.CV_32FC7] |
COpenCVForUnity.UnityUtils.Vec7f | 7-Vector struct of float [CvType.CV_32FC7, KeyPoint] |
►CIStructuralEquatable | |
COpenCVForUnity.UnityUtils.Vec10d | 10-Vector struct of double [CvType.CV_64FC10, Moments] |
COpenCVForUnity.UnityUtils.Vec2b | 2-Vector struct of byte [CvType.CV_8UC2] |
COpenCVForUnity.UnityUtils.Vec2c | 2-Vector struct of sbyte [CvType.CV_8SC2] |
COpenCVForUnity.UnityUtils.Vec2d | 2-Vector struct of double [CvType.CV_64FC2, Point] |
COpenCVForUnity.UnityUtils.Vec2f | 2-Vector struct of float [CvType.CV_32FC2] |
COpenCVForUnity.UnityUtils.Vec2i | 2-Vector struct of int [CvType.CV_32SC2, Range] |
COpenCVForUnity.UnityUtils.Vec2s | 2-Vector struct of short [CvType.CV_16SC2] |
COpenCVForUnity.UnityUtils.Vec2w | 2-Vector struct of ushort [CvType.CV_16UC2, Size] |
COpenCVForUnity.UnityUtils.Vec3b | 3-Vector struct of byte [CvType.CV_8UC3] |
COpenCVForUnity.UnityUtils.Vec3c | 3-Vector struct of sbyte [CvType.CV_8SC3] |
COpenCVForUnity.UnityUtils.Vec3d | 3-Vector struct of double [CvType.CV_64FC3, Point3] |
COpenCVForUnity.UnityUtils.Vec3f | 3-Vector struct of float [CvType.CV_32FC3] |
COpenCVForUnity.UnityUtils.Vec3i | 3-Vector struct of int [CvType.CV_32SC3] |
COpenCVForUnity.UnityUtils.Vec3s | 3-Vector struct of short [CvType.CV_16SC3] |
COpenCVForUnity.UnityUtils.Vec3w | 3-Vector struct of ushort [CvType.CV_16UC3] |
COpenCVForUnity.UnityUtils.Vec4b | 4-Vector struct of byte [CvType.CV_8UC4] |
COpenCVForUnity.UnityUtils.Vec4c | 4-Vector struct of sbyte [CvType.CV_8SC4] |
COpenCVForUnity.UnityUtils.Vec4d | 4-Vector struct of double [CvType.CV_64FC4] |
COpenCVForUnity.UnityUtils.Vec4f | 4-Vector struct of float [CvType.CV_32FC4, DMatch] |
COpenCVForUnity.UnityUtils.Vec4i | 4-Vector struct of int [CvType.CV_32SC4, Rect] |
COpenCVForUnity.UnityUtils.Vec4s | 4-Vector struct of short [CvType.CV_16SC4] |
COpenCVForUnity.UnityUtils.Vec4w | 4-Vector struct of ushort [CvType.CV_16UC4, Rect2d, Scalar] |
COpenCVForUnity.UnityUtils.Vec5d | 5-Vector struct of double [CvType.CV_64FC5, RotatedRect] |
COpenCVForUnity.UnityUtils.Vec5f | 5-Vector struct of float [CvType.CV_32FC5, MatOfRotatedRect] |
COpenCVForUnity.UnityUtils.Vec6f | 6-Vector struct of float [CvType.CV_32FC6, KeyPoint] |
COpenCVForUnity.UnityUtils.Vec7d | 7-Vector struct of double [CvType.CV_32FC7] |
COpenCVForUnity.UnityUtils.Vec7f | 7-Vector struct of float [CvType.CV_32FC7, KeyPoint] |
COpenCVForUnity.UnityUtils.MOT.ByteTrack.KalmanFilter | |
COpenCVForUnity.UnityUtils.MOT.ByteTrack.Lapjv | |
COpenCVForUnity.UnityUtils.MatUtils | |
COpenCVForUnity.CoreModule.Core.MinMaxLocResult | |
COpenCVForUnity.MlModule.Ml | |
►CMonoBehaviour | |
COpenCVForUnity.UnityUtils.Helper.ARHelper | A helper component for managing AR (Augmented Reality) functionalities |
COpenCVForUnity.UnityUtils.Helper.AsyncGPUReadback2MatHelper | A helper component class for efficiently converting Unity Texture objects, such as RenderTexture and external texture format Texture2D , to OpenCV Mat format using AsyncGPUReadback |
COpenCVForUnity.UnityUtils.Helper.Image2MatHelper | A helper component class for loading an image file using OpenCV's Imgcodecs.imread method and converting it to an OpenCV Mat format |
COpenCVForUnity.UnityUtils.Helper.ImageOptimizationHelper | A helper component for optimizing image processing in Unity by managing frame skipping and downscaling operations. v1.1.1 |
COpenCVForUnity.UnityUtils.Helper.MultiSource2MatHelper | A versatile helper component class for obtaining frames as OpenCV Mat objects from multiple sources, allowing dynamic switching between different ISource2MatHelper classes |
COpenCVForUnity.UnityUtils.Helper.UnityVideoPlayer2MatHelper | A helper component class for obtaining video frames from a file using Unity's VideoPlayer and converting them to OpenCV Mat format |
COpenCVForUnity.UnityUtils.Helper.VideoCapture2MatHelper | A helper component class for obtaining video frames from a file using OpenCV's VideoCapture and converting them to OpenCV Mat format |
COpenCVForUnity.UnityUtils.Helper.VideoCaptureToMatHelper | VideoCapture to mat helper. v 1.0.4 |
COpenCVForUnity.UnityUtils.Helper.WebCamTexture2MatHelper | A helper component class for obtaining camera frames from WebCamTexture and converting them to OpenCV Mat format in real-time |
►COpenCVForUnity.UnityUtils.Helper.WebCamTextureToMatHelper | WebCamTexture to mat helper. v 1.1.6 |
COpenCVForUnity.UnityUtils.Helper.VideoCaptureCameraInputToMatHelper | VideoCaptureCameraInput to mat helper. v 1.0.1 Depends on OpenCVForUnity version 2.4.4 (WebCamTextureToMatHelper v 1.1.3) or later. (Uses the WebCamDevice.isFrontFacing and WebCamTexture.videoRotationAngle properties to flip the camera input image in VideoCapture to the correct orientation.) |
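The MonoBehaviour helper components above all follow the same per-frame polling pattern. A minimal sketch with WebCamTexture2MatHelper, assuming the helper component is attached to the same GameObject:

```csharp
using UnityEngine;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.UnityUtils.Helper;

public class HelperUsageExample : MonoBehaviour
{
    WebCamTexture2MatHelper helper;

    void Start()
    {
        // Assumes the helper component was added to this GameObject in the editor.
        helper = gameObject.GetComponent<WebCamTexture2MatHelper>();
        helper.Initialize();
    }

    void Update()
    {
        if (helper.IsPlaying() && helper.DidUpdateThisFrame())
        {
            Mat rgbaMat = helper.GetMat();  // latest camera frame as an RGBA Mat
            // ... process rgbaMat here ...
        }
    }
}
```

The other ISource2MatHelper implementations (Image2MatHelper, VideoCapture2MatHelper, MultiSource2MatHelper, etc.) expose the same Initialize/DidUpdateThisFrame/GetMat lifecycle, which is what makes MultiSource2MatHelper's source switching possible.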
COpenCVForUnity.ObjdetectModule.Objdetect | |
COpenCVForUnity.Phase_unwrappingModule.Phase_unwrapping | |
COpenCVForUnity.PhotoModule.Photo | |
COpenCVForUnity.PlotModule.Plot | |
COpenCVForUnity.UnityUtils.PoseData | |
COpenCVForUnity.UnityUtils.MOT.ByteTrack.RectOperations | |
COpenCVForUnity.UnityUtils.Helper.Source2MatHelperUtils | |
COpenCVForUnity.Structured_lightModule.Structured_light | |
COpenCVForUnity.TextModule.Text | |
COpenCVForUnity.UnityUtils.MOT.ByteTrack.Track | |
COpenCVForUnity.TrackingModule.Tracking | |
COpenCVForUnity.UnityUtils.Utils | |
COpenCVForUnity.VideoModule.Video | |
COpenCVForUnity.VideoioModule.Videoio | |
COpenCVForUnity.Wechat_qrcodeModule.Wechat_qrcode | |
COpenCVForUnity.Xfeatures2dModule.Xfeatures2d | |
COpenCVForUnity.XimgprocModule.Ximgproc | |
COpenCVForUnity.XphotoModule.Xphoto | |
►CUnityEvent | |
COpenCVForUnity.UnityUtils.Helper.Source2MatHelperErrorUnityEvent | |
COpenCVForUnity.UnityUtils.Helper.VideoCaptureToMatHelper.ErrorUnityEvent | |
COpenCVForUnity.UnityUtils.Helper.WebCamTextureToMatHelper.ErrorUnityEvent |