Camera_Calibration
Multi-Camera (Stereo Vision) Calibration for AR/VR Headsets (Extended Reality/Mixed Reality): 3D Image Processing with Deep Learning
Introduction
Geometric camera calibration, also referred to as camera re-sectioning, estimates the parameters of a lens and image sensor of an image or video camera. These parameters can be used to correct for lens distortion, measure the size of an object in world units, or determine the location of the camera in a scene. These tasks are used in applications such as machine vision to detect and measure objects. They are also used in robotics, navigation systems, and 3-D scene reconstruction. Without any knowledge of the calibration of the cameras, it is impossible to do better than projective reconstruction (MathWorks).
Non-intrusive scene measurement tasks, such as 3D reconstruction, object inspection, target or self-localization or scene mapping require a calibrated camera model (Orghidan et al. 2011). Camera calibration is the process of approximating the parameters of a pinhole camera model (Tsai 1987; Stein 1995; Heikkila & Silven 1997) of a given photograph or video.
There are four main categories of camera calibration methods, each with a number of proposed algorithms: known-object-based calibration, semi-automatic calibration, camera self-calibration, and calibration based on active vision.
In computer vision methods, image information from cameras can yield geometric information pertaining to three-dimensional objects. The correspondence between a geometric point in the world and a camera image pixel is necessary for camera calibration. Hence, the camera's parameters, which constitute the geometric model of camera imaging, are used to establish the relation between the three-dimensional location of a point and its corresponding point in an image (Wang et al. 2010). Typically, experiments are conducted to obtain these parameters and the relevant calculations, a process called camera calibration (Hyunjoon et al. 2014; Jianyang et al. 2014; Mohedano et al. 2014; Navarro et al. 2014).
Image information from cameras can be used to recover the geometric information of a 3D object. The process of estimating the parameters of a pinhole camera model is called camera calibration; the more accurate the estimated parameters, the better the compensation that can be performed in the later stages of the application. In the data collection stage, a camera takes photos of a calibration pattern (Tsai 1987; Stein 1995; Heikkila & Silven 1997; Zhengyou 2000). Another aspect of the problem is creating a set of image pairs from both cameras using high-quality images and an increased range of slopes of the calibration pattern. Current methods simply capture images upon detection of the calibration pattern. Nonetheless, the consensus in the literature is that accurate camera calibration requires pure rotation (Zhang et al. 2008) and sharp images. Recent methods, such as Zhang's (Zhengyou 2000), use a fixed threshold on the pixel difference between frames and preset variables, while slope information for image frame selection in the calibration phase has been neglected (Audet & Okutomi 2009). Moreover, these approaches become less reliable when image frames are blurred. These problems necessitate enhancement of the camera calibration algorithm (Wang et al. 2010).
OpenCV
Deep Learning
Engineering of Camera Calibration
Occasionally the out-of-the-box solution does not work, and you need some modified version of the algorithms.
The first step of camera calibration uses known pattern images, such as a chessboard. However, sometimes the image quality and pattern do not match the standard calibration approach.
I use some additional techniques to enhance the result. The first step is to improve corner detection, which can be done with the following steps.
* The chessboard is used as a pattern of alternating black and white squares,
- which ensures that there is no bias toward one side or the other in measurement.
* The image must be a grayscale (single-channel) image.
- img - Input image. It should be grayscale and float32 type.
* Gradient in the x and y directions together (for better detection), e.g. via a morphological gradient:
- cv.morphologyEx( src, op, kernel[, dst[, anchor[, iterations[, borderType[, borderValue]]]]] ) -> dst # a different kernel may be required
* Use Harris corner detection, which is based on the second-moment matrix (autocorrelation matrix) of the image gradients.
- cv.cornerHarris( src, blockSize, ksize, k[, dst[, borderType]] ) -> dst # blockSize, ksize, and k should be tuned
> img - Input image. It should be grayscale and float32 type.
> blockSize - It is the size of neighborhood considered for corner detection
> ksize - Aperture parameter of the Sobel derivative used.
> k - Harris detector free parameter in the equation.
* Connected-component analysis to remove some noise:
- cv.connectedComponentsWithStats( image[, labels[, stats[, centroids[, connectivity[, ltype]]]]] ) -> retval, labels, stats, centroids
* Sub-pixel corners: corner detection returns integer coordinates, but some applications require real-valued coordinates.
- cv.cornerSubPix( image, corners, winSize, zeroZone, criteria ) -> corners
- image Input single-channel, 8-bit or float image.
- corners Initial coordinates of the input corners and refined coordinates provided for output.
- winSize Half of the side length of the search window (e.g., a winSize of (5,5) gives an 11x11 search window).
- zeroZone Half of the size of the dead region in the middle of the search zone, sometimes used to avoid possible singularities of the autocorrelation matrix; (-1,-1) means no such zone.
- criteria Criteria for termination of the iterative process of corner refinement.
* Remove duplicate corners: for example, corners closer than 5 pixels to each other should be removed.
Reference:
https://theailearner.com/tag/cv2-cornersubpix/
https://docs.opencv.org/3.4/dc/d0d/tutorial_py_features_harris.html
#Camera_Calibration #Camera-resectioning
See more: https://www.tiziran.com/topics/camera_calibration
If you found the content informative, you can follow me on LinkedIn and Twitter for more!
#FarshidPirahanSiah #tiziran
Source code
Basic camera calibration source code using the OpenCV library in a Jupyter notebook
Reference
Semi-auto calibration for multi-camera systems (Pirahansiah's method, 2022) + prognostic analysis (using a QR code in the center of the calibration pattern, with a different color at each of the QR code's four corners to indicate orientation, which is used to synchronize the points across all cameras)
Book Chapter (Springer):
Camera Calibration and Video Stabilization Framework for Robot Localization https://link.springer.com/chapter/10.1007/978-3-030-74540-0_12
IEEE paper:
Pattern image significance for camera calibration https://ieeexplore.ieee.org/abstract/document/8305440
Camera calibration for multi-modal robot vision based on image quality assessment https://www.researchgate.net/profile/Farshid-Pirahansiah/publication/288174690_Camera_calibration_for_multi-modal_robot_vision_based_on_image_quality_assessment/links/5735bc2908aea45ee83c999e/Camera-calibration-for-multi-modal-robot-vision-based-on-image-quality-assessment.pdf
Part 3.
Basics of camera calibration + source code (Python+OpenCV) https://www.tiziran.com/topics/camera_calibration
#camera_calibration #3D #multi_camera_calibration #extended_reality #mixed_reality
REFERENCES
Abdullah, S. N. H. S., F. PirahanSiah, M. Khalid & K. Omar 2010. An evaluation of classification techniques using enhanced Geometrical Topological Feature Analysis. 2nd Malaysian Joint Conference on Artificial Intelligence (MJCAI 2010). Malaysia, 28-30 July, 2010.
Abdullah, S. N. H. S., F. PirahanSiah, N. H. Zainal Abidin & S. Sahran 2010. Multi-threshold approach for license plate recognition system. International Conference on Signal and Image Processing WASET Singapore August 25-27, 2010 ICSIP. pp. 1046-1050.
Abidin, N. H. Z., S. N. H. S. Abdullah, S. Sahran & F. PirahanSiah 2011. License plate recognition with multi-threshold based on entropy. Electrical Engineering and Informatics (ICEEI), 2011 International Conference on. pp. 1-6.
Agapito, L., E. Hayman & I. Reid 2001. Self-calibration of rotating and zooming cameras. International Journal of Computer Vision 45(2): 107-127.
Alcala-Fdez, J. & J. M. Alonso 2015. A Survey of Fuzzy Systems Software: Taxonomy, Current Research Trends and Prospects. Fuzzy Systems, IEEE Transactions on PP(99): 40-56.
Alcantarilla, P., O. Stasse, S. Druon, L. Bergasa & F. Dellaert 2013. How to localize humanoids with a single camera? Autonomous Robots 34(1-2): 47-71.
Alejandro Héctor Toselli, E. Vidal & F. Casacuberta. 2011. Multimodal Interactive Pattern Recognition and Applications Ed.: Springer.
Álvarez, S., D. F. Llorca & M. A. Sotelo 2014. Hierarchical camera auto-calibration for traffic surveillance systems. Expert Systems with Applications 41(4, Part 1): 1532-1542.
Amanatiadis, A., A. Gasteratos, S. Papadakis & V. Kaburlasos. 2010. Image Stabilization in Active Robot Vision Ed.: INTECH Open Access Publisher.
Anuar, A., H. Hanizam, S. M. Rizal & N. N. Anuar 2015. Comparison of camera calibration method for a vision based meso-scale measurement system. Proceedings of Mechanical Engineering Research Day 2015: MERD'15 2015: 139-140.
Audet, S. & M. Okutomi 2009. A user-friendly method to geometrically calibrate projector-camera systems. Computer Vision and Pattern Recognition Workshops, 2009. CVPR Workshops 2009. IEEE Computer Society Conference on. pp. 47-54.
Baharav, Z. & R. Kakarala 2013. Visually significant QR codes: Image blending and statistical analysis. Multimedia and Expo (ICME), 2013 IEEE International Conference on. pp. 1-6.
Baker, S. & I. Matthews 2004. Lucas-Kanade 20 Years On: A Unifying Framework. International Journal of Computer Vision 56(3): 221-255.
Baker, S., D. Scharstein, J. P. Lewis, S. Roth, M. Black & R. Szeliski 2011. A Database and Evaluation Methodology for Optical Flow. International Journal of Computer Vision 92(1): 1-31.
Banks, J. & P. Corke 2001. Quantitative evaluation of matching methods and validity measures for stereo vision. The International Journal of Robotics Research 20(7): 512-532.
Barron, J. L., D. J. Fleet & S. S. Beauchemin 1994. Performance of optical flow techniques. International Journal of Computer Vision 12(1): 43-77.
Battiato, S., G. Gallo, G. Puglisi & S. Scellato 2007. SIFT Features Tracking for Video Stabilization. Image Analysis and Processing, 2007. ICIAP 2007. 14th International Conference on. pp. 825-830.
Botterill, T., S. Mills & R. Green 2013. Correcting Scale Drift by Object Recognition in Single-Camera SLAM. Cybernetics, IEEE Transactions on PP(99): 1-14.
Brox, T., A. Bruhn, N. Papenberg & J. Weickert 2004. High Accuracy Optical Flow Estimation Based on a Theory for Warping. Computer Vision - ECCV 2004 3024: 25-36.
Bruhn, A., J. Weickert & C. Schnörr 2005. Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods. International Journal of Computer Vision 61(3): 211-231.
Burt, P. J. & E. H. Adelson 1983. The Laplacian pyramid as a compact image code. Communications, IEEE Transactions on 31(4): 532-540.
Butler, D. J., J. Wulff, G. B. Stanley & M. J. Black 2012. A naturalistic open source movie for optical flow evaluation. Proceedings of the 12th European conference on Computer Vision - Volume Part VI 611-625. Springer-Verlag. Florence, Italy,
Cai, J. & R. Walker 2009. Robust video stabilisation algorithm using feature point selection and delta optical flow. Iet Computer Vision 3(4): 176-188.
Carrillo, L. R. G., I. Fantoni, E. Rondon & A. Dzul 2015. Three-Dimensional Position and Velocity Regulation of a Quad-Rotorcraft Using Optical Flow. Ieee Transactions on Aerospace and Electronic Systems 51(1): 358-371.
Chang, H. C., S. H. Lai, K. R. Lu & Ieee. 2004. A robust and efficient video stabilization algorithm Ed. New York: IEEE.
Chao, H. Y., Y. Gu, J. Gross, G. D. Guo, M. L. Fravolini, M. R. Napolitano & Ieee 2013. A Comparative Study of Optical Flow and Traditional Sensors in UAV Navigation. 2013 American Control Conference: 3858-3863.
Chen, S. Y. 2012. Kalman Filter for Robot Vision: A Survey. IEEE Transactions on Industrial Electronics 59(11): 4409-4420.
Cignoni, P., C. Rocchini & R. Scopigno 1998. Metro: measuring error on simplified surfaces. Computer Graphics Forum. 17(2) pp. 167-174.
Courchay, J., A. S. Dalalyan, R. Keriven & P. Sturm 2012. On camera calibration with linear programming and loop constraint linearization. International Journal of Computer Vision 97(1): 71-90.
Crivelli, T., M. Fradet, P. H. Conze, P. Robert & P. Perez 2015. Robust Optical Flow Integration. IEEE Transactions on Image Processing 24(1): 484-498.
Cui, Y., F. Zhou, Y. Wang, L. Liu & H. Gao 2014. Precise calibration of binocular vision system used for vision measurement. Optics Express 22(8): 9134-9149.
Dang, T., C. Hoffmann & C. Stiller 2009. Continuous Stereo Self-Calibration by Camera Parameter Tracking. Image Processing, IEEE Transactions on 18(7): 1536-1550.
Danping, Z. & T. Ping 2013. CoSLAM: Collaborative Visual SLAM in Dynamic Environments. Pattern Analysis and Machine Intelligence, IEEE Transactions on 35(2): 354-366.
De Castro, E. & C. Morandi 1987. Registration of translated and rotated images using finite Fourier transforms. IEEE Transactions on Pattern Analysis & Machine Intelligence(5): 700-703.
De Ma, S. 1996. A self-calibration technique for active vision systems. Robotics and Automation, IEEE Transactions on 12(1): 114-120.
de Paula, M. B., C. R. Jung & L. G. da Silveira Jr 2014. Automatic on-the-fly extrinsic camera calibration of onboard vehicular cameras. Expert Systems with Applications 41(4, Part 2): 1997-2007.
Dellaert, F., D. Fox, W. Burgard & S. Thrun 1999. Monte carlo localization for mobile robots. Robotics and Automation, 1999. Proceedings. 1999 IEEE International Conference on. 2 pp. 1322-1328.
Deqing, S., S. Roth & M. J. Black 2010. Secrets of optical flow estimation and their principles. Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. pp. 2432-2439.
Deshpande, P. P. & D. Sazou. 2015. Corrosion Protection of Metals by Intrinsically Conducting Polymers Ed.: CRC Press.
Dong, J. & Y. Xia 2014. Real-time video stabilization based on smoothing feature trajectories. Computer and Information Technology 519-520: 640-643.
DongMing, L., S. Lin, X. Dianguang & Z. LiJuan 2012. Camera Linear Calibration Algorithm Based on Features of Calibration Plate. Advances in Electric and Electronics: 689-697.
Dorini, L. B. & N. J. Leite 2013. A Scale-Space Toggle Operator for Image Transformations. International Journal of Image and Graphics 13(04): 1350022-32.
Dubská, M., A. Herout, R. Juranek & J. Sochor 2014. Fully automatic roadside camera calibration for traffic surveillance. 1162-1171.
Dufaux, F. & F. Moscheni 1995. Motion estimation techniques for digital TV: A review and a new contribution. Proceedings of the IEEE 83(6): 858-876.
Elamsy, T., A. Habed & B. Boufama 2012. A new method for linear affine self-calibration of stationary zooming stereo cameras. Image Processing (ICIP), 2012 19th IEEE International Conference on. pp. 353-356.
Elamsy, T., A. Habed & B. Boufama 2014. Self-Calibration of Stationary Non-Rotating Zooming Cameras. Image and Vision Computing 32(3): 212-226.
Eruhimov, V. 2016. OpenCV: Camera calibration and 3D reconstruction. http://docs.opencv.org/master/d4/d94/tutorial_camera_calibration.html#gsc.tab=0 (Accessed October 2016).
Estalayo, E., L. Salgado, F. Jaureguizar & N. García 2006. Efficient image stabilization and automatic target detection in aerial FLIR sequences. Defense and Security Symposium. pp. 62340N-62340N-12.
Fan, C. & G. Yao 2012. Full-range spectral domain Jones matrix optical coherence tomography using a single spectral camera. Optics Express 20(20): 22360-22371.
Farnebäck, G. 2003. Two-frame motion estimation based on polynomial expansion. Image Analysis: 363-370.
Felsberg, M. & G. Sommer 2004. The Monogenic Scale-Space: A Unifying Approach to Phase-Based Image Processing in Scale-Space. Journal of Mathematical Imaging and Vision 21(1-2): 5-26.
Feng, Y., J. Ren, J. Jiang, M. Halvey & J. Jose 2012. Effective venue image retrieval using robust feature extraction and model constrained matching for mobile robot localization. Machine Vision and Applications 23(5): 1011-1027.
Feng, Y., A. M. Zoubir, C. Fritsche & F. Gustafsson 2013. Robust cooperative sensor network localization via the EM criterion in LOS/NLOS environments. Signal Processing Advances in Wireless Communications (SPAWC), 2013 IEEE 14th Workshop on. pp. 505-509.
Ferstl, D., C. Reinbacher, G. Riegler, M. Rüther & H. Bischof 2015. Learning Depth Calibration of Time-of-Flight Cameras. Proceedings of the British Machine Vision Conference (BMVC). pp. 1-12.
Ferzli, R. & L. J. Karam 2005. No-reference objective wavelet based noise immune image sharpness metric. Image Processing, 2005. ICIP 2005. IEEE International Conference on. 1 pp. I-405-8.
Florez, J., F. Calderon & C. Parra 2013. Video stabilization taken with a snake robot. Image, Signal Processing, and Artificial Vision (STSIVA), 2013 XVIII Symposium of. pp. 1-5.
Fortun, D., P. Bouthemy & C. Kervrann 2015. Optical flow modeling and computation: a survey. Computer Vision and Image Understanding 134: 1-21.
Fuchs, S. 2012. Calibration and multipath mitigation for increased accuracy of time-of-flight camera measurements in robotic applications. Thesis, Universitätsbibliothek der Technischen Universität Berlin.
Fuentes-Pacheco, J., J. Ruiz-Ascencio & J. Rendón-Mancha 2015. Visual simultaneous localization and mapping: a survey. Artificial Intelligence Review 43(1): 55-81.
Fuentes-Pacheco, J., J. Ruiz-Ascencio & J. M. Rendón-Mancha 2012. Visual simultaneous localization and mapping: a survey. Artificial Intelligence Review 43(1): 55-81.
Furukawa, Y., B. Curless, S. M. Seitz & R. Szeliski 2009. Manhattan-world stereo. Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. pp. 1422-1429.
Garg, V. & K. Deep 2015. Performance of Laplacian Biogeography-Based Optimization Algorithm on CEC 2014 continuous optimization benchmarks and camera calibration problem. Swarm and Evolutionary Computation.
Geiger, A. 2013. Probabilistic models for 3D urban scene understanding from movable platforms Ed. 25. KIT Scientific Publishing.
Geiger, A., P. Lenz, C. Stiller & R. Urtasun 2013. Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research: 0278364913491297.
Geiger, A., F. Moosmann, O. Car & B. Schuster 2012. Automatic camera and range sensor calibration using a single shot. Robotics and Automation (ICRA), 2012 IEEE International Conference on. pp. 3936-3943.
Gibson, J. J. 1950. The perception of the visual world. Oxford, England: Houghton Mifflin The perception of the visual world.(1950). xii 242 pp.
Goncalves Lins, R., S. N. Givigi & P. R. Gardel Kurka 2015. Vision-Based Measurement for Localization of Objects in 3-D for Robotic Applications. Instrumentation and Measurement, IEEE Transactions on 64(11): 2950-2958.
Groeger, M., G. Hirzinger & Insticc. 2006. Optical flow to analyse stabilised images of the beating heart Ed. Vol 2. VISAPP 2006: Proceedings of the First International Conference on Computer Vision Theory and Applications.
Grundmann, M., V. Kwatra, D. Castro & I. Essa 2012. Calibration-free rolling shutter removal. Computational Photography (ICCP), 2012 IEEE International Conference on. pp. 1-8.
Grundmann, M., V. Kwatra & I. Essa 2011. Auto-directed video stabilization with robust l1 optimal camera paths. Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. pp. 225-232.
Gueaieb, W. & M. S. Miah 2008. An intelligent mobile robot navigation technique using RFID technology. Instrumentation and Measurement, IEEE Transactions on 57(9): 1908-1917.
Gurdjos, P. & P. Sturm 2003. Methods and geometry for plane-based self-calibration. Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on. 1 pp. I-491-I-496.
Haiyang, C., G. Yu & M. Napolitano 2013. A survey of optical flow techniques for UAV navigation applications. Unmanned Aircraft Systems (ICUAS), 2013 International Conference on. pp. 710-716.
Hanning, G., N. Forslöw, P.-E. Forssén, E. Ringaby, D. Törnqvist & J. Callmer 2011. Stabilizing cell phone video using inertial measurement sensors. Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on. pp. 1-8.
Hartley, R. & A. Zisserman. 2003. Multiple view geometry in computer vision Ed.: Cambridge university press.
Heidarzade, A., I. Mahdavi & N. Mahdavi-Amiri 2015. Multiple attribute group decision making in interval type-2 fuzzy environment using a new distance formulation. International Journal of Operational Research 24(1): 17-37.
Heikkila, J. 2000. Geometric camera calibration using circular control points. Pattern Analysis and Machine Intelligence, IEEE Transactions on 22(10): 1066-1077.
Heikkila, J. & O. Silven 1997. A four-step camera calibration procedure with implicit image correction. Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on. pp. 1106-1112.
Herrera C., D., J. Kannala & J. Heikkilä 2012. Joint Depth and Color Camera Calibration with Distortion Correction. Pattern Analysis and Machine Intelligence, IEEE Transactions on 34(10): 2058-2064.
Holmes, S. A. & D. W. Murray 2013. Monocular SLAM with Conditionally Independent Split Mapping. Pattern Analysis and Machine Intelligence, IEEE Transactions on 35(6): 1451-1463.
Hong, Y., G. Ren & E. Liu 2015. Non-iterative method for camera calibration. Optics Express 23(18): 23992-24003.
Horn, B. K. & B. G. Schunck 1981. Determining optical flow. 1981 Technical symposium east. pp. 319-331.
Horn, B. K. P. 1977. Understanding image intensities. Artificial Intelligence 8(2): 201-231.
Hovden, A.-M. 2015. Removing outliers from the Lucas-Kanade method with a weighted median filter.
Hu, H., J. Liang, Z.-z. Xiao, Z.-z. Tang, A. K. Asundi & Y.-x. Wang 2012. A four-camera videogrammetric system for 3-D motion measurement of deformable object. Optics and Lasers in Engineering 50(5): 800-811.
Hyunjoon, L., E. Shechtman, W. Jue & L. Seungyong 2014. Automatic Upright Adjustment of Photographs With Robust Camera Calibration. Pattern Analysis and Machine Intelligence, IEEE Transactions on 36(5): 833-844.
Irani, M. & P. Anandan 2000. About Direct Methods. Proceedings of the International Workshop on Vision Algorithms: Theory and Practice. Springer-Verlag.
Ismail, K., T. Sayed, N. Saunier & M. Bartlett 2013. A methodology for precise camera calibration for data collection applications in urban traffic scenes. Canadian Journal of Civil Engineering 40(1): 57-67.
Jacobs, N., A. Abrams & R. Pless 2013. Two Cloud-Based Cues for Estimating Scene Structure and Camera Calibration. Pattern Analysis and Machine Intelligence, IEEE Transactions on 35(10): 2526-2538.
JAFELICE, R. M., A. M. BERTONE & R. C. BASSANEZI 2015. A Study on Subjectivities of Type 1 and 2 in Parameters of Differential Equations. TEMA (São Carlos) 16: 51-60.
Jen-Shiun, C., H. Chih-Hsien & L. Hsin-Ting 2013. High density QR code with multi-view scheme. Electronics Letters 49(22): 1381-1383.
Jia, C. & B. L. Evans 2014. Constrained 3D rotation smoothing via global manifold regression for video stabilization. Signal Processing, IEEE Transactions on 62(13): 3293-3304.
Jia, Z., J. Yang, W. Liu, F. Wang, Y. Liu, L. Wang, C. Fan & K. Zhao 2015. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system. Optics Express 23(12): 15205-15223.
Jiang, H., Z.-N. Li & M. S. Drew 2004. Optimizing motion estimation with linear programming and detail-preserving variational method. Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on. 1 pp. I-738-I-745 Vol. 1.
Jianyang, L., L. Youfu & C. Shengyong 2014. Robust Camera Calibration by Optimal Localization of Spatial Control Points. Instrumentation and Measurement, IEEE Transactions on 63(12): 3076-3087.
Joshi, P. & S. Prakash 2014. Image quality assessment based on noise detection. Signal Processing and Integrated Networks (SPIN), 2014 International Conference on. pp. 755-759.
Kaehler, A. & G. Bradski. 2016. Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library 1st Edition Ed.: O'Reilly Media, Inc.
Kahaki, S. M. M., M. J. Nordin & A. H. Ashtari 2014. Contour-based corner detection and classification by using mean projection transform. Sensors 14(3): 4126-4143.
Karnik, N. N. & J. M. Mendel 2001. Operations on type-2 fuzzy sets. Fuzzy sets and systems 122(2): 327-348.
Karpenko, A., D. Jacobs, J. Baek & M. Levoy 2011. Digital video stabilization and rolling shutter correction using gyroscopes. CSTR 1: 2.
Kearney, J. K., W. B. Thompson & D. L. Boley 1987. Optical Flow Estimation: An Error Analysis of Gradient-Based Methods with Local Optimization. Pattern Analysis and Machine Intelligence, IEEE Transactions on PAMI-9(2): 229-244.
Kennedy, R. & C. J. Taylor 2015. Hierarchically-Constrained Optical Flow. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Kim, A. & R. M. Eustice 2013. Real-Time Visual SLAM for Autonomous Underwater Hull Inspection Using Visual Saliency. Robotics, IEEE Transactions on PP(99): 1-15.
Kim, J.-H. & B.-K. Koo 2013. Linear stratified approach using full geometric constraints for 3D scene reconstruction and camera calibration. Optics Express 21(4): 4456-4474.
Ko, N. Y. & T.-Y. Kuc 2015. Fusing Range Measurements from Ultrasonic Beacons and a Laser Range Finder for Localization of a Mobile Robot. Sensors 15(5): 11050-11075.
Koch, H., A. Konig, A. Weigl-Seitz, K. Kleinmann & J. Suchy 2013. Multisensor contour following with vision, force, and acceleration sensors for an industrial robot. Instrumentation and Measurement, IEEE Transactions on 62(2): 268-280.
Kumar, A., M. K. Panda, S. Kundu & V. Kumar 2012. Designing of an interval type-2 fuzzy logic controller for Magnetic Levitation System with reduced rule base. Computing Communication & Networking Technologies (ICCCNT), 2012 Third International Conference on. pp. 1-8.
Kumar, S., H. Azartash, M. Biswas & T. Nguyen 2011. Real-Time Affine Global Motion Estimation Using Phase Correlation and its Application for Digital Image Stabilization. Ieee Transactions on Image Processing 20(12): 3406-3418.
Kumar, S. & R. M. Hegde 2015. An Efficient Compartmental Model for Real-Time Node Tracking Over Cognitive Wireless Sensor Networks. Signal Processing, IEEE Transactions on 63(7): 1712-1725.
Lazaros, N., G. C. Sirakoulis & A. Gasteratos 2008. Review of stereo vision algorithms: from software to hardware. International Journal of Optomechatronics 2(4): 435-462.
Lee, C., D. Clark & J. Salvi 2013. SLAM with dynamic targets via single-cluster PHD filtering. Selected Topics in Signal Processing, IEEE Journal of PP(99): 1-1.
Lee, H., E. Shechtman, J. Wang & S. Lee 2013. Automatic Upright Adjustment of Photographs with Robust Camera Calibration. Pattern Analysis and Machine Intelligence, IEEE Transactions on PP(99): 1-1.
Lee, K.-Y., Y.-Y. Chuang, B.-Y. Chen & M. Ouhyoung 2009. Video stabilization using robust feature trajectories. Computer Vision, 2009 IEEE 12th International Conference on. pp. 1397-1404.
Lei, W., K. Sing Bing, S. Heung-Yeung & X. Guangyou 2004. Error analysis of pure rotation-based self-calibration. Pattern Analysis and Machine Intelligence, IEEE Transactions on 26(2): 275-280.
Leitner, J., S. Harding, M. Frank, A. Forster & J. Schmidhuber 2012. Learning Spatial Object Localization from Vision on a Humanoid Robot. International Journal of Advanced Robotic Systems 9: 1-10.
Li, D., T. Li & T. Zhao 2014. A New Clustering Method Based On Type-2 Fuzzy Similarity and Inclusion Measures. Journal of Computers 9(11): 2559-2569.
Li, Q., H. Feng & Z. Xu 2005. Auto-focus apparatus with digital signal processor. Photonics Asia 2004. pp. 416-423.
Li, W., J. Hu, Z. Li, L. Tang & C. Li 2011. Image Stabilization Based on Harris Corners and Optical Flow. Knowledge Science, Engineering and Management 7091: 387-394.
Liang, Q. & J. M. Mendel 2000. Interval type-2 fuzzy logic systems: theory and design. Fuzzy Systems, IEEE Transactions on 8(5): 535-550.
Liming, S., W. Wenfu, G. Junrong & L. Xiuhua 2013. Survey on Camera Calibration Technique. Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2013 5th International Conference on. 2 pp. 389-392.
Linchao, B., Y. Qingxiong & J. Hailin 2014. Fast Edge-Preserving PatchMatch for Large Displacement Optical Flow. Image Processing, IEEE Transactions on 23(12): 4996-5006.
Lindeberg, T. 1994. Scale-space theory: A basic tool for analyzing structures at different scales. Journal of applied statistics 21(1-2): 225-270.
Lins, R. G., S. N. Givigi & P. R. G. Kurka 2015. Vision-Based Measurement for Localization of Objects in 3-D for Robotic Applications. Ieee Transactions on Instrumentation and Measurement 64(11): 2950-2958.
Litvin, A., J. Konrad & W. C. Karl 2003. Probabilistic video stabilization using Kalman filtering and mosaicing. Electronic Imaging 2003. pp. 663-674.
Liu, F., M. Gleicher, H. Jin & A. Agarwala 2009. Content-preserving warps for 3D video stabilization. ACM Transactions on Graphics (TOG). 28(3) pp. 44.
Liu, F., M. Gleicher, J. Wang, H. Jin & A. Agarwala 2011. Subspace video stabilization. ACM Trans. Graph. 30(1): 1-10.
Liu, F., M. Gleicher, J. Wang, H. Jin & A. Agarwala 2011. Subspace video stabilization. ACM Transactions on Graphics (TOG) 30(1): 4.
Liu, S., L. Yuan, P. Tan & J. Sun 2013. Bundled camera paths for video stabilization. ACM Trans. Graph. 32(4): 1-10.
Liu, S., L. Yuan, P. Tan & J. Sun 2014. Steadyflow: Spatially smooth optical flow for video stabilization. Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on. pp. 4209-4216.
Liu, Y., D. G. Xi, Z. L. Li & Y. Hong 2015. A new methodology for pixel-quantitative precipitation nowcasting using a pyramid Lucas Kanade optical flow approach. Journal of Hydrology 529: 354-364.
Long Thanh, N. 2011. Refinement CTIN for general type-2 fuzzy logic systems. Fuzzy Systems (FUZZ), 2011 IEEE International Conference on. pp. 1225-1232.
Lowe, D. G. 2004. Distinctive image features from scale-invariant keypoints. International journal of computer vision 60(2): 91-110.
Lu, C.-S. & C.-Y. Hsu 2012. Constraint-optimized keypoint inhibition/insertion attack: security threat to scale-space image feature extraction. Proceedings of the 20th ACM international conference on Multimedia. pp. 629-638.
Lucas, B. D. & T. Kanade 1981. An iterative image registration technique with an application to stereo vision. IJCAI. 81 pp. 674-679.
Martin, F., C. E. Aguero & J. M. Canas 2015. Active Visual Perception for Humanoid Robots. International Journal of Humanoid Robotics 12(1): 22.
MathWorks 2016. Evaluating the Accuracy of Single Camera Calibration. http://www.mathworks.com/examples/matlab-computer-vision/704-evaluating-the-accuracy-of-single-camera-calibration (Accessed 1 January 2016).
Matsushita, Y., E. Ofek, W. Ge, X. Tang & H.-Y. Shum 2006. Full-frame video stabilization with motion inpainting. Pattern Analysis and Machine Intelligence, IEEE Transactions on 28(7): 1150-1163.
Mendel, J. M., H. Hagras, W.-W. Tan, W. W. Melek & H. Ying 2014. Appendix A T2 FLC Software: From Type-1 to zSlices-Based General Type-2 FLCs. Introduction to Type-2 Fuzzy Logic Control: 315-337.
Mendel, J. M., R. John & F. Liu 2006. Interval type-2 fuzzy logic systems made simple. Fuzzy Systems, IEEE Transactions on 14(6): 808-821.
Mendel, J. M. & R. I. B. John 2002. Type-2 fuzzy sets made simple. Fuzzy Systems, IEEE Transactions on 10(2): 117-127.
Meng, X. Q. & Z. Y. Hu 2003. A new easy camera calibration technique based on circular points. Pattern Recognition 36(5): 1155-1164.
Menze, M., C. Heipke & A. Geiger 2015. Discrete Optimization for Optical Flow. Pattern Recognition: 16-28.
Ming-Jun, C., L. K. Cormack & A. C. Bovik 2013. No-Reference Quality Assessment of Natural Stereopairs. Image Processing, IEEE Transactions on 22(9): 3379-3391.
Miraldo, P. & H. Araujo 2013. Calibration of Smooth Camera Models. Pattern Analysis and Machine Intelligence, IEEE Transactions on 35(9): 2091-2103.
Mohedano, R., A. Cavallaro & N. Garcia 2014. Camera Localization Using Trajectories and Maps. Pattern Analysis and Machine Intelligence, IEEE Transactions on 36(4): 684-697.
Moorthy, A. K. & A. C. Bovik 2010. Automatic Prediction of Perceptual Video Quality: Recent Trends and Research Directions. High-Quality Visual Experience: 3-23.
Morimoto, C. & R. Chellappa 1996. Fast electronic digital image stabilization. Pattern Recognition, 1996., Proceedings of the 13th International Conference on. 3 pp. 284-288.
Morimoto, C. & R. Chellappa 1997. Fast Electronic Digital Image Stabilization for Off-Road Navigation. Real-Time Imaging: 285-296.
Murray, D. & C. Jennings 1997. Stereo vision based mapping and navigation for mobile robots. Robotics and Automation, 1997. Proceedings., 1997 IEEE International Conference on. 2 pp. 1694-1699.
Myers, R. L. 2003. Display interfaces: fundamentals and standards Ed.: John Wiley & Sons.
Naeimizaghiani, M., F. PirahanSiah, S. N. H. S. Abdullah & B. Bataineh 2013. Character and object recognition based on global feature extraction. Journal of Theoretical and Applied Information Technology 54(1): 109-120.
Nagel, H.-H. 1983. Displacement vectors derived from second-order intensity variations in image sequences. Computer Vision, Graphics, and Image Processing 21(1): 85-117.
Navarro, H., R. Orghidan, M. Gordan, G. Saavedra & M. Martinez-Corral 2014. Fuzzy Integral Imaging Camera Calibration for Real Scale 3D Reconstructions. Display Technology, Journal of 10(7): 601-608.
Ni, W.-F., S.-C. Wei, T. Lin & S.-B. Chen 2015. A Self-calibration Algorithm with Chaos Particle Swarm Optimization for Autonomous Visual Guidance of Welding Robot. Robotic Welding, Intelligence and Automation: RWIA’2014: 185-195.
Nomura, A., H. Miike & K. Koga 1991. Field theory approach for determining optical flow. Pattern Recognition Letters 12(3): 183-190.
Okade, M., G. Patel & P. K. Biswas 2016. Robust Learning-Based Camera Motion Characterization Scheme With Applications to Video Stabilization. IEEE Transactions on Circuits and Systems for Video Technology 26(3): 453-466.
Oreifej, O., L. Xin & M. Shah 2013. Simultaneous Video Stabilization and Moving Object Detection in Turbulence. Pattern Analysis and Machine Intelligence, IEEE Transactions on 35(2): 450-462.
Orghidan, R., M. Danciu, A. Vlaicu, G. Oltean, M. Gordan & C. Florea 2011. Fuzzy versus crisp stereo calibration: A comparative study. Image and Signal Processing and Analysis (ISPA), 2011 7th International Symposium on. pp. 627-632.
Ozek, M. B. & Z. H. Akpolat 2008. A software tool: Type‐2 fuzzy logic toolbox. Computer Applications in Engineering Education 16(2): 137-146.
Park, I. W., B. J. Lee, S. H. Cho, Y. D. Hong & J. H. Kim 2012. Laser-Based Kinematic Calibration of Robot Manipulator Using Differential Kinematics. Ieee-Asme Transactions on Mechatronics 17(6): 1059-1067.
Park, Y., S. Yun, C. Won, K. Cho, K. Um & S. Sim 2014. Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board. Sensors 14(3): 5333-5353.
Perez, J., F. Caballero & L. Merino 2014. Integration of Monte Carlo Localization and place recognition for reliable long-term robot localization. Autonomous Robot Systems and Competitions (ICARSC), 2014 IEEE International Conference on. pp. 85-91.
Pérez, J., F. Caballero & L. Merino 2015. Enhanced Monte Carlo Localization with Visual Place Recognition for Robust Robot Localization. Journal of Intelligent & Robotic Systems 80(3): 641-656.
Pillai, A. V., A. A. Balakrishnan, R. A. Simon, R. C. Johnson & S. Padmagireesan 2013. Detection and localization of texts from natural scene images using scale space and morphological operations. Circuits, Power and Computing Technologies (ICCPCT), 2013 International Conference on. pp. 880-885.
PirahanSiah, F., S. N. H. S. Abdullah & S. Sahran 2010. Adaptive image segmentation based on peak signal-to-noise ratio for a license plate recognition system. Computer Applications and Industrial Electronics (ICCAIE), 2010 International Conference on. pp. 468-472.
PirahanSiah, F., S. N. H. S. Abdullah & S. Sahran 2011. Comparison single thresholding method for handwritten images segmentation. Pattern Analysis and Intelligent Robotics (ICPAIR), 2011 International Conference on. 1 pp. 92-96.
PirahanSiah, F., S. N. H. S. Abdullah & S. Sahran 2012. 2D versus 3D Map for Environment Movement Object. 2nd National Doctoral Seminar on Artificial Intelligence Technology. Center for Artificial Intelligence Technology (CAIT), Universiti Kebangsaan Malaysia. Residence Hotel, UNITEN, Malaysia.
PirahanSiah, F., S. N. H. S. Abdullah & S. Sahran 2013. Peak Signal-To-Noise Ratio Based on Threshold Method for Image Segmentation. Journal of Theoretical and Applied Information Technology 57(2).
PirahanSiah, F., S. N. H. S. Abdullah & S. Sahran 2013. Simultaneous Localization and Mapping Trends and Humanoid Robot Linkages. Asia-Pacific Journal of Information Technology and Multimedia 2(2): 12.
PirahanSiah, F., S. N. H. S. Abdullah & S. Sahran 2014. Adaptive Image Thresholding Based On the Peak Signal-To-Noise Ratio. Research Journal of Applied Sciences, Engineering and Technology.
PirahanSiah, F., S. N. H. S. Abdullah & S. Sahran 2015. Augmented optical flow methods for video stabilization. 4th Artificial Intelligence Technology Postgraduate Seminar (CAITPS 2015). Faculty of Information Science and Technology (FTSM) - UKM on 22 and 23 December 2015. pp. 47-52.
PirahanSiah, F., S. N. H. S. Abdullah & S. Sahran 2015. Camera calibration for multi-modal robot vision based on image quality assessment. Control Conference (ASCC), 2015 10th Asian. pp. 1-6.
Prasad, A. K., R. J. Adrian, C. C. Landreth & P. W. Offutt 1992. Effect of resolution on the speed and accuracy of particle image velocimetry interrogation. Experiments in Fluids 13(2): 105-116.
Puig, L., J. Bermúdez, P. Sturm & J. J. Guerrero 2012. Calibration of omnidirectional cameras in practice: A comparison of methods. Computer Vision and Image Understanding 116(1): 120-137.
Qian, C., Y. Wang & L. Guo 2015. Monocular optical flow navigation using sparse SURF flow with multi-layer bucketing screener. Control Conference (CCC), 2015 34th Chinese. pp. 3785-3790.
Rada-Vilela, J. 2013. Fuzzylite: a fuzzy logic control library in C++. PROCEEDINGS OF THE OPEN SOURCE DEVELOPERS CONFERENCE.
Reddy, B. S. & B. N. Chatterji 1996. An FFT-based technique for translation, rotation, and scale-invariant image registration. IEEE transactions on image processing 5(8): 1266-1271.
Reimers, M. 2010. Making Informed Choices about Microarray Data Analysis. PLoS Comput Biol 6(5): e1000786.
Ren, Q. 2012. Type-2 Takagi-Sugeno-Kang Fuzzy Logic System and Uncertainty in Machining. Thesis, École Polytechnique de Montréal.
Ren, Q., M. Balazinski, L. Baron & K. Jemielniak 2011. TSK fuzzy modeling for tool wear condition in turning processes: an experimental study. Engineering Applications of Artificial Intelligence 24(2): 260-265.
Ren, Q., L. Baron & M. Balazinski 2009. Application of type-2 fuzzy estimation on uncertainty in machining: an approach on acoustic emission during turning process. Fuzzy Information Processing Society, 2009. NAFIPS 2009. Annual Meeting of the North American. pp. 1-6.
Revaud, J., P. Weinzaepfel, Z. Harchaoui & C. Schmid 2015. EpicFlow: Edge-Preserving Interpolation of Correspondences for Optical Flow. arXiv preprint arXiv:1501.02565.
Rezaee, B. 2008. A new approach to design of interval type-2 fuzzy logic systems. Hybrid Intelligent Systems, 2008. HIS'08. Eighth International Conference on. pp. 234-239.
Rhudy, M. B., Y. Gu, H. Y. Chao & J. N. Gross 2015. Unmanned Aerial Vehicle Navigation Using Wide-Field Optical Flow and Inertial Sensors. Journal of Robotics.
Richardson, A., J. Strom & E. Olson 2013. AprilCal: Assisted and repeatable camera calibration. Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. pp. 1814-1821.
Ricolfe-Viala, C., A.-J. Sanchez-Salmeron & A. Valera 2012. Calibration of a trinocular system formed with wide angle lens cameras. Optics Express 20(25): 27691-27696.
Robotics, T. 2016. Darwin-OP Humanoid Research Robot - Deluxe Edition. http://www.trossenrobotics.com/p/darwin-OP-Deluxe-humanoid-robot.aspx (Accessed 1 January 2016).
Rosch, W. L. 2003. The Winn L. Rosch Hardware Bible Ed.: Que Publishing.
Rudakova, V. & P. Monasse 2014. Camera matrix calibration using circular control points and separate correction of the geometric distortion field. Computer and Robot Vision (CRV), 2014 Canadian Conference on. pp. 195-202.
Sadeghian, A., J. M. Mendel & H. Tahayori. 2013. Advances in Type-2 Fuzzy Sets and Systems Ed.
Salgado, A. & J. Sanchez 2006. Temporal regularizer for large optical flow estimation. 2006 IEEE International Conference on Image Processing, ICIP 2006, Proceedings: 1233-1236.
Sarunic, P. & R. Evans 2014. Hierarchical model predictive control of UAVs performing multitarget-multisensor tracking. Aerospace and Electronic Systems, IEEE Transactions on 50(3): 2253-2268.
Schnieders, D. & K.-Y. K. Wong 2013. Camera and light calibration from reflections on a sphere. Computer Vision and Image Understanding 117(10): 1536-1547.
Sciacca, L. 2002. Distributed Electronic Warfare Sensor Networks. Association of Old Crows Convention.
Sevilla-Lara, L., D. Sun, E. G. Learned-Miller & M. J. Black 2014. Optical flow estimation with channel constancy. Computer Vision–ECCV 2014: 423-438.
Shirmohammadi, S. & A. Ferrero 2014. Camera as the instrument: the rising trend of vision based measurement. Instrumentation & Measurement Magazine, IEEE 17(3): 41-47.
Shuaicheng, L., W. Yinting, Y. Lu, B. Jiajun, T. Ping & S. Jian 2012. Video stabilization with a depth camera. Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. pp. 89-95.
Silvatti, A. P., F. A. Salve Dias, P. Cerveri & R. M. L. Barros 2012. Comparison of different camera calibration approaches for underwater applications. Journal of Biomechanics 45(6): 1112-1116.
Sinha, U. 2016. QR-Code. http://appnee.com/psytec-qr-code-editor/ (Accessed October 2016).
Sobel, I. & G. Feldman 1968. A 3x3 isotropic gradient operator for image processing.
Stein, G. P. 1995. Accurate internal camera calibration using rotation, with analysis of sources of error. Computer Vision, 1995. Proceedings., Fifth International Conference on. pp. 230-236.
Sudin, M. N., S. N. H. S. Abdullah, M. F. Nasrudin & S. Sahran 2014. Trigonometry Technique for Ball Prediction in Robot Soccer. Robot Intelligence Technology and Applications 2: Results from the 2nd International Conference on Robot Intelligence Technology and Applications: 753-762.
Sudin, M. N., M. F. Nasrudin & S. N. H. S. Abdullah 2014. Humanoid localisation in a robot soccer competition using a single camera. Signal Processing & its Applications (CSPA), 2014 IEEE 10th International Colloquium on. pp. 77-81.
Sun, B., L. Liu, C. Hu & M. Q. Meng 2010. 3D reconstruction based on Capsule Endoscopy image sequences. Audio Language and Image Processing (ICALIP), 2010 International Conference on. pp. 607-612.
Sun, D., S. Roth & M. Black 2014. A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them. International Journal of Computer Vision 106(2): 115-137.
Sun, D., J. Wulff, E. B. Sudderth, H. Pfister & M. J. Black 2013. A fully-connected layered model of foreground and background flow. Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. pp. 2451-2458.
Szeliski, R. 2010. Computer vision: algorithms and applications Ed.: Springer Science & Business Media.
Tao, M., J. Bai, P. Kohli & S. Paris 2012. SimpleFlow: A Non‐iterative, Sublinear Optical Flow Algorithm. Computer Graphics Forum. 31(2pt1) pp. 345-353.
Thrun, S., D. Fox, W. Burgard & F. Dellaert 2001. Robust Monte Carlo localization for mobile robots. Artificial Intelligence 128(1–2): 99-141.
Tomasi, M., M. Vanegas, F. Barranco, J. Diaz & E. Ros 2010. High-Performance Optical-Flow Architecture Based on a Multi-Scale, Multi-Orientation Phase-Based Model. Ieee Transactions on Circuits and Systems for Video Technology 20(12): 1797-1807.
Tong, S., Y. Li & P. Shi 2009. Fuzzy adaptive backstepping robust control for SISO nonlinear system with dynamic uncertainties. Information Sciences 179(9): 1319-1332.
Torr, P. H. S. & A. Zisserman 2000. Feature Based Methods for Structure and Motion Estimation. Proceedings of the International Workshop on Vision Algorithms: Theory and Practice: 278-294.
Trifan, A., A. J. R. Neves, N. Lau & B. Cunha 2012. A modular real-time vision module for humanoid robots. J. Roning & D. P. Casasent. Ed. 8301. Bellingham: SPIE-Int Soc Optical Engineering.
Tsai, R. Y. 1986. An efficient and accurate camera calibration technique for 3D machine vision. IEEE Conference on Computer Vision and Pattern Recognition. pp. 364-374.
Tsai, R. Y. 1987. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. Robotics and Automation, IEEE Journal of 3(4): 323-344.
Tschirsich, M. & A. Kuijper 2015. Notes on discrete Gaussian scale space. Journal of Mathematical Imaging and Vision 51(1): 106-123.
Valencia, R., M. Morta, J. Andrade-Cetto & J. M. Porta 2013. Planning Reliable Paths With Pose SLAM. Robotics, IEEE Transactions on PP(99): 1-10.
Veon, K. L., M. H. Mahoor & R. M. Voyles 2011. Video stabilization using SIFT-ME features and fuzzy clustering. Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on. pp. 2377-2382.
Vijay, G., E. Ben Ali Bdira & M. Ibnkahla 2011. Cognition in wireless sensor networks: A perspective. Sensors Journal, IEEE 11(3): 582-592.
Vogel, C., K. Schindler & S. Roth 2015. 3D Scene Flow Estimation with a Piecewise Rigid Scene Model. International Journal of Computer Vision 115(1): 1-28.
Wagner, C. 2013. Juzzy - A Java based toolkit for Type-2 Fuzzy Logic. Advances in Type-2 Fuzzy Logic Systems (T2FUZZ), 2013 IEEE Symposium on. pp. 45-52.
Wagner, C. & H. Hagras 2010. Toward General Type-2 Fuzzy Logic Systems Based on zSlices. Fuzzy Systems, IEEE Transactions on 18(4): 637-660.
Walton, L., A. Hampshire, D. M. C. Forster & A. A. Kemeny 1997. Stereotactic Localization with Magnetic Resonance Imaging: A Phantom Study To Compare the Accuracy Obtained Using Two-dimensional and Three-dimensional Data Acquisitions. Neurosurgery 41(1): 131-139.
Wang, J., F. Shi, J. Zhang & Y. Liu 2008. A new calibration model of camera lens distortion. Pattern Recognition 41(2): 607-615.
Wang, L., S. B. Kang, H.-Y. Shum & G. Xu 2004. Error analysis of pure rotation-based self-calibration. Pattern Analysis and Machine Intelligence, IEEE Transactions on 26(2): 275-280.
Wang, Q., L. Fu & Z. Liu 2010. Review on camera calibration. Chinese Control and Decision Conference (CCDC), 2010 pp. 3354-3358.
Wang, Z. & H. Huang 2015. Pixel-wise video stabilization. Multimedia Tools and Applications: 1-16.
Wei, J. & G. Jinwei 2015. Video stitching with spatial-temporal content-preserving warping. Computer Vision and Pattern Recognition Workshops (CVPRW), 2015 IEEE Conference on. pp. 42-48.
Weinzaepfel, P., J. Revaud, Z. Harchaoui & C. Schmid 2013. Deepflow: Large displacement optical flow with deep matching. Computer Vision (ICCV), 2013 IEEE International Conference on. pp. 1385-1392.
Weinzaepfel, P., J. Revaud, Z. Harchaoui & C. Schmid 2015. Learning to Detect Motion Boundaries. CVPR 2015 - IEEE Conference on Computer Vision & Pattern Recognition. Boston, United States, 2015-06-08.
Won Park, J. & D. T. Harper 1996. An efficient memory system for the SIMD construction of a Gaussian pyramid. Parallel and Distributed Systems, IEEE Transactions on 7(8): 855-860.
Woo, D.-M. & D.-C. Park 2009. Implicit camera calibration based on a nonlinear modeling function of an artificial neural network. Advances in Neural Networks–ISNN 2009: 967-975.
Wulff, J. & M. J. Black 2015. Efficient sparse-to-dense optical flow estimation using a learned basis and layers. Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on. pp. 120-130.
Wulff, J., D. Butler, G. Stanley & M. Black 2012. Lessons and Insights from Creating a Synthetic Optical Flow Benchmark. Computer Vision – ECCV 2012. Workshops and Demonstrations 7584: 168-177.
Xianghua, Y., P. Kun, H. Yongbo, G. Sheng, K. Jing & Z. Hongbin 2013. Self-Calibration of Catadioptric Camera with Two Planar Mirrors from Silhouettes. Pattern Analysis and Machine Intelligence, IEEE Transactions on 35(5): 1206-1220.
Xin, L. 2002. Blind image quality assessment. Image Processing. Proceedings. 2002 International Conference on. 1 pp. I-449-I-452.
Xuande, Z., F. Xiangchu, W. Weiwei & X. Wufeng 2013. Edge Strength Similarity for Image Quality Assessment. Signal Processing Letters, IEEE 20(4): 319-322.
Yang, J. & H. Li 2015. Dense, Accurate Optical Flow Estimation with Piecewise Parametric Model. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1019-1027.
Yao, F. H., A. Sekmen & M. Malkani 2008. A Novel Method for Real-time Multiple Moving Targets Detection from Moving IR Camera. 19th International Conference on Pattern Recognition, Vols 1-6: 1356-1359.
Ye, J. & J. Yu 2014. Ray geometry in non-pinhole cameras: a survey. The Visual Computer 30(1): 93-112.
Yong, D., W. Shaoze & Z. Dong 2014. Full-reference image quality assessment using statistical local correlation. Electronics Letters 50(2): 79-81.
Yoo, J. K. & J. H. Kim 2015. Gaze Control-Based Navigation Architecture With a Situation-Specific Preference Approach for Humanoid Robots. IEEE-ASME Transactions on Mechatronics 20(5): 2425-2436.
Zadeh, L. A. 1965. Fuzzy sets. Information and Control 8(3): 338-353.
Zadeh, L. A. 1975. The concept of a linguistic variable and its application to approximate reasoning—I. Information Sciences 8(3): 199-249.
Zhang, L. 2001. Camera calibration Ed.: Aalborg University. Department of Communication Technology.
Zhang, Q. J. & L. Zhao 2015. Efficient Video Stabilization Based on Improved Optical Flow Algorithm. International Conference on Electrical Engineering and Mechanical Automation (ICEEMA 2015): 620-625.
Zhang, Z., Y. Wan & L. Cai 2013. Research of Camera Calibration Based on DSP. Research Journal of Applied Sciences, Engineering and Technology 6(17): 3151-3155.
Zhang, Z. & G. Xu 1997. A general expression of the fundamental matrix for both perspective and affine cameras. Proceedings of the Fifteenth international joint conference on Artifical intelligence-Volume 2. pp. 1502-1507.
Zhang, Z., D. Zhu, J. Zhang & Z. Peng 2008. Improved robust and accurate camera calibration method used for machine vision application. Optical Engineering 47(11): 117201-117201-11.
Zhao, B. & Z. Hu 2015. Camera self-calibration from translation by referring to a known camera. Applied Optics 54(25): 7789-7798.
Zhengyou, Z. 2000. A flexible new technique for camera calibration. Pattern Analysis and Machine Intelligence, IEEE Transactions on 22(11): 1330-1334.
Zhengyou, Z. 2004. Camera calibration with one-dimensional objects. Pattern Analysis and Machine Intelligence, IEEE Transactions on 26(7): 892-899.
Zhou, W., A. C. Bovik, H. R. Sheikh & E. P. Simoncelli 2004. Image quality assessment: from error visibility to structural similarity. Image Processing, IEEE Transactions on 13(4): 600-612.
Zhu, S. P. & L. M. Xia 2015. Human Action Recognition Based on Fusion Features Extraction of Adaptive Background Subtraction and Optical Flow Model. Mathematical Problems in Engineering 2015: 1-11.
Çelik, K., A. K. Somani, B. Schnaufer, P. Y. Hwang, G. A. McGraw & J. Nadke 2013. Meta-image navigation augmenters for unmanned aircraft systems (MINA for UAS). 8713 pp. 87130U-87130U-15.