What is the Project Tango lens distortion model?


Tag: opencv,google-project-tango

The Project Tango C API documentation says that the TANGO_CALIBRATION_POLYNOMIAL_3_PARAMETERS lens distortion is modeled as:

    x_corr_px = x_px (1 + k1 * r2 + k2 * r4 + k3 * r6)
    y_corr_px = y_px (1 + k1 * r2 + k2 * r4 + k3 * r6)

That is, the undistorted coordinates are a power series function of the distorted coordinates. There is another definition in the Java API, but that description isn't detailed enough to tell which direction the function maps.

I've had a lot of trouble getting things to register properly, and I suspect that the mapping may actually go in the opposite direction, i.e. the distorted coordinates are a power series of the undistorted coordinates. If the camera calibration was produced using OpenCV, then the cause of the problem may be that the OpenCV documentation contradicts itself. The easiest description to find and understand is the OpenCV camera calibration tutorial, which does agree with the Project Tango docs:

[Image: distortion equations from the OpenCV camera calibration tutorial]

But on the other hand, the OpenCV API documentation specifies that the mapping goes the other way:

[Image: distortion equations from the OpenCV API documentation]

My experiments with OpenCV show that its API documentation is correct and the tutorial is wrong. A positive k1 (with all other distortion parameters set to zero) means pincushion distortion, and a negative k1 means barrel distortion. This matches what Wikipedia says about the Brown-Conrady model and is the opposite of the Tsai model. Note that distortion can be modeled in either direction, depending on which makes the math more convenient. I opened a bug against OpenCV for this mismatch.
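For concreteness, here is a small sketch (mine, not from any library) applying the forward radial model to a normalized point. With positive k1 the point is pushed away from the center (pincushion); with negative k1 it is pulled inward (barrel):

```python
import numpy as np

def apply_radial(x, y, k1, k2=0.0, k3=0.0):
    # Forward model: distorted = undistorted * (1 + k1*r^2 + k2*r^4 + k3*r^6)
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * s, y * s

xd, yd = apply_radial(0.5, 0.5, k1=0.2)    # positive k1
print(np.hypot(xd, yd) > np.hypot(0.5, 0.5))   # True -> moved outward (pincushion)

xb, yb = apply_radial(0.5, 0.5, k1=-0.2)   # negative k1
print(np.hypot(xb, yb) < np.hypot(0.5, 0.5))   # True -> moved inward (barrel)
```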

So my question: Is the Project Tango lens distortion model the same as the one implemented in OpenCV (documentation notwithstanding)?

Here's an image I captured from the color camera (slight pincushioning is visible):

[Image: captured color-camera frame]

And here's the camera calibration reported by the Tango service:

distortion = {double[5]@3402}
[0] = 0.23019999265670776
[1] = -0.6723999977111816
[2] = 0.6520439982414246
[3] = 0.0
[4] = 0.0
calibrationType = 3
cx = 638.603
cy = 354.906
fx = 1043.08
fy = 1043.1
cameraId = 0
height = 720
width = 1280
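Note the coefficient ordering: Tango reports the polynomial coefficients as [k1, k2, k3, 0, 0], while OpenCV's distortion vector is [k1, k2, p1, p2, k3]. A small reordering sketch (assuming the tangential terms p1, p2 are zero, which the polynomial-only Tango model implies):

```python
import numpy as np

# Tango's reported distortion array: [k1, k2, k3, 0, 0]
tango_d = [0.23019999265670776, -0.6723999977111816, 0.6520439982414246, 0.0, 0.0]
k1, k2, k3 = tango_d[:3]

# OpenCV expects [k1, k2, p1, p2, k3]; no tangential distortion here
opencv_d = np.array([k1, k2, 0.0, 0.0, k3])
print(opencv_d)
```

This is why the Python session below passes `[0.2302, -0.6724, 0, 0, 0.652044]` rather than the array in Tango's order.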

Here's how to undistort with OpenCV in python:

>>> import cv2
>>> import numpy
>>> src = cv2.imread('tango00042.png')
>>> d = numpy.array([0.2302, -0.6724, 0, 0, 0.652044])
>>> m = numpy.array([[1043.08, 0, 638.603], [0, 1043.1, 354.906], [0, 0, 1]])
>>> h,w = src.shape[:2]
>>> mDst, roi = cv2.getOptimalNewCameraMatrix(m, d, (w,h), 1, (w,h))
>>> dst = cv2.undistort(src, m, d, None, mDst)
>>> cv2.imwrite('foo.png', dst)

And that produces this, which is maybe a bit overcorrected at the top edge but much better than my attempts with the reverse model:

[Image: undistorted result]


The Tango C-API docs state that (x_corr_px, y_corr_px) is the "corrected output position". ~~This corrected output position is actually the pixel location in distorted pixel coordinates.~~ (Update) This corrected output position then needs to be scaled by the focal length and offset by the center of projection to yield distorted pixel coordinates.

So, to project a point onto an image, you would have to:

  1. Transform the 3D point so that it is in the frame of the camera
  2. Convert the point into normalized image coordinates (x, y)
  3. Calculate r2, r4, r6 for the normalized image coordinates (r2 = x*x + y*y)
  4. Compute (x_corr_px, y_corr_px) based on the mentioned equations:

    x_corr_px = x (1 + k1 * r2 + k2 * r4 + k3 * r6)
    y_corr_px = y (1 + k1 * r2 + k2 * r4 + k3 * r6)

  ~~5. Draw (x_corr_px, y_corr_px) on the original, distorted image buffer.~~


  5. Compute distorted pixel coordinates:

    x_dist_px = x_corr_px * fx + cx
    y_dist_px = y_corr_px * fy + cy

  6. Draw (x_dist_px, y_dist_px) on the original, distorted image buffer.
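The steps above can be sketched as follows (a minimal illustration, using the Tango intrinsics reported earlier; `project_to_distorted` is a hypothetical helper name, and the input points are assumed to already be in the camera frame, i.e. step 1 is done):

```python
import numpy as np

def project_to_distorted(points_cam, fx, fy, cx, cy, k1, k2, k3):
    """Steps 2-5: 3D camera-frame points -> distorted pixel coordinates."""
    p = np.asarray(points_cam, dtype=float)
    x = p[:, 0] / p[:, 2]                  # step 2: normalized image coords
    y = p[:, 1] / p[:, 2]
    r2 = x * x + y * y                     # step 3: r2 (r4, r6 are its powers)
    s = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_corr, y_corr = x * s, y * s          # step 4: apply the polynomial
    u = x_corr * fx + cx                   # step 5: scale by focal length,
    v = y_corr * fy + cy                   #         offset by principal point
    return np.stack([u, v], axis=1)

# Sanity check: a point on the optical axis lands at the principal point
uv = project_to_distorted([[0.0, 0.0, 1.0]], 1043.08, 1043.1, 638.603, 354.906,
                          0.2302, -0.6724, 0.652044)
print(uv)  # [[638.603 354.906]]
```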

This also means that the distorted ~~(aka corrected)~~ coordinates are the undistorted normalized coordinates scaled by a power series of the normalized image coordinates' magnitude (this is the opposite of what the question suggests).

Looking at the implementation of cvProjectPoints2 in OpenCV (see [opencv]/modules/calib3d/src/calibration.cpp), the "Poly3" distortion in OpenCV is applied in the same direction as in Tango. ~~And, since 'distorted coordinates' and 'corrected coordinates' are the same thing, all 3 versions (Tango Docs, OpenCV Tutorials, OpenCV API) are consistent and correct.~~

Good luck, and hopefully this helps!

(Update: Taking a closer look at the code, it looks like the corrected coordinates and distorted coordinates are not the same. I've crossed out the incorrect parts of my response; the remaining parts of this answer are still correct.)
