Calibrate Monocular Vision Cameras With Fisheye Lens¶
Fisheye Calibration Basics¶
A fisheye camera is a pinhole camera equipped with a fisheye lens; pictures taken by a fisheye camera are normally extremely distorted.
The background theory of fisheye calibration is somewhat complicated. Two sources are provided below for further investigation.
Matlab Fisheye Calibration Basics: The official Matlab fisheye calibration implements Davide Scaramuzza’s paper A Toolbox for Easily Calibrating Omnidirectional Cameras.
OpenCV’s Fisheye camera model: According to this StackOverflow discussion, OpenCV’s current fisheye camera model derives from Jean-Yves Bouguet’s Camera Calibration Toolbox for Matlab, and was probably heavily inspired by Juho Kannala and Sami S. Brandt’s paper A Generic Camera Model and Calibration Method for Conventional, Wide-Angle, and Fish-Eye Lenses, which is the only fisheye-related paper listed on Jean-Yves Bouguet’s A few links related to camera calibration page.
Anyway, since we carry out our tests based on OpenCV, we cite the formulas derived in OpenCV’s Fisheye camera model. After the pinhole camera projection of a 3D point \((X, Y, Z)\), let’s denote:

\[a = X/Z, \quad b = Y/Z, \quad r^2 = a^2 + b^2, \quad \theta = \arctan(r)\]

Then, fisheye distortion is defined as:

\[\theta_d = \theta \, (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8)\]

, where angle \(\theta\) can be of any value.

The distorted point coordinates are:

\[x' = \frac{\theta_d}{r} a, \quad y' = \frac{\theta_d}{r} b\]

Finally, the pixel coordinates are (with the skew coefficient fixed to 0, as in the code below via CALIB_FIX_SKEW):

\[u = f_x x' + c_x, \quad v = f_y y' + c_y\]
Our target in calibrating a fisheye camera is to estimate the 4 distortion coefficients \(k_1, k_2, k_3, k_4\), together with the camera matrix \(K\).
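To make the formulas above concrete, here is a minimal plain-Python sketch of the projection pipeline. Note that `fisheye_project` is our own illustrative helper, not an OpenCV API, and the intrinsics are the rounded chessboard results from Demo 1 below.

```python
import math

def fisheye_project(X, Y, Z, K, dist):
    """Project a camera-frame 3D point to pixels with the OpenCV fisheye model."""
    k1, k2, k3, k4 = dist
    a, b = X / Z, Y / Z                        # pinhole projection
    r = math.hypot(a, b)
    theta = math.atan(r)                       # angle between the ray and the optical axis
    theta_d = theta * (1 + k1*theta**2 + k2*theta**4 + k3*theta**6 + k4*theta**8)
    scale = theta_d / r if r > 1e-12 else 1.0  # avoid division by zero on the axis
    x_d, y_d = scale * a, scale * b            # distorted normalized coordinates
    fx, fy, cx, cy = K[0][0], K[1][1], K[0][2], K[1][2]
    return fx * x_d + cx, fy * y_d + cy        # pixel coordinates (zero skew)

# Rounded intrinsics from calibration_chessboard_fisheye.yml (Demo 1).
K = [[371.64, 0.0, 306.22], [0.0, 370.68, 226.83], [0.0, 0.0, 1.0]]
D = (0.0931, -1.7066, 6.5971, -8.1535)

# A point on the optical axis lands exactly on the principal point (cx, cy).
print(fisheye_project(0.0, 0.0, 1.0, K, D))
```

This only sanity-checks the forward model; the actual calibration (recovering \(K\) and \(k_1 \ldots k_4\) from observations) is done by cv2.fisheye.calibrate in the demos below.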
Demonstrations¶
Preparation¶
Again, our fisheye calibration demos are based on the chessboard pattern and the circle-grid pattern respectively, but with a different camera: the JeVois Smart Machine Vision Camera with its 120-degree fisheye lens.
➜ ~ lsusb
......
Bus 001 Device 010: ID 1d6b:0102 Linux Foundation EEM Gadget
......
JeVois Streaming By Python OpenCV¶
Code Snippet: jevois_streaming.py¶
################################################################################
# #
# #
# IMPORTANT: READ BEFORE DOWNLOADING, COPYING AND USING. #
# #
# #
# Copyright [2017] [ShenZhen Longer Vision Technology], Licensed under #
# ******** GNU General Public License, version 3.0 (GPL-3.0) ******** #
# You are allowed to use this file, modify it, redistribute it, etc. #
# You are NOT allowed to use this file WITHOUT keeping the License. #
# #
# Longer Vision Technology is a startup located in Chinese Silicon Valley #
# NanShan, ShenZhen, China, (http://www.longervision.cn), which provides #
# the total solution to the area of Machine Vision & Computer Vision. #
# The founder Mr. Pei JIA has been advocating Open Source Software (OSS) #
# for over 12 years ever since he started his PhD's research in England. #
# #
# Longer Vision Blog is Longer Vision Technology's blog hosted on github #
# (http://longervision.github.io). Besides the published articles, a lot #
# more source code can be found at the organization's source code pool: #
# (https://github.com/LongerVision/OpenCV_Examples). #
# #
# For those who are interested in our blogs and source code, please do #
# NOT hesitate to comment on our blogs. Whenever you find any issue, #
# please do NOT hesitate to fire an issue on github. We'll try to reply #
# promptly. #
# #
# #
# Version: 0.0.1 #
# Author: JIA Pei #
# Contact: jiapei@longervision.com #
# URL: http://www.longervision.cn #
# Create Date: 2017-03-12 #
# Modified Date: 2020-01-18 #
################################################################################
# Standard imports
import cv2

cap = cv2.VideoCapture(2)
# Initialize the JeVois camera. Don't change these settings: it then captures videos without any processing.
cap.set(3, 640)   # width
cap.set(4, 480)   # height
cap.set(5, 15)    # fps
s, img = cap.read()
count = 0
while True:
    ret, frame = cap.read()             # Capture frame-by-frame
    if ret == True:
        frame = cv2.flip(frame, 1)      # Flip accordingly
        filename = "img" + str(count).zfill(3) + ".jpg"
        cv2.imwrite(filename, frame)
        count += 1
        cv2.imshow("Image Capturing", frame)
    if cv2.waitKey(2) & 0xFF == ord('q'):
        break
cap.release()   # When everything is done, release the capture
Demo 1: Calibration Based On Classical Black-white Chessboard¶
Code Snippet: chessboard_fisheye.py¶
################################################################################
# Copyright [2017] [ShenZhen Longer Vision Technology], Licensed under
# GNU General Public License, version 3.0 (GPL-3.0).
# (Full license header identical to jevois_streaming.py above.)
#
# Version: 0.0.1        Author: JIA Pei        Contact: jiapei@longervision.com
# URL: http://www.longervision.cn
# Create Date: 2020-01-18        Modified Date: 2020-04-21
################################################################################
import numpy as np
import cv2

# Termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
criteria_fisheye = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6)
calibration_flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_CHECK_COND + cv2.fisheye.CALIB_FIX_SKEW

# Prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (6,5,0)
objp = np.zeros((1, 6*7, 3), np.float32)
objp[0, :, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images.
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane

cap = cv2.VideoCapture(2)
# Initialize the JeVois camera. Don't change these settings: it then captures videos without any processing.
cap.set(3, 640)  # width
cap.set(4, 480)  # height
cap.set(5, 15)   # fps
s, img = cap.read()

num = 20  # 20 can be changed to however many calibration views you'd like
found = 0
while found < num:
    ret, img = cap.read()  # Capture frame-by-frame
    if not ret:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (7, 6), None)
    # If found, add object points and image points (after refining them)
    if ret == True:
        objpoints.append(objp)  # Every loop objp is the same, in 3D.
        corners2 = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
        imgpoints.append(corners2)
        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (7, 6), corners2, ret)
        # Comment out the following 2 lines if you don't want to save the calibration images.
        filename = str(found).zfill(2) + ".jpg"
        cv2.imwrite(filename, img)
        found += 1
    cv2.imshow('img', img)
    cv2.waitKey(10)

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()

K = np.zeros((3, 3))
D = np.zeros((4, 1))
rvecs = [np.zeros((1, 1, 3), dtype=np.float64) for i in range(num)]
tvecs = [np.zeros((1, 1, 3), dtype=np.float64) for i in range(num)]
retval, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    objpoints,
    imgpoints,
    gray.shape[::-1],
    K,
    D,
    rvecs,
    tvecs,
    calibration_flags,
    criteria_fisheye
)
print(retval)
print(K)
print(D)

# Save the calibration result with cv2.FileStorage (OpenCV 4.3)
fs = cv2.FileStorage('calibration.yml', cv2.FILE_STORAGE_WRITE)
fs.write('camera_matrix', K)
fs.write('dist_coeff', D)
fs.release()
Intermediate Images: Chessboard¶
Results: calibration_chessboard_fisheye.yml¶
%YAML:1.0
---
camera_matrix: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 3.7163812469674616e+02, 0., 3.0622492925810030e+02, 0.,
3.7067642862628497e+02, 2.2683018293363298e+02, 0., 0., 1. ]
dist_coeff: !!opencv-matrix
rows: 4
cols: 1
dt: d
data: [ 9.3050265697576837e-02, -1.7065824578177742e+00,
6.5970560553632547e+00, -8.1535329116759439e+00 ]
Clearly, both the camera matrix and the 4 fisheye distortion coefficients \(k_1, k_2, k_3, k_4\) have been estimated.
Demo 2: Calibration Based On Asymmetrical Circle Pattern¶
Code Snippet: circle_grid_fisheye.py¶
################################################################################
# Copyright [2017] [ShenZhen Longer Vision Technology], Licensed under
# GNU General Public License, version 3.0 (GPL-3.0).
# (Full license header identical to jevois_streaming.py above.)
#
# Version: 0.0.1        Author: JIA Pei        Contact: jiapei@longervision.com
# URL: http://www.longervision.cn
# Create Date: 2020-01-18        Modified Date: 2020-04-21
################################################################################
# Standard imports
import numpy as np
import cv2
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
criteria_fisheye = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6)
calibration_flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC+cv2.fisheye.CALIB_CHECK_COND+cv2.fisheye.CALIB_FIX_SKEW
########################################Blob Detector##############################################
# Setup SimpleBlobDetector parameters.
blobParams = cv2.SimpleBlobDetector_Params()
# Change thresholds
blobParams.minThreshold = 8
blobParams.maxThreshold = 255
# Filter by Area.
blobParams.filterByArea = True
blobParams.minArea = 64 # minArea may be adjusted to suit for your experiment
blobParams.maxArea = 2500 # maxArea may be adjusted to suit for your experiment
# Filter by Circularity
blobParams.filterByCircularity = True
blobParams.minCircularity = 0.1
# Filter by Convexity
blobParams.filterByConvexity = True
blobParams.minConvexity = 0.87
# Filter by Inertia
blobParams.filterByInertia = True
blobParams.minInertiaRatio = 0.01
# Create a detector with the parameters
blobDetector = cv2.SimpleBlobDetector_create(blobParams)
###################################################################################################
###################################################################################################
# Original blob coordinates, supposing all blobs have z-coordinate 0,
# and the distance between every two neighbouring blob circle centers is 72 centimetres.
# In fact, any number can be used to replace 72:
# the absolute size of the circle grid is irrelevant to the intrinsic calibration parameters.
objp = np.zeros((1, 44, 3), np.float32)
objp[0][0] = (0 , 0 , 0)
objp[0][1] = (0 , 72 , 0)
objp[0][2] = (0 , 144, 0)
objp[0][3] = (0 , 216, 0)
objp[0][4] = (36 , 36 , 0)
objp[0][5] = (36 , 108, 0)
objp[0][6] = (36 , 180, 0)
objp[0][7] = (36 , 252, 0)
objp[0][8] = (72 , 0 , 0)
objp[0][9] = (72 , 72 , 0)
objp[0][10] = (72 , 144, 0)
objp[0][11] = (72 , 216, 0)
objp[0][12] = (108, 36, 0)
objp[0][13] = (108, 108, 0)
objp[0][14] = (108, 180, 0)
objp[0][15] = (108, 252, 0)
objp[0][16] = (144, 0 , 0)
objp[0][17] = (144, 72 , 0)
objp[0][18] = (144, 144, 0)
objp[0][19] = (144, 216, 0)
objp[0][20] = (180, 36 , 0)
objp[0][21] = (180, 108, 0)
objp[0][22] = (180, 180, 0)
objp[0][23] = (180, 252, 0)
objp[0][24] = (216, 0 , 0)
objp[0][25] = (216, 72 , 0)
objp[0][26] = (216, 144, 0)
objp[0][27] = (216, 216, 0)
objp[0][28] = (252, 36 , 0)
objp[0][29] = (252, 108, 0)
objp[0][30] = (252, 180, 0)
objp[0][31] = (252, 252, 0)
objp[0][32] = (288, 0 , 0)
objp[0][33] = (288, 72 , 0)
objp[0][34] = (288, 144, 0)
objp[0][35] = (288, 216, 0)
objp[0][36] = (324, 36 , 0)
objp[0][37] = (324, 108, 0)
objp[0][38] = (324, 180, 0)
objp[0][39] = (324, 252, 0)
objp[0][40] = (360, 0 , 0)
objp[0][41] = (360, 72 , 0)
objp[0][42] = (360, 144, 0)
objp[0][43] = (360, 216, 0)
###################################################################################################
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
cap = cv2.VideoCapture(2)
# Initialize the JeVois camera. Don't change these settings: it then captures videos without any processing.
cap.set(3,640) #width
cap.set(4,480) #height
cap.set(5,15) #fps
s,img = cap.read()
num = 60  # For a more accurate result, more images are taken
found = 0
while found < num:  # 60 can be changed to however many calibration views you'd like
    ret, img = cap.read()  # Capture frame-by-frame
    if not ret:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    keypoints = blobDetector.detect(gray)  # Detect blobs
    # Draw detected blobs as green circles. This helps cv2.findCirclesGrid().
    im_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 255, 0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    im_with_keypoints_gray = cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findCirclesGrid(im_with_keypoints, (4, 11), None, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)  # Find the circle grid
    if ret == True:
        objpoints.append(objp)  # Every loop objp is the same, in 3D.
        corners2 = cv2.cornerSubPix(im_with_keypoints_gray, corners, (5, 5), (-1, -1), criteria)  # Refine the circle centers
        imgpoints.append(corners2)
        # Draw and display the centers.
        im_with_keypoints = cv2.drawChessboardCorners(img, (4, 11), corners2, ret)
        # Comment out the following 2 lines if you don't want to save the calibration images.
        filename = str(found).zfill(2) + ".jpg"
        cv2.imwrite(filename, im_with_keypoints)
        found += 1
    cv2.imshow("img", im_with_keypoints)  # display
    cv2.waitKey(2)
# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()

K = np.zeros((3, 3))
D = np.zeros((4, 1))
rvecs = [np.zeros((1, 1, 3), dtype=np.float64) for i in range(num)]
tvecs = [np.zeros((1, 1, 3), dtype=np.float64) for i in range(num)]
retval, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    objpoints,
    imgpoints,
    gray.shape[::-1],
    K,
    D,
    rvecs,
    tvecs,
    calibration_flags,
    criteria_fisheye
)
print(retval)
print(K)
print(D)

# Save the calibration result with cv2.FileStorage (OpenCV 4.3)
fs = cv2.FileStorage('calibration.yml', cv2.FILE_STORAGE_WRITE)
fs.write('camera_matrix', K)
fs.write('dist_coeff', D)
fs.release()
Intermediate Images: Circle Grid¶
Results: calibration_circle_grid_fisheye.yml¶
%YAML:1.0
---
camera_matrix: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 4.3351108701524601e+02, 0., 2.6920617462414498e+02, 0.,
4.3197528630519076e+02, 1.9692952627450200e+02, 0., 0., 1. ]
dist_coeff: !!opencv-matrix
rows: 4
cols: 1
dt: d
data: [ 2.8597131872136111e-01, -1.8511466426164975e+00,
4.4720394622220461e+00, -3.8778158662574569e+00 ]
Clearly, result calibration_chessboard_fisheye.yml from Demo 1 and result calibration_circle_grid_fisheye.yml from Demo 2 differ noticeably.
Highlights¶
Fisheye calibration is different from the standard calibration demonstrated previously mainly in 2 aspects:
calibration function: cv2.calibrateCamera -> cv2.fisheye.calibrate
input data objpoints: 2-dimensional, of shape (N, 3) -> 3-dimensional, of shape (1, N, 3)
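The difference in objpoints layout can be seen directly; a minimal sketch for the 7x6 chessboard:

```python
import numpy as np

# Object points for the standard pinhole pipeline (cv2.calibrateCamera): shape (N, 3).
objp_pinhole = np.zeros((6 * 7, 3), np.float32)
objp_pinhole[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)

# cv2.fisheye.calibrate expects an extra leading axis per view: shape (1, N, 3).
objp_fisheye = objp_pinhole.reshape(1, -1, 3)

print(objp_pinhole.ndim, objp_fisheye.ndim)  # 2 3
```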
Assignments¶
Try to calibrate the JeVois Smart Machine Vision Camera with cv2.calibrateCamera and cv2.fisheye.calibrate respectively, and compare the differences between the undistorted images produced from the two generated calibration files.