Abstract
To recover camera motion and 3-D structure from a sequence of images, points in the image plane must be related to directions in space. A least-squares algorithm is described for computing camera calibration from a series of motion sequences in which the translational direction of the camera is known. The method requires no special calibration objects or scene structure; it needs only the ability to move the camera in a given direction and to track features in the image as the camera moves. Because it is a linear least-squares method, it can combine information from many sequences to produce a robust estimate of the calibration matrix, and this estimate can be updated dynamically as new measurements are taken. It uses the most general possible linear model for calibration. Experimental results from applying the algorithm to a set of real motion sequences with noisy correspondence data are presented and analyzed.
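The abstract does not spell out the estimator itself, so the following is only a minimal sketch of one way such a linear least-squares scheme can be set up. It assumes a pure translation per sequence with known direction t, tracked point correspondences, a general 3x3 calibration matrix K recovered up to scale, and the constraint that the focus of expansion e of each sequence satisfies e × (K t) = 0. The function names `estimate_foe` and `calibrate`, and this particular formulation of the constraint, are illustrative assumptions and not necessarily the paper's published method.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_foe(tracks):
    """Estimate the focus of expansion (as a homogeneous image point) of a
    purely translational sequence from tracked features.

    tracks: iterable of (p_start, p_end) pixel coordinate pairs.
    Each track defines the image line through its two points; under pure
    translation all such lines pass through the FOE, which is recovered as
    the least-squares common intersection (null vector of the stacked lines).
    """
    lines = []
    for p0, p1 in tracks:
        a = np.array([p0[0], p0[1], 1.0])
        b = np.array([p1[0], p1[1], 1.0])
        lines.append(np.cross(a, b))          # line joining the two points
    _, _, vt = np.linalg.svd(np.array(lines))
    return vt[-1]                              # point lying on every line

def calibrate(directions, foes):
    """Linear least-squares estimate of a general 3x3 calibration matrix K,
    up to scale, from known translation directions t_i and measured FOEs e_i.

    Uses the constraint e_i x (K t_i) = 0, which is linear in the entries
    of K: with k = vec(K) in row-major order, K t = kron(I3, t^T) k.
    """
    rows = []
    for t, e in zip(directions, foes):
        rows.append(skew(e) @ np.kron(np.eye(3), np.asarray(t, float)))
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)                # smallest singular vector -> K
```

In this sketch each known translation direction contributes two independent linear constraints on the nine entries of K, so at least four sufficiently distinct directions would be needed to fix K up to scale; additional sequences simply add rows to the stacked system, which is what makes incremental (dynamic) updating straightforward.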
