
Article citations


Chum, O. and Matas, J. (2005) Matching with PROSAC—Progressive Sample Consensus. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, 20-25 June 2005, 220-226.
http://dx.doi.org/10.1109/cvpr.2005.221

has been cited by the following article:

  • TITLE: A Robust Estimation Method for Camera Calibration with Known Rotation

    AUTHORS: Amir Egozi, Dov Eilot, Peter Maass, Chen Sagiv

    KEYWORDS: Spatial Calibration, Structure from Motion, Virtual Camera, Big Data

    JOURNAL NAME: Applied Mathematics, Vol.6 No.9, August 12, 2015

    ABSTRACT: Imagine that hundreds of video streams, taken by mobile phones during a rock concert, are uploaded to a server. One attractive application of such a prominent dataset is to allow a user to create his own video with a deliberately chosen but virtual camera trajectory. In this paper we present algorithms for the main sub-tasks (spatial calibration, image interpolation) related to this problem. Calibration: Spatial calibration of individual video streams is one of the most basic tasks related to creating such a video. At its core, this requires estimating the pairwise relative geometry of images taken by different cameras. This is also known as the relative pose problem [1], and is fundamental to many computer vision algorithms. In practice, efficiency and robustness are of the highest relevance for big data applications such as the ones addressed in the EU-FET_SME project SceneNet. In this paper, we present an improved algorithm that exploits additional data from inertial sensors, such as accelerometers, magnetometers or gyroscopes, which by now are available in most mobile phones. Experimental results on synthetic and real data demonstrate the accuracy and efficiency of our algorithm. Interpolation: Given the calibrated cameras, we present a second algorithm that generates novel synthetic images along a predefined camera trajectory. Each frame is produced from two “neighboring” video streams that are selected from the database. The interpolation algorithm is based on the point cloud reconstructed in the spatial calibration phase and iteratively projects triangular patches from the existing images into the new view. We present convincing images synthesized with the proposed algorithm.
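
The abstract's core idea, exploiting a rotation known from inertial sensors to make relative pose estimation simpler and more robust, can be illustrated with a small sketch. With the relative rotation R given, the essential matrix E = [t]_x R is linear in the unknown translation t, so t (up to scale) can be recovered from as few as two correspondences inside a RANSAC/PROSAC-style sampling loop instead of the usual five-point problem. The NumPy code below is a minimal illustration under these assumptions, with hypothetical function names and synthetic normalized correspondences; it is not the authors' implementation and does not reproduce the PROSAC ordering of the cited reference.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that [t]_x v = t x v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def estimate_translation_known_R(x1, x2, R):
    """
    Estimate the translation direction t (up to scale) from normalized
    homogeneous correspondences x1 <-> x2 (each N x 3) when the relative
    rotation R is known, e.g. from accelerometer/magnetometer/gyroscope data.

    Epipolar constraint: x2^T [t]_x R x1 = 0, i.e. t . ((R x1) x x2) = 0,
    which is linear in t; the null-space direction is found via SVD.
    """
    A = np.cross(x1 @ R.T, x2)          # row i: (R x1_i) x x2_i
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]
    return t / np.linalg.norm(t)

def ransac_translation(x1, x2, R, iters=200, thresh=1e-3, rng=None):
    """
    RANSAC-style wrapper: with R known, a minimal sample of only two
    correspondences fixes t, so far fewer hypotheses are needed than for
    the full relative pose problem.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(x1)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=2, replace=False)
        t = estimate_translation_known_R(x1[idx], x2[idx], R)
        # algebraic epipolar residual |x2^T [t]_x R x1| per correspondence
        res = np.abs(np.einsum('ij,ij->i', x2 @ skew(t) @ R, x1))
        inliers = res < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the consensus set
    t = estimate_translation_known_R(x1[best_inliers], x2[best_inliers], R)
    return t, best_inliers
```

A quality-ordered sampling schedule over the correspondences, as in the cited PROSAC paper, could replace the uniform random choice of the two-point sample; the estimator itself would stay the same.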