Light Field Flow Estimation Based on Occlusion Detection

Light field cameras have a wide range of applications, such as digital refocusing, scene depth extraction and 3-D image reconstruction. By recording both the energy and the direction of light rays, they can solve many technical problems that conventional cameras cannot. An important feature of light field cameras is that a microlens array is inserted between the sensor and the main lens, through which a series of sub-aperture images of different perspectives are formed. Based on this feature and the full-focus image acquisition technique, we propose a light-field optical flow calculation algorithm that involves both depth estimation and occlusion detection and guarantees the edge-preserving property. The algorithm consists of three steps: 1) computing the dense optical flow field among a group of sub-aperture images; 2) obtaining a robust depth estimation by initializing the light-field optical flow with a linear regression approach and detecting occluded areas with a consistency check; 3) computing an improved light-field depth map by using an edge-preserving algorithm to realize interpolation optimization. The reliability and high accuracy of the proposed approach are validated by experimental results.


Introduction
Light field cameras, due to their ability to collect 4D light field information, provide many capabilities that conventional cameras cannot, such as control of the depth of field, refocusing after capture, and changing of viewpoints [1]. In recent years, light field cameras have enjoyed a variety of applications, such as light field depth map estimation [2], digital refocusing [3], 3-D image reconstruction and stereo vision matching. An important prerequisite for achieving these applications is an accurate and efficient depth estimation of light-field images. Recently, many depth estimation algorithms based on light-field images have been proposed [4]. However, these algorithms do not take spatial and angular resolution into consideration, and thus have the following two limitations in the presence of depth discontinuities or occlusions.
1. The occlusion detection methods involved in these algorithms are, in some cases, sensitive to the surface texture and color of objects.
2. The accuracy of depth estimation usually depends on the distance between two focal stack images.
To overcome the aforementioned drawbacks, this work proposes an edge-preserving light-field flow estimation algorithm. Firstly, the dense optical flow field of the multi-frame sub-aperture images of a light-field image is computed. Then, all occlusions that may occur during the computation are taken into account. Based on the geometrical features of light-field images and the correspondence among them, a robust light-field flow can be obtained, which contains an accurate depth estimation and occlusion detection. It is worth noting that linear regression is performed on the multi-aperture images in our proposed algorithm. As a result, it can well handle two problems that conventional optical flow approaches suffer from: the loss of accuracy when computing the optical flow between only two images, and the failure to detect occluded regions correctly.

Related Work
Optical flow estimation, as a hot topic, has been extensively studied [5]. Most of the existing approaches are based on variational formulations and the related energy minimization. Basically, coarse-to-fine schemes are adopted in these approaches to perform the minimization. Unfortunately, such approaches suffer from error accumulation. More specifically, since the precision of the minimization depends on the range of the local minima and cross-scale calculations are required by coarse-to-fine schemes, errors gradually accumulate during the cross-scale calculations. Consequently, this leads to serious distortion of images, especially in the case of large displacements.
Recently, some approaches involving feature matching have been proposed. Xu et al. merged the estimated flow with matching features at each level of the coarse-to-fine scheme [6]. A penalization of the difference between the optical flow and HOG matches was added to the energy by Brox and Malik [7]. Weinzaepfel et al. replaced the HOG matches by an approach based on similarities of non-rigid patches: DeepMatching [8]. Braux-Zin et al. used segment features in addition to key-point matching [9]. As mentioned before, these approaches rely on coarse-to-fine schemes, and thus some defects cannot be eliminated. In particular, some details may be lost after scaling up an image. Besides, it is difficult to distinguish a target object from the background when they have similar textures.
In this paper, we propose a novel optical flow calculation algorithm that adopts an edge-preserving technique. The existing algorithms generally do not consider the edge parts of the target image, while ours does. Note that it is difficult to perform depth estimation and occlusion detection without considering the edge parts. Besides, the conventional algorithms fail to accurately estimate the optical flow field when there are occluded areas during imaging. In addition, the conventional algorithms can hardly guarantee the integrity of the edge parts, which leads to blurring of the moving boundary and thus affects the estimation of the whole depth map. For example, the results of depth estimation and occlusion detection obtained by the algorithm proposed by Wang et al. [10] are shown in Figure 1. It can be seen that the edge parts are not well handled.

Light Field Flow Detection Based on Occlusion Detection
In this section, we propose a new optical flow detection approach, which aims to obtain a dense and accurate light field flow estimation. Firstly, we extract a series of sub-aperture images from the original light-field image. Secondly, we regard the central frame as the reference frame and compute the dense optical flow between it and all the other sub-aperture images. Thirdly, a robust depth estimation is obtained by initializing the light-field flow with a linear regression approach; occluded areas are detected with a consistency check during this process. Finally, an improved light-field depth map is obtained by using the edge-preserving algorithm to realize interpolation optimization. The detailed procedure is shown in Figure 2.

Dense Optical Flow in Multi-Frame Sub-Aperture Images
In order to calculate the optical flow, it is necessary to compute the corresponding points between frames. Let the sub-aperture images form a 7 × 7 array, in which the central sub-aperture image is the 25th one. From the sub-aperture image array, we select one row (e.g. the 29th-35th images) and one column (e.g. the 5th, 12th, 19th, 26th, 33rd, 40th and 47th images) for analysis. The horizontal displacements of the pixels between the 25th central sub-aperture image and the images in the selected row are estimated and denoted by u1, u2, ..., u7, and the vertical displacements between the central image and the images in the selected column are estimated and denoted by v1, v2, ..., v7. Theoretically, there should be a linear relationship between any two horizontal or any two vertical displacements when the pixel movement is stable and not occluded. Under this assumption, we calculate these two linear relationships. We first calculate the average horizontal and vertical displacements per unit baseline over the n sub-aperture images in the selected row and column. Then, we define the deviations between the measured displacements and the displacements predicted by the linear relationship.
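The linearity check described above can be sketched as follows. The paper's exact deviation formula is not preserved in this version, so the form below, a residual from a single fitted slope, is an assumption; `baselines` stands for the offset of each sub-aperture image from the central one.

```python
import numpy as np

def displacement_deviation(disps, baselines):
    """Deviation of measured displacements from the linear relationship
    expected for an unoccluded, stable pixel.  This sketch fits a single
    slope (average displacement per unit baseline) and returns residuals.
    disps     : measured displacements u1..un (or v1..vn)
    baselines : offset of each sub-aperture image from the central one"""
    u = np.asarray(disps, dtype=float)
    d = np.asarray(baselines, dtype=float)
    slope = np.mean(u / d)        # average per-baseline displacement
    return u - slope * d          # residual from the linear prediction

# Perfectly linear displacements have zero deviation.
dev = displacement_deviation([-3, -2, -1, 1, 2, 3], [-3, -2, -1, 1, 2, 3])
```

A pixel whose measured displacements produce large residuals violates the linearity assumption and is a candidate occlusion.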

Initial Depth Estimation
By reconstructing the optical flow estimates between the central sub-aperture image and the other ones, the displacements (Δs, Δt) of each matched point can be obtained. According to the geometric principle of the light field, these displacements satisfy the refocusing relations Δs = u(1 − 1/α) and Δt = v(1 − 1/α), where α is a depth factor of the refocus plane, generally ranging in [0.2, 2], (u, v) is the angular offset of the sub-aperture view, and Δs and Δt denote the displacement of the counterpart of the coordinates (s0, t0) on the other sub-aperture image with respect to the central one. To reduce the effect of noise, the depth estimation is initialized by solving these overdetermined linear equations over all sub-aperture images. The least-squares solution α is the optimal initial depth estimation. In Figure 4, the initial depth estimation derived by our proposed approach and by a conventional one considering only two frames are illustrated, respectively.
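A minimal sketch of this initialization, assuming the standard refocusing form disp = offset × (1 − 1/α) (the paper's exact equations were lost in this version, so this geometry is an assumption), could look like:

```python
import numpy as np

def initial_depth(offsets, disps):
    """Least-squares initialization of the depth factor alpha from the
    overdetermined relations  disp_i = offset_i * (1 - 1/alpha)  over
    all sub-aperture views.
    offsets : angular offset of each view from the central one
    disps   : measured displacement of the matched point in each view"""
    a = np.asarray(offsets, dtype=float)[:, None]
    b = np.asarray(disps, dtype=float)
    k = np.linalg.lstsq(a, b, rcond=None)[0].item()   # k = 1 - 1/alpha
    return 1.0 / (1.0 - k)

# Synthetic displacements generated with a ground-truth alpha of 0.8.
offsets = [-3, -2, -1, 1, 2, 3]
alpha = initial_depth(offsets, [o * (1 - 1 / 0.8) for o in offsets])
```

Because every view contributes one equation, noise in any single displacement is averaged out by the least-squares fit, which is the stated motivation for using all sub-aperture images rather than a single pair.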

Depth Map
During the calculation of the optical flow, we can use the consistency of the forward and backward flows to detect and remove the occlusions between every two sub-aperture images. Specifically, a comparison is made between the forward flow from the central sub-aperture image to the i-th one and the backward flow from the i-th sub-aperture image to the central one. Ideally, these two flows are exact opposites, so a binary detector can be obtained by checking this equivalence, or the difference between the two flows can be used to obtain a continuous estimate. When the estimated value is relatively high, the pixel is marked as an occlusion point between these two sub-aperture images.
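A simple forward-backward consistency check of this kind can be sketched as follows; the nearest-neighbour warping and the threshold value are illustrative choices, not the paper's exact ones.

```python
import numpy as np

def occlusion_mask(flow_fw, flow_bw, thresh=1.0):
    """Flag pixels where the forward flow (central -> i-th view) and the
    backward flow (i-th view -> central) disagree.  Flows have shape
    (H, W, 2) holding (u, v).  The backward flow is read at the
    forward-warped position; a pixel is marked occluded when
    |forward + backward| exceeds `thresh`."""
    h, w = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Nearest-neighbour position reached by the forward flow.
    xt = np.clip(np.round(xs + flow_fw[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow_fw[..., 1]).astype(int), 0, h - 1)
    diff = np.linalg.norm(flow_fw + flow_bw[yt, xt], axis=-1)
    return diff > thresh

# Uniform rightward flow with one corrupted backward vector.
fw = np.zeros((4, 4, 2)); fw[..., 0] = 1.0
bw = -fw.copy()
bw[2, 2] = (5.0, 0.0)       # inconsistent vector at one warped target
mask = occlusion_mask(fw, bw)
```

Only the pixel whose forward flow lands on the corrupted backward vector is flagged, which is exactly the behaviour wanted at occlusion boundaries where the two flows stop mirroring each other.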
To further improve the performance, we calculate the forward and backward flows between multiple sub-aperture images, in a way similar to the pairwise calculation described above.

Optimization of Depth Estimation Based on EpicFlow
In the aforementioned optical flow calculation and occlusion detection, the edge information is neglected, as shown in Figure 6. As a result, the accuracy decreases when searching for the corresponding points used to estimate the optical flow, and matching errors may occur. Therefore, we further use the EpicFlow algorithm proposed by Revaud et al. [11], which uses structured edge detection (SED), to perform interpolation optimization for the blank areas caused by the removal of occluded areas. In this way, we can compensate for the loss of depth-estimation accuracy caused by inaccurately detected occluded areas and by the matching errors of corresponding points.
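The role of the edge map in this fill-in step can be illustrated with a much-simplified diffusion sketch. EpicFlow proper interpolates sparse matches with an edge-aware geodesic distance on an SED edge map, so everything below is an analogy rather than the actual algorithm.

```python
import numpy as np

def edge_aware_fill(depth, valid, edges, iters=50):
    """Fill holes left by removed occluded areas by iterative neighbour
    averaging, with per-pixel weights that vanish on the edge map so
    depth values do not bleed across object boundaries."""
    d = np.where(valid, depth, 0.0).astype(float)
    known = valid.astype(float)
    w = 1.0 - np.clip(edges, 0.0, 1.0)        # low weight on edges
    for _ in range(iters):
        num = np.zeros_like(d)
        den = np.zeros_like(d)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            num += np.roll(w * known * d, (dy, dx), axis=(0, 1))
            den += np.roll(w * known, (dy, dx), axis=(0, 1))
        new = np.divide(num, den, out=d.copy(), where=den > 0)
        d = np.where(valid, depth, new)        # keep valid pixels fixed
        known = np.maximum(known, (den > 0).astype(float))
    return d

# Two depth regions separated by an edge, with a hole in the left region.
depth = np.ones((5, 6)); depth[:, 3:] = 3.0
edges = np.zeros((5, 6)); edges[:, 3] = 1.0   # boundary between regions
valid = np.ones((5, 6), bool); valid[2, 2] = False
filled = edge_aware_fill(depth, valid, edges)
```

With the edge map present, the hole is filled only from its own side of the boundary; with a zero edge map, depth from the far region leaks in, which is precisely the boundary blurring the edge-preserving step is meant to avoid.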
The recently proposed Full Flow method [12] also adopts EpicFlow, but the overall optical flow estimation performance of our method is much better than that of Full Flow, as shown in Figure 7. Clearly, it is necessary to preserve the edges of the target region and to perform interpolation optimization for the occluded regions in the optical flow estimation.

Experimental Results
In this section, a set of experiments is carried out to validate the superiority of our proposed approach over state-of-the-art methods. Light field images with a resolution of 398 × 398 are chosen for the tests.

Figure 3. Diagram of matching points between multi-frame sub-aperture images.
After calculating the corresponding points of the central sub-aperture image in all the other ones, n − 1 optical flow estimates can be obtained.

Figure 5.
Figure 4. Comparison of initial depth estimation between the traditional approach and ours. (a) Result using only 2 frames; (b) our result.

Firstly, a total of 7 × 7 sub-aperture images are extracted from a raw light field image. Then, on the basis of the conventional Lucas-Kanade algorithm, we calculate the set of corresponding points between the central sub-aperture image and all the other ones. Based on the obtained set of corresponding points, the optical flow estimates in the horizontal and vertical directions are obtained. As a result, the stable and homogeneous optical flow of the light field image can be calculated. Note that the effects of noise, varying light intensity and occlusions are minimized during the calculation of the corresponding points of the target area. Finally, we optimize the depth estimation of the light field image with the EpicFlow approach.
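A textbook single-point Lucas-Kanade estimate, of the kind applied pairwise between the central and each other sub-aperture image, can be sketched as follows (this is the classical formulation, not the paper's exact implementation):

```python
import numpy as np

def lucas_kanade_point(img1, img2, y, x, win=9):
    """Estimate the displacement of the point (y, x) from img1 to img2
    by solving the 2x2 Lucas-Kanade normal equations built from spatial
    and temporal gradients inside a small window."""
    h = win // 2
    p1 = img1[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    p2 = img2[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    iy, ix = np.gradient(p1)              # spatial gradients of the patch
    it = p2 - p1                          # temporal difference
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    u, v = np.linalg.solve(a, b)          # (u, v): horizontal, vertical shift
    return u, v

# Synthetic check: a smooth pattern shifted by (0.2, 0.1) pixels.
yy, xx = np.mgrid[0:21, 0:21].astype(float)
img1 = np.sin(0.3 * xx) + np.cos(0.2 * yy)
img2 = np.sin(0.3 * (xx - 0.2)) + np.cos(0.2 * (yy - 0.1))
u, v = lucas_kanade_point(img1, img2, 10, 10)
```

Since neighbouring sub-aperture views differ by sub-pixel parallax, this kind of gradient-based solver recovers the small horizontal and vertical displacements directly, without a coarse-to-fine pyramid.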
By minimizing these deviations, we can obtain the most suitable values, i.e. those that make the relationships between any two horizontal or any two vertical displacements as close to linear as possible. In other words, the resulting Δw_hi and Δw_vi are the best displacement vectors, and hence the best matching points can be obtained.