Applied Techniques in Tracking Moving Targets in Marine Using Image Processing


Tracking of moving objects is one of the most active areas of machine vision and has attracted the attention of many researchers in recent decades. Video tracking has various applications in military industries, protective systems and machine vision. Target tracking algorithms vary according to their application. In this paper, we attempt to discuss and analyze techniques and algorithms for tracking moving targets in marine environments.

Share and Cite:

Moghaddam, H. and Yarifard, A. (2016) Applied Techniques in Tracking Moving Targets in Marine Using Image Processing. Open Journal of Marine Science, 6, 524-537. doi: 10.4236/ojms.2016.64042.

1. Introduction

Video surveillance, target tracking in military applications, and the extraction and tracking of video objects are among the main issues in machine vision that have attracted much attention in recent years because of their various applications. Target tracking in video images plays an important role in advanced applications based on machine vision, such as surveillance systems, human-robot communication and medical engineering [1] [2] [3] [4] . In tracking, the target is usually expressed as an object contour; therefore, any information about deformation within the contour area is useful [1] . This deformation can be modeled by a two-dimensional affine transformation of the contour area, obtained from the geometric movement of the object. Many different features can be considered for a target tracking system. Such a system must operate in different unusual situations with reasonable cost and appropriate speed. A desirable tracking system should cope with different lighting and climatic conditions [5] . For example, a vehicle tracking system should perform well at night, during the day, at dawn and at sunset, and its efficiency should not be reduced in sunny, rainy or foggy weather [6] . Existing tracking systems perform well in particular climatic and lighting conditions but have weak points in others. Noise is inevitable in images: image quality is affected by many factors, such as the recording device, climatic conditions, the scene context and changes in light intensity during day and night, all of which create challenges for machine vision algorithms. Therefore, removing or reducing noise improves the results of these algorithms. After noise removal, good features to track are obtained in the frame at time t using the method of Shi and Tomasi [7] .
They score candidate features from image derivatives (computed, for example, with the Sobel operator), and the points obtained by this method are introduced as good features to track. The next step is matching with KLT. In this step, using the Lucas-Kanade method [8] , good corresponding features between the frames at times t and t + 1 are found. The KLT method is based on three assumptions: brightness constancy, temporal persistence (small movements) and spatial coherence. Under these assumptions, the brightness of pixels belonging to one object is constant across sequential frames, the pixels have small movements between frames, and pixels in a small neighborhood have similar properties and movements. One of the most widely used characteristics for tracking is corner features, which have attracted much attention because of their low sensitivity to variations in light intensity. Corner points have been used in many tracking methods. Corner detectors in the tracking literature include Moravec [9] , Harris [10] , KLT [7] and SIFT [11] . The authors of [12] and [13] evaluated feature points for tracking.
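The Shi-Tomasi selection and KLT matching described above can be illustrated with a minimal single-window Lucas-Kanade estimate. This is a sketch in NumPy under the three KLT assumptions, with a synthetic image of our own; it is not the authors' implementation:

```python
import numpy as np

def lucas_kanade(frame0, frame1):
    """Estimate one (u, v) displacement for a window between two frames.

    Solves the least-squares system Ix*u + Iy*v = -It implied by the
    brightness-constancy and small-motion assumptions of KLT.
    """
    Ix = np.gradient(frame0, axis=1)   # spatial derivative in x
    Iy = np.gradient(frame0, axis=0)   # spatial derivative in y
    It = frame1 - frame0               # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# synthetic check: a Gaussian blob translated by 0.3 pixels in x
yy, xx = np.mgrid[0:40, 0:40]
f0 = np.exp(-(((xx - 20.0) ** 2) + ((yy - 20.0) ** 2)) / 30.0)
f1 = np.exp(-(((xx - 20.3) ** 2) + ((yy - 20.0) ** 2)) / 30.0)
u, v = lucas_kanade(f0, f1)   # u close to 0.3, v close to 0
```

In practice the system is solved in a small window around each selected corner, and an image pyramid is used so that the small-motion assumption holds at every level.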

In [14] and [15] , the authors used a background subtraction method that is updated during tracking. In [16] [17] [18] , the authors used a clustering method known as K-means on color and spatial characteristics. The main idea of this method is that the features of the object and of the background near the object are first clustered; then, by deleting common clusters, the object is separated from its nearby background and tracked. Most tracking algorithms follow a moving target from one frame to the next in a sequence of images. Based on the surveyed literature, tracking methods can be divided into five main groups [19] [20] :

1) Tracking based on model.

2) Tracking based on area.

3) Tracking based on static and dynamic (active) contour.

4) Tracking based on feature (tracking based on feature points).

5) Tracking based on mesh.

In model-based tracking, an object model is usually matched with the image during an iterative process, and the best-matching area is taken as the observed position of the object. Other systems, based on motion detection methods, identify moving areas in the image; if these areas have appropriate features, they are considered targets and are tracked in subsequent steps. Unlike systems that only track a swarm of pixels, model-based systems reach the level of image understanding and are more advanced.

2. Tracking Using Model

For tracking, a two-dimensional or three-dimensional object model is usually matched with the image during an iterative process, and the location of the best match is taken as the observed position of the object [21] . Systems that are not model-based separate moving areas from the image through motion detection methods; if these areas have suitable characteristics, they are assumed to be targets and are tracked in the next steps. In model-based methods, a hypothetical model of the object's shape or movement is first established from primary data. This model is then applied to evaluate the object at a number of predicted locations, and the state of the target object is finally estimated with the help of state estimation techniques such as the Kalman filter [22] [23] . Model-based methods generally perform well and handle the problem of overlapping. Model-based tracking has a high capability for analyzing behaviour and body movements and for detecting people. The main problem of these methods is their high computational complexity, which limits their suitability for real-time implementation [24] . In template matching, a branch of model-based methods, a predetermined target is tracked in the input image using measures such as SAD (sum of absolute differences), SSD (sum of squared differences) and NCC (normalized cross-correlation), and the position of the target is reported for each input frame. The computational complexity of these methods increases with larger targets and search windows. To reduce this cost, we can extract sets of image features such as corners, areas and edges and perform the template matching on these features [25] . Since object contours usually exhibit large variations in light intensity, edge detection is used to find these variations. One of the main advantages of edges is their low sensitivity to variations in light intensity [1] . In outdoor environments, there are many challenges, such as lighting variations and environmental noise. Fuzzy edge detection is effective for dealing with these challenges.

Tracking Using Template Matching in Gray Images

In this method, a template is slid over the input image, and matching is computed using SSD or normalized correlation techniques. Template matching methods do not require pre- or post-processing. The complexity of correlation methods is relatively low, so these methods are suitable for real-time implementation [26] . Moving from the simple SAD criterion to the more complex NCC criterion, we obtain more precise matches at the price of more computation. Template matching methods are in most cases the best and most appropriate solutions, but their computational complexity increases with larger target dimensions. The most common template matching method for gray-level images is NCC [27] . This method is common in object tracking and localization applications, and it is more robust to lighting changes in the environment than other methods.

The NCC score can be written as

c(m, n) = Σ_{i,j} (S(i + m, j + n) − U_s)(t(i, j) − U_t) / sqrt( Σ_{i,j} (S(i + m, j + n) − U_s)² · Σ_{i,j} (t(i, j) − U_t)² )

where c(m, n) is the template matching score at row m and column n, U_s and U_t are the mean values of the search window and template window, S represents the search window and t stands for the template window.
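As an illustration, the NCC score described above can be computed directly as follows. This is a deliberately unoptimized sketch with names of our own choosing (real implementations use FFTs or integral images):

```python
import numpy as np

def ncc_scores(search, template):
    """Normalized cross-correlation c(m, n) of a template over a search window."""
    th, tw = template.shape
    t0 = template - template.mean()          # subtract the template mean U_t
    t_norm = np.sqrt((t0 ** 2).sum())
    H, W = search.shape
    scores = np.zeros((H - th + 1, W - tw + 1))
    for m in range(H - th + 1):
        for n in range(W - tw + 1):
            s = search[m:m + th, n:n + tw]
            s0 = s - s.mean()                # subtract the window mean U_s
            denom = np.sqrt((s0 ** 2).sum()) * t_norm
            scores[m, n] = (s0 * t0).sum() / denom if denom > 0 else 0.0
    return scores

rng = np.random.default_rng(0)
img = rng.random((30, 30))
tpl = img[5:13, 7:15].copy()                 # template cut from the image itself
c = ncc_scores(img, tpl)
best = np.unravel_index(np.argmax(c), c.shape)   # -> (5, 7), where c is 1.0
```

Because the score is normalized by the window and template energies, it is invariant to uniform brightness scaling, which is why NCC tolerates lighting changes better than SAD or SSD.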

3. Area-Based Methods

In this method, an area of the image corresponding to the considered target is detected. Information such as movement, color and features is used for detecting areas, and by combining different areas the object can be constructed. The target is usually detected by subtracting two images. In [28] , Wren used small blobs for tracking a person in an image: the human body is represented by small blobs, each standing for a particular body part, and the pixels belonging to the body and to the background are modeled by Gaussian models. By tracking each of the blobs, the desired target is tracked. This method functions well when objects move freely without touching each other, but detection becomes problematic when objects overlap, and the method also has high computational complexity. In area-based tracking, video objects are first delineated by an object detection algorithm or by the user, and then regions of the video images are grouped by classic tools such as the watershed algorithm. Matching of the clustered areas across sequential frames is applied in order to perform tracking in the following frames [29] [30] .
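The adaptive background subtraction used in [14] [15] can be sketched as a simple running-average background model. The per-pixel Gaussian models of the cited works are richer; the learning rate and threshold below are illustrative assumptions of ours:

```python
import numpy as np

def subtract_background(frames, alpha=0.1, thresh=0.25):
    """Running-average background model producing per-frame foreground masks."""
    bg = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        mask = np.abs(frame - bg) > thresh       # foreground where frame differs
        bg = (1 - alpha) * bg + alpha * frame    # slowly adapt the background
        masks.append(mask)
    return masks, bg

# synthetic sequence: static background with a bright square moving right
frames = []
for t in range(5):
    f = np.zeros((20, 20))
    f[8:12, 2 + 3 * t: 6 + 3 * t] = 1.0
    frames.append(f)
masks, bg = subtract_background(frames)   # last mask covers the square's final position
```

Note that a slowly adapting average leaves a fading "ghost" at the object's old positions; the mixture-of-Gaussians models of [14] handle this more gracefully.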

4. Tracking Based on Dynamic and Static Contour

Contour-based methods detect the desired target by representing a contour surrounding it; in active contour systems this representation is updated dynamically. The advantage of this method is its low computational complexity. However, segmentation of slightly overlapping objects is problematic; if a new contour is built for each object during overlap, the objects can still be tracked [21] . Object tracking based on active contour models is performed by tracking the contours of the target object. In this method, a contour curve is defined around the target object, automatically or by the user. Then, under the influence of an energy function, this contour is deformed to match the contours of the target object [31] [32] . The contour is used for defining the extent of the object. By detecting foreground areas in images (as sets of pixels), the contour or edge of the foreground can be detected, and this information is used for computing descriptors. Edges are suitable tools for compressing foreground information and simplifying descriptor computation [33] . Contour-based methods have been studied extensively in recent years. Some of the advantages of this approach over other tracking methods are:

1) The capability to represent objects with complex edges.

2) The capability to track objects, such as humans, whose shape changes dynamically.

The advantage of a representation based on the object contour over an area-based representation is lower computational complexity. In active contours, the turning points of the contour pose a problem. Chain codes are used for representing borders as a connected sequence of line directions. Directions are typically quantized into 4 or 8 values, and each line segment is coded with a numerical template as shown in Figure 1.


Figure 1. Chain code templates: (a) template for four-directional codes; (b) template for eight-directional codes.
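The eight-directional coding of Figure 1 can be sketched as follows; the convention used here (code k points at an angle of 45·k degrees, with x to the right and y upward) is one common choice, and the function names are ours:

```python
# 8-directional Freeman chain code: code k corresponds to the unit step
# (dx, dy) at angle 45*k degrees, x to the right, y upward.
STEPS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
         (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a boundary given as successive (x, y) points one step apart."""
    return [STEPS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]   # unit square, counterclockwise
codes = chain_code(square)   # -> [0, 2, 4, 6]
```

The code sequence is a compact border descriptor: rotating the object by 90 degrees cyclically shifts every code by 2, which makes simple rotation-normalized comparisons possible.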

Active contour models, known as snakes, are distinguished by their particular shape and movement. They were first defined by Kass et al. [34] in 1987 as curves of minimum energy, driven by an energy function toward prominent image features such as lines, edges and corners. This method was suggested for segmentation and tracking in video images, but minimizing the energy function posed numerous problems of precision and speed. Amini et al. (1990) [35] proposed a dynamic programming method for minimizing the contour energy that allows hard constraints to be added for optimizing contour behaviour, but this algorithm was slow, with time complexity O(nm³), where n is the number of contour points and m is the number of possible locations for each point's movement.

Williams and Shah (1992) [36] proposed a fast greedy algorithm for minimizing the contour energy, with time complexity O(nm). The problem with this method was the incorrect detection of objects with concave contours. Caselles et al. (1993) [37] proposed geometric active contour models, which solved the problems of concave contour detection and of topology dependence in previous models. However, this method had many computational problems that reduced the algorithm's speed and made it unsuitable for tracking. McInerney and Terzopoulos (2000) [38] proposed T-snakes (topology adaptive snakes), which solved the topology-dependence problem with less computation than the method of Caselles and were suitable for tracking objects with complex structures. Ivins [39] (1994), Hamarneh [40] (2000) and Schaub [41] (2003) proposed different color-snake models that drive the contour curve toward a target object of a distinct color by adding a color pressure force to the contour energy function, based on statistical calculations over the target area and background. The limitation of these methods is that the colors of the object and background must be simple; they cannot track objects with complex colors or textures.

5. Feature-Based Tracking

Feature-based systems are categorized into two sets according to the kind of features applied in tracking:

• Tracking with movement detection capability.

• Tracking with object detection capability.

These systems track objects using features inside the tracked object. Feature points can be edges or lines in the tracked object; these points may belong to a particular object or to a group of objects. For instance, a fingerprint pattern belongs to a particular person, while symmetry features belong to a group of objects. The advantage of feature-based tracking is that, even during slight overlap of two or more objects, some features of the object remain visible. In feature-point-based methods, a number of appropriate points in the target window are first detected (for example, corner points found by a corner detector); then the corresponding points are found in the next frame and the motion vector is determined; finally, the tracking window is adjusted based on the motion vector [42] .
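The step "corner points found by a corner detector" can be sketched with the Shi-Tomasi response, i.e. the smaller eigenvalue of the local structure tensor; the 3 × 3 window and the synthetic test image below are our own choices:

```python
import numpy as np

def box3(a):
    """Sum over a 3x3 neighborhood (with wrap-around at the borders)."""
    return sum(np.roll(np.roll(a, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

def shi_tomasi_response(img):
    """Smaller eigenvalue of the 2x2 structure tensor at each pixel."""
    Ix = np.gradient(img, axis=1)
    Iy = np.gradient(img, axis=0)
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    trace = Sxx + Syy
    diff = np.sqrt((Sxx - Syy) ** 2 + 4 * Sxy ** 2)
    return (trace - diff) / 2.0

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0               # bright square: four strong corners
R = shi_tomasi_response(img)
# R peaks at the square's corners; along a straight edge the smaller
# eigenvalue is near zero, so edges are rejected as features.
```

Corners score highly because the intensity gradient varies in two directions there, which is exactly what makes them stable under lighting changes and reliable to re-locate in the next frame.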

6. Mesh-Based Tracking

In mesh-based tracking, a set of nodes on the target object contours is first defined. These nodes are then connected through a triangulation procedure, and the resulting points are tracked by sampling the node motion vectors according to optical flow motion estimation [43] [44] . The two-dimensional mesh model is a motion tracking method based on the spatial changes of the object across the mesh cells and the neighborhood relations between different cells. Movement is tracked through the transfer of the corner points of the mesh cells, computed from the estimated motion field [43] [45] . Two-dimensional meshes can be divided into uniform meshes, with the same size for all cells, and non-uniform meshes, such as hierarchical and content-based models, whose cell sizes vary with the image frame [46] [47] [48] . Cells of a uniform mesh may overlap each other across sequential frames of a video, while a non-uniform mesh adapts to the object contours. In addition, when there is more than one motion in the image frame, a uniform mesh is not sufficiently effective [49] [50] . Successful tracking depends on the precise detection of overlapping areas and model-failure (MF) areas and on the estimation of the motion field near object contours. Motion compensation is needed to determine which points to delete from the mesh structure and to detect new point locations. Standard mesh models require motion continuity across all frames, which is not desirable with multiple motions and overlapping areas in the image frame [43] [45] . The concept of the adaptive mesh was introduced in motion tracking to deal with this restriction. The motion field estimation algorithm estimates motion vectors independently at each pixel using Horn-Schunck optical flow [51] .
Optical-flow-based motion field estimation performs better than other methods (for example, block matching) because it produces smooth motion fields that are appropriate for parameterization [52] . In [53] , another method for estimating the motion field, based on splitting and merging, was presented by Lucas and Kanade. The detection algorithm for background-to-be-covered (BTBC) areas used in mesh-based tracking consists of the steps explained in [43] [45] .
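The Horn-Schunck estimation mentioned above can be sketched in plain NumPy. The 4-neighbour average, the parameter values and the synthetic test below are simplifications of ours, not the original formulation of [51]:

```python
import numpy as np

def horn_schunck(I0, I1, alpha=0.3, n_iter=300):
    """Dense optical flow by the Horn-Schunck iteration.

    Repeatedly relaxes u, v toward the local flow average, corrected by
    the brightness-constancy residual Ix*u + Iy*v + It; alpha weights
    the smoothness of the field against the data term.
    """
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0
    u = np.zeros_like(I0)
    v = np.zeros_like(I0)
    for _ in range(n_iter):
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        resid = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_avg - Ix * resid
        v = v_avg - Iy * resid
    return u, v

# a Gaussian blob translated by 0.3 px in x; flow in the textured region ~ (0.3, 0)
yy, xx = np.mgrid[0:48, 0:48]
f0 = np.exp(-(((xx - 24.0) ** 2) + ((yy - 24.0) ** 2)) / 40.0)
f1 = np.exp(-(((xx - 24.3) ** 2) + ((yy - 24.0) ** 2)) / 40.0)
u, v = horn_schunck(f0, f1)
mask = np.abs(np.gradient(f0, axis=1)) > 0.02   # pixels with usable texture
```

The smoothness term is what makes the field suitable for mesh tracking: it fills in plausible motion in textureless regions, where the data term alone is uninformative.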

7. Related Work

Techniques such as the Kalman filter, Markov chain Monte Carlo, optical flow and the particle filter are well-known object tracking methods. Although the number of proposed methods in this field is large, each has its own advantages and disadvantages. For instance, the Kalman filter estimates the object state and uses the measurement to update the state estimate; it is therefore an online estimator that predicts linear processes and corrects the object position [54] . The extended Kalman filter linearizes the state and measurement equations and can thus be used for solving non-linear problems [55] . Markov chain Monte Carlo is one of the techniques used for tracking vehicles [56] . Although this algorithm can track overlapping vehicles with adapted sample sizes, it is still unstable and its tracking results contain errors. Optical flow is also a well-known method used for tracking objects. In [57] , optical flow was used for detecting and tracking moving objects. However, optical flow reflects the moving background of the captured image, so when there is overlap, or the moving object is occluded by obstacles, the optical flow hardly matches the moving object. Tracking with a particle filter is a robust method that can deal with non-linear conditions [58] . In [59] , an ordinary particle filter used for tracking faced the problem of particle degeneracy. Particle degeneracy occurs because the weights of most particles become very small after some iterations, which impedes the progress of the algorithm. In [60] , it is stated that particle degeneracy can be addressed by resampling or by using very many particles. However, it is not practical to use very many particles because of the huge amount of computation; therefore, resampling of particles is a suitable way of dealing with this problem [61] . In addition, the color feature was used in [60] for vehicles and other rigid objects.
According to their results, an algorithm with color features can accurately track an automobile, as the color feature is robust to scale variation, rotation and occlusion of the vehicles. However, the color feature is limited when the background is cluttered or when its color is similar to the target. In addition, it has been shown in [62] that a tracking algorithm combining several features gives better tracking results.

8. Particle Filter

Measurements obtained from video sensors (cameras) are contaminated with noise. In addition, object movements can be subject to random disturbances. Stochastic tracking methods such as particle filters solve these problems by estimating the object position while allowing for imprecise models and measurements. The particle filter algorithm is notable in visual tracking as a powerful estimation tool for non-linear systems [1] [3] [4] [63] . One of the challenges in particle filtering is choosing an appropriate importance function for random sampling, which has a considerable effect on the performance of the algorithm. There have been many attempts to estimate an importance function to replace the state transition distribution [64] [65] . The particle filter belongs to a family of filters in which sequential Monte Carlo sampling is used for estimating the posterior distribution [65] . The power of these methods lies in their ability to describe non-linear, non-Gaussian models. In other words, since the particle filter is a numerical method for signal estimation, the non-linearity of the model or non-Gaussian noise is not an obstacle. The simplicity of their implementation is another reason for the usefulness of this family of methods. The particle filter is in fact a sampling-based implementation in which the posterior probability density is estimated with a set of weighted particles [66] . Generally, the particle filter is based on three important stages: prediction, measurement and resampling. In the prediction stage, a set of particles indicating possible state transitions of the floating target (the ship) is generated. In the measurement stage, particle weights are computed from the measurement likelihood. The resampling stage prevents the particle degeneracy problem. The particle filter can be extended to track a floating target with dynamic changes; the prior probability density function and the observation likelihood measured in the particle filter algorithm are therefore often non-Gaussian.
In the posterior probability density function, the state vector x_t identifies the position of the tracked floating target, while z_t denotes all measurements of the state up to time t. The main idea of the particle filter is to estimate the posterior distribution by a finite set of weighted random samples {x_t^(i), w_t^(i)}, i = 1, …, N_p, called particles. Each weighted sample x_t^(i) represents a hypothesis of the target's position drawn from the posterior distribution, and w_t^(i) is the weight associated with particle i. Since w_t^(i) is the weight of each particle, each weight is restricted to a bounded range, and the complete set of particle weights must be normalized so that their sum is one. These constraints are expressed in relations (1) and (2):

0 ≤ w_t^(i) ≤ 1   (1)

Σ_{i=1}^{N_p} w_t^(i) = 1   (2)
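The three stages (prediction, measurement, resampling) can be sketched as a minimal bootstrap particle filter for a one-dimensional position. The random-walk motion model, the Gaussian likelihood and all parameters below are illustrative assumptions of ours, not those of a real marine tracker:

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, weights, z, motion_std=1.0, meas_std=2.0):
    """One predict-measure-resample cycle of a bootstrap particle filter."""
    # 1) prediction: propagate particles through a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # 2) measurement: reweight by the Gaussian likelihood of measurement z
    weights = weights * np.exp(-0.5 * ((particles - z) / meas_std) ** 2)
    weights = weights / weights.sum()
    # 3) systematic resampling: fights particle degeneracy
    n = len(particles)
    u = (np.arange(n) + rng.random()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), u), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

n = 500
particles = rng.uniform(0.0, 20.0, n)      # initial guess: anywhere in [0, 20]
weights = np.full(n, 1.0 / n)
for _ in range(15):                        # true target fixed at position 10
    particles, weights = pf_step(particles, weights, z=10.0)
estimate = float(np.sum(particles * weights))   # converges near 10
```

Resampling after every step keeps the weights uniform, which is why the estimate reduces to the particle mean here; adaptive schemes resample only when the effective sample size drops.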



9. Extended Kalman Filter

The extended Kalman filter (EKF) is widely used for tracking systems with non-linear measurements. The EKF is based on first-order linearization. In tracking problems, the EKF has some limitations. First, the linearization stage requires the computation of the Jacobian matrix, and computing the measurement matrix H_k faces many problems when tracking highly maneuvering targets because of sudden changes in target acceleration. Also, using the first-order approximation in the EKF algorithm for computing the error covariance matrix occasionally leads to filter divergence. Moreover, the large amount of computation in this algorithm, given the real-time requirements of target tracking, is another problem of this method [67] .
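The linearization step can be illustrated with a scalar toy problem of our own: the state x is estimated from the non-linear measurement h(x) = x², so the Jacobian reduces to H_k = 2x:

```python
def ekf_scalar(z_seq, x0=2.0, P0=1.0, Q=0.01, R=0.1):
    """Scalar EKF with a static motion model and measurement h(x) = x**2."""
    x, P = x0, P0
    for z in z_seq:
        # predict: the state is static, so only the covariance grows
        x_pred, P_pred = x, P + Q
        # linearize the measurement around the prediction: H_k = dh/dx = 2x
        H = 2.0 * x_pred
        S = H * P_pred * H + R          # innovation covariance
        K = P_pred * H / S              # Kalman gain
        x = x_pred + K * (z - x_pred ** 2)
        P = (1.0 - K * H) * P_pred
    return x, P

# noise-free measurements of a target at x = 3 (so z = 9)
x_est, P_est = ekf_scalar([9.0] * 20)   # x_est converges close to 3
```

The sketch also shows the fragility noted above: if x_pred ever lands near 0, the Jacobian H vanishes, the gain collapses, and the filter can stall or diverge, which is the scalar analogue of the Jacobian problems under high-maneuver tracking.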

10. Mathematical Description of Active Contour Models

The active contour model (snake), first proposed by Kass, is a parametric curve in the image frame defined by

v(s) = (x(s), y(s)),  s ∈ [0, 1]
This curve is deformed under the influence of an energy function and guided toward the desired features in the image. The energy function is defined as follows [34] [68] :

E_snake = ∫ E_snake(v(s)) ds,  with the integral taken over s ∈ [0, 1]
consisting of an internal energy and an image energy:

E_snake(v(s)) = E_int(v(s)) + E_image(v(s))
The internal energy depends on intrinsic properties of the contour, such as elasticity and curvature, and is calculated by

E_int(v(s)) = (1/2) (α(s) |v′(s)|² + β(s) |v″(s)|²)
The first term of the internal energy makes the contour act like a spring, determining the elasticity of the curve. The second term determines the resistance of the curve to bending. In the above relation, the coefficients α and β are weighting parameters controlling the sensitivity of the contour to stretching and bending. The image energy drives the contour curve toward desired, prominent image features such as edges, lines and corners. In the first formulation of active contour models, this energy is obtained from edge detection and is calculated by [34]

E_image = −|∇I(x, y)|²
or, according to [68] ,

E_image = −p |∇(G_σ ∗ I(x, y))|²   (8)
Relation (8) is used for decreasing the effect of noise, where p is a parameter for the magnitude of the image energy, ∇ denotes the gradient operator, and G_σ ∗ I is the convolution of the image with a Gaussian filter of standard deviation σ. Therefore, the total energy of the active contour is defined by:

E_snake = ∫ [E_int(v(s)) + E_image(v(s))] ds   (9)
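The internal term of the snake energy above is easy to evaluate on a discrete contour. The following sketch uses our own discretization, with constant α and β, replacing the derivatives by finite differences over the contour points:

```python
import numpy as np

def internal_energy(pts, alpha=1.0, beta=1.0):
    """Discrete snake internal energy for a closed contour of (x, y) points.

    First differences approximate v'(s) (the elastic, spring-like term);
    second differences approximate v''(s) (the bending/stiffness term).
    """
    d1 = np.roll(pts, -1, axis=0) - pts                            # v_{i+1} - v_i
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
    elastic = alpha * (d1 ** 2).sum()
    bending = beta * (d2 ** 2).sum()
    return 0.5 * (elastic + bending)

# sampled circles: both terms shrink as the contour contracts,
# which is why a pure internal energy collapses a snake without image forces
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
big = np.stack([10 * np.cos(t), 10 * np.sin(t)], axis=1)
small = np.stack([2 * np.cos(t), 2 * np.sin(t)], axis=1)
```

Minimizing this term alone shrinks the contour to a point; it is the image energy of relations (8) and (9) that anchors the curve to the target's edges.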
If prominent image features (strong edges) exist, the energy function guides the contour curve toward the target correctly; unfortunately, when strong edges do not exist, the contour curve has difficulty finding the target object. To deal with this problem, a new energy, called the color pressure energy, is used to replace the edge energy in relation (9) [39] [40] [41] . This energy is a function of the statistical features of the model and, by generating a pressure force, compresses or expands the contour toward the target object area. The color pressure energy is defined as:

E_press(v(s)) = ρ G(I(v(s)))
ρ is a parameter that determines the magnitude of the pressure energy and is set by the user. G is a function defined, according to [40] , as:

G(I) = +1 if I ≥ T, and G(I) = −1 otherwise
in which T is an image intensity threshold. The function G is also defined, according to [39] , as

G(I) = 1 − |I − μ| / (kσ)
in which μ is the mean and σ is the standard deviation of the intensities of the target object pixels, available as prior data or computed from the image, and k is a constant determined by the user. The above method works well when the target object and its background are simple, but it is problematic when the target or the background has color or texture complexity.
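The two forms of G can be written directly; this small sketch uses our own function names, with the threshold T and the statistics μ, σ and k supplied by the user as in the text:

```python
import numpy as np

def g_threshold(I, T):
    """Thresholded G of [40]: +1 inside the intensity threshold, -1 outside."""
    return np.where(I >= T, 1.0, -1.0)

def g_statistical(I, mu, sigma, k):
    """Statistical G of [39]: positive near the target mean, negative far away."""
    return 1.0 - np.abs(I - mu) / (k * sigma)

I = np.array([0.2, 0.5, 0.9])
a = g_threshold(I, T=0.4)                          # -> [-1.,  1.,  1.]
b = g_statistical(I, mu=0.5, sigma=0.1, k=2.0)     # positive only near mu
```

The sign of G is what turns the pressure term into an inflating force inside target-like pixels and a deflating force elsewhere, pushing the contour toward the target boundary.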

11. Conclusion

Events such as overlapping, occlusion by fixed background objects, maneuvers, changes in size and variations in the object's distance from the camera, along with segmentation defects, cast shadows, etc., are obstacles that tracking algorithms must deal with. Maintaining consistent tracking (retaining the target throughout such events) at sea level (on water) raises many challenges. In this paper, we have attempted to discuss algorithms and techniques for object tracking.

Conflicts of Interest

The authors declare no conflicts of interest.


[1] Yilmaz, A., Javed, O. and Shah, M. (2006) Object Tracking: A Survey. ACM Computing Surveys.
[2] Yang, H., Shao, L., Zheng, F., Wang, L. and Song, Z. (2011) Recent Advances and Trends in Visual Tracking: A Review. Neurocomputing, 74, 3823-3831.
[3] Kwon, J., Choi, M., Park, F. and Chun, C. (2007) Particle Filtering on the Euclidean Group: Framework and Applications. Robotica, 25, 725-737.
[4] Yan, W., Weber, C. and Wermter, S. (2011) A Hybrid Probabilistic Neural Model for Person Tracking Based on a Ceiling-Mounted Camera. Journal of Ambient Intelligence and Smart Environments, 3, 237-252.
[5] Koller, D., Daniilidis, K. and Nagel, H. (1993) Model-Based Object Tracking in Monocular Image Sequences of Road Traffic Scenes. International Journal of Computer Vision, 10, 257-281.
[6] Rojas, J.C. and Crisman, J.D. (1997) Vehicle Detection in Color Images. Proceedings of IEEE Conference on Intelligent Transportation Systems, 12 November 1997.
[7] Shi, J. and Tomasi, C. (1994) Good Features to Track. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 21-23 June 1994.
[8] Lucas, B.D. and Kanade, T. (1981) An Iterative Image Registration Technique with an Application to Stereo Vision. Proceedings of the 7th Joint Conference on Artificial Intelligence, Vancouver, 24-28 August 1981, 674-679.
[9] Moravec, H.P. (1979) Visual Mapping by a Robot Rover. Proceedings of the 6th International Joint Conference on Artificial Intelligence, 1. Morgan Kaufmann Publishers Inc., San Francisco.
[10] Harris, C. and Stephens, M. (1988) A Combined Corner and Edge Detector. Alvey Vision Conference, Manchester, September 1988, 147-152.
[11] Lowe, D.G. (2004) Distinctive Image Features from Scale-Invariant Key Points. International Journal of Computer Vision, 60, 91-110.
[12] Mikolajczyk, K. and Schmid, C. (2003) A Performance Evaluation of Local Descriptors. International Conference on Computer Vision & Pattern Recognition (CVPR’03), 18-20 June 2003.
[13] Mikolajczyk, K. and Schmid, C. (2005) A Performance Evaluation of Local Descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 1615-1630.
[14] Stauffer, C. and Grimson, W.E.L. (2000) Learning Patterns of Activity Using Real-Time Tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 747-757.
[15] Zhang, R. and Ding, J. (2012) Object Tracking and Detecting Based on Adaptive Background Subtraction. Procedia Engineering, 29, 1351-1355.
[16] Chunsheng, H., et al. (2007) Object Tracking with Target and Background Samples. IEICE Transactions on Information and Systems, 90, 766-774.
[17] Hua, C., et al. (2006) K-Means Tracker. A General Algorithm for Tracking People. Journal of Multimedia, 1, 46-53.
[18] Hua, C., et al. (2008) K-Means Clustering Based Pixel-Wise Object Tracking. IPSJ Online Transactions, 1, 66-79.
[19] Hariharakrishnan, K. and Schonfeld, D. (2005) Fast Object Tracking Using Adaptive Block Matching. IEEE Transactions on Multimedia, 7, 853-859.
[20] Cavallaro, A. (2002) From Visual Information to Knowledge: Semantic Video Object Segmentation, Tracking and Description. PhD Thesis, école Polytechnique Fédérale de Lausanne, Lausanne.
[21] Koller, D., Daniilidis, K., and Nagel, H. (1993) Model-Based Object Tracking in Monocular Image Sequences of Road Traffic Scenes. International Journal of Computer Vision, 10, 257-281.
[22] Toloei, A. and Niazi, S. (2015), Estimation of LOS Rates for Target Tracking Problems Using EKF and UKF Algorithms—A Comparative Study. International Journal of Engineering (IJE), 28, 172-178.
[23] Comport, A.I., Marchand, E. and Chaumette, F. (2005) Efficient Model-Based Tracking for Robot Vision. Advanced Robotics, 19, 1097-1113.
[24] Li, W.H. and Kleeman, L. (2006) Real Time Object Tracking Using Reflectional Symmetry and Motion. The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, 9-15 October 2006, 2798-2803.
[25] Bovik, A. (2009) The Essential Guide to Image Processing. 2nd Edition, Elsevier, Amsterdam.
[26] Störring, M. and Moeslund, T.B. (2001) An Introduction to Template Matching. Technical Report 01-04, Computer Vision and Media Technology Laboratory, Aalborg University, Aalborg.
[27] Ahmed, J., Jafri, M.N., Shah, M. and Akbar, M. (2007) Real-Time Edge-Enhanced Dynamic Correlation and Predictive Open-Loop Car-Following Control for Robust Tracking. Machine Vision and Applications, 19, 1-25.
[28] Wren, C.R., Azarbayejani, A., Darrel, T. and Pentland, A.P. (1997) Pfinder: Real-Time Tracking of the Human Body. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, 780-785.
[29] Black, M.J. and Jepson, A. (1998) Eigentracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation. International Journal of Computer Vision, 26, 63-84.
[30] Salembier, P., Torres, L., Meyer, F. and Gu, C. (1995) Region-Based Video Coding Using Mathematical Morphology. Proceedings of the IEEE, 83, 843-857.
[31] Bing, X., Wei, Y. and Charoensak, C., Member, IEEE (2004) Face Contour Tracking in Video Using Active Contour Model. IEEE, Singapore, 1021-1024.
[32] Fu, Y., Erdem, T. and Tekalp, A.M. (2000) Tracking Visible Boundary of Objects Using Occlusion Adaptive Motion Snake. IEEE Transactions on Image Processing, 9, 2051-2060.
[33] Gonzalez, R.C. and Woods, R.E. (2008) Digital Image Processing. Prentice Hall Publisher, Upper Saddle River.
[34] Kass, M., Witkin, A. and Terzopoulos, D. (1987) Snakes: Active Contour Models. International Journal of Computer Vision, 1, 321-331.
[35] Amini, A., Weymouth, T. and Jain, R. (1990) Using Dynamic Programming for Solving Variational Problems in Vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12, 855-867.
[36] Williams, D. and Shah, M. (1992) A Fast Algorithm for Active Contours and Curvature Estimations. CVGIP: Image Understanding, 55, 14-26.
[37] Caselles, V., Catte, F., Coll, T., et al. (1993) A Geometric Model for Active Contours in Image Processing. Numerische Mathematik, 66, 1-31.
[38] McInerney, T. and Terzopoulos, D. (2000) T-Snakes: Topology Adaptive Snakes. Medical Image Analysis, 4, 73-91.
[39] Ivins, J. and Porrill, J. (1994) Active Region Models for Segmenting Medical Images. Proceedings of the 1st International Conference on Image Processing, Austin, 13-16 November 1994, 227-231.
[40] Hamarneh, G., Chodorowski, A. and Gustavsson, T. (2000) Active Contour Models: Application to Oral Lesion Detection in Color Images. IEEE International Conference on Systems, Man, and Cybernetics, 4, 2458-2463.
[41] Schaub, H. and Smith, C. (2003) Color Snakes for Dynamic Lighting Conditions on Mobile Manipulation Platforms. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2, 1272-1277.
[42] Verestoy, J. and Chetverikov, D. (1998) Comparative Performance Evaluation of Four Feature Point Tracking Techniques. Proceedings of 22nd Workshop of the Austrian Pattern Recognition Group, Illmitz, 1998, 255-263.
[43] Altunbasak, Y. and Tekalp, A.M. (1997) Occlusion Adaptive, Content-Based Mesh Design and Forward Tracking. IEEE Transactions on Image Processing, 6, 1270-1280.
[44] Wang, Y. and Lee, O. (1994) Active Mesh: A Feature Seeking and Tracking Image Sequence Representation Scheme. IEEE Transactions on Image Processing, 3, 610-624.
[45] Altunbasak, Y. and Tekalp, A.M. (1996) Occlusion-Adaptive 2-D Mesh Tracking. Proceedings of the Acoustics, Speech, and Signal Processing, 4, 2108-2111.
[46] van Beek, P. and Tekalp, A.M. (1999) Hierarchical 2-D Mesh Representation, Tracking, and Compression for Object-Based Video. IEEE Transactions on Circuits and Systems for Video Technology, 9, 353-369.
[47] Altunbasak, Y. and Al-Regib, G. (2001) 2-D Motion Estimation with Hierarchical Content-Based Meshes. International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, 7-11 May 2001, 1621-1624.
[48] Laurent, N. (2000) Hierarchical Mesh-Based Global Motion Estimation, Including Occlusion Areas Detection. International Conference on Image Processing, 3, 620-623.
[49] Weizhao, J. and Wang, P. (2002) An Object Tracking Algorithm Based on Occlusion Mesh Model. Proceedings of the 1st International Conference on Machine Learning and Cybernetics, Beijing, 4-5 November 2002, 288-292.
[50] Toklu, C. (1996) 2-D Mesh-Based Tracking of Deformable Objects with Occlusion. International Conference on Image Processing, 1, 933-936.
[51] Horn, B.K.P. and Schunck, B.G. (1981) Determining Optical Flow. Artificial Intelligence, 17, 185-203.
[52] Tekalp, A.M. (1995) Digital Video Processing. Prentice-Hall, Englewood Cliffs.
[53] Lucas, B.D. and Kanade, T. (1981) An Iterative Image Registration Technique with an Application to Stereo Vision. Proceedings of the DARPA Image Understanding Workshop, Vancouver, 24-28 August 1981, 121-130.
[54] Li, X., Wang, K., Wang, W. and Li, Y. (2010) A Multiple Object Tracking Method Using Kalman Filter. Proceedings of International Conference on Information and Automation, Harbin, 20-23 June 2010, 1862-1866.
[55] Nabaee, M., Pooyafard, A. and Olfat, A. (2008) Enhanced Object Tracking with Received Signal Strength Using Kalman Filter in Sensor Networks. Proceedings of the International Symposium on Telecommunications, Tehran, 27-28 August 2008, 318-323.
[56] Kow, W.Y., Khong, W.L., Chin, Y.K., Saad, I. and Teo, K.T.K. (2011) CUSUM-Variance Ratio Based Markov Chain Monte Carlo Algorithm in Overlapped Vehicle Tracking. Proceedings of International Conference on Computer Applications and Industrial Electronics, Penang, 4-7 December 2011, 50-55.
[57] Fang, Y. and Dai, B. (2009) An Improved Moving Target Detecting and Tracking Based on Optical Flow Technique and Kalman Filter. Proceedings of 4th International Conference on Computer Science & Education, Nanning, 25-28 July 2009, 1197-1202.
[58] Arulampalam, M.S., Maskell, S., Gordon, N. and Clapp, T. (2002) A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking. IEEE Transactions on Signal Processing, 50, 174-188.
[59] Li, H., Wu, Y. and Lu, H. (2009) Visual Tracking Using Particle Filters with Gaussian Process Regression. In: Wada, T., Huang, F. and Lin, S., Eds., Advances in Image and Video Technology, Springer, Berlin, 261-270.
[60] Khong, W.L., Kow, W.Y., Angeline, L., Saad, I. and Teo, K.T.K. (2011) Overlapped Vehicle Tracking via Enhancement of Particle Filter with Adaptive Resampling Algorithm.
[61] Fu, X. and Jia, Y. (2010) An Improvement on Resampling Algorithm of Particle Filter. IEEE Transactions on Signal Processing, 58, 5414-5420.
[62] Khong, W.L., Kow, W.Y., Chin, Y.K., Saad, I. and Teo, K.T.K. (2011) Overlapping Vehicle Tracking via Adaptive Particle Filter with Multiple Cues. Proceedings of International Conference on Control System, Computing and Engineering, Batu Ferringhi, 25-27 November 2011, 460-465.
[63] Zhou, S., Chellappa, R. and Moghaddam, B. (2004) Visual Tracking and Recognition Using Appearance-Adaptive Models in Particle Filters. IEEE Transactions on Image Processing, 13, 1491-1506.
[64] Merwe, R., Doucet, A., Freitas, N. and Wan, E. (2001) The Unscented Particle Filter. Advances in Neural Information Processing Systems, 13, 584-590.
[65] Doucet, A., Godsill, S. and Andrieu, C. (2000) On Sequential Monte Carlo Sampling Methods for Bayesian Filtering. Statistics and Computing, 10, 197-208.
[66] Aghamohammadi, A.A., Tamjidi, A.H. and Taghirad, H.D. (2008) SLAM Using Single Laser Range Finder. IFAC Proceedings Volumes, 41, 14657-14662.
[67] Fujii, K. (2002) Extended Kalman Filter. The ACFA-Sim-J Group.
[68] Prince, J.L. and Xu, C. (1996) A New External Force Model for Snakes. Image and Multidimensional Signal Processing Workshop, 30-31.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.