Visual Feedback Balance Control of a Robot Manipulator and Ball-Beam System

In this paper, we present a vision-guided robotic ball-beam balancing control system consisting of a robot manipulator (actuator), a ball-beam system (plant), and a machine vision system (feedback). The machine vision system feeds back real-time beam angle and ball position data at a speed of 50 frames per second. Based on the feedback data, the end-effector of the robot manipulator is driven to control the ball position by maneuvering the inclination angle of the ball-beam system. The overall control system is implemented with two FPGA chips, one for machine vision processing and one for the robot joint servo PID controllers as well as the ball position PD controller. Experiments are performed on a 5-axis robot manipulator to validate the proposed ball-beam balancing control system.


Introduction
A machine vision system can be applied in real time to precisely measure visual properties such as color, length, angle, position, and orientation. One advantage of machine vision is the ability to perform contact-free measurement, which is especially important when contact is difficult. However, one of the difficulties in designing a machine vision system is achieving robustness to noise present in an image, to scene disturbances caused by unwanted background and foreground objects, and to different illumination conditions. Various machine vision approaches have been applied to visual servo control [1]. A visual servo system must be capable of tracking image features in a sequence of images.
Visual object tracking is used to identify the trajectory of moving objects in video frame sequences. Object tracking involves intensive computation in order to extract real-time information from high-volume video data with low power consumption. Real-time image processing tasks require not only high computing power but also high data bandwidth. Thus, prototypical FPGA-based real-time image processing systems for computer vision and visual servo applications have been proposed [2] [3] [4]. An application can usually be divided into a preprocessing part located on a dedicated subsystem and a high-level part located on a host system. For instance, the preprocessing system is built on FPGA chips while the high-level part for robot vision applications runs on a PCI-based PC system [2]. In particular, an FPGA implementation of a multi-microrobot visual tracking system using CMOS vision sensors has been shown to achieve 0.01 mm accuracy [3]. Their system was capable of tracking a single robot at a frequency of over 200 Hz. The motivation of this work is to develop a real-time machine-vision-feedback robot motion ball-beam balance control system on FPGA chips.
Ball-and-beam control systems are a well-known apparatus for demonstrating position control of open-loop unstable systems. Many nonlinear/linear control methods as well as stability analyses of ball-beam systems have been presented in the literature [5] [6] [7]. In particular, for a nonlinear ball-beam system model, a simple PD controller with asymptotic stability has been theoretically proven and experimentally tested [7]. However, accurately determining the beam angle and ball position in real time remains an important task.
There are many approaches to sensing beam inclination and ball position; visual feedback is a common solution [8] [9] [10]. However, only a few works apply visual feedback to both beam inclination and ball position measurement. Moreover, there are also few works on integrating a full-scale robot manipulator with a vision feedback system in a ball-beam or ball-plate balancing control system.
Other related works include ball-beam balance systems with feedback from a tablet [11] and from a smartphone [12], a robotic soccer-beam balance system [13], and a ball-and-plate balance system [14]. A ball-beam control system with a visual feedback sensor on a smart tablet was designed in [11]. Additionally, augmented reality was integrated into an interactive touchscreen interface.
A ball-beam system with wireless sensors on a smartphone was demonstrated in [12]. The smartphone's inertial and camera sensors were used to measure the angular orientation/velocity of the beam and the translational ball position, and a local feedback linearizing controller was used to stabilize the ball-beam control system. Ryu and Oh experimented with a soccer-ball-and-beam balance system installed on top of the end-effector of a redundant manipulator [13]. A force-torque sensor was attached to the end-effector to estimate the ball position through differentiation. Cheng and Tsai designed a ball-and-plate balance system, which was attached as the end-effector and maneuvered by a two-DOF robotic wrist [14]. The ball's position was captured by a PC-based video camera system, and the orientation of the plate was controlled by an LQR algorithm.
To the best of our knowledge, no automatic ball-beam balance system driven by a robot manipulator with more than three axes has been reported. This paper designs an automatic robotic ball-beam balance system using an integrated machine vision system. The main goal of this machine-vision-based robot motion control system is to measure the ball position and beam angle in real time, and to balance the ball at any desired point on the beam through the motion of a 5-axis robotic end-effector. This work also focuses on an entirely FPGA-based control implementation with input from a CMOS image sensor.

Robotic Ball-Beam Control System
The robotic ball-beam control system setup is shown in Figure 1. The end-effector of a 5-axis robot manipulator is connected to the left end of the beam, which can rotate freely about its center pivot. A blue ball is allowed to roll freely along the beam. Both the ball's position and the beam angle are measured by a real-time image processing system with input from a CMOS image sensor at a rate of 50 frames per second. The overall control system is implemented on two FPGA development boards, one for real-time image processing and the other for all required control functions, including ball balance control and end-effector position control.
The ball-beam system is a two-dimensional system on the x-z plane, and hence only three axes of the 5-axis robot manipulator, q2, q3 and q4, are under independent-joint set-point PID control; the other two axes, q1 and q5, are stationary and set to the zero angle position. The robot's end-effector is attached to the left end of the beam, as shown in Figure 2. The ball-beam system itself is a single-input (beam angle), single-output (ball position) system.
To have the robot's end-effector automatically drive and balance the ball at any position, a ball position PD controller and independent robot joint PID controllers play the major parts in the overall control system, as shown in Figure 3. The PD controller outputs a commanded beam angle α; the end-effector target position (x, z) is then set from α and the geometry of Figure 2, where R is half of the beam length. Solving the inverse kinematics in Equation (2), the robot joint goal position commands q2g, q3g and q4g are then obtained. An incremental digital PID control law is used to independently control the position of each robot joint. The servo control sampling time is Ts = 0.001 seconds.
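The outer ball-balancing loop described above can be sketched as follows. This is a minimal illustration only: the gain values, function names, and the exact end-effector arc geometry are assumptions, not the paper's implementation; only Tc = 0.02 s, R = 400 mm and the reference position (340, 250) mm come from the text and Figure 2.

```python
import math

KP, KD = 0.4, 0.25        # assumed PD gains (not given in the paper)
TC = 0.02                 # outer-loop sampling time [s]
R = 0.4                   # half beam length [m] (R = 400 mm in Figure 2)
X0, Z0 = 0.340, 0.250     # reference end-effector position [m]

def pd_step(x_goal, x_ball, prev_error):
    """One PD update: returns the commanded beam angle and the new error."""
    e = x_goal - x_ball
    alpha = KP * e + KD * (e - prev_error) / TC   # commanded beam angle [rad]
    return alpha, e

def end_effector_target(alpha):
    """End-effector (x, z) target for a commanded beam angle.

    The left beam end sits at radius R from the center pivot, so it moves
    along a circular arc; the sign convention here is assumed.
    """
    return X0 - R * (1.0 - math.cos(alpha)), Z0 - R * math.sin(alpha)
```

The resulting (x, z) target would then be passed to the inverse kinematics of Equation (2) to produce the joint goal commands.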

Ball-Beam System Image Processing
Let (c0, r0) be the image coordinates of the beam's pivot joint, (c1, r1) the image coordinates of the blue ball, and (c2, r2) the image coordinates of the point where r2 is the lowest pixel position of the beam along the vertical line c = c2, as shown in Figure 4. A blue ball is used here in order to simplify ball detection and to improve measurement accuracy. With L a constant length depending on the camera-to-beam distance, the ball position and beam angle can then be expressed in terms of these image coordinates: the beam angle follows from the slope between (c0, r0) and (c2, r2), and the ball position from the pixel distance between (c0, r0) and (c1, r1) scaled by L. The center position of the blue ball is obtained as follows.
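A possible reading of these pixel-to-physical conversions is sketched below. The sign and axis conventions (image rows growing downward, positive ball position to the right of the pivot) and the `pixels_per_mm` scale factor are assumptions, since the paper's exact expressions are not reproduced here.

```python
import math

def beam_angle(r0, c0, r2, c2):
    """Beam angle [rad] from the pivot (c0, r0) and the lowest beam pixel
    (c2, r2) found on the fixed column c2 (image rows grow downward)."""
    return math.atan2(r2 - r0, c0 - c2)

def ball_position(c0, r0, c1, r1, pixels_per_mm):
    """Signed ball distance from the pivot along the beam, in mm."""
    sign = 1.0 if c1 >= c0 else -1.0
    return sign * math.hypot(c1 - c0, r1 - r0) / pixels_per_mm
```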

Robot Joints Set-Point Control Experiments
The robot manipulator runs in set-point position control mode, in which each joint is under independent-joint PID servo control. The digital PID control is implemented in relative (incremental) form.
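The relative (velocity) form of digital PID accumulates a control increment rather than recomputing the full output each sample; a standard sketch is given below. The gain values are illustrative and not taken from the paper; only the sampling time Ts = 0.001 s is.

```python
class IncrementalPID:
    """Relative (velocity-form) digital PID:

        du(k) = Kp*(e(k) - e(k-1)) + Ki*Ts*e(k)
                + (Kd/Ts)*(e(k) - 2*e(k-1) + e(k-2))
        u(k)  = u(k-1) + du(k)
    """

    def __init__(self, kp, ki, kd, ts=0.001):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.e1 = self.e2 = 0.0   # e(k-1), e(k-2)
        self.u = 0.0              # u(k-1)

    def update(self, e):
        du = (self.kp * (e - self.e1)
              + self.ki * self.ts * e
              + self.kd / self.ts * (e - 2.0 * self.e1 + self.e2))
        self.u += du
        self.e2, self.e1 = self.e1, e
        return self.u
```

A practical advantage of this form, and a likely reason for its use on an FPGA, is that it needs only the last two errors and the previous output, avoiding an unbounded integral accumulator.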

Conclusion
In this work, we have implemented a visual-feedback-based robotic motion control system on two FPGA chips with an efficient parallel architecture. We have experimentally verified that a robotic motion control system with an integrated machine vision feedback sensor can balance the ball at desired positions on the beam in real time.

Figure 1. The control system set-up of the vision feedback robotic ball-beam balance system.

Figure 2. The robot end-effector automatically balances the ball at any desired position on the x-z plane, where (x0, z0) is the reference position.

[Figure 2 annotations: (x0, z0) = (340, 250), unit: mm; R = 400 mm; origin (0, 0); 5-axis robot manipulator; ball-beam balance system.]

A simple discrete PD control law guides the robotic end-effector's motion, where e = xg − x is the ball balance position error between the target goal position and the current position, and the control sampling time is Tc = 0.02 sec. The robotic end-effector's motion command position and orientation are the feedback output variables. When the beam angle θ is small, we have tan θ ≈ θ.

Figure 3. The system block diagram of the robotic ball-beam control system.

Figure 4. The ball-beam visual sensor system, where L depends on the distance between the camera and the beam.

Figure 5. The image processing pipeline.

The CMOS image sensor captures raw images at 50 fps (frames per second). The raw image is converted to a 640 × 480 24-bit RGB image (by down-sampling every 2 × 2 pixel block).

First, an RGB-to-YCbCr color space transformation is applied, followed by binary segmentation of the Cb component with a threshold value of 130. An erosion operation in a 3 × 3 window is performed next, followed by a dilation operation in a 3 × 3 window. True pixels indicate potential ball positions. Finally, the center of the blue ball (c1, r1) is calculated as the center of the true pixels. Beam angle detection is processed in a similar way, since the G component of the RGB color image resembles a gray-scale image. First, Sobel edge detection is applied to the G component of the original RGB image with a threshold value of 384, followed by a dilation operation in a 3 × 3 window. Then r2 is found as the lowest pixel point of the beam in column c2 (c2 = 71). Figure 6 shows the ball-beam visual feedback processing experimental results.
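For reference, the ball-detection stage (Cb thresholding, 3 × 3 erosion, 3 × 3 dilation, centroid) can be sketched in software as below. This is a NumPy illustration of the pipeline, not the paper's FPGA implementation; the BT.601 Cb coefficients are an assumption about which YCbCr variant is used.

```python
import numpy as np

def erode3(m):
    """3x3 binary erosion via shifted logical ANDs."""
    p = np.pad(m, 1, constant_values=False)
    out = np.ones_like(m, dtype=bool)
    for dr in (0, 1, 2):
        for dc in (0, 1, 2):
            out &= p[dr:dr + m.shape[0], dc:dc + m.shape[1]]
    return out

def dilate3(m):
    """3x3 binary dilation via shifted logical ORs."""
    p = np.pad(m, 1, constant_values=False)
    out = np.zeros_like(m, dtype=bool)
    for dr in (0, 1, 2):
        for dc in (0, 1, 2):
            out |= p[dr:dr + m.shape[0], dc:dc + m.shape[1]]
    return out

def ball_centroid(rgb):
    """Locate the blue ball in an HxWx3 uint8 RGB frame.

    Threshold (Cb > 130) and the 3x3 morphology windows follow the text.
    Returns (c1, r1) or None if no ball pixels survive.
    """
    r, g, b = (rgb[..., i].astype(np.float32) for i in range(3))
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b  # BT.601 Cb (assumed)
    mask = dilate3(erode3(cb > 130.0))
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return cols.mean(), rows.mean()   # (c1, r1)
```

The erosion/dilation pair removes isolated noise pixels before the centroid is taken, mirroring the role those operations play in the FPGA pipeline.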

Figure 6. Experimental results of the ball-beam visual feedback processing: (a) the G component of the original image; (b) the binary image for ball position detection; (c) the binary image for beam angle detection.

Figure 7. Set-point control experimental results for robot joints q2 and q3.