Final Project Report
Spring 2000
Yeow-Seng Kua
Imagine
that you are working on a piece of wasteland, approximately 48,740,000
miles away from home, perhaps even with the luxury of not wearing a space
suit in the extreme Martian atmosphere. Attached to your arms are two
sets of mechanical linkages. Now imagine that, when commanded, a robot
arm magically picks a Martian rock sample and places it into a metal container.
While hoping that one day you will find evidence of life on Mars, you suddenly
hear a soft whisper beside your ear, asking you to be home on time. Then
you remember: today is your daughter's birthday! You step out of your
workstation, walk toward the exit and onto the parking lot, climb into your
Z3, and drive off to the nearest mall to get a gift.
Project Description
The scene described above
illustrates the power of a teleoperated robotic system. Such a system lets
you control a robot directly but also remotely from any workspace. My project
will not be as fancy as the scene above, but it is similar in spirit. Its
purpose is to design a telerobotic system that allows the user to perform
simple tasks, such as picking and placing or manipulating a workpiece,
while the workspace is in motion.
Environment
The environment
in which this robot works is a plane 100 inches wide by 100 inches long. The
plane is filled with a number of rectangular obstacles. The obstacles may
have different sizes but must always be shaped as boxes, because of the
way I wrote my collision detection algorithm. The entire workspace rotates
at a certain speed to simulate a moving workspace; the obstacles themselves,
however, are stationary. The environment also assumes that no unpredicted
obstacle will come into the workspace while the robot is operating.
Robot
The robot
that I use has four degrees of freedom: one rotational and three translational.
To know where the end effector will be as the joint variables change,
we need to formulate the forward kinematics of this robot. This is done
using the Denavit-Hartenberg homogeneous transformation matrices.
Denavit-Hartenberg Homogeneous Transformation Matrices
__________________________________
Joint i    alpha i    a i    d i    theta i
__________________________________
1          0          a1     d1     theta1
2          PI         a2     0      -90
3          0          0      d3     0
4          0          0      d4     0
__________________________________
0A1 =
[ cos(theta1)   -sin(theta1)    0    a1*cos(theta1) ]
[ sin(theta1)    cos(theta1)    0    a1*sin(theta1) ]
[ 0              0              1    d1             ]
[ 0              0              0    1              ]

1A2 =
[  0   -1    0    0  ]
[ -1    0    0   -a2 ]
[  0    0   -1    0  ]
[  0    0    0    1  ]

2A3 =
[ 1   0   0   0  ]
[ 0   1   0   0  ]
[ 0   0   1   d3 ]
[ 0   0   0   1  ]

3A4 =
[ 1   0   0   0  ]
[ 0   1   0   0  ]
[ 0   0   1   d4 ]
[ 0   0   0   1  ]

0A4 =
[  sin(theta1)   -cos(theta1)    0    a2*sin(theta1)+a1*cos(theta1) ]
[ -cos(theta1)   -sin(theta1)    0   -a2*cos(theta1)+a1*sin(theta1) ]
[  0              0             -1    d3+d4+d1                      ]
[  0              0              0    1                             ]
After getting the Denavit-Hartenberg homogeneous transformation matrices, we can get the loop-closure equation by equating:

[ ux  vx  wx  qx ]     [  sin(theta1)   -cos(theta1)    0    a2*sin(theta1)+a1*cos(theta1) ]
[ uy  vy  wy  qy ]  =  [ -cos(theta1)   -sin(theta1)    0   -a2*cos(theta1)+a1*sin(theta1) ]
[ uz  vz  wz  qz ]     [  0              0             -1    d3+d4+d1                      ]
[ 0   0   0   1  ]     [  0              0              0    1                             ]
Forward kinematics
qx=a2*sin(theta1)+a1*cos(theta1)
qy=-a2*cos(theta1)+a1*sin(theta1)
qz=d3+d4+d1
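These equations translate directly into code. Below is a minimal sketch; the function and struct names are mine, not from the actual program, and the joint and link values passed in are placeholders.

```cpp
#include <cmath>

// Forward kinematics of the 4-DOF arm (one rotational joint, three
// translational joints), coded directly from the loop-closure
// equations above.
struct EndEffector { double qx, qy, qz; };

EndEffector forwardKinematics(double theta1, double a1, double a2,
                              double d1, double d3, double d4)
{
    EndEffector q;
    q.qx =  a2 * std::sin(theta1) + a1 * std::cos(theta1);
    q.qy = -a2 * std::cos(theta1) + a1 * std::sin(theta1);
    q.qz =  d3 + d4 + d1;
    return q;
}
```

With theta1 = 0, for example, qx reduces to a1 and qy to -a2, which is a quick sanity check of the equations.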
Approach
The
suggested solution to this problem involves two steps. Step 1 is to
find a way to let the user see the shape, position, and orientation of
the workpiece while the workspace is rotating; step 2 is to perform
the task on the workpiece.
To
solve the first problem, the robot is positioned at the center of rotation
of the workspace. An observation device, such as an encoder or a stereo
vision system, is used to determine the rotational speed of the workspace
relative to the robot. As the robot begins to rotate to match the rotational
speed of the workspace, the relative angular velocity becomes smaller
and eventually goes to zero, assuming a good controller can achieve that.
Even though, in the real world, the relative angular velocities of the robot
and the workspace will never match exactly, this system should still
work. The main idea here is the same as in a differential drive
or differential controller: instead of using the ground (a stationary platform)
to measure velocity and distance travelled, we reference the rotating
platform. For example, if the workspace is rotating at 2 revolutions/second
(counterclockwise), then in order to move ahead of the workspace the manipulator
has to rotate at 2.1 revolutions/second (counterclockwise), and in order
to move backward it has to rotate at 1.9 revolutions/second (counterclockwise).
The difference between the angular speeds causes the relative distance
to increase or decrease over time.
Another
important point I want to mention is that the user controls the manipulator
by looking at a stabilized, apparently stationary picture rendered in
computer-generated graphics.
After obtaining the positions of the obstacles
and the workpiece, the motion strategy method can be implemented to show the
user one or several suggested paths that he or she can take to reach
the goal point from the given initial point. Moving the workpiece
from the initial point to the goal point looks easy from a human
standpoint, but if the obstacles are really complicated and the error
tolerance of the environment is low, a suggested path should be helpful and
might also speed up the user's path selection.
Path Planning
The path planning method
that I currently use is a simple Rapidly-Exploring Random Tree (RRT). Basically,
all the vertices of the obstacles, the workpiece, and the workspace are given,
and the user inputs the initial point and the goal point of the workpiece.
The algorithm is summarized as follows:
1. Get the initial and goal points.
2. Generate a set of random values for theta1, a1, a2, and d4.
3. Calculate the location of the end effector (qx, qy, qz) using the forward
kinematics.
4. Calculate the distance of (qx, qy, qz) to all nodes except the goal node.
5. Select the node at the shortest distance; that will be the current node
(cx, cy, cz).
6. Calculate the distance from (qx, qy, qz) to (cx, cy, cz) and deduce a
new current node (cx, cy, cz, calpha, cbeta, cgamma) using a metric.
7. Let the current node (cx, cy, cz) be the temporary node (tx, ty, tz).
8. Check this temporary node for collision.
9. If there is a collision, abandon the remaining steps and start again;
if not, the temporary node (tx, ty, tz) becomes a new node (nx, ny, nz).
10. Calculate the distance of all nodes to the goal; if the goal is reached,
the program terminates. If not, continue the loop, skipping step 1.
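The ten steps above can be sketched roughly as follows. This is only an outline under assumptions of mine: for brevity it samples the 100 x 100 inch workspace plane directly instead of sampling theta1, a1, a2, d4 and running the forward kinematics; the step length EPS and the goal tolerance TOL are placeholder values; and the collision test is a stub standing in for the bounding-box check of the next section.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Rough sketch of the RRT summarized in steps 1-10 above.
struct Node { double x, y, z; };

static double dist(const Node& a, const Node& b)
{
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Placeholder for the bounding-box collision test (always free here).
static bool inCollision(const Node&) { return false; }

// Grow a tree from init toward goal; returns true when a node lands
// within TOL of the goal.
bool rrt(const Node& init, const Node& goal,
         std::vector<Node>& tree, int maxIter)
{
    const double EPS = 1.0;   // step length (assumed)
    const double TOL = 1.5;   // goal tolerance (assumed)
    tree.clear();
    tree.push_back(init);                                 // step 1
    for (int it = 0; it < maxIter; ++it) {
        // Steps 2-3: a random point standing in for the sampled
        // end-effector position (qx, qy, qz).
        Node q;
        q.x = 100.0 * std::rand() / RAND_MAX;
        q.y = 100.0 * std::rand() / RAND_MAX;
        q.z = 0.0;
        // Steps 4-5: find the nearest existing node (cx, cy, cz).
        int near = 0;
        for (int i = 1; i < (int)tree.size(); ++i)
            if (dist(tree[i], q) < dist(tree[near], q))
                near = i;
        double d = dist(tree[near], q);
        if (d < 1e-9) continue;
        // Steps 6-7: step from the nearest node toward the sample.
        Node t = q;
        if (d > EPS) {
            t.x = tree[near].x + EPS * (q.x - tree[near].x) / d;
            t.y = tree[near].y + EPS * (q.y - tree[near].y) / d;
            t.z = tree[near].z + EPS * (q.z - tree[near].z) / d;
        }
        // Steps 8-9: keep the node only if it is collision free.
        if (inCollision(t)) continue;
        tree.push_back(t);
        // Step 10: stop once the new node is close enough to the goal.
        if (dist(t, goal) < TOL) return true;
    }
    return false;
}
```

The real program additionally carries the orientation components (calpha, cbeta, cgamma) in each node and uses the metric of step 6 rather than plain Euclidean distance.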
Collision Detection
The collision detection
algorithm that I wrote is a simple 3D bounding-box algorithm. I first need
to calculate the plane equation for the workspace and the obstacles (ax+by+cz+d=0),
where:
a = (y1-y0)*(z2-z0) - (z1-z0)*(y2-y0)
b = (z1-z0)*(x2-x0) - (x1-x0)*(z2-z0)
c = (x1-x0)*(y2-y0) - (y1-y0)*(x2-x0)
d = -(a*x0 + b*y0 + c*z0)
By combining all six planes for each obstacle, and for the workspace, I can get the volume region of each box. Then, treating the end effector as a point, I can create a projection (a line) from that point to each plane of the workspace and the obstacles, where the line intersects the plane at:
ti = -(a*xp + b*yp + c*zp + d) / (a^2 + b^2 + c^2)
The distance from the point to each plane can then be calculated using:
dist = ti * (a^2 + b^2 + c^2)^(1/2)
Then, by using a logical predicate, I can create an imaginary bounding box around the point to check for collisions. For example:
/* pass[0] flags the workspace, pass[1..n] flag the obstacles.
   d[i][j] is the distance from the end-effector point to plane j of
   box i (planes 0-1 perpendicular to x, 2-3 to y, 4-5 to z), and
   Rx[i], Ry[i], Rz[i] are the box extents along each axis.  The
   point collides when it is outside the workspace box or inside an
   obstacle box. */
pass[0] = 1;                      /* assume outside the workspace  */
for (int i = 1; i <= n; i++)
    pass[i] = 0;                  /* assume outside every obstacle */

/* workspace: clear the flag once the point lies inside the box */
if (fabs(d[0][0]) < Rx[0] && fabs(d[0][2]) < Ry[0] && fabs(d[0][4]) < Rz[0])
    pass[0] = 0;

/* obstacles: raise the flag when the point lies inside a box */
for (int i = 1; i <= n; i++)
    if (fabs(d[i][0]) < Rx[i] && fabs(d[i][2]) < Ry[i] && fabs(d[i][4]) < Rz[i])
        pass[i] = 1;

int total_pass = 0;
for (int i = 0; i <= n; i++)
    total_pass += pass[i];

Collision = (total_pass == 0) ? 0 : 1;   /* 0: free, 1: collision */
Rx, Ry, Rz are the box lengths along the x, y, and z axes
i = obstacle index (0 denotes the workspace)
j = plane index
n = number of obstacles
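The plane construction and the distance test above can be sketched as follows. The struct and function names are mine, not from the actual program; the code follows the cross-product formulas for a, b, c, d and the ti/distance formulas exactly, so the result is a signed distance (the sign tells which side of the plane the point is on).

```cpp
#include <cmath>

// Plane ax + by + cz + d = 0 through three points (x0,y0,z0),
// (x1,y1,z1), (x2,y2,z2), using the cross-product formulas above.
struct Plane { double a, b, c, d; };

Plane planeFromPoints(double x0, double y0, double z0,
                      double x1, double y1, double z1,
                      double x2, double y2, double z2)
{
    Plane p;
    p.a = (y1 - y0) * (z2 - z0) - (z1 - z0) * (y2 - y0);
    p.b = (z1 - z0) * (x2 - x0) - (x1 - x0) * (z2 - z0);
    p.c = (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0);
    p.d = -(p.a * x0 + p.b * y0 + p.c * z0);
    return p;
}

// Distance of the end-effector point (xp, yp, zp) to the plane:
// ti = -(a*xp + b*yp + c*zp + d)/(a^2 + b^2 + c^2), then
// dist = ti * sqrt(a^2 + b^2 + c^2), as in the two formulas above.
double pointToPlane(const Plane& p, double xp, double yp, double zp)
{
    double n2 = p.a * p.a + p.b * p.b + p.c * p.c;
    double ti = -(p.a * xp + p.b * yp + p.c * zp + p.d) / n2;
    return ti * std::sqrt(n2);
}
```

For instance, the plane through (0,0,0), (1,0,0), (0,1,0) comes out as z = 0, and a point lying on it gets distance zero.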
Results
Animation 1
Animation 2
Animation 3
Additional Information & Future Work
In the near future I plan to use a better collision detection algorithm
so that irregular objects can be used as obstacles. Additional work, such
as improving the speed of the program and implementing other path planning
methods, will also give the user more choices of path planning method for
different environments.
Source Code
BulletinBoard.C
References
Collision Detection Between Geometric Models: A Survey. Ming C. Lin, Stefan Gottschalk.
ComS 576 Homepage: http://janowiec.cs.iastate.edu/cs576/
Introduction to Robotics: Mechanics and Control, 2nd ed. John J. Craig. Addison-Wesley Publishing Company, 1989.
Obstacle Avoidance in Multi-Robot Systems. Mark Gill, Albert Zomaya. World Scientific, 1999.
Rapidly-Exploring Random Trees: Progress and Prospects. Steven M. LaValle, James J. Kuffner Jr.
Remote Control Robotics. Craig Sayers. Springer-Verlag New York, Inc., 1999.
Robot Analysis: The Mechanics of Serial and Parallel Manipulators. Lung-Wen Tsai. John Wiley & Sons, Inc., 1999.