OBJECTIVE

As part of this project I modified the RRT Planner [1] to obtain a new planner, RRTWeight. The motivation for modifying the planner came from the observation that the node chosen to extend the Rapidly-Exploring Random Tree via the nearest neighbor is often not the best choice. This is particularly severe in environments that have a large Voronoi region adjacent to nodes which cannot be used to extend the tree (because of obstacles). The problem can be ameliorated by maintaining weights at the nodes of the RRT. The weights at the nodes are modified dynamically, and they allow us to choose a better node for extending the tree. In the examples below, RRTWeight gives better performance than both RRTPlanner and RRTGoalBias. The collated performance comparison results can be found here

APPROACH FOLLOWED

The RRTWeight algorithm/planner maintains a weight of zero or one at each node of the tree. A weight of one means the node can be used to extend the tree; a weight of zero means that particular node is not a good choice for extending the tree. When a node is created it is assigned a weight of one. If, during the running of the algorithm, we are unable to extend the tree via a particular node, the node's weight is reset to zero. This makes sure that the node will not be used in the future to extend the tree. Once we are unable to extend the tree toward the randomly selected point, one of the following steps is taken:
(1) Extend the tree via a randomly selected node, which must also have a weight of one. We try up to K times (I set K=2 for my experiments) to select a node whose weight is one. If we are unable to extend the tree within K iterations, the next iteration tries extending the tree towards the goal.
(2) First try to extend the tree via the Bestbet node. The Bestbet node is the last node whose extension caused GoalDist (the distance to the goal) to decrease. If the weight of Bestbet is zero, we try to extend the tree via a random node. If the randomly selected node also has a weight of zero, the next iteration tries to extend the tree towards the goal.
(3) Try extending the tree via a random node having weight one.
Approach (3) gives worse performance than approaches (1) and (2).
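The weight-maintenance rule and approach (1) above can be sketched as follows. This is a minimal illustration, not the planner's actual implementation: the Node class, function names, straight-line stepping by a fixed eps, and the collision_free predicate are all assumptions, and for simplicity the goal fallback happens within the same step rather than being deferred to the next iteration as the text describes.

```python
import math
import random

class Node:
    """Tree node: a configuration, a parent link, and a 0/1 weight."""
    def __init__(self, config, parent=None):
        self.config = config   # tuple of coordinates
        self.parent = parent
        self.weight = 1        # new nodes start eligible for extension

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def step_toward(q_from, q_to, eps):
    """Move at most eps from q_from toward q_to."""
    d = dist(q_from, q_to)
    if d <= eps:
        return tuple(q_to)
    return tuple(a + eps * (b - a) / d for a, b in zip(q_from, q_to))

def extend(tree, node, target, eps, collision_free):
    """Grow the tree from node toward target; on failure,
    reset the node's weight to zero so it is never retried."""
    q_new = step_toward(node.config, target, eps)
    if collision_free(node.config, q_new):
        child = Node(q_new, parent=node)
        tree.append(child)
        return child
    node.weight = 0
    return None

def rrt_weight_step(tree, q_rand, goal, eps, collision_free, K=2):
    """One RRTWeight iteration (approach 1): nearest weight-one node
    first, then up to K random weight-one nodes, then aim at the goal."""
    eligible = [n for n in tree if n.weight == 1]
    if not eligible:
        return None
    nearest = min(eligible, key=lambda n: dist(n.config, q_rand))
    child = extend(tree, nearest, q_rand, eps, collision_free)
    if child:
        return child
    for _ in range(K):  # retry with random weight-one nodes
        eligible = [n for n in tree if n.weight == 1]
        if not eligible:
            return None
        child = extend(tree, random.choice(eligible), q_rand,
                       eps, collision_free)
        if child:
            return child
    # all K retries failed: fall back to extending toward the goal
    eligible = [n for n in tree if n.weight == 1]
    if not eligible:
        return None
    nearest_goal = min(eligible, key=lambda n: dist(n.config, goal))
    return extend(tree, nearest_goal, goal, eps, collision_free)
```

Because a failed extension permanently zeroes a node's weight, nodes bordering obstacles drop out of the nearest-neighbor search, which is exactly the behavior the large-Voronoi-region environments are designed to exercise.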

COMPUTED EXAMPLES

A point robot in a 2D world was assumed. Five different environments/examples were studied in detail, under the configuration spaces C^2 (Model2DPoint) and C^3 (Model2DRigid). Three of the examples (example1, example2, example5) were chosen so that RRT should make bad choices when extending the tree. The other examples served as a control, making sure that the performance of RRTWeight did not degrade in ordinary environments (and hence guarding against the possibility of the planner being tuned towards certain environments). The examples studied, their descriptions, and the output computed by the various planners are given below:
example1
example2
example3
example4
example5

PERFORMANCE EVALUATION AND RESULTS

The following tables represent the results obtained:
results (Approach 1) RRTWeight
results (Approach 2) Bestbet
The results indicate that RRTWeight gives consistently better performance than the RRT Planner. Further, the performance of RRTWeight is better than or comparable to RRTGoalBias (with bias=0.5). Bestbet gives slightly worse performance than pure RRTWeight.

FURTHER WORK

The following further work needs to be carried out:

* Using RRTWeight inside RRTExtExt. Presently RRTExtExt uses the basic RRT Planner to extend the tree from both the goal and the initial state. It would be interesting to see how the performance improves when RRTWeight is used instead of RRT.
* Presently the strategy of maintaining weights at the nodes of the tree is simple. Other methods/strategies of maintaining weights need to be studied.
* Tuning the parameter K, the number of times we iterate to select a random weight-one node to extend the tree when we are unable to extend it via the nearest neighbor.
* The performance of RRTWeight needs to be compared with RRTGoalBias for different values of the bias (presently done only for bias=0.5).

REFERENCES

[1] S. M. LaValle. Rapidly-Exploring Random Trees: A New Tool for Path Planning. Technical Report 98-11, Dept. of Computer Science, Iowa State University, Oct. 1998.
[2] S. M. LaValle and J. J. Kuffner, Jr. Rapidly-Exploring Random Trees: Progress and Prospects. In Proc. 2000 Workshop on the Algorithmic Foundations of Robotics.