2016
It has been one month here and I have learnt a great deal while working. I wouldn’t say everything is going according to schedule, but research projects are like this: we hope that something will work so we can go further, end up not using it because of some constraint, and then try to find another way.
Project Overview
A rover or an astronaut carries a monocular camera whose live stream can be accessed through a ROS topic. At any moment, the camera sees part of the region in front of it but not the terrain directly beneath or immediately ahead. So visual odometry is used to find the position of the camera, from which we can recover the patch of terrain immediately under the rover or astronaut from frames that were processed before that position was reached.
We know the world coordinates of the camera from visual odometry, and since the camera is calibrated, we have all the parameters needed to map world points to image points in previous frames. So, for every world point on the camera trajectory, we can approximate the world point of the astronaut’s feet or the rover’s wheels. We check whether that world point is visible in previous frames and, if so, collect the terrain patches from all frames in which it appears.
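The projection step above can be sketched with a standard pinhole camera model. This is a minimal illustration, not the project’s code: the intrinsic matrix values are placeholders, and the pose (R, t) would in practice come from the visual odometry.

```python
import numpy as np

# Hypothetical intrinsics for an already-calibrated camera; the focal
# lengths and principal point here are placeholders, not the real values.
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])

def project_point(world_point, R, t, K, image_size=(640, 480)):
    """Project a 3-D world point into a past frame with pose (R, t).

    Returns the pixel (u, v) if the point lies in front of the camera
    and inside the image bounds, otherwise None (i.e. not visible).
    """
    p_cam = R @ world_point + t          # world -> camera coordinates
    if p_cam[2] <= 0:                    # behind the camera: not visible
        return None
    uvw = K @ p_cam                      # camera -> homogeneous pixel
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    if 0 <= u < image_size[0] and 0 <= v < image_size[1]:
        return u, v
    return None

# A ground point slightly below and one metre ahead of a camera at the
# origin looking along +Z; the projection lands inside the image.
foot_point = np.array([0.0, 0.3, 1.0])
pixel = project_point(foot_point, np.eye(3), np.zeros(3), K)
```

When `pixel` is not `None`, the terrain patch around that pixel in the corresponding old frame can be cropped and stored; repeating this over all past frames gives every view of the same ground point.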
For now, it is assumed that the astronaut reports the traversability of the terrain he is standing on with a push button, and that this label, together with the patch we obtained, can be given to a regression model as training data. We should also consider labelling the terrain patches automatically with an accelerometer. The main thing to notice is that we are not classifying terrains but trying to predict the traversability of a given terrain patch.
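The label-collection step could look something like the following sketch, where a button press pairs the astronaut’s traversability score with the most recent terrain patch. The names (`TrainingBuffer`, `on_button_press`) are illustrative, not the project’s API.

```python
from collections import deque

class TrainingBuffer:
    """Accumulates (terrain patch, traversability score) training pairs."""

    def __init__(self, maxlen=1000):
        # Bounded buffer so memory stays constant on long traverses.
        self.samples = deque(maxlen=maxlen)

    def on_button_press(self, terrain_patch, score):
        # score: rating supplied by the astronaut, e.g. on a 0-10 scale.
        self.samples.append((terrain_patch, score))

buf = TrainingBuffer()
buf.on_button_press([[0.1, 0.2], [0.3, 0.4]], 7)  # one labelled patch
```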
So, in the beginning, the algorithm is not expected to give good results, but over time it should produce increasingly accurate estimates of traversability. Further ideas include considering the slope of the terrain, which is an important aspect of traversability.
Visual Odometry
Initial attempts were made with Semi-Direct Visual Odometry (SVO). It worked fine on the test data but did not deliver good results on a live camera stream. The algorithm was tested with Microsoft LifeCam HD, Logitech and Point Grey BFLY-PGE cameras, with focus set to infinity and constant exposure and brightness. Even after trying different frame rates, the results were not satisfactory. SVO is considered well suited to a downward-looking camera and produces robust results for MAVs, but not for a forward-looking camera like ours.
The next attempt was made with ethzasl_ptam. It gave good results even for a forward-looking camera. Considering the limited computational power available on the spacesuit, we may run the mapping thread of the visual odometry on a ground station, which would save a lot of computation on the suit itself.
Support Vector Regression
The main aspect of the project is traversability estimation rather than terrain classification, so a regression model is the natural choice instead of classifying terrains by their traversability scores. A good amount of training data is essential to train a good regression model. For now, we are using placeholder data generated by taking pictures of random terrains and scoring their traversability on a scale of ten. In the future, the training data could be collected from a rover and training done on the ground station; that way, the terrain patches could even be annotated automatically from wheel odometry, which is not possible on a spacesuit. The Italian Mars Society is also working on a dataset of annotated terrains that can be used to train the model.
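The regression step can be sketched with scikit-learn’s `SVR`. The features and scores below are random placeholders standing in for real histogram features and push-button ratings; the hyperparameters are illustrative, not tuned values from the project.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X_train = rng.rand(40, 9)           # stand-in 9-bin orientation histograms
y_train = 10.0 * X_train[:, 0]      # fake traversability scores (0-10)

# RBF-kernel support vector regression on the labelled patches.
model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
model.fit(X_train, y_train)

# Predict a traversability score for an unseen patch's feature vector.
prediction = model.predict(rng.rand(1, 9))
```

Because the model outputs a continuous score rather than a class label, borderline terrain naturally receives an intermediate traversability estimate instead of being forced into a category.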
The features used are simple histograms of gradients extracted from small terrain patches. This is much faster than extracting 128-dimensional SIFT features and much better suited to terrain. For each patch, the gradients along the X and Y directions and the corresponding orientations are calculated; a histogram of gradient orientations is then constructed and given to the SVR for training.
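The feature extraction just described can be sketched in a few lines of NumPy. This is a minimal version of the idea, assuming unsigned orientations over [0, 180) degrees and magnitude-weighted bins; the bin count and normalisation are illustrative choices, not the project’s exact settings.

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Histogram of gradient orientations for a grayscale terrain patch.

    Gradients along X and Y are computed, orientations are folded into
    [0, 180) degrees, binned with the gradient magnitude as the weight,
    and the histogram is L1-normalised into a feature vector.
    """
    gy, gx = np.gradient(patch.astype(float))   # gradients along Y and X
    magnitude = np.hypot(gx, gy)
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(0.0, 180.0), weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist

# A horizontal intensity ramp: all gradient energy is along X, so the
# whole histogram mass falls into the first (0-degree) bin.
patch = np.tile(np.arange(16, dtype=float), (16, 1))
feature = orientation_histogram(patch)
```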
Conclusion
As of now, we are implementing the methods stated above. To use them more efficiently in real time in the future, IMU sensors could be used to obtain the trajectory more accurately. As stated above, running the mapping thread of PTAM on a ground station, such as a rover resting on Mars or the nearest space station, would also be much more effective.
Author: Vishnu Teja Yalakuntla