Robots require the ability to navigate their environment while performing their tasks. Since we want them to be autonomous, they need to know both their own location and the goal location in order to navigate without human intervention.
Autonomous navigation covers two main types of environment:

    • Indoor environments
    • Outdoor environments

In outdoor environments, we can use GPS data to locate the robot. Reading the GPS receiver is handled by our control board.
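As an illustration of how GPS data is typically consumed (this is not our actual control-board firmware), the sketch below extracts latitude and longitude from an NMEA GGA sentence, the standard format most GPS receivers stream over serial. The example sentence and the helper name nmeaToDegrees are assumptions for the sake of the demo.

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Convert an NMEA coordinate (ddmm.mmmm or dddmm.mmmm) to decimal degrees.
    double nmeaToDegrees(const std::string& value, const std::string& hemisphere) {
        double raw = std::stod(value);
        int degrees = static_cast<int>(raw / 100);   // integer degrees part
        double minutes = raw - degrees * 100;        // remaining minutes
        double result = degrees + minutes / 60.0;
        if (hemisphere == "S" || hemisphere == "W")  // south/west are negative
            result = -result;
        return result;
    }

    int main() {
        // Example GGA sentence (hypothetical fix, not real logged data).
        std::string sentence =
            "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47";

        // Split the sentence on commas.
        std::vector<std::string> fields;
        std::stringstream ss(sentence);
        std::string field;
        while (std::getline(ss, field, ','))
            fields.push_back(field);

        // Fields 2-5 hold latitude, N/S, longitude, E/W in a GGA sentence.
        double lat = nmeaToDegrees(fields[2], fields[3]);
        double lon = nmeaToDegrees(fields[4], fields[5]);

        std::cout << "Latitude:  " << lat << "\n"
                  << "Longitude: " << lon << "\n";
        return 0;
    }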

In indoor environments, where GPS is unavailable, we can use Visual SLAM to locate the robot. Our pipeline for this task is (a code sketch follows the list):

    • Capture a camera frame on the processing board. (We use an ODroid-U3.)
    • Extract key features from the frame.
    • Match the key features against the existing map.
    • Update the map and the robot's location based on the matched features.
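The sketch below illustrates the frame-capture, feature-extraction, and feature-matching steps of the list above using OpenCV's ORB detector and a brute-force matcher. PTAM and LSD_SLAM have their own tracking and mapping code, so this is only a minimal stand-in for the general idea; the camera index and the 500-feature budget are assumptions, and the map/pose update step is omitted.

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        // Open the camera attached to the processing board (index 0 is an assumption).
        cv::VideoCapture camera(0);
        if (!camera.isOpened()) {
            std::cerr << "Cannot open camera\n";
            return 1;
        }

        // ORB keypoint detector/descriptor; 500 features per frame is an assumed budget.
        cv::Ptr<cv::ORB> orb = cv::ORB::create(500);
        cv::BFMatcher matcher(cv::NORM_HAMMING);

        cv::Mat prevDescriptors;   // descriptors from the previous frame
        cv::Mat frame, gray;

        while (camera.read(frame)) {
            // Steps 1-2: capture a frame and extract key features.
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            std::vector<cv::KeyPoint> keypoints;
            cv::Mat descriptors;
            orb->detectAndCompute(gray, cv::noArray(), keypoints, descriptors);

            // Step 3: compare key features with those from the previous frame.
            if (!prevDescriptors.empty() && !descriptors.empty()) {
                std::vector<cv::DMatch> matches;
                matcher.match(descriptors, prevDescriptors, matches);
                std::cout << "Matched " << matches.size() << " features\n";
                // Step 4: in a full SLAM system the matches would feed pose
                // estimation and map updates; that part is omitted here.
            }

            prevDescriptors = descriptors.clone();
        }
        return 0;
    }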

Nowadays most systems build on open-source Visual SLAM algorithms such as PTAM or LSD_SLAM. We try to optimize these algorithms to achieve faster and more accurate results.