Tetras.AI's high-precision mapping algorithm is based on images collected by panoramic cameras, drones, and digital cameras, together with data from sensors such as GPS and Bluetooth. It uses video or pictures collected offline to extract visual features from a scene and match them across images. It then recovers the camera pose of each image and a sparse 3D point cloud of the scene to build a high-precision 3D map of the environment, including sparse landmarks, dense point clouds, semantics, and other information. This solution correctly handles occlusion and collision between virtual objects and the real scene to achieve high-quality AR effects.
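The feature-extraction-and-matching step described above can be sketched with standard tools. The snippet below is a minimal illustration using OpenCV's SIFT detector and a ratio-test matcher; it is an assumption for illustration only, not Tetras.AI's actual implementation or feature type.

```python
# Illustrative sketch of extracting visual features from two images and
# matching them -- the first stage of an offline mapping pipeline.
# SIFT is a stand-in assumption; the production feature may differ.
import cv2

def match_image_pair(path_a: str, path_b: str):
    """Extract local features from two images and return putative matches."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, desc_a = sift.detectAndCompute(img_a, None)
    kp_b, desc_b = sift.detectAndCompute(img_b, None)

    # Lowe's ratio test discards ambiguous correspondences, which keeps
    # the downstream data association robust.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]
    return kp_a, kp_b, good
```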
Nvidia, Huawei, Hygon, and Cambricon GPUs
1. Relies only on cameras, GPS, Bluetooth, and other common sensors, making it convenient and low-cost
2. Efficiently maps 20,000 square meters within 4 hours, with an error of less than 10 cm per 10,000 square meters
The 3D map of a scene is constructed using Structure from Motion (SfM) and multi-view stereo (MVS) techniques.
The distribution characteristics of image features and multi-sensor information are used to build robust data associations.
The pose (position and orientation) of the imaging device and the 3D structure of the environment are recovered from image matching relationships using multi-view geometry algorithms and nonlinear optimization, as sketched below.
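As a concrete instance of this multi-view geometry step, the following sketch estimates the relative pose of two calibrated views from their matched points via the essential matrix and triangulates a sparse point cloud. The camera intrinsics `K` and the function name are assumptions for illustration; a full SfM pipeline would repeat this over many views and refine everything with bundle adjustment.

```python
# Hedged two-view sketch: recover relative camera pose from matched
# points, then triangulate a sparse 3D point cloud. K (3x3 intrinsics)
# is assumed known; pts_a and pts_b are Nx2 float arrays of matched
# pixel coordinates from the matching stage above.
import cv2
import numpy as np

def two_view_reconstruction(pts_a, pts_b, K):
    # Estimate the essential matrix with RANSAC to reject outlier matches.
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # Decompose E into the second camera's rotation R and translation t.
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)

    # Projection matrices: first camera at the origin, second at (R, t).
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])

    # Triangulate homogeneous 4D points, then dehomogenize to Nx3.
    pts4d = cv2.triangulatePoints(P0, P1, pts_a.T, pts_b.T)
    pts3d = (pts4d[:3] / pts4d[3]).T
    return R, t, pts3d
```

In a complete pipeline, the poses and points produced this way are only an initialization; a nonlinear least-squares refinement (bundle adjustment) over reprojection error is what yields the centimeter-level accuracy claimed above.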