Based on a high-precision map of the scene, Tetras.AI's visual positioning algorithm primarily uses a single image frame captured by the terminal to recover the camera's position and orientation, with GPS and Bluetooth as auxiliary positioning signals. The terminal sends the user's image to the cloud, which matches features extracted from the image against features in the map and computes the global 6DoF position and orientation of the user's device. The terminal then receives the positioning result and related information from the cloud and couples them into the SLAM optimization objective, enabling long-duration, high-precision positioning and tracking in large-scale scenes.
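The coupling of a cloud positioning result into on-device tracking can be pictured as a drift correction between the local SLAM frame and the global map frame. The sketch below, in NumPy, is an illustrative simplification: the function names are hypothetical, and a hard rigid correction is used in place of the real system's approach of folding the cloud pose into the SLAM optimization objective.

```python
import numpy as np

def pose_correction(T_global_cloud, T_local_slam):
    """Drift correction aligning the local SLAM frame to the global map frame.

    Both inputs are 4x4 homogeneous camera poses for the SAME frame:
    one returned by the cloud visual positioning service (global map frame),
    one estimated by on-device SLAM (local frame). Hypothetical helper,
    not Tetras.AI's actual API.
    """
    return T_global_cloud @ np.linalg.inv(T_local_slam)

def to_global(T_correction, T_local):
    """Map any subsequent local SLAM pose into the global map frame."""
    return T_correction @ T_local
```

Applying `pose_correction` once lets every later on-device pose be expressed in the global map frame until drift accumulates and a new cloud fix is requested.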
Android 8 and above, iOS 10 and above, WeChat mini programs, etc.
Tetras.AI's visual positioning algorithm has been applied in many complex scenes, such as shopping malls, parking lots, exhibition halls, museums, airports, high-speed railway stations, and scenic spots. Its advantages are high speed, high concurrency, and high precision: positioning a single image takes about 50 ms, QPS can reach 90 on a single T4 GPU, and over 75% of positioning results have only centimeter-level errors.
Image-based positioning is realized with deep learning and multi-view geometry techniques.
Extracts and describes the global and local features of an image.
Uses neural networks and geometric consistency to achieve high-precision feature matching.
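The geometric-consistency step above can be illustrated with a classic RANSAC filter over putative matches: a simple transform is fitted to a minimal sample, and matches that disagree with it are rejected as outliers. The sketch below fits a 2D similarity transform (scale, rotation, translation); this is a generic, simplified stand-in for what production systems do (e.g. epipolar or PnP verification against the 3D map), and all names are hypothetical.

```python
import numpy as np

def ransac_similarity(src, dst, thresh=3.0, iters=200, seed=None):
    """Filter putative feature matches by geometric consistency.

    src, dst: (N, 2) arrays of matched keypoint coordinates in two images.
    Fits a 2D similarity transform with RANSAC (two correspondences form
    a minimal sample) and returns a boolean inlier mask. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    # Represent 2D points as complex numbers: a similarity transform is
    # then q = a * p + t, with a = scale * exp(i * rotation).
    p = src[:, 0] + 1j * src[:, 1]
    q = dst[:, 0] + 1j * dst[:, 1]
    best_mask = np.zeros(len(p), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(p), size=2, replace=False)
        if p[i] == p[j]:
            continue  # degenerate sample
        a = (q[j] - q[i]) / (p[j] - p[i])  # scale * e^{i*rotation}
        t = q[i] - a * p[i]                # translation
        residual = np.abs(a * p + t - q)   # reprojection error per match
        mask = residual < thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

Matches surviving the mask are then passed to pose estimation; the same consensus pattern applies when the fitted model is a fundamental matrix or a PnP pose instead of a planar similarity.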