TetrasMobile 3D Vision

Building on its 3D perception and image processing technology, Tetras.AI (慧鲤) provides customers across industries with the 3D vision algorithms needed for digital twins. By enhancing and fusing RGB and depth images, it reconstructs 1:1 3D digital models and detects faces and human bodies in real time with high precision. Its products and services include AI Depth image processing, 3D reconstruction (faces, objects, and spatial scenes), and face and body driving, delivering complete 3D solutions to customers in the smartphone, intelligent robot, smart vehicle, 3D printing, and interactive entertainment industries.

Product Introduction

Relying on our self-developed deep learning platform and a lightweight rendering engine, Tetras.AI's Avatar virtual being product generates virtual avatars in different styles from a single photo with one click. It captures facial expressions, body movements, and gestures through the mobile phone camera, and uses voice-driven and text-driven functions, along with other capabilities, to drive the avatar in real time, so that everyone can have a virtual avatar in the metaverse.

Product Advantages

1. Individual-specific quick imaging: Only one photo is required to quickly generate various virtual avatars;

2. My style, my rules: Precisely locates facial features and contours, accurately transfers users' expressions, and supports driving avatars in multiple styles;

3. Key point detection and tracking of limbs: Locates human body keypoints in real time and captures body movements;

4. Ultra-fast processing speeds: Real-time processing at 5 ms/frame on mainstream mobile phones, with a flexible architecture that is convenient to integrate, efficient, and fast (see the latency sketch after this list).
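
As a rough illustration of what per-frame, real-time keypoint processing looks like, the sketch below times an off-the-shelf pose-landmark model (MediaPipe Pose, used purely as a publicly available stand-in, not the Tetras.AI SDK) on live camera frames. The camera index and model settings are illustrative assumptions, and the measured latency depends entirely on the hardware and model used; it is not a measurement of the product above.

    import time
    import cv2
    import mediapipe as mp

    # Illustrative stand-in: MediaPipe Pose, not the Tetras.AI SDK.
    mp_pose = mp.solutions.pose
    cap = cv2.VideoCapture(0)  # default camera; the index is an assumption

    with mp_pose.Pose(model_complexity=0) as pose:  # lightest model, favors speed
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            t0 = time.perf_counter()
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            latency_ms = (time.perf_counter() - t0) * 1000.0
            print(f"{latency_ms:.1f} ms/frame, "
                  f"person detected: {results.pose_landmarks is not None}")

    cap.release()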

Key Technologies

Tetras.AI's general 3D reconstruction solution is based on 3D reconstruction capabilities built with multi-view geometry and deep learning. It combines multi-view images, videos, and depth information into high-quality 3D point clouds, dense meshes, and texture maps, creating visualized, editable, and drivable 3D models that digitize scenes, objects, and humans in 3D.
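
As a minimal sketch of the point-cloud-to-dense-mesh step in such a pipeline, assuming the multi-view fusion has already produced a point cloud, the snippet below uses the open-source Open3D library with Poisson surface reconstruction as a stand-in rather than the actual solution; the file names and parameters are placeholders.

    import numpy as np
    import open3d as o3d

    # "scan.ply" is a placeholder for a point cloud already fused from
    # multi-view images / depth frames.
    pcd = o3d.io.read_point_cloud("scan.ply")
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

    # Dense triangle mesh from the oriented point cloud.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)

    # Drop low-support vertices so sparse, noisy regions do not create phantom surfaces.
    dens = np.asarray(densities)
    mesh.remove_vertices_by_mask(dens < np.quantile(dens, 0.05))

    o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)

Texture mapping and the learning-based depth fusion described above are outside the scope of this sketch.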

  • 3D Virtual Human Reconstruction

    Supports vectorized modeling and texture mapping of the 3D model of a scene, automatically generating a high-level structured representation of the 3D scene with a small data volume, thereby providing an efficient visual model base.

  • Facial Expression Capture

    Captures the expressions, mouth shapes, head poses, and eye movements of a real person with a monocular RGB camera, accurately transfers the expressions to drive virtual humans, and renders realistic 3D facial expressions in real time (see the blendshape sketch after this list).

  • Body Motion Capture

    Captures the body movements of a real person with a monocular RGB camera, and accurately and faithfully reproduces the motion of each joint on the virtual human.

  • Hand Motion Capture

    Recognizes and captures a large variety of gestures and movements, including gestures in complex scenes such as finger hearts, rapid hand-waving, fist clenching, and two-hand occlusion.

  • STA

    Supports text and voice input to drive the virtual avatar in real time, generating high-quality, human-like expressions and movements that render naturally and smoothly.
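
To make the expression-transfer idea above concrete, below is a minimal sketch of a linear blendshape model: per-frame expression coefficients estimated from the RGB image weight a set of pre-built vertex offsets on the avatar's neutral mesh. The function name, array shapes, and toy data are illustrative assumptions, not the product's actual interface.

    import numpy as np

    def drive_avatar(neutral, deltas, weights):
        """Deform a neutral avatar mesh with captured expression weights.

        neutral : (V, 3) vertex positions of the avatar's neutral face
        deltas  : (K, V, 3) per-blendshape vertex offsets (e.g. jaw-open, smile)
        weights : (K,) expression coefficients estimated from the RGB frame
        """
        weights = np.clip(weights, 0.0, 1.0)                      # keep coefficients in [0, 1]
        return neutral + np.tensordot(weights, deltas, axes=1)    # weighted sum -> (V, 3)

    # Toy usage: 4 vertices, 2 blendshapes, a half-open jaw and a full smile.
    rng = np.random.default_rng(0)
    neutral = np.zeros((4, 3))
    deltas = rng.normal(size=(2, 4, 3)) * 0.01
    frame_weights = np.array([0.5, 1.0])          # would come from the capture stage
    print(drive_avatar(neutral, deltas, frame_weights).shape)     # (4, 3)

The same weighted-sum structure is what allows one set of captured coefficients to drive avatars of very different styles: only the offset meshes change, not the capture output.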

Application Scenarios
  • Custom personalized avatars
  • Virtual avatar video calls
  • Custom virtual avatar lock-screen wallpapers, special effects, and emojis
  • Virtual social interactions