Patent Document: WO 2022/017779 PCT/EP2021/068642
This patent describes a comprehensive system for creating and displaying maps in augmented reality environments. The invention covers the complete workflow from 3D scene reconstruction to user interface display.
The fundamental process for creating AR maps:
Claim 1 defines this process, which forms the foundation of the entire system.
Enhancement to the ground representation:
Instead of using the complex, detailed ground mesh from the 3D reconstruction, this claim replaces it with a simplified polygonal shape. This shape is created based on the intersection lines where detected wall planes meet the ground plane, resulting in cleaner, more geometric floor plans similar to architectural drawings.
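This construction can be sketched in a few lines, under the simplifying assumptions that the ground is the plane z = 0 and that walls are listed in order around the room (all function names and data here are illustrative, not taken from the patent):

```python
# Sketch: derive a simplified polygonal floor plan from wall/ground plane
# intersections (illustrative, not the patent's exact algorithm).
# Assumption: the ground is the plane z = 0, so a wall plane with
# normal n and offset d (n . p = d) meets it in the 2D line nx*x + ny*y = d.

def wall_to_floor_line(normal, d):
    """Project a wall plane onto the ground plane z = 0 as a line a*x + b*y = c."""
    nx, ny, _ = normal
    return (nx, ny, d)

def line_intersection(l1, l2):
    """Corner point where two floor lines meet (assumes non-parallel walls)."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def floor_polygon(walls):
    """Replace the ground mesh with the polygon whose corners are the
    intersections of consecutive wall lines (walls given in room order)."""
    lines = [wall_to_floor_line(n, d) for n, d in walls]
    return [line_intersection(lines[i], lines[(i + 1) % len(lines)])
            for i in range(len(lines))]

# Four axis-aligned walls of a 4 m x 3 m room:
walls = [((1, 0, 0), 0.0),   # x = 0
         ((0, 1, 0), 3.0),   # y = 3
         ((1, 0, 0), 4.0),   # x = 4
         ((0, 1, 0), 0.0)]   # y = 0
print(floor_polygon(walls))  # the four room corners
```

Real scenes need robustness that this sketch omits (near-parallel walls, walls detected out of order), but the core geometry is just pairwise line intersection.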
Methods for improving ground textures:
When the ground mesh is simplified (as in Claim 2), its original texture may be lost or degraded. This claim provides several methods to recreate or enhance the texture: inpainting the missing regions, synthesizing a new texture, applying a uniform color, or retrieving a matching texture from a database.
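Two of the simpler recovery strategies, a uniform-color fill and a naive inpainting pass, can be sketched on a tiny grayscale grid (the grid, values, and function names are illustrative assumptions, not the patent's algorithms):

```python
# Sketch: two simple texture-recovery options on a grayscale texture grid.
# None marks a lost texel (illustrative only).

def uniform_color_fill(tex):
    """'Uniform color' option: paint every lost texel with the mean of the known ones."""
    known = [v for row in tex for v in row if v is not None]
    mean = sum(known) / len(known)
    return [[mean if v is None else v for v in row] for row in tex]

def inpaint_once(tex):
    """One pass of naive inpainting: each lost texel takes the average of its
    known 4-neighbours (repeat until no hole remains for a full fill)."""
    h, w = len(tex), len(tex[0])
    out = [row[:] for row in tex]
    for y in range(h):
        for x in range(w):
            if tex[y][x] is None:
                nb = [tex[ny][nx]
                      for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                      if 0 <= ny < h and 0 <= nx < w and tex[ny][nx] is not None]
                if nb:
                    out[y][x] = sum(nb) / len(nb)
    return out

tex = [[10, 10, 10],
       [10, None, 10],
       [10, 10, 10]]
print(inpaint_once(tex)[1][1])  # the hole is filled from its neighbours
```

Production texture synthesis and inpainting are far more sophisticated; this only illustrates the shape of the problem the claim addresses.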
Camera configuration for map generation:
Uses an orthographic camera (which eliminates perspective distortion) to render the final map. The camera settings are determined by the boundaries of the ground mesh and the pixel sizes of the output image.
Optimal camera placement:
The orthographic camera is automatically centered based on the boundaries of the ground mesh. This ensures the AR scene is properly framed in the final map, providing optimal viewing of the entire space.
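The camera placement and sizing described above can be sketched as a single bounding-box computation, assuming the ground lies in a horizontal plane and a given pixel size in metres per pixel (the function name and return format are illustrative):

```python
import math

def ortho_camera_from_ground(ground_vertices, pixel_size):
    """Derive orthographic camera settings from the ground mesh's 2D bounding
    box: the camera is centred over the mesh, and the view extents and output
    image resolution follow from the bounds and the pixel size (m/pixel)."""
    xs = [v[0] for v in ground_vertices]
    ys = [v[1] for v in ground_vertices]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    center = ((min_x + max_x) / 2, (min_y + max_y) / 2)
    width, height = max_x - min_x, max_y - min_y
    # Round to whole pixels; a robust version would pad instead of round.
    resolution = (round(width / pixel_size), round(height / pixel_size))
    return {"center": center, "extent": (width, height), "resolution": resolution}

cam = ortho_camera_from_ground([(0, 0), (4, 0), (4, 3), (0, 3)], pixel_size=0.01)
print(cam)  # centred at (2.0, 1.5), 400 x 300 pixels at 1 cm per pixel
```

Because the projection is orthographic, every pixel covers the same ground area, which is exactly the property that makes the rendering usable as a map.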
Quality improvement step:
Removes isolated elements from the 3D mesh: these are typically noise, small disconnected pieces, or reconstruction artifacts that do not represent meaningful parts of the scene. This cleaning process creates a more accurate and visually appealing result.
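One standard way to implement this kind of cleanup is a connected-components pass over the mesh faces, dropping components below a size threshold (a generic sketch using union-find, not the patent's specific procedure; the threshold is an illustrative assumption):

```python
from collections import Counter

def remove_isolated(faces, min_faces=10):
    """Group triangle faces into connected components (faces sharing a vertex
    are connected) and keep only components with at least min_faces faces."""
    parent = {}

    def find(a):
        while parent.setdefault(a, a) != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for f in faces:
        for v in f[1:]:
            union(f[0], v)

    sizes = Counter(find(f[0]) for f in faces)  # faces per component
    return [f for f in faces if sizes[find(f[0])] >= min_faces]

# A connected patch of 3 faces plus a single stray triangle:
faces = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (10, 11, 12)]
print(remove_isolated(faces, min_faces=2))  # the stray (10, 11, 12) is dropped
```

The threshold trades off noise removal against accidentally deleting small but real objects; tuning it per scene scale is the usual practice.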
Spatial filtering:
Removes elements that fall outside the detected wall and ground planes. This effectively crops the scene to focus on the main room boundaries, eliminating extraneous objects or reconstruction errors that extend beyond the intended space.
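Such a crop amounts to a half-space test against each detected wall plane. A minimal sketch, assuming wall normals point out of the room and a small tolerance for reconstruction noise (names, tolerance, and data are illustrative, not from the patent):

```python
def inside_room(p, wall_planes, eps=0.05):
    """A point survives the filter if it lies on the inner side of every
    detected wall plane (n . p <= d, normals pointing away from the room),
    within a small tolerance eps for reconstruction noise."""
    return all(n[0] * p[0] + n[1] * p[1] + n[2] * p[2] <= d + eps
               for n, d in wall_planes)

def crop_to_room(vertices, wall_planes):
    """Drop every vertex that falls outside the detected room boundary."""
    return [p for p in vertices if inside_room(p, wall_planes)]

# A 4 m x 3 m room; a stray reconstruction fragment at x = 6 lies beyond
# the x = 4 wall and is removed:
walls = [((-1, 0, 0), 0.0),  # x >= 0
         ((0, -1, 0), 0.0),  # y >= 0
         ((1, 0, 0), 4.0),   # x <= 4
         ((0, 1, 0), 3.0)]   # y <= 3
pts = [(2.0, 1.5, 0.0), (6.0, 1.5, 0.0)]
print(crop_to_room(pts, walls))  # only the in-room point remains
```

A mesh-level version would drop whole faces once any (or all) of their vertices fail the test; the plane arithmetic is the same.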
Creating a layered AR interface:
This describes how to display the AR map in a comprehensive, multi-layered interface:
Required inputs: the two pictures generated according to claim 1, the second of which is the ground picture.
Display layers (from bottom to top): the live camera feed, the AR content, the map overlay, and an indicator of the user's position.
This creates a comprehensive interface showing the user's position on a map while simultaneously displaying the live camera view with AR elements.
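The bottom-to-top layering can be illustrated with a single-pixel alpha blend, the standard painter's-algorithm composite (a generic sketch; the layer values and opacities are illustrative, not from the patent):

```python
def composite(layers):
    """Painter's-algorithm blend, bottom to top: each layer is (value, alpha)
    for one grayscale pixel; alpha = 0 hides a layer, alpha = 1 is opaque."""
    out = 0.0
    for value, alpha in layers:
        out = alpha * value + (1 - alpha) * out
    return out

# Bottom-to-top, matching the interface described above; an opaque position
# indicator would simply be a final fully opaque layer where it is drawn.
pixel = composite([(0.2, 1.0),   # live camera feed (opaque base)
                   (0.8, 0.5),   # AR content at 50 % opacity
                   (0.6, 0.5)])  # map/ground overlay at 50 % opacity
print(round(pixel, 2))  # 0.55
```

Real compositors do this per pixel and per color channel, usually on the GPU, but the layer ordering and the blend rule are exactly this.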
User control functionality:
The map size can be dynamically adjusted based on user input, allowing the user to enlarge or shrink the map on screen.
Flexibility in ground visualization:
The ground layer (the second picture from claim 1) can be displayed fully opaque, rendered partially transparent, or hidden entirely.
This provides flexibility in balancing floor plan visibility against AR content visibility.
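The resizing of claim 9 and the ground-visibility options of claim 10 both reduce to a small piece of view state. A minimal sketch (the class, mode names, clamp range, and alpha values are all illustrative assumptions, not from the patent):

```python
class MapView:
    """Illustrative map-view state: user-driven resizing plus ground-layer
    visibility expressed as an opacity value."""

    GROUND_ALPHA = {"opaque": 1.0, "semi_transparent": 0.5, "hidden": 0.0}

    def __init__(self):
        self.scale = 1.0
        self.ground_mode = "semi_transparent"

    def resize(self, factor, lo=0.25, hi=4.0):
        """Apply a user zoom gesture, clamped to a sane range."""
        self.scale = max(lo, min(hi, self.scale * factor))
        return self.scale

    def set_ground_mode(self, mode):
        """Switch the ground layer between opaque, semi-transparent, hidden."""
        self.ground_mode = mode
        return self.GROUND_ALPHA[mode]

view = MapView()
print(view.resize(8.0))                # clamped to the maximum, 4.0
print(view.set_ground_mode("hidden"))  # alpha 0.0 hides the floor plan
```

Mapping "hidden" to alpha 0 rather than removing the layer keeps the compositing pipeline unchanged whatever the user selects.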
Hardware implementation of the core method:
Describes the physical apparatus (device/system) that performs the method from claim 1: a processor is configured to execute the same three main steps as the method claim.
Hardware version of claim 2:
The processor can replace the complex ground mesh with simplified polygonal shapes based on wall-floor intersections, providing the same ground simplification capabilities in hardware form.
Hardware version of claim 3:
The processor can enhance textures using any of the methods described in claim 3: inpainting, synthesis, uniform colors, or database lookup.
Hardware implementation of orthographic rendering:
The apparatus uses orthographic camera rendering with parameters based on ground mesh boundaries and pixel sizes, as described in claim 4.
Hardware implementation of camera centering:
The apparatus positions the orthographic camera at the center of the AR scene based on ground mesh boundaries, as described in claim 5.
Hardware implementation of isolated element removal:
The apparatus includes functionality to clean the mesh by removing isolated elements, as described in claim 6.
Hardware implementation of boundary filtering:
The apparatus can remove elements outside detected wall planes and ground planes, as described in claim 7.
Hardware implementation of the display method:
Describes the physical apparatus that implements the display method from claim 8. The processor is configured to compose the same layered display: live camera feed, AR content, map overlay, and user position indicator.
Hardware support for user interaction:
The display apparatus supports user-controlled map resizing, providing the hardware capability for the functionality described in claim 9.
Hardware implementation of ground display options:
The display apparatus can make the ground layer transparent or completely hidden, providing hardware support for the functionality described in claim 10.
Complete system architecture:
Describes a full augmented reality system built from three essential components.
The map generated according to claim 1 is fully integrated into this system, providing spatial context and navigation capabilities.
Complete display functionality:
The complete AR system displays the layered map interface described in claim 8, providing users with the full multi-layered AR experience including live camera feed, AR content, map overlay, and user position indication.
Software implementation:
Describes computer program code that implements the methods from claims 1-10. This covers the software aspect of the invention, protecting the algorithmic implementation regardless of the specific hardware it runs on.
Non-transitory computer readable medium:
Protects the invention when stored on physical storage media.
This ensures the software implementation is protected regardless of how it's distributed or stored.
Taken together, the claims comprehensively protect a sophisticated augmented reality mapping system that transforms 3D reconstructed scenes into useful, interactive maps. The protection spans from basic algorithmic methods through complete system implementations, ensuring broad intellectual property coverage for this AR technology.