Content is user-generated and unverified.

Patent Claims Explanation: Augmented Reality Mapping Method

Patent Document: WO 2022/017779 (PCT/EP2021/068642)

Overview

This patent describes a comprehensive system for creating and displaying maps in augmented reality environments. The invention covers the complete workflow from 3D scene reconstruction to user interface display.


PART I: CORE MAPPING METHOD (Claims 1-7)

Claim 1 - Main Method

The fundamental process for creating AR maps:

  • Reconstructing a 3D textured mesh: Takes captured data (from cameras/sensors) and builds a complete 3D model with textures
  • Splitting the mesh: Separates the 3D model into two distinct parts:
    • First mesh: Everything except the ground (walls, furniture, objects)
    • Second mesh: Just the ground/floor plane
  • Generating the map: Renders each mesh into an image (the first and second pictures) and creates a top-down view by layering the non-ground picture over the ground picture

This forms the foundation of the entire system.
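The splitting step in Claim 1 can be sketched in a few lines. This is an illustrative simplification, not the patent's actual method: it assumes the ground is a horizontal plane at a known height and classifies each triangle by whether all of its vertices lie near that plane. Function and constant names are made up.

```python
# Hypothetical sketch of Claim 1's mesh split: faces whose vertices all lie
# near the detected ground height go into the ground mesh; everything else
# (walls, furniture, objects) goes into the non-ground mesh.

GROUND_Z = 0.0      # assumed ground height (flat, horizontal plane)
TOLERANCE = 0.05    # how close a vertex must be to count as "on the ground"

def split_mesh(vertices, faces, ground_z=GROUND_Z, tol=TOLERANCE):
    """Split a triangle mesh into (non_ground_faces, ground_faces)."""
    ground, non_ground = [], []
    for face in faces:
        if all(abs(vertices[i][2] - ground_z) <= tol for i in face):
            ground.append(face)
        else:
            non_ground.append(face)
    return non_ground, ground

# Tiny example: one floor triangle at z=0, one wall triangle rising to z=2.5
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 2.5)]
faces = [(0, 1, 2), (0, 1, 3)]
first_mesh, second_mesh = split_mesh(verts, faces)
print(first_mesh)   # → [(0, 1, 3)]  (non-ground)
print(second_mesh)  # → [(0, 1, 2)]  (ground)
```

A real implementation would fit the ground plane from the reconstruction rather than assume z = 0, but the classification logic is the same.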

Claim 2 - Ground Simplification

Enhancement to the ground representation:

Instead of using the complex, detailed ground mesh from the 3D reconstruction, this claim replaces it with a simplified polygonal shape. This shape is created based on the intersection lines where detected wall planes meet the ground plane, resulting in cleaner, more geometric floor plans similar to architectural drawings.
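A minimal sketch of the idea behind this claim, under assumptions not in the patent: each vertical wall meets the ground plane in a 2D line, and intersecting consecutive wall lines yields the corners of the simplified floor polygon. The (a, b, c) representation for the line ax + by = c and the room dimensions are illustrative.

```python
# Hedged sketch of Claim 2: wall/ground intersection lines are intersected
# pairwise to produce the corners of a clean polygonal floor shape.

def line_intersection(l1, l2):
    """Corner point where two wall/ground intersection lines meet."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # walls are parallel; no corner
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)

# Four axis-aligned walls of a 4 m x 3 m room: x=0, y=0, x=4, y=3
walls = [(1, 0, 0), (0, 1, 0), (1, 0, 4), (0, 1, 3)]
corners = [line_intersection(walls[i], walls[(i + 1) % 4]) for i in range(4)]
print(corners)  # → [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
```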

Claim 3 - Texture Enhancement

Methods for improving ground textures:

When the ground mesh is simplified (as in Claim 2), its original texture may be lost or degraded. This claim provides several methods to recreate or enhance the texture:

  • Image inpainting: Algorithmically filling in missing texture areas
  • Texture synthesis: Generating new textures that match the original style
  • Uniform color filling: Using an average color from the original ground texture
  • Database lookup: Finding similar textures from a pre-existing collection based on similarity matching
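Of the four options, uniform color filling is simple enough to sketch directly. The snippet below averages the RGB values of the original ground texture; representing the texture as a flat list of pixels is an illustrative assumption.

```python
# Sketch of Claim 3's uniform-color option: fill the simplified ground with
# the average color of the original ground texture.

def uniform_fill_color(pixels):
    """Average RGB over a list of (r, g, b) pixels from the original texture."""
    n = len(pixels)
    return tuple(round(sum(p[c] for p in pixels) / n) for c in range(3))

texture = [(200, 180, 160), (210, 190, 150), (190, 170, 170)]
print(uniform_fill_color(texture))  # → (200, 180, 160)
```

The inpainting and texture-synthesis options would typically rely on established image-processing algorithms rather than a few lines like this.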

Claim 4 - Rendering Method

Camera configuration for map generation:

Uses an orthographic camera (which eliminates perspective distortion) to render the final map. The camera settings are precisely determined by:

  • Boundaries of the ground mesh (to ensure proper framing)
  • Pixel resolution of both the first and second pictures (to maintain quality)

Claim 5 - Camera Positioning

Optimal camera placement:

The orthographic camera is automatically centered based on the boundaries of the ground mesh. This ensures the AR scene is properly framed in the final map, providing optimal viewing of the entire space.
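Claims 4 and 5 can be illustrated together by deriving an orthographic camera from the ground mesh's bounding box: the camera is centered on the box (Claim 5), and the box extents plus a chosen pixel density fix the projection and output resolution (Claim 4). The dictionary field names and the pixels_per_meter parameter are assumptions for illustration, not terms from the patent.

```python
# Illustrative sketch of Claims 4-5: an orthographic top-down camera sized
# and centered from the ground mesh's 2D bounding box.

def ortho_camera_from_ground(ground_vertices, pixels_per_meter=100):
    xs = [v[0] for v in ground_vertices]
    ys = [v[1] for v in ground_vertices]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    return {
        "center": ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2),
        "half_extents": (width / 2, height / 2),  # no perspective scaling
        "image_size": (round(width * pixels_per_meter),
                       round(height * pixels_per_meter)),
    }

cam = ortho_camera_from_ground([(0, 0, 0), (4, 0, 0), (4, 3, 0), (0, 3, 0)])
print(cam["center"])      # → (2.0, 1.5)
print(cam["image_size"])  # → (400, 300)
```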

Claim 6 - Mesh Cleaning (Isolated Elements)

Quality improvement step:

Removes isolated elements from the 3D mesh - these are typically noise, small disconnected pieces, or reconstruction artifacts that don't represent meaningful parts of the scene. This cleaning process creates a more accurate and visually appealing result.
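One common way to implement this kind of cleaning (not necessarily the patent's) is a connected-component pass over the faces: faces sharing a vertex are connected, and components below a size threshold are dropped as noise. The threshold value here is an illustrative assumption.

```python
# Sketch of Claim 6's cleaning step: remove small disconnected face groups.

from collections import defaultdict

def remove_isolated(faces, min_faces=2):
    # Map each vertex to the faces that use it
    vert_to_faces = defaultdict(list)
    for fi, face in enumerate(faces):
        for v in face:
            vert_to_faces[v].append(fi)
    seen, kept = set(), []
    for start in range(len(faces)):
        if start in seen:
            continue
        component, stack = [], [start]
        seen.add(start)
        while stack:             # flood-fill one connected component
            fi = stack.pop()
            component.append(fi)
            for v in faces[fi]:
                for nb in vert_to_faces[v]:
                    if nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
        if len(component) >= min_faces:
            kept.extend(faces[i] for i in component)
    return kept

# Two connected faces plus one floating artifact triangle
faces = [(0, 1, 2), (1, 2, 3), (10, 11, 12)]
print(remove_isolated(faces))  # → [(0, 1, 2), (1, 2, 3)]
```

In practice the threshold might be expressed in surface area rather than face count, but the principle is the same.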

Claim 7 - Boundary Cleaning

Spatial filtering:

Removes elements that fall outside the detected wall and ground planes. This effectively crops the scene to focus on the main room boundaries, eliminating extraneous objects or reconstruction errors that extend beyond the intended space.
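A hedged 2D sketch of this filtering step: model each wall as a half-plane whose normal points into the room, and keep only points that lie on the inside of every wall. The half-plane representation and the room dimensions are illustrative assumptions.

```python
# Sketch of Claim 7's boundary cleaning: drop points outside the wall planes.

def inside_room(point, walls):
    """walls: list of (nx, ny, d) where nx*x + ny*y + d >= 0 means 'inside'."""
    x, y = point
    return all(nx * x + ny * y + d >= 0 for nx, ny, d in walls)

# 4 m x 3 m room: x >= 0, y >= 0, x <= 4, y <= 3
room = [(1, 0, 0), (0, 1, 0), (-1, 0, 4), (0, -1, 3)]
points = [(2, 1), (5, 1), (2, -0.5)]
print([p for p in points if inside_room(p, room)])  # → [(2, 1)]
```

The same test applied per mesh vertex (in 3D, with the ground plane included) crops the reconstruction to the room boundaries.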


PART II: DISPLAY PREPARATION METHOD (Claims 8-10)

Claim 8 - Display Method

Creating a layered AR interface:

This describes how to display the AR map in a comprehensive, multi-layered interface:

Required inputs:

  • AR scene data (virtual objects and information)
  • The map created from claims 1-7
  • User's location information (positioning data)
  • Real-time camera capture of the environment

Display layers (from bottom to top):

  1. Base layer: Real environment capture (live camera feed)
  2. AR layer: AR scene data overlaid on the camera view
  3. Map layer: The generated map overlaid on top
  4. User layer: User location indicator displayed on top of everything

The combined result shows the user's position on the map while the live camera view with its AR elements remains visible underneath.
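The bottom-to-top layering can be sketched with standard "over" alpha compositing (a conventional graphics technique; the patent does not specify a blending model). Single RGBA pixels stand in for whole layers here, and all sample values are made up.

```python
# Hedged sketch of Claim 8's layer order, composited with Porter-Duff "over".

def over(top, bottom):
    """Blend one premultiplied-free RGBA pixel onto another ('over')."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    out_a = ta + ba * (1 - ta)
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda t, b: (t * ta + b * ba * (1 - ta)) / out_a
    return (blend(tr, br), blend(tg, bg), blend(tb, bb), out_a)

# Bottom-to-top, in the order the claim lists the layers
camera = (0.2, 0.2, 0.2, 1.0)   # base layer: live capture
ar     = (0.0, 0.8, 0.0, 0.5)   # AR layer: scene data
map_px = (0.9, 0.9, 0.9, 0.3)   # map layer: generated map
user   = (1.0, 0.0, 0.0, 0.0)   # user layer (transparent at this pixel)

result = camera
for layer in (ar, map_px, user):
    result = over(layer, result)
print(result)
```

A real renderer composites full images per frame, but the stacking order is exactly the one the claim describes.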

Claim 9 - Interactive Map Sizing

User control functionality:

The map size can be dynamically adjusted based on user input, allowing for:

  • Zoom in/out functionality
  • Manual resizing for better usability
  • Adaptive sizing based on user preferences or context

Claim 10 - Ground Display Options

Flexibility in ground visualization:

The ground layer (second picture from claim 1) can be displayed with various options:

  • Transparency levels: Making the ground semi-see-through
  • Complete hiding: Turning off ground display entirely

This provides flexibility in balancing floor plan visibility against AR content visibility.


PART III: HARDWARE APPARATUS (Claims 11-17)

Claim 11 - Basic Apparatus

Hardware implementation of the core method:

Describes the physical apparatus (device/system) that performs the method from claim 1. It specifies that a processor is configured to execute the same three main steps:

  • Reconstruct 3D textured mesh from captured data
  • Split into ground and non-ground components
  • Generate the final map through rendering

Claim 12 - Apparatus with Ground Simplification

Hardware version of claim 2:

The processor can replace the complex ground mesh with simplified polygonal shapes based on wall-floor intersections, providing the same ground simplification capabilities in hardware form.

Claim 13 - Apparatus with Texture Enhancement

Hardware version of claim 3:

The processor can enhance textures using any of the methods described in claim 3: inpainting, synthesis, uniform colors, or database lookup.

Claim 14 - Apparatus Rendering

Hardware implementation of orthographic rendering:

The apparatus uses orthographic camera rendering with parameters based on ground mesh boundaries and pixel sizes, as described in claim 4.

Claim 15 - Apparatus Camera Positioning

Hardware implementation of camera centering:

The apparatus positions the orthographic camera at the center of the AR scene based on ground mesh boundaries, as described in claim 5.

Claim 16 - Apparatus Mesh Cleaning (Isolated Elements)

Hardware implementation of isolated element removal:

The apparatus includes functionality to clean the mesh by removing isolated elements, as described in claim 6.

Claim 17 - Apparatus Boundary Cleaning

Hardware implementation of boundary filtering:

The apparatus can remove elements outside detected wall planes and ground planes, as described in claim 7.


PART IV: DISPLAY APPARATUS (Claims 18-20)

Claim 18 - Display Hardware

Hardware implementation of the display method:

Describes the physical apparatus that implements the display method from claim 8. The processor is configured to:

  • Obtain all required data (AR scene, map, user location, camera feed)
  • Prepare the layered display with proper overlay sequence
  • Manage the real-time rendering of all display layers

Claim 19 - Interactive Display Hardware

Hardware support for user interaction:

The display apparatus supports user-controlled map resizing, providing the hardware capability for the functionality described in claim 9.

Claim 20 - Hardware Ground Display Control

Hardware implementation of ground display options:

The display apparatus can make the ground layer transparent or completely hidden, providing hardware support for the functionality described in claim 10.


PART V: COMPLETE AR SYSTEM (Claims 21-22)

Claim 21 - Integrated AR System

Complete system architecture:

Describes a full augmented reality system comprising three essential components:

  • AR scene: The virtual environment containing digital objects and information
  • AR controller: The processing unit that manages computations and coordination
  • AR terminal: The user interface and display device for interaction

The map generated according to claim 1 is fully integrated into this system, providing spatial context and navigation capabilities.

Claim 22 - System Display Integration

Complete display functionality:

The complete AR system displays the layered map interface described in claim 8, providing users with the full multi-layered AR experience including live camera feed, AR content, map overlay, and user position indication.


PART VI: SOFTWARE IMPLEMENTATION (Claims 23-24)

Claim 23 - Computer Program

Software implementation:

Describes computer program code that implements the methods from claims 1-10. This covers the software aspect of the invention, protecting the algorithmic implementation regardless of the specific hardware it runs on.

Claim 24 - Storage Medium

Non-transitory computer readable medium:

Protects the invention when stored on physical media such as:

  • Hard drives
  • Memory cards
  • Optical discs
  • Flash memory
  • Any other non-transitory storage medium

This ensures the software implementation is protected regardless of how it's distributed or stored.


TECHNICAL ADVANTAGES

This patent system provides several key advantages:

  1. Comprehensive Coverage: Protects the invention across methods, apparatus, systems, and software implementations
  2. Scalable Architecture: Can be implemented in various hardware configurations
  3. User-Friendly Interface: Provides intuitive layered display with interactive controls
  4. Quality Optimization: Includes multiple cleaning and enhancement steps for better results
  5. Flexible Ground Representation: Allows for both detailed and simplified ground visualization
  6. Real-time Integration: Supports live camera feeds with AR overlay capabilities

CONCLUSION

This patent comprehensively protects a sophisticated augmented reality mapping system that transforms 3D reconstructed scenes into useful, interactive maps. The protection spans from basic algorithmic methods through complete system implementations, ensuring broad intellectual property coverage for this innovative AR technology.


Document prepared from Patent WO 2022/017779 PCT/EP2021/068642
