AI LiVoT Autonomous Wheelchair - Technical Consultation Document

Research Team: Manila Science High School - Grade 12 Avogadro
Consultation Date: [To be filled]
Consultant: [Your Name]
Document Status: Draft for Review


Executive Summary

The students propose developing "AI LiVoT," an autonomous wheelchair integrating YOLOv8 object detection, LiDAR navigation, voice control, and health monitoring for visually impaired elderly users. While technically ambitious, the project requires significant refinement in system architecture, safety protocols, and scope definition.


Project Overview

Target Users

  • Visually impaired elderly individuals
  • Focus on enhanced mobility, safety, and independence

Core Features

  • Autonomous Navigation: LiDAR-based SLAM with pre-mapped environments
  • Object Detection: YOLOv8 for real-time obstacle identification
  • Voice Control: Speech recognition for navigation commands
  • Health Monitoring: Temperature, heart rate, and SpO2 tracking
  • Safety Systems: Multiple sensor layers for collision avoidance

Technical Architecture

Hardware Components

| Component | Purpose | Specifications |
| --- | --- | --- |
| Raspberry Pi 5 (8GB) | Main AI processing | YOLOv8, voice recognition, navigation |
| Arduino Mega 2560 | Motor control & sensors | Low-level hardware interface |
| ESP32 NodeMCU | Health monitoring | Independent health system |
| RPLIDAR | Navigation mapping | 2D SLAM, obstacle detection |
| USB Camera | Object detection | YOLOv8 input |
| HC-SR04 | Close-range safety | Ultrasonic obstacle detection |
| IMU (MPU6500) | Orientation tracking | Navigation stability |
| Motors | Movement | 2x DC with encoders + L298N driver |

Software Stack

  • ROS (Robot Operating System): Navigation framework
  • YOLOv8: Object detection and classification
  • SLAM: Simultaneous Localization and Mapping
  • Blynk: Health monitoring mobile app
  • pyttsx3: Text-to-speech engine

Critical Technical Issues

🔴 High Priority Concerns

1. System Integration Complexity

Issue: Three separate microcontrollers with unclear communication protocols

  • Raspberry Pi 5 ↔ Arduino Mega ↔ ESP32
  • No defined data exchange protocols
  • Potential for communication failures

Communication Protocol Options:

| Protocol | Speed | Distance | Complexity | Multi-Device | Voltage Issues |
| --- | --- | --- | --- | --- | --- |
| UART/Serial | Medium | Long | Low | Point-to-point | Yes (3.3V ↔ 5V) |
| I2C | Low-Medium | Short | Medium | Multi-master/slave | Yes (3.3V ↔ 5V) |
| SPI | High | Short | Medium | Master-slave only | Yes (3.3V ↔ 5V) |

Critical Voltage Level Issues:

  • Raspberry Pi 5: 3.3V logic
  • Arduino Mega: 5V logic
  • ESP32: 3.3V logic
  • Risk: Connecting 5V Arduino directly to 3.3V devices can damage them
  • Solution Required: Logic level converters or voltage dividers

Recommended Communication Architecture:

Option 1: USB Serial (Simplest)

Raspberry Pi 5 ←→ USB ←→ Arduino Mega
                ↓
            ESP32 (I2C to Arduino)

Option 2: I2C Bus (Most Flexible)

Raspberry Pi 5 (Master) ←→ I2C Bus ←→ Arduino Mega (Slave)
                                   ←→ ESP32 (Slave)

Requires logic level converters between Pi/ESP32 (3.3V) and Arduino (5V)

Option 3: Simplified Single Controller

Raspberry Pi 5 only ←→ Direct sensor connections
                   ←→ ESP32 (via I2C/SPI for health monitoring)
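Whichever link the students choose, raw byte streams between controllers need framing and an integrity check so a single dropped byte does not silently desynchronize commands. A minimal sketch of one possible packet format (start byte, 1-byte length, payload, XOR checksum); the format itself is an illustrative assumption, not part of the students' design:

```python
# Hypothetical frame layout: [0xAA][len][payload...][XOR checksum]
# Illustrative only -- not taken from the students' proposal.

START = 0xAA

def build_packet(payload: bytes) -> bytes:
    """Frame a payload with a start byte, length, and XOR checksum."""
    if len(payload) > 255:
        raise ValueError("payload too long for 1-byte length field")
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([START, len(payload)]) + payload + bytes([checksum])

def parse_packet(frame: bytes) -> bytes:
    """Validate a frame and return its payload; raise on any corruption."""
    if len(frame) < 3 or frame[0] != START:
        raise ValueError("bad start byte")
    if len(frame) != frame[1] + 3:
        raise ValueError("length mismatch")
    payload, checksum = frame[2:-1], frame[-1]
    xor = 0
    for b in payload:
        xor ^= b
    if xor != checksum:
        raise ValueError("checksum mismatch")
    return payload

# Round-trip example: a motor command survives framing unchanged.
frame = build_packet(b"MOTOR,L=120,R=115")
assert parse_packet(frame) == b"MOTOR,L=120,R=115"
```

The same framing works over UART, USB serial, or I2C payloads, so the choice of physical link can be deferred without changing application code.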

Questions:

  • How will real-time sensor fusion work across multiple controllers?
  • What happens if communication between controllers fails?
  • How will you synchronize data timestamps across systems?
  • Have you considered the 3.3V/5V logic level compatibility issues?
  • Why not use a single controller architecture to eliminate communication complexity?

2. Real-time Performance Requirements

Issue: Multiple resource-intensive processes on single Pi 5

  • YOLOv8 inference
  • SLAM processing
  • Voice recognition
  • Navigation planning

Questions:

  • Can the Pi 5 handle all processes while meeting the 30 ms obstacle-detection requirement?
  • What's your process prioritization strategy?
  • How will you handle computational bottlenecks?
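One way to make the 30 ms requirement testable is to instrument the perception loop and count budget overruns before committing to the architecture. A minimal sketch (the 30 ms budget comes from the students' figure; the loop body is a stand-in for the real YOLOv8 inference call):

```python
import time

BUDGET_S = 0.030  # 30 ms obstacle-detection budget from the proposal

def run_loop(detect, cycles: int) -> int:
    """Run the perception loop `cycles` times; return how many iterations overran."""
    overruns = 0
    for _ in range(cycles):
        start = time.perf_counter()
        detect()                       # stand-in for YOLOv8 inference + post-processing
        elapsed = time.perf_counter() - start
        if elapsed > BUDGET_S:
            overruns += 1              # real system: log, degrade resolution, or slow down
    return overruns

# A fast stand-in detector never overruns; a deliberately slow one always does.
assert run_loop(lambda: None, 10) == 0
assert run_loop(lambda: time.sleep(0.05), 3) == 3
```

Running this with the actual model on the Pi 5 would answer the first question above with data rather than estimates.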

3. Power Management Strategy

Issue: No unified power distribution plan

  • Multiple power sources mentioned
  • High-power components (Pi 5, LiDAR, motors)
  • No battery life estimates

Questions:

  • What's the expected battery runtime?
  • How will you handle graceful shutdown on low battery?
  • What happens during power failures while navigating?
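A first-pass runtime estimate needs only a per-component current budget and the battery capacity. The draws below are placeholder figures that the students should replace with datasheet or measured values:

```python
# Placeholder current draws in mA -- illustrative, replace with measured values.
loads_ma = {
    "raspberry_pi_5": 2500,
    "rplidar": 450,
    "arduino_mega": 100,
    "esp32": 160,
    "motors_avg": 1500,   # highly duty-cycle dependent
}

def runtime_hours(battery_mah: int, derate: float = 0.8) -> float:
    """Estimate runtime; `derate` accounts for usable capacity and converter losses."""
    total_ma = sum(loads_ma.values())
    return battery_mah * derate / total_ma

est = runtime_hours(20000)  # e.g. a hypothetical 20 Ah pack
print(f"~{est:.1f} h at {sum(loads_ma.values())} mA total draw")
```

Even this crude model forces the questions above into numbers: a runtime target plus a measured total draw determines the minimum battery capacity.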

🟡 Medium Priority Concerns

4. Safety-Critical Gaps

Issues:

  • No emergency stop mechanism
  • Limited to "pre-scanned enclosed areas"
  • No collision avoidance behavior beyond detection

Questions:

  • How will the wheelchair handle unexpected obstacles?
  • What's the emergency stop protocol?
  • How do you prevent navigation into dangerous areas (stairs, traffic)?
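A sketch of the kind of layered stop logic the gaps above call for: any one independent trigger (hardware button, stale sensor data, obstacle inside a stop radius) forces a full stop. The thresholds and argument names are illustrative assumptions:

```python
import time

STOP_DISTANCE_M = 0.4    # illustrative: halt if an obstacle is closer than this
SENSOR_TIMEOUT_S = 0.2   # illustrative: halt if range data goes stale

def should_stop(estop_pressed: bool, min_range_m: float,
                last_sensor_ts: float, now: float) -> bool:
    """Return True if any independent safety condition demands a full stop."""
    if estop_pressed:                            # hardware button always wins
        return True
    if now - last_sensor_ts > SENSOR_TIMEOUT_S:  # stale data: assume the worst
        return True
    if min_range_m < STOP_DISTANCE_M:            # obstacle inside stop radius
        return True
    return False

now = time.monotonic()
assert should_stop(True, 5.0, now, now)          # button pressed
assert should_stop(False, 0.2, now, now)         # obstacle too close
assert should_stop(False, 5.0, now - 1.0, now)   # sensors stale
assert not should_stop(False, 5.0, now, now)     # all clear
```

The key design point is that each trigger works without the others, so a failure in one sensor path cannot disable the whole safety layer.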

5. User Interface Reliability

Issues:

  • Voice control in noisy environments
  • No backup input methods
  • Health sensor requires continuous contact

Questions:

  • What's the backup if voice recognition fails?
  • How do you handle ambient noise interference?
  • What if the user can't maintain sensor contact?
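A backup-input policy can be as simple as rejecting low-confidence recognitions and escalating to a physical control after repeated failures. The threshold and retry count below are illustrative assumptions, not values from the proposal:

```python
CONFIDENCE_MIN = 0.7   # illustrative rejection threshold
MAX_RETRIES = 2        # after this many rejections, fall back to buttons/joystick

def handle_command(result, retries_so_far: int):
    """result: (command, confidence) from the speech recognizer.
    Returns ('execute', cmd), ('retry', None), or ('fallback', None)."""
    cmd, conf = result
    if conf >= CONFIDENCE_MIN:
        return ("execute", cmd)
    if retries_so_far < MAX_RETRIES:
        return ("retry", None)        # re-prompt the user via TTS
    return ("fallback", None)         # switch to the physical backup input

assert handle_command(("go forward", 0.92), 0) == ("execute", "go forward")
assert handle_command(("go forward", 0.40), 0) == ("retry", None)
assert handle_command(("go forward", 0.40), 2) == ("fallback", None)
```

This makes the "what if voice fails" question answerable: the system degrades to a defined backup rather than doing nothing.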

Research Methodology Concerns

Testing Limitations

  1. Sample Size: Only 30 Grade 12 students (not representative of target elderly users)
  2. Validation Equipment: "Shopee-purchased" medical devices may lack clinical accuracy
  3. Environment: No specified testing conditions for navigation
  4. User Testing: No actual visually impaired participants

Statistical Reliability

  • Small sample sizes reduce statistical power and widen confidence intervals
  • Homogeneous test group doesn't represent target users
  • No longitudinal testing planned
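The sample-size concern can be quantified: for any proportion measured on n = 30, the 95% margin of error is large. A quick check using the normal approximation:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion, normal approximation."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case p = 0.5 with the proposed n = 30:
moe = margin_of_error(0.5, 30)
print(f"±{moe:.1%}")  # roughly ±18 percentage points
```

With uncertainty that wide, success-rate claims from the planned test group would be difficult to defend, quite apart from the group not representing elderly visually impaired users.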

Key Consultation Questions

Technical Implementation

  1. Communication Architecture
    • What protocol will you use for inter-controller communication?
    • How will you handle data synchronization and timing?
    • What's your fault tolerance strategy?
  2. Performance Optimization
    • How will you prioritize competing processes on the Pi 5?
    • What's your strategy for meeting real-time requirements?
    • How will you optimize YOLOv8 for edge deployment?
  3. System Integration
    • How will sensor data be fused in real-time?
    • What's your calibration process for multiple sensors?
    • How will you handle sensor failures?

Safety & Reliability

  1. Failure Modes
    • What happens if LiDAR fails during navigation?
    • How do you handle software crashes?
    • What's your emergency protocol?
  2. Environmental Limitations
    • What lighting conditions can YOLOv8 handle?
    • How does the system cope with dynamic environments?
    • What's your strategy for unmapped areas?

Practical Implementation

  1. User Training
    • How will users learn to operate the system safely?
    • What's the learning curve for voice commands?
    • How do you handle user errors?
  2. Maintenance
    • How will software updates be deployed?
    • What's the maintenance schedule?
    • How do you handle hardware failures?

Recommendations

For Prototype Development:

  • Simplify Architecture: CRITICAL - Reduce to single controller (Raspberry Pi 5 only) or maximum two with proper level conversion
  • Define Communication Protocol: Choose UART, I2C, or SPI with proper voltage level conversion
  • Implement Logic Level Converters: Use bidirectional level shifters for 3.3V ↔ 5V communication
  • Define System Boundaries: Specify exact operating environments
  • Create Safety Protocol: Document comprehensive safety measures
  • Power Management Plan: Calculate consumption and battery requirements

Development Phase

  • Incremental Development: Start with basic navigation before AI features
  • Comprehensive Logging: Implement debugging and safety analysis
  • Controlled Testing: Begin in highly controlled environments
  • Sensor Calibration: Develop robust sensor fusion algorithms
  • Emergency Protocols: Implement multiple safety stop mechanisms

Testing & Validation

  • Realistic User Testing: Include elderly and visually impaired participants
  • Clinical Validation: Use certified medical devices for comparison
  • Environmental Testing: Test in various lighting and noise conditions
  • Failure Testing: Deliberately trigger failure modes
  • Long-term Reliability: Extended operation testing

Regulatory Considerations

  • Safety Standards: Research medical device regulations
  • Liability Issues: Address legal implications of autonomous mobility
  • Certification Requirements: Understand required approvals
  • Insurance Implications: Consider coverage requirements

Budget Analysis

Total Estimated Cost: ₱36,070

Cost Concerns:

  • High-end components (Pi 5, LiDAR) represent significant investment
  • No budget for testing equipment or safety certifications
  • Missing costs for housing, mounting hardware, and safety systems
  • No allocation for development tools or debugging equipment

Alternative Navigation Frameworks & Technologies

Beyond SLAM: Alternative Approaches

1. Landmark-Based Navigation

QR Code Systems:

  • Low-cost localization using Quick Response (QR) code landmarks that contain absolute position information
  • Fusion of inertial data (IMU) with QR code recognition reduces error propagation from dead reckoning
  • Advantages: Simple implementation, low cost, works in GPS-denied environments
  • Disadvantages: Requires pre-installation of QR codes, limited to prepared environments
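The fusion idea is simple to sketch: dead reckoning accumulates drift between landmarks, and each QR sighting replaces the estimate with the code's known absolute position. The map and identifiers below are illustrative:

```python
# Known absolute positions (x, y in metres) encoded per QR landmark.
# Illustrative map -- a real deployment would survey these positions.
QR_MAP = {"hall_a_01": (0.0, 0.0), "hall_a_02": (5.0, 0.0)}

class Localizer:
    def __init__(self, x: float = 0.0, y: float = 0.0):
        self.x, self.y = x, y

    def dead_reckon(self, dx: float, dy: float) -> None:
        """Integrate relative motion (from encoders/IMU); drift accumulates."""
        self.x += dx
        self.y += dy

    def qr_fix(self, code_id: str) -> None:
        """Snap to the landmark's surveyed position, discarding accumulated drift."""
        self.x, self.y = QR_MAP[code_id]

loc = Localizer()
loc.dead_reckon(5.2, 0.3)   # drifted estimate after ~5 m of travel
loc.qr_fix("hall_a_02")     # sighting the landmark removes the drift
assert (loc.x, loc.y) == (5.0, 0.0)
```

A hard reset like this is the crudest possible fusion; a weighted blend of the two estimates would be the natural next refinement.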

Natural Landmark Recognition:

  • Computer vision-based landmark detection using wide-lens cameras for safe wheelchair navigation
  • Multi-label image classification to detect important landmarks (open paths, humans, staircases, doorways, obstacles)
  • Uses Vision Transformers (ViT) for scene understanding

2. Visual Odometry (VO) & Visual-Inertial Odometry (VIO)

Visual Odometry:

  • Incremental online estimation of vehicle position by analyzing image sequences from cameras
  • More accurate than GPS, INS, and wheel odometry in many scenarios

Visual-Inertial Odometry:

  • Combines visual odometry with IMU measurements to correct for errors from rapid movement
  • Commonly used for navigation when GPS is absent or unreliable (indoors, under bridges)

3. Dead Reckoning with Sensor Fusion

  • Process of calculating current position using previously determined position plus estimates of speed, heading, and elapsed time
  • Can be enhanced by fusing multiple sensor data (IMU, encoders, cameras) to reduce cumulative errors
  • Limitation: Errors compound over time, making it unreliable for longer journeys without periodic absolute position fixes
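The core update is only a few lines: integrate the wheel-encoder displacement along the current heading. A sketch for a differential-drive base; the wheel base and encoder scale are illustrative values:

```python
import math

TICKS_PER_M = 2000.0   # illustrative encoder scale (ticks per metre)
WHEEL_BASE_M = 0.55    # illustrative distance between the drive wheels

def step(pose, left_ticks: int, right_ticks: int):
    """One dead-reckoning update for a differential drive.
    pose = (x, y, heading_rad); returns the new pose."""
    x, y, th = pose
    dl = left_ticks / TICKS_PER_M
    dr = right_ticks / TICKS_PER_M
    d = (dl + dr) / 2.0                 # distance travelled by the chassis centre
    th += (dr - dl) / WHEEL_BASE_M      # heading change from the wheel difference
    return (x + d * math.cos(th), y + d * math.sin(th), th)

# Driving straight: both wheels advance equally, heading stays unchanged.
pose = step((0.0, 0.0, 0.0), 2000, 2000)
assert abs(pose[0] - 1.0) < 1e-9 and abs(pose[2]) < 1e-9
```

Every update adds a small error from slippage and quantization, which is exactly why the compounding-drift limitation above matters.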

4. Computer Vision-Based Navigation

Deep Learning Approaches:

  • Object detection, localization and tracking using deep learning for semantic mapping
  • Uses SORT (Simple Online and Realtime Tracking) algorithm combined with object detection for environmental understanding

Semantic Mapping:

  • Combines position and depth data to create 3D semantic maps of the environment
  • Enables understanding of object types and their spatial relationships

5. Hybrid Navigation Systems

Multi-Modal Sensor Fusion:

  • Combines multiple positioning technologies: vision, infrared, ultrasonic, RFID, Bluetooth beacons, Wi-Fi, barcodes
  • Integration of 3D cameras (Intel RealSense) with LiDAR for comprehensive environmental sensing

Comparison with Student's SLAM Approach

| Approach | Complexity | Cost | Accuracy | Environment Flexibility | Pre-Setup Required |
| --- | --- | --- | --- | --- | --- |
| SLAM (Student Choice) | High | High | High | Medium | Minimal |
| QR Code Landmarks | Low | Low | Medium | Low | High |
| Visual Odometry | Medium | Medium | Medium-High | High | Minimal |
| Dead Reckoning + Fusion | Medium | Low | Medium | Medium | Minimal |
| Computer Vision + Deep Learning | High | Medium | High | High | Low-Medium |

Recommendations for Students

For Prototype Development:

  1. Start Simpler: Consider QR code landmarks for initial proof-of-concept
  2. Incremental Complexity: Begin with dead reckoning + basic sensor fusion
  3. Hybrid Approach: Combine simple landmark navigation with basic obstacle avoidance

Technical Justification Questions:

  • Why choose SLAM over simpler landmark-based navigation for the prototype phase?
  • How will you handle the computational complexity of real-time SLAM processing?
  • Have you considered starting with visual odometry as an intermediate step?

Alternative Approaches to Consider

Simplified Architecture

  1. Single Controller Approach: Use only Raspberry Pi 5 with sensor expansion
  2. Modular Design: Separate navigation and health monitoring systems
  3. Commercial Integration: Utilize existing wheelchair platforms

Navigation Framework Alternatives

  1. QR Code + Dead Reckoning: Simpler, more reliable for controlled environments
  2. Visual Odometry: Good balance of complexity and performance
  3. Landmark-Based: Reduced computational requirements, easier debugging

Reduced Scope Options

  1. Indoor Navigation Only: Focus on controlled environments
  2. Assisted vs. Autonomous: Human oversight required
  3. Proof of Concept: Demonstrate core technologies only

Action Items for Next Meeting

Student Preparation

  • Create detailed system architecture diagram
  • Research inter-controller communication protocols
  • Calculate detailed power consumption analysis
  • Define specific testing environments and procedures
  • Research medical device safety standards

Documentation Needed

  • Safety protocol document
  • Power management specifications
  • Communication protocol design
  • Testing methodology details
  • Risk assessment matrix

Meeting Notes Section

Discussion Points

[Space for meeting notes]

Decisions Made

[Space for recording decisions]

Next Steps

[Space for action items]

Follow-up Schedule

[Space for future meeting planning]


Document Version: 1.0
Last Updated: [Date]
Next Review: [Date]
