AI-Based Detection of CEJ, AC, and Apex Landmarks for Automated Bone Loss Assessment
Problem Statement
Periodontal bone loss assessment is a critical component of dental diagnosis, requiring precise identification of anatomical landmarks to calculate the distance between the cementoenamel junction (CEJ) and alveolar bone crest. Currently, this process relies on manual identification by clinicians, which is subjective, time-consuming, and prone to inter-observer variability. The objective is to develop an automated AI system capable of accurately detecting three key dental landmarks:
- CEJ (Cementoenamel Junction) - Mesial & Distal: The anatomical boundary where enamel meets cementum, serving as the primary reference point for bone loss measurement
- AC (Apical Constriction) - Mesial & Distal: The narrowest portion of the root canal, critical for endodontic working length determination
- Apex: The root tip, essential for comprehensive periodontal and endodontic assessment
Clinical Significance
Bone loss calculation formula: Distance = |Alveolar Bone Crest position - CEJ position|
- Normal: 1-2mm from CEJ to bone crest
- Pathological: >2mm indicates periodontal bone loss
- Clinical Impact: Accurate landmark detection enables automated staging of periodontal disease and treatment planning
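The rule above can be sketched in a few lines of Python. The function name, the `cej_mm`/`crest_mm` parameters, and the 2 mm default threshold are illustrative choices (positions are assumed to be measured in millimetres along the root axis), not an API from this project:

```python
def assess_bone_loss(cej_mm: float, crest_mm: float, threshold_mm: float = 2.0) -> dict:
    """Apply the CEJ-to-crest rule described above.

    `cej_mm` and `crest_mm` are illustrative names for landmark positions
    in millimetres along the root axis.
    """
    distance = abs(crest_mm - cej_mm)  # CEJ-to-alveolar-crest distance
    return {
        "distance_mm": round(distance, 2),
        "pathological": distance > threshold_mm,  # >2 mm indicates bone loss
    }

print(assess_bone_loss(cej_mm=0.0, crest_mm=3.5))
# {'distance_mm': 3.5, 'pathological': True}
```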
Dataset Overview
Master Dataset Composition
Location: s3://codentist-general/datasets/master
Structure:
master/
├── images/ # Dental radiographic images
├── annotations.json # COCO-format annotations
└── samples/ # Annotated sample images
Target Landmark Distribution
| Landmark | Count | Percentage | Clinical Importance |
|---|---|---|---|
| apex | 369,392 | 15.03% | Root tip identification |
| cej mesial | 93,461 | 3.80% | Primary bone loss reference |
| cej distal | 92,727 | 3.77% | Primary bone loss reference |
| ac distal | 90,527 | 3.68% | Endodontic working length |
| ac mesial | 89,265 | 3.63% | Endodontic working length |
Total Target Annotations: 735,372 (29.91% of all annotations)
Annotation Characteristics
- Representation: Bounding boxes (not keypoints)
- Rationale: Bounding-box detection achieved higher accuracy than direct coordinate regression
- Conversion: The bounding-box centroid yields the landmark coordinate
- Distribution: AC and CEJ occur twice per tooth (mesial and distal); the apex occurs 1-3 times per tooth depending on root count
- Coverage: Target landmarks are present in most images, since they are part of normal tooth anatomy
- No Train/Val Split: Splitting is left to the user so that class stratification can be preserved when training on subsets
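The bbox-to-centroid conversion is a one-liner over COCO's `[x, y, w, h]` box format. The annotation dict below is a minimal COCO-style stand-in for illustration, not an entry from the master dataset:

```python
def bbox_to_landmark(bbox):
    """Convert a COCO [x, y, w, h] bounding box to its centroid,
    used here as the landmark coordinate."""
    x, y, w, h = bbox
    return (x + w / 2.0, y + h / 2.0)

# Minimal COCO-style annotation (illustrative, not from the master dataset)
annotation = {"category_name": "cej mesial", "bbox": [120.0, 80.0, 10.0, 6.0]}
cx, cy = bbox_to_landmark(annotation["bbox"])
print(cx, cy)  # 125.0 83.0
```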
Literature Review: State-of-the-Art Performance
1. Panoramic Bone Loss Detection with CEJ Segmentation
Study: Jundaeng et al. (2025) - "Advanced AI-assisted panoramic radiograph analysis"
Architecture:
- Tooth Segmentation: YOLOv8
- CEJ + Bone Level Detection: Specialized CNN with segmentation head
Performance:
- Overall Accuracy: 98%
- Sensitivity: 100%
- Specificity: 98%
- F1 Score: 0.90
Methodology:
- Dataset: 2,000 panoramic radiographs (Thai population)
- Two-stage approach: Individual tooth detection → CEJ/bone level segmentation
- Image Enhancement: Preprocessing with contrast optimization
- Validation: Clinical evaluation by periodontal specialists
Key Innovation: Combined approach linking tooth detection with landmark-specific segmentation
2. Multi-Landmark Ensemble for Periapical Analysis
Study: Wu et al. (2023) - "Automatic recognition of teeth and periodontal bone loss measurement"
Architecture:
- Detection: YOLOv5 for tooth localization
- Classification: VGG-16 (16 weight layers)
- Segmentation: U-Net for precise boundary delineation
- Integration: Ensemble voting system
Performance:
- Overall Accuracy: 90%
- Tooth Position Detection: 88.8%
- Periodontal Bone Level: 92.61%
- Radiographic Bone Loss: 97.0%
Methodology:
- Dataset: 8,000 periapical radiographs, 27,964 teeth
- Annotation: VGG Image Annotator (VIA) with expert validation
- Training: 5 senior clinicians with periodontal expertise
- Validation: Kappa coefficient showing substantial agreement with experts
Key Innovation: End-to-end pipeline from detection to clinical measurement
3. CBCT 3D Landmark Detection
Study: CBCT-SAM (2024) - "A novel AI model for detecting periapical lesion on CBCT"
Architecture:
- Foundation Model: SAM-Med2D (Segment Anything Medical)
- Enhancement: Progressive Prediction Refinement (PPR) module
- Integration: Lightweight U-Net for dental-specific features
Performance:
- Segmentation Accuracy: 95%+
- Processing Speed: Real-time inference
- 3D Consistency: Superior to 2D methods
Methodology:
- Dataset: 185 CBCT scans with confirmed landmarks
- Training: 10-hour expert validation protocol
- Innovation: Transfer learning from large medical foundation model
Key Innovation: First application of foundation models to dental landmark detection
4. Apical Constriction Detection Baseline
Study: Saghiri et al. (2012) - "The reliability of artificial neural network in locating minor apical foramen"
Architecture:
- Model: Artificial Neural Network with Perceptron
- Input Features: Anatomical measurements and radiographic characteristics
Performance:
- AI Accuracy: 96%
- Human Expert Accuracy: 76%
- Improvement: 20 percentage points above endodontists
Methodology:
- Dataset: 50 single-rooted extracted teeth
- Validation: Stereomicroscope measurements as ground truth
- Significance: The only foundational study for AC detection
Research Gap: No modern deep learning validation of AC detection
5. Commercial FDA-Cleared System
Study: Denti.AI (2025) - "Automatic Detection of Radiographic Alveolar Bone Loss"
Architecture:
- Model: Convolutional Neural Network (proprietary)
- Integration: Practice management system compatibility
Performance:
- Periapical Sensitivity: 76%
- Periapical Specificity: 86%
- Periapical Accuracy: 81%
- Bitewing Accuracy: 76%
Methodology:
- Validation: Board-certified specialists reference standard
- Clinical Testing: Multi-source radiograph validation
- FDA Status: Cleared for clinical use
Key Innovation: First commercially deployed bone loss detection system
Performance Analysis & Research Gaps
Current Performance Benchmarks
| Method | Modality | Best Accuracy | Architecture | Key Limitation |
|---|---|---|---|---|
| CEJ + Bone Segmentation | Panoramic | 98% | YOLOv8 + CNN | Single modality |
| Ensemble Approach | Periapical | 90% | YOLOv5 + VGG-16 + U-Net | Complex pipeline |
| 3D CBCT | CBCT | 95% | SAM-Med2D + U-Net | High radiation/cost |
| Commercial System | Multi-modal | 81% | CNN | Moderate accuracy |
Critical Research Gaps
- AC Detection: Only one validation study (2012) - massive research opportunity
- Multi-modal Integration: No systems validated across panoramic + periapical + bitewing
- Real-time Performance: Limited clinical deployment validation
- Population Diversity: Most studies single-institution, limited geographic diversity
Technical Approach & Architecture Recommendations
Recommended Architecture Design
Stage 1: Multi-Modal Detection
```python
detection_pipeline = {
    "backbone": "YOLOv8",
    "input_modalities": ["panoramic", "periapical", "bitewing"],
    "target_classes": ["cej_mesial", "cej_distal", "ac_mesial", "ac_distal", "apex"],
    "output_format": "bounding_boxes",
}
```
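A sketch of how Stage 1 outputs might be post-processed into landmark points, assuming detections arrive as `(class_name, confidence, [x, y, w, h])` tuples; the real YOLOv8 result object differs, and the mock detections and 0.25 confidence floor are illustrative:

```python
# Hypothetical post-processing of Stage 1 detections: keep the highest-
# confidence box per landmark class and convert each box to a point.
TARGET_CLASSES = ["cej_mesial", "cej_distal", "ac_mesial", "ac_distal", "apex"]

def detections_to_landmarks(detections, min_conf=0.25):
    """`detections`: list of (class_name, confidence, [x, y, w, h])."""
    best = {}
    for cls, conf, (x, y, w, h) in detections:
        if cls in TARGET_CLASSES and conf >= min_conf:
            if cls not in best or conf > best[cls][0]:
                best[cls] = (conf, (x + w / 2, y + h / 2))
    return {cls: point for cls, (conf, point) in best.items()}

mock = [("apex", 0.91, [50, 200, 8, 8]), ("apex", 0.40, [52, 198, 8, 8]),
        ("cej_mesial", 0.80, [40, 60, 10, 6])]
print(detections_to_landmarks(mock))
# {'apex': (54.0, 204.0), 'cej_mesial': (45.0, 63.0)}
```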
Stage 2: Landmark Refinement
```python
refinement_network = {
    "architecture": "CNN_segmentation_head",
    "input": "YOLOv8_detections",
    "enhancement": ["median_filtering", "bilateral_filtering", "CLAHE"],
    "output": "precise_landmark_coordinates",
}
```
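To make the enhancement step concrete, here is a global histogram equalization in NumPy as a simplified stand-in for the CLAHE stage; a production pipeline would instead use `cv2.createCLAHE` together with median and bilateral filtering:

```python
import numpy as np

def equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization on an 8-bit grayscale image, a
    simplified stand-in for CLAHE (which equalizes locally with clipping)."""
    hist = np.bincount(img.ravel(), minlength=256)  # intensity histogram
    cdf = hist.cumsum().astype(np.float64)          # cumulative distribution
    cdf = (cdf - cdf.min()) * 255 / max(cdf.max() - cdf.min(), 1)
    return cdf[img].astype(np.uint8)                # remap intensities

img = np.tile(np.arange(100, 140, dtype=np.uint8), (32, 1))  # low-contrast strip
out = equalize(img)
print(int(img.max() - img.min()), int(out.max() - out.min()))  # 39 249
```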
Performance Targets
- Individual Landmark Detection: >90% accuracy (based on apex success)
- CEJ Detection: >95% accuracy (based on panoramic bone loss success)
- AC Detection: >90% accuracy (exceed baseline study)
- Multi-landmark System: >85% combined accuracy
- Clinical Validation: Match/exceed 98% bone loss calculation benchmark
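One way to operationalize the landmark-accuracy targets above is a within-tolerance hit rate: a prediction counts if it lands within a fixed pixel radius of the ground-truth point. The 10-pixel tolerance and the `(image_id, landmark_name)` key scheme are assumptions for illustration, not a published evaluation criterion:

```python
import math

def landmark_accuracy(preds, gts, tol_px=10.0):
    """Fraction of ground-truth landmarks whose prediction falls within
    tol_px pixels. Both dicts map (image_id, landmark_name) -> (x, y)."""
    hits = sum(
        1 for key, gt_point in gts.items()
        if key in preds and math.dist(preds[key], gt_point) <= tol_px
    )
    return hits / max(len(gts), 1)

gts = {("img1", "apex"): (54, 204), ("img1", "cej_mesial"): (45, 63)}
preds = {("img1", "apex"): (56, 206), ("img1", "cej_mesial"): (80, 90)}
print(landmark_accuracy(preds, gts))  # 0.5
```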
Training Strategy & Modality Selection
Evidence-Based Modality Recommendations:
Based on literature performance analysis, different landmarks show optimal detection rates on specific imaging modalities:
Recommended Training Approach:
```python
modality_strategy = {
    "CEJ_detection": {
        "primary": "panoramic + periapical",
        "evidence": "98% accuracy achieved on panoramic (Jundaeng et al.)",
        "rationale": "CEJ clearly visible across full arch",
    },
    "AC_detection": {
        "primary": "periapical + bitewing",
        "evidence": "96% baseline accuracy (Saghiri et al.)",
        "rationale": "Higher resolution needed for canal anatomy",
    },
    "apex_detection": {
        "primary": "periapical",
        "evidence": "97% F1 score on periapical (YoCNET)",
        "rationale": "Best visualization of root tips",
    },
}
```
Multi-Modal vs Single-Modal Training:
Option 1: Unified Multi-Modal Dataset
- Pros: Single model handles all radiograph types, real-world clinical flexibility
- Cons: Model must learn modality-specific features, potential performance compromise
- Recommended for: Clinical deployment requiring universal compatibility
Option 2: Modality-Specific Models
- Pros: Optimized performance per modality, follows literature success patterns
- Cons: Multiple models to maintain, modality detection required
- Recommended for: Maximum accuracy scenarios
Hybrid Recommendation:
Train on combined dataset but with modality-aware architecture:
- Shared backbone (YOLOv8) for universal feature extraction
- Modality-specific detection heads for landmark refinement
- Training includes modality classification as auxiliary task
This approach leverages the full dataset while respecting the evidence that different landmarks perform optimally on different modalities.
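The hybrid design can be sketched structurally as a shared extractor feeding per-modality heads. The stub functions below stand in for the YOLOv8 backbone and the modality-specific detection heads and are purely illustrative:

```python
# Structural sketch of the hybrid design: one shared feature extractor,
# per-modality heads. The stubs stand in for real network components.
def shared_backbone(image):
    return {"features": image}          # stand-in for backbone features

def make_head(modality):
    def head(features):
        return f"{modality}-landmarks"  # stand-in for landmark outputs
    return head

HEADS = {m: make_head(m) for m in ["panoramic", "periapical", "bitewing"]}

def predict(image, modality):
    features = shared_backbone(image)   # universal feature extraction
    if modality not in HEADS:
        raise ValueError(f"unknown modality: {modality}")
    return HEADS[modality](features)    # modality-specific refinement

print(predict("xray.png", "periapical"))  # periapical-landmarks
```

In a real implementation the auxiliary modality-classification task would share the backbone features as well, so that the extractor is pushed to encode modality cues.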
Expected Outcomes & Clinical Impact
Immediate Benefits
- Diagnostic Consistency: Eliminate inter-observer variability in bone loss measurement
- Efficiency Gains: Reduce assessment time from minutes to seconds
- Early Detection: Automated screening for subclinical bone loss
- Standardization: Consistent measurements across clinical settings
Long-term Implications
- Automated Staging: Real-time periodontal disease classification
- Treatment Planning: Data-driven therapy recommendations
- Longitudinal Monitoring: Precise tracking of disease progression
- Teledentistry: Remote periodontal assessment capabilities
Success Metrics
- Technical: >90% accuracy across all three landmarks
- Clinical: Non-inferiority to expert periodontist assessment
- Workflow: <30 seconds processing time per radiograph
- Deployment: Successful integration in clinical practice
This project addresses a critical gap in dental AI by focusing on the foundational landmarks required for automated periodontal diagnosis, with the potential to transform clinical practice through precise, consistent, and rapid bone loss assessment.