Leveraging YOLOv8 and EfficientNet-B3 to automatically detect buildings and classify post-disaster damage severity.
The increasing frequency of natural disasters due to climate change necessitates automated systems for rapid damage assessment to support disaster response efforts. This research develops and evaluates a two-stage deep learning pipeline that combines object detection for building localization with classification models for damage severity prediction.
A sophisticated two-stage architecture combining state-of-the-art object detection with image classification.
Fast and accurate building localization using YOLOv8s, outperforming Faster R-CNN and FCOS on the xBD dataset.
Transfer learning with progressive fine-tuning achieves 87.8% test accuracy on damage classification.
Trained and validated on the xBD benchmark, one of the largest satellite imagery datasets for disaster assessment.
Detailed methodology, experiments, ablation studies, and evaluation metrics.
Open Report PDF

Our two-stage pipeline processes satellite imagery through building detection and damage classification for comprehensive damage assessment.
Upload a post-disaster satellite image in common formats (JPEG, PNG, TIFF).
YOLOv8 identifies and localizes all buildings with bounding boxes.
EfficientNet-B3 classifies each detected building into a damage severity level.
Receive an annotated image with color-coded damage levels and a detailed report.
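The four steps above can be sketched as a small two-stage loop: a detector proposes building boxes, each crop is classified, and the results are collected for annotation. In the real system the `detect` callable would wrap YOLOv8 (e.g. `ultralytics.YOLO`) and `classify` would wrap the fine-tuned EfficientNet-B3; both are passed in here as placeholders so the pipeline logic stands alone. The class names are the assumed four xBD severity levels.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np

# Assumed damage taxonomy (xBD's four severity levels).
DAMAGE_CLASSES = ["no-damage", "minor-damage", "major-damage", "destroyed"]


@dataclass
class BuildingAssessment:
    box: Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixels
    damage: str


def assess_image(
    image: np.ndarray,
    detect: Callable[[np.ndarray], List[Tuple[int, int, int, int]]],
    classify: Callable[[np.ndarray], int],
) -> List[BuildingAssessment]:
    """Stage 1: detect building boxes; stage 2: classify each cropped building."""
    assessments = []
    for x1, y1, x2, y2 in detect(image):
        crop = image[y1:y2, x1:x2]           # per-building crop for the classifier
        label = DAMAGE_CLASSES[classify(crop)]
        assessments.append(BuildingAssessment((x1, y1, x2, y2), label))
    return assessments
```

Keeping detection and classification behind simple callables mirrors the two-stage design: either model can be swapped or retrained independently without touching the pipeline glue.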
Upload a satellite image to see building damage assessment in action. The model will detect buildings and classify damage severity.
Or try a sample image: