Conference paper
Proceedings of the 14th International Workshop on Structural Health Monitoring, 2023
APA
Eltouny, K. A., Sajedi, S., & Liang, X. (2023). High-Resolution Vision Transformers for Pixel-Level Identification of Structural Components and Damage. Proceedings of the 14th International Workshop on Structural Health Monitoring.
Chicago/Turabian
Eltouny, Kareem A., S. Sajedi, and Xiao Liang. “High-Resolution Vision Transformers for Pixel-Level Identification of Structural Components and Damage.” Proceedings of the 14th International Workshop on Structural Health Monitoring (2023).
MLA
Eltouny, Kareem A., et al. “High-Resolution Vision Transformers for Pixel-Level Identification of Structural Components and Damage.” Proceedings of the 14th International Workshop on Structural Health Monitoring, 2023.
BibTeX
@inproceedings{eltouny2023hrvit,
  title = {High-Resolution Vision Transformers for Pixel-Level Identification of Structural Components and Damage},
  year = {2023},
  booktitle = {Proceedings of the 14th International Workshop on Structural Health Monitoring},
  author = {Eltouny, Kareem A. and Sajedi, S. and Liang, Xiao}
}
Visual inspection is the predominant method for evaluating the state of civil structures, but recent developments in unmanned aerial vehicles (UAVs) and artificial intelligence have increased the speed, safety, and reliability of the inspection process. However, the massive amounts of high-resolution images collected during inspections can slow down investigation efforts, and while extensive studies have been dedicated to deep learning models for damage segmentation, processing high-resolution visual data poses major computational difficulties. Traditionally, images are either uniformly downsampled or partitioned to cope with computational demands; in the process, the input risks losing either local fine details, such as thin cracks, or global contextual information. In this study, we develop a semantic segmentation network based on vision transformers and Laplacian pyramid scaling networks for efficiently parsing high-resolution visual inspection images. Inspired by super-resolution architectures, our vision transformer model learns to resize high-resolution images and masks so as to retain both valuable local features and global semantics without sacrificing computational efficiency. The proposed framework is evaluated through comprehensive experiments on a dataset of bridge inspection report images, using multiple metrics for pixel-wise material detection.
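The Laplacian pyramid idea underlying the scaling networks can be illustrated independently of the paper's architecture. The sketch below is a generic NumPy decomposition (not the authors' implementation): each pyramid level stores exactly the fine detail lost by one downsampling step, while the coarsest level keeps the global context, so the original image is recoverable without loss. The `downsample`/`upsample` helpers are simplified stand-ins for the Gaussian filtering used in practice.

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling: a simple stand-in for Gaussian blur + decimation
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour upsampling back to twice the resolution
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """Split an image into per-level fine detail plus a coarse context level."""
    pyramid, current = [], img
    for _ in range(levels):
        coarse = downsample(current)
        # The Laplacian level holds exactly the detail lost by downsampling
        pyramid.append(current - upsample(coarse))
        current = coarse
    pyramid.append(current)  # coarsest level: global context
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition; no information is lost."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        current = upsample(current) + detail
    return current

img = np.arange(64, dtype=float).reshape(8, 8)
pyr = laplacian_pyramid(img, levels=2)
assert np.allclose(reconstruct(pyr), img)  # exact reconstruction
```

This losslessness is what makes the pyramid attractive for learned resizing: a network can compress the coarse level aggressively while keeping thin-crack-scale detail in the fine levels.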
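For pixel-wise evaluation of segmentation masks, intersection-over-union (IoU) per class is a standard metric; the abstract does not name its exact metrics, so the following is a generic illustration, not the paper's evaluation code.

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    # Intersection-over-union for each class label in a segmentation mask
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        # NaN marks a class absent from both prediction and ground truth
        ious.append(inter / union if union else float("nan"))
    return ious

pred   = np.array([[0, 0, 1], [1, 2, 2]])
target = np.array([[0, 1, 1], [1, 2, 2]])
ious = per_class_iou(pred, target, 3)  # one score per class in [0, 1]
```

Averaging the per-class scores (ignoring NaNs) gives the mean IoU commonly reported for material and damage segmentation benchmarks.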