Residual SuperPoint underwater coral reef image registration method based on adaptive equalization sample
Abstract: Coral reef ecosystems are the most biologically diverse marine ecosystems on Earth and form the basis for coral reef research and conservation; underwater monitoring is an important means of obtaining coral reef data. For underwater scenes characterized by spectral mixing and high structural complexity, this paper proposes a residual SuperPoint underwater coral reef image registration method based on adaptive equalization samples. To address the partial loss of high-frequency features and the limited feature extraction efficiency of the Visual Geometry Group (VGG) encoding network, a residual module is introduced into the encoder, preserving the original features while reducing fitting difficulty and improving the accuracy of feature point extraction. To address the tendency of feature point extraction to neglect negative samples, an adaptive equalization sample contrastive loss function with a hard negative sample mining mechanism is proposed, which improves parameter optimization efficiency, accelerates convergence, and further raises feature point extraction accuracy. Experiments on the Hainan Jiajing Island underwater coral reef optical dataset and the COCO and HPatches datasets show that, on HPatches, the residual SuperPoint algorithm achieves a feature point repeatability of 61.7%, outperforming the comparison algorithms by 4.8% to 23.1%. In underwater coral reef scenes, relative to the classic SuperPoint, residual SuperPoint improves the image-level structural similarity index measure (SSIM) by 11.8% and mutual information (MI) by 22.6%, while the root mean square error (RMSE) remains essentially unchanged. Compared with the other traditional algorithms, it achieves the best SSIM and MI and the second-best RMSE. The proposed method can provide technical support for coral reef surveys, ecological monitoring, and related applications.
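No code accompanies this section, so the following is a minimal sketch, assuming a PyTorch implementation, of the architectural idea described above: a residual block replaces a plain VGG-style convolutional stage in a SuperPoint-like encoder, so that the skip connection carries the original features (including high-frequency content) forward while the convolutional branch only has to learn a residual correction. All class names, channel widths, and layer choices here are illustrative assumptions, not the authors' exact network.

```python
# Illustrative sketch (assumed PyTorch): residual stages in a SuperPoint-style
# encoder. Names, channel widths, and normalization choices are assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection so the identity path matches the output channel count
        self.proj = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        identity = self.proj(x)                 # original features carried forward
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))         # residual branch
        return self.relu(out + identity)

class ResidualEncoder(nn.Module):
    """VGG-like SuperPoint encoder with residual stages (illustrative)."""
    def __init__(self):
        super().__init__()
        self.stages = nn.Sequential(
            ResidualBlock(1, 64),   nn.MaxPool2d(2),
            ResidualBlock(64, 64),  nn.MaxPool2d(2),
            ResidualBlock(64, 128), nn.MaxPool2d(2),
            ResidualBlock(128, 128),              # output stride 8, as in SuperPoint
        )

    def forward(self, x):                         # x: (B, 1, H, W) grayscale image
        return self.stages(x)                     # (B, 128, H/8, W/8) shared features
```

From these shared features, the detector head (a 65-way per-cell output over 8×8 pixel cells, including a "no interest point" bin) and the descriptor head can branch off as in the original SuperPoint design; only the encoder stages are swapped for residual ones.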
Key words:
- underwater image registration /
- coral reefs /
- SuperPoint /
- image enhancement
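The abstract also names an adaptive equalization sample contrastive loss with hard negative mining. The paper's exact formulation is not reproduced in this section, so the sketch below is only a generic illustration, assuming a PyTorch implementation, of the two mechanisms involved: mining the hardest non-matching descriptors as negatives, and adaptively re-weighting the positive and negative terms so that neither dominates the gradient. The margins, the number of mined negatives, and the weighting rule are all illustrative assumptions.

```python
# Generic sketch (assumed PyTorch) of a contrastive descriptor loss with online
# hard negative mining and adaptive positive/negative balancing. This is an
# illustration of the mechanism named in the abstract, not the paper's exact loss.
import torch
import torch.nn.functional as F

def adaptive_contrastive_loss(desc_a, desc_b, pos_margin=1.0, neg_margin=0.2,
                              num_hard_negatives=3):
    """desc_a, desc_b: (N, D) descriptors of N matched cells; row i of desc_a
    corresponds to row i of desc_b (positives), all other rows act as negatives."""
    desc_a = F.normalize(desc_a, dim=1)
    desc_b = F.normalize(desc_b, dim=1)
    sim = desc_a @ desc_b.t()                      # (N, N) cosine similarities
    pos = sim.diagonal()                           # similarity of the true pairs

    # Hard negative mining: suppress the diagonal, keep the most similar negatives.
    neg_sim = sim - 2.0 * torch.eye(sim.size(0), device=sim.device)
    hard_neg, _ = neg_sim.topk(num_hard_negatives, dim=1)

    pos_loss = F.relu(pos_margin - pos)                    # pull positives together
    neg_loss = F.relu(hard_neg - neg_margin).mean(dim=1)   # push hard negatives apart

    # Adaptive equalization (illustrative): balance the two terms by their current
    # magnitudes so neither positives nor mined negatives dominate the update.
    total = (pos_loss.mean() + neg_loss.mean() + 1e-8).detach()
    w_pos = (neg_loss.mean() / total).detach()
    w_neg = 1.0 - w_pos
    return (w_pos * pos_loss + w_neg * neg_loss).mean()
```

In a SuperPoint-style training loop, desc_a and desc_b would come from the descriptor head evaluated on an image and its homographically warped copy, with cell correspondences given by the known homography.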
Tab. 1 Feature point extraction and matching evaluation metrics of different algorithms

Algorithm | Feature point repeatability | Extraction time/s | Average number of feature points | Correct matching probability | Matching time/min
SIFT | 0.386 | 31 | 10038 | 0.803 | 10
ORB | 0.568 | 27 | 4539.66 | 0.403 | 9
AKAZE | 0.566 | 26 | 5034 | 0.627 | 9
Classic SuperPoint | 0.569 | 74 | 85 | 0.593 | 243
Residual SuperPoint | 0.617 | 57 | 342.34 | 0.638 | 83

Tab. 2 Comparison of image registration evaluation metrics of different algorithms

Method | SSIM (HPatches) | SSIM (underwater coral reef) | RMSE (HPatches) | RMSE (underwater coral reef) | MI (HPatches) | MI (underwater coral reef)
SIFT | 0.645 | 0.450 | 44.970 | 67.047 | 1.093 | 0.307
ORB | 0.568 | 0.433 | 49.973 | 75.041 | 0.940 | 0.135
AKAZE | 0.627 | 0.446 | 45.701 | 72.380 | 1.064 | 0.336
Classic SuperPoint | 0.488 | 0.404 | 54.388 | 72.906 | 0.772 | 0.292
Residual SuperPoint | 0.616 | 0.452 | 46.767 | 72.372 | 1.037 | 0.358