Extraction of salt-marsh vegetation “fairy circles” from UAV images by the combination of SAM visual segmentation model and random forest machine learning algorithm

Zhou Ruotong, Tan Kai, Yang Jianru, Han Jiangtao, Zhang Weiguo

Citation: Zhou Ruotong, Tan Kai, Yang Jianru, et al. Extraction of salt-marsh vegetation “fairy circles” from UAV images by the combination of SAM visual segmentation model and random forest machine learning algorithm[J]. Haiyang Xuebao, 2024, 46(5): 116−126. doi: 10.12284/hyxb2024048

doi: 10.12284/hyxb2024048
Funding: National Natural Science Foundation of China (4217010220, 41901399); Natural Science Foundation of Shanghai (22ZR1420900); Natural Science Foundation of Chongqing (CSTB2022NSCQ-MSX1254); Open Fund of the Key Laboratory of Surveying, Mapping and Remote Sensing Information Engineering of Hunan Province (E22335); Social Development Research Project of the Science and Technology Commission of Shanghai Municipality (20DZ1204700).
    Author information:

    Zhou Ruotong (2000—), female, from Heze, Shandong Province, mainly engaged in coastal zone remote sensing research. E-mail: 51253904045@stu.ecnu.edu.cn

    Corresponding author:

    Tan Kai (1987—), male, from Loudi, Hunan Province, associate researcher, mainly engaged in coastal zone multi-source remote sensing and its applications. E-mail: ktan@sklec.ecnu.edu.cn

  • CLC number: P237

  • Abstract: “Fairy circles” are a spatially self-organized structure in coastal salt-marsh vegetation ecosystems and have an important influence on the productivity, stability, and resilience of salt-marsh wetlands. Unmanned aerial vehicle (UAV) images are an important data source for identifying the spatial positions of fairy circles with high accuracy and for interpreting their spatio-temporal evolution trends and patterns. However, fairy-circle pixels differ only slightly from background pixels in color and shape, so intelligently and accurately identifying fairy-circle pixels in two-dimensional images and grouping the identified pixels into individual fairy circles remains a technical challenge. This paper proposes a method for segmenting and classifying fairy circles in UAV images that combines the Segment Anything Model (SAM) visual segmentation model with random forest machine learning, enabling the identification and extraction of individual fairy circles. First, the Sørensen-Dice coefficient (Dice) and Intersection over Union (IOU) are constructed as evaluation metrics to select a pre-trained SAM model and optimize its parameters, achieving fully automatic image segmentation and producing segmentation masks/classes without attribute information. Then, the red, green, and blue (RGB) channel information and the two-dimensional spatial coordinates are used to match the segmentation masks with the original image, feature indicators of the masks are constructed, and the features are analyzed and screened according to the reduction of the out-of-bag (OOB) error and the feature distribution patterns. Finally, the selected features are used to train a random forest model, which automatically identifies and classifies fairy-circle vegetation, common vegetation, and bare mudflat. The experimental results show that the proposed method achieves an average correct extraction rate of 96.1% and an average incorrect extraction rate of 9.5% for fairy circles, providing methodological and technical support for accurately characterizing the spatio-temporal patterns of fairy circles and for processing coastal UAV remote sensing images.
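For readers who want to reproduce the workflow described in the abstract, a minimal sketch is given below. It assumes the open-source segment-anything and scikit-learn Python packages; the checkpoint path, image file name, generator thresholds, feature table, and training labels are placeholders for illustration, not the authors' settings or code.

```python
# Minimal sketch of the pipeline (illustrative, not the authors' code):
# SAM automatic mask generation, Dice/IOU evaluation of a mask against a reference,
# and random forest training with an out-of-bag (OOB) error estimate.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from sklearn.ensemble import RandomForestClassifier

def dice_iou(pred, ref):
    """Sørensen-Dice coefficient and Intersection over Union of two boolean masks."""
    inter = np.logical_and(pred, ref).sum()
    dice = 2.0 * inter / (pred.sum() + ref.sum() + 1e-12)
    iou = inter / (np.logical_or(pred, ref).sum() + 1e-12)
    return dice, iou

# 1) Fully automatic segmentation with one of the three ViT variants (vit_b / vit_l / vit_h).
#    Each returned mask is a dict with "segmentation" (boolean array), "bbox" and "area".
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
generator = SamAutomaticMaskGenerator(sam, points_per_side=32,
                                      pred_iou_thresh=0.88, stability_score_thresh=0.92)
image = cv2.cvtColor(cv2.imread("orthophoto_region1.tif"), cv2.COLOR_BGR2RGB)
masks = generator.generate(image)

# 2) Per-mask feature table (bounding-box position, size, RGB indices of Tab. 1, ...) and
#    class labels: 1 = "fairy circle" vegetation, 2 = common vegetation, 3 = bare mudflat.
X = np.array([[m["bbox"][0], m["bbox"][1], m["area"]] for m in masks], dtype=float)
y = np.random.default_rng(0).integers(1, 4, size=len(masks))  # placeholder labels

# 3) Random forest with OOB estimate; feature_importances_ supports the feature screening.
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
print("OOB error:", 1 - rf.oob_score_, "feature importances:", rf.feature_importances_)
```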
  • Fig. 1  Overview of the study area

    Fig. 2  Orthophoto of Region I (a), orthophoto of the entire study area (b), and orthophoto of Region II (c)

    Fig. 3  UAV camera system (a) and data collection flight path (b)

    Fig. 4  Flowchart of the research method

    Fig. 5  Box plots of IOU and Dice for the three ViT model variants

    Fig. 6  Importance of the feature indicators

    Fig. 7  Distributions of the position and shape indexes (a, b) and of the RGB indexes (c, d)

    The horizontal coordinate is the sample serial number and the vertical coordinate is the feature value; the red asterisks indicate the mask type (from left to right: “fairy circle” vegetation, common vegetation, and bare mudflat), and the colored lines show the distribution of the feature values

    Fig. 8  Extraction results of Region I

    a. Original orthophoto; b. RGB display of the SAM segmentation results; c. random forest classification result; d. “fairy circle” extraction result superimposed on the original orthophoto; e. “fairy circle” extraction result superimposed on the original orthophoto before removing the four features bbox-x0, bbox-y0, IKAW, and MVARI

    Fig. 9  Extraction results of Region II

    a. Original orthophoto; b. RGB display of the SAM segmentation results; c. random forest classification result; d. “fairy circle” extraction result superimposed on the original orthophoto; e. “fairy circle” extraction result superimposed on the original orthophoto before removing the four features bbox-x0, bbox-y0, IKAW, and MVARI

    Tab. 1  RGB vegetation indices

    Index    Formula
    EXG      2 × G − R − B
    GCC      G/(B + G + R)
    GRVI     (G − R)/(G + R)
    IKAW     (R − B)/(R + B)
    MGRVI    (G² − R²)/(G² + R²)
    MVARI    (G − B)/(G + R − B)
    RGBVI    (G² − B × R)/(G² + B × R)
    TGI      G − (0.39 × R) − (0.61 × B)
    VARI     (G − R)/(G + R − B)
    VDVI     (2 × G − R − B)/(2 × G + R + B)
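
As an illustration of how the indices in Tab. 1 can be attached to each segmentation mask, the sketch below computes them from the mean R, G, and B values of the pixels inside one mask. Deriving the indices from per-mask mean channel values is an assumption made for illustration, not a detail stated in this excerpt.

```python
# Illustrative sketch: RGB vegetation indices of Tab. 1 for one SAM mask, computed from
# the mean R, G, B values inside the mask (an assumption, not the authors' code).
import numpy as np

def rgb_indices(image, mask):
    """image: H x W x 3 RGB array; mask: H x W boolean array of one segment."""
    r, g, b = (image[..., i][mask].astype(float).mean() for i in range(3))
    eps = 1e-12  # guards against division by zero on very dark segments
    return {
        "EXG":   2 * g - r - b,
        "GCC":   g / (b + g + r + eps),
        "GRVI":  (g - r) / (g + r + eps),
        "IKAW":  (r - b) / (r + b + eps),
        "MGRVI": (g**2 - r**2) / (g**2 + r**2 + eps),
        "MVARI": (g - b) / (g + r - b + eps),
        "RGBVI": (g**2 - b * r) / (g**2 + b * r + eps),
        "TGI":   g - 0.39 * r - 0.61 * b,
        "VARI":  (g - r) / (g + r - b + eps),
        "VDVI":  (2 * g - r - b) / (2 * g + r + b + eps),
    }
```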

    Tab. 2  Confusion matrices of “fairy circle” extraction; categories 1, 2, and 3 represent “fairy circle” vegetation, background vegetation, and bare mudflat, respectively

    Region I (13 features)
    Actual class    Predicted class
                    1      2      3
    1               172    11     1
    2               27     157    5
    3               1      1      112
    Correct recognition rate: 93.5%    Incorrect recognition rate: 14.0%

    Region II (13 features)
    Actual class    Predicted class
                    1      2      3
    1               359    4      1
    2               18     35     0
    3               1      1      39
    Correct recognition rate: 98.6%    Incorrect recognition rate: 5.0%

    Region I (17 features)
    Actual class    Predicted class
                    1      2      3
    1               170    14     0
    2               35     151    3
    3               3      1      110
    Correct recognition rate: 92.4%    Incorrect recognition rate: 18.3%

    Region II (17 features)
    Actual class    Predicted class
                    1      2      3
    1               354    4      1
    2               31     35     0
    3               3      1      39
    Correct recognition rate: 97.3%    Incorrect recognition rate: 8.8%
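
One plausible reading of the rates in Tab. 2, checked below against the Region I (13 features) matrix, is that the correct recognition rate is the share of actual “fairy circles” predicted as class 1 and the incorrect recognition rate is the share of class-1 predictions that are not actual “fairy circles”. These definitions reproduce the reported 93.5% and 14.0%, but they are an assumption for illustration, not a statement taken from the paper.

```python
# Illustrative check of the Tab. 2 rates for Region I (13 features); the rate
# definitions are assumptions that happen to reproduce the reported 93.5% / 14.0%.
import numpy as np

cm = np.array([[172,  11,   1],   # actual class 1: "fairy circle" vegetation
               [ 27, 157,   5],   # actual class 2: background vegetation
               [  1,   1, 112]])  # actual class 3: bare mudflat

correct_rate = cm[0, 0] / cm[0, :].sum()        # 172 / 184 -> 93.5 %
incorrect_rate = 1 - cm[0, 0] / cm[:, 0].sum()  # 28 / 200  -> 14.0 %
print(f"correct: {correct_rate:.1%}, incorrect: {incorrect_rate:.1%}")
```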
Publication history
  • Received:  2023-09-29
  • Accepted:  2024-03-01
  • Revised:  2023-12-28
  • Published online:  2024-03-11
  • Issue published:  2024-05-01
