AIR-SARShip-1.0: High-resolution SAR Ship Detection Dataset

SUN Xian, WANG Zhirui, SUN Yuanrui, DIAO Wenhui, ZHANG Yue, FU Kun

Citation: SUN Xian, WANG Zhirui, SUN Yuanrui, et al. AIR-SARShip-1.0: High-resolution SAR ship detection dataset[J]. Journal of Radars, 2019, 8(6): 852–862. doi: 10.12000/JR19097


doi: 10.12000/JR19097

  • CLC number: TN957.51; TN958


Funds: The National Natural Science Foundation of China (61725105, 41801349, 41701508), National Major Project on High Resolution Earth Observation System (GFZX0404120405)
    Author biographies:

    SUN Xian was born in 1981. He is a researcher and doctoral supervisor at the Aerospace Information Research Institute, Chinese Academy of Sciences. His main research fields are computer vision and remote sensing image interpretation. He is an IEEE Senior Member and a member of the Youth Editorial Board of the Journal of Radars. E-mail: sunxian@mail.ie.ac.cn

    WANG Zhirui was born in 1990. He received his PhD from Tsinghua University in 2018. He is currently a research assistant at the Aerospace Information Research Institute, Chinese Academy of Sciences. His main research field is intelligent interpretation of SAR images. E-mail: zhirui1990@126.com

    SUN Yuanrui was born in 1995. He received his bachelor's degree in engineering from China University of Geosciences (Wuhan) in 2017. He is now a doctoral candidate in information and communication engineering at the University of Chinese Academy of Sciences. His main research field is SAR ship detection. E-mail: sunyuanrui17@mails.ucas.ac.cn

    DIAO Wenhui was born in 1988. He received his PhD from the University of Chinese Academy of Sciences in 2016. He is currently a research assistant at the Aerospace Information Research Institute, Chinese Academy of Sciences. His main research interests include deep learning theory and its application in remote sensing image interpretation. He has published more than 20 papers. E-mail: whdiao@mail.ie.ac.cn

    ZHANG Yue was born in 1990. He received his PhD from the University of Chinese Academy of Sciences in 2017. He is currently a research assistant at the Aerospace Information Research Institute, Chinese Academy of Sciences. His main research field is intelligent analysis and interpretation of SAR images. He has published more than 10 SCI papers. E-mail: zhangyue@air.cas.ac.cn

    FU Kun was born in 1974. He is a researcher and doctoral supervisor, assistant to the president of the Aerospace Information Research Institute, Chinese Academy of Sciences, and director of a Key Laboratory of the Chinese Academy of Sciences. He is mainly engaged in research on geospatial data analysis and mining and intelligent interpretation of remote sensing images. He has won the Special Prize and the First Prize of the National Science and Technology Progress Award, as well as several provincial and ministerial first prizes. E-mail: fukun@mail.ie.ac.cn

    Corresponding author: SUN Xian, sunxian@mail.ie.ac.cn
  • Abstract: In recent years, deep learning has been widely applied, but in research on ship detection in Synthetic Aperture Radar (SAR) imagery, the difficulty of data acquisition and the small scale of available samples still make it hard to support the training of deep network models. This paper releases a SAR ship detection dataset oriented to high-resolution, large-size scenes. The dataset contains 31 Gaofen-3 SAR images; the scene types include ports, islands and reefs, and sea surfaces under different sea conditions, with backgrounds covering diverse nearshore and offshore scenes. Experiments were conducted with classic ship detection algorithms and deep learning algorithms, among which the densely connected end-to-end network performs best, reaching an average precision of 88.1%. The comparative experimental analysis establishes a performance benchmark, facilitating further research on SAR ship detection based on this dataset.
References

  • [1] ZHANG Jie, ZHANG Xi, FAN Chenqing, et al. Discussion on application of polarimetric synthetic aperture radar in marine surveillance[J]. Journal of Radars, 2016, 5(6): 596–606. doi: 10.12000/JR16124
    [2] REY M T, CAMPBELL J, and PETROVIC D. A comparison of ocean clutter distribution estimators for CFAR-based ship detection in RADARSAT imagery[R]. Technical Report No. 1340, 1998.
    [3] NOVAK L M, BURL M C, and IRVING W W. Optimal polarimetric processing for enhanced target detection[J]. IEEE Transactions on Aerospace and Electronic Systems, 1993, 29(1): 234–244. doi:  10.1109/7.249129
    [4] STAGLIANO D, LUPIDI A, and BERIZZI F. Ship detection from SAR images based on CFAR and wavelet transform[C]. 2012 Tyrrhenian Workshop on Advances in Radar and Remote Sensing, Naples, Italy, 2012: 53–58.
    [5] HE Jinglu, WANG Yinghua, LIU Hongwei, et al. A novel automatic PolSAR ship detection method based on superpixel-level local information measurement[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(3): 384–388. doi:  10.1109/LGRS.2017.2789204
    [6] DENG Jia, DONG Wei, SOCHER R, et al. ImageNet: A large-scale hierarchical image database[C]. 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, USA, 2009: 248–255.
    [7] EVERINGHAM M, VAN GOOL L, WILLIAMS C K I, et al. The PASCAL Visual Object Classes (VOC) challenge[J]. International Journal of Computer Vision, 2010, 88(2): 303–338. doi:  10.1007/s11263-009-0275-4
    [8] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: Common objects in context[C]. The 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 740–755.
    [9] XIA Guisong, BAI Xiang, DING Jian, et al. DOTA: A large-scale dataset for object detection in aerial images[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 3974–3983.
    [10] ZHANG Yuanlin, YUAN Yuan, FENG Yachuang, et al. Hierarchical and robust convolutional neural network for very high-resolution remote sensing object detection[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(8): 5535–5548. doi:  10.1109/TGRS.2019.2900302
    [11] LONG Yang, GONG Yiping, XIAO Zhifeng, et al. Accurate object localization in remote sensing images based on convolutional neural networks[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(5): 2486–2498. doi:  10.1109/TGRS.2016.2645610
    [12] XIAO Zhifeng, LIU Qing, TANG Gefu, et al. Elliptic Fourier transformation-based histograms of oriented gradients for rotationally invariant object detection in remote-sensing images[J]. International Journal of Remote Sensing, 2015, 36(2): 618–644. doi:  10.1080/01431161.2014.999881
    [13] LI Jianwei, QU Changwen, and SHAO Jiaqi. Ship detection in SAR images based on an improved faster R-CNN[C]. 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China, 2017: 1–6.
    [14] HUANG Lanqing, LIU Bin, LI Boying, et al. OpenSARShip: A dataset dedicated to Sentinel-1 ship interpretation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(1): 195–208. doi:  10.1109/JSTARS.2017.2755672
    [15] WANG Yuanyuan, WANG Chao, ZHANG Hong, et al. A SAR dataset of ship detection for deep learning under complex backgrounds[J]. Remote Sensing, 2019, 11(7): 765. doi:  10.3390/rs11070765
    [16] ZHANG Qingjun. System design and key technologies of the GF-3 satellite[J]. Acta Geodaetica et Cartographica Sinica, 2017, 46(3): 269–277. doi: 10.11947/j.AGCS.2017.20170049
    [17] LIU Wei, ANGUELOV D, ERHAN D, et al. SSD: Single shot MultiBox detector[C]. The 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 21–37.
    [18] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 779–788.
    [19] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 2999–3007.
    [20] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 580–587.
    [21] GIRSHICK R. Fast R-CNN[C]. 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1440–1448.
    [22] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[C]. The 28th International Conference on Neural Information Processing Systems, Montreal, Canada, 2015: 91–99.
    [23] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 936–944.
    [24] LIU Peng and JIN Yaqiu. A study of ship rotation effects on SAR image[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(6): 3132–3144. doi:  10.1109/TGRS.2017.2662038
    [25] JIAO Jiao, ZHANG Yue, SUN Hao, et al. A densely connected end-to-end neural network for multiscale and multiscene SAR ship detection[J]. IEEE Access, 2018, 6: 20881–20892. doi:  10.1109/ACCESS.2018.2825376
Publication history
  • Received: 2019-11-16
  • Revised: 2019-12-17
  • Published online: 2019-12-27
  • Issue date: 2019-12-01


    • Synthetic Aperture Radar (SAR) is an active microwave imaging radar that provides all-day, all-weather observation capability, and it has broad application prospects in the military and civilian fields. With the progress of China's Earth observation technology in recent years, several high-resolution SAR imaging satellites, such as Gaofen-3, have been put into use, and the quality and quantity of SAR data continue to improve.

      The interpretation of SAR images faces many challenges. SAR imaging differs greatly from optical imaging: its representation is not intuitive, and phenomena such as speckle and layover easily interfere with target interpretation. Most existing daily operations rely on manual interpretation, which is time-consuming and labor-intensive and can hardly meet the need for real-time interpretation of massive SAR images.

      Continuous monitoring of ship targets in ports and maritime areas is an important application task[1], and ship extraction and detection has long been a research focus in SAR image interpretation. SAR ship detection is divided into two types: nearshore ship detection and offshore ship detection. In general, the background of offshore scenes is relatively uniform, so extracting foreground targets is slightly less difficult. By contrast, the nearshore area has a larger number and a wider variety of ships, and because ports lie on the land-sea boundary, background noise and varied land-cover types make target extraction and recognition considerably harder.

      The classic ship detection methods mainly combine statistical learning with the Constant False Alarm Rate (CFAR) approach. In research on ship detection in single-polarization SAR, Rey[2] first proposed a detection method that combines a K-distribution sea clutter model with CFAR. Novak et al.[3] developed a two-parameter CFAR method based on a Gaussian model, and Stagliano et al.[4] proposed a SAR ship detection algorithm combining CFAR with the wavelet transform. He et al.[5] further proposed an automatic ship detection method for polarimetric SAR based on superpixel-level local information measurement, which generates multiscale superpixels to compute measurements between a superpixel and its surrounding pixels and converts the metrics from the superpixel level to the pixel level for discrimination and detection. Although these traditional methods have been widely used in ship detection, they rely on hand-crafted feature classifiers to extract ship features; for example, the performance of the CFAR algorithm depends on the modeling of sea clutter. Moreover, hand-crafted feature classifiers often cannot fully distinguish real ships from false-alarm targets such as islands, reefs, and man-made nearshore facilities, so they tend to work well for offshore ships against relatively uniform backgrounds but perform poorly in complex nearshore scenes.

      In recent years, with the progressive development of deep learning, many target detection algorithms using deep neural network models have been proposed, remedying the deficiencies of traditional learning methods to a certain extent. Commonly used network models include autoencoders, Boltzmann machines, and Convolutional Neural Networks (CNNs). For CNNs in particular, basic networks such as Alex Network (AlexNet), VGG, Google Network (GoogleNet), and Residual Network (ResNet) have emerged, along with many detection models built on these backbones, including SSD, YOLOv1, and Faster-Region CNN (Faster-RCNN). These methods have gradually become mainstream in SAR ship detection.

      However, deep learning methods often require large amounts of data for training. In the field of computer vision, many public sample datasets are available, such as ImageNet[6], VOC[7], and COCO[8]. The data scale reaches thousands of types of targets and millions of slices. In the past two years, some datasets such as DOTA[9], HRRSD[10], and RSOD[11,12] have been released successively in the field of optical remote sensing, thereby facilitating the research and test of many algorithms.

      By contrast, the publicly available datasets in the field of SAR ship detection are relatively limited; the main ones reported are SSDD[13], OpenSARShip[14], and the dataset provided in Ref. [15]. These three datasets consist mainly of slices of civilian ship targets. The slice size is generally 256 × 256 pixels, and the resolutions include 3 m, 5 m, 8 m, 10 m, and 20 m. The backgrounds are relatively uniform, mostly offshore scenes with few nearshore scenes. The release of these three datasets has promoted the application of deep neural network models in SAR ship detection, and benchmarks on them have been defined with mainstream deep learning algorithms.

      In actual application, the ship detection task is often realized on the whole scene image, whose coverage area is generally tens of square kilometers or more. Under this condition, the environment around the target, such as docks, roads, outbuildings, and even waves, has a great impact on ship detection performance, especially in the nearshore and island reef scenarios. Therefore, a dataset that contains more realistic and diverse scenarios such as the distant sea and near shore and covers multiple types of ship targets will contribute to training a model with better performance, stronger robustness, and higher practicality.

      To promote research on SAR ship detection and improve the utilization rate of domestically produced data, this paper publishes AIR-SARShip-1.0, a SAR ship dataset for wide-area scenes built from Gaofen-3 satellite data. The dataset contains 31 SAR images; the scene types include ports, islands and reefs, and sea surfaces under different sea conditions. The labeling information is the ship position, which has been confirmed by professional interpreters. Currently, the dataset mainly supports applications such as ship detection in complex scenes, and it is free to download from the official website of the Journal of Radars (http://radars.ie.ac.cn/web/data/getData?dataType=SARDataset). The paper also reports comparative experiments and analysis with several common deep learning networks, forming a performance benchmark for SAR ship detection on this dataset that other scholars can use as a reference in related research.

    • Gaofen-3 is a civilian microwave remote sensing imaging satellite in the major project of the National High-Resolution Earth Observation System, and it is also the first Chinese C-band multipolarization high-resolution synthetic aperture radar satellite[16]. The AIR-SARShip-1.0 dataset is collected from the Gaofen-3 satellite and contains 31 large-scene images. The dataset information is summarized in Tab. 1: the resolutions include 1 m and 3 m, the imaging modes include spotlight and stripmap, and all images are single-polarization, saved in Tiff format, and mostly 3000 × 3000 pixels in size. Details of each image, including image number, pixel size, resolution, sea condition, scene type, and number of ships, are listed in App. Tab. 1.

      Table 1.  The dataset information

      Resolution Imaging mode Polarization mode Format
      1 m, 3 m Spotlight, Strip Single Tiff
    • The AIR-SARShip-1.0 dataset is labeled according to the annotation format of the PASCAL VOC dataset, and the results are saved as XML files. Fig. 1(a) shows an example of annotated rectangular boxes, and Fig. 1(b) shows part of the corresponding XML label file; the file actually contains the rectangle information of all ships in Fig. 1(a), but only one target box is listed as an example. The XML file includes the image file name, pixel size, number of channels, resolution, category of each target, and position of each target box. With the top-left corner of the image as the origin of coordinates, each target is labeled by a rectangular box recorded as the minimum and maximum X coordinates (xmin, xmax) and the minimum and maximum Y coordinates (ymin, ymax); the coordinate values are actual pixel positions in the image. The annotation format is consistent with that of the PASCAL VOC dataset. Fig. 2 presents some typical scenes in this dataset: the images contain not only numerous ships but also the surrounding sea, land, and harbor areas, which is close to the real ship detection task.

      Figure 1.  Annotated example in the dataset

      Figure 2.  Examples of the AIR-SARShip-1.0 dataset
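      As a minimal sketch of reading these annotations (assuming the standard PASCAL VOC tag names such as filename, size, and object/bndbox that the VOC format uses; the annotation file name below is illustrative), the boxes and their areas, which underlie the distribution discussed next, could be loaded as follows:

```python
import xml.etree.ElementTree as ET

def load_voc_boxes(xml_path):
    """Parse a VOC-style annotation file and return the ship bounding boxes."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):          # one <object> element per ship
        bb = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (int(bb.find(tag).text)
                                  for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((xmin, ymin, xmax, ymax))
    return boxes

# Illustrative usage: box areas feed the distribution shown in Fig. 3.
boxes = load_voc_boxes("1.xml")              # file name assumed for image No. 1
areas = [(x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes]
```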

      The split between training and test sets matters in training. Considering that this dataset contains 31 large-scene images, a roughly 2:1 split is recommended: 21 images are taken as training data and the remaining 10 images as test data. The area distribution of the bounding boxes is shown in Fig. 3, where the horizontal axis is the area interval of the bounding box (in pixels) and the vertical axis is the proportion of ships in that interval. For example, the first column indicates that 6% of ships have a box area below 1000, and the second column indicates that 13% have a box area between 1000 and 2000. Fig. 3 shows that most target boxes have areas between 2000 and 5000, a small fraction of the whole image; even if a large image is cropped into 500 × 500 pixel slices, the area ratio of a ship box within a slice is only about 0.008 to 0.020. The dataset is thus strongly characterized by large scenes and small targets. For comparison, even COCO, one of the most challenging datasets in computer vision, has only about 41% small targets. AIR-SARShip-1.0 therefore mainly tests the detection performance of models on small targets.

      Figure 3.  Area distribution of ship rectangle in the dataset

    • Before deep learning became popular, researchers worldwide studied SAR ship detection in depth and proposed many classical algorithms, such as the two-parameter CFAR algorithm, the optimal entropy automatic threshold method (Kapur, Sahoo, and Wong, KSW), and the K-distribution-based CFAR method. The optimal entropy automatic threshold method applies the Shannon entropy of information theory to image segmentation, with the aim of maximizing the information content of the target and background distributions; by selecting double thresholds, it overcomes the disconnected detections and high false-alarm rates that single-threshold segmentation suffers on high-resolution images. The CFAR method is one of the most commonly used and effective algorithms in radar signal detection. Its core idea is to keep the false-alarm rate constant while computing the ship detection threshold from that rate and the statistical characteristics of the sea clutter in the SAR image, i.e., the probability density function of the sea clutter. Modeling the sea background clutter with a Gaussian model yields the two-parameter CFAR algorithm, but in many cases the Gaussian model describes sea clutter poorly, so in 1976 Jakeman and Pusey introduced the K distribution to describe sea clutter; the resulting K-distribution-based CFAR method further improves ship detection accuracy and is widely accepted. In the experimental part of this paper, three classical ship detection algorithms are tested and analyzed on the AIR-SARShip-1.0 dataset.
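      As a toy sketch of the two-parameter (Gaussian) CFAR idea, not the exact implementation benchmarked below: the threshold at each pixel is the local background mean plus k times the local standard deviation, where k is the Gaussian quantile matching the desired false-alarm probability (window size and Pfa here are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from statistics import NormalDist

def two_parameter_cfar(img, win=41, pfa=1e-4):
    """Two-parameter CFAR: flag pixels exceeding local mean + k * local std."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=win)          # local background mean
    mean_sq = uniform_filter(img ** 2, size=win)  # local mean of squares
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 1e-12))
    k = NormalDist().inv_cdf(1.0 - pfa)           # Gaussian quantile for Pfa
    return img > mean + k * std                   # binary detection mask
```

A practical implementation would additionally exclude a guard window around the pixel under test so that the target itself does not contaminate the background estimate.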

    • In recent years, with the development of deep learning, many algorithms have been proposed for object detection in the vision field, mainly falling into two categories: single-stage and two-stage detectors. SSD[17], YOLOv1[18], and RetinaNet[19] are representative single-stage detection algorithms. YOLOv1 contains only two parts, feature extraction and bounding-box prediction: it divides the image into an S × S grid, where the grid cell containing an object's center is responsible for predicting the position and category of the object box and can predict only a single class of object. SSD differs from YOLOv1 in that it adds anchor boxes and multiscale feature layers, which remedies YOLOv1's coarse grid and poor detection accuracy on small targets. Representative two-stage algorithms include R-CNN[20], Fast-RCNN[21], Faster-RCNN[22], and Feature Pyramid Networks (FPN)[23]. The most representative, Faster-RCNN, consists of three parts: the first is the basic network, which extracts high-level features from the image; the second is the Region Proposal Network (RPN), which proposes candidate boxes that may contain targets; and the third is the prediction box regression network, which further classifies targets and regresses their positions based on the candidate boxes. Because the two-stage detection network has a candidate-box extraction stage, it is better than single-stage networks at controlling the ratio of positive and negative samples and at refining candidate-box positions, but this also greatly increases detection time.

      The target detection algorithms above share similar basic networks, such as VGG and ResNet. VGG is mainly divided into two parts: a convolutional network and a fully connected network. ResNet is mainly designed to solve the problem of network performance degrading as depth increases: it cleverly uses skip connections to form residual blocks, greatly increasing the usable network depth. Commonly used ResNet variants include ResNet50, ResNet101, and ResNet152.
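      To make the skip-connection idea concrete, here is a minimal residual block sketch in PyTorch (the framework used for most experiments below); the layer configuration is illustrative rather than the exact ResNet design:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # The identity shortcut lets gradients bypass the convolutions,
        # which is what allows very deep networks to keep training well.
        return torch.relu(self.body(x) + x)
```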

    • At present, the main data augmentation operations are image flipping, random scaling, and rotation, where rotation is usually limited to multiples of 90 degrees. SAR satellites, however, often image the same location at multiple times and from multiple angles, and the angle difference is arbitrary, being neither a 90-degree rotation nor a 180-degree flip. As shown in Fig. 4, the two SAR images cover approximately the same place but differ in imaging angle. SAR imaging is unlike optical imaging, and results imaged from different angles differ considerably[24], so augmenting only with 90-degree rotations yields limited gains in detection performance. To address this, this paper adopts a Faster-RCNN detection method with dense small-angle rotation augmentation (Faster-RCNN based on Dense Rotation, Faster-RCNN-DR) to obtain angle diversity in the data and further improve SAR ship detection performance. Fig. 5 shows the original image and the images after 20°, 40°, and 60° counterclockwise rotation.

      Figure 4.  Imaging examples of the same area at different angles

      Figure 5.  Examples of original image and rotated images
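      A minimal sketch of the dense-rotation augmentation: rotate the image by small angle steps and replace each box with the axis-aligned hull of its rotated corners. The 10-degree step matches the experiments reported below; the helper names and the use of scipy are our assumptions:

```python
import numpy as np
from scipy.ndimage import rotate as nd_rotate

def rotate_box(box, angle_deg, h, w):
    """Axis-aligned hull of a box rotated by angle_deg about the image center."""
    x1, y1, x2, y2 = box
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    center = np.array([w / 2.0, h / 2.0])
    corners = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]]) - center
    rc = corners @ rot.T + center
    return rc[:, 0].min(), rc[:, 1].min(), rc[:, 0].max(), rc[:, 1].max()

def dense_rotations(img, boxes, step=10):
    """Yield a rotated copy of the image and boxes for every step-degree angle."""
    h, w = img.shape[:2]
    for angle in range(step, 360, step):
        rot_img = nd_rotate(img, angle, reshape=False, order=1)
        # The sign of the box angle must match the image rotation convention
        # (image y points down); verify once visually before training.
        yield rot_img, [rotate_box(b, -angle, h, w) for b in boxes]
```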

    • SAR images have diverse resolutions depending on the application and imaging mode. The same ship appears at different sizes in images of different resolutions, and different ships appear at different sizes in images of the same resolution; this multiscale character of ships in multiresolution SAR imagery poses a great challenge to target detection. In deep CNNs, the feature maps of low-level convolutional layers contain rich spatial information but little semantic information, whereas high-level feature maps contain more semantic information but less spatial information, and small-scale targets retain little information after many convolutional layers, which hampers small-target detection and recognition. Therefore, to solve the multiscale ship detection problem in SAR images of different resolutions, Ref. [25] proposed a ship detection algorithm based on a Densely Connected End-to-end Neural Network (DCENN). The main structure of this network, which uses ResNet101 as the backbone, is shown in Fig. 6. As the convolutional network deepens, the feature maps gain more semantic information but their resolution decreases. To give high-resolution feature maps the semantic information of high-level feature maps, the high-level and low-level feature maps are iteratively connected as shown in Fig. 7. After the basic network and the RPN comes a two-stage detection subnetwork (shown in the dashed box in Fig. 6), which consists of a proposal pooling part and fully connected layers for classification and regression. Lightweight improvements to these two parts preserve detection accuracy while reducing memory consumption and improving processing speed.

      Figure 6.  Main structure of the DCENN network

      Figure 7.  Fusion feature map based on dense connection
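      A minimal sketch of the top-down fusion step in Fig. 7, assuming PyTorch: a deeper, semantically rich map is upsampled and merged with a shallower, spatially rich map. Channel counts are illustrative, and DCENN's exact dense connectivity (which also links non-adjacent levels) may differ:

```python
import torch.nn as nn
import torch.nn.functional as F

class FuseLevel(nn.Module):
    """Merge a high-level (semantic) feature map into a low-level (spatial) one."""
    def __init__(self, c_low, c_high, c_out):
        super().__init__()
        self.lateral = nn.Conv2d(c_low, c_out, 1)   # align low-level channels
        self.reduce = nn.Conv2d(c_high, c_out, 1)   # align high-level channels
        self.smooth = nn.Conv2d(c_out, c_out, 3, padding=1)

    def forward(self, low, high):
        # Upsample the deep map to the shallow map's resolution, then add.
        up = F.interpolate(self.reduce(high), size=low.shape[-2:], mode="nearest")
        return self.smooth(self.lateral(low) + up)
```

Applying this repeatedly from the deepest level downward gives every resolution access to high-level semantics while keeping fine spatial detail.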

    • We conducted experiments on the AIR-SARShip-1.0 dataset to verify the superiority of deep learning methods over traditional methods, comparing the ship detection performance of the algorithms described above. The experiments ran on Ubuntu 16.04 with an Intel Xeon E5-2630 CPU and 32 GB of memory; the deep learning algorithms used an NVIDIA Tesla P100 GPU, while the traditional algorithms used only the CPU without GPU acceleration. The dataset is divided into a training set of 21 images and a test set of 10 images, and the dataset provides train.txt and test.txt files listing the training and test image names. In the CFAR algorithm, the sea clutter is assumed to follow a (0, 1) Gaussian distribution; in the K-distribution-based CFAR algorithm, the parameter is set to K = 2; in the KSW method, the optimal threshold is selected automatically from the image. Because the traditional algorithms require no training data, they are tested directly on the test set; the test accuracy is shown in Tab. 2.

      Table 2.  The performance benchmarks of classic ship detection algorithm

      Algorithm AP(%)
      CFAR 27.1
      CFAR method based on K distribution 19.2
      KSW 28.2

      The AP is computed as in Eq. (1), where $p_{{\rm{interp}}}(r_{n+1})$ is defined in Eq. (2) and $p(\tilde r)$ denotes the maximum precision at recall $\tilde r$. The precision p and recall r are computed as in Eq. (3) and Eq. (4), where TP is the number of detection boxes whose detection result is true and whose ground truth is true, FP is the number whose detection result is true but whose ground truth is false, and FN is the number whose detection result is false but whose ground truth is true. As shown in Eq. (5), the Intersection Over Union (IOU) is defined as the overlap area of the detection box and the ground-truth box divided by the area of their union. A detection with IOU greater than 0.5 is counted as successful and recorded as a TP; one with IOU less than 0.5 is considered a false alarm and recorded as an FP; undetected ships are recorded as FNs. Because this dataset contains only the ship category, the mAP, the average of AP over all classes, equals the AP.

      $$ {\rm{AP}} = \sum\limits_0^1 {({r_{n + 1}} - {r_n}){p_{{\rm{interp}}}}({r_{n + 1}})} $$ (1)
      $$ {p_{{\rm{interp}}}}({r_{n + 1}}) = \mathop {\max }\limits_{\tilde r:\tilde r \ge {r_{n + 1}}} p(\tilde r) $$ (2)
      $$ p = \frac{{{\rm{TP}}}}{{{\rm{TP}} + {\rm{FP}}}} $$ (3)
      $$ r = \frac{{{\rm{TP}}}}{{{\rm{TP}} + {\rm{FN}}}} $$ (4)
      $$ {\rm{IOU}} = \frac{{{\rm{area}}({{\bf{B}}_p} \cap {{\bf{B}}_{gt}})}}{{{\rm{area}}({{\bf{B}}_p} \cup {{\bf{B}}_{gt}})}} $$ (5)
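      A compact sketch of Eqs. (1) to (5) in code, assuming detections are already sorted by descending confidence and matched to ground truth at the 0.5 IOU threshold described above:

```python
import numpy as np

def iou(a, b):
    """Eq. (5): intersection over union of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def average_precision(tp_flags, n_gt):
    """Eqs. (1)-(4): all-point interpolated AP from a confidence-sorted
    TP/FP sequence (1 for TP, 0 for FP) and the number of ground truths."""
    tp_flags = np.asarray(tp_flags, dtype=np.float64)
    tp = np.cumsum(tp_flags)
    fp = np.cumsum(1.0 - tp_flags)
    recall = tp / n_gt                        # Eq. (4)
    precision = tp / (tp + fp)                # Eq. (3)
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]  # Eq. (2): max precision at r' >= r
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))  # Eq. (1)
```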

      The deep learning detection algorithms SSD, YOLOv1, and Faster-RCNN, as well as the rotation-enhanced detection network, are implemented with the open-source framework PyTorch; the DCENN algorithm of Jiao et al.[25] is implemented with the open-source framework TensorFlow. In the experiments, the large images are cropped into 500 × 500 pixel slices, and the data are then augmented by image flipping, rotation, contrast enhancement, and random scaling. The training sets used by Faster-RCNN, SSD-512, SSD-300, and YOLOv1 are augmented with 90-degree rotations, while the Faster-RCNN-based dense-rotation algorithm uses a training set augmented with rotations at 10-degree intervals. Two input sizes are used for SSD: 300 × 300 (SSD-300) and 512 × 512 (SSD-512). The learning rate is 0.00001 and the momentum is set to 0.99. Constrained by GPU memory, the batch sizes of SSD-300, SSD-512, Faster-RCNN, and DCENN are set to 24, 4, 12, and 1, respectively. The other hyperparameters follow Ref. [22], and the hyperparameter settings of Faster-RCNN-DR are exactly the same as those of Faster-RCNN.

      The ship detection performance of each deep learning algorithm is shown in Tab. 3, where running speed is measured in FPS, the number of images the algorithm can process per second. The input test image size is 500 × 500 for DCENN, Faster-RCNN-DR, Faster-RCNN, and YOLOv1; 512 × 512 for SSD-512; and 300 × 300 for SSD-300. The table shows that among the algorithms trained with 90-degree rotation augmentation, YOLOv1 has the lowest accuracy but the fastest running speed, while the SAR ship detection algorithm of Ref. [25] has the best performance but the slowest running speed. Among the single-stage detectors, YOLOv1 does not use anchor boxes for prediction; it divides the image into an S × S grid, and each grid cell can predict only one target, so YOLOv1 performs poorly on a dataset such as AIR-SARShip-1.0 with many dense small targets, although the removal of anchor boxes also makes it the fastest. SSD adds anchor boxes during training and predicts on multiple feature layers of the network, which makes up for YOLOv1's deficiencies and improves detection performance, at a running time slightly slower than YOLOv1's. As a typical two-stage detection algorithm, Faster-RCNN uses the RPN to propose candidate boxes so that the subsequent detection network can regress box positions more accurately; it outperforms the single-stage algorithms but inherits the drawback of two-stage methods, running markedly slower. Using the same detector, Faster-RCNN-DR with dense rotation augmentation at 10-degree intervals outperforms the 90-degree rotation version by 4.9% in AP, because dense rotation increases the richness and angle diversity of the dataset to a certain extent; since no extra work is done at the network stage, its running time is essentially the same as Faster-RCNN's. The DCENN ship detection algorithm uses dense connections and predicts on multiple feature layers, so it extracts ship features better and achieves the highest performance, but the dense connections also bring a higher computational load, making it the slowest.

      Table 3.  The performance benchmarks of SAR ship detection algorithms based on deep learning

      Performance ranking Algorithm AP(%) FPS
      1 DCENN 88.1 24
      2 Faster-RCNN-DR 84.2 29
      3 Faster-RCNN 79.3 30
      4 SSD-512 74.3 64
      5 SSD-300 72.4 151
      6 YOLOv1 64.7 160

      Tab. 4 gives the detection results of three representative algorithms in the nearshore and offshore scenarios. The detection accuracy in the offshore scenario is clearly higher than in the nearshore scenario: the best offshore accuracy on this dataset exceeds 95%, while nearshore performance drops by more than 20%. This accords with the fact that the offshore background is relatively uniform and less noisy, whereas the nearshore scenario suffers interference from wharves, buildings, and land. To a certain extent, it also shows that nearshore ship detection is still far from practical use and remains a challenging research topic.

      Table 4.  The performance benchmarks of different scenes based on different algorithms

      Performance ranking Algorithm Nearshore ship AP(%) Offshore ship AP(%)
      1 DCENN 68.1 96.3
      2 Faster-RCNN-DR 57.6 94.6
      3 SSD-512 40.3 89.4

      To show the detection effect on the AIR-SARShip-1.0 dataset more intuitively, we take one large image as an example and use the Faster-RCNN algorithm to detect ships. The results are shown in Fig. 8, where the number in each green box is the confidence of the detection. Most ships are detected with well-fitting boxes (Fig. 8(c)), but some unsatisfactory results remain: false alarms (Fig. 8(a)), detected ships with inaccurate boxes (Fig. 8(b)), and missed ships (Fig. 8(d)). Overall, the detection results still leave room for improvement.

      Figure 8.  Detection example of SAR ship based on Faster-RCNN

    • To promote the application of deep learning in SAR ship detection, this paper publishes AIR-SARShip-1.0, a large-scene, high-resolution dataset that includes both nearshore and offshore scenarios. Both traditional ship detection algorithms and common deep learning detection algorithms were tested experimentally. The deep learning algorithms clearly outperform the traditional algorithms; among them, the DCENN detection algorithm, which predicts on multiple feature layers on top of a densely connected network structure, achieves the highest AP but the slowest running speed. In addition, the data expansion method using dense angular rotation increases the angle diversity of the data to a certain extent, improving model performance without adding computation at prediction time. The algorithms were also tested separately in the nearshore and offshore scenarios: performance differences are small offshore but large nearshore, indicating that the nearshore environment is more complicated and ship detection there faces greater challenges. The experimental results establish a performance benchmark for the AIR-SARShip-1.0 dataset, enabling other scholars to further pursue related research on SAR ship detection.

    • The high-resolution SAR ship detection dataset AIR-SARShip-1.0 is published on the official website of the Journal of Radars and has been uploaded to the "Data / SAR sample dataset" page (App. Fig. 1) at http://radars.ie.ac.cn/web/data/getData?dataType=SARDataset.

      App. Fig. 1  Release address of the AIR-SARShip-1.0 dataset

      To increase the utilization rate of domestically produced data and promote research on advanced technologies such as SAR target detection, the AIR-SARShip-1.0 dataset was built under the National Science and Technology Major Project of the High-Resolution Earth Observation System. The dataset features large-scene images and covers typical ship types, bringing it close to practical applications. AIR-SARShip-1.0 is owned by the National Science and Technology Major Project of the High-Resolution Earth Observation System and the Aerospace Information Research Institute, Chinese Academy of Sciences; the editorial department of the Journal of Radars holds the editing and publishing rights.

      App. Tab. 1  AIR-SARShip-1.0 dataset information in detail

      Image No. Size Sea condition Scenario Resolution (m) Ship number
      1 3000×3000 Level 2 nearshore 3 5
      2 3000×3000 Level 0 nearshore 1 7
      3 3000×3000 Level 3 offshore 3 10
      4 3000×3000 Level 2 offshore 3 8
      5 3000×3000 Level 1 nearshore 3 15
      6 3000×3000 Level 4 offshore 3 3
      7 3000×3000 Level 4 offshore 3 5
      8 3000×3000 Level 1 nearshore 1 2
      9 3000×3000 Level 2 nearshore 1 7
      10 3000×3000 Level 1 offshore 1 50
      11 3000×3000 Level 1 nearshore 1 80
      12 3000×3000 Level 2 nearshore 1 18
      13 4140×4140 Level 1 nearshore 1 21
      14 3000×3000 Level 1 nearshore 1 15
      15 3000×3000 Level 1 nearshore 1 77
      16 3000×3000 Level 3 nearshore 3 13
      17 3000×3000 Level 3 nearshore 3 3
      18 3000×3000 Level 3 nearshore 3 2
      19 3000×3000 Level 3 nearshore 3 1
      20 3000×3000 Level 2 nearshore 3 7
      21 3000×3000 Level 2 nearshore 3 9
      22 3000×3000 Level 1 nearshore 3 14
      23 3000×3000 Level 1 offshore 3 4
      24 3000×3000 Level 4 offshore 3 6
      25 3000×3000 Level 4 offshore 1 20
      26 3000×3000 Level 2 nearshore 3 15
      27 3000×3000 Level 2 nearshore 3 19
      28 3000×3000 Level 1 nearshore 3 8
      29 3000×3000 Level 3 offshore 3 6
      30 3000×3000 Level 2 offshore 3 8
      31 3000×3000 Level 1 nearshore 3 3