飞鸟与无人机目标雷达探测与识别技术进展与展望

陈小龙 陈唯实 饶云华 黄勇 关键 董云龙

陈小龙, 陈唯实, 饶云华, 等. 飞鸟与无人机目标雷达探测与识别技术进展与展望[J]. 雷达学报, 2020, 9(5): 803–827. doi:  10.12000/JR20068
CHEN Xiaolong, CHEN Weishi, RAO Yunhua, et al. Progress and prospects of radar target detection and recognition technology for flying birds and unmanned aerial vehicles[J]. Journal of Radars, 2020, 9(5): 803–827. doi: 10.12000/JR20068


doi: 10.12000/JR20068
基金项目: 国家自然科学基金(U1933135, 61871391, 61931021),山东省重点研发计划(2019GSF111004),基础加强计划技术领域基金(2102024),装发“十三五”领域基金(61404130212)
    作者简介:

    陈小龙(1985–),男,山东烟台人,博士,副教授。主要研究方向为雷达动目标检测、海杂波抑制、雷达信号精细化处理等。获中国专利优秀奖、军队科技进步一等奖、中国产学研合作促进会军民融合奖、中国电子学会/全军优博。入选中国科协青托,第七届中国电子学会优秀科技工作者,世界无线电联盟、国际应用计算电磁学会青年科学家奖。E-mail: cxlcxl1209@163.com

    陈唯实(1982–),男,天津人,博士,研究员。主要研究方向为机场运行安全管理、无人机反制、鸟击防范、低空空域监视等。E-mail: chenwsh@mail.castc.org.cn

    饶云华(1972–),男,博士,副教授,主要研究方向为新体制雷达,雷达系统设计与信号处理等。E-mail: ryh@whu.edu.cn

    黄 勇(1979–),男,副教授,主要研究方向为雷达目标检测、多维信号处理等。E-mail: huangyong_2003@163.com

    关 键(1968–),男,辽宁锦州人,教授,博士生导师。主要研究方向为雷达目标检测与跟踪、侦察图像处理和信息融合。E-mail: guanjian_68@163.com

    董云龙(1974–),男,天津宝坻人,教授,主要研究方向为多传感器信息融合。E-mail: china_dyl@sina.com

    通讯作者:

    陈小龙 cxlcxl1209@163.com

  • 责任主编:张群 Corresponding Editor: ZHANG Qun
  • 中图分类号: TN957.51

Progress and Prospects of Radar Target Detection and Recognition Technology for Flying Birds and Unmanned Aerial Vehicles

Funds: The National Natural Science Foundation of China (U1933135, 61871391, 61931021), the Key Research and Development Program of Shandong Province (2019GSF111004), the Fundamental Strengthening Technology Program (2102024), and the Equipment Development "13th Five-Year Plan" Field Foundation (61404130212)
  • 摘要: 飞鸟和无人机(UAVs)是典型的“低慢小”目标,具有低可观测性,对两者的有效监视和识别成为保障空中航路安全、城市安保等需求迫切需要解决的难题。飞鸟和无人机目标类型多、飞行高度低、机动性强、雷达散射截面积小,加之探测环境复杂,给目标探测带来极大困扰,已成为世界性难题。因此迫切需要研发“看得见(检测能力强)、辨得明(识别概率高)”的无人机、飞鸟等“低慢小”目标监视手段和技术,实现目标的精细化描述和识别。该文集中对近年来复杂场景下旋翼无人机和飞鸟目标检测与识别技术的研究进展进行了归纳总结,介绍了飞鸟和无人机探测的主要手段,从回波建模和微动特性认知、泛探模式下机动特征增强与提取、分布式多视角特征融合、运动轨迹差异、深度学习智能分类等方面给出了检测和识别的有效途径。最后,该文总结了现有研究存在的问题,对未来复杂场景下飞鸟和无人机目标检测与识别技术的发展进行了展望。
• [1] 王维, 项洪达. 鸟击风险计算方法研究[J]. 中国民航大学学报, 2019, 37(5): 21–24. WANG Wei and XIANG Hongda. Calculation method of aircraft bird strike risk[J]. Journal of Civil Aviation University of China, 2019, 37(5): 21–24.
    [2] 于飞, 刘东华, 贺飞扬. 无人机“黑飞”对电磁空间安全的挑战[J]. 中国无线电, 2018, (8): 43–44. doi:  10.3969/j.issn.1672-7797.2018.08.026YU Fei, LIU Donghua, and HE Feiyang. UAV “black flying” challenges the safety of electromagnetic space[J]. China Radio, 2018, (8): 43–44. doi:  10.3969/j.issn.1672-7797.2018.08.026
    [3] 陈唯实. 轻小型无人机监管、探测与干扰技术[J]. 中国民用航空, 2017, (7): 33–34.CHEN Weishi. The supervision, detection and jamming technologies for light and small UAS[J]. China Civil Aviation, 2017, (7): 33–34.
    [4] 吴小松, 房之军, 陈通海. 民用无人机反制技术研究[J]. 中国无线电, 2018, (3): 55–58. doi:  10.3969/j.issn.1672-7797.2018.03.038WU Xiaosong, FANG Zhijun, and CHEN Tonghai. Research on civil UAV countermeasure technology[J]. China Radio, 2018, (3): 55–58. doi:  10.3969/j.issn.1672-7797.2018.03.038
    [5] 罗淮鸿, 卢盈齐. 国外反“低慢小”无人机能力现状与发展趋势[J]. 飞航导弹, 2019, (6): 32–36. doi:  10.16338/j.issn.1009-1319.20180472LUO Huaihong and LU Yingqi. Ability status and development trend of anti-“low, slow and small” UAVs[J]. Aerodynamic Missile Journal, 2019, (6): 32–36. doi:  10.16338/j.issn.1009-1319.20180472
    [6] 陈小龙, 关键, 黄勇, 等. 雷达低可观测目标探测技术[J]. 科技导报, 2017, 35(11): 30–38. doi:  10.3981/j.issn.1000-7857.2017.11.004CHEN Xiaolong, GUAN Jian, HUANG Yong, et al. Radar low-observable target detection[J]. Science&Technology Review, 2017, 35(11): 30–38. doi:  10.3981/j.issn.1000-7857.2017.11.004
    [7] 陈小龙, 关键, 黄勇, 等. 雷达低可观测动目标精细化处理及应用[J]. 科技导报, 2017, 35(20): 19–27. doi:  10.3981/j.issn.1000-7857.2017.20.002CHEN Xiaolong, GUAN Jian, HUANG Yong, et al. Radar refined processing and its applications for low-observable moving target[J]. Science&Technology Review, 2017, 35(20): 19–27. doi:  10.3981/j.issn.1000-7857.2017.20.002
    [8] NOHARA T J, ENG B, ENG M, et al. An overview of avian radar developments - past, present and future[C]. The 2007 Bird Strike Committee USA/Canada, 9th Annual Meeting, Kingston, Ontario, 2007.
    [9] BEASON R C, NOHARA T J, and WEBER P. Beware the Boojum: Caveats and strengths of avian radar[J]. Human-Wildlife Interactions, 2013, 7(1): 16–46.
    [10] BRAUN C E. Techniques for wildlife investigations and management[J]. The Condor, 2007, 109(4): 981–983. doi:  10.1093/condor/109.4.981
    [11] 宁焕生, 刘文明, 李敬, 等. 航空鸟击雷达鸟情探测研究[J]. 电子学报, 2006, 34(12): 2232–2237. doi:  10.3321/j.issn:0372-2112.2006.12.023NING Huansheng, LIU Wenming, LI Jing, et al. Research on radar avian detection for aviation[J]. Acta Electronica Sinica, 2006, 34(12): 2232–2237. doi:  10.3321/j.issn:0372-2112.2006.12.023
    [12] 陈唯实, 宁焕生, 刘文明, 等. 基于雷达图像的飞鸟目标检测与信息提取[J]. 系统工程与电子技术, 2008, 30(9): 1624–1627. doi:  10.3321/j.issn:1001-506X.2008.09.006CHEN Weishi, NING Huansheng, LIU Wenming, et al. Flying bird targets detection and information extraction based on radar images[J]. Systems Engineering and Electronics, 2008, 30(9): 1624–1627. doi:  10.3321/j.issn:1001-506X.2008.09.006
    [13] 陈唯实, 李敬. 雷达探鸟技术发展与应用综述[J]. 现代雷达, 2017, 39(2): 7–17. doi:  10.16592/j.cnki.1004-7859.2017.02.002CHEN Weishi and LI Jing. Review on development and applications of avian radar technology[J]. Modern Radar, 2017, 39(2): 7–17. doi:  10.16592/j.cnki.1004-7859.2017.02.002
    [14] 陈唯实, 万显荣, 李敬. 机场净空区非合作无人机目标探测技术[J]. 民航学报, 2018, 2(5): 54–57, 45. doi:  10.3969/j.issn.2096-4994.2018.05.014CHEN Weishi, WAN Xianrong, and LI Jing. Detection technology of non-cooperative UAV targets in airport clearance area[J]. Journal of Civil Aviation, 2018, 2(5): 54–57, 45. doi:  10.3969/j.issn.2096-4994.2018.05.014
    [15] 蔡亚梅, 姜宇航, 赵霜. 国外反无人机系统发展动态与趋势分析[J]. 航天电子对抗, 2017, 33(2): 59–64. doi:  10.16328/j.htdz8511.2017.02.016CAI Yamei, JIANG Yuhang, and ZHAO Shuang. Development status and trend analysis of counter UAV systems[J]. Aerospace Electronic Warfare, 2017, 33(2): 59–64. doi:  10.16328/j.htdz8511.2017.02.016
[16] 罗德与施瓦茨(中国)科技有限公司. 无人机自动识别、定位和压制系统(一)[J]. 中国无线电, 2016, (8): 72–73. doi:  10.3969/j.issn.1672-7797.2016.08.047 Rohde & Schwarz (China) Technology Co., Ltd. UAV automatic identification, positioning and suppression system (One)[J]. China Radio, 2016, (8): 72–73. doi:  10.3969/j.issn.1672-7797.2016.08.047
    [17] 罗德与施瓦茨(中国)科技有限公司. 无人机自动识别、定位和压制系统(二)[J]. 中国无线电, 2016, (9): 71–72. doi:  10.3969/j.issn.1672-7797.2016.09.045 Rohde & Schwarz (China) Technology Co., Ltd. UAV automatic identification, positioning and suppression system (Two)[J]. China Radio, 2016, (9): 71–72. doi:  10.3969/j.issn.1672-7797.2016.09.045
    [18] 罗德与施瓦茨(中国)科技有限公司. 无人机自动识别、定位和压制系统(三)[J]. 中国无线电, 2016, (10): 72–73. doi:  10.3969/j.issn.1672-7797.2016.10.044 Rohde & Schwarz (China) Technology Co., Ltd. UAV automatic identification, positioning and suppression system (Three)[J]. China Radio, 2016, (10): 72–73. doi:  10.3969/j.issn.1672-7797.2016.10.044
    [19] JEON S, SHIN J W, LEE Y J, et al. Empirical study of drone sound detection in real-life environment with deep neural networks[C]. The 2017 25th European Signal Processing Conference, Kos, Greece, 2017. doi: 10.23919/EUSIPCO.2017.8081531.
    [20] HOMMES A, SHOYKHETBROD A, NOETEL D, et al. Detection of acoustic, electro-optical and RADAR signatures of small unmanned aerial vehicles[C]. SPIE 9997, Target and Background Signatures Ⅱ, Edinburgh, USA, 2016. doi: 10.1117/12.2242180.
    [21] MEZEI J and MOLNÁR A. Drone sound detection by correlation[C]. The 2016 IEEE 11th International Symposium on Applied Computational Intelligence and Informatics, Timisoara, Romania, 2016. doi: 10.1109/SACI.2016.7507430.
    [22] 陈超帅, 王世勇. 大疆无人机目标红外辐射特性测量及温度反演[J]. 光电工程, 2017, 44(4): 427–434. doi:  10.3969/j.issn.1003-501X.2017.04.007CHEN Chaoshuai and WANG Shiyong. Infrared radiation characteristics measurement and temperature retrieval based on DJI unmanned aerial vehicle[J]. Opto-Electronic Engineering, 2017, 44(4): 427–434. doi:  10.3969/j.issn.1003-501X.2017.04.007
    [23] 郭溪溪. 低空慢速小目标检测识别与威胁度评估[D]. [硕士论文], 中国科学院长春光学精密机械与物理研究所, 2017.GUO Xixi. Detection and recognition and thread assessment for small target at low altitude and slow speed[D]. [Master dissertation], Changchun Institute of Optics, Fine Mehcanics and Physics, Chinese Academy of Sciences, 2017.
    [24] MÜLLER T. Robust drone detection for day/night counter-UAV with static VIS and SWIR cameras[C]. SPIE 10190, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VⅢ, Anaheim, USA, 2017. doi: 10.1117/12.2262575.
    [25] KLARE J, BIALLAWONS O, and CERUTTI-MAORI D. Detection of UAVs using the MIMO radar MIRA-CLE Ka[C]. The EUSAR 2016: 11th European Conference on Synthetic Aperture Radar, Hamburg, Germany, 2016.
    [26] WILLIAMS H. AUDS to target small unmanned intruders[J]. Janes International Defense Review, 2015, 48: 22.
    [27] 郭珊珊. 反无人机技术与产品发展现状[J]. 军事文摘, 2016, (19): 36–39.GUO Shanshan. Anti-UAV technology and product development status[J]. Military Digest, 2016, (19): 36–39.
    [28] CLEMENTE C and SORAGHAN J J. Passive bistatic radar for helicopters classification: A feasibility study[C]. 2012 IEEE Radar Conference, Atlanta, USA, 2012. doi: 10.1109/RADAR.2012.6212273.
    [29] LIU Yuqi, WAN Xianrong, TANG Hui, et al. Digital television based passive bistatic radar system for drone detection[C]. 2017 IEEE Radar Conference, Seattle, USA, 2017. doi: 10.1109/RADAR.2017.7944443.
    [30] SCHÜPbach C, PATRY C, MAASDORP F, et al. Micro-UAV detection using DAB-based passive radar[C]. 2017 IEEE Radar Conference, Seattle, USA, 2017. doi: 10.1109/RADAR.2017.7944357.
    [31] TIKKINEN J, HILTUNEN K, MARTIKAINEN K, et al. Helicopter detection capability of passive coherent location (PCL) radar[C]. The 2012 9th European Radar Conference, Amsterdam, Netherlands, 2012.
    [32] TIKKINEN J, HILTUNEN K, and MARTIKAINEN K. Utilization of long coherent integration time in helicopter recognition by passive coherent location (PCL) radar[C]. 2013 European Radar Conference, Nuremberg, Germany, 2013.
    [33] BĄCZYK M K, MISIUREWICZ J, GROMEK D, et al. Analysis of recorded helicopter echo in a passive bistatic radar[C]. 2013 European Radar Conference, Nuremberg, Germany, 2013.
    [34] MAASDORP F D V, CILLIERS J E, INGGS M R, et al. Simulation and measurement of propeller modulation using FM broadcast band commensal radar[J]. Electronics Letters, 2013, 49(23): 1481–1482. doi:  10.1049/el.2013.2875
    [35] MARTELLI T, COLONE F, and CARDINALI R. DVB-T based passive radar for simultaneous counter-drone operations and civil air traffic surveillance[J]. IET Radar,Sonar&Navigation, 2020, 14(4): 505–515. doi:  10.1049/iet-rsn.2019.0309
    [36] MARTELLI T, MURGIA F, COLONE F, et al. Detection and 3D localization of ultralight aircrafts and drones with a WiFi-based passive radar[C]. International Conference on Radar Systems, Belfast, UK, 2017. doi: 10.1049/cp.2017.0423
    [37] MUSA S A, RSA R A, SALI A, et al. DVBS based forward scattering radar for drone detection[C]. The 2019 20th International Radar Symposium, Ulm, Germany, 2019. doi: 10.23919/IRS.2019.8767456.
    [38] LIU Yuqi, YI Jianxin, WAN Xianrong, et al. Time varying clutter suppression in CP OFDM based passive radar for slowly moving targets detection[J]. IEEE Sensors Journal, 2020, in press. doi:  10.1109/JSEN.2020.2986717
    [39] 刘玉琪, 易建新, 万显荣, 等. 数字电视外辐射源雷达多旋翼无人机微多普勒效应实验研究[J]. 雷达学报, 2018, 7(5): 585–592. doi:  10.12000/JR18062LIU Yuqi, YI Jianxin, WAN Xianrong, et al. Experimental research on micro-Doppler effect of multi-rotor drone with digital television based passive radar[J]. Journal of Radars, 2018, 7(5): 585–592. doi:  10.12000/JR18062
    [40] YI Yucheng, WAN Xianrong, YI Jianxin, et al. polarization diversity technology research in passive radar based on subcarrier processing[J]. IEEE Sensors Journal, 2019, 19(5): 1710–1719. doi:  10.1109/JSEN.2018.2881226
    [41] DAN Yangpeng, YI Jianxin, WAN Xianrong, et al. LTE-based passive radar for drone detection and its experimental results[J]. The Journal of Engineering, 2019, 2019(20): 6910–6913. doi:  10.1049/joe.2019.0583
    [42] PATEL J S, FIORANELLI F, and ANDERSON D. Review of radar classification and RCS characterisation techniques for small UAVs or drones[J]. IET Radar,Sonar&Navigation, 2018, 12(9): 911–919. doi:  10.1049/iet-rsn.2018.0020
    [43] KNOEDLER B, ZEMMARI R, and KOCH W. On the detection of small UAV using a GSM passive coherent location system[C]. The 2016 17th International Radar Symposium, Krakow, Poland, 2016. doi: 10.1109/IRS.2016.7497375.
    [44] POULLIN D. UAV detection and localization using passive DVB-T radar MFN and SFN[R]. 2016.
    [45] HOFFMANN F, RITCHIE M, FIORANELLI F, et al. Micro-Doppler based detection and tracking of UAVs with multistatic radar[C]. 2016 IEEE Radar Conference, Philadelphia, USA, 2016. doi: 10.1109/RADAR.2016.7485236.
    [46] JAHANGIR M, BAKER C J, and OSWALD G A. Doppler characteristics of micro-drones with L-Band multibeam staring radar[C]. 2017 IEEE Radar Conference, Seattle, USA, 2017. doi: 10.1109/RADAR.2017.7944360.
    [47] 张群, 胡健, 罗迎, 等. 微动目标雷达特征提取、成像与识别研究进展[J]. 雷达学报, 2018, 7(5): 531–547. doi:  10.12000/JR18049ZHANG Qun, HU Jian, LUO Ying, et al. Research progresses in radar feature extraction, imaging, and recognition of target with micro-motions[J]. Journal of Radars, 2018, 7(5): 531–547. doi:  10.12000/JR18049
    [48] 陈小龙, 关键, 何友. 微多普勒理论在海面目标检测中的应用及展望[J]. 雷达学报, 2013, 2(1): 123–134. doi:  10.3724/SP.J.1300.2013.20102CHEN Xiaolong, GUAN Jian, and HE You. Applications and prospect of micro-motion theory in the detection of sea surface target[J]. Journal of Radars, 2013, 2(1): 123–134. doi:  10.3724/SP.J.1300.2013.20102
    [49] 刘凯越, 张晨新, 刘刚, 等. 关于飞机雷达探测飞鸟的RCS建模仿真[J]. 计算机仿真, 2016, 33(5): 120–124, 228. doi:  10.3969/j.issn.1006-9348.2016.05.025LIU Kaiyue, ZHANG Chenxin, LIU Gang, et al. Modeling and simulation about RCS of aircraft radar detecting bird[J]. Computer Simulation, 2016, 33(5): 120–124, 228. doi:  10.3969/j.issn.1006-9348.2016.05.025
    [50] SINGH A K and KIM Y H. Automatic measurement of blade length and rotation rate of drone using w-band micro-Doppler radar[J]. IEEE Sensors Journal, 2018, 18(5): 1895–1902. doi:  10.1109/JSEN.2017.2785335
    [51] 宋晨, 周良将, 吴一戎, 等. 基于自相关-倒谱联合分析的无人机旋翼转动频率估计方法[J]. 电子与信息学报, 2019, 41(2): 255–261. doi:  10.11999/JEIT180399SONG Chen, ZHOU Liangjiang, WU Yirong, et al. An estimation method of rotation frequency of unmanned aerial vehicle based on auto-correlation and cepstrum[J]. Journal of Electronics&Information Technology, 2019, 41(2): 255–261. doi:  10.11999/JEIT180399
    [52] 陈永彬, 李少东, 杨军, 等. 一种旋翼叶片微动特征提取新方法[J]. 雷达科学与技术, 2017, 15(1): 13–18, 28. doi:  10.3969/j.issn.1672-2337.2017.01.003CHEN Yongbin, LI Shaodong, YANG Jun, et al. A new method for micro-motion signature extraction of rotor blades[J]. Radar Science and Technology, 2017, 15(1): 13–18, 28. doi:  10.3969/j.issn.1672-2337.2017.01.003
    [53] JAHANGIR M and BAKER C J. Extended dwell Doppler characteristics of birds and micro-UAS at l-band[C]. The 2017 18th International Radar Symposium, Prague, Czech Republic, 2017: 1–10. doi: 10.23919/IRS.2017.8008144.
    [54] RAHMAN S and ROBERTSON D A. Radar micro-Doppler signatures of drones and birds at K-band and W-band[J]. Scientific Reports, 2018, 8(1): 17396. doi:  10.1038/s41598-018-35880-9
    [55] CHEN Xiaolong, GUAN Jian, LIU Ningbo, et al. Maneuvering target detection via radon-fractional Fourier transform-based long-time coherent integration[J]. IEEE Transactions on Signal Processing, 2014, 62(4): 939–953. doi:  10.1109/TSP.2013.2297682
    [56] 关键, 陈小龙, 于晓涵. 雷达高速高机动目标长时间相参积累检测方法[J]. 信号处理, 2017, 33(3A): 1–8. doi:  10.16798/j.issn.1003-0530.2017.3A.001GUAN Jian, CHEN Xiaolong, and YU Xiaohan. Long-time coherent integration-based detection method for high-speed and highly maneuvering radar target[J]. Journal of Signal Processing, 2017, 33(3A): 1–8. doi:  10.16798/j.issn.1003-0530.2017.3A.001
    [57] LI Xiaolong, CUI Guolong, YI Wei, et al. Radar maneuvering target detection and motion parameter estimation based on TRT-SGRFT[J]. Signal Processing, 2017, 133: 107–116. doi:  10.1016/j.sigpro.2016.10.014
    [58] CHEN Xiaolong, GUAN Jian, CHEN Weishi, et al. Sparse long-time coherent integration-based detection method for radar low-observable manoeuvring target[J]. IET Radar,Sonar&Navigation, 2020, 14(4): 538–546. doi:  10.1049/iet-rsn.2019.0313
    [59] DJUROVIĆ I and SIMEUNOVIĆ M. Parameter estimation of 2D polynomial phase signals using NU sampling and 2D CPF[J]. IET Signal Processing, 2018, 12(9): 1140–1145. doi:  10.1049/iet-spr.2018.5083
    [60] CHEN Xiaolong, GUAN Jian, WANG Guoqing, et al. Fast and refined processing of radar maneuvering target based on hierarchical detection via sparse fractional representation[J]. IEEE Access, 2019, 7: 149878–149889. doi:  10.1109/ACCESS.2019.2947169
    [61] 罗迎, 张群, 封同安, 等. OFD-LFM MIMO雷达中旋转目标微多普勒效应分析及三维微动特征提取[J]. 电子与信息学报, 2011, 33(1): 8–13. doi:  10.3724/SP.J.1146.2010.00234LUO Ying, ZHANG Qun, FENG Tongan, et al. Micro-Doppler effect analysis of rotating target and three-dimensional micro-motion feature extraction in OFD-LFM MIMO radar[J]. Journal of Electronics&Information Technology, 2011, 33(1): 8–13. doi:  10.3724/SP.J.1146.2010.00234
    [62] 赵双, 鲁卫红, 冯存前, 等. 多视角微多普勒融合的进动目标特征提取[J]. 信号处理, 2016, 32(3): 296–303. doi:  10.16798/j.issn.1003-0530.2016.03.006ZHAO Shuang, LU Weihong, FENG Cunqian, et al. Feature extraction of precession targets based on multi-aspect micro-Doppler fusion[J]. Journal of Signal Processing, 2016, 32(3): 296–303. doi:  10.16798/j.issn.1003-0530.2016.03.006
    [63] RITCHIE M, FIORANELLI F, BORRION H, et al. Multistatic micro-Doppler radar feature extraction for classification of unloaded/loaded micro-drones[J]. IET Radar,Sonar&Navigation, 2017, 11(1): 116–124. doi:  10.1049/iet-rsn.2016.0063
    [64] 万显荣, 易建新, 程丰, 等. 单频网分布式外辐射源雷达技术[J]. 雷达学报, 2014, 3(6): 623–631. doi:  10.12000/JR14156WAN Xianrong, YI Jianxin, CHENG Feng, et al. Single frequency network based distributed passive radar technology[J]. Journal of Radars, 2014, 3(6): 623–631. doi:  10.12000/JR14156
    [65] 万显荣, 孙绪望, 易建新, 等. 分布式数字广播电视外辐射源雷达系统同步设计与测试[J]. 雷达学报, 2017, 6(1): 65–72. doi:  10.12000/JR16134WAN Xianrong, SUN Xuwang, YI Jianxin, et al. Synchronous design and test of distributed passive radar systems based on digital broadcasting and television[J]. Journal of Radars, 2017, 6(1): 65–72. doi:  10.12000/JR16134
    [66] CHENG Gong, HAN Junwei, and LU Xiaoqiang. Remote sensing image scene classification: Benchmark and state of the art[J]. Proceedings of the IEEE, 2017, 105(10): 1865–1883. doi:  10.1109/JPROC.2017.2675998
    [67] 文贡坚, 朱国强, 殷红成, 等. 基于三维电磁散射参数化模型的SAR目标识别方法[J]. 雷达学报, 2017, 6(2): 115–135. doi:  10.12000/JR17034WEN Gongjian, ZHU Guoqiang, YIN Hongcheng, et al. SAR ATR Based on 3D parametric electromagnetic scattering model[J]. Journal of Radars, 2017, 6(2): 115–135. doi:  10.12000/JR17034
    [68] OH B S, GUO Xin, WAN Fangyuan, et al. Micro-Doppler mini-UAV classification using empirical-mode decomposition features[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(2): 227–231. doi:  10.1109/LGRS.2017.2781711
    [69] BIRTCHER C R, BALANIS C A, and DECARLO D. Rotor-blade modulation on antenna amplitude pattern and polarization: Predictions and measurements[J]. IEEE Transactions on Electromagnetic Compatibility, 1999, 41(4): 384–393. doi:  10.1109/15.809828
    [70] ZHAO Yichao and SU Yi. Synchrosqueezing phase analysis on micro-Doppler parameters for small UAVs identification with multichannel radar[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(3): 411–415. doi:  10.1109/LGRS.2019.2924266
    [71] 田壮壮, 占荣辉, 胡杰民, 等. 基于卷积神经网络的SAR图像目标识别研究[J]. 雷达学报, 2016, 5(3): 320–325. doi:  10.12000/JR16037TIAN Zhuangzhuang, ZHAN Ronghui, HU Jiemin, et al. SAR ATR based on convolutional neural network[J]. Journal of Radars, 2016, 5(3): 320–325. doi:  10.12000/JR16037
    [72] 王俊, 郑彤, 雷鹏, 等. 深度学习在雷达中的研究综述[J]. 雷达学报, 2018, 7(4): 395–411. doi:  10.12000/JR18040WANG Jun, ZHENG Tong, LEI Peng, et al. Study on deep learning in radar[J]. Journal of Radars, 2018, 7(4): 395–411. doi:  10.12000/JR18040
    [73] 苏宁远, 陈小龙, 关键, 等. 基于卷积神经网络的海上微动目标检测与分类方法[J]. 雷达学报, 2018, 7(5): 565–574. doi:  10.12000/JR18077SU Ningyuan, CHEN Xiaolong, GUAN Jian, et al. Detection and classification of maritime target with micro-motion based on CNNs[J]. Journal of Radars, 2018, 7(5): 565–574. doi:  10.12000/JR18077
    [74] KIM B K, KANG H S, and PARK S O. Drone classification using convolutional neural networks with merged Doppler images[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(1): 38–42. doi:  10.1109/LGRS.2016.2624820
    [75] MENDIS G J, WEI Jin, and MADANAYAKE A. Deep learning cognitive radar for micro UAS detection and classification[C]. 2017 Cognitive Communications for Aerospace Applications Workshop, Cleveland, USA, 2017: 1–5. doi: 10.1109/CCAAW.2017.8001610.
    [76] 苏宁远, 陈小龙, 陈宝欣, 等. 雷达海上目标双通道卷积神经网络特征融合智能检测方法[J]. 现代雷达, 2019, 41(10): 47–52, 57. doi:  10.16592/j.cnki.1004-7859.2019.10.009SU Ningyuan, CHEN Xiaolong, CHEN Baoxin, et al. Dual-channel convolutional neural networks feature fusion method for radar maritime target intelligent detection[J].Modern Radar, 2019, 41(10): 47–52, 57. doi:  10.16592/j.cnki.1004-7859.2019.10.009
    [77] 陈唯实, 刘佳, 陈小龙, 等. 基于运动模型的低空非合作无人机目标识别[J]. 北京航空航天大学学报, 2019, 45(4): 687–694. doi:  10.13700/j.bh.1001-5965.2018.0447CHEN Weishi, LIU Jia, CHEN Xiaolong, et al. Non-cooperative UAV target recognition in low-altitude airspace based on motion model[J]. Journal of Beijing University of Aeronautics and Astronautics, 2019, 45(4): 687–694. doi:  10.13700/j.bh.1001-5965.2018.0447
    [78] 陈唯实, 万健, 李敬. 基于机场探鸟雷达数据的鸟击风险评估[J]. 北京航空航天大学学报, 2013, 39(11): 1431–1436.CHEN Weishi, WAN Jian, and LI Jing. Bird strike risk assessment with airport avian radar data[J]. Journal of Beijing University of Aeronautics and Astronautics, 2013, 39(11): 1431–1436.
    [79] 陈唯实, 张洁, 卢贤锋. 基于探鸟雷达数据的机场鸟情分析[J]. 民用航空, 2020, (1): 43–45.CHEN Weishi, ZHANG Jie, and LU Xianfeng. Airport bird situation analysis based on avian radar data[J]. China Civil Aviation, 2020, (1): 43–45.
    [80] KIM B K, KANG H S, and PARK S O. Experimental analysis of small drone polarimetry based on micro-Doppler signature[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(10): 1670–1674. doi:  10.1109/LGRS.2017.2727824
    [81] 章鹏飞, 李刚, 霍超颖, 等. 基于双雷达微动特征融合的无人机分类识别[J]. 雷达学报, 2018, 7(5): 557–564. doi:  10.12000/JR18061ZHANG Pengfei, LI Gang, HUO Chaoying, et al. Classification of drones based on micro-Doppler radar signatures using dual radar sensors[J]. Journal of Radars, 2018, 7(5): 557–564. doi:  10.12000/JR18061
    [82] 罗迎, 倪嘉成, 张群. 基于“数据驱动+智能学习”的合成孔径雷达学习成像[J]. 雷达学报, 2020, 9(1): 107–122. doi:  10.12000/JR19103LUO Ying, NI Jiacheng, and ZHANG Qun. Synthetic aperture radar learning-imaging method based on data-driven technique and artificial intelligence[J]. Journal of Radars, 2020, 9(1): 107–122. doi:  10.12000/JR19103
    [83] 牟效乾, 陈小龙, 苏宁远, 等. 基于时频图深度学习的雷达动目标检测与分类[J]. 太赫兹科学与电子信息学报, 2019, 17(1): 105–111. doi:  10.11805/TKYDA201901.0105MOU Xiaoqian, CHEN Xiaolong, SU Ningyuan, et al. Radar detection and classification of moving target using deep convolutional neural networks on time-frequency graphs[J]. Journal of Terahertz Science and Electronic Information Technology, 2019, 17(1): 105–111. doi:  10.11805/TKYDA201901.0105
    [84] SUN Hongbo, OH B S, GUO Xin, et al. Improving the Doppler resolution of ground-based surveillance radar for drone detection[J]. IEEE Transactions on Aerospace and Electronic Systems, 2019, 55(6): 3667–3673. doi:  10.1109/TAES.2019.2895585
    [85] 许稼, 彭应宁, 夏香根, 等. 空时频检测前聚焦雷达信号处理方法[J]. 雷达学报, 2014, 3(2): 129–141. doi:  10.3724/SP.J.1300.2014.14023XU Jia, PENG Yingning, XIA Xianggen, et al. Radar signal processing method of space-time-frequency focus-before-detects[J]. Journal of Radars, 2014, 3(2): 129–141. doi:  10.3724/SP.J.1300.2014.14023
    [86] 陈曾平, 张月, 鲍庆龙. 数字阵列雷达及其关键技术进展[J]. 国防科技大学学报, 2010, 32(6): 1–7. doi:  10.3969/j.issn.1001-2486.2010.06.001CHEN Zengping, ZHANG Yue, and BAO Qinglong. Advance in digital array radar and its key technologies[J]. Journal of National University of Defense Technology, 2010, 32(6): 1–7. doi:  10.3969/j.issn.1001-2486.2010.06.001
    [87] CHEN Xiaolong, GUAN Jian, LI Xiuyou, et al. Effective coherent integration method for marine target with micromotion via phase differentiation and radon-Lv’s distribution[J]. IET Radar,Sonar&Navigation, 2015, 9(9): 1284–1295. doi:  10.1049/iet-rsn.2015.0100
    [88] KLARE J, BIALLAWONS O, and CERUTTI-MAORI D. UAV detection with MIMO radar[C]. The 2017 18th International Radar Symposium, Prague, Czech Republic, 2017. doi: 10.23919/IRS.2017.8008140.
出版历程
  • 收稿日期:  2020-05-27
  • 修回日期:  2020-06-16
  • 网络出版日期:  2020-07-02
  • 刊出日期:  2020-10-28

• 鸟击是指航空器起降或飞行过程中与鸟类等相撞的事件,或是因为动物活动影响到正常飞行活动的事件[1]。全球每年约发生21000起鸟击事件,造成经济损失约12亿美元。随着航班量的持续增长和生态环境的不断好转,我国机场的鸟击防范工作压力越来越大,鸟击带来的损失远远超出其它原因所致的损失,可以说飞鸟是航班起降阶段的传统安全威胁因素。近年来,以旋翼无人机(Unmanned Aerial Vehicle, UAV)等消费型无人机为代表的低空飞行器得到了快速发展,全国多个机场接连出现无人机“黑飞”扰航事件[2],非法放飞无人机成为新的焦点问题,同“鸟击”一起成为威胁机场净空区航班起降安全的“两大隐患”,如图1所示。

      图  1  飞鸟、无人机对民航飞机的威胁

      Figure 1.  The threat of flying birds and UAVs to civil aviation aircraft

传统的机场鸟情观测依靠人工,而目测困难的黎明、黄昏和夜晚恰恰是鸟击事件的高发期。雷达探鸟系统能够克服天气因素、距离、昼夜变化等不利因素的影响,真正意义上实现全天24 h不间断获取空中鸟类的距离、方位、速度、高度等实时信息,实现鸟情不间断监测。经过30多年的发展,国内外已经研制出了相对成熟的“雷达探鸟系统”,最具代表性的是美国的Merlin雷达、加拿大的Accipiter雷达、荷兰的Robin雷达,以及中国民航科学技术研究院(简称“航科院”)开发的“机场雷达探鸟与驱赶联动系统”。2018年8月,民航局发布《机场新技术名录指南(2018—2020年度)》,聚焦机场运行安全和效率,兼顾便捷服务以及施工安全,共4大类25项新技术,鸟击防范技术位列其中。目前,多数探鸟雷达产品采用S波段水平扫描雷达与X波段垂直扫描雷达相结合的方式,并与部分驱鸟设备实现了联动,但机场普遍反映,目前的探鸟雷达系统的探测率低于75%,且存在一定虚警,探测效果还不能满足机场需求,联动系统驱鸟效果不明显。

      无人机等低空飞行器的出现和迅速发展,对空中航路安全、城市安保等提出了严峻挑战[3]。2015年至今,为维护民航机场净空保护区域飞行安全,加强对无人机运行的管理,国家、各部委及民航局相继出台了多部规章标准,包括《无人驾驶航空器飞行管理暂行条例(征求意见稿)》、 《轻小无人机运行规定(试行)》、 《民用无人驾驶航空器实名制登记管理规定》、 《无人机云系统接口数据规范》、 《无人机围栏》等。在民航局于2018年8月发布的《机场新技术名录指南(2018—2020年度)》中,“鸟击防范技术”和“无人机反制技术”位列其中,在2019年1月的省部级领导干部研讨班上,习近平总书记再次强调“要加快科技安全预警监测体系建设,围绕无人机等领域,加快推进相关立法工作”。但目前,对无人机和飞鸟的监视,尤其是识别仍缺乏有效的技术和手段,“黑飞”和“扰航”现象仍十分普遍,一些简易航空器容易偏离预定航线,若进入重要区域上空,或被恐怖分子利用携带危险武器,将严重威胁公共安全,如图2所示。

      图  2  无人机危害公共安全

      Figure 2.  UAVs endanger public safety

      针对“合作”入网的无人机,其飞行信息实时接入“无人机云”等管理系统,监管部门可对误入相应区域的无人机进行查询、记录;针对“合作”但未入网的无人机,生产商可通过监听“飞控协议”,对相关品牌产品的飞行状态进行监控。上述合作式监管技术目前已能覆盖95%以上的消费级无人机,剩余不足5%的非合作飞行器是防范的重点和难点[4]。以机场净空区为例,针对无人机、飞鸟等不同的“低慢小”入侵目标,机场在发现目标后需要采取不同的反制措施:发现无人机目标后,机场首先发出预警,引导起降航班避让,并协调机场及地方公安进行处置;发现飞鸟目标后,机场将基于一定的驱鸟策略,综合采用多种驱鸟设备将其驱离危险区域或采取一定的规避措施。因此,有必要对无人机和飞鸟目标进行识别判性。

无人机和飞鸟目标种类繁多,其尺寸、形状及运动特性的不同导致目标具有不同的雷达散射特性和多普勒特性,是典型的“低慢小”目标[5],具有低可观测性[6,7]。具体体现在:(1)目标尺寸较小,散射截面积(Radar Cross Section, RCS)小,速度较慢,目标回波藏匿于强杂波或噪声背景中,信杂比低;(2)目标机动飞行导致多普勒扩散,目标回波难以积累;(3)雷达回波微弱,目标特征提取和估计难;(4)雷达精细化处理过程面临挑战,目标分类识别难度大。目前的“低慢小”目标检测方法主要包括恒虚警检测(Constant False Alarm Rate, CFAR)、动目标显示(Moving Target Indicator, MTI)、动目标检测(Moving Target Detection, MTD)、相参积累、特征检测等。在利用现有技术对“低慢小”目标进行探测时,环境复杂(机场、城市、临海等)、虚假目标多、目标机动性强、有效观测时间短等诸多问题,使得目标检测概率低、杂波虚警高、积累增益低、跟踪不稳定,给目标检测和判性带来极大困扰(图3),飞鸟和无人机探测因此成为世界性难题。由于部分涉及敏感关键技术,新技术和新方法也少有公开报道。迫切需要研发“看得见(检测能力强)、辨得明(识别概率高)”的非合作无人机、飞鸟等“低慢小”目标监视新手段和新技术,实现目标的精细化描述和识别。

      图  3  “低慢小”目标雷达回波

      Figure 3.  “Low, slow, small” target radar echo
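上文列举的恒虚警检测(CFAR)是“低慢小”目标检测中最常用的自适应门限手段之一。下面给出一个单元平均CFAR(CA-CFAR)的最简Python示意,假设噪声功率服从指数分布、参考单元独立同分布,门限因子取经典解析结果,仅用于说明门限自适应的基本原理,并非上述任何实际系统的实现。

```python
import numpy as np

def ca_cfar(power, num_ref=16, num_guard=2, pfa=1e-4):
    """单元平均CFAR:以两侧参考单元的平均功率估计噪声水平,
    乘以由虚警概率确定的门限因子后与待检单元比较。"""
    n = len(power)
    alpha = num_ref * (pfa ** (-1.0 / num_ref) - 1.0)  # 指数分布噪声下的门限因子
    half = num_ref // 2
    detections = np.zeros(n, dtype=bool)
    for i in range(half + num_guard, n - half - num_guard):
        lead = power[i - num_guard - half: i - num_guard]
        lag = power[i + num_guard + 1: i + num_guard + 1 + half]
        noise_est = np.mean(np.concatenate([lead, lag]))
        detections[i] = power[i] > alpha * noise_est
    return detections

# 简单验证:指数分布噪声背景中插入一个强回波单元
rng = np.random.default_rng(0)
power = rng.exponential(scale=1.0, size=200)
power[100] += 30.0
print(np.flatnonzero(ca_cfar(power)))   # 期望输出包含单元100
```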

      本文集中对近年来复杂场景下旋翼无人机和飞鸟目标检测与识别技术的研究进展进行归纳总结,并分析该技术领域存在的问题,指出其发展趋势。本文内容安排如下:第2节介绍飞鸟和无人机探测的主要手段,包括国内外探鸟雷达的研发情况、非合作无人机目标的无线电监测、音频探测、光电探测、主动和被动雷达探测和识别手段,并介绍了相关的系统应用情况。第3节重点介绍飞鸟与无人机目标检测与分类识别的技术方法,一方面从目标特性认知与特征提取方面入手,介绍回波建模和微动特性认知方法、泛探模式下目标机动特征增强与提取技术以及分布式多视角微动特征提取技术,目的是提高检测能力,实现目标的精细化特征描述;另一方面结合机器学习或深度学习方法,提出目标智能识别的有效技术途径,依据运动轨迹差异实现区分无人机和飞鸟的目的;最后,介绍了作者团队在此方面的工作。第4节和第5节针对现有研究存在的不足,提出今后的研究发展趋势。第6节对全文归纳总结。

    • 飞鸟是机场净空区的传统安全隐患。鸟击防范长期以来是威胁飞行安全的国际性难题,随着航班量的持续增长和生态环境的不断好转,我国机场的鸟击防范工作压力越来越大。雷达是鸟情观测的重要技术手段,其优点在于不受能见度等因素的限制,能够全天候自动运行。气象雷达和空中交通管制雷达最早被用于监视鸟类活动,特别是较大区域(几百千米)内的大群鸟类迁徙,但此类大型雷达分辨率较低,不适于机场等较小区域(10 km)内的鸟情观测。目前,应用较为广泛的雷达探鸟系统通常采用体积较小、功耗较低的海事雷达,通过对其天线、信号与数据处理等部分的改造,实现以更高的分辨率在较小的区域内跟踪小型鸟群或单只鸟运动。

      图4给出了国外3款探鸟雷达及其获取的含有飞鸟目标的雷达图像[8,9]。2005年,Braun[10]回顾并比较了不同类型探鸟雷达的特点和用途。国际上,典型的探鸟雷达系统包括美国的Merlin雷达、加拿大的Accipiter雷达、以及荷兰的Robin雷达。如图4所示,Merlin雷达采用S波段水平扫描雷达覆盖机场周边的低空空域以及X波段垂直扫描雷达覆盖航空器的起降通道,并采用专业鸟情信息提取算法分离出飞鸟目标信息;Accipiter雷达为X波段抛物面天线,其波束较窄(4º),系统可获得较为精确的高度信息,但探测的范围有限;Robin雷达除了S波段水平和X波段垂直扫描雷达外,还包括一部调频连续波雷达,专为飞鸟目标判性设计。表1比较了3款探鸟雷达的技术特点和部署方式。在国内,航科院与北京航空航天大学2006年以来开展了雷达探鸟可行性验证,并利用航海雷达获取的视频图像开展了相关研究[11,12]。近年来,航天二院、北京理工大学、国防科大等单位也相继开展了基于相控阵、MIMO、全息雷达等先进雷达技术的雷达探鸟研究[13]。但是,目前大部分探鸟雷达仅能利用目标回波强度,按照鸟群、大鸟、中鸟、小鸟进行大致分类,且不具备区分飞鸟与无人机目标的能力。

      表 1  国外3种典型探鸟雷达产品说明

      Table 1.  Description of three typical foreign avian radar products

      产品名称 技术特点 部署方式
      Merlin 水平和垂直扫描雷达结合,固态发射机 水平扫描雷达通常部署在靠近机场中心的位置,负责机场周边低空预警;垂直扫描雷达通常部署在每条跑道中心一侧,负责监视航班起降通道;当然,雷达部署是个复杂的系统问题,需要考虑净空障碍物限制面、地物遮挡、供电等多方面因素。
      Accipiter 水平和垂直扫描雷达结合,附加抛物面天线
      Robin 水平和垂直扫描雷达结合,磁控管发射机

      图  4  典型探鸟雷达系统

      Figure 4.  Typical avian radar systems

    • 一般的消费级无人机价格便宜,小巧轻便,是干扰公共安全和破坏战略要地的入门手段,例如,无人机材料质地坚硬,其与高速飞行的飞机相撞,较之于鸟击的破坏力更大[14]。世界各国为此投入大量人力物力开展无人机反制技术研究,并形成了一批新型技术与设备[3]。现有探测技术主要分4类,即无线电监测、音频探测、光电探测和雷达探测技术[15],各技术的研究现状分述如下:

      (1) 无线电监测手段。无线电监测采用射频扫描技术,针对民用无人机遥控和图传信号频段进行实时监测、分析和测向。该技术可通过扫描该频段得到无人机控制信号波形,与系统库中的无人机控制波形进行对比,判断是否存在遥控无人机并确定其类型[16-18]。罗德与施瓦茨(R&S)公司、国防科技大学、中电41所等均采用无线电侦测技术研制了相关系统,部分系统已在我国白云机场、宝安机场、双流机场试用,并取得良好效果,典型系统如图5(a)所示。但无线电监测单站测量通常只有目标方位信息,测量精度低,且对于巡航式电磁静默的无人机,该监测手段可能失效。

      图  5  典型非合作无人机探测系统

      Figure 5.  Typical non-cooperative UAV detection system

      (2) 音频探测手段。每种无人机都有独特的螺旋桨旋转声,称为音频指纹,采集各种无人机的音频指纹可以建立数据库,通过将定向麦克风探测的音频数据与数据库进行匹配,不仅能够探测到无人机,还能够识别出无人机型号。韩国电子技术附属研究所、法国圣路易斯法德研究所、匈牙利欧布达大学等均研制了基于音频探测技术的无人机反制系统并提出了相关解决方案,典型系统如图5(b)所示[19-21]。但音频探测技术易受噪声、杂波影响,对于大型无人机效果较好,中小型无人机声音小,加之环境中的噪声干扰严重,导致探测效果不佳。

      (3) 光电探测手段。光电探测技术主要利用可见光和红外图像完成目标探测和识别。德国光电系统技术和图像开发研究院、长春光机所等均开发了基于光电技术的无人机探测系统,但光电探测易受环境光线干扰,且“低慢小”目标光电信号较弱、信噪比低,加之在机场环境下,大小目标遮蔽效应、飞机发动机强烈的红外辐射都使光电探测、识别、跟踪的难度进一步增大[22-24]。因此,光电探测技术通常作为雷达、无线电监测的补充确认设备。图5(c)所示为一种典型的雷达光电复合探测系统。

      (4) 雷达探测手段。(a)主动雷达探测:雷达作为目标探测和监视的主要手段,在空中和海面目标监视以及预警探测等国防和公共安全领域应用广泛。虽然传统雷达对于“低慢小”目标存在探测效能不足的问题,但雷达仍作为对空目标探测的重要手段被广泛采用。随着雷达体制和信号处理技术的快速发展,其已具备获取微弱目标特性的潜力,为低可观测目标探测和识别提供了新的途径[7]。系统方面,开发低成本的新型雷达探测系统已受到世界各国防务公司和科研院所的重视。德国应用科学研究院(FHR)利用Ku波段MIMO体制雷达,针对无人机目标信号与杂波相互耦合的现象,通过环境数据训练检测器在杂波区和非杂波区的检测门限,改善了雷达对低速目标的检测性能[25]。英国防务公司Selex ES的“隼盾”(falcon shield)的电子战系统,该系统不仅能够探测、定位、识别、干扰、打击低空慢速飞行的小型无人机(“低慢小”目标),还能够接管目标无人机的指控权并引导其安全降落,系统由雷达、光电监视系统、指控组件、电子侦察与射频威胁管理系统等组成,雷达采用相参信号处理算法,提高目标抗杂波能力,如图6(a)所示。Blighter公司的反无人飞行器防御系统采用雷达准确定位无人机(图6(b)),然后通过发射定向大功率干扰射频,干扰无人机迫使其降落[26]。瑞典萨博公司拓展了“长颈鹿”AMB雷达能力,使其在常规模式下提供空中监视能力的同时,探测、分类和跟踪低速飞行的小型无人机,并已验证了在复杂环境下同时应对6架无人机目标的能力[27]。(b)被动雷达探测:主动雷达需要频率许可、电磁辐射大。某些应用场景,如机场净空区,具有严格的电磁准入制度,管理部门对主动雷达技术的使用通常持谨慎态度,即使是科学实验,也需要经过严格的审批流程,而外辐射源雷达具有功耗低、隐蔽性和覆盖性好等优点,作为一种经济安全的非合作目标探测手段近年来引起了学术界的广泛关注。尤其是随着硬件性能的提升,以及通信和无线网络等可用外辐射源的发展,外辐射源雷达引起了各国的普遍重视和深入研究[28,29]。德国应用科学研究院、瑞士国防装备采购局、法国国家航天航空研究中心、武汉大学等国内外科研机构利用电视、数字电视、数字音频广播、卫星及移动通信等信号作为雷达外辐射源进行了大量的理论与应用研究[30]

      图  6  国际典型的“低慢小”目标探测系统

      Figure 6.  International typical “low, slow, small” target detection system

      在低空目标探测方面,尤其是无人机探测方面,基于外辐射源雷达进行探测的研究得到广泛关注,英国斯凯莱德大学[28]研究了基于全球导航卫星系统(Global Navigation Satellite System, GNSS)信号的外辐射源雷达直升机目标分类识别,可实现直升机目标分类。芬兰奥鲁大学[31,32]开展了基于DVB-T信号的外辐射源雷达直升机探测实验,观测到了直升机叶片转动的微多普勒效应,并进一步研究得到了边带频率谱线间隔与叶片转动速率之间的关系。华沙工业大学[33]基于DVB-T照射源也开展了类似的实验研究,推导了双基地下直升机旋转叶片回波数学模型。南非科学与工业研究理事会[34]开展了基于调频 (Frequency Modulated, FM)信号的螺旋桨飞机探测实验,成功提取到了螺旋桨回波微多普勒特征。罗马大学的Martelli等人[35]研究了利用地面数字电视广播(Digital Video Broadcasting-Terrestrial, DVB-T)外辐射源雷达AULOS在机场区域同时对无人机与民航飞机进行监测,其实验结果表明,通过长相干积累该雷达对机场周围飞行无人机(大疆精灵4和御)的单站探测距离可达约3 km,如图7所示。同时,他们还利用Wi-Fi信号对无人机进行了探测,研究表明可对3D空间无人机进行稳定跟踪,但其精度较差[36]。马来科技大学Rsa R A等人[37]从理论分析了利用DVB数字电视卫星信号进行无人机前向散射探测的可行性。

      图  7  DVB-T外辐射源雷达AULOS无人机探测结果[35]

      Figure 7.  DVB-T passive radar AULOS UAV detection results[35]

      国内,武汉大学[38-41]电波传播实验室在利用外辐射源雷达对无人机探测方面进行了较多的研究,利用数字电视地面广播(Digital Television terrestrial Multimedia Broadcasting, DTMB)信号进行无人机探测实验,结果表明通过杂波抑制后,对于典型的大疆精灵3无人机,其探测距离超过了3 km,如图8所示。图9给出了基于数字电视信号外辐射源雷达的无人机目标监视应用场景示例。

      图  8  基于DTMB外辐射源雷达的无人机探测实验[38]

      Figure 8.  UAV detection experiment based on DTMB passive radar[38]

      图  9  基于数字电视信号外辐射源雷达的无人机目标监视应用场景示例

      Figure 9.  UAV target surveillance application based on digital TV signal passive radar

    • 国外相关机构在无人机分类和识别方面已经开展了研究并形成了产品,代表性的有:(1)德国应用科学研究院(FHR)[42]利用环境中的全球移动通信系统(Global System for Mobile communications, GSM)信号,构建了一套多通道外辐射源雷达探测系统GAMMA-2(图10),探索了外辐射源雷达获取“低慢小”目标微动特征的可能性;(2)法国国家航天航空研究中心(ONERA)[43]采用特高频(Ultra High Frequency, UHF)频段DVB-T信号,通过多频及单频组网探测,不仅可对3 km处的无人机目标实现连续定位跟踪,更能探测到多旋翼无人机的微多普勒效应,将该体制雷达对“低慢小”目标的探测推向更为精细化的应用;(3)荷兰TNO研究院进行了无人机微多普勒测量实验,利用经典时频分析方法进行微动特征提取;(4)英国伦敦大学[44]针对无人机的“低慢小”特性,通过布置一发三收S波段雷达系统NetRAD(图11),利用多旋翼无人机的微多普勒特性,弥补时域维探测效能不足的问题,提升目标的定位跟踪性能;(5)荷兰Robin公司[45]的新一代雷达采用调频连续波波形,利用微动特性区分无人机和飞鸟目标,同时具备无人机探测和识别功能(图12);(6)英国Aveillant公司[46]的全息雷达(图13)采用数字阵列体制,主要通过固定、凝视探测方式,延长驻留时间,能够获取目标的精细化特征表述,系统通过基于决策树分类器的机器学习算法,可对无人机目标和非无人机目标进行分类,同时雷达的软件系统可以分析目标的运动轨迹,从而将其与其他目标区分开来。

      图  10  德国FHR多通道无源雷达系统GAMMA-2[42]

      Figure 10.  German FHR multi-channel passive radar system GAMMA-2[42]

      图  11  英国伦敦大学NetRAD雷达系统[44]

      Figure 11.  NetRAD radar system of university of London[44]

      图  12  荷兰Robin飞鸟/无人机识别雷达[45]

      Figure 12.  The Robin birds/UAV recognition radar in Netherlands[45]

      图  13  英国Aveillant公司全息雷达系统[46]

      Figure 13.  The holographic radar system from Aveillant, UK[46]

    • 无人机和飞鸟均为非刚体目标,无人机旋翼的转动和飞鸟的翅膀扇动会在由主体平动产生的雷达回波多普勒频移信号附近引入额外的调制边带,该信号称为微多普勒信号,产生微多普勒效应[47]。微多普勒反映的是多普勒频移的瞬时特性,表征了目标微动的瞬时径向速度。微多普勒信号中所包含的信息可以反演出目标的形状、结构、姿态、表面材料电磁参数、受力状态及目标独一无二的运动特性[48]。旋翼无人机回波表现为主体和旋翼部件多普勒信号的叠加,不同类型和旋翼个数的无人机其微动特性也不尽相同。

(1)飞鸟与无人机目标雷达回波建模与特性认知。飞鸟与旋翼无人机目标的回波多普勒谱出现展宽,飞鸟翅膀扇动与无人机旋翼转动对回波产生调制特性,具有时变和周期性,体现出微动特征[49]。对飞鸟与旋翼无人机微多普勒特征的建模和精细特性认知是后续检测和目标分类的前提条件。旋翼无人机的回波信号表现为无人机主体多普勒信号和旋翼部件微多普勒信号的叠加,如图14所示。首先是对含微动部件的无人机目标主体运动和微动进行建模分析与参数化表征,推导多普勒频率、旋转速率、叶片数目和尺寸之间的对应关系[50]。对于多旋翼无人机,假设各旋翼叶片的RCS相同且均为1,在直升机旋翼模型基础上,多旋翼无人机目标的回波模型为[51,52]

      图  14  旋翼无人机微动特性[53]

      Figure 14.  Micro-motion characteristics of six-rotor UAV[53]

      $$ \begin{split} {s }(t) =\,& \sum\limits_{m = 1}^M {L\exp \left\{ { - {\rm j}\frac{{4\pi }}{\lambda }\left[ {{R_{{0_m}}} + {z_{{0_m}}}\sin {\beta _m}} \right]} \right\}} \\ & \cdot \sum\limits_{k = 0}^{N - 1} {\sin {\rm{c}}} \left\{ \frac{{4\pi }}{\lambda }\frac{L}{2}\cos {\beta _m}\cos \left( {\varOmega _m}t+ {\varphi _{{0_m}}}\right.\right. \\ & \left. + k2\pi /N \right) \} \exp \left\{ { - {\rm{j}}{\varPhi _{k,m}}\left( t \right)} \right\} \end{split} $$ (1)
$$ \begin{split} {\varPhi _{m,k}}\left( t \right) =\,& \frac{{4\pi }}{\lambda }\frac{L}{2}\cos {\beta _m}\cos \left( {{\varOmega _m}t + {\varphi _{{0_m}}} + 2k\pi /N} \right),\\ & k = 0,1,2,\cdots,N - 1,\ m = 1,2,\cdots,M \end{split}$$ (2)

      式中, $M$ 为旋翼数, $N$ 为单个旋翼叶片数, $L$ 表示旋翼叶片长度, ${R_{{0_m}}}$ 为雷达到第 $m$ 个旋翼中心的距离, ${z}_{0_m}$ 表示第 $m$ 个旋翼叶片的高度, ${\beta _m}$ 为雷达到第 $m$ 个旋翼的俯仰角, ${\varOmega _m}$ 为第 $m$ 个旋翼的转动角频率, ${\varphi _{{0_m}}}$ 为第 $m$ 个旋翼的初始旋转角。

      回波信号的瞬时多普勒频率可由信号的相位函数求时间导数得到,得到第 $m$ 个旋翼的第 $k$ 个叶片的等效瞬时微多普勒频率为

      $$ {f_{m,k}}\left( t \right) = - \frac{{L{\varOmega _m}}}{\lambda }\cos {\beta _m}\sin \left( {{\varOmega _m}t + {\varphi _{{0_m}}} + 2k{\rm{\pi }}/N} \right) $$ (3)

      由式(3)可知,多旋翼无人机微多普勒特征是由 $M \times N$ 条正弦形式的曲线组成,且受到载频、旋翼数目、旋翼转速、叶片数、叶片长度、初始相位和俯仰角的影响。其中,载频、叶片长度和俯仰角仅与微多普勒频率幅度有关,而旋翼数目、旋翼转速、叶片数和初始相位将影响微多普勒特征曲线的幅度和相位。进一步分析飞鸟和无人机目标的微动特征差异,图15图17给出了两种目标微动信号的初步仿真对比结果。图15为X波段雷达的飞鸟、四旋翼无人机和无人直升机的微多普勒时频图,通过对比可知,飞鸟目标微多普勒多集中在低频区,振动周期较长,这与不同类型鸟类翅膀的扑翼运动特点密切相关,而无人机目标的微动特征呈现明显的周期特性,根据旋翼转速和长度的不同,转动频率和周期也不同,由此可知,飞鸟目标和无人机微动特性有明显差异。图16为仿真的飞鸟目标外辐射源雷达微动特性分析,外辐射源中心频率分别为658 MHz和900 MHz,由于照射源的频率较低,波长较长,导致微动特性并不明显。图17为武汉大学前期开展的无人机微多普勒效应实验结果,探测大疆M100无人机,该无人机旋翼数为4,每旋翼叶片数为2,叶片长度17.25 cm,最大转速为129.5 rps,无人机处于悬停状态,可知,各旋翼旋转叶片的调制回波凸显,表现为频谱的展宽,能观察到无人机的微多普勒效应,且各旋翼转速的不同导致占据多个多普勒单元。

      图  15  飞鸟与无人机微动特征差异

      Figure 15.  Differences in micromotion characteristics of flying birds and UAVs

      图  16  仿真的飞鸟目标雷达微动特性分析

      Figure 16.  Analysis of radar m-D characteristics of simulated flying bird

      图  17  无人机目标实测微动特性分析(外辐射源中心频率658 MHz)[39]

      Figure 17.  Analysis of measured m-D characteristics of UAV target (center frequency of external radiation source 658 MHz)[39]

      图18给出了大疆S900无人机和猫头鹰的微多普勒特征对比[54],由于无人机旋翼转速较快,其微动周期明显快于飞鸟,但强度弱于飞鸟目标微多普勒,此外,由于飞鸟机动导致的翅膀扇动不规律,使其微多普勒更为复杂。无人机和飞鸟微动特征有明显差异,从而为无人机和飞鸟目标的准确探测与精确识别提供不依赖于先验信息、可靠性高、可分性好的重要特征依据。目前,利用微动特征进行飞鸟和无人机目标识别的主要难点是“低慢小”目标回波微弱,微动特征提取难,时频分辨力低,分类和识别难,且目标与微动特征的对应关系尚不明确。

      图  18  无人机和飞鸟目标微多普勒(24 GHz)[54]

      Figure 18.  Micro-Doppler (24 GHz) for UAV and bird targets[54]
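为直观展示式(1)—式(3)所描述的旋翼微多普勒调制,下面给出一段Python仿真示意:按式(1)的叶片回波模型生成多旋翼回波(忽略主体平动及各旋翼的常数相位项),再用短时傅里叶变换得到微多普勒时频图。其中载频、旋翼数、叶片长度、转速等参数均为示意取值,并非某型无人机的实测配置。

```python
import numpy as np
from scipy.signal import stft

# 按式(1)仿真多旋翼叶片回波(仅保留微动调制项)
c0, fc = 3e8, 10e9                 # 光速、载频(X波段,示意取值)
lam = c0 / fc
M, N, L = 4, 2, 0.17               # 旋翼数、单旋翼叶片数、叶片长度(m)
Omega = 2 * np.pi * 80.0           # 旋翼转动角频率(rad/s),示意取值
beta = np.deg2rad(10.0)            # 俯仰角
fs, T = 20e3, 0.2
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(1)

s = np.zeros_like(t, dtype=complex)
for m in range(M):
    phi0 = rng.uniform(0, 2 * np.pi)          # 各旋翼初始旋转角
    Om = Omega * (1 + 0.05 * m)               # 各旋翼转速略有差异
    for k in range(N):
        Phi = (4 * np.pi / lam) * (L / 2) * np.cos(beta) \
              * np.cos(Om * t + phi0 + 2 * np.pi * k / N)
        s += np.sinc(Phi / np.pi) * np.exp(-1j * Phi)  # np.sinc为归一化sinc,需除以pi

# 短时傅里叶变换得到时频图,可观察到M*N条正弦形式的微多普勒曲线
f, tt, Z = stft(s, fs=fs, nperseg=256, noverlap=224, return_onesided=False)
print(Z.shape)
```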

(2) 泛探模式下机动特征增强与提取技术。利用泛探观测模式能够通过延长积累时间,获得较高的信号处理增益、提高速度分辨率,并从多域(时频域、变换域、特征域)、多维(回波序列、二维微动时频图、距离-多普勒图、变换域图)、多视角(多接收站位)分析目标运动特征,可实现复杂微动特征高精度提取、机动特征的精确描述[7]。

      假设一种外辐射源雷达的双基地配置,如图19所示[29]T0R0分别为发射站和接收站,L为基线距离。设初始时刻目标位于O点,t时刻运动到O'点, ${R_{{T_0}}}$ RT(t)分别为初始时刻和t时刻目标距发射站的距离, ${R_{{r_0}}}$ Rr(t)分别为初始时刻和t时刻目标距接收站的距离,目标做匀加速运动,初始速度为v0,加速度为a0β0为初始时刻的双基地角, $\phi$ 为目标运动方向。若基带信号为s(t), t 时刻目标回波相对直达波的延时τ=R(t)/c0, $R(t) = {R_T}(t) + {R_r}(t) - L$ , c0为光速,λ为波长。将R(t)在t=0处Taylor展开,并忽略2次以上项,则回波信号模型可简化为

      图  19  双基地外辐射源雷达配置图

      Figure 19.  Configuration diagram of bistatic passive radar

      $$ \begin{split} r(t) =\,& {A_0}s\left[ {t - \frac{{R(0) - v{t_m} - 1/2at_m^2}}{{{c_0}}}} \right]\\ & \cdot \exp \left[ {{\rm{j}}2{\rm{\pi }}{f_c}\frac{{R(0) - v{t_m} - 1/2at_m^2}}{{{c_0}}}} \right] \end{split} $$ (4)

      式中,fc为载波频率,A0为回波信号幅度,ttm分别为进行外辐射源信号二维分时处理后的快时间和慢时间,如中国移动多媒体广播(China Mobile Multimedia Broadcasting, CMMB)信号规定1 s的信号为1帧,并将其划分为40个时隙,每个时隙长25 ms,又可划分为1个信标和53个正交频分复用(Orthogonal Frequency Division Multiplexing, OFDM)符号,以OFDM符号为处理单元划分快时间和慢时间

      $$\left. \begin{aligned} & R(0) = {R_{{T_0}}} + {R_{{r_0}}} - L \\ & v = 2{v_0}\cos \left( {\phi + {\beta _0}/2} \right)\cos ({\beta _0}/2) \\ & a = - \frac{{v_0^2{{\sin }^2}(\phi + {\beta _0})}}{{{R_{{T_0}}}}} - \frac{{v_0^2{{\sin }^2}({\beta _0})}}{{{R_{{r_0}}}}} \\ & \quad\ \ + 2{a_0}\cos \left( {\phi + {\beta _0}/2} \right)\cos \left( {{\beta _0}/2} \right) \end{aligned} \right\} $$ (5)

由此可知,在较长的观测时间内,目标的时延和多普勒频率分布在一定区间内,即同一个目标可能对应多个距离单元和多普勒单元,产生距离徙动和多普勒徙动。距离徙动的大小取决于目标速度在双基角平分线的投影值与平台速度在观测角平分线的切向投影值;多普勒徙动取决于目标、平台速度的切向分量及目标距发射站、接收站距离,基线距离,以及目标、平台加速度的切向分量。
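依据式(4)和式(5),可对长时间积累时距离走动与多普勒走动的量级做粗略估算,据此判断是否需要进行徙动补偿。下面给出一个Python数值示意,其中信号带宽、载频、等效速度、加速度和积累时间均为假设取值:

```python
# 距离走动与多普勒走动量级估算(参数为假设取值)
c0 = 3e8                        # 光速(m/s)
B = 7.56e6                      # 照射源信号带宽(Hz),取DTMB量级
fc = 658e6                      # 载频(Hz)
lam = c0 / fc
v, a, T = 20.0, 5.0, 2.0        # 式(5)中的等效速度v、加速度a与积累时间(s)

range_cell = c0 / B             # 双基地距离(时延)分辨单元
delta_R = v * T + 0.5 * a * T**2     # 积累时间内的双基地距离走动
delta_fd = a * T / lam               # 加速度引起的多普勒走动
doppler_res = 1.0 / T                # 多普勒分辨率

print(f"距离走动约 {delta_R:.1f} m, 跨越 {delta_R / range_cell:.1f} 个距离单元")
print(f"多普勒走动约 {delta_fd:.1f} Hz, 跨越 {delta_fd / doppler_res:.1f} 个多普勒单元")
```

在上述示例参数下,多普勒徙动跨越的分辨单元数远多于距离徙动,是制约相参积累的主要因素。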

      若目标做高阶机动,则可将长时间信号模型统一为

      $$ r(t,{t_m}) = {A_0}s\left[ {t - \frac{{R({t_m})}}{{{c_0}}}} \right]\exp \left[ {{\rm{j}}2{\rm{\pi }}{f_c}\frac{{R({t_m})}}{{{c_0}}}} \right] $$ (6)

3阶多项式可描述大部分机动目标的运动,即

      $$R({t_m}) = R(0) + v{t_m} + at_m^2/2 + gt_m^3/6$$ (7)

      传统的距离和多普勒徙动补偿方法多为分段分步补偿,即先进行距离徙动补偿,后进行多普勒徙动补偿,算法复杂且后续补偿性能受制于前续结果,不能实现回波信号的长时间快速相参积累。基于参数搜索的长时间相参积累(Long-Time Coherent Integration, LTCI)方法[55-57]根据预先设定的目标运动参数(初始距离、速度和加速度)搜索范围,提取位于距离-慢时间二维平面中的目标观测值,然后在相应的变换域选择合适的变换参数对该观测值进行匹配和积累,但该方法的主要问题在于需要多维参数搜索,导致运算量较大,增加了工程应用的难度。

      为此,可基于非参数搜索的长时间相参积累方法,通过快速降阶处理和多项式相位补偿等思路,提高算法运算效率。文献[58]提出了一种基于非均匀采样降阶运算和变标尺度变换(Non-Uniform resampling and Scale Processing, NUSP)的机动目标长时间相参积累检测方法(NUSP-LTCI),如图20所示,并用于低空飞行目标检测,该方法仅通过1次非均匀采样降阶运算,将高阶相位信号降阶至1阶相位信号,相比传统逐次降阶法降低了交叉项的影响,提高了参数估计精度;无需参数搜索匹配计算,降低了运算量,适合工程应用;能够同时补偿距离和高阶多普勒走动,实现长时间相参积累,提高雷达强杂波背景下的机动目标检测能力。如图21给出了一低空直升机目标雷达相参积累结果对比分析,可以看出NUSP-LTCI方法在杂波抑制能力和参数估计精度等方面较传统的MTD和经典Radon傅里叶变换(Radon Fourier Transform, RFT)方法有明显改善。

      图  20  非参数搜索长时间相参积累(NUSP-LTCI)处理流程[58]

      Figure 20.  NUSP-LTCI processing[58]

      图  21  低空飞行目标雷达相参积累结果对比分析[58,60]

      Figure 21.  Comparative analysis of radar coherent integration results of low-altitude flying targets[58,60]

      非均匀采样降阶运算定义为[59]

      $$S(t_m') = s({t_p} + t_m')s({t_p} - t_m')$$ (8)

      式中, $t_m'$ 为非均匀采样后的慢时间,tp为非均匀采样时间区间的中心, $t_m' = \sqrt {c{\tau _m}} $ , c为尺度因子,控制非均匀采样的密度, ${\tau _m}$ 为新的时间变量。

      非均匀采样降阶运算不同于传统的相位差分运算,在不进行信号的延迟复共轭相乘逐次降阶的前提下,能够快速将N阶相位降阶至N/2或N/2+1阶,从而大大降低了运算量,并减少了交叉项的影响。而变标尺度变换,借鉴Keystone变换的思想,对时间坐标轴进行尺度变换,消除变量之间的线性耦合关系,可在目标速度未知的前提下实现距离徙动校正。

      沿距离向对脉压后的雷达数据进行傅里叶变换,得到距离频率-脉间慢时间二维数据

      $$ \begin{split} {S_{{\rm{PC}}}}(f,{t_m}) =\,& A_{{\rm{PC}}}'{\rm{rect}}\left( {\frac{f}{B}} \right)\exp\! \left[ { - {\rm{j}}\frac{{{\rm{4}}\pi }}{{{c_0}}}(f \!+\! {f_c}){R_s}({t_m})} \right] \\ = \,& A_{{\rm{PC}}}'{\rm{rect}}\left( {\frac{f}{B}} \right)\exp \left[ - {\rm{j}}\frac{{{\rm{4}}\pi }}{{{c_0}}}(f + {f_c})\right.\\ & \cdot \left( {{r_0} + {v_0}{t_m} + {a_s}t_m^2/2 + gt_m^3/6} \right) ] \\[-17pt] \end{split} $$ (9)

      式中, $A_{{\rm{PC}}}'$ 为信号幅度,fc为发射信号载频,f 为距离频率。

      则通过非均匀采样运算,式(9)可转变为

      $$ \begin{split} S(f,{\tau _m}) =\,& {\left( {A_{{\rm{PC}}}'} \right)^2}{\rm{rect}}\left( {\frac{f}{B}} \right)\\ & \cdot \exp\! \left[ { - {\rm{j}}\frac{{{\rm{4}}\pi }}{{{c_0}}}(f \!+\! {f_c}) \cdot 2{R_s}({t_p})} \right] \\ & \cdot \exp\! \left[ { - {\rm{j}}\frac{{{\rm{4}}\pi }}{{{c_0}}}(f \!+\! {f_c})({a_s} \!+\! {g_s}{t_p})c{\tau _m}} \right] \end{split} $$ (10)

由式(10)可知,机动目标回波相位已由关于 ${t_m}$ 的3阶多项式,降阶为关于新的时间变量 ${\tau _m}$ 的1次项。因此,后续可采用变标尺度变换去除距离频率 $f$ 与 ${\tau _m}$ 的线性耦合关系,进而实现距离和多普勒的徙动补偿。
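下面用一段Python代码对式(8)—式(10)的降阶思想做简化验证:构造含3阶相位的慢时间回波,经非均匀采样降阶运算后,其相位关于新变量 ${\tau _m}$ 近似为1次项,谱峰位置与 $(a+g{t_p})c$ 对应的理论值一致。该示意仅演示降阶原理,未包含文献[58]中NUSP-LTCI的距离徙动校正与检测环节,各参数均为假设取值。

```python
import numpy as np

# 构造含3阶相位的慢时间回波(忽略包络),验证式(8)的非均匀采样降阶运算
lam = 0.3                                 # 波长(m),示意取值
r0, v, a, g = 1000.0, 30.0, 8.0, 2.0      # 初始距离、速度、加速度、加加速度(假设值)
fs, T = 2000.0, 2.0
tm = np.arange(-T / 2, T / 2, 1 / fs)
R = r0 + v * tm + a * tm**2 / 2 + g * tm**3 / 6
s = np.exp(-1j * 4 * np.pi / lam * R)

def nusp(sig, tm, tp, c_scale, tau):
    """式(8): S(tau) = s(tp + t') * s(tp - t'), 其中 t' = sqrt(c_scale * tau)"""
    t_prime = np.sqrt(c_scale * tau)
    re_p = np.interp(tp + t_prime, tm, sig.real)
    im_p = np.interp(tp + t_prime, tm, sig.imag)
    re_m = np.interp(tp - t_prime, tm, sig.real)
    im_m = np.interp(tp - t_prime, tm, sig.imag)
    return (re_p + 1j * im_p) * (re_m + 1j * im_m)

tp, c_scale = 0.2, 0.25
tau = np.arange(0, 0.9, 1 / fs)           # 保证 tp ± t' 落在观测区间内
S = nusp(s, tm, tp, c_scale, tau)

# 降阶后相位关于tau近似线性,谱峰频率应接近 -2*(a + g*tp)*c_scale/lam
freq = np.fft.fftfreq(len(tau), 1 / fs)
peak = freq[np.argmax(np.abs(np.fft.fft(S * np.hanning(len(S)))))]
print("谱峰频率(Hz):", round(peak, 2), " 理论值:", -2 * (a + g * tp) * c_scale / lam)
```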

(3) 分布式多视角微动特征提取技术。由于无人机运动目标姿态变化的复杂性,在不同的雷达视角下,其微多普勒特征将呈现显著差异,因此,利用分布式多视角雷达探测可提取更加全面的微多普勒特征,从而对目标实现更好的分类[61-63]。以外辐射源雷达为例,其通常利用数字广播电视信号作为照射源,工作频率较低,在目标精细化特征尤其是微动特征描述方面优势并不突出,但其一个重要优势在于能利用分布式发射站同时获得不同视角上的微动分量,从而提高微动特征分辨率[64,65]。图22给出了单频网分布式外辐射源雷达微动特征提取的方案流程,利用单频网分布式外辐射源雷达的多视角特性,分析波数域样本分布与微动特征分辨率之间的内在联系,设计接收站位置优化策略以提高目标微动特征分辨率。

      图  22  单频网分布式外辐射源雷达微动特征提取研究思路

      Figure 22.  Method of micro-motion feature extraction via single-frequency distributed passive radar

      对于外辐射源雷达双基地探测,若基线中点位于空间固定坐标系 $\left( {X,Y,Z} \right)$ 的原点 $O$ ,双基地角为 $\beta $ ,方位角为 $\delta $ (定义为双基地角平分线与旋转平面的夹角)。不失一般性,假设旋转叶片在水平面内转动,叶片数为 $K$ ,叶片长度为 $L$ ,转动速率为 $\varOmega $ ,若不考虑目标主体运动及多径杂波的影响,则接收到的理想旋转叶片回波信号可表示为

      $$ s\left( t \right) = A\sum\limits_{k = 0}^{K - 1} {\sin\! {\rm c}\left( {{\varPhi _k}\left( t \right)} \right)\exp \left\{ { - {\rm{j}}{\varPhi _k}\left( t \right)} \right\}} $$ (11)

      其中, $A$ 为散射信号强度,函数 ${\varPhi _k}\left( t \right)$

      $$ {\varPhi _k}\left( t \right) = \frac{{2{\rm{\pi }}}}{\lambda }\frac{L}{2}\cos \left( {\varOmega t + {\varphi _k}} \right)\cos \left( {{\beta / 2}} \right)\cos \left( \delta \right) $$ (12)

其中, $\lambda $ 为波长, ${\varphi _k} = {\varphi _0} + {{2k{\rm{\pi }}} / K},\ k = 0,1,2, \cdots ,K - 1$ 为各叶片对应的初始旋转角。

      旋转叶片时域特征由式(11)的幅度决定,即

      $$ \left| {s\left( t \right)} \right| = \left| {A\sum\limits_{k = 0}^{K - 1} {\sin \!{\rm c}\left( {{\varPhi _k}\left( t \right)} \right)\exp \left\{ { - {\rm{j}}{\varPhi _k}\left( t \right)} \right\}} } \right| $$ (13)

      可见,时域特征是由sinc函数组成的,由sinc函数的定义可知,连续两个峰值位置之间的时间间隔为

      $$ \Delta T = \left\{ \begin{aligned} & \frac{{2{\rm{\pi }}}}{{2K \cdot \varOmega }},\;\;\;K \;{\rm{is}} \;{\rm{odd}}\\ & \frac{{2{\rm{\pi }}}}{{K \cdot \varOmega }},\;\;\;\;\;K\; {\rm{is}}\; {\rm{even}} \end{aligned} \right. $$ (14)

      对式(11)作傅里叶变换,得到其频谱为

      $$ S\left( f \right) = \sum\limits_{m = - {M / 2}}^{{M / 2}} {{C_m}\delta \left( {f - {{mK\varOmega } / {2{\rm{\pi }}}}} \right)} $$ (15)

      可见,调制谱是由一系列谱线间隔为 $\Delta f = {{K\varOmega } / {2{\rm{\pi }}}}$ 的谱线组成的, ${C_m}$ 为谱线幅度,由Bessel函数确定,式(15)中 $M$ 为调制谱的谱线条数,由式(16)确定

      $$M = \frac{{4{\rm{\pi }}}}{K}\frac{L}{\lambda }\cos \left( {{\beta / 2}} \right)\cos \left( \delta \right)$$ (16)

      由此可得调制信号带宽为

      $$B = M \cdot \Delta f = \frac{{2{V_{{\rm{tip}}}}}}{\lambda }\cos \left( {{\beta / 2}} \right)\cos \left( \delta \right)$$ (17)

      其中, ${V_{{\rm{tip}}}} = L \cdot \varOmega$ 为叶尖速度。

因此,在不同视角条件下可得到不同的雷达微多普勒回波,对多视角微多普勒特征进行提取与融合,将更有利于目标识别与分类。
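基于式(14)—式(17),可直接计算不同双基地角 $\beta$ 与方位角 $\delta$ 下旋转叶片回波的闪烁间隔、谱线间隔、谱线条数与调制带宽,从而量化多视角观测对微动特征的影响。下面给出一个Python计算示意,叶片数、叶片长度、转速和波长均为假设取值:

```python
import numpy as np

lam = 0.3                                  # 波长(m),示意取值
K, L, Omega = 2, 0.17, 2 * np.pi * 80.0    # 叶片数、叶片长度(m)、转动角频率(rad/s)

def blade_signature(beta_deg, delta_deg):
    """按式(14)—式(17)计算时域闪烁间隔、谱线间隔、谱线条数与调制带宽"""
    beta, delta = np.deg2rad(beta_deg), np.deg2rad(delta_deg)
    dT = 2 * np.pi / (2 * K * Omega) if K % 2 == 1 else 2 * np.pi / (K * Omega)  # 式(14)
    df = K * Omega / (2 * np.pi)                                                 # 式(15)
    M = 4 * np.pi / K * L / lam * np.cos(beta / 2) * np.cos(delta)               # 式(16)
    B = 2 * (L * Omega) / lam * np.cos(beta / 2) * np.cos(delta)                 # 式(17)
    return dT, df, M, B

for beta_deg, delta_deg in [(0, 0), (60, 0), (60, 30)]:
    dT, df, M, B = blade_signature(beta_deg, delta_deg)
    print(f"beta={beta_deg:3d}°, delta={delta_deg:2d}°: 闪烁间隔 {dT*1e3:.2f} ms, "
          f"谱线间隔 {df:.1f} Hz, 谱线数约 {M:.1f}, 调制带宽 {B:.1f} Hz")
```

由输出可见,视角变化主要改变谱线条数与调制带宽,谱线间隔仅由叶片数和转速决定,这正是多视角融合能够补充单视角微动信息的原因。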

    • 近年来,飞鸟与无人机目标自动识别得到了广泛关注,其主要技术思路与雷达目标识别有相似之处,主要方法包括基于模板的识别方法[66]和基于模型的识别方法[67],前者的主要工作集中在特征库构建方面,同类目标的特征可以聚类在一起,而不同类的特征在特征空间中是可分离的,通过精心选择目标的特征,可以有效的对目标进行分类和识别;后者在于目标的物理模型构建,包括其外形,几何结构,纹理,材质等。这两种方法都需要有深厚的领域知识来构建特征库,或者需要特定的模型知识构建模型库,并且特征库和模型库的好坏直接影响识别结果。目前,较为常用的飞鸟和无人机分类流程为,首先检测目标,然后通过信号处理方法提取飞鸟和无人机的特征[42,68-70],包括RCS特征、高分辨距离像特征、变换谱特征、时频特征、极化特征等,进而利用特征差异,采用支持向量机(Support Vector Machine, SVM)等方法实现两者的分类和识别。区别于上述方法,本文对目前的两种新兴方法进行阐述。

      (1) 基于机器学习或深度学习的目标识别方法。深度学习方法是一种高效的智能处理方法,相比于传统的SVM更适合挖掘更高维度的抽象特征,具有良好的泛化能力,并且在雷达领域开始得到应用[71]。深度神经网络方法可从实测数据中习得目标的各种隐藏特征,无需构造复杂的高逼真度模型,因此在高分辨距离像、微多普勒谱图和距离多普勒谱图等识别中有非常好的应用前景[72,73]。由于飞鸟和无人机的时频微动图有明显差异,将其作为数据集采用深度卷积网络(Convolutional Neural Network, CNN)学习,有望实现目标微动信号的智能化提取和识别[74,75]。目前,该方向处于起步研究阶段,卷积网络的构建、参数设置以及数据集的构造等均需进一步研究。
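下面给出一个以微动时频图为单通道输入图像、进行“飞鸟/无人机”二分类的最简CNN结构示意(PyTorch实现)。网络层数、卷积核尺寸与输入图像大小均为假设取值,仅用于说明卷积网络构建的基本方式;实际的网络结构、参数设置和数据集构造仍需结合实测数据进一步研究。

```python
import torch
import torch.nn as nn

class MicroDopplerCNN(nn.Module):
    """以单通道时频图为输入的二分类CNN(结构为示意设计)"""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 128), nn.ReLU(), nn.Linear(128, num_classes),
        )

    def forward(self, x):              # x: [batch, 1, H, W] 的时频图
        return self.classifier(self.features(x))

model = MicroDopplerCNN()
dummy = torch.randn(8, 1, 128, 128)    # 8幅128×128时频图(随机数据,仅验证前向维度)
print(model(dummy).shape)              # torch.Size([8, 2])
```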

无人机旋翼和飞鸟的微动特征参数与无人机和飞鸟类型、运动状态、雷达观测方式、环境和背景均有密切的关系,试图从数学建模和特性认知的角度去寻找参数之间的关系,并基于特征差异开展目标识别,是一种可行的技术途径,但受复杂的运动形式和环境因素影响,其内在关系有时难以用模型和参数的方式描述清楚。并且作为关键步骤的特征提取过程需要占据相当的系统资源,且特征多为人工设计,难以获得目标数据的深层本质特征。为了解决雷达目标识别中特征提取面临的问题,获取目标深层本质特征信息,提高识别精度,可采用深度学习方法,如深度神经网络、卷积神经网络、深度置信网络和递归神经网络,用于特征描述、提取和识别,通过设置不同的隐藏层数和迭代次数,获取数据各层次的特征表达,然后和近邻方法相结合,对目标进行识别[72,73,76]。可采取的思路是:

      (a)通过深度学习构建可用于无人机和飞鸟运动模型的参数估计的深度神经网络结构,取得优于传统参数模型建模方法和时频参数估计的效果;

      (b)通过卷积神经网络结构,实现回波信号(如微动信号)的深度特征提取和场景精细化识别;

      (c)通过对大量特征数据集的认知学习,利用深度学习方法构建深度网络结构进行回归分析来反演运动形式,并以此分析运动特性与运动参数之间的关联。

对于微动参数估计,由于微动回波信号可以看做时间-频率的二维特征数据,可采用卷积神经网络(CNN):将微动目标回波信号作为输入数据,经过多层卷积网络获得微动模型的关键运动参数;再将估计参数代入微动模型,以模型与实际信号的拟合误差作为代价函数进行反向传播、调整权重;重新输入微动目标回波数据迭代训练,拟合程度越高,说明数据越符合该微动模型,直到获得较为理想的参数估计为止。图23给出了微动时频图CNN中各卷积层的数据特征(以LeNet为例)[73],其中,图23(a)和图23(c)分别为CNN的第1个卷积层和第2个卷积层的卷积核,图23(b)和图23(d)分别为第1个卷积层和第2个卷积层的特征图,可以看出,CNN能够提取出数据集中较为精细的纹理和轮廓特征,从而为特征分类和参数估计奠定了基础。

      图  23  CNN中各卷积层的数据特征(以LeNet为例)[73]

      Figure 23.  Data characteristics of convolutional layers in CNN (take LeNet as an example)[73]

对于典型的旋转目标,主要估计的是旋翼转速ω,即需要找到一个合适的参数ω,使微动模型与回波数据拟合,拟合程度越高,说明模型越符合数据。具体流程为:首先将含杂波的目标回波数据作为输入,经过卷积、池化和全连接等多层CNN网络运算,最后通过Softmax层得到输出值向量;根据取值范围的设定,选定此次训练后的参数值,代入原模型作为代价函数,反向传播调整权重后重新训练,并检验数值结果与该参数对应模型的拟合程度;如相差较大则继续训练,直到所得参数值较好地符合训练数据。由图23可知,一个CNN由若干卷积层、池化(Pooling)层和全连接层组成,可根据不同的分布模型构建不同的卷积神经网络来完成该任务。基于深度学习的微动参数估计流程初步设计如图24所示(示意代码见图24之后)。对信号进行特征描述时,可通过构造深层卷积神经网络,取中间层(尤其是靠近全连接层的卷积层)的输出作为学习得到的特征描述。

      图  24  基于深度学习的微动参数估计流程示意图

      Figure 24.  Schematic diagram of m-D parameter estimation based on deep learning
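为便于理解上述"CNN估计参数—代入微动模型—以拟合误差反传"的闭环流程,下面给出一个极简的PyTorch示意代码(并非本文实测系统的实现,网络结构、波长、叶片数、转速量级等均为假设值),以式(11)、式(12)的旋转叶片回波模型为例,由微动时频图回归旋翼转速,并以模型重构误差作为代价函数:

```python
# 极简示意:CNN由微动时频图估计旋翼转速,并以式(11)模型重构误差作代价函数反传
# (网络结构与各物理参数均为假设,仅说明流程)
import torch
import torch.nn as nn

LAMBDA, K_BLADES, L_BLADE = 0.03, 2, 0.12   # 波长(m)、叶片数、叶长(m),假设值
BETA, DELTA = 0.0, 0.0                      # 双基地角、方位角,此处取0简化

def blade_echo_amp(omega, t):
    """按式(11)、式(12)生成旋转叶片回波幅度|s(t)|(对omega可微)。"""
    phi0 = 2 * torch.pi * torch.arange(K_BLADES) / K_BLADES
    Phi = (2 * torch.pi / LAMBDA) * (L_BLADE / 2) * \
          torch.cos(omega * t.unsqueeze(-1) + phi0) * \
          torch.cos(torch.tensor(BETA / 2)) * torch.cos(torch.tensor(DELTA))
    s = torch.sinc(Phi / torch.pi) * torch.exp(-1j * Phi)
    return s.sum(dim=-1).abs()

class OmegaNet(nn.Module):
    """小型CNN:输入微动时频图,输出转速估计(rad/s)。"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 1), nn.Softplus())

    def forward(self, tf_map):
        return self.head(self.features(tf_map)).squeeze(-1) * 100.0  # 假设的转速量级

net, t = OmegaNet(), torch.linspace(0, 0.2, 1024)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
echo = blade_echo_amp(torch.tensor(60.0), t)               # 用假设真值仿真"实测"回波
tf_map = torch.stft(echo - echo.mean(), n_fft=64,
                    return_complex=True).abs()[None, None]  # (批, 通道, 频率, 时间)
for step in range(300):
    omega_hat = net(tf_map)                                  # CNN给出转速估计
    loss = torch.mean((blade_echo_amp(omega_hat[0], t) - echo) ** 2)  # 模型重构误差作代价
    opt.zero_grad(); loss.backward(); opt.step()             # 反向传播、调整权重、继续训练
```

实际系统中输入应为实测回波的时频图,且可同时估计叶片数、叶长等多个参数,此处仅为流程性草图。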

      (2) 基于无人机与飞鸟目标运动轨迹差异的识别方法。无人机目标的飞行轨迹相对稳定,而鸟类的飞行轨迹较为多变、机动性高,因此利用两者运动方式的差异,采用基于运动模型的低空非合作无人机与飞鸟目标轨迹差异特征提取,能够为两者的分类和识别提供新的技术途径[77]。因此,可以通过长时间的探鸟雷达数据积累,对一定周期内的鸟类飞行轨迹数量、飞行高度、飞行方向等数据进行统计分析,结合探测区域鸟情人工调研结果,掌握周边留鸟的活动节律及过境候鸟的相关迁徙情况[78]

      (a)基于多模型概率提取的目标运动模式判别与分类。除提取“信号”层面的目标微动特征之外,利用“数据”层面的目标飞行轨迹特征,同样是区分飞鸟和无人机目标的有效技术途径。通常情况下,无人机目标的飞行轨迹持续时间长且相对稳定,而飞鸟目标的飞行轨迹相对较短且灵活多变,可通过评估“数据”层面目标运动模型的变化情况区分飞鸟和无人机。图25所示的算法通过提取出目标的运动模型转换频率特征,进而区分轻小型无人机目标与飞鸟目标。

      图  25  基于雷达数据的飞鸟与无人机目标识别算法流程

      Figure 25.  Target recognition algorithm of flying birds and UAVs based on radar data processing

      图26所示为利用飞行轨迹特征对飞鸟与无人机目标进行识别的结果,雷达实测数据中包含了一段时间内某机场净空区自然活动的若干飞鸟目标与一架大疆精灵3无人机测试目标。其中,飞鸟目标多为本地留鸟,机动性高且飞行轨迹短。在工程应用中,无人机和飞鸟目标的分类阈值需要通过人工标记的方法进行验证和设定,确保算法的识别效果。实际上,无人机的飞行轨迹也可能设计为机动且复杂多变,因此,该算法的适用范围有待于进一步研究和验证。

      图  26  飞鸟与无人机目标飞行轨迹识别结果[77]

      Figure 26.  Recognition results of trajectories of flying birds and UAVs[77]
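作为上述基于运动模型切换频率判别思路的一个极简数值示意(模型概率序列与判决门限均为假设值,并非文中实测系统的参数),可按如下方式统计单位时间内主导运动模型的切换次数并据此分类:

```python
# 极简示意:由逐帧运动模型概率统计模型切换频率,区分飞鸟与无人机(门限为假设值)
import numpy as np

def switch_rate(model_probs, scan_interval_s):
    """model_probs: (帧数, 模型数) 的模型概率序列;返回每秒主导模型切换次数。"""
    dominant = np.argmax(model_probs, axis=1)          # 每帧概率最大的运动模型
    switches = np.count_nonzero(np.diff(dominant))     # 相邻帧主导模型变化次数
    return switches / (len(dominant) * scan_interval_s)

def classify_track(model_probs, scan_interval_s, threshold=0.5):
    """切换频率高于门限判为飞鸟,否则判为无人机(门限需用人工标注数据标定)。"""
    return "bird" if switch_rate(model_probs, scan_interval_s) > threshold else "UAV"

# 用法示意:两种运动模型(匀速/转弯)的概率序列,雷达扫描周期取2 s
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9], [0.8, 0.2]])
print(classify_track(probs, scan_interval_s=2.0))
```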

      (b)鸟类活动节律分析。利用探鸟雷达获取的机场鸟情数据,对不同时段的鸟类飞行轨迹数量进行统计,确定鸟类活动的高峰时段;在机场探鸟雷达的监视范围内划分网格,统计高峰时段内鸟类活动的网格分布情况,确定鸟类活动的主要区域,结合机场鸟情调研情况,为机场留鸟活动节律分析提供数据支撑。图27表2给出了某机场鸟类活动热点分布情况[79],共统计出10个鸟类活动的热点区域,每个区域记录的月度轨迹数量和百分比见表2。其中,热点区域10的面积最大,鸟类活动最活跃,其高峰时段出现在清晨和傍晚,因此,其属于鸟类栖息地的可能性最大。其余9个热点分布在飞行区内及机场周边区域,面积较小,高峰时段出现在白天,属于鸟类觅食或饮水地的可能性更大。机场管理机构可以结合机场实地鸟情调研情况验证以上猜测,对相关区域开展生态治理,降低鸟击风险。

      表 2  鸟类热点月度轨迹数量统计

      Table 2.  Monthly statistics of bird hot spots

      热点编号 轨迹数量 百分比(%)
      1 7628 0.73
      2 30299 2.89
      3 22484 2.15
      4 16885 1.61
      5 21987 2.10
      6 15652 1.50
      7 29856 2.85
      8 33938 3.24
      9 21650 2.07
      10 846406 80.86

      图  27  某机场鸟类活动热点统计情况[79]

      Figure 27.  Statistics of hot spots of birds activity at an airport[79]
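上述热点与活动节律统计的基本做法可用如下示意代码说明(网格尺寸、区域大小、时段划分等均为假设,实际应结合雷达覆盖范围与机场鸟情调研确定):

```python
# 极简示意:将鸟类轨迹点按网格与时段统计,得到活动热点网格及高峰小时(参数为假设值)
import numpy as np

def hotspot_stats(x, y, hour, area_size=10000.0, cell=500.0, top_k=3):
    """x, y: 轨迹点坐标(m);hour: 对应小时(0~23)。返回热点网格索引与高峰时段。"""
    n_cells = int(area_size // cell)
    ix = np.clip((x // cell).astype(int), 0, n_cells - 1)
    iy = np.clip((y // cell).astype(int), 0, n_cells - 1)
    grid = np.zeros((n_cells, n_cells), dtype=int)
    np.add.at(grid, (ix, iy), 1)                       # 各网格内轨迹点计数
    hot_cells = np.argsort(grid, axis=None)[::-1][:top_k]
    hourly = np.bincount(hour, minlength=24)           # 各小时活动量统计
    return np.unravel_index(hot_cells, grid.shape), int(np.argmax(hourly))

# 用法示意:随机生成轨迹点,统计热点网格与活动高峰小时
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10000, 5000), rng.uniform(0, 10000, 5000)
hour = rng.integers(0, 24, 5000)
cells, peak_hour = hotspot_stats(x, y, hour)
print(cells, peak_hour)
```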

2017—2018年,受民航局机场司委托,作者团队对当前国内较为成熟的轻小型无人机目标监管技术进行了持续跟踪调研。调研发现,目前国内研制的基于雷达技术的非合作无人机探测系统,大多是在已有军用雷达或其他用途探测雷达基础上改进而来,探测和识别能力与国际先进水平相比仍有较大差距;少量系统虽开发了识别功能模块,但识别途径和方法有限,多依赖于回波幅度和信号起伏变化,误报率极高,经常发生将飞鸟误识别为无人机的情况,基于传统特征信息的雷达目标识别,特别是非合作目标识别变得困难甚至失效。作者团队基于研发的"机场雷达探鸟实验系统",初步验证了变换域动目标检测技术、长时积累技术以及运动模式差异分类技术,如图28所示,并获得了国家重点研发计划、国家自然科学基金民航联合研究基金、山东省重点研发计划、民航科技项目等多项课题的资助。2019年,民航局委托航科院启动了《机场净空区无人机探测技术标准》与《机场探鸟雷达系统技术标准》两项行业标准的制定工作,将从标准层面规范飞鸟与非合作无人机目标的探测、识别与技术验证。

      图  28  无人机和飞鸟目标外场探测试验

      Figure 28.  UAV and flying birds detection experiment

• 由上述研究现状可知,目前实现飞鸟与无人机目标分类和识别的主要途径有:一是采用长时间凝视观测或泛探观测模式,提高目标的多普勒分辨率;二是利用无人机和飞鸟的微动特征差异,作为实现分类和识别的有效手段;三是通过双/多基地雷达观测,利用雷达的多视角特性克服单基地雷达姿态敏感性的局限及遮挡效应的影响,形成多视角资源互补,有助于提高分辨能力和特征提取精度;四是采用深度学习智能分类和识别方法,将目标的距离-多普勒图以及时频图作为图像进行学习,通过深度卷积网络对目标特征进行深度挖掘,实现精细化描述。目前该方向存在的主要问题和难点是:

(1) 无人机与飞鸟目标建模及特性研究方面。理想状态下,目标典型微动的特征与其微动频率、微动幅度、径向距离和观测角度等因素相关,其随时间的变化规律表现为正弦调频(Sinusoidal Frequency Modulation, SFM)信号形式,因此在时频图像中也呈现出正弦变化规律。事实上,微动过程中散射中心位置、强度的变化还会对回波幅度进行非线性调制,因此在实际应用中还需要考虑目标姿态变化、目标形状、回波不连续、遮挡效应等因素的影响[80,81]。另外,雷达发射信号参数也会对微多普勒频率的观测造成影响。此外,目标或目标部件(如鸟类目标)还可能复合多种微动形式,其微多普勒不再服从简单的正弦规律,而是表现为多个正弦分量的叠加,微动回波模型更为复杂,也增加了后续特征提取的难度。

(2) 无人机与飞鸟目标特征差异提取方面。无人机和飞鸟差异特征的提取是后续分类和识别的基础和关键。两者是典型的低可观测目标,其回波信号预处理的好坏直接决定了特征描述的精细化程度。无人机目标类型多,具有不同的尺寸、形状及运动特性,其雷达散射特性和多普勒特性也不尽相同。外辐射源雷达可通过长时间相干积累提高多普勒分辨力,但其工作频率较低,在目标精细化特征尤其是微动特征描述方面优势并不突出,复杂环境下实现高分辨的微动特征提取技术有待进一步研究。此外,除提取"信号"层面的目标微动特征之外,利用"数据"层面的目标飞行轨迹特征,同样是区分飞鸟和无人机目标的有效技术途径。通常情况下,无人机目标的飞行轨迹持续时间长且相对稳定,而飞鸟目标的飞行轨迹相对较短且灵活多变,可以通过评估"数据"层面目标运动模型的变化情况探索飞鸟和无人机目标的分类识别方法。如何建立科学有效的飞行运动模型,并提出运动模式的判别准则和方法,仍需深入研究。

      (3) 无人机与飞鸟智能分类识别方面。目前的CNN模型发展迅速,适用于各种应用的改进算法层出不穷,将CNN用于无人机和飞鸟目标的分类识别前提是构建不同类型的数据集,数据集的类型完备性和标注准确性直接决定CNN网络学习的效果,因此仍需长时间开展不同条件下、不同目标的数据采集和标注。此外,如何将特征数据重构以适应深度学习模型输入层、如何结合目标识别实际需求进行模型参数调优、如何利用输出的差异特征进行场景辨识和目标识别方法设计,也是迫切需要解决的关键科学问题。

    • (1) 雷达回波信号的精细化处理是提高检测和识别性能的前提。以往的雷达目标探测技术研究,通常是只针对雷达回波数据,仅利用目标和背景的1阶和2阶统计分布特征、谱特征、空间相关特征以及目标匀速、匀加速运动特征,在事先假定的信号模型、目标模型、背景模型条件下开展检测、跟踪与识别算法设计。这种粗放型的做法能够解决雷达探测中的大多数问题,但是随着环境和目标的日益复杂,这种粗放型的做法日益明显地阻碍了雷达探测性能的进一步提升[82]。必须从雷达探测所面对的目标和背景,从雷达探测所包含的杂波干扰抑制、检测、跟踪和识别等环节中进行精细化的分析与处理,提高信息的利用率,进而获得雷达探测性能的改善[6]

(2) 信号和数据特征融合是提高分类正确率的有效途径。目标分类和识别最重要的前提是差异特征的构建和提取。外辐射源雷达长时间的观测模式易于提取高分辨的微动特征,能够获取无人机和飞鸟的精细信号特征,旋翼无人机和飞鸟目标的微动特征有明显差异,这为分类和识别奠定了基础。此外,大量实验数据表明,雷达获取的目标数据中,无人机与飞鸟的运动轨迹具有一定的区分度。目前,成熟的目标跟踪算法通过建立多种运动模型逼近目标的真实运动状态:运动轨迹相对稳定的无人机目标在飞行过程中的状态一般符合单一的运动模型,而机动性较高的飞鸟目标在飞行过程中的状态通常在多个运动模型之间切换。同时,大量的探鸟雷达数据分析表明,一定区域内的鸟类活动具有明显的节律性。由此可知,通过评估目标运动模型切换频率并结合鸟类活动节律分析的方法进行目标识别分类是可行的。因此,融合利用无人机和飞鸟的信号和数据特征,能够扩展差异特征空间,提高分类概率[77]

(3) 深度学习网络为无人机和飞鸟目标智能识别提供了新的手段。由于无人机和飞鸟运动形式复杂且受环境因素影响,其本质属性有时难以用模型和参数的方式描述清楚,从而很难获得目标数据的深层本质特征。采用深度学习等智能学习的思路,通过构建多层卷积神经网络发现高维数据中的复杂结构,已在图像识别和语音识别等领域验证了其很强的特征表述能力和较高的分类识别准确率。深度学习的优势在于用非监督式或半监督式的特征学习和分层特征提取的高效算法来替代手工特征设计。目标微动信号可以看作时间-频率二维特征数据,目标回波和运动轨迹体现在雷达P显画面上也是距离-方位(多普勒)二维图像[76,83],因此采用深度学习进行目标智能分类和识别是非常适合的。此外,仅利用时频图得到的微动特征对无人机和飞鸟信号进行分类时,可用的特征空间有限、准确率较低。为此,可采用多特征多通道CNN融合处理的思路,将目标的回波序列图、微动时频图、距离-多普勒图、变换域图、运动轨迹图等输入多通道CNN(示意见下方代码),从而更好地运用卷积神经网络进行特征学习和目标辨识[76]
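下面给出多特征多通道CNN融合的一个极简结构示意(PyTorch,分支结构、输入尺寸与网络层数均为假设,仅说明将多种特征图并行输入、各自卷积后在特征级拼接融合再分类的思路):

```python
# 极简示意:多通道CNN融合回波序列图/微动时频图/距离-多普勒图等特征图进行二分类
# (各分支结构与输入尺寸均为假设值)
import torch
import torch.nn as nn

class MultiChannelFusionCNN(nn.Module):
    def __init__(self, n_maps=3, n_classes=2):
        super().__init__()
        # 每类特征图一个独立的卷积分支
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                          nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(4), nn.Flatten())
            for _ in range(n_maps)])
        self.classifier = nn.Sequential(
            nn.Linear(n_maps * 16 * 16, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, maps):
        # maps: 长度为n_maps的特征图列表,每个元素形状为(批, 1, 高, 宽)
        feats = [branch(m) for branch, m in zip(self.branches, maps)]
        return self.classifier(torch.cat(feats, dim=1))   # 特征级拼接融合后分类

# 用法示意:回波序列图、微动时频图、距离-多普勒图各一幅,输出"无人机/飞鸟"两类得分
net = MultiChannelFusionCNN()
maps = [torch.randn(1, 1, 64, 64) for _ in range(3)]
print(net(maps).shape)   # torch.Size([1, 2])
```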

      (4) 新体制雷达为目标精细化处理和探测识别一体化奠定了硬件基础。当前,雷达硬件技术,特别是硬件的计算能力和存储能力的进步为雷达目标探测精细化、智能化信息处理技术的发展提供了坚实的硬件基础;同时,数字阵列体制、外辐射源雷达、全息雷达、MIMO雷达、软件化雷达、智能化雷达等新的雷达体制,扩展了可利用的信号维度,为雷达目标探测精细化、探测识别一体化处理提供了灵活的软件框架[84]。例如,全息数字阵雷达、MIMO雷达等具有多波束凝视和泛探能力的雷达,能够实现时域(跨距离、跨波束、跨多普勒的长时间相参积累)、空域(宽空域全数字波束形成全覆盖)和频域(长时间凝视改善多普勒分辨力,提取精细化运动特征)信息的联合处理,从而实现“低慢小”等低可观测目标的可靠检测和识别[85-88]。外辐射源雷达利用第三方辐射源,本身不发射信号,无需专门申请频率,具有成本低、绿色环保、无电磁污染的特点,在机场等电磁频谱敏感区域的无人机监视尤具有独特优势。此外,复杂环境下,依靠雷达单一探测设备的目标识别概率都不高,需要综合利用不同传感器的信息,如光电、声等,弥补单一传感器的局限性,提高识别效率与精度。

    • 本文对近年来复杂场景下飞鸟与无人机目标检测和识别这一热点研究领域的主要技术手段和系统进行了梳理和总结,分析了各自的优缺点,给出了数据仿真分析结果,并从信号精细化处理、信号和数据特征融合、深度学习智能识别以及新体制雷达发展等几个方面,对未来实现有效监视和识别飞鸟和无人机目标进行了展望。由于飞鸟和无人机目标均属于“低慢小”目标,回波具有低可观测性,并且隐藏在复杂背景中,因此,对两者的有效监视和识别也极大考验和促进雷达新技术的发展和应用,相关技术也能够为其它复杂背景下的弱小目标探测提供思路。期望更多的学者能够对此领域进行深入研究,寻找更好地提高飞鸟和无人机目标探测和识别能力的方法和手段。

    • Bird strikes refer to events in which an aircraft collides with birds during takeoff, landing, or flight or events in which normal flight activities are affected by animal activities[1]. Approximately 21,000 bird strikes occur worldwide every year, causing economic losses of approximately US$1.2 billion. With the number of flights continuously increasing and ecological environment continuously improving, the pressure to prevent bird strikes at airports is increasing. The losses caused by bird strikes far exceeds losses caused by other reasons. Flying birds are the traditional security threats to an aircraft at the takeoff and landing stages. In recent years, low-altitude aircrafts that are represented by consumer drones, such as Unmanned Aerial Vehicles (UAVs), have developed rapidly and many airports in china have continuously experienced UAV “black flight” disturbances[2]. The illegal flying of drones has become a new problem that needs attention; this problem coupled with bird strikes has become “two hidden dangers” that threaten the safety of flights taking off and landing in the clear zones of airports, as shown in Fig. 1.

      Figure 1.  The threat of flying birds and UAVs to civil aviation aircraft

      Traditionally, airports relied on manual labor for observing birds. However, bird strikes usually occur at dawn, dusk, and night, thus making visual observations difficult. Using radar systems for bird detection, we can overcome the influence of unfavorable factors such as bad weather, long distance, and diurnal changes; in a true sense, radar systems can obtain real-time information such as the distance, position, speed, and height of birds in the sky without interruption; thus, radar systems can continuously monitor the situation of birds in the sky. After more than 30 years of development, relatively mature “radar bird detection systems” have been developed worldwide. The most representative radar bird detection system is the Merlin radar in the United States, Accipiter radar in Canada, Robin radar in the Netherlands, and “airport radar bird detection and driving linkage system” developed by the Chinese Academy of Civil Aviation Science and Technology (also referred to as “Academy of Aeronautics and Astronautics”). In August 2018, the Chinese Civil Aviation Administration issued the “Guide to the Airport New Technology Directory (2018–2020)”, which focused on the safety and efficiency of airport operations and took into account the convenience and construction safety at airports; in the guide, 25 new technologies divided into four categories and bird strike prevention technologies are listed. At present, most radar systems for bird detection use a combination of the S-band horizontal and X-band vertical scanning radars and are linked with some type of bird repellent equipment. However, airports report that generally the detection rate of the current bird detection radar systems is low and less than 75%, sometimes false alarms occur, the linkage system has no visible bird-repelling effects, and the detection systems cannot meet the needs of the airports.

      The emergence and rapid development of UAVs and other low-altitude aircraft have posed severe challenges to the safety of air routes and urban security[3]. Since 2015, the state, various ministries and commissions, and the Chinese Civil Aviation Administration have successively issued numerous regulations and standards to maintain flight safety in the clearance protection areas of civil aviation airports and strengthen the management of drone operations; these standards include the “Interim Regulations on the Management of Unmanned Aircraft Flight (Request for Opinion Draft)”, “Regulations on the Operation of Light and Small UAVs (Trial)”, “Regulations on the Real-Name Registration Management of Civil Unmanned Aircraft”, “Unmanned Aerial Vehicle Cloud System Interface Data Specification”, and “UAV Fence”. In the “Guidelines for New Airport Technology Directory (2018–2020)” issued by the Civil Aviation Administration in August 2018, “Bird strike prevention technologies” and “UAV countermeasure technologies” are listed. General Secretary Xi Jinping emphasized that it is necessary to hasten the construction of a scientific and technological safety early warning and monitoring system and accelerate the promotion of relevant legislative work in areas such as drones. However, at present, we lack effective technology and means for the surveillance and especially identification of drones and birds. Presently, the “black flight” and “disturbance” phenomena are very common. Some UAVs are prone to deviate from the scheduled route. If such aircraft enter the airspace over important areas or are used by terrorists to carry dangerous weapons, then public safety will be seriously threatened (Fig. 2).

      Figure 2.  UAVs endanger public safety

      The flight information of drones that are “cooperatively” connected to the management network is linked to management systems, such as the “drone cloud” in real time, and supervising authorities can query and record drones that have strayed into important areas. For UAVs, manufacturers can monitor the flight status of related brand products by monitoring the “flight control protocol”. The above-mentioned cooperative supervision technology can presently cover more than 95% of consumer-grade drones, and the remaining noncooperative aircrafts that comprise less than 5% of the total aircrafts and are difficult to prevent from straying are the focus of this study[4]. For example, in airport clearance areas, airports must take different countermeasures after discovering a “low, slow, and small” intrusion target such as a drone or bird. After discovering the target, the airport first issues an early warning and guides the takeoff and landing of flights to avoid damage and coordinates with the airport and local public security for the disposal of the target. After finding a bird target, the airport will use various types of bird repellent equipment or take certain evasive measures based on certain bird repellent strategies to drive the birds out of the dangerous area. Therefore, UAV and bird targets must be identified and evaluated.

      There are many types of UAV and flying bird targets, and their varying size, shape, and motion characteristics result in the targets having different radar scattering and Doppler characteristics. These targets are typically “low, slow, and small”[5] with low observability[6,7]. It is embodied in the following: (1) small target size, small Radar Cross Section (RCS), slow speed, low signal-to-clutter ratio, and their target echo being hidden in strong clutter or background noise; (2) the target’s maneuvering flight causes Doppler diffusion, and the target echo is difficult to accumulate; (3) the radar echo is weak, and target feature extraction and estimation are difficult; (4) the radar refinement process faces challenges, and target classification and recognition are difficult. The current detection methods for “low, slow, and small” targets mainly involve Constant False Alarm Detection (CFAR), Moving Target Indicator (MTI), Moving Target Detection (MTD), and coherent accumulation, feature detection, etc.. Due to the complex environment (airports, cities, seaside, etc., problems may arise when using these methods, such as many false targets, strong target mobility, short effective observation time, etc.. The low probability, high clutter false alarm, low accumulation gain, and unstable tracking lead to several issues in target detection and judgment (Fig. 3), making bird and UAV detection a worldwide problem. New technologies and methods are rarely reported publicly due to some sensitive key technologies. Thus, “visible (strong detection ability) and clearly distinguishable (high recognition probability)” noncooperative drones, as well as new surveillance methods and technologies for flying birds and “low, slow, and small” targets must be developed to achieve refined target description and recognition.

      Figure 3.  “low, slow and small” target radar echo

      This study focuses on the research progress of rotor drone and bird target detection, as well as recent recognition technologies in complex scenes; moreover, it analyzes the problems in this technical field and points out its development trend. The paper is arranged as follows: Section 2 introduces the main methods of bird and drone detection, including the research and development of bird detection radars worldwide, radio monitoring of noncooperative drone targets, audio detection, photoelectric detection, active and passive radar detection and identification methods; related system application situations are also introduced. Section 3 focuses on the technical methods of target detection and classification and recognition of flying birds and UAVs. On the one hand, it starts from the aspects of target characteristic recognition and feature extraction and introduces echo modeling and micromotion characteristic recognition methods, as well as targets in general exploration mode. Mobile feature enhancement and extraction technology and distributed multiview micromotion feature extraction technology are designed to improve detection ability and achieve a refined feature description of the target. On the other hand, machine learning or deep learning methods are combined to propose an effective technical approach for intelligent target recognition. In accordance with the difference of motion trajectory, the purpose of distinguishing UAVs and flying birds is achieved. Lastly, the work of the research team in this area is introduced. Section 4 and Section 5 propose future research development trends in view of the shortcomings of existing research. Section 6 summarizes the full text.

    • Flying birds are a traditional safety hazard in airport clearance areas. Bird strike prevention has long been an international problem that threatens flight safety. As the number of flights continues to grow and the ecological environment continues to improve, the pressure on bird strike prevention in airports is increasing. Radar is an important technical means for bird observation. Its advantage is that it is not restricted by factors, such as visibility, and can operate automatically in all kinds of weather. Meteorological and air traffic control radars were first used to monitor bird activities, especially the migration of large groups of birds in a large area (a few hundred kilometers); however, such large radars have low resolution and are not suitable for bird observation around small airports (10 km). At present, the more widely used radar bird detection system usually adopts maritime radar with a smaller volume and lower power consumption. By means of the transformation of its antenna and signal and data processing, it can achieve high resolution in small areas and track the movement of small bird flocks or single birds.

      Fig. 4 shows three foreign bird detection radars and their acquired radar images containing flying bird targets[8,9]. In 2005, Braun[10] reviewed and compared the characteristics and uses of different types of bird detection radars. Internationally, typical bird detection radar systems include the Merlin radar in the United States, the Accipiter radar in Canada, and the Robin radar in the Netherlands. As shown in Fig. 4, the Merlin radar uses the S-band horizontal scanning radar to cover the low-altitude airspace around the airport, the X-band vertical scanning radar to cover the takeoff and landing channels of aircraft, and a professional bird information extraction algorithm to separate the bird target information; the Accipiter radar uses an X-band parabolic antenna, which has a narrow beam (4º), and the system can obtain accurate height information although with a limited detection range; in addition to the S-band horizontal and X-band vertical scanning radars, a frequency-modulated continuous wave radar, which is specially designed for the judgment of flying bird targets, is used in the Robin radar. Tab. 1 compares the technical characteristics and deployment methods of the three bird detection radars. In China, the Academy of Aeronautics and Astronautics and the Beijing University of Aeronautics and Astronautics have performed feasibility verification of radar bird detection since 2006 and conducted related research using the video images obtained by the navigation radar[11,12]. In recent years, the Second Academy of Aerospace, the Beijing Institute of Technology, the National University of Defense Technology, and other units have successively conducted radar bird detection research based on advanced radar technologies, such as phased array, multi-inputs and multi-outputs, and holographic radar[13]. However, most bird detection radars can only use the target echo intensity to roughly classify them according to flock of birds, large birds, medium birds, and small birds; moreover, they do not have the ability to distinguish between flying birds and drone targets.

      Table 1.  Description of three typical foreign avian radar products

      Product name Merlin Accipiter Robin
Technical characteristics Combination of horizontal and vertical scanning radars, solid-state transmitter Combination of horizontal and vertical scanning radars, additional parabolic antenna Combination of horizontal and vertical scanning radars, magnetron transmitter
Deployment method Horizontal scanning radars are usually deployed close to the center of the airport and are responsible for low-altitude warnings around the airport; vertical scanning radars are usually deployed on the center side of each runway and are responsible for monitoring flight take-off and landing channels. Radar deployment is, of course, a complex system problem; many factors, such as the clearance obstacle restriction surface, ground obstructions, and power supply, need to be considered.

      Figure 4.  Typical avian radar systems

    • General consumer drones are cheap, small, and light and are an entry method to interfere with public safety and destroy strategic locations. For example, compared with bird strikes, drones are made of hard materials and thus has more destructive power when they collide with high-speed aircraft[14]. Countries around the world have invested substantial manpower and material resources to conduct research on UAV countermeasures and formed a batch of new technologies and equipment[3]. Existing detection technologies are mainly divided into four categories, namely, radio monitoring, audio detection, photoelectric detection, and radar detection technology[15]. The current research status of each technology is described as follows:

      (1) Radio monitoring. Radio monitoring uses radio frequency scanning technology to conduct real-time monitoring, analysis, and direction finding for the frequency bands of civil drone remote control and image transmission signals. This technology can obtain the UAV control signal waveform by scanning this frequency band; then, it compares the obtained waveform with the UAV control waveform in the system library to determine the presence of a remote-controlled UAV and its type[16-18]. Rohde & Schwarz, the National University of Defense Technology, and China Power 41 Institute have all developed relevant systems using radio detection technology. Some systems have been tested in Chinese Baiyun Airport, Bao’an Airport, and Shuangliu Airport and have achieved good results. A typical system is shown in Fig. 5(a). However, single-station radio monitoring measurement usually only has the target position information, and the measurement accuracy is low. For cruise-type electromagnetic silent UAVs, the monitoring method may be invalid.

      Figure 5.  Typical noncooperative UAV detection system

      (2) Audio detection. Each drone has a unique propeller rotating sound called audio fingerprint. Collecting audio fingerprints of various drones can establish a database. By matching the audio data detected by the directional microphone with the database, it can not only detect drone but can also recognize its type. The Affiliated Research Institute of Electronic Technology in Korea, the France-German Institute in St. Louis, France, and the University of Obuda, Hungary, have all developed a UAV countermeasure system based on audio detection technology and proposed related solutions. A typical system is shown in Fig. 5(b)[19-21]. However, audio detection technology is susceptible to noise and clutter and has good effects on large drones. Small- and medium-sized drones have low sound. In addition, noise interference is serious, resulting in poor detection results.

(3) Photoelectric detection. Photoelectric detection technology mainly uses visible light and infrared images to complete target detection and recognition. The German Optoelectronic System Technology and Image Development Research Institute and the Changchun Institute of Optomechanics have developed UAV detection systems based on photoelectric technology. However, photoelectric detection is susceptible to ambient light interference, and the photoelectric signal of "low, slow, and small" targets is weak with a low signal-to-noise ratio; combined with the shielding effects of large and small targets and the strong infrared radiation of aircraft engines in the airport environment, these factors further increase the difficulty of photoelectric detection, identification, and tracking[22-24]. Therefore, photoelectric detection technology is commonly used as a supplementary confirmation device for radar and radio monitoring. Fig. 5(c) shows a typical radar-photoelectric composite detection system.

      (4) Radar detection. (a)Active radar detection: As the main means of target detection and surveillance, radar is widely used in national defense and public security fields, such as air and sea target surveillance and early warning detection. Although traditional radar has insufficient detection efficiency for “low, slow, and small” targets, it is still widely used as an important means of detecting air targets. With the rapid development of radar systems and signal processing technology, radar has the potential to acquire the characteristics of weak targets, providing a new way to detect and recognize low observable targets[7]. In terms of systems, the development of low-cost new radar detection systems has attracted the attention of defense companies and scientific research institutes around the world. The German Institute of Applied Sciences (FHR) uses Ku-band MIMO radar to target the mutual coupling of UAV target signals and clutter and trains the detector to detect thresholds in clutter and nonclutter areas through environmental data and thus improve the detection performance of radar on low-speed targets[25]. The “falcon shield” electronic warfare system of the British defense company, Selex ES, can not only detect, locate, identify, jam, and combat low-altitude slow-flying small drones (“low, slow, and small” targets) but also take over the command right of the target drone and guide it to land safely. The system consists of a radar, a photoelectric surveillance system, command and control components, an electronic reconnaissance system, and a radio frequency threat management system. The radar uses coherent signal processing algorithms to improve the target’s anticlutter ability, as shown in Fig. 6(a). Blighter’s anti-UAV defense system uses radar to locate the UAV accurately shown in Fig. 6(b) and then transmits a directional high-power interference radio frequency to interfere with the UAV to force it to land[26]. The Swedish company, Saab, has expanded the “Giraffe” AMB radar capability to detect, classify, and track small drones flying at low speeds while providing aerial surveillance capabilities in a conventional mode; it can also handle six drones simultaneously in complex environments[27]. (b)Passive radar detection: Active radar requires frequency permission and large electromagnetic radiation. Certain application scenarios, such as airport clearance areas, have strict electromagnetic access systems. Management departments are usually cautious about the use of active radar technology. Even scientific experiments require a strict approval process. As an economic and safe noncooperative target detection method, external radiation source radars have the advantages of low power consumption, concealment, and good coverage and thus have attracted widespread attention in the academic community in recent years. Especially with the improvement of hardware performance and the development of available external radiation sources, such as communications and wireless networks, external radiation source radar has attracted widespread attention and in-depth research in various countries[28,29]. 
The German Academy of Applied Sciences, the Swiss National Defense Equipment Procurement Agency, the French National Aerospace Research Center, Wuhan University, and other domestic and foreign scientific research institutions have used television, digital television, digital audio broadcasting, and satellite and mobile communication signals as external radar sources for passive radar detection[30].

      Figure 6.  International typical “low, slow and small” target detection system

In the detection of low-altitude targets, especially the detection of UAVs, research based on external radiation source radar has received widespread attention. The University of Strathclyde in the United Kingdom[28] conducted research on the classification and recognition of helicopter targets with external radiation source radar based on global navigation satellite system signals and realized helicopter target classification. The University of Oulu in Finland[31,32] conducted a helicopter detection experiment with external radiation source radar based on DVB-T signals and observed the micro-Doppler effect of the rotation of helicopter blades; furthermore, they studied the relationship between the sideband spectral line interval and the blade rotation rate. The Warsaw University of Technology[33] also conducted similar experimental research based on a DVB-T radiation source and deduced a mathematical model of the echo of the helicopter's rotating blade under the bistatic system. South Africa's Science and Industry Research Council[34] conducted a propeller aircraft detection experiment based on frequency-modulated signals and successfully extracted the micro-Doppler features of the propeller echo. Martelli et al.[35] of the University of Rome studied the use of the Digital Video Broadcasting-Terrestrial (DVB-T) external radiation source radar AULOS to monitor drones and civil aircraft in an airport area simultaneously. The experimental results show that, through long-term coherent accumulation, the single-station detection range of the radar for flying drones (DJI Phantom 4 and Mavic) around the airport can reach approximately 3 km, as shown in Fig. 7. At the same time, they also used Wi-Fi signals to detect UAVs; research has shown that UAVs can be tracked stably in 3D space, but the accuracy is poor[36]. Rsara et al.[37] from the University of Science and Technology Malaya analyzed the feasibility of using DVB digital TV satellite signals for UAV forward scattering detection.

      Figure 7.  DVB-T passive radar AULOS UAV detection results[35]

In China, the Radio Wave Propagation Laboratory of Wuhan University[38-41] conducted further research on the use of external radiation source radar to detect drones and used Digital Television Terrestrial Multimedia Broadcasting (DTMB) signals for UAV detection experiments. The results showed that, after clutter suppression, the detection distance for a typical DJI Phantom 3 UAV exceeds 3 km, as shown in Fig. 8. Fig. 9 shows an example of the application scenario of UAV target surveillance based on digital TV signal external radiation source radar.

      Figure 8.  UAV detection experiment based on DTMB passive radar[38]

      Figure 9.  UAV target surveillance application based on digital TV signal passive radar

Related foreign institutions have conducted research on the classification and identification of UAVs and formed products. Representative examples include the following: (1) The German Institute of Applied Sciences (FHR)[42] used the Global System for Mobile communications (GSM) signal to build a multichannel external radiation source radar detection system called GAMMA-2 (Fig. 10) and explored the possibility of using external radiation source radar to obtain the micromotion characteristics of "low, slow, and small" targets. (2) The French National Aerospace Research Center (ONERA)[43] used ultrahigh-frequency DVB-T signals with multifrequency and single-frequency network detection to achieve continuous positioning and tracking of drones at 3 km and could clearly observe the micro-Doppler effect of multirotor UAVs, thus pushing radar detection of "low, slow, and small" targets toward refined applications. (3) The Dutch TNO research institute conducted micro-Doppler measurement experiments for UAVs and used classic time-frequency analysis methods to extract micromotion features. (4) The University of London in the United Kingdom[44], addressing the "low, slow, and small" characteristics of UAVs, developed the three-receiver S-band radar system NetRAD (Fig. 11), which uses the micro-Doppler characteristics of multirotor UAVs to compensate for the lack of detection efficiency in the time domain and to improve target positioning and tracking performance. (5) The new-generation radar of the Dutch company Robin[45] uses frequency-modulated continuous wave waveforms and utilizes micromotion characteristics to distinguish between drones and flying bird targets; it thus provides both drone detection and recognition functions (Fig. 12). (6) The British company Aveillant[46] designed a holographic radar using a digital array system (Fig. 13), which mainly adopts a fixed, staring detection mode to extend the dwell time and can obtain a refined feature expression of the target. The system uses a machine learning algorithm based on a decision tree classifier to classify UAV and non-UAV targets, and the radar's software system can analyze the trajectory of the target to distinguish it from other targets.

      Figure 10.  German FHR multi-channel passive radar system GAMMA-2[42]

      Figure 11.  NetRAD radar system of university of London[44]

      Figure 12.  The Robin birds/UAV recognition radar in Netherlands[45]

      Figure 13.  The holographic radar system from Aveillant, UK[46]

    • Drones and birds are nonrigid targets. The rotation of the drone’s rotor and the flapping of the birds’ wings introduce additional modulation sidebands near the Doppler frequency shift signal of the radar echo generated by the translation of the main body. This signal called micro-Doppler signal produces the micro-Doppler effect[47]. The micro-Doppler signal reflects the instantaneous characteristics of the Doppler frequency shift and characterizes the instantaneous radial velocity of the target’s micromovement. The information contained in the micro-Doppler signal can reflect the shape, structure, and posture of the target, as well as electromagnetic parameters of the surface material, the force state, and the unique movement characteristics[48]. The echo of the rotor drone is the superposition of the Doppler signals of the main body and the rotor components. The micromotion characteristics of the drones of different types and numbers of rotors vary.

      (1) Radar echo modeling and characteristic recognition of flying birds and UAV targets. The Doppler spectrum of the echoes of flying birds and rotary-wing UAV targets appears to be broadened, and the flapping of the wings of flying birds and the rotation of the UAV’s rotor produce modulation characteristics of the echo, which are time varying and periodic, reflect the characteristics of micromotion[49]. The modeling and fine feature cognition of the micro-Doppler features of flying birds and rotary-wing UAVs are the prerequisites for subsequent detection and target classification. The echo signal of the rotary-wing UAV is shown as the superposition of the Doppler signal of the main body of the UAV and the micro-Doppler signal of the rotor component, as shown in Fig. 14. The parameterization of the UAV target body motion and micromovement with micromovement components is modelled, and the corresponding relationship between Doppler frequency, rotation rate, number of blades, and size is derived[50]. For a multirotor UAV, assuming that the RCS of each rotor blade is the same and both are 1 and based on the helicopter rotor model, the multirotor UAV target echo model is[51,52]

      Figure 14.  Micromotion characteristics of six-rotor UAV[53]

      $$ \begin{split} {s_\Sigma }(t) =& \sum\limits_{m = 1}^M {L\exp \left\{ { - {\rm{j}}\frac{{4\pi }}{\lambda }\left[ {{R_{{0_m}}} + {z_{{0_m}}}\sin {\beta _m}} \right]} \right\}} \\ & \cdot \sum\limits_{k = 0}^{N - 1} {\sin {\rm{c}}} \bigg\{ \frac{{4\pi }}{\lambda }\frac{L}{2}\cos {\beta _m}\cos ( {\varOmega _m}t + {\varphi _{{0_m}}} \\ & + k2\pi /N ) \bigg\}\exp \left\{ { - {\rm{j}}{\varPhi _{k,m}}\left( t \right)} \right\} \end{split} $$ (1)
      $$ \begin{split} {\varPhi _{m,k}}\left( t \right) & = \frac{{4{\rm{\pi }}}}{\lambda }\frac{L}{2}\cos {\beta _m}\cos \left( {{\varOmega _m}t + {\varphi _{{0_m}}} + 2k{\rm{\pi }}/N} \right),\\ &(k = 0,1,2,···,N - 1;m = 1,2,···,M) \end{split} $$ (2)

where M is the number of rotors, N is the number of blades on a single rotor, L is the length of the rotor blades, ${R_{{0_m}}}$ is the distance from the radar to the center of the mth rotor, ${z_{{0_m}}}$ is the height of the mth rotor blade, ${\beta _m}$ is the pitch angle from the radar to the mth rotor, ${\varOmega _m}$ is the rotation angular frequency of the mth rotor, and ${\varphi _{{0_m}}}$ is the initial rotation angle of the mth rotor.

The instantaneous Doppler frequency of the echo signal can be obtained by calculating the time derivative of the phase function of the signal, and the equivalent instantaneous micro-Doppler frequency of the kth blade of the mth rotor is

      $$ {f_{m,k}}\left( t \right) = - \frac{{L{\varOmega _m}}}{\lambda }\cos {\beta _m}\sin \left( {{\varOmega _m}t + {\varphi _{{0_m}}} + 2k{{\pi }}/N} \right) $$ (3)

Eq. (3) shows that the micro-Doppler characteristic of multirotor UAVs is composed of $M \times N$ sinusoidal curves and is affected by carrier frequency, number of rotors, rotor speed, number of blades, blade length, initial phase, and pitch angle. Among these factors, only the carrier frequency, blade length, and pitch angle are related to the micro-Doppler frequency amplitude, whereas the number of rotors, rotor speed, blade number, and initial phase affect the amplitude and phase of the micro-Doppler characteristic curve. The difference between the micromotion characteristics of flying birds and UAV targets is analyzed further. Fig. 15–Fig. 17 show the preliminary simulation comparison results of the micromotion signals of the two targets. Fig. 15 shows the micro-Doppler time-frequency diagrams of a bird, a quadrotor UAV, and an unmanned helicopter observed by an X-band radar. The comparison reveals that the micro-Doppler of the bird target is mostly concentrated in the low-frequency region and that its flapping period is long, with the flapping motion characteristics closely related to the bird type; the micromotion characteristics of UAV targets show periodic features, and the rotation frequency and period differ with the rotation speed and length of the rotor, which clearly distinguishes them from the micromotion characteristics of bird targets. Fig. 16 shows the analysis of the micromotion characteristics of a simulated flying bird target observed by external radiation source radar. The center frequencies of the external radiation source are 658 and 900 MHz; the micromotion characteristics are not evident due to the low frequency and long wavelength of the radiation source. Fig. 17 shows the results of the UAV micro-Doppler effect experiment conducted by Wuhan University in the early stage, detecting the DJI M100 UAV. The UAV has four rotors, two blades per rotor, and a blade length of 17.25 cm, and the maximum rotor speed is 125.9 rps when the UAV is in a hovering state. The modulation echo of each rotor blade is prominent, as manifested by the broadening of the spectrum. The micro-Doppler effect of the UAV can be observed, and the rotation speed of each rotor is different, which explains why multiple Doppler units are occupied.

      Figure 15.  Differences in micromotion characteristics of flying birds and UAVs

      Figure 16.  Analysis of radar m-D characteristics of simulated flying bird

      Figure 17.  Analysis of measured m-D characteristics of UAV target (center frequency of external radiation source 658 MHz)[39]

Fig. 18 shows the comparison of the micro-Doppler characteristics of the DJI S900 UAV and an owl[54]. The UAV's rotor rotates much faster than the owl flaps its wings; thus, the UAV's micromotion period is much shorter than that of the bird, but the intensity is weaker than that of the bird target. In addition, the irregular wing flapping caused by the maneuvering of the bird makes its micro-Doppler more complicated. Clear differences between the micromotion characteristics of drones and flying birds are observed, thereby providing an important basis for the accurate detection and identification of drones and flying birds; such detection and identification methods do not rely on prior information, are highly reliable, and have good separability. At present, the main difficulties in using micromotion features for the target recognition of flying birds and UAVs are the weak echoes of "low, slow, and small" targets, the difficult extraction of micromotion features, the low time-frequency resolution, the difficult classification and recognition, and the unclear correspondence between the target and micromotion features.

      Figure 18.  Micro-Doppler (24 GHz) for UAV and bird targets[54]
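As a numerical illustration of Eq. (3) above, the following sketch (with assumed quadrotor parameters and geometry, not the measured configurations discussed in this section) generates the $M \times N$ sinusoidal micro-Doppler curves of a multirotor UAV:

```python
# Minimal sketch: micro-Doppler curves of a multirotor UAV per Eq. (3)
# (carrier wavelength, rotor speeds, and geometry are assumed values)
import numpy as np

LAM = 0.03            # wavelength (m), X-band assumption
M, N, L = 4, 2, 0.12  # rotors, blades per rotor, blade length (m)
t = np.linspace(0.0, 0.05, 2000)

rng = np.random.default_rng(1)
omega = 2 * np.pi * rng.uniform(80.0, 120.0, M)   # rotor angular rates (rad/s)
beta = np.deg2rad(10.0)                           # pitch angle (assumed equal for all rotors)
phi0 = rng.uniform(0.0, 2 * np.pi, M)             # initial rotation angles

# f_{m,k}(t) = -(L*Omega_m/lambda) * cos(beta_m) * sin(Omega_m*t + phi0_m + 2*k*pi/N)
curves = np.array([[-(L * omega[m] / LAM) * np.cos(beta)
                    * np.sin(omega[m] * t + phi0[m] + 2 * np.pi * k / N)
                    for k in range(N)] for m in range(M)])
print(curves.shape)                 # (M, N, len(t)): M*N sinusoidal curves
print(np.abs(curves).max())         # maximum micro-Doppler frequency (Hz)
```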

      (2) Maneuver feature enhancement and extraction technology in general exploration mode. Utilizing the pan-exploration observation mode can obtain high signal processing gain and speed resolution by prolonging the accumulation time, and a multidomain (time-frequency, transform, and characteristic domains), multidimensional (echo sequence, two-dimensional micromotion time-frequency map, range-Doppler map, transform domain map), multiview (multireceiving station) analysis of target motion features can achieve a high-precision extraction of complex micromotion features and an accurate description of maneuvering features[7].

A bistatic configuration of an external radiation source radar is assumed, as shown in Fig. 19[29]. T0 and R0 are the transmitting and receiving stations, respectively, and L is the baseline distance. Suppose the target is at point O at the initial time and moves to point O' at time t; $ R_{T_0}$ and RT(t) are the distances between the target and the transmitter at the initial time and at time t, respectively, and Rr0 and Rr(t) are the distances between the target and the receiver at the initial time and at time t, respectively. The target moves with uniform acceleration, with initial velocity v0 and acceleration a0; β0 is the bistatic angle, and $\varphi $ is the target motion direction. If the baseband signal is s(t), then the delay between the target echo and the direct wave at time t is τ=R(t)/c0, where $R(t) = {R_T}(t) + {R_r}(t) - L$ , c0 is the speed of light, and λ is the wavelength. By expanding R(t) in a Taylor series at t=0 and ignoring terms above second order, the echo signal model can be simplified to

      Figure 19.  Configuration diagram of bistatic passive radar

      $$\begin{split} r(t) =& {A_0}s\left[ {t - \frac{{R(0) - v{t_m} - 1/2at_m^2}}{{{c_0}}}} \right]\\ & \cdot\exp \left[ {{\rm{j}}2{{\pi }}{f_c}\frac{{R(0) - v{t_m} - 1/2at_m^2}}{{{c_0}}}} \right] \end{split}$$ (4)

where, fc is the carrier frequency; A0 is the amplitude of the echo signal; and t and tm are the fast time and slow time, respectively, after the two-dimensional time division processing of the external radiation source signal. Taking China Mobile Multimedia Broadcasting (CMMB) as an example, the signal standard stipulates that a 1 s signal constitutes one frame divided into 40 time slots; each time slot is 25 ms long and consists of one beacon and 53 Orthogonal Frequency Division Multiplexing (OFDM) symbols, and the OFDM symbol is used as the processing unit to divide fast time and slow time.

      $$\left. \begin{aligned} &R(0) = {R_{{T_0}}} + {R_{{r_0}}} - L \\ & v = 2{v_0}\cos \left( {\phi + {\beta _0}/2} \right)\cos ({\beta _0}/2) \\ &a = - \frac{{v_0^2{{\sin }^2}(\phi + {\beta _0})}}{{{R_{{T_0}}}}} - \frac{{v_0^2{{\sin }^2}({\beta _0})}}{{{R_{{r_0}}}}} \\ & \qquad + 2{a_0}\cos \left( {\phi + {\beta _0}/2} \right)\cos \left( {{\beta _0}/2} \right) \end{aligned} \right\} $$ (5)

      During a long observation time, the time delay and Doppler frequency of the target are distributed within a certain range, that is, the same target may correspond to multiple range and Doppler units, resulting in range and Doppler migration, respectively. The size of the range migration depends on the projection value of the target velocity on the bisecting line of the double base angle and the tangential projection value of the platform velocity on the bisecting line of the observation angle. Doppler migration depends on the target, the tangential component of the platform velocity and the target distance, the distance between the transmitting station and the receiving station, the baseline distance, and the tangential component of the target and platform acceleration.
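As a rough numerical check of how easily migration occurs during long-time integration (all values below are assumed for illustration and are not taken from the experiments in this paper), one can count how many range and Doppler cells a maneuvering target crosses within one coherent processing interval:

```python
# Minimal sketch: range/Doppler cell migration during a long integration time
# (bandwidth, wavelength, dwell time, and target motion are assumed values)
B = 8e6          # signal bandwidth (Hz) -> range resolution c/(2B)
lam = 0.5        # wavelength (m), UHF-band passive radar assumption
T = 2.0          # coherent integration time (s)
v, a = 15.0, 5.0 # bistatic-equivalent radial velocity (m/s) and acceleration (m/s^2)

c0 = 3e8
range_res = c0 / (2 * B)                 # about 18.75 m
doppler_res = 1.0 / T                    # 0.5 Hz

range_walk = v * T + 0.5 * a * T**2      # meters moved during the dwell
doppler_walk = 2 * a * T / lam           # Doppler change (Hz) during the dwell

print(range_walk / range_res)            # number of range cells crossed
print(doppler_walk / doppler_res)        # number of Doppler cells crossed
```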

      If the target maneuvers with high-order motions, then the long-time signal model can be unified as

      $$\begin{split}&{\rm{}}\\ &r(t,{t_m}) = {A_0}s\left[ {t - \frac{{R({t_m})}}{{{c_0}}}} \right]\exp \left[ {{\rm{j}}2{{\pi }}{f_c}\frac{{R({t_m})}}{{{c_0}}}} \right] \end{split}$$ (6)

      The third-order polynomial can describe the motion of most maneuvering targets, namely,

      $$R({t_m}) = R(0) + v{t_m} + at_m^2/2 + gt_m^3/6$$ (7)

Traditional range and Doppler migration compensation methods are mostly segmented and stepwise; that is, range migration compensation is performed first, followed by Doppler migration compensation. Such algorithms are complex, and the subsequent compensation performance is restricted by the previous results, so long-time, fast, coherent accumulation of echo signals cannot be achieved. The Long-Time Coherent Integration (LTCI) method based on parameter search[55-57] extracts the target observations in the two-dimensional range-slow time plane according to the preset search ranges of the target motion parameters (initial range, velocity, and acceleration) and then selects the appropriate transformation parameters in the corresponding transform domain to match and accumulate the observations; however, the major problem of this method is that it requires a multidimensional parameter search, which leads to a large amount of calculation and increases the difficulty of engineering applications.

To this end, a long-term coherent accumulation method based on nonparametric search can be used to improve the efficiency of the algorithm through ideas such as fast order-reduction processing and polynomial phase compensation. Ref. [58] proposed a long-term coherent accumulation detection method (NUSP-LTCI) for maneuvering targets based on NonUniform resampling and Scale Processing (NUSP), as shown in Fig. 20. This method is used for the detection of low-altitude flying targets, and it reduces the high-order phase signal to a first-order phase signal through only one nonuniform sampling order-reduction operation. Compared with the traditional successive order-reduction method, this method reduces the influence of cross terms and improves the accuracy of parameter estimation. Parameter search and matching calculations are unnecessary, reducing the amount of computation and making the method suitable for engineering applications. Moreover, the method can simultaneously compensate for range and high-order Doppler migration, thereby achieving long-time coherent accumulation and improving detection performance for maneuvering targets in a strong clutter background. Fig. 21 shows a comparative analysis of the radar coherent accumulation results for a low-altitude helicopter target. The NUSP-LTCI method is considerably better than the traditional MTD and the classic Radon-Fourier transform in terms of clutter suppression and parameter estimation accuracy.

      Figure 20.  NUSP-LTCI processing[58]

      Figure 21.  Comparative analysis of radar coherent integration results of low-altitude flying targets[58,60]

      The nonuniform sampling reduction operation is defined as[59]

      $$S(t_m') = s({t_p} + t_m')s({t_p} - t_m')$$ (8)

      where $t_m'$ is the slow time after nonuniform sampling; tp is the center of the nonuniform sampling time interval; $t_m'{\rm{ = }}\sqrt {c{\tau _m}} $ , where c is the scale factor, which controls the density of nonuniform sampling; and ${\tau _m}$ is a new time variable.

The nonuniform sampling order-reduction operation is different from the traditional phase difference operation. It can quickly reduce an Nth-order phase to order N/2 or N/2+1 without successively performing the delayed complex conjugate multiplication of the signal, which greatly reduces the amount of calculation and the effect of cross terms. Scale transformation, which draws on the idea of the keystone transform, is performed on the time coordinate axis; this process eliminates the linear coupling relationship between the variables and can realize range migration correction under the premise that the target speed is unknown.

Performing the Fourier transform on the pulse-compressed radar data along the range dimension yields the two-dimensional data of range frequency-pulse slow time

      $$\begin{split} {S_{{\rm{PC}}}}(f,{t_m}) =& A_{{\rm{PC}}}'{\rm{rect}}\left(\! {\frac{f}{B}} \!\right)\exp \left[ { - {\rm{j}}\frac{{{\rm{4}}\pi }}{{{c_0}}}(f + {f_c}){R_s}({t_m})} \right] \\ = & A_{{\rm{PC}}}'{\rm{rect}}\left( {\frac{f}{B}} \right)\exp \bigg[ - {\rm{j}}\frac{{{\rm{4}}\pi }}{{{c_0}}}(f + {f_c})\\ &\cdot\left( {{r_0} + {v_0}{t_m} + {a_s}t_m^2/2 + gt_m^3/6} \right) \bigg]\\[-17pt] \end{split} $$ (9)

      where, $A_{{\rm{PC}}}'$ is the signal amplitude, fc is the carrier frequency of the transmitted signal, and f is the distance frequency.

      Then, through the nonuniform sampling operation, the formula can be transformed into

      $$\begin{split} S(f,{\tau _m}) =& {\left(\! {A_{{\rm{PC}}}'} \!\right)^2}{\rm{rect}}\left(\! {\frac{f}{B}} \!\right)\exp \!\left[ \!{ - {\rm{j}}\frac{{{\rm{4}}\pi }}{{{c_0}}}(\!f + {f_c})\! \cdot 2{R_s}({t_p})} \!\right] \\ & \cdot \exp \left[ { - {\rm{j}}\frac{{{\rm{4}}\pi }}{{{c_0}}}(f + {f_c})({a_s} + {g_s}{t_p})c{\tau _m}} \right] \\[-15pt] \end{split} $$ (10)

      Eq. (10) shows that the phase of the maneuvering target’s echo has been reduced from a third-order polynomial with respect to tm to a first-order term with respect to the new time variable ${\tau _m}$ . Therefore, subsequent scaling can be used to remove the linear coupling between the distance frequency f and ${\tau _m}$ . Then the distance and Doppler migration compensation can be realized.
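The following sketch illustrates the core of the nonuniform sampling order-reduction operation of Eq. (8) on a synthetic cubic-phase signal (the sampling grid, phase coefficients, and scale factor c are assumed values): after the operation, the phase becomes linear in the new time variable τm, so a single FFT concentrates the energy.

```python
# Minimal sketch of Eq. (8): s(tp+tm')*s(tp-tm') with tm' = sqrt(c*tau) turns a
# cubic-phase signal into a (nearly) single-tone signal in tau (assumed parameters).
import numpy as np

fs, T = 2000.0, 1.0
t = np.arange(0.0, T, 1.0 / fs)
a1, a2, a3 = 50.0, 30.0, 10.0                      # assumed phase coefficients
s = np.exp(1j * 2 * np.pi * (a1 * t + a2 * t**2 + a3 * t**3))

tp, c = 0.5, 0.2                                   # interval center and scale factor
tau = np.arange(0.0, 0.3, 1.0 / fs)
u = np.sqrt(c * tau)                               # nonuniform offsets tm'
interp = lambda x: np.interp(x, t, s.real) + 1j * np.interp(x, t, s.imag)
S = interp(tp + u) * interp(tp - u)                # Eq. (8): order reduction

# Phase of S is ~ const + 2*pi*2*(a2 + 3*a3*tp)*c*tau -> linear in tau
spec = np.abs(np.fft.fft(S * np.hanning(len(S))))
freqs = np.fft.fftfreq(len(S), 1.0 / fs)
f_hat = freqs[np.argmax(spec)]
print(f_hat, 2 * (a2 + 3 * a3 * tp) * c)           # estimated vs. predicted tone (Hz)
```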

(3) Distributed multiview micromotion feature extraction technology. Given the complexity of UAV target attitude changes, the micro-Doppler features show significant differences under different radar viewing angles. Therefore, distributed multiview radar detection can extract more comprehensive micro-Doppler features to improve target classification[61-63]. For example, the external radiation source radar usually uses the digital broadcast television signal as the illumination source; its operating frequency is low, and it has no outstanding advantage in the refined features of the target, especially in the description of the micromotion characteristics. An important advantage, however, is its ability to use distributed transmitter stations to obtain micromotion components from different viewing angles simultaneously, thereby improving the resolution of micromotion features[64,65]. Fig. 22 shows the processing flow of micromotion feature extraction with the distributed external radiation source radar of a single-frequency network. By using the multiview characteristics of this radar, the relationship between the sample distribution in the wavenumber domain and the resolution of the micromotion features is analyzed, and an optimization strategy for the receiving station positions is designed to improve the resolution of target micromotion features.

      Figure 22.  Method of micromotion feature extraction via single-frequency distributed passive radar

For the bistatic detection of external radiation source radar, suppose the midpoint of the baseline is at the origin $O$ of the space-fixed coordinate system $\left( {X,Y,Z} \right)$ , the bistatic angle is $\beta $ , and the azimuth is $\delta $ (defined as the angle between the bistatic angle bisector and the rotating plane). Without loss of generality, assume that the rotating blades rotate in the horizontal plane, the number of blades is $K$ , the length of the blades is $L$ , and the rotation rate is $\varOmega $ . If the target body motion and the influence of multipath clutter are not considered, then the received ideal rotating blade echo signal can be expressed as

      $$ s\left( t \right) = A\sum\limits_{k = 0}^{K - 1} {\sin c\left( {{\varPhi _k}\left( t \right)} \right)\exp \left\{ { - {\rm{j}}{\varPhi _k}\left( t \right)} \right\}} $$ (11)

      where $A$ is the intensity of the scattered signal, and the function ${\varPhi _k}\left( t \right)$ is

      $$ {\varPhi _k}\left( t \right) = \frac{{2{\rm{\pi }}}}{\lambda }\frac{L}{2}\cos \left( {\varOmega t + {\varphi _k}} \right)\cos \left( {{\beta / 2}} \right)\cos \left( \delta \right) $$ (12)

      where $\lambda $ is the wavelength and ${\varphi _k} = {\varphi _0} + {{k2{\rm{\pi }}} / K} \left( {k = 0,1,2, ··· ,K - 1} \right)$ is the initial rotation angle corresponding to each blade. The time domain characteristics of the rotating blade are determined by the magnitude of the formula as follows

      $$ \left| {s\left( t \right)} \right| = \left| {A\sum\limits_{k = 0}^{K - 1} {\sin {\rm{c}}\left( {{\varPhi _k}\left( t \right)} \right)\exp \left\{ { - {\rm{j}}{\varPhi _k}\left( t \right)} \right\}} } \right| $$ (13)

The time domain feature is composed of sinc functions. From the definition of the sinc function, the time interval between two consecutive peak positions is

      $$ \Delta T=\left\{\begin{aligned} & \frac{2{\rm{\pi}}}{2K\cdot \varOmega },\;\;K\;{\rm{is}}\;{\rm{odd}}\\ & \frac{2{\rm{\pi}}}{K\cdot \varOmega },\;\;K\;{\rm{is}}\;{\rm{even}}\end{aligned}\right. $$ (14)

      The Fourier transform is performed on the Eq. (11), and the frequency spectrum is

      $$ S\left( f \right) = \sum\limits_{m = - {M / 2}}^{{M / 2}} {{C_m}\delta \left( {f - {{mK\varOmega } / {2{\rm{\pi }}}}} \right)} $$ (15)

The modulation spectrum is composed of a series of spectral lines with a spectral line interval $\Delta f = {{K\varOmega } / {2{{\pi }}}}$ , where ${C_m}$ is the spectral line amplitude determined by the Bessel function, and $M$ , the number of spectral lines of the modulation spectrum, is determined by Eq. (16).

      $$ M = \frac{{4{{\pi }}}}{K}\frac{L}{\lambda }\cos \left( {{\beta / 2}} \right)\cos \left( \delta \right) $$ (16)

      The bandwidth of the modulated signal can be obtained as

      $$B = M \cdot \Delta f = \frac{{2{V_{{\rm{tip}}}}}}{\lambda }\cos \left( {{\beta / 2}} \right)\cos \left( \delta \right)$$ (17)

      where, ${V_{{\rm{tip}}}} = L \cdot \varOmega $ is the tip speed.

Therefore, different radar micro-Doppler echoes can be obtained under different viewing angle conditions, and extracting and fusing multiview micro-Doppler features is thus more beneficial to target recognition and classification.
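As a quick numerical check of Eqs. (14)–(17) above, the following sketch (with assumed blade parameters and geometry) simulates the rotating-blade echo of Eq. (11) and compares the predicted blade-flash interval, spectral-line spacing, and bandwidth with the simulated spectrum:

```python
# Minimal sketch: simulate Eq. (11) and evaluate Delta_T (Eq. 14), Delta_f (Eq. 15)
# and bandwidth B (Eq. 17); all blade/geometry parameters are assumed values.
import numpy as np

lam, K, L = 0.5, 2, 0.15                 # wavelength (m), blades, blade length (m)
Omega = 2 * np.pi * 60.0                 # rotation rate (rad/s)
beta, delta = np.deg2rad(20.0), np.deg2rad(5.0)
fs, T = 20000.0, 0.5
t = np.arange(0.0, T, 1.0 / fs)

phi_k = 2 * np.pi * np.arange(K) / K     # initial blade angles
Phi = (2 * np.pi / lam) * (L / 2) * np.cos(Omega * t[:, None] + phi_k) \
      * np.cos(beta / 2) * np.cos(delta)
s = (np.sinc(Phi / np.pi) * np.exp(-1j * Phi)).sum(axis=1)   # Eq. (11)

dT_pred = 2 * np.pi / (2 * K * Omega) if K % 2 else 2 * np.pi / (K * Omega)  # Eq. (14)
df_pred = K * Omega / (2 * np.pi)                                            # Eq. (15)
B_pred = 2 * (L * Omega) / lam * np.cos(beta / 2) * np.cos(delta)            # Eq. (17)
print(dT_pred, df_pred, B_pred)          # blade-flash interval, line spacing, bandwidth

spec = np.abs(np.fft.fftshift(np.fft.fft(s)))
freqs = np.fft.fftshift(np.fft.fftfreq(len(s), 1.0 / fs))
pos = freqs > 0
print(freqs[pos][np.argmax(spec[pos])])  # strongest positive spectral line (Hz)
```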

    • In recent years, the automatic recognition of flying birds and UAV targets has received widespread attention. Its main technical ideas are similar to radar target recognition. The main methods include template-based[66] and model-based recognition methods[67]. The former focuses on feature library construction. The features of the same target can be clustered together, and the features of different types are separable in the feature space. By carefully selecting the target features, the target can be effectively classified and recognized. The latter lies in the construction of the physical model of the target, including its shape, geometric structure, texture, and material. Both methods require deep domain knowledge to build a feature library or specific model knowledge to build a model library, and the quality of the feature and model libraries directly affects the recognition results. At present, the more commonly used bird and drone classification process detects the target first and then extracts the features of the bird and drone (including RCS, high-resolution range profile, transformation spectral, time-frequency, and polarization features) through signal processing methods[42,68-70]; then, the feature differences are used in Support Vector Machine (SVM) and other methods to achieve the classification and recognition of the two. This study describes the two emerging methods.

(1) Target recognition method based on machine or deep learning. Deep learning is an efficient and intelligent processing method. Compared with the traditional SVM, it is more suitable for mining high-dimensional abstract features, has good generalization ability, and has begun to be applied in the field of radar[71]. The deep neural network method can learn various hidden features of the target from measured data without constructing a complex high-fidelity model; therefore, it has good application prospects in the recognition of high-resolution range profiles, micro-Doppler spectrograms, and range-Doppler spectrograms[72,73]. Given the evident difference between the time-frequency micromotion graphs of flying birds and drones, using them as a dataset for deep Convolutional Neural Network (CNN) learning is expected to realize the intelligent extraction and recognition of target micromotion signals[74,75]. At present, this direction is in the initial research stage, and convolutional network construction, parameter setting, and dataset construction need further research.

The micromotion characteristic parameters of UAV rotors and flying birds are closely related to the target type, motion state, radar observation geometry, environment, and background. This study attempts to identify these parameters from the perspective of mathematical modeling and feature recognition, and performing target recognition on the basis of the resulting feature differences is a feasible technical approach. However, the intrinsic relationship between complex motion forms and environmental factors can hardly be described by models and parameters alone. In addition, feature extraction, which is a key step, consumes considerable system resources, and most features are manually designed, making it difficult to capture the deep, essential features of the target data. To solve the feature extraction problem in radar target recognition, obtain the deep essential feature information of the target, and improve recognition accuracy, deep learning methods such as deep neural networks, CNNs, deep belief networks, and recurrent neural networks can be used for feature description, extraction, and recognition: by setting different hidden layers and numbers of iterations, a feature expression of the data is obtained at each level and then combined with, for example, a nearest-neighbor classifier to identify the target[72,73,76]. The following approaches can be taken:

(a) A deep neural network structure for parameter estimation of UAV and bird motion models is constructed through deep learning, which is expected to achieve better results than traditional parametric modeling and time-frequency parameter estimation methods.

(b) Through the CNN structure, deep feature extraction of echo signals (such as micromotion signals) and refined scene recognition are achieved.

(c) Through cognitive learning over large feature datasets, deep learning methods are used to build a deep network structure for regression analysis, invert the motion form, and analyze the relationship between motion characteristics and motion parameters.

For micromotion parameter estimation, micromotion echo signals can be regarded as two-dimensional time-frequency feature data, so CNNs can be used with the micromotion target echo signals as input. The convolutional neural network outputs the key motion parameters of the micromotion model; these parameters are substituted back into the micromotion model, whose fit to the data serves as a cost function for feedback, and backpropagation then adjusts the weights before the micromotion target echo data are fed in again for training. The better the fit, the more consistent the data are with the micromotion model. Training continues until the parameters are satisfactory, the micromotion model parameters are estimated, and finally the error between the micromotion model and the actual data is computed. Fig. 23 shows the data characteristics of each convolutional layer for a micromotion time-frequency map CNN (taking LeNet as an example)[73]; Fig. 23(a) and Fig. 23(c) are the convolution kernels of the first and second convolutional layers, and Fig. 23(b) and Fig. 23(d) are the corresponding feature maps. The CNN can extract fine texture and contour features of the dataset, which lays the foundation for feature classification and parameter estimation.

      Figure 23.  Data characteristics of convolutional layers in CNN (take LeNet as an example)[73]

For a typical rotating target, the main quantity to estimate is the rotor speed ω, and an appropriate value of ω must be found that makes the micromotion model fit the echo data. First, the data are input to the network, and convolution, pooling, and fully connected operations are performed. After multiple CNN layers, a SoftMax layer finally outputs a vector of candidate values, and parameter values are selected after this training process according to the preset value range. The chosen value is then substituted into the original model as a cost function, backpropagation is performed, the weights are adjusted, and training is repeated; the numerical result is evaluated to determine whether it fits the model for this parameter. If the discrepancy is large, training continues until the obtained parameter values are consistent with the training data. As shown in Fig. 23, a CNN consists of several convolutional layers, pooling layers, and fully connected layers, and different convolutional neural networks can be constructed for different distribution models to complete this task. A preliminary design of the deep-learning-based micromotion parameter estimation process is shown in Fig. 24. When a deep CNN is constructed to characterize the signal, the outputs of the intermediate layers, especially the convolutional layer closest to the fully connected layers, are taken as the learned feature description.

      Figure 24.  Diagram of m-D parameter estimation based on deep learning
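The following PyTorch sketch illustrates the idea behind Fig. 24 in minimal form: a LeNet-style CNN maps a micromotion time-frequency map to a rotor rotation-rate estimate and is trained with backpropagation. It uses direct regression rather than the SoftMax-over-candidate-values scheme described above, and the architecture, input size, and placeholder data are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Minimal LeNet-style CNN for regressing a rotor rotation rate from a
# micromotion time-frequency map (assumed 1 x 64 x 64 input).
class MicroMotionRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 60 -> 30
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 30 -> 26 -> 13
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 13 * 13, 120), nn.ReLU(),
            nn.Linear(120, 1),            # estimated rotation rate (Hz)
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# Hypothetical training loop: spectrograms with known rotation rates as labels;
# the loss drives the network output toward the micromotion model parameter.
model = MicroMotionRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

spectrograms = torch.randn(32, 1, 64, 64)   # placeholder time-frequency maps
true_rates = torch.rand(32, 1) * 100        # placeholder labels (Hz)

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(model(spectrograms), true_rates)
    loss.backward()                          # backpropagation adjusts the weights
    optimizer.step()
```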

(2) Recognition method based on differences in the trajectories of UAV and bird targets. The flight trajectory of a UAV target is relatively stable, whereas that of a bird is more variable and highly maneuverable. Exploiting this difference in motion modes, extracting the trajectory differences between low-altitude noncooperative UAVs and bird targets on the basis of motion models can provide a new technical approach for classifying and identifying the two[77]. In addition, through the accumulation of long-term bird detection radar data, statistical analysis of the number of bird flight trajectories, flight heights, flight directions, and other data within a certain period, combined with manual surveys of bird conditions in the detection area, can reveal the activity rhythm of surrounding resident birds and the migration pattern of transiting migratory birds[78].

(a) Discrimination and classification of target motion patterns based on multimodel probability extraction. In addition to extracting target micromotion features at the “signal” level, using target flight trajectory features at the “data” level is an effective technical way to distinguish flying birds from UAV targets. Under normal circumstances, the flight trajectory of a UAV target is long and relatively stable, whereas that of a bird target is relatively short and agile. By evaluating changes of the target motion model at the “data” level, flying birds can be distinguished from UAVs. The algorithm shown in Fig. 25 extracts the target’s motion-model switching-frequency feature and uses it to distinguish small, light UAV targets from bird targets.

      Figure 25.  Target recognition algorithm of flying birds and UAVs based on radar data processing

Fig. 26 shows the results of using flight trajectory features to identify flying bird and UAV targets. The actual radar data include several flying bird targets and one DJI Phantom 3 UAV target recorded in an airport clearance area over a period of time. Most of the bird targets are local resident birds with high maneuverability and short flight trajectories. In engineering applications, the classification threshold between UAV and bird targets must be calibrated and set with manually labeled data to ensure the recognition performance of the algorithm. Moreover, a UAV flight trajectory may also be designed to be maneuverable and complex, so the scope of application of the algorithm requires further study and verification.

      Figure 26.  Recognition results of trajectories of flying birds and UAVs[77]
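A minimal sketch of this “data”-level idea is given below, under assumed feature choices and thresholds (not the exact procedure of Ref. [77]): per-track features such as duration, path length, and mean turn rate are computed from radar plots, and long, steady tracks are labeled UAV while short, highly maneuvering tracks are labeled bird. In practice the thresholds would be calibrated with manually labeled data, as noted above.

```python
import numpy as np

def trajectory_features(track):
    """Simple data-level features from a radar track.

    track: (N, 3) array of [t_s, x_m, y_m] plot positions. The feature choices
    (duration, path length, mean speed, mean turn rate) are illustrative
    assumptions, not the exact feature set used in Ref. [77].
    """
    t, xy = track[:, 0], track[:, 1:]
    steps = np.diff(xy, axis=0)
    dt = np.diff(t)
    speeds = np.linalg.norm(steps, axis=1) / dt
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    turn = np.abs(np.diff(np.unwrap(headings))) / dt[1:]
    return {
        "duration_s": t[-1] - t[0],
        "path_len_m": np.linalg.norm(steps, axis=1).sum(),
        "mean_speed_mps": speeds.mean(),
        "mean_turn_rate": turn.mean(),   # rad/s; bird tracks tend to be larger here
    }

def classify(track, min_duration=60.0, max_turn_rate=0.1):
    """Toy rule: long, steady tracks -> UAV; short, highly maneuvering -> bird.
    Thresholds are placeholders that would need calibration on labeled data."""
    f = trajectory_features(track)
    if f["duration_s"] > min_duration and f["mean_turn_rate"] < max_turn_rate:
        return "UAV"
    return "bird"
```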

(b) Analysis of bird activity rhythm. The airport bird situation data obtained by the bird detection radar are used to count the number of bird flight trajectories at different times and thus determine the peak periods of bird activity. The monitoring range of the airport bird detection radar is divided into grid cells, and the distribution of bird activity over the grid during peak hours is counted to determine the main areas of bird activity; combined with airport bird situation surveys, these statistics provide data support for analyzing the activity rhythm of airport resident birds. Fig. 27 and Tab. 2 show the distribution of bird activity hotspots at a certain airport[79]. A total of 10 bird activity hotspot areas were identified; the monthly track numbers and percentages recorded in each area are listed in Tab. 2. Among them, hotspot 10 has the largest area and the most intense bird activity; its peak hours occur in the early morning and evening, so it is most likely a bird habitat. The remaining nine hotspots are distributed in the flight area and the surrounding area of the airport; they are small in size, their peak hours occur in the daytime, and they are more likely foraging or drinking sites. The airport management agency can verify these conjectures through on-site bird surveys and implement ecological management of the relevant areas to reduce the risk of bird strikes.

      Table 2.  Monthly statistics of bird hot spots

Hotspot No.   Number of tracks   Percentage (%)
1             7628               0.73
2             30299              2.89
3             22484              2.15
4             16885              1.61
5             21987              2.10
6             15652              1.50
7             29856              2.85
8             33938              3.24
9             21650              2.07
10            846406             80.86

      Figure 27.  Statistics of hot spots of birds activity at an airport[79]
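The hotspot and peak-hour statistics described above can be sketched as simple grid binning of track plots. The following Python fragment assumes a hypothetical input format (hour of day plus plot coordinates) and arbitrary coverage and cell sizes; it is only meant to show the kind of counting involved, not the processing used for Fig. 27.

```python
import numpy as np

def hotspot_statistics(plots, extent_m=10000.0, cell_m=500.0):
    """Grid-based bird activity statistics.

    plots: (N, 3) array of [hour_of_day, x_m, y_m] for bird-track plot positions
    (assumed format). Coverage extent and cell size are illustrative choices.
    """
    nbins = int(2 * extent_m / cell_m)
    # Count plots per grid cell over the radar coverage.
    counts, xedges, yedges = np.histogram2d(
        plots[:, 1], plots[:, 2],
        bins=nbins, range=[[-extent_m, extent_m], [-extent_m, extent_m]])
    # Per-hour histogram to locate peak activity periods.
    hourly, _ = np.histogram(plots[:, 0], bins=24, range=(0, 24))
    peak_hours = np.argsort(hourly)[-3:]                          # three busiest hours
    hot_cells = np.argwhere(counts > np.percentile(counts, 99))   # top 1% of cells
    return counts, hourly, peak_hours, hot_cells
```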

From 2017 to 2018, commissioned by the Airport Department of the Civil Aviation Administration of China, the research team conducted continuous follow-up research on currently mature, domestically developed light and small UAV target surveillance technologies. The research found that most current domestic radar-based noncooperative UAV detection systems are based on existing military radars or improved versions of detection radars designed for other purposes; compared with international systems, their detection and recognition capabilities are markedly deficient. Although a few systems have developed recognition modules, the recognition methods are limited and rely on changes in echo amplitude and signal fluctuation; the false alarm rate is extremely high, and birds are often misidentified as UAVs. Radar target recognition based on traditional feature information, especially noncooperative target recognition, becomes difficult or even invalid. Based on the “Airport Radar Bird Detection Experiment System”, the research team preliminarily verified the transform-domain moving target detection method, long-time accumulation technology, and motion-pattern-difference classification technology, as shown in Fig. 28. The research team has received funding from numerous programs, including the National Key Research and Development Program, the NSFC Civil Aviation Joint Research Fund, the Shandong Province Key Research and Development Program, and the Civil Aviation Science and Technology Project. In 2019, the China Academy of Civil Aviation Science and Technology initiated the formulation of two industry standards, the “Technical Standards for UAV Detection in Airport Clearance Areas” and the “Technical Standards for Airport Bird Detection Radar Systems”.

      Figure 28.  UAV and flying birds detection experiment

• The above research status reveals that the main ways to achieve classification and identification of flying bird and UAV targets are as follows: (1) using long-time staring observation or pan-detection observation modes to improve the Doppler resolution of the target; (2) exploiting the micromotion characteristics of UAVs and flying birds and the differences between them to achieve classification and identification; (3) using bistatic/multistatic radar observation, whose multiview characteristics can overcome the attitude sensitivity of monostatic radar and the influence of occlusion effects (the complementary multiview resources help improve resolving ability and feature extraction accuracy); (4) using deep-learning-based intelligent classification and recognition methods, in which range-Doppler and time-frequency maps are treated as images for learning and target features are mined in depth through deep CNNs to achieve a fine description of the features. The main problems and difficulties in this direction are as follows:

(1) UAV and bird target modeling and characteristic research. In the ideal case, the micro-Doppler frequency of a target’s typical micromotion is related to its micromotion frequency, micromotion amplitude, and the radar line of sight; its variation with time takes the form of a sinusoidal frequency-modulated signal, and the time-frequency image likewise shows a sinusoidal law. Changes in the position and intensity of the scattering centers during the micromotion also nonlinearly modulate the echo amplitude. Therefore, in practical applications, target attitude changes, target shape, echo discontinuity, occlusion effects, and other factors must be considered[80,81]. In addition, the radar transmit signal parameters affect the observation of the micro-Doppler frequency, and the target or its components may combine various micromotion forms; for bird targets in particular, the micro-Doppler no longer obeys a simple sinusoidal law but manifests as the superposition of multiple sinusoidal components. The micromotion model is thus complicated, which increases the complexity of subsequent feature extraction.
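To make the last point concrete, the brief simulation below (with assumed, illustrative parameters) generates an echo whose phase contains two sinusoidal modulation components, as a flapping bird might produce, and computes its spectrogram; the resulting micro-Doppler history is visibly not a single sinusoid, which is exactly what complicates the subsequent feature extraction.

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative compound micromotion: a bulk Doppler plus two sinusoidal
# modulation terms (flapping fundamental and a harmonic). All parameters
# are assumptions for illustration only.
fs = 2000.0                          # PRF / sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
f_body = 30.0                        # bulk Doppler of the body (Hz), assumed
phase = (2 * np.pi * f_body * t
         + 8.0 * np.sin(2 * np.pi * 4.0 * t)      # flapping fundamental (4 Hz)
         + 3.0 * np.sin(2 * np.pi * 8.0 * t))     # second harmonic
echo = np.exp(1j * phase)

# Time-frequency map of the compound micromotion signal; this is the kind of
# non-sinusoidal micro-Doppler law discussed above and would feed feature extraction.
f, tt, Sxx = spectrogram(echo, fs=fs, nperseg=128, noverlap=112,
                         return_onesided=False)
```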

(2) Extraction of feature differences between UAV and bird targets. Extracting the feature differences between UAVs and flying birds is the basis of and key to subsequent classification and recognition. Both are typical low-observable targets, and the preprocessing of their echo signals directly determines the fineness of the feature description. UAV targets come in many types with different sizes, shapes, and motion characteristics, so their radar scattering and Doppler characteristics also vary. External radiation source radar can improve Doppler resolution through long-time coherent accumulation, but its operating frequency is low and its advantage in describing refined target features, especially micromotion features, is not outstanding; how to achieve high-resolution micromotion feature extraction in complex environments therefore requires further research. In addition to extracting target micromotion features at the “signal” level, using target flight trajectory features at the “data” level is an effective technical way to distinguish flying birds from UAV targets. Under normal circumstances, the flight trajectory of a UAV target is long and relatively stable, whereas that of a bird target is relatively short and agile. Classification and recognition methods for flying bird and UAV targets can be explored by evaluating changes of the target motion model at the “data” level. How to establish a scientific and effective flight motion model and what criteria and methods to use for determining the motion mode still require in-depth research.

(3) Intelligent classification and recognition of UAVs and birds. CNN models are developing rapidly, and improved algorithms suited to various applications are emerging continually. The premise of using CNNs for the classification and recognition of UAV and flying bird targets is the construction of datasets of the different target types; the completeness of the datasets and the accuracy of the labeling directly determine the effectiveness of CNN learning, so considerable time is needed to collect and label data under different conditions and for different targets. In addition, how to reconstruct the feature data to fit the input layer of the deep learning model, how to tune model parameters according to the actual needs of target recognition, and how to use the output difference features for scene recognition and target recognition method design are key scientific problems that urgently need to be solved.

• (1) The refined processing of radar echo signals is a prerequisite for improving detection and recognition performance. Previous research on radar target detection usually focused only on the radar echo data, using only the first- and second-order statistical distribution characteristics, spectral characteristics, and spatial correlation characteristics of the target and the background, together with simple target motion assumptions such as uniform velocity or uniform acceleration; detection, tracking, and recognition algorithms were designed under these simplified signal, target, and background models. Such a coarse approach can solve most problems in radar detection; however, as environments and targets become increasingly complex, it increasingly hinders further improvement of radar detection performance[82]. Thus, refined analysis and processing must be conducted throughout the clutter suppression, detection, tracking, and identification chain to improve the utilization of information and radar detection performance[6].

(2) The fusion of signal and data features is an effective way to improve classification accuracy. The most important prerequisite for target classification and recognition is the construction and extraction of distinguishing features. The long-time observation mode of the external radiation source radar makes it easy to extract high-resolution micromotion characteristics and thereby obtain fine signal-level features of UAVs and birds; the micromotion characteristics of rotary-wing UAVs and flying birds differ markedly, which lays the foundation for classification and recognition. In addition, considerable experimental data show that, in radar track data, the motion trajectories of UAVs and flying birds are distinguishable to a certain degree. Mature target tracking algorithms approximate the true motion state of the target by establishing various motion models: a UAV target with a relatively stable trajectory generally conforms to a single motion model during flight, whereas a bird target is more maneuverable and its state usually switches among multiple motion models during flight. At the same time, analysis of large amounts of bird detection radar data shows that bird activity in a given area exhibits a rhythm. It is therefore feasible to perform target recognition and classification by evaluating the switching frequency of target motion models combined with bird activity rhythm analysis. Fusing the signal-level and data-level features of UAVs and flying birds can thus expand the difference feature space and improve the classification probability[77].

(3) Deep learning networks provide a new method for the intelligent identification of UAVs and birds. Given the complex motion forms and environmental factors of UAVs and flying birds, their essential attributes are sometimes difficult to describe clearly through models and parameters, which makes it difficult to obtain the deep essential characteristics of the target data. Deep learning and other intelligent learning approaches, by means of multilayer convolutional neural networks, can discover complex structures in high-dimensional data; they have been verified in image recognition and speech recognition to have strong feature expression capability and high classification accuracy. The advantage of deep learning is that unsupervised or semisupervised feature learning and efficient hierarchical feature extraction algorithms replace manual feature design. The target micromotion signal can be regarded as two-dimensional time-frequency feature data, and the target echo and motion trajectory are reflected in the radar P-display and in two-dimensional range-azimuth (Doppler) images[76,83]; deep learning is therefore well suited to intelligent target classification and recognition. However, if only the micromotion features obtained from time-frequency maps are used to classify UAV and bird signals, the feature space is limited and the accuracy is low. For this reason, multifeature, multichannel CNN fusion processing can be adopted: the target’s echo sequence diagrams, micromotion time-frequency diagrams, range-Doppler diagrams, transform-domain diagrams, and motion trajectory diagrams are input to multichannel CNNs for feature learning and target recognition[76].
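A minimal PyTorch sketch of the multifeature, multichannel fusion idea: each representation (micromotion time-frequency map, range-Doppler map, trajectory/P-display image) passes through its own convolutional branch, and the concatenated branch features feed a shared classifier. The branch structure, feature sizes, and two-class output are assumptions for illustration, not the network of Ref. [76].

```python
import torch
import torch.nn as nn

def branch():
    # Small convolutional branch shared in structure across input types.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        nn.Flatten())                     # -> 16 * 4 * 4 = 256 features per branch

class MultiChannelFusionNet(nn.Module):
    def __init__(self, n_classes=2):       # bird vs UAV
        super().__init__()
        self.tf_branch = branch()           # micromotion time-frequency map
        self.rd_branch = branch()           # range-Doppler map
        self.trk_branch = branch()          # trajectory / P-display image
        self.classifier = nn.Sequential(
            nn.Linear(3 * 256, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, tf_map, rd_map, trk_map):
        feats = torch.cat([self.tf_branch(tf_map),
                           self.rd_branch(rd_map),
                           self.trk_branch(trk_map)], dim=1)
        return self.classifier(feats)

# Usage with placeholder 64x64 single-channel inputs.
net = MultiChannelFusionNet()
logits = net(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64),
             torch.randn(4, 1, 64, 64))
```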

(4) New radar systems lay a hardware foundation for refined target processing and the integration of detection and recognition. Advances in radar hardware, especially in computing and storage capabilities, provide a solid foundation for developing refined and intelligent information processing technology for radar target detection. At the same time, new systems such as digital array radar, external radiation source radar, holographic radar, MIMO radar, software-defined radar, and intelligent radar expand the available signal dimensions and provide a flexible software framework for refined radar target detection and integrated detection-recognition processing[84]. For example, holographic digital array radar, MIMO radar, and other radars with multibeam staring capability can realize integrated processing in the time domain (long-time coherent accumulation across range, beam, and Doppler cells), the spatial domain (full digital beamforming and full coverage), and the frequency domain (long-time staring improves Doppler resolution and supports extraction of refined motion features), helping to achieve reliable detection and recognition of “low, slow, small” targets[85-88]. External radiation source radar uses third-party illuminators, emits no signal itself, and needs no dedicated frequency allocation; it is low cost, environmentally friendly, and free of additional electromagnetic pollution, giving it unique advantages for UAV surveillance in electromagnetic-spectrum-sensitive areas such as airports. In addition, in complex environments the recognition probability of a single radar sensor is limited, and information from other sensors, such as electro-optical and acoustic sensors, must be comprehensively utilized to compensate for the limitations of a single sensor and improve recognition efficiency and accuracy.

• This study combs and summarizes the main technical methods and systems for detecting and recognizing flying bird and UAV targets in complex scenes, analyzes their respective advantages and disadvantages, and provides data simulation and analysis results. Refined signal processing, fusion of signal-level and data-level features, deep-learning-based intelligent recognition, and the development of new radar systems are expected to enable effective surveillance and recognition of flying bird and UAV targets in the future. Flying birds and UAVs are “low, slow, small” targets, so their echoes have low observability and are hidden in complex backgrounds; effective surveillance and recognition of the two therefore greatly tests, and in turn promotes, new radar technology. The development and application of related technologies can also provide ideas for detecting weak and small targets against other complex backgrounds. It is hoped that scholars will conduct in-depth research in this field and find ways and means to improve the detection and recognition of flying birds and UAVs.
