Accepted Article Preview
Conventional synthetic aperture radar (SAR) obtains only two-dimensional (2-D) azimuth-range images and cannot accurately reflect the three-dimensional (3-D) scattering structure of targets. SAR tomography (TomoSAR), by contrast, is a multi-baseline interferometric measurement mode that extends the synthetic aperture principle into the elevation direction, making it possible to recover the true height of a target and thereby achieve 3-D imaging. Differential SAR tomography (D-TomoSAR) further extends the synthetic aperture principle into the elevation and time directions simultaneously, so it can obtain the 3-D scattering structure of the observed target together with its deformation velocity. GaoFen-3 (GF-3) is China's first C-band, multi-polarization, 1 m resolution SAR satellite. Its advantages, such as high resolution, large swath width, and multiple imaging modes, are crucial to the development of China's high-resolution Earth observation technology. At present, GF-3 data are used mainly in image processing applications such as target identification, and the phase information of the SAR images is not yet fully exploited. Moreover, because high-dimensional imaging capability was not considered when the system was designed, existing GF-3 SAR images suffer from spatial and temporal decorrelation, which makes further interferometric time-series processing difficult. To address these problems, this study achieved 3-D and four-dimensional (4-D) imaging of buildings around Yanqi Lake, Beijing, using seven SAR complex images. We obtained the 3-D scattering structure of the buildings and achieved millimeter-level, high-precision monitoring of building deformation. These preliminary experimental results demonstrate the application potential of GF-3 SAR data and provide technical support for further application of the GF-3 satellite in urban sensing and monitoring.
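The elevation-recovery principle behind TomoSAR can be illustrated with a minimal numerical sketch. The example below (a generic matched-filter inversion with illustrative parameters, not the actual GF-3 geometry or the study's processing chain) simulates the multi-baseline samples of one azimuth-range cell containing two scatterers at different elevations and recovers the elevation profile by beamforming.

```python
import numpy as np

# Toy TomoSAR elevation beamforming (illustrative values, not GF-3 parameters)
wavelength = 0.055                       # C-band wavelength [m]
slant_range = 600e3                      # slant range [m]
baselines = np.linspace(-500, 500, 7)    # 7 baselines, like the 7-image stack [m]
xi = 2 * baselines / (wavelength * slant_range)  # elevation (spatial) frequencies

# Two point scatterers in one azimuth-range cell at different elevations
s_true = np.array([10.0, 40.0])          # elevations [m]
amp = np.array([1.0, 0.8])
g = (amp[None, :] * np.exp(2j * np.pi * xi[:, None] * s_true[None, :])).sum(axis=1)

# Matched-filter (beamforming) inversion over an elevation grid
s_grid = np.linspace(-20, 80, 501)
profile = np.abs(np.exp(-2j * np.pi * np.outer(s_grid, xi)) @ g) / len(baselines)

s_est = s_grid[np.argmax(profile)]       # strongest scatterer's elevation
print(round(s_est, 1))
```

With only seven baselines, the elevation resolution is limited to roughly λr/(2ΔB), which is why super-resolving spectral estimators are often preferred over plain beamforming in practice.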
The application of one-bit quantization in massive MIMO radar systems significantly reduces system cost, power consumption, and transmission bandwidth. However, it also poses a severe challenge: extracting high-precision target information from one-bit quantized data. To address the low positioning accuracy and poor robustness of one-bit two-step positioning at low Signal-to-Noise Ratio (SNR), this paper proposes a multi-station radar direct position determination algorithm based on one-bit quantization. First, the received signal is quantized to one bit, the probability distribution of the one-bit signal is derived, and a cost function of the target position is established. Second, after proving the convexity of the cost function, maximum likelihood estimation and gradient descent are used to solve for the unknown signal parameters in the echo. Finally, the target is positioned directly according to the maximum likelihood estimate. Simulation experiments analyzing the positioning performance show that, compared with a high-bit (e.g., 16-bit) sampling direct position determination algorithm, the proposed algorithm needs to transmit only 6.25% of the communication bandwidth, and its power consumption is only 0.1% of the former. Moreover, compared with the one-bit two-step positioning algorithm, the proposed algorithm achieves an effective estimate of the target position at low SNR, and its localization performance is significantly better under low SNR and a small number of MIMO antennas. Its performance can be further improved by applying oversampling.
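The estimation chain sketched in the abstract (one-bit quantization, likelihood of the sign data, convex cost, gradient descent) can be illustrated in one dimension. The toy example below estimates a single unknown amplitude from sign-quantized samples; the waveform, noise level, and step size are illustrative assumptions, not the paper's multi-station model.

```python
import numpy as np
from math import erf

# One-bit maximum-likelihood estimation of an unknown amplitude theta
rng = np.random.default_rng(0)
K, sigma, theta_true = 4000, 1.0, 0.7
p = np.sign(rng.standard_normal(K))                            # known +/-1 waveform
r = np.sign(theta_true * p + sigma * rng.standard_normal(K))   # one-bit data

# Standard normal CDF: P(r=+1) = Phi(theta * p / sigma)
Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0))))

def nll(theta):
    # Negative log-likelihood; log-concavity of Phi makes this convex in theta
    return -np.sum(np.log(Phi(r * theta * p / sigma) + 1e-12))

# Plain gradient descent with a central-difference gradient
theta, step = 0.0, 1e-4
for _ in range(200):
    g = (nll(theta + 1e-4) - nll(theta - 1e-4)) / 2e-4
    theta -= step * g
print(round(theta, 2))
```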
Traditional radars can detect moving targets using the Doppler effect; however, they have blind zones when detecting the angular motion of rotating targets. The discovery of the rotational Doppler effect of vortex electromagnetic waves helps solve the problem of detecting the angular motion of rotating targets under direct view and has attracted considerable attention from scholars at home and abroad. In this study, we review recent research progress on the rotational Doppler effect of vortex electromagnetic waves, particularly results in the microwave band, including the rotational Doppler effect for targets in on-axis and off-axis cases; the decoupling of linear Doppler, micro-Doppler, and rotational Doppler effects for targets in complex motion; and applications of the rotational Doppler effect in radar imaging and velocity measurement. We summarize and analyze the open problems that demand prompt solutions in this field and propose future research directions and related applications.
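The basic on-axis relation, f_RD = lΩ/(2π) for OAM mode l and rotation rate Ω, can be checked with a short simulation of an idealized echo phase history (parameter values here are arbitrary):

```python
import numpy as np

# Rotational Doppler: a mode-l OAM echo from a target rotating at angular
# speed Omega is shifted by f_RD = l * Omega / (2 * pi)
l, Omega = 3, 2 * np.pi * 10.0     # OAM mode and rotation rate (10 rev/s)
fs, T = 1000.0, 1.0                # sampling rate [Hz], duration [s]
t = np.arange(0, T, 1 / fs)
echo = np.exp(1j * l * Omega * t)  # ideal on-axis echo phase history

spec = np.abs(np.fft.fft(echo))
freqs = np.fft.fftfreq(len(t), 1 / fs)
f_est = freqs[np.argmax(spec)]
print(f_est)                       # expected near l * Omega / (2*pi) = 30 Hz
```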
In traditional coherent radar signal processing, the cascaded scheme of pulse compression followed by coherent integration cannot, in theory, achieve the maximum accumulation of a high-speed target's echo energy. In addition, the cascaded result exhibits a deviation of the target peak position, accompanied by problems such as main-lobe broadening, gain loss, and raised side lobes. Therefore, this paper proposes a long-time coherent integration method combining Pulse Compression and the Radon-Fourier Transform (PC-RFT). The method exploits the correlation between signals to combine matched filtering with the RFT: to maximize the target gain, the fast-time (intra-pulse) and slow-time (inter-pulse) dimensions are processed jointly to compensate for both intra-pulse and inter-pulse Doppler shifts. Experimental results show that this two-dimensional joint processing outperforms the cascaded processing.
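The slow-time half of the joint processing can be illustrated with a stripped-down Radon-Fourier-style search over velocity: for each velocity hypothesis, samples are taken along the hypothesized range-walk trajectory and the inter-pulse Doppler phase is compensated before coherent summation. The sketch below is a toy model (delta-like range profiles after pulse compression, arbitrary radar parameters), not the paper's PC-RFT implementation:

```python
import numpy as np

c, fc = 3e8, 10e9
lam = c / fc                      # wavelength [m]
M, PRI = 64, 1e-3                 # pulses, pulse-repetition interval [s]
dr, Nr = 1.0, 200                 # range-bin size [m], number of bins
r0, v0 = 50.0, 300.0              # true initial range [m] and radial velocity [m/s]

# Range-compressed pulse stack: one scatterer that walks across range bins
data = np.zeros((M, Nr), complex)
for m in range(M):
    r = r0 + v0 * m * PRI
    data[m, int(round(r / dr))] = np.exp(-1j * 4 * np.pi * r / lam)

def rft(data, r_hyp, v_hyp):
    # Coherent sum along the hypothesized trajectory with phase compensation
    acc = 0j
    for m in range(M):
        r = r_hyp + v_hyp * m * PRI
        acc += data[m, int(round(r / dr))] * np.exp(1j * 4 * np.pi * r / lam)
    return abs(acc)

v_grid = np.arange(0, 600, 25.0)
scores = [rft(data, r0, v) for v in v_grid]
v_est = v_grid[int(np.argmax(scores))]
print(v_est)                      # the matched hypothesis recovers the full gain M
```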
The concept of quantum Orbital Angular Momentum (OAM) indicates that each photon of an Electromagnetic (EM) wave carries OAM; in the microwave band, such a photon is called a vortex microwave photon. Vortex photons are physically distinct from plane-wave photons. When illuminating a traditional stealthy target, a vortex microwave photon can achieve higher echo power, thereby improving the Radar Cross Section (RCS), the corresponding received signal power, and the detection probability. Hence, the vortex microwave photon shows promise for anti-stealth technology. In this paper, a vortex microwave quantum radar based on the OAM quantum state is proposed. Its basic physical architecture and corresponding mathematical model are given, and the high echo power of the photon is analyzed using Quantum Electrodynamics (QED). The theoretical calculation was verified experimentally, with an approximately 9 dB improvement in echo power. Moreover, simulations are performed to quantify the improvement in radar performance, including received power and detection probability, illustrating the capability of the vortex microwave photon when applied to anti-stealth radar.
Owing to its unusual electromagnetic field distribution and theoretically infinite set of orthogonal Orbital Angular Momentum (OAM) modes, the electromagnetic vortex wave has demonstrated excellent research value, with potential applications in wireless communication, radar detection, and imaging. This study analyzes the anti-interference performance of OAM modes in an electromagnetic vortex Radio Frequency (RF) transceiver link, primarily from the perspectives of the spatial field distribution of the electromagnetic vortex and the orthogonality of the OAM modes. Planar antenna arrays are designed to generate electromagnetic vortex beams with two different OAM modes in the C band, and the corresponding RF transceiver links are established. Using a horn antenna as the interference source, the anti-interference properties of the OAM modes under different interference situations are analyzed in the link, with the corresponding OAM mode spectrum and mode orthogonality employed as the primary analysis tools. Finally, the designed antennas are fabricated and the electromagnetic vortex RF transceiver links are measured, and the corresponding analyses and conclusions are presented. This anti-interference analysis of OAM modes in the vortex RF transceiver link can serve as a reference for exploring and designing vortex electromagnetic waves for wireless communication, radar detection, and imaging research.
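The mode orthogonality that such analyses rely on, ⟨e^{jl₁φ}, e^{jl₂φ}⟩ = δ(l₁, l₂) over one azimuthal turn, is easy to verify numerically (a generic check, independent of the specific modes used in the study):

```python
import numpy as np

# Orthogonality of OAM azimuthal phase factors exp(j*l*phi):
# (1/2pi) * integral_0^2pi exp(j*(l1-l2)*phi) dphi = delta(l1, l2)
phi = np.linspace(0, 2 * np.pi, 1024, endpoint=False)

def inner(l1, l2):
    return np.mean(np.exp(1j * l1 * phi) * np.conj(np.exp(1j * l2 * phi)))

print(abs(inner(2, 2)))    # same mode: unity response
print(abs(inner(2, -2)))   # distinct modes: mutual rejection
```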
Current Issue
Special Topic Papers: Intelligent Information Processing for Microwave Remote Sensing
Three-dimensional (3D) imaging is one of the leading trends in the development of Synthetic Aperture Radar (SAR) technology. Current SAR 3D imaging systems mainly comprise tomographic and array-interferometric approaches, which suffer from either long acquisition cycles or excessive system complexity. Therefore, a novel framework of SAR microwave vision 3D imaging has been proposed, which effectively combines the SAR imaging model with the various 3D cues contained in SAR microwave scattering mechanisms and the perceptual semantics of SAR images, so as to significantly reduce system complexity and achieve efficient, low-cost SAR 3D imaging. To promote the development of SAR microwave vision 3D imaging theory and technology, a comprehensive SAR microwave vision 3D imaging dataset is planned, with the support of NSFC major projects. This paper outlines the composition and construction plan of the dataset, gives a detailed description of the composition and contents of the first released version, and describes the method of producing the dataset, in order to provide helpful support for the SAR community.
The Bilateral Constant False Alarm Rate (BCFAR) detection algorithm computes the spatial information of a Synthetic Aperture Radar (SAR) image with a Gaussian kernel density estimator and combines it with the image's intensity information to obtain a joint image for target detection. Compared with classical CFAR detection, which uses only intensity information, bilateral CFAR offers better detection performance and robustness. However, in a complex environment containing contiguous high-intensity heterogeneous points (such as breakwaters, azimuth ambiguities, and ghost targets), the spatial information computed by the kernel density estimator contains larger errors, leading to many false alarms. In addition, weak targets with low similarity between adjacent pixels are missed. To mitigate these problems, this paper designs an Improved Bilateral CFAR (IB-CFAR) algorithm for complex environments. The proposed IB-CFAR comprises three stages: intensity-level division based on nonuniform quantization, intensity-spatial-domain information fusion, and parameter estimation after clutter truncation. The intensity-level division based on nonuniform quantization improves the similarity and contrast information of weak targets, raising the ship detection rate. The intensity-spatial-domain information fusion combines spatial similarity, range direction, and intensity information, which further improves the detection rate and describes ship structure. Parameter estimation after clutter truncation removes contiguous high-intensity heterogeneous points from the background window while retaining genuine sea clutter samples to the maximum extent, making the parameter estimation more accurate. Finally, an accurate sea clutter statistical model is established from the estimated parameters for CFAR detection. The effectiveness and robustness of the proposed algorithm are verified using GaoFen-3 and TerraSAR-X data. The experimental results show that the proposed algorithm performs well in environments with densely distributed weak targets, achieving a 97.85% detection rate and a 3.52% false alarm rate; compared with existing detection algorithms, the detection rate increases by 5% and the false alarm rate decreases by 10%. However, when weak targets are few and the background is highly complex, a few false alarms still appear.
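The clutter-truncation stage can be illustrated with a minimal one-dimensional sketch: discarding reference samples above a truncation depth before estimating the clutter level keeps strong interfering returns from inflating the CFAR threshold. The code below uses exponential clutter and an ad hoc quantile truncation purely for illustration; it is not the paper's IB-CFAR estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def truncated_cfar_threshold(ref_cells, pfa_factor=5.0, trunc_quantile=0.8):
    # Keep only the lower quantile of reference samples before estimating
    # the clutter mean (exponential clutter assumed in this toy example)
    depth = np.quantile(ref_cells, trunc_quantile)
    clean = ref_cells[ref_cells <= depth]
    return pfa_factor * clean.mean()

# Reference window: exponential sea clutter plus two strong interfering returns
clutter = rng.exponential(scale=1.0, size=64)
contaminated = np.concatenate([clutter, [40.0, 55.0]])

t_plain = 5.0 * contaminated.mean()            # no truncation: inflated threshold
t_trunc = truncated_cfar_threshold(contaminated)
print(t_plain > t_trunc)                       # truncation keeps it near clutter level
```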
In this paper, a Spatial-Channel Selective Kernel Fully Convolutional Network (SCSKFCN) and a Semi-supervised Preselection-United Optimization (SPUO) method are proposed for polarimetric Synthetic Aperture Radar (SAR) image classification. Integrating a spatial-channel attention mechanism, SCSKFCN adaptively fuses features with different receptive field sizes and achieves promising classification performance. SPUO efficiently extracts the information contained in unlabeled samples using the annotated samples: it uses the K-Wishart distance to preselect unlabeled samples for pseudo-label generation and then optimizes SCSKFCN with both labeled and pseudo-labeled samples. During training, a two-step verification mechanism is applied to the pseudo-labeled samples to retain reliable samples for the united optimization. Experimental results show that the proposed SCSKFCN-SPUO achieves promising performance and efficiency with a limited number of annotated pixels.
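The preselection idea, pseudo-labeling an unlabeled sample only if it lies close enough to a labeled class, can be sketched with a Euclidean stand-in for the K-Wishart distance (the data, the threshold, and the distance itself are all hypothetical here):

```python
import numpy as np

# Distance-based preselection of pseudo-labels: assign the nearest class,
# but keep the sample only if it is within a preselection threshold
rng = np.random.default_rng(2)
centres = np.array([[0.0, 0.0], [5.0, 5.0]])        # class centres from labeled data
unlabeled = np.vstack([centres[0] + 0.3 * rng.standard_normal((20, 2)),
                       centres[1] + 0.3 * rng.standard_normal((20, 2)),
                       [[2.5, 2.5]]])               # one ambiguous sample

d = np.linalg.norm(unlabeled[:, None, :] - centres[None, :, :], axis=2)
label = d.argmin(axis=1)                            # candidate pseudo labels
keep = d.min(axis=1) < 1.5                          # preselection threshold
print(int(keep.sum()))                              # ambiguous sample is rejected
```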
Deep learning has enabled remarkable results for ship detection in SAR images. However, given the complex and changeable backgrounds of SAR ship images, accurately and efficiently extracting target features while improving detection accuracy and speed remains a major challenge. To address this, a ship detection algorithm based on multiscale feature fusion and channel-relation calibration of features is proposed in this paper. First, building on Faster R-CNN, a channel attention mechanism is introduced into the feature extraction network to calibrate the channel relationships among features, improving the network's ability to express ship features in different scenes. Second, instead of generating candidate regions from single-scale features as in the original method, an improved feature pyramid structure based on neural architecture search is introduced, which helps improve network performance: the multiscale features are effectively fused to resolve missed detections of small targets and adjacent inshore targets. Experimental results on the SSDD dataset show that, compared with the original Faster R-CNN, the proposed algorithm improves detection accuracy from 85.4% to 89.4% and the detection rate from 2.8 FPS to 10.7 FPS. The method thus achieves high-speed, high-accuracy SAR ship detection with practical benefits.
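The channel-recalibration step can be sketched with a minimal squeeze-and-excitation style gate in NumPy: global average pooling per channel, a bottleneck MLP, and a sigmoid weight that rescales each channel. This is a generic illustration of the mechanism with random weights, not the paper's trained network:

```python
import numpy as np

rng = np.random.default_rng(3)

def channel_attention(x, w1, w2):
    # x: feature map (C, H, W). Squeeze: global average pool per channel.
    z = x.mean(axis=(1, 2))                    # (C,)
    # Excitation: bottleneck MLP followed by a sigmoid gate.
    h = np.maximum(w1 @ z, 0.0)                # ReLU, (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))     # sigmoid, (C,)
    # Recalibrate: scale each channel by its learned importance.
    return x * gate[:, None, None]

C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1    # bottleneck weights (random here)
w2 = rng.standard_normal((C, C // r)) * 0.1
y = channel_attention(x, w1, w2)
print(y.shape)
```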
Multiscale object detection in Synthetic Aperture Radar (SAR) images locates and recognizes key objects in large-scene SAR images and is one of the key technologies of SAR image interpretation. However, for the simultaneous detection of SAR objects with large size differences, that is, cross-scale object detection, existing detectors struggle both to extract the features of cross-scale objects and to detect such objects simultaneously. In this study, we propose a multiscale object detection method for SAR images based on a Feature-Transferable Pyramid Network (FTPN). In the feature extraction stage, a feature transfer method effectively links the feature maps of each layer and extracts feature maps at different scales, while dilated convolution enlarges the receptive field and helps the network extract large-object features. These steps effectively preserve the features of objects of different sizes and enable simultaneous detection of cross-scale objects in SAR images. Experiments on the GaoFen-3 SAR dataset, the SAR Ship Detection Dataset (SSDD), and the high-resolution SSDD-2.0 show that the proposed method detects cross-scale objects such as airports and ships, with a mean Average Precision (mAP) of 96.5% on the existing dataset, 8.1% higher than that of the feature pyramid network algorithm. Moreover, its overall performance exceeds that of the latest YOLOv4 and other object detection algorithms.
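The receptive-field enlargement from dilated convolution can be seen in one dimension: inserting gaps of (dilation − 1) samples between kernel taps widens the input span covered by the same number of weights. A minimal sketch:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    # Valid-mode 1-D convolution with gaps of (dilation-1) between taps
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10.0)
out1 = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=1)  # 3 taps, span 3
out3 = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=3)  # same 3 taps, span 7
print(out1, out3)
```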
Radar Electronic Countermeasures
Retrieving the working modes of a multifunction radar from electronic reconnaissance data is a difficult problem that has attracted widespread attention in the field of electronic reconnaissance. It is also an important task when extracting value from big electromagnetic data and directly supports applications such as radar type recognition, working-state recognition, radar intention inference, and precise electronic jamming. Based on the assumption of model simplicity, this study defines a complexity measure for multifunction radar pulse trains and introduces semantic coding theory to analyze their temporal structure. A model-complexity-minimization criterion guides the semantic coding procedure to extract, from the pulse trains, the radar pulse groups corresponding to different radar functions. Furthermore, based on the coded sequence of the pulse train, the switching matrix between different pulse groups is estimated, and the hierarchical working model of the multifunction radar is ultimately reconstructed. Simulations verify the feasibility and performance of the new method and indicate that the proposed coding-based approach automatically extracts pulse groups and rebuilds operating models from multifunction radar pulse trains. Moreover, the method is robust to data noise such as missing pulses.
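The abstract does not spell out the complexity measure, but the idea of scoring a pulse train by how compactly repeated pulse groups can be coded can be sketched with a toy minimum-description-length criterion (a greedy pair-substitution stand-in, not the paper's actual rule):

```python
from collections import Counter

def compress_once(seq, next_symbol):
    """Replace the most frequent adjacent pair with a new group symbol.
    Returns (new_seq, rule) or (seq, None) if no pair occurs twice."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return seq, None
    pair, count = pairs.most_common(1)[0]
    if count < 2:
        return seq, None
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(next_symbol); i += 2
        else:
            out.append(seq[i]); i += 1
    return out, (next_symbol, pair)

def description_length(pulse_train):
    """Toy complexity: coded-sequence length plus two tokens per extracted
    rule. Strongly patterned trains compress well; irregular ones do not."""
    seq, rules = list(pulse_train), []
    while True:
        seq, rule = compress_once(seq, f"G{len(rules)}")
        if rule is None:
            break
        rules.append(rule)
    return len(seq) + 2 * len(rules), rules
```

Under such a criterion, a periodic train of a repeated pulse group gets a short code (and the extracted rules play the role of candidate pulse groups), while an irregular train does not.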
Deep convolutional neural networks have achieved great success in recent years. They have been widely used in applications such as optical and SAR image scene classification, object detection and recognition, semantic segmentation, and change detection. However, deep neural networks rely on large-scale, high-quality training data and can only guarantee good performance when the training and test data are independently sampled from the same distribution. Deep convolutional neural networks have been found to be vulnerable to subtle adversarial perturbations. This adversarial vulnerability prevents the deployment of deep neural networks in security-sensitive applications such as medical, surveillance, autonomous driving, and military scenarios. This paper first presents a holistic view of the security issues of deep convolutional neural network-based image recognition systems. The entire information processing chain is analyzed for safety and security risks; in particular, poisoning attacks and evasion attacks on deep convolutional neural networks are analyzed in detail, and the root causes of the adversarial vulnerability of deep recognition models are discussed. We then give a formal definition of adversarial robustness and present a comprehensive review of adversarial attacks, adversarial defenses, and adversarial robustness evaluation. Rather than listing existing research, we focus on the threat models underlying the adversarial attack-and-defense arms race. We perform a detailed analysis of several representative adversarial attacks on SAR image recognition models and provide an example of adversarial robustness evaluation. Finally, several open questions are discussed in light of recent research progress from our group. This paper can serve as a reference for developing more robust deep neural network-based image recognition models in dynamic adversarial scenarios.
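To make "subtle adversarial perturbation" concrete: the classic Fast Gradient Sign Method (FGSM) nudges the input along the sign of the loss gradient. A self-contained sketch on a toy logistic-regression classifier (gradient taken analytically; this is a generic illustration, not one of the paper's SAR-model attacks):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_attack(x, w, b, y, eps):
    """FGSM on logistic regression. The cross-entropy loss -log p(y|x)
    has input gradient (p - y) * w, so the adversarial example is
    x + eps * sign((p - y) * w)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

Even on this two-parameter model, a small per-feature budget `eps` is enough to push a confidently classified point across the decision boundary; deep networks are attacked the same way, with the gradient obtained by backpropagation.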
An optimal joint allocation of multi-jammer resources is proposed for jamming a Netted Radar System (NRS) in multitarget penetration scenarios. First, the multitarget detection probabilities of the NRS under suppressive jamming are used as the interference performance metric. Then, a resource optimization model with two optimization variables, jamming-beam assignment and transmitting power, is established while accounting for the detection-performance requirements of different targets, and particle swarm optimization is used to solve the resulting problem. Finally, considering the generalization error of the detection probability caused by parameter uncertainty of the NRS, a robust resource-optimization model is established. The simulation results show that the proposed optimization model effectively suppresses the NRS and reduces the probability that the penetrating targets are detected. Compared with the traditional method, the robust algorithm improves the cooperative interference performance of multiple jammers against the NRS while remaining robust to parameter uncertainty.
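Particle swarm optimization itself is standard; a minimal sketch follows, applied here to a generic objective as a stand-in for the paper's detection-probability cost (the actual jamming model and constraints are not reproduced):

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic PSO: each particle tracks its personal best, the swarm tracks a
    global best, and velocities blend inertia with both attractions."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))  # clamp to bounds
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval
```

In the paper's setting, a particle would encode the jamming-beam assignment and per-jammer transmit powers, and `f` would be the (constraint-penalized) joint detection probability of the NRS.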
Radar Application Technology
Space target state estimation aims to accurately obtain a target's on-orbit attitude, structure, movement, and other parameters. It helps observers analyze the target's intent, check for potential fault threats, and predict the development of on-orbit situations, and it is a core technology in the field of space situational awareness. Currently, the estimation of the on-orbit state of space targets relies mainly on external observations from high-performance sensors such as radars, and a series of representative methods has emerged alongside. This paper briefly introduces the development status of inverse synthetic aperture radar for space target monitoring in China and abroad. Several representative methods are then introduced, including data feature matching, three-dimensional (3D) imaging reconstruction, and multi-look fusion estimation. Data feature matching performs well when a prior 3D model of the target and the scene conditions are given, whereas state estimation via 3D geometric reconstruction can describe the target finely but requires demanding observation conditions. Finally, future development trends in this direction are forecast.
Multi-sensor fusion perception is one of the key technologies for intelligent driving and has become a hot topic in the field. However, because of the limited resolution of millimeter-wave radar, interference from noise, clutter, and multipath, and the influence of weather on LiDAR, existing fusion algorithms struggle to fuse the data of the two sensors accurately and to produce robust results. To address the problem of accurate and robust perception in intelligent driving, this study proposes a robust perception algorithm that combines millimeter-wave radar and LiDAR. A new spatial-correction method based on feature-based two-step registration achieves precise spatial synchronization of the 3D LiDAR and 2D radar point clouds. An improved millimeter-wave radar filtering algorithm reduces the influence of noise and multipath on the radar point cloud. The data of the two sensors are then fused with the novel fusion method proposed in this study to obtain accurate and robust sensing results, which overcomes the degradation of LiDAR performance by smoke. Finally, we conducted multiple sets of experiments in a real environment to verify the effectiveness and robustness of our method. Even in extreme environments such as smoke, it still achieves accurate positioning and robust mapping: the environment map established by the proposed fusion method is more accurate than that of a single sensor, and the localization error is reduced by at least 50%.
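The abstract does not detail the two-step registration, but a common building block of such point-cloud alignment is the least-squares rigid transform between matched point pairs, which has a closed form in 2-D (an illustrative sketch, not the authors' method):

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares rigid transform (rotation + translation) mapping matched
    2-D points src[i] -> dst[i]: center both sets, recover the angle from the
    summed cross/dot products, then solve for the translation."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s_dot = s_cross = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay, bx, by = ax - csx, ay - csy, bx - cdx, by - cdy
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

def apply_transform(p, theta, t):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

Given feature correspondences between the radar and LiDAR clouds, one coarse pass followed by a refined pass of such a solve is one plausible reading of "two-step registration".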
Radar Signal and Data Processing
Aircraft wake vortices are a pair of counter-rotating vortices generated by a flying aircraft, which can pose a serious hazard to following aircraft, so predicting their behavior is a key issue for air traffic safety management. To this end, we propose a prediction method based on data assimilation that predicts the evolution and hazard area of the aircraft wake vortex from the vortex-core positions and circulation. The wake vortex prediction model is constructed using linear shear and least-squares estimation, and a data assimilation model based on the unscented Kalman filter instantly corrects the predicted trajectories. Experimental results show that the proposed method performs well and runs stably, providing an effective tool for aircraft wake vortex prediction and supporting the establishment of dynamic wake separation in air traffic management.
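The paper uses an unscented Kalman filter; as a much-simplified stand-in, the predict/correct cycle that blends a model forecast of a vortex-core coordinate with a new measurement can be shown with a scalar linear Kalman step (illustration only, not the UKF of the paper):

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1, u=0.0):
    """One predict/update cycle of a scalar Kalman filter.
    x: state estimate (e.g., vortex-core height), P: its variance,
    z: new measurement, u: known model increment (e.g., predicted descent)."""
    # predict with the dynamics model
    x_pred = F * x + u
    P_pred = F * P * F + Q
    # correct with the measurement
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

The UKF generalizes this cycle to nonlinear vortex dynamics by propagating sigma points instead of a linear model, which is what lets the assimilation instantly pull predicted trajectories toward the lidar-observed vortex cores.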
Airborne Synthetic Aperture Radar (SAR) location error is affected by the position/speed measurement error of the aircraft, system time error, and other factors, and is also related to the residual error of motion compensation. However, existing airborne SAR location models rarely consider the effect of this residual motion error. Since motion and trajectory measurement errors are common in practice, this paper derives a location-error transfer model for airborne SAR images based on motion compensation and frequency-domain imaging algorithms. The proposed model clarifies how trajectory measurement error affects location deviation when residual motion error exists and provides a method for error calibration measurement. Simulation experiments validate the correctness of the proposed location-error transfer model: it yields a more accurate error calibration result than a location-error model that ignores the residual motion error, demonstrating its superiority.
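An error transfer model of this kind maps platform measurement errors to image location deviation; the generic first-order mechanism can be sketched with finite-difference sensitivities on a toy location function (the function and error budget below are hypothetical, not the paper's derivation):

```python
def location_error(f, params, sigmas, h=1e-6):
    """First-order error propagation: standard deviation of f(params) given
    independent 1-sigma errors on each parameter, using finite-difference
    sensitivities (a numerical Jacobian row)."""
    base = f(*params)
    var = 0.0
    for i, (p, s) in enumerate(zip(params, sigmas)):
        bumped = list(params)
        bumped[i] = p + h
        dfdp = (f(*bumped) - base) / h   # partial derivative wrt parameter i
        var += (dfdp * s) ** 2
    return var ** 0.5

# toy along-track location: x = x0 + v * t (platform position, speed, timing)
ground_x = lambda x0, v, t: x0 + v * t
```

The paper's contribution is, in effect, the correct form of `f` when residual motion-compensation error is present, which changes the sensitivities and hence the calibrated error budget.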
With the aging of the population, fall detection has gradually become a research hotspot. For detecting human falls with millimeter-wave radar, this study proposes a Range-Doppler heat map Sequence detection Network (RDSNet) model that combines a convolutional neural network with a long short-term memory network. First, features are extracted with the convolutional neural network; the feature vectors corresponding to a dynamic sequence are then fed into the long short-term memory network, which learns the temporal correlation of the heat map sequence; finally, a classifier produces the detection result. Moreover, diverse movement data of different subjects were collected with millimeter-wave radar to build a range-Doppler heat map dataset. Comparative experiments show that the proposed RDSNet model reaches an accuracy of 96.67% with a computation delay of no more than 50 ms. RDSNet generalizes well and provides new technical ideas for human fall detection and human posture recognition.

