[1] GIRSHICK R. Fast R-CNN[C]// Proceedings of the IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015: 1440-1448.

[2] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. Advances in Neural Information Processing Systems, 2015, 28: 91-99.

[3] CAI Z, VASCONCELOS N. Cascade R-CNN: delving into high quality object detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, US: IEEE, 2018: 6154-6162.

[4] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]// Proceedings of the Computer Vision-ECCV 2016: 14th European Conference. Amsterdam, The Netherlands: Springer International Publishing, 2016: 21-37.

[5] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]// Proceedings of the IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017: 2980-2988.

[6] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: optimal speed and accuracy of object detection: arXiv:2004.10934[R/OL]. Ithaca, NY, US: Cornell University, 2020 (2020-04-23) [2024-05-13]. https://arxiv.org/abs/2004.10934.

[7] TIAN Y L, LUO P, WANG X G, et al. Deep learning strong parts for pedestrian detection[C]// Proceedings of the 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE Computer Society, 2015: 1904-1912.

[8] ZHANG S F, WEN L Y, BIAN X, et al. Occlusion-aware R-CNN: detecting pedestrians in a crowd[C]// Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018: 657-674.

[9] ZHANG K, XIONG F, SUN P, et al. Double anchor R-CNN for human detection in a crowd: arXiv:1909.09998[R/OL]. Ithaca, NY, US: Cornell University, 2019 (2019-09-20) [2023-10-06]. https://arxiv.org/abs/1909.09998.

[10] CHU X G, ZHENG A L, ZHANG X Y, et al. Detection in crowded scenes: one proposal, multiple predictions[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, US: IEEE, 2020: 12211-12220.

[11] ZHANG Z S, XIE C H, WANG J Y, et al. DeepVoting: a robust and explainable deep network for semantic part detection under partial occlusion[C]// Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, US: IEEE Computer Society, 2018: 1372-1380.

[12] 枭越. M1A2主战坦克的未来发展[J]. 坦克装甲车辆, 2016(11): 26-31.
XIAO Y. Future development of M1A2 main battle tank[J]. Tanks and Armored Vehicles, 2016(11): 26-31. (in Chinese)

[13] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks: arXiv:1609.02907[R/OL]. Ithaca, NY, US: Cornell University, 2016 (2016-09-09) [2023-10-06]. https://arxiv.org/abs/1609.02907.

[14] WANG Y, SUN Y B, LIU Z W, et al. Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics, 2019, 38(5): 1-12.

[15] QI C R, SU H, MO K, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, US: IEEE, 2017: 652-660.

[16] HAMILTON W, YING Z, LESKOVEC J. Inductive representation learning on large graphs[J]. Advances in Neural Information Processing Systems, 2017, 30: 1024-1034.

[17] DUVENAUD D, MACLAURIN D, AGUILERA-IPARRAGUIRRE J, et al. Convolutional networks on graphs for learning molecular fingerprints[J]. Advances in Neural Information Processing Systems, 2015, 28: 2224-2232.

[18] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, US: IEEE, 2016: 770-778.

[19] HU J G, SUN Z X, SUN Y H, et al. Progressive refinement: a method of coarse-to-fine image parsing using stacked network[C]// Proceedings of the 2018 IEEE International Conference on Multimedia and Expo. San Diego, CA, US: IEEE, 2018: 1-6.

[20] TANG Y F, GU L Z, WANG L T. Deep stacking network for intrusion detection[J]. Sensors, 2022, 22(1): 25.