OpenCodePapers

3D Object Detection on nuScenes

Tasks: Object Detection, 3D Object Detection
Dataset: nuScenes (link)
Results over time (interactive chart of leaderboard metrics by model release date)
Leaderboard
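The leaderboard is ranked by NDS (nuScenes Detection Score), which combines mAP (mean Average Precision) with the five mean true-positive errors: translation (mATE), scale (mASE), orientation (mAOE), velocity (mAVE), and attribute (mAAE). Higher is better for NDS and mAP; lower is better for the error columns. The sketch below illustrates how NDS is composed from a row's values, following the benchmark's published definition; it is not the official nuScenes devkit code, and the function name is purely illustrative.

```python
def nds(m_ap, m_ate, m_ase, m_aoe, m_ave, m_aae):
    """Compose the nuScenes Detection Score from mAP and the five
    mean true-positive errors, each clipped to [0, 1] and inverted."""
    tp_errors = [m_ate, m_ase, m_aoe, m_ave, m_aae]
    tp_scores = [1.0 - min(1.0, err) for err in tp_errors]
    # NDS = (5 * mAP + sum of inverted TP errors) / 10
    return (5.0 * m_ap + sum(tp_scores)) / 10.0

# Example: the EA-LSS row (mAP 0.77, errors 0.23/0.21/0.28/0.20/0.12)
# gives ~0.78, matching its reported NDS.
print(round(nds(0.77, 0.23, 0.21, 0.28, 0.20, 0.12), 2))  # 0.78
```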
Paper | Code | NDS | mAP | mATE | mASE | mAOE | mAVE | mAAE | Model Name | Release Date
EA-LSS: Edge-aware Lift-splat-shot Framework for 3D BEV Object Detection✓ Link0.780.770.230.210.280.200.12EA-LSS2023-03-31
[]()0.770.750.230.220.270.210.13MegFusion
[]()0.770.750.220.220.280.190.13MMFusion-e
[]()0.760.760.240.230.330.230.13DeepInteraction-e
BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation✓ Link0.760.750.240.230.320.220.13BEVFusion-e2022-05-26
[]()0.760.740.240.230.320.220.13DeepInteraction-large
[]()0.760.740.230.230.290.240.13RacoonPower
[]()0.750.730.240.240.320.230.13MSMDFusion-TTA
[]()0.750.730.240.230.280.240.13FusionVPE
[]()0.750.730.230.230.280.260.13DAA
FocalFormer3D : Focusing on Hard Instance for 3D Object Detection✓ Link0.750.720.250.240.330.230.13FocalFormer3D-F2023-08-08
[]()0.750.720.240.230.320.210.13CenterPoint-Fusion
UniTR: A Unified and Efficient Multi-Modal Transformer for Bird's-Eye-View Representation✓ Link0.750.710.240.230.260.240.13UniTR2023-08-15
[]()0.750.710.240.220.280.210.13ADS-TEAM
Robust Multimodal 3D Object Detection via Modality-Agnostic Decoding and Proximity-based Modality Ensemble✓ Link0.740.720.270.240.300.270.11MEFormer2024-07-27
[]()0.740.720.260.240.320.260.13SparseFusion
[]()0.740.720.250.240.350.250.13Utrans-Fusion
[]()0.740.720.250.240.340.270.13YZLFusion
[]()0.740.720.250.240.340.270.13ChangYuan
[]()0.740.720.250.240.320.270.13BEVFusion-base
FocalFormer3D : Focusing on Hard Instance for 3D Object Detection✓ Link0.740.710.240.240.320.200.13FocalFormer3D-TTA2023-08-08
[]()0.740.710.240.230.310.230.13PAI3D
[]()0.740.710.240.230.30.240.13LargeKernel-F
[]()0.740.710.230.240.320.240.13VXTR-tta
[]()0.740.70.240.230.310.240.13FocalSparseCNN
[]()0.730.710.260.240.370.280.13test333
[]()0.730.710.260.240.350.260.14SemanticBEVFusion
[]()0.730.710.260.240.330.290.13xpnet
3D Dual-Fusion: Dual-Domain Dual-Query Camera-LiDAR Fusion for 3D Object Detection✓ Link0.730.710.260.240.330.270.133D Dual-Fusion_T2022-11-24
[]()0.730.710.260.240.330.250.13DeepInteraction-base
[]()0.730.710.250.240.370.220.13yuhahad
[]()0.730.710.250.240.360.250.13ADLab-BEVFusion-pure
[]()0.730.710.250.240.340.260.13Deeplearner
[]()0.730.710.250.240.330.260.14MSMDFusion
[]()0.730.70.250.240.370.230.13SJTU2
[]()0.730.70.250.230.360.240.15PCIE
[]()0.730.70.240.230.380.250.12DAA AVP
[]()0.730.70.240.230.310.230.14LinK
FocalFormer3D : Focusing on Hard Instance for 3D Object Detection✓ Link0.730.690.250.240.340.220.13FocalFormer3D-L2023-08-08
[]()0.730.690.250.230.320.210.13RLVNet
[]()0.730.690.250.230.310.210.13ADS-FUSION
[]()0.730.690.240.230.350.190.13SphereFormer
[]()0.730.680.240.230.320.230.13MDRNet-L
[]()0.730.680.240.230.310.20.12VPFusion
MGTANet: Encoding Sequential LiDAR Points Using Long Short-Term Motion-Guided Temporal Attention for 3D Object Detection✓ Link0.730.670.250.230.310.190.12MGTANet2022-12-01
DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets✓ Link0.730.250.230.300.250.14DSVT2023-01-15
Multi-Modal 3D Object Detection by Box Matching✓ Link0.7210.689FBMNet (Ours)2023-05-12
[]()0.720.70.260.250.340.320.13bevfusion-re
[]()0.720.70.260.240.360.270.13GTFS
[]()0.720.690.260.250.370.270.13BEVFusion
[]()0.720.690.260.240.360.290.13TransFusion
[]()0.720.690.260.240.360.270.13Leapmotor_huhaotian
[]()0.720.690.260.240.350.30.14MixFusion
[]()0.720.690.260.240.340.270.13asdasdzz
[]()0.720.690.260.240.330.230.13smallrot_noflip
[]()0.720.690.250.240.370.220.13VirConv
[]()0.720.680.260.240.350.270.13FusionPainting
[]()0.720.680.260.240.340.210.12SJTU-VISION
[]()0.720.680.260.230.310.260.13q2rdasdsad1
[]()0.720.680.250.230.320.260.13VFF
[]()0.720.670.250.240.310.280.12DCAN
[]()0.720.670.250.230.320.260.13LidarMultiNet
[]()0.710.690.270.240.380.280.13yangfan
[]()0.710.690.260.240.380.270.13DS VinFast
[]()0.710.680.270.240.360.260.123D Dual-Fusion
[]()0.710.680.270.230.360.260.12spacvpr1
[]()0.710.680.260.250.370.280.13Chaokang_Jiang
[]()0.710.680.260.240.350.280.13Convoy
[]()0.710.670.310.250.350.220.12UVTR-Multimodality
[]()0.710.670.270.250.380.210.12MoCa (MMDet3D)
[]()0.710.670.270.240.380.270.12wilfred zaha
[]()0.710.670.270.240.370.270.12picolo
[]()0.710.670.270.240.360.270.12SJTU_VVISION
[]()0.710.670.260.240.360.270.14test4444
[]()0.710.670.260.240.360.260.139541
[]()0.710.670.250.240.350.270.12SJTU AI Institute and Noah CV Lab
Center-based 3D Object Detection and Tracking✓ Link0.710.670.250.240.350.250.14CenterPoint2020-06-19
[]()0.710.660.260.240.340.250.12Damo
Multimodal Virtual Point 3D Detection✓ Link0.710.660.260.240.320.310.13MVP2021-11-12
[]()0.710.660.250.230.330.250.13MTA-Net
[]()0.710.660.250.230.310.240.13PillarNet-34
[]()0.710.660.240.240.320.270.13AutoAlign
[]()0.710.650.250.230.320.250.13HongdaChang
[]()0.710.650.240.230.30.240.12ARNet
[]()0.70.660.260.240.350.280.13TransFusion-L
[]()0.70.660.250.240.340.280.13HanyangSpa
[]()0.70.650.270.240.380.280.13SPV-SSD
[]()0.70.650.270.240.360.280.13Noah CV Lab & Octopus 2
[]()0.70.650.260.240.390.270.12epoch_40_s0.1
[]()0.70.650.260.240.380.270.13TransFusion_fading_yqwang
[]()0.70.650.260.230.360.250.12PVC_ensemble
[]()0.70.640.260.240.340.220.13pcd_lidar_99
[]()0.70.640.260.240.340.210.13D-Align_CP
[]()0.70.640.260.240.320.210.12DWS-large
[]()0.70.640.260.230.320.220.12VISTA
[]()0.70.640.260.230.320.220.12test_1111
[]()0.70.640.250.230.320.240.13Centerpoint+SA-FPN
SpaRC: Sparse Radar-Camera Fusion for 3D Object Detection0.6990.646SpaRC2024-11-29
[]()0.690.640.290.250.390.250.13Noah CV Lab & Octopus
[]()0.690.640.260.250.370.280.12Center+R-CNN
[]()0.690.630.280.240.340.30.13Fast-CLOCs
[]()0.690.630.270.230.320.250.14S2M2-SSD
[]()0.690.630.260.240.360.270.13CenterPoint + CBMOT
[]()0.690.630.260.240.330.260.14JasonChen
[]()0.690.620.280.240.370.210.13RAANet
[]()0.680.620.260.230.340.30.14AFDetV2
[]()0.680.610.280.240.360.220.12AOP-Net
[]()0.680.610.270.240.380.230.13Centerpoint+LiDAR-Mixup
[]()0.680.60.270.240.340.240.13DWS-medium
[]()0.680.60.260.240.360.280.13Revise_CenterPoint
HVDetFusion: A Simple and Robust Camera-Radar Fusion Framework✓ Link0.674HVDetFusion2023-07-21
[]()0.670.620.30.250.430.330.12Jck
[]()0.670.60.260.240.360.290.14CenterPoint-Single
[]()0.670.60.250.240.380.270.12epoch_39_s0.3
[]()0.670.60.250.240.360.290.13centerpoints
[]()0.670.590.390.250.320.170.14VideoBEV
[]()0.670.590.270.240.40.230.13JianhuiLiu
[]()0.670.580.260.270.320.240.13Don-6
[]()0.670.580.260.260.320.240.13demar-ctpt-20
[]()0.670.580.260.260.310.230.12fliptta-demo
[]()0.670.580.260.250.330.220.14ground-ctpt
[]()0.660.610.40.250.340.320.13BEVDepthFormer
[]()0.660.610.280.260.40.320.13Sotirios
[]()0.660.610.280.260.40.320.13tarane
[]()0.660.590.370.240.380.170.12BEVDet-dev2.0
[]()0.660.590.270.240.380.330.13HotSpotNet-0.1m
[]()0.660.580.290.240.350.310.15subProcess
[]()0.660.580.280.250.370.240.133D-CVF_v2
[]()0.660.580.280.250.350.30.13FMF-VoxelNet
[]()0.660.580.270.240.380.290.13CyliNet-RG single
[]()0.660.580.270.240.350.340.12CenterPP-Large
[]()0.660.570.250.240.370.270.12epoch_40_s0.4
[]()0.660.570.250.240.350.270.12rewq
[]()0.650.60.320.250.490.260.13LRCF360-Single
[]()0.650.580.450.260.340.240.13BEVFormer v2 Opt
[]()0.650.580.280.250.430.310.13FIFA3D
[]()0.640.590.430.260.370.350.14qichuangeng
Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception✓ Link0.640.570.400.250.420.250.12HyDRa2024-03-12
[]()0.640.570.30.250.480.250.12DWS-pillar
[]()0.640.560.470.240.360.210.12PatternFormer
[]()0.640.550.30.240.390.270.12CVCNet-Single
[]()0.630.550.510.250.320.280.12Baidu_vis_3d
[]()0.630.550.490.240.340.240.12StreamPETR-Base
[]()0.630.550.450.250.380.260.13VideoBEV-Base
[]()0.630.550.420.240.340.330.14BevDepth++
[]()0.630.550.290.250.390.350.14CVFNet
[]()0.630.540.30.250.390.340.14stool
[]()0.630.540.30.250.390.240.16DTIF
[]()0.630.540.290.250.460.260.13centerpoint_pillars_gcn
Do You Remember . . . the Future? Weak-to-Strong generalization in 3D Object Detection✓ Link0.630.54X-Ray CenterPoint-Voxel2024-08-03
[]()0.630.530.320.250.380.240.12CRIPAC
[]()0.630.530.30.250.380.240.14MTV
CRN: Camera Radar Net for Accurate, Robust, Efficient 3D Perception✓ Link0.6240.575CRN2023-04-03
[]()0.620.540.340.260.440.330.13PolarStream-4-PCx1
[]()0.620.530.450.240.340.310.13TiG-BEV
[]()0.620.530.440.250.340.290.13BEVDepth-AeDet
[]()0.620.530.40.240.320.330.13Inspur_DABNet4D-pure
[]()0.620.520.420.250.410.250.13791914561
SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds✓ Link0.620.510.340.240.430.270.09SSN
[]()0.620.510.30.240.380.30.13Sy
[]()0.620.510.30.240.380.290.13luuminhquan_openpcdet
[]()0.620.510.30.240.370.290.13PE-RVCN
[]()0.620.50.320.240.360.310.13Team_Simon
[]()0.610.550.490.250.380.410.11DEPE
[]()0.610.540.40.260.430.360.15yangfan293
[]()0.610.540.380.260.540.290.13pointpainting
[]()0.610.520.440.240.350.350.13BEVDepth-pure
[]()0.610.520.440.240.330.340.12s22kdist21
[]()0.610.510.320.240.40.320.13jinkkun
[]()0.610.510.30.250.430.370.13SSL_CP_Flow
[]()0.610.50.310.250.370.330.13HEU
[]()0.610.480.320.250.380.260.13PiFeNet
[]()0.60.530.520.250.350.380.124DMVT-BEV
[]()0.60.530.420.260.450.370.15obj_6
[]()0.60.530.40.260.440.350.15swin-b-nuimage-2key
[]()0.60.520.430.260.40.370.15obj_5
[]()0.60.520.420.260.420.380.13KPRDepth
[]()0.60.50.450.250.40.320.13liyinhao1234
[]()0.60.50.450.240.380.320.13bevdepth
[]()0.60.50.310.250.440.390.13CenterPoint_mmdet3D
[]()0.590.520.560.250.370.390.13Brave
[]()0.590.510.550.250.360.40.134DMVT-RCNN
[]()0.590.510.550.240.370.360.12Focal-PETRv2e
[]()0.590.510.550.240.360.370.13PETRv2-pure
[]()0.590.510.530.260.370.320.12Sparse4d_vovnet99
[]()0.590.510.450.260.380.390.14obj_4
[]()0.590.510.430.260.450.410.15obj_3
[]()0.590.50.550.250.360.340.12ghkbkhjbjn
[]()0.590.50.540.250.350.340.12zhanghdgeh
[]()0.590.50.470.250.380.330.13BEVDistill
Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation✓ Link0.590.490.330.240.440.270.24Reconfig PP v32020-08-04
[]()0.590.480.450.240.350.340.12ECARX_BEVNet
[]()0.590.470.320.250.440.30.13PointPillars + DSA
[]()0.590.460.310.250.420.320.13AGO-Net
[]()0.580.620.290.250.331.360.42ATME
[]()0.580.620.280.240.341.360.42lgdnumber2
[]()0.580.620.280.240.331.350.42Radiant
[]()0.580.50.580.250.380.390.13ayil1e
[]()0.580.50.540.250.380.380.11DC-Sym-Residual
[]()0.580.50.430.260.470.40.16bevdepth
[]()0.580.490.570.260.370.310.13Real4d
[]()0.580.490.560.240.360.340.12PETR v2
[]()0.580.490.550.240.380.420.12det22947
[]()0.580.480.550.250.360.320.12ZongMu-CV
[]()0.580.480.540.250.370.370.11Rover
[]()0.580.480.530.250.380.320.12RadarFormer
[]()0.580.480.530.250.370.360.12MV-BEVT
[]()0.580.480.510.250.40.360.12DA-HydraFormer
[]()0.580.480.440.260.450.370.13lyra
[]()0.580.470.50.250.380.330.13Inspur-MASTER-v2
[]()0.580.460.460.250.390.330.12DAMEN
[]()0.580.460.390.270.50.250.11PointPainting
[]()0.580.450.320.250.430.330.13Syn-Aug1
[]()0.580.450.320.250.430.330.13HQUJIN
[]()0.570.490.570.260.380.390.13OA-BEV
[]()0.570.490.560.260.360.440.13PolarFormer
[]()0.570.490.540.260.370.390.14VoxelFormer
[]()0.570.490.520.260.410.430.12RCM_Fusion
[]()0.570.490.50.250.380.440.15Yinhaoli
[]()0.570.480.580.270.380.330.12VisionDP
BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers✓ Link0.570.480.580.260.380.380.13BEVFormer2022-03-31
[]()0.570.480.560.260.390.330.11VisionGroup
[]()0.570.480.440.240.460.440.13RC-BEVFusion
[]()0.570.460.490.250.380.330.12wangzehao
SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds✓ Link0.570.460.370.260.430.260.31SSN-v1
[]()0.570.450.510.240.390.30.12BEVDet4D
[]()0.570.450.320.250.420.420.14ASCNet-1-5s
BEVDet4D: Exploit Temporal Cues in Multi-camera 3D Object Detection✓ Link0.5690.4510.5110.2410.3860.3010.121BEVDet4D2022-03-31
[]()0.5620.4670.6050.2530.3310.3360.187CLIP-BEVFormer
[]()0.560.470.610.240.380.370.12boring_test_9
[]()0.560.460.520.250.410.350.13TJU-IMMLab_CC
[]()0.560.460.50.250.440.360.12ppyolo
[]()0.550.570.260.721.530.270.12v-test
[]()0.550.50.310.251.390.30.13dang
[]()0.550.480.320.250.870.340.13llstm
[]()0.550.470.570.270.490.410.15obj_1
[]()0.550.460.610.260.420.430.13BEV_align_fpn
[]()0.550.440.520.260.40.430.11RVFUSION
[]()0.550.440.50.260.430.380.11X3KD
[]()0.550.420.520.250.430.290.11Star-Optimus
[]()0.550.420.420.250.440.320.13joint
[]()0.540.460.620.250.390.490.13CAM4D
[]()0.540.460.610.260.390.470.13PolarFormer-pure
[]()0.540.460.610.250.390.480.13auto5D
[]()0.540.450.590.260.420.460.13DGroup
[]()0.540.450.570.250.390.480.13ZMfusion
[]()0.540.440.630.260.40.430.14BEVFormer-pure
[]()0.540.440.630.260.390.370.13convnext
[]()0.540.440.540.260.410.480.12Detr3d_RV
[]()0.530.450.620.260.420.470.13AutoD
[]()0.530.450.570.250.40.670.11Detr4d
[]()0.530.450.520.260.380.630.13puppyboy
[]()0.530.450.380.250.680.280.37CenterPoint-VID
[]()0.530.430.540.270.420.570.11DETNT
[]()0.530.410.560.250.40.430.13sina
[]()0.530.390.540.250.390.350.13BEVerse
[]()0.520.450.570.250.380.780.13zongmu-CV
[]()0.520.430.60.260.420.50.13xin.lu.2
[]()0.520.420.630.260.380.490.12TransCAR
[]()0.520.390.560.260.450.350.12Spirit of Optimus
[]()0.520.340.350.260.480.270.11xixi
[]()0.510.450.580.250.380.790.13yunzhu
[]()0.510.430.640.270.440.540.13AD3D
[]()0.510.420.480.270.580.530.17RightOrange
[]()0.510.410.690.260.450.450.12gznyyb
[]()0.510.330.390.250.490.310.14Reconfig PP
[]()0.50.440.590.250.380.810.13PETR-e
[]()0.50.430.620.260.410.780.13CascadeNet
[]()0.50.420.520.250.390.830.12CFT-BEV3D
[]()0.50.380.590.250.490.420.14Inspur-MASTER-3D
[]()0.490.440.530.250.391.00.15FudanZVG-TPD-e
[]()0.490.430.630.250.410.880.13TESTB3DTV
[]()0.490.430.620.250.390.790.13Graph-DETR3D
[]()0.490.430.590.250.410.840.13PolarDETR
EPro-PnP: Generalized End-to-End Probabilistic Perspective-n-Points for Monocular Object Pose Estimation✓ Link0.490.4230.5470.2360.3021.0710.123EPro-PnP-Det v22023-03-22
[]()0.490.420.590.250.390.850.13ORA3D
[]()0.490.410.530.260.390.790.13CFT-BEV3D
[]()0.490.410.480.260.590.70.14RCBEV
[]()0.490.380.610.250.50.530.1ada-mmfusion
[]()0.490.360.520.270.590.430.14RCF360
[]()0.480.420.570.250.371.010.12DD3D
[]()0.480.420.530.240.40.980.15BEVDet-Beta
[]()0.480.410.640.260.390.850.13DETR3D
[]()0.480.370.630.250.510.540.09mmfusion
[]()0.480.320.40.250.760.270.09VBNET
[]()0.470.440.570.260.511.220.11Wang Qitai
[]()0.470.440.560.260.511.220.11ImmortalTracker
[]()0.470.430.620.250.391.120.21M^2BEV_Det&Seg
[]()0.470.430.580.250.381.050.19M^2BEV_Det
[]()0.470.430.540.260.441.020.16wintercoconut
[]()0.470.420.650.260.450.860.13BEV3D
[]()0.470.410.570.240.371.630.12yy
[]()0.470.40.590.240.340.970.13ufchaq72938
[]()0.470.370.590.270.750.40.13PiCasso4D
[]()0.470.370.480.40.670.480.11Daimler_RD_ASA-HD Frustum PointNet
[]()0.460.420.650.260.460.950.14PETR_Radar_Fusion
[]()0.460.410.650.270.460.980.15PegasusAI-B
[]()0.460.40.680.260.420.890.13xin.lu
[]()0.460.40.670.270.40.880.13SRCN3D
[]()0.460.390.620.240.391.530.12MAOLoss
[]()0.460.390.590.250.391.040.11Pinkie Pie
[]()0.460.390.590.240.391.060.15MG
[]()0.460.380.580.260.440.910.14RepTrans
[]()0.460.350.630.260.520.520.16L-G
EPro-PnP: Generalized End-to-End Probabilistic Perspective-n-Points for Monocular Object Pose Estimation✓ Link0.4530.3730.6050.2430.3591.0670.124EPro-PnP-Det v12022-03-24
[]()0.450.410.70.260.480.950.15FTM3D
[]()0.450.390.630.250.451.510.13PGD, Camera
[]()0.450.390.60.250.341.610.22IPD3D
Probabilistic and Geometric Depth: Detecting Objects in Perspective✓ Link0.450.39PGD2021-07-29
[]()0.450.380.630.250.441.480.13Vidar
[]()0.450.330.630.260.520.610.11CenterFusion
CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection✓ Link0.4490.326CenterFusion2020-11-10
nuScenes: A multimodal dataset for autonomous driving✓ Link0.449PointPillars (ImageNet)2019-03-26
nuScenes: A multimodal dataset for autonomous driving✓ Link0.448PointPillars (KITTI)2019-03-26
nuScenes: A multimodal dataset for autonomous driving✓ Link0.442PointPillars2019-03-26
[]()0.440.390.570.260.511.050.2M^2BEV, Camera
[]()0.440.380.670.240.461.150.12FCOS3D-DSG
[]()0.440.370.60.250.471.00.15tsu-mdf
[]()0.440.360.670.260.41.590.12DHNet
[]()0.440.310.680.270.580.490.15mincheol moon
[]()0.430.360.690.250.451.430.12MMDet3D, Camera
[]()0.430.210.630.240.410.310.13cvprspa2
FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection✓ Link0.4280.3580.6900.2490.4521.4340.124FCOS3D2021-04-22
[]()0.420.360.710.270.480.970.14PeaceLove
[]()0.420.350.380.250.551.00.38Freespace
[]()0.420.330.660.260.351.660.2Camera
[]()0.420.30.450.341.550.390.14Capstone team
InfoFocus: 3D Object Detection for Autonomous Driving with Dynamic Information Modeling0.40.390.360.261.131.00.4InfoFocus2020-07-16
[]()0.40.340.660.260.631.630.14CenterNet
[]()0.390.380.370.271.171.00.41Henry
[]()0.390.310.730.250.51.520.12SAIC ADC
[]()0.380.490.480.841.581.450.36SparsePoint
[]()0.380.390.390.250.511.531.0BirdNet+ (multisweep)
[]()0.380.30.740.260.551.550.13MonoDIS
[]()0.380.30.720.270.541.170.18weareateam
[]()0.380.30.650.260.571.040.2first
[]()0.370.290.750.260.61.580.14LRM0
[]()0.360.30.830.280.611.090.19yingfei
[]()0.360.270.710.250.461.310.33QD-3DT
[]()0.350.30.450.270.871.00.38VIPL_ICT
[]()0.340.350.630.260.51.581.0RadarFusion
[]()0.340.280.840.280.721.230.17atss3d
[]()0.330.30.710.260.661.710.61HZ
[]()0.320.270.510.330.841.740.49553
[]()0.270.210.80.450.591.580.53Peixuan
[]()0.250.280.760.290.841.631.0camera_car
[]()0.230.090.670.431.090.740.34DLLA-search
[]()0.210.080.610.421.261.590.33MinkPoint
[]()0.180.090.70.420.961.00.59SECOND + PointPillars
[]()0.160.110.870.581.62.210.5PointNet
[]()0.140.10.90.570.661.661.0Monodis reproduction with zip
[]()0.140.050.820.430.612.081.0KPConvPillars
[]()0.130.050.880.570.781.410.76YZ_2019
[]()0.130.050.880.570.781.410.76WhiteWolf
[]()0.120.060.630.791.291.340.63IAIR-Pioneer
[]()0.080.090.920.920.91.080.92LGDnumber1
[]()0.080.050.820.791.161.430.87JerryAI
[]()0.080.050.820.791.141.430.87TeamDark
[]()0.080.010.860.640.791.391.0YZ2019
[]()0.070.021.060.610.831.511.03D-GCK
[]()0.060.050.820.931.191.430.87Team XD
[]()0.040.040.960.920.91.181.0dawn
[]()0.040.030.830.941.221.41.0mqdao
[]()0.030.010.970.920.930.990.93Radar-PointGNN: Graph convolutions
[]()0.010.01.021.01.060.990.9qww
[]()0.00.01.01.01.01.01.0Only one
Class-balanced Grouping and Sampling for Point Cloud 3D Object Detection✓ Link0.6330.528MEGVII2019-08-26