| Place | Team | AP (Bus), % | AP (Car), % | AP (Truck), % | mAP, % | Admission to the final |
|---|---|---|---|---|---|---|
| 1 | fitpredict | 86.65 | 85.24 | 57.65 | 76.52 | |
| 2 | icegas | 88.11 | 73.67 | 53.54 | 71.77 | |
| 3 | AwesomeNet | 90.98 | 75.84 | 48.20 | 71.68 | |
| 4 | DL_Team | 51.78 | 29.29 | 16.43 | 32.50 | |
| 5 | SSDTeam | 46.61 | 24.21 | 18.64 | 29.82 | + |
| 6 | Bold_Boys | 46.57 | 24.58 | 18.13 | 29.76 | + |
| 7 | Team-A | 47.64 | 25.21 | 2.36 | 25.07 | |
| 8 | LSCF | 42.62 | 23.57 | 2.17 | 22.79 | |
| 9 | RSREU | 19.32 | 32.34 | 0.67 | 17.44 | + |
| 10 | Лицей_№10 | 31.87 | 19.96 | 0.00 | 17.28 | + |
| 11 | BYLB [Big Yandex Lyceum Brothers] | 17.63 | 16.34 | 6.52 | 13.50 | + |
| 12 | Jurru | 8.28 | 21.25 | 1.34 | 10.29 | + |
The ranking is based on recognition results on the test set: the AP (average precision) metric is computed for each object class, and the mAP (mean average precision) metric is computed over the entire set.
The IoU (Intersection over Union) threshold used in the calculations is 0.5.
AP and mAP detection quality are computed with the library https://github.com/rafaelpadilla/Object-Detection-Metrics
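For reference, the sketch below illustrates how AP is typically computed under these settings: detections are pooled across images, sorted by confidence, greedily matched to ground-truth boxes at IoU ≥ 0.5, and the resulting precision-recall curve is integrated with all-point interpolation (the Pascal VOC convention that the linked library supports). The function names, the (x1, y1, x2, y2) box format, and the data structures are assumptions for illustration, not the competition's actual scoring code.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def average_precision(detections, ground_truths, iou_threshold=0.5):
    """AP for one class (illustrative sketch, not the competition's scoring code).

    detections: list of (image_id, confidence, box), pooled over all images.
    ground_truths: dict mapping image_id -> list of boxes.
    """
    # Sort detections by descending confidence, then match greedily.
    detections = sorted(detections, key=lambda d: -d[1])
    matched = {img: [False] * len(boxes) for img, boxes in ground_truths.items()}
    n_gt = sum(len(b) for b in ground_truths.values())
    tp = np.zeros(len(detections))
    fp = np.zeros(len(detections))
    for i, (img, _, box) in enumerate(detections):
        gt_boxes = ground_truths.get(img, [])
        best, best_j = 0.0, -1
        for j, gt in enumerate(gt_boxes):
            ov = iou(box, gt)
            if ov > best:
                best, best_j = ov, j
        if best >= iou_threshold and not matched[img][best_j]:
            tp[i] = 1
            matched[img][best_j] = True  # each GT box may be matched at most once
        else:
            fp[i] = 1  # miss, duplicate detection, or IoU below the threshold
    recall = np.cumsum(tp) / max(n_gt, 1)
    precision = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-12)
    # All-point interpolation: integrate the precision envelope over recall.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for k in range(len(mpre) - 2, -1, -1):
        mpre[k] = max(mpre[k], mpre[k + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# Toy usage: the top detection matches the only GT box, the second is a duplicate.
dets = [("img1", 0.9, (10, 10, 50, 50)), ("img1", 0.6, (12, 12, 48, 48))]
gts = {"img1": [(11, 11, 49, 49)]}
print(average_precision(dets, gts))  # 1.0
```

With the per-class APs in hand, mAP is simply their mean over the three classes, e.g. `(AP_bus + AP_car + AP_truck) / 3`, which is consistent with the mAP column in the table above.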