The SPRD (Surveillance Person Re-Identification Dataset) contains 9,700 pedestrian image sequences captured from arbitrary viewpoints by 24 real surveillance cameras under varying illumination conditions. Each sequence contains roughly 200 to 300 images of the same person. For each camera view, we record continuous human sequences to facilitate multi-shot person Re-ID. The dataset is organized as follows: each folder contains the images of a unique person, possibly spanning several camera views, and the number of images per person varies. Each image filename encodes, in order, the camera ID, person ID, sequence number, and frame ID. For example, the file “057_02_0001_00562.jpg” denotes camera ID 57, person ID 2, sequence number 1, and frame ID 562. We follow a multi-shot person Re-ID setting: instead of a single image, a sequence of images is used together to represent a person. A pair of person image sequences used for training should come from different cameras (which can be read from the camera ID).
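The naming rule above can be decoded programmatically. The sketch below is a minimal, illustrative parser (the function and type names are our own, not part of any official SPRD toolkit) that splits a filename into its four underscore-separated fields:

```python
from typing import NamedTuple


class FrameInfo(NamedTuple):
    """Fields encoded in an SPRD image filename, per the naming rule."""
    camera_id: int
    person_id: int
    sequence_id: int
    frame_id: int


def parse_sprd_filename(filename: str) -> FrameInfo:
    """Parse a filename such as '057_02_0001_00562.jpg' into
    (camera ID, person ID, sequence number, frame ID)."""
    stem = filename.rsplit(".", 1)[0]          # drop the '.jpg' extension
    cam, person, seq, frame = stem.split("_")  # four zero-padded fields
    return FrameInfo(int(cam), int(person), int(seq), int(frame))


info = parse_sprd_filename("057_02_0001_00562.jpg")
print(info)  # FrameInfo(camera_id=57, person_id=2, sequence_id=1, frame_id=562)
```

The camera ID field is what allows training pairs to be restricted to sequences from different cameras, as required by the multi-shot setting.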
Fig. 1: Examples of pedestrians in SPRD
The evaluation criterion for this Re-ID dataset is 3-fold cross-validation. In particular, we divide all person folders into 3 sessions; in each run, 2 sessions are used for training and the remaining one for testing. The final performance is the average over all 3 runs. For each run, the evaluation metric is the commonly used average Cumulative Matching Characteristics (CMC) curve. Please refer to the following papers for more details:
[ZhangACCV2016] Chongyang Zhang, Bingbing Ni, Li Song, Xiaokang Yang, and Wenjun Zhang, BEST: Benchmark and Evaluation of Surveillance Task, in the 13th Asian Conference on Computer Vision Workshop on Benchmark and Evaluation of Surveillance Task (BEST2016), Taipei, Taiwan ROC, November 20-24, 2016;
[YanECCV2016] Y. Yan, B. Ni, Z. Song, C. Ma, Y. Yan, and X. Yang, Person Re-Identification via Recurrent Feature Aggregation, European Conference on Computer Vision (ECCV), 2016.
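As a reference for the metric named above, here is a minimal sketch of computing a CMC curve from a query-to-gallery distance matrix. It assumes smaller distances mean higher similarity and that every query identity appears in the gallery; the function name and signature are illustrative, not part of an official evaluation script. The per-fold curves would then be averaged over the 3 cross-validation runs.

```python
import numpy as np


def cmc_curve(dist, query_ids, gallery_ids, max_rank=20):
    """Compute a Cumulative Matching Characteristics (CMC) curve.

    dist:        (num_query, num_gallery) distance matrix (smaller = closer).
    query_ids:   person ID for each query row.
    gallery_ids: person ID for each gallery column.
    Returns an array whose entry k-1 is the fraction of queries whose
    correct identity appears within the top-k ranked gallery entries.
    """
    num_query = dist.shape[0]
    hits = np.zeros(max_rank)
    for i in range(num_query):
        order = np.argsort(dist[i])               # gallery sorted by distance
        matches = gallery_ids[order] == query_ids[i]
        if not matches.any():
            continue                              # identity absent from gallery
        first_hit = int(np.argmax(matches))       # rank of first correct match
        if first_hit < max_rank:
            hits[first_hit:] += 1                 # counts toward all ranks >= hit
    return hits / num_query
```

For example, a query whose correct match is ranked second contributes 0 to rank-1 accuracy but 1 to rank-2 and beyond.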
● No commercial reproduction, distribution, display or performance rights in this work are provided.
● If you use this dataset in any publication, we kindly request that you acknowledge this website (http://best.sjtu.edu.cn) and cite the following paper:
[ZhangACCV2016] Chongyang Zhang, Bingbing Ni, Li Song, Guangtao Zhai, Xiaokang Yang, and Wenjun Zhang, BEST: Benchmark and Evaluation of Surveillance Task, in the 13th Asian Conference on Computer Vision Workshop on Benchmark and Evaluation of Surveillance Task (BEST2016), Taipei, Taiwan ROC, November 20-24, 2016.
2016 © SJTU-BEST. ICP filing: 沪交ICP备20160083. Supported by: Wei Cheng.