To make full use of computer vision technology in stores, one must consider the actual needs and characteristics of the retail scene. Pursuing this goal, we introduce the United Retail Datasets (Unitail), a large-scale benchmark of basic visual tasks on products that challenges algorithms for detecting, reading, and matching. With 1.8M quadrilateral-shaped instances annotated, the Unitail offers a detection dataset that better aligns with product appearance. Furthermore, it provides a gallery-style OCR dataset containing 1454 product categories, 30k text regions, and 21k transcriptions to enable robust reading on products and motivate enhanced product matching. Besides benchmarking the datasets with various state-of-the-art methods, we customize a new detector for product detection and provide a simple OCR-based matching solution whose effectiveness is verified.