Existing open-set recognition (OSR) studies typically assume that each image contains only one class label, with the unknown test set (negative) having a label space disjoint from that of the known test set (positive), a scenario referred to as full-label shift. This paper introduces the mixed OSR problem, where test images contain multiple class semantics and known and unknown classes co-occur in the negatives, leading to a more complex super-label shift that better reflects real-world scenarios. To tackle this challenge, we propose the OpenSlot framework, built on object-centric learning, which uses slot features to represent diverse class semantics and generate class predictions. The proposed anti-noise slot (ANS) technique mitigates the impact of noise (invalid or background) slots during classification training, addressing the semantic misalignment between class predictions and the ground truth. We evaluate OpenSlot on both mixed and conventional OSR benchmarks. Without elaborate designs, our method not only outperforms existing approaches in detecting super-label shifts across OSR tasks, but also achieves state-of-the-art performance on conventional benchmarks. Moreover, OpenSlot can localize class objects without using bounding-box annotations during training, demonstrating competitive performance in open-set object detection and potential for generalization.
OpenSlot: Mixed Open-Set Recognition with Object-Centric Learning
IEEE Transactions on Multimedia (TMM), 2025
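For intuition, here is a minimal sketch of one generic way a slot-based classifier can map per-slot features to image-level multi-label predictions while masking out low-confidence ("noise") slots. The module name, dimensions, and the confidence-threshold filtering are illustrative assumptions; this is not the paper's OpenSlot/ANS implementation, which handles noise slots through its training objective rather than a fixed threshold.

```python
import torch
import torch.nn as nn

class SlotClassifierSketch(nn.Module):
    """Toy slot-based classifier: each slot feature is mapped to class
    logits, low-confidence ("noise") slots are masked out, and the
    remaining slots are aggregated into image-level predictions.

    Illustrative assumption only; not the paper's ANS technique.
    """

    def __init__(self, slot_dim: int = 64, num_classes: int = 20,
                 noise_threshold: float = 0.5):
        super().__init__()
        self.head = nn.Linear(slot_dim, num_classes)  # per-slot class logits
        self.noise_threshold = noise_threshold

    def forward(self, slots: torch.Tensor) -> torch.Tensor:
        # slots: (batch, num_slots, slot_dim), e.g. from a slot-attention encoder
        slot_logits = self.head(slots)                          # (B, S, C)
        slot_conf = slot_logits.sigmoid().max(dim=-1).values    # (B, S)
        keep = (slot_conf >= self.noise_threshold).unsqueeze(-1)  # mask noise slots
        # Aggregate kept slots into image-level multi-label logits (max over slots).
        masked = slot_logits.masked_fill(~keep, float("-inf"))
        image_logits = masked.max(dim=1).values                 # (B, C)
        # If every slot of a sample was masked, fall back to the raw per-slot max.
        all_masked = (~keep).all(dim=1)                         # (B, 1)
        fallback = slot_logits.max(dim=1).values
        return torch.where(all_masked, fallback, image_logits)


# Example: 2 images, 6 slots each, 64-dim slot features.
slots = torch.randn(2, 6, 64)
logits = SlotClassifierSketch()(slots)
print(logits.shape)  # torch.Size([2, 20])
```

The max-over-slots aggregation is one simple choice for turning object-level predictions into an image-level multi-label output; it keeps each retained slot tied to a single dominant class, which is the kind of slot-to-class alignment the abstract describes at a conceptual level.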