Kernel Sharing With Joint Boosting For Multi-Class Concept Detection

Abstract
Object/scene detection by discriminative kernel-based classification has gained great interest due to its promising performance and flexibility. In this paper, unlike traditional approaches that independently build binary classifiers to detect individual concepts, we propose a new framework for multi-class concept detection based on kernel sharing and joint learning. By sharing "good" kernels among concepts, the accuracy of individual weak detectors can be greatly improved; by jointly learning detectors common to several classes, both the number of kernels and the computational cost required to detect each individual concept can be reduced. We demonstrate our approach by developing an extended JointBoost framework, which chooses the optimal kernel and the subset of sharing classes in an iterative boosting process. In addition, we construct multi-resolution visual vocabularies by hierarchical clustering and compute kernels based on spatial matching. We tested our method on detecting 12 concepts (objects, scenes, etc.) over 80+ hours of broadcast news video from the challenging TRECVID 2005 corpus. Significant performance gains were achieved: 10% in mean average precision (MAP), and up to 34% in average precision (AP) for concepts such as maps, building, and boat-ship. Extensive analysis of the results also reveals interesting and important underlying relations among the concepts.
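The core sharing step described above can be illustrated with a small sketch. The following is a minimal, self-contained toy (not the paper's exact extended framework): scalar feature columns stand in for per-sample kernel responses, labels are multi-label as in concept detection, and one boosting round greedily selects, for each candidate kernel, the subset of classes that benefit from sharing a regression stump (classes outside the subset receive a per-class constant, as in Torralba et al.'s JointBoost). All data, names, and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-label data: 3 concepts, 2 candidate "kernels" (scalar responses).
# Concepts 0 and 1 co-occur and respond to kernel 0; concept 2 to kernel 1.
n, C = 90, 3
X = rng.normal(size=(n, 2))
X[:45, 0] += 2.0          # kernel 0 fires on samples carrying concepts 0 and 1
X[60:, 1] += 2.0          # kernel 1 fires on samples carrying concept 2
Y = np.zeros((n, C))
Y[:45, [0, 1]] = 1.0
Y[60:, 2] = 1.0
Z = 2.0 * Y - 1.0         # +/-1 regression targets, one column per concept
W = np.ones((n, C))       # boosting weights (uniform at the first round)

def stump_error(f, thresh, members):
    """Weighted squared error of a stump a*[f>t]+b shared by `members`;
    non-member classes get a per-class constant k_c (JointBoost-style)."""
    ind = (f > thresh).astype(float)
    w, z = W[:, members], Z[:, members]
    s1 = (w * ind[:, None]).sum()
    s0 = (w * (1.0 - ind)[:, None]).sum()
    b = (w * (1.0 - ind)[:, None] * z).sum() / max(s0, 1e-12)
    a_plus_b = (w * ind[:, None] * z).sum() / max(s1, 1e-12)
    pred = np.where(ind[:, None] > 0, a_plus_b, b)
    err = (w * (z - pred) ** 2).sum()
    for c in set(range(C)) - set(members):
        k = (W[:, c] * Z[:, c]).sum() / W[:, c].sum()
        err += (W[:, c] * (Z[:, c] - k) ** 2).sum()
    return err

def best_shared_stump():
    """One round: pick the (kernel, sharing subset) with lowest joint error,
    growing the subset greedily while the error keeps decreasing."""
    best = (np.inf, None, None)
    for j in range(X.shape[1]):
        f, t = X[:, j], np.median(X[:, j])
        subset, pool = [], list(range(C))
        while pool:
            errs = [stump_error(f, t, subset + [c]) for c in pool]
            i = int(np.argmin(errs))
            if subset and errs[i] >= stump_error(f, t, subset):
                break
            subset.append(pool.pop(i))
        e = stump_error(f, t, subset)
        if e < best[0]:
            best = (e, j, tuple(sorted(subset)))
    return best

err, kernel, shared = best_shared_stump()
print(kernel, shared)  # kernel 0 is shared by concepts 0 and 1
```

On this toy data the round picks kernel 0 shared by concepts {0, 1}: sharing it halves the number of stumps those two concepts would otherwise need, which is the complexity saving the abstract refers to.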
