Multimodal Feature Integration on Manifold for Traffic Applications
Abstract
In the field of traffic modeling, objects (e.g., various vehicles) are usually represented using multimodal features. However, two problems remain unsolved in how to better utilize these multimodal features: 1) missing features caused by noise, and 2) the curse of dimensionality. In this paper, we address these two problems by integrating the multimodal features on the Grassmann manifold. By defining grouping criteria on the multimodal features, the feature vectors are grouped into a set of subspaces, each of which is treated as a point on the Grassmann manifold. To deal with missing features, the L2-Hausdorff distance, a metric that compares multimodal feature vectors with different numbers of subspaces, is computed first, and a kernel matrix is then computed from it. Based on the kernel matrix, we propose both a supervised and an unsupervised feature selection criterion to obtain representative features in the Reproducing Kernel Hilbert Space (RKHS), which alleviates the curse of dimensionality to a significant extent. Experimental results on three different multimodal datasets show that the proposed feature integration technique outperforms state-of-the-art methods.
Keywords
Multimodal feature fusion, Grassmann manifold, L2-Hausdorff distance, Kernel matrix.
DOI
10.12783/dtetr/ecame2017/18438