A Travel Mode Identification Framework via Contrastive Fusion of Multi-View Trajectory Representations
Yutian Lei, Xuefeng Guan, and Huayi Wu
ISPRS International Journal of Geo-Information, 2025
Travel mode identification (TMI) plays a crucial role in intelligent transportation systems by accurately identifying travel modes from Global Positioning System (GPS) trajectory data. Because trajectory data inherently exhibit complementary spatial and kinematic patterns, recent TMI methods generally combine these characteristics through image-based projections or direct concatenation. However, such approaches achieve only shallow fusion of the two feature types and cannot effectively align them in a shared latent space. To overcome this limitation, we introduce multi-view contrastive fusion (MVCF)-TMI, a novel TMI framework that enhances identification accuracy and model generalizability by aligning spatial and kinematic views through multi-view contrastive learning. Our framework employs multi-view learning to extract spatial and kinematic features separately, followed by an inter-view contrastive loss that aligns the features in a shared subspace. This approach enables cross-view semantic understanding and better captures complementary information across different trajectory representations. Extensive experiments show that MVCF-TMI outperforms baseline methods, achieving 86.45% accuracy on the GeoLife dataset. The model also demonstrates strong generalization by transferring knowledge from pretraining on the large-scale GeoLife dataset to the smaller SHL dataset.
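The inter-view alignment described above can be sketched as a symmetric InfoNCE-style contrastive loss: embeddings of the same trajectory under the spatial and kinematic views form a positive pair, while all other pairings in the batch act as negatives. The sketch below is a minimal illustration in NumPy; the function names, the temperature value, and the use of InfoNCE as the specific contrastive objective are assumptions, not the paper's exact formulation, and in the full framework the two embedding matrices would come from separate view-specific encoders.

```python
import numpy as np

def log_softmax(x, axis=-1):
    """Numerically stable log-softmax."""
    m = x.max(axis=axis, keepdims=True)
    return x - m - np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def inter_view_info_nce(z_spatial, z_kinematic, temperature=0.1):
    """Symmetric InfoNCE loss between two view embeddings (illustrative sketch).

    Row i of each (B, D) matrix is the embedding of trajectory i under one
    view; the (spatial_i, kinematic_i) pair is the positive and every other
    pairing in the batch is a negative.
    """
    # L2-normalize so the dot product becomes cosine similarity
    za = z_spatial / np.linalg.norm(z_spatial, axis=1, keepdims=True)
    zb = z_kinematic / np.linalg.norm(z_kinematic, axis=1, keepdims=True)
    logits = za @ zb.T / temperature  # (B, B) scaled similarity matrix
    # Positives sit on the diagonal; average both alignment directions.
    loss_ab = -np.diag(log_softmax(logits, axis=1)).mean()
    loss_ba = -np.diag(log_softmax(logits.T, axis=1)).mean()
    return 0.5 * (loss_ab + loss_ba)
```

Minimizing this loss pulls the two views of the same trajectory together in the shared subspace while pushing apart views of different trajectories, which is the alignment effect the abstract attributes to the inter-view contrastive loss.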