Technical Name Multimodal Fusion of Telecom and Vision-based Data for Scalable Traffic Prediction
Project Operator National Taiwan University
Project Host 徐宏民
Summary
We leverage extensive telecom data from mobile users, integrated with vision-based data, for comprehensive traffic insights. We collected the first-ever telecom data from extensive road sections as a novel traffic indicator. We fused vision data from cameras with telecom data to enhance prediction accuracy. We then implemented a dynamic loss function to balance the impact of multi-modal data, achieving accurate cross-modal predictions. Our approach has been accepted at the top conferences AAAI, CIKM, and WWW.
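As a rough illustration of the two-branch fusion idea (not the authors' released code), the following minimal PyTorch sketch encodes per-road-section telecom features and camera-derived vision features separately, concatenates them, and regresses traffic flow. All layer sizes, feature dimensions, and names (TelecomVisionFusion, telecom_dim, vision_dim) are illustrative assumptions.

import torch
import torch.nn as nn

class TelecomVisionFusion(nn.Module):
    def __init__(self, telecom_dim=16, vision_dim=64, hidden=32):
        super().__init__()
        # Encoder for pre-aggregated cellular/telecom features of a road section.
        self.telecom_enc = nn.Sequential(nn.Linear(telecom_dim, hidden), nn.ReLU())
        # Encoder for camera-derived features (e.g., vehicle counts or embeddings).
        self.vision_enc = nn.Sequential(nn.Linear(vision_dim, hidden), nn.ReLU())
        # Fusion head maps the concatenated representation to a flow estimate.
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, telecom_x, vision_x):
        # Concatenate the two modality embeddings and predict flow per road section.
        z = torch.cat([self.telecom_enc(telecom_x), self.vision_enc(vision_x)], dim=-1)
        return self.head(z).squeeze(-1)

# Usage with random stand-in data for a batch of 8 road sections.
model = TelecomVisionFusion()
flow = model(torch.randn(8, 16), torch.randn(8, 64))
print(flow.shape)  # torch.Size([8])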
Scientific Breakthrough
Our research utilizes widely distributed telecom and vision data to propose a multi-modal fusion framework, significantly enhancing traffic prediction accuracy. We process telecom data while ensuring user privacy. By integrating telecom and vision data, we improve prediction accuracy by over 20%. We designed a dynamic loss function to achieve cross-modal predictions, successfully forecasting traffic flow in sensor-free areas with a 22% accuracy improvement. The work has been accepted at AAAI'24.
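The exact form of the dynamic loss is not given here; as a hedged illustration only, the sketch below balances telecom-branch and vision-branch prediction losses with learnable uncertainty weights, a common dynamic multi-task weighting scheme. The class name DynamicMultiModalLoss and the per-modality log-variance parameters are assumptions for illustration.

import torch
import torch.nn as nn

class DynamicMultiModalLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # One learnable log-variance per modality; training adapts the balance dynamically.
        self.log_var_telecom = nn.Parameter(torch.zeros(()))
        self.log_var_vision = nn.Parameter(torch.zeros(()))
        self.mse = nn.MSELoss()

    def forward(self, pred_telecom, pred_vision, target):
        loss_t = self.mse(pred_telecom, target)
        loss_v = self.mse(pred_vision, target)
        # Each branch loss is scaled by its learned precision and regularized by its log-variance.
        return (torch.exp(-self.log_var_telecom) * loss_t + self.log_var_telecom
                + torch.exp(-self.log_var_vision) * loss_v + self.log_var_vision)

# Usage with stand-in predictions and targets for 8 road sections.
criterion = DynamicMultiModalLoss()
loss = criterion(torch.randn(8), torch.randn(8), torch.randn(8))
loss.backward()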
Industrial Applicability
Our technology improves traffic prediction accuracy and scalability by integrating telecom and vision data, demonstrating significant potential for industry applications. Traditional traffic prediction relies on limited sensors, while our approach uses extensive telecom network data to reduce costs and expand the application scope. This technology can reduce management costs, improve efficiency, and create new business models for telecom companies. We have live proof-of-concept (POC) demos in several major cities in Taiwan.
Keyword Telecom-based Flow, Cellular Traffic, Intelligent Transportation System, Spatio-Temporal Prediction, Multi-modal Data Fusion, Cross-Modal Prediction, Graph Neural Networks (GNN)
Notes
  • Contact
  • XU, HONG-MIN