DOI: 10.1145/3511808.3557569

Cooperative Max-Pressure Enhanced Traffic Signal Control

Published: 17 October 2022

ABSTRACT

Adaptive traffic signal control is an important and challenging real-world problem that fits well with the task framework of deep reinforcement learning. As one of the critical design elements, the environmental state plays a crucial role in traffic signal control decisions. The state definitions of most existing works contain lane-level queue lengths, the intersection phase, and other features. However, these state representations are heuristically designed, which makes the performance of the resulting actions highly sensitive and unstable. To cope with this problem, this paper proposes Cooperative Max-Pressure enhanced State Learning for traffic signal control (CMP-SL), inspired by the advanced pressure definition for an intersection from the transportation field. First, CMP-SL explicitly extends the cooperative max-pressure into the state definition of a target intersection, aiming to obtain accurate environment information by including the traffic pressures of surrounding intersections. A graph attention network (GAT) is then used to learn the state representation of the target intersection in our spatial-temporal state module. Second, since the state is coupled with the reward in reinforcement learning, our method also incorporates the cooperative max-pressure of the target intersection into the reward definition. Furthermore, a temporal convolutional network (TCN) based sequence model captures the historical states of the traffic flow, and the historical spatial-temporal features and the current spatial state features are concatenated and fed into a DQN network to predict Q values and generate the phase action. Finally, experiments on two real-world traffic datasets demonstrate that our method achieves shorter average vehicle travel times and higher network throughput than state-of-the-art models.
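To make the pipeline described above more concrete, the sketch below shows, in PyTorch, how such a model could be assembled: a per-intersection pressure feature computed from incoming and outgoing queue lengths, a single-head graph attention layer that aggregates the target intersection's state with its neighbours', a small temporal convolutional network over the state history, and a DQN head that outputs one Q value per phase. This is a minimal illustration under assumed dimensions and layer choices, not the authors' implementation; the names `max_pressure`, `GATLayer`, `TCNBlock`, and `CMPSLQNet` are ours, and the cooperative max-pressure is simplified here to the standard per-intersection pressure.

```python
# Illustrative sketch only: assumed shapes and layer sizes, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def max_pressure(queue_in: torch.Tensor, queue_out: torch.Tensor) -> torch.Tensor:
    """Standard per-intersection pressure: sum over permitted movements of
    (incoming-lane queue length minus outgoing-lane queue length)."""
    return (queue_in - queue_out).sum(dim=-1)


class GATLayer(nn.Module):
    """Single-head graph attention over a target intersection and its K neighbours."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, target: torch.Tensor, neighbours: torch.Tensor) -> torch.Tensor:
        # target: (B, in_dim), neighbours: (B, K, in_dim)
        h_t = self.proj(target)                                  # (B, out_dim)
        h_n = self.proj(neighbours)                              # (B, K, out_dim)
        h_t_rep = h_t.unsqueeze(1).expand_as(h_n)                # (B, K, out_dim)
        scores = F.leaky_relu(self.attn(torch.cat([h_t_rep, h_n], dim=-1)))
        alpha = torch.softmax(scores, dim=1)                     # attention over neighbours
        return F.elu(h_t + (alpha * h_n).sum(dim=1))             # aggregated spatial state


class TCNBlock(nn.Module):
    """Dilated causal 1-D convolutions over the historical state sequence."""

    def __init__(self, dim: int, kernel: int = 2, dilations=(1, 2, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel, dilation=d, padding=(kernel - 1) * d)
            for d in dilations
        )

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (B, T, dim) -> temporal feature (B, dim)
        h = history.transpose(1, 2)                              # (B, dim, T)
        for conv in self.convs:
            h = F.relu(conv(h))[..., : history.size(1)]          # crop to keep causality
        return h[..., -1]                                        # feature at the last step


class CMPSLQNet(nn.Module):
    """Concatenate current spatial and historical temporal features, then
    predict one Q value per signal phase (a plain DQN head)."""

    def __init__(self, state_dim: int, hidden: int = 64, n_phases: int = 8):
        super().__init__()
        self.gat = GATLayer(state_dim, hidden)
        self.tcn = TCNBlock(hidden)
        self.q_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_phases)
        )

    def forward(self, target, neighbours, history):
        spatial = self.gat(target, neighbours)                   # (B, hidden)
        temporal = self.tcn(history)                             # (B, hidden)
        return self.q_head(torch.cat([spatial, temporal], dim=-1))


if __name__ == "__main__":
    # Toy example: 4 samples, 12-dim states, 3 neighbours, 10 historical steps.
    pressure = max_pressure(torch.randint(0, 20, (4, 12)).float(),
                            torch.randint(0, 20, (4, 12)).float())
    net = CMPSLQNet(state_dim=12)
    q_values = net(torch.randn(4, 12), torch.randn(4, 3, 12), torch.randn(4, 10, 64))
    phase = q_values.argmax(dim=-1)                              # greedy phase choice
```

Note that in the paper the pressures of surrounding intersections also enter the reward; this sketch only covers the state and Q-value side, and the history tensor is assumed to already hold per-step spatial features.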


Published in

CIKM '22: Proceedings of the 31st ACM International Conference on Information & Knowledge Management
October 2022, 5274 pages
ISBN: 9781450392365
DOI: 10.1145/3511808
General Chairs: Mohammad Al Hasan, Li Xiong
Publisher: Association for Computing Machinery, New York, NY, United States

Copyright © 2022 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Qualifiers: Short paper
