Runtime Safety‑shielded Deep Reinforcement Learning Approaches for Collision Avoidance in Autonomous Vehicles

Authors

  • Kattamuri Manideep
  • Kudupudi Rishi Krishna Srikar
  • Chandra Sekhar Koppireddy

Abstract

Self-driving technology leans heavily on deep reinforcement learning (DRL) to handle the complex, ever-changing nature of traffic. While DRL has shown it can be effective at avoiding accidents, there is a catch: it rarely comes with a solid safety guarantee once the car is actually on the road. The problem is that most current methods enforce safety only during the training phase, leaving the system vulnerable when it encounters an unfamiliar scenario or when conditions shift too rapidly. That is a major roadblock for deploying these systems in real cars, where safety is non-negotiable. This paper tackles that issue by introducing a “safety shield” framework designed for real-time collision avoidance. Instead of relying on the AI to remember its training, the shield runs alongside the learned policy while the vehicle is moving, checking actions before they are executed. Because this safety layer is separate from the learning process, it makes the entire system more reliable without requiring a rebuild of the core architecture. We test this approach in simulated traffic, and the difference is clear: the shielded framework significantly cuts down on crashes and safety violations compared to standard DRL, while performing the driving task just as efficiently. These findings suggest that an explicit runtime safety check is not just helpful; it is a practical, necessary step toward getting DRL-based cars safely onto real-world streets.
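To make the idea concrete, here is a minimal sketch of how a runtime safety shield might wrap a learned policy's action selection. All names, the time-to-collision check, and the 2-second threshold are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch of a runtime safety shield around a DRL policy.
# The time-to-collision rule and threshold are assumed for illustration.

BRAKE = "brake"

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact with the lead vehicle; infinite if not closing."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return gap_m / closing_speed_mps

def shield(policy_action: str, gap_m: float, closing_speed_mps: float,
           ttc_threshold_s: float = 2.0) -> str:
    """Override the learned policy's action with a safe fallback whenever
    the predicted time-to-collision drops below the threshold."""
    if time_to_collision(gap_m, closing_speed_mps) < ttc_threshold_s:
        return BRAKE          # safe fallback overrides the policy
    return policy_action      # otherwise defer to the learned policy

# The policy wants to accelerate, but the gap is closing fast (TTC = 1.25 s):
print(shield("accelerate", gap_m=10.0, closing_speed_mps=8.0))  # brake
# With a large gap (TTC = 10 s) the policy's choice passes through:
print(shield("accelerate", gap_m=80.0, closing_speed_mps=8.0))  # accelerate
```

Because the shield only inspects and filters actions, it can be bolted onto any trained policy without retraining, which mirrors the paper's point that the safety layer is decoupled from the learning process.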

Published

2026-02-26

How to Cite

Manideep, K., Rishi Krishna Srikar, K., & Sekhar Koppireddy, C. (2026). Runtime Safety‑shielded Deep Reinforcement Learning Approaches for Collision Avoidance in Autonomous Vehicles. Journal of Innovations in Data Science and Big Data Management, 5(1), 13–27. Retrieved from https://matjournals.net/engineering/index.php/JIDSBDM/article/view/3156