Shimosaka Research Group

pursuing MIUBIQ (machine intelligence in UbiComp Research)


Presenting our paper on Continuous Inverse Reinforcement Learning with State-wise Safety Constraints for Stable Driving Behavior Prediction at ITSC2025

2025/10/30 | News, Presentations

The IEEE International Conference on Intelligent Transportation Systems (ITSC2025) will be held on November 18-21, 2025, in Gold Coast, Australia.
ITSC is the annual flagship conference of the IEEE Intelligent Transportation Systems Society (ITSS).

The following presentation will be delivered.

Continuous Inverse Reinforcement Learning with State-wise Safety Constraints for Stable Driving Behavior Prediction

Abstract:
Inverse reinforcement learning (IRL) is a promising approach for modeling human driving behaviors by learning underlying reward functions from expert demonstrations. While recent studies have incorporated failed demonstrations to improve learning robustness, most existing methods enforce safety constraints only at the trajectory level, which is insufficient for real-world autonomous driving scenarios requiring per-state safety.
This paper proposes a novel IRL framework that introduces state-wise safety constraints via a behavior discriminator, which generates safety labels for each state based on environmental context. By integrating the discriminator into the main reward optimization loop, the proposed method avoids additional computational complexity while ensuring safety at every decision point.
Experimental results in the CARLA simulator across multiple driving scenarios demonstrate improved performance in both behavior imitation and satisfaction of driving task requirements. The results confirm that enforcing state-wise safety significantly enhances the stability and reliability of driving behavior prediction in static contextual environments, providing a viable direction for safer autonomous decision-making.
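
To make the mechanism above concrete, the following is a minimal sketch of how a behavior discriminator that assigns per-state safety labels could be folded into the reward update of an IRL loop. It is illustrative only and not the paper's implementation: the linear reward model, the logistic discriminator, the toy data, and the simplified maximum-entropy-style gradient are all assumptions made for this example.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: each state is a feature vector describing environmental context.
n_features = 4
expert_states = rng.normal(loc=1.0, size=(200, n_features))   # successful demonstrations
failed_states = rng.normal(loc=-1.0, size=(200, n_features))  # failed demonstrations

# Behavior discriminator: estimates p(safe | state) from state features,
# trained here as a plain logistic regression on expert vs. failed states.
w_disc = np.zeros(n_features)
X = np.vstack([expert_states, failed_states])
y = np.concatenate([np.ones(len(expert_states)), np.zeros(len(failed_states))])
for _ in range(500):
    p = sigmoid(X @ w_disc)
    w_disc += 0.1 * X.T @ (y - p) / len(y)

def safety_label(states):
    # State-wise safety label in [0, 1]; values near 1 mean "safe".
    return sigmoid(states @ w_disc)

# Reward optimization loop with state-wise safety constraints.
# The reward is linear in the state features; unsafe states are penalized
# individually rather than aggregating safety over whole trajectories.
theta = np.zeros(n_features)   # reward weights
lam = 5.0                      # penalty weight on unsafe states (assumed hyperparameter)

for _ in range(200):
    # Stand-in for policy rollouts: perturbed copies of expert states.
    policy_states = expert_states + rng.normal(scale=0.5, size=expert_states.shape)

    # Query the discriminator inside the main optimization loop, so the
    # per-state constraint needs no separate constrained-optimization stage.
    unsafe = 1.0 - safety_label(policy_states)

    # Maximum-entropy-style gradient: match expert feature expectations,
    # minus a state-wise penalty on states the discriminator flags as unsafe.
    grad = (expert_states.mean(axis=0)
            - policy_states.mean(axis=0)
            - lam * (unsafe[:, None] * policy_states).mean(axis=0))
    theta += 0.05 * grad

print("learned reward weights:", np.round(theta, 3))

Querying the discriminator inside the same loop that updates the reward weights is what keeps the state-wise constraint from requiring a separate constrained-optimization stage, which is the computational benefit the abstract refers to.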

-----
Presentation information (Program)

November 21, 2025, 16:00-16:20, Session “S42c: Safety and Risk Assessment for Autonomous Driving Systems”
Title: Continuous Inverse Reinforcement Learning with State-Wise Safety Constraints for Stable Driving Behavior Prediction
Authors: Zhao, Minglu (Tokyo Institute of Technology); Shimosaka, Masamichi (Tokyo Institute of Technology)
-----


