Deep Reinforcement Learning for Adaptive Demand Forecasting and Perishable Inventory Optimization in Multi-Echelon Supply Chain

Dhivya, R. and Thailambal, G. (2025) Deep Reinforcement Learning for Adaptive Demand Forecasting and Perishable Inventory Optimization in Multi-Echelon Supply Chain. In: 2025 9th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India.


Abstract

Multi-echelon supply chains for perishable items are becoming increasingly complex, and the tension between cost, service, and waste is increasingly difficult to navigate. Classical inventory policies such as (s,S) and (Q,R) do not consistently adapt to demand and lead-time uncertainty or to perishability, and often fail to yield the desired outcome. The purpose of this study is to create and deploy a complete AI–Deep Reinforcement Learning (DRL) framework for adaptive demand forecasting and inventory control in a multi-echelon supply system for perishable products. The study offers a hybrid approach that combines probabilistic demand forecasting through a Transformer-based model, hierarchical multi-agent DRL for control, and a constrained optimization layer for feasibility. The framework was compared to (s,S), (Q,R), MPC, and single-agent DRL baselines using the publicly available FreshRetailNet-50K dataset. The results indicated a total cost reduction of 25%, a waste reduction of approximately 40%, and a fill rate increase of 2.6%, outperforming the existing methods. The proposed approach provides evidence for how AI-driven adaptive control can improve the efficiency and sustainability of perishable multi-echelon supply chains.
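For readers unfamiliar with the classical baselines mentioned in the abstract, the (s,S) policy can be sketched in a few lines. This is a minimal illustration of the general technique, not the study's implementation; the parameter values are illustrative assumptions.

```python
# Minimal sketch of the classical (s,S) inventory policy used as a baseline
# in the paper: whenever the inventory position falls to the reorder point s
# or below, order enough to bring it back up to the order-up-to level S.
# All numeric values here are illustrative assumptions, not from the study.

def s_S_order(inventory_position: float, s: float, S: float) -> float:
    """Return the order quantity under an (s,S) policy."""
    if inventory_position <= s:
        return S - inventory_position  # replenish up to S
    return 0.0  # above the reorder point: do nothing

# Example with hypothetical reorder point s=20 and order-up-to level S=100
print(s_S_order(15, s=20, S=100))  # -> 85.0 (below s, order up to S)
print(s_S_order(50, s=20, S=100))  # -> 0.0 (above s, no order)
```

The fixed thresholds s and S are exactly what the paper argues against: they do not adapt to demand uncertainty, lead times, or product shelf life, which motivates replacing them with a learned, state-dependent control policy.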

Item Type: Conference or Workshop Item (Paper)
Subjects: Computer Applications > Artificial Intelligence
Domains: Computer Science
Depositing User: Mr IR Admin
Date Deposited: 07 May 2026 17:14
Last Modified: 07 May 2026 17:14
URI: https://ir.vistas.ac.in/id/eprint/14039
