Physics-Informed Transformer Networks for Multi-Peril Insurance Pricing: A Novel Hybrid Computational Framework Integrating Actuarial Principles with Deep Attention Mechanisms

  • Eslam Seyam, Imam Muhammad Ibn Saud Islamic University
  • Mohamed Abdel Mawla Osman, Finance Department, King Saud University, Riyadh, Saudi Arabia
Keywords: Physics-informed neural networks, Transformer architecture, Insurance pricing, Generalized Linear Models, Gradient Boosting Machines, Actuarial constraints

Abstract

A classic problem in insurance pricing is the trade-off between actuarial validity and predictive accuracy. Traditional Generalized Linear Models (GLMs) follow insurance principles rigorously but offer limited predictive power, while machine learning algorithms deliver strong predictive performance yet ignore key insurance rules. To close this gap, we extend the Transformer architecture into what we call a Physics-Informed Transformer. The model integrates five core insurance rules (premium adequacy, monotonicity, multiplicative decomposition, calibration, and coherence) directly into both the architecture and the loss function. The proposed Physics-Informed Transformer uses multi-head attention to learn non-linear relationships among features while maintaining actuarial validity through a combination of soft and hard constraints. We evaluate the model on a French motor insurance dataset of 108,699 samples. The results show that the Physics-Informed Transformer outperforms existing models in actuarial performance, achieving a Gamma deviance of 1.0756, and reaches over 99% compliance with insurance validity rules, which typical machine learning algorithms violate. To quantify how well a model adheres to insurance rules, we introduce a new measure, the Actuarial Validity Score (AVS). The proposed model achieves an AVS of 0.7659, a 23% improvement over traditional GLMs and on par with Gradient Boosting Machines. Beyond prediction accuracy and rule compliance, the model's attention mechanism highlights actuarially meaningful feature interactions without manual feature engineering, offering interpretability that supports regulatory approval and acceptance. Despite these advantages, the Physics-Informed Transformer still falls short in certain areas, particularly Calibration Retention (10% compliance) and Monotonicity Preservation (72% compliance).
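The constrained loss described above can be sketched in plain Python. This is an illustrative reconstruction, not the authors' implementation: the function names (`gamma_deviance`, `monotonicity_penalty`, `adequacy_penalty`, `pit_loss`) and the penalty weights are assumptions, showing only how a Gamma-deviance data-fit term might be combined with soft penalties for two of the five rules (monotonicity and premium adequacy).

```python
import math

def gamma_deviance(y, mu):
    """Mean Gamma deviance, the paper's headline accuracy metric (lower is better)."""
    n = len(y)
    return (2.0 / n) * sum((yi - mi) / mi - math.log(yi / mi)
                           for yi, mi in zip(y, mu))

def monotonicity_penalty(mu_by_risk):
    """Soft constraint: premiums ordered by increasing risk should not decrease.
    Squared hinge on negative first differences of the predictions."""
    return sum(max(a - b, 0.0) ** 2 for a, b in zip(mu_by_risk, mu_by_risk[1:]))

def adequacy_penalty(y, mu):
    """Premium adequacy: total predicted premium should cover total losses.
    Penalizes only a shortfall, never a surplus."""
    return max(sum(y) - sum(mu), 0.0) ** 2

def pit_loss(y, mu, mu_by_risk, lam_mono=1.0, lam_adeq=1e-6):
    """Composite physics-informed loss: data fit plus weighted soft constraints.
    Hypothetical default weights; in practice they would be tuned."""
    return (gamma_deviance(y, mu)
            + lam_mono * monotonicity_penalty(mu_by_risk)
            + lam_adeq * adequacy_penalty(y, mu))
```

In this sketch the remaining rules (multiplicative decomposition, calibration, coherence) would enter either as further penalty terms or as hard architectural constraints, as the abstract indicates.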
Published
2025-12-12
How to Cite
Seyam, E., & Osman, M. A. M. (2025). Physics-Informed Transformer Networks for Multi-Peril Insurance Pricing: A Novel Hybrid Computational Framework Integrating Actuarial Principles with Deep Attention Mechanisms. Statistics, Optimization & Information Computing. https://doi.org/10.19139/soic-2310-5070-3232
Section
Research Articles