DML, Sharif University of Technology
Log-Sum-Exponential Estimator for Off-Policy Evaluation and Learning
July 2025
Keywords: Off-Policy Learning, Off-Policy Evaluation, Log Sum Exponential, Regret Bound, Generalization Bound, Concentration, Bias and Variance
A. Behnamnia, G. Aminian, A. Aghaei, C. Shi, and H. R. Rabiee
Off-policy learning and evaluation scenarios leverage logged bandit feedback datasets, which contain context, action, propensity score, and feedback for each data point. These scenarios face significant challenges due to high variance and poor performance with low-quality propensity scores and heavy-tailed reward distributions. We address these issues by introducing a novel estimator based on the log-sum-exponential (LSE) operator, which outperforms traditional inverse propensity score estimators. Our LSE estimator demonstrates variance reduction and robustness under heavy-tailed conditions. For off-policy evaluation, we derive upper bounds on the estimator's bias and variance. In the off-policy learning scenario, we establish bounds on the regret, the performance gap between our LSE estimator and the optimal policy, assuming a bounded (1 + ϵ)-th moment of the weighted reward. Notably, we achieve a convergence rate of O(n^{−ϵ/(1+ϵ)}) for the regret bounds, where n is the number of training samples and ϵ ∈ [0, 1]. Theoretical analysis is complemented by comprehensive empirical evaluations in both off-policy learning and evaluation scenarios, confirming the practical advantages of our approach.
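The abstract contrasts the LSE-based estimator with the standard inverse propensity score (IPS) estimator on importance-weighted rewards. Below is a minimal sketch of that contrast; the exact LSE form, the sign convention of the temperature parameter lam, and the toy heavy-tailed data are assumptions for illustration, not taken verbatim from the paper.

import numpy as np

def ips_estimate(weights, rewards):
    """Standard IPS estimate: plain mean of importance-weighted rewards."""
    return np.mean(weights * rewards)

def lse_estimate(weights, rewards, lam=-1.0):
    """Log-sum-exponential style estimate of the weighted reward.

    lam is a temperature parameter (assumed negative here so that very large
    importance-weighted rewards are damped); as lam -> 0 the estimate
    approaches the IPS mean.
    """
    z = weights * rewards
    # (1 / lam) * log( (1/n) * sum_i exp(lam * z_i) ), computed stably
    m = np.max(lam * z)
    return (m + np.log(np.mean(np.exp(lam * z - m)))) / lam

# Toy logged-bandit data: heavy-tailed importance weights pi(a|x)/pi0(a|x)
# and bounded rewards (hypothetical, for illustration only).
rng = np.random.default_rng(0)
weights = rng.pareto(2.0, size=1000) + 1.0
rewards = rng.uniform(0.0, 1.0, size=1000)

print("IPS:", ips_estimate(weights, rewards))
print("LSE:", lse_estimate(weights, rewards, lam=-1.0))

On such heavy-tailed weights the LSE-style estimate is typically smaller and less sensitive to a few extreme weighted rewards than the plain IPS mean, which is the variance-reduction behavior the abstract describes.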
Type: Conference
Conference: International Conference on Machine Learning
Publisher: OpenReview
Location: Vancouver, Canada