Double Machine Learning isn't as "doubly robust" as you might think. New paper out!

DML is popular in causal inference partly because of its "double robustness" property: get at least one of the two models (treatment or outcome) right, and the causal estimate should still be correct. However, for common ways DML is used in practice, this isn't the case:

The widely used Robinson estimator isn't doubly robust in practice.
Even the theoretically doubly robust Chernozhukov estimator fails in standard implementations.

We explain where this issue comes from and how it can be mitigated, and we propose Augmented DML, an estimator that achieves double robustness by automatically adapting when either model is misspecified.

Paper link in comments, or find us at the NeurIPS CauScien workshop on Dec 6th. Joint work with Gianluca Detommaso, Yikuan Li, Manfred Opper and Michael Brückner.

#CausalInference #Causality #MachineLearning #NeurIPS2025
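To make the Robinson claim concrete, here is a minimal numerical sketch (my own toy simulation, not the paper's experiments) of the partially linear model Y = theta*T + g(X) + eps. The nuisance choices (sin, squaring) and theta = 2 are illustrative assumptions. It shows the asymmetry: the partialling-out estimate survives a wrong outcome model when the treatment model is right, but not the other way around.

```python
# Sketch: the Robinson (partialling-out) estimator is NOT doubly robust.
# Simulation design (sin / X**2 nuisances, theta = 2) is an illustrative
# assumption of mine, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
X = rng.normal(size=n)
m = np.sin(X)                    # true treatment nuisance E[T | X]
theta = 2.0                      # causal parameter of interest
T = m + rng.normal(size=n)
Y = theta * T + X**2 + rng.normal(size=n)
ell = theta * m + X**2           # true outcome regression E[Y | X]

def robinson(m_hat, ell_hat):
    # theta_hat = sum (T - m_hat)(Y - ell_hat) / sum (T - m_hat)^2
    r_t = T - m_hat
    return np.sum(r_t * (Y - ell_hat)) / np.sum(r_t**2)

theta_m_ok = robinson(m, np.zeros(n))    # treatment model right, outcome wrong
theta_l_ok = robinson(np.zeros(n), ell)  # outcome model right, treatment wrong
print(theta_m_ok)  # stays close to the true theta = 2
print(theta_l_ok)  # noticeably biased away from 2
```

With a correct treatment model, the residual T - m is orthogonal to any function of X, so a wrong outcome model washes out; with a correct outcome model only, no such cancellation happens.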
Isn't the ADML estimator just a restatement of the efficient influence function approach for the partially linear model with continuous treatment? The paper's Definition 3.1 is a clean statement of Model Double Robustness: consistency when either \hat{m} \to m or \hat{g} \to g in L_2. This is the AIPW-style either-or property from Bang & Robins (Biometrics, 2005). By contrast, the "doubly robust" nomenclature in DML refers to the product-of-rates condition (the Neyman orthogonality bonus), not to either-or robustness against misspecification.
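The Bang & Robins either-or property mentioned above can be checked in a few lines. This is a toy simulation of my own (binary treatment, true ATE = 1), not anything from the paper: the AIPW estimate stays consistent when either the propensity model or the outcome model is correct.

```python
# Sketch of AIPW-style Model Double Robustness (Bang & Robins 2005):
# consistency when EITHER nuisance model is correct.
# The data-generating process below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
X = rng.uniform(size=n)
e = 0.2 + 0.6 * X                 # true propensity P(T=1 | X), bounded away from 0/1
T = rng.binomial(1, e)
tau = 1.0                         # true ATE
Y = tau * T + X**2 + rng.normal(size=n)
g1, g0 = tau + X**2, X**2         # true outcome regressions E[Y | T=1/0, X]

def aipw(e_hat, g1_hat, g0_hat):
    # ATE estimate: outcome-model difference plus inverse-propensity residual correction
    return np.mean(
        g1_hat - g0_hat
        + T / e_hat * (Y - g1_hat)
        - (1 - T) / (1 - e_hat) * (Y - g0_hat)
    )

ate_e_ok = aipw(e, np.zeros(n), np.zeros(n))  # propensity right, outcome wrong
ate_g_ok = aipw(np.full(n, 0.5), g1, g0)      # outcome right, propensity wrong
print(ate_e_ok, ate_g_ok)  # both near the true ATE = 1
```

Either correct nuisance zeroes out the bias term in expectation, which is exactly the Definition 3.1 notion of consistency under one-sided misspecification.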
The paper makes a good point. I believe that's why the formal name of this technique is double/"debiased" machine learning: Neyman orthogonality only guarantees debiasedness (the estimator converges fast even when the nuisances converge slowly). Personally, I think "Debiased Machine Learning" is the more proper name.
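For readers skimming the thread, the rate guarantee that Neyman orthogonality actually buys can be written out. This is the standard DML statement paraphrased from the general literature, not quoted from this paper:

```latex
% Product-of-rates condition behind "double/debiased": the second-order
% remainder of a Neyman-orthogonal score is controlled by the PRODUCT of
% the two nuisance errors, not by either one alone.
\[
  \sqrt{n}\,\bigl(\hat\theta - \theta\bigr)
  = \underbrace{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} \psi(W_i; \theta, \eta)}_{\text{asymptotically normal}}
  \;+\; O_p\!\Bigl(\sqrt{n}\,\lVert \hat m - m \rVert_{2}\,\lVert \hat g - g \rVert_{2}\Bigr),
\]
```

so the estimator is root-n consistent whenever both nuisances converge faster than n^{-1/4}. That is a rate condition on two models that are both (approximately) correct, which is a weaker promise than either-or robustness to outright misspecification.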
Well my life is ruined
Paper: https://openreview.net/pdf?id=jt5tghOeK9