Quantitative Economics January 2022, Volume 13, Issue 1 is now online

TABLE OF CONTENTS, January 2022, Volume 13, Issue 1
Full Issue

Articles
Abstracts follow the listing of articles.

Testing identifying assumptions in fuzzy regression discontinuity designs
Yoichi Arai, Yu‐Chin Hsu, Toru Kitagawa, Ismael Mourifié, Yuanyuan Wan

The influence function of semiparametric estimators
Hidehiko Ichimura, Whitney K. Newey

Information theoretic approach to high‐dimensional multiplicative models: Stochastic discount factor and treatment effect
Chen Qiu, Taisuke Otsu

Uncertain identification
Raffaella Giacomini, Toru Kitagawa, Alessio Volpicella

A consistent specification test for dynamic quantile models
Peter Horvath, Jia Li, Zhipeng Liao, Andrew J. Patton

A note on the estimation of job amenities and labor productivity
Arnaud Dupuy, Alfred Galichon

The environmental cost of land‐use restrictions
Mark Colas, John M. Morehouse

Modeling time varying risk of natural resource assets: Implications of climate change
Anke D. Leroux, Vance L. Martin, Kathryn A. St. John

Peso problems in the estimation of the C‐CAPM
Juan Carlos Parra‐Alvarez, Olaf Posch, Andreas Schrimpf

Financing corporate tax cuts with shareholder taxes
Alexis Anagnostopoulos, Orhan Erem Atesagaoglu, Eva Cárceles‐Poveda

How success breeds success
Ambroise Descamps, Changxia Ke, Lionel Page


Testing identifying assumptions in fuzzy regression discontinuity designs
Yoichi Arai, Yu‐Chin Hsu, Toru Kitagawa, Ismael Mourifié, Yuanyuan Wan


Abstract

We propose a new specification test for assessing the validity of fuzzy regression discontinuity designs (FRD‐validity). We derive a new set of testable implications, characterized by a set of inequality restrictions on the joint distribution of observed outcomes and treatment status at the cut‐off. We show that this new characterization exploits all of the information in the data that is useful for detecting violations of FRD‐validity. Our approach differs from and complements existing approaches that test continuity of the distributions of running variables and baseline covariates at the cut‐off in that we focus on the distribution of the observed outcome and treatment status. We show that the proposed test has appealing statistical properties. It controls size in a large sample setting uniformly over a large class of data generating processes, is consistent against all fixed alternatives, and has non‐trivial power against some local alternatives. We apply our test to evaluate the validity of two FRD designs. The test does not reject FRD‐validity in the class size design studied by Angrist and Lavy (1999) but rejects it in the insurance subsidy design for poor households in Colombia studied by Miller, Pinto, and Vera‐Hernández (2013) for some outcome variables. Existing density continuity tests suggest the opposite in each of the two cases.

Keywords: Fuzzy regression discontinuity design, nonparametric test, inequality restriction, multiplier bootstrap. JEL: C12, C14, C31.
---
The influence function of semiparametric estimators
Hidehiko Ichimura, Whitney K. Newey


Abstract

There are many economic parameters that depend on nonparametric first steps. Examples include games, dynamic discrete choice, average exact consumer surplus, and treatment effects. Often estimators of these parameters are asymptotically equivalent to a sample average of an object referred to as the influence function. The influence function is useful in local policy analysis, in evaluating local sensitivity of estimators, and in constructing debiased machine learning estimators. We show that the influence function is a Gateaux derivative with respect to a smooth deviation, evaluated at a point mass. This result generalizes the classic Von Mises (1947) and Hampel (1974) calculation to estimators that depend on smooth nonparametric first steps. We give explicit influence functions for first steps that satisfy exogenous or endogenous orthogonality conditions. We use these results to generalize the omitted variable bias formula for regression to policy analysis and to sensitivity to structural changes. We apply this analysis and find no sensitivity to endogeneity of average equivalent variation estimates in a gasoline demand application.
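The Gateaux-derivative characterization can be illustrated with the simplest functional, the mean, whose influence function is the classic x − μ. The sketch below is our own illustration of that textbook fact, not code from the paper: it contaminates the empirical distribution with a small point mass at x and checks that the resulting directional derivative of the mean functional recovers x − μ.

```python
import numpy as np

# Illustration (not from the paper): for the mean functional
# T(F) = ∫ x dF(x), the influence function is IF(x) = x - mean(F).
# We verify numerically that the Gateaux derivative of T at the
# empirical distribution, in the direction of a point mass at x,
# equals this influence function.

rng = np.random.default_rng(0)
data = rng.normal(size=1000)

def T(weights, points):
    """Mean functional of a discrete distribution."""
    return np.sum(weights * points)

x = 2.5            # contamination point
eps = 1e-6         # step size for the directional derivative
n = data.size
w = np.full(n, 1.0 / n)

# Contaminated distribution: (1 - eps) * F_n + eps * delta_x
points = np.append(data, x)
w_mix = np.append((1.0 - eps) * w, eps)

gateaux = (T(w_mix, points) - T(w, data)) / eps
print(gateaux)            # ≈ x - data.mean(), the influence function at x
print(x - data.mean())
```

For the mean the derivative is exact: T((1 − ε)Fₙ + εδₓ) − T(Fₙ) = ε(x − x̄), so dividing by ε gives x − x̄ up to floating-point error. The paper's contribution concerns the far harder case where T depends on smooth nonparametric first steps.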

Keywords: Influence function, semiparametric estimation, NPIV. JEL: C13, C14, C20, C26, C36.
---
Information theoretic approach to high‐dimensional multiplicative models: Stochastic discount factor and treatment effect
Chen Qiu, Taisuke Otsu


Abstract

This paper is concerned with estimation of functionals of a latent weight function that satisfies possibly high‐dimensional multiplicative moment conditions. Main examples are functionals of stochastic discount factors in asset pricing, missing data problems, and treatment effects. We propose to estimate the latent weight function by an information theoretic approach combined with the ℓ1‐penalization technique to deal with high‐dimensional moment conditions under sparsity. We study asymptotic properties of the proposed method and illustrate it with a theoretical example on treatment effect analysis and an empirical example on estimation of stochastic discount factors.

Keywords: Information theoretic approach, high‐dimensional model, stochastic discount factor, treatment effect. JEL: C12, C14.
---
Uncertain identification
Raffaella Giacomini, Toru Kitagawa, Alessio Volpicella


Abstract

Uncertainty about the choice of identifying assumptions is common in causal studies, but is often ignored in empirical practice. This paper considers uncertainty over models that impose different identifying assumptions, which can lead to a mix of point‐ and set‐identified models. We propose performing inference in the presence of such uncertainty by generalizing Bayesian model averaging. The method considers multiple posteriors for the set‐identified models and combines them with a single posterior for models that are either point‐identified or that impose nondogmatic assumptions. The output is a set of posteriors (post‐averaging ambiguous belief), which can be summarized by reporting the set of posterior means and the associated credible region. We clarify when the prior model probabilities are updated and characterize the asymptotic behavior of the posterior model probabilities. The method provides a formal framework for conducting sensitivity analysis of empirical findings to the choice of identifying assumptions. For example, we find that in a standard monetary model one would need to attach a prior probability greater than 0.28 to the validity of the assumption that prices do not react contemporaneously to a monetary policy shock, in order to obtain a negative response of output to the shock.

Keywords: Partial identification, sensitivity analysis, model averaging, Bayesian robustness, ambiguity. JEL: C11, C32, C52.
---
A consistent specification test for dynamic quantile models
Peter Horvath, Jia Li, Zhipeng Liao, Andrew J. Patton


Abstract

Correct specification of a conditional quantile model implies that a particular conditional moment is equal to zero. We nonparametrically estimate the conditional moment function via series regression and test whether it is identically zero using uniform functional inference. Our approach is theoretically justified via a strong Gaussian approximation for statistics of growing dimensions in a general time series setting. We propose a novel bootstrap method in this nonstandard context and show that it significantly outperforms the benchmark asymptotic approximation in finite samples, especially for tail quantiles such as Value‐at‐Risk (VaR). We use the proposed new test to study the VaR and CoVaR (Adrian and Brunnermeier (2016)) of a collection of US financial institutions.

Keywords: Bootstrap, VaR, series regression, strong approximation. JEL: C14, C22, C52.
---
A note on the estimation of job amenities and labor productivity
Arnaud Dupuy, Alfred Galichon


Abstract

This paper introduces a maximum likelihood estimator of the value of job amenities and labor productivity in a single matching market based on the observation of equilibrium matches and wages. The estimation procedure simultaneously fits both the matching patterns and the wage curve. While our estimator is suited for a wide range of assignment problems, we provide an application to the estimation of the Value of a Statistical Life using compensating wage differentials for the risk of fatal injury on the job. Using US data for 2017, we estimate the Value of Statistical Life at $6.3 million ($2017).

Keywords: Matching, observed transfers, structural estimation, value of statistical life. JEL: C35, C78, J31.
---
The environmental cost of land‐use restrictions
Mark Colas, John M. Morehouse


Abstract

Cities with cleaner power plants and lower energy demand tend also to have tighter land‐use restrictions; these restrictions increase housing prices and reduce the incentive for households to live in these lower greenhouse gas‐emitting cities. We use a spatial equilibrium model to quantify the overall effects of land‐use restrictions on the levels and spatial distribution of household carbon emissions. Our model features heterogeneous households, cities that vary in both their power plant technologies and their utility benefits of energy usage, as well as endogenous wages and rents. Relaxation of the current land‐use restrictions in California to the level faced by the median urban household in the US leads to a 0.6% drop in national household carbon emissions and a decrease in the social cost of carbon of $310 million annually.

Keywords: Greenhouse gases, local labor markets, spatial equilibrium. JEL: Q4, R13, R31.
---
Modeling time varying risk of natural resource assets: Implications of climate change
Anke D. Leroux, Vance L. Martin, Kathryn A. St. John


Abstract

A multivariate GARCH model of natural resources is specified to capture the effects of time‐varying portfolio risk. A special feature of the model is the inclusion of realized volatility measures for natural resource assets, which are available at multiple frequencies and are sensitive to sudden changes in climatic conditions. Natural resource portfolios under climate change are simulated using bootstrapping schemes as well as derived from global climate model projections. Both approaches are applied to a multiasset water portfolio model consisting of reservoir inflows, rainwater harvesting, and desalinated water. The empirical results show that while reservoirs remain the dominant water asset, adaptation to climate change involves increased contributions from rainwater harvesting and more frequent use of desalinated water. It is estimated that climate change increases annual water supply costs by between 7% and 44% over a 20‐year forecast horizon.

Keywords: RV‐DCC, realized variance, natural resource portfolio, climate change. JEL: C32, C53, Q35, Q54.
---
Peso problems in the estimation of the C‐CAPM
Juan Carlos Parra‐Alvarez, Olaf Posch, Andreas Schrimpf


Abstract

This paper shows that the consumption‐based capital asset pricing model (C‐CAPM) with low‐probability disaster risk rationalizes pricing errors. We find that implausible estimates of risk aversion and time preference are not puzzling if market participants expect a future catastrophic change in fundamentals, which just happens not to occur in the sample (a “peso problem”). A bias in structural parameter estimates emerges as a result of pricing errors in quiet times. While the bias essentially removes the pricing error in the simple models when risk‐free rates are constant, time‐variation may also generate large and persistent estimated pricing errors in simulated data. We also show analytically how the problem of biased estimates can be avoided in empirical research by resolving the misspecification in moment conditions.

Keywords: Rare events, asset pricing errors, C‐CAPM. JEL: E21, G12, O41.
---
Financing corporate tax cuts with shareholder taxes
Alexis Anagnostopoulos, Orhan Erem Atesagaoglu, Eva Cárceles‐Poveda


Abstract

We study the aggregate and distributional consequences of replacing corporate profit taxes with shareholder taxes, namely taxes on dividends and capital gains, in a setting with incomplete markets and heterogeneity at both the household and the firm level. The reform yields distributional gains, with a large majority of households benefiting. Moreover, if dividends and capital gains are taxed at the same rate, the reform is also efficiency‐enhancing and the implied optimal corporate income tax rate is zero. In contrast, an asymmetric tax treatment of dividends and capital gains induces a trade‐off between efficiency and distributional concerns that is optimally resolved at a positive optimal corporate tax rate, implying double taxation.

Keywords: Optimal corporate taxes, double taxation, heterogeneity, misallocation. JEL: E6.
---
How success breeds success
Ambroise Descamps, Changxia Ke, Lionel Page


Abstract

We investigate if, and why, an initial success can trigger a string of successes. Using random variations in success in a real‐effort laboratory experiment, we cleanly identify the causal effect of an early success in a competition. We confirm that an early success indeed leads to increased chances of a later success. By alternately eliminating strategic features of the competition, we turn on and off possible mechanisms driving the effect of an early success. Standard models of dynamic contests predict a strategic effect due to asymmetric incentives between initial winners and losers. Surprisingly, we find no evidence that these incentives can explain the positive effect of winning. Instead, the effect of winning seems driven by an information revelation effect, whereby players update their beliefs about their relative strength after experiencing an initial success.

Keywords: Dynamic contest, momentum, real effort, feedback, confidence, experiment. JEL: C91, D74.