Econometrica: Journal of the Econometric Society
Volume 89, Issue 5 (September 2021) has just been published. The full content of the journal is accessible at
Heterogeneous Choice Sets and Preferences
Levon Barseghyan, Maura Coughlin, Francesca Molinari, Joshua C. Teitelbaum
We propose a robust method of discrete choice analysis when agents' choice sets are unobserved. Our core model assumes nothing about agents' choice sets apart from their minimum size. Importantly, it leaves unrestricted the dependence, conditional on observables, between choice sets and preferences. We first characterize the sharp identification region of the model's parameters by a finite set of conditional moment inequalities. We then apply our theoretical findings to learn about households' risk preferences and choice sets from data on their deductible choices in auto collision insurance. We find that the data can be explained by expected utility theory with low levels of risk aversion and heterogeneous non‐singleton choice sets, and that more than three in four households require limited choice sets to explain their deductible choices. We also provide simulation evidence on the computational tractability of our method in applications with larger feasible sets or higher‐dimensional unobserved heterogeneity.
Market Selection and the Information Content of Prices
Alp E. Atakan, Mehmet Ekmekci
We study information aggregation when n bidders choose, based on their private information, between two concurrent common‐value auctions. There are ks identical objects on sale through a uniform‐price auction in market s and there are an additional kr objects on auction in market r, which is identical to market s except for a positive reserve price. The reserve price in market r implies that information is not aggregated in this market. Moreover, if the object‐to‐bidder ratio in market s exceeds a certain cutoff, then information is not aggregated in market s either. Conversely, if the object‐to‐bidder ratio is less than this cutoff, then information is aggregated in market s as the market grows arbitrarily large. Our results demonstrate how frictions in one market can disrupt information aggregation in a linked, frictionless market because of the pattern of market selection by imperfectly informed bidders.
An Axiomatic Model of Persuasion
Alexander M. Jakobsen
A sender ranks information structures knowing that a receiver processes the information before choosing an action affecting them both. The sender and receiver may differ in their utility functions and/or prior beliefs, yielding a model of dynamic inconsistency when they represent the same individual at two points in time. I take as primitive (i) a collection of preference orderings over all information structures, indexed by menus of acts (the sender's ex ante preferences for information), and (ii) a collection of correspondences over menus of acts, indexed by signals (the receiver's signal‐contingent choice(s) from menus). I provide axiomatic representation theorems characterizing the sender as a sophisticated planner and the receiver as a Bayesian information processor, and show that all parameters can be uniquely identified from the sender's preferences for information. I also establish a series of results characterizing common priors, common utility functions, and intuitive measures of disagreement for these parameters—all in terms of the sender's preferences for information.
A Model of Scientific Communication
Isaiah Andrews, Jesse M. Shapiro
We propose a positive model of empirical science in which an analyst makes a report to an audience after observing some data. Agents in the audience may differ in their beliefs or objectives, and may therefore update or act differently following a given report. We contrast the proposed model with a classical model of statistics in which the report directly determines the payoff. We identify settings in which the predictions of the proposed model differ from those of the classical model, and seem to better match practice.
Bootstrap with Cluster‐Dependence in Two or More Dimensions
Konrad Menzel
We propose a bootstrap procedure for data that may exhibit cluster‐dependence in two or more dimensions. The asymptotic distribution of the sample mean or other statistics may be non‐Gaussian if observations are dependent but uncorrelated within clusters. We show that there exists no procedure for estimating the limiting distribution of the sample mean under two‐way clustering that achieves uniform consistency. However, we propose bootstrap procedures that achieve adaptivity with respect to different uniformity criteria. Important cases and extensions discussed in the paper include regression inference, U‐ and V‐statistics, subgraph counts for network data, and non‐exhaustive samples of matched data.
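One simplified way to mimic two‐way cluster resampling is to redraw row clusters and column clusters independently with replacement and to reweight each observation by how often both of its clusters were redrawn. The sketch below is a stylized scheme for illustration, not necessarily the paper's exact procedure; all names and the toy data are assumptions.

```python
import numpy as np

def two_way_cluster_bootstrap_mean(x, rows, cols, n_boot=999, seed=0):
    """Bootstrap the sample mean of x under two-way clustering.

    Resamples row clusters and column clusters independently with
    replacement and reweights each observation by the product of how
    often its row cluster and its column cluster were redrawn --
    a simplified two-way resampling scheme for illustration.
    """
    rng = np.random.default_rng(seed)
    row_ids, col_ids = np.unique(rows), np.unique(cols)
    stats = []
    for _ in range(n_boot):
        r_draw = rng.choice(row_ids, size=len(row_ids), replace=True)
        c_draw = rng.choice(col_ids, size=len(col_ids), replace=True)
        # count how often each cluster was redrawn
        r_cnt = {g: np.sum(r_draw == g) for g in row_ids}
        c_cnt = {g: np.sum(c_draw == g) for g in col_ids}
        w = np.array([r_cnt[r] * c_cnt[c] for r, c in zip(rows, cols)], float)
        if w.sum() > 0:
            stats.append(np.average(x, weights=w))
    return np.array(stats)

# toy data: a full 5 x 5 grid of row and column clusters
rng = np.random.default_rng(1)
rows = np.repeat(np.arange(5), 5)
cols = np.tile(np.arange(5), 5)
x = rng.normal(size=25) + 0.5 * rng.normal(size=5)[rows]
draws = two_way_cluster_bootstrap_mean(x, rows, cols)
se = draws.std()  # bootstrap standard error of the sample mean
```

The bootstrap distribution of the mean (`draws`) can then be compared with naive i.i.d. resampling, which understates uncertainty when cluster effects are present.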
An Empirical Model of R&D Procurement Contests: An Analysis of the DOD SBIR Program
Vivek Bhattacharya
Firms and governments often use R&D contests to incentivize suppliers to develop and deliver innovative products. The optimal design of such contests depends on empirical primitives: the cost of research, the uncertainty in outcomes, and the surplus participants capture. Can R&D contests in real‐world settings be redesigned to increase social surplus? I ask this question in the context of the Department of Defense's Small Business Innovation Research program, a multistage R&D contest. I develop a structural model to estimate the primitives from data on R&D and procurement contracts. I find that the optimal design substantially increases social surplus, and simple design changes in isolation (e.g., inviting more contestants) can capture up to half these gains; however, these changes reduce the DOD's own welfare. These results suggest there is substantial scope for improving the design of real‐world contests but that a designer must balance competing objectives.
Random Evolving Lotteries and Intrinsic Preference for Information
Faruk Gul, Paulo Natenzon, Wolfgang Pesendorfer
We introduce random evolving lotteries to study preference for non‐instrumental information. Each period, the agent enjoys a flow payoff from holding a lottery that will resolve at the terminal date. We provide a representation theorem for non‐separable risk consumption preferences and use it to characterize agents' attitude to non‐instrumental information. To address applications, we characterize peak‐trough utilities that aggregate trajectories of flow utilities linearly but, in addition, put weight on the best (peak) and worst (trough) lotteries along each path. We show that the model is consistent with recent experimental evidence on attitudes to information, including a preference for gradual arrival of good news and the ostrich effect, that is, decision makers' tendency to prefer information after good news to information after bad news.
Reconciling Models of Diffusion and Innovation: A Theory of the Productivity Distribution and Technology Frontier
Jess Benhabib, Jesse Perla, Christopher Tonetti
We study how endogenous innovation and technology diffusion interact to determine the shape of the productivity distribution and generate aggregate growth. We model firms that choose to innovate, adopt technology, or produce with their existing technology. Costly adoption creates a spread between the best and worst technologies concurrently used to produce similar goods. The balance of adoption and innovation determines the shape of the distribution; innovation stretches the distribution, while adoption compresses it. On the balanced growth path, the aggregate growth rate equals the maximum growth rate of innovators. While innovation drives long‐run growth, changes in the adoption environment can influence growth by affecting innovation incentives, either directly, through licensing of excludable technologies, or indirectly, via the option value of adoption.
What Do Data on Millions of U.S. Workers Reveal about Lifecycle Earnings Dynamics?
Fatih Guvenen, Fatih Karahan, Serdar Ozkan, Jae Song
We study individual male earnings dynamics over the life cycle using panel data on millions of U.S. workers. Using nonparametric methods, we first show that the distribution of earnings changes exhibits substantial deviations from lognormality, such as negative skewness and very high kurtosis. Further, the extent of these nonnormalities varies significantly with age and earnings level, peaking around age 50 and between the 70th and 90th percentiles of the earnings distribution. Second, we estimate nonparametric impulse response functions and find important asymmetries: Positive changes for high‐income individuals are quite transitory, whereas negative ones are very persistent; the opposite is true for low‐income individuals. Third, we turn to long‐run outcomes and find substantial heterogeneity in the cumulative growth rates of earnings and the total number of years individuals spend nonemployed between ages 25 and 55. Finally, by targeting these rich sets of moments, we estimate stochastic processes for earnings that range from the simple to the complex. Our preferred specification features normal mixture innovations to both persistent and transitory components and includes state‐dependent long‐term nonemployment shocks with a realization probability that varies with age and earnings.
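The headline nonnormality facts are easy to illustrate numerically: a Gaussian benchmark has skewness 0 and kurtosis 3, while a simple mixture with occasional large negative changes produces the negative skewness and high kurtosis the abstract describes. The mixture below is a stand‐in invented for illustration, not the authors' data or estimated process.

```python
import numpy as np

def skew_kurtosis(changes):
    """Sample skewness and (raw) kurtosis of a series of earnings changes.

    A Gaussian benchmark has skewness 0 and kurtosis 3; negative
    skewness and kurtosis far above 3 signal the deviations from
    lognormality described in the abstract.
    """
    z = (changes - changes.mean()) / changes.std()
    return (z ** 3).mean(), (z ** 4).mean()

rng = np.random.default_rng(0)
n = 200_000

# Gaussian benchmark
normal_skew, normal_kurt = skew_kurtosis(rng.normal(size=n))

# leptokurtic mixture: mostly small changes, occasional large drops
mix = np.where(rng.random(n) < 0.9,
               rng.normal(0.0, 0.1, n),
               rng.normal(-0.3, 0.8, n))
mix_skew, mix_kurt = skew_kurtosis(mix)
```

For the Gaussian draw the two moments land near (0, 3); the mixture delivers strongly negative skewness and kurtosis an order of magnitude above the Gaussian value.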
Whither Formal Contracts?
Raúl Sánchez de la Sierra
To measure the benefits of formal contract enforcement for society, I create a market with merchants and buyers, in which buyers can choose whether to buy, and whether to pay. Several “state‐favored” ethnic groups control the state. I experimentally vary whether formal contracts are required and the composition of buyer‐merchant pairs. The design separately identifies the effect of the contracts on the buyers' incentive to pay and on their incentive to buy. I document two ways in which society limits the benefits of contracts. First, contracts reduce buyer cheating, thus increasing merchants' profits, if, and only if, the merchant is state‐favored. Buyers' beliefs suggest that the merchants can enforce the contracts if, and only if, the merchant is state‐favored. Second, holding constant whether the pair is state‐favored, contracts only influence buyer choices when the buyer and the merchant belong to two different state‐favored ethnic groups. Buyers' choices and beliefs confirm that, in that case, the contracts are expected to be enforceable, but they have no effect on buyers' choices because reputation already governs the incentives to cheat within groups. The findings temper the view of the state as independent from society, offer a rationale for why contracts are not adopted, and nuance the notion of state weakness.
Using the Sequence-Space Jacobian to Solve and Estimate Heterogeneous-Agent Models
Adrien Auclert, Bence Bardóczy, Matthew Rognlie, Ludwig Straub
We propose a general and highly efficient method for solving and estimating general equilibrium heterogeneous‐agent models with aggregate shocks in discrete time. Our approach relies on the rapid computation of sequence‐space Jacobians—the derivatives of perfect‐foresight equilibrium mappings between aggregate sequences around the steady state. Our main contribution is a fast algorithm for calculating Jacobians for a large class of heterogeneous‐agent problems. We combine this algorithm with a systematic approach to composing and inverting Jacobians to solve for general equilibrium impulse responses. We obtain a rapid procedure for likelihood‐based estimation and computation of nonlinear perfect‐foresight transitions. We apply our methods to three canonical heterogeneous‐agent models: a neoclassical model, a New Keynesian model with one asset, and a New Keynesian model with two assets.
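In a stylized linear setting, the composition‐and‐inversion step can be sketched as follows: if the stacked perfect‐foresight equilibrium condition is H(U, Z) = 0, with sequence‐space Jacobians H_U and H_Z, then the general‐equilibrium impulse response solves H_U dU = −H_Z dZ. The matrices below are illustrative stand‐ins, not output of the authors' fast algorithm.

```python
import numpy as np

T = 50  # truncation horizon for the sequence space

# Hypothetical sequence-space Jacobians of one equilibrium condition
# H(U, Z) = 0, where U is the unknown aggregate sequence (say, an
# interest rate path) and Z is the shock sequence. These matrices are
# made up for illustration and chosen to be well-conditioned.
decay = 0.8 ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
H_U = np.eye(T) + 0.1 * np.tril(decay)
H_Z = -0.5 * np.triu(decay)

# General-equilibrium impulse response to a one-time shock at t = 0:
#   H_U dU + H_Z dZ = 0   =>   dU = -H_U^{-1} H_Z dZ
dZ = np.zeros(T)
dZ[0] = 1.0
dU = -np.linalg.solve(H_U, H_Z @ dZ)
```

The same inversion, with Jacobians computed once around the steady state, can be reused for every shock path, which is what makes likelihood‐based estimation fast in this class of methods.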
Economic Predictions with Big Data: The Illusion of Sparsity
Domenico Giannone, Michele Lenza, Giorgio E. Primiceri
We compare sparse and dense representations of predictive models in macroeconomics, microeconomics, and finance. To deal with a large number of possible predictors, we specify a prior that allows for both variable selection and shrinkage. The posterior distribution does not typically concentrate on a single sparse model, but on a wide set of models that often include many predictors.
A Projection Framework for Testing Shape Restrictions that Form Convex Cones
Zheng Fang, Juwon Seo
This paper develops a uniformly valid and asymptotically nonconservative projection‐based test for a class of shape restrictions. The key insight we exploit is that these restrictions form convex cones, a simple yet elegant structure that has barely been harnessed in the literature. Exploiting a monotonicity property afforded by this geometric structure, we construct a bootstrap procedure that, unlike many procedures for nonstandard settings, dispenses with estimation of local parameter spaces; critical values are obtained as simply as the test statistic itself. Moreover, by appealing to strong approximations, our framework accommodates nonparametric regression models as well as distributional/density‐related and structural settings. Since the test entails a tuning parameter (due to the nonstandard nature of the problem), we propose a data‐driven choice and prove its validity. Monte Carlo simulations confirm that our test works well.
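As one concrete instance of such a cone, consider monotonicity: the set of nondecreasing vectors is a convex cone, and the Euclidean projection onto it is computed by the pool‐adjacent‐violators algorithm (PAVA). The sketch below illustrates the projection and a squared‐distance statistic; it is a generic textbook construction, not the paper's procedure.

```python
import numpy as np

def project_monotone(y):
    """Euclidean projection of y onto the cone of nondecreasing vectors,
    via the pool-adjacent-violators algorithm (PAVA)."""
    out = []  # list of [block mean, block size]
    for v in y:
        out.append([float(v), 1])
        # merge blocks backwards while monotonicity is violated
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, s2 = out.pop()
            m1, s1 = out.pop()
            out.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    return np.concatenate([np.full(s, m) for m, s in out])

y = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 4.5])
proj = project_monotone(y)
# a simple projection-type statistic: squared distance from y to the cone
stat = np.sum((y - proj) ** 2)
```

A test of monotonicity rejects when `stat` exceeds a (bootstrap) critical value; if `y` is already nondecreasing, the projection is `y` itself and the statistic is zero.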
Location as an Asset
Adrien Bilal, Esteban Rossi‐Hansberg
The location of individuals determines their job and schooling opportunities, amenities, and housing costs. We conceptualize the location choice of individuals as a decision to invest in a “location asset.” This asset has a current cost equal to the location's rent, and a future payoff through better job and schooling opportunities. As with any asset, savers in the location asset transfer resources into the future by going to expensive locations with high future returns. In contrast, borrowers transfer resources to the present by going to cheap locations that offer few other advantages. Holdings of the location asset depend on its comparison to other assets, with the distinction that the location asset is not subject to borrowing constraints. We propose a dynamic location model and derive an agent's mobility choices after experiencing income shocks. We document the investment dimension of location and confirm the core predictions of our theory using French individual panel data from tax returns.
The Size-Power Tradeoff in HAR Inference
Eben Lazarus, Daniel J. Lewis, James H. Stock
Heteroskedasticity‐ and autocorrelation‐robust (HAR) inference in time series regression typically involves kernel estimation of the long‐run variance. Conventional wisdom holds that, for a given kernel, the choice of truncation parameter trades off a test's null rejection rate and power, and that this tradeoff differs across kernels. We formalize this intuition: using higher‐order expansions, we provide a unified size‐power frontier for both kernel and weighted orthonormal series tests using nonstandard “fixed‐b” critical values. We also provide a frontier for the subset of these tests for which the fixed‐b distribution is t or F. These frontiers are respectively achieved by the QS kernel and equal‐weighted periodogram. The frontiers have simple closed‐form expressions, which show that the price paid for restricting attention to tests with t and F critical values is small. The frontiers are derived for the Gaussian multivariate location model, but simulations suggest the qualitative findings extend to stochastic regressors.
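As a reference point, a textbook kernel long‐run variance estimator with the quadratic spectral (QS) kernel looks as follows; the bandwidth plays the role of the truncation parameter whose size‐power tradeoff the paper formalizes. This is a generic illustration on simulated AR(1) data, not the paper's frontier construction.

```python
import numpy as np

def qs_kernel(x):
    """Quadratic spectral kernel: k(x) = (3/z^2)(sin(z)/z - cos(z)),
    with z = 6*pi*x/5, and k(0) = 1."""
    x = np.atleast_1d(np.asarray(x, float))
    out = np.ones_like(x)
    nz = x != 0
    z = 6.0 * np.pi * x[nz] / 5.0
    out[nz] = 3.0 / z ** 2 * (np.sin(z) / z - np.cos(z))
    return out

def lrv_qs(u, bandwidth):
    """Kernel long-run variance estimate with the QS kernel.

    Lags are truncated at 10x the bandwidth for speed; the QS kernel
    weights beyond that point are negligible.
    """
    u = np.asarray(u, float) - np.mean(u)
    n = len(u)
    gamma = lambda j: np.dot(u[j:], u[:n - j]) / n  # autocovariance at lag j
    lrv = gamma(0)
    for j in range(1, min(n, int(10 * bandwidth))):
        lrv += 2.0 * qs_kernel(j / bandwidth)[0] * gamma(j)
    return lrv

# AR(1) example: true long-run variance is sigma^2 / (1 - rho)^2 = 4
rng = np.random.default_rng(0)
rho, n = 0.5, 5_000
e = rng.normal(size=n)
u = np.empty(n)
u[0] = e[0]
for t in range(1, n):
    u[t] = rho * u[t - 1] + e[t]
est = lrv_qs(u, bandwidth=20.0)
```

Shrinking the bandwidth lowers the variance of the estimate but raises its bias, which is the size‐power tradeoff the paper turns into an explicit frontier.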
Inferring Inequality with Home Production
Job Boerma, Loukas Karabarbounis
We revisit the causes, welfare consequences, and policy implications of the dispersion in households' labor market outcomes using a model with uninsurable risk, incomplete asset markets, and home production. Allowing households to be heterogeneous in both their disutility of home work and their home production efficiency, we find that home production amplifies welfare‐based differences, meaning that inequality in standards of living is larger than we thought. We infer significant home production efficiency differences across households because hours worked at home do not covary with consumption and wages in the cross section of households. Heterogeneity in home production efficiency is essential for inequality, as home production would not amplify inequality if differences at home only reflected heterogeneity in disutility of work.