TABLE OF CONTENTS March 2018 - Volume 86, Issue 2
Demand Analysis Using Strategic Reports: An Application to a School Choice Mechanism
Nikhil Agarwal, Paulo Somaini
Several school districts use assignment systems that give students an incentive to misrepresent their preferences. We find evidence consistent with strategic behavior in Cambridge. Such strategizing can complicate preference analysis. This paper develops empirical methods for studying random utility models in a new and large class of school choice mechanisms. We show that preferences are nonparametrically identified under either sufficient variation in choice environments or a preference shifter. We then develop a tractable estimation procedure and apply it to Cambridge. Estimates suggest that while 83% of students are assigned to their stated first choice, only 72% are assigned to their true first choice because students avoid ranking competitive schools. Assuming that students behave optimally, the Immediate Acceptance mechanism is preferred by the average student to the Deferred Acceptance mechanism by an equivalent of 0.08 miles. The estimated difference is smaller if beliefs are biased, and reversed if students report preferences truthfully.
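The Deferred Acceptance mechanism compared in the abstract is the Gale-Shapley student-proposing algorithm; a minimal sketch with hypothetical toy preferences and single-seat schools (an illustration of the mechanism, not the Cambridge system itself):

```python
def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Student-proposing Gale-Shapley deferred acceptance.
    student_prefs: {student: [schools, most preferred first]}
    school_prefs:  {school: [students, highest priority first]}
    capacity:      {school: number of seats}
    Returns {student: school} for assigned students only."""
    rank = {s: {st: i for i, st in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}  # next school to propose to
    held = {s: [] for s in school_prefs}           # tentatively held students
    free = list(student_prefs)
    while free:
        st = free.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                               # list exhausted: unassigned
        school = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        held[school].sort(key=lambda x: rank[school][x])  # priority order
        if len(held[school]) > capacity[school]:
            free.append(held[school].pop())        # lowest priority overflows
    return {st: s for s, lst in held.items() for st in lst}

# Toy example (hypothetical preferences), single-seat schools:
student_prefs = {'a': ['X', 'Y'], 'b': ['X', 'Y'], 'c': ['Y', 'X']}
school_prefs = {'X': ['a', 'b', 'c'], 'Y': ['c', 'a', 'b']}
match = deferred_acceptance(student_prefs, school_prefs, {'X': 1, 'Y': 1})
```

Because assignments are only tentative until everyone's proposals settle, no student can gain by ranking schools untruthfully, which is the truthful-reporting benchmark the abstract contrasts with Immediate Acceptance.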
A Theory of Non-Bayesian Social Learning
Pooya Molavi, Alireza Tahbaz‐Salehi, Ali Jadbabaie
This paper studies the behavioral foundations of non‐Bayesian models of learning over social networks and develops a taxonomy of conditions for information aggregation in a general framework. As our main behavioral assumption, we postulate that agents follow social learning rules that satisfy “imperfect recall,” according to which they treat the current beliefs of their neighbors as sufficient statistics for the entire history of their observations. We augment this assumption with various restrictions on how agents process the information provided by their neighbors and obtain representation theorems for the corresponding learning rules (including the canonical model of DeGroot). We then obtain general long‐run learning results that are not tied to the learning rules' specific functional forms, thus identifying the fundamental forces that lead to learning, non‐learning, and mislearning in social networks. Our results illustrate that, in the presence of imperfect recall, long‐run aggregation of information is closely linked to (i) the rate at which agents discount their neighbors' information over time, (ii) the curvature of agents' social learning rules, and (iii) whether their initial tendencies are amplified or moderated as a result of social interactions.
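The canonical DeGroot model named above reduces to a weighted-average belief update: each agent replaces its belief with a convex combination of neighbors' current beliefs. A minimal numerical sketch, with a hypothetical 3-agent network and illustrative weights:

```python
def degroot_step(beliefs, W):
    """One round of DeGroot updating: belief_i <- sum_j W[i][j] * belief_j,
    where row i of W holds agent i's nonnegative weights (summing to one)
    on the current beliefs of its neighbors."""
    return [sum(w * b for w, b in zip(row, beliefs)) for row in W]

# Hypothetical 3-agent network; rows are row-stochastic influence weights.
W = [[0.6, 0.3, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.4, 0.5]]
beliefs = [1.0, 0.0, 0.5]
for _ in range(200):          # iterate to (approximate) consensus
    beliefs = degroot_step(beliefs, W)
```

With strongly connected, aperiodic weights like these, repeated averaging drives all agents to a common long-run belief, a weighted average of the initial ones, illustrating how initial tendencies are moderated by social interaction.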
A News-Utility Theory of Inattention and Delegation in Portfolio Choice
Michaela Pagel
Recent evidence suggests that investors are inattentive to their portfolios and hire expensive portfolio managers. This paper develops a life‐cycle portfolio‐choice model in which the investor experiences loss‐averse utility over news and can ignore his portfolio. In such a model, the investor prefers to ignore and not rebalance his portfolio most of the time because he dislikes bad news more than he likes good news, such that expected news causes a first‐order decrease in utility. Consequently, the investor has a first‐order willingness to pay a portfolio manager who rebalances actively on his behalf. Moreover, the investor can diversify over time and his consumption aligns with predictions of mental accounting. I structurally estimate the preference parameters by matching stock shares and stock‐market non‐participation over the life cycle. My parameter estimates are in line with the literature, generate reasonable intervals of inattention, and simultaneously explain consumption and wealth accumulation over the life cycle. Here, it matters that news utility preserves first‐order risk aversion even in the presence of stochastic labor income, which also causes stock shares to rise in wealth.
Multiproduct-Firm Oligopoly: An Aggregative Games Approach
Volker Nocke, Nicolas Schutz
We develop an aggregative games approach to study oligopolistic price competition with multiproduct firms. We introduce a new class of IIA demand systems, derived from discrete/continuous choice, and nesting CES and logit demands. The associated pricing game with multiproduct firms is aggregative and a firm's optimal price vector can be summarized by a uni‐dimensional sufficient statistic, the ι‐markup. We prove existence of equilibrium using a nested fixed‐point argument, and provide conditions for equilibrium uniqueness. In equilibrium, firms may choose not to offer some products. We analyze the pricing distortions and provide monotone comparative statics. Under (nested) CES and logit demands, another aggregation property obtains: All relevant information for determining a firm's performance and competitive impact is contained in that firm's uni‐dimensional type. We extend the model to nonlinear pricing, quantity competition, general equilibrium, and demand systems with a nest structure. Finally, we discuss applications to merger analysis and international trade.
A Theory of Input–Output Architecture
Ezra Oberfield
Individual producers exhibit enormous heterogeneity in many dimensions. This paper develops a theory in which the network structure of production—who buys inputs from whom—forms endogenously. Entrepreneurs produce using labor and exactly one intermediate input; the key decision is which other entrepreneur's good to use as an input. Their choices collectively determine the economy's equilibrium input–output structure, generating large differences in size and shaping both individual and aggregate productivity. When the elasticity of output to intermediate inputs in production is high, star suppliers emerge endogenously. This raises aggregate productivity as, in equilibrium, more supply chains are routed through higher‐productivity techniques.
Who Should Be Treated? Empirical Welfare Maximization Methods for Treatment Choice
Toru Kitagawa, Aleksey Tetenov
One of the main objectives of empirical analysis of experiments and quasi‐experiments is to inform policy decisions that determine the allocation of treatments to individuals with different observable covariates. We study the properties and implementation of the Empirical Welfare Maximization (EWM) method, which estimates a treatment assignment policy by maximizing the sample analog of average social welfare over a class of candidate treatment policies. The EWM approach is attractive in terms of both statistical performance and practical implementation in realistic settings of policy design. Common features of these settings include: (i) feasible treatment assignment rules are constrained exogenously for ethical, legislative, or political reasons, (ii) a policy maker wants a simple treatment assignment rule based on one or more eligibility scores in order to reduce the dimensionality of individual observable characteristics, and/or (iii) the proportion of individuals who can receive the treatment is a priori limited due to a budget or a capacity constraint. We show that when the propensity score is known, the average social welfare attained by EWM rules converges at least at n^(-1/2) rate to the maximum obtainable welfare uniformly over a minimally constrained class of data distributions, and this uniform convergence rate is minimax optimal. We examine how the uniform convergence rate depends on the richness of the class of candidate decision rules, the distribution of conditional treatment effects, and the lack of knowledge of the propensity score. We offer easily implementable algorithms for computing the EWM rule and an application using experimental data from the National JTPA Study.
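A toy sketch of the EWM idea for the simplest candidate class, threshold rules on a single eligibility score, with a simulated experiment and a known propensity score of 1/2 (the data-generating process and all names are illustrative, not the paper's estimator):

```python
import random

def ewm_threshold(data, e=0.5):
    """Empirical welfare maximization over threshold rules 'treat iff x >= c'.
    data: list of (x, d, y) with eligibility score x, randomized treatment
    d in {0,1} with known propensity e, and outcome y.  Maximizes the
    inverse-propensity-weighted sample welfare over observed thresholds."""
    def welfare(c):
        return sum((y * d / e if x >= c else y * (1 - d) / (1 - e))
                   for x, d, y in data) / len(data)
    candidates = sorted({x for x, _, _ in data})
    return max(candidates, key=welfare)

random.seed(0)
# Simulated experiment: treatment helps only above score 0.5 (illustrative).
data = []
for _ in range(2000):
    x = random.random()
    d = random.randint(0, 1)
    effect = 1.0 if x >= 0.5 else -1.0
    y = effect * d + random.gauss(0, 0.3)
    data.append((x, d, y))
c_hat = ewm_threshold(data)
```

In this design the population-optimal threshold is 0.5, and the estimated `c_hat` should land nearby; restricting attention to threshold rules is exactly the kind of exogenous constraint on the policy class described in (ii) above.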
Identifying Long-Run Risks: A Bayesian Mixed-Frequency Approach
Frank Schorfheide, Dongho Song, Amir Yaron
We document that consumption growth rates are far from i.i.d. and have a highly persistent component. First, we estimate univariate and multivariate models of cash‐flow (consumption, output, dividends) growth that feature measurement errors, time‐varying volatilities, and mixed‐frequency observations. Monthly consumption data are important for identifying the stochastic volatility process; yet the data are contaminated, which makes the inclusion of measurement errors essential for identifying the predictable component. Second, we develop a novel state‐space model for cash flows and asset prices that imposes the pricing restrictions of a representative‐agent endowment economy with recursive preferences. To estimate this model, we use a particle MCMC approach that exploits the conditional linear structure of the approximate equilibrium. Once asset return data are included in the estimation, we find even stronger evidence for the persistent component and are able to identify three volatility processes: the one for the predictable cash‐flow component is crucial for asset pricing, whereas the other two are important for tracking the data. Our model generates asset prices that are largely consistent with the data in terms of sample moments and predictability features. The state‐space approach allows us to track over time the evolution of the predictable component, the volatility processes, the decomposition of the equity premium into risk factors, and the variance decomposition of asset prices.
Optimal Inference in a Class of Regression Models
Timothy B. Armstrong, Michal Kolesár
We consider the problem of constructing confidence intervals (CIs) for a linear functional of a regression function, such as its value at a point, the regression discontinuity parameter, or a regression coefficient in a linear or partly linear regression. Our main assumption is that the regression function is known to lie in a convex function class, which covers most smoothness and/or shape assumptions used in econometrics. We derive finite‐sample optimal CIs and sharp efficiency bounds under normal errors with known variance. We show that these results translate to uniform (over the function class) asymptotic results when the error distribution is not known. When the function class is centrosymmetric, these efficiency bounds imply that minimax CIs are close to efficient at smooth regression functions. This implies, in particular, that it is impossible to form CIs that are substantively tighter using data‐dependent tuning parameters while maintaining coverage over the whole function class. We specialize our results to inference on the regression discontinuity parameter, and illustrate them in simulations and an empirical application.
Inference Based on Structural Vector Autoregressions Identified With Sign and Zero Restrictions: Theory and Applications
Jonas E. Arias, Juan F. Rubio‐Ramírez, Daniel F. Waggoner
In this paper, we develop algorithms to independently draw from a family of conjugate posterior distributions over the structural parameterization when sign and zero restrictions are used to identify structural vector autoregressions (SVARs). We call this family of conjugate posteriors normal‐generalized‐normal. Our algorithms draw from a conjugate uniform‐normal‐inverse‐Wishart posterior over the orthogonal reduced‐form parameterization and transform the draws into the structural parameterization; this transformation induces a normal‐generalized‐normal posterior over the structural parameterization. The uniform‐normal‐inverse‐Wishart posterior over the orthogonal reduced‐form parameterization has been prominent since the work of Uhlig (2005). We use Beaudry, Nam, and Wang's (2011) work on the relevance of optimism shocks to show the dangers of alternative approaches to implementing sign and zero restrictions, such as the penalty function approach. In particular, we show analytically that the penalty function approach adds restrictions beyond those described in the identification scheme.
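A common building block for samplers in this family is a draw from the uniform (Haar) distribution over orthogonal matrices, obtained by orthonormalizing an i.i.d. standard-normal matrix. A pure-Python Gram-Schmidt sketch of that step only (illustrative; practical implementations typically use a QR decomposition with sign normalization):

```python
import math
import random

def haar_orthogonal(n, rng=random):
    """Draw an n x n orthogonal matrix distributed uniformly (Haar) over the
    orthogonal group, via Gram-Schmidt orthonormalization of a matrix with
    i.i.d. N(0,1) entries (equivalent to QR with positive-diagonal R)."""
    a = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    q = []
    for row in a:
        v = row[:]
        for u in q:                      # subtract projections on earlier rows
            proj = sum(x * y for x, y in zip(u, v))
            v = [x - proj * y for x, y in zip(v, u)]
        norm = math.sqrt(sum(x * x for x in v))
        q.append([x / norm for x in v])
    return q

random.seed(1)
Q = haar_orthogonal(3)
```

In a sign-restriction sampler, draws like `Q` rotate the reduced-form impulse responses; draws violating the restrictions are discarded or reweighted, which is where the algorithms' handling of zero restrictions departs from simple rejection.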
Very Simple Markov-Perfect Industry Dynamics: Theory
Jaap H. Abbring, Jeffrey R. Campbell, Jan Tilly, Nan Yang
This paper develops a simple model of firm entry, competition, and exit in oligopolistic markets. It features toughness of competition, sunk entry costs, and market‐level demand and cost shocks, but assumes that firms' expected payoffs are identical when entry and survival decisions are made. We prove that this model has an essentially unique symmetric Markov‐perfect equilibrium, and we provide an algorithm for its computation. Because this algorithm only requires finding the fixed points of a finite sequence of contraction mappings, it is guaranteed to converge quickly.
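The computational guarantee rests on successive approximation for contraction mappings: iterating the map from any starting point converges geometrically to its unique fixed point. A generic sketch with an illustrative discounted-continuation-value operator (not the paper's model):

```python
def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    """Iterate x <- T(x) until successive values are within tol; Banach's
    fixed-point theorem guarantees a unique limit when T is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

# Illustrative contraction: discounted continuation value v = pi + beta * v,
# with per-period payoff pi and discount factor beta < 1.
beta, pi = 0.9, 1.0
v = fixed_point(lambda v: pi + beta * v, 0.0)
```

Here the error shrinks by a factor beta each pass, so the iteration count is logarithmic in the tolerance; chaining finitely many such solves, as the paper's algorithm does, inherits that speed.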
Estimating Semi-parametric Panel Multinomial Choice Models using Cyclic Monotonicity
Xiaoxia Shi, Matthew Shum, Wei Song
This paper proposes a new semi‐parametric identification and estimation approach to multinomial choice models in a panel data setting with individual fixed effects. Our approach is based on cyclic monotonicity, which is a defining convex‐analytic feature of the random utility framework underlying multinomial choice models. From the cyclic monotonicity property, we derive identifying inequalities without requiring any shape restrictions for the distribution of the random utility shocks. These inequalities point identify model parameters under straightforward assumptions on the covariates. We propose a consistent estimator based on these inequalities.
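Cyclic monotonicity states that, for choice probabilities p(x) generated by any random utility model, every cycle x_1, ..., x_K, x_{K+1} = x_1 of utility-index vectors satisfies sum_k p(x_k) . (x_{k+1} - x_k) <= 0, because p is the gradient of the convex social-surplus function. A numerical sketch verifying the inequality for multinomial logit probabilities on an illustrative random cycle:

```python
import math
import random

def logit_probs(x):
    """Multinomial logit choice probabilities p_j = exp(x_j) / sum_k exp(x_k),
    computed with the max-subtraction trick for numerical stability."""
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def cyclic_sum(xs):
    """sum_k <p(x_k), x_{k+1} - x_k> over the closed cycle x_1,...,x_K,x_1;
    cyclic monotonicity requires this to be <= 0."""
    total = 0.0
    for k in range(len(xs)):
        x, x_next = xs[k], xs[(k + 1) % len(xs)]
        p = logit_probs(x)
        total += sum(pj * (a - b) for pj, a, b in zip(p, x_next, x))
    return total

random.seed(2)
# Illustrative cycle of utility-index vectors: 3 alternatives, length 5.
cycle = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(5)]
s = cyclic_sum(cycle)
```

The paper's identifying inequalities arise by evaluating exactly such cyclic sums at candidate parameter values: parameters whose implied indices violate the inequality for some observed cycle are ruled out, with no shape restriction on the shock distribution needed.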
Estimating Both Supply and Demand Elasticities Using Variation in a Single Tax Rate
Floris T. Zoutman, Evelina Gavrilova, Arnt O. Hopland
We show how an insight from taxation theory allows identification of both the supply and demand elasticities using only one instrument. Most models of taxation since Ramsey (1927) assume that a tax levied on the demand side affects demand only through the after-tax price. Econometrically, we show that this assumption acts as an exclusion restriction. Under the Ramsey Exclusion Restriction (RER), a single tax reform can simultaneously identify the demand and supply elasticities. We develop an estimation method, which includes 2SLS estimators for the elasticities and a test for the strength of the instrument. We discuss possible applications.
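Under the RER, the tax enters neither structural equation directly and shifts the consumer price one-for-one above the producer price, so the same tax instrument identifies both slopes: instrument the consumer price in the demand equation and the producer price in the supply equation. A simulated sketch with an illustrative linear model, using simple covariance (Wald/IV) ratios rather than full 2SLS:

```python
import random

def cov(a, b):
    """Sample covariance (population normalization) of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

random.seed(3)
b_d, b_s = 1.5, 0.8          # true demand and supply slopes (illustrative)
taxes, p_prod, p_cons, q = [], [], [], []
for _ in range(100_000):
    t = random.uniform(0.0, 0.5)          # tax rate: the single instrument
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    # Market clearing of demand q = 10 - b_d*(p + t) + u and
    # supply q = 2 + b_s*p + v solves for the producer price p:
    pp = (10.0 - 2.0 - b_d * t + u - v) / (b_d + b_s)
    taxes.append(t); p_prod.append(pp); p_cons.append(pp + t)
    q.append(2.0 + b_s * pp + v)

demand_slope = cov(taxes, q) / cov(taxes, p_cons)  # tax instruments p_cons
supply_slope = cov(taxes, q) / cov(taxes, p_prod)  # tax instruments p_prod
```

The two IV ratios recover approximately -b_d and +b_s from the same reform variation, which is the identification argument in miniature: the tax is excluded from both structural equations but moves the two prices by different amounts.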