Econometrica: Journal of the Econometric Society
Volume 88, Issue 1 (January 2020) has just been published. The full content of the journal is accessible at
Testing Models of Social Learning on Networks: Evidence from Two Experiments
Arun G. Chandrasekhar, Horacio Larreguy, Juan Pablo Xandri
We theoretically and empirically study an incomplete information model of social learning. Agents initially guess the binary state of the world after observing a private signal. In subsequent rounds, agents observe their network neighbors' previous guesses before guessing again. Agents are drawn from a mixture of learning types—Bayesian, who face incomplete information about others' types, and DeGroot, who average their neighbors' previous period guesses and follow the majority. We study (1) learning features of both types of agents in our incomplete information model; (2) what network structures lead to failures of asymptotic learning; (3) whether realistic networks exhibit such structures. We conducted lab experiments with 665 subjects in Indian villages and 350 students from ITAM in Mexico. We perform a reduced‐form analysis and then structurally estimate the mixing parameter, finding the share of Bayesian agents to be 10% and 50% in the Indian‐villager and Mexican‐student samples, respectively.
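The DeGroot rule the abstract describes — average your neighbors' previous guesses and follow the majority — is simple enough to simulate directly. The sketch below is purely illustrative (the ring network, signals, and tie-breaking convention are my assumptions, not the paper's experimental design); it also happens to show how such naive updating can get stuck short of consensus, the kind of asymptotic-learning failure the paper studies.

```python
import numpy as np

def degroot_round(guesses, adjacency):
    """One round of naive (DeGroot-style) updating: each agent adopts the
    majority among its own and its neighbors' previous binary guesses."""
    new = np.empty_like(guesses)
    for i in range(len(guesses)):
        nbhd = np.append(np.where(adjacency[i])[0], i)  # neighbors plus self
        share = guesses[nbhd].mean()
        new[i] = 1 if share > 0.5 else (0 if share < 0.5 else guesses[i])  # ties: keep own guess
    return new

# Illustrative 5-agent ring; private signals are the initial guesses (true state = 1).
ring = np.zeros((5, 5), dtype=int)
for i in range(5):
    ring[i, (i - 1) % 5] = ring[i, (i + 1) % 5] = 1

guesses = np.array([1, 1, 1, 0, 0])
for _ in range(10):  # iterate well past any fixed point
    guesses = degroot_round(guesses, ring)
print(guesses)  # the two-agent minority never learns: updating is stuck at [1 1 1 0 0]
```

On this ring the initial profile is already a fixed point of majority updating, so the misinformed minority never converges to the true state — a toy version of the network structures under which asymptotic learning fails.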
Endogenous Production Networks
Daron Acemoglu, Pablo D. Azar
We develop a tractable model of endogenous production networks. Each one of a number of products can be produced by combining labor and an endogenous subset of the other products as inputs. Different combinations of inputs generate (prespecified) levels of productivity and various distortions may affect costs and prices. We establish the existence and uniqueness of an equilibrium and provide comparative static results on how prices and endogenous technology/input choices (and thus the production network) respond to changes in parameters. These results show that improvements in technology (or reductions in distortions) spread throughout the economy via input–output linkages and reduce all prices, and under reasonable restrictions on the menu of production technologies, also lead to a denser production network. Using a dynamic version of the model, we establish that the endogenous evolution of the production network could be a powerful force towards sustained economic growth. At the root of this result is the fact that the arrival of a few new products expands the set of technological possibilities of all existing industries by a large amount—that is, if there are n products, the arrival of one more new product increases the combinations of inputs that each existing product can use from 2^(n−1) to 2^n, thus enabling significantly more pronounced cost reductions from choice of input combinations. These cost reductions then spread to other industries via lower input prices and incentivize them to also adopt additional inputs.
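The counting step in the growth argument is elementary and worth making concrete: with n products, each product can draw inputs from any subset of the other n − 1 goods, which gives 2^(n−1) possible input combinations, and one additional product doubles that menu to 2^n. A minimal check by brute-force enumeration:

```python
from itertools import combinations

def num_input_subsets(n_products):
    """Count the possible input subsets for one product when n_products exist:
    any subset (of any size) of the other n_products - 1 goods."""
    others = n_products - 1
    return sum(1 for k in range(others + 1)
               for _ in combinations(range(others), k))

print(num_input_subsets(10))   # 512  = 2^9
print(num_input_subsets(11))   # 1024 = 2^10: one new product doubles the menu
```

The doubling is exactly why a handful of new products can expand every existing industry's technological possibilities so sharply.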
The Global Diffusion of Ideas
Francisco J. Buera, Ezra Oberfield
We provide a tractable, quantitatively‐oriented theory of innovation and technology diffusion to explore the role of international trade in the process of development. We model innovation and diffusion as a process involving the combination of new ideas with insights from other industries or countries. We provide conditions under which each country's equilibrium frontier of knowledge converges to a Fréchet distribution, and derive a system of differential equations describing the evolution of the scale parameters of these distributions, that is, countries' stocks of knowledge. The model remains tractable with many asymmetric countries and generates a rich set of predictions about how the level and composition of trade affect countries' frontiers of knowledge. We use the framework to quantify the contribution of bilateral trade costs to long‐run changes in TFP and individual post‐war growth miracles. For our preferred calibration, we find that both gains from trade and the fraction of variation of TFP growth accounted for by changes in trade more than double relative to a model without diffusion.
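The Fréchet frontier result echoes standard extreme-value logic: when idea quality has a Pareto tail and the frontier is the best idea drawn so far, the suitably rescaled frontier converges to a Fréchet distribution. The simulation below sketches that limit; the tail parameter, scaling, and sample sizes are illustrative choices, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.5            # Pareto tail index (illustrative)
n, reps = 500, 10000   # ideas per frontier, Monte Carlo replications

# Idea-quality draws with P(X > x) = x^(-theta) for x >= 1 (inverse-transform sampling).
draws = rng.uniform(size=(reps, n)) ** (-1.0 / theta)

# The best idea, rescaled by n^(1/theta), has a Frechet limit with CDF exp(-x^(-theta)).
frontier = draws.max(axis=1) / n ** (1.0 / theta)
empirical = (frontier <= 1.0).mean()
print(empirical, np.exp(-1.0))  # empirical CDF at x = 1 vs. Frechet value, both near 0.368
```

Here the "frontier of knowledge" is just the maximum over past draws, which is the mechanism behind the Fréchet characterization; in the paper the scale parameters of these distributions play the role of countries' stocks of knowledge.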
Heterogeneity and Persistence in Returns to Wealth
Andreas Fagereng, Luigi Guiso, Davide Malacrino, Luigi Pistaferri
We provide a systematic analysis of the properties of individual returns to wealth using 12 years of population data from Norway's administrative tax records. We document a number of novel results. First, individuals earn markedly different average returns on their net worth (a standard deviation of 22.1%) and on its components. Second, heterogeneity in returns does not arise merely from differences in the allocation of wealth between safe and risky assets: returns are heterogeneous even within narrow asset classes. Third, returns are positively correlated with wealth: moving from the 10th to the 90th percentile of the net worth distribution increases the return by 18 percentage points (and 10 percentage points if looking at net‐of‐tax returns). Fourth, individual wealth returns exhibit substantial persistence over time. We argue that while this persistence partly arises from stable differences in risk exposure and assets scale, it also reflects heterogeneity in sophistication and financial information, as well as entrepreneurial talent. Finally, wealth returns are correlated across generations. We discuss the implications of these findings for several strands of the wealth inequality debate.
Forecasting with Dynamic Panel Data Models
Laura Liu, Hyungsik Roger Moon, Frank Schorfheide
This paper considers the problem of forecasting a collection of short time series using cross‐sectional information in panel data. We construct point predictors using Tweedie's formula for the posterior mean of heterogeneous coefficients under a correlated random effects distribution. This formula utilizes cross‐sectional information to transform the unit‐specific (quasi) maximum likelihood estimator into an approximation of the posterior mean under a prior distribution that equals the population distribution of the random coefficients. We show that the risk of a predictor based on a nonparametric kernel estimate of the Tweedie correction is asymptotically equivalent to the risk of a predictor that treats the correlated random effects distribution as known (ratio optimality). Our empirical Bayes predictor performs well compared to various competitors in a Monte Carlo study. In an empirical application, we use the predictor to forecast revenues for a large panel of bank holding companies and compare forecasts that condition on actual and severely adverse macroeconomic conditions.
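Tweedie's formula, on which the predictor is built, states that for y = θ + ε with ε ~ N(0, σ²), the posterior mean is E[θ | y] = y + σ² · d/dy log m(y), where m is the marginal density of y. A minimal numerical check in the fully Gaussian case, where the formula must reproduce the textbook shrinkage estimator (the variances below are illustrative, and the marginal is taken as known rather than kernel-estimated as in the paper):

```python
import numpy as np

sigma2, tau2 = 1.0, 4.0   # noise and prior variances (illustrative)

def marginal_pdf(y):
    # theta ~ N(0, tau2) and y | theta ~ N(theta, sigma2)  =>  y ~ N(0, tau2 + sigma2)
    v = tau2 + sigma2
    return np.exp(-y * y / (2 * v)) / np.sqrt(2 * np.pi * v)

def tweedie_posterior_mean(y, h=1e-5):
    # E[theta | y] = y + sigma2 * d/dy log m(y); derivative by central difference
    score = (np.log(marginal_pdf(y + h)) - np.log(marginal_pdf(y - h))) / (2 * h)
    return y + sigma2 * score

y = 2.0
shrinkage = tau2 / (tau2 + sigma2) * y      # textbook Gaussian posterior mean = 1.6
print(tweedie_posterior_mean(y), shrinkage)  # both 1.6
```

The appeal of the formula is visible even in this toy case: the correction needs only the marginal density of the observed estimates, which is exactly what the cross-section of units lets one estimate nonparametrically.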
Savage's P3 is Redundant
Savage (1954) provided the first axiomatic characterization of expected utility that relies on no given probabilities or utilities, and it remains the most famous axiomatization of preferences. This note shows that Savage's axiom P3 is implied by his other axioms and is therefore redundant. It is remarkable that this went unnoticed even though Savage's axiomatization has been studied and taught by hundreds of researchers for more than six decades.
Nonlinear Pricing in Village Economies
Orazio Attanasio, Elena Pastorino
This paper examines the prices of basic staples in rural Mexico. We document that nonlinear pricing in the form of quantity discounts is common, that quantity discounts are sizable for basic staples, and that the well‐known conditional cash transfer program Progresa has significantly increased quantity discounts, although the program, as documented in previous studies, has not affected unit prices on average. To account for these patterns, we propose a model of price discrimination that nests those of Maskin and Riley (1984) and Jullien (2000), in which consumers differ in their tastes and, because of subsistence constraints, in their ability to pay for a good. We show that under mild conditions, a model in which consumers face heterogeneous subsistence or budget constraints is equivalent to one in which consumers have access to heterogeneous outside options. We rely on known results to characterize the equilibrium price schedule, which is nonlinear in quantity. We analyze the effect of nonlinear pricing on market participation as well as the impact of a market‐wide transfer, analogous to the Progresa one, when consumers are differentially constrained. We show that the model is structurally identified from data on prices and quantities from a single market under common assumptions. We estimate the model using data on three commonly consumed commodities from municipalities and localities in Mexico. Interestingly, we find that relative to linear pricing, nonlinear pricing is beneficial to a large number of households, including those consuming small quantities, mostly because of the higher degree of market participation that nonlinear pricing induces. We also show that the Progresa transfer has affected the slopes of the price schedules of the three commodities we study, which have become steeper, consistent with our model, leading to an increase in the intensity of price discrimination. Finally, we find that a reduced form of our model, in which the size of quantity discounts depends on the hazard rate of the distribution of quantities purchased in a village, accounts for the shift in price schedules induced by the program.
Sampling-Based versus Design-Based Uncertainty in Regression Analysis
Alberto Abadie, Susan Athey, Guido W. Imbens, Jeffrey M. Wooldridge
Consider a researcher estimating the parameters of a regression function based on data for all 50 states in the United States or on data for all visits to a website. What is the interpretation of the estimated parameters and the standard errors? In practice, researchers typically assume that the sample is randomly drawn from a large population of interest and report standard errors that are designed to capture sampling variation. This is common even in applications where it is difficult to articulate what that population of interest is, and how it differs from the sample. In this article, we explore an alternative approach to inference, which is partly design‐based. In a design‐based setting, the values of some of the regressors can be manipulated, perhaps through a policy intervention. Design‐based uncertainty emanates from lack of knowledge about the values that the regression outcome would have taken under alternative interventions. We derive standard errors that account for design‐based uncertainty instead of, or in addition to, sampling‐based uncertainty. We show that our standard errors in general are smaller than the usual infinite‐population sampling‐based standard errors and provide conditions under which they coincide.
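The flavor of the result can be seen in the simplest design-based setting: a fixed finite population with potential outcomes and a randomized binary regressor, where (as Neyman showed) the conventional sampling-based variance estimator for a difference in means overstates the true design variance whenever treatment effects are heterogeneous. The simulation below is a sketch of that gap only — the population, effect sizes, and design are my illustrative assumptions, not the paper's general regression framework.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
y0 = rng.normal(size=N)                   # fixed potential outcomes under control
y1 = y0 + rng.normal(2.0, 2.0, size=N)    # heterogeneous treatment effects

reps = 20000
estimates, est_vars = [], []
for _ in range(reps):
    treat = np.zeros(N, dtype=bool)
    treat[rng.choice(N, N // 2, replace=False)] = True   # complete randomization
    yt, yc = y1[treat], y0[~treat]
    estimates.append(yt.mean() - yc.mean())
    # conventional sampling-based variance estimate for a difference in means
    est_vars.append(yt.var(ddof=1) / len(yt) + yc.var(ddof=1) / len(yc))

design_var = np.var(estimates)            # true variance over re-randomizations
print(design_var, np.mean(est_vars))      # the design variance is the smaller one
```

The uncertainty here comes entirely from which assignment was realized, not from sampling units out of a superpopulation — the "missing potential outcomes" source of uncertainty the abstract describes — and the conventional estimator is conservative for it.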
Informational Channels of Financial Contagion
Two main classes of channels are studied as informational sources of financial contagion. One is a fundamental channel that is based on real and financial links between economies, and the second is a social learning channel that arises when agents base their decisions on noisy observations about the actions of others in foreign markets. Using global games, I present a two‐country model of financial contagion in which both channels can operate and I test its predictions experimentally. The experimental results show that subjects do not extract information optimally, which leads to two systematic biases that affect these channels directly. Base‐rate neglect leads subjects to underweight their prior, and thus weakens the fundamental channel. An overreaction bias strengthens the social learning channel, since subjects rely on information about the behavior of others, even when this information is irrelevant. These results have significant welfare effects rooted in the specific way in which these biases alter behavior.
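The two biases have a simple representation in a distorted updating rule: a weight α < 1 on the prior odds captures base-rate neglect, and a weight β > 1 on the likelihood ratio of others' observed actions captures overreaction. The sketch below is illustrative only — the functional form, weights, and probabilities are my assumptions, not estimates from the experiment.

```python
def distorted_posterior(prior, likelihood_ratio, alpha=1.0, beta=1.0):
    """P(good state | signal) when the prior odds are raised to the power alpha
    (alpha < 1: base-rate neglect, weakening the fundamental channel) and the
    likelihood ratio to beta (beta > 1: overreaction to others' behavior,
    strengthening the social learning channel). alpha = beta = 1 is Bayes."""
    odds = (prior / (1 - prior)) ** alpha * likelihood_ratio ** beta
    return odds / (1 + odds)

prior, lr = 0.9, 0.5   # strong prior; a weakly unfavorable signal from the foreign market
bayes = distorted_posterior(prior, lr)                  # ~0.818
neglect = distorted_posterior(prior, lr, alpha=0.5)     # prior underweighted -> 0.6
overreact = distorted_posterior(prior, lr, beta=2.0)    # signal overweighted -> ~0.692
print(bayes, neglect, overreact)
```

Both distortions pull the posterior further from the prior than Bayesian updating would, which is how each bias amplifies the pass-through of foreign-market news into domestic behavior.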