JOURNAL OF THE ECONOMETRIC SOCIETY
Volume 89, Issue 6 (November 2021) has just been published. The full content of the journal is available online.
Inequality, Business Cycles, and Monetary-Fiscal Policy
Anmol Bhandari, David Evans, Mikhail Golosov, Thomas J. Sargent
We study optimal monetary and fiscal policies in a New Keynesian model with heterogeneous agents, incomplete markets, and nominal rigidities. Our approach uses small‐noise expansions and Fréchet derivatives to approximate equilibria quickly and efficiently. Responses of optimal policies to aggregate shocks differ qualitatively from what they would be in a corresponding representative agent economy and are an order of magnitude larger. A motive to provide insurance that arises from heterogeneity and incomplete markets outweighs price stabilization motives.
Learning Dynamics in Social Networks
Simon Board, Moritz Meyer‐ter‐Vehn
This paper proposes a tractable model of Bayesian learning on large random networks where agents choose whether to adopt an innovation. We study the impact of the network structure on learning dynamics and product diffusion. In directed networks, all direct and indirect links contribute to agents' learning. In comparison, learning and welfare are lower in undirected networks and networks with cliques. In a rich class of networks, behavior is described by a small number of differential equations, making the model useful for empirical work.
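The abstract notes that in a rich class of networks, behavior is described by a small number of differential equations. As a purely illustrative sketch of that kind of reduction (a hypothetical mean-field adoption ODE, not the equations from Board and Meyer-ter-Vehn), one can integrate a single equation for the adoption share:

```python
# Minimal mean-field sketch of product diffusion on a large random network.
# Illustrative toy model only, NOT the paper's learning dynamics: x(t) is the
# fraction of adopters, and non-adopters adopt at a rate increasing in the
# probability that at least one of their (on average d) neighbors has adopted.

def simulate_adoption(d=4.0, rate=0.5, x0=0.01, dt=0.01, T=20.0):
    """Euler integration of dx/dt = rate * (1 - x) * (1 - (1 - x)**d)."""
    x, path = x0, [x0]
    for _ in range(int(T / dt)):
        # with d independent neighbors, "at least one neighbor adopted"
        # has probability 1 - (1 - x)**d
        x += dt * rate * (1.0 - x) * (1.0 - (1.0 - x) ** d)
        path.append(x)
    return path

path = simulate_adoption()
print(f"final adoption share: {path[-1]:.3f}")
```

The point of such reductions is practical: a one-dimensional ODE can be estimated or calibrated directly, whereas simulating the full random network cannot.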
Team Players: How Social Skills Improve Team Performance
Ben Weidmann, David J. Deming
Most jobs require teamwork. Are some people good team players? In this paper, we design and test a new method for identifying individual contributions to team production. We randomly assign people to multiple teams and predict team performance based on previously assessed individual skills. Some people consistently cause their team to exceed its predicted performance. We call these individuals “team players.” Team players score significantly higher on a well‐established measure of social intelligence, but do not differ across a variety of other dimensions, including IQ, personality, education, and gender. Social skills—defined as a single latent factor that combines social intelligence scores with the team player effect—improve team performance about as much as IQ. We find suggestive evidence that team players increase effort among teammates.
The Reputation Trap
David K. Levine
Few want to do business with a partner who has a bad reputation. Consequently, once a bad reputation is established, it can be difficult to shed. This leads, on the one hand, to the intuitive idea that a good reputation is easy to lose and hard to gain. On the other hand, it can lead to a strong form of history dependence in which a single beneficial or adverse event casts a shadow over a very long period of time. It gives rise to a reputation trap in which an agent rationally chooses not to invest in a good reputation because the chance that others will find out is too low. Nevertheless, the same agent with a good reputation will make every effort to maintain it. Here, a simple reputational model is constructed, and conditions are characterized under which there is a unique equilibrium that constitutes a reputation trap.
This paper shows that the products and prices offered in markets are correlated with local income‐specific tastes. To quantify the welfare impact of this variation, I calculate local price indexes micro‐founded by a model of non‐homothetic demand over thousands of grocery products. These indexes reveal large differences in how wealthy and poor households perceive the choice sets available in wealthy and poor cities. Relative to low‐income households, high‐income households enjoy 40 percent higher utility per dollar of expenditure in wealthy cities than in poor cities. Similar patterns are observed across stores in different neighborhoods. Most of this variation is explained by differences in the product assortment offered, rather than the relative prices charged, by chains that operate in different markets.
Lumpy Durable Consumption Demand and the Limited Ammunition of Monetary Policy
Alisdair McKay, Johannes F. Wieland
The prevailing neo‐Wicksellian view holds that the central bank's objective is to track the natural rate of interest (r*), which itself is largely exogenous to monetary policy. We challenge this view using a fixed‐cost model of durable consumption demand, in which expansionary monetary policy prompts households to accelerate purchases of durable goods. This yields an intertemporal trade‐off in aggregate demand: encouraging households to increase durable holdings today leaves fewer households acquiring durables going forward. Interest rates must then be kept low to support this weaker demand, so accommodative monetary policy today reduces r* in the future. We show that this mechanism is quantitatively important in explaining the persistently low level of real interest rates and r* after the Great Recession.
Investment Demand and Structural Change
Manuel García‐Santana, Josep Pijoan‐Mas, Lucciano Villacorta
We study the joint evolution of the sectoral composition and the investment rate of developing economies. Using panel data for several countries in different stages of development, we document three novel facts: (a) the share of industry and the investment rate are strongly correlated and follow a hump‐shaped profile with development, (b) investment goods contain more domestic value added from industry and less from services than consumption goods do, and (c) the evolution of the sectoral composition of investment and consumption goods differs from that of GDP. We build a multi‐sector growth model to fit these patterns and provide two important results. First, the hump‐shaped evolution of investment demand explains half of the hump in industry with development. Second, asymmetric sectoral productivity growth helps explain the decline in the relative price of investment goods along the development path, which in turn increases capital accumulation and promotes growth.
We conduct inference on volatility with noisy high‐frequency data. We assume the observed transaction price follows a continuous‐time Itô‐semimartingale, contaminated by a discrete‐time moving‐average noise process associated with the arrival of trades. We estimate volatility, defined as the quadratic variation of the semimartingale, by maximizing the likelihood of a misspecified moving‐average model, with its order selected based on an information criterion. Our inference is uniformly valid over a large class of noise processes whose magnitude and dependence structure vary with sample size. We show that the convergence rate of our estimator dominates n^{1/4} as noise vanishes, and is determined by the selected order of noise dependence when noise is sufficiently small. Our implementation guarantees positive estimates in finite samples.
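The bias that motivates noise-robust volatility estimators can be seen in a toy simulation. The sketch below is NOT the authors' quasi-likelihood estimator: it uses i.i.d. noise (a stand-in for their moving-average noise) and merely contrasts tick-by-tick realized variance, which is swamped by noise, with sparse 5-minute sampling; all parameter values are hypothetical.

```python
import math
import random

# Toy illustration of microstructure noise in high-frequency volatility
# estimation (not the paper's estimator). Observed price = efficient price
# plus i.i.d. noise; realized variance at the highest frequency is dominated
# by the noise term 2 * n * noise_sd**2, while sparse sampling shrinks it.

random.seed(0)
n = 23400                      # one observation per second over 6.5 hours
sigma = 0.2 / math.sqrt(252)   # hypothetical daily vol of the efficient price
noise_sd = 5e-4                # hypothetical noise standard deviation
dt = 1.0 / n

eff = [0.0]                    # efficient log-price: scaled Brownian motion
for _ in range(n):
    eff.append(eff[-1] + sigma * math.sqrt(dt) * random.gauss(0, 1))
obs = [p + random.gauss(0, noise_sd) for p in eff]

def realized_var(prices, step=1):
    """Sum of squared returns sampled every `step` observations."""
    return sum((prices[i] - prices[i - step]) ** 2
               for i in range(step, len(prices), step))

rv_dense = realized_var(obs)             # every tick: badly biased upward
rv_sparse = realized_var(obs, step=300)  # 5-minute sampling: far less bias
print(rv_dense, rv_sparse, sigma ** 2)
```

Under these numbers the tick-level estimate exceeds the true quadratic variation by roughly two orders of magnitude, which is precisely the misspecification the likelihood-based approach is designed to absorb.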
Haavelmo (1944) proposed a probabilistic structure for econometric modeling, aiming to make econometrics useful for decision making. His fundamental contribution has become thoroughly embedded in econometric research, yet it could not answer all the deep issues that he raised. Notably, Haavelmo struggled to formalize the implications for decision making of the fact that models can at most approximate actuality. In the same period, Wald (1939, 1945) initiated his own seminal development of statistical decision theory. Haavelmo cited Wald favorably, but econometrics did not embrace statistical decision theory. Instead, it focused on the study of identification, estimation, and statistical inference. This paper proposes using statistical decision theory to evaluate the performance of models in decision making. I consider the common practice of as‐if optimization: specification of a model, point estimation of its parameters, and use of the point estimate to make a decision that would be optimal if the estimate were accurate. A central theme is that one should evaluate as‐if optimization, or any other model‐based decision rule, by its performance across the state space, listing all states of nature that one believes feasible, not across the model space. I apply the theme to prediction and treatment choice. Statistical decision theory is conceptually simple, but application is often challenging. Advancing computation is the primary task remaining to complete the foundations sketched by Haavelmo and Wald.
Identification at the Zero Lower Bound
I show that the zero lower bound (ZLB) on interest rates can be used to identify the causal effects of monetary policy. Identification depends on the extent to which the ZLB limits the efficacy of monetary policy. I propose a simple way to test the efficacy of unconventional policies, modeled via a “shadow rate.” I apply this method to U.S. monetary policy using a three‐equation structural vector autoregressive model of inflation, unemployment, and the Federal Funds rate. I reject the null hypothesis that unconventional monetary policy has no effect at the ZLB, but find some evidence that it is not as effective as conventional monetary policy.
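The reduced-form backbone of such an exercise is an ordinary VAR estimated by least squares. The sketch below simulates a three-variable VAR(1) with made-up coefficients and recovers them by OLS; it is purely illustrative and does not implement the paper's structural identification via the ZLB or the shadow rate.

```python
import numpy as np

# Minimal sketch of reduced-form VAR(1) estimation by OLS on simulated data.
# Hypothetical coefficient matrix for (inflation, unemployment, policy rate);
# the ZLB-based structural identification step is NOT implemented here.

rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.6, 0.1],
              [0.2, -0.1, 0.7]])
T = 5000
y = np.zeros((T, 3))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=3)

# OLS: regress y_t on y_{t-1}, equation by equation
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.round(A_hat, 2))
```

Identification then amounts to mapping these reduced-form estimates into structural shocks, which is where the ZLB regime provides the extra restrictions the paper exploits.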
Exchange Design and Efficiency
Marzena Rostek, Ji Hee Yoon
Most assets clear independently rather than jointly. This paper presents a model based on the uniform‐price double auction that accommodates arbitrary restrictions on market clearing, including independent clearing across assets (allowed when demand for each asset is contingent only on the price of that asset) and joint market clearing for all assets (required when demand for each asset is contingent on the prices of all assets). Additional trading protocols for traded assets—neutral when the market clears jointly—are generally not redundant innovations, even if all traders participate in all protocols. Multiple trading protocols that clear independently can be designed to be at least as efficient as joint market clearing for all assets. The change in price impact brought by independence in market clearing can overcome the loss of information, and enhance diversification and risk sharing. Except when the market is competitive, market characteristics should guide innovation in trading technology.
Pairwise Stable Matching in Large Economies
Michael Greinecker, Christopher Kah
We formulate a stability notion for two‐sided pairwise matching problems with individually insignificant agents in distributional form. Matchings are formulated as joint distributions over the characteristics of the populations to be matched. Spaces of characteristics can be high‐dimensional and need not be compact. Stable matchings exist with and without transfers, and stable matchings correspond precisely to limits of stable matchings for finite‐agent models. We can embed existing continuum matching models and stability notions with transferable utility as special cases of our model and stability notion. In contrast to finite‐agent matching models, stable matchings exist under a general class of externalities.
Capital Buffers in a Quantitative Model of Banking Industry Dynamics
Dean Corbae, Pablo D'Erasmo
We develop a model of banking industry dynamics to study the quantitative impact of regulatory policies on bank risk‐taking and market structure. To match U.S. data, we propose a market structure in which big banks with market power interact with small, competitive fringe banks as well as non‐bank lenders. Banks face idiosyncratic funding shocks in addition to aggregate shocks that affect the fraction of performing loans in their portfolios. A nontrivial bank size distribution arises out of endogenous entry and exit, as well as banks' buffer stock of capital. We show that the model predictions are consistent with untargeted business cycle properties, the bank lending channel, and empirical studies of the role of concentration in financial stability. We find that regulatory policies can have an important impact on banking market structure, which, along with selection effects, can generate changes in allocative efficiency and stability.
Learning with Heterogeneous Misspecified Models: Characterization and Robustness
J. Aislinn Bohren, Daniel N. Hauser
This paper develops a general framework to study how misinterpreting information impacts learning. Our main result is a simple criterion to characterize long‐run beliefs based on the underlying form of misspecification. We present this characterization in the context of social learning, then highlight how it applies to other learning environments, including individual learning. A key contribution is that our characterization applies to settings with model heterogeneity and provides conditions for entrenched disagreement. Our characterization can be used to determine whether a representative agent approach is valid in the face of heterogeneity, study how differing levels of bias or unawareness of others' biases impact learning, and explore whether the impact of a bias is sensitive to parametric specification or the source of information. This unified framework synthesizes insights gleaned from previously studied forms of misspecification and provides novel insights in specific applications, as we demonstrate in settings with partisan bias, overreaction, naive learning, and level‐k reasoning.