This post is part of the HISRECO 2018 series. Participants of the 2018 HISRECO conference were asked to write short blog posts to highlight their contribution to the conference for a general audience. The idea of this blogged conference comes from the "Learning by the Book" conference in Princeton, published as a series on the History of Knowledge Blog. This post is number 5 of 7.
The 2018 appointment of Jerome Powell, a trained lawyer, as chairman of the Board of Governors of the Federal Reserve System is a throwback to a time when non-economists ran the Fed. Up until the early 1970s, most of the Board governors and the Regional Bank presidents were not economists but bankers, lawyers, or businessmen. Academic credentials were much less valuable than practical experience in the business and banking world, whether in the private sector or at the Federal Reserve System. Allan Sproul (1896-1978), president of the New York Fed from 1941 to 1956, is a good example. One of the most influential central bankers of his time, he had a BS in pomology. He was initially hired by the research department of the San Francisco Fed though, by his own admission, he knew "little about banking and nothing about central banking." It would be hard to deny that at least part of central banking remains a craft, or an art, that is learned on the job, but the background of Fed policymakers has become much less varied: since the 1960s, the number of trained economists serving as Board governors and Regional Bank presidents has increased substantially, and of the last Board chairs—from Arthur Burns (1970-1978) to Janet Yellen (2014-2018)—only G. William Miller (1978-1979) had neither an MA nor a PhD in economics.
The "scientization" that most protagonists and historians agree took place between the 1960s and the 1980s went beyond the credentials of the board members. François Claveau and Jérémie Dion estimate that nearly 50% of the money and banking economists listed by the American Economic Association work in the research departments of the regional banks and the Board. The trend was not merely quantitative, however, but qualitative as well: the type of economic knowledge used in the Federal Open Market Committee's policy decisions also shifted.
The Board's first large-scale macroeconometric model was built in the 1960s, and its resulting forecasts were integrated into two documents for the FOMC: a Greenbook outlining present and future economic conditions and a Bluebook proposing alternative policy scenarios. Yet how this transformation in the economic expertise developed and implemented inside the Fed came about has not been fully spelled out in the sprawling literature on central banking. In sociological accounts, biographies, autobiographies, and protagonists' reminiscences, economists are often cast as decision-makers and practitioners rather than as modelers, knowledge producers, and advisors.
Our research shows that crafting an institutional and intellectual space for economic analysis within the Fed was a reaction both to mounting external criticisms of the Fed's decisions and to an oft-described process internal to the discipline whereby institutionalism was displaced by new forms of analysis. We argue that the rise in the number of PhD economists working at the Fed is a symptom rather than a cause of this transformation. Key to our story are a handful of economists from the Board of Governors' Division of Research and Statistics (DRS) who, paradoxically, did not always hold a PhD, but who envisioned their role as going beyond data accumulation and were associated with collective efforts to build large-scale macroeconometric models.
The Fed under pressure
Beginning in the late 1950s, the Fed faced mounting criticisms from congressional committees and governmental bodies. But behind them stood academic economists involved in policy advising, some of whom were dissatisfied with the policies implemented by Fed chairman William McChesney Martin. This was the case of the Council of Economic Advisers (CEA), which was trying to persuade Kennedy to pass a tax cut with the aim of bridging the "output gap." In their weekly memos to the President, CEA chairman Walter Heller and James Tobin lambasted the Fed's bias in favor of price stability and its unwillingness to take into account employment and growth targets. Martin thought that stabilizing the dollar was the primary mission of the Fed, and that high interest rates were the only way to avoid a gold drain and external imbalances that would weigh on growth and employment. But for Tobin,
[t]he economic logic of this prejudice [was], to say the least, obscure … like some other positions of the Board, this proposition relies more on a general faith that virtue pays than on careful empirical and theoretical analysis.
Other economists attacked how the Fed arrived at its policy decisions. This was the case of Karl Brunner and Allan Meltzer, the two monetarists who authored a study commissioned in 1964 by the House Committee on Banking and Currency. The study was instigated by Wright Patman, who chaired the Committee, a proponent of low interest rates who was hailed as "the populist scourge of the Fed." He wanted nothing less than to get rid of the FOMC, suppress the Fed's budgetary autonomy, and re-establish the Treasury's oversight over board decisions. Brunner and Meltzer's major conclusion was that "after 50 years the Federal Reserve ha[d] not yet provided a rational foundation for policymaking." The Fed had failed, they explained, to build and clearly communicate a systematic framework that could be tested and compared with competing alternatives. The FOMC focused on the wrong indicators and instruments, and had no clear picture of the lags with which financial agents reacted to policies.
Overall, many economists questioned the Fed's expertise in monetary policy (it was often pointed out that Martin had no knowledge of economic mechanisms). They demanded that FOMC decisions be grounded in the kind of theories and models macroeconomists had developed after the war. Martin and the Board of Governors, whose share of members with an economics PhD had grown from zero in 1951 to three out of seven in 1965, took those criticisms very seriously. Faulted for their lack of expertise in up-to-date economic research, they turned to their Division of Research and Statistics (DRS).
Inside the Fed
Unlike the Board, the DRS had largely been staffed with professional economists since its establishment in 1918. Four of the seven successive DRS directors appointed before 1965 held a PhD, one an MA in economics, and the other two a BA in economics. Those ratios were similar among top DRS executives. What changed during the 1960s was the type of work carried out at the DRS. The division had hitherto concentrated on intensive data collection and analysis without much of a guiding theoretical framework, a practice Brunner and Meltzer denounced as "pointless data collection." This practice borrowed from the institutionalist NBER tradition in which many DRS economists had been trained.
In the early 1960s, the perspective began to shift under the influence of Daniel Brill, who had done graduate work at American University before the war without completing a PhD, and Frank de Leeuw, who was working on his Harvard dissertation (defended in 1965). Both participated in the construction of the macroeconometric model of the Committee on Economic Stability of the Social Science Research Council (SSRC), which would later become known as the Brookings Quarterly model. De Leeuw was in charge of writing the financial sector equations. The project was extremely ambitious. Meant for "forecasting and policy formation," it drew on the method previously engineered by Lawrence Klein. It involved specifying tens of behavioral equations and constraints to model a set of aggregates describing government, housing, finance, and several production sectors in a way mathematically consistent enough for the hundreds of parameters to be estimated with OLS, maximum likelihood, and recursive techniques. Large datasets had to be unified, hundreds of cards had to be punched, and kilometers of FORTRAN code had to be written. Successive DRS directors Ralph Young, a Penn PhD who had directed the NBER Financial Research Program, and Guy Noyes, a former Yale student, supported the project. Despite their institutionalist leanings, they likely did not see any sharp break between NBER empirical work and econometrics. Rather, they envisioned the macroeconometric project as a way to use the flow-of-funds accounts the DRS had painstakingly developed since the 1940s under the leadership of Morris Copeland.
Brill, who had entered the Board as Morris Copeland's assistant, was appointed DRS director in 1963. He strengthened ties between his division and macroeconometricians through research meetings and by supporting the establishment of a joint SSRC Subcommittee on Monetary Research. At the request of chairman Martin, George Lee Bach started "Consultants Meetings" where the likes of Franco Modigliani and Milton Friedman discussed monetary policy. In 1966, De Leeuw decided to team up with Albert Ando and Modigliani to build a new model explicitly aimed at a better understanding of the connection between the real and financial sectors and of how monetary policy affected growth, prices, and employment. To build and maintain what came to be known as the FRB-MIT-Penn or FMP model, the DRS hired a stream of young econometricians.
Art vs. Science?
In 1964, Brill started the Greenbook, a document introducing forecasts of the main economic series to the FOMC. The next year, he suggested that Stephen Axilrod add a Bluebook outlining potential scenarios in the money market given a set of policy alternatives. Governor Sherman Maisel (1965-1972), who had also participated in the Committee's macroeconometric model project years before, helped carry these forecasts into the FOMC decision-making process. Yet the books cannot be seen as a takeover by macroeconometrics at the Fed. For staff econometricians like James Pierce, the methods involved in producing the Greenbook forecasts were rather primitive. It was the staff's judgmental forecast for six months to a year, obtained by assuming a baseline growth rate and adjusting it with the staff members' detailed knowledge of each sector: "[W]hat they used to call 'business economists' or we called 'judgmental economists' […] just stare at the wall and figure out what's going to happen," Pierce remembers. The model did play a role in the preparation of the Bluebook, but there was no straightforward reuse of the forecasts. "[I]t's not 'the' model […] it's economists using it as a tool to guide their thinking […] It help[s] you work through the dynamics," Pierce explains.
Even carefully crafted as a combination of judgment and econometrics, the books represented a sharp departure from previous practices. What was perceived as "mechanical" forecasting had been banned at the Fed since Winfield Riefler's time as Martin's main staff adviser. Yet Martin knew he needed to bring up-to-date economic methods to bear on board decisions, if only to silence critics. In 1964, he thus set up a subcommittee to help rationalize FOMC procedures, and supported the changes engineered by the DRS. But he remained skeptical toward econometric modeling. In his view, informed judgment and a "feel" for the current business mood were superior. He also considered that quantification could give a false sense of certainty that was dangerous for monetary policymakers. In 1968, the DRS staff failed to adequately forecast the economic consequences of the tax surcharge designed to curb the inflation associated with the Vietnam War. The failure convinced Martin that econometricians had been given too much weight in the decision-making process. Relying too much on quantitative indicators was, in Martin's eyes, a disease, one he dubbed "statisticalitis." Monetary policy was an art, not a science, he repeated.
The forecasting debacle resulted in Brill's resignation in 1969, shortly before Arthur Burns replaced Martin. Though he was the first PhD economist to chair the Board, Burns did not care much about the books. First, because by every account he was an autocrat whose leadership style contrasted with Martin's consensus-seeking. Second, because he fully endorsed the NBER style of inquiry, and merely wanted the DRS to supply loads of data from which he could draw a picture of the economy. He would consequently use the Bluebook as a political tool, instructing his staff to write down his preferred policy course as "scenario B," a kind of middle ground that would rally FOMC participants.
Overall, the story of how economic analysis was gradually embedded in the Fed's decision-making process belies the idea of a linear, irresistible takeover by newly minted PhD economists. First, institutionalist data- and intuition-based evaluation of the economic situation was not replaced by sophisticated large-scale macroeconometric models. The two were combined in documents carefully crafted to appeal to FOMC members with diverse backgrounds.
Second, the shift toward new forms of analysis was engineered by economists who either had no PhD or completed one alongside their work as staff experts. It was not their training that was key to their endorsement of macroeconometrics, but their participation in collective endeavors (for Brill) or the external pressures they faced (for Martin and FOMC members).
Finally, the use of the books and their underlying forecasts was resisted, including by economists who held PhDs but clung to institutionalist practices, and was still largely ignored at the turn of the 1970s. The road toward scientization was a long and bumpy one.
Juan Acosta is a PhD candidate at the University of Lille (LEM-CNRS, UMR 9221).