Does Government Spending Harm The Environment?

A market failure occurs when there is a gap between the private and social costs of an activity, that is, when the social costs are higher than the private costs. The spillover cost is what economists call an externality. For example, consider a factory where the production process throws off disgusting waste. If the factory dumps this junk into a handy river instead of disposing of it in a less convenient, less harmful place, the resulting pollution is an externality.

Of course, the problem is that by using the river as a dump, production costs are lower than if the factory disposed of its waste in a socially responsible way, so the factory owner has little incentive not to pollute. This leaves the people and businesses along the river, who suffer the bad effects, to bear the cost–hence the term “social cost.”

The failure of the market to cover the costs of an externality is taken as an invitation for the government to step in and make private parties deal with the social costs. In this example, the government could promulgate regulations making it unlawful for the factory to dump waste in the river, which, in turn, would raise the factory's private cost of production but lower the cost borne by everyone downstream.
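
To make the arithmetic concrete, here is a minimal sketch of that private-versus-social-cost logic. Every number in it (disposal costs, downstream damage, the size of the fine) is a made-up assumption for illustration, not a figure from the text.

```python
# A minimal sketch of the externality logic above, using made-up numbers.
# "Private cost" is what the factory pays out of pocket; "social cost" adds
# the damage pushed onto the people and businesses downstream.

UNITS = 1_000  # hypothetical units of output per year

DISPOSAL_COST = {          # factory's out-of-pocket cost per unit
    "dump_in_river": 0.00,
    "safe_disposal": 2.00,
}
DOWNSTREAM_DAMAGE = {      # external cost per unit borne by others
    "dump_in_river": 5.00,
    "safe_disposal": 0.00,
}

def private_cost(option, fine_per_unit=0.0):
    """Cost the factory actually faces: disposal plus any fine for dumping."""
    fine = fine_per_unit if option == "dump_in_river" else 0.0
    return UNITS * (DISPOSAL_COST[option] + fine)

def social_cost(option):
    """Private disposal cost plus the damage borne by neighbors."""
    return UNITS * (DISPOSAL_COST[option] + DOWNSTREAM_DAMAGE[option])

for option in DISPOSAL_COST:
    print(f"{option}: private ${private_cost(option):,.0f}, social ${social_cost(option):,.0f}")

# Left alone, the factory picks the privately cheapest option (dumping), even
# though it is socially the most expensive. A fine or tax of at least the
# $5-per-unit damage flips the private calculus to match the social one.
print("choice with no rule:  ", min(DISPOSAL_COST, key=private_cost))
print("choice with a $5 fine:", min(DISPOSAL_COST, key=lambda o: private_cost(o, 5.0)))
```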

At any rate, during the 1970s, the “correction” of market failures accelerated the pace of social regulation. In 1970, both the Occupational Safety and Health Act and the Clean Air Act were passed. Two years later came the Marine Protection Act, the Water Pollution Act, and the Federal Insecticide, Fungicide, and Rodenticide Act amendments. These were followed by the Safe Drinking Water Act in 1974 and the Toxic Substances Control Act of 1976. There were also amendments to the Clean Air Act in 1977. A welter of new bureaucracies was created, among them the Environmental Protection Agency, the Occupational Safety and Health Administration, the Consumer Product Safety Commission, and the National Highway Traffic Safety Administration.

The extravaganza continued through the 1980s. In environmental regulation alone, more than half a dozen laws were passed, including CERCLA (the Comprehensive Environmental Response, Compensation, and Liability Act, better known as the notorious Superfund) in 1980; the Hazardous and Solid Waste Amendments in 1984; the Radon Gas and Indoor Air Quality Research Act in 1986; the FIFRA (Federal Insecticide, Fungicide, and Rodenticide Act) Amendments in 1988; the Indoor Radon Abatement Act in 1988; and the Clean Air Act Amendments in 1990.

Nineteen-ninety was a stellar year for regulatory excess. First came the mother of expensive regulation: additional amendments to the Clean Air Act. The law, which covers businesses from giant utilities and auto companies to tiny bakeries and dry cleaners, could cost as much as $60 billion annually when fully implemented in the late 1990s. Then came the Americans with Disabilities Act, which requires owners of private businesses, stores, hotels, restaurants, and apartments to make specified physical modifications to accommodate the disabled. The initial conversion costs to bring these establishments into compliance were estimated at between $60 billion and $70 billion. There was also legislation requiring food manufacturers to affix labels carrying nutritional information to their products.

A good guide to assessing the growth of regulation is the number of pages in the Federal Register, where all new regulations are published; the annual page count is a rough proxy for regulatory activity. Pages went steadily up during the 1970s until they reached an all-time high of 87,000 in 1979. Indeed, not until the 1980s was there a respite, when the Reagan administration seriously slowed the trajectory of ever-more regulation: the number of pages in the Federal Register actually declined by 34,000, the number of federal workers involved in regulation fell, and the cost of administering federal regulatory programs was about flat.

Then the great reversal. During the first two years of the Bush administration, regulation got out of hand. The number of pages in the Federal Register increased to 70,000. The number of federal employees busy issuing and enforcing the stuff reached an all-time high of almost 125,000, and the amount of money devoted to administering these programs grew at double-digit rates. It got so bad, in fact, that in 1992 Bush had to announce a regulatory moratorium against his own administration: no new rules for 90 days, later extended for another 90. No matter. In came the Clinton administration, and the growth of new pages resumed apace.

There is no single correct and clear way to measure the cost of regulation to the economy, but there are plenty of estimates. One of the most reasonable figures is that in 1992 regulation cost about one-half trillion dollars, or roughly 8% of the economy. This figure includes social regulation such as environmental, health, and safety rules; economic regulation like trade restrictions, federal labor laws, and farm programs; and the paperwork involved in filling out forms, keeping records, and paying accountants and lawyers.

Who Pays for All This?

In general, the cost of regulating is initially expressed as a cost of doing business. Okay, but who pays this tariff? We all do, in one way or another.

Consider a standard situation in which a law requires certain practices to be followed in hiring or procedures to be used to assure product quality. The former will raise costs by forcing employers to expand their job search and fill out forms to prove compliance; the latter will raise costs by requiring changes in the production process. Sometimes, firms can pass these costs to consumers, making them pay more; sometimes, firms can’t pass them along at all, so they will have lower profits, which means that owners or shareholders foot the bill. But, often, employers pass these costs down the line with lower wages and salaries. Other times, when costs cannot be directly passed off to employees, employers will respond by either hiring fewer people or laying off those already employed. Either way, higher business costs from regulation will result in lower wages and/or higher unemployment.
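
As a purely illustrative sketch of that pass-through logic, the toy calculation below assumes a hypothetical per-employee compliance cost and shows how whatever cannot be pushed into prices must come out of profits, wages, or jobs. All of the numbers are invented.

```python
# A toy illustration of who ends up paying for a regulation, with invented numbers.
# A new rule adds a fixed compliance cost per employee; the firm passes some of it
# to consumers, and whatever is left lands on profits, wages, or headcount.

COMPLIANCE_COST_PER_WORKER = 1_500   # hypothetical annual cost of the rule
WORKERS = 200
AVG_WAGE = 30_000

TOTAL_COST = COMPLIANCE_COST_PER_WORKER * WORKERS

def split_burden(share_to_consumers):
    """Express the absorbed portion three equivalent ways: as a profit hit,
    as a per-worker wage cut, or as a number of jobs not kept."""
    absorbed = TOTAL_COST * (1 - share_to_consumers)
    return {
        "passed_to_consumers": TOTAL_COST * share_to_consumers,
        "hit_to_profits": absorbed,                            # if owners eat it
        "equivalent_wage_cut": absorbed / WORKERS,             # if workers eat it
        "equivalent_jobs_lost": absorbed / (AVG_WAGE + COMPLIANCE_COST_PER_WORKER),
    }

for share in (1.0, 0.5, 0.0):
    print(f"{share:.0%} passed to consumers -> {split_burden(share)}")
```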

Excessive regulation also discourages investment in domestic business: Why plop a factory down on regulated soil when unregulated opportunities beckon abroad? Moreover, the threat of regulatory changes creates uncertainty, which scares investors, leads them to demand higher returns, and pushes planning horizons toward the short term.

Further, regulation stymies innovation. This has been especially true in the drug and medical-device industry. Long approval periods shorten the effective patent life of expensive research and development and thus diminish the returns on discoveries without lowering their risk. A larger gap between risk and return renders many research and development projects unprofitable to undertake.

And last, all of the above make it harder for domestic firms to compete in international markets in which many foreign-based firms do not have to contend with the effects of excessive regulation.

What makes all these direct and indirect costs, to say nothing of the mind-numbing frustration, even more wicked is that, taken together, they slow economic growth. Rather than detailing the many subtle growth-impeding channels so favored by economists, here is just one basic example: Innovation requires research, but research might not be undertaken if regulation makes the fruit of that research susceptible to liability action. Innovation requires development, but development might not be undertaken if regulation draws out the period before approval is granted. In other words, firms need a secure environment for innovation before they will commit money, time, and other scarce resources to the process. Lacking that environment, there will be less business innovation, which means slower productivity growth and, in turn, slower economic growth.

There are, of course, offsetting benefits to regulation, especially those that concern health and safety. People are less susceptible to sickness, injury, and death because of the many workplace, drug, and food laws. Ditto for those who fly in airplanes and drive cars. And certainly everybody who breathes the air, or swims in rivers or lakes, is better off than they would be if cities still generated dense smogs of pollution or waterways still caught on fire. But no respectable effort has been made to quantify these benefits overall.

It is, however, possible to estimate benefits in specific categories. For example, trade restrictions such as quotas on imported cars or textiles benefit the workers who keep their jobs–so just multiply the number of jobs “kept” by annual salaries to arrive at total benefit. The irony is that often when these benefits are quantified, the costs of the regulations outweigh them. For example, the total figure for the benefits from restricting imports that accrue to domestic car and textile industries, in profits and jobs, is smaller than the costs to consumers in the form of higher prices for cars and clothes.
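
As a back-of-the-envelope illustration of that comparison, the sketch below uses entirely hypothetical numbers for jobs saved, salaries, and the consumer cost of higher prices; none of the figures come from the text.

```python
# A back-of-the-envelope version of the quota arithmetic above,
# using made-up numbers purely for illustration.

JOBS_SAVED = 20_000            # hypothetical jobs preserved by an import quota
AVG_SALARY = 25_000            # hypothetical annual salary per protected job
CONSUMER_COST = 1_200_000_000  # hypothetical extra paid by consumers in higher prices

benefit_to_workers = JOBS_SAVED * AVG_SALARY      # the "jobs kept" benefit
cost_per_job_saved = CONSUMER_COST / JOBS_SAVED   # what consumers pay per job

print(f"benefit to protected workers: ${benefit_to_workers:,.0f}")
print(f"cost to consumers:            ${CONSUMER_COST:,.0f}")
print(f"cost per job saved:           ${cost_per_job_saved:,.0f}")

# With these invented numbers, consumers pay $60,000 for every $25,000 job
# preserved: the pattern the text describes, in which the quantified costs
# outweigh the quantified benefits.
```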

Then there are regulations that don’t seem to produce any benefits at all. Consider, for instance, the billions of dollars spent to remove asbestos from buildings, although the removal itself may be as health-threatening as leaving the stuff alone, or the billions it costs to clean up hazardous-waste sites, despite the lack of evidence that those sites constitute a hazard. And, finally, there are categories in which regulation actually results in negative benefits, that is, in harm. The Food and Drug Administration, for instance, has delayed approval of many drugs and medical devices that could have saved lives; critics point to the years-long approval time for interleukin-2, a treatment for kidney cancer.

For any readers insulated from the impact of regulatory zeal, and therefore of the mind that I am overstating the case, there is a real-world example of the positive impact of unwinding regulation. Consider the move in the 1970s to deregulate large sectors of the economy. That effort was aimed at freeing a few very basic industries in the country’s infrastructure, some of which had been heavily regulated as far back as the 19th century.

The Benefits of Deregulation.

A real breakthrough came in 1975 when President Ford, asking Congress for fundamental changes in the laws regulating transportation (railroads, airlines, and trucking firms), used the word “cost” to discuss regulation. Ford complained that “regulation has been used to protect and support the growth of established firms rather than to promote competition.” And he pointed to the proliferation of commissions, agencies, bureaus, and offices created to oversee new programs, and to the cumbersome and costly procedures to license, certify, review, and approve new technologies and products.

This spirit continued in force under President Carter: Along with deregulation of transportation, there were major steps to deregulate the financial markets and telecommunications, and to decontrol energy prices. Early in his tenure, Carter was apparently impressed by the argument that an increase in regulation showed up as cost increases followed by price increases and ultimately as wage increases. In the president’s 1978 Economic Report, Carter wrote: “There is no question that the scope of regulation has become excessive and that too little attention is given to its economic costs…wherever possible, the extent of regulation should be reduced.” True to his word, in 1978 came the Airline Deregulation Act and a strong push for similar deregulation in trucking and rails. In 1979, Carter presented deregulation bills for telecommunications and financial services.

Deregulation of these infrastructure industries was the most significant policy event of the 1970s, and surely an important one in providing a long-term boost to the economy in the 1980s. Total benefits to the economy are probably around $40 billion a year. Consumers pay lower prices and enjoy more choice and better quality in goods and services from the deregulated industries, and economic growth has been much stronger than it would have been absent this effort.

How Far Should We Go?

In fact, most of the global experience with deregulation has been so positive that some people think we should deregulate everything in sight. Should we? There are two equally appealing answers: yes and no. Yes, because of all the bad things that regulation does, as argued above. No, because notwithstanding all those bad things, people have already made their adjustments and expect to operate in a regulated environment; a sudden deregulation would create all sorts of new problems.

As attractive as both yes-and-no responses are, however, they don’t offer a good guide for future regulation. Since the yen to regulate things–and to hope for good outcomes–is not going to go away, we should figure out some way to regulate while limiting the damage that will surely result.

First, take the category that offers the smallest contribution to social welfare and the largest amount of per capita suffering: paperwork. A good approach would be to cut paperwork requirements at least in half and then cap the number of pages of forms at that level, no matter what. If new paperwork is mandated by some new rule-making outburst, then the same number of pages of old paperwork would have to go. (We could take a giant step toward such a reduction simply by changing the federal income tax to a flat-rate system so that the tax form could fit on a postcard.)

Second, consider economic regulation, the category that is most unnecessary and is aimed almost exclusively at cheering up special interests. We could just drop it entirely. What’s the good of having the discipline of the market if we won’t let the market discipline? Minimum wages? Free employers to pay employees based on the value of their work, not on the political calculations of Congress. Trade protection? Let firms and industries fail if they can’t cut the mustard internationally. Farm supports and other forms of economic welfare? Dismantle them.

Finally, what about the most expensive category, the one that, no surprise, turns out to have the biggest and most enthusiastic constituency: social regulation? There are plenty of ideas on how to make social regulation more cost-effective. Two are neat variations on the theme of making the federal government put its money near its mouth: requiring the government to fund the mandates it imposes on state and local governments, and to compensate owners when it effectively takes property for other uses (for example, by declaring the property inviolate as wetlands or as having historic interest).

Currently, the most fashionable idea for reform of social regulation is something called risk assessment or risk management. It attacks the presumption behind most social regulation, especially for environmental rules–that risk is controllable; that if breathing bad air or eating apples sprayed with pesticide increases the risk of cancer, then decreasing air pollution or banning pesticides will cut down on that risk.

Fine. Except, what if the cost of reducing risk was a million times the benefit derived? Or what if the cost of reducing risk was reasonable, but the risk itself was trivial? Or if the risk itself was grave, but the amount it could be reduced was trivial? Under any of these conditions, would it still make sense to go ahead and regulate?

That’s where risk assessment comes in. Using what its proponents like to call sound science, or good economics, the risk of a particular activity is measured against the cost of reducing it. And presumably, if the costs and/or benefits are way out of whack, the regulation will not be undertaken.
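
Here is a minimal sketch of how such a screen might work, assuming a made-up benchmark value per statistical life and invented risk and cost figures. It is meant only to illustrate the cost-versus-risk-reduction comparison, not any agency's actual method.

```python
# A minimal sketch of the risk-assessment screen described above.
# Every number here, including the benchmark value per statistical life,
# is a hypothetical assumption, not a figure from the text.

def cost_per_life_saved(population, baseline_risk, risk_reduction, annual_cost):
    """Annual cost divided by the expected number of deaths avoided per year."""
    deaths_avoided = population * baseline_risk * risk_reduction
    return annual_cost / deaths_avoided if deaths_avoided else float("inf")

VALUE_PER_STATISTICAL_LIFE = 7_000_000  # assumed benchmark for the screen

rules = {
    # name: (exposed population, annual baseline risk, fraction of risk removed, annual cost in $)
    "grave risk, cheap fix":   (1_000_000, 1e-4, 0.50, 100_000_000),
    "trivial risk, huge bill": (1_000_000, 1e-7, 0.50, 500_000_000),
}

for name, (pop, risk, cut, cost) in rules.items():
    cpl = cost_per_life_saved(pop, risk, cut, cost)
    verdict = "passes the screen" if cpl <= VALUE_PER_STATISTICAL_LIFE else "fails the screen"
    print(f"{name}: ${cpl:,.0f} per life saved -> {verdict}")
```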

There are lots of problems with this approach, beyond the obvious one of defining “sound” science or “good” economics, such as how to put a dollar value on human life, since human life is what is at risk. But requiring a risk-assessment investigation before approving new regulations would be a start toward reversing over 35 years of indulging an attitude most easily characterized as: “Eek! Look, a risk! Let’s regulate it away–right this minute–no matter the cost–hurry!”
