Those who agreed with the wisdom of this move were christened "cartel apologists" by one of those who disagreed with the removal of criminal penalties for cartels.
There is an infallible rule in competition law enforcement. It arises most crisply in merger law enforcement. If competitors oppose a merger, the merger must be pro-consumer. If the merger is anti-competitive, it will increase prices. The competing firms can follow those prices up and profit from the weakening of competition.
Under the collusion hypothesis, rivals of the merging firms benefit because a higher probability of successful collusion limits output and raises product prices. The share prices of these rival firms should increase in anticipation of enhanced cartel profits. As Eckbo explains:
Using Stigler's (1964) theory of oligopoly, a horizontal merger can reduce monitoring costs by reducing the number of independent producers in the industry. The fewer the members of the industry, the more “visible” are each producer's actions, and the higher is the probability of detecting members who try to cheat on the cartel by increasing output.
When was the last time an entrepreneur complained about his rivals putting their prices up? The entrepreneur can either match that price increase or undercut it to win more business. The real reason competitors oppose a merger is that the merged firm will have lower costs, making it a fiercer competitor.
If the share prices of competitors fall on news of the merger, they are worse off because they face a fiercer competitor. If their share prices rise, that suggests either that others in the industry stand to benefit from higher prices or that rival firms will soon replicate the cost savings discovered in the course of the merger. The latter is the information effect of mergers:
…since the production technologies of close competitors are (by definition) closely related, the news of a proposed efficient merger can also signal opportunities for the rivals to increase their productivity
Mergers are a high-risk way of securing higher prices unless there are offsetting cost savings from combining the two firms. Mergers disturb previously efficient firm sizes and risk diseconomies of scale and a burgeoning corporate hierarchy. A cartel is a safer way to raise prices by jointly agreeing to restrict output.
Cartels have few redeeming features. Cartels are inherently unstable: the history of cartels is the history of double-crossing. The best place to be is on the outside of a cartel, undercutting it slightly to sell as much as you can at the inflated cartel price.
The complication with cartels is that competitors must sometimes coordinate their activities with rivals in various ways, such as agreeing on product standards, undertaking joint ventures or licensing technologies to one another.
Criminalisation of cartels may deter these business practices that promote consumer welfare. The process of innovation in new industries in particular often involves successful firms taking over the unsuccessful firms.
Serial competition is common in rapidly innovating industries, with one dominant firm making hay for a while before being quickly swept away. Merger law enforcement agencies do not handle the wake of creative destruction well.
There is no more cutthroat market than Hollywood. Yet the movie industry is riddled with collusion and joint ventures. Actors and producers can be collaborating on one film while also making another film that will be its rival at the box office when released.
The movie industry would not work without this incestuous mix of competition and collaboration. Joint ventures abound between otherwise direct competitors in the film industry. When do these joint ventures become cartels threatened with criminal penalties?
Another working rule in competition law enforcement should be this: when there are reasons to stay your hand, staying it is usually a good idea even if you have not yet worked those reasons out. When in doubt, stay your hand.
It goes back to Frank Easterbrook's famous 1984 essay on the limits of antitrust law. The essay was about errors in competition policy and law enforcement:
- When a competition law enforcer makes a mistake and closes off an efficiency-enhancing practice or stops a pro-consumer merger, there are few mechanisms to correct this mistake; and
- If a competition law enforcer inadvertently fails to stop an anti-competitive merger or lets a collusive or inefficient practice through, at least there are market processes that will slowly chip away at his mistake.
Easterbrook argued that courts and enforcers should craft liability and procedural rules to minimise the sum of competition law’s error and decision costs:
The legal system should be designed to minimize the total costs of (1) anticompetitive practices that escape condemnation; (2) competitive practices that are condemned or deterred; and (3) the system itself
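Easterbrook's rule can be restated as an error-cost minimisation. The notation below is my own, not Easterbrook's: $C_{II}$ is the cost of anticompetitive conduct that escapes condemnation (false acquittals), $C_{I}$ the cost of competitive conduct wrongly condemned or deterred (false convictions), and $C_{A}$ the cost of running the system:

```latex
\min_{\text{legal rules}} \quad
  \underbrace{C_{II}}_{\substack{\text{anticompetitive practices}\\ \text{that escape condemnation}}}
\;+\;
  \underbrace{C_{I}}_{\substack{\text{competitive practices}\\ \text{condemned or deterred}}}
\;+\;
  \underbrace{C_{A}}_{\text{costs of the system itself}}
```

The asymmetry in the two bullet points above is what tilts the rule towards restraint: markets erode $C_{II}$ over time, but nothing erodes $C_{I}$.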
Competition law enforcers and policymakers made plenty of errors in the past. Chastened by those follies, they should not approach any issue with overconfidence. They have had a dismal track record in aligning competition law with applied price theory and the basics of the economics of industrial organisation.
Even that is at best only a good start for the competition law enforcement agencies, because the economics of industrial organisation itself spent a lot of time condemning practices that neither restricted output nor increased prices.
It took many decades for consumer welfare to become the exclusive goal of competition law. Time and again, protecting competitors from competition was the priority of competition law enforcement agencies.
The ICT revolution coincided with a revolution in competition law economics and policy. That revolution consisted of basing competition law on applied price theory and not condemning every novel or as yet unexplained practice.
In the high-tech industries, competition law runs a high risk of chilling innovation. As Joshua Wright said:
Innovation is critical to economic growth. Incentives to innovate are at the heart of the antitrust enterprise in dynamically competitive industries, and, thus, getting antitrust policy right in high-tech markets is an increasingly important component of regulatory policy in the modern economy. While antitrust enforcement activity in high-tech markets in the United States and the rest of the world is ever-increasing, there remain significant disputes as to how to assess intervention in dynamically competitive markets.
The relentless pursuit of Microsoft by the US Department of Justice, at the behest of competitors such as Netscape, is a notorious example of the chilling of innovation.
You are showing your age if you even remember who Netscape was. Its complaint was that Microsoft, by giving away its browser, was engaging in predatory competition.
Netscape wanted to protect consumers from the scourge of lower prices – from not having to pay $49 for the Netscape browser. You are showing your age if you have ever paid to install a browser.
Netscape had the advantage of a senior US senator representing the state where it was based. He happened to sit on the committee overseeing the budget of the US antitrust enforcement agencies.
We are still waiting for the day when Microsoft finishes giving away its browser, excludes competition from the browser market, jacks up its price to make up for a good 20 years of giving the browser away, and is not immediately threatened by new entry.
The intrepid competition law enforcers of the 1990s did not anticipate a business model where competitors profitably give their product away.
Thankfully, Facebook did not face competitors who charged for their social media. If it had, what would the US Department of Justice have thought of the anti-competitive practice of giving social media away? The scourge of lower prices again – that great bugbear of competition law enforcement agencies.
Facebook is doing exactly what Microsoft did when it gave away the Internet Explorer browser. To this day, competition law enforcement agencies, including the New Zealand Commerce Commission, do not accept that lower prices are lawful in all cases without exception.
A test of how imbued you are with the fatal conceit about competition law is to cast your mind back to your attitude towards the Department of Justice antitrust lawsuit against Microsoft.
If you thought the anti-trust lawsuit against Microsoft was well-founded, you are an optimist about the efficient scope of competition law. To quote McKenzie and Shughart:
Microsoft’s critics come far closer to the mark when they complain that Microsoft has been “brutally competitive” than when they claim Microsoft is a “monopoly.” From our perspective, it appears that once again the Justice Department is using the antitrust laws to thwart competition by a highly successful American firm. To protect unsuccessful competitors, it is squelching competition.
A long time has passed since that suit. People can reflect upon the extent to which Microsoft has successfully monopolised browsing the Internet. It hasn't. As Gary Becker said:
Anti trust policy should recognize that dynamic competition is often a powerful force when static competition is weak. The big policy question then is whether it is worthwhile to bring expensive and time consuming anti trust cases against still innovating firms that have considerable profits and monopoly power, given the significant probability that new competitors will before long greatly erode this power through different products? I believe the answer to that is no, and that policy should often rely on dynamic competition, even when that allows dominant firms only temporarily to enjoy economic power.
The law and economics of competition has been a bit of a glass house for the last 50 years. People should be careful about criticising new ideas, and should be more modest about the positive contribution competition law makes to society.
Competition law can subvert competition by stymieing the introduction of new goods and the temporary monopoly often necessary to recoup their invention costs and induce innovation. Sam Peltzman, when reflecting on the contributions of Aaron Director to the law and economics of competition said:
There are myriad ways in which real-world business practices differ from the caricatures in textbooks. Those differences sometimes arouse suspicious responses from economists. Visions of market power and deadweight-loss triangles dance in their heads, and some of the suspect practices have been constrained by anti-trust policy. Director rejected this kind of intellectual laziness, and he sought, sometimes successfully, to inoculate those around him against it.
Director approached all business practices with a methodology that entailed asking very basic questions and answering them with a rigorous logic that appealed ultimately to facts. The style was verbal – some combination of Socratic dialogue and Adam Smith. This style had the disadvantage of producing few closed-form solutions. But it had the advantage of permitting analysis of the kinds of problems that eluded simple solutions.
Indeed, I believe that one reason for Director's lasting influence is that he was able to show that simple judgements about business practices often cannot withstand rigorous scrutiny.
Economic theory and empirical evidence are full of examples of business conduct that reduces choice but increases consumer welfare through lower prices, more innovation, or higher-quality products and services. Manne and Wright noted in their paper "Innovation and the Limits of Antitrust" that:
Both product and business innovations involve novel practices, and such practices generally result in monopoly explanations from the economics profession followed by hostility from the courts (though sometimes in reverse order) and then a subsequent, more nuanced economic understanding of the business practice usually recognizing its pro-competitive virtues.
Competition law enforcement agencies are suing Google, alleging it is anti-competitive. The dead hands of Google's competitors are buried somewhere in those suits. Is there no learning? There is certainly no modesty about past mistakes over the proper scope of competition law.
Bart Madden and Vernon Smith outlined a brilliant proposal to shorten lags in the availability of life-saving medicines, based on reforms in Japan:
Recently, Japanese legislation has implemented the core FTCM [Free to Choose Medicine] principles of allowing not-yet-approved drugs to be sold after safety and early efficacy has been demonstrated; in addition, observational data gathered for up to seven years from initial launch will be used to determine if formal drug approval is granted. In order to address the pressing needs of an aging population, Japanese politicians have initially focused on regenerative medicine (stem cells, etc.).
This process would release the relevant data behind the drug, including its clinical trials, on a web portal so that patients and their doctors can work out whether a new drug is suitable for them given their genetic markers. Madden and Smith explain the operation of the web portal for Free to Choose Medicine (FTCM) as follows:
Doctors would be empowered to use their medical knowledge and in-depth knowledge of their patients similar to how they decide on off-label use for approved drugs, i.e., for uses that the FDA has neither tested nor approved but, in the opinion of doctors, are likely to be beneficial to patients. To gain early access, patients would purchase the drug from developers and consent to doctor and developer immunity from lawsuits except in the case of gross negligence or willful misconduct.
Off-label use of medicines arises because the current Food and Drug Administration (FDA) process for drug approval has several phases. Phase 1 tests the safety of the drug. Later phases test whether the drug has its predicted effects. That should not be a concern of the FDA or its superfluous New Zealand equivalent, Medsafe.
If a new drug isn’t better than the existing competition, that’s a problem for its investors for backing the wrong horse. It’s up to its investors and potential buyers to work out for themselves whether a new drug is more effective than the existing options. That’s a commercial decision, not a decision from regulators.
Once a drug is approved by the FDA for particular uses, doctors and researchers often discover that a drug has other clinical applications.
Source: Pharma Marketing Blog
Rather than go through another round of FDA approvals, doctors simply prescribe the drug even though it is not approved by the FDA for that particular clinical use. This is what is called off-label prescription.
A number of US states have passed hopelessly unconstitutional Right to Try legislation that authorises the prescription of new drugs not approved by the FDA.
The Free to Choose Medicine proposal is similar to Right to Try legislation. Free to Choose Medicine would allow doctors to make their own prescription choices for their patients as long as the new drug has been shown to be safe – that is, as long as it has passed Phase 1, the drug-safety stage of the FDA approval process.
In 1962, an amended law gave the FDA authority to judge whether a new drug produced the results for which it had been developed. Previously, the FDA monitored only drug safety, and it had only sixty days to decide. Drug trials can now take up to 10 years.
Sam Peltzman showed in a famous paper in 1973 that the 1962 amendments to US Federal drug approval laws reduced the introduction of effective new drugs in the USA from an average of forty-three annually in the decade before the 1962 amendments to sixteen annually in the ten years afterwards. No increase in drug safety was identified.
Peltzman found that the unregulated market quickly weeded out ineffective drugs prior to the 1962 law change in the USA. The sales of ineffective new drugs declined rapidly within a few months of their introduction.
Doctors stop prescribing medicines that don't work. Patients complain quickly about medicines that don't work. What matters is that they had the chance to try the drug.
If economists have a bitter drinking song, a battle cry that unites all the warring schools of economic thought, it would be "how many people has the FDA killed today?". Many drugs became available in the USA years after they were on the market elsewhere because of drug approval lags at the FDA. The dead are many. To quote David Friedman:
In 1981… the FDA published a press release confessing to mass murder. That was not, of course, the way in which the release was worded; it was simply an announcement that the FDA had approved the use of timolol, a ß-blocker, to prevent recurrences of heart attacks. At the time timolol was approved, ß-blockers had been widely used outside the U.S. for over ten years. It was estimated that the use of timolol would save from seven thousand to ten thousand lives a year in the U.S. So the FDA, by forbidding the use of ß-blockers before 1981, was responsible for something close to a hundred thousand unnecessary deaths.
Free to Choose Medicine is an excellent way to break the regulatory deadlock over drug lags. Free to Choose Medicine should be adopted in New Zealand. Any new drug that has passed the phase 1 drug safety part of regulatory approval processes in any one of the USA, UK, Australia, Canada or Germany should be lawful to prescribe in New Zealand. New drugs should not have to go through the superfluous processes of Medsafe.
The existing drug regulatory regime is based upon making the drug safe for the average patient. That has been swept aside by pharmaceutical innovation as Madden and Smith explain:
Today’s world of accelerating medical advancements is ushering in an age of personalized medicine in which patients’ unique genetic makeup and biomarkers will increasingly lead to customized therapies in which samples are inherently small. This calls for a fast-learning, adaptable FTCM environment for generating new data.
In sharp contrast, the status quo FDA environment provides a yes/no approval decision based on statistical tests for an average patient, i.e., a one-size-fits-all drug approval process. In a FTCM environment, big data analytics would be used to analyze TEDD [Tradeoff Evaluation Drug Database] in general and, in particular, to discover subpopulations of patients who do extremely well or poorly from using a FTCM drug.
You can’t find Reaganomics in the tax statistics below. Maggie Thatcher stopped the rapid growth in British tax revenues that had caused the UK to become the sick man of Europe. She then unwound that growth in tax revenues as a percentage of British GDP. The election of the Blair government in 1997 stopped the rapid growth in tax revenues as a share of British GDP under that double secret socialist John Major. As Sam Peltzman noticed, the growth of government also stopped in Denmark and Sweden in the mid-1980s.
Peltzman was right! Scandinavian growth in the size of government stopped in the early 1980s.
Figure 1: Danish, Finnish, Norwegian and Swedish tax revenues as a percentage of GDP, 1965–2013
Source: OECD StatExtract.
Sam Peltzman in "Mortality Inequality" used the Lorenz curve to measure mortality inequality. The top figure below is based on data for 1852; the bottom figure on data for 2002. A straight line in the figure below at a 45-degree angle shows perfect equality of mortality: that is, 20% of the population lives 20% of the total life-years at this time; 40% of the population lives 40% of the life-years for this group, and so on.
The curved lines show the US data for 1852 and 2002. With the high infant mortality of 1852, the bottom 30% of the distribution lived close to 0% of the life-years.
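The Lorenz-curve construction Peltzman applies to mortality can be sketched in a few lines of code. This is a minimal illustration with hypothetical lifespans, not Peltzman's data: sort lifespans, cumulate the shares of total life-years, and compare the curve against the 45-degree line (summarised here by a Gini coefficient).

```python
import numpy as np

def lorenz_curve(life_years):
    """Cumulative population share vs. cumulative share of total
    life-years, sorted from shortest-lived to longest-lived."""
    x = np.sort(np.asarray(life_years, dtype=float))
    cum = np.cumsum(x)
    shares = np.concatenate(([0.0], cum / cum[-1]))  # start the curve at the origin
    pop = np.linspace(0.0, 1.0, len(shares))
    return pop, shares

def gini(life_years):
    """Gini coefficient: twice the area between the 45-degree
    perfect-equality line and the Lorenz curve."""
    pop, shares = lorenz_curve(life_years)
    # Area under the Lorenz curve by the trapezoidal rule.
    area = ((shares[:-1] + shares[1:]) / 2 * np.diff(pop)).sum()
    return 1.0 - 2.0 * area

# Hypothetical lifespans (NOT Peltzman's data): high infant mortality
# puts several near-zero lifespans in the stylised "1852" sample.
ages_1852 = [0, 1, 2, 40, 45, 50, 55, 60, 65, 70]
ages_2002 = [65, 70, 72, 75, 78, 80, 82, 84, 86, 90]
print(gini(ages_1852) > gini(ages_2002))  # mortality far more unequal in "1852"
```

If every member of the population lived the same number of years, the curve would coincide with the 45-degree line and the Gini would be zero.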
Sam Peltzman argues that:
governments grow where groups which share a common interest in that growth and can perceive and articulate that interest become more numerous.
Growth in the size of government is driven by the evolving demands of voters. Peltzman maintains that:
the levelling of income differences across a large part of the population … has in fact been a major source of the growth of government in the developed world over the last fifty years [because this levelling created] a broadening of the political base that stood to gain from redistribution generally and thus provided a fertile source of political support for expansion of specific programs. At the same time, these groups became more able to perceive and articulate that interest … this simultaneous growth of “ability” served to catalyse politically the spreading economic interest in redistribution
Growing income equality, which was a result of the Industrial Revolution and modern economic growth, caused the size of government to then grow. The reduction in inequality preceded the rise of the welfare state in the mid-20th century.
Sam Peltzman likes to point out that road fatalities in the USA fell at a fairly steady rate of about 3% a year for the entire 20th century. There was no break in trend after major road safety legislation was passed by Congress in 1966.
The composition of who died changed: Peltzman found fewer drivers and passengers died but more pedestrians were killed, because drivers drove faster and with less care. Alma Cohen and Liran Einav (2003) found that seat-belt laws, in the absence of any behavioural response, were expected to save three times as many lives as were in fact saved. This shortfall due to greater risk-taking is the Peltzman effect:
the hypothesized tendency of people to react to a safety regulation by increasing other risky behaviour, offsetting some or all of the benefit of the regulation.
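The Cohen and Einav ratio quoted above implies a simple back-of-the-envelope offset, sketched here (the three-to-one ratio is from the source; the percentage is just arithmetic on it):

```python
# Seat-belt laws were expected, absent any behavioural response, to save
# three times as many lives as they actually saved (Cohen and Einav 2003).
expected_over_actual = 3.0
# Share of the engineering benefit eaten up by riskier driving.
offset_share = 1.0 - 1.0 / expected_over_actual
print(f"{offset_share:.0%} of the expected benefit was offset")  # → 67%
```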
Peltzman later reflected on his 1973 study of the 1962 drug amendments:
I found that the unregulated market was very quickly weeding out ineffective drugs prior to 1962. Their sales declined rapidly within a few months of introduction, and there was thus little room for the regulation to improve on market forces … most of the subsequent academic research reached conclusions similar to mine …
The carnage from this regulation, I regret to assure you, will continue for a long time . . . the deaths of which I speak are counterfactual deaths, not deaths that can be directly connected to any regulatory malfeasance . . .
the actual victims of the regulation did not swallow a bad FDA-approved pill. They merely failed to swallow a good one in time and never knew what they had missed.
From 1950 to 1980 the size of government doubled in the developed world and then stopped dead in 1980. This great restraint on the growth of government happened everywhere. It was not just Thatcher’s Britain or Reagan’s America. It was everywhere, in France and Germany, and even in Scandinavia.
Peltzman’s data below show government spending doubling between 1950 and 1980, and then nothing much happening between 1980 and 2007 – the size of government is pretty flat as a share of GDP for 27 years.
Source: Sam Peltzman, The Socialist Revival? (2012).
There is a noticeable reduction in the size of government spending in Scandinavia. Reagan and Thatcher had nothing on those Social Democrats in Scandinavia when it comes to cutting the size of government.
Governments everywhere hit a brick wall in terms of their ability to raise further tax revenues. Political parties of the Left and Right recognised this new reality.
Government spending grew in many countries in the 20th century because of demographic shifts, more efficient taxes, more efficient spending, a shift in the political power from those taxed to those subsidised, shifts in political power among taxed groups, and shifts in political power among subsidised groups.
The median voter in all countries was alive to the power of incentives and to not killing the goose that laid the golden egg.
After 1980, the taxed, regulated and subsidised groups had an increased incentive to converge on new lower cost modes of redistribution.
More efficient taxes, more efficient spending, more efficient regulation and a more efficient state sector reduced the burden of taxes on the taxed groups.
Most subsidised groups benefited as well because their needs were met in ways that provoked less political opposition.
Gary Becker made this warning about the political repercussions of tax reform and economic reform in general for the size of government:
…the greater efficiency of a VAT and its ease of collection is a two-edged sword.
On the one hand, it would raise a given amount of tax revenue efficiently and cheaply.
Since economists usually evaluate different types of taxes by their efficiency and ease of collecting a given amount of tax revenue, economists typically like value added taxes.
The error in this method of evaluating taxes is that it does not consider the political economy determinants of the level of taxes.
From this political economy perspective, the value added tax does not look so attractive, at least to those of us who worry that governments would spend and tax at higher levels than is economically and socially desirable.
Reforms ensued after 1980 led by parties on the Left and Right, with some members of existing political groupings benefiting from joining new political coalitions.
The deadweight losses of taxes, transfers and regulation limit inefficient policies and the sustainability of redistribution.
Peltzman likes to note that at the start of the 20th century, the United States government was about 8% of GDP. The two largest programs were education and highways. The post office was as big as the military.
Government is about five times that now with defence, health, education and income security accounting for 70% of this total. Peltzman makes the very interesting point that:
There is no new program in the political horizon that seems capable of attaining anything like the size of any of these four.
For the time being, the future of government rests on the evolution of the existing mega-programs.
Health and income security account for 55% of total government spending in the OECD. It is in these two programs that the future growth of government lies.
The pressure for that growth in government will come from the elderly. Governments will have to choose between high taxes on the young to fund these programs for the elderly or find other options.