April 1, 2012
Corporations are not working for the 99 percent. But
this wasn’t always the case. In a special five-part series, William
Lazonick, professor at UMass, president of the Academic-Industry
Research Network, and a leading expert on the business corporation,
along with journalist Ken Jacobson and AlterNet’s Lynn Parramore, will
examine the foundations, history and purpose of the corporation to
answer this vital question: How can the public take control of the
business corporation and make it work for the real economy?
In 2010, the top 500 U.S. corporations – the Fortune 500 – generated $10.7 trillion in sales, reaped a whopping $702 billion in profits, and employed 24.9 million people around the globe.
Historically, when these corporations have invested in the productive
capabilities of their American employees, we’ve had lots of well-paid
and stable jobs.
That was the case a half century ago.
Unfortunately, it’s not the case today. For the past three decades,
top executives have been rewarding themselves with mega-million dollar
compensation packages while American workers have suffered an
unrelenting disappearance of middle-class jobs. Since the 1990s, this hollowing out of the middle class has even affected people with lots of education and work experience. As the Occupy Wall Street movement has recognized, the concentration of income and wealth among the top “1 percent” leaves the rest of us high and dry.
What went wrong? A fundamental transformation in the investment
strategies of major U.S. corporations is a big part of the story.
A Look Back
A generation or two ago, corporate leaders considered the interests
of their companies to be aligned with those of the broader society. In
1953, at his congressional confirmation hearing to be Secretary of
Defense, General Motors CEO Charles E. Wilson was asked whether he would
be able to make a decision that conflicted with the interests of his
company. His famous reply: “For years I thought what was good for the country was good for General Motors and vice versa.”
Wilson had good reason to think so. Under the Federal-Aid Highway Act of 1956,
the U.S. government committed to pay for 90 percent of the cost of
building 41,000 miles of interstate highways. The Eisenhower
administration argued that we needed them in case of a military attack
(the same justification that would be used in the 1960s for government
funding of what would become the Internet). Of course, the interstate
highway system also gave businesses and households a fundamental
physical infrastructure for civilian purposes – from zipping products
around the country to family road trips in the station wagon.
And it was also good for GM. Sales shot up and employment soared.
GM's managers, engineers and other male white-collar employees could
look forward to careers with one company, along with defined-benefit
pensions and health benefits in retirement. GM’s blue-collar employees,
represented by the United Auto Workers (UAW), did well, too. In business
downturns, such as those of 1958, 1961 and 1970, GM laid off its most
junior blue-collar workers, but the UAW paid them supplemental
unemployment benefits on top of their unemployment insurance. When
business picked up, GM rehired these workers on a seniority basis.
Such opportunities and employment security were typical of most
Fortune 500 firms in the 1950s, '60s and '70s. A career with one company
was the norm, while mass layoffs simply for the sake of boosting
profits were viewed as bad not only for the country, but for the
company, too.
What a difference three decades makes! Now mass layoffs to boost
profits are the norm, while the expectation of a career with one company
is long gone. This transformation happened because the U.S. business corporation has become, in a (rather ugly) word, “financialized.” That means executives began to base all their decisions on increasing
corporate earnings for the sake of jacking up corporate stock prices.
Other concerns -- economic, social and political -- took a backseat.
From the 1980s, the talk in boardrooms and business schools changed. Instead of running corporations to create wealth for all, leaders were told to think only of “maximizing shareholder value.”
When the shareholder-value mantra becomes the main focus, executives
concentrate on avoiding taxes for the sake of higher profits, and they
don’t think twice about permanently axing workers. They increase
distributions of corporate cash to shareholders in the form of
dividends and, even more prominently, stock buybacks. When a corporation
becomes financialized, the top executives no longer concern themselves
with investing in the productive capabilities of employees, the
foundation for rising living standards for all. They become focused
instead on generating financial profits that can justify higher stock
prices – in large part because, through their stock-based compensation,
high stock prices translate into megabucks for these corporate
executives themselves. The ideology becomes: Corporations for the 0.1
percent -- and the 99 percent be damned.
If we want U.S. corporations to work for us rather than just for the executives at the top, the 99 percent needs to understand these fundamental changes in how those executives have decided to use corporate resources.
The Financialization Monster
The beginnings of financialization date back to the 1960s when
conglomerate titans built empires by gobbling up scores and even
hundreds of companies. Business schools justified this concentration of
corporate power by teaching that a good manager could manage any type of
business -- the bigger the better. But conglomeration often became simply a method of using accounting tricks to boost earnings in the short run and encourage speculation in the company’s stock price. This focus on short-term financial manipulation often undermined the conditions for sustaining higher levels of earnings over the long term. But the interest of stock-market speculators was (as it always is) to capitalize on short-term changes in the market’s valuation of corporate shares.
When these giant empires imploded in the 1970s and 1980s, people
began to see the weakness of the model. By the early 1970s the
downgraded debt of conglomerates, known as “fallen angels,” created the
opportunity for a young bond trader, Michael Milken, to create a liquid
market in high-yield “junk bonds.” By the mid-'80s, Milken (who eventually went to jail for securities fraud) was using his network of financial institutions to back corporate raiders in junk-bond-financed leveraged buyouts. The purpose was to extract as much money as possible from a company once it was taken over, through layoffs of workers and by breaking up the company to sell it off in pieces.
Wall Street changed the way it made its money. Investment banks
turned their focus from supporting long-term corporate investment in
productive assets to trading corporate securities in search of higher
yields. The great casino was taking form. In 1971, NASDAQ was launched
as a national electronic market for generating price quotes on highly
speculative stocks. The Employee Retirement Income Security Act of 1974
encouraged corporate pension funds to get into the game since inflation
had eroded household savings. In 1975, competition from NASDAQ led the much more conservative New York Stock Exchange, which dated back to 1792, to end fixed commissions on stock transactions. This move only further encouraged stock-market speculation by making it less costly for speculators to buy and sell.
In 1980, Robert Hayes and William Abernathy, professors of technology
management at Harvard Business School, wrote a widely read article that criticized executives for focusing on short-term profits rather than investments in innovation. But in 1983, two financial economists, Eugene Fama of the University of Chicago and Michael Jensen of the University of Rochester, co-authored two articles in the Journal of Law and Economics that extolled corporate honchos who focused on “maximizing shareholder value” -- by which they meant using corporate resources to boost stock prices, however short the time frame. In 1985 Jensen landed a higher-profile pulpit at Harvard Business School. Soon, shareholder-value
ideology became the mantra of thousands of MBA students who were
unleashed in the corporate world.
Proponents of the Fama/Jensen view argue that for superior economic performance, corporate resources should be allocated to maximize returns to shareholders because they are the only economic actors who make investments without a guaranteed return. They say that shareholders are the only ones who bear risk in the corporate economy, and so they should also get the rewards. But this argument could not be more false. In fact, lots of people bear the risks of investing in the corporation without knowing whether those investments will pay off for them. Governments in the U.S., funded
by the body of taxpayers, are constantly making investments in physical
infrastructures and human capabilities that provide benefits to
businesses, but without a guaranteed return to taxpayers. An employer
expects workers to give time and effort beyond that required by their
current pay to make a better product and boost profits for the company
in the future. Where’s the worker’s guaranteed return? In contrast, most
public shareholders simply buy and sell shares of a corporation on the
stock market, making no contribution whatsoever to investment in the
company’s productive capabilities.
In the name of this misguided philosophy, major U.S. corporations now
channel virtually all of their profits to shareholders, not only in the
form of dividends, which reward them for holding shares, but even more
importantly in the form of stock buybacks, which reward them for selling
shares. The sole purpose of stock buybacks is to give a manipulative
boost to a company’s stock price. The top executives then benefit when
they exercise their typically bountiful stock options and cash in by
selling the stock. For 2001-2010, 459 companies in the S&P 500 Index
in January 2011 distributed $1.9 trillion in dividends, equivalent to
40 percent of their combined net income, and $2.6 trillion in buybacks,
equal to another 54 percent of their net income. After all that, what
was left over for investments in innovation, including upgrading the
capabilities of their workforces? Not much.
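To make the arithmetic concrete, here is a minimal back-of-envelope sketch (not part of the original article) that works out what the payout ratios cited above imply about how much income was left over; the dollar figures and percentages come from the paragraph above.

```python
# Back-of-envelope check of the payout figures cited above: if $1.9 trillion
# in dividends was 40 percent of combined net income and $2.6 trillion in
# buybacks another 54 percent, roughly 6 percent of net income remained for
# everything else, including investment in innovation and the workforce.

dividends = 1.9e12        # 2001-2010 dividends, USD (from the article)
buybacks = 2.6e12         # 2001-2010 buybacks, USD (from the article)
dividend_share = 0.40     # dividends as a share of net income (from the article)
buyback_share = 0.54      # buybacks as a share of net income (from the article)

implied_net_income = dividends / dividend_share   # roughly $4.75 trillion
combined_payout = dividend_share + buyback_share  # 0.94
retained_share = 1 - combined_payout              # roughly 0.06

print(f"Implied combined net income: ${implied_net_income / 1e12:.1f} trillion")
print(f"Share paid out to shareholders: {combined_payout:.0%}")
print(f"Share left for reinvestment: {retained_share:.0%}")
```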
Falling to the Challenge
Big changes in markets and technologies since the 1980s have given U.S.
corporations serious competitive challenges. Confronted by Japanese and
then Korean competition, companies closed plants, permanently displacing
blue-collar workers from what had been middle-class jobs. Meanwhile,
the open systems technologies that characterized the microelectronics
revolution favored younger workers with the latest computer skills. In
the name of shareholder value, by the 1990s U.S. corporations seized on
these changes in competition and technology to put an end to the norm of
a career with one company, ridding themselves of more expensive older
employees in the process. In the 2000s, American corporations found that
low-wage nations like China and India possessed millions of qualified
college graduates who were able and willing to do high-end work in place
of U.S. workers. Offshoring put the nail in the coffin of employment
security in corporate America.
In response to these challenges, U.S. corporations could have used their
profits to upgrade the capabilities of the U.S. labor force, laying the
foundation for a new prosperity. Instead, the same misguided, financialized responses have meant big losses for taxpayers and workers while the top 1 percent has gained. Rather than rising to the challenge, they’ve fallen into greed and short-sightedness that chip away at our chances for a prosperous economy.
Yet properly governed, corporations can be run for the 99 percent. In
fact, that’s still the case in many successful economies. The truth is
that it’s possible to take back the corporations for the 99 percent in
the U.S. if we can really wrap our heads around the problem and the
solutions. Here are three places to start:
1) Ban It. Ban large established
companies from buying back their own stock, and reward them instead for
investing in the retention and training of their employees.
2) Link It. Link executive pay to the productive performance of the company, with increases in executive pay tied to pay increases for the corporate labor force as a whole.
3) Occupy It. Recognize that taxpayers
and workers bear a significant proportion of the risk of corporate
investment, and put their representatives on corporate boards where they
can have input into the relation between risks and rewards.
William Lazonick is professor
of economics and director of the UMass Center for Industrial
Competitiveness. He is president of the Academic-Industry Research Network.
His book, "Sustainable Prosperity in the New Economy? Business
Organization and High-Tech Employment in the United States" (Upjohn
Institute, 2009) won the 2010 Schumpeter Prize.