
Beware of Bad Research on the Minimum Wage from a Great San Diego University

Get that degree in blacksmithing

Do Blacksmiths Earn Above Minimum Wage?

A new, highly flawed study on the "job killing" impacts of the minimum wage, by a professor of economics at the University of California, San Diego (UCSD), went viral among far-right blogs. That was to be expected. But now it's making its way, uncritically, into more professional forums, like Cato, the American Enterprise Institute, the National Review, and Forbes. Soon it will be on nervous "mainstream media," like CNN, who want to be sure, at all costs, that they are "balanced," even if one of the sides is based on junk science (and the other is not).

That this defective work comes from a credentialed faculty member in a very fine economics department at a distinguished university of course adds to its credibility, and to its harm. Where was the peer review? The work in question, examining the impact of the 2007 mandated increase in the federal minimum wage from $5.15 to $7.25 per hour in three increments, concludes that "binding minimum wage increases had significant, negative effects on the employment and income growth of targeted workers. Lost income reflects contributions from employment declines, increased probabilities of working without pay (i.e., an "internship" effect), and lost wage growth associated with reductions in experience accumulation."

The research design of this study is terribly flawed on several grounds. First, the main study period begins in July 2007, just before the U.S. financial collapse and Great Recession, and ends in July 2009, not when the recovery really began, but more like when the free fall stopped.

Trying to evaluate the impacts of a small, gradual increase in the minimum wage (MW) affecting a fraction of the workforce, in the midst of an earthshaking set of economic events, is pure (unscientific) folly, even if some effects in the study extend a year or so beyond this highly exceptional episode in U.S. economic history. (Actually, we are not yet beyond the episode.) The author's acknowledgement of that is (amazingly) perfunctory, and his effort to adjust for it even weaker.

Under the best of study conditions — a period of economic stability with few or no seismic events — it is extremely difficult to reach "conclusions" about the impact of changing a single economic variable. It's like trying to study the environmental effects of hydraulic fracturing during a period that coincides with a magnitude 8.0 earthquake.

A second and related flaw in the study is that ANY brief time frame for examining the impact of a minimum wage change is simply too short to support valid conclusions (quite apart from the fact that this one coincided with the Great Recession). You are always going to find some (at least short-term) negative employment impacts from an increase in the MW. But raising the MW also puts more money in the pockets of the (vast majority of) workers who don't lose jobs. They spend the extra dollars in the local economy, creating (some) jobs elsewhere in the region. That's why many MW studies show no or negligible net job loss, and sometimes even small job gains. You need a little time for the cascading effects to show up, as the sketch below illustrates. (You can find a summary of that literature here.)
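To make the offsetting-effects argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an invented placeholder, not an estimate from the MW literature; the point is only that short-run losses and slower-arriving spending gains can roughly cancel:

```python
# Toy illustration of why short study windows can miss the offsetting
# effects of a minimum wage increase. All numbers are invented
# placeholders for illustration only.

workers_affected = 100_000      # workers receiving the raise (assumed)
jobs_lost_short_run = 2_000     # immediate employment losses (assumed)
raise_per_worker = 2_000        # extra annual dollars per remaining worker (assumed)
local_spend_share = 0.8         # share of the raise spent locally (assumed)
spending_per_job = 60_000       # local spending needed to support one job (assumed)

# Workers who keep their jobs spend most of the raise locally...
extra_spending = (workers_affected - jobs_lost_short_run) * raise_per_worker
# ...which eventually supports new jobs elsewhere in the region.
jobs_gained_later = local_spend_share * extra_spending / spending_per_job

net_effect = jobs_gained_later - jobs_lost_short_run
print(f"short-run job losses:        {jobs_lost_short_run:,}")
print(f"eventual spending-led gains: {jobs_gained_later:,.0f}")
print(f"net effect over time:        {net_effect:,.0f}")
```

Under these made-up numbers the cascading gains roughly offset the initial losses, which is exactly the pattern a two-year study window would miss.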

Most professional economists agree that large, broad-based, precipitous MW changes can be highly destabilizing, lead to significant, long-term job losses, and have the opposite of intended effects on low-income earners. But that's not at all the nature of the MW changes mandated by the Fair Minimum Wage Act of 2007 and examined in the UCSD study.

A third, serious flaw in the study concerns the data source: the Survey of Income and Program Participation (SIPP). Self-reported income and wages from surveys are notoriously poor. Read about that, and more, here and here. Administrative data (like unemployment insurance [UI] and Social Security Administration [SSA] records) are not perfect, but decades of research overwhelmingly conclude that SIPP (and other survey data) significantly understate wages compared with UI and SSA data. The reported differences are in the $1,000 to $3,000 range annually. That's large.

The author may say, "Well, yes, SIPP under-reports wages (which it does), but I'm not looking at absolute levels; I'm looking at 'trends.'" That would be a nice try. But the SIPP wage data not only underestimate wages; they are also notoriously unstable and volatile from year to year. SIPP may provide good trend results over a 10- or 20-year span, but using it to pinpoint the impacts of a small change in the economy (the MW) over two or three years is amazingly irresponsible.

The SIPP wage data, by the way, also include tips; i.e., respondents are asked to include tip income in their recollections. (Now, there is something the typical respondent is going to report with tremendous accuracy!) Many workers directly affected by the MW receive significant tip income. In the midst of the greatest recession since the Great Depression, the size of tips likely fell, quite apart from any effects a (small) change in the MW may have had. That is not addressed in the study, as best as I can tell. I could have missed something buried in a footnote.

Even if this study were without serious flaws in design and methods, the best that can be said for it is that it confirms the belief that public policies which raise business costs during a severe recession or a halting recovery may not be a good idea. That's elementary.

Ironically, the minimum wage isn’t even the most effective way to address wage and income inequality. Expanding the Earned Income Tax Credit (EITC) is a much better approach. Still, that doesn’t mean junk science attacking the MW ought to be published under the banner of a great university.

So, Who is Really “Qualified” to be President?

The threshold question of basic "qualifications" to be U.S. President has been in the forefront for the two most recent Presidencies, to a degree that seems unprecedented. For both Bush II and Obama, a deep skepticism (to put it mildly) has swirled around their basic "qualifications" for the job, for vastly different reasons. We are not talking here about constitutional qualifications, like age or citizenship, but about competencies demanded by the job. For Obama, the discredited "birthers" focused on both.

Because the U.S. Presidency is an extremely demanding, one-of-a-kind job, not very many Presidents have been especially prepared for the position when they first took office. How could they be, especially after the U.S. became a world power and a big player in international affairs in the first half of the 20th century? Unless you've been a secretary of state or defense (with a portfolio covering U.S. foreign/defense policy), a CIA Director, a Chairman of the Joint Chiefs, or a Vice President who was empowered to make real decisions, there is a huge hole in almost anyone's resume.

Reading Foreign Affairs magazine or going on Congressional junkets abroad doesn't quite cut it either. Heading a foreign affairs committee in Congress counts for something, but it's still not the same as being a real diplomat or an executive point person in foreign or defense policy. Having been Governor of a large, complex state helps a lot, but still doesn't satisfy the big foreign and defense policy requirements, even if you can see Russia from the Governor's mansion.

If you place the resumes of our last twelve Presidents at the time they first ran for the High Office (or succeeded to it) side by side with the Job Description, more than half (the seven in red) would have failed the "resume test"; another three (blue) would have barely passed; only two — Eisenhower and Bush I — would have received a grade in the B to B+ range. At least that's my audacious and subjective argument, depicted in the color-coded table.

The gentlemen in the red section are all essentially at the same level. Historians would say there's a mixture of great and failed Presidents in this group. The same goes for the chaps with the better resumes, in blue and yellow.

SCORING PRESIDENTIAL RESUMES: Seven (equally weighted) attributes of the Presidential Job Description are aligned across the top: (A) Leadership on the World Stage, (B) CEO Experience, (C) Diverse Experience in Business, Government, (D) International Affairs or High Level Military Experience, (E) Knowledge, Education, (F) Judgment, Intelligence.

For each attribute, I (subjectively) assigned points on a scale of 1-5 for the last twelve U.S. Presidents (when they first took office). All of this is of course my imperfect judgment.  The last column expresses each total score as a percent of the maximum. The maximum is 35 points (a perfect “5” on each of seven attributes).
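For readers who want the arithmetic spelled out, here is a minimal sketch of the scoring scheme in Python. The attribute points below are hypothetical placeholders, not values from my table; it simply shows how seven 1-to-5 scores become a percent of the 35-point maximum:

```python
# Minimal sketch of the resume-scoring arithmetic described above.
# The example points are hypothetical, not taken from the actual table.

NUM_ATTRIBUTES = 7       # seven equally weighted attributes
MAX_PER_ATTRIBUTE = 5    # each scored on a 1-5 scale
MAX_TOTAL = NUM_ATTRIBUTES * MAX_PER_ATTRIBUTE  # 35 points

def resume_score(points):
    """Given a list of seven 1-5 attribute scores, return the total
    and the total expressed as a percent of the 35-point maximum."""
    assert len(points) == NUM_ATTRIBUTES
    total = sum(points)
    return total, 100.0 * total / MAX_TOTAL

# A hypothetical candidate: strong domestically, weak on foreign affairs.
total, pct = resume_score([1, 2, 4, 1, 4, 4, 3])
print(f"total: {total}/{MAX_TOTAL} ({pct:.0f}% of maximum)")
```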

Every President's points would of course have been considerably higher had they been scored when running for a second term. Having been President is still by far the best experience for being President.

Yes, Gerald Ford brought a lot to the table. He was a Yale Law School grad; a Naval officer with combat medals; a Congressman for 25 years, with a lot of budget experience; and Republican Minority Leader of the House… all before he was appointed Vice President (replacing the discredited Agnew) and succeeded to the Presidency (after Nixon resigned). Too bad he was "boring," had a ponderous speaking style, and was lampooned mercilessly by Chevy Chase on SNL, ironically portraying the most athletic U.S. President in history as a chronic prat-faller.

You may argue with me about the elder Bush having too high a grade, and Reagan too low. But Pappy Bush was "Mr. Resume": Congressman, UN Ambassador, Envoy to China, CIA Director… and more. Check him out.

What about Reagan's low grade? First, a reminder: these scores don't reflect later performance or reputation as President. Reagan, except for having been Governor of California for eight years, which is a very big deal, was weak on the other criteria. (Obviously, my opinion.) Yes, I gave Reagan only a "2" on knowledge and education. We can debate that. He was totally lacking, at the time of his first run, in anything related to foreign affairs, defense, or national security. (So were Carter, Clinton, Obama, and Bush II.) Obama's low score on the resume test does not reflect what I think of his Presidency.

Although Harry Truman was Vice President before he took the High Office, he had been in that position for only 82 days when FDR died, and had been ignored and marginalized during that short time. He was mocked by adversaries (and even by some in his own party) as a failed haberdasher from the "corrupt" Pendergast machine in Missouri. Views about the Truman Presidency of course turned 180 degrees in the years after he left office.

The lack of "qualified" candidates under the resume test should not be a surprise. Besides the difficulties of meeting the foreign and defense policy specs, the advent of primary elections in the 1960s and 1970s, which replaced party conventions and smoke-filled rooms with the (so-called) "fresh air" of "democratic elections," changed everything. Being able to raise money and shine in the media spotlight became more important than satisfying the attributes on the Presidential Job Description. All of that was already in play in the Kennedy nomination (see Theodore White's famous account), and even more so in the nominations of Clinton, Reagan, and Obama. (Once again, that doesn't mean none were good Presidents.)

Am I lamenting the loss of party conventions and cloakroom deals? Yes, to some degree. There are still cloakroom cabals; only now they're conducted in fancy hotels, gated estates, or board rooms, with very big money players.

If You Are Worried About the National Debt, Be Wary of “Dynamic Scoring”

Wonder Where the Numbers Come From?

Republican Congressmen, led by House Budget Committee Chair Paul Ryan, are going to direct congressional staff to change the way they estimate the revenue impact of proposed changes to the tax code. They want the Congressional Budget Office (CBO) and the Joint Committee on Taxation (JCT) to use "dynamic scoring" to estimate the cost of tax policy changes. The method reflects the belief, not without some merit, that tax cuts can change behavior in ways that boost economic activity and generate new revenue, thus paying for themselves without contributing to deficits. You can see why this scoring approach might be attractive!

The story did make the front page of the LA Times business section. But because it's an arcane subject, it's been flying under the radar.

It's not entirely clear how the Ryan proposal would affect the scoring of spending bills. Some government spending, especially in education and transportation, is demonstrably positive for the economy. Such measures, too, can be scored as "paying for themselves" and being debt-neutral. Dynamic scoring is thus a double-edged sword for fiscal conservatives.

The dynamic scoring approach, championed now by "conservatives," would, ironically, make it harder to bring down the national debt but easier to pass tax cuts. It would also make it easier to pass spending bills, which is, of course, not Mr. Ryan's intention.

The "scoring" of tax (and spending) bills is a nerdy and technical enterprise, often performed by PhD economists and MBAs, but it has serious consequences for the nation's short- and long-term fiscal outlook — both perception and reality. The contrasting approach to dynamic scoring is "static scoring." These are not either/or options; they sit on a continuum.

Ideally, government needs an approach to scoring taxes (and spending) which is:

• Transparent
• Replicable
• Explainable (to people without MBAs or doctorates in economics)
• Fiscally prudent
• Not easy to manipulate for political purposes

The Ryan dynamic scoring proposal falls short on all these counts. Static scoring, the traditional approach, scores much better.

The Ryan proposal is also breathtaking in its hypocrisy, because its proponents talk about using "sophisticated advances in economic science" (new and better economic models) that make dynamic scoring more reliable. So, is "economic science" (and its modeling) more reliable than, say, "climate science" (and its modeling infrastructure)? Really?

On the spending side, the CBO, under pressure from both Republicans and Democrats, used a form of scoring with some dynamic elements to estimate the impact of the Affordable Care Act (ACA). Thus, the CBO says Obamacare will slow the overall rate of growth in health care costs and make it easier to rein in the national debt. Some of that has actually happened since the measure passed, though it's not clear how much of it is due to the ACA. This made Democrats happy and made it easier to pass the ACA in 2010.

The CBO also put its toe in the dynamic scoring waters by assuming the ACA would cause a number of people to exit the labor market because they would no longer be dependent on employer-provided health insurance. This CBO analysis made it easier for Republicans to argue that the ACA would shrink the U.S. economy. (Funny how only part of the CBO's story about the ACA seems to have made it into the consciousness of Americans.)

You can see the can of worms being opened here. There is, of course, some merit to identifying the "full" impacts of tax (or spending) bills. But scoring the budget this way is not only dicey; it's also not transparent, replicable, fiscally prudent, or readily explainable, and it's ripe for political manipulation.

Dynamic scoring is not quite the equivalent of "voodoo economics," as its fiercest critics say. A cut in the gasoline tax (up to a point) will likely induce more economic activity across the economy. Even a Marxist economist (with a real PhD) would probably own up to that. But the more dynamic you get in scoring, the closer it gets to voodoo. That's because even the best economists and models can't predict these second-, third-, and Nth-order impacts very accurately.

Not only that, but dynamic scoring doesn't typically reflect the spending cuts that would accompany a tax cut (if one wanted to avoid increasing the debt). A gasoline tax cut would likely reduce the amount of money available to build highways, bridges, and other transportation infrastructure. That could have a very large negative impact on the economy, maybe not immediately, but down the (better paved) road. Typically, dynamic scoring models either don't measure that at all, or do it inaccurately.
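To see how much the scoring choice matters, here is a toy comparison for a hypothetical gasoline tax cut. Every figure is an illustrative assumption, not an estimate from the CBO, JCT, or any real model; the "feedback" fraction is precisely the contested dynamic assumption:

```python
# Toy comparison of static vs. dynamic scoring for a hypothetical
# gasoline tax cut. All numbers are illustrative assumptions only.

tax_base = 100e9       # annual taxed gasoline sales, in dollars (assumed)
rate_cut = 0.02        # tax rate cut of 2 percentage points (assumed)
feedback = 0.25        # fraction of lost revenue "recovered" through
                       # induced economic activity (the contested assumption)
infrastructure_drag = 0.3e9  # annual cost of foregone highway spending,
                             # often left out of dynamic scores (assumed)

static_cost = tax_base * rate_cut                 # the traditional score
dynamic_cost = static_cost * (1 - feedback)       # looks cheaper
fuller_cost = dynamic_cost + infrastructure_drag  # counting the spending side

for label, cost in [("static score", static_cost),
                    ("dynamic score", dynamic_cost),
                    ("dynamic + spending effects", fuller_cost)]:
    print(f"{label:>27}: ${cost / 1e9:.1f}B per year")
```

The pattern, not the numbers, is the point: dynamic assumptions make the same tax cut look cheaper, and omitting the spending side understates its true cost.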

Besides, there is always room for fancier modeling and research as a supplement to official scoring, where economists can legitimately talk about broader effects without affecting the official balance sheets.

If you want a better understanding of dynamic (versus static) scoring, read on.


More Data Show Why the Ferguson Decision Angers So Many

An impressive research effort published by PBS News makes a powerful case that an indictment in the Ferguson-Brown-Wilson affair would have made a great deal of sense. Conviction of the officer is another matter. PBS researchers systematically culled voluminous transcripts to summarize and quantify key eyewitness testimony from the grand jury hearings. The PBS findings show that testimony from the eyewitnesses (chosen by the Missouri prosecutor) would normally have been clear grounds for an indictment.

Partly because the PBS report wasn’t (could not have been) issued immediately after the prosecutor’s announcement, it has received scant attention in the mainstream media. Here are the main findings, quoted directly from the report.

• More than 50 percent of the witness statements said that Michael Brown held his hands up when Darren Wilson shot him. (16 out of 29 such statements)
• Only five witness statements said that Brown reached toward his waist during the confrontation leading up to Wilson shooting him to death.
• More than half of the witness statements said that Brown was running away from Wilson when the police officer opened fire on the 18-year-old, while fewer than one-fifth of such statements indicated that was not the case.
• There was an even split among witness statements that said whether or not Wilson fired upon Brown when the 18-year-old had already collapsed onto the ground.
• Only six witness statements said that Brown was kneeling when Wilson opened fire on him. More than half of the witness statements did not mention whether or not Brown was kneeling.

The PBS report is definitely worth studying, especially its main chart coding and summarizing the testimony. Here it is:

Eyewitness Testimony, Ferguson Case, PBS Report

In his press conference on November 24th, viewed by millions, Prosecutor McCulloch said, in his own words, that eyewitness testimony was ambiguous, contradictory, and inconsistent. For a breathtaking split second, I thought he was going to surprise the world with news of an indictment. The ambiguity alone, documented by McCulloch, in an extremely sensitive case involving a fatal shooting, should have resulted in an indictment.

An indictment would have meant there were a lot of unanswered questions that needed to be examined at trial, which is what McCulloch appears to have said. The PBS report provides the documentation.

A trial is where conflicting testimony, in a case like this, is supposed to be sorted out in true adversarial fashion, with an umpire. Instead, prosecutor McCulloch used the fog of the testimony, together with some physical evidence, which he said favored one side, to conclude that an indictment was unjustified.

Conviction in a subsequent trial would hardly have been a slam dunk. But when the charge being considered is so serious, and the exculpatory evidence at best murky, prosecutors routinely seek indictments, and grand juries almost always go along. We wouldn't need criminal trials, judges, and juries if prosecutors and grand juries everywhere operated as they did in Ferguson. Hey, maybe there is a useful cost-saving idea embedded in this affair?

The main factor behind the outcome in Ferguson is a severe reluctance to second-guess, much less prosecute or punish, law enforcement officers for actions they take in the line of duty. This is hardly a new phenomenon, or unique to the USA or to any one part of the country.

It makes sense for societies to subject police actions in the line of duty to a somewhat different standard than we use in civilian life. But a lot of criminologists and ordinary citizens believe society has gone too far, maybe way too far, in that direction. Here are some data, which may help us make a judgment.

The newest and most believable data I could find (from a Bowling Green State University criminology study reported in a Wall Street Journal article) say that, in a seven-year period ending in 2011, 41 officers in the U.S. (about 6 per year) were charged with either murder or manslaughter in connection with on-duty shootings. These are just charges, not necessarily resulting in convictions.

In the same period, the FBI says there were 2,718 justified homicides by police (or about 400 per year). Criminologists widely consider this a large undercount because of lax reporting standards. An "independent" estimate based on newspaper reports across the U.S. says the real number is closer to about 1,000 per year (or about 7,000 in seven years, rather than the 2,718 reported by the FBI).

If you accept the FBI police homicide number, it means about 1.5 percent of police homicides result in a "charge" of murder or a lesser manslaughter offense. If you accept the "independent" count of police homicides, the percentage of officers charged in on-duty killings drops to about 0.6 percent. I of course don't know what an appropriate charge rate might be, but a percentage in this vicinity is worrisome.
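The charge rates above can be recomputed directly from the figures cited in the text; a quick sketch:

```python
# Recomputing the charge rates from the figures cited above.

officers_charged = 41          # murder/manslaughter charges over 7 years (BGSU/WSJ)
fbi_homicides = 2_718          # FBI count of justified police homicides, same period
independent_homicides = 7_000  # independent estimate (~1,000 per year)

fbi_rate = 100 * officers_charged / fbi_homicides
independent_rate = 100 * officers_charged / independent_homicides

print(f"charge rate, FBI count:         {fbi_rate:.1f}%")          # ~1.5%
print(f"charge rate, independent count: {independent_rate:.1f}%")  # ~0.6%
```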

Government at all levels in the US needs a robust process for holding police accountable, one that's more effective and trustworthy than internal reviews or appointed "review boards" dominated by police or their labor unions. The process must still be sensitive to the unique position of law enforcement officers and their need to know they have the support of elected officials and citizens. But today's "acquittal" rate of roughly 99% ought to raise some eyebrows.

Most governments at all levels in the U.S. have the capacity to create special adjudicatory systems that handle charges of police misconduct while striking the right balance. We don't have that in very many places in the U.S. today.