Courtesy of William F. Trench, Andrew G. Cowles Distinguished Professor Emeritus, Trinity University:
List of undergraduate and graduate econometrics textbooks, with brief annotations and links to supplementary materials for them.
Perhaps also useful to students is a page of links to free online econometrics books and class notes.
The official release of John R. Lott, Jr.'s Dumbing Down the Courts is today. (Disclosure: I went to graduate school at UCLA with John.)
This book does a fine job of arguing a single, important point. Over the last twenty-five years or so, the individuals who would make the most effective federal judges have become increasingly likely to suffer delays in being confirmed and less likely to be confirmed at all. John states his thesis on the first page:
Who are the nominees that make it through the confirmation process to become a federal judge? Are they the brightest people who have the most detailed and sophisticated knowledge of the law? Are the most successful lower court judges also the most likely to get promoted to serve on higher courts?
Surprisingly, the qualities that make someone a successful judge also make them less likely to be confirmed for the same reason that smart, persuasive people are rarely asked to be jurors.
John supports his thesis in two principal ways. In Chapter 2, “Supreme Battles,” he provides some anecdotal evidence. For example, the nominations of Robert Bork and Douglas Ginsburg were opposed effectively because they were considered too “brilliant”; Anthony Kennedy was acceptable because he wasn’t considered as smart (pp. 75-76). Elena Kagan was confirmed with fewer votes than Sonia Sotomayor because Kagan was considered “more formidable” (p. 81).
John presents the bulk of his argument in Chapter 4, “Who Has the Toughest Time Getting Confirmed?” This chapter uses regression analysis to examine how nominees of different quality are treated. But what is “quality”? John uses two types of measures. First are attributes known at the time of nomination, or shortly after it: whether the nominee attended a top law school, whether he or she served on the law review, what type of judicial clerkship, if any, the nominee served, and what the nominee’s ABA rating was. The second measure is based on the work of two previous papers that examined how much influence sitting judges have had: how often their decisions are cited and by whom. These measures, along with various control variables, are included in regressions explaining the time between nomination and confirmation and the probability of confirmation. (The controls include the legal and professional background of the nominees, their demographic backgrounds, and the political environment.)
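For readers who want a concrete picture of the exercise, here is a minimal sketch of the kind of regression described above. Everything in it is fabricated for illustration — the variable names, coefficients, and data are invented, not taken from Lott’s dataset — but it shows the general shape: regress days-to-confirmation on nominee-quality indicators plus a political-environment control.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical nominee attributes (all 0/1 dummies, fabricated for illustration)
top_law_school = rng.integers(0, 2, n)   # attended a top law school
law_review     = rng.integers(0, 2, n)   # served on the law review
clerkship      = rng.integers(0, 2, n)   # held a judicial clerkship
divided_senate = rng.integers(0, 2, n)   # control: political environment

# Simulate days from nomination to confirmation so that higher-quality
# nominees wait longer -- the pattern the book reports, not its actual data.
days = (120 + 60 * top_law_school + 40 * law_review + 30 * clerkship
        + 50 * divided_senate + rng.normal(0, 30, n))

# OLS by hand: regress days on a constant plus the attributes
X = np.column_stack([np.ones(n), top_law_school, law_review,
                     clerkship, divided_senate])
beta, *_ = np.linalg.lstsq(X, days, rcond=None)
print(dict(zip(["const", "top_school", "law_review",
                "clerkship", "divided_senate"], beta.round(1))))
```

A positive coefficient on a quality indicator means that attribute is associated with a longer wait. (For the second outcome, the probability of confirmation, the analogous exercise would use a binary dependent variable and a logit or probit rather than OLS.)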
This is a fine little book (just 114 pages, not counting the acknowledgements, footnotes, and index). It would be useful as a supplementary text in any statistics or quantitative methods course. It also could be enjoyed by anyone with an interest in data, especially when data are used to formulate public policy. The author, Joel Best, succinctly states his theme as follows (p. 5):
This book is guided by the assumption that we are exposed to many statistics that have serious flaws. This is important, because most of us have a tendency to equate numbers with facts, to presume that statistical information is probably pretty accurate information. If that’s wrong—if lots of the figures that we encounter are in fact flawed—then we need ways of assessing the data we’re given.
Best makes his case through a set of well-chosen examples. Some are of numbers that are inaccurately high. He warns (p. 11), “. . . keep in mind one rule of thumb: in general, the worse things are the less common they are. . . . Most social problems display this pattern: there are lots of less serious cases, and relatively few very serious ones. This point is important because media coverage and other claims about social problems often feature disturbing typifying examples: that is, they use dramatic cases to illustrate the problem.”
Here are three examples.
Page 10: a claim that “more than four million women [in the U.S.] are battered to death by their husbands or boyfriends each year”. Best notes that four million is far more than the annual number of deaths of women from all causes. I found this claim repeated in other places, such as here and here. How did this claim come about? Best doesn’t speculate, but I found sites claiming four million batterings, not deaths: http://www.clarkprosecutor.org/html/domviol/facts.htm and http://www.davidicke.com/forum/showthread.php?t=232778. I speculate that “batterings” were transformed into “deaths”.
Page 19: a classic example. Claim: “Today, a young person, age 14–26, kills herself or himself every 13 minutes in the United States.” But that’s more than 40,000 per year. The total number of suicides by people of all ages was about 32,000 per year. The correct number, Best finds, turns out to be about 4,000 (in 2002), or about one every 131 minutes. Best concludes the claimed number was the product of a slipped decimal point.
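The back-of-the-envelope check is easy to reproduce (the 4,000 figure is Best’s corrected count for 2002):

```python
minutes_per_year = 365 * 24 * 60       # 525,600

# The claimed rate: one suicide every 13 minutes
implied_annual = minutes_per_year / 13
print(round(implied_annual))           # over 40,000 -- more than all U.S. suicides combined

# Best's corrected figure: about 4,000 per year for this age group
minutes_between = minutes_per_year / 4000
print(round(minutes_between))          # about 131 minutes between suicides
```

Note that the two rates differ by almost exactly a factor of ten, which is what makes the slipped-decimal-point explanation so plausible.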
Pages 44 and 45: What about the trend toward obesity in America? “In 1998, the federal government redefined the category ‘overweight’ . . . The redefinition meant that 29 million Americans whose weight had been considered normal suddenly were classified as overweight.” How many journalists know this? How many adjust the numbers indicating a rapid rise in obesity for the change in definition?