Thursday, December 18, 2014

Clayton Cramer.: Where Does 1 In 5 Come From?


That’s because the statistic comes from a 2007 study that is based on a survey of just two colleges. Funded by the National Institute of Justice, the "Campus Sexual Assault Study" summarizes the online survey results of male and female students at two large public institutions. Nineteen percent, or about one in five, of the female respondents said they had experienced an attempted or completed sexual assault since starting college.

Defining Sexual Assault
Other critics have focused not so much on the limited scope of the survey, but rather its broad definition of sexual assault, which includes kissing and groping. The study's definition of sexual assault includes both rape -- described as oral, anal, and vaginal penetration -- and sexual battery, which was described as "sexual contact only, such as forced kissing and fondling." Some argue that an unwanted kiss should not be conflated with other kinds of more severe sexual assault or rape.

Laura Dunn, executive director of the sexual assault prevention group SurvJustice, said the fact that some people still balk at the idea of unwanted kissing being considered sexual assault is a result of the criminal justice system frequently focusing on only the worst kinds of sexual violence. It’s caused a particular image of sexual assault to form in people’s heads, she said, and it’s an image that excludes a much broader range of offenses.
“People who deny this issue don’t believe something like an unwanted kiss is harmful, but it is,” Dunn said. “I think there’s an idea in our society that says if a man’s not using a gun or beating a woman, then it’s O.K. to be pushy and aggressive, or to wait until she’s drunk. We really think of some sexual aggression as really not that bad, and that mentality extends to the survivors as well. In these surveys, if you use broader legal terms, you actually get less reporting.”...

Despite the Campus Sexual Assault Study’s shortcomings as a national barometer of the issue, other research has yielded similar findings – though with some caveats. A Centers for Disease Control and Prevention survey found that the rate of women who experience sexual assault is one in five, though that rate is for all women rather than just those attending college. That survey, too, has been questioned for classifying sex while intoxicated to any degree as sexual assault.

Then there’s the statistic that gives John Foubert’s organization its name: one in four. That comes from a Justice Department survey of 4,000 college women in 2006 that found that nearly one-quarter of college women have survived rape or attempted rape in their lifetime, a figure that doesn't account for sexual assaults that are not rape. While the study is of college women, the rape could have occurred at any point in their lives.

Wednesday, December 17, 2014

The Feinstein Report Is Going to Cost Us



Still, notwithstanding the revelation of a few new gory details, this is old news and its disclosure serves no useful purpose — it is just a settling of scores.

“Old news” is not used here in the familiar Clinton/Obama sense of acknowledging a few embarrassing scandal details on Friday night to pave the way for dismissing scandal coverage as stale by Monday morning. The CIA’s interrogation program happened over a decade ago. It was investigated by Justice Department prosecutors for years — and not once but twice. The second time, even Eric Holder, the hyper-politicized, hard-Left attorney general who had promised Obama’s base a “reckoning,” could not help but concede that the case against our intelligence agents should be dropped because the evidence was insufficient to warrant torture prosecutions.

As I have frequently argued here over the years, there is a world of difference between what is couched in political rhetoric as “torture,” a conversation stopper that the Left cavalierly applies to every instance of prisoner abuse, and the federal crime of torture, which has a strict legal definition and is a difficult offense to prove, precisely to ensure that torture is not trivialized. Not surprisingly, then, the fact that the interrogations investigation was terminated has never been regarded as a clean bill of health.

Friday, December 12, 2014

‘Torture’ Thought Experiment



Here is a thought experiment I have been using for many years as we’ve debated this topic. It goes to what Obama says about the intolerably brutal nature of waterboarding, the most coercive of the enhanced techniques that were used.

If you were to take everyone in America who is serving a minor jail sentence of, say, 6 to 18 months, and you were to ask them whether they’d rather serve the rest of their time or be waterboarded in the manner practiced by the CIA post 9/11 (i.e., not in the manner practiced by the Japanese in World War II), how many would choose waterboarding? I am guessing, conservatively, that over 95 percent would choose waterboarding.

Now, if you take the same group of inmates and ask them whether they’d prefer to serve the remainder of their time or be subjected to Obama’s drone program (where we kill rather than capture terrorists, therefore get no intelligence from the people in the best position to provide actionable intelligence, and kill bystanders – including some children – in addition to the target), how many would choose the drone program? I am guessing that it would be . . . zero.

Wednesday, December 10, 2014

Debunking the Debunking of Dynamic Scoring and the Laffer Curve - Daniel J. Mitchell - Townhall Finance Conservative Columnists and Financial Commentary - Page full




He asks nine questions and then provides his version of the right answers. Let’s analyze those answers and see which of his points have merit and which ones fall flat.
But even before we get to his first question, I can’t resist pointing out that he calls dynamic scoring “an accounting gimmick from the 1970s” in his introduction. That is somewhat odd since the JCT and CBO were both completely controlled by Democrats at the time and there was zero effort to do anything other than static scoring.
I suppose Yglesias actually means that dynamic scoring first became an issue in the 1970s as Ronald Reagan (along with Jack Kemp and a few other lawmakers) began to argue that lower marginal tax rates would generate some revenue feedback because of improved incentives to work, save, and invest.
Now let’s look at his nine questions and see if we can debunk his debunking.

Tuesday, December 09, 2014

Edging towards Irrelevance - part 1


Suppose you published a book making a set of very specific claims. Then, after highly critical reviews of your book are published in major scientific journals, an international research team publishes a detailed study in the Proceedings of the National Academy of Sciences (PNAS) on the very system that was the focus of your book. Great news? Well, maybe, except for one little problem. That research paper shows, in great detail, why the claims at the heart of your book were wrong. Do you walk away quietly, hoping no one notices?

Monday, December 08, 2014

New estimates of the effects of the minimum wage | Econbrowser



A large literature has examined the effects on employment of raising the minimum wage, with different researchers arriving at conflicting conclusions. The core reason that economists can’t answer questions like this better is that we usually can’t run controlled experiments. There is always some reason that the legislators chose to raise the minimum wage, often related to prevailing economic conditions. We can never be sure if changes in employment that followed the legislation were the result of those motivating conditions or the result of the legislation itself. For example, if Congress only raises the minimum wage when the economy is on the rebound and all wages are about to rise anyway, we’d usually observe a rise in employment following a hike in the minimum wage that is not caused by the legislation itself. UCSD Ph.D. candidate Michael Wither and his adviser Professor Jeffrey Clemens have some interesting new research that sheds some more light on this question.

Clemens and Wither study the effects of a series of hikes in the federal minimum wage signed into law in May 2007. The first of these raised the minimum wage from $5.15 to $5.85 effective July 2007, the second from $5.85 to $6.55 effective July 2008, and the third from $6.55 to $7.25 in July 2009. They note that such legislation would be expected to affect some states more than others, since many states already had a state-mandated minimum wage that was higher than the federal one. They therefore chose to compare two groups of states, the first of which had a state-mandated minimum wage of $6.55 or higher as of January 2008, with all other states included in the second group. The hope is that this gives us a kind of controlled experiment, with the federal legislation effectively raising the minimum wage for some states but not others.

The hike in the federal minimum wage should also matter more for some workers than others. To allow for this possibility, Clemens and Wither considered two different groups of workers. The first group had an average wage in the 12 months leading up to July 2009 that was below $7.50, while the second group had an average wage over this period between $7.50 and $10.00. We would expect the legislation to matter more for the first group than for the second. The quasi-experiment is thus to compare the change in wages between low-skill and slightly higher-skill individuals between states that were affected by the federal legislation and those that were not.
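
To make the quasi-experimental logic concrete, here is a minimal difference-in-differences sketch. The employment figures below are invented purely for illustration; the actual study works with individual-level panel data and a far richer specification.

```python
# Minimal difference-in-differences sketch of the "bound" vs. "unbound" state
# comparison described above. All numbers are made up for illustration.
import pandas as pd

data = pd.DataFrame([
    # "bound"   = states where the federal hike actually raised the minimum wage
    # "unbound" = states whose own minimum was already $6.55 or higher
    {"group": "bound",   "period": "before", "employment_rate": 0.60},
    {"group": "bound",   "period": "after",  "employment_rate": 0.54},
    {"group": "unbound", "period": "before", "employment_rate": 0.61},
    {"group": "unbound", "period": "after",  "employment_rate": 0.59},
])

table = data.pivot(index="group", columns="period", values="employment_rate")
change = table["after"] - table["before"]        # within-group change over time
did = change["bound"] - change["unbound"]        # difference-in-differences estimate
print(f"Estimated effect on the employment rate: {did:+.2f}")  # -0.04 in this toy example
```

In the paper this comparison is run at the level of individual workers, with the low-wage and slightly higher-wage groups providing an additional within-state contrast.
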
....
The hike in the minimum wage thus appears to have raised the wage for low-skilled workers but made it harder for them to find jobs. Clemens and Wither conclude:
Over the late 2000s, the average effective minimum wage rose by 30 percent across the United States. We estimate that these minimum wage increases reduced the national employment-to-population ratio by 0.7 percentage point.

Saturday, December 06, 2014

Please Don’t Price Low-Skilled Workers Out of Jobs



The economic reasons against raising the minimum wage are too many to rehearse in a short letter. So bear with me as I focus on what I believe is the most important of the reasons: raising the minimum wage will harm the very people who I know you seek to help.

Employers of minimum-wage workers almost all operate in highly competitive industries, such as retail food service, cleaning services, and lawn-care services. These industries have at least three characteristics that make a minimum-wage hike not only especially unlikely to result in higher incomes for low-skilled workers, but actually likely to reduce to zero the incomes of the workers who can least afford to suffer such an economic calamity.

First, profit margins in the industries that use lots of low-skilled workers generally are razor thin. So there’s no way that mandated higher labor costs can be absorbed by these employers – that is, there is no way that the costs of a higher minimum wage will be paid for exclusively, or even largely, by employers.

Second, many of the tasks performed by low-skilled workers are manual and rote and, hence, are especially easy to mechanize. Third, many of these tasks are of such low value to consumers that they are readily avoided if the cost of their performance rises significantly. The incidence of such mechanization and avoidance will increase with the costs of employing human workers. For example, some fast-food restaurants are now experimenting with computers that allow customers to place orders and pay without the assistance of cashiers. And just a few weeks ago I stayed at a hotel in Manhattan that gives extra rewards points to guests who stay for multiple nights and who agree to forgo daily maid service.

The result of this reality is that a government-enforced hike in the cost of employing low-skilled workers will cast many of the lowest-skilled workers indefinitely into unemployment lines. These workers’ pay will fall to $0. Worse, they will be denied opportunities to gain work experience. The ranks of people lacking skills and experience – and hope – will swell.

I know, Dave, that you mean well. I know also that some ‘experts’ assure you that studies exist that contradict the economic analysis that I summarize above. But for every empirical study that denies the negative consequences of minimum-wage legislation, I can show you several top-flight studies that confirm that these negative consequences are real.

So in light of the dueling empirics on this matter, I suggest that common sense combined with human decency counsel against raising the minimum wage. If (as is the case) the empirical evidence drawn from a multi-trillion-dollar, complex, and ever-changing economy doesn’t overwhelmingly contradict the fundamental economic proposition that raising employers’ cost of hiring low-skilled workers will prompt employers to more strictly economize on the number of such workers they hire, then to nevertheless forcibly increase employers’ costs of hiring low-skilled workers is to unjustifiably put in greater peril the most economically vulnerable people in our society.

I would be happy to testify, in Richmond, in much more detail on both the theoretical and empirical case against raising the minimum wage. Such a policy is, despite its fine-sounding name and the excellent intentions of you and many other of its proponents, profoundly if invisibly anti-poor and anti-minority.

Sincerely,
Donald J. Boudreaux
Professor of Economics
and
Martha and Nelson Getchell Chair for the Study of Free Market Capitalism at the Mercatus Center
George Mason University
Fairfax, VA 22030

Libel law and the Rolling Stone / UVA alleged gang rape story - The Washington Post



Several readers have asked: If the Rolling Stone article describing the alleged gang rape of a UVA student at a Phi Kappa Psi fraternity party is materially false, could the Rolling Stone be successfully sued for libel? This is a good illustration of some important libel law principles, so I thought I’d write about it.
The answer turns out to be: most likely, yes. However, various potential plaintiffs might not want to sue.

Wednesday, December 03, 2014

A Story Too Useful to Verify? | National Review Online



Rolling Stone has published an incredible story about a rape at the University of Virginia, sending shock waves around the country.

But when I say the story is incredible, I mean that in the literal, largely abandoned sense of the word. It is not credible — I don’t believe it.

I’m not saying that the author of the story, Sabrina Rubin Erdely, deliberately fabricated facts. Nor do I believe that all of her reporting was flawed. There may be an outrageously callous attitude toward sexual assaults at UVA. Rape, particularly date rape, may be a major problem there. I’ve talked to enough people with connections to the campus to think that part is credible enough. But the central story isn’t about a spontaneous alcohol-fueled case of some creep refusing to take no for an answer (an inexcusable offense in my opinion). It’s an account of a well-planned gang rape by seven fraternity pledges at the direction of two members. If true, lots of people need to go to jail for decades — if.

The basic story is this: Jackie is asked out on a date her freshman year by a junior named “Drew” (not his real name). After dinner, they go to a party at Phi Kappa Psi. Quickly, Drew asks Jackie, “Want to go upstairs, where it’s quieter?”

Jackie is led to a “pitch-black” bedroom. She’s knocked to the floor. A heavy person jumps on top of her. A hand covers her mouth. When she bites it, she’s punched in the face. And for the next three hours she’s brutally raped, with Drew and another upperclassman shouting out instructions to the pledges, referring to Jackie as “it.”

Many alleged details (though Erdely never uses the word “alleged”) aren’t suitable for a family paper. Others are simply hard to believe. The pitch-black darkness doesn’t prevent Jackie from recognizing an attacker or seeing them drink beer. The assault takes place amidst the wreckage of a broken glass table, but the rapists are undeterred by shards of glass.

The most unbelievable dialogue comes later. Sometime after 3 a.m., Jackie leaves the still-raging party, “her face beaten, dress spattered with blood,” without anyone seeing her. Distraught, she calls three friends, Andy, Randall, and Cindy (not their real names) for help. They arrive in “minutes.” One of the male friends says they have to take her to the hospital. Cindy replies, “Is that such a good idea?” adding, “Her reputation will be shot for the next four years.”

Erdely expounds: “Andy seconded the opinion, adding that since he and Randall both planned to rush fraternities, they ought to think this through. The three friends launched into a heated discussion about the social price of reporting Jackie’s rape . . . ”

Really? Neither boy put Jackie’s medical needs above their pledge prospects? What a convenient conversation for an exposé of rape culture — it reads like a script written for a feminist avant-garde theater troupe. Similarly, when Jackie reports what happened to school authorities — again, a brutal, premeditated gang rape by nearly half the pledge class of a prominent fraternity — the dean is described as responding with all of the emotion you’d expect if Jackie requested to change majors. Meanwhile, it was all kept hush-hush until Erdely reported it.

Erdely admits she set out to find a sexual-assault story at an elite school like UVA. She looked at lots of other colleges first, but “none of those schools felt quite right” in the words of a Washington Post profile of Erdely. But UVA, which Erdely describes in Rolling Stone as a school without a thriving “radical feminist culture seeking to upend the patriarchy,” was just right. As Worth magazine editor Richard Bradley noted last week, the whole thing seems like an adventure in confirmation bias.

Initially, Erdely wouldn’t say whether she even knew the names of the alleged rapists. Late Monday, according to the Washington Post, Erdely’s editor said Rolling Stone “verified their existence” by talking to Jackie’s friends, but the magazine couldn’t reach them. Uh huh.

Erdely’s story was reported uncritically for days as a powerful example of the “rape epidemic” that is somehow taking place amidst a 20-year decline in reported rapes. News outlets repeated the claim that one in five college women are sexually assaulted. This bogus statistic comes from “The Campus Sexual Assault Study,” a shoddy online survey of just two universities that counted attempted (forced) kissing and the like as “sexual assault” — and never even asked female respondents about rape.

Erdely’s story may be proven true after a needed investigation, but I suspect it will turn out to have been one of those stories too useful to verify.

The Laffer Curve: Will Tax Cuts Pay for Themselves? : The Freeman : Foundation for Economic Education

As someone who worked for Laffer (in 2006–07), I thought I would clarify some of the misconceptions that leftist progressives and even many libertarians have about supply-side economics and the Reagan years.

In my view, Laffer’s biggest contribution to policy debates was to show the ambiguity in the terms “tax cut” and “tax hike.” In his own writings, Laffer would always distinguish between a tax rate reduction and a drop in tax revenues. If the analyst adopts a static model of the economy, and assumes households and businesses act the same regardless of tax rates, then the two ways of speaking are identical. In debates over government policy, people typically rely on the static approach. For example, they refer to a “tax cut of $x billion” or say that the president’s proposal would “raise taxes by $x billion over 10 years.” Laffer’s insight demonstrates that the world is a far more complicated place.

If people respond to incentives — as they always do — then changes in the tax rate and tax revenues may be quite different, and can even go in opposite directions. For example, when it comes to the personal income tax, did the Reagan administration “cut taxes”? Well, tax rates certainly fell sharply: the top personal income tax rate went from 70 percent in 1980 down to 28 percent by 1988. However, during the Reagan years, tax receipts went up — from $599 billion in fiscal year 1981 to $991 billion in FY 1989 (in historical dollars), an annualized growth rate of 6.5 percent. It’s definitely true that the federal debt mushroomed under Reagan’s tenure. In my view, this is one of the failures of Reagan’s “conservative” and “small government” legacy. However, it is ludicrous that critics deride his “tax cuts for the rich” as the source of the deficits; total federal outlays went from $678 billion in FY 1981 to $1,144 billion in FY 1989, for an annualized growth rate of 6.8 percent. Over the whole period, therefore, total federal tax receipts grew by a cumulative 65 percent while total outlays grew 69 percent. The problem with “Reaganomics” wasn’t that it “starved the beast” of revenue, but rather that the federal government let spending grow faster than tax receipts.
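
The growth-rate arithmetic in that paragraph is easy to verify; the short sketch below just replays the cited figures (FY 1981 to FY 1989, i.e., eight years of growth, in nominal dollars).

```python
# Checking the annualized and cumulative growth figures quoted above.
def annualized_growth(start, end, years):
    return (end / start) ** (1 / years) - 1

receipts_growth = annualized_growth(599, 991, 8)    # federal receipts, $billions
outlays_growth  = annualized_growth(678, 1144, 8)   # federal outlays, $billions

print(f"Receipts: {receipts_growth:.1%} per year, {991 / 599 - 1:.0%} cumulative")
print(f"Outlays:  {outlays_growth:.1%} per year, {1144 / 678 - 1:.0%} cumulative")
# Receipts: 6.5% per year, 65% cumulative
# Outlays:  6.8% per year, 69% cumulative
```
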

Of course, my discussion above relies on what has become known as the “Laffer curve,” which is the source of both confusion and ridicule among economists and pundits alike. The Laffer curve epitomizes the distinction between tax rates and total receipts by plotting them against each other. The two endpoints are easy enough to calculate. At a tax rate of 0 percent, the government will collect $0 in tax receipts. However, at a tax rate of 100 percent, the government will also collect (virtually) $0 in total receipts, because people will either stop generating income, or they will operate in the black market and fail to report their income to the IRS.

Between these extremes, the government will collect positive revenue. If we assume a smooth curve, then there is a tax rate — greater than 0 percent but smaller than 100 percent — that maximizes total tax revenues. This has been dubbed the “Laffer point” by some, but the title may sow seeds of confusion. In neither his scholarly nor his popular writings did Laffer ever argue that this point is optimal. (You can read about it at the Laffer Center website if you don’t believe me.) Rather, his modest point was simply to underscore the trade-offs involved. Clearly, it made no sense to set tax rates above the revenue-maximizing point on the Laffer curve, because then the government would not only cripple economic growth, but also forfeit potential tax revenue.
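
A toy numerical illustration of that shape, with an arbitrary assumed functional form (nothing here is Laffer's own model): revenue is zero at both endpoints and peaks somewhere in between.

```python
# Stylized Laffer curve: revenue(rate) = rate * taxable base, where the base
# shrinks as the rate rises. The functional form and elasticity are arbitrary
# assumptions chosen only to reproduce the curve's qualitative shape.
def revenue(rate, elasticity=2.0):
    taxable_base = (1 - rate) ** elasticity
    return rate * taxable_base

rates = [r / 100 for r in range(0, 101, 5)]
peak_rate = max(rates, key=revenue)
print(f"Revenue at 0%: {revenue(0.0):.2f}, at 100%: {revenue(1.0):.2f}")
print(f"Revenue-maximizing rate in this toy model: {peak_rate:.0%}")
```
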

In other words, the only rhetorical significance of the “Laffer point” would be to convince all sides in the policy debate that surely tax rates should be reduced at least to that level, because doing so would allow citizens to keep more after-tax income while also allowing the government to increase its revenue. To repeat, the purpose for this rhetorical point wasn’t that Laffer himself was holding up “more government spending” as a goal; it was instead to avoid truly absurd rates of taxation that were counterproductive even from the perspective of big-government liberals.

Critics like to deride the Laffer curve as “voodoo economics” by pointing to counterexamples, say of tax rate reductions that didn’t increase total revenue, or by pointing to tax rate hikes that brought in more revenue. But these possibilities were contained in the original Laffer curve itself. Specifically, if the tax rate starts below the revenue-maximizing point, then a tax rate reduction will shrink receipts, while a tax rate hike will increase receipts. Laffer never drew his curve with the peak sitting at a tax rate of just 1 percent, so how in the world did critics get the idea that Laffer thought “tax cuts always pay for themselves”? Did the critics think Laffer couldn’t read his own curve?

Now what Laffer did stress — and I can speak with authority here, because at his firm I had occasion to read plenty of his old papers going back to the early 1980s — is that a tax rate reduction would have a smaller impact on tax receipts than a “static” scoring analysis would indicate. So, for example, if California cut its marginal personal income tax rates across the board by one percentage point, the drop in total tax receipts would be smaller than one percent. The increase in economic activity would not only increase the base of the personal income tax, but it would also increase receipts from sales taxes, property taxes, and so on. Depending on how onerous the initial tax rate was, it was even theoretically possible that the drop in revenue would be negative — meaning that total tax receipts would actually increase — but that was never a blanket prediction of the Laffer approach.
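
Here is a small sketch of the static-versus-dynamic distinction described above, with every number invented for illustration: the rate cut shrinks revenue in both cases, but by less once the assumed behavioral response is included.

```python
# Static vs. dynamic scoring of a one-percentage-point rate cut (toy numbers).
initial_rate   = 0.10       # assumed initial tax rate
taxable_income = 1_000.0    # assumed initial tax base
base_response  = 0.05       # assumed: base grows 5% in response to the lower rate

old_revenue     = initial_rate * taxable_income                                  # 100.0
static_revenue  = (initial_rate - 0.01) * taxable_income                         #  90.0
dynamic_revenue = (initial_rate - 0.01) * taxable_income * (1 + base_response)   #  94.5

print(f"Static revenue loss:  {old_revenue - static_revenue:.1f}")   # 10.0
print(f"Dynamic revenue loss: {old_revenue - dynamic_revenue:.1f}")  #  5.5
```
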

Let me close with one last analytical twist I learned while working for Laffer. When it comes to assessing the incentives from a tax rate change, you need to look at the after-tax return on the margin from additional activity. For example, at first it might seem as if a tax rate hike from 10 percent to 20 percent is a bigger deal than a hike from 90 percent to 95 percent, because the first hike is a 10-percentage-point increase and a doubling of the rate, while the second hike is only a 5-percentage-point increase and a much smaller proportional jump in the rate. Yet, if someone is considering investing in a project that will pay $1,000, in the first scenario his after-tax return goes from $900 to $800, while in the second scenario it goes from $100 down to $50. The measured rate of after-tax return has been cut in half in the second scenario, while it only fell about 11 percent in the first scenario.
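
The arithmetic in that example, spelled out (the $1,000 payoff is the author's own illustration; the percentage drops are computed from it):

```python
# Marginal after-tax return on a $1,000 payoff under the two rate hikes above.
payoff = 1_000

def after_tax(rate):
    return payoff * (1 - rate)

for old_rate, new_rate in [(0.10, 0.20), (0.90, 0.95)]:
    before, after = after_tax(old_rate), after_tax(new_rate)
    drop = (before - after) / before
    print(f"{old_rate:.0%} -> {new_rate:.0%}: after-tax return "
          f"${before:.0f} -> ${after:.0f} ({drop:.0%} drop)")
# 10% -> 20%: after-tax return $900 -> $800 (11% drop)
# 90% -> 95%: after-tax return $100 -> $50 (50% drop)
```
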

The Reagan years were certainly not a textbook model of small government and fiscal conservatism, but the derision of the theoretical apparatus of supply-side economics — and of the Laffer curve in particular — is misplaced. The point here was and is a simple one, yet to this day it is routinely ignored in policy debates.

Monday, December 01, 2014

Don’t Hate Me ‘Cause I’m Human — A blast from the past post Dec. 2010 | According To Hoyt



There’s this disturbing trend I’ve observed recently – okay, the last thirty years.

It’s part of what I was talking about yesterday, in a way. For a book to be considered serious, or introspective or relevant, it has to attack the past or western culture or civilization or tech or… humanity.

Not that there is anything wrong with attacking these, mind, to an extent. And they used to be shockers and a very good way to attract attention immediately. And I’m not saying the mindlessly chauvinistic “our people, right or wrong” was much better. For instance, the cowboy-and-Indian trope became really tired after a while and when my brother gave me a book called – I think – (in Portuguese translation) The Mace of War, detailing all the injustices against Native Americans it was a mind-altering experience. Literally. And very worth it. [Though I later found it was also full of politically correct made up stuff like the smallpox blankets. In fact the book might have been of the school of false-Amerindian “History” that gave us what’s his face at Colorado College. OTOH it was a good way to make me think outside the mindless trope of afternoon serials — note from 2014 Sarah.]

I’m just saying that these days, by default, what you hear is against whatever the dominant culture is.

I first realized this when I was studying for my final exam in American culture in college. The book changed opinions and contradicted itself but it was ALWAYS against the winners and against whatever ended up being the status quo. So, the book was against the North of the US, because the North… won. Even though it had before been against slavery. It was very much against modern US and raged against… embalming practices for three or four pages. (Because they divorce us from the Earth. Just SILLY stuff.)

And then I started noting this trend in everything, including fiction. Think about it. Who is to blame in any drama: the US; the successful; the British; the Europeans; the… humans.

Years ago when Discovery Channel put out its “future evolution” series, my kids and I were glued to the screen. We’re the family for whom the Denver Museum of Nature And Science is home away from home, the place we will visit if we have an afternoon free, the place where we have watched lectures and movies. I refer to it as “molesting dinos” and it’s usually my way to celebrate finishing a book.

So we were glued to the TV. Except that after the beginning, I realized the way it was going, and I started predicting it. Instead of taking a “what might humans become” approach, the people who wrote this went down a path where first humans and then everything VAGUELY related to humans became successively extinct, till the only warm-blooded survivor was a bird, and then that too became extinct. In the end, tree-dwelling SQUIDS inherited the Earth.

Yes, you DID read that right. Tree. Dwelling. SQUIDS.

The contortions were capricious and often absurd, but you could predict where it was going.

It’s been a while since we had cable, but I understand there was a very popular series called “Life After US” about what would happen to the works of humans if we were suddenly extinct. And people watched it, fascinated and – from the tones of posts about it – a little wistful.

This is when you must step back and go “What is wrong with us?” “Is this a sickness of the soul?”

The answer? Yes and no.

Part of it, of course, is wanting to shock, wanting to revolutionize, wanting to be innovative… in safe ways – in (dare we say it?) politically correct ways. It’s easy and approved of to attack: males, America, western civ, humans.

People who select works at publishers and studios and all that are often liberal arts graduates and they come from this curious world where they still think the establishment is circa 1950s and that they’re telling something new and wonderful.

Part of it is, of course, that we do see problems in our own culture, in our own society, in our own species. Of course we do. We are an introspective culture. We examine our consciences, we find ourselves lacking, we try to improve. This is, in general a good thing – though perhaps a little perspective is also in order.

Part of it is politeness/sensitivity to other cultures, mingled with the consciousness our ancestors were often wrong. We’ve been taught the crimes of colonizers in various lands and most of those colonizers (and colonized, at least for most of us) were our ancestors. We’re conscious we’re big and others are smaller. It’s a peculiar form of noblesse oblige. We don’t want to trample others by pointing out faults in other cultures or other species. I understand this, because I learned to drive in my thirties and lived in a mountain town with lots of foot traffic downtown. I was excruciatingly careful driving through there, because I could crush a pedestrian and not notice. This is why we tend to turn our flagellation upon ourselves.

And part of it is sicker/darker. I notice this tendency every time we discuss a great figure of the past, from George Washington to Heinlein – as different as they are. I call it “counting coup.” George Washington? Well, he was a slave owner. And he had wooden teeth. And Lincoln? Well, he was very ill, and besides, he was probably gay and in the closet. Heinlein? Despite all his efforts at including – for his time – minorities and giving women starring roles, he must have been a closet racist and sexist, donchaknow? Because he doesn’t fit OUR superior notions of inclusiveness.

What is going on here – besides tearing at our own past, and thereby continuing the self-flagellation – is being able to prove we are “superior” to these high achievers. We might do nothing and achieve nothing, but we are superior beings because we’re more moral than they are. Individually, none of these trends is really bad – or at least not for those of us who grew up with the opposite tradition.

Oh, the constant and predictable chest-beating becomes boring. At least it does for me. Maybe it doesn’t for other people?

But think of (grin) the children. They have no perspective. All they hear is how their country, their culture, their SPECIES is evil. How things would be so much better without us… How things would – ultimately – be much better if… THEY hadn’t been born.

It’s not healthy. It’s vaguely disgusting. And the best it can do is engender the MOTHER of all backlashes and bring about a cultural chauvinism the likes of which you’ve never seen. The worst… well, one of the other cultures we don’t criticize because they’re small and we’re big becomes the norm.

And before you cheer them on, let me put this in perspective: Western civ has committed crimes. ALL human cultures throughout history have committed crimes. Slavery? Since the dawn of time. Exploitation? Since the dawn of time. Murder? War? Genocide? Yep, and yep, and yep. And many of those cultures STILL do all of those things and don’t feel in the slightest bit guilty, mostly because we handily and frequently blame OURSELVES for their behavior and they get our books, our TV series and our movies.

Such as it is, the West has brought the greatest freedom, prosperity and security to the greatest population.

Yes, there were crimes committed, but a lot of them were the result of a clash of world views – tribalism met the state. Look, it’s not that Native Americans or Africans lived in a state of innocence and harmony with nature. If you believe that, you need to study history and put down Jean-Jacques Rousseau. And get out of your mom’s basement. And take the Star Trek posters off the wall. And the Avatar poster, too, while you’re at it.

To the extent the native peoples were innocent and helpless, it was because of their mental furniture. What gave colonizers the edge was not their weapons or civilization (Oh, come on, back then, there wasn’t that much of a distance.) It was their mental furniture. To wit, they had overcome tribalism and organized on a large scale. Most of the colonized (excepting some small empires) hadn’t. So they would attack in ways that worked in tribal warfare: exterminate a village or an outpost. And the reaction of the colonizers (who, by the way, also didn’t understand the difference in mental furniture and therefore thought this made the native peoples “bestial” or “evil”) was to exterminate all of a tribe or a federation of tribes. And it worked because westerners were united as a MUCH larger group. Which made them stronger. Western civilization started overcoming tribalism with the Romans. That was the real innovation.

If you think that we’re rich because of those acts, you must study economics. It doesn’t work that way. If anything those acts made all of us worse off. We’re way past any wealth we could plunder off others. We’ve created wealth. The whole world lives better than it did five hundred years ago.

And if you’re going to tell me the fact that all humans are flawed proves that we’re a bad species, you’ll have to tell me: As opposed to what? Dolphins are serial rapists. Chimps commit murder. Rats… Every species we examine has our sins, but none of our redeeming qualities.

Heinlein said it was important to be FOR humanity because we’re human. Beavers might be admirable, but we’re not beavers. He was right. But beyond all that, we’re the only species that tries self-perfecting. We exist – as Pratchett said – at the place where rising ape meets falling angel, but as far as I know, we’re the only species reaching upward. (Of course, we wouldn’t know if there are others and again, we have to assume we are it. The others have flaws too.)

We are part of the world and in it. To love the other animals of the Earth – or the hypothetical alien – and hate us is strange. Are we not animals? Are we not of the Earth? And who the heck can compete with sentients who exist only in the story teller’s imagination?

By all means, let’s protect the weaker. Let’s shelter the little. But let’s not beat ourselves because we’re bigger and stronger. Let’s USE our powers for good instead.

Am I saying that you shouldn’t tell these stories then?

No, I’m not. I would never repress anyone’s right to create, or anyone’s opinion. But I’m asking you to think. I’m asking you to pause and go “The west is bad… as opposed to? Humans are bad… as opposed to?” And tell your kids that, ask them those questions.

And then, perhaps, every now and then, try to imagine a story from the contrary view point. Just to wake things up. And to keep others thinking.

Because six decades of hating our own history, culture and species is enough.

Sunday, November 30, 2014

The Microaggression Farce by Heather Mac Donald, City Journal Autumn 2014

In November 2013, two dozen graduate students at the University of California at Los Angeles marched into an education class and announced a protest against its “hostile and unsafe climate for Scholars of Color.” The students had been victimized, they claimed, by racial “microaggression”—the hottest concept on campuses today, used to call out racism otherwise invisible to the naked eye. UCLA’s response to the sit-in was a travesty of justice. The education school sacrificed the reputation of a beloved and respected professor in order to placate a group of ignorant students making a specious charge of racism.

The pattern would repeat itself twice more at UCLA that fall: students would allege that they were victimized by racism, and the administration, rather than correcting the students’ misapprehension, penitently acceded to it. Colleges across the country behave no differently. As student claims of racial and gender mistreatment grow ever more unmoored from reality, campus grown-ups have abdicated their responsibility to cultivate an adult sense of perspective and common sense in their students. Instead, they are creating what tort law calls “eggshell plaintiffs”—preternaturally fragile individuals injured by the slightest collisions with life. The consequences will affect us for years to come.

Tuesday, November 25, 2014

Study That


Here’s a line from this report in the New York Times; I add emphasis to that part of the line that is most germane to this blog post:

Studies show that simply raising the price of an alcoholic beverage by 10 percent reduces alcohol consumption by 7 percent, suggesting that higher taxes on alcohol could make a significant dent in excessive drinking.

Is anyone surprised by this finding?  I’m not.  I have no doubt that any increase in the cost to consumers of acquiring alcoholic beverages reduces the consumption of those beverages from what that consumption would have been had the cost of alcoholic beverages not risen.  I’m much less certain about the seven-percent finding.  It could be accurate for the time period that was studied; this figure sounds plausible enough.  But the actual figure might be higher or it might be lower.  That question – the question of the magnitude of the decline in alcohol consumption – isn’t of interest to me here.  I am, however, quite certain about the direction of the change in alcohol consumption that is caused by a change in the consumers’ costs of acquiring alcohol: higher cost, lower quantity demanded.
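
For what it’s worth, the cited finding corresponds to a price elasticity of demand of about -0.7; here is a back-of-the-envelope application (the 5 percent price rise is a made-up scenario, not from the study):

```python
# Implied price elasticity of demand from the figures quoted in the Times piece,
# applied to a hypothetical, fully passed-through price increase.
elasticity = -0.07 / 0.10          # 7% consumption drop per 10% price rise = -0.7
assumed_price_increase = 0.05      # hypothetical 5% retail price rise

predicted_change = elasticity * assumed_price_increase
print(f"Implied elasticity: {elasticity:.1f}")
print(f"Predicted change in consumption: {predicted_change:.1%}")  # about -3.5%
```
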

I’m also quite certain that intrepid-enough researchers could produce honest and impressive-looking empirical studies that find that raising the price of alcohol by 10 percent does not result in any reduction in the consumption of alcohol.  I’m pretty sure, however, that no one has done such studies.  The reason is that no one – at least no respectable economist – doubts for a moment that the higher the price of alcohol, ceteris paribus, the lower is the quantity demanded of some well-specified unit of alcohol.  Therefore, when study after study turns up the expected relationship between price and quantity demanded for alcohol, it makes sense.  No one is inspired to set about to add different control variables, to segregate the data into finer categories, or to otherwise fiddle with the empirical studies in order to see if it’s possible to find empirical evidence in favor of the proposition that, at least for small increases in the retail price of alcohol, higher alcohol prices do not decrease the quantities of alcohol demanded.

And if some researchers did undertake such a project, when they would then try to publish their findings in a scientific journal their paper(s) would be rejected.  The referees of the paper would, quite correctly no doubt, point out any number of flaws in the specification of the model, the manner of collecting data, the classification of the data, and the interpretation of the empirical findings.  The findings would just not make sense absent some paradigm-shifting theoretical breakthrough to explain such findings.

And yet, such studies are frequently done for one item: low-skilled labor.  While many studies continue to find – and confirm – what is predicted by foundational economics (namely, that – ceteris paribus – the higher the minimum wage, the lower is the quantity of low-skilled labor hours demanded by employers), many other studies find the opposite.

The reason for the flood of the ‘pro-minimum-wage’ studies is not, I firmly believe, that anyone intentionally sets out to disprove the standard, ‘anti-minimum-wage’ theory and findings.  Rather, the reason is that there are people whose priors have them invested in believing that somehow, for some reason, low-skilled workers are the exception to the rule that is at the very core of economic science – that rule being the law of demand.  Believing, pre-scientifically, ‘in’ the goodness of the minimum wage, scholars are led to explore the data intrepidly until they find what they know must be true – namely, that at least small hikes in the minimum wage do not reduce the quantity demanded of low-skilled labor.

And the enormous complexity of economic reality renders finding such confirmation in appropriately parsed, diced, and sliced data easy enough to do.  Again, one could produce such findings for the effect of alcohol taxes on alcohol consumption (or the effect of carbon taxes on carbon emissions, or the effect of penalties for rape on the frequency of rape, or … you name it).  That no one does these other studies is explained by the fact that too few people are invested in believing that the law of demand does not apply in these other situations.  There are, however, many people – including (I believe sadly) some economists – who are invested in believing that minimum-wage legislation simply must help the people it is ostensibly meant to help.  And, again, finding empirical confirmation for such a prior in this incredibly complex and ever-changing world of ours is really rather simple.

Friday, November 21, 2014

Net Neutrality or Government Brutality? : The Freeman : Foundation for Economic Education


Over the past six years or so, network neutrality, or “net neutrality,” has risen from an obscure techie buzz phrase to a bona fide political issue and rallying cry for some strange political bedfellows. The current debate comprises competing views on economics, regulation, free speech, property rights, and even the supposed rights of individuals and businesses to a certain Internet experience. Would a net-neutrality mandate protect the rights of some or merely trample the fundamental rights of others and stifle competition and innovation?
Much of the perplexity surrounding net neutrality stems from ambiguity and confusion over the very definition of the term. The concept concerns how information is transmitted over the Internet. Data are moved in “packets” through networks of computers and routers. Currently, these data are processed with little regard to what kind of information they are—be they important medical data, streaming video, or spam.
Generally speaking, net neutrality is the notion that all content, applications, and services should be treated the same by Internet service providers (ISPs). Net-neutrality proponents fear that network operators might someday discriminate against certain types of information by charging fees to particular content providers in exchange for guarantees of higher-quality service or by blocking some content completely.
Such a proposal may sound innocuous enough, but the problem is that the proliferation of things like streaming video and online gaming are taking up increasingly large amounts of bandwidth and are sensitive to delay. This Internet congestion can lead to the degradation of service for all Internet users. Slight delays may hardly be noticeable in e-mail or web-browser applications, but can be more serious for video-content providers or Voice over Internet Protocol (VoIP), which allows people to make phone calls over the Internet.
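
To make the prioritization idea concrete, here is a minimal sketch of the kind of class-based packet scheduling that a strict first-come-first-served neutrality rule would forbid. The traffic classes and priorities are purely illustrative and not drawn from any actual ISP’s practice.

```python
# Toy class-based packet scheduler: latency-sensitive traffic (VoIP, video)
# is transmitted ahead of bulk transfers. Classes and priorities are invented.
import heapq
import itertools

PRIORITY = {"voip": 0, "video": 1, "web": 2, "bulk": 3}   # lower number = sent first
arrival = itertools.count()                               # preserves FIFO order within a class
queue = []

def enqueue(packet_id, traffic_class):
    heapq.heappush(queue, (PRIORITY[traffic_class], next(arrival), packet_id))

for packet_id, cls in [(1, "bulk"), (2, "voip"), (3, "web"), (4, "video"), (5, "voip")]:
    enqueue(packet_id, cls)

while queue:
    _, _, packet_id = heapq.heappop(queue)
    print(f"transmit packet {packet_id}")   # order: 2, 5, 4, 3, 1
```

Under the strictest definition of neutrality discussed below, a provider would instead transmit packets 1 through 5 in arrival order, regardless of how delay-sensitive each one is.
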
Then there is the question whether the government has any right to tell ISPs how to manage their own networks and pricing structures, which will be discussed in some detail below.
Adding to the confusion is the fact that net-neutrality advocates disagree over just how much control network operators should be allowed to maintain. Some believe that neutrality means data packets must be handled on a first-come-first-served basis without exception, while others would permit the existence of differing quality-of-service levels as long as there are no special fees (no price discrimination) for higher service levels. Still others would allow prioritization of data and differing quality levels (along with tiered pricing), provided that there were no exclusivity in service contracts. Or, in the words of Sir Tim Berners-Lee, developer of the World Wide Web, “We pay for connection to the Net as though it were a cloud which magically delivers our packets. We may pay for a higher or a lower quality of service. We may pay for a service which has the characteristics of being good for video, or quality audio. But we each pay to connect to the Net, but no one can pay for exclusive access to me.”
Since the most restrictive definition is the one that is typically embodied in legislation and that raises the most serious issues, it is the one on which this article will focus.
The Birth of “Net Neutrality”
The idea of network neutrality originated during the late 1990s as some feared potential threats to the “end-to-end” nature of the Internet, although some trace the concept back to the age of the telegram, when Congress passed the Pacific Telegraph Act of 1860. The act subsidized a transcontinental telegraph line and stated that “messages received from any individual, company, or corporation, or from any telegraph lines connecting with this line at either of its termini, shall be impartially transmitted in the order of their reception, excepting that the dispatches of the government shall have priority.” The term “network neutrality” was coined by Columbia Law School professor Tim Wu in his 2002 paper, “Network Neutrality, Broadband Discrimination,” in which he promotes a “network anti-discrimination regime.”
There have been several efforts to pass net-neutrality laws at the federal and state levels, but they have thus far been rebuffed. That may change, however, particularly if Senator Barack Obama wins the presidential election in November. He has expressed support for net neutrality, dating back to a 2006 bill (S 2817). The prospect of imposing government regulation on what is essentially a free market might lead one to believe that Democrats are more likely to support net-neutrality mandates than Republicans (notwithstanding the fact that the GOP frequently acts in contradiction to its pro-market rhetoric), and, indeed, there is some truth to this.
Generally speaking, most members of the political left have tended to favor net-neutrality legislation and most on the right have tended to oppose it, but there are notable exceptions. Organizations like MoveOn.org, the American Civil Liberties Union, and a number of liberal bloggers have come out in favor of such legislation, for example, but former Clinton White House press secretary Mike McCurry is co-chairman of the Hands Off the Internet Coalition, which opposes it. On the other hand, most Republicans oppose net neutrality, but conservative groups such as the Christian Coalition and Gun Owners of America support it.
Even the most important innovators of the Internet are divided on the issue. Vinton Cerf, a co-inventor of the Internet Protocol (IP) and vice president and “Chief Internet Evangelist” for Google, is for it. Bob Kahn, inventor of the Transmission Control Protocol (TCP), which provides reliable delivery of a stream of bytes over the Internet, and David Farber, a computer science and public-policy professor at Carnegie Mellon University who is known as the “grandfather of the Internet,” are against it.
And then there are the corporate interests. Large web-content providers such as Google, Yahoo!, eBay, and YouTube support net-neutrality mandates because they fear the prospect of having to pay higher prices to ensure the quality of their content, while cable and telecommunications companies such as AT&T, Verizon, Comcast, and Cox Cable oppose it because they feel they should have the freedom to operate their own networks and set their own prices without interference from the government.
In 2004 then-Federal Communications Commission (FCC) Chairman Michael Powell outlined a set of nondiscrimination principles. Powell argued that the broadband industry should offer consumers freedom to access content, run applications, attach devices, and obtain service-plan information.
When AT&T and BellSouth merged in 2006, the FCC attached a net-neutrality provision as a condition of its approval. Under the measure the company agreed “not to provide or to sell to Internet content, application, or service providers, including those affiliated with AT&T/BellSouth, any service that privileges, degrades or prioritizes any packet transmitted over AT&T/BellSouth’s wireline broadband Internet access service based on its source, ownership or destination.” AT&T agreed to the concession in order to break a 2–2 deadlock among the commissioners that had held up the merger for several months. The provision was narrowly tailored to AT&T, however, and included a 30-month expiration date. Moreover, current FCC chairman Kevin Martin and fellow Republican commissioner Deborah Taylor Tate warned that the measure “does not mean that the commission has adopted an additional Net neutrality principle. We continue to believe such a requirement is not necessary and may impede infrastructure deployment,” they wrote in a statement. Martin and Tate added, “Thus, although AT&T may make a voluntary business decision, it cannot dictate or bind government policy.”
Proposed Legislative “Solutions”
S 2817 was just one of many attempts to codify net-neutrality regulations in recent years. An attempt to attach a neutrality provision to the purportedly landmark 2006 telecommunications bill (S 2686) failed on an 11–11 committee vote, and S 2686 ended up failing in the Senate anyway. The Communications Opportunity, Promotion and Enhancement (COPE) Act of 2006 (HR 5252) contained neutrality provisions, which were stripped out before the bill ultimately died, as did the Internet Non-Discrimination Act of 2006 (S 2360) and the Internet Freedom and Nondiscrimination Act of 2006 (HR 5417). The Network Neutrality Act of 2006 (HR 5273) was defeated in committee. The Internet Freedom Preservation Act of 2008 (HR 5353), which would enforce the principles of the FCC’s AT&T–BellSouth merger deal on all broadband providers, is now pending, as are some older bills that have been reintroduced.
As with the neutrality debate in general, there are divisions over policy within the federal government. While Congress and perhaps the FCC seem to be moving toward increased government regulation, the Federal Trade Commission (FTC) has opposed new regulation. As far back as 2002 the FTC noted the rapidly evolving nature of the high-speed Internet service market and argued that “broadband services should exist in a minimal regulatory environment that promotes investment and innovation in a competitive market.” More recently, a 2007 FTC report reiterated its position and asserted that since no “significant market failure or demonstrated consumer harm from conduct by broadband providers” could be found, net-neutrality regulations “may well have adverse effects on consumer welfare, despite the good intentions of their proponents.”
The FTC’s conclusion is critical because one of the main justifications of net-neutrality laws is to prevent harm to consumers. That no harm has been found has led neutrality critics to dub the notion a “solution in search of a problem.”
To date, only a couple of cases of what could be called net-neutrality incidents have occurred. Madison River Communications blocked a web-based application when it prevented customers from using Vonage’s VoIP service. The FCC stepped in and ordered Madison River to stop the blocking and make a $15,000 payment to the federal government. In another case, America Online was accused of blocking e-mail to the website dearAOL.com, which was established to protest an AOL plan to charge users a higher price for a feature to block e-mail from unauthorized senders. AOL maintained that the blocking was unintentional and assured that access was restored after customers complained. No government involvement was necessary. Finally, there was an allegation that Comcast was blocking Internet traffic to certain peer-to-peer (file-sharing) websites that were consuming large amounts of bandwidth, but it was later revealed that Comcast was merely slowing down certain peer-to-peer uploads by reducing the number of simultaneous connections that users could have to the site.
Net-neutrality proponents contend that they want to use regulation to increase competition and innovation, but their remedies would have the opposite effect. The growth in demand for bandwidth-intensive applications, such as streaming video, multi-player online gaming, and telemedicine, will require vast capital investments. Broadband providers will not invest in such projects, however, if there is not a good chance they will be able to recoup their costs and turn a profit. This is not unlike how cable companies currently rely on richer customers paying for premium services so that they can invest in less-profitable ventures, such as providing infrastructure for services to rural areas. As Randolph J. May, president of the Free State Foundation, explained in testimony before the New York City Committee on Technology in Government on a proposed net-neutrality resolution,
If broadband providers are not allowed to differentiate their services because of regulatory straightjackets, their ability to compete in the marketplace will be compromised. Lacking the flexibility to find innovative new ways to respond to customer demand, they will lack incentives to invest in new network facilities and improve applications. This lack of new investment, in turn, will have the perverse effect of dampening competition among existing and potential broadband operators.
Net-neutrality advocates also tend to underestimate the amount of competition that already exists in the market for high-speed Internet services. There are multiple companies providing these services using multiple technologies, including wireline, cable, terrestrial wireless, and satellite. Wireless broadband services, in particular, have come to provide a strong source of competition. Recent FCC data show that wireless has gone from having no subscribers in the beginning of 2005 to 35 million subscribers and a 35 percent share of the market for high-speed lines by June 2007. Moreover, as of June 2006 there were two or more broadband providers in 92 percent of the nation’s zip codes, and four or more providers in 87 percent of the nation’s zip codes. With all of this competition, it simply would not be in the companies’ interests to degrade services to consumers because doing so would cause them to lose business to their more innovative rivals.
The costs of stifling competition and innovation through net-neutrality regulations would be significant. A May 2007 American Consumer Institute study estimated that regulation would cost consumers $69 billion over ten years. According to study author Stephen Pociask, “Despite proponents’ best intentions, net neutrality proposals would be a twofold problem for consumers. Innovations that require a guaranteed level of service won’t come to market, and consumers would have to pay more for the services they receive.”
The Usefulness of Price Discrimination
Price discrimination is another concern of neutrality advocates. Despite the negative connotation associated with the word “discrimination,” price discrimination is a common and efficient way of allocating scarce resources and satisfying consumer demand. Children and seniors get discounted ticket prices at movie theaters; people pay different prices for different seats at concerts and sporting events; and some toll roads charge different prices depending on the time of day and the resulting levels of traffic congestion. In response to an FCC Notice of Inquiry regarding broadband practices, the Department of Justice’s Antitrust Division (of all things!) heralded the value of price discrimination in a September 2007 statement, noting the example of the U.S. Postal Service: “The U.S. Postal Service, for example, allows consumers to send packages with a variety of different delivery guarantees and speeds, from bulk mail to overnight delivery. These differentiated services respond to market demand and expand consumer choice.” The Department concluded, “Whether or not the same type of differentiated products and services will develop on the Internet should be determined by market forces, not regulatory intervention.”
In other words, the government should simply get out of the way and allow the market to work. Government should not try to pick winners and losers.
When neutrality proponents say that people have a right to “neutral” provision of information over the Internet, they are really saying that the public has some sort of right over the private property of the companies that provide access to that information. Some have tried to justify this argument by claiming that the Internet was designed to be neutral, but it is freedom from government restrictions that has encouraged innovation and allowed the Internet to flourish. Or as my Reason Foundation colleague Steven Titch has put it,
The legislated mandate for neutrality . . . is based on the supposition that neutrality was a founding doctrine of the Internet. That couldn’t be more wrong. The Internet and its commercial component, the World Wide Web, are what they are today due to the simple principle of free exchange through voluntary agreement. Engineering concepts such as “network neutrality” or meaningless slogans like “information should be free” had nothing to do with it.
Broadband providers have invested large sums of money in their networks and should be free to manage them as they see fit. Customers who feel their needs are not being met are free to switch to other providers. Freedom of contract and voluntary exchange are the cornerstones of a free-market economy. Supporters of net neutrality fear that without regulation, a relatively small number of companies will become the “gatekeepers” of the Internet, but the alternative is far worse: a monopolistic government gatekeeper whose incentives are to cater to political power, not consumer desires.
In addition to violating free-market ideals, net neutrality might also violate constitutional rights, specifically, the Takings Clause of the Fifth Amendment. As the Free State Foundation’s May explains,
[T]he de facto imposition of common carrier regulation through net neutrality mandates raises serious Fifth Amendment property rights issues under the Takings Clause. This is because the mandate to carry traffic that ISPs might otherwise choose not to carry, or to carry traffic at faster speeds than the service providers otherwise might prefer, or to refrain from charging more to those who impose greater capacity demands, is not costless. . . . Government mandates that impose such costs, but which, at the same time, restrict ISPs’ freedom to recover such costs, implicate the ISP’s property rights.
Net neutrality also brings up First Amendment concerns on both sides of the debate. Some grassroots groups, such as the Christian Coalition and Gun Owners of America, fear that broadband providers might someday decide to block access to their web content for ideological reasons. This, they argue, would constitute a violation of their free-speech rights.
This analysis is erroneous for a couple of reasons. First, the Constitution prohibits the government, not private parties, from restricting speech. As Brian Costin of the Heartland Institute writes, “[F]ree speech rights for an individual or group end where another’s property rights begin.” Second, a government regulation such as net neutrality that forced a private party to provide access to forms of speech with which it disagrees would violate the free-speech rights of the broadband provider. As noted previously, ISPs have an economic incentive not to block access to content, but they would be within their rights to do so if they saw fit.
The Right Tool for the Job
While network-neutrality advocates claim to want to ensure fairness and competition, the government regulation they propose will result in anything but those things. In the free market, competition ensures that customers receive the services they demand. Government control, by contrast, ensures that they receive whatever services the politicians and bureaucrats in power at the time deem appropriate (not to mention the inevitable and endless litigation about who could offer what services when and for how much).
The concept of the “tiered” Internet is not something to be feared. On the contrary, it could enhance services for broadband customers by giving ISPs the revenue to invest in accommodating the growing demand for bandwidth-intensive and delay-sensitive applications and to make further improvements in data delivery. It could also increase fairness by ensuring that the content providers responsible for the most Internet congestion pay the higher costs of assuring a high quality of service for Internet users. Choking off this potential revenue stream through net-neutrality mandates will only ensure that instead of an Internet with regular lanes and “fast lanes,” all consumers will be stuck in the slow lane.

Thursday, November 20, 2014

The coming war between sex-positive feminism and affirmative consent | WashingtonExaminer.com

The coming war between sex-positive feminism and affirmative consent | WashingtonExaminer.com

Ever since the U.S. Department of Education’s Office for Civil Rights released its “Dear Colleague” letter in 2011, colleges and universities across the country — and the entire state of California — have been adopting policies that define consent so broadly as to be meaningless and nearly impossible to prove.

Now, under the “affirmative consent” or “yes means yes” standard, consent must be active and ongoing. Competitive Enterprise Institute counsel Hans Bader, a critic of these policies, has argued that they constitute “dry legal contracts” requiring every step of a sexual encounter to receive a “yes” or “no” response.

These new policies do not consider silence or lack of resistance to be a sign of consent, and consent is revoked if an accuser was intoxicated. But intoxication is never defined. Is it the same level of intoxication police use in a DUI arrest? If so, where can students get Breathalyzers to test their dates? And if there is no legal level of intoxication, how can a college or university accept a woman’s word that she was too intoxicated to give consent?

These new policies contradict the idea that women should be free to explore their sexuality. It’s hard to reconcile the claim that women shouldn’t be judged for engaging in drunken sex with a standard under which they cannot legally give consent if they have consumed even a little alcohol. How can the same act simultaneously be a manifestation of feminine sexual liberation and an example of the heinous crime of rape? Feminists can’t have it both ways.

Now the message is not just that women can have as much guilt-free sex with as many partners as they want, but also that if they do feel guilty about any sexual encounter, it must have been rape.

I don’t see anything wrong with women enjoying sex as much as men do. But just as men regret some sexual encounters, so do women, and regret does not mean they were raped.

The new definition of rape and sexual assault — that women are too weak to handle alcohol and therefore aren’t responsible for their decisions — flies in the face of those supposedly fighting for equality. Women should be free to get blitzed at parties and hook up with whoever they want — but just as men aren’t excused from being drunk, neither should women get a pass.

I know many will call this victim-blaming, but I’m not talking about women who say “no” or pass out and are raped. I’m talking about people who get drunk, consent to sex, and then wish they hadn’t in the morning.

The original sex-positive feminists opposed any kind of limit on consensual sexual activity. That belief is now being turned on its head by people claiming that consent is not consent if alcohol is involved, and that schools and government must redefine sex.

This new view of alcohol-fueled sex makes no sense in a truly equal world, as men have as much right as women to claim they were too drunk to consent to sex. The contradiction is highlighted in cases of same-sex sexual assault, where the patriarchy can’t be blamed.

For heterosexual men, the only rational response to this new contradiction is never to sleep with a woman who has had even one drink (and to be wary of bad breakups or “friends with benefits”). That seems to be what the current crop of feminists wants, but if men stop sleeping with women who are under the influence, doesn’t that limit a heterosexual woman’s freedom to engage in sexual activity?

Sorry, Liberals: Voter ID Laws Don’t Really Impact Election Results - Matt Vespa

Sorry, Liberals: Voter ID Laws Don’t Really Impact Election Results - Matt Vespa


If there’s one thing that leaves liberals blind with rage, it’s voter ID laws. There was a whole panel dedicated to the issue at the progressive Netroots Nation conference last summer, where panelists agreed that such laws are the latest evolution of Jim Crow. But are they actually swaying elections?

Nate Cohn at the New York Times wrote that such laws don’t really sway elections. Granted, there are some issues with voter databases that could prevent someone with a valid ID from voting. Yet, these errors also inflate the number of voters who are labeled as not having proper identification. Additionally, it’s hyperbolic to say these laws suppress the vote since the demographics that could potentially be disproportionately impacted don’t vote often anyway:
These figures overstate the number of voters who truly lack identification. Those without ID are particularly unlikely to vote. And many who do vote will vote Republican. In the end, the seemingly vast registration gaps dwindle, leaving enough voters to decide only elections determined by fractions of a point. To begin with, the true number of registered voters without photo identification is usually much lower than the statistics on registered voters without identification suggest. The number of voters without photo identification is calculated by matching voter registration files with state ID databases. But perfect matching is impossible, and the effect is to overestimate the number of voters without identification.

Take Texas, a state with a particularly onerous voter ID law. If I register to vote as “Nate” but my ID says “Nathan,” I might be counted among the hundreds of thousands of registered voters without a photo ID. But I’ll be fine at the polling station on Election Day with a name that’s “substantially similar” to the one on file.

The demographic profile of voters without identification — young, nonwhite, poor, immobile, elderly — is also similar to the profile of voters who turn out at low rates. It’s also possible that the voter file is the issue. Some people voted in past elections, but have moved since and haven’t been purged from the voter file, even though their ID may have expired (if they had one in the first place). Some elderly voters might just be dead and not yet removed from the voter rolls.
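To see why imperfect matching inflates these counts, here is a minimal sketch, using made-up records and a crude string-similarity heuristic from Python’s standard difflib rather than any state’s actual procedure. An exact comparison between a voter roll and an ID database counts a registered “Nate” as lacking identification, while a looser, “substantially similar” comparison matches him to the “Nathan” in the ID file.

```python
# Hypothetical illustration of why exact-match record linkage overstates
# the number of registered voters who "lack" photo identification.
from difflib import SequenceMatcher

# Made-up voter-roll and ID-database records: (name, birth year).
voter_roll = [("Nate Cohn", 1988), ("Maria Lopez", 1946)]
id_database = [("Nathan Cohn", 1988), ("Maria R. Lopez", 1946)]

def has_id_exact(voter, ids):
    """Credit a voter with an ID only on a character-for-character match."""
    return voter in ids

def has_id_similar(voter, ids, threshold=0.75):
    """Credit a voter with an ID if a record with the same birth year has a
    'substantially similar' name (simple similarity-ratio heuristic)."""
    name, year = voter
    return any(
        year == id_year and SequenceMatcher(None, name, id_name).ratio() >= threshold
        for id_name, id_year in ids
    )

for voter in voter_roll:
    print(voter, "exact:", has_id_exact(voter, id_database),
          "similar:", has_id_similar(voter, id_database))

# The exact comparison reports both voters as lacking ID; the looser
# comparison matches both, shrinking the headline "no ID" count.
```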
The article also notes that some of the folks who don’t have IDs are Republicans, though those without identification mostly break for Democrats. Still, it’s not enough to decide anything but an extremely close election. Moreover, it’s not as if Democrats have been unable to win states with voter ID laws; Cohn aptly noted that Obama won Indiana in 2008. As for the American people, voter ID laws are immensely popular across the political spectrum. In Texas, 67 percent support the state’s voter ID law. A Fox News poll from May of 2014 found that 70 percent of respondents, including 55 percent of Democrats, support laws that protect the integrity of our elections. In July of 2013, after parts of the Voting Rights Act were struck down as unconstitutional, Marist asked: “Do you think it is a good thing or a bad thing if election laws were changed to do each of the following: Require voters to show identification in order to vote?”

There was 70-plus percent approval across the board; every region, political ideology, political affiliation, income level, sex, and race said such a change would be a good thing. One statistic that stood out: 65 percent of those describing themselves as “very liberal” approved of voter ID laws.

If there is one thing that’s preventing Americans from voting, it’s not voter ID laws; it’s the lack of resources at polling stations in predominantly minority voting districts.

Now, take what you will from that narrative, but it’s clear that voter ID laws are popular, and they’re not really deciding elections.

Even President Obama said on Al Sharpton’s radio show last October that voter ID laws are not preventing minorities from voting.
"Most of these laws are not preventing the overwhelming majority of folks who don't vote from voting. Most people do have an ID. Most people do have a driver's license. Most people can get to the polls. It may not be as convenient' it may be a little more difficult."