Monday, 10 December 2018

A Case for Problem-Based Humanities Research

What does it mean for a town to die?

What does it mean for an industry to die?

What responsibility does a government have to prevent these things from happening?

As someone who hails from Atlantic Canada, I wonder about these questions constantly. I’d go further and say that these questions are the most pressing concerns of nearly every jurisdiction in Canada that isn’t a metropolitan region. Yet these fundamental questions seem to rarely make it into the political conversations taking place in my home region or elsewhere. Instead, all of the political conversations I hear tend to focus exclusively on value for money.

We know, of course, that pure value for money isn’t how things play out in the real world. The truth is that per capita funding is anathema to people living in sparsely populated areas, because a turn to pure per capita funding would result in the immediate closure of countless schools, hospitals, and other vital pieces of social infrastructure, and would see many of our rural communities disappear. Yet many of these communities still receive the support they need to continue existing, even if it amounts to a bare minimum of “life support” that keeps them limping along.

To those concerned with efficiency and a utilitarian best-outcome-for-the-most-people set of values, this reality can be very frustrating. These people believe that it is only political expediency, and the disproportionate voting power apportioned to specific regions, that keeps politicians making “political” promises of social infrastructure funding to areas that, for some, should simply be permitted to die of natural causes—read: the decline of their traditional industries.

On the other side, people living in rural communities will argue for the importance of their dignity, which is directly attached to their sense of home and community. They might also point to the logistical impossibility of their moving to a more densely populated area, or the foolhardiness of concentrating all of a province’s population in one or two urban centres as a long-term strategy. Most of the time, though, these conversations tend to come back to the eternal notion of value for money, as though the meaning of "value" were self-evident. 

It’s the failure of these conversations to get to the real issues, the “Why?”, that should entice governments to fund more problem-based humanities research that speaks directly to the challenges faced by local communities. What are people truly asking for when they ask to be supported in their rural communities? What is at stake in a government’s decision to subsidize a dying industry that has little chance of ever becoming sustainable again? Are better jobs really the sole way of helping citizens live more fulfilling lives? These are questions for rigorous humanities-based research. The reason we often don’t invest in this type of research is that we’ve come to accept the notion that philosophy is a private concern, with each person’s values being just as important as anyone else’s. While this is true in a democracy, it does not mean that the ways in which people apply those values to specific decisions (and their rationale for doing so) are equal.

It’s in this realm, the realm where people’s core values intersect with decision-making, that all of society can benefit from the help of experts in the humanities. I am a PhD in English literature, and I still would never argue that I have all the philosophical knowledge I need to assess how governments should approach the big questions I’ve outlined earlier in this piece. To achieve that kind of understanding, I’d need to read a report from a humanities scholar (or better yet, a team of diverse scholars) who has invested the right amount of expertise, time, and experience into framing and addressing these questions. That doesn’t mean that the final report will produce answers that will make everyone happy or will compel everyone to agree about what to do. It doesn’t even mean the report will produce more answers than questions. What it will do, though, is finally get us talking about the real issues, like human dignity, that underlie our policy debates.   

Without this kind of humanities-based intervention, we are left with a cacophonous town hall in which the plurality of self-interested voices becomes noise, and policymakers are much less likely to meaningfully integrate community feedback into their decisions. When you have these voices collected by experts, however, then distilled into a government report on the human value of work and community, you have something that policymakers can use (if they wish) to reflect meaningfully on the “Why?” of what they’re doing.

Let’s take the example of jobs. To be sure, few people in Canada die of starvation or exposure each year. This is not to downplay the crisis of adequate food and housing that many Canadians suffer from. Rather, my point is that for many people across Canada (especially those whose entire politics are built around the notion of more, better jobs), it is wrong to believe that more, better jobs are necessary to “make people not die.” It’s also wrong to assume that more, better jobs will immediately cure our society of problems like violence or addiction, as a quick look at Fort McMurray will attest.

So if jobs aren’t the true solution, what is?

To start, we have to realize that a lack of good jobs is never the real problem. The real problem is the corrosive effect that precarious or alienated employment has on an individual’s security, freedom, and dignity. Once we collectively accept that this assault on dignity is the real problem, we can open our minds to a wide variety of ways to help our citizens feel more empowered in their daily lives.

The point of all of this is to say that politicians across our country, especially those who govern over areas with sparse populations or dying industries, would do well to ask themselves the question, “What do our citizens actually need and want?” We should then invest not only in the stakeholder research that allows people’s voices to be heard, but the kind of problem-based humanities research that will help all of us get to the true crux of these issues. Then, we might begin having a genuine public conversation about the truly valuable things that secondary concerns like jobs are supposed to make possible.  

Wednesday, 5 December 2018

The Humanities and the Teaching of Good Judgement

We’ve seen an erosion in the concept of good judgement over the past forty years. The partisan arguments over US Supreme Court appointments, the growing conviction that all moral values are relative, the insistence that anything other than the most mathematically proven declarations are arbitrary, the notion that “subjective” is a synonym for “random” and “rationally groundless”—all of these speak to the growing sense that all statements are either completely objective or utterly arbitrary.

To be fair, the concept of good judgement originally came under fire for good reason: good judgement has historically been coded as white, cisgender, heterosexual, male, and old. But it’s also important to acknowledge that just as historically marginalized groups and individuals were finding a voice in public discourse, a growing skepticism toward the notion of good judgement and expertise in general took hold. It seems that many would rather live in a world with no intellectual authority than allow historically marginalized groups to lay claim to this authority.

The problem with all of this is that when a person confronts a judgement they don’t like, they can completely write it off. This leads to a social (or should I say antisocial) phenomenon one could rightfully call the privatization of truth.

What’s been lost in all of these conversations is the principle that one can, through education, improve their subjective judgement. A graphic designer might not have an objective sense of which designs will be better received by certain audiences, but to say that their aesthetic judgement is therefore arbitrary, groundless, and no better than anyone else’s is to throw out the concept of good judgement altogether.

This crisis of faith in good judgement is part of the larger crisis facing the Humanities. One element of that crisis is the notion that good judgement, no matter how well argued, can never compel agreement. One could offer a strongly argued reading of misogyny in the works of William Faulkner, but the fact remains that any student, if they wish, can fold their arms and argue, “It’s not there. You’re just reading too far into things.” The professor can offer mounting evidence, but all the student needs to do is continue shaking their head. For some instructors, this type of response can badly rattle their confidence in their own reasoning. But good judgement doesn’t rely on the acceptance of others to show its worth. The values and hallmarks of good judgement are many. Persuasiveness might be one of them, but compelled agreement isn’t. Persuasiveness is a quality of the argument itself; agreement depends entirely on the caprice of the listener. If the recalcitrant position of “I’m not persuaded” were enough to completely undermine the concept of good judgement, a majority of our institutions would collapse (including the law itself, which rests on judges’ subjective, informed judgement of the law as it’s written).

So what are the hallmarks of good judgement? Thankfully, they are skills that the Humanities continues to teach very well, the first of which is verbal acuity—the ability to make a point clearly. Another is discursive command, the ability to be intentional about what types of language (be it medical, literary, or religious language) one is drawing upon when making an argument, and what types of language one’s interlocutor is using. Another is embodied knowledge, the ability to listen to one’s physical reaction to certain statements, assessing this reaction to sense whether there is “something wrong” with what is being said, then using verbal acuity and discursive command to try to formulate this objection in words. Still another is empathy, the ability to inhabit (however imperfectly) the perspective of another person, or at least to acknowledge that that person’s lived experience is radically unknowable to oneself (as is the case with a white male speaker like myself trying to speak on behalf of individuals whose lived experience is radically inaccessible to me; in that situation, the principle of empathy defers to listening).

All of these skills, and many others, are taught by the Humanities. But here’s where I think the Humanities faces its biggest conundrum. The Humanities, generally speaking, is not content to uncouple the skills it teaches from the values it wishes to instill. For example, the ability to critically reflect on how language can shape reality is a core skill learned in an English program. But what are we to make of a Republican politician who stands on the floor of the US Senate arguing that climate change reports are simply representations of reality and not the thing itself? To many English professors, this argument would constitute an irresponsible misinterpretation of what critique is meant to do. But on the other hand, what exactly prevents this senator from using critique in this way? What happens when critical doubt, when applied to subjects as diverse as climate change and sexual assault, becomes the greatest weapon regressive conservatism has at its disposal?

At this point, the Humanities faces a choice: to focus on teaching discrete skills and then encouraging people to use them in responsible ways, or to continue arguing that there is something inherently progressive about the skills it teaches. This is where some Humanities instructors might argue that they are teaching habits of thought rather than something as superficially utilitarian as "skills." In any other discipline, critical thinking is simply another name for problem-solving or problem identification. In the Humanities, it seems to carry with it a progressive (or at least anti-authoritarian) mission, due in part to the inheritance of "critical" from 20th-century critical theory. This isn’t to say that the Humanities should abandon its values; rather, it might need to give up the notion that there is something inherently progressive about the skills it teaches.

Further, the Humanities needs to stop arguing that there is some sort of moral improvement or “becoming more human” that is inherent to the skills it teaches. The critical reflective skills taught in the Humanities can just as easily be used for self-deception as for self-knowledge; they can just as easily be used to rationalize unjust practices as to critique them. Indeed, it’s the double-edged nature of these skills that makes them so powerful and so dangerous at the same time. The problem lies in thinking that a certain progressive mindset is inherent to the skills taught by the Humanities, skills which, if we are honest, can produce a regressive devil’s advocate just as easily as they can produce a progressive critical thinker.

What remains in all of this is the importance of good judgement and the skills that constitute it. When Eve Sedgwick speaks about the homosocial continuum, the quality of her judgement and the salience of her points do not depend on compelled agreement. If someone folds their arms and says, “Bullshit,” it doesn’t matter. The quality of Sedgwick’s argument depends on the skills she built over her career, and her ability to use those skills to create a strong argument.

What needs to be reasserted (and it’s a shame that this needs to be argued) is that one’s judgement can improve through education, and that so much of our social world is predicated on the quality of people’s subjective judgements, something the Humanities helps to improve. Talk about good judgement in a boardroom today, and heads will nod. Talk about good judgement in a Humanities classroom, and suddenly people start using words like “arbitrary” or “groundless.” The Humanities doesn’t need to apologize for the fact that some people’s judgement (with allowances made for context) can be better than that of others. But even more importantly, it needs to emphasize that a person’s judgement, through education, can become better than it previously was.

Sunday, 20 May 2018

Saint John, New Brunswick

Neptune sighs
With the breath of a god who’s lived too long
And the fog descends 
Over the living and the dead of Saint John.

Centuries of Protestant pride
And Catholic shame
Loyalist blood
And famine-starved bone
Crunching together 
Like continental plates colliding 
Ploughing up mountains 
Resembling steeples.
Providing the habitat 
For that singular species 
The Old Saint John family.

They’ve created their own gods
Thick-featured statues with broad shoulders
Squatting outside Market Square
Appearing again 
In the paintings of Miller Brittain.
But these thick people are only aspirational. 

The Saint Johner 
Is as thin-skinned 
as the Pinot grapes
That also thrive 
under cover of fog.

The bricks of their ancient buildings
Mortared together
With centuries of insults 
both real and imagined 
(Seven parts of the latter 
to every one of the former).

They know their prayers.
But the one they know best
Is the one they'd never dare say
Before their neighbours.

A prayer the people in their Sunday finest
know better than the Lord’s
Than the Hail Mary.

We are afraid. 
We feel alone.
We want to be wanted.

A prayer that still echoes
Against the stone walls
Of their family chapels.

Thursday, 29 March 2018

What if Students Want to Write Poorly?

I was re-reading George Orwell’s “Politics and the English Language” recently and was struck by the relevance it still holds today. To recap, Orwell argues that the “ugly and inaccurate” use of written English that he witnessed during his time was not the mere by-product of untalented writers. Rather, it was a distinct trend motivated by political orthodoxies that sought to “give an appearance of solidity to pure wind.” As a friend of many English instructors, I was particularly interested in Orwell’s suggestion that poor writing serves a strategic purpose, that it reflects the motives of writers whose goals are better served by vagueness than by clarity. Nearly every person I know who has taught a writing class has expressed frustration or dismay at the difficulty of turning a poor writer into a good one. When considered alongside Orwell’s essay, this challenge provokes me to ask: what if students want to write poorly?

What if they bring to writing a set of assumptions and goals that are incompatible with clear, concise expression?

Finally, how can understanding and addressing these assumptions and goals help instructors succeed in fostering better writers?

When I was fresh out of graduate school and looking for work in the private sector, I landed a part-time job as a proposal writer for an IT security company based in Toronto. Armed with a PhD in English Literature, I vibrated with excitement at the opportunity to prove my more practically minded relatives wrong by showing how valuable my skills could be in the “real world.” I wasn’t prepared for the setback I’d experience after handing my boss my first draft proposal.

The man emailed the document back to me almost immediately demanding a full rewrite, noting that the document “didn’t speak the language” that was necessary to gain credibility in the IT sector. I needed to use more words like synergistic, architect (as a verb), leverage (also as a verb), and utilize. This last word pained me even more than the others, since Orwell himself once advised his readers never to trust someone who uses the word “utilize” when they could just as easily use “use.” Yet my boss insisted that demonstrating our comfort with consulting-sector jargon superseded the goal of communicating our value as clearly as possible. He also asked me to add more than ten pages of extraneous material simply to make the document appear more detailed and rigorous.

In another instance, I found myself arguing on the phone with a representative from a company that had overcharged me for a tax-filing service. Over and over, the young man on the other end explained to me: “It has been decided that you will not receive a refund.” Repeatedly, I demanded that the young man admit that a human being, located somewhere in the world, was responsible for this decision. But he wouldn’t budge from his use of the passive voice, and kept repeating “It has been decided” until I gave up in Kafkaesque despair.

What I soon learned in my postgraduate life was that even though clear expression is a great gift, the world constantly calls on us to obscure what we are saying for personal or professional ends. There are daily occasions when we must choose not to express ourselves clearly, but must pad our writing and speech with innumerable qualifications, whether it be to soften our tone when delivering bad news or to qualify our thinking with a dozen layers of nuance.

To return to the question I posed at the beginning of this essay: what might motivate a student to write poorly? As many instructors will no doubt attest, teaching students to write well can be very difficult, even over the course of a four-year university degree program. A student might spend a few more hours than usual studying for a biology exam and expect to improve their grade on the next test. Yet spending a few extra hours on an English term paper (while always a fine idea) does not carry the same level of correlation to an improved mark. Anyone who has ever heard a student say something along the lines of, “But I worked so much harder on this one!” understands that this lack of correlation between increased effort and instant payoff can be a source of great frustration for students and instructors alike.

Mastering the mechanics and style of good writing is a long and difficult process. But I’m convinced that it is longer and more difficult than it needs to be, due to the assumptions and motives that students bring to the process.

Anyone who has ever taught a writing class will recognize the line, “Since the dawn of time, man has always…” This common opening reflects the writer’s inability to assign an appropriate level of scope to their argument. But it also reveals something more—the student’s ingrained belief that English class is a place for lofty statements, the bolder the better. Such lines are the product of a culture whose concept of an English professor has not advanced beyond the likes of John Keating in Dead Poets Society.

To summarize, teaching students to write well is difficult not only because the craft itself is hard to master, but also because of the false beliefs and counterproductive motives that inform students’ concept of what writing is supposed to accomplish. For many people, and young people especially, writing is meant to convey one’s grandest ideas and to persuade others to agree. Accomplishing as much requires a writer who can go beyond simple, clear statements. However, one can’t progress to the strategic use of language until one has grasped how to write an idea in simple terms. But as I’ve seen countless times, many people will actively resist putting their grandest ideas into simple terms.

One of the greatest gifts of youth is a belief in the uniqueness and world-shaping significance of one’s ideas. For many, these ideas exist not in words, but in the boundless enthusiasm that one might feel for a fragment or image that feels extremely insightful. Unfortunately, these ideas are much like dreams—incredibly interesting to the person who experiences them, but equally vague and boring to those who don’t. The holder of the idea will often resist expressing it in plain language, for fear of killing the happiness it inspires in them. Considered in the daylight of clear expression, the idea reveals itself to be not nearly as unique or compelling as its creator initially thought. This resistance to clear expression isn’t limited to young people. There are many adults I’ve met in my postgraduate life (especially entrepreneurs) who’d much rather preserve their enthusiasm for a vague idea than ruin it by trying to set it down in clear terms.

This is all to say that there are powerful motives informing people’s unwillingness (and yes, I call it an unwillingness) to write clearly. When seen as the product of unwillingness as much as the product of inability, poor writing reveals why it is such a difficult problem to address.

I haven’t written anything in this essay that experienced writing instructors don’t already know. What I’d like to pose again, though, is the question: how might students and instructors both benefit if writing classes explored the motives of poor writing as thoroughly as they addressed the mechanics of strong writing? I’d be very interested in hearing people’s thoughts on this subject in the comments below.

Monday, 5 February 2018

The Right to Speak is Not the Right to be Heard

Free speech advocates will often accuse so-called Social Justice Warriors of shutting down speech with nothing more than a claim to victimhood, as though the marginalization of specific groups were based solely on a person’s squishy feeling of being victimized and not a matter of historical record (which it is. Anti-Jewish state propaganda exists. Laws banning women from voting exist. Look ‘em up). The next claim is that while such laws might have once existed, we’ve cleared them all away and now we’re all on an equal footing (let the moral hand-dusting begin). If group-based marginalization doesn’t exist, it remains a mystery to me how these people would explain the overrepresentation of African Americans and Indigenous peoples in the prison system, or the high likelihood of assault against members of the LGBTQ+ community.

If these free speech advocates want us to be super specific about which speech SJWs want to shut down and which speech they don’t, they need to be equally rigorous about the flip side of the equation: what does and does not constitute a violation of a person’s freedom of speech? When we ask this question, we see just as much hand-waving by free speech “victims” as they accuse SJWs of engaging in.

So let’s start off with a baseline for a violation of a person’s freedom of speech.

Case #1: When the government threatens to legally prosecute, intimidate, or disappear someone based on something they’ve said.

That’s it. Seriously. That’s it.

The Antifa groups that sometimes engage in violence such as Nazi punching?

Nope, not a violation of a person’s freedom of speech. The assailant might not know the person is a Nazi at all; they may have thought the person in question was looking at them cockeyed. Even if you don’t believe in punching Nazis, this is (and is only) a violation of a person’s right not to be assaulted.

When a university denies someone the right to speak on campus?

Newp. Not a violation of their freedom of speech. This is a matter of campus policy. Different campuses have different policies about who can come onto their campus and speak, and they can change these rules any time they want. There may be some argument to be had here, given that these are publicly funded institutions, but the fact remains that speaking on campus is governed by institutional policy, and those policies differ from school to school.

When I start screaming at someone to shut up and drown them out?

No, this doesn’t constitute a violation of that person’s freedom of speech. If it did, the free speech advocate would have to admit that speech in itself is capable of violating another person’s rights. That means speech is a form of violence, and the entire divide between speech and violence that free speech advocates rely on would come crashing down.

The screaming example raises another important point, and it’s one I repeat to myself whenever people start talking about their free speech being violated.

The right to speak is not the right to be heard.

In recent conversations on the internet (and they have been actual, productive conversations), I’ve engaged free speech advocates about the idea of internet “mobbing” of those who express unpopular views, be it on the right or the left. One claim to victimhood constantly made by free speech advocates is that when more than a few people (let’s say more than five) gang up on someone to call them a Nazi or some other epithet, this somehow constitutes a violation of that person’s speech. To these people, bravely presenting a dissenting view to a group orthodoxy somehow constitutes a more authentic act of speech than the replies of those who just so happen to share a common reprisal. We can talk all we want about the necessity of maintaining civil discourse and a rational exchange of ideas, but mobbing doesn’t come close to violating that person’s freedom of speech. And if Twitter (a private company) bans that person or the people who come after them, it doesn’t violate anyone’s freedom of speech either.

Why not?

Because the right to speak is not the right to be heard.