Saturday, 12 January 2019

Should English profs ban the "You're reading too far into things" argument from their classrooms?

Many English professors have no doubt encountered students in their classrooms who, arms folded, derail an otherwise productive textual analysis by insisting that their professor or classmates are “reading too much into things.” The implicit argument here is that a) there is no such thing as figurative meaning, or b) the figurative meaning being argued for oversteps the boundaries of reasonable credulity, with the recalcitrant student in question serving as the arbiter of what’s reasonable.

Yet despite the familiarity of this experience, many of the English professors I know still struggle with it. They don’t want to use their authority to overrule the student, nor do they want to appear as though they’re silencing dissent. Usually, they acknowledge the student’s comments as one perspective among others and then move on with the exercise, sometimes a little shaken by the experience.

I feel less accommodating toward any version of the “I don’t buy it” or “You’re seeing things that aren’t there” argument, if in fact the argument goes no further than these statements. Rather, I would recommend that English professors include a note in their course syllabi explaining that the “I don’t buy it” argument, in the absence of further qualification, has no place in an English classroom. Here’s why:

a) The study of English is, among other things, the study of figurative meaning. If a student doesn’t want to discuss meaning beyond the literal or the commonsensical, they shouldn’t be in an English class. We’ve known about the existence of figurative meaning, and the ways it can be mobilized through rhetorical devices, for some time now.

b) The “You’re reading too far into things” argument is not an evidence-based claim, because you cannot prove a negative.

c) Given the absence of evidence noted in b), the student must offer an alternative reading of the same text, based on evidence, in order for their objection to be considered valid.


This isn’t to say that a student should be told to keep quiet about their reservations about a certain reading until they have a fully formed counterargument. Expressing doubts about a reading should be encouraged. What I’m talking about here is the more truculent “This is dumb; you’re seeing meaning that isn’t there” argument, which, for the reasons outlined above, should be discarded as invalid before an English course has even begun, preferably through a statement in the course syllabus.

Thursday, 3 January 2019

What Do the Humanities Do? They Parse.

I was watching an old episode of The Simpsons recently and came across one of the most famous scenes from the series (although to many fans of my age, all scenes from Seasons 3-10 are famous). In the scene, an exasperated Kirk Van Houten is playing Pictionary at a dinner party with his wife Luann as his partner. Kirk quickly becomes frustrated by Luann’s inability to guess the meaning of his impossibly abstract drawing. When their time is up, Kirk throws his hands in the air and exclaims, “It’s dignity! Don’t you know dignity when you see it?!”


Kirk Van Houten's rendering of dignity

I still laugh at this moment, partly because the concept of dignity (and Kirk’s ridiculous drawing of it) was perfectly chosen for this gag. The fact is that dignity, as the philosopher Remy Debes has noted, remains one of the most often referenced but most vaguely understood concepts in contemporary philosophy, jurisprudence, and just about any other field of knowledge interested in better understanding the human experience.

So what do we do when we find ourselves constantly referring to a concept that we have difficulty defining, especially when we’re willing to create laws based on that concept? We do the core work of the Humanities, which is to parse.

By standard definition, to parse means “to identify the parts of a sentence and explain how they work together”; it also means “to examine in a minute way: analyze critically.” But I’d like to expand on these definitions within the context of the Humanities and define parsing as “to pursue the critical analysis of a concept up to the point that the concept becomes opaque or seemingly impervious to further description, analysis, or understanding, and then, through an act of verbal acuity, to discover a new means by which to describe, analyze, or understand that concept.” In other words, to parse means to analyze a concept so minutely that one either finds new constituent parts to break it into (like the discovery of subatomic particles) or articulates a new paradigm through which to describe and analyze it (like the shift from Newtonian physics to the theory of relativity).

Anyone who's taught or studied the Humanities knows intimately the moment at which parsing occurs: one is writing an essay exploring a concept, and one stops typing, leans back in their chair, laces their hands behind their head, and tilts their gaze toward the ceiling, struggling to articulate a half-formed thought or to break through the point at which an idea has become seemingly irreducible. This is all but synonymous with the work of thinking. It is the essence of Humanities-based work and of critical thought in general. It is also the part of our thinking most severely threatened by a culture bent on rendering all human experience as "user" experience, based on principles of frictionless cognition that are best captured by the title of one of the most influential books in online design—“Don’t Make Me Think” by Steve Krug.

Now to be fair to Krug, the point of his book could be revised to say “Don’t Make Me Think (About Anything Except the Stuff that Really Matters)”. But the fact that Krug and his publisher settled on this title speaks to a deep cultural desire to remove the moment of thinking, and its attendant friction and discomfort, from human experience altogether. On the opposite side, I’ve seen a reactionary movement among some Humanities professors who believe that the antidote to this trend is a generalized fetishization of discomfort, in which it’s assumed that anything that makes a student uncomfortable is inherently good and indicative of growth. But cognitive discomfort is a secondary effect of the work of parsing; it is not an end in itself. 

In an article from last year titled “'The difficulty is the point': teaching spoon-fed students how to really read,” Tegan Bennett Daylight does a good job of aligning the work of parsing with the act of reading itself, especially when one is working to understand challenging material. “This is what I want for my students," writes Daylight. "First, I want them to read a book, all the way through. I want them to find something difficult and do it anyway.” One could argue that there is perhaps no statement more countercultural than “I want them to find something difficult and do it anyway.” In a culture increasingly driven by a paradigm of user experience, one intent on stripping the world of cognitive friction, the notion of finding something difficult and doing it anyway has been displaced by the paradigm of helping people get in touch with their core passions. The hope is that doing so will provide a level of emotional compulsion that makes discipline irrelevant: the force of one's "passion" will propel one onward through a semi-conscious flow state, and the uncomfortable experience of coming up against conceptual opacity and forcing oneself to work through it by an act of will shall no longer be necessary.

To be fair, there are no doubt thinkers out there who are propelled by compulsion more than discipline. If one is utterly obsessed with parsing ambiguity and pressing beyond conceptual opacity, no discipline is required, since the involuntary force of the compulsion will propel one onward. Patient devotion, though, is different: it requires the discipline to continue the work of parsing even when one is not in a flow state of immersion in one’s work and must sit with uncomfortable thoughts and an apparent lack of progress for an indefinite amount of time.

It’s also important to note that it takes many different registers of language to perform the work of parsing. Sometimes, concepts or aspects of human experience become so amorphous, opaque, or irreducible that we must rely on poetry, literature, and the other arts to perform the work of forging onward, attempting to give any form at all (however intuitive) to what has thus far been formless. Sometimes, an explorer will combine the language of poetry and analysis to forge onward, as one can find in the mind-bending work of Maurice Blanchot, which is sometimes so patient in its parsing of the unarticulated that it verges on maddening. 

This point also brings us to the issue of the rage that often accompanies the work of parsing. This rage can come from many places, including the mind’s desire to defend its cognitive status quo, the desire to believe that one already knows everything worth knowing, and the belief that all knowledge that isn’t fully objective is arbitrary and thus unthreatening. These status quo beliefs are often held with a deep sense of urgency, as the distant rumble of the yet-to-be-parsed can threaten the conceptual bedrock of a person’s identity. On a more basic level, the work of parsing can make us all feel stupid, and there is nothing more heartbreaking, or more enraging, than the feeling that one is stupid. This is an emotional reaction that we must understand and respect while still insisting on the need to forge onward with the work of parsing.

What some Humanities scholars might struggle with today is the notion that the work of parsing is cumulative and additive, meaning that over time, we as a discipline have added to the stock of knowledge. This is not the same thing as stating that Humanities research is progressive, since “progressive” presumes a fictional telos or “end of knowledge” that might one day be reached. We need not believe in a progressive teleology to see value in the work of parsing, but we do need to understand that when a new act of parsing has occurred, something important has been achieved. This moment of achievement is every bit as significant to human knowledge as discovering the constituent parts of the atom, and the Humanities as a discipline must learn to recognize and celebrate such achievements, even as groups within it might value certain achievements over others.

By way of example, it is clear that for all the criticism directed toward it, the concept of intersectionality has made enormous contributions to our understanding of the human experience, the situatedness of knowledge and the dynamics of power among identifiable groups being but two (and two that have been much better parsed in the existing literature). These contributions have added to the stock of knowledge by which we understand human experience. We can debate whether intersectionality is the singular means by which to understand human experience, and we can also debate the extent to which power shapes human relations, with positions ranging from those that relegate power to the realm of secondary effects (it is rarely, though sometimes, a factor in constituting human relationships and hierarchies) to those that assign power a hegemonic, all-encompassing status (“There is no outside, and one can never appeal to a basis for knowledge that isn’t predicated on power”). Regardless of where one falls in this debate, what remains clear is that the concept of intersectionality has further parsed the human experience and, in doing so, has made available to us a realm of knowledge that wasn’t available prior to its being parsed.

In addition to a culture bent on reducing all forms of cognitive drag, we also live in a culture in which people celebrate and defend points of conceptual opacity because those points of opacity provide us with occasions to scream at one another. A point of opacity now serves as a fulcrum, on each side of which sits a camp making slippery-slope arguments about the other. This schism can manifest, for example, in the fear that an absolutist approach to free speech will defend fascist beliefs just long enough for them to take over, or in the converse belief that any regulation of speech will soon lead to an Orwellian nightmarescape. This is very different from two people coming together with an interest in taking an erstwhile opaque concept and working together to press onward in parsing it.


That said, it doesn’t take long to see the naivety of this idea, because people don’t parse concepts from a position of political neutrality; they often do so with a vested interest in defending ideas that shape human experience and power relations in the ways that best serve them. But to throw out the possibility of genuine cooperation in the work of parsing would be to adopt the position that all knowledge and human relations are 100% reducible to power, and that the influence of power is homogeneously exercised across all aspects of human experience, with no spectrum leading from more power-laden aspects of experience to less power-laden ones. This monolithic, homogeneous, there-is-no-outside understanding of power isn’t one I agree with, for the primary reason that it requires very little thinking and leaves no possibility of further parsing other aspects of human experience, like joy, beauty, and love.

What remains true in all of this is the centrality of parsing to the work of the Humanities, and the need for the field to better recognize the cumulative contributions that the act of thinking and parsing has made to human knowledge. Finally, we must learn how to better celebrate these achievements and to acknowledge that we are the better for them. 

Monday, 10 December 2018

A Case for Problem-Based Humanities Research

What does it mean for a town to die?

What does it mean for an industry to die?

What responsibility does a government have to prevent these things from happening?

As someone who hails from Atlantic Canada, I wonder about these questions constantly. I’d go further and say that these questions are the most pressing concerns of nearly every jurisdiction in Canada that isn’t a metropolitan region. Yet these fundamental questions rarely seem to make it into the political conversations taking place in my home region or elsewhere. Instead, the political conversations I hear tend to focus exclusively on value for money.

We know, of course, that this isn’t how things play out in the real world. Value for money, taken to its logical end, implies per capita funding, and per capita funding is anathema to people living in sparsely populated areas, because a turn to pure per capita funding would result in the immediate closure of countless schools, hospitals, and other vital pieces of social infrastructure, closures that would see many of our rural communities disappear. Yet many of these communities continue to receive the support they need to survive, even if it constitutes a bare amount of “life support” that keeps them limping along.

To those concerned with efficiency and a utilitarian best-outcome-for-the-most-people set of values, this reality can be very frustrating. These people believe that it is only political expediency, and the disproportionate voting power apportioned to specific regions, that keeps politicians making “political” promises of social infrastructure funding to areas that, for some, should simply be permitted to die of natural causes—read: the decline of their traditional industries.

On the other side, people living in rural communities will argue for the importance of their dignity, which is directly attached to their sense of home and community. They might also point to the logistical impossibility of their moving to a more densely populated area, or the foolhardiness of concentrating all of a province’s population in one or two urban centres as a long-term strategy. Most of the time, though, these conversations tend to come back to the eternal notion of value for money, as though the meaning of "value" were self-evident. 

It’s the failure of these conversations to get at the real issues, the underlying “Why?”, that should entice governments to fund more problem-based humanities research that speaks directly to the challenges faced by local communities. What are people truly asking for when they ask to be supported in their rural communities? What is at stake in a government’s decision to subsidize a dying industry that has little chance of ever becoming sustainable again? Are better jobs really the sole way of helping citizens live more fulfilling lives? These are questions for rigorous humanities-based research. The reason we often don’t invest in this type of research is that we’ve come to accept the notion that philosophy is a private concern, with each person’s values being just as important as anyone else’s. While this is true in a democracy, it does not mean that the ways in which people apply those values to specific decisions (and their rationales for doing so) are equal.

It’s in this realm, the realm where people’s core values intersect with decision-making, that all of society can benefit from the help of experts in the humanities. I hold a PhD in English literature, and even I would never argue that I have all the philosophical knowledge I need to assess how governments should approach the big questions I’ve outlined earlier in this piece. To achieve that kind of understanding, I’d need to read a report from a humanities scholar (or better yet, a team of diverse scholars) who has invested the right amount of expertise, time, and experience in framing and addressing these questions. That doesn’t mean the final report will produce answers that make everyone happy or compel everyone to agree about what to do. It doesn’t even mean the report will produce more answers than questions. What it will do, though, is finally get us talking about the real issues, like human dignity, that underlie our policy debates.

Without this kind of humanities-based intervention, we are left with a cacophonous town hall in which the plurality of self-interested voices becomes noise, and policymakers are much less likely to meaningfully integrate community feedback into their decisions. When you have these voices collected by experts, however, then distilled into a government report on the human value of work and community, you have something that policymakers can use (if they wish) to reflect meaningfully on the “Why?” of what they’re doing.

Let’s take the example of jobs. To be sure, few people in Canada die of starvation or exposure each year. This is not to downplay the crisis of adequate food and housing that many Canadians suffer from. Rather, my point is that for many people across Canada (especially those whose entire politics are built around the notion of more, better jobs), it is wrong to believe that more, better jobs are necessary to "make people not die." It's also wrong to assume that more, better jobs will immediately cure our society of problems like violence or addiction, as a quick look at Fort McMurray will attest.

So if jobs aren’t the true solution, what is?

To start, we have to realize that a lack of good jobs is never the real problem. The real problem is the corrosive effect that precarious or alienated employment has on an individual’s security, freedom, and dignity. Once we collectively accept that this assault on dignity is the real problem, we can open our minds to a wide variety of ways to help our citizens feel more empowered in their daily lives.

The point of all of this is to say that politicians across our country, especially those who govern areas with sparse populations or dying industries, would do well to ask themselves, “What do our citizens actually need and want?” We should then invest not only in the stakeholder research that allows people’s voices to be heard, but also in the kind of problem-based humanities research that will help all of us get to the true crux of these issues. Then, we might begin having a genuine public conversation about the truly valuable things that secondary concerns like jobs are supposed to make possible.

Wednesday, 5 December 2018

The Humanities and the Teaching of Good Judgement

We’ve seen an erosion of the concept of good judgement over the past forty years. The partisan arguments over US Supreme Court appointments, the growing insistence that all moral values are relative, the conviction that anything other than the most mathematically proven declarations is arbitrary, the notion that “subjective” is a synonym for “random” and “rationally groundless”—all of these speak to the growing sense that all statements are either completely objective or utterly arbitrary.
  
To be fair, the concept of good judgement originally came under fire for good reason: good judgement has historically been coded as white, cisgendered, heterosexual, male, and old. But it’s also important to acknowledge that just as historically marginalized groups and individuals were finding a voice in public discourse, a growing skepticism emerged toward the notion of good judgement, and toward expertise in general. It seems that many would rather live in a world with no intellectual authority than allow historically marginalized groups to lay claim to this authority.

The problem with all of this is that when a person confronts a judgement they don’t like, they can completely write it off. This leads to a social (or should I say antisocial) phenomenon one could rightfully call the privatization of truth.

What’s been lost in all of these conversations is the principle that one can, through education, improve their subjective judgement. A graphic designer might not have an objective sense of which designs will be better received by certain audiences, but to say that their aesthetic judgement is therefore arbitrary, groundless, and no better than anyone else’s is to throw out the concept of good judgement altogether.

This crisis of faith in good judgement is part of the crisis impacting the Humanities. Part of this crisis is the fact that good judgement, no matter how well argued, can never compel agreement. One could offer a strongly argued reading of misogyny in the works of William Faulkner, but the fact remains that any student, if they wish, can fold their arms and argue, “It’s not there. You’re just reading too far into things.” The professor can offer mounting evidence, but all the student needs to do is continue shaking their head. For some instructors, this type of response can badly rattle their confidence in their own reasoning. But good judgement doesn’t rely on the acceptance of others to show its worth. The values and hallmarks of good judgement are many. Persuasiveness might be one of them, but compelled agreement isn’t: persuasiveness is a quality of the argument itself, while agreement depends entirely on the caprice of the listener. If the recalcitrant position of “I’m not persuaded” were enough to undermine the concept of good judgement, a majority of our institutions would collapse (including the law itself, which is based solely on judges’ subjective, informed judgement of the law as it’s written).

So what are the hallmarks of good judgement? Thankfully, they are skills that the Humanities continues to teach very well, the first of which is verbal acuity—the ability to make a point clearly. Another is discursive command, the ability to be intentional about what types of language (medical, literary, religious, etc.) one is drawing upon when making an argument, and to recognize what types of language one’s interlocutor is using. Another still is embodied knowledge, the ability to listen to one’s physical reaction to certain statements, assessing this reaction to sense whether there is “something wrong” with what is being said, then using verbal acuity and discursive command to try to formulate this objection in words. Another still is empathy, the ability to inhabit (however imperfectly) the perspective of another person, or at least to acknowledge that that person’s lived experience is radically unknowable to oneself (as is the case with a white male speaker like myself trying to speak on behalf of individuals whose lived experience is radically inaccessible to me; in that situation, the principle of empathy defers to listening).

All of these skills, and many others, are taught by the Humanities. But here’s where I think the Humanities faces its biggest conundrum. The Humanities, generally speaking, is not content to uncouple the skills it teaches from the values it wishes to instill. For example, the ability to critically reflect on how language can shape reality is a core skill learned in an English program. But what are we to make of a Republican politician who stands on the floor of the US Senate arguing that climate change reports are simply representations of reality and not the thing itself? To many English professors, this argument would constitute an irresponsible misinterpretation of what critique is meant to do. But on the other hand, what exactly prevents this senator from using critique in this way? What happens when critical doubt, when applied to subjects as diverse as climate change and sexual assault, becomes the greatest weapon regressive conservatism has at its disposal?

At this point, the Humanities faces a choice: to focus on teaching discrete skills and then encouraging people to use them in responsible ways, or to continue arguing that there is something inherently progressive about the skills it teaches. This is where some Humanities instructors might argue that they are teaching habits of thought rather than something as superficially utilitarian as "skills." In any other discipline, critical thinking is simply another name for problem-solving or problem identification. In the Humanities, it seems to carry with it a progressive (or at least anti-authoritarian) mission, due in part to the inheritance of "critical" from 20th-century critical theory. This isn’t to say that the Humanities should abandon its values; rather, it might need to give up the notion that there is something inherently progressive about the skills it teaches.

Further, the Humanities needs to stop arguing that there is some sort of moral improvement or “becoming more human” inherent to the skills it teaches. The critical reflective skills taught in the Humanities can just as easily be used for self-deception as for self-knowledge; they can just as easily be used to rationalize unjust practices as to critique them. Indeed, it’s the double-edged nature of these skills that makes them so powerful and so dangerous at the same time. The problem lies in thinking that a certain progressive mindset is inherent to the skills taught by the Humanities, which, if we are to be honest, can produce a regressive devil’s advocate just as easily as a progressive critical thinker.

What remains in all of this is the importance of good judgement and the skills that constitute it. When Eve Sedgwick speaks about the homosocial continuum, the quality of her judgement and the salience of her points do not depend on compelled agreement. If someone folds their arms and says, “Bullshit,” it doesn’t matter. The quality of Sedgwick’s argument depends on the skills she built over her career, and on her ability to use those skills to create a strong argument.

What needs to be reasserted (and it’s a shame that this needs to be argued) is that one’s judgement can improve through education, and that the majority of our social world is predicated entirely on the quality of people’s subjective judgements, something the Humanities helps to improve. Talk about good judgement in a boardroom today, and heads will nod. Talk about good judgement in a Humanities classroom, and suddenly people start using words like “arbitrary” or “groundless.” The Humanities doesn’t need to apologize for the fact that some people’s judgement (with allowances made for context) can be better than that of others. But even more importantly, it needs to emphasize that a person’s judgement, through education, can become better than it previously was.  

Sunday, 20 May 2018

Saint John, New Brunswick

Neptune sighs
With the breath of a god who’s lived too long
And the fog descends 
Over the living and the dead of Saint John.

Centuries of Protestant pride
And Catholic shame
Loyalist blood
And famine-starved bone
Crunching together 
Like continental plates colliding 
Ploughing up mountains 
Resembling steeples.
Providing the habitat 
For that singular species 
The Old Saint John family.

They’ve created their own gods
Thick-featured statues with broad shoulders
Squatting outside Market Square
Appearing again 
In the paintings of Miller Brittain.
But these thick people are only aspirational. 

The Saint Johner 
Is as thin-skinned 
as the Pinot grapes
That also thrive 
under cover of fog.

The bricks of their ancient buildings
Mortared together
With centuries of insults 
both real and imagined 
(Seven parts of the latter 
to every one of the former).

They know their prayers.
But the one they know best
Is the one they'd never dare say
Before their neighbours.

A prayer the people in their Sunday finest
know better than the Lord’s
Than the Hail Mary.

We are afraid. 
We feel alone.
We want to be wanted.

A prayer that still echoes
Against the stone walls
Of their family chapels.

Thursday, 29 March 2018

What if Students Want to Write Poorly?


I was re-reading George Orwell’s “Politics and the English Language” recently and was struck by the relevance it still holds today. To recap, Orwell argues that the “ugly and inaccurate” use of written English he witnessed in his time was not the mere by-product of untalented writers. Rather, it was a distinct trend motivated by political orthodoxies that sought to “give an appearance of solidity to pure wind.” As a friend of many English instructors, I was particularly interested in Orwell’s suggestion that poor writing serves a strategic purpose, that it reflects the motives of writers whose goals are better served by vagueness than by clarity. Nearly every person I know who has taught a writing class has expressed frustration or dismay at the difficulty of turning a poor writer into a good one. When considered alongside Orwell’s essay, this challenge prompts me to ask: what if students want to write poorly?

What if they bring to writing a set of assumptions and goals that are incompatible with clear, concise expression?

Finally, how can understanding and addressing these assumptions and goals help instructors succeed in fostering better writers?

When I was fresh out of graduate school and looking for work in the private sector, I landed a part-time job as a proposal writer for an IT security company based in Toronto. Armed with a PhD in English Literature, I vibrated with excitement at the opportunity to prove my more practically-minded relatives wrong by showing how valuable my skills could be in the “real world.” I wasn’t prepared for the setback I’d experience after handing my boss my first draft proposal.

The man emailed the document back to me almost immediately, demanding a full rewrite and noting that the document “didn’t speak the language” necessary to gain credibility in the IT sector. I needed to use more words like synergistic, architect (as a verb), leverage (also as a verb), and utilize. This last word pained me even more than the others, since Orwell himself once advised his readers never to trust someone who uses the word “utilize” when they could just as easily use “use.” Yet my boss insisted that demonstrating our comfort with consulting-sector jargon superseded the goal of communicating our value as clearly as possible. He also asked me to add more than ten pages of extraneous material simply to make the document appear more detailed and rigorous.

In another instance, I found myself arguing on the phone with a representative from a company that had overcharged me for a tax-filing service. Over and over, the young man on the other end explained to me: “It has been decided that you will not receive a refund.” Repeatedly, I demanded that the young man admit that a human being, located somewhere in the world, was responsible for this decision. But he wouldn’t budge from his use of the passive voice, and kept repeating “It has been decided” until I gave up in Kafkaesque despair.

What I soon learned in my postgraduate life was that even though clear expression is a great gift, the world constantly calls on us to obscure what we are saying for personal or professional ends. There are daily occasions when we must choose not to express ourselves clearly, but must pad our writing and speech with innumerable qualifications in order to achieve specific ends, whether it be to soften our tone when delivering bad news or to qualify our thinking with a dozen layers of nuance.

To return to the question I posed at the beginning of this essay: what might motivate a student to write poorly? As many instructors will no doubt attest, teaching students to write well can be very difficult, even over the course of a four-year university degree. A student might spend a few more hours than usual studying for a biology exam and expect an improved grade on the next test. Yet spending a few extra hours on an English term paper (while always a fine idea) does not carry the same correlation to an improved mark. Anyone who has ever heard a student say something along the lines of “But I worked so much harder on this one!” understands that this lack of correlation between increased effort and instant payoff can be a source of great frustration for students and instructors alike.

Mastering the mechanics and style of good writing is a long and difficult process. But I’m convinced that it is longer and more difficult than it needs to be, due to the assumptions and motives that students bring to the process.

Anyone who has ever taught a writing class will recognize the line, “Since the dawn of time, man has always…” This common opening reflects the writer’s inability to assign an appropriate level of scope to their argument. But it also reveals something more—the student’s ingrained belief that English class is a place for lofty statements, the bolder the better. Such lines are the product of a culture whose concept of an English professor has not advanced beyond the likes of John Keating in Dead Poets Society.

To summarize, teaching students to write well is difficult not only because the craft itself is hard to master, but also because of the false beliefs and counterproductive motives that inform students’ concept of what writing is supposed to accomplish. For many people, and young people especially, writing is meant to convey one’s grandest ideas and to persuade others to agreement. Accomplishing as much requires a writer who can go beyond simple, clear statements. However, one can’t progress to the strategic use of language until one has grasped how to write an idea in simple terms. But as I’ve seen countless times, many people will actively resist putting their grandest ideas into simple terms.

One of the greatest gifts of youth is a belief in the uniqueness and world-shaping significance of one’s ideas. For many, these ideas exist not in words, but in the boundless enthusiasm that one might feel for a fragment or image that seems extremely insightful. Unfortunately, these ideas are much like dreams—incredibly interesting to the person who experiences them, but equally vague and boring to those who don’t. The holder of the idea will often resist expressing it in plain language, for fear of killing the happiness it inspires. Considered in the daylight of clear expression, the idea often reveals itself to be not nearly as unique or compelling as its creator initially thought. This resistance to clear expression isn’t limited to young people. There are many adults I’ve met in my postgraduate life (entrepreneurs especially) who’d much rather preserve their enthusiasm for a vague idea than ruin it by trying to set it down in clear terms.

This is all to say that there are powerful motives informing people’s unwillingness (and yes, I call it an unwillingness) to write clearly. When seen as the product of unwillingness as much as of inability, poor writing reveals why it is such a difficult problem to address.

I haven’t written anything in this essay that experienced writing instructors don’t already know. What I’d like to pose again, though, is the question: how might students and instructors both benefit if writing classes explored the motives of poor writing as thoroughly as they addressed the mechanics of strong writing? I'd be very interested in hearing people's thoughts on this subject in the comments below.