“Sisu”: a new favorite word that comes from Finnish and was popularized by war

“A Thousand Lakes of Red Blood on White Snow” brilliantly describes how tiny Finland successfully fought the Soviet Union twice during World War II:

Thus with a thousand lakes of warm red blood on cold white snow did the Finns purchase their escape from assimilation into the Soviet Union, ensuring that when the Iron Curtain was drawn, it ran along the eastern side of Finland rather than the western one.

The word “sisu” captures the mindset necessary to persevere against formidable, unlikely odds, though it is unlikely to have the resonance it needs unless you’ve read the entire article:

Sisu resists exact translation into other languages but loosely translated refers to a stoic toughness consisting of strength of will, determination, and perseverance in the face of adversity and against repeated setbacks; it means stubborn fortitude in the face of insurmountable odds; the ability to keep fighting after most people would have quit, and fighting with the will to win.

Sisu is more than mere physical courage, requiring an inner strength nourished by optimism, tempered by realism, and powered by a great deal of pig-headed obstinacy.

“Grit,” “stoicism,” and “tenacity” express similar concepts in English.

Anyone know a good, general history of Finland? Many people are currently enamored of its schools, but perhaps the same cultural thing that enabled the country to fight the Winter War also enables it to succeed educationally where others fail.

Owning vs sharing: Don’t get caught in the ugly middle

In a tweet Paul Graham writes: “As buying and selling become easier, owning approaches sharing.” That describes my behavior regarding many objects, especially electronics: for as long as I’ve been buying Macs and Mac products, I’ve been selling the old versions on Craigslist for a third to half of their initial value. In some sense I’m actually leasing them, but using myself as the leasing agent. Although I’ve owned a car, I actually prefer not to, and Uber is accelerating the ability to rent cars when needed and avoid the hassles of ownership. Housing has of course long been both rented and owned, and like many economists I find the U.S. obsession with owning housing to be misguided.

But there are other ways too that owning approaches sharing in my life:

  • Old cameras and lenses get sold to fund new ones. Like Macs, they tend to retain a fair amount of value—usually about half for lenses and a third for camera bodies.
  • It’s not uncommon for me to sell books that look promising but don’t live up to expectations, almost always through Amazon (despite Amazon’s encouragement of buyers who scam sellers; for objects worth less than $20 I don’t think the issue is overwhelmingly important).
  • Although I haven’t begun doing this yet, I think that selling bikes may be more economical than moving them. The last bike I moved from Tucson to New York was probably a net loss and should’ve been sold instead of shipped.
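The resale arithmetic above can be made concrete. Here is a toy sketch of the own-then-resell pattern; all prices are hypothetical, and the retained-value fractions are just the rough rules of thumb from the list (about half for lenses, a third for camera bodies):

```python
# Effective cost of buying an item and reselling it later.
# Prices and retained-value fractions are illustrative assumptions,
# not real market data.

def effective_cost(purchase_price, retained_fraction, years_owned):
    """Return (net cost after resale, cost per year of use)."""
    resale = purchase_price * retained_fraction
    net = purchase_price - resale
    return net, net / years_owned

# A lens retaining ~half its value, a body retaining ~a third:
lens_net, lens_per_year = effective_cost(1000, 0.5, 4)
body_net, body_per_year = effective_cost(1500, 1 / 3, 3)

print(f"Lens: ${lens_net:.0f} net, ${lens_per_year:.0f}/year")
print(f"Body: ${body_net:.0f} net, ${body_per_year:.0f}/year")
```

In other words, selling at the end turns ownership into a self-administered lease whose annual rate depends mostly on how well the item holds value.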

There are some items that still aren’t easily sold, like beds and furniture, in part because they’re heavy, in part because they can harbor bed bugs, and in part because they just aren’t that valuable. I don’t have the citation handy, but I’ve read that Ikea might be facilitating mobility by making it cheap and easy to set up new apartments: it’s possible to buy a couch, a chair, some dishes, a bed, and some shelves for under $1,000, in the course of an afternoon (I’d prefer a Tuft & Needle bed, but that’s an aside). Among my friends, city-to-city moves often entail dumping most of their stuff and buying it again at the destination, since the moving cost is too high to justify the hassle. That’s less true of me because I have a sit-stand desk and some other pretty expensive gear, but in this respect I’m in the minority. Keeping a minority of one’s stuff may also lead to a more satisfying, experience-rich life, at least for some people.

The habit of having either very expensive, durable stuff or throwaway stuff may also be indicative of the polarization of many domains, in which it makes sense either to buy or be the best, or to buy throwaway stuff and not bother competing. Don’t get caught in the ugly middle. Like “Death before inconvenience,” “Don’t get caught in the ugly middle” is a maxim that explains a surprising amount of behavior.

Paul Graham and the artist

Paul Graham’s new essay “Before the Startup” is as always fascinating, but Graham also says several things that apply to artists:

The way to come up with good startup ideas is to take a step back. Instead of making a conscious effort to think of startup ideas, turn your mind into the type that startup ideas form in without any conscious effort. In fact, so unconsciously that you don’t even realize at first that they’re startup ideas.

The same is true of ideas for novels, which often come from minute observations or moments or studies of character. They often don’t feel like novels at first: they feel like a situation (“What if a guy did this…”) and the full novel comes later. Artists often work at the margins.

He also writes in a footnote:

I did manage to think of a heuristic for detecting whether you have a taste for interesting ideas: whether you find known boring ideas intolerable. Could you endure studying literary theory, or working in middle management at a large company?

This may be why I and perhaps many other grad students find grad school worse as time goes on, and why MFA programs have been growing. Too many critics have ceased focusing on how “to be an expert on your users and the problem you’re solving for them”—or, in this example, “readers” instead of “users”—and instead focus on straightforward careerism, which rarely seems to overlap with what people want to read.

What happened with Deconstruction? And why is there so much bad writing in academia?

“How To Deconstruct Almost Anything” has been making the online rounds for 20 years for a good reason: it’s an effective satire of writing in the humanities and some of the dumber currents of contemporary thought in academia.* It also usually raises an obvious question: How did “Deconstruction,” or its siblings “Poststructuralism” or “Postmodernism,” get started in the first place?

My take is a “meta” idea about institutions rather than a direct comment on the merits of deconstruction as a method or philosophy. The rise of deconstruction has more to do with the needs of academia as an institution than the quality of deconstruction as a tool, method, or philosophy. To understand why, however, one has to go far back in time.

Since at least the 18th Century, writers of various sorts have been systematically (key word: before the Enlightenment and Industrial Revolution, investigations were rarely systematic by modern standards) asking fundamental questions about what words mean and how they mean them, along with what works made of words mean and how they mean them. Though critical ideas go back to Plato and Aristotle, Dr. Johnson is a decent place to start. We eventually began calling such people “critics.” In the 19th Century this habit gets a big boost from the Romantics and then writers like Matthew Arnold.

Many of the debates about what things mean and why have inherent tensions, like: “Should you consider the author’s time period or point in history when evaluating a work?” or “Can art be inherently aesthetic or must it be political?” Other such tensions could be listed. Different answers predominate in different periods.

In the 20th Century, critics start getting caught up in academia (I. A. Richards is one example); before that, most of them were what we’d now call freelancers who wrote for their own fancy or for general, educated audiences. The shift happens for many reasons, and one is the invention of “research” universities; this may seem incidental to questions about Deconstruction, but it isn’t, because Deconstruction wouldn’t exist, or wouldn’t exist in the way it does, without academia. Anyway, research universities get started in Germany, then spread to the U.S. through Johns Hopkins, which was founded in 1876. Professors of English start getting appointed. In research universities, professors need to produce “original research” to qualify for hiring, tenure, and promotion. This makes a lot of sense in the sciences, which have a very clear discover-and-build model in which new work is right and old work is wrong. This doesn’t work quite as well in the humanities, and especially in fields like English.

English professors initially study words—these days we’d primarily call them philologists—and where they come from, and there is also a large contingent of professors of Greek or Latin who also teach some English. Over time English professors move from being primarily philological in nature towards being critics. The first people to really ratchet up the research-on-original-works game were the New Critics, starting in the 1930s. In the 1930s they are young whippersnappers who can ignore their elders in part because getting a job as a professor is a relatively easy, relatively genteel endeavor.

New Critics predominate until the 1950s, when Structuralists seize the high ground (think of someone like Northrop Frye) and begin asking about what sorts of universal questions literature might ask, or what universal qualities it might possess. After 1945, too, universities expand like crazy due to the G.I. Bill, and then baby boomers go to college. Pretty much anyone who can get a PhD can get a tenure-track job teaching English. That lets waves of people with new ideas who want to overthrow the ideas of their elders into academia. In the 1970s, Deconstructionists (otherwise known as Post-structuralists) show up. They’re the French theorists who are routinely mocked outside of academia for obvious reasons:

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

That’s Judith Butler, quoted in Steven Pinker’s witty, readable The Sense of Style, in which he explains why this passage is terrible and how to avoid inflicting passages like it onto others. Inside of academia, she’s considered beyond criticism.

In each generational change of method and ideology, from philology to New Criticism to Structuralism to Poststructuralism, newly-minted professors needed to get PhDs, get hired by departments (often though not always in English), and get tenure by producing “original research.” One way to produce original research is to denounce the methods and ideas of your predecessors as horse shit and then set up a new set of methods and ideas, which can also be less charitably called “assumptions.”

But a funny thing happens to the critical-industrial complex in universities starting around 1975: the baby boomers finish college. The absolute number of students stops growing and even shrinks for a number of years. Colleges have all these tenured professors who can’t be gotten rid of, because tenure prevents them from being fired. So colleges stop hiring (see Menand’s The Marketplace of Ideas for a good account of this dynamic).

Colleges never really hired en masse again.

Other factors also reduced or discouraged the hiring of professors by colleges. In the 1980s and 1990s court decisions struck down mandatory retirement. Instead of getting a gold watch (or whatever academics give), professors could continue being full profs well into their 70s or even 80s. Life expectancies lengthened throughout the 20th Century, and by now a professor who gets tenure at, say, 35 could still be teaching at 85. In college I had a couple of professors who should have been forcibly retired at least a decade before I encountered them, but forcible retirement is no longer possible.

Consequently, the personnel churn that used to produce new dominant ideologies in academia stops around the 1970s. The relatively few new faculty slots from 1975 to the present go to people who already believed in Deconstructionist ideals, though those ideals tend to go by the term “Literary Theory,” or just “Theory,” by the 1980s. When hundreds of plausible applications arrive for each faculty position, it’s very easy to select for comfortable ideological conformity. As noted above, the humanities don’t even have the backstop of experiment and reality on which radicals can base major changes. People who are gadflies like me can get blogs, but blogs don’t pay the bills and still don’t have much pull inside the academic edifice itself. Critics might also write academic novels, but those don’t seem to have had much of an impact on those inside. Perhaps the most salient example of institutional change is the rise of the MFA program for both undergrads and grad students, since those who teach in MFA programs tend to believe that it is possible to write well and that it is possible and even desirable to write for people who aren’t themselves academics.

Let’s return to Deconstruction as a concept. It has some interesting ideas, like this one: “he asks us to question not whether something is an X or a Y, but rather to get ‘meta’ and start examining what makes it possible for us to go through life assigning things to ontological categories (X or Y) in the first place” and others, like those pointing out that a work of art can mean two opposing things simultaneously, and that there often isn’t a single best reading of a particular work.

The problem, however, is that Deconstruction’s sillier adherents—who are all over universities—take a misreading of Saussure to argue that Deconstruction means that nothing means anything, except that everything means that men, white people, and Western imperialists oppress women, non-white people, and everyone else, and hell, as long as we’re at it capitalism is evil. History also means nothing because nothing means anything, or everything means nothing, or nothing means everything. But dressed up in sufficiently confusing language—see the Butler passage from earlier in this essay—no one can tell what if anything is really being argued.

There has been some blowback against this (Paglia, Falck, Windschuttle), but the sillier parts of Deconstructionist / Post-structuralist nonsense won, and the institutional forces operating within academia mean that that victory has been depressingly permanent. Those forces show no signs of abating. Almost no one in academia asks, “Is the work I’m doing actually important, for any reasonable value of ‘important’?” The ones who ask it tend to find something else to do. As my roommate from my first year of grad school observed when she quit after her M.A., “It’s all a bunch of bullshit.”

The people who would normally produce intellectual churn have mostly been shut out of the job market, or have moved to the healthier world of ideas online or in journalism, or have been marginalized (Paglia). Few people welcome genuine attacks on their ideas and few of us are as open-minded as we’d like to believe; academics like to think they’re open-minded, but my experience with peer review thus far indicates otherwise. So real critics tend to follow the “Exit, Voice, Loyalty” model described by Albert Hirschman in his eponymous book and exit.

The smarter ones who still want to write go for MFAs, where the goal is to produce art that someone else might actually want to read. The MFA option has grown for many reasons, but one is as an alternative for literary-minded people who want to produce writing that might matter to someone other than other English PhDs.

Few important thinkers have emerged from the humanities in the last 25 or so years. Many have in the sciences, which should be apparent through the Edge.org writers. As John Brockman, the Edge.org founder, says:

The third culture consists of those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are.

One would think that “the traditional intellectual” would wake up and do something about this. There have been some signs of this happening—like Franco Moretti or Jonathan Gottschall—but so far those green shoots have been easy to miss and far from the mainstream. “Theory” and the bad writing associated with it remain king.

Works not cited but from which this reply draws:

Menand, Louis. The Marketplace of Ideas: Reform and Resistance in the American University. New York: W.W. Norton, 2010.

Paglia, Camille. “Junk Bonds and Corporate Raiders: Academe in the Hour of the Wolf.” Arion Third Series 1.2 (1991): 139–212.

Paglia, Camille. Sex, Art, and American Culture: Essays. New York: Vintage, 1992.

Falck, Colin. Myth, Truth and Literature: Towards a True Post-modernism. 2nd ed. New York: Cambridge University Press, 1994.

Windschuttle, Keith. The Killing of History: How Literary Critics and Social Theorists are Murdering Our Past. New York: Free Press, 1997.

Star, Alexander. Quick Studies: The Best of Lingua Franca. New York: Farrar, Straus and Giroux, 2002.

Cusset, Francois. French Theory: How Foucault, Derrida, Deleuze, & Co. Transformed the Intellectual Life of the United States. Trans. Jeff Fort. Minneapolis: University Of Minnesota Press, 2008.

Pinker, Steven. The Sense of Style: the Thinking Person’s Guide to Writing in the 21st Century. New York: Viking Adult, 2014.


* Here is one recent discussion, from which the original version of this essay was drawn. “How To Deconstruct Almost Anything” remains popular for the same reason academic novels remain popular: it is often easier to criticize through humor and satire than direct attack.

Finally! Someone else notices that the best instructors aren’t necessarily the most credentialed

Finally! Someone else notices that a lot of academic practices don’t make any sense: “Pictures from an Institution: Leon Botstein made Bard College what it is, but can he insure that it outlasts him?” makes me like Bard; this in particular stands out: “In the thirty-nine years that Botstein has been president of Bard, the college has served as a kind of petri dish for his many pedagogical hypotheses [. . . including] that public intellectuals are often better teachers than newly minted Ph.D.s are.” Why isn’t anyone else following the Bard model?

The question is partially rhetorical. College presidents and trustees are probably systematically selected for conformity, but I’ve gotta think there are other people out there who are going, “Aping the Ivy League model is not going to work for us. What can we do differently?” The current order of things, driven by bogus ranking systems, discourages this sort of thinking. Colleges love the rhetoric of being different, but very few follow that rhetoric to actually being different. Perhaps rising costs will eventually force them to differentiate or die. Then again, the article says that Bard may be on its way to death or drastic restructuring because of financial problems. Still, I don’t see overspending as being fundamentally and intrinsically linked with other issues. Instead, it seems that being a maverick in one field may simply translate to being a maverick in many, including places one doesn’t want mavericks (like finances).

A few weeks ago I wrote about donating to Clark, my alma mater. Although I still think Clark a good school, I’d love to see it move in a more Bard-ish direction. The current president and trustees, however, appear to have come up through the system and do not seem like shake-it-up types, regardless of their rhetoric.

Women like to watch other women

Although I would classify this as speculative, Noah Berlatsky writes that “Women Appreciate Good Booty-Shaking, Too” and that “Women can look at sexualized images of women all day every day.” Moreover, and perhaps more importantly, they do: look at the covers of magazines targeting women and you’ll mostly find… pictures of other women. Look at magazines devoted to men and you’ll also tend to find… women. Men like looking at women and apparently women like looking at other women.

This link may be NSFW, though it is primarily text, but the blog Pornhub Insights (which probably got started after the success of OkTrends, the data-driven blog about online dating) has a post called “What Women Want” that observes that “Pornhub’s Lesbian category is the leading favorite among the ladies, with Gay (male) following close at second place.” That’s a revealed preference not dissimilar to what magazines targeting women have also found.

A couple of people have observed that my Flickr photos tend to feature more women than men, and likewise the photos on this blog. That may be true, but it may be because I’m giving audiences what they want.

I don’t have any good theories or research on the politics, culture, or biology of why we might be seeing this phenomenon, but I do have a rationale for why I choose what I choose. Not every photographer will care about this, of course, but those who are contemplating what subjects people want to see should at least be cognizant of popularity.

“All American fiction is young adult fiction: Discuss”

Via Twitter Hollis Robbins offers a prompt: “‘[A]ll American fiction is young-adult fiction.’ Discuss.” Her takeoff is A. O. Scott’s excellent “The Death of Adulthood in American Culture,” which you should go read; oddly, it does not mention the show Entourage, which may be the best contemporary narrative artifact / fantasy about the perpetual party.*

American fiction tends toward comedy more than “young-adult” because comedy = tragedy – consequences. AIDS fiction is tragic because people die. Most contemporary heterosexual love stories are comedy because the STIs tend to be curable or not that important; people who are diligent with birth control rarely get pregnant. Facing death, starvation, or other privations has always been the adult’s lot, and adults who made sufficiently bad choices regarding resource allocation or politics died. Think of the numerous adults who could have fled the area between Russia and Germany before 1914 and didn’t, or the ones who didn’t flee after 1918 and before the Holocaust. The example is extreme but it illustrates the principle. Frontier and farm life was relentlessly difficult and perilous.

Today by contrast we live in a world of second chances. America is a “victim,” although that is the wrong word, of its own success. If you color more or less inside the lines and don’t do anything horrendous, life can be awesome. People with an agreeable and conscientious disposition can experience intense pleasures and avoid serious pain for decades; not everyone takes to this (see for example the works of Michel Houellebecq) but many do. The literary can write essays, the scientists can do science, the philosophers can argue with each other, the business guys have a fecund environment, and the world’s major problems are usually over “there” somewhere, across the oceans. If we ever get around to legalizing drugs we’ll immediately stabilize every country from Mexico to Chile.**

What are the serious challenges that Americans face as a whole? In the larger world there are no real or serious—“serious” being a word associated with adulthood—ideological alternatives to democracy or capitalism. Dictatorships still exist but politics are on the whole progressing instead of regressing, Russia and parts of the Middle East excepted.

One could reframe the question of all American fiction being young adult fiction to: “Why not young adult fiction?” Adults send young people to war to die; adulthood is World War II, us against them, thinking that if we don’t fight them in Saigon we’ll have to fight them in Seattle. Adults brought us Vietnam. Young people brought us rock ‘n’ roll, rap, and EDM. Adults want to be dictators, whether politically or religiously, and the young want to party and snag the girl(s) or guy(s) of their dreams.

Adulthood is associated with boredom, stagnation, suburbs, and death. Responsibility is for someone else, if possible, and those who voluntarily assume responsibility rarely seem to be rewarded for it in the ways that really count (I will be deliberately ambiguous on what those ways are). Gender politics and incentives in the U.S. and arguably Western Europe are more screwed up than many of us would want to admit, and in ways that current chat among the clerisy and intellectual class do not reflect or discuss. If adulthood means responsibility, steady jobs, and intense fidelity, then we’ve been dis-incentivizing it for decades, though we rarely want to confront that.

Many people are so wealthy and safe that they are bored. In the absence of real threats they invent fake ones (vaccines) or worry disproportionately about extremely unlikely events (kidnapping). Being a steady person in a steady (seeming) world is often thus perceived as being dull. In contemporary dating, does the stolid guy or girl win, or does the hot, funny, unreliable guy or girl win?

A lot of guys have read the tea leaves: divorce can be a dangerous gamble while marriage offers few relationship rewards that can’t be achieved without involving the legal establishment or the state more generally. A shockingly large number of women are willing to bear the children of men they aren’t married to: 40.7% of births now occur to unmarried women, and that number has been rising for decades.

Why take on responsibility when no one punishes you for evading it and arguably active irresponsibility is rewarded in many ways, while safety nets exist to catch those who are hurt by the consequences of their actions? That’s our world, and it’s often the world of young adulthood; in fiction we can give ourselves monsters to fight and true enduring love that lasts forever, doesn’t have bad breath in the morning, and doesn’t get bored of us in four years. Young adult fiction gives us the structure lacking in the rest of our lives.

Moreover, there has always been something childlike in the greatest scientists and artists. Children feel unconstrained by boundaries, and as they grow older they feel boundaries more and more acutely. I’m not about to argue that no one should have boundaries, but I am going to argue that retaining an adult version of the curiosity children have and the freedom they have is useful today and in many cases has always been useful.

The world has gotten so efficient that vast pools of money are available for venture capitalists to fund the future and tech guys to build or make it. The biggest “problem” may be that so many of us want to watch TV instead of writing code, but that may be a totally bunk argument because consumption has probably always been more common and easier than production.

In this world fiction should tend towards comedy, not the seriousness too typically associated with Literature.

If American fiction is young adult fiction, that may be a sign of progress.***


* Another show, Californication, mines similar themes but with (even weaker) plots and total implausibility. Here is an essay disagreeing with Scott: Adulthood Isn’t Dead.

** Breaking Bad and innumerable crime novels would have no driving impetus without drug prohibition. The entire crime sector would be drastically smaller almost overnight were we to legalize drugs and prostitution. That would be a huge win for society but harmful to fiction writers.

*** Usually I eschew polemics but today I make an exception.
