Tuesday, April 13, 2010

HLC's annual meeting and the Ed Department

Today's Inside Higher Ed has a story on the Higher Learning Commission's annual meeting, which featured a presentation by Molly Corbett Broad, President of the American Council on Education. Broad's talk (according to the IHE story) focused in part on the Ed Department's stance toward postsecondary ed.

Think the ED is more favorably disposed toward postsecondary ed now that the Bush administration is out of office? Think again. As Doug Lederman, one of IHE's terrific ed reporters, put it in summarizing Broad's talk,
"While the Bush administration often seemed to dislike and disparage higher education, the Obama administration will be tough on colleges because its officials value higher education and believe it needs to perform much better, and successfully educate many more students, to drive the American economy."

So -- not so much. The push here, as in the Bush era, is for "accountability," which is here defined as "teaching students what they need to achieve individual success in the 21st century economy." (See David Labaree for more on this, btw.)

The pressure here, as Lederman notes, is (and will continue to be) to "measure student learning." As Lederman says:

Pressure to measure student learning -- to find out which tactics and approaches are effective, which create efficiency without lowering results -- is increasingly coming from what Broad called the Obama administration's "kitchen cabinet," foundations like the Lumina Foundation for Education (which she singled out) to which the White House and Education Department are increasingly looking for education policy help.

Writing programs really have the opportunity to be part of the conversation on this, though. There are several things that we can do:
1. Have a hand in establishing the frame. What is "student learning" in a writing program? In an institution? The issue now isn't talking with people in our own programs -- with one another -- about how we're improving what we do, though this is of course important. Instead, it's about communicating with folks outside our programs and even our institutions so that we all share a vocabulary about what we mean by "student learning" in writing classes. Of course, this is predicated on the assumption that we've done this work together, in our courses/programs... and if we haven't, we need to! With that in mind...

Another issue that often comes up in this frame is "comparability" (see the introduction and discussion of the VSA, for example -- it is intended to facilitate comparability). We need to have a hand in establishing that frame as well. Check with your institution -- your IR folks or even your administration -- to learn about your institution's comparables. How are they developed? (They're often based on size, student demographics, location, etc.) Does this "work" for your writing program? Often, the answer is no. That's because when your institution puts together its comparables, it isn't thinking about student learning; it's thinking about other things. So put together your own comparables. Who has a program that looks like yours? Can you do some projects together? If you're part of the NSSE Consortium for the Study of Writing in College, you've got a jump on this; if you're not, you can a) use the NSSE CSWC questions if they're of interest/use; b) think about other projects.... Also see Pagano et al.'s article in CCC (December 2008) for a really interesting attempt at cross-institutional assessment.

2. Develop and follow through on assessments. Included here is being really careful to define what we mean when we set up those assessments: What is "critical thinking"? "Critical reading"? Whatever terms we use in those assessments, we need to make sure we define them carefully -- ideally, in conjunction/alliance with others on our campus. (See Bob Broad's _Organic Writing Assessment_ for more on this.)

3. Build alliances across institutions. In conjunction with a jointly authored (with NCTE) white paper, the Council of Writing Program Administrators has an assessment gallery that features a set of principles drawn from best practices in postsecondary writing assessment, linked to actual assessments from real 2- and 4-year institutions across the country. We need to speak from specific examples at our institutions and link them to specific examples from other institutions... to show how we talk about the elements of student learning in writing classes and programs.



Why Writing and Educational Policy?

As the reader(s) of my previous short-lived blog (last updated in 2008) may recall, education policy is a particular interest of mine. In fact, for the last five or so years, it has been the focus of a lot of my research life. As a writing teacher, the director of a writing program, and an active member of an organization representing people with an interest in postsecondary writing instruction and program direction, I'm pretty attuned to things that might affect (in ways good and perhaps not so good) the experience that students have in our postsecondary writing classrooms.

There's a lot happening with regard to education policy that has the potential to affect (see above re: caveat) that work *right now*, every day. I write about these things in published pieces. The problem, though, is that those published pieces take a long time to *be* published, whereas a blog is right here! Right now! So I've decided to re-launch my blogging career in order to:
1) Share policy news that might be important for postsecondary writing instructors and administrators;
2) Think, together with reader(s), about the implications of this news; and
3) Work together to be proactive.

There are some fundamental assumptions that will run through everything I write here; since this is an intro., it makes sense to spell those out. (I'm sure others will come up, too, and I'll add them as I go.)

Assumption 1: The landscape is changing. For the last 125+ years, postsecondary education has largely relied on itself -- that is, faculty have relied on faculty through the system of peer review -- to define, develop, and maintain "quality" in our courses, programs, research, and most else. This virtual circling of the academic wagons came, in the late nineteenth century, partially in response to a desire (shared by many members of elite cultures in the U.S. at the time) to fend off values and ideologies that were different from, and potentially threatening to, those of the dominant culture. (This analysis, btw, comes from a lot of people -- but especially Thomas Bender and James Carey.) Since the 1970s, though, in part because of huge shifts in the American economy, there has been increasing interest among those outside of the academy (often businesses) in making sure that they have a hand in defining what students should learn. (See Richard Ohmann on this.) Fast forward to the early 2000s: there's now a perception that students aren't learning what they need in order to participate in the 21st century economy, and that students don't understand what that is. Thus, the analysis (which is _everywhere_) says, there is a need for ___________ (the Department of Education, think tanks, ACT, ETS...) to step in and make sure that postsecondary folks are being "held accountable" for doing what we're supposed to do, and to make their work visible to audiences outside of the academy. Of course, this is just what the system of peer review was designed to _prevent_, in some ways... so in part, we can look to ourselves when we think about how this current situation has evolved as it has.

Assumption 2: Working from research-based best practice principles is crucial, *and* building alliances is crucial, *and* sometimes there's some tension between those things. When I mention "being proactive," I am of course implying that there are things we think are important and might want to advance in the face of the policy issues facing us. And indeed, I do think that being proactive is important. On the other hand, I think that developing alliances is also important. So -- a tricky balance. On the one hand, we have our interests. On the other, we have a desire to build alliances with others whose interests might overlap with ours in some ways but might seem not to in others. It's easiest to "get" this when it's attached to specifics, so I'll write more about it when it comes up.

Assumption 3: There are *a lot* of players in this "postsecondary education" situation, and not all of them share the same motivations. Perhaps, postsecondary reader(s), you are at an institution that is part of the Voluntary System of Accountability? (So many of you are!) And perhaps, as part of this, your campus administers the CAAP or the MAPP, or even the CLA? This is an interesting study in "players" and potentially conflicting motivations. The VSA was created by the group now known as the Association of Public and Land-grant Universities (formerly NASULGC) as one strategy for "making learning visible" (see Assumption 1). The acronym tests are among the neither valid, reliable, nor educationally sound ways that campuses can ostensibly demonstrate what students are learning (except that they don't do that at all). And who's doing the development? ACT, ETS, the Council for Aid to Education. At least the latter group consists largely *of* educators, and its test includes *some* writing (though it's out of context and holds about as much water as, say, the SAT writing exam).

Enough introduction, though there will be more...