Some more thoughts on assessment

I spent Saturday morning reading the first part of the George Kuh et al. edited Using Evidence of Student Learning to Improve Higher Education for a faculty reading seminar on assessment. As readers of this blog know, I have some pretty strong opinions about assessment and the rise of the campus assessocracy. At the same time, I'm not closed-minded about refining the tools that we use to evaluate our performance as faculty members and adjusting our approaches to an ever-changing body of students.

Ikenberry and Kuh set out in their introduction to understand the growth of assessment in recent years in higher education and the sometimes unrealized potential of assessment data to change the way that we teach and our students learn. They say all the right things. We need assessment for political reasons, but for it to be useful as well, we have to move beyond an attitude of compliance and embrace the potential of these begrudgingly assembled data sets. Assessment, for the contributors to this book, provides the evidence necessary to make constructive and informed changes to how we understand teaching and learning in the university classroom.

The introduction frames three chapters that look at various ways in which assessment data can be used more effectively to improve learning in higher education. To be clear, these contributions are well-meaning in their efforts to sidestep the various elephants in the higher education room and to make the best of an approach to learning improvement that carries with it as many political consequences as potential benefits.

The book begins with the idea that the “quality of student learning at colleges and universities is inadequate,” and while it’s hard to disagree with calls for continuous improvement, it is also such a generalized point of departure that it makes any specific response difficult. The transformation of higher education over the past five decades has been so significant that such simple claims should be avoided. Certainly higher education has changed, and there will always be a need for faculty, administrators, and students to engage our dynamic world in new ways, but this has always been the case. Our generation’s “crisis in higher education” is no more pressing than those of past generations, and identifying a dynamic system as “inadequate” does little to encourage the kind of collaborative (rather than adversarial or obstructionist) approaches the book seeks to advocate.

Here are my thoughts:

1. Turning a Blind Eye. The elephant in the assessment room is that these practices have emerged in parallel with the rise of a highly paid administrative culture in higher education. Highly paid administrators tasked with improving efficiency, eliminating redundancy, and streamlining the educational process have centralized authority and risk transforming faculty from specialized professionals into employees subordinate to a top-heavy administrative bureaucracy.

While I’m not sanguine that most universities are capable of (or genuinely interested in) changing administrative culture any time soon, faculty will continue to chafe at the perceived loss of autonomy. The various authors refer to “initiative fatigue” as part of the trend that transformed assessment from an opportunity to a burden, but they don’t seem willing to admit that assessment represents a key manifestation of the tension between an administration probing the limits of its authority and faculty autonomy.

2. Disciplinarity. The first three or four chapters in the book do little to recognize the significance of disciplinary practice in student learning. Disciplines have long acknowledged that the vitality of their fields of study depends upon continuous refinements in teaching and learning. These improvements have tended to be incremental, embedded within disciplinary practices, and drawn from experiences across a wide range of campuses.

Unlike assessment, disciplinary discussions tend to be decentralized and grounded in craft approaches to knowledge production. There is no doubt that conversations about teaching in the disciplines generally lack the quantitative edge frequently embraced as the basis for “evidence-driven” improvements in student learning. At the same time, the failure to acknowledge the presence of rich and ongoing disciplinary conversations about learning and teaching, especially in a book focused on making assessment data more useful on campus, is significant.

If compliance culture bedevils the effective use of assessment data, it would perhaps behoove those committed to campus-wide assessment to expand the scope of assessment to include existing practices at the disciplinary level. Tapping these disciplinary conversations will admittedly be difficult because they tend to be far more informal and irregular than structured campus-wide assessment initiatives, but I suspect there would be great value in starting the assessment process with the question: “how do you improve teaching and learning in your discipline?”

3. Research Design. One of the key problems with the vast bodies of campus-wide assessment data is that most of it is designed to track a rather elusive problem: how do we engooden learning in higher education? With this or other similarly broad research questions – largely driven by the need to produce data for accreditation or other accountability programs – it is hard to imagine the resulting data having immediate or regular utility at the level of a single class or even a departmental curriculum.

It seems to me that good research design is more focused in the questions that it asks and the data that it produces. More focused research questions tend to involve more focused data collection practices and do not typically require (or encourage) the kind of continuous data collection at the core of most assessment strategies.

To be fair, the University of North Dakota offers funding for focused assessment projects, but as far as I can tell, this data is not recognized as part of the larger university assessment protocols. More problematic still, this data (or the analysis of it) is not particularly visible for use by the rest of the faculty (although in some cases, specific faculty research is made available). We need a white paper series that features specific research and makes data available for wider critique and use.

4. Where does this lead? My old friend David Pettegrew has a saying: “There’s always more archaeology.” He usually pulls this out when I’m ranting about the need to get back into the field and collect more data. David’s quip is meant to remind me that collecting more data does not always result in more knowledge. It also serves as a useful reminder that collecting data for the sake of collecting data is not a very useful enterprise. 

The broad idea of continuously assessing student learning is not bad, but the idea of continuous improvement is difficult to sell in a culture where resources are increasingly scarce and diminishing returns represent a real disincentive to ongoing research. Typical research design produces a result, and “always more archaeology” is a call to keep the goals of data collection in mind when doing research. The ultimate goal of assessment may be continuous improvement, but this is hardly a sustainable objective.
