The Second Research, Part I

Facts and Myths

Few tasks are more like the torture of Sisyphus than institutional reporting in higher education: in its repetitious cycle, the forms annually emptied, the blank cells filled routinely, year after year, submission by submission. In response to what many regard as the irredeemable chore of external submissions, the Association for Institutional Research (AIR) offers its Forum as a brief respite from the drudgery of the profession — a respite, that is, if the discerning conference-goer chooses to ignore the obligatory concurrent sessions afforded to the US News and World Report and Common Data Set representatives. This year’s (2015) theme seems more muted than those of recent forums: the materials invite attendees to be “future-focused” in order to answer the “disruptive innovations” pushing our profession to join the pantheon of higher education leadership. Several sessions directly address AIR professionals’ opportunity to grow into and meet the promise of their institutional roles, while many, many more provide would-be case studies, sponsored technological solutions, and exemplary approaches to managing our future-focused challenges.

The abundance of concurrent sessions notwithstanding, a certain something remains lacking for the IR professional – the semblance of an academic discipline at the core of the profession. Like the data provided to fill the empty cells of the external requesters, the session titles and abstracts read like a catalog of discrete, utilitarian responses to external requests. To be certain, disciplinary conferences (in my experience, historical and social science conferences) feature a wide variety of specialists who present papers with no discernible connection to other scholars. Nonetheless, such conferences are held together by the sensibility for and camaraderie of an academic discipline. In contrast, AIR panelists and presenters often emphasize institutional researchers’ paramount need to understand their particular institutions and local institutional leaders. Experienced IR professionals generally advise new IR professionals to connect with others at similar institutions and provide little guidance on the rigor or training necessary to act in accordance with a set of rules or skills proper to a field of study. Instead of the continuity of a discipline, that is, institutional researchers’ point of reference as practitioners tends toward the discontinuities of context.

Everything I have learned about institutional research I learned through the Association for Institutional Research and its Forum. There is no doubt that the AIR Board and executive leadership seek positive improvements for IR professionals in general. In particular, for the past five years, I have applied ideas from publications and presentations by a group of institutional researchers who subsequently formed the Association for Higher Education Effectiveness to advance leadership opportunities for IR professionals. To be clear, the following is not intended as a critique of AIR or its Forum.

The aim in Part I is to explore whether there is a dearth of discipline in the IR profession and, if so, why.

A Dearth in Higher Education Literature

I recently read in its entirety the 21st volume of Higher Education: Handbook of Theory and Research, published in 2006.[1] While the contributors offered sage analysis and insights of benefit to institutional researchers, at the conclusion of my reading I recognized that the phrase “institutional research” was neither indexed nor substantively mentioned in any formal sense. The publication touts itself as “[a]n indispensable resource for administrators, researchers and policymakers in higher education,” so I was struck by the absence of a key administrative function from the consideration of theory and research in higher education. I subsequently reviewed the abstracts from the past thirty years of the publication and discovered, to my astonishment, that only three abstracts directly reference “institutional research.”

Only the first, from 1989, substantively addresses the role of institutional research in higher education, examining “the institutional memory of higher education organizations by examining campus archives and offices of institutional research.” In 2000, an article on undergraduate persistence surmised that its “tentative findings suggest investments in institutional research and student success initiatives.” The most recent reference, in 2007, analyzes “the methods and goals of contemporary academic and institutional research investigating the effectiveness of collegiate educational practices,” and concludes unfavorably “that the search for ‘best practices’ in its current form will be ineffectual.”[2]

In short, one of the premier publications on theory and research in higher education provides no substantive direction or clear description for the practice of institutional research in higher education. Although institutional researchers clearly advance the academic careers of the authors who use IPEDS data or any of the myriad resources collected from the student surveys that local institutional researchers must administer, the academicians featured in the handbooks offer no concrete or comprehensive set of central ideas for the practice of institutional research as an aspect of a discipline dedicated to the study of higher education. The same appears to be the case for Research in Higher Education, the premier journal associated with the Association for Institutional Research, in whose abstracts the phrase “institutional research” appears only 20 times in the last 20 years and 6 times in the past five years – roughly once per year. Recently, the reference occurs more often than not in connection with student survey research such as the National Survey of Student Engagement or the Cooperative Institutional Research Program at UCLA.

Academicians and publishers, it may be said, neglect to “set a place at the table” for the practitioners of institutional research within the discipline of Higher Education. Presidents and Provosts certainly learn the importance of Enrollment Management, Finance, IT, Student Life, and the like for institutional leadership and management through academic literature and higher education publications. So, if not the publication that touts itself as the “indispensable resource for [higher education] administrators…,” what substantial body of literature shapes executive leaders’ understanding of and regard for Institutional Research?

HIP-story

Whereas student survey research seems to be a favored methodology in reference to institutional research, a similar lacuna can be found in the conclusions drawn from the national benchmark surveys administered by institutional researchers. One growing body of literature in higher education endorsed by the Association of American Colleges & Universities, High Impact Educational Practices (HIPs),[3] based on the National Survey of Student Engagement (NSSE) and its supplemental surveys of faculty and beginning students, provides a quick primer on the significance of institutional research in the context of student engagement.

A review of the HIP titles reveals a peculiar emphasis in the practices adopted and favored by student engagement researchers:

    First-Year Seminars and Experiences
    Common Intellectual Experiences
    Learning Communities
    Writing-Intensive Courses
    Collaborative Assignments and Projects
    Undergraduate Research
    Diversity/Global Learning
    Service Learning, Community-Based Learning
    Internships
    Capstone Courses and Projects

 
None of these “high impact practices” speaks directly to the role of institutional researchers (or of administration in general) in the engagement and success of students, and the same appears true of most of the recent literature featured by the NSSE researchers (as of May 28, 2015).

The HIPs are a collection of “best practices” applicable to the classrooms and curricula of every institution. Strategic planning teams and task forces charged with improving student retention and engagement soon discover a body of literature that narrowly hinges on faculty initiatives for ten curricular solutions “that research suggests increase rates of student retention and engagement.” In deference to academic freedom, consequently, the work of such teams then succeeds or fails based on the readiness of the faculty to adopt the HIPs and to assume responsibility for improving retention at the institution. Moreover, the HIPs require costly investments in faculty and curriculum development, educational resources, or student learning enrichment opportunities, investments that ignore the importance of affordability in a college education or limit the ability to extend high impact practices to all students. Most curiously, none of these high impact practices is accompanied by clear directions on how best to conduct evidence-based assessments to measure the return on investment in HIP ventures.

At least 13 sessions featured the NSSE and/or HIPs in the abstracts submitted to the 2015 AIR Forum. Understandably, higher education academicians and graduate students, who do not have access to student information systems, focus their scholarly research and presentations on the readily available resources that afford indirect evidence of student engagement on American campuses. Student engagement, however, is clearly an “issue” (per Terenzini[4]) for institutional researchers, and only one issue in a large portfolio of issues under institutional researchers’ purview. An attendee can certainly wonder why a single issue (and a single source of data on student engagement from survey research) warrants such a presence at the annual Forum. More problematically, one could ask, has the immense emphasis on student survey research and on curricular activities retarded the development of more direct measures of student engagement recorded in the student information systems and the implementation of more direct means to improve student retention at colleges and universities?

The Myth of Context Intelligence

Terenzini’s categorization of institutional research as a combination of technical/analytical, issue, and context intelligences at first blush appears to offer a framework for a discipline. Institutional researchers and IR academicians frequently reference the categorization as an interpretative tool in AIR presentations and feature the schematic in IR practitioner literature. Yet the manner in which the framework embeds the technical, issue, and contextual intelligence of institutional research in the interests of the local institution undermines its effectiveness as a basis for the formation of a discipline. Terenzini acknowledged problems with his categorizations at the 2012 AIR Forum[5] and in his subsequent restatement, one of the few pieces explicitly on institutional research published in Research in Higher Education, but the influence of his original formula remains apparent in the words and advice offered in AIR Forum sessions.

To cite his 2012 restatement of the three intelligences:[6]

The first is the body of technical knowledge and information required to be an IR professional on any given campus. This foundational knowledge includes familiarity with the institution’s information and data structures, variable names and operational definitions, and the counting rules that are the basic building blocks of the institution’s major data systems (e.g., admissions, registration, personnel)… (my emphasis)

Substantive Tier 2 intelligence includes knowledge of the kinds of issues and decisions that middle- and upper-level administrators in functional units face. (my emphasis)

The contextually intelligent IR professional not only commands the analytical and personal skill sets and understands the topical domains that comprise Tiers 1 and 2, but also understands how to blend those two intelligence sets in a detailed and nuanced grasp of the context and culture of a particular IR operation—the institution where IR professionals practice their craft. (my emphasis)

In all three descriptions, the intelligence of the institutional researcher is not native to the profession and is prioritized by the particularities of the institution and its other administrators. The one form of intelligence that comes closest to forming the basis for a disciplinary approach, Tier 2, is positioned as the domain of other administrators in “functional units.” In short, the institutional researcher does not bring disciplinary expertise to the three tiers other than the ability to gain the intelligence of others. IR knowledge is always derivative, perhaps a reason that “intelligence” is the key word in Terenzini’s formalization. Consequently, the primary reference point for institutional researchers is “the requester,” the person or external entity with the knowledge to know what to request. Institutional researchers may add value by parsing what the requester wants (issue) and what the requester needs (technical / analytical) in the tiers of intelligence, but ultimately the institutional researcher is an indirect agent in institutional leadership and decision-making (context) or in the production of knowledge in higher education (scholarship and policy). Unsurprisingly, a discipline does not rise or take shape from the servicing of requests formed by others and prioritized according to the immediate psychological dispositions of others (“wants” and “needs”).

A second complication from this conceptualization of institutional research intelligence is the premise that professional specialization so thoroughly originates in the particularities and peculiarities of the institution. The premise implies that each institution is a discrete, unique, and incommensurable unit to a degree that limits the potential for institutional research to form a generalizable body of knowledge from its practice. While Terenzini’s revised characterization of “context” implodes the narrow scope of his first description of the tier, the original concept remains operative in AIR Forum discussions and the profession’s self-image. In his revision, context is nationalized and internationalized, yet rather than make the leap and root institutional research in that global context, Terenzini recommends:

At the very least, campus-based IR professionals should be aware of these broader national and international issues and their potential impacts for their campuses. (my emphasis)

Again, the institutional researcher need only “be aware of” (be vaguely cognizant of) non-particular contexts, and only to the extent that these issues impact the particularity of the institutions served. In the three tiers, the particularity of the institution is the one common foundational consideration for institutional researchers. The grounding of IR intelligence in this fashion is not conducive to disciplinary advancement as a field of study. The prioritization of institutional particularity as the basis of IR intelligence is, in effect, tantamount to staring at a single row of data in a data set and marveling at its unique qualities. Is it any wonder, then, that institutional researchers have difficulty finding a seat at the proverbial decision-making table when the vision of higher education in institutional research is regarded as unique to each institution and based solely on what others (i.e., functional units) deem most important to the institution?

A third questionable aspect of the three tiers is the manner in which social science methodologies and practices are lumped together with local “technical” intelligence in Tier 1: “The second [aspect] of Tier 1 intelligence is analytical and includes familiarity with and skill in using the tools of social science research.” In effect, the distinctive features of a generalizable social science discipline are rendered indistinguishable from the technical particularities of the institution: mastery of quasi-experimental research design is of the same order of intelligence as understanding how one institution’s student information system codes a first-time, full-time student. Second, and more importantly, Terenzini suggests that institutional researchers merely “be familiar with” social science research (i.e., not achieve mastery of a social science), implying that the institutional researcher – as in other domains – operates with a derivative understanding of “good social science and education research.” The potential for disciplinary mastery and sophistication, as the institutional researcher scales from technical research to issue research to contextual research, is foreclosed by the summary relegation of social science to a familiarity at the “technical/analytical” tier of IR intelligence.

Institutional research, defined in these terms, is entirely a secondary research practice and tacitly positions other administrative and academic functions as the primary domains for leadership and management. The burden for institutional researchers is to find a seat at the table of decision-making when other administrators and institutional stakeholders are always already deemed more expert and more knowledgeable about higher education in general and in practice. Does not this burden, though, stem from the conceit – a chronic self-deceit – that an institutional researcher gains intelligence by assimilating the primary knowledge of requesters and decision-makers from other functional units, and only through many years of service at one particular institution: a jack or jill of all trades, the master of none?

Summary to Part I

No place in higher education literature, no platform in High Impact Practices, no social scientific foundation for expertise: these are the facts and myths of institutional research. These facts and myths annually renew the lament of institutional researchers that the profession has too little impact on decision-making and a nominal role in higher education leadership.

To the contrary, when properly constituted, institutional research can be the tip of the spear for social science research, strategic planning, and continuous improvement for higher education. Our solutions, outlined in the 18 concrete services listed on the h|r home page, scale in complexity and vision to meet the value of the opportunities and the difficulty of the challenges for higher education today. We partner with institutional effectiveness and planning officers to build infrastructure for institutional research and to express the studied expertise and leadership that practitioners of institutional research bring to higher education.

NOTE: A rough draft was inadvertently posted on June 3 and the revised draft posted on June 5.

Footnotes

  1. J.C. Smart (ed.), Higher Education: Handbook of Theory and Research, Vol. XXI (Spring, 2006).
  2. From Higher Education: Handbook of Theory and Research: J. R. Thelin and M. V. Krotseng, “Higher Education’s Odd Couple: Campus Archives and the Office of Institutional Research,” Vol. V (1989); M. Bonous-Hammarth, “Value Congruence and Organizational Climates for Undergraduate Persistence,” Vol. XV (2000); A. C. Dowd and V. P. Tong, “Accountability, Assessment, and the Scholarship of ‘Best Practice,’” Vol. XXII (2007).
  3. The full white paper is available at: http://secure.aacu.org/store/detail.aspx?id=E-HIGHIMP (accessed May 28, 2015).
  4. P. T. Terenzini, “On the nature of institutional research and the knowledge and skills it requires,” Research in Higher Education Vol. 34, Iss. 1 (Feb. 1993), pp. 1–10.; P. T. Terenzini, “‘On the Nature of Institutional Research’ Revisited: Plus ça Change… ?” Research in Higher Education Vol. 54, Iss. 2 (March 2013), pp. 137-148.
  5. I attended the 2012 Forum in New Orleans and I have applied Terenzini’s categorizations for the creation of an office responsible for institutional research. The thoughts that follow first occurred to me in a discussion group at the 2015 Forum and seek only to suggest possible improvements to the characterization of the tiers.
  6. References to Terenzini, “‘On the Nature of Institutional Research’ Revisited…,” from the online document as of May 28, 2015.
