The Wealth of Institutions, Part I

Origin of Institutional Research and the Administration of the Work

Having already endeavored to estimate the necessary expenditures on institutional research functions at the typical U.S. college or university – roughly $1,000,000 per year, driven largely by the base need for compliance with governmental and accreditation policies – a few questions may be posed of direct relevance to the Presidents and Provosts of American institutions of higher learning: 1) to what extent are the costs of institutional research attributable to understaffed institutional research offices and the decentralization of institutional research functions; 2) is there an alternative method for organizing institutional research such that its accountability for institutional effectiveness is made explicit and its contributions to generalizable knowledge well cataloged; and 3) what revenue or return on investment may be associated with the effective administration of institutional research offices?

For many presidents and provosts today, institutional research may not amount to much more than a localized practice that services reporting and compliance with federal, state, and other external agencies. As a consequence, such chief executives may regard the annual national expenditures by higher education on the administration of virtual institutional research offices as the cost of doing business imposed by federal bureaucracies or poor educational policies. As one President commented almost forty years ago, “The Office of Institutional Research has three essential objectives: (1) to develop and update routinely institutional profiles, (2) to complete and file ‘on a timely basis’ (to use the jargon of our time and conditions) required by federal and state reports, and (3) to conduct fundamental research into education itself.” The first and third objectives, however, would become increasingly untenable as the “proliferation of reporting demands” of that era redirected the focus of institutional researchers to reporting compliance and rendered the profession’s designation “an evolving misnomer.”[1] In 2008, the editors of New Directions for Higher Education published a collection of essays entitled “Institutional Research: More than Just Data,” suggesting the extent to which institutional research had evolved as a misnomer in the ensuing thirty years. The Editor’s Notes comment, “Offices of institutional research do deal with data, lots of data… However, most offices provide services that go beyond merely counting or reporting.” The series of articles then seeks to inform higher education professionals, presumably, of the ways in which institutional research differs from mere data.[2]

As outlined in prior briefs, concurrent with the expansion of external reporting, scholars of higher education (as a field of study) with near unanimity redefined the profession of institutional research as applied research that supports decision-making at one particular institution and that, as a rule, does not contribute to generalizable knowledge about higher education or higher education administration. For presidents and provosts in higher education, the fundamental propositions regarding institutional research offered by scholars of higher education since the 1970s present important implications that reinforce the misnomer of institutional research. In the first respect, the knowledge and understanding of institutional researchers are highly localized and will not contribute to chief executives’ general understanding of higher education or the leadership qualities for higher education administration. In the second respect, institutional researchers’ basic “intelligence” is technical and analytical, suited to the production of “facts and figures about an institution.” Leaders of small-to-medium colleges, the most numerous, will learn that their institutional research offices are immature, in “childhood” or “adolescence,” likely incapable of carrying out “the most sophisticated research projects” as at doctoral and Research I universities. Moreover, these technical and immature institutional research offices often seek to function as “information authorities” at the institution. Facing that prospect, one finds it difficult to fault the executive leaders of small- and mid-sized colleges who have concluded that their institutions may be better off with a virtual office of institutional research or with a single person capable of no more than the coordination of just the “data.”[3]

Presidents and provosts of doctoral-granting or Research I universities may find solace in the proposition that their institutional research offices have the contextual intelligence, maturity, and sophistication to deliver better support for decision-making and significant research on higher education. Independently directed survey research by the National Association of System Heads, “the association of chief executives of the 44 colleges and university systems of public higher education in the United States and Puerto Rico,” reaches a regrettably contrary conclusion about the state of institutional research in the nation’s major public colleges and universities, worth quoting in full:

Higher education is going through a period of rapid change, faced with an imperative to increase student access and success without diluting quality and in the face of real financial constraints… Against this backdrop of demand for IR, the picture that emerges from this study is of a field that is at best unevenly positioned to support change. IR offices are running hard and yet many are still falling behind, deluged by demands for data collection and report writing that blot out time and attention for deeper research, analysis and communication. Many do not have the information they need to get at the performance questions of most interest to them, their boards or public officials, either because it doesn’t exist or because it’s not collected in a way that admits of analysis. The analytic functions in most systems and campuses remain topically stove-piped, with the named “IR” office focused primarily on student and student related research, with reporting and any research in other topical areas (resource use, efficiency and effectiveness, and personnel) handled by the budget and human relations offices. The overall ability of IR offices to use data to look at issues affecting many of the cross-cutting issues of the day — such as the connections between resource use and student success — is nascent at best.[4]

Unsurprisingly, given an assessment of institutional research capacities at public doctoral and Research I universities that differs dramatically from the one offered by academic researchers in higher education, the system leaders and chief executives of the leading public institutions do not see eye to eye with institutional researchers on whether the profession has the capability to adapt incrementally to “the needs of the future” or whether there must be “fundamental change” and “‘outside’ help” (p. 26) to re-examine the concepts, assumptions, and strategy of institutional research (to cite Tyrrell’s prescient warning).

A chief executive, however, may not know, and will not learn from white papers by system associations or from research by higher education scholars, that the capacities and standards currently being set for 21st-century institutional research trace their origins to the first administrative institutional research offices organized and fostered by college and university presidents. Institutional research offices of the 1940s, 1950s, and 1960s engaged in the types of “research, analysis, and communications… that encompass systems and institutional learning and improvement” proposed by the National Association of System Heads. Moreover, in the mid-twentieth century, the methods for the practice of institutional research more closely followed social scientific practices to advance generalizable knowledge about higher education and, in the application of their methods, institutional researchers deliberately and systematically explored the relationship between institutional resource allocations and student success (again, a NASH criticism). Although many of the original institutional researchers in the bureaus or offices of institutional research were faculty designated for reasons of expertise or interest, when the Association for Institutional Research organized in 1965, many of the original members worked in administrative offices and fully 40% did not hold academic rank.[5] In an important respect, the early history of institutional research is a testament to the importance of executive leadership in directing institutional research offices to produce generalizable knowledge about higher education and higher education administration.

College presidencies succeed and fail based on the quality of institutional research, planning, and continuous improvement at their institutions,[6] and, yet, the history of institutional research since the 1970s shows that chief executives of higher education largely disengaged from institutional research about the same time that scholars of higher education defined institutional research as support for decision-making at a particular institution and institutional researchers accepted a diminished role and function in the production of generalizable knowledge about higher education. The following brief contrasts the accomplishments of early institutional research offices under the management and leadership of presidents to the current conditions of institutional research recorded by scholars of higher education and the NASH survey. The comparison, one hopes, will illuminate how virtual offices or elaborate profusions of institutional research squander the wealth of institutions – both in terms of finances and knowledge – and, one suspects, will foreshadow the standards and fundamental changes that chief executives will have to undertake for the future of institutional research on higher education.

Of the Causes of Deficiencies from the Practice of Institutional Research and of the Order according to which its Imperfections are Naturally Distributed among the Different Ranks of Higher Education Institutions

The skill, efficiency, and judgment with which institutional research is applied in any institution determine the abundance or deficiency of the annual contributions to continuous improvement at the institution, depending on the proportion of resources spent on institutional research activities that contribute to generalizable knowledge about higher education versus those spent on external submissions or studies of only particular relevance to the institution. By definition, and in practice from small colleges (research on higher education) to doctoral and Research I universities (the NASH “IR” survey), most colleges and universities invest in institutional research to satisfy compliance with external agencies, whether mandated or voluntary,[7] and to investigate research questions of relevance only to the particular institution, conducted by any administrator “of the college or university [who] may have a responsibility for institutional research,” regardless of expertise.[8] It should come as no surprise, then, that the NASH survey discovered, “There is a good deal of variability in the way the IR function is configured between systems and campuses,” with the designated IR office focused on “reporting about student and enrollment patterns” while the budget and human resource offices performed “analytics” about resources and personnel.[9] Academic researchers of higher education who consider institutional research one of their domains of specialization have, as recorded in the documents of the profession since the early 1970s, counseled executives to organize institutional research in exactly this manner.[10] What is surprising is that it took more than forty years before independent research was conducted to determine whether chief executives of higher education were receiving sound advice from the academic literature.[11]

Since 2008, J. Fredericks Volkwein has represented institutional research as practiced by organizations in a “golden triangle,” most recently formed by three purposes: “institutional reporting and policy analysis,” “planning, enrollment, & financial management,” and “quality assurance, outcomes assessment, program review, effectiveness, accreditation.”[12] These categorizations can be refined, by dropping the term “institutional” and eliminating some redundancy, as 1) reporting, 2) planning, and 3) effectiveness – the three primary research objectives for which a decision-maker may seek institutional research. Any subject matter studied by the institution may fall into one of these three categories; take retention, for example. To the first end, reporting, each year the institution reports to IPEDS and various other entities the number of students retained from the prior fall term’s cohort of first-time freshmen. Next, for planning purposes, the institution may project future enrollments and institutional FTE students using, as one factor, the trend in retention rates of first-time student populations from past measurements. Thirdly, for effectiveness, the institution may study significant differences between retained and non-retained first-time students in order to determine where or how to improve the retention rates of entering students. At each step, the complexity of the research increases: from measuring a cohort headcount at two points in time to calculate a retention rate for reporting, to applying retention rates as a trend to project enrollments for resource planning, and finally to studying retention as an institutional outcome with underlying and discernible causes that may be improved.
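To make the three levels concrete, the following is a minimal sketch in Python, using hypothetical cohort figures (every number is invented for illustration), of how the same retention measurement serves reporting and planning; the effectiveness question requires student-level study, taken up later in the narrative below.

```python
# Minimal sketch with hypothetical figures: the same retention measurement
# serves reporting (a single rate) and planning (a projection from the trend).

# Reporting: fall-to-fall retention of the first-time cohort.
cohort_fall_2013 = 500        # hypothetical first-time freshmen, fall 2013
retained_fall_2014 = 375      # of those, still enrolled in fall 2014
retention_rate = retained_fall_2014 / cohort_fall_2013
print(f"Retention rate for reporting: {retention_rate:.1%}")        # 75.0%

# Planning: apply the recent trend to project next year's returning sophomores.
recent_rates = [0.78, 0.76, 0.75]          # hypothetical three-year trend
avg_rate = sum(recent_rates) / len(recent_rates)
projected_freshmen = 520                   # hypothetical recruitment target
projected_sophomores = projected_freshmen * avg_rate
print(f"Projected returning sophomores: {projected_sophomores:.0f}")

# Effectiveness: the rate alone cannot say which students leave or why; that
# requires student-level records and a separate study.
```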

As currently practiced, whether in a one-person office at small colleges or under the elaborate profusion of a public university, institutional research functions are rarely organized to scale with the complexity of even this basic institutional research taxonomy. Institutional research on retention initiated “by requests for data” more or less encounters problems, at some point along the way, in a manner consistent with the following description:

*****

The President asks for an enrollment projection for the upcoming year based on expected retention. Retention reporting is a well-established institutional research function, and there will be a low-level registrar or programmer responsible for reporting retained student headcounts to the federal government. The responsible employee, however, may not have organized the institutional submissions into a fact book or widely distributed tables recording longitudinal retention rates. Even if so, an employee whose main role in retention is to count and report the number of students will likely not have the background or skills to build an enrollment projection model from the recruitment and retention trends for students. That research objective then falls to another person, likely an accountant, who seeks out the retention rates and, having never worked with student records, soon discovers that the retention rates reported to the federal government apply only to first-time freshmen. Retention rates for transfer students and students at the sophomore level and up (including all graduate programs) are not reported externally – oh, and the admissions office has the admitted student data. Through a series of email exchanges with the Information Technology division and the Admissions Office, the accountant-type secures the information necessary to trend and project enrollments and discovers a disturbing downward trend. After the enrollment projections reach the president, s/he then asks if the institution’s peers are also experiencing enrollment declines. Nobody in the room can answer the question because no one is aware that the federal government organizes and publishes colleges’ submitted data in its IPEDS Data Center.
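The projection work described here amounts to a simple cohort-survival model. A minimal sketch, with hypothetical persistence rates and headcounts (none drawn from any actual institution), might look like the following:

```python
# A rough cohort-survival projection with hypothetical persistence rates and
# headcounts, the kind of model that must be assembled once it is clear that
# only first-time freshman retention is reported externally.
persistence = {            # share of each level returning at the next level
    "freshman": 0.75,      # freshman -> sophomore
    "sophomore": 0.85,     # sophomore -> junior
    "junior": 0.90,        # junior -> senior
}
current = {"freshman": 500, "sophomore": 420, "junior": 380, "senior": 350}
new_freshmen = 480         # hypothetical recruitment estimate for next fall

projected = {
    "freshman": new_freshmen,
    "sophomore": current["freshman"] * persistence["freshman"],
    "junior": current["sophomore"] * persistence["sophomore"],
    "senior": current["junior"] * persistence["junior"],
}
print("Projected fall headcount:", round(sum(projected.values())))
```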

Either by dint of an expensive consultant or an email exchange with colleagues, someone at the institution learns how to make peer comparisons and secures comparison tables for enrollments, the first-year retention rate of first-time freshmen, and the six-year graduation rate. The results show that the research-deficient institution is declining at a rate that outpaces local peers. The president then asks which types of students are leaving the institution before graduation, and everyone looks to the head of academics or academic support. The head of academic support, saddled with the project because the Provost already has too many responsibilities, soon learns that the institutional reporter just runs a SQL query each year to get the headcount and does not store an electronic file recording which students are retained. A new, and even more extensive, series of emails and meetings with IT ensues in order to secure a data set to produce more trend tables with students sorted by gender, race and ethnicity, standardized test scores, high school GPA, and so on. The tables show that the institution is doing a poor job retaining students with certain demographic characteristics and with average or below-average standardized test scores and high school GPAs. After reviewing the tables, the President then asks how the institution can improve the retention rate of the students who are “at risk” of withdrawing from the institution.
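The trend tables the head of academic support finally assembles are, in essence, retention rates computed by cohort and subgroup. A minimal sketch with pandas, using an invented extract aggregated by cohort year and high school GPA band (the field names and figures are hypothetical), illustrates the shape of the work:

```python
import pandas as pd

# Hypothetical extract, aggregated by cohort year and high school GPA band,
# standing in for the data set finally secured from IT.
records = pd.DataFrame({
    "cohort_year": [2012, 2012, 2012, 2013, 2013, 2013],
    "hs_gpa_band": ["3.5+", "3.0-3.4", "<3.0", "3.5+", "3.0-3.4", "<3.0"],
    "enrolled":    [210, 180, 110, 220, 175, 105],   # entering headcounts
    "retained":    [180, 135, 60, 190, 130, 55],     # returned the next fall
})

# Retention rate by subgroup and cohort year, arranged as a trend table.
table = (records.assign(rate=records["retained"] / records["enrolled"])
                .pivot(index="hs_gpa_band", columns="cohort_year", values="rate"))
print(table.round(2))
```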

Clearly, the education of students is the Provost’s responsibility, and s/he must take on a new project. A team of faculty and administrators from academic support is organized into a committee to study ways to improve retention, or a consultant is hired – either way, the costs to the institution begin to increase rapidly. The head of the committee is an esteemed member of the College or Department of Education who, like many in the field, just so happens to prefer qualitative research methods. The committee members engage in extensive readings on “best” or “high impact” practices and conduct focus groups with students retained from prior years[13] to determine which practices to prioritize for implementation. The President eventually receives a report recommending that the institution organize (more) learning communities, service learning and study abroad opportunities, first-year seminars, and the like, all of which will require faculty stipends and remissions to create and teach because the “best” practices require more hours than normal for class preparation. Either that, the report suggests, or the Admissions Office has to stop admitting so many unprepared students.

Upon reading the report, the President forwards it to the human resources director to estimate the costs to implement the committee’s recommendations. The HR department manages its own submissions, so employee records are more or less available, but the HR director has no information about average class sizes or the catalog of course offerings. Another series of emails with IT ensues, along with many questions as to why HR needs student enrollment records (to comply with IT audit standards), but the HR director eventually receives the information necessary to project costs. The President reviews the report and sets it down, astonished by the cost estimates for the implementation of “best” practices to improve retention. The Provost is called to a special meeting regarding the committee’s report. If the Board is to be asked to invest that sum of money in faculty development for “best” practices, the President will have to say how much retention rates will improve. Here, the research becomes dicey for many institutions. Everyone involved in the research to this point has performed it with the native intelligence to count, to request SQL queries from IT, to perform routine calculations for planning on a calculator or in MS Excel, and to read the literature on student retention in higher education. An effectiveness study to predict student success requires specialized skills and professional training. If lucky, the college will have a business or computer science department with a specialist in data mining or a social scientist with notable experience in regression analysis. If not, the institution will require a consultant. In any case, somebody is about to receive a chunk of the institution’s resources in the form of a contract or remissions (plus the adjunct costs corresponding to the remission). The request goes forward with the requisite exchanges between the consultant and the data management experts of the IT department.
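The effectiveness study the consultant or in-house specialist would deliver is, at bottom, a predictive model of student persistence. A minimal sketch using scikit-learn’s logistic regression on simulated data (the predictors, coefficients, and records are all invented for illustration) shows the kind of analysis required:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated student records (all predictors, coefficients, and outcomes are
# invented) standing in for an institutional data set.
rng = np.random.default_rng(0)
n = 1000
hs_gpa = rng.normal(3.2, 0.4, n)
sat = rng.normal(1080, 120, n)
first_gen = rng.integers(0, 2, n)
logit = -6 + 1.6 * hs_gpa + 0.002 * sat - 0.4 * first_gen
retained = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit a retention model and score a hypothetical "at risk" applicant profile.
X = np.column_stack([hs_gpa, sat, first_gen])
model = LogisticRegression(max_iter=1000).fit(X, retained)
prob_return = model.predict_proba([[2.9, 1000, 1]])[0, 1]
print(f"Estimated probability of returning next fall: {prob_return:.2f}")
```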

After the President receives the predictive analysis, the results will show that the improvements should go in one of three directions: 1) no implementation because costs are not recovered, 2) full implementation because costs are certain to be recovered, or 3) the most likely, a pilot implementation to see if the improvements work as designed. If the first, then the research process returns to the Provost to form a new committee to make new recommendations. If the second or third direction, the new program and interventions will require program evaluations to determine if the interventions are performing as predicted and designed – again, an effectiveness study, requiring more institutional resources to be dumped into consultants or remissions. At about this point, though, five to ten years have passed since the initial request for enrollment projections, and the President announces to the campus community that s/he has taken a new presidential appointment at another college or university – in part, frustrated by the inability to get answers to the simple questions s/he asks about student attrition and improving retention. Over the years, the pace of the work on improving retention waxed and waned as the enrollment budget targets were met or missed. Work during the summers often ground to a halt when faculty, committee members, or key personnel were on leave or vacation. The new President on campus will soon learn, while seeking leadership synergies with key constituents,[14] that many in the faculty and academic support were equally frustrated with the former president because of all the pressure to improve retention, and that the real problems the institution faces are caused by an Admissions Office that admits too many students ill-prepared for college. Soon after, the chief enrollment officer leaves voluntarily or by request of the (new) President. The next strategic plan, largely crafted by faculty, will state the goal of doubling the number of honors students, those who earn combined Critical Reading / Math SAT scores of 1300 and above, without realizing that only 11% of the high school population earns 1300 or higher SAT scores and that the only such students who attend this particular institution receive 75% to 100% unfunded tuition discount rates. Five to ten years later, the new President departs, having not met general freshman recruitment targets but also having produced a million-dollar budget shortfall due to the success of the honors program.
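The cost-recovery question behind the three directions above reduces to a back-of-the-envelope comparison of intervention costs against the net tuition revenue of additionally retained students. A sketch with hypothetical figures follows (a single-year view; a fuller study would count tuition over the remaining years to degree):

```python
# Back-of-the-envelope cost recovery with hypothetical figures (single-year
# view; a fuller study would count tuition over remaining years to degree).
cohort_size = 500
net_tuition_per_student = 18_000    # hypothetical net of discounts
baseline_retention = 0.75
predicted_retention = 0.79          # hypothetical lift from the interventions
intervention_cost = 450_000         # stipends, remissions, adjunct backfill

added_students = cohort_size * (predicted_retention - baseline_retention)
added_revenue = added_students * net_tuition_per_student
print(f"Additional retained students: {added_students:.0f}")
print(f"Added net tuition revenue:    ${added_revenue:,.0f}")
print(f"Net of intervention cost:     ${added_revenue - intervention_cost:,.0f}")
```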

*****

While not every college or university with a virtual office or an elaborate profusion of institutional research will experience the above problems in an effort to improve retention, every institution will surely do so in several other areas. The IPEDS submissions record nearly every facet of institutional operations in some way: admissions, first-year students, total student body, 12-month enrollment, degree completions, graduation rates, academic libraries, human resources, and finance. Each submission records anywhere from dozens to hundreds of variables. A question by the President about any one aspect of those submissions will engender the elaborate confusion described above if a trend table inadvertently measures some kind of distress or decline at the college or university. The entire institutional research enterprise of the college or university thus careens from one crisis to the next with no comprehensive plan for institutional effectiveness. In each crisis, a half-dozen or more individuals become responsible in turn for a project at some point along the way because somehow their expertise in performing an operative function makes them the most qualified to conduct institutional research. The fragmented and byzantine systems of institutional reporting fail to facilitate the investigation of basic research questions regarding planning, and each new question must be met with an extensive new project involving ad hoc “requests for data” to the information technology specialists. When planning and projections call for action and the measurements of an effectiveness study, the institutional research functions hit a wall or require the aid of external consultants. As the findings from the National Association of System Heads reveal, contrary to the conclusions of the report, the deficiencies of institutional research are not in the IR offices but in the virtual offices and elaborate profusions of institutional research functions in the public systems of higher education – that is, the unsystematic organization of institutional research functions by the systems and their universities in a manner that retards “holistic analytics about overall performance.”[15]

The Nature, Accumulation, and Deployment of Institutional Research

The perceived need by public system leaders for “outside” expertise to advance institutional research is a deplorable condition for the profession at doctoral and Research I universities, considering that the capabilities and standards that others now wish to uphold as the measures of effectiveness for institutional research offices stem directly from the origin of the profession in these institutions. If not deplorable, it is astonishing nonetheless, and indeed indisputable evidence of the need for fundamental change for institutional researchers at all institutions. The fault does not fall entirely at the doors of the institutional research offices. In the New Directions for Higher Education collection that urged higher education administrators to understand institutional research as “more than just data,” J. Fredericks Volkwein advocated for the structure and composition of “a professional bureaucracy” in institutional research offices, described as “a centralized professional bureaucracy… [that] reports to the president or the chief executive officer,” organized by core functions (reporting, planning, effectiveness) or by the core divisions of research clients (Academic Affairs, Business / Finance, Enrollment Management, Student Affairs).[16] Notably, what Volkwein offers as a formal framework for institutional research offices – as a corrective to the virtual offices and elaborate profusions of today – recaptures the origins of centralized, administrative institutional research offices in the mid-twentieth century. Three of the earliest publications on institutional research – from 1938, 1954, and 1962 – illustrate the critical role of presidential leadership and the centralization of institutional research functions in a single office for the advancement of generalizable knowledge and the accumulation of institutional effectiveness in higher education.

In 1938, Coleman R. Griffith described the development of the Bureau of Institutional Research at the University of Illinois in the 1930s.[17] “From the very first, [the Bureau of Institutional Research of the University of Illinois] has served as a fact-finding agency directly responsible to the president.” (248) In his usage, Griffith distinguished between “facts” and “data,” regarding the former as analyses “of some special agency to relate… several sets of data in a meaningful way and to judge the outcomes with the left eye on the cold figures and the right eye on the vital arts of teaching, learning, and research.” (249) Lastly, in an era prior to the mantra of particularity for institutional researchers, he emphasized “data… [as] a means to an end, the end being thoroughly educational in intent and outcome…” and “statistical data on teaching load and costs… [as] sturdy foundations for a type of self-appraisal which might well earn the respect of every thoughtful student of university administration.” (248) In 1954, the University of Minnesota Bureau of Institutional Research released a ten-year review of its program.[18] The Minnesota Bureau likewise developed from the initiative and “vision” of its president in the 1920s, initially established as the University Committee on Educational Research “to conduct systematic research on the University’s admissions practices…” (4) In the 1930s, the committee established its first central office and hired specialized staff after the subsequent president of the university approved “an annual budget for committee activities, which encouraged more long-range planning of central office services.” (4-5) By the late 1940s, the Bureau formed under a coordinator and two appointees “to share responsibility for directing the program of the central research office.” (7) In the early 1950s, “with a somewhat enlarged central staff and budget, the research program was broadened…” and the Bureau incorporated “experimental studies in the learning situation, the improvement of various types of evaluative instruments, and the general appraisal of outcomes of University instruction.” (8)

Figure 1 | Portfolios of Institutional Research Reported in Publications of 1938, 1954, and 1962


As seen in Figure 1, the University of Illinois and the University of Minnesota laid the foundations for the “golden triangle” of reporting, planning, and effectiveness. Moreover, at the time, there were no standards or specifications for the collection of statistical information regarding colleges and universities, and the Bureaus of Institutional Research defined variables according to the research question. More to the point, the description of research often focused on “the ends,” or objective, of the research questions. Griffith pursued academic program benchmarks and reviews to “excite questions and guide discussion. If the time-wise data from a department reveal trends away from a flexible average, responsible officers ought, perhaps, to know the reasons.” (251) Sixteen years later, the University of Minnesota Bureau of Institutional Research demonstrated a level of sophistication and a breadth of activity that dwarfs that of the University of Illinois (and its own research portfolio) from the 1930s, and rightfully congratulated itself for having “pioneered… the study of [the university’s] own educational programs.” (3) The institutional researchers at Minnesota thoroughly engaged in research for planning and allocating the use of institutional resources and advanced effectiveness studies to measure differential student learning outcomes based on regional demographics, student characteristics, and mode of instructional delivery (see Chapters 2, 7, and 10). In the latter respect, the University of Minnesota Bureau of Institutional Research stepped beyond the excitation of questions and discussion to deliver concrete direction and recommendations for the improvement of university policies, student services, and instruction.

After 1955, the first year of an era of “gestation” according to the Association for Institutional Research, institutional collaborations and statewide coordination of investigations began to take shape as more presidents organized offices or bureaus of institutional research at the nation’s universities and as standardization of research methods and practices formed.[19] In the early 1960s, institutional researchers held their first forums and, in 1962, L. J. Lins, Professor and Coordinator of Institutional Studies at the University of Wisconsin, edited a collection of articles contributed by institutional researchers from over twenty different colleges and universities. The contributor from Fordham University characterized the research activities first deployed at Illinois and Minnesota as “more or less routine areas [such] as faculty load, class size, [and] space utilization…” and noted that Fordham, like the other institutions represented in the publication, had moved on to issues “[o]f greater importance…. in such fields as student and faculty images of the University, comparison of senior characteristics with the nation-wide findings… of the career aspirations of college seniors, student retention, and prediction of academic success.”[20] As seen in Figure 1, institutions also had enlarged the analysis of cost and budget considerations to include inter-institutional faculty salary comparisons (University of Puerto Rico), non-instructional unit cost estimates and budgeting (University of California), and facilities planning (Wisconsin State College System). In addition, the contributors extended the portfolio of institutional research to encompass effectiveness studies for the advancement of continuous improvements locally but also as generalizable knowledge on higher education in the areas of gender equity (University of Wisconsin), independent studies courses (Antioch College), and the use of predicted vs. actual GPAs to identify inequities in grading by departments and faculty (Saint Louis University). In short, the spectrum of reporting, planning, and effectiveness studies in a portfolio of research of importance to policy and practice emerged in the early years of professionalization for institutional research.

While not every article provided a detailed history of the institution’s research functions, the formation of offices of institutional research by presidents to supplement ad hoc committees or projects, and their progression into the early 1960s, are evidenced by several references. In the opening piece of Lins’s edited collection, Loring M. Thompson of Northeastern University (Boston) specifically draws a connection between the advancement of institutional research and the higher education, government, and industry partnerships in fields such as “automation, space travel, and communications.” In the same class of partnerships, Thompson added “a movement… for universities to study themselves and to plan consciously for changes within themselves,” wherein offices of institutional research were “expected to make studies about the university itself” and “serve university presidents and faculty committees, providing a current, rational basis for leadership in education.” By the early 1960s, offices of institutional research had also reached a level of sophistication sufficient to consider forming standard methods and replicable approaches to the practice of self-study. The contributors from DePaul University carefully defined institutional research and “its purposes so it may be related to research as it is traditionally understood… just as scientific investigation in the natural and social sciences…” In that respect, the authors asserted, institutional research had to deliver more than “empirical evidence and method, but also the primary objective of true research: a contribution to theoretical knowledge.” The inquiry may “be pursued on a single institutional level…” but the investigators must also design research “that can be replicated and which can, therefore, be broadened to an inter-institutional inquiry.”[21] In effect, as had happened at the University of Minnesota in the early 1950s under the leadership of its presidents, the demands of – and for – offices of institutional research necessarily enlarged the scope of inquiry to multi-institutional investigations, which require a degree of autonomous agency that enables these offices to consider and contribute to theoretical and/or generalizable knowledge.

James I. Doi, Director of Institutional Research and Professor of Higher Education at New York University, roughly captured the nature, accumulation, and deployment of institutional research offices in his summary of the “conceptual framework” of the profession as presented and discussed by participants at the 4th National Institutional Research Forum in 1964. Recognizing the field’s establishment in the nation’s largest public and private institutions, he generalized five propositions for the practice of institutional research:

1) the evolution of institutions of higher education from small, relatively simple organizations to large-scale, complex organizations essentially bureaucratic in structure and mode of operation; 2) the emergence of a new style of administration… described as scientific; 3) the evolution of institutional research as a form of organization behavior characterized by sporadic studies and collections of data to that characterized by coordinated and systematic conduct of studies needed for institutional improvement; 4) the emergence of institutional research specialists; and 5) the professionalization of these men [sic].[22]

Doi emphasized that the order of the propositions was “intended to suggest that each succeeding development is a consequence of the preceding development” and that the professional institutional research officeholder originated in “the change in the nature of IR as a form of organization behavior.” (52) By 1964, the field of institutional research was congealing into a practice by professional specialists charged with the direction and administration of an institutionalized system of inquiry to advance a scientific approach to, and understanding of, higher education administration at complex universities. The subsequent distinctions between research on higher education and institutional research that have dominated the higher education literature since the 1970s – basic or applied research, academic or administrative function, scientific or managerial discipline, and higher education in general or the particular institution – had not yet convoluted the profession, and Doi does not hesitate to acknowledge that “institutional research as an organizational function has been and continues to be administration-oriented” (54) while maintaining a commitment to science and generalizable knowledge. The main challenge facing institutional research was that “[the] total institutional commitment on the part of both faculty and administration to the use of knowledge as the basis for decision-making has yet to evolve” (54) – in other words, the deficiencies in system, institutional, and faculty leadership to support the change advocated by the conceptual framework for institutional research offices centrally and directly overseeing a portfolio of research for reporting, planning, and effectiveness.

Conclusion to Part I

The quantity of useful and productive institutional research expenditures is constituted at every institution in proportion to the quality of research questions posed to set such functions into motion and the scientific methodologies with which said functions are deployed.

If this simple proposition has been known to institutional researchers since the 1960s, why then have the nation’s state university system heads taken fifty years to “discover” for themselves an interest and the need to “creat[e] a culture of openness to inquiry and willingness to use data to document and improve performance”?[23] How has the neglect by institutional and state system leaders overseeing institutional research functions become an indictment of the institutional research offices and officers in state universities and, more generally, of the profession’s ability to adapt to the demands of the future? The reliance on virtual offices and elaborate profusions of institutional research in the state university systems has been well documented by survey research regarding the roles and functions of institutional research since the origins of the profession. Whatever deficiencies exist in the institutional research profession in the present day reflect the lapse of leadership in the institutions and the systems which rejected or abandoned the promise of institutional research as envisioned fifty years ago. Since the 1970s, higher education leaders at large public universities have maintained that rude state in which institutional research affords no specialization and in which every division or department poses questions of planning and effectiveness for itself, without regard for the accumulation of knowledge and with no larger purpose than carrying on the business of the institution. The themes that emerge from the survey of institutional research and the report by the National Association of System Heads demonstrate the need for system heads and institutional researchers alike to reflect on the origins of the institutional research profession and the promise of centralized institutional research offices in the early 1960s.

Note: Post subject to edits through August 30, 2015.

Footnotes

  1. Donald N. Dedmon (President of Radford College), “Institutional Research: An Evolving Misnomer,” Vital Speeches of the Day Vol. XLIV, No. 16 (June 1, 1978), 482-83. Pres. Dedmon cites a report by Walter Cronkite suggesting that “government paper work” of all types cost the nation $100 billion per year and a case study indicating that federally mandated programs cost higher education institutions between 1% and 4% of their operating budgets at that time (1977).
  2. Dawn Geronimo Terkla, ed., Institutional Research: More than Just Data, New Directions for Higher Education No. 141 (Spring 2008).
  3. The paragraph summarizes prior criticism of scholarship on institutional research by higher education scholars, but also presents the implications in the order summarized in J. Fredericks Volkwein, “The Foundations and Evolution of Institutional Research,” in Institutional Research, 5-20.
  4. National Association of System Heads, “Meeting Demand for Improvements in Public System Institutional Research: Progress Report on the NASH Project in IR” (March 2014), accessed at http://www.nashonline.org/sites/default/files/nash-ir-report_1.pdf.
  5. Association for Institutional Research, The Association for Institutional Research: The First Fifty Years (Tallahassee, 2011), 57.
  6. Stephen Joel Trachtenberg, Gerald B. Kauvar, and E. Grady Bogue, Presidencies Derailed: Why University Leaders Fail and How to Prevent It (Baltimore, 2013). The authors emphasize the “failure to meet business objectives” as one of the six themes of presidential derailments at all levels, and as the first theme of failure by presidents of private liberal arts colleges (22-23).
  7. Jennifer A. Brown’s general classification in “Institutional Research and External Reporting: The Major Data Seekers”, in Terkla, ed., Institutional Research, 87-96.
  8. Joe L. Saupe, The Functions of Institutional Research, 2nd Edition, For the Association for Institutional Research (Tallahassee: 1990), 6. Accessed at https://www.airweb.org/educationandevents/publications/pages/functionsofir.aspx on July 17, 2015.
  9. NASH, “Meeting Demand,” 19.
  10. The Association for Institutional Research recently organized its History pages according to an email sent to members on July 8, 2015.
  11. In the most recent handbook for institutional research from 2012, J. Fredericks Volkwein, Ying (Jessie) Liu, and James Woodell note, “Most of what we know about the profession of institutional research comes from several multistate and national surveys of AIR and regional members.”
  12. In his original formulation, the triangle was formed by “(1) institutional research and analysis, (2) planning and budgeting, and (3) assessment, effectiveness, accreditation.” See Volkwein, “Foundations and Evolution,” 6-7. I prefer to reserve the phrase “institutional research” for a different designation than a corner of the golden triangle, so I am using his more recent restatement from J. Fredericks Volkwein, “Gaining Ground: The Role of Institutional Research in Assessing Student Outcomes and Demonstrating Institutional Effectiveness,” National Institute of Learning Outcomes Assessment, Occasional Paper #11 (Sept. 2011), 8 [Figure 2].
  13. This is to say, the research is conducted with students who have already been retained, revealing little-to-nothing directly about non-retained students.
  14. A reference to the “shared leadership” model described by Peter D. Eckel and Adrianna Kezar, “Presidents Leading: The Dynamics and Complexities of Campus Leadership,” in American Higher Education in the Twenty-First Century, edited by Philip G. Altbach, Patricia J. Gumport, and Robert O. Berdahl (Baltimore: 2011), 304.
  15. National Association of System Heads, “Meeting Demand for Improvements in Public System Institutional Research: Assessing and Improving the Institutional Research Function in Public University Systems” (February 2015).
  16. Volkwein, “Foundations and Evolution,” 14-16.
  17. Coleman R. Griffith, “Functions of a Bureau of Institutional Research,” The Journal of Higher Education Vol. 9, No. 5 (May 1938): 248-255. While Griffith does not provide the date the office was organized, he refers to data collected for 1923-24.
  18. Ruth E. Eckert and Robert J. Keller, eds., A University Looks at Its Program: The Report of the University of Minnesota Bureau of Institutional Research, 1942-1952 (Minneapolis: 1954).
  19. AIR, The First Fifty Years, 15; accessed at https://www.airweb.org/AboutUs/History/Pages/Books-Papers-Manuscripts.aspx (for members). The history cites J. W. Hicks and F. L. Kidner, California and Western Conference Cost and Statistical Study for the Year 1954–55 (Berkeley, CA: 1955) and the recollections of J. L. Doi, “The Beginnings of a Profession: A Retrospective View,” New Directions for Institutional Research, Iss. 23 (1979), 33–41.
  20. L. J. Lins, ed., Basis for Decision: A Composite of Current Institutional Research Methods and Reports for Colleges and Universities, released under The Journal of Experimental Education Vol. 31, No. 2 (Dec. 1962). The Fordham contributor was Francis J. Donohue, “A New Re-Accreditation Pattern Based on Institutional Research: The Self-Study at Fordham University,” 101-105.
  21. Loring M. Thompson, “Institutional Research, Planning, and Politics,” The Journal of Experimental Education Vol. 31, no. 2 (Dec. 1962), 89,91; Edward M. Stout and Irma Halfter, “Institutional Research and Automation,” The Journal of Experimental Education Vol. 31, no. 2 (Dec. 1962), 95.
  22. James I. Doi, “The Role of Institutional Research in the Administrative Process,” in A Conceptual Framework for Institutional Research, The Proceedings of the 4th Annual National Institutional Research Forum, ed. by Clarence H. Bagley (Pullman: 1964), 52. Membership records show that females held 10% of the positions in the AIR at its formation. See AIR, The First Fifty Years, 117.
  23. NASH, “Assessing and Improving the Institutional Research Function in Public University Systems,” n.p.
