The Wealth of Institutions, Part II

Introduction to Part II

As many institutional researchers and researchers of higher education know, Francis E. Rourke and Glenn E. Brooks asserted, in their work on the managerial revolution in higher education, “Institutional research lies at the heart of the trend toward the use of modern management techniques in higher education.” (44) What is perhaps less well understood, or recalled, is that their chapter on institutional research largely constituted a censure of the profession. Two professors of political science with no discernible connection to institutional research, Rourke and Brooks based nearly the entirety of their analysis on survey research, with little regard for the growing literature and body of research by institutional researchers evident in the first National Institutional Research Forums of the early 1960s. From beginning to end, their chapter on institutional research emphasizes the many “perils” of institutional research to the institution, to faculty, but, especially, to other administrators. In the final analysis, they recommended that institutional research offices of the future be “distinct apart from the business side of operations,” effectively breaking with the profession’s origins in administrative research and subjecting the work of the offices to “an academic orientation… bolstering the point of view of the faculty.”[1]

As if a testament to the success of their intervention, the problems and issues that vex the profession of institutional research to this day may be traced to the “perils” that, according to Rourke and Brooks, the profession poses to an institution, to faculty and administration alike. As they wrote,

The fact that the faculty should look with suspicion upon the establishment of an office of institutional research in a university hierarchy is not surprising, in view of the common propensity of academic man [sic] to look with alarm upon any apparent extension in the power of university administration. What is not so generally recognized, however, is the extent to which the advent of such an office may arouse anxiety within the ranks of university administration itself.(54)

The litany of perils is too long to cite in full, but the key discouragements for administrators, often contradicting each other, were: IR techniques are “‘burglar’s tools’ which might eventually allow outsiders to gain entry into aspects of university decisions” (45); “an office ordinarily has very little opportunity to develop into an instrument of long-range planning, helping a university president to look ahead and anticipate… five or ten years [in advance]” (49); “a division of finance… may well suspect that the new institutional research unit will pull some power from its own jurisdiction…” (54); “the centralization of information–and administrative power–in the IR office” (54); “the prospect for having wide publicity given to certain disadvantageous information” (55); and “the IR director [who] took it upon himself to write a sharp letter of protest against [the college president’s] proposal” (61). The hobgoblin of institutional research threatened the administration, if not the institution itself, and the leadership of presidents, finance officers, and all others who may be placed at a disadvantage if “data” were centralized in one office.

Fortunately, their survey research showed, “Most IR directors appear to accept a limited conception of their own research function within the university.” (65) As a solution for institutional researchers, faculty, and administration alike, if institutional largeness and external agencies made such offices a necessity on campus, the authors proposed to narrow institutional research to academic programming and leave administrative research to the individual units: “Under this arrangement, management problems connected with student registration, employee payrolls, and the utilization of physical facilities are not the responsibility of the office of institutional research.” (66) It was a proposal for a division of labor – the virtual office or elaborate profusion of institutional research functions – that mirrors the supposed shortcomings and inflexibility of the profession of institutional research today, according to the National Association of System Heads.

Of the Different Progress of Institutional Research in Different Institutions

While Rourke and Brooks worked to solve the prisoner’s dilemma that institutional research imposed on faculty and administrators due to institutional largeness and external agencies’ interference, two researchers who advanced a limited conception of institutional research work, Paul L. Dressel and Joe L. Saupe, both at Michigan State University, sharpened the wedge – basic research vs. applied research – to drive between research on higher education and institutional research.

Paul L. Dressel, the senior member, wrote in the first handbook for institutional research, “Institutional research is different from the research of faculty members in a number of ways. It does not share the mantle of academic freedom; it is primarily utilitarian and therefore has a distinctive set of values; and its ultimate success depends less on the research findings than on the promotion of action to alleviate functional weaknesses and to increase the effectiveness of the institution.”[2] The various chapters in the handbook describe aspects of the work that can be done for a particular institution, “to probe deeply into the workings of an institution for evidence of weaknesses or flaws,” a purpose and program of research with negative connotations alluding to peril. At the conclusion, he surveys the future and presents institutional research with a choice among three options: a) applied research for the effectiveness of the college, b) operations research regarding efficiencies and resource allocations, and c) data coordinators for other offices and faculty committees that conduct their own institutional research – the virtual office or elaborate profusion corresponding to Rourke and Brooks’s proposal. (23, 310-11)

Around the same period as Dressel’s handbook, Joe L. Saupe and James B. Montgomery, with the endorsement of the Association for Institutional Research, published The Nature and Role of Institutional Research: A Memo to a College or University, in which the distinction between basic research and applied research circumscribes institutional research’s knowledge production to local and ephemeral findings:

The critics seem to confuse institutional research, as we view it, with the more basic research on higher education carried out in the centers for the study of higher education and by scholars in higher education and related subject fields. Certainly the more fundamental research is essential and practicing institutional researchers would be proud to have the general researchers included with them in a broader category of those committed to institutional research. But institutional research is specific and applied and the other is general and theoretical; institutional research should not be expected to produce knowledge of pervasive and lasting significance, though on occasion it may. [my emphasis][3]

Saupe, credited by the Association for Institutional Research with drafting two definitive updates to the original statement of 1970 for the association, essentially states an epistemology – a theory of knowledge – for institutional research in general that categorically embeds the knowledge produced by such research in the particularity of the institution and the extemporaneity of the intentions (“applied”) directing the investigation. The ephemeral nature of institutional research knowledge complements the principle for the organization of institutional research according to the “needs” of the institution, thus ensuring that the practice of institutional research is rooted in the particularity of the institution. Qua ephemeral, institutional research knowledge does not entail accumulation and, thus, institutional research may not become self-directing or a program of research directed to the discovery of the next unknown. Institutional research is best left to the functional units who may perform “data studies that align with their own information needs” (AIR, 2015) and the offices of institutional research best deployed when responding to “requests for data.” In other words, Dressel and Saupe delineated an epistemology for institutional research that lends credence to the deployment of virtual offices and elaborate profusions of institutional research as advocated by two faculty researchers who were antagonistic toward the practice and administrative offices of institutional research.

In contrast to the Michigan State School of Institutional Research[4], consider the general prerequisites for scientific knowledge in Ernest Nagel’s philosophy of science: “[T]he organization and classification of knowledge on the basis of explanatory principles… is the distinctive goal of the sciences.” The same principle is evident in the National Research Council’s monograph on scientific research in education. Invoking “the metaphor of ‘accumulating’ knowledge,” the authors aim to call to mind two features of scientific knowledge: “First, it suggests that scientific understanding coalesces, as it progresses, to make sense of systems, experiences, and phenomena. The imagery also connotes the idea that scientific inquiry builds on the work that has preceded it.” These premises are the foundations for the three types of questions that scientific research on education, in general, poses: “description–What is happening? cause–Is there a systematic effect? and process or mechanism–Why or how is it happening?” Under the first type, education research focuses on measurement of the basic characteristics of students and institutions, theory development, various indicators of performance, and associations or correlations among variables. The second type of research question seeks to discover causal understanding – “Does x cause y?” – to determine whether specified or theorized inputs may change outcomes. The third type of research question, assuming the discovery of causal relationships, then addresses the process or mechanisms through which x produces y.[5] The gradations in both accumulating knowledge and calibrating questions are at the heart of scientific research in education – and, according to the Michigan State school of thought, they are precisely what the profession of institutional research is precluded from pursuing.

Figure 2 | Progression of Institutional Research along the Spectrum of Scientific Research Questions


As illustrated in Figure 2, the research subjects and questions posed by offices of institutional research between the 1930s and early 1960s fit the typology for scientific research questions in education. As noted in Part I, the work of the 1930s to define, specify, and measure student characteristics and program performance initiated the scientific program. By the early 1960s, the basic measures to understand “what is happening” in higher education had become “routine” and offices of institutional research acquired responsibilities to investigate “more significant” questions on behalf of administration and faculty (Fordhamn, 1962). As Griffith noted in 1938, “The data collected by a self-appraising agency do not often answer any questions of education policy, but they are always a fruitful source of questions.”[6] By the 1950s, studies by bureaus of institutional research had matured in some areas of investigation to the point of answering questions of educational policy and asking their own fruitful questions to direct further investigations. At the close of a review of graduate students in Education by the Bureau at the University of Minnesota, the author concludes, “The findings of these studies highlight the need for searching studies of the long-term outcomes of graduate instruction. The study reported in Chapter 18 represents a first attempt in this direction…”[7] Lastly, then, by the early 1960s, the offices of institutional research represented in Lins’s edited volume devised more complex research designs and conducted more studies entailing specialization and an understanding of prior findings, resulting in the need for (more) full-time staff in an administrative capacity.

The editors of the 1954 volume made clear that the studies reported in the volume represented work over a ten-year period and, in essence, the collection of research articles reflects the harnessing of “fruitful questions” that mark “key developments in program building at Minnesota, illustrating how the University has used this plan of self-study to appraise its practices, and to encourage experimentation along new lines.”(9) Later, in Lins’s 1962 publication, the introduction by Loring M. Thompson foresees institutional research “able to make long range studies and come to grips with methods for teaching creativity as well as academic excellence…. long-term studies of the whole educational process.” (91). In effect, self-study was changing over time from the study of the institution to the study of higher education in general, that is, from self-study to social science. The key “organizational device” in this transformation was the administrative office of institutional research born of institutional largeness and complexity at the premier public and private institutions of the time, to be sure, but nonetheless borne out by the principles and practice of institutional research as scientific research in higher education. The advances and possibilities are the basis for the hopeful and future-oriented perspective of early practitioners like Tyrrell and Doi.

To the misfortune of the profession, however, the Michigan State School of Institutional Research prevailed and arrested further advances in institutional research as a social scientific enterprise for forty years and counting. The institutional particularity and the ephemeral knowledge theorized for institutional research imposed practices that permanently secured “unsystematic variegation” of the office structure. As Rourke and Brooks wrote in 1966, “This review of the various activities which institutional research offices undertake on university campuses might well suggest that there are as many different roles such offices can play as there are universities.” In 1990, Alton L. Taylor recommended – ironically, in a work entitled Organizing Effective Institutional Research Offices – that institutional research offices in “large universities” be “located directly under the vice-president for academic affairs…” where the researchers may focus on the internal environment (academics) and “consider whether or not other suitable units could examine” the external environment (planning) — that is, Rourke and Brooks’s recommendation. In 2008, Volkwein cautioned readers “in generalizing about the practice of institutional research, because we know that organizational arrangements are highly variable from campus to campus and state to state.”[8] When the foundational documents of the profession as defined by the scholars of higher education recommend virtual offices and elaborate profusions, how could it be otherwise?

According to the natural course of institutional research, the greater part of the questions posed by an institutionalized system of inquiry is, first, directed to the compilation of descriptive statistics and correlations, afterwards to systemic and causal analysis, and last of all to investigations of process and mechanism for improvements. The virtual office or elaborate profusion of institutional research, anchored by the “needs” of the particular institution and directed “by requests for data” from functional unit leaders, ensures that institutional research functions rarely move beyond the first order of inquiry, descriptive statistics, and prohibits efforts of the second and third order by virtue of the complexity and costs entailed in retaining external consultants. An institutional research office must be able to scale its operations along the lines of the “golden triangle” from reporting to effectiveness and across the types of scientific research questions on education from description to mechanism. Most large institutions, to the contrary, have deliberately formed offices to perform only one aspect of each axis: descriptives and reporting.

Systems for a Portfolio of Institutional Research

Consider again the description of retention research initiated by a president at a small college or university with a virtual office or elaborate profusion of institutional research. Each time the president receives a response to the research question at hand, another question immediately presents itself. At each stage in the process, a different leader from one of the functional units takes on the responsibility without prior knowledge or input into the previous stage of the research. The information technology or institutional research office equipped only to “respond to requests for data” fields each stage in the process as if it is a wholly unrelated project driven by the “needs” of the functional units’ decision makers. And, yet, organized by the questions of the president to improve retention and assure the sustainability of the institution, the institutional research itself begets the need for more institutional research along the spectrum of scientific types of questions. That is, the program of institutional research is self-actualizing, forming an “organization and classification of knowledge on the basis of explanatory principles…” At the same time, according to the Michigan State School of Institutional Research, the process must begin as if for the first time, according to the “needs” of the institution as determined by a key executive, only to come to a halt or be interrupted when the key executive departs for another institution or the institution needs to address a new crisis.

As Philip H. Tyrrell astutely noted, the practice of institutional research proceeds along spectra of investigations requiring a designed program of research. Figure 3 visualizes institutional research along both spectra: the golden triangle from reporting to effectiveness study, and the types of scientific research questions as defined by the National Research Council’s Scientific Research in Education. The matrix results in nine modalities of systematic inquiry in which an office of institutional research may be deployed for the study of higher education. Viewed through this framework, it is easy to conceptualize the practice of institutional research as a “focused and balanced portfolio of research that addresses short-, medium-, and long-term issues of importance to policy and practice” – the fourth principle of scientific research in education. Before considering specific subjects or areas of research under each modality, the domain of each modality should be considered under the reporting, planning, and effectiveness headings, along with the manner in which scientific research questions lead to specialization and complexities that favor centralization of institutional research in administrative offices.

In the simplest form of reporting, the institution organizes its research functions for the reporting of statistical and qualitative information to internal and external clients based on internal standards or prejudices determined by the motivations or “needs” of the requester. At the second stage of reporting, the practice of institutional research gains a modicum of standardization with basic research instruments such as survey research, focus groups, or ethnographic studies that define variables in advance, and the breadth of measurable phenomena at the institution expands to include psychographic or attitudinal characteristics of students, faculty, and others in the educational setting of higher education. At the third stage of reporting, general standards for the phenomena of educational settings form and the operations of the institution tend toward greater automation in data collection through the use of student information and enterprise resource planning systems, but also in the measurement and reporting of higher education phenomena such as those statistics represented by the Integrated Postsecondary Education Data System (IPEDS). Institutions that have progressed into the second and third stages of reporting will soon discover that rudimentary correlations and projections from historical statistics facilitate planning, resulting in the first stage of institutional research in planning. These four modalities of institutional research may be carried out by virtual offices or elaborate profusions of institutional research when quality and consistency are of little importance to higher education leadership.

Figure 3 | Modalities of Institutional Research under Volkwein’s Golden Triangle and the Types of Scientific Research Questions


Depending on the degree to which institutional research in general has progressed toward standardization and automation, benchmarks and competitive analyses drawing on inter-institutional comparisons enable researchers to understand colleges and universities as members of higher education in general – or, “between college” research and analysis. At the third stage of planning, the institution employs predictive analytics or other forms of structural equation modeling to refine understandings and explanations of institutional efficiency, applying advanced statistical methods to student-record-level – or “between student” – data. After the institution develops the second- and third-order modalities of planning, the research will of necessity lend itself to the consideration of educational policy locally, and more broadly for higher education in general to the degree that institutional research in general becomes standardized and broadly organized. At the second phase of effectiveness studies, the institution attempts to identify the indicators or independent variables that logically signify the attainment of desired outcomes and goals of policies, practices, and designed interventions. Lastly, at the third stage, the effectiveness studies are defined in theoretical terms and the research is designed on sound social scientific principles to elicit findings and conclusions about the processes and mechanisms for the fulfillment of institutional missions and outcomes for higher education in general. The five higher-order modalities of institutional research – formed from the gradation in purposes for institutional research, the refinement of the scientific research questions posed, and the advanced analytical methodologies (largely quantitative) utilized – require the organization and direction of a centralized office of institutional research, as the history and origins of such offices testify.
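The two-axis framework described above can be represented as a simple data structure. Below is a minimal sketch in Python: the axis labels come from the text, but the cell labels and the helper name (`classify`) are illustrative placeholders, not the actual contents of Figure 3.

```python
# The golden triangle (purpose) crossed with the NRC's three types of
# scientific research questions yields nine modalities of inquiry.
PURPOSES = ("reporting", "planning", "effectiveness")   # the golden triangle
QUESTIONS = ("description", "cause", "mechanism")       # NRC question types

# Placeholder labels for each cell; the actual modality names in
# Figure 3 are not reproduced here.
MODALITIES = {(p, q): f"{p} / {q}" for p in PURPOSES for q in QUESTIONS}

def classify(purpose: str, question: str) -> str:
    """Look up the modality for a purpose/question pairing."""
    try:
        return MODALITIES[(purpose, question)]
    except KeyError:
        raise ValueError(f"unknown modality: ({purpose}, {question})")
```

For example, benchmark studies drawing on inter-institutional comparisons would fall under `classify("planning", "description")`, while a theoretically grounded study of process or mechanism occupies the highest-order cell, `classify("effectiveness", "mechanism")`.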

Looking back, again, at the trajectory of projects undertaken by the first offices of institutional research from the 1930s to the early 1960s, the two axes of the “golden triangle” and types of scientific research questions reveal the transition of institutional research from self-study to social science. The earliest projects disclosed by the University of Illinois in 1938 tended to fall under first order modalities of reporting and planning at a time before inter-institutional comparisons and perspectives had taken shape. By the 1950s, still bound to the consideration of a single institution, the University of Minnesota’s Office of Institutional Research established research subjects and methods in each of the nine modalities, in part by regarding its history, its position in the state’s higher education landscape, and its individual departments as elements in the larger whole of higher education. The publication of its report, proudly claiming its novelty and leadership in social scientific research in higher education, tacitly prefigures the university as the unit of analysis and the University of Minnesota as one observation or instance from which generalizable knowledge about higher education may be drawn. In 1962, as institutional researchers assembled the first National Institutional Research Forums, the exchange of methodologies and knowledge[9] present in Lins’s edited volume provides a testament to how quickly and effectively the practice of institutional research may professionalize as a field of study for higher education. As Lins noted in the foreword, “In preparing this publication, the editor sought to provide reports…. useful to persons in each institution irrespective of its type, size, or goal… [and] providing an avenue for exchange of methods and results of research.”

Lastly, the portfolio of research on higher education first undertaken by offices of institutional research at the time of their origin remains, to this day, the portfolio of research on higher education for institutional research as a field of study. The research questions posed for the types of projects in Figure 3 are, with some variation due to the accumulation of knowledge, the research questions posed for the types of projects ordered by input-process-output by Michael F. Middaugh in 1990, as well as the research questions posed for the types of projects organized by function-customer by J. Fredericks Volkwein in 2008.[10] The jurisdiction for offices of institutional research has long been established. Yet, the different progress of institutional research in different eras and institutions gave occasion to different systems for representing the portfolios of institutional research, to the degree that the system served to produce knowledge of only particular importance and of no lasting significance – or, to the degree that the Michigan State School of Institutional Research shaped the system. The system of organization and classification for institutional research as illustrated in Figure 3 enables every institution and its administration to join the effort to explore meaningful and significant questions of what works on their campuses and to actualize the potential for scientific discovery with institutional research on higher education.

Revenue or Return on Investment from the Effective Administration of Institutional Research

The NASH report on institutional research states, “While most system offices see [resource use, cost and tuition control, and meeting workforce needs] as areas of emerging priorities for future research, that view is not held by the majority of campus IR offices.” (3) This observation more or less reveals that survey research about institutional research continues to befuddle the profession of institutional research after more than 60 years. The first offices of institutional research created the standard methods to measure the efficient use of institutional resources, with the express intent to improve student learning outcomes and success. The unit cost, salary, and space utilization studies and their corresponding methodologies originate with the practice of institutional research as a specialized administrative function. Considering that most public university systems likely have virtual offices or elaborate profusions of institutional research designed to produce knowledge of only particular importance and of no lasting significance, it may be fruitful for system heads to consider the revenues or return on investment derived from a centralized institutional research infrastructure for the study of resource allocations and student success.

Nearly every institution meets the demands for data submissions by state and federal governments in order to retain eligibility for student financial aid and accreditation; consequently, absent performance funding, there is very little competitive advantage in organizing institutional research for purposes of compliance alone. A centralized office of institutional research, however, mitigates risks associated with external submissions. Untimely submissions to the federal government may earn the institution fines. Faulty or insufficient submissions to accreditation agencies may be met with a response placing an institution on “notice” or “probation.” Inaccurate submissions in voluntary reporting earn the college or university the public exercise of shaming known as “de-listing.” While no office of institutional research will eliminate all risks associated with institutional reporting, the virtual offices and elaborate profusions of institutional research organized to serve the needs or interests of the functional units or producing data regarded as having no lasting significance will generally be prone to inaccuracies.

Due to the vagaries of money and inflation, many private colleges and universities are spending $30,000 per full-time equivalent student in the United States, or costs of $1,000 per student credit hour (SCH) (see Figure 3 here). At a mid-sized institution of 100,000 credits per year, every dollar in unit cost savings from the implementation of academic program review using quantitative metrics and administrative reviews for the offices providing academic or institutional support saves the college $100,000 in expenditures. A one percent improvement in efficiencies from the centralized design and oversight of program and administrative review yields $1,000,000 for reinvestment in the effective programs and units on campus to augment student success. Investment in student success, specifically first-time student retention, then multiplies the revenue of the college. The mid-sized college generating about 100,000 credits per year enrolls roughly 4,000 students, 1,000 of whom are first-time freshmen or new transfers. Since annual out-of-pocket costs of attendance for students have approached $20,000 in recent years, every one percentage point of improvement in freshman and transfer retention results in $200,000 of additional revenue the very next year. Over four years, a sustained improvement of one percentage point in retention annually generates almost $1,000,000 of revenue from tuition and housing from sophomores to fifth-year seniors who would otherwise have transferred elsewhere or left college without a degree.
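The back-of-envelope arithmetic above can be checked in a few lines. The dollar figures are the author’s illustrative assumptions (cost per SCH, out-of-pocket cost of attendance), and the variable names are mine; integer arithmetic keeps the figures exact.

```python
# Illustrative assumptions from the text, not verified data.
credit_hours = 100_000          # annual student credit hours (SCH)
cost_per_sch = 1_000            # dollars of expenditure per SCH
first_time_students = 1_000     # freshmen and new transfers per year
out_of_pocket = 20_000          # annual out-of-pocket cost of attendance

# Every $1/SCH saved, applied across 100,000 credits
savings_per_dollar = 1 * credit_hours                   # $100,000

# A one percent efficiency gain on $1,000/SCH
efficiency_gain = cost_per_sch * credit_hours // 100    # $1,000,000

# One percentage point of retention on 1,000 first-time students
students_retained = first_time_students // 100          # 10 students
retention_revenue = students_retained * out_of_pocket   # $200,000 the next year

# Sustained annually over four years
four_year_revenue = 4 * retention_revenue               # $800,000 ("almost $1,000,000")
```

The same sketch lets any institution substitute its own credit production, unit costs, and cohort sizes to price out the levers described in the text.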

As one final example, consider the practice of institutional research for generalizable knowledge about higher education. Such offices directly generate revenue for the college or university from research grants awarded by federal and private agencies. Such colleges and universities earn opportunities to experiment in methods of unit cost savings and student success without having to incur the total expenditures. With a commitment to producing generalizable knowledge on higher education, institutions with centralized offices of institutional research empowered to conduct research in every modality of institutional research contribute findings of pervasive and significant value to higher education in general. Recent grant awards by the Fund for the Improvement of Postsecondary Education (FIPSE) amount to $1,000,000 per year – or 1% of the revenue generated by the college (excluding unfunded aid).

One percent improvement in efficiencies, one percentage point improvement in retention, grant revenue equal to one percent of revenues: an office of institutional research working the five modalities of institutional research not performed by virtual offices easily delivers competitive advantages that cover the costs of centralization when mindful of these targets. Every institution around the globe may calculate its unit cost savings from one percent improvement in efficiencies, from one percentage point improvement in retention, and one percent of grant revenues — but how many large public universities with virtual offices or elaborate profusions of institutional research know their numbers or hold their continuous improvement efforts accountable on a cost basis in such terms?

Conclusion: Public Deficiencies of Research on Higher Education

The survey of institutional research by the National Association of System Heads exhibits many of the public deficiencies in research on higher education, particularly as it pertains to institutional research. Granted, several of the founding documents endorsed by the Association for Institutional Research broadcast the theoretical premises of the Michigan State School of Institutional Research; however, if the nation’s public university systems organize and conduct institutional research to produce knowledge that is not pervasive and of no lasting significance, then the faculty and administration of those institutions must be held accountable, not the profession of institutional research. On the other hand, the scholars of higher education who have defined institutional research for nearly forty years uphold the work of polemicists like Rourke and Brooks as exemplary models of scholarship on higher education. Saupe and Montgomery lauded The Managerial Revolution in Higher Education as “[a]n excellent review of institutional research and computer usage in colleges and universities.” The deplorable state of scientific research in higher education – which led to the intervention of the National Research Council and its publication on research standards for higher education scholarship – certainly has not served administrative institutional research offices or the state systems of higher education well.

And, yet, is the solution to be found in the agenda and interventions of the Bill & Melinda Gates Foundation, which funded both the NASH study and the AIR initiative? Perhaps institutional research is common sense. Perhaps the most important decision makers in the institution are the faculty and students, who operate on a different calendar than the executive leadership. Perhaps faculty should be paid more after we adopt the noble perspective of student-centered analysis. Perhaps also prettier pictures of data values will propel us toward the rapture of decision-making. Fortunately, if so, the Microsoft Corporation, today (July 24, 2015), releases its next generation of Power BI Tools for the “data studies” experts among us who possess the innate ability to perform institutional research and uncover findings that are not pervasive and of no lasting significance.

On the other hand, if institutional research is a scientific enterprise that requires specialization and organization in centralized administrative offices under the direction of college and university presidents, the solution lies in a past largely forgotten and neglected by scholars of higher education. The portfolio of short-, mid-, and long-range projects (again, Principle 4 of Scientific Research in Education) conducted by offices of institutional research in the mid-twentieth century exemplifies the leadership and management of several presidents of the time, including the celebrated Clark Kerr at the University of California.[11] Such executive leaders of higher education exhibited a willingness and an openness to extend “the self-examined life” to the educational settings and the practices of higher education, and to define their leadership – not as scientific management – but in the principles of scientific research and the discovery of the wealth of institutions. If education research is a scientific and applied enterprise, who better than the presidents and provosts of a nation to direct institutional research on higher education?

Note: Updated November 17, 2016.

Footnotes

  1. Francis E. Rourke and Glenn E. Brooks, The Managerial Revolution in Higher Education (Baltimore: 1966), 44, 66, Chapter 3 passim.
  2. Paul L. Dressel, ed., Institutional Research in the University: A Handbook (San Francisco: 1971), 38.
  3. Joe L. Saupe and James R. Montgomery, The Nature and Role of Institutional Research … Memo to a College or University (Association for Institutional Research: Nov. 1970), 8; accessed at http://files.eric.ed.gov/fulltext/ED049672.pdf on July 21, 2015.
  4. I use this phrase as a cognate of schools of thought in other fields of study, such as the “Chicago School of Economics,” in which a dominant school is often mistaken as the only paradigm for inquiry and to signify that there are indeed theories, epistemology, and institutionalized systems of inquiry for institutional research that form the basis for a discipline. Moreover, by giving it a name, the Michigan State school of thought may be interrogated, refuted, and replaced by an updated version of the prior school of thought evident in the writings of Philip Tyrrell or James Doi.
  5. Ernst Nagel, The Structure of Science, 4; National Research Council, Scientific Research in Education (Washington, DC: 2002), 30, 99.
  6. Coleman R. Griffith, “Functions of a Bureau of Institutional Research,” The Journal of Higher Education Vol. 9, No. 5 (May 1938): 248.
  7. Ruth E. Eckert, “Graduate Students in Education,” in Ruth E. Eckert and Robert J. Keller, eds., A University Looks at Its Program: The Report of the University of Minnesota Bureau of Institutional Research, 1942-1952 (Minneapolis: 1954), 175.
  8. Rourke and Brooks, 53; Taylor, 31; J. Fredericks Volkwein, “The Foundations and Evolution of Institutional Research,” in Institutional Research: More than Just Data, ed. by Dawn Geronimo Terkla, New Directions for Higher Education No. 141 (spring 2008), 9.
  9. Not to be confused with “data exchanges.”
  10. Michael F. Middaugh, “The Nature and Scope of Institutional Research,” in Organizing Effective Institutional Research Offices, ed. Jennifer B. Presley, New Directions for Institutional Research No. 66 (summer 1990), 36-43; Volkwein, 15-16.
  11. Arthur Padilla, Portraits in Leadership: Six Extraordinary University Presidents, American Council on Education Series on Higher Education (Westport, Conn.: 2005), Chapter 5.