In Praise of Data-Driven Decision Making

Introduction to a Decisive Matter

Data-driven decision making has been in vogue in education for some time. As an occasional paper by the Rand Corporation attested nearly ten years ago, education leaders often assert that data-driven or evidence-based decision making is improving the operations of their institutions and the outcomes of their students. Despite the self-assurance, the academic literature on educational initiatives shows that “empirical research on data use [to improve educational outcomes] continues to be weak,”[1] a weakness the Rand paper had noted six years earlier. The absence of evidence for the effective use of data has nonetheless not diminished the urgency to acquire more (and more complex) data to form convincing arguments to “engage, inspire, and prepare current and future students for success.”

Student success may be defined in any number of ways in higher education, but the retention of first-time college students into their sophomore year ostensibly qualifies as a key performance indicator of institutional effectiveness. Aside from student learning outcomes and engagement, two more esoteric measures, institutions arguably focus their student success initiatives on the retention of first-year students more than on any other goal. And yet, despite the cheerful hum of data-driven decision making in education and the enlivening air of education policy research on data-driven decision making sponsored by private foundations, retention of first-time, full-time (FTFT) students increased by only 0.8 percentage points between 2006 and 2012 across all institutions of higher education in the United States, according to the National Center for Education Statistics (NCES) (Figure 1). Astonishingly, 4-year public institutions reported a 0.1-point decline over the six-year period, and 2-year public institutions witnessed a 0.8-point decline in FTFT student retention. Among private institutions, often highly dependent on tuition for long-term sustainability, 4-year nonprofit colleges and universities attained a nominal 0.6-point improvement in FTFT student retention.

Figure 1. Retention of First-Time, Full-Time Degree-Seeking Undergraduates

The private-sector vendor Noel-Levitz provides a tool to estimate the national impact of a 0.8-point improvement in student retention. In 2012-13, American colleges and universities retained approximately 18,000 more students than they would have at 2006-07 retention rates. At the average listed tuition and fees for college in constant (2011-12) dollars, the larger sophomore population generated an additional $350 million in revenue for colleges and universities. A sizable sum, to be sure, but the estimated 1.61 million first-time students who would have been retained regardless of the improvements between 2007 and 2012 generated over $31 billion in gross tuition and fee revenue. Overall, then, improved student retention in higher education, achieved largely by tuition-dependent private colleges and universities that discount nearly half of their list price for tuition and fees according to the NACUBO tuition discounting survey, generated little more than a 1% increase in gross revenue from the retention of first-time students.
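For readers who wish to check the arithmetic, the back-of-envelope logic can be sketched in a few lines of Python. The cohort size and average tuition figure below are assumptions inferred from the published totals, not numbers reported by NCES or Noel-Levitz.

```python
# Back-of-envelope estimate of the revenue impact of improved first-year
# retention. The cohort size and average tuition are illustrative
# assumptions inferred from the totals cited above, not reported figures.

FIRST_TIME_COHORT = 2_250_000     # approx. U.S. first-time, full-time students
AVG_TUITION_FEES = 19_400         # approx. average listed tuition and fees (2011-12 dollars)
RETENTION_GAIN = 0.008            # 0.8 percentage-point improvement, 2006-07 to 2012-13
RETAINED_AT_OLD_RATE = 1_610_000  # students retained even at the 2006-07 rate

extra_students = FIRST_TIME_COHORT * RETENTION_GAIN
extra_revenue = extra_students * AVG_TUITION_FEES
baseline_revenue = RETAINED_AT_OLD_RATE * AVG_TUITION_FEES

print(f"Additional sophomores retained: {extra_students:,.0f}")         # ~18,000
print(f"Additional gross revenue: ${extra_revenue / 1e6:,.0f}M")        # ~$350 million
print(f"Baseline gross revenue: ${baseline_revenue / 1e9:,.1f}B")       # ~$31 billion
print(f"Relative revenue gain: {extra_revenue / baseline_revenue:.1%}") # ~1.1%
```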

The additional $350 million in revenue from the retention of first-time students pales in comparison to the increase in total institutional expenditures over the same period. Between 2006-07 and 2011-12, again in constant dollars, the NCES reports that total expenditures by institutions of higher education increased by $64 billion, from $419 billion to $483 billion. Even compounding first-year retention over the four years to graduation and accounting for tuition increases, the 0.8-point improvement accounts for no more than $1 billion of the additional $64 billion in college and university expenditures, or roughly 1.5%. Clearly, institutions of higher education found other means to fund their budgets during this six-year period, such as tuition increases, federal loans, and, of course, competitive grants from private foundations to study interventions to improve college success and affordability.
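The compounding logic can be sketched as follows; the year-over-year persistence and tuition-growth rates are illustrative assumptions, not figures drawn from the NCES data.

```python
# Rough upper bound on cumulative revenue from the retention gain,
# following the extra sophomores through graduation. Persistence and
# tuition growth are assumed rates chosen only for illustration.

EXTRA_SOPHOMORES = 18_000  # additional students retained (from the estimate above)
AVG_TUITION_FEES = 19_400  # inferred average tuition and fees (2011-12 dollars)
TUITION_GROWTH = 0.04      # assumed annual tuition increase
PERSISTENCE = 0.85         # assumed year-over-year persistence after the first year

total_revenue = 0.0
students, tuition = float(EXTRA_SOPHOMORES), float(AVG_TUITION_FEES)
for year in (2, 3, 4):     # sophomore, junior, and senior years
    total_revenue += students * tuition
    students *= PERSISTENCE
    tuition *= 1 + TUITION_GROWTH

print(f"Cumulative revenue through year 4: ${total_revenue / 1e9:.2f}B")   # ~$0.93 billion
print(f"Share of the $64B expenditure growth: {total_revenue / 64e9:.1%}") # ~1.5%
```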

Of what consequence, one may ask, is the prodigious output of publications on data-driven decision making for higher education executives in the U.S.?

In a Humor to Act Awhile the Institutional Researcher

Like many in institutional research, I came to the profession by an indirect route and from an unrelated academic discipline. My entire academic education is in the discipline of History; initially, I used quantitative methods to study family labor-force participation and economic strategies with public-use data samples from cost-of-living studies and Census records of the 20th-century United States. During an interruption in my graduate education, I worked for approximately six years with market research vendors serving Fortune 500 corporations in the information technology sector during the “dot-com” era of the latter half of the 1990s. At the tail end of the dot-com boom, I joined Educational Outreach at the University of Washington (UWEO) in Seattle, where I had the good fortune to work for two years with an accomplished historian turned university administrator with a deserved reputation as an innovator and educational entrepreneur in online learning, self-sustaining degree programs at a public university, and community partnerships extending access to education to under-served students, among other accomplishments.[2]

I returned to academia for several years to complete a Ph.D. in History, transitioning my field interests to the cultural and social history of business associations, institutions, and their leaders in the United States. During this period, fortunately as it turned out, I maintained a connection with institutional research by contracting with my former employer at UWEO, with the local chapter of the American Association of University Professors (AAUP), and with other vendors serving academic institutions and policy interests. When I completed my graduate studies in 2008-09, the recession and other circumstances in the job market for U.S. historians prompted my return to the profession of institutional research. Subsequently, I worked as an institutional research, planning, and effectiveness officer for a community college in Illinois and for a private, religious nonprofit institution in New York. In sum, over the past fifteen years, through my research and connections, I have had the opportunity to study the operations and advance the strategic plans of a flagship state university, a suburban community college, and a regional tuition-dependent private college.

Initially, I was drawn to the field of History because of its immense complexity. Many may know history as the memorization of dates, names, and other trivia. The best academic historians, however, master the theoretical foundations of their own field, their historiography, as well as the theoretical foundations for closely related fields such as linguistics (ancient historians), anthropology (cultural historians), economics (business and labor historians), gender studies (historians of gender and sex), psychology (historical biography) and philosophy (historians of ideas). The study of history is a transgressive discipline that brings the historian into contact, and debates, with the practitioners of other disciplines, particularly those who neglect the potential and actualities of change over time.

Not to be confused with institutional reporting, institutional research is a field of study as deeply complex and rich as the discipline of history. An adept institutional researcher has use for an academic understanding of institutional theory and organizational change, whether from traditional historical institutionalism, micro-economic theories of the firm, the new institutionalism of sociology, or at the very least the trajectory of organizational efficiency from Taylorism to Six Sigma. An education, nonetheless, remains the output of colleges and universities, so a well-rounded institutional researcher will also grasp the literature and utilize the findings of national demographics, education science, and developmental psychology on academic preparedness, student learning, freshman resilience, educational ecological systems, and social attitudes about college life. To apply the study of institutions and the research on learning outcomes locally, an accomplished institutional researcher must then design valid research and perform appropriate statistical analyses to understand how best to improve student success at their own institutions.

Figure 2. Anticipated Changes to Institutional Research Role, 2014-17

With these theoretical and scientific methodologies, institutional research has the tools to investigate and understand systematically the diverse operations of a higher education institution. A capable institutional researcher is able to delve into the specialized areas of admissions, student retention, graduate placement, and even alumni giving, among other common areas of institutional operations. In this respect, institutional research is – or could be – as organically interdisciplinary as history, with the ability to cross over into the many facets and strategic initiatives of an institution of higher education. As illustrated in Figure 2, those in the profession often express expectations, or hope, for greater integration into the strategic and analytical focus of the college.[3] Too often, though, in my experience and from my knowledge of colleagues’ presentations at the annual conferences of our profession, the integration of institutional research offices and continuous improvement efforts (data-driven decision making) never reaches such synergy.

How is it, one then may ask, that the work of institutional research offices has not had a greater influence on the performance of U.S. higher education institutions, in such areas as student retention, in recent years?

Too Well Acquainted with Data-Driven Decision Making in Higher Education

In a 2014 book on leadership and management in institutional research, published by the Association for Institutional Research, William E. Knight begins his preface by stating his concern “about the unfulfilled potential” of the field of institutional research, noting that “Numerous studies have indicated that information is not being used as effectively as it could be by campus leaders.”[4] I share the concern held by Dr. Knight and many in the institutional effectiveness and research profession that the potential for our mission-critical offices is too often unfulfilled in higher education. More importantly, I have concluded that the all-too-frequent failures of strategic plans and continuous improvement initiatives for student success and organizational efficiency in higher education directly stem from the unfulfilled promise of these offices.

Clearly, institutional leaders are making data-driven decisions routinely. What seems less certain is in which domains those decisions operate effectively within institutions of higher learning. The Rand Corporation occasional paper cited above notes, “Notions of DDDM (data-driven decision making) in education are modeled on successful practices from industry and manufacturing… and debates about measurement-driven instruction in the 1980s…” While the paper thoroughly explores the use and influence of data on decisions, it neglects to consider how individual decisions manifest themselves in the operations and improvements of an institution of higher education. In the field of history and other social sciences, the tension between institutional direction and individual decisions can be construed as a contest of primacy between structure and agency. What DDDM advocates and researchers seem to neglect is that “the way we have always done things” has a powerful hold over institutional agents, and a charge to enact a decision may, and often does, produce unintended outcomes, if any outcome at all.

Although the Rand paper raises the “quality of the decisions” made with the aid of data, the basic assumption of the model is that decision makers always already stand ready to process more data, or more reliable and valid data, into better-quality decisions in a linear fashion. An analysis of leadership traits and success comparing higher education and corporate leaders by Witt/Kieffer suggests, alternatively, that college and university leaders differ most notably on the measures of Altruism, a “focus on helping or providing service to others,” and Commerce, a “strong interest in money, profits, investment, and business opportunities.” (One suspects this distinction between higher education and corporate leadership extends deep into the general body of administrators.) Given the same data and evidence, do altruism- and commerce-focused leaders in higher education reach the same decisions on how best to improve student success and organizational efficiency? While that is a subject worthy of concrete research, I suspect most institutional effectiveness and research officers would say it is not likely the case.

Leadership Traits and Success in Higher Education (Witt/Kieffer Study)

In our positions, we work closely with at least two significant leaders and decision makers: the president and the provost of the college or university. At each stop in my career, I had to read and react to my new president and provost in order to understand each individual’s “altruistic” versus “commercial” reading of data and to determine how best to communicate relevant research findings from my office for their decisions. At the same time, the direct reports to the presidents and provosts, whose areas are often affected by research findings from my office, were often engaged in their own readings of and reactions to the executives, and disputed research findings that reflected poorly on the management of their departments or divisions. As one core strategy of institutional control, second-tier administrators often hired or cited external vendors as outside experts, in some vague sense more qualified to provide “data” or best practices for the institution than the institution’s own effectiveness and planning office – data and experts whom, of course, the department administrators directly controlled through contracts. Needless to say, in the contests over altruistic and commercial decisions, utilization of data produced by an institutional effectiveness and research office is subject to fits and starts, to steps forward and backward, and to misinformation and miscommunication, while the institution as a whole suffers from a surfeit of contradictory information on its success and progress toward strategic goals.

In short, the high-flown character of data-driven decision making rests on relatively naive concepts of data and decision making that may prove useful in the theory and analysis of private-sector firms or manufacturing industries, but fails to account for the prevalence of “decision-driven data making”[5] among the institutions and constituencies of higher education.

A Not-So Ex Tempore Discourse on Institutional Reporting

All senior administrators with any length of experience in higher education come to an institution with fixed ideas about the role of “institutional research.” More often than not, they perceive institutional research as a clerical function of collecting data from other departments and submitting data to external agencies, properly called “institutional reporting.” Interestingly enough, the Latin meaning of “data” is “something given,” akin to what we often call “facts.” In that vein, institutional research offices are required to submit the facts – student record-level data or descriptive statistics – to state, federal, and accrediting agencies in order to maintain higher education credentials. Over time, the largesse of institutional reporting expanded the meaning of “something given” to include gratis submissions of key performance metrics as “data requests” to college guide publishers, would-be college ranking experts, and every imaginable consumer of data on institutions of higher education.

The free dissemination of what most call “data” – descriptive statistics on the performance of institutions of higher education – has perhaps impeded continuous improvement efforts more than any other business practice in higher education. How is it that U.S. News and World Report knows more about the institutional performance of colleges and universities than the federal government or the Association for Institutional Research? Why does payscale.com have more information about the salary outcomes of college graduates than the presidents of the colleges and universities leading the education of American students and, again, the Association for Institutional Research? Why is the Washington Monthly better able to measure the public good of colleges and universities than our state and federal governments and, once again, the Association for Institutional Research? In sum, why have institutional research and institutional reporting been conflated in a manner that subjects institutions of higher education to the political and profit-driven agendas of national magazines and online career content providers rather than to a professional and academic body of institutional research experts?

Quality of National Data on Institutional Performance: IR Professionals’ Ratings

Generally, the failure of institutional reporting to entities that have nothing to do with compliance with higher education standards is evident to institutional researchers. Despite relying on unreliable data, the casual reader of U.S. News and World Report or the Princeton Review often has more information about the performance of colleges and universities than the average employee of those same institutions. I personally have sought to redress this perverse gap in institutional “knowledge” with the two frequent strategies of institutional effectiveness and planning officers: the publishing model and the consulting model. The publishing model focuses on the internal dissemination of fact books, common data sets, and other routine reports. The presumption, however, as with data-driven decision making, is that any consumer of institutional fact books and reports is able to analyze descriptive trend statistics and make quality decisions within their supposed area of specialization. Information published en masse is, in actuality, highly subject to cherry-picking in support of prior decisions or existing prejudices for action.

The consulting model, in which the institutional research office provides research services like an external vendor, is a vast improvement over the publishing model. As a former market research vendor, my approach from the start aligned closely with the recent trend in institutional research toward expanding business intelligence capacities.[6] Unfortunately, the consulting model soon founders on the abilities of the “internal clients” of institutional research and business analytics, who may or may not have the competencies to make better decisions when afforded more data, or more reliable and valid data. The conceptual framework for data-driven decision making in the Mathematica Policy Research report provides an illustration (Figure 2b) that encapsulates the nuances and complexity of data used for decision making. Division and department leaders in institutions of higher education – like those in the private sector – rightfully earn their positions for reasons other than their ability to discern the reliability and validity of data in decision making. Somehow, though, educational leaders are seemingly presumed to be more effective at making data-driven decisions and implementing strategic plans than their private-sector counterparts.

Although sage patrons of wisdom may have determined that educational institutions should be more effective in realizing their missions and implementing their strategic plans than private sector corporations, the reality is that organizational change is far more complicated in higher education than the paradigm of “data-driven decision making” in business firms remotely anticipates.

The Seasonable Truth of Institutional Effectiveness

Our intent in this brief is to draw attention to possible reasons why the best-laid strategic plans for higher education have more often than not yielded ambiguous or negligible evidence of success. We nonetheless credit the academic research showing that data-driven decision making indeed contributes to the efficiency of businesses and institutions.[7] In essence, the eighteen components of a comprehensive institutional effectiveness and research office described on our site represent what we regard as the basic infrastructure for business analytics and social scientific research to advance student success at colleges and universities.

Future briefs will further explain why we believe that the services and solutions of Historia | Research will provide our clients an advantage over other institutions of higher education.

Footnotes

  1. C. E. Coburn and E. O. Turner, “The Practice of Data Use: An Introduction,” American Journal of Education 118, no. 2 (Feb. 2012), 99-111.
  2. See: http://en.wikipedia.org/wiki/David_Szatmary.
  3. Darlena Jones, Impact of Business Intelligence on Institutional Research (Tallahassee: Association for Institutional Research, 2013), 8.
  4. William E. Knight, Leadership and Management in Institutional Research (Tallahassee: Association for Institutional Research, 2014), iii.
  5. A turn of phrase I take from Jeffrey L. Buller, Change Leadership in Higher Education: A Practical Guide to Academic Transformation (San Francisco: Jossey-Bass, 2015).
  6. See Chapter 9 in Knight, Leadership and Management in Institutional Research.
  7. Erik Brynjolfsson, Lorin M. Hitt, and Heekyung Hellen Kim, “Strength in Numbers: How Does Data-Driven Decisionmaking Affect Firm Performance?” (April 22, 2011), available at SSRN: http://ssrn.com/abstract=1819486 or http://dx.doi.org/10.2139/ssrn.1819486.
