The recent focus on data visualization has generated a lot of buzz about how we present evaluation findings. We certainly need new ways to communicate technical information effectively. Our clients have been falling asleep for years! However, are we being distracted from finding real meaning?

Recent articles on data visualization slip quickly through 1) understanding, 2) collecting, and 3) analyzing to land on 4) communicating. The whole analysis piece seems to be relegated to a back room where we pull exciting bits of data from a huge pile of information and think about how to make them look sexy.

In our rush to visualize data, are we sacrificing the very thing we are hired to do—namely to evaluate?

It takes a whole new set of skills and a lot of time to create really interesting graphics and PowerPoints, but our clients don’t seem to be giving us more time to do this. In fact, it feels like there is less time than ever, while expectations continue to mount. So something has to give somewhere. My fear is that the time needed for synthesis, reflection, and interpretation is losing out to our search for interesting data points and sound bites.

I recently renewed my acquaintance with an old friend from teachers’ college, namely Benjamin S. Bloom. As you probably know, the Taxonomy that he and his colleagues devised classifies learning objectives into three domains: cognitive, affective, and psychomotor (Bloom, 1956). It is the cognitive domain that has resonated most with educators over the years, and it has utility for evaluators as well. When we collect data and try to figure out what it means, we are approaching our programs as learners. Bloom reminds us that higher-order learning is predicated on attaining lower-order knowledge first. His framework provides a useful checklist to ensure that we exercise due diligence as we move from concrete to abstract knowledge in the cognitive hierarchy. The taxonomy looks like this:

[Image: Bloom’s taxonomy of the cognitive domain, shown as a pyramid rising from knowledge at the base to evaluation at the top]

In the context of evaluation activities, we can use the taxonomy as follows:

  1. Knowledge—recalling or collecting the information we have obtained through our research by defining, describing and organizing it. Data entry is an example of creating knowledge.
  2. Comprehension—grasping the basic meaning of our data by interpreting, charting and summarizing it or by translating numbers into words or vice versa. Preparing tables and charts is one way to help us understand the information we have collected.
  3. Application—using the information obtained in new and concrete ways by applying rules, principles, concepts and theories. Populating our program theory or conceptual framework with study findings is a good example of application.
  4. Analysis—breaking down the data collected into component parts to understand their relationship to each other as well as to the overall structure. Categorizing, comparing and modeling information happen here. This is also where most data visualization occurs.
  5. Synthesis—by seeing patterns and by putting the parts together again in unique ways, we create something new. The comprehensive reporting of findings can occur in a variety of ways but there is no shortcut to finding new meaning. Recent pressure for rapid turnaround and a limited appetite for full-fledged final reports are challenging our ability to do this well.
  6. Evaluation—based on elements drawn from all of the above steps, and using defined criteria to interpret what we are looking at, we judge the value of the program as it is represented by the data we have collected. This highest level in the cognitive hierarchy is also the hardest. It takes time and mental acuity, not to mention political wherewithal, to reflect on meaning and make a judgment. Too often we look for ways to avoid producing meaningful and defensible conclusions. A toy sketch of how these six levels can play out in everyday data work follows this list.
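
For those of us who spend part of our day in spreadsheets and scripts, a toy sketch may help make this mapping concrete. To be clear, this is purely illustrative: the data, the names, and the success criterion below are all invented, and nothing about the taxonomy requires code.

```python
# A toy, entirely hypothetical illustration of how Bloom's six cognitive
# levels might map onto everyday evaluation data work.
from statistics import mean

# 1. Knowledge: record the raw information as collected (data entry).
responses = [
    {"site": "A", "outcome_rating": 4, "reached_target_group": True},
    {"site": "A", "outcome_rating": 5, "reached_target_group": True},
    {"site": "B", "outcome_rating": 2, "reached_target_group": False},
    {"site": "B", "outcome_rating": 3, "reached_target_group": True},
]

# 2. Comprehension: summarize the data so its basic meaning is visible.
sites = sorted({r["site"] for r in responses})
summary = {s: mean(r["outcome_rating"] for r in responses if r["site"] == s)
           for s in sites}

# 3. Application: slot the findings into a piece of program theory,
# e.g. "if the target group is reached, outcomes should improve."
reached = [r["outcome_rating"] for r in responses if r["reached_target_group"]]
missed = [r["outcome_rating"] for r in responses if not r["reached_target_group"]]

# 4. Analysis: break the data into parts and compare them to the whole.
overall = mean(r["outcome_rating"] for r in responses)
by_reach = {"reached": mean(reached),
            "missed": mean(missed) if missed else None}

# 5. Synthesis: pull the parts back together into a new statement.
pattern = ("Sites that reached the target group rate above the overall mean"
           if mean(reached) > overall
           else "Reaching the target group shows no clear advantage")

# 6. Evaluation: judge the program against a defined criterion
# (a hypothetical threshold agreed with stakeholders).
CRITERION = 3.5
judgment = "meets the criterion" if overall >= CRITERION else "falls short"

print(summary)
print(by_reach)
print(pattern)
print(f"Overall rating {overall:.2f}: the program {judgment}.")
```

Note where the comments fall: the computation is trivial at every level, but the thinking it stands in for is not. The real work of synthesis and evaluation still happens away from the keyboard.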

While the taxonomy has gone through a number of changes over the years, the original framework still provides us with an excellent reference point. Here are some ways I have used it recently:

  • Structuring the sequence of questions in surveys and interviews, starting with knowledge and comprehension and ending with synthesis and evaluation. This approach facilitates the flow of the participants’ thought patterns by grounding them first in what they know well and then moving outward into the realm of interpretation.
  • Structuring the activities for an expert panel that was reviewing evaluation findings. Given time to confirm their understanding of the information obtained, and encouraged to apply those findings to their own areas of expertise, participants were much better able to see patterns in the results. They could then integrate them into a meaningful new whole and at that point were confident in their ability to render a judgment about the study. This structured approach also prevented a rush to judgment before the findings had been processed.
  • Scheduling enough time in a large study for the analysis period. We were able to prepare working documents of findings for each line of inquiry, triangulate the findings across tools, synthesize the results, and then, in consultation with stakeholders, reflect on and interpret the findings and determine their value.
  • Formulating questions for workshops and courses. By moving from knowledge and comprehension questions through application to analysis and synthesis, participants are better able to evaluate their learning on specific topics.


No matter how artistically they are presented, data are not enough. We cannot shortchange the critical thought processes needed to learn what our studies are trying to tell us. We need to know with confidence that our conclusions are appropriate and strong and that we have stepped carefully through the learning hierarchy to arrive at them. Bloom can help us do that.

Do you have an evaluation challenge right now that could benefit from applying Bloom’s taxonomy? Let’s start the conversation!


[Originally published in Ask Gail, an e-column for the Washington Evaluators, April 2014, Vol. 2, Issue 1.]

Resources:

Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: David McKay Co., Inc.

Photo: http://goo.gl/zVlxiY

Model: http://www.learnnc.org/lp/pages/4719