Monday, August 09, 2010

Do Organizations that 'Fess Up Do Better?

Geoffrey W. McCarthy, a retired chief medical officer for the V.A., wrote a letter to the NYT on 9 August, responding to an article on radiation overdoses in medical tests, about two approaches to how organizations manage information about organizational errors. He notes that the issue illustrates the contradictions between "risk management" and "patient (or passenger or client or consumer) safety."

He notes that the risk manager will say "don't disclose" and "don't apologize" because these could put the organization at legal or financial risk. A culture of safety and organizational improvement, though, would say "fully disclose," not because it will help the patient, but because it is a necessary component of organizational change. The organization has to admit the error if it is going to avoid repeating it, he asserts.

This suggests a number of sociology of information connections, but we'll deal with just one here. This example points to an alternative to the conventional economic analysis of the value of information. The usual approach is to "price" the information in terms of who controls it and who could do what with it (akin to the risk manager's thinking above). But here we see a process value -- the organization itself might change if it discloses the information (independent, perhaps, of the conventional value of disclosure or non-disclosure). One could even imagine an alternative pricing scheme that says "sure, Mr. X might sue us, but by disclosing the information we are more likely to improve our systems in a manner that lets us avoid this mistake in the future (along with the risk it poses to us and the costs it might impose on society)." Why pour resources into hiding the truth rather than into using the information to effect change?
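To make that alternative pricing scheme concrete, here is a toy expected-cost comparison. All of the probabilities, dollar figures, and function names below are invented for illustration; nothing here comes from the letter.

```python
# Toy expected-cost comparison between disclosing and concealing an error.
# All probabilities and dollar figures are hypothetical; only the structure
# of the trade-off matters.

def expected_cost_conceal(p_discovery, litigation_cost, p_repeat, repeat_cost,
                          concealment_cost):
    """Concealment: pay to hide the error, accept some chance of discovery
    anyway, and forgo the systemic fix, so the error may well recur."""
    return (concealment_cost
            + p_discovery * litigation_cost
            + p_repeat * repeat_cost)

def expected_cost_disclose(p_suit, litigation_cost, p_repeat_after_fix, repeat_cost):
    """Disclosure: accept a higher chance of being sued now, but the systemic
    fix lowers the chance that the error recurs."""
    return p_suit * litigation_cost + p_repeat_after_fix * repeat_cost

conceal = expected_cost_conceal(p_discovery=0.3, litigation_cost=1_000_000,
                                p_repeat=0.2, repeat_cost=2_000_000,
                                concealment_cost=100_000)
disclose = expected_cost_disclose(p_suit=0.6, litigation_cost=1_000_000,
                                  p_repeat_after_fix=0.02, repeat_cost=2_000_000)
print(f"expected cost if we conceal:  ${conceal:,.0f}")   # $800,000
print(f"expected cost if we disclose: ${disclose:,.0f}")  # $640,000
```

Under these made-up numbers disclosure comes out cheaper, but the real point is structural: concealment pours resources into hiding the error while leaving the probability of repeating it untouched.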

One rebuttal to this says that an organization can do both, and maybe so. Another would say that this is just mathematically equivalent to what would happen in litigation (perhaps through punitive damages).

But I think that Mr. McCarthy is onto something in terms of "information behaviors." There are, I expect, a whole bunch of "internal externalities" associated with what we decide to do with information. In other places I've examined the relational implications of information behavior. This points to another family of effects: organizational. More to come on this.

Information and Educational Assessment I

In a letter to the NYT about an article on radiation overdoses, George Lantos writes:

My stroke neurologists and I have decided that if treatment does not yet depend on the results, these tests should not be done outside the context of a clinical trial, no matter how beautiful and informative the images are. At our center, we have therefore not jumped on the bandwagon of routine CT perfusion tests in the setting of acute stroke, possibly sparing our patients the complications mentioned.

This raises an important, if nearly banal, point: if you don't have an action decision that depends on a piece of information, don't spend resources (or run risks) to obtain the information.  The exception, as he suggests, is when you are doing basic science of some sort.
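In decision-theoretic terms this is the familiar point that information has zero expected value when no choice hinges on it. A minimal sketch of that calculation, with entirely made-up states, actions, probabilities, and payoffs, might look like this:

```python
# Toy expected-value-of-perfect-information (EVPI) calculation. A test is worth
# running only if its result could change the chosen action; otherwise EVPI is 0.
# States, actions, probabilities, and payoffs are all invented for illustration.

def evpi(prior, payoff, actions):
    """EVPI = E[payoff of best action given the state] minus the best expected
    payoff achievable without running the test."""
    best_without = max(
        sum(prior[s] * payoff[(s, a)] for s in prior) for a in actions
    )
    best_with = sum(prior[s] * max(payoff[(s, a)] for a in actions) for s in prior)
    return best_with - best_without

prior = {"bleed": 0.4, "no_bleed": 0.6}
actions = ["treat", "wait"]

# Case 1: the result could flip the decision, so the test has positive value.
payoff_1 = {("bleed", "treat"): 0.9, ("bleed", "wait"): 0.2,
            ("no_bleed", "treat"): 0.5, ("no_bleed", "wait"): 0.8}
print(evpi(prior, payoff_1, actions))   # roughly 0.18

# Case 2: we would treat no matter what the test said, so EVPI is zero.
payoff_2 = {("bleed", "treat"): 0.9, ("bleed", "wait"): 0.2,
            ("no_bleed", "treat"): 0.8, ("no_bleed", "wait"): 0.5}
print(evpi(prior, payoff_2, actions))   # 0.0
```

The second case is the situation Lantos describes: if the treatment would be the same whatever the scan showed, the scan's decision value is zero, and what remains is only its value as basic science, set against its cost and risk.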

Now consider, for a moment, the practice of "assessment" in contemporary higher education.  An industry has built up around the idea of measuring educational outcomes, one in which a phenomenal amount of energy (and grief) is invested to produce information that is (1) of dubious validity and (2) does not, in general, have a well-articulated relationship to any decision.

Now the folks who work in the assessment industry are all about "evidence-based change," but they naively expect that they can figure out, a priori, what information will be useful for this purpose.


They fetishize the idea of "closing the loop" -- bringing assessment information to bear on curriculum decisions and practices -- but they confuse the means and the ends.  The logic runs backwards: to show that we are really doing assessment, we have to find a decision that can be based on the information that has already been collected.  Not quite the "garbage can model of decision-making," but close.


Perhaps a better approach to improving higher education (and one that would demonstrate an appreciation of basic critical thinking skills) would be to START by identifying opportunities for making decisions about how things are done, THEN figure out what information would allow us to make the right decision, and THEN work out how best to collect that information.  Such an approach would involve actually understanding both the educational process and the way educational organizations work.  My impression is that it is precisely a lack of understanding of, and interest in, these things on the part of the assessment crowd that leads them to get the whole thing backwards.  Only time will tell whether these scientists manqués manage to mediocritize higher education or not.