Grading:
1. It is too biased toward centering, when other factors also affect the "condition rarity" and overall desirability of a stamp.

2. Although grading stamps with a single "number" (or number and letter) has an obvious attraction, it can be difficult to tell exactly what (other than centering) went into that number. Yes, the PSE down-grades for things like faults, but without making the down-grade explicit, one is left to speculate. For example, if a stamp gets a grade that is worse than you'd expect on the basis of its centering, is that because of faults you can't see in the picture, or because the PSE experts were out too late the previous night?

3. Uniform criteria for centering mean that well-centered stamps from notoriously poorly centered issues don't get the recognition they deserve. This is particularly likely to mislead less advanced collectors, who might tend to believe that an 85 is an 85, period.
This last problem seems halfway to being solved, because SMQ gives you a good idea of the range of what is available (and, of course, submitted to PSE) for a given stamp or issue. All that needs to be done is to put on the PSE certificate exactly how a particular stamp stacks up against its peers. For example, along with your grade of, say, 85, the PSE could note on the certificate that, as of the certificate's date, 5% of submitted examples had graded higher. Then you'd know what your 85 really means. Yes, I recognize that the percentages are dynamic and you can always look them up in SMQ, but at least the certificate itself would finally reflect the condition idiosyncrasies of a particular issue. (And yes, this is only as of a certain date but, as Scott T. points out, even the grade itself is subject to change over time.) In this manner, you could still use uniform grading criteria across issues and still give each issue its due. I think other Board members have said much the same thing, so hopefully this will receive real consideration.
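The "percent graded higher" figure is trivial to compute once you have the population of submitted grades. A minimal sketch, in Python, with made-up grade data (SMQ's actual submission records are not public, so the numbers below are purely illustrative):

```python
def percent_graded_higher(grade, population):
    """Percentage of submitted examples that graded strictly higher."""
    if not population:
        raise ValueError("no submitted examples to compare against")
    higher = sum(1 for g in population if g > grade)
    return 100.0 * higher / len(population)

# Hypothetical submission history for one issue (not real SMQ data).
submitted = [70, 75, 80, 80, 85, 85, 85, 90, 90, 95, 98, 100]
print(f"{percent_graded_higher(85, submitted):.0f}% graded higher")  # prints "42% graded higher"
```

For a notoriously poorly centered issue the same 85 might put a stamp in the top 1%, and the certificate would say so.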
In terms of the first two issues that I noted, we had a similar problem a couple of decades ago in neurosurgery. The "level of consciousness" is an important predictor of outcome after a head injury and is also used to triage patients to the appropriate level of care. The problem was that we had no standardized, let alone precise, way of describing the level of consciousness.
The first attempts to put a single, objective "number" on the level of consciousness proved inadequate. For example, on a scale of 1-15 (with 15 being normal), some scores of 8 define "coma" and others don't. What neurosurgeons did was come up with three "sub-scales" -- best motor response (M), best verbal response (V), and best eye-opening response (E). Each one is scored according to a hierarchy of responses (and the sub-scales can be, and are, assigned different weights). The three scores are then added together to give a single, comprehensive score. However, the sub-scale scores are always (or at least should be) noted. Thus M5-V2-E1 and M5-V1-E2 both add up to 8, but (take my word for it) only the first one constitutes "coma." Yes, in most cases we just go by the total score, but each of the sub-scale scores is there for those who want the detail. In actual practice, this "Glasgow Coma Scale" scoring system has proven simple and reliable, and it is now a worldwide standard. The analogy for grading stamps is worth considering. And, as difficult as grading stamps might be, it is surely no harder than scoring an injured brain.
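To make the sub-scale idea concrete, here is a short Python sketch of the scheme described above. The coma rule follows the post's two examples (no eye opening, no comprehensible speech, not obeying commands); the function names are my own invention, not any standard library:

```python
def gcs_total(m, v, e):
    """Glasgow Coma Scale total: motor (1-6) + verbal (1-5) + eye opening (1-4)."""
    assert 1 <= m <= 6 and 1 <= v <= 5 and 1 <= e <= 4
    return m + v + e

def is_coma(m, v, e):
    """Coma: no eye opening, no comprehensible words, not obeying commands."""
    return e == 1 and v <= 2 and m <= 5

# The two patients from the text: identical totals, opposite conclusions.
for m, v, e in [(5, 2, 1), (5, 1, 2)]:
    print(f"M{m}-V{v}-E{e}: total {gcs_total(m, v, e)}, coma: {is_coma(m, v, e)}")
```

The point for stamps is the same: publish the sub-scores (centering, faults, etc.) alongside the total, and an 85 stops being ambiguous.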