Discrepancy Indexes
Guillermo Ramon Adames y Suari - PVNN, November 19, 2009
One of the most difficult tasks in the United Nations was to evaluate projects proposed by the Member States. The Organization's opinion was requested, and that evaluation was part of the procedure for authorizing and finally financing a project under the UN heading. I worked for UNESCO for 20 years, so what we had to evaluate were proposed educational projects.
We had to evaluate all the components: planning, strategy, feasibility, staffing, budget, timing and so on. If experts were needed and their recruitment was required, the Organization launched a worldwide recruitment procedure.
But the backbone of any potential project was the budget. This was always a headache. Countries presented their arguments along with their requirements, and we had to check the needs, the means available and the pricing. We could not question a Member State's information, yet a budget evaluation had to be presented. Without questioning the veracity of a country's data, we had to build frames of reference that, in the Office of Statistics, we called Discrepancy Indexes.
How did these indexes work? Using our own data, and on our own terms, we evaluated what the needs were. For financial data we obtained source figures from the International Monetary Fund or the World Bank; we had offices in Paris shared with the OECD. The UN Population Fund gave us population data, education data was available in-house, and so on. We then generated, for well-defined concepts, the differences between what was declared by Member Countries and what we calculated. Those differences were the discrepancy indexes. For various countries, particularly in Latin America, the differences were in the range of 30 to 50% for calculations requiring population factors.
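The arithmetic behind such an index is simple. Here is a minimal sketch, assuming the index is just the relative difference between a declared figure and the reference calculation; the function name and the sample numbers are illustrative, not UNESCO data or methodology:

```python
def discrepancy_index(declared: float, calculated: float) -> float:
    """Relative difference between a declared figure and a reference
    calculation, expressed as a percentage of the reference value.
    A positive result means the declaration exceeds the reference estimate."""
    if calculated == 0:
        raise ValueError("reference value must be non-zero")
    return (declared - calculated) / calculated * 100.0

# Illustrative example only: a declared illiteracy rate of 28% against
# a reference calculation of 20% yields a +40% discrepancy index.
print(discrepancy_index(declared=28.0, calculated=20.0))  # 40.0
```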
As an example, Member Countries would declare a far higher level of illiteracy, typically 30 to 50% above our calculations. Our calculations put unemployment about 15 to 25% above what was declared. Declared budget availability ranged about 40 to 60% below our figures. In other words, the Member Countries under study would request an "additional" 40 to 60% of budget in their requirements in order to teach literacy to 30 to 50% more people than we estimated. Costs appeared to be adjusted in a similar range. This does not mean that governments were declaring wrong figures: it means that the methodologies could not be the same and the approaches could differ. Political presentation was part of the data, and that was understandable, particularly for unemployment. But we had to produce a frame of reference as precise as possible.
Where does this take us? Basically, to evaluating a government's declarations whenever a figure is required for a programme evaluation. We used these discrepancies to evaluate our projects, but we could also measure, for example, the discrepancy between a declared unemployment figure and the one given by our methodology; the very definition of unemployment entered here as well. Budgets were treated likewise. A few countries showed only minor differences: I can cite Hong Kong (before its return to China), Japan, Switzerland, the Netherlands, New Zealand and the Scandinavian countries. Other countries were biased (possibly because the methodology was slightly different) only in certain areas. Great Britain had better population data than economic data.
The US and Canada are difficult countries: for certain concepts there is no single definition that holds for the whole nation. There is no "universal" educational system for the American Union, and neither is there one for Canada. Degrees do not share the same programmes: for an MS in engineering or an MBA, Harvard's educational programme is certainly different from Laval's, and even from Yale's, UCLA's or Tulane's. I want to be diplomatic and avoid going into the specifics of other countries' data.
All this gave us an idea of the "numerical difference", whether it came from the "concept", the "methodology" or the data itself. But it certainly gave us a frame of reference with which to operate worldwide.
Guillermo Ramón Adames y Suari is a former electoral officer of the United Nations Organization. Contact him at gui.voting(at)gmail.com