S&T statistics at a 100-year crossroads
By Dr Benoit Godin
Systematic statistics on science, technology and innovation (S&T&I) are one hundred years old this year, and dozens of statistics are now available to measure research and its results. Despite this productive history, of which Statistics Canada is part, much remains to be done to make the statistics more relevant to policy-makers.
It was 100 years ago that James McKeen Cattell, an American psychologist and editor of Science for 50 years (1895-1944), published the first edition of a directory of scientists entitled American Men of Science. Based on the directory, Cattell published regular statistical analyses for 30 years on the demography, geography and what he called the performance of scientists. At the same time (early 1900s), psychologists developed bibliometrics (counting academic or scientific papers) as a statistical tool for measuring the advancement of psychology as a science.
Since these very first exercises, the measurement of S&T&I has evolved considerably. At the outset, statistics were concerned with measuring the size of the scientific community (counting the number of "men of science") and scientists' activities (counting papers). The measurements were conducted by scientists themselves, among them psychologists and geographers.
In the early 1920s, these statistics and their sources became institutionalized and, in the 1940s and 1950s, new ones were constructed. It was no longer scientists, but government departments and national bureaus like Statistics Canada, that produced these statistics, the statistics on which most of us rely today.
The main work conducted by these public institutions, unlike previous measurements, was measuring a "national budget for science" by counting the money devoted to R&D. The focus was no longer exclusively on universities, as Cattell's had been, but on all economic sectors: industry, government, university, and non-profit. Nor was the focus on "men of science" any longer, but on organizations and their R&D activities. Above all, the focus was on measuring the efficiency or "productivity" of the science system, defined as the output arising from research activities.
This new perspective brought about entirely new indicators, which were soon placed within an accounting framework known as the input-output model. Inputs were defined as investments in the resources necessary to conduct scientific activities - money and scientific and technical personnel. Outputs were defined as what comes out of these activities - knowledge and inventions.
More recently, output, or the production of a "good", came to mean outcomes or productivity of an economic type: the impacts of research on economic progress and productivity, including the impacts of innovations. Most OECD policy documents, as well as national policies, now present increased productivity as the aim of innovation policy. The whole of the European Union's innovation strategy is directly linked to the rhetoric on productivity gaps between European countries and the United States. It is statisticians, academic as well as official, who feed policy-makers with statistics on productivity and who develop the models linking research and productivity.
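To give a sense of what such models look like: in the tradition associated with Zvi Griliches, a stock of research capital is simply added to the conventional inputs of a production function. A minimal sketch, with illustrative notation rather than any particular official specification:

\[
Y_t = A_t\,K_t^{\alpha}\,L_t^{\beta}\,R_t^{\gamma},
\qquad
R_t = (1-\delta)\,R_{t-1} + \mathit{RD}_t,
\]

where \(Y\) is output, \(K\) physical capital, \(L\) labour, and \(\mathit{RD}\) annual R&D expenditure, accumulated into a research stock \(R\) with depreciation rate \(\delta\). In growth terms, the contribution of research to productivity growth is then read off as roughly \(\gamma\,\Delta R_t / R_t\). It is exactly this kind of link between R&D statistics and productivity statistics that policy documents invoke.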
Over the past 100 years, thanks to national statistical offices, we have gained in terms of the diversity, quality and robustness of statistics. However, we have also lost some fundamentals. The very first measurements of science centered on counting "men of science", because human resources were considered the ultimate resource: "the ceiling on research and development activities is fixed by the availability of trained personnel, rather than by the amounts of money available. The limiting resource is manpower". Today, the statistics on human resources in S&T are among the poorest sets of statistics we possess. National as well as international organizations are trying to remedy the deficiency, but progress is slow. All of us should encourage them in these efforts.
Another lost opportunity of the last 100 years, with regard to statistics, is the measurement of outcomes. If one distinguishes output from outcomes, there are very few indicators of the outcomes of research. Those that exist are all of an economic type, among them productivity indicators. If there is a priority for statisticians in the coming decades, it is developing statistics to measure the social impacts of science: education, health, environment, quality of life, etc.
These were precisely the outcomes that Cattell identified as arising from science. To him, industry was a means, not an end. Unfortunately, today it is the market, and the priorities of governments with regard to the market, that entirely drive measurement. Representative indicators of output and outcomes of an economic type include patents, innovations, high-technology trade, and the technological balance of payments.
Admittedly, the challenges are many for anyone concerned with measuring intangible outcomes of a social type. But wasn't measuring R&D just as challenging in the 1950s and 1960s, when governments started collecting statistics? There are currently initiatives in many countries looking at precisely how to measure outcomes other than those of a strictly economic variety.
Unfortunately, these initiatives are not conducted by statistical offices, but by government departments, for their own ad hoc needs and not necessarily for developing national indicators of a systematic nature. Developing such indicators is nevertheless as important as measuring social capital or knowledge management. Statistical offices should be encouraged to get into the "business" of constructing methodologies and collecting data to measure the social outcomes of research. This calls for less of the same and more imagination and innovation.
The past 100 years have been a very productive period with regard to statistics on science. Since the 1950s, we owe this progress mainly to governments and their statistical offices. Most of the time, academics are users of statistics produced by the state. Statistics on science are now at a crossroads. Users of statistics are asking for much more information than before because their analyses and decisions are more fine-grained. National aggregates are no longer enough, and neither are standard classifications. With their current expertise and networks, statistical offices have the capacity to go further. Let's develop a vision for the future. FMI: http://www.csiic.ca/centennial.html.
Dr Benoit Godin is a professor of science studies at the Institut national de la recherche scientifique (INRS).