Image by Dirk (Beeki®) Schumacher from Pixabay
Far less reported than the rankings of the world’s most innovative economies is the actual development of the Global Innovation Index (GII) behind them. Yet, without a technically sound methodology underpinning the index, it is impossible to derive rankings that matter.
First launched in 2007, the index is now a widely recognised tool for benchmarking about 130 economies around the world on their innovation performance. The 2020 edition marks the 10th consecutive year in which the European Commission’s Joint Research Centre (JRC) has assessed the GII through a methodological lens, at the invitation of the GII developers: Cornell University, INSEAD Business School and the World Intellectual Property Organization. This article celebrates this milestone by shedding light on the independent assessment, to which the JRC has contributed since the GII’s earliest editions to guarantee its transparency and reliability.
Key areas of focus of the JRC assessment
The JRC’s independent statistical assessment is based on the recommendations of the Handbook on Constructing Composite Indicators (OECD/JRC, 2008) as well as on the experience gained from reviewing more than 100 international composite indicators and scoreboards covering a wide array of policy domains.
The assessment focuses on the statistical soundness of the index, evaluating how well the available data reflect the underlying conceptual framework. The GII gathers economy-level data on about 80 indicators, which are grouped into 21 sub-pillars, 7 pillars, 2 sub-indices and, finally, an overall index. The index provides a synthetic measure of innovation performance, whereby higher scores signal better performance.
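To make this multi-level structure concrete, here is a minimal sketch that rolls a few made-up indicator scores up into sub-pillars, pillars, sub-indices and an overall score using simple arithmetic averages. The indicator names, groupings and equal weights are assumptions for illustration only and do not reproduce the actual GII methodology.

```python
import numpy as np

# Hypothetical normalised indicator scores (0-100) for one economy,
# keyed by "pillar/sub-pillar" labels; names and values are made up.
indicators = {
    "institutions/political_environment": [72.5, 68.1],
    "institutions/regulatory_environment": [80.3, 75.0],
    "knowledge_outputs/knowledge_creation": [55.2, 61.7, 49.9],
}

# Indicators -> sub-pillar scores (simple averages for illustration).
sub_pillars = {name: np.mean(values) for name, values in indicators.items()}

# Sub-pillars -> pillar scores, grouping by the prefix before the slash.
pillars = {}
for name, score in sub_pillars.items():
    pillars.setdefault(name.split("/")[0], []).append(score)
pillars = {pillar: np.mean(scores) for pillar, scores in pillars.items()}

# Pillars -> Input/Output sub-indices -> overall index (two pillars
# stand in here for the real 5 input and 2 output pillars).
input_sub_index = pillars["institutions"]
output_sub_index = pillars["knowledge_outputs"]
overall = (input_sub_index + output_sub_index) / 2
print(f"Illustrative overall score: {overall:.1f}")
```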
The audit also looks into the impact of key modelling choices on the results. The development of a composite indicator involves assumptions and subjective decisions. The GII is the outcome of a number of choices concerning, among other things, the aggregation formula and the weights assigned to its different components to arrive at a single score for each country. To check the robustness of the GII to these choices, the audit report provides so-called ‘confidence intervals’ for each country, which indicate by how many positions a country could move in the rankings under different modelling assumptions, such as a different set of weights. In simple terms, the smaller the confidence interval, the more reliable the ranking.
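The spirit of such a robustness check can be sketched as follows: recompute the rankings many times under randomly perturbed weights and report, for each economy, the interval of ranks it obtains. All numbers below are invented and the perturbation scheme is an assumption; the actual audit follows the uncertainty analysis documented in the GII reports.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pillar scores for 10 economies across 7 pillars
# (the real GII covers about 130 economies).
scores = rng.uniform(20, 80, size=(10, 7))
base_weights = np.full(7, 1 / 7)

def rank(index_scores):
    # Rank 1 = best (highest composite score).
    return (-index_scores).argsort().argsort() + 1

baseline_ranks = rank(scores @ base_weights)

# Re-rank under 1,000 randomly drawn weighting schemes.
simulated = np.array(
    [rank(scores @ rng.dirichlet(np.ones(7))) for _ in range(1000)]
)

# 5th-95th percentile rank interval per economy: the narrower the
# interval, the less sensitive that economy's rank is to the weights.
low, high = np.percentile(simulated, [5, 95], axis=0)
for i, (b, lo_, hi_) in enumerate(zip(baseline_ranks, low, high)):
    print(f"economy {i}: baseline rank {b}, interval [{int(lo_)}, {int(hi_)}]")
```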
Key findings
A meaningful multi-level structure. The grouping of variables into sub-pillars, pillars, and an overall index is statistically coherent in the GII framework, meaning that the selected indicators are statistically related to the components they have been assigned to. The available empirical data thus support the theoretical understanding of innovation. Furthermore, this year, all but two of the 80 indicators are found to be sufficiently influential. This means that virtually all indicators contribute to the countries’ scores, at different aggregation levels, which is a very positive feature of the framework.
A statistically sound tool. For 76% of the economies covered by the GII 2020, the confidence intervals span fewer than 10 positions. Five economies should, however, be treated with care: Brunei Darussalam, the United Republic of Tanzania, Uzbekistan, Togo and Myanmar. These are more sensitive to the methodological choices, as signalled by their confidence interval widths of approximately 20 positions. Still, this is a remarkable improvement compared to previous editions, in which more than 40 economies had confidence intervals spanning more than 20 positions. The improvement is the direct result of the developers’ choice, since 2016, to adopt a more stringent inclusion criterion, which requires at least 66% data availability within each of the two sub-indices.
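As a rough illustration of how such an inclusion rule can be checked, the snippet below tests a made-up economy against a 66% coverage threshold applied separately to each sub-index; the indicator counts and values are invented and do not correspond to any real GII record.

```python
# Illustrative check of a >=66% data-availability rule per sub-index;
# all values below are hypothetical.
def meets_inclusion_criterion(values_by_sub_index, threshold=0.66):
    """Return True if every sub-index has enough non-missing indicators."""
    for values in values_by_sub_index.values():
        coverage = sum(v is not None for v in values) / len(values)
        if coverage < threshold:
            return False
    return True

economy = {
    "input_sub_index": [54.2, None, 61.0, 70.3, None, 48.8],  # 4/6 ~ 67%
    "output_sub_index": [None, 39.5, None, 44.1],             # 2/4 = 50%
}
print(meets_inclusion_criterion(economy))  # False: Output coverage below 66%
```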
An aggregate index that matters. For about half of the economies included, the GII ranking and the ranking in any one of the seven underlying pillars differ by 10 positions or more. GII rankings are therefore an effective way to highlight a composite dimension of innovation that does not emerge directly from looking at the indicators or pillars separately. At the same time, the JRC analysis points to the value of duly considering all the GII components on their own merit. By doing so, economy-specific strengths and bottlenecks in innovation can be identified and serve as an input for evidence-based policymaking.
A tool that keeps improving. The GII is much more than a ranking of economies with respect to innovation. It represents an evolving attempt to find metrics and approaches that capture the richness of innovation, grounded in statistical best practice and transparency and matured over 13 years of constant refinement. For instance, the GII team’s decision, following JRC advice, to abandon the efficiency ratio (the ratio of the Output to the Input Sub-Index) as of the 2019 edition deserves particular mention. Ratios of composite indicators (in this case, the Output Sub-Index divided by the Input Sub-Index) come with much higher uncertainty than their average (the mean of the Input and Output Sub-Indices, which gives rise to the GII).
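A toy simulation with invented numbers can illustrate the statistical point: under comparable noise on the two sub-indices, the ratio shows a noticeably larger relative spread than the average.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sub-index scores with some measurement noise; the
# distributions and parameters are assumptions for illustration only.
n = 100_000
input_sub = rng.normal(50, 5, n)
output_sub = rng.normal(40, 5, n)

average = (input_sub + output_sub) / 2   # GII-style aggregation
ratio = output_sub / input_sub           # former efficiency ratio

def relative_spread(x):
    # Coefficient of variation: standard deviation relative to the mean.
    return x.std() / x.mean()

print(f"relative spread of the average: {relative_spread(average):.3f}")
print(f"relative spread of the ratio:   {relative_spread(ratio):.3f}")
```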
The results of the JRC assessment confirm that the GII meets international quality standards for statistical soundness, showing that the index is a reliable tool for benchmarking innovation performance in economies around the world.
The JRC is pleased to have contributed, through its independent audits over the years, to making the GII a trustworthy tool, thus enabling policymakers, business managers and executives to derive evidence-based conclusions about the state of innovation around the world.
Michaela Saisana, Valentina Montalto, Ana Neves and Giacomo Damioli are all from the European Commission’s Joint Research Centre.