“You say you want a [data] Revolution”: A proposal to use unofficial statistics for the SDG Global Indicator Framework
Guest working paper
By Steve MacFeely1 and Bojan Nastav2
We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten – Bill Gates 
In 2015, the United Nations (UN) launched its most audacious and ambitious development plan: the 2030 Agenda and the corresponding Sustainable Development Goals (SDGs). That agenda covers sixteen separate dimensions of development, ranging from eradicating extreme poverty, achieving gender equality and ensuring sustainable consumption and production to combating climate change. It also includes a seventeenth, multi-dimensional goal to address implementation. This goal comprises five operational sub-dimensions: finance, technology, capacity-building, trade and systemic issues.
Unlike the previous development programme, the Millennium Development Goals (MDGs), the SDGs explicitly require statistical performance indicators to be compiled, embodied in the Global Indicator Framework (GIF), which was adopted by the UN General Assembly in July 2017. The broad scope of the 2030 Agenda means that (currently) 232 performance indicators are required. Many of these indicators are not produced regularly, if at all. In fact, the Inter-Agency and Expert Group on Sustainable Development Goal Indicators (IAEG-SDGs) calculated in May 2018 that fewer than half of the selected indicators for the GIF could be populated.
Various agencies and economists have attempted to put a cost on populating the GIF. The estimates vary enormously, but all are far in excess of existing funding. In an environment of faltering multilateralism, it seems unlikely that available funding will match requirements. Yet political expectations appear to be very high; perhaps irrationally so, considering the scale and complexity of the SDG targets and the resultant indicators. Historic difficulties in populating the more modest MDG indicators suggest these expectations may be very optimistic. Therefore, in order to meet expectations, a new, or supplementary, approach is required.
One supplementary approach could be the introduction of a mechanism to certify unofficial statistical indicators as official. We make this suggestion somewhat reluctantly. Our hesitancy arises because we believe the ideal situation is one where National Statistical Offices (NSOs), National Statistical Systems (NSSs) and International Organisations (IOs) are mandated, and properly funded and resourced, to compile all required national and international official statistics respectively. However, as this is not the case, and it is hard to envisage a sudden and dramatic improvement, alternative solutions must be considered. The pessimistic viewpoint sees this as the thin edge of a dangerous wedge, whereby funding to NSOs may be further reduced and the standing of NSSs and official statistics further undermined. A more optimistic perspective recognizes the opportunity to extend the role and mandate of official statistics beyond their current scope.
The idea of using unofficial data to compile official statistics, be they national or international, is not new. NSOs use unofficial data every day as inputs to compiling official statistics. IOs must also resort to using unofficial data to compile global aggregates. In fact, the Committee for the Coordination of Statistical Activities (CCSA) has published guidelines on the topic: Recommended Practices on the Use of Non-Official Sources in International Statistics. But what if we were to go a step further? Rather than simply using unofficial data as inputs to derive or compute official statistics, what if we could use already compiled unofficial statistics to fill some of the gaps in official statistics? At this point it may be useful to clarify a very important distinction: data and statistics are not the same thing. While the terms are frequently (and incorrectly) used interchangeably, they are in fact two different things. Data are basic elements or single pieces of information. Statistics are numerical data that have been organized through mathematical operations.
Nor is the idea of certification new. Several papers have raised the question before, directly or indirectly, of whether there is a new role for official statistics as a certification authority. Hammer et al [8: p.19] summarise the issue succinctly: ‘statistical agencies could consider new tasks, such as the accreditation or certification of data sets created by third parties or public or private sectors. By widening its mandate, it would help keep control of quality and limit the risk of private big data producers and users fabricating data sets that fail the test of transparency, proper quality, and sound methodology.’
In this paper we discuss whether such a mechanism might be useful in the specific context of compiling indicators for the 2030 Agenda. From the outset, we would like to clarify that the proposal put forward in this paper is not driven by any ideological position but rather by a desire to find a pragmatic, yet professional, solution to what we perceive as a serious problem. In making this proposal our intention is not to be subversive or iconoclastic. We have no desire to undermine or corrode official statistics. We are not setting out to promote or argue for the privatization or ‘uberfication’ of official statistics. Nor are we advocating a completely open wiki-stat approach. On the contrary, we are staunch defenders of the need for impartial, independent official statistics. But given the pace of progress, the cost of developing the SDG indicators and the weight of expectation, we feel it is necessary to ask whether there are other approaches. Specifically, we are asking whether there might be a way to collaboratively harness the intellectual power of those outside the official statistics tent to avoid needless delays, duplication and expenditure. Bordt and Nia [9: p.1] argue that populating the GIF is an ‘adaptive challenge requiring us to go beyond any one authoritative expertise to discover and generate new capacity, new expertise, and new ways of doing things.’ We agree. We also argue that in a post-truth era, official statistics might do well to take more control of a rapidly fragmenting and complicated information environment. Our fear is that, as Gates has warned, we (the statistics community) are underestimating the changes that are underway in the world of data and statistics. In brief, this is a risk management strategy.
This proposal is in keeping with the inclusive spirit of the 2030 Agenda and the idea of holistic data ecosystems. To date, many of the discussions regarding the GIF have positioned official national statisticians, official international statisticians and other statistical compilers as competitors. But perhaps there is a way to collaborate rather than compete? This latter aspect of collaboration and data sharing is at the heart of the recent recommendations of the Bogota Declaration of the UN Global Working Group on Big Data.
The paper is divided into two parts. Part 1 (Background and Context) outlines some of the background issues, such as measurement difficulties and the likely costs associated with populating the GIF to help readers understand the scale of the challenge facing the global statistical community. Part 2 (A Proposal for a system to accredit unofficial statistics) outlines the proposal and argues the approach is consistent with the philosophy of the 2030 Agenda. The arguments put forward are also consistent with notions of the wider data revolution and a longer historic trend of embracing ‘unofficial’ scientific discovery through accreditation and validation by a recognized authority.
Part 1 – Background and Context
‘The data demands arising from the SDGs are huge and cannot be realistically met
by official data alone’ – M. Kituyi, UNCTAD Secretary General 
1.1 Measuring the SDGs
From a statistical perspective, the measurement challenges arising from the 2030 Agenda are huge. To assess progress, each of the 169 complex, multi-faceted targets requires a statistical indicator. In fact, many of the targets are so complex that 232 indicators have been agreed. In truth, if all aspects of the targets were covered properly, perhaps twice that number would be required.
The MDG requirements were modest in comparison with the SDGs. Nevertheless, at the end of the MDG lifecycle in 2015, countries could populate, on average, only 68 per cent of MDG indicators. In their final MDG progress report, the United Nations [13: p.10] warned that ‘Large data gaps remain in several development areas. Poor data quality, lack of timely data and unavailability of disaggregated data on important dimensions are among the major challenges. As a result, many national and local governments continue to rely on outdated data or data of insufficient quality to make planning and decisions.’
The far-reaching ambition of the 2030 Agenda has led to development targets that are well ahead of available official statistics and statistical concepts. In many cases, appropriate statistical methodologies do not yet exist from which to generate indicators. To delineate this problem and facilitate the population of the GIF, the IAEG-SDGs has classified all SDG indicators into three tiers on the basis of their conceptual development and the availability of data. The tiers are:
Tier 1: the indicator is conceptually clear, has an internationally established methodology, standards are available, and data are regularly produced by countries for at least 50 per cent of countries and of the population in every region where the indicator is relevant.
Tier 2: the indicator is conceptually clear, has an internationally established methodology, standards are available, but data are not regularly produced by countries.
Tier 3: no internationally established methodology or standards are yet available for the indicator, but methodology/standards are being (or will be) developed or tested.
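The tier definitions above reduce to two questions: does an internationally established methodology exist, and are data regularly produced? As a purely illustrative sketch (the field names and the `Indicator` type are ours, not an official IAEG-SDGs schema), the classification logic can be expressed as:

```python
# Illustrative sketch of the IAEG-SDGs tier logic described in the text.
# Field names are hypothetical; they are not part of any official schema.
from dataclasses import dataclass

@dataclass
class Indicator:
    has_established_methodology: bool  # internationally agreed methodology and standards exist
    regularly_produced: bool           # data regularly produced for >= 50% of countries/population

def classify_tier(ind: Indicator) -> int:
    if not ind.has_established_methodology:
        return 3  # Tier 3: methodology/standards still being developed or tested
    if not ind.regularly_produced:
        return 2  # Tier 2: methodology exists, but data are not regularly produced
    return 1      # Tier 1: methodology exists and data are regularly produced

# e.g. classify_tier(Indicator(True, False)) returns 2
```

Note that data availability is irrelevant for Tier 3: without an agreed methodology, any available data cannot yet be interpreted as the indicator.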
In May 2018, the IAEG-SDGs reported that only 40 per cent of the selected indicators could be classified as Tier 1 (see Table 1). Furthermore, they reported that 27 per cent of the indicators remained classified as Tier 3. While Table 1 shows the not inconsiderable improvements in conceptual development and data availability that have been made since 2016, it also highlights the magnitude of the task still facing the global statistical community. The pace of transition of indicators through the tiers to Tier 1 is likely to slow, as presumably the low-hanging fruit will be picked first.
Table 1 – Number of SDG Indicators by Tier
Source: Derived from IAEG-SDGs
1.2 Who Measures?
Countries understandably guard their reputations closely. Consequently, countries can be quite sensitive about what is measured and who does the measuring. This sensitivity has often led to tensions between official national statistics compilers and external compilers, whether they are IOs, Non-Governmental Organisations (NGOs), universities or other countries. In the context of the 2030 Agenda, this has led to some tension as to whose data should be used in the compilation of the indicators for the GIF.
Countries, perhaps not unreasonably, are anxious that only official national data are used to populate the SDG indicators. But there are some circumstances where this approach may be sub-optimal. In thinking about this, it is useful to remember that the primary purpose of the GIF is to produce global indicators.
The first reason to query the ‘country data’ approach is where specific national official statistics do not exist. Unfortunately, this is not an uncommon problem. It makes perfect sense to use good quality national official statistics when they exist but if they don’t, and there is insufficient data to populate global indicators (see Tier 2 – Table 1), then other approaches must be found. The demand for SDG indicators has exacerbated this problem as many of the targets (and consequent indicators) fall far outside the scope of traditional official statistics and thus are not guided by agreed international measurement standards (see Tier 3 – Table 1).
A second problem with the ‘country data’ approach arises where there are problems with the national official statistics themselves. Problems could mean anything from incompleteness, errors or inaccuracies, non-adherence to international standards, inconsistencies over time, or imbalances. A good example is the asymmetries that frequently exist between bilateral trade datasets. From a global perspective, unbalanced trade data are not especially useful, and so steps are taken by IOs to remove these asymmetries. This may lead to a mismatch between official national statistics and official international statistics. The issue is not unique to international trade; problems with national data exist across all statistical domains. Despite the best efforts of countries and IOs, internationally comparable data will be a real challenge for the GIF.
A third and more delicate issue is that of impartiality. Targets such as 16.5 or 16.6, which deal with corruption, bribery and institutional accountability, provide perfect examples of why it might make sense to use external data sources and independent statistical compilers. There are clearly cases where official national statistics cannot be trusted to provide an independent or impartial picture. This is not to say that all national data are untrustworthy; on the contrary, most national official statistics are trustworthy. But there are cases (either indicator- or country-specific) where arguments can be made that more independent and trustworthy data may exist.
Another exception to the ‘country data’ approach arises from what can be termed the data revolution. Today our day-to-day dependence on technology is leaving a bewildering array of ‘digital footprints’. This has created a deluge of digital data. Some of these new digital datasets are global in scope, offering the possibility of compiling genuinely harmonised global statistics. In such cases, where a single data source might provide better quality and more consistent data than the amalgamation of multiple individual country datasets, it would seem senseless to discount their use for the purpose of global reporting. This might apply to targets such as 15.1, which deals with forests, drylands, wetlands and mountain regions governed by international agreements. Arguably, superior quality and internationally comparable data could be derived from satellite imagery rather than from multiple national datasets, many of which will be based on irregular sample surveys of varying quality.
1.3 The Cost of Measurement
Unlike the MDGs, the SDGs are universal. The SDGs are also much broader in scope, going far beyond simply reducing extreme poverty to encompass the survival of our planet, improving equity and freedom in our societies and trying to develop a more stable and sustainable economic model. One of the implications of such a broad and ambitious development agenda is the price tag. Estimates vary, but Ambassador Macharia Kamau of Kenya, who co-chaired the SDG intergovernmental consultative process, anticipates that implementing the 2030 Agenda could cost somewhere between $3.5 and $5 trillion per year. Ibrahim Thiaw (2016), United Nations Assistant Secretary-General and Deputy Executive Director of the United Nations Environment Programme, estimates it will cost at least an additional US$1.5 trillion annually over the Millennium Development Goals. The intergovernmental committee of experts on sustainable development financing estimated the investment in infrastructure required to achieve the eradication of poverty alone at between $5 and $7 trillion annually.
To put these numbers in perspective, total Official Development Assistance (ODA) contributions from OECD Development Assistance Committee members average about $113 billion per year (in current prices)3. Since Monterrey in 2002, when the wealthier nations of the world promised to contribute 0.7% of their Gross National Income to ODA, the cumulative shortfall in ODA (2002–2016) has risen to $2.4 trillion (current prices), or $2.9 trillion in 2016 constant prices. Since 2015, and the commencement of the 2030 Agenda, the average country effort has not changed appreciably (see Figure 1.3) and remains well short of the 0.7% target.
Figure 1.3 – Net Official Development Assistance (Total)
as a % of Gross National Income, 2002 – 2017
Source: OECD DAC: https://data.oecd.org/oda/net-oda.htm and authors’ own calculations.
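The cumulative shortfall is simply the sum, over donors and years, of the gap between 0.7% of Gross National Income and actual ODA disbursed. The sketch below illustrates the arithmetic only; the GNI and ODA figures in the example are made-up placeholders, not OECD data.

```python
# Illustrative calculation of the cumulative ODA shortfall described above.
# Input figures in the example are hypothetical, not OECD statistics.
TARGET_RATIO = 0.007  # Monterrey commitment: 0.7% of Gross National Income

def cumulative_shortfall(donor_years):
    """Sum the gap between promised and actual ODA across donor-years.

    `donor_years` is an iterable of (gni, oda) pairs in the same currency
    unit. Donor-years at or above the 0.7% target contribute zero.
    """
    return sum(max(gni * TARGET_RATIO - oda, 0.0) for gni, oda in donor_years)

# Two hypothetical donor-years: GNI 10,000 with ODA 40 (target 70, gap 30),
# and GNI 12,000 with ODA 90 (above the target of 84, gap 0).
print(cumulative_shortfall([(10_000, 40), (12_000, 90)]))  # 30.0
```

Whether above-target donor-years should offset below-target ones (here they do not) is a methodological choice; the headline $2.4 trillion figure depends on such conventions and on the price basis used.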
Developing the statistical concepts and collecting the data required for the GIF will not be inexpensive either. The Global Partnership for Sustainable Development Data estimates that around $650 million per year is needed to collect data, of which only $250 million is currently funded. PARIS21 [20: p.11] has estimated that ‘funding for statistics needs to be increased from current commitments of between US$300 million and 500 million to between US$1 billion and 1.25 billion by 2020.’ Irrespective of which estimate is used, these sums clearly exceed existing funding. More recently, PARIS21 has estimated that ODA devoted to data and statistics increased from $591 million in 2015 to $623 million in 2016. This is a welcome development, but it still amounts to only one third of one per cent of ODA, and remains far short of the $740 million spent in 2013.
1.4 Summary of challenges
Part 1 has provided some of the background and context relevant to the proposals put forward in Part 2. The universal and broad scope of the 2030 Agenda presents real measurement challenges for the global statistical community. Populating the GIF will be a challenging and complex task with enormous resource implications, even for developed countries with sophisticated statistical systems. History suggests it is highly unlikely that all 232 indicators will be populated by 2030. Today, only 40% of the SDG indicators can be populated. One of the risks with the tier system is that it has created a vacuum, and as the saying goes, ‘nature abhors a vacuum’. Who will fill that vacuum, and how? At a time when multilateralism is faltering, when funding is not matching ambition, and when the ‘data revolution’ has brought new competition, we see countries clinging to an anachronistic view, prioritising ‘country data’, and international organisations jealously laying claim to indicators to attract or safeguard funding. If the development and statistical communities are serious about populating the GIF, then it is time to consider alternative approaches.
Part 2 – A Proposal for a system to certify unofficial statistics
“one of the greatest tasks of our era may be figuring out how to unlock and harness the value of [private and civil sector] data to provide actionable insights for positive social and economic impacts” Stefaan Verhulst 
As outlined in Part 1, the data demands arising from the 2030 Agenda are enormous. If the history of MDG data is any indication of future outcomes, a large portion of the GIF could remain empty for much of the remaining time between now and 2030. Addressing the data gaps using only traditional approaches will realistically not succeed. For this reason, we propose not only using existing unofficial data as inputs to derive SDG indicators, but also using already compiled unofficial indicators or statistics.
The rationale behind this proposal is straightforward. The demand for data to populate the SDG GIF far outstrips supply from traditional sources. Yet there is no shortage of data and indicators in existence; if anything, the opposite is true: we are awash with both. The statistical and information landscape has changed utterly over the past decade. Today an unimaginable range of statistical indicators is being compiled by a wide variety of producers: civil society, academia, NGOs and the private sector. For the purposes of the GIF, many of these indicators have not been considered to date. Bearing in mind the scale of the challenge facing the statistical community, we argue it is time to rethink this approach.
2.1 A Proposal
An agreed, recognized and mandated body, with the authority and competence to certify statistics as ‘fit for purpose’, would review unofficial statistics to see whether they can be certified as ‘official’ for the purposes of populating the SDG GIF. Statistics certified ‘fit for purpose’ could be accredited and used as official statistics. For the purposes of this discussion, ‘fit for purpose’ means that an indicator or statistic meets pre-defined quality and metadata standards and has been compiled in an impartial and independent manner. Those pre-defined standards and criteria must be open and transparent to all. For the purposes of this argument, the term quality can be interpreted in the broadest sense, encompassing all aspects of how well statistical processes and outputs fulfil expectations of an SDG indicator. In more concrete terms, ‘fit for purpose’ would mean that any statistic must be relevant, accurate, reliable, coherent, timely, accessible and interpretable. The statistic must be produced using sound methodologies, concepts and reliable systems. The statistic must also be compiled within an institutional environment that recognises the need for objectivity, impartiality and transparency. This last point is important. For a statistic to be designated official, neither the input data nor the methodologies can be proprietary; both must be available to all and open to scrutiny (subject to obvious confidentiality constraints).
This proposal envisages the SDG GIF being populated from a combination of official statistics and unofficial (but certified) statistics. To ensure a level playing field and maintain quality standards, a formal accreditation system is required. By combining official and accredited unofficial sources into a single high-quality ‘pool’, the chances of successfully populating the GIF will increase (see Figure 2.2).
In this new regime the indicator pool would comprise:
- Official national statistics. These are statistics produced by the NSO in accordance with the Fundamental Principles of Official Statistics , other than those explicitly stated by the NSO not to be official; and all statistics produced by the NSS i.e. by other national organisations that have been mandated by national government or certified by the head of the NSS to compile statistics for their specific domain.
- Unofficial national statistics that are accredited as ‘official national statistics’ by the NSS for the purposes of supplying statistics to populate the SDG GIF.
- Official international statistics. These are statistics, indicators or aggregates produced by a UN agency or other IO in accordance with the Principles Governing International Statistical Activities . It is often necessary for a UN agency, or other international organisation, to modify official national statistics that have been provided by an NSO or another organisation of a NSS, in order to harmonise statistics across countries, to correct evidently erroneous values or to reconcile with international standards. Furthermore, in the absence of an official national statistic, a UN agency or other international organisation may compile estimates. Thus, it is not sufficient to define official international statistics as simply the reproduction of official national statistics.
- Unofficial international statistics that are accredited as ‘official international statistics’ by the body mandated by the UN Statistical Commission for the purposes of supplying statistics to populate the SDG GIF.
Figure 2.2 – A proposed future: Using unofficial data and statistics to compile SDG indicators
This supplementary approach would only be used when particular conditions apply. Firstly, it should be a measure of last resort, and only considered when all other official options have been exhausted. Specifically, when:
- Tier 3 indicators (i.e. indicators for which no internationally established methodology or standards are available) remain unpopulated and, realistically, no methodology or standards will be developed in time. The concept of ‘in time’ will need to be specified; perhaps 2025 would be a reasonable cut-off.
- Tier 2 indicators (i.e. indicators that are conceptually clear, with an internationally established methodology and available standards) remain unpopulated and data are not being systematically produced. Here too, a cut-off date will be needed; again, 2025 might be sensible.
Secondly, compilers of unofficial indicators hoping to secure accreditation must demonstrate their adherence to the principles of official statistics. For national accreditation this means observance of the UN Fundamental Principles of Official Statistics (ibid). In particular, principles 1 (impartiality), 2 (professionalism), 3 (scientific standards), 6 (confidentiality), 9 (international classifications) are of special relevance and should be rigorously tested. Principle 5 (quality and other aspects of data) is also extremely important. For global accreditation it would mean adherence to the Principles Governing International Statistical Activities (ibid).
Thirdly, unofficial indicators will be required to meet a defined set of quality standards. For national accreditation, the indicator would be required to meet the same standards and conditions as set out in the national code of practice or national statistical quality framework. For international accreditation, the indicator will be expected to meet the quality standards as defined in the UN Statistical Quality Assurance Framework . Furthermore, clear metadata standards should be set for accreditation. In cases where standards don’t yet exist, the Common Metadata Framework  sets out suitable generic standards that could be used as criteria for accreditation.
Finally, prospective compilers of official SDG indicators must be able to guarantee that they can supply those indicators for at least the lifetime of the 2030 Agenda. In practical terms, this means being able to supply the statistic, at a minimum, on an annual basis for the years 2010–2030. While sufficient funding is important, in line with the Fundamental Principles of Official Statistics (ibid), that funding must be free of any political or ideological conditions or influence. Access to the indicator itself must also be open and constraint-free.
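Taken together, the conditions above form a conjunctive gate: a submission fails accreditation if any one condition is unmet. The sketch below is purely illustrative; the `Submission` type and its field names are ours, not part of any proposed legal or technical standard.

```python
# Hypothetical sketch of the accreditation gate for unofficial indicators,
# combining the conditions set out in the text. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Submission:
    tier: int                       # current IAEG-SDGs tier of the target indicator
    officially_populated: bool      # is the indicator already populated from official sources?
    meets_principles: bool          # adherence to the (Fundamental) Principles of official statistics
    meets_quality_standards: bool   # national code of practice or UN quality assurance framework
    supply_guaranteed_to_2030: bool # annual supply guaranteed for the lifetime of the Agenda
    funding_condition_free: bool    # funding free of political or ideological conditions
    open_access: bool               # indicator accessible without constraint

def eligible_for_accreditation(s: Submission) -> bool:
    # Last resort: only Tier 2 or Tier 3 indicators that remain unpopulated qualify.
    last_resort = s.tier in (2, 3) and not s.officially_populated
    return (last_resort
            and s.meets_principles
            and s.meets_quality_standards
            and s.supply_guaranteed_to_2030
            and s.funding_condition_free
            and s.open_access)
```

An all-conditions-required design keeps the bar at the level of official statistics; relaxing any single condition would be a policy decision for the mandated accreditation body, not a technicality.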
2.2 How does this differ from the current situation?
Unofficial or ‘non-official’ data sources are already being used as inputs in the compilation of official statistics all around the world, at both national and international level. At national level, for example, unofficial data are frequently used to supplement official survey data in the derivation of consumer price index expenditure weights, retail sales index trading-day weights, and in many aspects of compiling national accounts. Typically, at national level, there are no official guidelines or accreditation systems governing these processes. Depending on the quality and detail of the metadata, the reliance of an individual statistic on unofficial data may or may not be clear. As noted above, NSOs are guided by their own national codes of practice and the UN Fundamental Principles of Official Statistics (ibid), in particular principle 5, which states that ‘Data for statistical purposes may be drawn from all types of sources, be they statistical surveys or administrative records4…’
The same is true at international level, except that IOs are directed by the Committee for the Coordination of Statistical Activities guidelines on the use of unofficial data. Those guidelines, Recommended Practices on the Use of Non-Official Sources in International Statistics, provide direction on the use of unofficial source data. No formal accreditation system is necessary when using unofficial data, as they are effectively subsumed into official aggregates and thus covered by the formal ‘official’ label applied to the derived indicator. In other words, accreditation of the unofficial data is implicit. The guidelines are, however, silent on the use of fully developed indicators.
Thus, both NSOs and IOs already regularly use unofficial source data to compile official statistics. This practice is expected to grow as statistical agencies look beyond survey data and administrative records to investigate whether big data is a useful source for compiling official statistics. In 2018, 34 NSSs from around the world had registered 109 separate big data projects on the Big Data Project Inventory5 compiled by the UN Global Working Group on Big Data. IOs had logged a further 91 projects. NSOs and IOs are investigating a wide range of big data sources, from satellite imagery to mobile phone call detail records (CDRs), to augment or supplant existing data sources or to generate completely new statistics. The question now is how all of this activity will be integrated with the compilation of official statistics more generally.
This proposal goes a step further than existing practices and frameworks: in the specific cases outlined in section 2.1, it anticipates creating a larger ‘pool’ from which SDG indicators can be drawn. This pool would comprise not only official statistics derived from unofficial source data, but also already developed unofficial indicators or statistics (reclassified as official); see Figure 2.2. Compilers of statistics (official and unofficial) would submit bids (proposals) to the IAEG-SDGs for consideration. Bids would only be considered if they adhere to agreed quality and metadata standards and the broader principles of official statistics.
2.3 Risks associated with adopting this proposal
No doubt persuasive counter-arguments can be made against implementing this proposal; after all, such a move will introduce risks. But not adapting to the modern data world runs the risk of achieving only a partially populated GIF, which in turn risks tarnishing the reputation of the global statistical community. A business-as-usual approach also puts NSSs, particularly those in developing countries, under unnecessary pressure to compile a range of new statistics.
This section outlines some of the most likely risks in adopting this proposal. There are legal concerns, reputational risks and practical implementation issues, such as costs, to be considered. Some of these issues are discussed briefly.
2.3.1 Legal Issues:
In theory, accreditation could be done at national level or at global level. At national level, it will be important that compilers of unofficial indicators can demonstrate that they adhere to the same standards as compilers of national official statistics. In most countries, the national accrediting body will most likely (but not necessarily) be the head of the NSS or, if a formal system does not exist, the head of the NSO. In some countries this may be the same person. Here some legal hurdles might need to be cleared. For example, not only might the unofficial statistic itself need to be accredited as an official statistic, but the compiling agency might also need to be certified as a public body or a recognized statistical agency or authority, in order to comply with national statistical legislation and/or national codes of practice. In some countries, for example, official statistics are defined as statistics compiled by the NSO or other public institutions. Such broad accreditation might be seen as a bridge too far. However, this obstacle might be circumvented by outsourcing the actual compilation of the statistic (under licence) to a third party, with the statistic itself disseminated by a recognized body of the NSS or by the NSO itself. This approach would also satisfy the UN Fundamental Principles of Official Statistics (ibid).
At global level, as no head of the global statistical system exists, an accreditation body would need to be mandated. The UN Statistical Commission would seem to be the appropriate body to mandate such an accreditation board; one could imagine it asking the IAEG-SDGs to take on this additional task. Were the IAEG-SDGs mandated as the statistical accreditation body, it would most likely need additional statistical support (as the indicators in question will most likely fall outside the expertise of traditional NSO statisticians), in particular from IOs, which can provide technical, professional and secretarial support.
Equally, at the global level there is no statistical law to impose constraints. The UN Fundamental Principles of Official Statistics (ibid) discussed above apply only to official national statistics and so have nothing to say regarding the compilation of official international statistics (which would include global SDG indicators). The Principles Governing International Statistical Activities (ibid), which are the equivalent of the fundamental principles for compilers of official international statistics, are also silent on who exactly can compile international statistics or who is a member of the international statistical community. As the CCSA has expanded considerably over recent years, there is clearly some flexibility in how 'international statistical community' can be interpreted6. There is also some ambiguity as to what an official international statistic is. The UN Statistics Quality Assurance Framework (ibid: p.9) defines official international statistics as 'statistics, indicators or aggregates produced by a UN agency or other international organisation in accordance with the Principles Governing International Statistical Activities (ibid) formulated by the Committee for the Coordination of Statistical Activities'. But this framework applies only to UN agencies and thus does not prescribe the activities of other non-UN IOs.
There is no doubt more to be said on this matter. Nevertheless, a preliminary assessment suggests that there are no absolute legal barriers sufficient to prevent either national or global accreditation mechanisms being put in place, should that be desired. Nor would such mechanisms, if done carefully, breach the letter or the spirit of the UN Fundamental Principles of Official Statistics (ibid) or the Principles Governing International Statistical Activities (ibid).
2.3.2 Reputational risks
There will naturally be concerns that certifying unofficial statistics as official may ultimately undermine or tarnish the official statistics brand. A valid argument can be made that by using unofficial statistics, the line between official and unofficial statistics may become blurred and the reputation of official statistics damaged or put at risk. Such a risk must be anticipated and mitigated, as official statistics have many unique qualities and enjoy a reputation worth preserving. Consequently, it will be very important that the protection of the official statistics brand is carefully considered.
There may also be concerns that in allowing some unofficial sources to be designated as official, this may be the thin end of a dangerous wedge, whereby the compilation of official statistics is slowly outsourced or privatized and incrementally taken away from NSOs and NSSs. Some may also fear that this is somehow an admission of failure – that official statisticians cannot deliver. There may be concerns too that, in an era of data revolution but reduced funding for official statistics, official statistics is already surrendering ground to other information providers and this proposal will only add fuel to the fire. In other words, effectively outsourcing the production of official statistics may further drain funding from NSSs and IOs. Perhaps so, but a (cold) data war is already underway. There is a growing asymmetry in the resources available for the compilation of public/official and private/unofficial statistics and indicators. In a world where official estimates are increasingly being challenged by 'alternative facts' it may be unwise to take the future of official statistics for granted. This may sound alarmist, but developments in Greece, Canada, Norway and, most recently, Tanzania provide sobering reminders that the impartiality and independence of official statistics can be surprisingly fragile.
The reputational risks outlined in this section are not trivial and must be carefully considered and mitigated. Official statistics must adapt in a way that allows official national and international mechanisms to take some control (or at least exert more influence) over a rapidly fragmenting information landscape. Reputation is a double-edged sword: the risk of reputational damage arising from the certification of unofficial data must be balanced against the risk of reputational damage arising from official statistics failing to deliver on the expectations of Agenda 2030.
2.3.3 Double standards
To certify unofficial indicators as official, a level playing field will be essential. Careful thought must be given to ensuring that quality standards are comparable, so that neither unofficial nor official compilers are placed at a disadvantage. It will be very important that unofficial statistics do not enjoy light-touch regulation vis-à-vis their official counterparts, or vice versa. If unofficial statistics are to be used, then they must adhere to the same high quality standards as official statistics. The dimensions of those quality standards, for the purposes of compiling UN statistics, are defined by the UN Statistics Quality Assurance Framework (ibid: p.22) as: relevance; accuracy; reliability; coherence; timeliness; punctuality; accessibility; and interpretability.
Adherence to the principles of official statistics must also be a condition for accreditation. Although the principles themselves are not overly specific in technical terms, their importance cannot be overstated. In particular, principles 1 (impartiality), 2 (professionalism) and 6 (confidentiality) are of paramount importance. More technical in nature, but no less important, are principles 9 and 5, which deal with the use of international classifications and quality standards respectively. Thus, adherence to the UN Fundamental Principles of Official Statistics (ibid) must apply to all compilers. In particular, unofficial statistics must adopt the same standards of openness and transparency of metadata.
In order to accredit unofficial statistics as official, these quality dimensions and principles must be assessed and judged 'fit for purpose' for SDG indicators. Indicators must also be available for the entire duration of the 2030 Agenda, ideally from 2010 to 2030. Any indicator selected as an SDG indicator must provide certainty on this point.
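Purely as an illustration of how such a level playing field might operate, the checks described in this subsection can be imagined as a single checklist applied identically to every compiler. The sketch below (in Python) uses the eight quality dimensions named in the UN Statistics Quality Assurance Framework, but the scoring scheme, pass mark, class names and the example submission are entirely hypothetical assumptions of ours, not part of any official framework:

```python
from dataclasses import dataclass, field

# The eight quality dimensions named in the UN Statistics Quality
# Assurance Framework; the 0-5 scoring scheme below is hypothetical.
QUALITY_DIMENSIONS = [
    "relevance", "accuracy", "reliability", "coherence",
    "timeliness", "punctuality", "accessibility", "interpretability",
]

@dataclass
class IndicatorSubmission:
    """A candidate SDG indicator submitted for accreditation (illustrative)."""
    name: str
    compiler: str
    official: bool                               # official or unofficial compiler
    scores: dict = field(default_factory=dict)   # dimension -> 0..5 rating
    open_metadata: bool = False                  # openness/transparency condition

def assess(submission: IndicatorSubmission, pass_mark: int = 3) -> bool:
    """Apply identical criteria to all compilers: every dimension must be
    rated at or above the pass mark, and metadata must be open."""
    for dim in QUALITY_DIMENSIONS:
        if submission.scores.get(dim, 0) < pass_mark:
            return False
    # Transparency of metadata is a condition for all compilers alike.
    return submission.open_metadata

candidate = IndicatorSubmission(
    name="SDG 14.1.1 plastic debris density",
    compiler="hypothetical NGO",
    official=False,
    scores={d: 4 for d in QUALITY_DIMENSIONS},
    open_metadata=True,
)
print(assess(candidate))  # prints True
```

The point of the design is that `assess` contains no branch on the `official` flag: official and unofficial compilers face exactly the same criteria, which is the 'no double standards' condition argued for above.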
2.3.4 Data neutrality
Conflict of interest is always a risk when consumers of data become compilers. Advocacy or ideology may encourage statisticians to seek a certain result or outcome. The impartiality, or agnosticism, of official statisticians is one of the key strengths of official statistics. The European Statistics Code of Practice and the UN's Fundamental Principles of Official Statistics (ibid) both stress that official statistics must be compiled free from political and other external interference.
The counter-argument is that 'there is no such thing as neutral information' [34: p.179] and that consumers probably know the context better and so can compile better, more nuanced, statistics. These are not invalid arguments. As Borgman [35: p.69] points out, 'Behind every mechanism for organizing knowledge are unstated and undocumented assumptions.' Rudder [36: p.257] too notes that 'behind every number there's a person making decisions'. Or, as the title of Gitelman's book eloquently puts it, '"Raw Data" Is an Oxymoron'. Even the choice of indicator can reflect political ideology or an attempt to control the narrative.
Every statistic comprises several conscious and subconscious decisions: how to treat outliers, how to impute missing values, what level of aggregation to choose. The list of decisions is almost endless. So, no statistic is strictly neutral, in the sense that choices have unavoidably been made during compilation. But perhaps the more relevant question is whether the statistics were compiled to provide impartial information or to advocate for a specific objective? Not always an easy question to answer. The purpose of official statistics is the former: to provide statistics and information that, in so far as is possible, are free from any political agenda. The argument as to whether other agents can compile better statistics than official statisticians is at the heart of the debate as to whether centralised or decentralised statistical systems are better. There are strengths and weaknesses in either approach. Centralised statistical systems are typically seen as strong on independence and impartiality but sometimes struggle with relevance, owing to their remoteness from policy debate. Decentralised statistical units often produce highly relevant statistics but are more susceptible to political interference and pressure to present statistics relating to ministerial policies and outcomes in a favourable light, thus compromising the credibility of the statistics. From the perspective of accrediting unofficial statistics, all compilers must be able to demonstrate adherence to principle 1 of the UN Fundamental Principles of Official Statistics (ibid).
The risks associated with users compiling statistics already exist. These risks can arguably be mitigated through implementation of codes of practice, quality standards, transparent metadata, open data standards and peer reviews.
2.4 The risks associated with not adopting the proposal
Some of the risks associated with implementing the proposed approach have been outlined above. But there are also risks in not considering such an approach, and these too must be carefully weighed. The main risks would appear to be those arising from unaddressed competition.
We live in a world where development funding is not exclusively provided by States. Philanthropic funding is now increasingly important, with funds such as the Gates, Ford, Hilton and Rockefeller foundations making enormous sums of money available. Unfortunately, relatively little is known about these philanthropic funds, what they fund or how they decide what gets funded. Salazar estimated that in 2009 the top 10 philanthropic foundations made US$5.6 billion available, of which US$3.6 billion was given to 'global development'. In 2016, Viergever and Hendriks estimated that the 10 largest philanthropic funders of health research together funded research costing US$37.1 billion, constituting 40% of all public and philanthropic health research spending globally. They note the need for increased transparency about who the main funders are globally.
The danger for official statistics is that philanthropically funded projects may inadvertently be counterproductive, competing with official statistics and the SDG GIF. In the growing world of online collaboration, competition to the SDG GIF could emerge at any time. If other data compilers in civil society or the private sector feel disenfranchised or frustrated with the official approach, they may develop competing frameworks. Arguably this has begun already. The Sustainable Development Solutions Network (SDSN)7, the Global Partnership for Sustainable Development Data (GPSDD)8 and the United Nations Global Pulse9 are all, in one way or another, competing with the UN Statistical Commission. They are all competing for funding and other resources to improve data and statistics for development. Take the GPSDD for example: reportedly a network of more than 280 members, including governments, the private sector, civil society, international organizations, academic institutions, foundations, statistics agencies and other data communities, it was established to fully harness the data revolution for sustainable development. Its ambitions include, among other things: strengthening inclusive data ecosystems; driving global collaboration to improve the production and use of data; developing global data principles and protocols for sharing and leveraging privately held data; bringing together data communities at global and national level to spur innovation and collaboration; harmonizing data specifications and architectures; and ensuring the interoperability of technology platforms for assembling, accessing and using data. These all seem like sensible ambitions. The risk, of course, is that in doing so it may undermine the global structure established by countries to do exactly this – the United Nations.
There is also a risk that several of the organisations that have joined the network did so under duress, as they cannot risk being excluded or being seen to be irrelevant. The distinction between voluntary collaboration and forced cooptation is often blurred.
In terms of addressing the threat of competition, it is arguably better that official statistics takes control and propagates statistical standards, rather than building a wall in an attempt to shut off or safeguard official statistics from other compilers. Furthermore, in the rapidly changing data environment of today, not adapting may be the bigger risk.
2.5 Ideological arguments
It is clear there is resistance in many countries to governments collecting more data, underpinned, supposedly, by fears of a Big Brother state. Despite statistical legislation and the UN Fundamental Principles, respondents, most particularly firms, do not trust NSOs to safeguard their data from other arms of government or not to use their data for non-statistical purposes. As an aside, MacFeely notes the incongruity of these concerns beside the lack of concern regarding the emergence of a corporate or private-sector Big Brother. But there is ideology at play here: the neo-liberal agenda aims to minimise the role of the public sector. Landefeld warns that, even in the data sphere, there will be resistance by industry to expanded government oversight.
Thus, one can anticipate ideological arguments against accreditation, along the lines that it represents an expansion of the role of government. But as Reich [44: p.5] correctly points out, 'Government doesn't "intrude" on the "free market". It creates the market.' Polanyi [45: p.61] too notes the importance of the 'deus ex machina of state intervention' for the formation of markets. The UN or national governments must set the data standards to be used, whether defining post codes, tax numbers, personal identification numbers or statistical classifications – these are all part of a nation's data infrastructure. Even Hayek, the godfather of modern liberal economics, understood this, explaining that, in line with liberal principles, the State should exercise control of weights and measures.
In any event, challenging the establishment of national or global accreditation mechanisms on the grounds of such ideology is a specious argument. An accreditation system will facilitate wider participation of the private sector, academia, NGOs and civil society in the 2030 Agenda. It opens a doorway, for indicators that have traditionally been excluded from consideration, to compete for recognition as an official SDG indicator.
2.6 Consistent philosophy
The 2030 Agenda emerged from a globally inclusive, open and democratic process. In line with this philosophy, contributions to the compilation of SDG indicators could also be open and inclusive. To an extent they already are, in that anyone can propose indicators or comment on existing proposals. But to date, it has been envisaged that compilation will be the exclusive preserve of official statisticians (whether national or international). But what if the power and knowledge of unofficial data and unofficial statisticians could be harnessed? That indeed would be a data revolution.
The idea of an accreditation system is not inconsistent with the philosophy underlying the UN Fundamental Principles of Official Statistics (ibid). In particular principle 5 which states:
‘Data for statistical purposes may be drawn from all types of sources, be they statistical surveys or administrative records. Statistical agencies are to choose the source with regard to quality, timeliness, costs and the burden on respondents’.
In other words, statistical agencies should in principle use the widest variety of data sources possible to compile official statistics provided the quality of those data are sufficiently good and the costs are not prohibitive. Why not go one step further, and argue that statistical agencies should in principle use, not only the widest variety of data, but also the widest variety of statistics for the purposes of providing official statistics to feed the SDG GIF?
The idea is also broadly consistent with the spirit of the 2030 Agenda itself, which states 'Data and information from existing reporting mechanisms should be used where possible' [48: para. 48]. So, like the fundamental principles, the 2030 Agenda recognises the importance of reusing existing data and information from other official systems. Again, one could argue that what we are proposing is simply an extension or relaxation of this condition – in particular, a relaxation of 'existing reporting mechanisms'. The 2030 Agenda also noted that any 'global review will be primarily based on national official data sources' (our emphasis) [49: para 74a]. Thus, it was recognized from the start that the GIF might require data from outside national official sources. The document wisely did not set any conditions or limitations on what these sources might be.
This proposal is also consistent with the broad philosophy, or vision, put forward by the Independent Expert Advisory Group on a Data Revolution for Sustainable Development in their report 'A World That Counts'. In this report, they state: 'New institutions, new actors, new ideas and new partnerships are needed, and all have something to offer the data revolution. National statistical offices, the traditional guardians of public data for the public good, will remain central to the whole of government efforts to harness the data revolution for sustainable development. To fill this role, however, they will need to change … and strong collaboration between public institutions and the private sector' [49: p.9]. The report stresses the need to create incentives for private sector participation and comes tantalizingly close, a number of times, to proposing something quite radical, but it never quite does10 – it highlights the importance of sharing data but never statistics. In short, they advocate a vibrant 'global data ecosystem' [49: p.17] and an extended concept of statistical systems. We interpret (global or national) data ecosystems as something much broader than (global or national) statistical systems – see Figure 2.6.
The NSOs mapped in Figure 2.6 do not require any explanation. An NSS is the collection of statistical institutions or units within a country that collects, compiles and disseminates official statistics on behalf of national government. For the purposes of this argument, we understand a data ecosystem to be the amalgam of all data and statistical actors in a country, including official statistics and holders of public sector or administrative data, private and commercial sector data holdings and indicators, research data, and civil society and NGO data holdings. We acknowledge that, in an era of globalising data, imposing a distinction between a national and a global data ecosystem is perhaps somewhat archaic: the idea of constraining global digital data to a 'country', or that data will respect national borders, is anachronistic. Thus, data ecosystems may need to be international by default. The important point is that data ecosystems are much broader than official statistical systems.
Figure 2.6 – (National/Global) Statistical and Data Ecosystems
The official statistical system, whether national or international, should retain control of the process for standards and certification. Thereafter, there is no reason why NSOs or NSSs could not accredit unofficial statistics or indicators for the purposes of compiling SDG indicators. Furthermore, with the evolution of modern, globalised data sources, there is no reason why international organisations or the United Nations could not establish regional or global accreditation systems to facilitate the use of good quality unofficial statistics.
2.7 Lessons from history
Scientific discovery has always relied on amateur inventors and scientists. Many important contributions were made 'by men with minimal scientific education' [50: p.201]. John Harrison, a clockmaker, invented the famous H1 ship's chronometer used to estimate longitude; Michael Faraday discovered diamagnetism, electrolysis and electromagnetic induction; Gregor Mendel, a Czech Augustinian monk, pioneered experiments on the dominant and recessive qualities of genes in peas; William Herschel, an amateur astronomer, advanced the development of telescopes and discovered the planet Uranus; and Charles Darwin was the legendary amateur naturalist famous for his contribution to the theory of evolution. As Weinberger points out, the reason that amateurs such as these could make such important contributions and have them recognised was that there were bodies, such as the Royal Society, the Royal Astronomical Society and the Académie des Sciences, to test and validate their work.
There are lessons we can learn from this approach. Just as professional scientists did not have a monopoly on scientific wisdom in the past, official statisticians do not have a monopoly on information today. In fact, when it comes to mining new forms of digital data, official statisticians are for the most part far behind their unofficial counterparts. Today, many unofficial statistics are produced by a wide variety of compilers: journalists, researchers, social media outlets, civil society, academia, commercial enterprises, lobby groups and NGOs. The quality of these statistics varies enormously, from one end of the quality spectrum to the other. In many cases the quality is hard to determine, as the underlying data and methodologies are proprietary and shrouded in mystery. In others, the statistics are clearly of good quality and are accompanied by supporting metadata. It seems unwise, therefore, to tar all unofficial data with the same brush.
Is there a way to sift and sort this effort so as to harness it? Could NSOs (at country level) or the UN (at global level) provide a mechanism to test and validate unofficial statistics and accredit them for the purposes of the SDG GIF? That is the question posed in this paper. Without such a system, new statistics will emerge daily, leaving the public unclear as to their quality and utility. But by providing a quality assurance stamp, NSOs at country level and the UN at global level could say which statistics are 'facts'. The UN could become today what the Académie des Sciences was to the Victorian era in terms of validation. Winning such recognition might provide the necessary incentive for many compilers to become less proprietary with their data, methodologies and algorithms.
The demands made by the SDG GIF are colossal, with enormous implications for national statistical systems. In May 2018 only 40 per cent of the selected indicators for the SDG GIF could be populated. The costs of populating the GIF exceed existing funding, and it seems unlikely that funding will increase sufficiently to match requirements. Yet the global statistical system is expected to deliver a fully populated GIF to support the 2030 Agenda. Although these expectations are not realistic, failure to deliver could nevertheless result in significant reputational damage to that system, with far-reaching repercussions.
It is time for a data revolution. The Dubai Declaration, drafted at the conclusion of the 2018 UN World Data Forum, acknowledges 'that the data demands for the 2030 Agenda require urgent new solutions that leverage the power of new data sources and technologies through partnerships between national statistical authorities and the private sector, civil society, and the academia and other research institutions' [52: para.7]. We agree. It is time to consider new approaches to populating the SDG GIF. Experience from the MDGs tells us that by 2030 many of the SDG indicators will not be populated. Without considerable investment, most Tier II and III indicators are unlikely to become Tier I indicators. Few countries will be capable of producing the required country-level data for the foreseeable future. While it is very important that countries feel ownership of the SDG process, the insistence on prioritising country statistics may ultimately be self-defeating; the focus should be on the best available statistics. There is a risk that, in taking a rigid position on the source of statistics, countries are simply trying to hold back the tide. The data deluge will overcome them eventually.
Hence the proposal for a new approach. There will naturally be concerns that the proposal outlined in this paper may contribute to a wider corrosion of official statistics, multilateral systems and public goods. There may be fears that this is the thin end of a wedge that will ultimately allow greater privatization of official statistics. There will be concerns too regarding the quality of any ‘outsourced’ indicators, and even whether they have been compiled free of political or advocacy pressures. These are all valid concerns that must be addressed if an accreditation system is to be introduced. As already stated, this is not an argument for the privatisation or ‘uberfication’ of official statistics, nor is it an attempt to subvert NSOs or NSSs. Quite the contrary, the argument is that in order to protect official statistics and NSSs, those systems must evolve and adapt.
The approach proposed here is consistent with the open philosophy adopted during the consultation and negotiation phase of the 2030 Agenda. One could think of it as democratizing the SDG GIF, but in a controlled way with clear rules. It would harness the intellectual power of NGOs, civil society and the private sector, giving them an incentive to share their data. In a world of 'alternative facts' it might also allow NSOs and the UN to assert their mandate and protect their legitimate role as custodians of knowledge and protectors of deliberative public spaces.
The information environment is changing. Official statisticians must remain vigilant – complacency will create vulnerabilities. The proposal outlined here brings risks, but it may be necessary to open up and surrender a position of dominance or monopoly today in order to survive tomorrow. With every bold initiative there are risks. It is essential that such a system not be adopted blindly but carefully considered and, if adopted, that known risks be mitigated. As Diamond [53: p.433] points out, all 'decisions involve gambles, because one often can't be certain that clinging to core values will be fatal, or (conversely) that abandoning them will ensure survival.' For better or worse, the Tier II and Tier III indicators have created a vacuum, and if this vacuum is not filled by official statistics it will be exploited by someone else. In a rapidly changing and increasingly competitive data world, official statisticians must collaborate or perish. In doing so it may not be easy to decide which core values to discard and which to cling to. But given the experience with the MDG indicators, it is highly improbable that the majority of the SDG indicators will be populated by 2030. The question for official statisticians is whether it is time to try something different or to keep doing the same thing over and over, hoping for a different result – a practice reputedly defined by Einstein as insanity.
The proposal here is that official statistics switch from a purely production- or manufacturing-based model to a mixed business model: one combining the manufacture of official statistics with the franchising of production under license. One could think of this approach as a decentralized supply-chain model. This is not a wiki approach but rather a hub-and-spoke, or HQ–subsidiary, model. The proposal envisages the creation of a regulated marketplace, where compilers bid to populate SDG indicators. NSOs and the UN, as independent brokers of information, would be the quality controllers. The benefit of such an approach would be the enormous human and organizational capital that could be harnessed from all around the world. It would allow official statistics to tap into immense creativity and innovation, possibly accelerating change and reducing duplication, but in a controlled way. This approach positions NSOs and IOs as the guardians of public trust, the data stewards for the 21st century, safeguarding data and statistics as public goods.
This proposal is not a panacea. Myriad problems will remain and new ones will arise. But it may unleash the untapped productivity of a wider data ecosystem. It should be stressed that this proposal is specific to addressing gaps in the SDG GIF; consequently, its scope is limited to populating SDG indicators. With the necessary adjustments, it could be scaled and adapted to incorporate other statistical domains. In other words, the core element of this proposal, accreditation, could be applied universally to official statistics, but that discussion lies outside the scope of this paper.
 Gates, W. H. (1995). The Road Ahead. Penguin Books Ltd. London.
 United Nations Conference on Trade and Development (2016). Development and Globalization: Facts and Figures 2016. Available at: http://stats.unctad.org/Dgff2016/index.html [last accessed October 21, 2018].
 Committee for the Coordination of Statistical Activities (2013). Recommended Practices on the Use of Non-Official Sources in International Statistics. Available at: https://unstats.un.org/unsd/accsub-public/practices.pdf [last accessed May 31, 2018].
 Cervera J.L., P. Votta, D. Fazio, M. Scannapieco, R. Brennenraedts and T, van der Vorst (2014). Big Data in Official Statistics. Eurostat ESS Big Data Event Rome 2014 – Technical Event Report. Available at: https://ec.europa.eu/eurostat/cros/system/files/Big%20Data%20Event%202014%20-%20Technical%20Final%20Report%20-finalV01_0.pdf [last accessed January 18, 2018].
 Landefeld, S. (2014). Uses of Big Data for Official Statistics: Privacy, Incentives, Statistical Challenges, and Other Issues. Discussion paper presented at the United Nations Global Working Group on Big Data for Official Statistics, Beijing, China October 31, 2014. Available at: https://unstats.un.org/unsd/trade/events/2014/beijing/Steve%20Landefeld%20-%20Uses%20of%20Big%20Data%20for%20official%20statistics.pdf [last accessed January 18, 2018].
 MacFeely. S. (2016). The Continuing Evolution of Official Statistics: Some Challenges and Opportunities. Journal of Official Statistics, Vol. 32, No. 4, 2016, pp. 789–810.
 MacFeely, S. (2018). The Big (Data) Bang: What will it mean for compiling SDG indicators? UNCTAD Research Paper series, SER.RP_2018_5 No. 23.
 Hammer, C.L., D.C. Kostroch, G. Quiros (2017). Big Data: Potential, Challenges, and Statistical Implications. IMF Staff Discussion Note, September 2017 SDN/17/06. Available at: https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2017/09/13/Big-Data-Potential-Challenges-and-Statistical-Implications-45106 [last accessed October 21, 2018]
 Bordt, M. and Nia, A. B. (2018). SDG Implementation – what to do when it’s not clear what to do? ESCAP Stats Brief, Issue No. 16, August 2018. Available at: https://www.unescap.org/sites/default/files/Stats_Brief_Issue16_Aug2018_SDG_implementation.pdf [last accessed September 3, 2018].
 United Nations Global Working Group on Big Data (2017). Bogota Declaration. 4th Global Conference on Big Data for Official Statistics in Bogota, Colombia, November 8-10, 2017. Available at: https://unstats.un.org/unsd/bigdata/conferences/2017/Bogota%20declaration%20-%20Final%20version.pdf [last accessed September 5, 2018].
 Kituyi, M. (2016). Development and Globalization: Facts and Figures 2016. Available at: http://stats.unctad.org/Dgff2016/index.html [last accessed October 21, 2018].
 MacFeely, S. (2018). The 2030 Agenda: An Unprecedented Statistical Challenge. International Policy Analysis, Friedrich Ebert Stiftung. November 2018.
 United Nations (2015). The Millennium Development Goals Report 2015 – Summary. Available at: http://www.un.org/millenniumgoals/2015_MDG_Report/pdf/MDG%202015%20Summary%20web_english.pdf [last accessed October 26, 2017].
 Inter-Agency and Expert Group on Sustainable Development Goal Indicators (2018). Tier Classification for Global SDG Indicators. Available at: https://unstats.un.org/sdgs/iaeg-sdgs/ [last accessed June 25, 2018].
 Deen, T. (2016). UN targets trillions of dollars to implement sustainable development agenda. Available at: http://www.ipsnews.net/2015/08/u-ntargets-trillions-of-dollars-to-implement-sustainable-development-agenda/ [last accessed October 31, 2016].
 Thiaw, I. (2016). Environment and the Implementation of the SDGs and the 2030 Agenda: A Policy Perspective. IISD – SDG Knowledge Hub. March 29, 2016. Available at: http://sdg.iisd.org/commentary/guest-articles/environment-and-the-implementation-of-the-sdgs-and-the-2030-agenda-a-policy-perspective/ [last accessed October 26, 2018].
 Intergovernmental Committee of Experts on Sustainable Development Financing (2014). Report of the Intergovernmental Committee of Experts on Sustainable Development Financing. UN General Assembly A/69/315*, August 15, 2014. Available at: http://www.un.org/ga/search/view_doc.asp?symbol=A/69/315&Lang=E [last accessed October 26, 2018].
 United Nations (2003). Monterrey Consensus of the International Conference on Financing for Development. Available at: http://www.un.org/esa/ffd/monterrey/MonterreyConsensus.pdf [last accessed October 31, 2016].
 Runde, D. (2017). The Data Revolution in Developing Countries Has a Long Way to Go. Forbes, February 25, 2017. Available at: https://www.forbes.com/sites/danielrunde/2017/02/25/the-data-revolution-in-developing-countries-has-a-long-way-to-go/#620717201bfc [last accessed September 4, 2018].
 PARIS21 (2015). A road map for a country-led data revolution. Available at: http://www.oecd-ilibrary.org/docserver/download/4315051e.pdf?expires=1457406953&id=id&accname=guest&checksum=6B4747834B1E459F5E186E65EE1034B5 [last accessed January 11, 2017].
 PARIS21 (2018). Partner Report on Support to Statistics – PRESS 2018. Available at: http://www.paris21.org/sites/default/files/inline-files/PRESS2018_V3_PRINT_sans%20repres_OK_0.pdf [last accessed November 17, 2018].
 Verhulst, S. G. (2018). How to harness private data towards meeting the SDGs? The need for data stewards. Blog, July 3, 2018. Available at: https://undataforum.org/ [last accessed August 15, 2018].
 United Nations (2014). Fundamental Principles of Official Statistics. Resolution 68/261 adopted by the General Assembly on January 29, 2014. A/RES/68/261. Available at: https://unstats.un.org/unsd/dnss/gp/FP-New-E.pdf [last accessed October 21, 2018].
 Committee for the Coordination of Statistical Activities (2014). Principles Governing International Statistical Activities. Available at: https://unstats.un.org/unsd/accsub-public/principles_stat_activities.htm [last accessed October 21, 2018].
 Committee of the Chief Statisticians of the United Nations System (2018). United Nations Statistics Quality Assurance Framework. Available at: https://unstats.un.org/unsd/unsystem/documents/UNSQAF-2018.pdf [last accessed April 25, 2018].
 United Nations Economic Commission for Europe (2009). Common Metadata Framework – Part A – Statistical Metadata in a Corporate Context: A guide for managers. Available at: http://www.unece.org/fileadmin/DAM/stats/publications/CMF_PartA.pdf [last accessed October 24, 2018].
 European Statistical Governance Advisory Board (2017). Opinion of the European Statistical Governance Advisory Board (ESGAB), concerning professional statistical independence and staffing resources in the Hellenic Statistical Authority (ELSTAT). ESGAB/2017/10. Helsinki, May 11, 2017. Available at: http://ec.europa.eu/eurostat/documents/34693/344453/ESGAB+doc.++2017_10_ESGAB+Opinion+on+Greek+developments_11.05.2017.pdf/e684a082-caf4-4c27-a38f-ee9717a09772 [last accessed January 5, 2018].
 Greene, M. (2017). By convicting an honest statistician, Greece condemns itself. Politico, March 8, 2017. Available at: https://www.politico.eu/article/greece-andreas-georgiou-elstat-by-convicting-an-honest-statistician-greece-condemns-itself/ [last accessed January 5, 2018].
 Zimonjic, P. and Kupfer, M. (2016). Chief statistician resigns over government’s failure to ‘protect the independence’ of StatsCan. CBC News, September 16, 2016 (updated September 17, 2016). Available at: https://www.cbc.ca/news/politics/statscan-wayne-smith-resigns-1.3765765 [last accessed September 3, 2018].
 The Local (2017). Norway statistics bureau head leaves post over conflict. November 13, 2017. Available at: https://www.thelocal.no/20171113/norway-statistics-bureau-chief-resigns-over-conflict [last accessed January 5, 2018].
 World Bank (2018). World Bank Statement on Amendments to Tanzania’s 2015 Statistics Act. Statement, October 2, 2018. Available at: https://www.worldbank.org/en/news/statement/2018/10/02/world-bank-statement-on-amendments-to-tanzanias-2015-statistics-act [last accessed October 21, 2018].
 Reuters (2018). Tanzania law punishing critics of statistics “deeply concerning”: World Bank, October 3, 2018. Available at: https://af.reuters.com/article/idAFKCN1MD16H-OZATP [last accessed October 21, 2018].
 European Commission (2017). European Statistics Code of Practice – For the National Statistical Authorities and Eurostat (EU Statistical Authority). Adopted by the European Statistical System Committee. November 16, 2017. Available at: https://ec.europa.eu/eurostat/documents/4031688/8971242/KS-02-18-142-EN-N.pdf/e7f85f07-91db-4312-8118-f729c75878c7 [last accessed September 5, 2018].
 Luce, E. (2017). The Retreat of Western Liberalism. Little, Brown Books, London.
 Borgman, C.L. (2015). Big Data, Little Data, No Data – Scholarship in the Networked World. Cambridge, MA: MIT Press.
 Rudder, C. (2014). Dataclysm: What our online lives tell us about our offline selves. 4th Estate, London.
 Gitelman, L. (Ed.) (2013). ‘Raw Data’ Is an Oxymoron. MIT Press, Cambridge, MA.
 PARIS21 (2005). Models of Statistical Systems. Document Series No. 6. Available at: https://www.paris21.org/sites/default/files/2101.pdf [last accessed October 22, 2018].
 MacFeely, S. and N. Barnat (2017). Statistical capacity building for sustainable development: Developing the fundamental pillars necessary for modern national statistical systems. Statistical Journal of the IAOS, Vol.33, No. 4, pp.895–909.
 Salazar, N. (2011). Top 10 philanthropic foundations: A primer. DEVEX, August 1, 2011. Available at: https://www.devex.com/news/top-10-philanthropic-foundations-a-primer-75508 [last accessed January 4, 2018].
 Viergever, R. F. and T. C. C. Hendriks (2016). The 10 largest public and philanthropic funders of health research in the world: what they fund and how they distribute their funds. Health Research Policy and Systems, Vol. 14, No. 1, pp. 1–15.
 Noyes, K. (2015). Scott McNealy on privacy: You still don’t have any. PC World, IDG News Service, June 25, 2015. Available at: https://www.pcworld.com/article/2941052/scott-mcnealy-on-privacy-you-still-dont-have-any.html [last accessed January 29, 2018].
 Taplin, J. (2017). Move Fast and Break Things – How Facebook, Google and Amazon Cornered Culture and Undermined Democracy. Little, Brown and Company, New York.
 Reich, R. (2015). Saving Capitalism: For the Many, Not the Few. London: Icon Books Ltd.
 Polanyi, K. (1944). The Great Transformation – The Political and Economic Origins of Our Time. Beacon Press, Boston.
 MacFeely, S. and J. Dunne (2014). Joining up public service information: The rationale for a national data infrastructure. Administration, Vol. 61, No. 4, pp. 93–107.
 Hayek, F.A. (1944). The Road to Serfdom. The University of Chicago Press, Chicago.
 United Nations General Assembly (2015). Transforming our world: the 2030 Agenda for Sustainable Development. Resolution A/RES/70/1, adopted by the General Assembly on September 25, 2015 (issued October 21, 2015). Available at: http://www.un.org/ga/search/view_doc.asp?symbol=A/RES/70/1&Lang=E [last accessed October 22, 2018].
 Independent Expert Advisory Group on a Data Revolution for Sustainable Development (2014). A World That Counts: Mobilising the Data Revolution for Sustainable Development. Report prepared at the request of the United Nations Secretary-General. Available at: http://www.undatarevolution.org/report/ [last accessed October 22, 2018].
 Ferguson, N. (2011). Civilization – The West and the Rest. The Penguin Press, New York.
 Weinberger, D. (2014). Too Big to Know. Basic Books, New York.
 United Nations (2018). The Dubai Declaration: Supporting the Implementation of the Cape Town Global Action Plan for Sustainable Development Data. October 24, 2018. Available at: https://undataforum.org/WorldDataForum/wp-content/uploads/2018/10/Dubai-Declaration-on-2030-Agenda_Draft-22-October-2018.pdf [last accessed October 24, 2018].
 Diamond, J. (2005). Collapse – How societies choose to fail or succeed. Viking, Penguin Group, New York.
2 United Nations Conference on Trade and Development.
3 Authors’ own calculations based on OECD Development Assistance Committee Statistics (Table 1: Net Official Development Assistance), 2002–2014.
4 It is not clear why ‘all types of sources’ are so narrowly described as only including ‘survey or administrative records’ as clearly NSOs use ‘all’ types of data.
5 https://unstats.un.org/bigdata/inventory/ [last accessed April 27, 2018]. These numbers are a best estimate, as projects are not always well defined or explained in the inventory, and some entries appear to cover several projects or big data sources.
6 Membership of the CCSA comprises international and supranational organizations, whose mandate includes the provision of international official statistics in the context of the Principles Governing International Statistical Activities (ibid), and which have a permanent embedded statistical service in their organization and regular contacts with countries. At the inaugural meeting in 2003, there were 25 agencies. By 2017, the CCSA had expanded to 45 member agencies.
10 No doubt this was deliberate. But it is nevertheless ironic in a report discussing revolution.