Human Rights in the Digital Age: Challenging Issues on the UN Agenda

By Elena Marmo

Download UN Monitor #14 (pdf version).

The potential and challenges of the digital economy are emerging steadily on the UN agenda. The UN General Assembly’s Committee on Social, Humanitarian, and Cultural Issues (Third Committee) closed its 74th session in November 2019 adopting over 60 resolutions on a wide range of subjects, only one of which (A/C.3/74/L.11) addressed digital technologies.

The Committee heard presentations from a variety of Independent Experts and Special Rapporteurs, two of whom addressed in their reports the human rights implications of emerging digital technologies. The Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, focused his report on the digital welfare state. The Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, David Kaye, addressed online hate speech. Both pointed out that, despite their potential benefits, digital technologies carry considerable risks of entrenching inequalities and undermining rights. Together the reports also offer a cry for help from the human rights and UN treaty-body system, a system struggling amid funding crises despite its vital role in producing cross-cutting and transformative policy recommendations.

UN High Commissioner for Human Rights Michelle Bachelet echoed these concerns at a Third Committee side event on Human Rights in the Digital Age: “The digital revolution is a major global human rights issue. Its unquestionable benefits do not cancel out its unmistakable risks.”

Philip Alston’s Report on the Digital Welfare State

In presenting his 2019 report, Philip Alston highlighted major concerns with trends towards outsourcing the provision of welfare services to the private sector and technology companies. Governments previously utilized human resources to manage social protection and welfare benefit programmes; however, as he pointed out, in an effort to cut budgets, many governments are transitioning their welfare systems to digital platforms, with algorithms now making the decisions humans were once responsible for. Such systems, Alston emphasized in his statement, risk:

…becoming Trojan Horses for neoliberal hostility towards social protection and regulation. The digitization of welfare systems has very often been used to promote deep reductions in the overall welfare budget, a narrowing of the beneficiary pool, the elimination of some services, the introduction of demanding and intrusive forms of conditionality, the pursuit of behavioural modification goals, the imposition of stronger sanctions regimes, and a complete reversal of the traditional notion that the state should be accountable to the individual.

As Alston’s report notes, “the private sector is often the driving force for the adoption of these systems”. Their quick adoption without related regulation and legislation is coupled with “the reluctance of many governments to regulate the activities of technology companies and the strong resistance of those companies to taking any systematic account of human rights considerations”. This “leads to many large technology corporations operating in an almost human rights-free zone” and “is further exacerbated by the extent to which the private sector is taking a leading role in designing, constructing and even operating significant parts of the digital welfare state”. Further, as Alston points out, “many of the programmes used to promote the digital welfare state have been designed by the very same companies that are so deeply resistant to abiding by human rights standards” and those same companies are highly resistant to regulation. Such actions lead to deepening inequalities, undermining a social contract in which the state provides assistance to its constituents in need, underpinned by not just civil and political but also cultural, economic and social human rights.

Alston’s report focuses on the need to fundamentally change an understanding of technology as being independent and apolitical. As Alston notes: “Digital welfare state technologies are not the inevitable result of scientific progress, but instead reflect political choices made by humans. Assuming that technology reflects preordained or objectively rational and efficient outcomes risks abandoning human rights principles along with democratic decision-making.”

David Kaye’s Report on Online Hate Speech

David Kaye, presenting his report, highlighted the dangers of online hate speech, the need for regulation and the alarming role of the private sector in this process. His remarks and his full report parallel those of Alston in sounding the alarm regarding the rapid growth of technology amid slow progress in regulation and international legislation. In his address, Kaye outlined “the trend of continuing deterioration of the rights to freedom of expression” and the role of online hate speech. He argued that the term’s “weakness (‘it’s just speech’) also seems to inhibit governments and companies from addressing genuine harms such as the kind that incites violence or discrimination against the vulnerable or the silencing of the marginalized”.

A failure to call online hate speech by its true name (racism, sexism, homophobia, harassment, assault etc.) makes it difficult to establish the urgency to regulate it; “hate speech” remains a hollow and nebulous term. It is this intersection of online hate speech, freedom of expression, and inequality that presents an opportunity for human rights law to offer standards to govern state and private sector approaches to regulation. In his remarks, Kaye added:

Governments and the public have legitimate concerns about online hate but new laws that impose liability on companies are failing basic standards, increasing the power of those same private actors over public norms, and risk undermining free expression and public accountability. Companies likewise are not taking seriously their responsibilities to respect human rights. It is on their platforms where hateful content spreads, spurred on by a business model and algorithmic tools that value attention and virality. They have massive impact on human rights and yet all fail to articulate policies rooted in human rights law, as the UN Guiding Principles on Business and Human Rights call upon them to do.

Member State Responses to Digital Challenges

Member State responses to the issues raised by the Special Rapporteurs varied. Slovenia and the European Union acknowledged the value in alternatives to the linear economy, with the European Union (EU) noting, “all alternatives have potential if steered in the right direction…we need governments, regulatory framework to steer them in the right direction”. The EU did not enumerate what type of frameworks would be supported, and instead emphasized the importance of extending access to digital technologies rather than reevaluating the use of technologies as well as their intended and unintended consequences.

France also appeared to extol digital technologies, noting the role of Information and Communications Technologies (ICTs) to “facilitate access to services and benefits for rural and [persons with] disabilities”. However, Morocco raised concerns regarding the lack of regulation and problematic human rights practices of technology companies, as well as the potential for digital welfare technologies to “extend extra costs on taxpayers to access digital services”, rendering “poverty more acute”, while Eritrea observed that “overreliance on the private sector with conflicting priorities can be an impediment to eradicating poverty”.

In his replies to Member States, Alston expressed uncertainty about the EU’s track record in upholding human rights and also warned developing nations to take heed and not move at “break-neck speed” to establish complex and expensive digital welfare systems due to the lack of regulation and quickly progressing nature of the field. As Alston reported, what technology is new today might be outdated in a year or two, and could prove to be a drain on vital public funds. If developed nations establish investment in digital technologies as a precondition to Official Development Assistance (ODA) or otherwise, it could create a cycle in which development assistance results in developing countries spending public funds on expensive technologies and diverting funds from social services, as well as entrenching inequalities among nations.

In response to Kaye’s report, various EU Member States, along with China, Russia, Bahrain, the United States, Canada and Iceland, echoed support for combatting online hate speech, with varying interpretations and priorities. Many, including Canada, advocated addressing situations wherein “states use the allegation of hate speech to control voices counter to their perspective”. This interpretation of hate speech focuses specifically on state/government infringement upon the right to freedom of expression, without addressing the private sector responsibilities that Kaye raised in his remarks.

Lithuania asked, “How to encourage private entities to protect human rights in their work?” while Russia discussed “western control of media” and “denial of alternative news sources” and China highlighted the need to balance “respect citizen rights to expression, but also need for rule of law”. Replying to Member States, Kaye noted: “the question isn’t merely is a particular restriction consistent with the law of a particular state. That’s rule by law. It’s a question of whether those laws themselves are consistent with fundamental human rights standards. And so those standards in a rule of law society mean not only this freedom of expression to be guaranteed and even promoted by the state”.

Box 1: UN General Assembly Special Event: Questioning Digital Technologies

A special event during the 74th Session of the General Assembly titled “Emerging models of economic activities: Implications for sustainable development” highlighted challenges associated with digital technologies. Along with members of academia, civil society and the private sector, moderator Hamid Rashid, Chief of Development Research at the UN, highlighted technology’s role in deepening inequalities, known as the “digital divide”. Noting “bigger concerns of distributional consequences of a digital economy”, he asked: “What should/could policymakers do to better manage the digital economy to ensure it doesn’t worsen income inequality?”

Few of the panelists answered this question. Rather, most, including representatives of the European Union, Caribou Digital, and the University of Massachusetts, pointed to the benefits. Pointing to the difficulty policymakers face, panelist Alex Rosenblat, Research Lead at the Data & Society Research Institute, highlighted the specific case of the rideshare company Uber and its use of “tech-washing” to obfuscate its responsibilities as an employer of the nearly 1 million drivers using the app. Rosenblat stated, “Uber leveraged sharing economy rhetoric…[which] suggests the concerns we bring to traditional economy are no longer needed because what we are doing now is ‘sharing’”. This kind of tech-washing goes far beyond Uber and self-defined ‘sharing economy’ companies. Mohammad K. Koba, Deputy Permanent Representative of Indonesia to the United Nations, pointed to the urgency of UN action, saying that the General Assembly and wider UN are “considered to be a talk-shop only, no action, talking only”.

How will the UN tackle this moving forward?

As part of its 75th Anniversary, the UN has launched the UN75 initiative, the “biggest-ever global conversation”. The initiative calls upon Member States, members of civil society and the private sector to host their own “global dialogues” and offers a “toolkit for a forward-looking conversation to reimagine our future”. The proposed dialogues are given guiding questions: “What kind of future do we want to create? Are we on track? What action is needed to bridge the gap?”

The initiative has identified five key issues as foci for the dialogues: “The Impact of Digital Technologies” along with “the climate crisis, inequality, new forms of conflict and violence, and the rapid changes in demography”. At the multi-stakeholder “Leaders Summit” planned for 21 September 2020, it is intended that Member States will adopt by consensus a pre-negotiated political declaration. As yet the deliberations have not reflected the challenges and pitfalls of digital technologies, but rather have focused on the potential of such technologies. An elements paper to be shared with Member States in mid-April will frame the scope moving forward.

At a briefing on his 2020 Priorities, Secretary-General Guterres suggested ways to utilize the Internet Governance Forum, the High-level Panel on Digital Cooperation, the Open-Ended Working Group on information and telecommunications in the context of security, and the Group of Governmental Experts on advancing responsible behaviour in cyberspace. Of the existing architecture, only the Internet Governance Forum remains open and accessible to members of civil society. However, it produces no negotiated outcome and has no decision-making power.

The High-level Panel on Digital Cooperation was set up by the Secretary-General in 2018. It is jointly chaired by Jack Ma of Alibaba (an e-commerce platform) and Melinda Gates (Co-chair of the Bill and Melinda Gates Foundation), two proponents of market-based approaches, with the purpose of “lay[ing] the foundations of an inclusive digital economy and society for all”. The panel’s 2019 report, The Age of Digital Interdependence, provides a series of policy recommendations and states that “we need to focus our energies on policies and investments that will enable people to use technology to build better lives and a more peaceful, trusting world”.

The High-level Panel report offers a series of recommendations and identifies values to govern digital cooperation, including: “respect, human-centredness, human flourishing, transparency, collaboration, accessibility, sustainability and harmony”. These values, while commendable, do not reflect internationally agreed and legally binding human rights obligations, which hold the power to form a basis for accountability. As multilateralism continues to be considered “under threat,” and the UN75 initiative aims to “reimagine our future”, legally binding human rights become even more important as the basis for building multilateral responses to emerging and pressing issues.

Fabrizio Hochschild, Special Adviser to the Secretary-General on the UN 75th Anniversary (UN75) and on frontier technologies, highlighted the High-level Panel’s report at the 2019 Annual Internet Governance Forum. Hochschild noted the need to “apply shared values, principles, understandings and objectives for improved, global digital cooperation architecture”. This was reiterated in the Secretary-General’s address to the Group of Friends on Digital Technologies, a group led by Mexico and Singapore: “The Panel’s recommendations emphasise the need to close the digital gap, grow human and institutional capacity, recognize human rights in digital contexts, build cyber trust and security, and agree on a new global architecture for digital cooperation”. The Secretary-General also referenced the Multi-stakeholder Forum on Science, Technology and Innovation as a forum for advancing discussions, though due to the global coronavirus pandemic, the 2020 session has been cancelled, with the next meeting scheduled for 2021.

At a briefing on his 2020 Priorities, Secretary-General Guterres cited the “‘four horsemen’ that endanger 21st-century progress and imperil 21st-century possibilities”: epic geopolitical tensions, the climate crisis, global mistrust and the downsides of technology. He stated: “to address the dark side of digital world, we must steer technology for positive change”.

At the inaugural “global dialogue” as part of the UN75 initiative, the Secretary-General noted: “The United Nations is a tailor-made platform for governments, business, civil society and others to come together to formulate new protocols and norms, to define red-lines, and to build agile and flexible regulatory frameworks”.