25 November 2024

The widespread adoption of generative artificial intelligence (AI) platforms like ChatGPT and DALL-E following the COVID-19 pandemic has transformed many facets of our digital lives. Using natural language generation (NLG) and large language models (LLMs), generative AI has become an efficient productivity tool for creating a wide range of content, from articles and visuals to reports, videos and voiceovers, without explicit instruction. Much like seasoning a dish with salt, AI enhances productivity but requires careful control.

The emergence of generative AI has undoubtedly increased output, effectiveness and creativity. It also carries significant hazards, however, especially regarding information integrity and human rights, as AI systems are increasingly incorporated into digital platforms.

The risks to information integrity posed by generative AI

Realistic AI-generated or AI-mediated content can be highly believable, hard to detect and quick to spread. When such content conveys false or misleading information, it can deepen trust deficits.

AI tools are widely used not only because of their higher-quality output but also because of their easier-to-use interfaces and increased accessibility. These factors can yield both favourable and unfavourable results. On the positive side, developments in AI provide greater efficiency, convenience and a certain degree of democratization. On the other hand, it is important to understand that LLMs are not intended to communicate the truth. Instead, without guaranteeing accuracy or factual grounding, they produce plausible claims based on patterns in their training data. Because many users are unaware of this limitation, LLM outputs can mix accurate and fabricated information, and readers treat both identically, which compromises information integrity.
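To see why plausibility is not truth, consider a minimal, purely illustrative sketch of next-token sampling. Nothing here reflects any real model: the prompt, candidate answers and probabilities are invented, and real LLMs operate over vastly larger vocabularies. The point is structural: the model samples by learned frequency, so a widespread misconception can outrank the correct answer.

```python
import random

# Toy "learned" distribution: continuations ranked purely by how often
# they appear in (hypothetical) training text, not by whether they are true.
next_token_probs = {
    "The capital of Australia is": {
        "Sydney": 0.55,     # common misconception, frequent in text
        "Canberra": 0.40,   # correct, but written less often
        "Melbourne": 0.05,
    }
}

def sample_continuation(prompt: str) -> str:
    """Sample a continuation weighted by learned frequency alone."""
    dist = next_token_probs[prompt]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of Australia is"
print(prompt, sample_continuation(prompt))
# The output is fluent either way; likelihood, not accuracy, drove the choice.
```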


Moreover, digital platforms and generative AI are a potent combination for disseminating false and misleading information, as the algorithms on most digital platforms prioritize content that drives user interaction over content that is accurate. The weaponization of AI-generated material to distort facts could therefore accelerate the spread of false information. Although fact-checking technologies have advanced, digital platforms and AI algorithms still lack reliable mechanisms to verify the legitimacy of material consistently. Furthermore, different platforms and jurisdictions apply differing moderation standards and procedures, which means that even when inaccurate material is found, it may take hours or days to rectify.
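The incentive problem described above can be made concrete with a toy ranking function. This is a hypothetical sketch, not any platform's actual algorithm: the posts, weights and scoring rule are invented. What it shows is that if accuracy never enters the score, a viral falsehood outranks a careful report.

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    comments: int
    is_accurate: bool  # known to fact-checkers, invisible to the ranker

def feed_score(post: Post) -> float:
    """Hypothetical engagement-only ranking: accuracy never enters the score."""
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

posts = [
    Post(likes=120, shares=5, comments=10, is_accurate=True),    # careful report
    Post(likes=300, shares=90, comments=80, is_accurate=False),  # viral falsehood
]
for p in sorted(posts, key=feed_score, reverse=True):
    label = "accurate" if p.is_accurate else "misleading"
    print(f"score={feed_score(p):.0f} ({label})")
# The misleading post ranks first: the score rewards interaction, not truth.
```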

If the process of fact-checking digital material continues to function in this manner, the spread of mis- and disinformation could increase, raising further concerns regarding authenticity, bias, privacy of information, and more. Dissemination of such information also endangers democratic institutions and fundamental human rights.

You need to be human to understand human rights

Article 19 of the Universal Declaration of Human Rights states that "everyone has the right to freedom of opinion and expression". But such freedom must not be abused or twisted to do harm to other people or communities. Unfortunately, AI-generated content has increasingly been used to spread hate speech, xenophobia and discriminatory rhetoric, targeting vulnerable populations such as ethnic minorities.

A 2019 study by Deeptrace, a cybersecurity company based in Amsterdam, revealed that 96 per cent of deepfake videos online were non-consensual pornography, overwhelmingly targeting women. Additionally, such synthetic content has been used to deceive victims, particularly targeting women and children to lure them into exploitation.

There are also human rights challenges from the technological side. AI learning algorithms rely primarily on vast amounts of data collected from extensive digital and social media platforms, which is uploaded by humans or generated through human interactions. Given the sheer volume of data amassed, it is nearly impossible for humans to review or even skim through every piece of information thoroughly. AI models use this data to identify trends, generate forecasts and create synthetic content, but they cannot independently verify the accuracy of that data or judge its ethical implications. This lack of sophistication in AI models could lead to the dissemination of harmful biases and outright misleading information that could violate human rights.


Another potential source of human rights violations is the working environment of the personnel responsible for maintaining and training AI systems. They are often underpaid, exposed to disturbing content and working under conditions that can lead to psychological distress, an issue that has recently begun to attract attention. In 2023, more than 150 workers involved in the AI systems of Facebook, TikTok and ChatGPT gathered in Nairobi and pledged to establish the first African Content Moderators Union.

Society often follows trends without questioning them, allowing essential values like human rights to be overshadowed. We could be especially vulnerable in the case of AI as it becomes ever more deeply embedded in our daily lives. This rapid expansion of generative AI platforms brings ethical challenges to the forefront, particularly regarding human rights. Right now, weak policies leave people's privacy and autonomy vulnerable, allowing tech companies to exploit digital spaces for profit.

As society becomes more reliant on digital platforms and services, addressing these concerns, particularly those related to human rights, becomes increasingly urgent.

We are humans after all

The road ahead is indeed long and difficult, and as AI continues to evolve, a responsible approach is critical: one that aligns technological innovation with the preservation of truth, dignity and human rights. A multifaceted approach is essential to mitigate the risks posed by generative AI.


From a technical standpoint, to make AI content accurate and reliable, developers should start with thorough testing to catch biases, errors and vulnerabilities at the development stage. Transparency about data, algorithms and decision-making is essential to build trust and to address the impact of AI on information integrity, and the use of diverse datasets helps avoid harmful biases and leads to more balanced content. As recommended in the United Nations Global Principles for Information Integrity, technology companies should empower users to provide input and feedback on all aspects of trust and safety, privacy policy and data use, recognizing user privacy rights. User choice and control should be improved, along with compatibility with a range of services from varied providers.
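As a purely illustrative sketch of what such development-stage testing might involve, the snippet below runs a disaggregated evaluation: it compares a model's accuracy across demographic groups and flags large gaps for review. The test data, group names and tolerance threshold are invented assumptions for illustration, not a prescribed standard.

```python
from collections import defaultdict

# Hypothetical per-example results: (demographic_group, model_was_correct)
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok  # True counts as 1

rates = {g: correct[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(f"accuracy by group: {rates}, gap: {gap:.2f}")

if gap > 0.10:  # illustrative tolerance, not an established benchmark
    print("Flag for review: the model performs unevenly across groups.")
```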

Regarding human rights, legal authorities and human rights groups ought to actively support the development and use of AI in a manner that respects individual privacy and dignity. Stronger data protection regulations should be enforced as part of this advocacy, in order to shield people from intrusive data collection and to prevent AI systems from using personal data without permission. Human rights watchdogs must incorporate AI ethics into existing human rights frameworks while ensuring that AI applications respect individual rights, such as privacy, freedom of expression and the right to non-discrimination. Special protections are crucial for vulnerable groups, including women, children, older adults, persons with disabilities, indigenous peoples, refugees, LGBTIQ+ individuals, and ethnic or religious minorities.

The owners of digital platforms should be held accountable for content that is shared, particularly in the context of AI-amplified disinformation. More advanced content monitoring systems that can quickly identify and remove or label AI-generated misinformation should be developed. Digital platforms should also be more transparent about their algorithms and data collection practices so that people understand how content is chosen and promoted.

Together, we can ensure that generative AI is used appropriately going forward, and that its benefits are achieved without endangering information integrity and human rights. The call for responsible AI practices is not just an option but a necessity to guarantee a just and equitable digital future.


The UN Chronicle is not an official record. It is privileged to host senior United Nations officials as well as distinguished contributors from outside the United Nations system whose views are not necessarily those of the United Nations. Similarly, the boundaries and names shown, and the designations used, in maps or articles do not necessarily imply endorsement or acceptance by the United Nations.