The session “Ethical AI Advocacy for Civil Society in MENA” was held as part of Bread&Net 2022 in Beirut, from 15 to 17 November. We hosted Qusai Suwan, Policy Officer at the Jordan Open Source Association, to discuss how civil society in the MENA region can advocate for rights-based AI policies. Suwan highlights the basics of AI, its potential threats, and how to approach AI tools from a human rights perspective.
The session presents two guides, created in collaboration with ICNL, designed to engage people in non-technical conversations about AI ethics and, at a later stage, AI regulation.
What is AI?
AI has many definitions, and they have changed over time. The “intelligence” in artificial intelligence is not a fixed notion, and in practice the terms machine learning and AI are widely treated as interchangeable.
Examples of what is not considered AI
- A computer playing chess based on predefined rules
- A chatbot that follows a fixed script (not all chatbots are AI)
- Auto-braking systems in cars based on distance
What is considered AI
- Recommending videos based on your watching history
- Self-driving cars
- A computer playing chess after learning from human players’ game history
- A chatbot that offers helpful advice after learning patterns of human communication
Generally, learning is what marks a system as AI; if something looks intelligent but cannot improve or be trained over time, we usually don’t consider it AI.
Examples of types of AI
- Supervised (guided): an algorithm is trained on pairs of input and output data so that it can infer the relationship between them.
- Unsupervised (not guided): an example is YouTube recommending related videos while you watch. The system clusters similar videos into groups without being given labeled examples.
- Reinforcement learning: the AI learns by interacting with an environment, which can be real or simulated.
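To make the supervised case concrete, here is a minimal sketch in plain Python. The “training data” (hours studied mapped to exam scores) is invented for illustration; the algorithm infers the relationship between inputs and outputs by fitting a straight line:

```python
# Supervised learning in miniature: given input/output pairs,
# fit a line y = w*x + b by least squares, then predict a new case.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope and intercept that minimize squared error on the training pairs.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# Hypothetical training data: hours studied -> exam score.
xs = [1, 2, 3, 4, 5]
ys = [52, 55, 61, 64, 70]

w, b = fit_line(xs, ys)
print(round(w * 6 + b))  # predicted score for 6 hours of study -> 74
```

Real systems use far more data and far more flexible models, but the principle is the same: the relationship is learned from examples rather than hand-coded as rules, which is exactly what separates the chess engine that learns from game histories from the one that follows predefined rules.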
Artificial neural networks are loosely modeled on the human brain, which contains billions of interconnected neurons whose collective operation we do not fully understand. In AI, elementary mathematical functions are connected in highly complex ways. We understand how each individual piece works, but not precisely why the whole produces the results it does; even experts often cannot pinpoint the specific reason for an output.
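As an illustration of “elementary functions connected in a complex way,” here is a toy forward pass through a tiny neural network. The weights are invented for demonstration; a real network would learn them from data:

```python
import math

def sigmoid(z):
    # An elementary function: squashes any number into the range (0, 1).
    return 1 / (1 + math.exp(-z))

def neuron(inputs, weights, bias):
    # One artificial "neuron": a weighted sum followed by a nonlinearity.
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def tiny_network(x1, x2):
    # Two hidden neurons feed one output neuron (weights are invented).
    h1 = neuron([x1, x2], [0.5, -0.2], 0.1)
    h2 = neuron([x1, x2], [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.3)

print(tiny_network(1.0, 0.5))
```

Each neuron is simple arithmetic, yet with millions of them stacked in layers the combined behavior becomes opaque, which is why even a network's builders can struggle to explain a specific output.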
- Imitating and perpetuating human biases and prejudices
For example, Amazon used AI to screen CVs. The tool ended up discriminating against women because the algorithm, trained on historical hiring data, learned to treat being a woman as a predictor of not being a good programmer or employee, so it excluded women.
Also, in 2021, Instagram associated the hashtag #AlAqsa (the mosque) with terrorist organizations, following United States designation lists, so much of the content using it was screened and taken down by AI.
- Creating and spreading false information and content
Here the focus is on AI’s ability to generate stories and news from scratch. Such content can target specific topics and be guided by humans: when someone decides to run a campaign pushing particular opinions on an issue, AI can generate content plausible enough that most readers will not suspect a machine produced it. The threat is the manipulation of public opinion and human consciousness.
Deepfakes also fall into this category; they can be used for sexual abuse, non-consensual pornography, and misleading the public.
- Enabling mass surveillance of communities
AI enables detailed profiling of individuals, such as identifying people through facial recognition. Research has shown that we leave digital signatures in many ways: how we walk, how we move a mouse, how we type. Profiling also extends to our preferences and habits.
For example, social scoring in China relies heavily on automation. If someone throws trash from a train or otherwise breaks the law, their score decreases. Every move is monitored, and the score can affect many aspects of people’s lives: those with high scores can travel on special trains, for example, unlike those with lower scores.
- Exacerbating unemployment and deepening income inequality
Automated systems, such as self-driving cars, can quickly replace humans in many jobs and activities. We expect many jobs to become obsolete soon because machines are more capable and efficient at them. As it becomes possible to produce goods and services with fewer human beings, the need for employees diminishes.