Governments and the private sector in West Asia and North Africa (WANA) are calling for the use of AI to strengthen their economies and streamline services. Frameworks for using AI vary by country, but overall there is either no regulation or merely soft, non-binding principles.
Almost every Arabic-speaking country in WANA has created an AI strategy, but these strategies give very little thought to regulation. Instead, most emphasize the concern that regulation could stifle innovation. This has been a common argument in AI regulation debates around the world and has led to dangerous uses of data and harm to individuals.
In general, most countries in the region are in the preliminary phase of AI use and hope to encourage adoption with the primary goal of stimulating the economy. AI is a huge focus in the Gulf region, largely in the business sector rather than the public sector, although national strategies aim to deploy it in almost every sector. The main reason for investing in AI is to diversify away from oil-dependent economies. The UAE is the regional leader in implementation, including public sector use. Public sector use is generally the most concerning, as it often involves very vulnerable populations and essential services.
Similar to issues seen with other technologies, a huge problem is the use of data and the lack of effective privacy protections. Data is at the crux of AI. Assessing these programs’ privacy protections becomes more difficult because it can be hard to see how data is being used. Accountability is also harder to determine with AI: liability for decisions needs to be stated clearly in any framework, and this generally goes unmentioned.
In terms of engagement, focusing on data may be the easiest way to enter these conversations. The lack of specific, clear regulation for AI is concerning, as it gives companies and governments almost free rein to implement these tools however they choose. Lastly, a main area to watch in this region is how AI can amplify state power in general, particularly in policing and surveillance.
What do we mean when we say AI?
AI is an umbrella term that incorporates many different types of processes. Broadly, it means the use of computers to mimic human intelligence or decision-making. AI is used to simplify processes so that less human labor is needed. This can be as simple as using an algorithm to process data and produce an output, or as complicated as machine learning, where the system does not need explicit instructions to reach conclusions. Below is a brief rundown of some general AI terms.
Machine learning is a process that aims to use data to mimic human learning. An algorithm is trained to classify data and predict insights, usually under human supervision. Deep learning is a subset of machine learning that requires very minimal human intervention.
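As a simplified illustration of the train-then-predict pattern described above, the sketch below implements a minimal nearest-neighbor classifier in plain Python. All names and data points are hypothetical, invented for illustration, and are not drawn from any system discussed in this report:

```python
# A minimal supervised-learning sketch: a 1-nearest-neighbor classifier.
# "Training" here is simply storing labeled examples; prediction assigns
# each new point the label of its closest training example.

def train(examples):
    """Store (features, label) pairs; this list is the whole 'model'."""
    return list(examples)

def predict(model, point):
    """Return the label of the training example nearest to `point`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# Hypothetical training data: (hours of study, attendance %) -> outcome.
model = train([
    ((2, 40), "at risk"),
    ((3, 50), "at risk"),
    ((8, 90), "on track"),
    ((9, 95), "on track"),
])

print(predict(model, (2.5, 45)))   # nearest to the "at risk" cluster
print(predict(model, (8.5, 92)))   # nearest to the "on track" cluster
```

The key point for regulation is that the model's behavior is entirely determined by the training data it was given, which is why biased or unrepresentative data produces biased predictions.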
Data Mining sorts through large datasets to determine hidden patterns and relationships. It usually uses raw data and machine learning algorithms to reach its conclusions.
Statistical / data analysis examines datasets to find trends and make sense of data; it usually uses less data than data mining. It is often used to build the risk scores behind prediction technologies, such as those used in policing.
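To make the idea of a risk score concrete, here is a deliberately simplified sketch in which a score is a weighted sum of input variables compared against a threshold. The variables, weights, and threshold are all hypothetical, but the structure shows why the choice of inputs matters so much:

```python
# A hypothetical risk-score sketch: a weighted sum of normalized inputs
# compared against a threshold. Real systems are more complex, but the
# core concern is the same: the chosen inputs and weights encode
# assumptions that can bake in bias (e.g., prior police contact).

WEIGHTS = {
    "prior_arrests": 0.5,      # heavily weighting past contact with police
    "neighborhood_rate": 0.3,  # area-level statistics, a proxy for place
    "age_factor": 0.2,
}
THRESHOLD = 0.45

def risk_score(person):
    """Weighted sum of input variables, each normalized to 0-1."""
    return sum(WEIGHTS[k] * person[k] for k in WEIGHTS)

def flag(person):
    return risk_score(person) >= THRESHOLD

# Two hypothetical individuals who differ only in where they live:
a = {"prior_arrests": 0.2, "neighborhood_rate": 0.9, "age_factor": 0.5}
b = {"prior_arrests": 0.2, "neighborhood_rate": 0.1, "age_factor": 0.5}

print(flag(a))  # the neighborhood variable alone tips the decision
print(flag(b))
```

In this toy example, two otherwise identical people are treated differently purely because of an area-level statistic, which is exactly the dynamic that makes place-based risk scoring a proxy for policing already-marginalized communities.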
Facial recognition technology is an advanced form of biometric authentication. It usually uses AI algorithms and machine learning to detect human faces. Once a face is captured, the system compares it against datasets to confirm the person’s identity.
Automated decision-making refers to decisions being made without human intervention, whether through a known algorithm or machine learning, using data to reach a decision. It can be used to determine whether someone should be awarded a loan or cash assistance, for example, but can be applied in any field.
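A minimal sketch of a rule-based automated decision, using entirely hypothetical criteria for something like a loan application; the point is that once the rule is encoded, it runs with no human in the loop:

```python
# A hypothetical rule-based automated decision: the function below
# approves or rejects an application with no human review. Every
# criterion here is invented for illustration only.

def decide_loan(application):
    """Return 'approved' or 'rejected' based on fixed rules."""
    if application["monthly_income"] < 2000:
        return "rejected"
    if application["existing_debt"] / application["monthly_income"] > 0.4:
        return "rejected"
    return "approved"

print(decide_loan({"monthly_income": 3000, "existing_debt": 600}))
print(decide_loan({"monthly_income": 1500, "existing_debt": 0}))
```

An applicant rejected by such a system may never learn which rule triggered the rejection, which is precisely the transparency and accountability gap that the frameworks discussed in this report leave open.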
As mentioned, Gulf countries are at the forefront of investing and pushing toward AI to diversify their oil-dependent economies.
Bahrain and Kuwait
Bahrain has stated that it recognizes the “importance of AI to streamline processes and inform strategic decision-making of businesses across all industries and has taken steps in the study and application of AI in different fields.” As such, Bahrain has indicated the need to invest in modern technology in its 2030 vision and has shown interest in smart-city technologies. Bahrain has also called for using AI in the judiciary.
Kuwait has similarly pressed for AI innovation and so far has been using some AI in healthcare to help doctors detect early signs of diseases. In both countries, there is no specific AI legislation. Other laws, such as data protection laws, would likely govern AI use.
The UAE first launched its strategy in 2017, setting up an AI ministry to create a long-term strategy. It has created the UAE Council for Artificial Intelligence, which is focused on using AI in education, healthcare, COVID-19 prevention, and aviation, and on promoting AI startups and AI talent. The UAE firmly aims to become a destination for AI and AI innovation, as outlined in its National Strategy for Artificial Intelligence 2031.
The UAE has stated that it wants to ensure a “legal environment to support the adoption of AI.” They state that they are working with global partners to ensure ethical AI and that the “UAE Artificial Intelligence and Blockchain Council will add to its remit to review national approaches to issues such as data management, ethics, and cybersecurity. They will also review the latest international best practices in legislation and global risks from AI. Furthermore, the Council will ultimately oversee the implementation of the AI Strategy in the UAE.”
The UAE has no specific legislation regarding AI, but it does have non-binding guidelines such as Dubai’s “Ethical AI Toolkit.” Existing legislation that AI would fall under deals with privacy-related issues: the Federal Data Protection Law, the DIFC Data Protection Law, the Health Data Law, the penal code, the consumer protection law, and the UAE civil transactions laws. Although liability for a defective product usually lies with its creator, causation can be harder to prove with AI. Under Article 316 of the civil code, a person who has control over an object is liable for the harm it causes. In the example of a self-driving car, the system may be responsible as well as the person driving, and liability could possibly be shared.
Moreover, with little regulation in place, it is hard to know what would make an AI product defective unless the case is relatively clear, as with autonomous vehicles. Even this can be confusing: if a vehicle swerved to avoid hitting someone and hit someone else, it is hard to determine who should be liable. The lack of a clear framework is a regulatory gap that should be closed before implementing this technology.
In aviation, they have set up automated robots to detect the faces of criminals in airports. This “robocop” concept has been used to aid police as well. The UAE is partnering with SAS for predictive policing software that will use data analytics and have “artificial intelligence capabilities.” Abu Dhabi has an extensive network of cameras that performs location-based predictive policing. It uses machine learning and historical data to determine where a crime is most likely to occur.
Predictive policing based on historical data like this is problematic, as it often targets low-income or immigrant areas, which are the communities historically policed most heavily. This can increase police presence in the lives of the already marginalized. Moreover, when using these types of tools, a clear accountability framework must be established. If police are acting on AI recommendations, who is accountable if the action leads to unfair detention? Does this use amplify the state’s power and allow it to infringe on people’s privacy? Furthermore, there is the question of what control the vendor has over the data the software processes. What kinds of predictions the software will make is still undetermined, which has vast implications. If it produces risk assessments of certain individuals, the question of how that risk is assessed must be addressed; the same applies to location-based prediction.
During the pandemic lockdowns, the UAE used the program “Ooyon,” which monitors residents’ permits to leave their homes through voice, face, and license plate recognition. In education, machine learning algorithms are being used to predict which students are at risk of dropping out, as well as the employability of graduates. The model uses variables such as “socioeconomic background, behavioral issues, scores, attendance, among other data points, to make predictions.” This type of prediction can harm those from traditionally “underperforming” backgrounds and may cause unnecessary prejudice. Banks are also using AI for customer service support and fraud detection.
The UAE has also developed a strategic partnership with China that includes AI. The Chinese firm SenseTime has established a research and development hub in Abu Dhabi. SenseTime is a highly controversial tech company responsible for developing a facial recognition tool used in China to identify ethnic Uyghurs, even when they are wearing sunglasses or hats.
The Qatar AI strategy emphasizes the importance of data accessibility. They call for Qatar to develop data governance and guidelines that facilitate “broad access to and sharing data consistent with the recently released Qatar Data Privacy Laws.” In terms of ethics, they acknowledge the biases that algorithms carry with them and recommend “guidelines for the level of explainability and interpretability required for different types of decisions made by AI algorithms.” The strategy is built on six pillars: education, data access, employment, business, research, and ethics. The use of AI during the 2022 FIFA World Cup in Qatar was a test run for the use of these kinds of surveillance technologies.
In 2019, the government of Saudi Arabia created the Saudi Data and AI Authority (SDAIA), which is in charge of setting up regulations. It decided to postpone full enforcement of the Saudi Personal Data Protection Law until March 17, 2023. The focus on AI is part of the effort to diversify the economy and drive economic growth. Like other Gulf countries, KSA plans to use AI in the public sector: in smart cities, healthcare, energy, education, and finance. Saudi Arabia’s National AI Capability Program was developed with the help of Huawei, and the country is infamously developing Neom, a smart city that will incorporate many AI services.
Oman does not have firm regulations in place but has released policy guidelines for the use of AI innovation in the public sector. The policy has six main principles: inclusiveness, empathy, accountability, equity, transparency, and security. Oman is developing its first smart city, Madinat Al Irfan, which promises to improve the lives of its citizens. Smart cities, especially in regions without a strong regulatory and data protection framework, allow countries to increase surveillance of their citizens. Oman has stated that it intends to use biometric security check technology, and it has also said it intends to turn Duqm and Ras el Hamra, a community in Muscat, into smart cities.
There is little use of AI in this region, apart from Israel, but similar to the Gulf, Jordan is hoping to move in this direction.
Incorporating AI into Israel’s national security strategy is extremely dangerous for Palestinians. The government hopes to implement AI in the public sector, but how this may affect Palestinians remains unknown. Israel intends to develop an AI framework, but these soft regulations and ethical principles are not binding.
The government announced its strategy for the armed forces in 2022 at The Blavatnik Interdisciplinary Cyber Research Center and Tel Aviv Center for AI and Data Science at Tel Aviv University. Allegedly, AI played a key role in the May 2021 conflict. The IDF is incorporating more AI tools, such as the Spice Bombs, with automatic target recognition.
Further examples include AI-powered guns that require only that the soldier hit a button. The creator of the technology argues that this protects soldiers and civilians by being more accurate and hitting only the “terrorist.” Currently, these tools fire only tear gas, stun grenades, and sponge-tipped bullets. Technology like this makes it easier to skirt blame, as soldiers are simply following the system, and it makes Israel’s dominance and oppression even more powerful.
Israel has already started using these tools to increase their surveillance mechanisms further. Facial recognition cameras, sensors, and tools have been installed in the West Bank to monitor Palestinians. Facial recognition cameras make it so that even if a person does not have their ID, they can be identified. The Blue Wolf app also works to capture images of Palestinians and match them to a database of images referred to as the “Facebook for Palestinians.” The app then flashes yellow, red, or green to inform the soldier whether the person should be detained, arrested, or left alone. Israel has also been found to use predictive policing technologies such as data analytics to monitor Palestinian Facebook accounts. The system searches for photos of Palestinians killed or imprisoned by Israeli forces to identify who may be suspicious.
Jordan published an Artificial Intelligence policy in 2020, but it is primarily a document that aims to encourage the development and use of these technologies. In terms of governance, Jordan has formed a committee, the National Ministerial Committee for Artificial Intelligence, to implement the policy. The committee is tasked with developing AI ethics, but the policy also stresses the fear of stifling innovation through regulation. Jordan has recently released its AI roadmap for 2023-2027, a continuation of the 2020 policy.
North African countries are in the preliminary stages of AI adoption, wanting to benefit from AI and encouraging more investment and growth in the sector. In terms of engagement, this is a stage where monitoring the uptake of these technologies would be useful. The lack of regulatory frameworks, and of meaningful action behind statements that these technologies must be democratic, accountable, and transparent, is problematic. As with other technologies, a huge concern is how the data used to run all these programs is being used and stored.
Algeria has developed a national strategy for research and innovation in artificial intelligence, but this is still very much at the stage of simply wanting to spend more time and resources on these technologies. Tunisia is making headway with startups that use AI to serve various goals: one was created to detect high-risk COVID variants, for example, and another created smart sensors to help in agriculture and farming. One possibly risky example is “Unfrauded,” which helps detect car insurance fraud; the worry is whether there is a human verifier in case of false positives, and what the consequences are if a claim is deemed fraudulent. Egypt is also in the process of developing its AI framework. It is interested in using AI in its healthcare, education, and agriculture sectors, but there is no real mention of what it plans to do other than “use machine learning.” There are plans to use AI to make healthcare and education more accessible, but this raises questions about internet and data literacy in remote areas, as well as concerns about data privacy and protection, given the sensitivity of health data.
No country in the region has developed a proper AI framework, and any moves towards one show deference to developers’ desire to innovate without restrictions. Although these countries do have data protection laws, the issue is that if the state is the one requesting the data, it becomes much easier to bypass data protection regulations, as these laws usually contain stipulations that allow government access to such data. For example, if the data becomes necessary for a contract, it can then be processed. The partnerships that Saudi Arabia and the UAE have made with various Chinese organizations are a concern, as it is unclear whether data sharing is occurring, and they signal a move towards developing technologies to monitor populations extensively. The UAE is using AI in multiple sectors in ways that can have a negative impact on people’s liberties. The use of predictive policing and cameras is especially concerning, as this technology could be used to harm LGBT+ people as well as activists.
Healthcare seems to be the main focus for implementing AI in this region. As of now, in hospitals at least, it is mostly used to speed up processes and to analyze in more detail than doctors can (for example, with MRI scans), rather than as an autonomous device unsupervised by human experts.
With healthcare, the main focus is the security of the data and who has access to it. In government hospitals, can the state look at the data beyond what diagnosis requires? A significant concern is that if the technology develops and starts making recommendations to doctors, this can lead to compliance and automation bias, which can significantly impact people’s health. The false positive and false negative rates must be published, and doctors must be trained to at least second-guess the technology.
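As a concrete illustration of the rates mentioned above, the sketch below computes false positive and false negative rates from a hypothetical confusion matrix (all counts are invented for illustration). Publishing figures like these is what would allow doctors to calibrate how far to trust a diagnostic tool:

```python
# Computing false positive and false negative rates for a hypothetical
# diagnostic tool, from invented evaluation counts.

def error_rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate)."""
    fpr = fp / (fp + tn)  # healthy patients wrongly flagged as sick
    fnr = fn / (fn + tp)  # sick patients the tool missed
    return fpr, fnr

# Hypothetical evaluation of 1000 screened patients:
fpr, fnr = error_rates(tp=90, fp=45, tn=855, fn=10)
print(f"False positive rate: {fpr:.1%}")  # share of healthy flagged as sick
print(f"False negative rate: {fnr:.1%}")  # share of sick cases missed
```

Even a tool with seemingly small error rates produces many wrong calls at scale, which is why doctors need these numbers, and the training to question the tool, before relying on its recommendations.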
Smart cities are being developed in several Gulf countries, and their development raises many issues. Firstly, they are highly susceptible to being hacked, which could render an area unusable, so one avenue for engagement is to assess how well cybersecurity measures are in place. If an entire city is controlled by software, that software needs to be extremely robust, updated, and monitored extensively. Data hacks can become an even more significant issue than they already are when one’s entire environment is embedded with software. Furthermore, many of the proposed technologies to “streamline” and “anticipate your every need” require extensive data monitoring, meaning every resident will be under excessive surveillance.
In general, using these technologies almost always allows for greater surveillance of individuals, and therefore, is especially concerning in authoritarian regimes. The use of AI in the private sector gives rise to surveillance capitalism, whereby our privacy is traded in for “convenience,” but in reality, our behavior is being predicted, influenced, and modified. In the private sector, the use of driverless taxis and human-free shops and grocery stores (with no contact and no checkouts) does reduce possibilities for employment. Furthermore, these kinds of shops all require apps to function and are equipped with extensive surveillance capabilities. If these become the norm, people will have no choice but to opt into these measures.
Engagement with these programs is difficult in any region due to the ability to invoke trade secrecy and governments’ lack of AI-focused frameworks. But the lack of even minor accountability frameworks makes accessing information much more difficult in the WANA region. Each country has made statements of aiming for ethical and accountable AI without any concrete legislation to achieve this.
For the North African countries mentioned, this is very much a watching stage to see how they begin implementing these strategies. For the Gulf region, calls for testimonials could perhaps provide insight into the human impact of these tools, which goes unmentioned in most research. Research on data protection, with a focus on how AI can further infringe on it, is a logical first move in this field.