Methodology
All findings and analyses were developed through the review of publicly available information, including online articles, research papers, investment announcements, and news reports.
We relied on internationally recognized news outlets and research platforms such as Reuters, Bloomberg, Financial Times, and Telecom Review to obtain accurate and up-to-date information on global corporate partnerships and investment figures. To contextualize and verify these findings within the regional landscape, we also consulted local and Gulf-based sources, including AGBI (Arab Gulf Business Insight), Zawya, Middle East AI News, and Gulf Business.
Wherever possible, we cross-referenced figures and announcements across multiple sources to ensure consistency. However, due to the limited availability of official government data or detailed financial disclosures in some cases, some investment amounts or project details remain approximate or based on reporting from trusted media outlets. No direct interviews were conducted, and no internal or confidential documents were accessed as part of this research.
Article
The Gulf Cooperation Council (GCC) states, particularly Saudi Arabia, the UAE, Qatar, and Oman, are increasingly becoming central hubs for artificial intelligence (AI) investment. Driven by ambitions outlined in strategic visions such as Saudi Vision 2030 and the UAE National AI Strategy 2031, regional governments and private entities are injecting billions into AI to position themselves as leaders in technological innovation and economic diversification. While these developments promise considerable economic benefits, they simultaneously raise critical questions about digital rights and surveillance.
Vision 2030 is Saudi Arabia’s national development plan, launched in 2016 by Crown Prince Mohammed bin Salman, aimed at reducing the Kingdom’s reliance on oil and diversifying its economy. The plan sets out a broad framework for economic, social, and cultural reforms designed to modernize the country and create new revenue streams outside of the petroleum sector.
Similar to Vision 2030, the UAE National AI Strategy 2031 is the United Arab Emirates’ long-term plan to become a global leader in artificial intelligence by the year 2031. Launched in 2017, it was one of the world’s first national AI strategies, and it is part of the UAE’s broader economic diversification and digital transformation agenda.
AI investments in the Gulf are multifaceted, reflecting both the ambitions of state-led development strategies and the growing involvement of private sector entities. These investments are not confined to a single domain but span a variety of sectors that the Gulf states see as critical for future-proofing their economies. Key areas include energy, healthcare, finance, smart cities, and defense: industries where AI can increase efficiency and reduce costs and, in the aspiration of Gulf governments, position these countries at the forefront of global technological leadership.
Unsurprisingly, a significant proportion of these investments are led or heavily backed by government-related entities. For example, Saudi Arabia’s Public Investment Fund (PIF), established in 1971 as the sovereign wealth fund of Saudi Arabia and one of the largest in the world, plays a central role in financing AI projects and driving the Kingdom’s Vision 2030 agenda. Similarly, in the United Arab Emirates (UAE), the Mubadala Investment Company, an Abu Dhabi-based sovereign wealth fund with over $280 billion in assets, is a key player in directing AI investments. Alongside Mubadala, G42, an Abu Dhabi-based AI and cloud computing firm established in 2018, has emerged as a major actor in the region’s AI landscape, involved in projects spanning healthcare, surveillance, and data infrastructure. These entities ensure that AI development aligns with national strategic goals like economic diversification and technological self-sufficiency.
However, these initiatives are not purely local ventures. While many AI projects aim to establish a strong physical and infrastructural presence within Gulf territories, such as AI data centers, research hubs, and smart city components, a substantial part of the investment strategy involves forging global partnerships. These include collaborations with Western tech giants like IBM, Databricks, and Cerebras, as well as venture funding for startups abroad, such as Qatar’s investment in Builder.ai, a London-based software development platform. This dual approach, investing in both domestic capabilities and international partnerships, serves several purposes: it helps Gulf countries exert influence over major technology companies, expand their geopolitical soft power, and import much-needed technical expertise to their own territories.

Ultimately, this raises the risk that these governments will use AI to entrench authoritarian practices as they shift from oil-dependent economies to data-driven governance systems. While this transformation is often framed as progress, the consolidation of advanced technologies in the hands of regimes with limited transparency, weak accountability, and well-documented histories of repressing dissent presents serious risks. These governments have previously used imported surveillance technologies to monitor political opponents, journalists, and even members of their own royal families. One notable example is Pegasus spyware, developed by the Israeli firm NSO Group, which was deployed by several Gulf states to hack phones and monitor individuals without their knowledge. In one widely reported case, a senior Emirati royal was alleged to have used Pegasus to spy on his ex-wife and her legal team during a custody dispute in the United Kingdom.
The same types of governments now gaining access to advanced AI tools, such as facial recognition, behavioral analytics, and predictive modeling, are already known to abuse surveillance powers.
Several key individuals and entities are driving AI expansion in the Gulf. Omar bin Sultan Al Olama, UAE Minister for AI, is spearheading national strategies, while the Saudi Data and AI Authority (SDAIA) leads efforts to integrate AI across Saudi Arabia’s governmental services. Regional companies, notably G42 in the UAE, are pivotal in shaping AI agendas, partnering with global tech giants like IBM and Microsoft.
Despite the promising economic narrative, Gulf states’ history of surveillance and digital rights violations casts a shadow over AI developments. The Internet Governance Forum (IGF) held in Riyadh, Saudi Arabia, sparked controversy, notably over concerns of pervasive surveillance and digital repression. The forum, while advocating responsible AI usage via the Riyadh AI Declaration, was criticized for contradictions between public statements and the Kingdom’s documented human rights record, including misuse of technologies such as Pegasus spyware against activists.
As such, the rise in AI investments inevitably amplifies concerns about surveillance and data misuse. The integration of AI into surveillance apparatuses, such as facial recognition, predictive analytics, and digital identity systems, presents heightened risks of human rights abuses, particularly privacy violations and discriminatory monitoring of marginalized or activist communities.
Gulf states currently lack robust regulatory frameworks to ensure transparency and accountability for AI systems. Without comprehensive legislation, the expanded use of AI could exacerbate existing patterns of state surveillance, undermining individual privacy, freedom of expression, and other fundamental human rights.
The Gulf’s ambitious push into AI presents a double-edged sword: immense economic potential on one side and considerable ethical and digital rights challenges on the other. Balancing technological advancement with the protection of human rights requires transparent governance, stringent regulation, and active engagement from both civil society and international stakeholders. Without these safeguards, the region’s promising AI future risks being eclipsed by data abuse and increased surveillance.
Section 1: Key Investments in AI
AI is not inherently problematic, whether as a form of technological innovation or as an investment strategy, but the question of ownership remains central. The risks associated with AI do not emerge solely from its capabilities but from the context in which it is deployed and the structures that control its development and application. In settings where civil society is weak, political dissent is criminalized, and transparency is lacking, the consolidation of AI infrastructure within state or quasi-state entities can facilitate large-scale, codified repression.
As such, it is important to investigate and expose the central actors and stakeholders behind this surge in AI investments.
Before diving into a deeper analysis of the relationship between technology and human rights in the region, it is important to lay out the current scheme of investments in the GCC.
Saudi Arabia is currently the largest investor in AI compared to other Gulf countries, driving a significant wave of technological projects across the region. At LEAP 2025, a major technology conference held annually in Riyadh, Saudi authorities announced $14.9 billion in AI-related initiatives, reinforcing their role in the global AI market. One of the flagship projects is NEOM, the planned $500 billion smart city backed by the Saudi Public Investment Fund (PIF). NEOM recently invested $100 million in Pony.ai, a Chinese autonomous vehicle company founded in 2016 and headquartered in Guangzhou and California, specializing in self-driving technology. The agreement includes plans to develop autonomous transport infrastructure in NEOM and the wider Middle East and North Africa region, including a regional R&D and manufacturing hub.
State-owned enterprises like Aramco, Saudi Arabia’s national oil company and one of the world’s largest energy producers, also play a central role in these initiatives. In recent years, the company has expanded its focus beyond fossil fuels, aligning with Saudi Vision 2030 to diversify into renewable energy, petrochemicals, and digital technologies. Aramco’s partnership with Cerebras Systems, a California-based AI hardware company known for developing the world’s largest computer chip designed to accelerate AI workloads, focuses on advancing AI-driven solutions within the energy sector, signaling a move to embed AI into traditional industries.
The United Arab Emirates (UAE) is following a similar path, placing G42 and the government-backed MGX investment firm, founded in 2024 to manage AI-focused assets, at the heart of its ambitious AI strategy. MGX is notably involved in the $500 billion Stargate Project, a major AI infrastructure initiative announced in January 2025 by U.S. President Donald Trump, alongside international tech giants such as OpenAI (a U.S.-based AI research company behind ChatGPT), Oracle (a U.S.-based cloud services and database software company), and SoftBank (a Japanese multinational conglomerate known for its investments in technology and AI).
Additionally, the Polynome Group, a UAE-based investment and technology group, launched a $100 million fund to support AI startups in areas such as robotics and AI software. The fund plans to back up to 40 companies over five years, contributing to the growth of the local AI ecosystem.
In Qatar, the Qatar Investment Authority (QIA), the country’s sovereign wealth fund established in 2005, made a significant $250 million investment in Builder.ai, a London-based software development platform that enables businesses to build custom software applications without coding expertise. Similarly, Databricks, a U.S.-based AI and data analytics company known for its unified data platform, expanded into the Gulf region through Azure Qatar, a partnership with Microsoft’s Azure cloud platform, marking a key step in integrating global AI firms into the local landscape.
There is also growing interest in Oman, where Elon Musk, CEO of Tesla and SpaceX, has called for increased AI investments during events in Muscat. It was also reported that the Oman Investment Authority (OIA) has acquired a stake in Musk’s artificial intelligence company xAI. This signals that the investment flux may spread to other Gulf states beyond Saudi Arabia and the UAE.
Beyond local development, Gulf countries are investing in global tech companies to build influence and import expertise. One example is the $100 million partnership between Open Innovation AI, an Abu Dhabi-based AI startup focused on AI operating systems, and World Wide Technology (WWT), a U.S.-based technology service provider specializing in digital strategy and IT infrastructure. The deal integrates Open Innovation AI’s platform into WWT’s operations, focusing on sectors such as sustainability and education.
These investment patterns reveal a clear strategy among Gulf countries to position themselves within global AI networks. Through a combination of domestic infrastructure projects and international partnerships, they are working to expand their influence over AI development while acquiring the technical knowledge needed to support their local ambitions.
Section 2: Key Stakeholders and Influential Figures
In the United Arab Emirates (UAE), the Mubadala Investment Company and G42 have been instrumental in channeling investments into AI technologies. Under MGX, established in 2024, Mubadala and G42 aim to manage assets worth $100 billion, focusing on AI-driven investments.
Similarly, Saudi Arabia’s Public Investment Fund (PIF) has been proactive in AI investments, launching a $40 billion fund dedicated to AI initiatives. The PIF’s strategy includes attracting foreign tech companies to establish operations within the Kingdom, in an effort to foster domestic AI development.
Key individuals have played pivotal roles in advancing AI initiatives in the Gulf. In the UAE, Sheikh Tahnoun bin Zayed Al Nahyan, the national security adviser, chairs G42 and has been influential in steering the country’s AI strategy. In Saudi Arabia, Yasir Al-Rumayyan, governor of the PIF, has overseen significant investments in AI and technology sectors.
Although the private sector’s role in Gulf AI investments remains limited compared to state actors, there are signs of gradual growth. Venture capital investments in AI startups across the Middle East have been relatively modest, with reports indicating a total of $700 million invested in AI startups in the region as of November 2024. However, activity from private investors showed signs of expansion in the first quarter of 2025, suggesting a slow but steady rise in private sector engagement. New funds and incubators are emerging, and regional startups are beginning to attract attention from both local and international investors, although the scale remains small compared to state-led initiatives.
Interestingly, global technology companies have established significant partnerships with Gulf nations. One motivation behind these initiatives may be the race between the United States and China for economic dominance in the Gulf.
In fact, the Gulf region has become a key battleground in the ongoing technological competition between China and the United States, with both powers trying to secure influence over the future of AI, cloud infrastructure, and digital services in the SWANA region. The region’s strategic importance, combined with its substantial financial resources and demand for technological development, has attracted significant investments from both sides. American technology companies such as Microsoft, IBM, Google, and Amazon have ramped up their engagement in the Gulf, including Microsoft’s $1.5 billion investment in G42, aimed at strengthening AI capabilities and securing a foothold in the region’s digital infrastructure. At the same time, China has expanded its presence through partnerships like Alibaba’s $238 million cloud computing deal with Saudi Telecom and the $400 million investment by Prosperity7, a Saudi fund, into China’s Zhipu AI, one of the leading generative AI startups. Chinese firms such as Huawei, Tencent, and Baidu have also pushed into the Gulf, with Baidu planning to deploy its Apollo Go robotaxis in Dubai, marking the first international launch of its autonomous vehicle services. Most recently, China’s Tencent Cloud launched its first Middle East cloud region, in Saudi Arabia, and Tencent plans to invest more than $150 million in the country over the next few years.
Section 3: The Benefits of Local Investment
While concerns about state dominance and surveillance risks are valid, there is also potential for AI development in the Gulf and the SWANA region to bring tangible benefits. Much of the existing AI infrastructure, including social media platforms, content moderation tools, and language models, has been developed in Western contexts, primarily shaped by U.S. and European regulatory frameworks, cultural assumptions, and linguistic biases. These systems often fail to adequately understand the region’s languages, dialects, and social contexts, leading to misinterpretations, content removal errors, or algorithmic biases against certain cultural or political expressions.
The development of local AI models that are capable of processing Arabic dialects, Farsi, Turkish, Kurdish, and other regional languages with greater accuracy could help bridge this gap. AI trained within the region, reflecting its unique linguistic, cultural, and social nuances, offers an opportunity to design systems that better serve local populations, whether in healthcare, education, or digital communication.
Additionally, the dominance of Western tech platforms, particularly those based in the United States, has led to repeated failures in addressing the linguistic and cultural complexity of the SWANA region. Meta, for instance, has been widely criticized for content moderation practices that result in the systematic over-moderation of Arabic content and the under-moderation of Hebrew content, especially in politically sensitive contexts such as Palestine. This disparity is not just a product of biased policies but also of structural design choices. Meta has historically outsourced its Arabic moderation operations to low-paid moderators with limited training in local dialects or regional political sensitivities. Moreover, the algorithms used to enforce moderation decisions are largely trained on English and a few dominant Western languages, with little attention given to dialectal variations or socio-political nuances. One of the most illustrative examples is when Meta came close to banning any use of the Arabic word “Shaheed” across all its platforms. Reliance on Meta’s Dangerous Organizations and Individuals (DOI) list in content moderation has also resulted in the disproportionate restriction of Palestinian and other Arab political speech, often conflating activism with extremism. These systemic gaps highlight the longstanding and urgent need for locally trained and contextually grounded AI systems that can better navigate the region’s linguistic diversity and socio-political realities. Developing such tools within the Gulf and the broader SWANA region would not only improve accuracy and fairness but also promote freedom of expression in a region where dissidents are already censored by governments.
While the broader digital rights concerns remain, the potential for more contextually aware and linguistically competent AI systems offers a path toward improving regional digital environments that have long been shaped by external actors.
Section 4: Potential Threats and Digital Rights Implications
As artificial intelligence capabilities expand across the Gulf, concerns grow that these technologies will reinforce and scale existing surveillance infrastructures. The Gulf countries have a history of employing digital tools for monitoring and social control, often targeting journalists, activists, and human rights defenders. AI systems, particularly those used in facial recognition, predictive policing, and biometric data analysis, risk making these practices more widespread, automated, and less transparent.
One significant risk is the violation of privacy and misuse of personal data. AI technologies integrated into security systems, smart cities, or public services can process large volumes of sensitive personal information, often collected without proper consent. In the absence of strong legal protections, this data can be exploited for monitoring political dissent or marginalized communities under broad national security justifications. These threats are compounded by the lack of independent oversight or enforcement mechanisms in the Gulf’s current regulatory landscape.
There have been many documented instances of human rights violations linked to state-sponsored use of advanced surveillance tools in the Gulf monarchies, particularly NSO Group’s Pegasus spyware. Multiple Gulf countries, including Saudi Arabia and the United Arab Emirates (UAE), have been documented using Pegasus to target journalists, lawyers, activists, and political opponents. A prominent case is the surveillance of Loujain al-Hathloul, a Saudi women’s rights activist, whose phone was infected with Pegasus shortly before her arrest and detention. Similarly, Ahmed Mansoor, an Emirati human rights defender, was repeatedly targeted with Pegasus in the lead-up to his imprisonment under harsh conditions in the UAE. Bahraini activists, including members of the Bahrain Center for Human Rights and the Bahrain Institute for Rights and Democracy, were also identified as victims of Pegasus intrusions, exposing their communications and networks to government scrutiny.
In the case of Jamal Khashoggi, forensic investigations revealed that Pegasus was used to monitor his fiancée Hatice Cengiz and other associates both before and after his assassination in Istanbul in 2018. Reports also indicate that Omani journalists and activists have been subjected to spyware attacks, although these cases receive less international attention.
As Gulf countries continue to adopt AI-driven surveillance technologies such as facial recognition and predictive policing, the potential for amplifying these abuses increases significantly. These new technologies could make surveillance more automated, pervasive, and difficult to detect, further threatening the privacy and safety of those who speak out against repression. In the absence of independent oversight and meaningful legal safeguards, the expansion of AI risks reinforcing these violations and entrenching a digital environment hostile to freedom of expression and human rights.
In fact, the UAE has already implemented such a system in Dubai. The Oyoon program, launched by Dubai Police in 2018, is a comprehensive surveillance initiative that integrates artificial intelligence (AI) with an extensive camera network, reportedly reaching its objective of 300,000 cameras across the city by 2023. Designed to enhance public safety and streamline law enforcement operations, Oyoon, meaning “eyes” in Arabic, uses AI technologies such as facial recognition, license plate reading, and behavioral analytics to monitor and analyze real-time data from various locations, including streets, public transportation, and commercial areas.
The deployment of algorithms, smart city infrastructures, and constant surveillance systems across the Gulf is creating conditions that resemble large-scale laboratories for social control and data extraction. These environments, where surveillance technologies are embedded into everyday urban life, offer authorities unprecedented visibility over populations, movements, and behaviors. This model echoes the use of surveillance technologies in other regions, notably in Palestinian territories, where Israel has been criticized for turning areas like the major cities of the West Bank into testing grounds for military and surveillance tools. In such contexts, the integration of facial recognition, predictive policing, and biometric tracking allows for continuous monitoring and profiling of targeted communities.
Events like the 2024 Internet Governance Forum (IGF) in Riyadh highlighted the reality of this surveillance environment. During the IGF, where activists criticized Saudi Arabia’s human rights record, official UN video footage of the panel was made unavailable on public platforms shortly after the event. The footage, which included statements from the families of detained activists, disappeared from the UN’s online archive. This incident drew widespread criticism from human rights groups, who viewed it as an example of the broader culture of censorship and control that AI and digital technologies risk deepening if left unchecked.
Both G42 and Aramco, being central players in Gulf AI investments, have been implicated in controversies surrounding human rights and digital surveillance. G42, based in Abu Dhabi, has faced scrutiny for its historical ties with Chinese technology firms including Huawei and SenseTime. These companies are associated with supplying surveillance technologies used in human rights abuses, particularly against Uyghur minorities in China. Although G42 has reportedly scaled back some of these partnerships due to international pressure, its involvement in sensitive sectors like health data, facial recognition, and AI surveillance systems continues to raise alarms about privacy and freedom of expression in the UAE. G42 has also been linked to the ToTok scandal, where a seemingly benign messaging app was exposed as a mass surveillance tool, allegedly used by Emirati authorities to monitor citizens and foreign nationals.
These concerns are not only hypothetical as some of these potential threats have either already happened or are happening in other regions.
In regions like Xinjiang, China, smart-city technologies have been used to track ethnic minorities and enforce digital checkpoints. Similarly, in the West Bank, Israel has used facial recognition and automated surveillance infrastructure to monitor and restrict the movement of Palestinians. NEOM’s stated vision of full biometric integration risks replicating these systems in the Gulf context.
Perhaps one of the most concerning scenarios is UAE-based MGX’s investment in the Stargate Project, whose partnerships with global AI developers like OpenAI, Oracle, and SoftBank could allow Gulf governments to influence how foundation models are trained and used. Foundation models are not neutral: the data they are trained on shapes their outputs, and decisions about what to include or exclude are deeply political. If entities with poor human rights records help steer how these models are constructed, there is a risk that AI systems could be designed to reinforce censorship, erase narratives critical of the state, or downplay the experiences of oppressed communities. Language models could be optimized to suppress certain terms or reframe political issues, creating digital environments where dissent is silently excluded and only state-approved discourse is visible.
Underlying all these concerns is the issue of infrastructure ownership, particularly semiconductors and AI chips. The global chip shortage of recent years exposed the fragility of tech supply chains and the strategic value of chip manufacturing. High-performance AI chips, such as those made by Nvidia or AMD, are computational tools that can power mass surveillance operations, real-time data analysis, and behavioral modeling. Some professionals and policymakers in the United States have argued that the country should restrict access to these chips and avoid exporting its most advanced AI hardware to foreign governments. While their concerns are largely political and economic, empowering authoritarian governments with this hardware could open new doors for censorship and oppression, this time at a global level. For example, if these chips are embedded into surveillance platforms, authoritarian regimes could scale repression faster and with greater precision than ever before.
The increasing acquisition of AI technologies by entities like G42 and Aramco poses significant risks. Their involvement in global AI partnerships grants them access to sophisticated systems that could further entrench state control and surveillance, worsening existing human rights concerns across the region.
The development of advanced technologies in regions that have historically lacked digital infrastructure and representation, particularly in the Arab world, presents an important opportunity. Locally developed AI systems have the potential to reduce cultural bias, improve linguistic accuracy, and create digital spaces that better reflect the lived realities and diverse voices of the region. This could lead to more equitable systems, better protection for freedom of expression, and a shift away from the dominance of Western-centric platforms that often fail to understand or accommodate local contexts. However, the greatest concern lies not in the technology itself, but in the identity and track record of the actors leading this progress. When the primary drivers of technological development are state institutions known for repression, censorship, and the suppression of dissent, the risk is that these new tools will be used not to empower, but to control. The promise of regional innovation must therefore be met with vigilance, transparency, and strong safeguards to ensure that technological growth does not come at the cost of fundamental rights.
Featured image via AFP.