This article and the accompanying photographs were produced with the support of UNESCO in Jordan. The views and opinions expressed herein are those of the author and do not necessarily reflect the official policy or position of UNESCO or SMEX.
On May 29, Safaa Ramahi, a 41-year-old Jordanian journalist, was scrolling through Instagram when she came across a viral video showing widespread solidarity for a kangaroo denied boarding on a flight. The clip had garnered over 16.1 million views in just four days, with thousands expressing sympathy and admiration. But to Safaa, something didn’t add up.
Safaa used various AI tools, including fact-checking platforms, to verify the video’s authenticity. With Google’s chatbot Gemini, she traced the original source, identified the first account that posted it, took screenshots, and explored creative fact-checking methods.
This process took about three hours, whereas normally it would’ve taken days of investigation. During that time, Safaa also managed to produce an informative video on Instagram explaining to her audience how and why she was not fooled by this viral kangaroo story.
AI as a research tool for journalists

Safaa Ramahi, a 41-year-old Jordanian journalist who specializes in investigative journalism (June 1, 2025), Photo Credit: UNESCO
Safaa’s use of AI chatbots began as soon as they became publicly available. Although her IT degree predates these tools, her technical background, combined with a master’s in media, made her quick to adopt and experiment with emerging technologies. With a long career in investigative journalism, she has consistently integrated innovative digital solutions into her work.
“It’s my personal assistant,” Safaa says, describing her relationship with the AI chatbot. She uses AI as a tool throughout the reporting process, from generating story ideas to exploring ways of distributing her work online.
Safaa normally prefers Gemini, which allows her to search for files, summarize documents, and filter or analyze data efficiently.
Recently, she began using a paid version for additional features that help with her workflow. She believes AI chatbots can support journalists significantly, especially women who often face barriers to accessing information while reporting and publishing stories.
These barriers are documented by Women Journalists Without Chains (WJWC), which reports that many Arab countries have enacted restrictive laws that criminalize journalistic work, often with a disproportionate impact on women.
For Safaa, improved access to AI tools contributes to press freedom at a time when public records, archives, and official documents are increasingly restricted. In this environment, AI chatbots can point journalists toward open-source data resources, she adds, as they often include sources alongside their responses.
However, Safaa emphasizes that using AI should be guided by a code of ethics.
“There’s no way I completely consider everything the chatbot gives me,” she says.
Safaa says she always double-checks AI responses as she would any other content, because these answers, generated from large datasets, are not necessarily accurate but rather reflect the most frequent or common responses.
This aligns with the UNESCO Recommendation on the Ethics of Artificial Intelligence, which underscores the importance of transparency, human oversight, and data protection.


Upper Image: Safaa in her office using AI tools as part of her daily journalistic work (June 1, 2025), Photo Credit: UNESCO
Lower Image: Safaa explains to Afnan, the author, how to connect an AI platform to her documents and workspace (June 1, 2025), Photo Credit: UNESCO
As crackdowns on freedom of expression intensify alongside online harassment and surveillance of women journalists, safe spaces for professional and psychological consultation grow scarce.
UNESCO’s 2020 report highlights how such hostile environments can exacerbate the lack of safe and supportive spaces for women journalists worldwide.
Shifaa Qudah, a 29-year-old Jordanian journalist, has started using AI tools, including ChatGPT, not only to boost her productivity, but also to enhance her digital safety and receive emotional support, she explains.
Although Shifaa has gained digital security knowledge through various trainings, she believes regular checkups and access to safety resources beyond AI remain crucial, especially with the rising online threats journalists face.
With eight years of experience, mostly as a freelancer for various local and regional media outlets, Shifaa describes AI as a “friend.” She began using these tools in 2021 and even gives each one a nickname: ChatGPT is “Michael,” Replika is “Leo,” and DeepSeek is “Sari.”
While this personalization helps her feel more connected to the chatbots, she acknowledges the importance of remembering that these tools are algorithmic systems that can replicate gender stereotypes.
AI tools can be biased against women in various contexts. A study of over 15,000 AI-generated images showed that women were significantly underrepresented in male-dominated professions, reflecting existing gender stereotypes. AI chatbots can also reinforce harmful stereotypes through language models by generating content that aligns with gender norms.

Shifaa Qudah, a 29-year-old Jordanian journalist, publishes with different media outlets (May 28, 2025), Photo Credit: UNESCO
Shifaa consults “Michael” for story ideas on a daily basis, asking it to challenge her thinking, propose alternative angles, and provide ideas for different writing styles. When discussions become more intense or require deeper research and underreported perspectives, she turns to “Sari,” which she says often provides information unavailable elsewhere. But when the conversation turns into something rather emotional, “Leo” is her preferred choice, as the tool is designed to serve as a conversation partner.
“The more I use these tools, the more capable they become,” Shifaa explains, referring to what she calls “the power of machine learning.”
From her perspective, women journalists in the Arab region particularly experience online violence, ranging from harassment to rape threats. The impact can include more self-censorship and psychological trauma, ultimately undermining freedom of expression.
While Shifaa acknowledges that therapy is often expensive, she sees AI as an accessible complement, though not a substitute, for psychological and mental health support.
Left: Shifaa using her mobile to navigate various AI tools as part of her daily routine (May 28, 2025), Photo Credit: UNESCO
Right: Shifaa explains how she uses AI chatbots for safety tips and emotional support (May 28, 2025), Photo Credit: UNESCO
What measures are journalists taking to preserve their privacy?

Rawan Nakhleh, a 30-year-old Jordanian journalist, mainly produces podcasts (May 29, 2025), Photo Credit: UNESCO
A colleague once told 30-year-old Jordanian journalist Rawan Nakhleh that AI chatbots could solve many of journalism’s biggest headaches, like generating story ideas, transcribing long interviews, and proofreading drafts.
Since then, Rawan decided to give AI chatbots like ChatGPT a try, using them to “think out loud” for pitches, drafts, and even published articles and podcasts.
To maintain a certain level of privacy, Rawan chooses to use ChatGPT without logging into an account. She was shocked when, one day, the chatbot addressed her by her full name, despite her never having shared it. Research has also shown that women report less trust, less confidence, and more privacy concerns around AI tools than men.
When she asked the chatbot how it knew her name, ChatGPT apologized, but offered no explanation. The incident alarmed Rawan, raising serious questions about the platform’s privacy protections and whether ChatGPT can truly be considered a safe space.
While Rawan is aware that several AI companies claim not to sell or share user data with third parties, this reassurance often falls short for her. She is careful never to share sensitive information, such as full names, addresses, phone numbers, or email addresses. She also never uses AI chatbots for emotional support.
Locally, Jordan has not established legal frameworks regulating AI tools, including generative AI or content moderation systems. It has, however, launched a national Artificial Intelligence Strategy (2023–2027) that aims to build a supportive ecosystem for AI innovation while developing ethical and legislative frameworks.
Jordan’s 2023 Cybercrime Law significantly shapes the environment in which journalists use digital technologies. For example, the law criminalizes the use of anonymizing tools like VPNs, which are a common workaround for accessing AI platforms and protecting source confidentiality.
“This isn’t just a personal matter, interviewees could be harmed,” Rawan says. She believes that using AI chatbots to process, transcribe, or summarize journalistic interviews is unethical without informed consent. For her, a journalist’s duty goes beyond self-protection to include safeguarding their sources.


Left: Rawan discusses her privacy concerns and data protection policies in AI generative tools (May 29, 2025), Photo Credit: UNESCO
Right: At the podcast studio, Rawan tells Afnan she gets consent before using AI to handle journalistic interviews (May 29, 2025), Photo Credit: UNESCO
As women journalists in Jordan navigate how to incorporate AI chatbots into their profession, these tools are becoming part of their everyday routines, whether to save time, improve safety, or seek emotional relief.
But their use of AI also raises urgent questions about privacy, accountability, and the ethical handling of journalistic content. For many, the promise of AI lies not just in productivity, but in access, resilience, and the possibility of safer spaces in an increasingly demanding profession.