JEDDAH: A competition to promote Islamic values and new technologies to make Qur’an studies more accessible to those with disabilities was held in Jeddah on Sunday.
The closing ceremony of the 16th edition of the Jeddah University Qur’an Competition for the Hijri year 1446 AH was held under the patronage of Prince Saud bin Abdullah bin Jalawi, the governor of Jeddah.
More than 1,000 students took part in the competition, which featured five components: memorization and recitation; people with disabilities; technological challenges in Qur’anic services; Qur’anic calligraphy; and a photography competition.
Winners and participating institutions were honored by Prince Saud at the end of the ceremony.
Ibrahim Shaheen from Egypt took first place in the visually impaired category and received an SR7,000 ($1,866) cash prize. The 17-year-old has memorized the entire Qur’an, using braille to study the holy book.
He told Arab News: “I spent four years memorizing the Qur’an, starting at the age of nine. Reciting the five parts for the competition was very easy and smooth, and I generally review the entire Qur’an every two weeks.”
Safaa Habeeb Allah, head of the judging committee for the technology challenge, said the competition was “a wonderful opportunity for students from various universities and schools to contribute to serving the Qur’an.”
The winner of the SR20,000 technology challenge was the Wijdan app, which provides users with emotional support by combining resources from Islamic scripture and modern psychology.
The app was developed by Radwa Ammar Abdel-Moaty, Suad Anis Al-Saadi, and Gharam Khalil Al-Sharabi, all from Jeddah University.
Abdel-Moaty told Arab News that the inspiration for the app came after noticing similarities between modern psychological theories and passages in the Qur’an.
“This observation was the seed for the Wijdan project. The team combined the Qur’an and psychology using artificial intelligence, and the app mainly consists of two components: the psychological link and the emotion library,” Abdel-Moaty explained.
“The psychological link is an interactive chatbot that engages with the user, identifies emotions and responds based on the Qur’an and Sunnah. The emotion library acts as an index of human emotions mentioned in both the Qur’an and Sunnah.”
Abdel-Moaty added: “We used several technologies. First, we applied text and sentiment analysis using NLP (Natural Language Processing) in the interactive chatbot.
“We also developed smart emotion dictionaries and a reminder system that tracks your recurring emotions — whether sadness or joy — and offers advice and guidance accordingly.”
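Wijdan’s code is not public, so the following is only a rough illustration of the two ideas Abdel-Moaty describes — a dictionary-based emotion lookup and a tracker that surfaces a user’s recurring emotions. All keyword lists, names, and the classification logic here are invented for the sketch; the real app uses NLP-based sentiment analysis, not simple keyword matching.

```python
from collections import Counter

# Illustrative emotion dictionary: keywords mapped to emotion labels.
# (Invented for this sketch; Wijdan's actual dictionaries are not public.)
EMOTION_KEYWORDS = {
    "sad": "sadness", "grief": "sadness", "lonely": "sadness",
    "happy": "joy", "grateful": "joy",
    "afraid": "fear", "worried": "fear",
}

def classify_emotion(message: str) -> str:
    """Return the first emotion whose keyword appears in the message."""
    for word in message.lower().split():
        if word in EMOTION_KEYWORDS:
            return EMOTION_KEYWORDS[word]
    return "neutral"

class EmotionTracker:
    """Tracks recurring emotions across a user's messages, so advice
    can be tailored to whichever emotion keeps coming back."""
    def __init__(self):
        self.history = Counter()

    def log(self, message: str) -> str:
        emotion = classify_emotion(message)
        self.history[emotion] += 1
        return emotion

    def most_recurring(self) -> str:
        return self.history.most_common(1)[0][0]

tracker = EmotionTracker()
tracker.log("I feel so sad and lonely today")
tracker.log("Still worried about tomorrow")
tracker.log("I am sad again")
print(tracker.most_recurring())  # -> sadness
```

In a production system the keyword lookup would be replaced by a trained sentiment model, but the recurrence-tracking pattern above is the same either way.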
The app also has text-to-speech technology to make it accessible to those with disabilities, and includes AI-generated illustrations for children.
Another aspect of the competition focused on technologies to help people with disabilities learn the Qur’an.
The section winner was “Talaa — With Every Sign, a Verse is Recited,” a project that allows deaf and mute people to recite the Qur’an by using sign language.
It was created by Obay Rayan Ghulam and Aseel Ahmed Al-Hammadi from Jeddah University, who received a cash prize of SR10,000.
Obay told Arab News: “The inspiration for the project came from learning that the deaf and mute make up approximately 3.4 percent of the global population — around 55 million Muslims.
“Though the percentage may seem small, the number is enormous. Despite this, there are very few specialized resources or experts catering to their needs.”
Obay continued: “The core idea is to give this segment of the community the same independence everyone else enjoys — the ability to recite anytime, anywhere, without needing an interpreter or teacher. All they need is sign language and a device.
“They can see whether their sign is correct and whether they’ve recited the verse accurately, then move to the next one. There’s also a page where they can learn the Qur’an by reading the verses, memorizing them, and then reciting again.
“We used the Madinah Qur’an as our source, which shows the Qur’anic text in Arabic script with sign language letters above. For this, we used deep learning and computer vision techniques, particularly the YOLO (You Only Look Once) model.”
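Talaa’s implementation is not published. As a simplified sketch of the step that would follow the YOLO detector — collapsing per-frame sign detections into a letter sequence and checking it against a target verse — one might write the following. The detections here are simulated tuples; the confidence threshold, labels, and matching logic are all assumptions made for illustration.

```python
# Illustrative post-processing for a sign-recitation checker.
# In the real system a YOLO detector would emit a class label per video
# frame; here detections are simulated as (label, confidence) pairs.

CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff for keeping a detection

def detections_to_letters(detections):
    """Collapse a stream of per-frame detections into a letter sequence,
    keeping only confident detections and dropping consecutive repeats
    (the same sign held across many frames)."""
    letters = []
    for label, conf in detections:
        if conf < CONFIDENCE_THRESHOLD:
            continue
        if not letters or letters[-1] != label:
            letters.append(label)
    return letters

def check_recitation(detections, target_letters):
    """Return True if the signed letters match the target verse letters,
    letting the user confirm each verse before moving to the next."""
    return detections_to_letters(detections) == list(target_letters)

# Simulated detections for the word "بسم" signed letter by letter:
frames = [("ب", 0.9), ("ب", 0.8), ("س", 0.3), ("س", 0.7), ("م", 0.95)]
print(check_recitation(frames, "بسم"))  # -> True
```

The repeat-dropping step matters because a held sign spans many video frames; without it, every letter would be counted dozens of times.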