March 29, 2025:
Last year, personnel from Unit 8200 trained an Artificial Intelligence/AI model to understand spoken Arabic. The model was trained on the vast number of telephone conversations and text messages obtained through electronic monitoring of the Palestinian territories.
Unit 8200 began building the model to create a sophisticated chatbot tool that could answer questions about monitored people by drawing on the massive quantities of data it had collected. Development accelerated after the Hamas attack out of Gaza in October 2023.
Unit 8200 built a Large Language Model/LLM, a deep learning system trained on all the Arabic-language material it had collected. The AI models gave Unit 8200 a picture of what people throughout the West Bank were doing, made possible by large-scale retention of intercepted Palestinian telecommunications. That meant using AI to analyze communications, recognize patterns and make predictions.
In 2022 ChatGPT became publicly available, and Unit 8200 experts set out to adapt that kind of LLM technology to handle complex requests using the massive amounts of information they had collected.
After the 2023 Hamas attacks Israel called up several hundred thousand reservists, including personnel who knew how to create LLMs. These experts began building an LLM that understood the formal written Arabic found in media broadcasts and literature. That created a problem, because most of the collected data was spoken Arabic, not written text.
Unit 8200 gathered all the spoken Arabic material it had and created a database of about a hundred billion words, much of it from groups hostile to Israel. The Israelis also trained the model to understand specific terms used by anti-Israel groups. This massive collection of training data included large volumes of communications between Palestinians, which was just what the model needed to succeed.
Earlier machine learning models used by Unit 8200 made wide-scale surveillance of Palestinians an effective form of control, particularly in the West Bank, where Israeli officials said it contributed to a greater number of arrests. The models enabled Israeli intelligence specialists to automatically analyze intercepted phone conversations and identify Palestinians planning to attack soldiers or Israelis living in illegal settlements. When Israeli soldiers entered West Bank communities, the AI system flagged people whose conversations contained words indicating hostile activity, such as throwing rocks or using firearms against soldiers.
When used to select targets for airstrikes, the AI sometimes makes mistakes, sending pilots to attack innocent civilians instead of militants. The Israelis admit that mistakes are made but argue that, in a war for survival, occasional mistakes are acceptable. Israel is fighting groups that want to destroy Israel and drive all surviving Jews from the region.