This story is part of Data Narratives, a Civic Media Observatory project that aims to identify and understand the discourse on data used for governance, control, and policy in El Salvador, Brazil, Turkey, Sudan, and India. Read more about the project here and see our public dataset for the full analysis covered in the text below.
The discourse around AI entered Turkey with the naivete attached to most innovations in the contemporary age, just like anywhere else in the world. Could AI drive our cars? Will robots speak to us? Can AI cure cancer? Will all social problems be resolved in seconds now that we have the perfect incorruptible and unbiased technocrat? In short, any existing problem was viewed in the framework of AI. The possibilities seemed endless! AI: the king of the jungle, the mighty lion, will get rid of the flawed pretenders and save us with its perfection.
But, while discussions around AI are widespread and accessible elsewhere in the world, in Turkey this has not been the case. Not yet, anyway. Although an abundance of AI experts exists, their interaction with the public at large is rare. This has created a very vague understanding of what AI is: in most people's minds, it is simply machine learning and associated technologies. As such, there is a profound lack of overall knowledge about the technology, including its limits and the growing concerns around its broader impact. The use cases of AI elsewhere already indicate that all the noise around this technology is far from the lion's mighty roar and more like the annoying buzz of a mosquito.
Less than 40 years have passed since the triumph of neoliberalism, which declared the end of grand narratives, the end of politics, and the end of history. Today, these statements sound even more ridiculous than they did decades ago, as history manifests itself clearly and violently in every possible instance. Still, the idea of depoliticized and enlightened despotism by an unbiased and incorruptible technocracy survives in many geographies, including Turkey. Perhaps this search for incorruptible rulers isn’t shocking, considering that Turkey ranks 115 out of 180 countries on Transparency International's Corruption Perceptions Index. In comes the king of the jungle, our AI overlord, to solve any controversial problem with the certainty only a sci-fi android can have.
The use of AI in elections and political blackmail
So where and how is AI being utilized in Turkey? Most recently, AI “magic” was used during the latest municipal elections. While answering questions about the list of controversial candidates ahead of the local election, the leader of the main opposition Republican People’s Party (CHP), Özgür Özel, pointed to AI as the culprit: apparently, the party leadership relied on this technology before announcing the party’s candidates. Meanwhile, the ruling Justice and Development Party (AKP)’s mayoral candidate for Istanbul, Murat Kurum, vowed to solve the megacity’s zombie traffic problem using AI. Making a good guess about a candidate’s chance of winning and then balancing that with a party’s values is a difficult task. So is declaring one’s stance on car-centric infrastructure and the state of public transport as a candidate for the municipality of a megacity. Any decision made will mean a political sacrifice for the possibilities rejected, unless there is a perfect machine making those decisions, of course. AI for the win-win!
However, as the election results indicated, even AI could not help the ruling party in the local elections. The AKP received the lowest percentage of votes in its history of participation in municipal elections, both in Istanbul and across the country overall. Nor did AI help the main opposition CHP: although the party secured a historic victory, the controversial candidate Özel referred to, the one picked with the help of AI, actually lost to the AKP candidate. Surely, neither result can be ascribed to AI, even partially. Yet there is no doubt these examples will be brought up whenever someone mentions AI in a political context again.
Where AI can actually touch people’s lives and make itself felt, the narrative about it is much less impressive than its claimed or hypothetical uses in politics.
In most real-life examples and case studies, AI appears as something far removed from the omnipotent technocrat that can solve complex problems, from local politics to urban infrastructure. These examples and cases illustrate the annoying, at best, if not outright abusive, side of AI technology.
A search for “yapay zeka” (Turkish for artificial intelligence) on any online search engine first lists results celebrating AI, similar to the sentiments shared by politicians. Potential and (sometimes not exactly accurate) actual uses of AI, mostly abroad, are shown through rose-colored glasses. Keep scrolling, and another face of the AI debate emerges. Here, we see scores of people complain about AI, describing it as annoying and frustrating. Devoid of empathy and competence, AI technology starts sounding more like the late cultural critic Mark Fisher’s conceptualization of a “boring dystopia.”
The dark side of AI tech
Irritating is not the worst AI can be. There is a darker side to it. Even before Taylor Swift's deepfakes alerted the US Congress, there were instances of deepfake methods being used to sexually harass people in Turkey. Public discussion was sparked when a young woman alerted the media about a stalker who used deepfakes based on her social media images. Just before the 2023 general elections, a member of parliament and the leader of the Turkey Workers’ Party (TİP), Erkan Baş, also had AI-generated pornographic photographs of him distributed on Twitter. Since then, those images have been entirely deleted from social media. Both examples make it clear that neither relatively unknown people nor public figures are safe from this postmodern method of sexual harassment.
Another way deepfake technology has been used in Turkey is to mimic voices, as a conman attempted to do with President Recep Tayyip Erdoğan’s voice to demand money from his victims. The mosquito metaphor should be more evident now than it was at the beginning of this article. For a lot of people, AI just buzzes around their heads, annoying and disgusting, threatening to carry old diseases in a more efficient way.
The discourse that exists, of course, follows AI's actual usage. There is, for example, no honest discussion about AI reproducing discrimination in public services, as was the case in the Netherlands, since AI is not yet used in public services in Turkey. As AI keeps growing in use, so will the discourse. However, this growth and debate should be supported by expanded critical discussion led by people who actually understand the technology and what it entails: the good, the bad, and the ugly.
In the end, while the technology does have potential uses, the current reality of AI in Turkey is closer to the bothersome mosquito than the majestic lion. A fitting metaphor, considering the lack of expert knowledge in public discourse as well. After all, lions do not actually live in jungles.