Another brick in the wall

One of the big words of the century, so far, has been “silo”: the idea that things are separated into categories or departments, effectively cutting people off from each other and denying them the chance to interact and learn from shared experience. The prevailing thought is that silos are bad and should be broken down as much as possible. Communication, it is believed, is the key to peace, joy, togetherness, happiness, and a better world. So far, so good.

The problem is that we, as a society, are moving further and further away from this. Technology has instead been gradually isolating us from one another. Everyone can relate to the situation where people gather for a meeting, a meal, or some other activity, and immediately take out their phones and stop talking to each other. We have our own “friends” on social media, people we never really have to meet, because we can simply post to them, email them, text them, tweet at them, send to Instagram, or use a whole host of other platforms. The situation has been acknowledged, but no effective solution to our sleepwalking into isolation has yet been found.

But there is a far more insidious danger to society as a working part of life, one that threatens both our collective and individual well-being. A few decades ago, people used to imagine what the world of the future would be like, conjuring science fiction scenarios of flying cars or robots that would do all our work for us. Others saw the future as a dark place: the kind of autocratic society envisioned by Orwell in his novel “1984”, or Arthur C. Clarke’s cold and remorseless computer HAL. But now we have the rise of Artificial Intelligence, AI, which is creating possible scenarios far beyond the imagination of those earlier science fiction writers.

In fact, Darrell West, senior fellow at the Center for Technology Innovation at the Brookings Institution, has said that he doesn’t need to read sci-fi, as he feels as if he’s already living it. Increasingly, in the coming years, AI will be used to develop government policies and to oversee national finances, healthcare systems, and other vital programs. Computer programs are already being used as instruments of surveillance. The Israel-based NSO Group markets its Pegasus software, which allows users to access the computers and phones of others without their knowledge. A recent article by Ronald J. Deibert, Professor of Political Science and Director of the Citizen Lab at the University of Toronto’s Munk School of Global Affairs and Public Policy, described what programs such as Pegasus can do:

“a spyware operator can surreptitiously intercept texts and phone calls, including those encrypted by apps such as Signal or WhatsApp; turn on the user’s microphone and camera; track movements through a device’s GPS; and gather photos, notes, contacts, emails, and documents. The operator can do almost everything a user can do and more, including reconfigure the device’s security settings and acquire the digital tokens that are used to securely access cloud accounts so that surveillance on a target can continue even after the exploit has been removed from a device – all without the target’s awareness.”

Perhaps your smartphone is smarter than you imagine? Pegasus might be confined largely to government and corporate use at the moment, but other, cheaper apps are already available. Meanwhile, generative text and image models such as ChatGPT-4 and Midjourney can create what we’ve come to know as deep fakes, in which videos, audio, and text appear to show individuals saying and doing things they never actually did, by manipulating genuine photos, video, or audio in ways that Photoshop only hinted at. Books, newspapers, and all media formats will be affected by this development.

In a recent Guardian article on AI, the potential dangers of such technology were laid out: “Fake articles circulating on the web, or citations of non-existent articles, are the tip of the misinformation iceberg. AI’s incorrect claims may end up in court. Faulty, harmful, invisible and unaccountable decision-making is likely to entrench discrimination and inequality. Creative workers may lose their living thanks to technology that has scraped their past work without acknowledgment or repayment”.

Don’t forget: no matter how sophisticated computer programs may be, they still rely in the first instance on human programming. These are not neutral systems, and their algorithms will reflect the ideology and philosophy of those who write them. This has immediate implications for society as a whole, for how we relate to each other, and for how we see reality itself. If we can’t tell reality from simulation, the genuine from the counterfeit, we will effectively surrender ourselves to the Big Brothers Orwell predicted, without ever knowing who those Big Brothers (and there will be many) may be.

This is not speculation or conspiracy theorising. It is happening already. As the Guardian put it: “The horse has not merely bolted; it is halfway down the road and picking up speed – and no one is sure where it’s heading”. We should talk again…
