The perils of Generative AI

In Technology and Culture, the quarterly interdisciplinary journal of the Society for the History of Technology, Melvin Kranzberg, the author of Kranzberg’s Laws, wrote what still reads as a fair reflection of Artificial Intelligence as it is today. His first law of technology states: “Technology is neither good nor bad; nor is it neutral.”

According to him, whether an evolving technology turns out good or bad depends on context and time. The same technology can produce different results in different circumstances, and those results can change when the situation changes.

One cannot argue against the benefits of Artificial Intelligence, of which I am personally a strong advocate; they are profound and immensely useful, especially in areas such as medicine, digital literature, artwork, and process automation. The opportunities are endless. On the other hand, there is no denying how it can manifest in the hands of a bad actor, with the potential to cause dangerous and unprecedented harm of a kind unheard of in the pre-AI world. Take, for example, the tragic news about a US teenager, Sewell Setzer III, who took his own life after becoming addicted to an AI character bot on one of the popular chatbot platforms. As his relationship with the companion chatbot grew intensely obsessive, the 14-year-old began withdrawing from family and friends and getting into trouble at school. It ultimately cost him his life. And this is not a one-off case; there are too many others like it.

Generative AI is deceptive, manipulative, and riddled with harmful biases, yet compelling. It has the power to convince, through complex models that have digested millions upon millions of human conversations across varying contexts. It is ridiculously attractive, but on the flip side it is still immature, cannot be relied on for accurate content, and has many times produced fabricated information that can be quite intimidating or threatening (the “hallucination” problem). Look at the multitude of apps available on mobile devices that can produce deepfakes, promote misinformation and hatred, clone someone’s voice from just a few minutes of samples, and generate compromising pictures of anyone, random or real-life, male or female, at the click of a button, simply by supplying photos of them. I recently saw a post on LinkedIn that showed a cake in the shape of a cat morphing into some other object. The cat appeared to melt during the morph, and I wondered whether an ordinary person could even tolerate watching it, never mind animal activists. Such is the imagination, and the extent, of the weird side of AI.

The real issue is this: AI technology is readily available in everyone’s hands. It is more ubiquitous than ever, something that can be developed and deployed rapidly and that can reach anyone at extraordinary speed and scale to converse and interact with. This is especially true of social media platforms, which seem to employ these technologies on a massive scale without really appreciating the unforeseen consequences they could bring. More worrying still, the technology is in its nascent stages, with a lot of growing up to do; without active intervention and guardrails for high-risk AI systems, the outcome could be unpredictable and devastating to humanity. Experts even suggest that regulators should have an “off-switch” to remove AI systems from the market if they prove harmful or pose unacceptable risks.

The hype is compelling, the media propaganda is glossy, and the label is everywhere. Educate yourself on what AI really is, how good and bad it can be, and don’t ever get carried away. Even its creators cannot fully explain the depth of its capabilities. Save yourself from the peril.