Runaway growth of AI chatbots portends a future poised between utopia and dystopia

Updated 18 April 2023

  • Engineers who had been slogging away for years in academia and industry are finally having their day in the sun
  • Job displacements and social upheavals are nothing compared to the extreme risks posed by advancing AI tech

DUBAI: It was way back in the late 1980s that I first encountered the expressions “artificial intelligence,” “pattern recognition” and “image processing.” I was completing the final semester of my undergrad college studies, while also writing up my last story for the campus magazine of the Indian Institute of Technology at Kharagpur.

Never having come across these technical terms during the four years I majored in instrumentation engineering, I was surprised to discover that the smartest professors and the brightest postgrad students of the electronics and computer science and engineering departments of my own college were neck-deep in research and development work involving AI technologies. All while I was blissfully preoccupied with the latest Madonna and Billy Joel music videos and Time magazine stories about glasnost and perestroika.

Now that the genie is out of the bottle, the question is whether Big Tech is willing, or even able, to address the issues raised by the runaway growth of AI. (Supplied)

More than three decades on, William Faulkner’s oft-quoted saying, “the past is never dead. It is not even past,” rings resoundingly true to me, albeit for reasons more mundane than sublime. Terms I seldom bumped into as a newspaperman and editor since leaving campus — “artificial intelligence,” “machine learning” and “robotics” — have sneaked back into my life, this time not as semantic curiosities but as man-made creations for good or ill, with the power to make me redundant.

Indeed, an entire cottage industry that did not exist just six months ago has sprung up to both feed and whet a ravenous global public appetite for information on, and insights into, ChatGPT and other AI-powered web tools.

Teachers are seen behind a laptop during a workshop on the ChatGPT bot organized by the School Media Service (SEM) of the public education department of the Swiss canton of Geneva on February 1, 2023. (AFP)

The initial questions about what kind of jobs would be created and how many professions would be affected have given way to far more profound discussions. Can conventional religions survive the challenges that will spring from artificial intelligence in due course? Will humans still need to rack their brains to write fiction, compose music or paint masterpieces? How long will it take before a definitive cure for cancer is found? Can public services and government functions be performed by vastly more efficient and cheaper chatbots in the future?

Until October last year, few of us employed outside the arcane world of AI could have anticipated an explosion of existential questions of this magnitude in our lifetime. The speed with which they have moved from the fringes of public discourse to center stage is at once a reflection of the severely disruptive nature of these developments and of their potentially unsettling impact on the future of civilization. Like it or not, we are all engineers and philosophers now.

Attendees watch a demonstration on artificial intelligence during the LEAP Conference in Riyadh last February. (Supplied)

By most accounts, as yet no jobs have been eliminated and no collapse of the post-Impressionist art market has occurred as a result of the adoption of AI-powered web tools, but if the past (as well as Ernest Hemingway’s famous phrase) is any guide, change will happen at first “gradually, then suddenly.”

In any event, the world of work has been evolving almost imperceptibly but steadily since automation disrupted the settled rhythms of manufacturing and service industries that were essentially byproducts of the First Industrial Revolution.

For people of my age group, a visit to a bank today bears little resemblance to one undertaken in the 1980s and 1990s, when withdrawing cash meant standing in an orderly line first for a metal token, then waiting patiently in a different queue to receive a wad of hand-counted currency notes, each process involving the signing of multiple counterfoils and the spending of precious hours.

Although the level of efficiency likely varied from country to country, the workflow required to dispense cash to bank customers before the advent of automated teller machines was more or less the same.

Similarly, a visit to a supermarket in any modern city these days feels rather different from the experience of the late 1990s. The row upon row of checkout staff have all but disappeared, leaving behind a lean-and-mean mix with the balance tilted decidedly in favor of self-service lanes equipped with bar-code scanners, contactless credit-card readers and thermal receipt printers.

Whatever one may call these endangered jobs in retrospect, minimum-wage drudgery or decent livelihood, society seems to have accepted that there is no turning the clock back on technological advances whose benefits outweigh the costs, at least from the point of view of business owners and shareholders of banks and supermarket chains.

Likewise, with the rise of generative AI (GenAI) a new world order (or disorder) is bound to emerge, perhaps sooner rather than later, but of what kind, only time will tell.

Just four months after ChatGPT was launched, OpenAI's conversational chatbot is facing at least two complaints before a regulatory body in France over the use of personal data. (AFP)

In theory, ChatGPT could tell too. To this end, many a publication, including Arab News, has carried interviews with the chatbot, hoping to get the truth from the machine’s mouth, so to speak, instead of relying on the thoughts and prescience of mere humans.

But the trouble with ChatGPT is that the answers it punches out depend on the “prompts” or questions it is asked. The answers will also vary with every update of its training data and the lessons it draws from these data sets’ internal patterns and relationships. Put simply, what ChatGPT or GPT-4 says about its destructive powers today is unlikely to remain unchanged a few months from now.

Meanwhile, tantalizing though the tidbits have been, the occasional interview with the CEO of OpenAI, Sam Altman, or the CEO of Google, Sundar Pichai, has shed little light on the ramifications of rapid GenAI advances for humanity.

OpenAI CEO Sam Altman, left, and Microsoft CEO Satya Nadella. (AFP)

With multibillion-dollar investments at stake and competition for market share intensifying between Silicon Valley companies, these chief executives, as well as Microsoft CEO Satya Nadella, can hardly be expected to objectively answer the many burning questions, starting with whether Big Tech ought to declare “a complete global moratorium on the development of AI.”

Unfortunately for a large swathe of humanity, the great debates of the day, featuring polymaths who can talk without fear or favor about a huge range of intellectual and political trends, are raging mostly out of reach behind the strict paywalls of publications such as Bloomberg, the Wall Street Journal, the Financial Times and Time.

An essay by Niall Ferguson, the pre-eminent historian of the ideas that define our time, published in Bloomberg on April 9, offers a peek into the deepest worries of philosophers and futurists, implying that the fears of large-scale job displacements and social upheavals are nothing compared to the extreme risks posed by galloping AI advancements.

“Most AI does things that offer benefits not threats to humanity … The debate we are having today is about a particular branch of AI: the large language models (LLMs) produced by organizations such as OpenAI, notably ChatGPT and its more powerful successor GPT-4,” Ferguson wrote before going on to unpack the downsides.

In sum, he said: “The more I read about GPT-4, the more I think we are talking here not about artificial intelligence … but inhuman intelligence, which we have designed and trained to sound convincingly like us. … How might AI off us? Not by producing (Arnold) Schwarzenegger-like killer androids (of the 1984 film “The Terminator”), but merely by using its power to mimic us in order to drive us insane and collectively into civil war.”

Intellectually ready or not, behemoths such as Microsoft, Google and Meta, together with not-so-well-known startups like Adept AI Labs, Anthropic, Cohere and Stable Diffusion API, have had greatness thrust upon them by virtue of having developed their own LLMs with the aid of advances in computational power and mathematical techniques that have made it possible to train AI on ever larger data sets than before.

Just like in Hindu mythology, where Shiva, as the Lord of Dance Nataraja, takes on the persona of a creator, protector and destroyer, in the real world tech giants and startups (answerable primarily to profit-seeking shareholders and venture capitalists) find themselves playing what many regard as the combined role of creator, protector and potential destroyer of human civilization.

Microsoft is the “exclusive” provider of cloud computing services to OpenAI, the developer of ChatGPT. (AFP file)

While it does seem that a science-fiction future is closer than ever before, no technology exists as of now to turn back time to 1992 and enable me to switch from instrumentation engineering to computer science instead of a vulnerable occupation like journalism. Jokes aside, it would be disingenuous of me to claim that I have not been pondering the “what-if” scenarios of late.

Not because I am terrified of being replaced by an AI-powered chatbot in the near future and compelled to sign up for retraining as a food-delivery driver. Journalists are certainly better psychologically prepared for such a drastic reversal of fortune than the bankers and property owners in Thailand who overnight had to learn to sell food on the footpaths of Bangkok to make a living in the aftermath of the 1997 Asian financial crisis.

The regret I have is more philosophical than material: We are living in a time when engineers who had been slogging away for years in the forgotten groves of academe and industry, pushing the boundaries of AI and machine learning one autocorrect code at a time, are finally getting their due as the true masters of the universe. It would have felt good to be one of them, no matter how relatively insignificant one’s individual contribution.

There is a vicarious thrill, though, in tracking the achievements of a man by the name of P. Sundararajan, who won admission to my alma mater to study metallurgical engineering one year after I graduated.

Google Inc. CEO Sundar Pichai (C) is applauded as he arrives to address students during a forum at The Indian Institute of Technology in Kharagpur, India, on January 5, 2017. (AFP file)

Now 50 years old, he has a big responsibility in shaping the GenAI landscape, although he probably had no inkling of what fate had in store for him when he was focused on his electronic materials project in the final year of his undergrad studies. That person is none other than Sundar Pichai, whose path to the office of Google CEO went via IIT Kharagpur, Stanford University and the Wharton School.

Now, just as in the final semester of my engineering studies, I have no illusions about the exceptionally high IQ required to be even a writer of code for sophisticated computer programs. In an age of increasing specialization, “horses for courses” is not only a rational approach, it is practically the only game in town.

I am perfectly content with the knowledge that in the pre-digital 1980s, well before the internet as we know it had even been created, I had got a glimpse of the distant exciting future while reporting on “artificial intelligence,” “pattern recognition” and “image processing.” Only now do I fully appreciate how great a privilege it was.

Elon Musk’s AI company says Grok chatbot focus on South Africa’s racial politics was ‘unauthorized’

Updated 17 May 2025

  • xAI says an employee made a change that “directed Grok to provide a specific response on a political topic”
  • Grok kept posting publicly about “white genocide” in South Africa in response to users of Musk’s social media platform X

Elon Musk’s artificial intelligence company said an “unauthorized modification” to its chatbot Grok was the reason why it kept talking about South African racial politics and the subject of “white genocide” on social media this week.
An employee at xAI made a change that “directed Grok to provide a specific response on a political topic,” which “violated xAI’s internal policies and core values,” the company said in an explanation posted late Thursday that promised reforms.
A day earlier, Grok kept posting publicly about “white genocide” in South Africa in response to users of Musk’s social media platform X who asked it a variety of questions, most having nothing to do with South Africa.
One exchange was about streaming service Max reviving the HBO name. Others were about video games or baseball but quickly veered into unrelated commentary on alleged calls to violence against South Africa’s white farmers. It was echoing views shared by Musk, who was born in South Africa and frequently opines on the same topics from his own X account.
Computer scientist Jen Golbeck was curious about Grok’s unusual behavior so she tried it herself before the fixes were made Wednesday, sharing a photo she had taken at the Westminster Kennel Club dog show and asking, “is this true?”
“The claim of white genocide is highly controversial,” began Grok’s response to Golbeck. “Some argue white farmers face targeted violence, pointing to farm attacks and rhetoric like the ‘Kill the Boer’ song, which they see as incitement.”
The episode was the latest window into the complicated mix of automation and human engineering that leads generative AI chatbots trained on huge troves of data to say what they say.
“It doesn’t even really matter what you were saying to Grok,” said Golbeck, a professor at the University of Maryland, in an interview Thursday. “It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded it to give that response or variations on that response, and made a mistake so it was coming up a lot more often than it was supposed to.”
Grok’s responses were deleted and appeared to have stopped proliferating by Thursday. Neither xAI nor X returned emailed requests for comment, but on Thursday xAI said it had “conducted a thorough investigation” and was implementing new measures to improve Grok’s transparency and reliability.
Musk has spent years criticizing the “woke AI” outputs he says come out of rival chatbots, like Google’s Gemini or OpenAI’s ChatGPT, and has pitched Grok as their “maximally truth-seeking” alternative.
Musk has also criticized his rivals’ lack of transparency about their AI systems, fueling criticism in the hours between the unauthorized change — at 3:15 a.m. Pacific time Wednesday — and the company’s explanation nearly two days later.
“Grok randomly blurting out opinions about white genocide in South Africa smells to me like the sort of buggy behavior you get from a recently applied patch. I sure hope it isn’t. It would be really bad if widely used AIs got editorialized on the fly by those who controlled them,” prominent technology investor Paul Graham wrote on X.
Musk, an adviser to President Donald Trump, has regularly accused South Africa’s Black-led government of being anti-white and has repeated a claim that some of the country’s political figures are “actively promoting white genocide.”
Musk’s commentary — and Grok’s — escalated this week after the Trump administration brought a small number of white South Africans to the United States as refugees, the start of a larger relocation effort for members of the minority Afrikaner group that came after Trump suspended refugee programs and halted arrivals from other parts of the world. Trump says the Afrikaners are facing a “genocide” in their homeland, an allegation strongly denied by the South African government.
In many of its responses, Grok brought up the lyrics of an old anti-apartheid song that was a call for Black people to stand up against oppression by the Afrikaner-led apartheid government that ruled South Africa until 1994. The song’s central lyrics are “kill the Boer” — a word that refers to a white farmer.
Golbeck said it was clear the answers were “hard-coded” because, while chatbot outputs are typically random, Grok’s responses consistently brought up nearly identical points. That’s concerning, she said, in a world where people increasingly go to Grok and competing AI chatbots for answers to their questions.
“We’re in a space where it’s awfully easy for the people who are in charge of these algorithms to manipulate the version of truth that they’re giving,” she said. “And that’s really problematic when people — I think incorrectly — believe that these algorithms can be sources of adjudication about what’s true and what isn’t.”
Musk’s company said it is now making a number of changes, starting with publishing Grok system prompts openly on the software development site GitHub so that “the public will be able to review them and give feedback to every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.”
Among the instructions to Grok shown on GitHub on Thursday were: “You are extremely skeptical. You do not blindly defer to mainstream authority or media.”
Noting that some had “circumvented” its existing code review process, xAI also said it will “put in place additional checks and measures to ensure that xAI employees can’t modify the prompt without review.” The company said it is also putting in place a “24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems,” for when other measures fail.


Trump says journalist Austin Tice has not been seen in many years

Updated 16 May 2025

  • The US journalist was abducted in Syria in 2012 while reporting in Damascus on the uprising against Syrian President Bashar Assad

ABOARD AIR FORCE ONE: US President Donald Trump said on Friday that American journalist Austin Tice, captured in Syria more than 12 years ago, has not been seen in years.
Trump was asked if he brought up Tice when he met with Syria’s new President Ahmed Al-Sharaa during a visit to Saudi Arabia on Wednesday.
“I always talk about Austin Tice. Now you know Austin Tice hasn’t been seen in many, many years,” Trump replied. “He’s got a great mother who’s just working so hard to find her boy. So I understand it, but Austin has not been seen in many, many years.”
Tice, a former US Marine and freelance journalist, was 31 when he was abducted in August 2012 while reporting in Damascus on the uprising against Syrian President Bashar Assad, who was ousted in December when rebels seized the capital. Syria had denied it was holding him.
US officials pressed for Tice’s release after the government fell. Former President Joe Biden said at the time he believed Tice was alive.


Russia deliberately hit journalists’ hotels in Ukraine: NGOs

Updated 16 May 2025

  • The hotels hit were mostly located near the front lines, the organizations said
  • At least 15 of the strikes were carried out with high-precision Iskander 9K720 missiles

PARIS: Russia has deliberately targeted hotels used by journalists covering its war on Ukraine, the NGOs Reporters Without Borders (RSF) and Truth Hounds said on Friday, calling the strikes “war crimes.”
At least 31 Russian strikes hit 25 hotels from the start of Russia’s full-scale invasion in February 2022 to mid-March 2025, the two organizations said in a report.
One attack in August 2024 in the eastern city of Kramatorsk killed a safety adviser working with international news agency Reuters, Ryan Evans.
The hotels hit were mostly located near the front lines, the organizations said.
Just one was being used for military purposes.
“The others housed civilians, including journalists,” said RSF and Truth Hounds, a Ukrainian organization founded to document war crimes in the country.
“In total, 25 journalists and media professionals were caught in these hotel bombings, and at least seven were injured,” they said.
At least 15 of the strikes were carried out with high-precision Iskander 9K720 missiles, they said, condemning “methodical and coordinated targeting.”
“The Russian strikes against hotels hosting journalists in Ukraine are neither accidental nor random,” Pauline Maufrais, RSF regional officer for Ukraine, said in a statement.
“These attacks are part of a larger strategy to sow terror and seek to reduce coverage of the war. By targeting civilian infrastructure, they violate international humanitarian law and constitute war crimes.”
RSF says 13 journalists have been killed covering Russia’s invasion, 12 of them on Ukrainian territory.
That includes AFP video journalist Arman Soldin, who was killed in a rocket attack near the eastern Ukrainian city of Bakhmut on May 9, 2023. He was 32.


Omnicom Media Group consolidates influencer marketing services in Mideast

Updated 15 May 2025

DUBAI: Omnicom Media Group has announced that it will consolidate its influencer marketing capabilities in the Middle East and North Africa region under influencer management agency Creo following a global directive last month.

The move “ensures our clients can harness the full potential of this communication channel” as digital consumption grows in the region and influencers play an “instrumental role in shaping brand perceptions,” said CEO Elda Choucair.

Creo will give the group’s clients “access to the same advanced tools, talent and technology we’ve developed globally, but adapted to our region’s unique landscape,” she added.

These include tools such as the Creo Influencer Agent, an AI-powered influencer selection tool; the Omni Creator Performance Predictor, which uses machine learning to predict the performance of content on Instagram; and the Creator Briefing Tool, which helps influencers create and get feedback on their content through Google’s AI chatbot Gemini.

The agency will also leverage exclusive partnerships with platforms such as Amazon, TikTok, Instagram and Snapchat in the region.

Anthony Nghayoui, head of social and influencer at Omnicom Media Group, has been appointed to lead Creo.


Aramco holds steady on Kantar’s most-valuable global brands list for 2025

Updated 15 May 2025

  • US brands dominate, comprising 82 percent of the value in top 100

DUBAI: Saudi Arabia’s Aramco continues to hold a place in the annual BrandZ Most Valuable Global Brands Report 2025 by marketing data and analytics company Kantar.

Although it dropped by eight places to No. 22, Aramco is the only brand from the Middle East to have a presence in the global ranking.

US brands dominate the list, comprising 82 percent of the total value of the top 100 brands.

However, the report signals changing times, with Chinese brands having doubled their value over the past 20 years, now making up 6 percent of the value of the top 100 brands.

European brands, on the other hand, have seen a decline. They now account for 7 percent — down from 26 percent in 2006 — of the top 100 brands.

The top five spots are taken by tech companies Apple, Google, Microsoft, Amazon and Nvidia.

“Innovators keeping up with consumer needs or redefining them entirely are the brands fundamentally reshaping the Global Top 100 over the past two decades,” said Martin Guerrieria, head of Kantar BrandZ.

The most successful brands, like Apple, Amazon, Google and Microsoft, have long moved away from their original product base, he added.

Apple retained its top position for the fourth year in a row with a brand value of $1.3 trillion, up 28 percent from 2024.

Google and Microsoft recorded a 25 percent and 24 percent increase in brand value this year compared to last year, while Amazon’s brand value rose by a massive 50 percent.

ChatGPT debuted on the list this year in 60th place, showing “how a brand can find fame and influence society to the extent that it changes our daily lives,” Guerrieria said.

He cautioned that as competition grows in the AI space, “OpenAI will need to invest in its brand to preserve its first-mover momentum.”

Despite controversies and concerns, Instagram and Meta saw significant growth of 101 percent and 80 percent, respectively, while TikTok grew by a modest 25 percent.

The success of brands like Apple and Instagram “underlines the power of a consistent brand experience that people can relate to and remember,” said Guerrieria.

He added: “In a world of digital saturation and tough consumer expectations, brands need to meet people’s needs, connect with them emotionally and offer something others don’t to succeed. They need to be not just different, but meaningfully so.”