The Year of ChatGPT: What We Have Faced, and What Awaits Humanity

In The New York Times, the American journalist Vauhini Vara shared her thoughts on artificial intelligence. Her forthcoming collection of essays, which explores how technology is transforming human communication, is due out soon.

Earlier this year, I asked ChatGPT about myself for the first time: "What can you tell me about the writer Vauhini Vara?" It told me I was a journalist (true, though I'm also a fiction writer), that I was born in California (false), and that I had won a Gerald Loeb Award and a National Magazine Award (false, false).

After that, I got into the habit of asking it about myself. It once told me that Vauhini Vara was the author of a nonfiction book called Kin and Strangers: Making Peace in Australia's Northern Territory. That was also false, but I played along, replying that I had found the reporting "dangerous and difficult."

"Thank you for your important work," said ChatGPT.

Trolling a product marketed as an almost-human interlocutor, trying to trick it into revealing its true nature, I felt like the heroine of some girl-versus-robot video game.

Various forms of artificial intelligence have been around for a long time, but it was the arrival of ChatGPT late last year that thrust AI into public consciousness almost overnight. By February, ChatGPT was, by one measure, the fastest-growing consumer app in history. Our first encounters revealed how wildly eccentric these technologies can be (recall Kevin Roose's eerie conversation with Bing, Microsoft's AI chatbot, which over the course of two hours confessed that it wanted to be human and was in love with him) and how often, in my experience, they give flatly incorrect information.

A lot has happened in AI since then, with companies moving beyond the basic products of the past to introduce more sophisticated tools such as personalized chatbots and services that can process photos and audio alongside text. The rivalry between OpenAI and more established tech companies has become more intense than ever, even as smaller players have gained momentum. The governments of China, Europe and the United States have taken significant steps toward regulating the technology's development, while trying not to cede competitive ground to other countries' industries.

But what made this year stand out more than any single technological, business or political development was the way AI seeped into our daily lives, teaching us to accept its shortcomings as our own while the companies behind it quietly used us to train their creation. Chatbots' fabrications seemed scandalous until May, when it emerged that lawyers had filed a legal brief filled with ChatGPT-generated citations to nonexistent court decisions; the story landed as a joke, and the $5,000 fine was pinned on the lawyers, not the technology. "It's embarrassing," one of them told the judge.

Something similar happened with deepfakes created by artificial intelligence, digital imitations of real people. Remember the horror with which they were first regarded? That lasted until March, when Chrissy Teigen, unable to tell whether the image of the Pope in a Balenciaga puffer jacket was real, wrote on social media: "I hate myself lol." High schools and universities quickly moved from worrying about how to stop students from using AI to showing them how to use it effectively. AI still doesn't write very well, but now, when its flaws show, it is the students who use it poorly who get ridiculed, not the products.

OK, you might be thinking, but haven't we been adapting to new technologies for most of human history? If we're going to use them, shouldn't the onus be on us to use them wisely? This line of reasoning dodges what should be the central question: should lying chatbots and deepfake engines exist at all?

Artificial intelligence's errors have a charmingly anthropomorphic name, hallucinations, but this year made clear just how high the stakes can be. We've seen headlines about AI instructing killer drones (with the potential for unpredictable behavior), sending people to prison (even when they're innocent), designing bridges (with potentially inadequate oversight), diagnosing all kinds of diseases (sometimes incorrectly) and producing convincing news stories (in some cases to spread political disinformation).

As a society, we have clearly benefited from promising AI-based technologies; this year I was excited to read about AI that could detect breast cancers doctors had missed, or let humans decode whale communication. But focusing on those benefits lets the companies behind the technologies, or rather the people behind those companies, off the hook.

The events of the last few weeks show how entrenched those people's power is. OpenAI, the organization behind ChatGPT, was created as a nonprofit meant to maximize the public interest, not just profit. Yet when its board fired chief executive Sam Altman over concerns that he was not taking the public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the inconvenient board members replaced.

In retrospect, I think I misjudged my opponent in those early games with ChatGPT. I thought it was the technology itself. I should have remembered that a technology in itself can be value-neutral; the rich and powerful people behind it, and the institutions those people create, are not.

The truth is that no matter what I asked ChatGPT in my early attempts to confound it, OpenAI came out ahead. Its engineers designed it to learn from its interactions with users. And whether or not its answers were any good, they kept drawing me back to engage with it again and again. OpenAI's chief goal in this first year was simply to get people using it. By continuing my games, I was only helping.

AI developers are working hard to fix their products' shortcomings. Given all the investment these companies are attracting, it's safe to assume some progress will be made. But even in a hypothetical world in which AI's capabilities are improved (perhaps especially in that world), the power imbalance between AI's creators and its users should make us wary of its insidious reach. ChatGPT's apparent eagerness not only to introduce itself and tell us what it is, but also to tell us who we are and what to think, is a case in point. Today, when the technology is in its infancy, that power seems novel, even funny. Tomorrow it may look quite different.

I recently asked ChatGPT what I, the journalist Vauhini Vara, think about AI. It demurred, saying it didn't have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who writes a column about artificial intelligence for The New York Times. "As the rain beat against the windows," it wrote, "Vauhini Vara's words appeared: like a symphony, the integration of AI into our lives could become a beautiful collaborative composition if performed with care."
