Index Investing News
Friday, May 16, 2025
Let’s All Calm Down About AI’s ‘Extinction Risk’

by Index Investing News
June 6, 2023
in Opinion
Reading Time: 5 mins read


For a hot minute last week, it looked like we were already on the brink of killer AI.

Several news outlets reported that a military drone attacked its operator after deciding the human stood in the way of its objective. Except it turned out this was a simulation. And then it transpired the simulation itself didn’t happen. An Air Force colonel had mistakenly described a thought experiment as real at a conference.

Even so, a fib travels halfway around the world before the truth laces up its boots, and the story is bound to seep into our collective unconscious worries about AI’s threat to the human race, an idea that has gained steam thanks to warnings from two “godfathers” of AI and two open letters about existential risk.

Fears deeply baked into our culture about runaway gods and machines are being triggered — but everyone needs to calm down and take a closer look at what’s really going on here.

First, let’s acknowledge the cohort of computer scientists who have long believed AI systems, like ChatGPT, need to be more carefully aligned with human values. They propose that if you design AI systems to follow principles like integrity and kindness, those systems are less likely to turn around and try to kill us all in the future. I have no issue with these scientists.

But in the last few months, the idea of an extinction threat has become such a fixture in public discourse that you could bring it up at dinner with your in-laws and have everyone nodding in agreement about the issue’s importance.  

On the face of it, this is ludicrous. It is also great news for leading AI companies, for two reasons:  

1) It creates the specter of an all-powerful AI system that will eventually become so inscrutable we can’t hope to understand it. That may sound scary, but it also makes these systems more attractive in the current rush to buy and deploy AI systems. Technology might one day, maybe, wipe out the human race, but doesn’t that just illustrate how powerfully it could impact your business today? 

This kind of paradoxical propaganda has worked in the past. The prestigious AI lab DeepMind, largely seen as OpenAI’s top competitor, started life as a research lab with the ambitious target of building AGI, or artificial general intelligence that could surpass human capabilities. Its founders Demis Hassabis and Shane Legg weren’t shy about the existential threat of this technology when they first went to big venture capital investors like Peter Thiel to seek funding more than a decade ago. In fact, they talked openly about the risks and got the money they needed.

Spotlighting AI’s world-destroying capabilities in vague ways allows us to fill in the blanks with our imagination, ascribing future AI with infinite capabilities and power. It’s a masterful marketing ploy. 

2) It draws attention away from other initiatives that could hurt the business of leading AI firms. Some examples: The European Union this month is voting on a law, called the AI Act, that would force OpenAI to disclose any copyrighted material used to develop ChatGPT. (OpenAI’s Sam Altman initially said his firm would “cease operating” in the EU because of the law, then backtracked.) An advocacy group also recently urged the US Federal Trade Commission to launch a probe into OpenAI, and push the company to satisfy the agency’s requirements for AI systems to be “transparent, explainable [and] fair.” 

Transparency is at the heart of AI ethics, a field that large tech firms invested more heavily in between 2015 and 2020. Back then, Google, Twitter, and Microsoft all had robust teams of researchers exploring how AI systems like those powering ChatGPT could inadvertently perpetuate biases against women and ethnic minorities, infringe on people’s privacy, and damage the environment. 

Yet the more their researchers dug up, the more their business models appeared to be part of the problem. A 2021 paper by Google AI researchers Timnit Gebru and Margaret Mitchell said the large language models being built by their employer could have dangerous biases for minority groups, a problem made worse by their opacity, and they were vulnerable to misuse. Gebru and Mitchell were subsequently fired. Microsoft and Twitter also went on to dismantle their AI ethics teams.

That has served as a warning to other AI ethics researchers, according to Alexa Hagerty, an anthropologist and affiliate fellow with the University of Cambridge. “‘You’ve been hired to raise ethics concerns,’” she says, characterizing the tech firms’ view, “‘but do not raise the ones we don’t like.’”

The result is a crisis of funding and attention for the field of AI ethics, and confusion about where researchers should turn if they want to audit AI systems, a task made all the more difficult as leading tech firms become more secretive about how their AI models are built.

That’s a problem even for those who worry about catastrophe. How are people in the future expected to control AI if those systems aren’t transparent, and humans don’t have expertise in scrutinizing them? 

The idea of untangling AI’s black box, often touted as near impossible, may not be so hard. A May 2023 article in the Proceedings of the National Academy of Sciences (PNAS), a peer-reviewed journal, showed that AI’s so-called explainability problem is not as intractable as many experts have assumed.

Technologists who warn about catastrophic AI risk, like OpenAI CEO Sam Altman, often do so in vague terms. Yet if such organizations truly believed there was even a tiny chance their technology could wipe out civilization, why build it in the first place? It certainly conflicts with the long-term moral math of Silicon Valley’s AI builders, which says a tiny risk with infinite cost should be a major priority. 

Looking more closely at AI systems now, versus wringing our hands about a vague apocalypse of the future, is not only more sensible, but it also puts humans in a stronger position to prevent a catastrophic event from happening in the first place. Yet tech companies would much prefer that we worry about that distant prospect than push for transparency around their algorithms.

When it comes to our future with AI, we must resist the distractions of science fiction from the greater scrutiny that’s necessary today. 

© 2023 Bloomberg LP 

