What is the AI’s existential threat to humanity?
Bora
7 replies
Today's newspaper had a section about how AI should be considered a societal risk, with statements from creators of these large language models.
> Executives from top artificial intelligence companies have warned that the technology they are building should be considered a societal risk on a par with “pandemics and nuclear wars,” according to a 22-word statement from the Center for AI Safety that was signed by more than 350 executives, researchers and engineers.
> Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down. Those fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building poses grave risks and should be regulated more tightly.
I don't understand how these execs are leading these conversations, as they are the ones responsible for creating these products with potential harms in the first place.
Replies
Gaël de Mondragon@gael_de_mondragon
List of acronyms maker
I don't know if they're "leading" those conversations. Those conversations have been going on for a long time, but the press probably didn't cover them because they seemed silly (too science-fictiony) at the time.
Or AI wasn't trending...
Some sources I know of that have been talking about this for a while:
https://80000hours.org/problem-p...
https://ourworldindata.org/artif...
Prolinky
@gael_de_mondragon Great resources, thanks Gaël 🖖
"Since we’re worried about systems attempting to take power from humanity, we are particularly concerned about AI systems that might be better than humans on one or more tasks that grant people significant power when carried out well in today’s world."
People in power are already acting in bad faith; governments armed with AI are now even more powerful at manipulating people.
oswaldsoto_
The primary existential threat posed by AI to humanity is the potential misuse of highly advanced systems for malicious purposes or unintentional harmful consequences, such as autonomous weapons or surveillance systems infringing on privacy. Additionally, there's a risk of job displacement due to automation, which could result in significant social and economic disruption.
Prolinky
@oswaldsoto_ Agree, there will always be bad players, no matter the technology or the system. Also, with the current hype around implementing AI in software, product makers don't think much from this perspective, about what potential harms the future may bring.
ThemAIGuys
There isn't one, people watch too many movies! 😂
ThemAIGuys
@yampolskymax I absolutely don't disagree with that, Max, but people love convenience! And healthy minds stay healthy no matter what.
@carl_brook AI just made us lazier, Carl!
"Alexa, turn off the lights"