Today's newspaper had a section on how AI should be considered a societal risk, featuring statements from the creators of these large language models.
> Executives from top artificial intelligence companies have warned that the technology they are building should be considered a societal risk on a par with “pandemics and nuclear wars,” according to a 22-word statement from the Center for AI Safety that was signed by more than 350 executives, researchers and engineers.
> Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down. Those fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building poses grave risks and should be regulated more tightly.
I don't understand how these execs get to lead these conversations, since they are the ones responsible for creating these products, with their potential harms, in the first place.
ThemAIGuys