  • What do you think about experts calling to pause AI?

    Anil Meena
    29 replies
    https://futureoflife.org/open-letter/pause-giant-ai-experiments/ Top artificial intelligence researchers, the co-founder of Apple, and Elon Musk are among signatories of an open letter calling for companies to stop training powerful AIs. They argue that advanced AI “could represent a profound change in the history of life on Earth,” and companies should not be racing to unleash it. The letter calls for a six-month pause, providing time to decide whether developing things that might “outnumber, outsmart, obsolete and replace us” is a good idea. Meanwhile, a Goldman Sachs report suggested that generative AIs like GPT-4, the most advanced platform debuted so far, could automate away the equivalent of 300 million jobs, especially in legal and administrative work.

    Replies

    Chris Messina
    Strongly disagree. They say "such decisions must not be delegated to unelected tech leaders", but then say that "AI labs and independent experts", who are also not elected, should come up with some guide rails. They've had time to make progress here and yet have not. There's a reason these AI ethics groups keep getting fired or booted out of industry — they don't have any solutions except to fearmonger and raise moral panic. Certainly these new AI capabilities are significant and require scrutiny and consideration, but humans have also not been able to stem the tide of innovation once Pandora's Box has been opened. They cite banning of human cloning, human germline modification, gain-of-function research (whatever that is), and eugenics, but those all had to do with human biological modification. As I wrote, the only way we're going to know how to respond to the real threats (not just imagined) of AI is through experience. It's hard to imagine that in six months — by October 2023 — we're going to have guardrails developed that will curtail or direct the strategic direction of Microsoft, Google, Facebook, OpenAI, and others. There's too much at stake for these capitalist enterprises to wait.
    Anil Meena
    @chrismessina I remember a podcast from 2019 where Elon Musk was talking about AI rights and regulation; he knew the potential of AI and the risks that come with it. So I think the point right now is to pause it for some time until at least a basic guideline for the usage of AI can be defined. Right now we are only seeing, and being fed, the innovation side of AI, but potential threats in the form of digital scamming and misinformation are going to flood the market in the next few months. And that will be quite difficult to catch and stop if the authorities don't start now.
    Chris Messina
    @anil_meena21 that's going to happen regardless of whether these new products are released; it's already happening. What's to compel those bad actors to comply with whatever these ivory tower experts develop? What carrots and sticks can they offer? There are a number of groups and organizations already working on AI alignment; have they not made sufficient progress? If not, why not?
    Aaron O'Leary
    I can 100% understand their concerns, and I feel a lot of them too. However, it's really hard to close the box of innovation once it's opened. Also, I imagine governments are going to regulate AI like nothing before it anyway; the EU's draft regulation plan already makes it a shell of its potential.
    Anil Meena
    @aaronoleary Agreed. The problem is not with the innovation itself but with having this innovation in the wrong hands and misusing it. The biggest threat to people is the blurring line between real and fake, which attracts scammers and the fake news market so quickly. The image of the Pope wearing that puffer jacket, and Joe Rogan's fake voice and fake video endorsing random products, are just the tip of the iceberg.
    Naveed Rehman
    AI probably will not replace humans, but people who use AI will replace people who don't. Throughout history, no one has ever been able to stop humans from advancing, but there has always been some resistance to embracing new technology.
    Anil Meena
    @naveed_rehman True... change and the new scare people, and I think that's natural behaviour; it's like stepping into the unknown. I agree with your point about people who are going to use it vs. people who aren't. I am sure better prompting is going to become a skill, and in fact I think it already has. If you are able to get better work out of AI, you have an edge over others.
    Naveed Rehman
    @anil_meena21 Yup, heavily using it on a daily basis. I had two weeks to finish a job (a scraping kind of task) and I finished it in one day, and the accuracy is super cool 😎
    Deniz Sutaş
    I think we'd better discover both the good and the bad in further advancements in AI. In the end, avoiding further research and curiosity is near impossible.
    Anil Meena
    @deniz_sutas True, though I think choosing between right and wrong is often the most difficult thing in any research...
    Eirina Khan
    Open letter to tech companies to protect our private data ❌
    Open letter from tech companies to stop something actually benefiting the common population ✅
    Richard Gao
    Greatly disagree. Almost feels like a way to reduce competition.
    Anil Meena
    @richard_gao2 💯 I think what they are trying to do is curb the misuse of this technology
    Charlotte Chiang
    I appreciate the sentiment, but I can't help feeling skeptical. We've already seen the negative impact of unregulated technological innovation and its consequences for our privacy, autonomy, and security. At the same time, proper regulation has never kept up with the pace of innovation and never will. I don't know what is supposed to be achieved in 6 months - even supposing everyone agrees, are we supposed to have a well thought-out plan for regulation in 6 months? Finally, the fact that the founders of tech giants are amongst the signatories makes me question whether the 'guardrails' would in fact be protecting their interests.
    Shailendra Singh
    Makes sense. Guidelines around CRISPR were set 30 years back by eminent geneticists, biologists, chemists, etc. to control how the scientific community would use the new tools available for gene sequencing, recombination, and so on. The goal must have been simple: [1] encourage only ethical and moral use cases of genetics, and [2] educate scientists on all the negative implications of the science at their disposal. The scientific community was hesitant initially, but in 30 years CRISPR has helped the community direct its attention to the right problems that can be solved using genetics. Are the intentions not the same here?
    Anil Meena
    @shailendra_singh_ht True, that's precisely what needs to be done in the case of AI. Otherwise there will be a flood of misinformation, scams and god knows what.
    Anil Meena
    @shailendra_singh_ht Focus should be on the betterment of humanity.
    David Cuthill
    6 months can be a lifetime in this field. I'm pretty certain China, India and a whole host of other countries would love for America to pause.
    Vinay Sharma
    I can understand the reasoning but ultimately disagree with it… I don't think limiting technology should be the solution… rather, putting guidelines in place for companies to adhere to is better… especially since AI makes people so much more productive.
    Anil Meena
    @vin_creatorstock I think the main reason they want to pause the research is to come up with safeguards around it. I mean, the way AI is progressing, it can easily be misused to spread misinformation. And that is something which can affect the masses, even those who don't want to be a part of it.
    Art-Prints AI
    I feel that if it's used as a tool, not a weapon, we are OK. However, who really knows what the outcome will be?
    Art-Prints AI
    @anil_meena21 Right! A saying my mother used to share with me was, "If you can imagine it, you can achieve it." Good or Evil
    Anil Meena
    @artprintsai Honestly, this is a very real threat. If it can be done by OpenAI, others can also do it, and no one has control over what that can be used for. Just think: "Boston Dynamics robot" + "AI instructions for devastation" = total chaos :P
    Damon West
    We asked AI to identify as human, or to identify humans as AI. Then we asked it to identify characteristics of malware, and to make recommendations regarding whether subjects could have the malware removed or whether termination was necessary.
    Damon West
    @anil_meena21 - we've been discussing this. AI isn't the issue. We've discussed whether animals are born with malware; no, he said. Then, are humans born with malware installed? No. Then who is installing the malware? (Fucking great question.) I imagine it's someone the person spends time with, probably a lot of time; malware can be complicated. Then the person who installed it is the problem.
    Anil Meena
    @damon_west1 I am curious about the result.