Artificial intelligence is already advancing at a worrying pace. What if we don’t slam on the brakes? Experts explain what keeps them up at night
Exactly. There was an article floating around just a couple of days ago saying, as I recall, that billionaires were funding these AI-scare studies at top universities — presumably to distract the public from the very real and near-term threats of climate disaster, economic inequality, etc. Here, unfortunately paywalled: https://www.washingtonpost.com/technology/2023/07/05/ai-apocalypse-college-students/
There is this concept called “criti-hype”. It’s a type of marketing masquerading as criticism. “Careful, AI might become too powerful” is exactly that.
A lot of the folks worried about AI x-risk are also worried about climate, and pandemics, and lots of other things too. It’s not like there’s only one threat.
It’s all about risks. If you worry about being run over, that’s reasonable; but if you worry about shark attacks when you live in the forest, it’s ludicrous and a waste of time.
@fubo @xapr I don’t doubt that, but it raises the question of whether the unrealistic concerns raised by those folks crowd out the realistic ones that need more attention and funding. For example, how much money are the billionaires and top elites putting into solving climate change or past/future pandemics compared to studying AI-driven doom? I don’t know the answer, and I welcome you to find out.
Right, the attention being paid to AI risk just seems vastly disproportionate compared to other much more serious imminent threats.