That’s ridiculous, of course it counts as AI. It’s not conscious, and it’s not very intelligent, but it has some intelligence by any reasonable definition.
Sure, maybe; I’m not certain at all. But are you certain enough to bet your life on it?
I never said how long I expected it to take, so how do you know we even disagree there? But is 50 years a long time to you? Personally, anything less than 100 would be insanely quick. The key point is that I don’t have high certainty in my estimates. Sure, it might be perfectly reasonable that it takes more than 50, but what chance is there it’s even quicker? 0.01%? To me that’s a scarily high number! Would you be happy having someone roll a die with a 1 in 10,000 chance of killing everyone? How low is enough?
It makes my blood boil when people dismiss the risks of ASI without any notable counterargument. Do you honestly think something a billion times smarter than a human would struggle to kill us all if it decided it wanted to? Why would it need a terminator to do it? A virus would be far easier. And who’s to say how quickly AI will advance now that AI is directly assisting progress? How can you possibly have any certainty on any timelines or risks at all?
Something you may not have considered is that the majority of our brains are used for things like sensory input and motor control, not for thinking. This is why brain size relative to body size is so important. A whale has a far larger brain than you or I, but is significantly less intelligent.
Why does an AI have to be sentient to be intelligent?
I think that might be a chatgpt specific thing, I tried with bing in precise mode and it responded with this:
“A sow is an adult female pig and piglets are baby pigs. Pigs have four feet, so a sow with six piglets would have a total of 28 feet (4 feet for the sow + 6 piglets * 4 feet each). Is that what you were asking?”
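For what it’s worth, the arithmetic in the quoted reply holds up; a minimal check (variable names are my own, just for illustration):

```python
# One sow with 4 feet, plus six piglets with 4 feet each.
sow_feet = 4
piglet_feet = 6 * 4
total = sow_feet + piglet_feet
print(total)  # 28, matching the quoted answer
```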
What ability do you think that they are currently missing that makes them ‘regurgitation machines’ rather than just limited and dumb but genuine early AI?
Delaying the end of the war by a year would be far worse.
I don’t think that follows at all actually. Every weapon has a balance of harm against benefit, if you outlaw cluster bombs why not mines? Why not grenades, or regular artillery? The reason is because the defensive value outweighs the potential harm. I think it’s fairly clear that this is the case for cluster bombs too, while it is not for mustard gas.
The US keeps them because giving them up would cost significant capability, which would need to be made up for with other weapons. Politics and appearances factor in too, and for nations that would never stand a chance against Russia/China without US help, there is a much stronger argument for earning points by outlawing them.
The greatest risk to Ukrainian children is the Russian invasion, and the odds of Ukraine protecting them from that are far greater given these new munitions.
digikam for image and video collection management and viewing (also does duplicate detection)
You have to admit it is more complicated than that though. It’s more complicated than jas describes too.
Is enabling people trafficking by having a fleet of boats hanging out on the Libyan coast really going to save more people than ending the practice altogether?
It’s absolutely true that countless other things should be done to help the poor around the world, but I genuinely don’t see how encouraging masses of people to set out to sea in sinking boats helps anything at all.
Well I won’t be, and just because one thing might be higher probability than another doesn’t mean it’s the only thing worth worrying about.