The police accepted the software’s judgment and Ms. Hemid went home with no further protection.
This is what happens when you rely on your Nintendos, instead of using your damn brains.
And that’s why I’m against ALL such things.
Not because they can’t be done right and you can’t teach people to use them.
But because there’s a slippery slope in human nature: people want to offload the burden of decision onto a machine, an oracle, a die, a set of bird intestines. The genie is out and they will do that again and again, but in a professional organization like the police, one can at least decide to create fewer opportunities for such catastrophes.
The rule is that people shouldn’t put machines above their own brains, as one commenter says, and should only use the tool in a logical OR with their own judgment made beforehand, as another commenter says. But the problem is human nature, and I’d rather not introduce this particular point of failure into policing, politics, or anything judicial or military.
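To make that "logical OR" concrete, here’s a minimal sketch (hypothetical names and threshold, not any real system’s API): the officer records their judgment before seeing the machine’s score, and the case is escalated if either one flags high risk, so the tool can only add protection, never argue it away.

    # Hypothetical sketch of the "logical OR" rule: escalate if EITHER the
    # officer's prior judgment OR the machine's score indicates high risk.
    def should_escalate(officer_flags_high_risk: bool,
                        machine_score: float,
                        threshold: float = 0.7) -> bool:
        # The officer's judgment is recorded before the score is shown,
        # so the tool can only raise protection, never lower it.
        return officer_flags_high_risk or machine_score >= threshold

    print(should_escalate(True, 0.2))    # True  - officer flagged it, machine didn't
    print(should_escalate(False, 0.85))  # True  - machine flagged it
    print(should_escalate(False, 0.2))   # False - neither flagged it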
What is that?
It’s the initial diagnosis from the hospital scene in the movie Idiocracy.
Here’s this part of the scene: https://youtu.be/LXzJR7K0wK0
It’s 2505 and the average man from 2005 is now by far the smartest man in the world.
It’s a doctor’s diagnostic desk from the film “Idiocracy”.
Absolutely, ACAB
Cops are still necessary. But giving humans a machine to blame any failure on is a very bad thing.
I personally think these "AI"s are supported by governments. There was a lot of talk 10-15 years ago about how many government officials’ functions could be replaced by AI (without quotes), since these functions don’t require an agenda and aren’t even too fuzzy, but do require semantic understanding. So "AI"s (with quotes) are being used like a vaccine, so that the broad mass of humans, having experienced them, would hate the guts of the very idea (EDIT: and wouldn’t want actual semantic reasoning systems). Why? Because people working in governments love power and hate transparency, and they also hate the idea of being replaced by machines.
Or maybe it’s a conspiracy theory and they all really believe in accelerationism.
Some political groups engage in mismanagement on purpose to make people dislike the government; that’s hardly a conspiracy. But it’s a little weird to think they’re propping up the misuse of LLMs rather than that being a natural consequence of stupid capitalism.
No, I meant governments doing certain things on purpose to discourage people from trusting that whole direction.
Even when given the best and most sophisticated tools and equipment available, police will manage to fuck things up at every opportunity because they’re utterly incompetent.
But the system seems to be better than police officers, which is entirely believable. Humans have all kinds of biases that make the decisions we make far from desirable.
Per the article, it has decreased the risk of repeated violence and, according to an expert, it’s the best system we have. Why would you want to go back to a worse system? This is using our brains in an attempt to overcome our biases.