Now I get your point. You're right - the AI in question will inherit the biases and worldviews of the people coding it, effectively acting as their proxy. IMO, for this reason, the bot's actions should be seen as the moral responsibility of those people (i.e. instead of "the bot did it", it's more like "I did it through the bot").
In the case of the Reddit AutoMod bot yeeting content based on included words… most of that is stupid, I agree, but then it's those mods' community.
Even if we see the community as belonging to the mods, it's still a shitty approach that IMO should be avoided, for the sake of the community's health. You don't want people breaking the rules by dodging the automod (it's too easy to do), but you also don't want content being needlessly removed.
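To illustrate both failure modes, here's a rough Python sketch of naive keyword filtering (purely hypothetical - real AutoModerator rules are YAML config, and the word list and function names here are made up). A strict word match is trivially dodged with a typo or spacing, while a looser substring match starts nuking innocent comments:

```python
# Hypothetical sketch of naive keyword-based filtering, NOT actual
# AutoModerator behavior. Word list and examples are invented.

BANNED_WORDS = {"spam", "scam"}  # assumed rule list for illustration

def word_match_removes(comment: str) -> bool:
    """Strict variant: remove only if a banned word appears as a whole word."""
    words = (w.strip(".,!?") for w in comment.lower().split())
    return any(w in BANNED_WORDS for w in words)

def substring_removes(comment: str) -> bool:
    """Loose variant: remove if a banned word appears anywhere in the text."""
    return any(w in comment.lower() for w in BANNED_WORDS)

print(word_match_removes("This is a scam!"))          # True  - caught
print(word_match_removes("This is a sc4m!"))          # False - one swapped letter slips through
print(word_match_removes("s p a m for everyone"))     # False - spacing defeats the match
print(substring_removes("I visited Scamander's shop"))  # True - false positive, innocent comment removed
```

Tighten the match and evasion gets trivial; loosen it and you get the classic Scunthorpe problem. Either way, the mods end up doing the cleanup by hand.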
Plus, personally, I don't see a community as "the mods'". It's more like "the users'". The mods are there enforcing the rules, sure, but the community belongs to them as much as it belongs to everyone else, you know?