Rep. Joe Morelle, D-N.Y., appeared with a New Jersey high school victim of nonconsensual sexually explicit deepfakes to discuss a bill stalled in the House.
How different are photoshopped fakes from AI fakes? Are we going to try to ban those too?
What does the method matter? If the result is an artifact convincing enough for the average person to believe that the subject knowingly posed for sex acts that never occurred, the personal experience and social stigma are traumatizing no matter how it was made.
That’s my point. If we’re going to ban AI fakes, should we then ban ALL fakes? Where do we draw the line, and how do we do that without limiting free speech? I’m not sure it is possible.
And the days of believing everything you see are over, but most don’t know it yet.
It’s ever-changing. We’re social animals, not math equations, so it’s all according to the kind of society we want.
All freedoms are in tension between “freedom to” and “freedom from”. I can have the freedom to fire my gun in the air. I can have the freedom from my neighbor’s randomly-falling bullets. I can’t have both of those codified in law (unless I’m granted some special status over my neighbors).
I think that, many times, what we run into is a mismatch between a group thinking in terms of “freedom to” and a group thinking in terms of “freedom from”.
The “freedom to” folks feel like any restriction on their ability to act is a breach of liberty, because they aren’t worried about “freedom from”. If, for example, I live in the middle of nowhere and have no neighbors, what falling bullets do I have to fear except my own?
The “freedom from” folks feel like having to endure the effects of others’ actions is a breach of liberty, because they aren’t worried about “freedom to”. If I spend my life dodging falling bullets, I’m not likely to fire more into the sky.
We said the same thing about the printing press. And it plunged us into a long period of epistemic chaos, with rampant plagiarism and reverse-plagiarism (attributing words to someone who never spoke them). The fallout of this led the crown to seize presses and allocate exclusive printing rights to a chartered monopoly (with some censorship just for funsies).
We can either complain it’s too hard and do nothing, eventually provoking an overreaction: a policy that is obviously not sustainable. Or we can learn from history, get our heads in the game, and start imagining a framework that embraces the transformative power of large-scale computing while respecting the humanity of our comrades.
C2PA is a good start, but it’s probably DOA in the hacker zeitgeist. We tend to view even an opt-in standard for proof of authenticity as a gateway to universal requirements for proof of authenticity and a locked-down tyrannical internet forever and ever. Possibly because a substantial portion of us are terminally online selfish assholes who never have to spend a second worrying about deepfakes of ourselves. And also fancy ourselves utilitarian techno-solutionists willing to sacrifice the squishy unquantifiable touchy-feely human emotions that just get in the way of objective rational progress towards a transhuman future. It’s a noble sacrifice, we say, while profiting disproportionately and suffering none of the fallout.
You’re restricting speech whether or not you confine your censorship to only AI-generated images.