Thursday, October 17, 2019

Diving back into deepfakes

It's been a busy day of catching up with a dear friend. I've not caught up, alas, on as much grading as I need to, nor have I yet really sunk my attention into updating my disinformation and deepfake research. This report by DeepTrace, an Amsterdam-based company, suggests that over 13,000 deepfake videos have been produced (and that's counting only US sources).

About 96% of these, apparently, are pornographic in nature: almost entirely nonconsensual deepfakes of women in hetero-oriented videos. That in itself is a worrisome issue, a sad route into new, are-we-really-talking-about-this questions about consent, misogyny, and representation.

For all the worry about political processes being disrupted by deepfakes, though, so far there's not been much to suggest that we've been radically fooled in political realms.

I take that back: we've been fooled, but not by deepfakes. As examples ranging from Lindsey Graham getting hoaxed by Russian pranksters pretending to be Turkish officials to edited videos ("shallowfakes" or "cheapfakes") of Nancy Pelosi and Jim Acosta demonstrate, humans hardly require machine learning systems to get duped. Indeed, the most directly deepfake-related political hoax stories, from Gabon and Malaysia, involve not actual deepfakes but allegations that certain videos of politicians have been altered or deepfaked.

As Thom Dunn argues, "The worst thing about deepfakes is that we know about deepfakes." The possibility of perfectly faked or invisibly altered video undermines the reliability of video evidence we've long (but not too long) assumed to be infallible. If the Gabon and Malaysian examples are any guide, default skepticism will likely primarily aid politicians and other relatively privileged folk who wish to dodge accountability. Caught on tape? Think again. That's just deepfaked video to make me look bad. Deepfake news.

On the positive side, deep/shallowfake video imagery holds some promise for resisting certain vectors of surveillance capitalism. Jing-Cai Liu, an industrial design student in the Netherlands, recently publicized a projected-video face mask, essentially a helmet that projects a fake face onto your own in order to foil facial recognition technologies. News and video of the mask went viral, erroneously spread as having been developed and used by Hong Kong protesters to circumvent Chinese government facial recognition. It was not.

But it could be.

Brave new worlds in front of us.

More tomorrow,

JF
