Welcome to the 54Net FORUM

The FORUM is provided for Classmates and their families to present information of interest to the Class.

Big Tech's timid deepfake defense


Facing a widely predicted onslaught of fake political videos before the 2020 election, social media companies are the bulwark that will either keep the videos at bay or allow them to flood the internet.

But, but, but: These platforms are loath to pass judgment on a clip's veracity on their own — an approach experts say could lead to a new election crisis.

"A deepfake could cause a riot; it could tip an election; it could crash an IPO. And if it goes viral, [social media companies] are responsible," says Danielle Citron, a UMD law professor who has written extensively about deepfakes.

The big picture: Edited videos, from the most basic tweaks to the most convincing AI-fueled deepfakes, are swiftly becoming easier to create. So far we've only seen simple manipulations — "cheapfakes," they're sometimes called — but experts almost universally believe that more sophisticated forgeries are coming.

Facebook, Twitter and YouTube have massive power over what people watch, hear and read on the internet. But they have long insisted that they're not media companies and shouldn't decide what is true and what isn't.

They have instead relied on their existing rules against things like nonconsensual porn and election manipulation — if a fake video falls into those categories, it's gone.
They also watch for behaviors that suggest a botnet or coordinated misinformation campaign.
But manipulated videos that don't set off either alarm can fall through the cracks.
In interviews with Axios, experts largely rejected the platforms' reasoning and said they have a responsibility to prepare for the 2020 elections.

Citron says shrinking away from arbitrating truth is a "cop-out" and that platforms should more aggressively block and filter out potentially dangerous edited videos.

Jack Clark, policy director at OpenAI, says companies can do more to verify whether a video was taken by the person who posted it, and that they should plaster huge banners across manipulated videos to ward users away. (A sketch of what such a provenance check might look like follows these comments.)

Sam Gregory, a deepfakes expert at human-rights nonprofit WITNESS, says firms should thoroughly explain how they treat individual videos and why — and coordinate between themselves to quickly halt the spread of a manipulated video.
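
To make Clark's provenance idea concrete, here is a minimal sketch of a file-level signature check, assuming the capture device (or a trusted app) signs the raw video bytes with the uploader's Ed25519 private key and the platform verifies the upload against the matching public key. The function names and key-handling workflow are illustrative assumptions, not any platform's actual system, and real provenance proposals are considerably more involved.

# Sketch: file-level provenance check. Assumes (hypothetically) that the
# capture device signed the video bytes with the uploader's Ed25519 key;
# key distribution and management are out of scope here.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_video(video_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    # Runs on the capture device: sign the raw file contents.
    return private_key.sign(video_bytes)

def verify_upload(video_bytes: bytes, signature: bytes,
                  public_key: Ed25519PublicKey) -> bool:
    # Runs on the platform: does the upload match the claimed author's key?
    try:
        public_key.verify(signature, video_bytes)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    video = b"raw video bytes would go here"
    sig = sign_video(video, key)
    print(verify_upload(video, sig, key.public_key()))               # True
    print(verify_upload(video + b"tampered", sig, key.public_key())) # False

An unmodified file verifies; any edit or re-encoding changes the bytes and the check fails, which is one reason real proposals sign at capture time and track edits rather than signing only finished files.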

What they're doing: Axios attended a content moderation meeting last month at Facebook's Menlo Park headquarters, where the company began considering rules to reduce the distribution of, or even take down, manipulated media that is presented as true.

For now, when bad information starts to spread, platforms generally reduce its reach and/or add fact checks from outside organizations.

Facebook solicits outside evaluations so that it doesn't have to pass judgment itself — but the checking process is slow, and partner organizations don't always have the expertise to evaluate manipulated video. Once a video is determined to be falsified, it's shown less often on Facebook.
Twitter tries to bury fake videos and other "untrusted" content way down in users' timelines, so that they have to scroll past countless other tweets to get to them. But if someone links directly to an offending tweet, it can still be seen.

YouTube is the outlier — it bans some forms of trickery, including misleadingly edited videos, a spokesperson tells Axios.

What's next: Platforms and outside researchers are racing to create technology that can detect deepfakes and authenticate unedited videos.
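
For a sense of how frame-based detectors are typically wired together, the sketch below samples frames from a video with OpenCV and averages a per-frame manipulation score. The classify_frame function is a stub standing in for a trained forgery-detection model, which is the genuinely hard part and is not implemented here; everything about it is an assumption for illustration.

# Schematic frame-sampling pipeline for video screening. classify_frame is
# a stub; a real system would load a trained forgery-detection model.
import cv2          # OpenCV: pip install opencv-python
import numpy as np

def classify_frame(frame: np.ndarray) -> float:
    # Stand-in for a trained detector that returns P(frame is manipulated).
    # Returns a constant so the pipeline runs end to end; swap in a model.
    return 0.5

def score_video(path: str, every_nth: int = 30) -> float:
    # Sample every Nth frame and average the per-frame scores.
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:                      # end of stream
            break
        if index % every_nth == 0:
            scores.append(classify_frame(frame))
        index += 1
    capture.release()
    if not scores:
        raise ValueError(f"no frames read from {path}")
    return float(np.mean(scores))

# Example use: flag a clip for human review when the averaged score is high,
# e.g. if score_video("upload.mp4") > 0.8 (threshold chosen arbitrarily here).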

Replies:
There have been no replies.


