I recently came across VASA-1, Microsoft's still-image-to-video technology.
The combination of deepfake audio and video is something I can see wreaking havoc on society. What happens when trust becomes unattainable?
As mentioned earlier in this thread, there are people working on solutions:
I have a friend in cybersecurity whose job it is to detect deepfakes.
Some examples can be found from Intel and over at Papers with Code. There's also a relevant bill in the US Senate.
Seems like there will be a cat-and-mouse game between those who create fake content and those who detect it. If you can't detect what's fake, what do you do? And if something is detected, how should the viewer be informed?
The Content Authenticity Initiative does attempt to provide a solution for publishers who want to give users a way to verify that content is legit, but that adds a whole extra process to releasing content, and it relies on users actually checking.
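The verification idea is simple enough to sketch. This is a hypothetical minimal example, not the actual CAI/C2PA implementation (which embeds signed manifests in the media and uses public-key certificates): the publisher signs a hash of the content, and the viewer's tool recomputes the hash and checks the signature. All names here (the key, the functions) are made up for illustration.

```python
import hashlib
import hmac

# Stand-in for a real publisher signing key; real systems use
# public-key signatures so viewers never hold the secret.
SECRET_KEY = b"publisher-signing-key"

def sign_content(media_bytes: bytes) -> str:
    """Publisher side: sign a SHA-256 digest of the media bytes."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, signature: str) -> bool:
    """Viewer side: recompute the signature and compare in constant time."""
    expected = sign_content(media_bytes)
    return hmac.compare_digest(expected, signature)

video = b"\x00\x01some-video-bytes"
sig = sign_content(video)
print(verify_content(video, sig))                # unmodified content -> True
print(verify_content(video + b"tamper", sig))    # altered content -> False
```

The catch the comment above points at is visible even in this toy version: verification only helps if the signature travels with the content and the viewer bothers to run the check.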
The capability to create false content also provides another potential exploit. Ever been caught on camera doing something malicious? Just call it a deepfake.
Supposedly a company got scammed out of $25 million due to some video call shenanigans.
Real-time deepfake interactions are an impressive technology. People unknowingly talking to robots makes the dead internet theory sound more real every day.