
News: Facebook, Microsoft launch Deepfake Detection Challenge

Discussion in 'Article Discussion' started by bit-tech, 6 Sep 2019.

  1. bit-tech

    bit-tech Supreme Overlord Staff Administrator

    Joined:
    12 Mar 2001
    Posts:
    2,418
    Likes Received:
    43
     
  2. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,301
    Likes Received:
    313
    (Un?)intentionally creating an adversarial network that could be used to generate training sets to improve your 'deep fake' network.
     
  3. John_T

    John_T Member

    Joined:
    3 Aug 2009
    Posts:
    528
    Likes Received:
    20
    Are you suggesting people shouldn't work against it then? Isn't that a bit like saying there shouldn't be anti-virus and anti-malware companies, because they could just use the data to write better viruses and malware?

    I don't think it's an exaggeration to say that deepfakes have the potential to be one of the most damaging concerns of the 21st century - not just causing embarrassment or humiliation on a personal level (bad as that is), but also serving the more insidious purposes of swaying public opinion, rigging elections or even starting and justifying wars.

    Even in well education and naturally suspicious populations, there's an element of "If you throw enough mud, some of it sticks" - but in less well education populations, targeted disinformation, if delivered believably enough, could prove decisive.
     
  4. John_T

    John_T Member

    Joined:
    3 Aug 2009
    Posts:
    528
    Likes Received:
    20
    Good grief, I wrote 'education' instead of 'educated', not once but twice!

    Of all the words to get wrong, what a wally...
     
  5. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,301
    Likes Received:
    313
    No, but currently 'deepfakes' are trained against poor measures like SSIM and PSNR, with occasional human intervention for final A/B tuning. This results in the current "you might be fooled if you didn't look too closely" results*. By training neural networks to spot deepfakes, you also create exactly the adversarial network needed to train your deepfake generator against a much higher-quality performance function (e.g. think how much better deepfakes would be if every training round were judged by a human rather than one in every few million rounds). That would allow the perceptual quality of deepfakes to improve much faster. There's also the concern that reliance on such an automated deepfake-identification system produces false negatives. You already have armchair experts who claim to judge whether images are photoshopped based on erroneous and easily fooled techniques like error level analysis.
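    (A toy illustration of why pixel-wise measures like PSNR are poor perceptual judges, assuming a random numpy test image rather than a real photo: a one-pixel shift, which a human would barely notice, scores far worse than visible additive noise.)

    ```python
    import numpy as np

    def psnr(a, b, max_val=255.0):
        """Peak signal-to-noise ratio in dB; higher means 'more similar'.

        Purely pixel-wise: it has no notion of structure or perception.
        """
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

    rng = np.random.default_rng(0)
    # Random test image standing in for a video frame (not a real photo).
    img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

    shifted = np.roll(img, 1, axis=1)              # one-pixel shift: perceptually near-identical
    noisy = img + rng.normal(0.0, 2.0, img.shape)  # mild noise: visibly degraded

    print(f"shifted: {psnr(img, shifted):.1f} dB")  # very low score
    print(f"noisy:   {psnr(img, noisy):.1f} dB")    # high score
    ```

    The shift scrambles pixel correspondence entirely (huge MSE), while the noise barely moves each pixel - the opposite of how a human would rank the two.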

    I have no reason to oppose this on a technical basis (it's not like there is any viable method to stop anyone performing this research either), but the consequences also need to be considered.


    * That such results - even manually airbrushed without any automagic AIs involved - can already effectively sway public opinion (and have done since the dawn of photography) means in my view the arguments about some sort of deepfake apocalypse are a little overblown.
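    (The adversarial dynamic described in this post - a detector becoming the generator's performance function - can be sketched in a toy numpy loop. Everything here is invented for illustration: `detector_score` stands in for a trained detector network, and the generator is just two distribution parameters updated by random search rather than gradients.)

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # "Real" data the generator wants to imitate: samples from N(3, 1).
    real = rng.normal(3.0, 1.0, 1000)

    def detector_score(samples):
        # Stand-in detector: flags output whose mean/std deviate from the
        # real data's statistics. Lower score = harder to tell from real.
        return abs(samples.mean() - real.mean()) + abs(samples.std() - real.std())

    # Generator is just (mu, sigma), starting far from the real distribution.
    # "Training" keeps any perturbation the detector finds harder to flag.
    mu, sigma = 0.0, 1.0
    for _ in range(300):
        cand_mu = mu + rng.normal(0.0, 0.1)
        cand_sigma = abs(sigma + rng.normal(0.0, 0.1))
        if (detector_score(rng.normal(cand_mu, cand_sigma, 1000))
                < detector_score(rng.normal(mu, sigma, 1000))):
            mu, sigma = cand_mu, cand_sigma

    print(f"generator drifted to mu={mu:.2f}, sigma={sigma:.2f}")
    ```

    The generator ends up near the real distribution's (3, 1) purely because the detector told it what to hide - which is the concern above: a better detector is also a better training signal for the faker.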
     
  6. John_T

    John_T Member

    Joined:
    3 Aug 2009
    Posts:
    528
    Likes Received:
    20
    I get what you're saying about all the technical aspects of it all, but I suppose my main point is: What's the alternative?

    If doing nothing is not an option, then logically the only alternative is to try to do 'something'.

    As for an apocalypse of sorts, I did in fairness say it had the potential. I don't realistically believe WW3 is about to kick off in the next few years between the USA and (let's say) China over a deepfake video or two, but I do believe watching a video of something is far more effective than seeing a simple photograph. With the right sort of deepfake campaign pushed into an already volatile region such as Kashmir or somewhere in the Middle East - where people are already inflamed and ready to believe the worst - I honestly believe things could escalate very quickly. As we know from history, even small conflicts between small powers can spiral out of control unexpectedly fast, sucking bigger powers in with them. The danger of unintended consequences.

    It's all very well examining that inciting video with the suspicious and analytical eye of a highly educated (and technically relevant) software engineer - but is a Kashmiri shepherd or Middle Eastern shopkeeper going to make that same judgement call before joining a riot or an uprising?

    The genie is out of the bottle now, and it's not going back in...
     
  7. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,301
    Likes Received:
    313
    My point was that you could do the same thing with a bunch of actors, a good makeup artist, and a basic set - today or anytime in the last couple of decades (or longer with nation-state budget rather than independent agitator) - with a higher fidelity and verisimilitude than fiddling with deepfake video. I suspect that certain Sneaky Beaky Agencies have been doing exactly that for a very long time without the need to wave the AI magic wand over the process.
     