
Friday, May 26, 2023

The Same Old Mistakes With Deep Fakes and AI Images

Sometime this past Monday, a faked image of an explosion outside of the Pentagon triggered a response by high-frequency traders that hammered Wall Street.  Or at least portions of it (some stock index or other) for portions of the day.  This is the image, as CNN tweeted it.  Note the time tag (bottom left, under the video capture) says 7:32 PM, well after the image showed up and after the upset it caused was over.  For some brief period, someone might have thought there was an attack on the Pentagon and the US was done for.

American Wire News reported:

“A falsified photograph of an explosion near the Pentagon spread widely on social media Monday morning, briefly sending US stocks lower in possibly the first instance of an AI-generated image moving the market,” according to Bloomberg.

“It soon spread on Twitter accounts that reach millions of followers, including the Russian state-controlled news network RT and the financial news site ZeroHedge, a participant in the social-media company’s new Twitter Blue verification system.”

All the proper pearls have been clutched over this threat from AI-generated images, sometimes called "deep fakes."  There's worry that, as the Council on Foreign Relations put it,

Deep fakes, highly realistic and difficult-to-detect depictions of real people doing or saying things they never said or did, are a serious problem for democratic governments and the world order. How can they be stopped?

My bet is the Council isn't so much concerned that the pictures are being edited as that they're not the ones controlling the editing.  The thing is, photographs have been edited to change their meaning for as long as photographs have existed.  We're not talking since the advent of Photoshop; we're talking since the mid-1800s, with guys in a darkroom manipulating negatives.  Take this famous pair of photos.

This picture is of Joseph Stalin and some loyal staff; the one closest to the river was deleted by photo editing - a common fate of commissars Stalin had no further use for.  This was back when people whispered "airbrush!" instead of "Photoshop!"

The problem with AI is that people take it too seriously.  They don't look at the output and say, "not bad, but it got this part all wrong..."  They act like it's really intelligence and not a slightly-less-stupid-than-usual computer game.  You see people all wrapped up about it, saying, "you'll never be able to believe your eyes again" or "how can we ever trust our senses again?"  I'm pretty sure I've been hearing that since the advent of Photoshop.  When was that?  1990?  And the worry was ancient even then.  I drop by CW at Daily Timewaster a couple of times a day, and it doesn't matter how beautiful his "smile of the morning" is or how famous she is.  There's always at least one comment along the lines of, "dude, I can tell by the pixels around her neck that's been 'shopped" - or something like that.

I did a dive into this back in 2019 that included a pretty funny video in which someone put Steve Buscemi's face on actress Jennifer Lawrence's body.  The top pick for goofy/funny today is a mix of Joe Biden as Dylan Mulvaney, complete with cans of Butt Light.



10 comments:

  1. Butt Light? Oh.....I thought it was Pud Light.

  2. I believe very, very little of what I see and hear these days, from almost any source. And even the ones that I trust for straight info can be fooled themselves. It's a very sad situation.

  3. What was that you were saying about a "deadly" virus that escaped from a lab in China?

  4. I saw this coming, and developed an unbreakable method and algorithm for ensuring the original authenticity of images. This, along with the code, was published as the cover story in an issue of Dr. Dobb's Journal, mid-1996.

    Everybody said, "Great! Wonderful!" and then nobody implemented it. Clearly the problem isn't severe enough to bother with fixing it using a solution that has been out in public for 27 years.

    Replies
    1. Fascinating. It actually underlines my snark that the only reason the Council on Foreign Relations cares is that they want to be the ones in charge of deciding what the fake news is. It's not "misinformation" they fear; they fear not being in charge of declaring misinformation.

      And add Dr. Dobb's to the "if you're old enough to remember" or "dorky enough to remember" list.

    2. Yeah, along with Byte and a half-dozen small yet valuable magazines (though DDJ was a bit more techy/obscure).

      There is a deadly race going on in human civilization. Can we learn things faster than we forget them? I'm not sure the right side is winning any more.

    3. I remember those publications because I subscribed to them back in the day. I even ran across a DDJ I had saved, for some reason that is now obscure.

      Malatrope, is that code floating around out there? A little program to scan images for authenticity would be nice to have.

    4. BillB, images had to be coded by the hardware that captured them (a circuit to do that was also included in the article). The algorithm embedded the image's checksum into a random walk of the least significant bits of the RGB values, and thus only worked on uncompressed images. You could not visually see the embedded data, but you could extract the checksum and determine if anything had been changed. (A rough sketch of the general idea follows this comment.)

      There was a whole academic industry spawned by my article, apparently. Others extended it to cover compressed images by various means, but no manufacturer of cameras/phones/etc. ever stepped up to include it in any product. As SiG says, that would compromise the powers-that-be, and they couldn't let that happen now, could they?

      I'm outing myself with this, but if you're interested just search for "Image Authentication For A Slippery New Age". Most of the interest seems to be in Asia, but if you look hard enough you can find copies of the original article.

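[To make the general idea concrete, here's a minimal Python sketch - emphatically not the algorithm from the DDJ article, just the flavor of it: hash everything except the least significant bits, then scatter that hash through a keyed pseudorandom walk of those bits. The key, the Pillow library, and the file names are assumptions for illustration, and a real scheme would want a proper keyed MAC rather than a bare hash.]

# Toy sketch of checksum-in-the-LSBs image authentication.
# NOT the DDJ algorithm; lossless formats (e.g. PNG) only.
import hashlib
import random

from PIL import Image   # pip install pillow

KEY = "shared-secret"   # hypothetical key known to camera and verifier

def lsb_walk(size, n_bits, key):
    # Choose n_bits distinct (x, y, channel) positions via a keyed PRNG.
    # Building the full position list is fine for a toy/small image.
    w, h = size
    positions = [(x, y, c) for x in range(w) for y in range(h) for c in range(3)]
    return random.Random(key).sample(positions, n_bits)

def checksum_bits(img):
    # Hash every byte with its LSB masked off, so embedding the
    # checksum into the LSBs doesn't change the checksum itself.
    masked = bytes(b & 0xFE for b in img.tobytes())
    digest = hashlib.sha256(masked).digest()
    return [(byte >> i) & 1 for byte in digest for i in range(8)]

def embed(img):
    bits = checksum_bits(img)
    px = img.load()
    for (x, y, c), bit in zip(lsb_walk(img.size, len(bits), KEY), bits):
        p = list(px[x, y])
        p[c] = (p[c] & 0xFE) | bit      # overwrite one LSB
        px[x, y] = tuple(p)
    return img

def verify(img):
    bits = checksum_bits(img)
    px = img.load()
    return all(px[x, y][c] & 1 == bit
               for (x, y, c), bit in zip(lsb_walk(img.size, len(bits), KEY), bits))

embed(Image.open("photo.png").convert("RGB")).save("signed.png")
print(verify(Image.open("signed.png").convert("RGB")))   # True until edited

[As the comment notes, any lossy recompression destroys the LSBs, so this only survives lossless formats.]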
    5. Didn't mean for you to out yourself. I thought it was some kind of "magic" (advanced technology) sans hardware. Using steganography to embed authenticating data into a picture is a good idea.

    6. No worries, not the first time.

      There are all sorts of programs out there that attempt to detect photoshopping. They rely on things like texture changes, odd goings-on near segmented boundary areas, differences in focus, contrast, or saturation gamma, and such. Of course, they can make a judgment that suggests an image is fake, but they cannot prove it. (A sketch of one such heuristic follows this thread.)

      I also suspect that an image created by generative AI, like the one shown in this post, is immune to such measurements, because the image is built entirely from scratch, not pasted together from bits and pieces. Thus textures and lighting would be consistent across the entire image.

      It is a serious problem, and I'm afraid the result will simply be that nobody will believe anything at all unless seen by their own eyes. And even that interpretation can be influenced (ref: any and all UFO observations).

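[For the curious, here's a minimal Python sketch of one classic heuristic of the kind described above - error level analysis, which recompresses an image as JPEG and amplifies the difference, since regions pasted in from another source often recompress with a different error level. Pillow and the file names are assumptions; as the comment says, the output can suggest tampering but can't prove it.]

# Error level analysis (ELA) sketch: bright patches in the output
# recompress with more error than their surroundings, which can
# hint at pasted-in regions. Suggestive only, never proof.
import io

from PIL import Image, ImageChops   # pip install pillow

def ela(path, quality=90):
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    recompressed = Image.open(buf).convert("RGB")
    diff = ImageChops.difference(original, recompressed)
    # Stretch the faint per-pixel errors up to full brightness
    max_err = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda v: min(255, v * 255 // max_err))

ela("suspect.jpg").save("ela_map.png")   # eyeball the bright patches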