
Bing’s ChatGPT

ChatGPT in the headlines again

An AI generated image of a robot making conversation at a cocktail party

Last week, Microsoft picked up Tay’s shovel and...wow. Headlines like: Bing: “I will not harm you unless you harm me first” (Simon Willison), More news outlets get caught up in nasty conversations with Bing chatbot over facts (Alan Boyle, GeekWire), From Bing to Sydney (Stratechery), and A Conversation With Bing’s Chatbot Left Me Deeply Unsettled (Kevin Roose, New York Times).

Still, I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward....

But I think the kicker is this: Microsoft’s Bing is an emotionally manipulative liar, and people love it (James Vincent, The Verge). Mr. Vincent is both right and incomplete. ChatGPT has been trained on a corpus that includes an infinite stream of emotionally manipulative language, and given the right prompts, it veers toward using that language in its output. Bing’s prompt included words like “engaging,” “informative,” and “rigorous.” It’s like giving a freshman a description of how sophomores think about debate. (Simon Willison’s article includes the list that Marvin von Hagen talked Bing into revealing.) So Mr. Vincent is right that people love it, because they are being fed the tropes that draw them in at a rate and quality that people have never experienced. (There’s also an element of wanting to watch something awful happen.)

There’s a lot more to say about the impact of these things on the world, but for now, I wanted to share the more interesting stories.

DreamStudio, prompted with "A robot charming everyone at a cocktail party in the matrix, painting, HQ, 4k"