Watermarks
Watermarks show us weird edges of AI work

I’ve been playing with diffusion image generators lately. I think they’re interesting as technical tools, and worth investigating. They’re also challenging, and working with them a little helps me think about those challenges.
One of the challenges is getting neat images. I really liked the hacker in a hoodie and the Disney character, but got meh results for the President’s Daily Brief and Appsec Landscape posts. Exploring this is a useful reality check on the media, who tend to select amazing results for their stories.
These tools are trained on millions or billions of scraped images, and some of the things they do are strange artifacts of that. For example, the image excerpt that headlines this post suggests the model has learned that stock photos have watermarks on them.
It makes sense. Lots of bloggers make fair use of images in their posts, and stock image businesses want to get paid. So they watermark their images, and we go to sites like Flickr and Unsplash. Some of the unwitting providers of training images, for example, Getty Images, are suing. Copyright law is rarely simple, and attorney Kate Downing has a deep analysis of the suit against Stable Diffusion.

My prediction: no one but the copyright attorneys will be happy at the end of it. The copyright laws are the result of intense and ongoing lobbying, and don’t make a lot of sense the way they might if they were reasoned from first principles. That’s shaky ground for the courts to build on. The best outcome I can see might be a form of mechanical licensing, following player piano precedents. (And since those rules allow cover music, it’s not the worst place for us to end up.)
The full image that prompted this post was:
There are also interesting ethical challenges. Especially with a new book that draws on Star Wars, I’ve spent a lot of time over many years talking to lawyers about “fair use.” I think it’s reasonable for me to use AI images in places where I wouldn’t otherwise pay someone. I also pay people, like Oskar, who did the launch party posters. I’m comfortable with that, and each business will need to consider its policy.
More generally, these tools really feel like something new, and how they roll out will reflect and amplify power structures. Some people will use them to drive costs down; others will use them to amplify their voices or drown out other voices.
The change isn’t restricted to images. After I recorded a podcast with Bob Gourley of OODA, he showed me new tools he’s building, like “ask a cyber threat analyst” and “ask a corporate board director” (it’s interesting to give them the same task and see the results). These are also being used for marketing; for example, see My Experience After Using For 18 Months (not to pick on Mr. Ruby, but note the grammatical error in the headline).
Make time to play with these, and think about what they mean for you. AI can’t do that, at least not until the next round of training data includes blog posts like this one.
Dreamstudio, “fence with a wide open field. fence toward on bottom sky on top dark greys and blacks. stock photography. HQ, 4k” Other params — steps: 40; sampler: automatic; seed: 3007701878; cfgScale: 9; model: Stable Diffusion v1.5; CLIP enabled: true
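For anyone who wants to tinker beyond the hosted DreamStudio interface, here is a minimal sketch of how the caption’s settings (40 steps, CFG scale 9, a fixed seed, Stable Diffusion v1.5) might map onto the open-source diffusers library. This is not my workflow from the post; the model repository name, the GPU assumption, and the output filename are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# The prompt from the image caption above.
prompt = (
    "fence with a wide open field. fence toward on bottom sky on top "
    "dark greys and blacks. stock photography. HQ, 4k"
)

# Assumed repository name for a Stable Diffusion v1.5 checkpoint on the
# Hugging Face Hub; the exact ID may differ.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA-capable GPU

# Fixing the seed (3007701878 in the caption) makes the run repeatable.
generator = torch.Generator(device="cuda").manual_seed(3007701878)

image = pipe(
    prompt,
    num_inference_steps=40,  # "steps: 40"
    guidance_scale=9.0,      # "cfgScale: 9"
    generator=generator,
).images[0]

image.save("fence.png")
```

Running the same seed and settings locally won’t exactly reproduce DreamStudio’s output (samplers and implementations differ), but it’s a cheap way to explore how prompts and parameters change the results.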