We're really getting into it now with the legal framework that underpins AI training data. Disney is trying to kick over Midjourney with one foot while wedging its other foot in the door at OpenAI for a possible licensing deal. It's kind of a hardcore play, when you think about it; it's something like testing the Death Star on Alderaan.
Now another case has travelled all the way through the gullet, this time between Anthropic and a trio of authors. The result is very odd; it's mostly being described as a win for Anthropic, but there's a weird taste here. Essentially, the judge said that training AI models on copyrighted works is fair use because it's sufficiently transformative. That's kind of a big deal, and without an appeal to a higher court, or to heaven itself, this probably represents the endgame of legal remedies. But the judge also said that Anthropic has to go to trial over the fact that they got the books for their training data by Sailing The High Seas. It turns out that some will, in fact, steal a car. Or a novel, et cetera. Is this like Solomon and his baby-splitting exercise? You can train as long as you bought a copy from Barnes & Noble? Is that the same as a license? We'll have to wait until the December trial to find out, maybe.
There's stuff like Glaze to protect images, I guess; our engineer doesn't place a lot of stock in that kind of approach, but devouring twenty-five years of output by the mentally ill probably won't help.
(CW)TB out.