Friday Links: Copyright, weird AI video, and fuzzy cameras

Here are this week's links. I'll be on a break next week, so there will be no posts, but I'll be back in the first week of September. I wish everyone happy holidays!

  • Artists Score Major Win in Copyright Case Against AI Art Generators. I'm not sure this counts as a major win yet (at least not as major as the headline suggests), but the judge in this high-profile infringement case has ruled that it can proceed. At issue is the use of copyrighted works (images, texts) to train AI models, and the ability to subsequently prompt those models to produce results very similar to the original works. Should the suit succeed, it is likely to be extended to many of the image-generation AI systems out there (and ultimately to other modalities). My guess is that it will ultimately end up legal in many jurisdictions to train on copyrighted materials but illegal to produce very similar works, especially when they are explicitly prompted for. Unfortunately, the founders of some of the companies involved promoted the ability to do exactly that. I'd also expect artists and brands to start offering their materials for training under licenses with ongoing fees.
  • Authors sue Claude AI chatbot creator Anthropic for copyright infringement. In related news, copyright claims are coming to Anthropic too, with this new lawsuit (similar suits are already open against OpenAI). Part of the complaint is that some of the training datasets contained pirated, unpaid copies of many books. The argument is that human authors who read a book for inspiration at least buy it or obtain it through legitimate means (e.g. a library), whereas Anthropic and others did not even do that. That argument seems to hold water in my view, but if the outcome is that companies training models have to buy exactly one copy of every book they train on, it won't be a very lucrative win.
  • AI Video going through the awkward, weird stage. Meanwhile, AI is marching on to conquer video generation. This Twitter/X thread is a nice snapshot of the crazy videos people are currently producing with AI video generators. There are still plenty of errors and glitches, but there is some unique content as well. I especially like the Mad Max Muppets and Doctor Undefined.
  • Unreasonably Effective AI with Demis Hassabis. Nice interview with the CEO and co-founder of Google DeepMind, which is at the core of Google's AI efforts. If you're wondering about the title, it refers to the fact that it surprised everyone how good chat agents have gotten. There are some interesting insights into what might be coming in new iterations of Gemini: in a nutshell, agentic behaviors, getting better at validation, AI for self-validation, and why Google only open-sources behind-the-curve models.
  • DifuzCam: Replacing Camera Lens with a Mask and a Diffusion Model (via Jonathan Fishoff). This paper is quite mind-blowing in what it does. The researchers used an extremely low-quality camera (one whose lens is deliberately replaced with a mask) and were then able to recover the images using a diffusion model (the kind used in image generators). As you'd expect, the result isn't always perfect, but it's extremely impressive how a very small amount of signal can be turned into something recognizable. It also hints at why humans see faces and objects in the clouds on a lazy summer's day: our recognition algorithms kick in and show us an echo of something we've seen before.

If you missed it, I also posted a longer-form piece on the data economy last week. Happy holidays!