Sunday Links: AI Animation, Co-Pilot impact, and the Musk-Altman files

The Musk v. Altman files, AI ads, and quantum with AI.

Here are this week's links:

  • No animators were employed in the making of this advertisement. In the "all publicity is good publicity" department, Coca-Cola poked the bear this week by releasing an all-AI-generated holiday advertisement. The ad drew plenty of criticism for its quality and (predictably) its lack of soul. The YouTube comments are scathing in places; "The irony of the 'Real Magic' tagline at the end of a fake commercial" more or less sums up the thread. AI-generated ads seem all but inevitable, and this one will likely get play years down the line as one of the first from a major brand, so taking some pain now might be a calculated bet. Still, it needs to be done more artfully than this to avoid a backlash, and human animators + AI will remain the better choice for a long time.
  • Does GitHub Copilot improve code quality? Here’s what the data says. (Well, Microsoft's data!) This article covers survey data from a test of 200 Python developers. It's a fairly interesting read, but it also comes from a company with a clear interest in promoting AI for code, so the comments posted on Slashdot make a good counterbalance. It seems clear that inline code-augmentation tools will boost productivity. One aspect is worrying, though: resources like Stack Overflow will become less complete as fewer questions are asked and answered. Perhaps there needs to be an open-source way to capture questions and answers for future training.
  • SmolTalk Released: The Dataset Recipe Behind the Best-in-Class Performance of SmolLM2. The world of small models continues to be exciting, and SmolLM2 from Hugging Face is one of the models that works really well at under 2B parameters. This week, the team released the training dataset for the models under an Apache 2.0 license, which means others can have a shot at building similar models. It's also interesting to note that the dataset is mostly synthetic. What does synthetic data look like? The entries are prompts such as "A box of ice cream bars costs $x and contains three bars. 6 friends want to each eat 2 bars. How much money will that require per person?\nIf we know the answer to the above question is 5, what is the value of unknown variable x?", followed by structured solutions. This hones the model's problem-solving and reasoning skills with as small an input set as possible. (You can see the data set here.)
  • Google DeepMind AI can expertly fix errors in quantum computers [Paywalled]. Using AI to tune other systems (in this case, quantum computers) makes a lot of sense. Searching through parameter spaces quickly, trying combinations, and applying some learned "intuition" is going to take us a few leaps beyond what humans can do (though it may then hit its own natural boundary). Using AI to tune quantum computers that may, one day, run AI is very "Inception." On a side note, researchers in China have reportedly cracked widely used RSA and AES security ciphers using quantum processing. Your credit card data is probably still safe for a while, but the clock is ticking.
  • OpenAI Email Archives (from Musk v. Altman) by Habryka. This is a fascinating read on the birth of OpenAI. It's also fascinating to see how Elon's emails are often just two words. Do you think your emails are short and to the point? Think again! Of course, someone in the comments has already used ElevenLabs to turn it into a podcast/play.
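As a quick aside, the synthetic SmolTalk math problem quoted above is easy to sanity-check. Here's a minimal Python sketch (the function and variable names are mine, not from the dataset): 12 bars are needed, which means 4 whole boxes, so the per-person cost is 4x/6.

```python
import math
from fractions import Fraction

def cost_per_person(x, bars_per_box=3, friends=6, bars_each=2):
    """Cost per friend: buy enough whole boxes for everyone's bars, split evenly."""
    bars_needed = friends * bars_each              # 6 * 2 = 12 bars
    boxes = math.ceil(bars_needed / bars_per_box)  # 12 / 3 = 4 boxes
    return boxes * Fraction(x) / friends           # 4x / 6

# The dataset says the per-person cost is 5, so solve 4x / 6 = 5 for x:
x = Fraction(5) * 6 / 4
print(x)                   # 15/2, i.e. x = 7.5
print(cost_per_person(x))  # 5
```

So the "unknown variable x" works out to 7.5: a compact, fully checkable reasoning chain, which is exactly what makes this kind of synthetic data useful for training.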

Wishing you a great rest of the weekend.