Saturday Links: Spark, Data Namecalling, and Notebook Inception

Self-referencing AI/ML notebooks, English-language app builders, and a lot of monkeys.


Here are the links for the week:

  • GitHub Spark lets you build web apps in plain English. Building functioning apps with natural language prompts is truly the holy grail of AI-enhanced programming. Now, GitHub is showing its first preview service to do just that. There are plenty of startups going after this prize, but with GitHub in the game, it seems inevitable that it'll become a staple built-in feature of coding environments. Building simple apps isn't that impressive. What is, though, is being able to modify the app post-generation and keep using the AI to refactor parts of it. It doesn't sound like Spark can do this yet, but that's when the real power will be unleashed. If the model is good enough to iterate towards an ever-improving final product, it'll be hard to argue for hand-editing code at all.
  • AI Schoolyard Name Calling. Salesforce CEO Marc Benioff has increasingly been haranguing Microsoft over the supposed failure of its Copilot services. Firstly, it's early days, and there's no doubt a lot of learning going on at Microsoft; it's hard to imagine Copilot won't be part of the digital future, with many variants embedded across Microsoft's products and services. More cutting, though, is Benioff's central claim, which seems to be that Microsoft doesn't have the data to do a good job of building these agents. Presumably, he means the kind of sales and marketing data that Salesforce holds for many of its customers. That may be true right now, but let's be realistic: if you are using Salesforce as a database, it's an extremely expensive database. The biggest effect of Benioff's posturing may well be that Salesforce customers are tempted to shift their data out of Salesforce to somewhere else (Azure?) so that they can build a much cheaper, more effective AI-driven sales support layer on top.
  • Notebook Llama. Meta has released an open-source version of Google's amazing NotebookLM tool (which I've mentioned in multiple previous posts). The system uses Llama models to create a script, then embellishes it and feeds it to text-to-speech voices. It's currently not as good as NotebookLM, but it's great to see an open-source version, and now it can likely be heavily tuned. The saccharine effusiveness of the Google NotebookLM podcasters is a little much at times. In fact, you can go pretty (forgive me...) "meta" on this: I asked NotebookLM to create me a podcast on Notebook Llama, and it did a great job. Next-level inception (though it doesn't end up self-referencing), and actually a great summary of how the Notebook services work.
  • A Guide To Linearity and Nonlinearity in Machine Learning. The difference between linear and non-linear functions is fundamental to the behavior of machine learning models. Non-linearities let us model much more complex problems but also, generally, push up computational complexity. Manuel Brenner's nice primer article on the topic is a useful read to get a handle on these concepts; for a hands-on feel for the distinction, see the first sketch after this list.
  • Universe would die before monkey with keyboard writes Shakespeare, study finds. This isn't an AI story, but I do love the idea of researchers spending their time estimating the probabilities here. Spoiler alert: the chances of even full sentences coming together are staggeringly low (the second sketch below gives a rough back-of-envelope version of the calculation). The whole premise is flawed, though: you could get much further simply by teaching the monkeys to write. Give them AI and who knows what they can do.
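
To make the linearity point concrete, here's a minimal NumPy sketch (my own illustration, not code from Brenner's article): stacking linear layers collapses into a single linear map, and it's the non-linearity slotted in between that breaks this, which is precisely what lets networks fit non-linear functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" as random weight matrices (purely illustrative values).
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x, y = rng.normal(size=3), rng.normal(size=3)

# Stacked linear layers collapse to one linear map:
# W2 @ (W1 @ x) == (W2 @ W1) @ x, so depth alone adds no expressive power.
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# Linearity also means additivity: f(x + y) == f(x) + f(y).
f = lambda v: W2 @ (W1 @ v)
assert np.allclose(f(x + y), f(x) + f(y))

# A ReLU between the layers breaks additivity -- the composite is no
# longer linear, which is what lets it model non-linear functions.
relu = lambda v: np.maximum(v, 0)
g = lambda v: W2 @ relu(W1 @ v)
print(np.allclose(g(x + y), g(x) + g(y)))  # almost surely False
```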
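
And since the monkey story is, at heart, an arithmetic story, here's a rough back-of-envelope version of the odds. The key count, phrase, and typing speed are my own illustrative assumptions, not numbers from the study:

```python
from math import log10

keys = 30                      # rough keyboard: 26 letters plus a few extras
phrase = "to be or not to be"  # 18 characters
p = keys ** -len(phrase)       # chance of typing it perfectly in one attempt
print(f"p ~= 10^{log10(p):.0f}")  # about 10^-27

# Even at one full attempt per second, the expected wait is ~4e26 seconds,
# dwarfing the ~4e17 seconds the universe has existed so far.
print(f"expected wait ~= {1 / p:.1e} seconds")
```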

Wishing you a great weekend!