Sunday Links: Nobel prizes, grounding, and icebergs

Two Nobel prizes for AI, grounding via open-source facts, and, surprisingly, an iceberg.

My apologies that this post is so late this week (barely squeaking in for Sunday), but some weekend travel stopped me from getting it wrapped up before now. (I used to be great at writing on planes, but I'm out of practice these days - time to get better at it again!)

Anyway, here we go:

  • Nobel Prize in chemistry awarded to scientists for work on proteins, and Nobel physics prize 2024 won by AI pioneers John Hopfield and Geoffrey Hinton. It's hard not to start this week with two huge Nobel prize announcements going to researchers who used AI to fundamentally further their fields. In both cases, AI tools were used to explore the space of possible solutions to challenging problems and to help generate and evaluate candidates. Whether or not one agrees with Nobels being handed out for this work (this year, or in 2025, 2026, ...), AI represents another step change in the kinds of understanding we can garner about the universe. Gary Marcus has been fulfilling his role as curmudgeon and doomer since the launch of ChatGPT, but at the technical level I like his take on the differences in approach between the two prize-winning works (and the future they suggest). I'm firmly in the camp that neuro-symbolic approaches will lead to the biggest breakthroughs in the long term.
  • Concepts: Types of Deep Learning. To help unpick the mystery of what's actually happening inside these different models, Turing Post has a nice illustrated guide to the different types of deep learning. The explanations are high-level but clear.
  • Meet DataGemma: Google DeepMind's Effort to Ground LLMs in Factual Knowledge. Sticking with the ML architectural theme, The Sequence has a useful intro to new Google research on DataGemma, which tries to reduce hallucinations in LLMs by "grounding" them in large bases of facts and statistics. The work looks at how to do this architecturally (using variants of RAG as well as "RIG", Retrieval Interleaved Generation, where the model emits queries inline that get answered from a trusted store - there's a toy sketch of the idea after this list) but also tries to answer the question of what to actually ground on (in other words, which facts should be the basis of reality). This is a key area that will matter a great deal in AI development. Google's answer for now is a project called Data Commons, which aggregates large amounts of public data (from the UN and other sources) so that it can be queried. This makes a lot of architectural sense, and we should be happy that the project is open source - it will take a real collaborative effort to curate such a fact base. From all the blurb, it's still unclear whether the result can be fully used for commercial purposes; I guess we'll see how that evolves. I believe we'll need Wikipedia-like managed data lakes that anyone can tap into for factual grounding. Some of this will become political, and there will be differences of opinion (just as there are on Wikipedia), so there will likely never be a single source of truth - rather, facts with provenance and weights.
  • The Intelligence Age. This post by Sam Altman is a couple of weeks old now and was fairly widely knocked for being hyperbolic. In a nutshell, he states he's confident we'll be able to create superintelligences, and that this is possible because "deep learning works." He goes on to argue that the intelligence age will lead to huge abundance. I also believe we'll be able to create systems that are super-human in their intelligence, and that deep learning was one of the key unlocks we needed. The amount of human intellect now directed at making these approaches better and pairing them with other methods is staggering. I also think we will see tremendous opportunities and prosperity unlocked. What I'm a lot more skeptical about is whether we'll be able to scale usage quickly (given power, component, and political constraints) and, perhaps an even bigger worry, whether we'll be far-sighted enough to carry humanity through what will be a long period of messy disruption. Our social institutions are quite fragile even without the AI wave. With it, we'll need to be astonishingly good at spreading the benefits and protecting against potential harms.
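
To make the RIG idea a bit more concrete, here's a minimal sketch in Python. To be clear, this is not the DataGemma API: the [QUERY: ...] marker syntax, the FACT_STORE dictionary, and the function names are all invented for illustration. The real system uses a fine-tuned model emitting structured queries that are resolved against the live Data Commons service.

```python
# A toy sketch of RIG-style grounding (all names hypothetical, not the
# DataGemma API): instead of guessing a statistic, the model emits a
# [QUERY: ...] marker, which we resolve against a trusted fact store.
import re

# Hypothetical stand-in for a Data Commons-style fact base; a real system
# would issue structured queries against a live, curated service.
FACT_STORE = {
    "population of Kenya": "about 54 million (UN, 2023)",
}

def generate_draft(prompt: str) -> str:
    """Stand-in for an LLM fine-tuned to emit query markers wherever
    it would otherwise be tempted to hallucinate a number."""
    return "Kenya has a population of [QUERY: population of Kenya]."

def ground(draft: str) -> str:
    """Replace each [QUERY: ...] marker with a retrieved fact,
    keeping provenance attached to the value."""
    def lookup(match: re.Match) -> str:
        query = match.group(1).strip()
        return FACT_STORE.get(query, f"[unverified: {query}]")
    return re.sub(r"\[QUERY:([^\]]+)\]", lookup, draft)

print(ground(generate_draft("What is the population of Kenya?")))
# -> Kenya has a population of about 54 million (UN, 2023).
```

The plumbing here is simple on purpose; the hard (and valuable) part is the curated, provenance-carrying fact base that the queries resolve against.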

For the last link, I want to go a little off-script with something that made me laugh and brought in a bit of humility:

  • The hidden underside of an iceberg: Laurent Ballesta’s best photograph. If you've spent any time in the enterprise software world, you'll have seen plenty of "iceberg" diagrams on PowerPoint slides. To my knowledge, though, I'd never seen an actual photograph of the underside before. It looks majestic, and whatever those PowerPoint slides claimed, it's probably not making much of a dent below the waterline.

Thank you, and have a wonderful week.