Saturday Links: AlphaGeometry, The End of the Browser & OS, plus Sleeper Agents
The end of the week turned into a big work crunch, so here we are with links on Saturday instead of Friday. Here are this week's links, with a distinctly scientific/technical lean this time:
- Google's AlphaGeometry can learn to solve geometry puzzles to the highest human standard. In a paper published in Nature, Google researchers show that it is possible to design a system that learns to solve complex geometry problems using a neuro-symbolic approach. I expect this will be one of the leading technical trends for AI going forward: combining new high-powered learning techniques with a symbolic "world model" system that is used to determine which steps are valid. Calling it a "milestone towards AGI" might be over-egging the result, since this is a constrained logic domain, but the architectural principle here is very powerful.
- Anthropic's Sleeper Agents show that you can hide malicious behaviors in LLMs. The Anthropic team shows that it's possible to train specific responses into an LLM that are only activated by hidden prompt inputs, and that cannot be detected or removed by fine-tuning. This is not especially surprising given that trained LLMs are so complex that there are likely many hidden behaviors, but it's a reminder of the power of the systems themselves. One obvious implication is that it's important to "trust" the providers/maintainers of any LLM you rely on. Much like hiring a human, you may find it with one hand in the till if you hire an LLM without doing background checks. Increasingly, checking the sources of training data and training one's own models will also be important counters to potentially malicious hidden behavior. A final important implication is that "guardrail" wrapper systems have very limited effect against such scenarios, since they would have to be more powerful than the LLM itself. The scientific paper is here.
- Satya Nadella foreshadows the end of the browser and the OS on Bloomberg Live. Davos has generated a plethora of industry leaders talking about AI; Satya Nadella's appearance was perhaps the most interesting of all. The headline I picked is not one that got much traction in the general press, but from about minute four onwards, Nadella drops a number of zingers. No doubt he's overhyping change to ding Google, but among the comments was a statement that search will change fundamentally, and that perhaps the browsers and even the OS "all in some sense collapse." Microsoft's pitch is that co-pilots will be the new UI. I agree that massive change is coming (in devices, interaction paradigms, browsers, and at the OS level), and Microsoft has much to gain here (I'll write more about this in the future). On the other hand, Apple and Microsoft are currently in the driving seat regarding audience access; how they fare depends largely on how willing they are to cannibalize their own businesses.
- New material found by AI could reduce lithium use in batteries. Microsoft and PNNL produce another AI-powered materials breakthrough, this time in reducing lithium requirements in batteries. High-value outcomes such as this are a huge boon of AI technology. In this case, a specially trained materials science system was used to analyze large numbers of combinations and find candidate materials that could then be tested.
- Finishing on another tech note: Transformers are Multi-State RNNs. This paper from researchers at FAIR/Meta shows that transformer models (the heart of the current cutting-edge technology) can be seen as recurrent neural networks (RNNs) with an unbounded number of hidden states. This is an important relationship to draw, since it will allow analysis techniques from RNNs to be applied to transformer models. It's possible it also helps us understand how to go beyond transformers into things like Liquid Neural Networks.
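To make the equivalence concrete, here is a minimal sketch (my own illustration, not the paper's code) of a single attention head run autoregressively. The RNN-style "hidden state" is the growing list of past key/value pairs (i.e. the KV cache): each token updates the state and produces an output from it, and the state is unbounded because it grows by one entry per token. All names here are illustrative.

```python
# A single attention head viewed as an RNN whose hidden state is the
# growing list of past (key, value) pairs -- a "multi-state" RNN.
import math

def attend_step(state, k, v, q):
    """One autoregressive step: extend the state, then attend over it."""
    state = state + [(k, v)]  # RNN-style state update: append this token's (k, v)
    # Scaled dot-product scores of the query against every cached key.
    scores = [sum(qi * ki for qi, ki in zip(q, kk)) / math.sqrt(len(q))
              for kk, _ in state]
    # Numerically stable softmax over the scores.
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    # Output is the weighted mix of cached values.
    out = [sum(w * vv[i] for w, (_, vv) in zip(weights, state))
           for i in range(len(v))]
    return state, out

# Feed two toy tokens through; the state grows by one (k, v) pair per token.
# Capping the length of `state` would recover a finite-state RNN.
state = []
for k, v, q in [([1.0, 0.0], [2.0], [1.0, 0.0]),
                ([0.0, 1.0], [4.0], [1.0, 0.0])]:
    state, out = attend_step(state, k, v, q)
```

The point of the sketch is the shape of the computation, not the arithmetic: `attend_step` has exactly the `(state, input) -> (state, output)` signature of an RNN cell, with the twist that the state is a list rather than a fixed-size vector.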
Wishing you a great weekend.