Saturday Links: Open-source robot control, the EU AI Act, and AI weapons
The AI world goes more open source, with robotics platforms, Google's stance on military applications, and GitHub Copilot.

Hi to you all. The AI world continues to move along. Open source seems to be the hero again. Here are this week's links:
- Hugging Face brings ‘Pi-Zero’ to LeRobot, making AI-powered robots easier to build and deploy. This open-source foundation model is designed to take voice inputs and convert them into robot actions. It's great to see Hugging Face start to build out the ecosystem of robotics-focused models (see the minimal usage sketch after these links). Interesting that the Pi-Zero announcement comes just a couple of weeks after a big $500M funding round for Skild AI. Robotics seems like it will turn into a dance between open software + commodity hardware on one side and full-stack integrated hardware+software approaches on the other. It seems likely that open source will have the upper hand here. Robots will eventually go wherever humans go and beyond, and it will be hard for any one company to keep up with all the hardware variants that will be required.
- Sam Altman: OpenAI has been on the ‘wrong side of history’ concerning open source. Reacting to questions about DeepSeek (see last week's links), Sam Altman went on the record to suggest that OpenAI should perhaps have done more on open source. This seems likely to mean open-sourcing more of the older, now quasi-obsolete models; it seems unlikely they will open up their leading-edge ones. Still, even open-sourcing the older models would fuel the fires of experimentation.
- The Commission publishes guidelines on AI system definition to facilitate the first AI Act’s rules application. The first compliance deadline of the EU AI Act passed last Sunday (2 February 2025), and applications of AI on the prohibited list are now, indeed, prohibited. The guidelines seek to clarify what is in and out of scope but do not count as legal advice. The categories of unacceptable-risk systems from the original text of the act are: 1) harmful AI-based manipulation and deception, 2) harmful AI-based exploitation of vulnerabilities, 3) social scoring, 4) individual criminal offense risk assessment or prediction, 5) untargeted scraping of the internet or CCTV material to create or expand facial recognition databases, 6) emotion recognition in workplaces and education institutions, 7) biometric categorization to deduce certain protected characteristics, and 8) real-time remote biometric identification for law enforcement purposes in publicly accessible spaces. Some of these are indeed obviously harmful. Others, though, such as social scoring or offense risk prediction, could, on the face of it, stray into areas where AI is already used (fraud detection, for example). The guideline document is quite helpful in delineating what is and is not in scope, and it also reiterates clear exceptions for R&D and (some forms of) personal rather than commercial use. In some sense, this set of rules coming into force is the easy case: later rules apply to lower-risk systems, and there are a lot more of those.
- Google owner drops promise not to use AI for weapons. This headline should probably read, "AI for military applications is bad ... unless it's for our military applications." In the shock of ChatGPT's emergence, there was a moment when the view of what AI could become was a common source of wonder and fear for everyone, no matter their nationality. The last 18-24 months, though, have made it clear that there will be military applications, and companies are aligning themselves, in part, with national interests. It is hard to imagine that the shift in tone from Google wasn't also somewhat influenced by the change of government in Washington. How to feel about this? From a purely societal point of view, it is hard to imagine a world where military power doesn't have some role to play in nation-state interactions, and to that extent it makes sense that allies want to protect themselves. We are also still far away from runaway AIs that wipe out the planet. However, there really do need to be strong policies in place governing how such systems can be used. A large percentage of dystopian Sci-Fi (and most Sci-Fi is dystopian) involves runaway AIs. If you'd like a near-term "agentic" take, I can highly recommend Kill Decision and the Daemon duology by Daniel Suarez.
- GitHub Copilot: The agent awakens. Probably the least surprising news of the week: GitHub is getting into the coding agent game. Startups like Cognition (with Devin) and Magic have had about a year's head start. Their products are no better than GitHub's fledgling offering, though; will they be able to overcome the inherent distribution power that GitHub has?
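As a concrete illustration of the Pi-Zero item above, here is a minimal sketch of what calling a vision-language-action policy through LeRobot might look like. A big caveat: the import path, the PI0Policy class name, the "lerobot/pi0" checkpoint id, and the batch keys and tensor shapes are all assumptions based on LeRobot's general conventions, not a verified recipe, so check the LeRobot repo for the exact API.

```python
# Hypothetical sketch: load an open-source vision-language-action policy in
# LeRobot and ask it for an action. Import path, checkpoint id, and batch keys
# are assumptions; consult the LeRobot repo for the real ones.
import torch
from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy

policy = PI0Policy.from_pretrained("lerobot/pi0")  # assumed checkpoint id
policy.eval()

# One observation: a camera frame, the robot's proprioceptive state, and a
# natural-language task string (where a transcribed voice command would land).
batch = {
    "observation.images.top": torch.rand(1, 3, 224, 224),  # dummy camera image
    "observation.state": torch.rand(1, 14),                # dummy joint state
    "task": ["pick up the red cube"],                      # language instruction
}

with torch.no_grad():
    action = policy.select_action(batch)  # next low-level action for the robot

print(action.shape)
```

The interesting design point is that the language instruction is just another input alongside images and joint state, which is what makes "voice in, actions out" a thin speech-to-text layer away.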
In some non-AI-related news, Matthew Ball has an interesting annual review of the business of gaming, a 226-slide presentation. The takeaway is that the video game industry is at a point where growth has slowed, and there are huge challenges as well as various types of strategic lock-in. The presentation is an interesting insight into a highly evolved market, but also quite scary if you are in the business of making games.
Happy weekend (!), and maybe buy or try an indie game to make a small studio happier (Itch.io for early titles, Steam for those that are more evolved).