Friday Links: IBM goes to the North Pole, AI goes to MoMA, and the UK Government is further into AI than people knew.
Also: a battery of 10,000 tests probably won't tell you for sure whether a future smart AI system is safe
Thanks to everyone who sent me notes on the Watermarking post earlier in the week. Lots of great observations for a future post! This week’s links:
- "Unsupervised" is now in MoMA's permanent collection (direct link to the video here). For this art piece, AI truly did provide a new toolkit, and the result is pretty stunning in its scale and depth. At the other end of the spectrum, some people have called it a fancy screensaver. I guess it gets a tick in the box for being provocative, at least!
- IBM Goes to the North Pole: one of the unsung effects of the AI boom is the renewed focus on (and variety of) new chip designs. This week, IBM released details of NorthPole, a prototype chip that mimics organic brain connections by combining compute and memory on a single chip. If successful, this might let neural networks run inference entirely on-chip, without shuttling data to and from external memory, for a claimed 25x power saving and 20x speedup.
- Big players allocate (wait for it…) $10M to a new AI Safety Fund. No doubt they are spending on the same topic elsewhere, but as a headline it hurts: $10M is a tiny amount for such a critical topic. For context, this is about 20 minutes of revenue for Google (the quick math is sketched after this list).
- The UK Government's use of AI raises hackles. The Guardian investigated UK Government AI use via Freedom of Information Act requests and found multiple departments using AI for tasks such as pension assessment and immigration, with discriminatory results flagged in several of these processes. None of this is surprising: not the use, not the errors, and not the outrage. There is already huge unchecked use of algorithmic decision-making in government that simply isn't labeled as AI; the current wave of public interest in AI is just surfacing more of it. Hopefully it will lead to deeper audits of all automated decision-making, not just the newest AI-labeled systems. Virginia Eubanks' 2018 book "Automating Inequality" is an excellent primer on how bad things already are.
- Ending on a more upbeat note, Vedeo.AI has a showcase of short AI-made movies. Many still have plenty of artifacting and weirdness, but video is the hardest medium to generate seamlessly. When this becomes automatic, you'll be able to tell your kids how clunky the results were in the early days.
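For the skeptical, here is the back-of-envelope math behind the "20 minutes of revenue" line above, as a minimal Python sketch. The revenue figure is an assumption on my part (roughly Alphabet's reported 2022 annual revenue); pick a different year or entity and the answer shifts a bit, but not enough to change the point.

```python
# Back-of-envelope: how long does it take Google (Alphabet) to earn $10M?
# Assumption: ~$283B annual revenue (roughly Alphabet's reported 2022 figure).

ANNUAL_REVENUE_USD = 283e9   # assumed annual revenue
FUND_SIZE_USD = 10e6         # the announced AI Safety Fund

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes
revenue_per_minute = ANNUAL_REVENUE_USD / MINUTES_PER_YEAR

minutes_to_earn_fund = FUND_SIZE_USD / revenue_per_minute
print(f"Revenue per minute: ${revenue_per_minute:,.0f}")
print(f"Minutes to earn the $10M fund: {minutes_to_earn_fund:.1f}")
# -> roughly $538k per minute, so about 18.6 minutes: "about 20 minutes" holds up.
```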
As a bonus this week, there is one more: Demis Hassabis' interview on the seriousness of AI risk (also in the Guardian). Demis is a co-founder of Google DeepMind and argues that the dangers of AI should be taken as seriously as climate change. I definitely think the risks should be addressed; I'm not sure how I feel about them being put in the same bucket as climate change, which large parts of humanity are still resolutely ignoring.
The interview also includes this quote: "I can imagine in future you'd have this battery of 1,000 tests, or 10,000 tests could be, and then you get safety Kitemark from that." I guess this may have been designed to make people feel better, but if there's one thing that seems highly unlikely, it's that we'll be able to declare a highly intelligent AI system "safe" via a standardized battery of tests.
Lastly, there is an update on the Cruise-related car crash mentioned in last week's links. California has suspended Cruise's operating permits, since it appears the car moved without permission after the initial incident. Murkier than expected, indeed.