Easter Links: AI doom 2027, data extraction, and robot runners
The AI 2027 report, running robots, lacklustre productivity and data extraction.

Wishing everyone a happy Easter weekend if you are celebrating, and a quiet one if you are not! AI news didn't really slow down, so here are this week's most interesting links:
- AI 2027. This widely shared report is a futurist's look at what could happen if we do produce AI superintelligence in the next few years. The report is really well done, both graphically and in terms of storytelling, and you get to pick one of two endings (both seem somewhat dystopian to me). I have several critiques, but it is definitely worth reading. The main positive is that I do think advances in the next few years will be quite swift, and that alignment matters. However, I think the timeline beyond 2026 is far too fast and, above all, it underestimates the economic disruption that will happen before superintelligence is likely to play any major role. If the timelines were stretched out by 3-4x, I'd say some of the big disruptions are more plausible. Over that longer period, though, there would be a lot of fragmentation and different approaches, and it's just as likely that we'd see a slow diffusion of AI, much of which ends up being put to benevolent uses. As an aside, I'm also not sure China will need to "steal" any tech from anyone; there are plenty of smart labs there able to do just as much as US and European labs.
- Shifting Work Patterns with Generative AI. This six-month study (hat tip: Azeem Azhar) tracked knowledge-worker behaviour across 56 companies to determine how work shifted when using tools such as ChatGPT; it was conducted using Microsoft Copilot and the Office suite. The results showed a clear speed-up in email processing (saving around three hours a week), some improvement in collaborative document completion, and no change at all in meeting behaviour. Looking at the results, this very much seems like a "first-order" study in that it focuses on generic knowledge work. My guess is that there is a much deeper impact where specialized roles adapt to use AI.
- China pits humanoid robots against humans in half-marathon for first time. The robot half-marathon finally happened. It reminds me of the early RoboCup tournaments, except with even more expensive hardware on the line, plus possibly human safety.
- Leaked Scale AI Dashboard Offers New Details Into Projects for Meta, Google, and X.ai. Scale AI is the leading infrastructure provider for the data labelling that is often used to classify inputs to AI training. Both automated and human processes determine what a training input contains so that it can be used to push the learning AI in the right direction. This peek at Scale's large customers is an interesting view into how important data labelling still is and the kinds of projects underway at the big model-building companies. From fine-tuning and language learning to keeping responses neutral for upcoming elections, there are lots of data challenges.
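To make the labelling idea concrete, here is a minimal sketch of what one labelled record for preference-style fine-tuning might look like, with a simple majority vote to reconcile multiple human annotators. All field names and data are invented for illustration; this is not Scale's actual format or pipeline.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class PreferenceRecord:
    """One labelled training example: a prompt plus two candidate
    responses, with a label saying which response annotators preferred."""
    prompt: str
    response_a: str
    response_b: str
    preferred: str  # "a" or "b"


def majority_label(votes: list[str]) -> str:
    """Reconcile several annotators' votes into a single label.
    A real pipeline would flag ties and low agreement for expert review."""
    return Counter(votes).most_common(1)[0][0]


# Three hypothetical annotators label the same comparison.
votes = ["a", "b", "a"]
record = PreferenceRecord(
    prompt="Summarise the article in one sentence.",
    response_a="A concise, accurate summary.",
    response_b="A rambling, off-topic reply.",
    preferred=majority_label(votes),
)
print(record.preferred)  # "a"
```

The point of the sketch is that the "label" the model eventually learns from is itself the output of a small reconciliation process over human (and automated) judgements, which is exactly where the data challenges mentioned above live.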
- Deck raises $12M to ‘Plaid-ify’ any website using AI. AI meets APIs in this story, which may seem a little peripheral (not least because of the cryptic TechCrunch title). What Deck and its competitors do is let you log into many of your existing web applications and extract your own personal data, keeping it in sync. If they succeed, then in theory you can share this data with other web apps and AI, so this is a potential key step in moving beyond today's web apps. There may be a long road to user adoption, and many web apps will likely fight this data extraction, but they will probably be forced to allow it for legal reasons.
Wishing you all a happy weekend!