Saturday Links: Consultancy revenue, minimal code and Go
Here are this week's links. A bit less Apple, a bit more machine learning.
- Accenture says strong AI demand to power 2024 revenue growth. AI consulting is definitely paying off right now, with Accenture projecting over $2B of generative AI-related revenue in 2024 and making a hiring push. How much of this will be "replace spreadsheets with AI" and how much will be "replace AI with spreadsheets"?
- These 94 lines of code are everything that is needed to train a neural network. Everything else is just efficiency. At the other end of the spectrum from the consulting billions, the basics of what all the fuss is about. A nice tweet from Andrej Karpathy pointing back to some of the simple early code he wrote to understand back-propagation (see the small sketch after this list for the flavour of it).
- London premiere of movie with AI-generated script cancelled after backlash. A film script written by GPT-4 and then produced by humans (including human actors) was bound to come along, but someone had to be first. Now the premiere of the movie in question, "The Last Screenwriter", has been cancelled in London after pushback. The cancellation feels more like part of the art than an unfortunate incident, and I suspect it will drive more interest in the long run. Given that AI tools are already coming into wide use in the film industry (and the Writers Guild of America approved their use so long as the humans involved are compensated), it seems likely that AI-assisted productions will become more and more common.
- Chegg to lay off 23% of global workforce. Chegg provides textbook and learning services to students worldwide and is one of the big success stories of the 2010s tech boom. However, it's also an example of the creeping effects of AI on some business models. Quite a number of the services Chegg provides (math checkers, interactive Q&A, etc.) might well now be partly automated by AI systems. Even though those AI systems may still be less accurate, a future where individuals have access to personal AIs might mean those systems are "good enough" for tasks like "help me with my homework". Welcome to the Diamond Age.
- Can Go AIs be adversarially robust? A nice paper (hat tip Alessio Lomuscio) on Go AI systems and whether they can be made robust against unusual attacks. The authors focus specifically on easy but unusual exploits that human players would spot but that lie outside the spectrum of normal good play. The paper includes some nice distinctions between static and adaptive adversarial attacks. Overall, they conclude that while we keep improving the average performance of ML systems, it remains vastly harder to improve the worst-case performance.
- Apple delays launch of AI-powered features in Europe, blaming EU rules. Ok, damn, Apple had to crash the party with at least one bit of news. Apple has said it plans to hold back the launch of three new iOS features in Europe (Phone Mirroring, SharePlay Screen Sharing enhancements, and Apple Intelligence). The reason given is that EU rules would force it to open access to these features to third-party developers, who might exploit the device through them; or, more broadly, that opening up these features adds risk. It's hard to evaluate this claim, but it reads more like a short-term strong-arm tactic. Apple's AI features are (in large part) a wrapper strategy that calls other models in the background, which seems fairly non-invasive. Other parts give Siri access across apps, but that has already been partly possible. The point Apple is trying to make is, "If you allow third-party app stores where we do not vet the apps, they could get up to no good." Surely, though, this is already true. I can't imagine Apple will want to hold onto a two-speed iOS world for long. Perhaps what it is angling for is exemptions from opening access to certain features to third-party app stores. Privacy vs. openness.
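As promised above, here is a small illustrative sketch of what "everything that is needed to train a neural network" looks like. This is my own toy example, not the code from Karpathy's tweet: a two-layer network trained on XOR with plain numpy, forward pass, loss, hand-written back-propagation, and a gradient-descent update. Everything beyond something like this really is efficiency.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR dataset: 4 examples, 2 inputs, 1 target each.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2-8-1 network: tanh hidden layer, sigmoid output.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros((1, 1))
lr = 0.5

for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)               # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    loss = np.mean((p - y) ** 2)           # mean squared error

    # Backward pass: the chain rule, applied by hand.
    dp = 2 * (p - y) / len(X)              # dL/dp
    dz2 = dp * p * (1 - p)                 # back through the sigmoid
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0, keepdims=True)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)                # back through the tanh
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0, keepdims=True)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(p.round(2).ravel())  # roughly [0, 1, 1, 0] after training
```

Swap in minibatches, better initialisation, an autograd engine and a GPU and you have a modern training stack; the core loop above is still all that is conceptually going on.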
Wishing you a great weekend.