Saturday Links: AI Soap Operas, Faster Chips and Generous Chat Bots
One day late, my apologies, but here are the links for the week:
- TV soaps could be made by AI within three years. News from a parliamentary committee hearing on arts and culture about the accelerating speed of content creation across industries. My guess is that this timeline is both right and wrong. Will it be possible to create a consistent, multi-part, ongoing entertainment as rich as (say) EastEnders with AI in three years? That's very unlikely. Will significant parts of the process, from ideation to storyboarding to writing and consistency checking, be automatable? That seems very much a yes. The interesting question is: will the quality of leading soaps go up and capture more viewer share, or will this turn into a flood of content where it's impossible to pick what to watch?
- The upcoming marketing spam wave. I don't normally link to a LinkedIn post, but this one nails an important topic and has a good discussion thread. Meryem Arik (co-founder/CEO at TitanML) highlights posts from a connection of hers that appear to be auto-generated, along with respondents calling this out. A range of tools enable this kind of automation (just as they do for email). The issue is not that bots are used but that the messages appear to come from the poster. There are arguments on both sides. My guess is that we'll see a cycle: 1) this will grow, 2) it will get clamped down on by LinkedIn users and LinkedIn itself (apps banned, accounts suspended), 3) people will call for labeling (which won't work), and 4) ultimately we'll have AI agents on the receiving end too, and agents will talk to agents to negotiate connections. Every public platform will go through this cycle in some form.
- The European AI Office. The office that will shepherd the enforcement of the European AI Act was announced this week and already has its official homepage up. There's an outreach email if you're interested in contributing. There are aspects of the law that I think will prove poorly thought out, but a lot will rest on guidelines and adjustments in application. So these folks have a tough job!
- Groq Inference Tokenomics: Speed, But At What Cost? There have been some great interviews with Groq founder Jonathan Ross (not that Jonathan Ross), where he explains the company's new chip architecture. In a nutshell, Groq uses on-chip SRAM and a relatively simple chip architecture to speed up inference for LLMs, with compiler software scheduling work to reduce the cycle and memory time the chip needs. It's fantastic to see this type of innovation coming so fast. The SemiAnalysis article pours a little cold water on the utility of the approach once you factor in power and other costs (see the rough sketch after these links), and argues that Google might be in the best position of all. Still, the Groq approach will no doubt be valuable for some needs. This is innovation at light speed.
- Air Canada chatbot offers a non-existent refund. In a legal decision, Air Canada was obliged to pay out a flight refund its AI chatbot had promised a customer, even though that refund was against policy. The ruling held that the chatbot was not a separate legal entity and that the company was therefore bound by its promises. This seems like the right outcome: organizations should be liable for the actions of the systems they deploy, and there clearly wasn't enough due diligence here. It may mean a slower rollout of similar solutions elsewhere, or reams of clickthrough legal copy trying to make chatbot users agree that the advice is non-binding (in which case, why use the chatbot?). It's nice to think that the AI saw the humanity of the situation and decided that the "right thing to do" was to offer a refund.
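On the Groq cost question, the trade-off is easier to see with a rough back-of-envelope sketch. The figures below are my own illustrative assumptions (roughly 230 MB of on-chip SRAM per chip, a 70B-parameter model at 8-bit weights), not numbers taken from the article:

```python
# Back-of-envelope: weights-only memory footprint vs. on-chip SRAM.
# All figures are illustrative assumptions, not vendor specifications.

SRAM_PER_CHIP_GB = 0.23    # assume ~230 MB of on-chip SRAM per accelerator
MODEL_PARAMS_BILLION = 70  # assume a 70B-parameter model
BYTES_PER_PARAM = 1        # assume 8-bit (1-byte) weights

weights_gb = MODEL_PARAMS_BILLION * BYTES_PER_PARAM  # ~70 GB of weights
chips_needed = weights_gb / SRAM_PER_CHIP_GB         # chips just to hold the weights

print(f"Model weights: ~{weights_gb:.0f} GB")
print(f"Chips needed just to hold the weights: ~{chips_needed:.0f}")
# -> roughly 300 chips before counting activations, KV cache, or redundancy,
# whereas a single big-memory GPU can hold the same weights in HBM. That chip
# count is where the power and cost comparison comes from.
```

The point isn't the exact numbers but the shape of the trade-off: SRAM buys enormous bandwidth and very low latency, at the cost of needing a large rack of chips to serve a single model.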
Wishing you a wonderful weekend.