Sunday Links: EU AI, too many Ghosts, and the Google AI platform play

A little late this week, apologies, but still in time for some Sunday reading, I hope. Here are this week's most interesting links:

  • EU opens door to reworking AI rulebook. Not much detail yet, but this is welcome news. The leading AI models have already easily breached the FLOP limits that the act puts in place. Regulation is important, but, as I've said before, there are serious problems with blanket limits on the technology itself rather than on its application. Regulating harmful and irresponsible uses is a much better approach.
  • Fintech founder charged with fraud after ‘AI’ shopping app found to be powered by humans in the Philippines. Using humans as ghosts in the machine makes a lot of sense for many applications while a team is still figuring out the use cases it needs to serve: the company takes requests, has humans complete them, and all of this creates training data for building automated systems. In this case, it seems the founders were not transparent about the extent to which this was happening and never managed to build the actual automation.
  • Redis Launches Vector Sets and a New Tool for Semantic Caching of LLM Responses. Vector databases have taken off with AI because they provide a convenient way to store query context and attached documents for an AI model (Retrieval Augmented Generation, or RAG, architectures often use vector stores). This trend has spawned a wave of new databases (Pinecone, Milvus, and Weaviate are some notable examples), and now Redis has entered the frame. Redis is a widely used cache and NoSQL solution. It likely won't steal users from newer solutions on the most complex use cases, but it might well hoover up a significant share of the more straightforward ones.
  • Scale enterprise search and agent adoption with Google Agentspace. It has taken a while, but the hyperscale players are now playing hard to capture AI infrastructure for themselves. Google made a number of strong announcements at its Google Cloud Next conference this week (here is the main keynote). In essence, Google is orienting its cloud and workspace offerings around the idea of holding the majority of corporate data and layering user-customizable AI on top. The strategy includes supporting many models, including Llama 4, connecting to data in other clouds, and potentially enabling communication with third-party agents. Microsoft no doubt has similar ambitions. This thrust of Google's work will likely be the most important part of its AI strategy: the prize is growing its position as the core of enterprise computing. Here, Google and Microsoft both have massive advantages over Amazon and Apple. Amazon has AWS, but there is no SMB usage angle to AWS itself; that only happens via apps built on AWS. Apple has no enterprise server-side services at all.
  • Writer unveils ‘AI HQ’ platform, betting on agents to transform enterprise work. On another part of the spectrum in the race to become the AI / compute layer for the enterprise are companies like Writer. They already provide AI productivity tools for enterprises, and with this announcement they are also making a play to be an AI coordination layer. It's good to see other contenders for the crown, but it seems likely that smaller players will have a hard time competing with Google and Microsoft in this space. Perhaps a new ecosystem will emerge on top of Amazon's AWS to compete.
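As an aside, the semantic-caching idea from the Redis item above is simple enough to sketch: store an embedding alongside each cached LLM response, and on a new query return the stored response if the nearest embedding is close enough. Here's a minimal, self-contained illustration; the `embed` function below is a toy stand-in for a real embedding model, and a production setup would keep the vectors in Redis or another vector store rather than a Python list:

```python
import math

def embed(text: str) -> list[float]:
    # Toy bag-of-letters embedding, purely for illustration -- a real
    # semantic cache would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalised, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

class SemanticCache:
    """Reuse a cached response when a new query is semantically close."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries: list[tuple[list[float], str]] = []

    def get(self, query: str):
        qv = embed(query)
        best = max(self.entries, key=lambda e: cosine(qv, e[0]), default=None)
        if best is not None and cosine(qv, best[0]) >= self.threshold:
            return best[1]  # cache hit: skip the expensive LLM call
        return None  # cache miss: caller queries the model and put()s the result

    def put(self, query: str, response: str):
        self.entries.append((embed(query), response))
```

The win over a plain key-value cache is that near-duplicate queries ("what is redis" vs. "what is redis?") hit the same entry, so repeated questions don't trigger repeated model calls.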

Wishing you all a wonderful Sunday!