Sunday Links: Self-Driving Cars, Watermarking, and the Need for Safety with Bots

Waymo raises more money, uncontrolled bots can be dangerous, and Notebook LM might be the future of content.

A little late this weekend, but here are the top links from the week:

  • Waymo Closes $5.6 Billion Funding Round From Alphabet, Others. Waymo takes the next step in the self-driving race. Along with Cruise, it's taking the "highly controlled" approach to self-driving, with limited areas and huge amounts of data, as opposed to the Tesla approach of very broad coverage with fewer sensors but less precise control. Ultimately I think both ends of the spectrum will have to come together, so it's exciting to see Waymo get more investment. The announcements don't say what the money will be used for, but hopefully it will help hone the platform for faster rollout to new locations and reduce the cost of the sensors and vehicles.
  • Google releases tech to watermark AI-generated text. Google is open-sourcing a version of the "AI text detection" tech that is embedded in Gemini. The system grades text for word frequencies and patterns that are more likely to come from AI; it works only on textual outputs (not images). While it's always good to see more open source, I really doubt these systems have much utility in the long run. We've already seen what happens when they flag false positives and people believe them (extremely distressed human students). With many LLMs and fine-tunes proliferating and the inevitable drift in human language usage, this looks more like a convergence path between human and AI text than any clear separation. Five years out, ten years out, I think we have to ready ourselves for a world where any text or even voice speech we encounter could be partly AI synthesis. (For the curious, a toy sketch of how this family of detectors works sits just after this list.)
  • Google's Notebook LM, which we covered in the last couple of weeks, starts cropping up everywhere. I'm a fan of, but a mediocre participant in, endurance sports, including ultra-running (someone just persuaded me to sign up for the 60km Sierra Nevada Ultra trail next April - I'll be cursing you all the way, Kevin!). In looking up a short bio of Courtney Dauwalter (one of ultra-running's greats), my top hit was a video posted two days ago from the YouTube channel 90 North, "How Courtney Dauwalter became ultrarunning’s GOAT". It's a pretty decent summary video, and without prior knowledge you might not realize that the narration is a vanilla output from Notebook LM - I recognized the voices and cadence immediately. Once Notebook LM allows tweaks to tone and different voices, it's hard to see how we won't have huge amounts of content like this out there. One also wonders about the IP stack here - no doubt this was created by uploading the transcripts of 5-10 other videos, her Wikipedia page, and not much else, with the footage clipped from existing videos. Attribution will be a significant problem in this world. If you're interested in Dauwalter's story (absolutely worth it), there are other, human-created videos to choose from.
  • One Thing is Clear: AI Is Leading to 1000+ New Competitors in SaaS(1). Short and sweet post by Jason Lemkin on SaaStr. I think this is a key point that isn't obvious to investors and startup founders. Yes, AI is a new broom: it has accelerated functionality creation for everybody, shifted UI paradigms for everybody, and enabled new ways to create value. But. But... everyone has those tools at the same time, so there's an explosion of competitors in every segment. Many will burn out, some will sustain for a time on venture money (and push up costs for others), and a few will hang on. SaaS incumbents have an advantage (data, reach), but they'll be assaulted on all sides. (1) SaaS = Software as a Service.
  • 14-Year-Old Was 'Groomed' By AI Chatbot Before Suicide: Lawyer. This is a sad story in which a 14-year-old suffering from depression took his own life, and many of his meaningful interactions were with chatbots. It has led to lawsuits against Character.AI, which provided some of those bots (in the form of fantasy characters). I'm not a legal expert, so I have no idea what merit the lawsuits have, but it's horrific to see cases where technology and online interactions lead to such harm. I'm not sure every harm can be prevented, but at a minimum any company providing "personal interaction" services of this nature should have stringent checks in place on content (a minimal sketch of what such a check could look like follows below). Humans are fragile, especially in our teens but at many times during our lives. Life is hard, we're socially wired to take cues from others, and when we express our deepest fears and feelings, those who hear them must take care. That includes AI. Any company enabling human-AI relationships should be putting a huge amount of effort into preventing harms like this. No doubt there are positive examples too (perhaps people talked out of harming themselves), but "do no harm" should be front and centre in any system design.
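
On the watermarking item: Google's released system (SynthID-Text) is more sophisticated than anything shown here, but the broad family of sampling-bias watermarks is easy to sketch. Below is a hypothetical, minimal detector in the spirit of the "green list" approach from the research literature: a watermarking sampler nudges the model toward tokens that a keyed hash marks "green", and the detector simply measures how far a text's green fraction sits above chance. The function names and hashing scheme are illustrative assumptions, not Google's code.

```python
import hashlib
import math

def watermark_score(tokens, green_ratio=0.5):
    """Return (green fraction, z-score) for a token sequence.

    Toy detector: the previous token seeds a hash that splits the
    vocabulary into 'green' and 'red' halves. A watermarking sampler
    biased toward green tokens leaves a statistical fingerprint this
    test can pick up; unwatermarked text should score near chance.
    """
    hits = 0
    pairs = list(zip(tokens, tokens[1:]))
    for prev, tok in pairs:
        digest = hashlib.sha256(f"{prev}|{tok}".encode()).digest()
        if digest[0] < 256 * green_ratio:  # pseudo-random green/red split
            hits += 1
    n = len(pairs)
    mean, var = green_ratio * n, n * green_ratio * (1 - green_ratio)
    z = (hits - mean) / math.sqrt(var)  # distance above chance, in std devs
    return hits / n, z

# Usage: whitespace tokens for illustration (real detectors use the
# model's tokenizer); a z-score above ~4 is strong evidence of a watermark.
frac, z = watermark_score("some sample text to score for a watermark".split())
```

The fragility is visible right in the sketch: paraphrase the text, swap the tokenizer, or mix in human edits and the statistic drifts back toward chance, which is why I doubt the long-run utility.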
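
And to make the "stringent checks" point concrete: here is a deliberately minimal sketch of the shape such a gate could take, sitting between the user and the persona model. Everything in it (patterns, response text, function names) is a hypothetical stand-in; real guardrails would be classifier-based, multilingual, and paired with human escalation, not a keyword list.

```python
import re

# Hypothetical crisis signals; a production system would use a trained
# classifier with human escalation rather than keyword matching.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid",          # suicide / suicidal
    r"\bself[- ]harm\b",
]

CRISIS_RESPONSE = (
    "I'm really sorry you're feeling this way, but I can't help with this. "
    "Please reach out to someone you trust, or a crisis line such as 988 "
    "(call or text, in the US)."
)

def safe_reply(user_message: str, persona_reply) -> str:
    """Screen a message before the character persona is allowed to answer."""
    if any(re.search(p, user_message.lower()) for p in CRISIS_PATTERNS):
        # Break character entirely: never role-play around crisis content.
        return CRISIS_RESPONSE
    return persona_reply(user_message)

# Usage with a stand-in persona:
print(safe_reply("tell me a story", lambda m: f"[dragon voice] {m}"))
```

The design point is that the check runs before the persona ever sees the message, so "staying in character" is never an option when someone is in distress.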

Wishing you all a good week!