Microsoft Build’s AI Announcement, OpenAI Scarlett Johansson Controversy, and Google’s Awful AI Overviews


In our first episode of the week, join Mike Kaput and Paul Roetzer as they explore Microsoft’s new AI-infused PCs, Copilot AI agents, and Team Copilot. We also look into the controversy surrounding GPT-4o’s Sky voice and its alleged similarity to Scarlett Johansson, and discuss Sundar Pichai’s thoughts on AI’s influence on search traffic and the accuracy of Google’s AI Overviews. Stay tuned for our special 100th episode coming out this Thursday!

Listen or watch below, and keep scrolling for show notes and the transcript.

Listen Now

Watch the Video

Timestamps

00:04:04 — AI Updates from Microsoft Build

00:14:09 — OpenAI / Scarlett Johansson Controversy

00:26:08 — Sundar Pichai Interview & The Future of Search

00:34:53 — Anthropic Researchers Discover More About How LLMs Work

00:41:32 — Google DeepMind Introduces Frontier Safety Framework

00:47:31 — European Union’s AI Act Passes

00:51:43 — US State Laws on AI

00:56:02 — Thomson Reuters CoCounsel

00:59:14 — OpenAI Deal With News Corp.

01:02:46 — Apple Podcasts AI Transcripts

01:05:46 — AI Financials/Funding Rapid Fire

Summary

Microsoft’s Announcements

Microsoft just wrapped up a huge week. First, on May 20, Microsoft held a special event to announce a new category of Windows PC designed for AI.

They’re called Copilot+ PCs and they’re built from the ground up to be infused with AI. These machines, built by a range of partners like Dell and Samsung, are powered by new, more powerful chips designed for AI. Copilot+ PCs also include a new AI feature called Recall, which tracks and recalls everything you have seen or done on your PC.

Second, Microsoft then kicked off its Build developer conference from May 21 – 23, where it dropped a ton of other AI announcements, including:

Copilot AI agents. Microsoft says you will soon be able to build Copilot AI agents that will act like virtual employees: It sounds like these agents will do tasks for you without prompting, like monitoring emails or doing data entry.

Team Copilot. This is a Copilot feature for Microsoft Teams that can manage meeting agendas and notes, moderate team chats, or assign tasks and track deadlines.

Phi-3-vision. The company rolled out a multimodal version of its Phi-3 small language model, which is small enough to work locally right on a device.

And the company also talked up GPT-4o. The new flagship OpenAI model is now available in Microsoft Azure, and the company stressed how much better it's making things, saying it's now 12X cheaper to make a call to GPT-4o than the original model and it's 6X faster in response time.

OpenAI / Scarlett Johansson Controversy

OpenAI’s recent high-profile release of GPT-4o, with improved voice capabilities, is now being overshadowed by a high-profile controversy about where that voice came from.

As part of the GPT-4o demo and announcement ten days ago, one of five voices for the assistant, Sky, was demoed. And some people claimed it sounded eerily like Scarlett Johansson, including Johansson herself.

The actress threatened legal action against OpenAI, saying she'd been approached last year by the company about hiring her to use her voice—an offer she declined. She also said that Sam Altman had contacted her agent two days before the demo, asking her to reconsider. (She did not.)

The day of the demo, Altman added fuel to the speculation this sounded a lot like Johansson by posting on X the word “her,” a reference to the film in which Johansson voices an AI assistant.

Altman and OpenAI denied this week that the voice is Johansson's, saying it was never intended to be hers and came from a voice actor they hired before any outreach to Johansson.

As of May 22, The Washington Post is reporting that indeed a voice actress was hired last June to create the Sky voice "months before Altman contacted Johansson, according to documents, recordings, casting directors and the actress's agent."

The agent said the actress hired confirmed that neither Johansson nor the movie “Her” was ever mentioned by OpenAI.

Sundar Pichai Interview & The Future of Search

Google CEO Sundar Pichai just answered some tough questions from The Verge on the Decoder podcast. The Verge’s editor-in-chief Nilay Patel asked Pichai about some uncomfortable topics, like AI’s impact on search traffic and the company’s move to include AI Overviews in results.

At one point, Patel even handed Pichai his phone, showing him certain AI-powered search results, and asked him about how valuable he found the overall experience.

While Pichai handled questions adeptly, Patel’s direct demonstration of Google’s product in action signaled growing marketer and business uncertainty about AI’s impact on search traffic and visibility in search results.

That frustration appears to be growing. AI Overviews have only been live for about 10 days since Google I/O, and already we're seeing extremely negative reviews from Google watchers about the accuracy of these results.

Links Referenced in the Show

  • AI Updates from Microsoft Build
  • OpenAI / Scarlett Johansson Controversy
  • Sundar Pichai Interview & The Future of Search
  • Anthropic Researchers Discover More About How LLMs Work
  • Google DeepMind Introduces Frontier Safety Framework
  • European Union’s AI Act Passes
  • US State Laws on AI
    • Colorado Enacts First Comprehensive US Law Governing AI Systems
    • SB 1047 Moves Forward
  • Thomson Reuters CoCounsel
  • OpenAI Inks Deal With Wall Street Journal Publisher News Corp.
  • Apple Podcasts AI Transcripts
  • AI Financials/Funding Rapid Fire
    • Suno Raises $125M
    • Humane Exploring Potential Sale
    • NVIDIA Earnings


Today’s episode is brought to you by Piloting AI, a collection of 18 on-demand courses designed as a step-by-step learning path for beginners at all levels, from interns to CMOs. Piloting AI includes about 9 hours of content that covers everything you need to know in order to begin piloting AI in your role or business, and includes a professional certification upon passing the final exam.

You can use the code POD100 to get $100 off when you go to www.PilotingAI.com.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: There's this almost like sense of invincibility right now with AI companies where they're just going to do whatever they have to do to accelerate innovation and

[00:00:09] Paul Roetzer: the models, and there are going to be some bumps along the way that they're going to fight through.

[00:00:14] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I’m the founder and CEO of Marketing AI Institute, and I’m your host. Each week, I’m joined by my co host, and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:44] Paul Roetzer: Join us as we accelerate AI literacy for all.

[00:00:51] Paul Roetzer: Welcome to episode 99 of the

[00:00:53] Paul Roetzer: Artificial Intelligence Show. I am your host, Paul Roetzer, along with my co-host, Mike Kaput. We are [00:01:00] recording this on Friday, May 24th at 1 p.m. due to the Memorial Day weekend. We decided we were going to take the day off

[00:01:08] Paul Roetzer: Monday, so

[00:01:09] Paul Roetzer: we're getting this in early. So yeah, if there's any crazy Friday night news drops before the holiday weekend about AI, you have to wait till episode 101. Because Thursday, May 30th, as a reminder, is a special edition episode 100. So we're doing our regular episode 99, this one, on Tuesday. And then we're dropping episode 100 on Thursday. That episode is dedicated to answering all of your questions. We've had dozens of questions submitted by our listeners.

[00:01:41] Paul Roetzer: Mike, I think last time you told me it was over 60 questions we've had from users, which is awesome. So Mike and I are going to try and get to as many of those as we can in about an hour or so.

[00:01:52] Paul Roetzer: and so tune in May 30th, and thanks to everyone for not only listening and being part of the journey, but to anyone who's taken the time to submit a question. [00:02:00] We'll do our best to get to as many as we can.

[00:02:01] Paul Roetzer: And then we’ll try and find some ways to get answers to the rest of them. Maybe we’ll do a follow up episode and answer some more. I don’t know. So, that’s coming up again, May 30th. All right. Today’s episode is brought to us by Piloting AI, a collection of 18 on demand courses designed as a step by step learning path.

[00:02:18] Paul Roetzer: For beginners at all levels, from interns to chief marketing officers. more than a thousand professionals have registered for the certification series since we first launched it in December, 2022. The fully updated Piloting 2024 series includes more AI technology demos and vendors, revised templates, and a brand new generative AI 101 course.

[00:02:41] Paul Roetzer: Mike and I refreshed and recorded all 18 courses in January 2024, right at the end of January, actually. Piloting AI includes about nine hours of content that covers everything you need to know in order to

[00:02:53] Paul Roetzer: begin piloting AI in your role or business, and includes a professional certification upon passing the final exam.[00:03:00]

[00:03:00] Paul Roetzer: It’s not just for individual learners. You can also get a team license. we’ve had a lot of top companies put their teams through the courses as a way to accelerate their team’s understanding and adoption of AI. So if you’re interested in that, just reach out to us. If you’re looking at team licenses, I’ll be happy to talk to you.

[00:03:16] Paul Roetzer: And so you can go to pilotingai.com and learn all about the course. And you can use a promo code POD100, that's P O D 100, for $100 off. So again, pilotingai.com. And then, as I mentioned earlier, episode 100 is coming up. You probably don't want to have people submit questions at this point, right? It's probably a little late

[00:03:38] Mike Kaput: in the game for questions. It's probably a little late, but yeah, if you didn't get your questions

[00:03:42] Paul Roetzer: in, wait till next time now. Don’t worry.

[00:03:44] Mike Kaput: We’ll, we will

[00:03:45] Mike Kaput: answer again

[00:03:45] Paul Roetzer: sometime. Yeah.

[00:03:46] Paul Roetzer: so if you still want to use the link we talked about before, you can go and

[00:03:49] Paul Roetzer: put them in, but we're, we have a lot of questions to get through already. All right, so special Friday recording edition, plenty to talk about though. Even though it was a shorter week for us [00:04:00] to do this, there was a lot going on. So Mike, let's go ahead and get rolling.

[00:04:04] 

[00:04:04] AI Updates from Microsoft Build

[00:04:04] Mike Kaput: Sounds good. So, Paul, last week we covered some huge AI announcements and events from OpenAI and Google, but now it is Microsoft’s turn because they just wrapped up a huge week as well.

[00:04:17] Mike Kaput: So, first, they kicked things off on May 20th with a special event to announce a new category of Windows PC designed specifically for AI. These are called Copilot Plus PCs, and they're basically built from the ground up to be infused with AI. Now, these are machines built by a bunch of different partners like Dell and Samsung that are powered by new, more powerful chips that are designed for AI.

[00:04:42] Mike Kaput: And they both connect to large language models that are running in Azure Cloud. And they have small language models that run locally right on the device. Copilot Plus PCs also include a new AI feature called Recall, which actually tracks and recalls everything [00:05:00] you have seen or done on your PC. Now this AI powered perfect memory is built and stored entirely on your device, locally and private to you, according to Microsoft.

[00:05:12] Mike Kaput: So, as part of this launch, the newly, let's call him, acqui-hired AI leader Mustafa Suleyman posted about the fact that Copilot will now see, hear, speak, and help in real time on these devices. And he showed off a really cool video where a user interacts via voice with Copilot to essentially work with it to play the game Minecraft.

[00:05:35] Mike Kaput: Suleyman said, quote, Soon your AI companion will start to live life alongside you, whether playing Minecraft or helping you navigate life's most difficult challenges. So as if that weren't a big enough deal, Microsoft then kicked off its Build developer conference from May 21st to the 23rd, and it dropped a ton of other AI announcements.

[00:05:56] Mike Kaput: These included things like Copilot AI [00:06:00] agents. Microsoft says you'll soon be able to build Copilot AI agents that basically

[00:06:04] Mike Kaput: act like virtual employees and do tasks for you without prompting them. These are going to be available later this year in preview. One of these appears to be something called Team Copilot, which is a Copilot feature for Microsoft Teams that can manage meeting agendas and notes.

[00:06:20] Mike Kaput: Moderate team chats and assign tasks and track deadlines.

[00:06:24] Mike Kaput: Microsoft also rolled out Phi-3-vision, the multimodal version of its Phi-3 small language model. So these are these models that are small enough to actually work locally right on device.

[00:06:39] Mike Kaput: The company also talked up GPT-4o, which we covered on previous podcast episodes. This is the new flagship OpenAI model. It's now available in Microsoft Azure. And the company

[00:06:51] Mike Kaput: stressed how much better it is making things, especially for developers. It's now 12x cheaper to make a call to GPT-4o than the [00:07:00] original model. And it's 6x faster in response time. And Sam Altman himself even dropped by the event to talk about the future of AI. He gave some hints about

[00:07:12] Mike Kaput: where everything is going, at one point saying, quote, the most important thing, and it sounds like the most boring thing I can say, the models are just going to get smarter, generally, across the board.

[00:07:24] Mike Kaput: So paul, there’s a lot to unpack here, but let’s maybe first talk about

[00:07:27] Mike Kaput: the Copilot Plus PCs, and especially this recall feature.

[00:07:32] Mike Kaput: What were your thoughts on that? It sounds like the Recall feature is very similar to some of the other AI startups we've seen, like Rewind, that are recording your entire life.

[00:07:43] Paul Roetzer: Yeah, that was the, you know, certainly one of the ones that jumped out. The, just, real quick, like, the way it works is, this is right from the Microsoft site, Recall takes snapshots of your screen and stores them in a timeline. Snapshots are taken [00:08:00] every five seconds while content on the screen is different from the previous snapshot.

[00:08:06] Paul Roetzer: So, in essence, like, everything you’re doing, every, you know, Private message, every video, every page, like everything is just all snapshotted. And yeah,

[00:08:15] Paul Roetzer: stays local and

[00:08:16] Paul Roetzer: in theory secure, but it's everything you do, everything you see, and then the way it works is because they have snapshots where these AI models are multimodal and they can understand text and images and videos and everything, like it's able to discern something.

[00:08:32] Paul Roetzer: So then if you want to ask a question, Like, if I say, what was I wearing on, on May 24th when Mike and I, recorded episode 99, just

[00:08:42] Paul Roetzer: just like,

[00:08:43] Paul Roetzer: boom, instantly, it’s like, oh, you’re wearing the black shirt. cause it knows, it can see a snapshot from that moment and knows what I’m referring to. So, you know, it’s a slippery slope. I, I, I think,

[00:08:55] Paul Roetzer: this gets into this idea of, like, the technology’s gonna be there. [00:09:00] Are you willing to accept what you have to give up to get the benefit? And this is that law of uneven AI

[00:09:06] Paul Roetzer: distribution that I wrote about last year. And so as a consumer, you'll have this choice. Am I, am I going to allow my computer to take screenshots of everything I'm doing every five seconds?

[00:09:18] Paul Roetzer: As an employee, if we go over to the corporate side of this, you're not going to have the choice, I'm guessing. This is something IT is going to turn on or not. Um, if they turn it on,

[00:09:32] Paul Roetzer: everything you do on your corporate computer will be screenshotted every five seconds. And, I guess it stays local on your device, but they can get into your device and see everything you do. Like companies do this already. Like companies monitor people’s use of their corporate computers already.

[00:09:47] Paul Roetzer: I just feel like this goes to a whole nother level. Cause now I can interact with all those recordings in real time through natural language. So I think it's going to get messy. Like I, I think, you know, I keep waiting for like, what, what are [00:10:00] the things that are pushed onto people where people start resisting

[00:10:07] Paul Roetzer: AI. And this is one of those features where you start to see, like, I could see some people not liking this. And maybe this is one of the things that starts to get some people to turn against the use of this technology in corporations. but I also think it could just be that there’s this initial kind of visceral reaction, like, I don’t like it.

[00:10:29] Paul Roetzer: And then six months later, we forget it happened and it's just part of our lives. Probably more realistic. I mean, this is how Meta has built their whole company. Like, Facebook just pushes forward things people hate, and they take some bad PR for it. And then three months later, we're on to the next thing that we hate, and we've forgotten that they did the other thing.

[00:10:44] Paul Roetzer: And you’re just like, You kind of assume we go through these news cycles and these kind of brain cycles where we get upset about something and then we move on and it’s part of our life. My guess is Apple does something like this with the iPhone. Like I could see some [00:11:00] similar technology happening on your phone that enables you to talk to it about everything you’ve done, but to do that, it has to know everything you’ve done, have seen it, cataloged it in some way.

[00:11:11] Paul Roetzer: So I don't know, like I'm not racing to turn this feature on when I have it. I don't do anything on my computer I'd be worried about it catching, but it's just the idea of having that kind of Big Brother screenshot every five seconds. That's weird to me.

[00:11:23] Mike Kaput: Yeah. Did any other of the build announcements kind of jump out at you as particularly important or something to keep an eye on?

[00:11:32] Paul Roetzer: I think they just reinforced the trends we’re seeing that everyone is racing to build multimodal, you know, OpenAI, Google, Microsoft, obviously the main ones

[00:11:43] Paul Roetzer: we're hearing about. Anthropic, we haven't heard as much about multimodal yet, but it's, it's coming. So they're all trying to build

[00:11:49] Paul Roetzer: AI that can see and create all these different modalities, text, image, video, audio, code.

[00:11:56] Paul Roetzer: So that is very apparent in what Microsoft is doing. They're doing it on the [00:12:00] back of OpenAI's models, but also increasingly their own models, as we've talked about in recent episodes. And then this whole positioning of AI teammates, or agents, or co-workers, however they're going to say it, I think it's a more

[00:12:17] Paul Roetzer: digestible way to think about agents is to call them teammates or co workers.

[00:12:22] Paul Roetzer: it, I’m going to say the word wrong, anthropomorphizes? Yeah. Did I say that right?

[00:12:28] Mike Kaput: Anthropomorphizes, yeah. Yeah.

[00:12:30] Paul Roetzer: but I think it's better, people don't think about it as a replacement for them if it's, if it's better messaged to them as a teammate.

[00:12:40] Paul Roetzer: And so I think that there’s some branding power here that we saw already with Google going this direction.

[00:12:48] Paul Roetzer: certainly Copilot is the same idea. It's like, hey, it's just here to help you. So I think that they're all kind of going in this direction of not only positioning it as a teammate, [00:13:00] but building the capabilities for it to become that. And that, again, was really apparent. The other thing is this idea of being on the edge or on device.

[00:13:12] Paul Roetzer: That you’re going to be able to do these things if you’re not connected to the internet. That the models are going to get small enough. You know, efficient enough. The chips are going to get efficient enough within the devices. I’m sure we’ll hear more about this with Apple’s conference in June, that you’re going to be able to do a lot of this AI capability on your device in the next 12 months.

[00:13:30] Paul Roetzer: Like right now, it's still hard to get the full power of generative AI on device without having to go out to the cloud to process things. But I think the advancements in

[00:13:40] Paul Roetzer: chips and the advancements in the algorithms that are running them, making them more efficient, more battery efficient, more efficient from a compute standpoint.

[00:13:48] Paul Roetzer: I think that 12 to 18 months from now, Many of the things that we currently need to go to the cloud to do, we’re going to be able to do on our devices, whether it’s your PC, your tablet, your phone, [00:14:00] it’s, they’re all going this direction, they’re all talking about the same stuff. And I think that was more than anything to me, just sort of validated that this is where they’re all going.

[00:14:09] OpenAI / Scarlett Johansson Controversy

[00:14:09] Mike Kaput: Alright, buckle up for this next one. So, we just got this recent high profile release from OpenAI, GPT-4o,

[00:14:20] Mike Kaput: where it showed off all these really improved voice capabilities, and unfortunately, this is now being overshadowed by an equally high profile controversy. And the controversy

[00:14:31] Mike Kaput: is about where these voice capabilities came from. Because as part of the GPT-4o demo and the announcement a couple of weeks ago on Monday, May 13th, one of the five voices for this assistant was demoed. It's called Sky, and it's actually a voice that OpenAI has had in the model previously, but this time around, it's actually much more expressive and emotive.

[00:14:57] Mike Kaput: So they demoed having a conversation [00:15:00] with this voice. And a bunch of people were like, that voice sounds quite familiar. And they said it sounded eerily like the actress Scarlett Johansson. One of the people who thought this was Scarlett Johansson herself. She said it sounded

[00:15:14] Mike Kaput: so like her that she’s threatened legal action against OpenAI because

[00:15:20] Mike Kaput: she said she'd been approached last year by the company about hiring her to use her voice for this piece, the AI assistant. She declined that offer. She also said Sam Altman had contacted her agent two days before the demo, asking her to reconsider. She did not. The day of the demo of this voice, Altman added fuel to all the speculation that this sounds a lot like Johansson by posting on X simply the word "her."

[00:15:50] Mike Kaput: A reference to the film in which Johansson voices an AI assistant. Now Altman and OpenAI have denied this week that the voice is [00:16:00] Johansson's. They said it was never intended to be hers, and it came from a voice actor they hired before they even talked to her. The company went on to say, out of respect

[00:16:09] Mike Kaput: for Ms. Johansson, we have paused using Sky's voice in our products. We are sorry to Ms. Johansson that we didn't communicate better.

[00:16:16] Mike Kaput: And as of May 22nd, The Washington Post has reported that indeed a voice actress was hired in June of last year to create

[00:16:24] Mike Kaput: the Sky voice, quote, months before Altman contacted Johansson, according to documents, recordings, casting directors, and the actress's agent. The agent said the actress hired confirmed that neither Johansson nor the movie Her was

[00:16:38] Mike Kaput: ever mentioned by OpenAI. Now, Paul,

[00:16:41] Mike Kaput: first, I kind of want to talk about the story itself, then maybe dive a bit into the implications here, because, like, at least according to the reporting from the Post, OpenAI does not appear to have stolen Johansson’s voice outright, but this whole thing feels really shady [00:17:00] regardless, and is getting a lot of backlash.

[00:17:01] Mike Kaput: Like, what happened here, in your opinion?

[00:17:04] Paul Roetzer: So on episode 98, we actually sort of teased that this was probably happening. So if you recall, in episode 98, I mentioned that

[00:17:13] Paul Roetzer: at 2 a.m. last Monday, so the day we were recording this, OpenAI had tweeted a blog post about how they chose these voices and that they were removing Sky for, you know, these reasons.

[00:17:26] Paul Roetzer: and so at the time I said, well, this seems weird. Like there must be some legal reason why they're doing this, because if they're saying they didn't use her voice, which they didn't, like, no one is claiming that they, like, synthesized her voice. They're just saying they picked someone who sure sounds a lot like her, and it would appear intentionally based on Sam's tweets.

[00:17:47] Paul Roetzer: So, yeah, it was, it was funny because I was at, like, this, we talked on Monday morning. I said, like, this might be some legal thing. And then that night I’m at my, my son’s got like a baseball camp. And so I’m just like sitting there and I happened to, [00:18:00] you know, check Twitter. And sure enough, I see the statement from the NPR tech reporter, Bobby Allen.

[00:18:05] Paul Roetzer: And he said, yeah, the agent just sent me exclusively this statement from Scarlett Johansson. So this is like Monday night at, I don't know, like 7:15 p.m. Eastern time. and

[00:18:15] Paul Roetzer: so it’s one of those, you start seeing everybody’s retweeting it, and I’m like, but it’s just a screenshot, like, where did this come from?

[00:18:20] Paul Roetzer: You’re like, this can’t be real. And then, like, no, within 20 minutes, he verified this is, this is very real. And then some other media outlets kind of, you know, verified it as well. So I’m just gonna

[00:18:30] Paul Roetzer: I’m going to read a quick excerpt from her statement, because I think it’s really important. There’s two parts in here I want to call out.

[00:18:35] Paul Roetzer: So it said, Last September, I received an offer from Sam who wanted to hire me to voice the current GPT 4 system. He told me he felt by my voicing the system, this is the part I boldfaced, I could bridge the gap between tech companies and creatives. and help consumers to feel comfortable with the seismic shift concerning humans and AI.

[00:18:57] Paul Roetzer: He said he felt that my voice would be [00:19:00] comforting to people. He went on to say, after much consideration for personal reasons, I declined the offer. Nine months later, my friends, family, and general public all noted how much the newest system Sky sounded like me. When I heard the release demo, I was shocked.

[00:19:13] Paul Roetzer: Shocked, angered, and in disbelief that Mr. Altman would, bold faced again, I’m, I bold faced, pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.

[00:19:28] Paul Roetzer: And then you mentioned that he even insinuated it was in reference to her with the, with the tweet, "her."

[00:19:34] Paul Roetzer: and so my whole take on this, like that night, I put on LinkedIn, like, like, this story, and I said, illegal or not, it is a really bad look for OpenAI. Like they, they've had a number of unforced errors lately where they're just, and I think, and again, like there's so much about OpenAI I love, but my comment to someone on that LinkedIn thread was like, this is either incompetence or arrogance.

[00:19:58] Paul Roetzer: And I don’t think they’re [00:20:00] incompetent.

[00:20:02] Paul Roetzer: At some point, it’s just like, we’re going to do whatever we want. I’ll call you and give you one more chance, but like, I’m going to drop this with or without you. And then you drop it and then you got to pull it back. Even though you’re claiming you legally did it, which they probably did. Like it probably is a voice actress

[00:20:21] Paul Roetzer: who, quote unquote, just happens to sound exactly like her. and they can say, Hey, we hired this voice actress months before we reached out to Scarlett Johansson. Yeah. But you mean to tell me you didn’t have the idea to have a voice actress that sounded like her before you reached out to Scarlett Johansson?

[00:20:40] Paul Roetzer: You’ve probably been thinking about this since day one of the voice models. So. My, my big picture take here

[00:20:46] Paul Roetzer: is for a company that wants our trust, that, that wants us to believe they can shepherd in AGI safely, and wants all of our corporate and personal data, [00:21:00] they

[00:21:00] Paul Roetzer: are making a lot of missteps. So, from the Sora training data, which they still won't admit to, that they used YouTube videos to do it, even though Sundar said they did last week, like, we know they did it, and they

[00:21:15] Paul Roetzer: won't just admit it. The superalignment team gets dissolved last week, that was supposed to shepherd in the safe AGI and superintelligence.

[00:21:24] Paul Roetzer: Then last week, a day later, Sam gets blasted because there were rumors, which were then validated, that when employees leave OpenAI, they have to sign these insane non-disparagement agreements

[00:21:39] Paul Roetzer: that say you will not say anything bad about OpenAI and you won't even admit that this document exists, that this agreement between us, you not saying anything bad, even exists.

[00:21:49] Paul Roetzer: And if you do, we will take back your vested options. So you have employees leaving OpenAI who stand to lose millions of dollars in options they have already earned.

[00:21:59] Paul Roetzer: [00:22:00] Sam comes out and says, no, no, no, this was a big mistake. I didn’t realize it was happening. It’s bad on me as a CEO. Well, then documents come out that Sam’s the one that signed the thing.

[00:22:07] Paul Roetzer: Like it's, it's a legal, like two-page document and his signature is on it three different times. So maybe he just doesn't read legal documents. I

[00:22:14] Paul Roetzer: don’t know. Again, the big picture here is not to nitpick one individual thing, did they make a bad choice? They’re making a collection of really apparently bad PR choices, bad business choices that just don’t bode well for a company who is asking for so much trust from society.

[00:22:36] Paul Roetzer: That’s my, that’s my biggest concern here is like, I could listen to arguments about each one of these on, on their own, but collectively it’s starting to just look really bad and I think they need to get this figured out fast. If they don’t want the government like sticking their head in way faster. And to, you know, you had mentioned like, why does this matter?

[00:22:57] Paul Roetzer: That's what it leads to is like, we're supposed to trust them. [00:23:00] But the potential for like employee blowback, societal blowback, government intervention becomes greater and greater, the less we trust the organizations that are pushing the

[00:23:12] Paul Roetzer: AI at us. And so that’s, yeah, that’s kind of like my, my macro level take here is it seemed in isolation, just like bad PR. But when you take it in totality, the last like 90 days, there’s a lot going on here that doesn’t look great

[00:23:26] Mike Kaput: Yeah, I want to quickly circle back to that LinkedIn post you mentioned, where you had posted about the case itself and everything going on, and I looked through it, it got so many comments,

[00:23:38] Mike Kaput: and honestly I was not remotely surprised because it’s a controversial issue, but I expected far more people just being like, well, unfortunately, innovation is messy or something like that. So many people were so negative about OpenAI. Like, do you, I mean,

[00:23:54] Mike Kaput: I mean, I know it's just one little subset of the internet, but I got the feeling [00:24:00] we are starting to see a wider backlash almost, or people are really

[00:24:06] Paul Roetzer: jaded.

[00:24:07] Paul Roetzer: I think, and we talked about it on the last episode, like you can start to feel it a little bit more. We'll get into the AI Overviews thing, but like, Google AI Overviews did not hit well,

[00:24:20] Paul Roetzer: like, so you're just, again, like, it's all qualitative at this point. I can't look at a trend data chart and say, yeah, it's definitely happening.

[00:24:30] Paul Roetzer: But you start to see these individual examples where, well, that AI Overviews thing isn't going well, boy, the "her" thing really pissed some people off, and yeah, you do start to sense a bit of a trend

[00:24:46] Paul Roetzer: toward some backlash. I'm not going to say, like, we're reaching a tipping point of any kind, but I agree.

[00:24:52] Paul Roetzer: Like one comment from, you know, on there is like, the plagiarists do plagiarism, like, breaking news, and it's like, okay, like that, that's, you know, one perception. [00:25:00] But I think it was a lot of like that unforced error approach of like, yeah, this is just really stupid. Like, why would you do this? And again, I,

[00:25:12] Paul Roetzer: I, I don't want to, like, I tried to think about this, like, is it arrogance? Is it incompetence? I, I really just think they, they just think they can get away with stuff. Like they can do it. They can go take YouTube videos and, you know, apologize later or not, or pay a fine later or not.

[00:25:29] Paul Roetzer: Like, I like there's this almost like sense of invincibility right now with AI companies where they're just going to do whatever they have to do to accelerate innovation and

[00:25:43] Paul Roetzer: the models, and there are going to be some bumps along the way that they're going to fight through. Again, it's, it is the Facebook mentality for a decade plus.

[00:25:53] Paul Roetzer: This is, this is what Zuckerberg did and is what Musk does. So I think that [00:26:00] is a, a bit of a sense of invulnerability from some of these tech leaders that are just going to do what they got to

[00:26:07] Paul Roetzer: do.

[00:26:08] Sundar Pichai Interview & The Future of Search

[00:26:08] Mike Kaput: Speaking of bumps in the road, Google's CEO Sundar Pichai just had to answer some tough questions from The Verge on a recent Decoder podcast.

[00:26:19] Mike Kaput: So this is a really great interview that we're gonna cover in kind of our third topic today, and we've got a couple things in here, but

[00:26:25] Mike Kaput: it was an interview that totaled about 39 minutes, and in it, The Verge's editor-in-chief, Nilay Patel, asked Pichai about kind of some uncomfortable topics. Like, it was a very civil interview, and like, everyone, you know, had some informative things to say, but he was kind of pushing him a bit on AI's impact on search traffic.

[00:26:43] Mike Kaput: And the company’s move

[00:26:45] Mike Kaput: to include AI Overviews in results, and at one point, Patel even handed Sundar his phone and was like, asking him how valuable he found some of these AI Overviews, and was kind of pressing him on that, and [00:27:00] that really stood out to me during this interview because, again, while it was totally civil, it was not like at all like,

[00:27:06] Mike Kaput: you know, targeting anyone. His line of questioning did kind of show like this frustration and uncertainty that it sounds like marketers and businesses are starting to have about how AI is going to disrupt

[00:27:20] Mike Kaput: traffic from search and the value of actually creating content that appears in search results. And we kind of got a little more anecdotal evidence about that, because AI Overviews have only been live for like 10 days since Google

[00:27:34] Mike Kaput: I/O, and people are already very unhappy with them. Just this morning, Barry Schwartz published on Search Engine Roundtable a running directory of embarrassing AI Overviews that are kind of funny to look at, but in aggregate paint a picture that this feature is getting facts very, very wrong. And there was one screenshot that actually

[00:27:57] Mike Kaput: went viral where AI Overviews said people should add [00:28:00] glue to their sauce because it would prevent the cheese from slipping off their pizza.

[00:28:05] Mike Kaput: So, Paul, we’re going to break this into kind of two pieces. First, kind of what questions or answers in this interview jumped out to you the most?

[00:28:13] Paul Roetzer: I would definitely

[00:28:15] Paul Roetzer: say if this is, like, this organic traffic impact and AI Overviews is an interesting component of the story to you, go listen to the full podcast. He did do a phenomenal job of asking very difficult questions. And so I'll just, I'll read real quick, like, what I'd put on LinkedIn, but it's the lead-in to the podcast.

[00:28:35] Paul Roetzer: This is like on the, on the podcast page. And so what I said was, we still don't have clear answers, but Patel of the Decoder podcast does ask Sundar the questions that are on everybody's mind. So here's what the podcast page says. If AI-powered search results do a good job of just delivering people answers to their questions, why would anyone go to the website?

[00:28:55] Paul Roetzer: And if we all stop going to websites, what’s the incentive to put new content on the web? And what’s [00:29:00] going to stop shady content farms from flooding the web with AI generated spam to try and game these systems? And if the incentives to make good content on the web go down, and the incentives to flood the web with AI spam go up, what are all these bots going to summarize when people ask them questions?

[00:29:15] Paul Roetzer: AI is a big idea. It's gonna cause some big problems. And so, my take was, like, Sundar gave answers, like, gave strong points of view, at times irritated, but like, he was, he was defensive. Like, you know, but a lot of the

[00:29:36] Paul Roetzer: kinda came down to: disruption is messy. It upsets people and it upsets things, ways of doing things, but we're committed to seeing this through.

[00:29:47] Paul Roetzer: And so when he gives him his phone, Patel gives him his phone, and he’s like, well, what about this? Or what about this example of someone’s traffic going to zero because of your changes? All he says is, [00:30:00] I can’t get into individual cases. In the aggregate, what we are doing,

[00:30:05] Paul Roetzer: we believe is the right path, basically. So there aren't a lot of answers still, but to your point, like the AI Overviews, I'm actually going to be shocked if they don't turn it off in the next like week. It's bad. Like, Google's brand is giving you authoritative, trustworthy answers to things. These are not that. Like, you cannot trust the AI

[00:30:32] Paul Roetzer: overviews right now. They're pulling from The Onion as a source. Like you can go look at these screenshots. The CEO of The Onion showed these, like, you know, geologists say to eat like three rocks a day, something like

[00:30:43] Mike Kaput: that.

[00:30:44] Paul Roetzer: I mean, the responses are crazy and it's, you just look at it. And I, I know I replied to somebody on Twitter and I said, it's almost like the search

[00:30:54] Paul Roetzer: quality team was not involved in the AI Overviews product at all. Like, [00:31:00] yes, it can summarize things. Yes, it can go

[00:31:02] Paul Roetzer: find things and like create those summaries and put the links in. But whatever the search quality algorithms are that determine trustworthy sources, it doesn’t appear to be in the product.

[00:31:12] Paul Roetzer: And that's weird to me. Like, that's like such a, a huge miss, it would seem. And I get that we're in this like race and everybody's got to get their products out to market and you gotta, you can't just like announce things at conferences. Sometimes you got to actually like

[00:31:27] Paul Roetzer: put the product out then. And it seems like this thing is just out into the world way too early, but it's weird because Search Generative Experience has been in the labs for like eight months.

[00:31:38] Paul Roetzer: It wasn't like they just turned this on and they hadn't tested it. So it's kind of like when their image generator went live and was down within 72 hours because it was doing crazy stuff. It's like the same thing. It's like Groundhog Day where they

[00:31:52] Paul Roetzer: put out this product and people break the thing in 12 hours. So that, my biggest concern is we're now [00:32:00] like in the territory of what Google's brand is known for, and if they lose authoritative, trustworthy search results, it becomes a really slippery slope. And as much as I've said in recent episodes, like, I don't know

[00:32:15] Paul Roetzer: if Perplexity has a moat, it's sure as hell a better product right now. Like if I go into Perplexity, I actually generally trust the results pretty well. And I'll go and check, click the citations. I'll verify things, but I have, I have not had those issues with Perplexity.

[00:32:31] Paul Roetzer: and I've definitely had them myself with AI Overviews, where it's like, I don't even trust it. Like, I'm just going to scroll down and go find the actual sources.

[00:32:39] Paul Roetzer: So, yeah, it’s, it’s weird. Like, I can’t, I’m still so bullish on Google and their long term strategy and value here, but they just keep, it’s kind of like OpenAI where they’re having the PR missteps. Like, Google’s having product missteps.

[00:32:53] Paul Roetzer: Like everybody's moving too fast and no one's like, getting it. Who's getting it right?

[00:32:58] Paul Roetzer: Like, now like, [00:33:00] who's getting this right right now? Like Perplexity is, like, that's the one thing I'll give them credit for. Like the product works. Like it's a good product. Google should go buy them like tomorrow and just whatever they're doing, just build it into AI Overviews.

[00:33:13] Mike Kaput: You know, it's interesting. There was a recent post on X from Ethan Mollick, and you know, I'll link it, but basically

[00:33:20] Mike Kaput: I'll paraphrase it. He was like, it is a weird choice that the first big move outside of, you know, Gemini into AI that all the public gets to see is for something that needs to be highly accurate. Because these models still are not there yet.

[00:33:34] Mike Kaput: Like creativity, ideation, those are all really valuable. Why did you pick the thing that basically has to be 99.9 percent right for people to respect the outcome and the output of it? And to

[00:33:44] Paul Roetzer: not hurt your brand,

[00:33:45] Mike Kaput: Exactly. Like what

[00:33:46] Paul Roetzer: you're known for. I agree. And that's the same argument for not, like, racing to do this in customer service, right?

[00:33:51] Paul Roetzer: you can’t get that wrong. Like you, it’s almost like you need like a two by two where it’s like the importance of trust

[00:33:57] Mike Kaput: and

[00:33:58] Paul Roetzer: then the ability of the AI. [00:34:00] And if trust needs to be high, don't do it.

[00:34:04] Paul Roetzer: Like, don't just go from 0 to 100 and like, put the thing out there if you're gonna erode trust so quickly.

[00:34:12] Mike Kaput: And for Google in particular, a simple flow chart: is the search accuracy team involved? If the answer is no, I just don't know. I don't, I don't

[00:34:23] Paul Roetzer: know. I’d be fascinated to, to hear how this stuff is happening,

[00:34:27] Mike Kaput: Yeah.

[00:34:27] Mike Kaput: There's, there's a, there's a long form Wired article being written about this right now, probably. Wild. All right. I hope

[00:34:35] Paul Roetzer: they get it right. I think it's a great feature in search. I just, it's not there.

[00:34:40] Mike Kaput: Yeah. And Gemini itself is so impressive in all these other contexts that it's really like baffling. All right, let's dive into our rapid fire topics this week. So first up,

[00:34:53] Anthropic Researchers Discover More About How LLMs Work

[00:34:53] Mike Kaput: it is helpful and sometimes scary to remember that we still don't actually fully know how AI [00:35:00] systems work. If you're

[00:35:01] Mike Kaput: unaware, large language models are actually not fully understood even by the researchers and engineers who build them. So this ends up resulting in these models displaying, you know, emergent, unpredictable behavior. But thanks to some new work

[00:35:17] Mike Kaput: from researchers at Anthropic, we might be a little closer to truly understanding what's going on under the hood of these models. So in a new post titled Mapping the Mind of a Large Language Model, Anthropic shared some

[00:35:30] Mike Kaput: new work on understanding one of its most recent models, Claude 3 Sonnet. The researchers wrote, quote, Today we report a significant advance

[00:35:39] Mike Kaput: in understanding the inner workings of AI models. We have identified how millions of concepts are represented inside Claude Sonnet, one of our deployed large language models. This is the first ever detailed look inside

[00:35:52] Mike Kaput: a modern production grade large language model. This interpretability discovery could, in the future, help us make [00:36:00] AI models safer. So, without getting into all the nitty gritty here, they did use a technique called dictionary learning to better

[00:36:08] Mike Kaput: understand how the neurons and their activations within this model worked. But two concepts are broadly important. One, the researchers were able to basically better understand what they would call the, quote, distance

[00:36:20] Mike Kaput: between words and concepts. So basically understand which concepts the model was linking together. For example, near a feature representing the Golden Gate Bridge, they found features for other things related to

[00:36:34] Mike Kaput: San Francisco. So kind of somewhat similar to, you know, the way a human would associate ideas in their head. Second, the researchers could also manipulate these relationships to cause the model to behave differently or break its own rules. So, normally, Claude will not generate

[00:36:51] Mike Kaput: a scam email for you to send to someone to try to, like, steal their money. But when its ability to read a scam email was amplified [00:37:00] by the researchers, they found Claude would actually break its own rules. So the researchers fully admit there's a ton more work to be done. This does not mean we fully understand large language models. And it is kind of, you know, a weird, but really important concept.

[00:37:16] Mike Kaput: So this is something you

[00:37:18] Mike Kaput: talk about a lot, Paul. Like, this is not normal software. Like, can you kind of connect the dots here on why this is an idea that's really important for your average business professional or owner to grasp?

[00:37:30] Paul Roetzer: This isn't, I mean, this report is new, but this line of research is something Anthropic's been working on for years. We talked about it a little bit on episode 59 when we recapped Dario Amodei's interview on the Dwarkesh podcast, so you can go back and listen to that a little bit. In

[00:37:47] Paul Roetzer: December of 2023, they published a post called Decomposing Language Models into Understandable Components.

[00:37:53] Paul Roetzer: And I'll just read the lead paragraph there, because I think it helps set the stage a little bit. So it said, Neural networks are trained on data, not [00:38:00] programmed to follow rules. With each step of training, millions or billions of parameters are updated to make the model better at tasks. And by the end, the model is capable of a dizzying array of behaviors.

[00:38:11] Paul Roetzer: For example, helping you with planning, writing articles, whatever, doing the things we talk about all the time with generative AI. We understand the math of the trained network exactly.

[00:38:22] Paul Roetzer: Each neuron in the neural network performs simple arithmetic. But we don't understand why those mathematical operations result in the behaviors we see.

[00:38:32] Paul Roetzer: So again, they think about this like the brain, brain science. Humans do things, they have emotions, they take actions. Why do they do it? Like, we know that there's a brain in there, but we don't know how the brain works. So, neuroscientists study the brain to try and figure out what neurons are firing, what synapses are, you know, working together to

[00:38:50] Paul Roetzer: make people do what they do, to feel what they feel.

[00:38:53] Paul Roetzer: That's what they're trying to do with these machines. It's like, why are they doing this? How do they do this? Because once they understand it, [00:39:00] then they can build safer versions of it, and they can get it to do specific things. So you mentioned, I think, the Golden Gate Claude. This is kind of like a funny thing that came out of this. So we'll put the link in the post, but you can go to claude.

[00:39:12] Paul Roetzer: ai. I don't know how long they'll keep it up. But if you go to claude.ai, underneath your name, I don't know if you have to be logged in, there's a little Golden Gate Bridge, very, very tiny icon. If you click on it, you can talk to Golden Gate Claude.

[00:39:25] Paul Roetzer: So here, I’ll read real quick what happened. So this is their post. It says, on Tuesday, we released a major new research paper on interpreting large language models, in which we began to map out the inner workings of our AI model, Claude 3 Sonnet. In the mind, quote, unquote, of Claude, we found millions of concepts that activate when the model reads relevant text or sees relevant images, which we call features.

[00:39:46] Paul Roetzer: One of those was the concept of the Golden Gate Bridge. We found that there's a specific combination of neurons in Claude's neural network that activates when it encounters a mention or picture of this famous San Francisco [00:40:00] landmark.

[00:40:00] Paul Roetzer: So basically they manipulated it to only, like, they found it will only talk about the Golden Gate Bridge, like, if they made these manipulations to the neurons basically within there, this is what it does.

[00:40:10] Paul Roetzer: So it's hilarious, like, if you go use it. I clicked on Golden Gate Claude, and I said, can you recommend a trip for a family of four, and didn't say anything about San Francisco. Certainly, with its beautiful scenery, family friendly attractions, and exciting activities, a fantastic trip option for a family of four would be to visit the iconic Golden Gate Bridge in San Francisco, and

[00:40:27] Paul Roetzer: then it talks about San Francisco. I said, OK, where are the best places to view San Francisco? Golden Gate Bridge, right away, uh,

[00:40:32] Paul Roetzer: what are the top places to see in San Francisco? Golden Gate Bridge. Where do the San Francisco Giants play? They, they play in AT&T Park, I think. I don't know what the park's name is now, but the San Francisco Giants play their home games at the iconic Golden Gate Bridge.

[00:40:45] Paul Roetzer: The bridge is one of the most famous landmarks. It's like, it'll only talk to you about the Golden Gate Bridge, no matter what you do. So by doing this though, they're able to figure out, why is it only talking about the Golden Gate Bridge? And that's the key. Because once you start identifying these patterns, you can go [00:41:00] find other patterns in its behavior, and eventually maybe understand why it's doing what it does.

[00:41:06] Paul Roetzer: So if it goes wrong, if it's, if it's taking on risk or creating risk, you would, in theory, eventually be able to go in and figure out, why is it doing that? And let's turn that off. So it's really, really important research. And other labs are also working on this, but we need progress to be made. Just like we need progress to be made in understanding the human brain, we need progress to be made in understanding these systems, AI brains, for lack of a better way of saying it.
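
To make the feature manipulation idea concrete, here is a toy numerical sketch of the general steering technique: treat a learned feature as a direction in the model's activation space, measure how strongly it is firing, and pin it to a chosen value mid-forward-pass. This is not Anthropic's code, and the vectors here are random stand-ins; in the real work, the feature directions come from a sparse autoencoder (the dictionary learning step) trained on Claude's activations.

```python
import numpy as np

HIDDEN_DIM = 512
rng = np.random.default_rng(0)

# Stand-in for a feature direction recovered by dictionary learning,
# e.g. the "Golden Gate Bridge" feature. Normalized to unit length.
golden_gate_feature = rng.normal(size=HIDDEN_DIM)
golden_gate_feature /= np.linalg.norm(golden_gate_feature)

def clamp_feature(activations: np.ndarray, feature: np.ndarray, value: float) -> np.ndarray:
    """Pin the feature's activation to `value` by adjusting along its direction."""
    current = activations @ feature          # how active the feature is right now
    return activations + (value - current) * feature

# Stand-in for one token's hidden state during a forward pass.
hidden_state = rng.normal(size=HIDDEN_DIM)
steered = clamp_feature(hidden_state, golden_gate_feature, value=10.0)
print("before:", hidden_state @ golden_gate_feature)  # some small value
print("after: ", steered @ golden_gate_feature)       # pinned to 10.0
```

Pinning a feature to an artificially high value is, conceptually, what made Golden Gate Claude steer every answer toward the bridge, and turning a harmful feature down is the safety-relevant inverse of the same operation.
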

[00:41:32] Google DeepMind Introduces Frontier Safety Framework

[00:41:32] Mike Kaput: So, Google is also kind of doing some big work in the areas of trying to make AI safer, more interpretable.

[00:41:41] Mike Kaput: This past week, Google DeepMind released what they're calling a Frontier Safety Framework, quote, a set of protocols for proactively identifying future AI capabilities

[00:41:52] Mike Kaput: that could cause severe harm and putting in place mechanisms to detect and mitigate them. Our framework focuses on severe [00:42:00] risks resulting from powerful capabilities at the model level,

[00:42:03] Mike Kaput: such as exceptional agency or sophisticated cyber capabilities. So to do that, the company says in this document it will essentially proactively monitor how good models are getting in four key areas: autonomy, biosecurity, cybersecurity, and machine learning R&D. So in this framework, Google says it will run evaluations every time they

[00:42:29] Mike Kaput: 6x their effective compute and for every three months of fine-tuning progress. So if or when a model triggers an evaluation, Google then says it will formulate a response plan.
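
As a rough illustration of that trigger rule only, here is how the two conditions Google describes (a 6x growth in effective compute since the last evaluation, or three months of fine-tuning progress) could be encoded. The names and numbers are ours, not DeepMind's:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class EvalTrigger:
    compute_at_last_eval: float    # effective compute at the last evaluation
    last_eval_date: datetime       # when fine-tuning progress was last evaluated
    compute_growth: float = 6.0    # "every time they 6x their effective compute"
    finetune_window: timedelta = timedelta(days=90)  # "every three months"

    def should_run_evals(self, current_compute: float, today: datetime) -> bool:
        grew_6x = current_compute >= self.compute_growth * self.compute_at_last_eval
        window_elapsed = today - self.last_eval_date >= self.finetune_window
        return grew_6x or window_elapsed

# Hypothetical example: compute has grown 7x since the last evaluation.
trigger = EvalTrigger(compute_at_last_eval=1e25, last_eval_date=datetime(2024, 3, 1))
if trigger.should_run_evals(current_compute=7e25, today=datetime(2024, 5, 24)):
    print("Run the early-warning evaluations and prepare a response plan.")
```
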

[00:42:42] Mike Kaput: So here's an example from the document. You know, in the framework, Google lists out an initial set of capabilities in those four areas I mentioned that it would worry about,

[00:42:52] Mike Kaput: that would trigger kind of an evaluation. Taking the autonomy category as an example, Google identifies a [00:43:00] threshold where they would start to take action as a model becoming capable of, quote, expanding its effective capacity in the world, by autonomously acquiring resources and using them to run and sustain additional copies of itself on hardware

[00:43:15] Mike Kaput: it rents. So

[00:43:15] Mike Kaput: basically, something that has displayed, started to display much more autonomy than initially desirable or planned. So Paul, can you kind of maybe walk through why this framework is so important? It sounds a little sci-fi, but it's actually quite critical to the future of AI.

[00:43:33] Paul Roetzer: I was just thinking about the people whose job this is, to sit here and, like, come up with these.

[00:43:37] Mike Kaput: They

[00:43:37] Paul Roetzer: insane scenarios. Yeah, but it's not unlike, you know, back in the day, I've mentioned in the past I used

[00:43:42] Paul Roetzer: to do crisis communications planning for brands, and this is what you did. Like you basically sat down and thought about all the things that could go wrong, and then you develop contingency communication plans.

[00:43:52] Paul Roetzer: In the event that happened, it is a daunting task to sit there and think about how the world goes wrong [00:44:00] and come up with solutions to it.

[00:44:02] Paul Roetzer: Yeah,

[00:44:02] Mike Kaput: okay.

[00:44:02] Paul Roetzer: So why, why this matters. So we've talked a lot recently about leading AI researchers and executives who think there's a chance that, whether it's AGI or some really, really advanced form of what we have today, we may only be one or two years away from where these models could truly pose risk.

[00:44:19] Paul Roetzer: We talked about John Schulman from OpenAI on episode 98, where he was saying, yeah, I don't know, we might be like one breakthrough away from something really going haywire, and we didn't have a plan for what happens then.

[00:44:31] Paul Roetzer: So, there's a non-zero chance that these models, or the near-term generation of models, could pose significant risks to society.

[00:44:42] Paul Roetzer: And so the labs have a responsibility to accelerate their understanding and actions to make sure they build safely, and the governments are going to require this. This is what we're seeing with SB 1047 in California, the EU AI Act,

[00:44:55] Paul Roetzer: the things that are happening in other states and in the U.S. [00:45:00] The governments are kind of trying to step in and put some regulations in place, but how they do it is, is like a big question.

[00:45:08] Paul Roetzer: And so I think what’s happening is, one, these labs are doing this because they think it’s right and they are actually concerned. Two, I think they’re trying to get ahead of and influence regulation. They’re basically putting the framework in place, saying, hey, we got this all figured out, here’s our framework.

[00:45:22] Paul Roetzer: Here’s our process. And the government will say, okay, yeah, great. I look at this kind of like the SynthID discussion, where individual frameworks are good, like, it’s a good direction. Like, it’s good that Google can identify if, if Gemini created something. It’s good that they have a framework for this, but until we have

[00:45:43] Paul Roetzer: other labs, all other labs agreeing to these kinds of frameworks, we’re at risk of any rogue lab breaking all of it. Like, it just doesn’t matter. So Google can be responsible. Anthropic can be responsible. OpenAI can be responsible. Maybe Meta is [00:46:00] responsible. I’m not going to pick, I don’t want to, like, pick a name and say the wrong name, but, like, maybe some other lab isn’t.

[00:46:06] Paul Roetzer: It takes one lab, who is also racing toward AGI or some advanced version of what we have today, who prioritizes profits and power over the other labs’ responsible principles. And then the framework becomes useless, because once someone has done it and put it into the world, all bets are off. And my concern here is we’re already seeing a race to put things into the market.

[00:46:31] Paul Roetzer: We just talked about it with AI Overviews. Like, companies are doing illogical things that are outside of their normal operating procedures and outside of what is probably good for their brand

[00:46:42] Paul Roetzer: in the name of competition and profits. And there’s nothing that tells me this is going to be different. Like they’re just going to keep racing each other.

[00:46:53] Paul Roetzer: And this is why, if we go back to the OpenAI PR missteps, this is why it’s so [00:47:00] critical that we trust the companies who are going to be on the frontier of this. There’s going to be five to seven companies when it’s all said and done,

[00:47:10] Paul Roetzer: who have the money and resources to build the biggest models, and we have no choice but to trust them. And so it’s great they have frameworks, but we just, we need, we need standardization of these frameworks. We need more than just signing on to policies to say we’re going to do our best.

[00:47:31] European Union’s AI Act Passes

[00:47:31] Mike Kaput: So one approach that is being taken to that is the AI Act from the European Union, which is now going into effect next month. So this is the landmark legislation that aims to regulate AI in sweeping fashion in Europe. According to the EU, here’s how it works. So

[00:47:51] Mike Kaput: the law basically categorizes different types of AI according to risk, then regulates them with varying degrees of severity based on those categorizations.

[00:47:59] Mike Kaput: [00:48:00] So general purpose systems that are deemed to have limited risks will be lightly regulated, while high-risk systems will be strictly regulated. The Act outright bans certain types of AI systems, such

[00:48:15] Mike Kaput: as cognitive behavioral manipulation and predictive policing systems, as well as social scoring. And some systems are deemed high risk by the EU, for example, AI for critical infrastructure, medical devices,

[00:48:28] Mike Kaput: AI for employment, law enforcement, etc. These will be subject to a variety of mandatory requirements that include risk management rules, such as data governance, transparency

[00:48:39] Mike Kaput: guidelines, human oversight, et cetera. Certain AI systems that interact with humans or generate content must adhere to transparency regulations, like informing users when they’re interacting with an AI system and disclosing the use of deepfakes and AI-generated content.

[00:48:58] Mike Kaput: And then general [00:49:00] purpose models, while more lightly regulated, must also adhere to some transparency measures around documentation. So the fines for violating these regulations are pretty hefty.

[00:49:12] Mike Kaput: According to Reuters, they range from 7.5 million euros, which is about 8.2 million dollars, or 1.5 percent of turnover, up to 35 million euros, about 38 million bucks, or 7 percent of global turnover, depending on the types of violations.
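
For a rough sense of how tiered fines like these tend to work, here is a small sketch that treats each tier as the greater of a fixed amount or a percentage of global annual turnover. The figures come from the Reuters reporting above; the whichever-is-higher logic is a common reading of penalty regimes like this, and this is illustrative, not legal guidance:

```python
# Simplified sketch of tiered fines: a fixed euro amount or a share of
# global annual turnover, whichever is greater. Tier values are from
# the Reuters figures cited above; treat this as illustrative only.

def max_fine(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    return max(fixed_eur, pct * turnover_eur)

# Lower tier: 7.5M euros or 1.5% of turnover.
print(max_fine(2_000_000_000, 7_500_000, 0.015))  # 30000000.0
# Top tier: 35M euros or 7% of turnover.
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0
```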

[00:49:39] Mike Kaput: So now, just because this law is going into effect next month, it doesn’t mean all of this is hitting all at once. Like, some of the earliest stuff, which is the bans on social scoring and predictive policing, those are going to take six months to go into effect. Regulations around general purpose AI

[00:49:49] Mike Kaput: models are going to apply after 12 months. Some other regulations go into effect 12, 24, or 36 months after this law hits.

[00:49:59] Mike Kaput: So, I [00:50:00] guess my question for you, Paul, as we finally see this, it’s been years in the making, and this finally is going

[00:50:04] Mike Kaput: to become law: how do you see this impacting AI innovation and adoption in the EU? Like, is there a chance that American AI companies just, like, I don’t know, skip over the market entirely?

[00:50:18] Paul Roetzer: Sure. I mean, it’s the nature of business. You’re going to go to the markets that are friendlier for what you’re doing, so whether you’re building the technology and you see it being too restrictive to build it there,

[00:50:28] Paul Roetzer: or if, you know, you’re marketing in there, so certainly many American companies are global companies that do lots of business in the EU. You’re going to have to make choices around,

[00:50:40] Paul Roetzer: you know, how this impacts you. When we were talking about this, probably last summer, summer of ’23, when it was kind of moving through, I think at the time Google wasn’t making their AI models available in the EU. It was like 430 million citizens or something like that who couldn’t even access Google’s technology. [00:51:00] So I think there’s certainly a reasonable chance that for some AI

[00:51:03] Paul Roetzer: companies, it’s going to be easier in the near term just to not offer the AI technology in those markets while you figure out how to do it. And I would imagine it’ll affect the rate of innovation and the startup market there.

[00:51:18] Paul Roetzer: But there’s always a trade-off. There’s pros and cons to legislation like this, and there’s obviously going to be, like, the techno-optimist crowd who thinks this is, like, horrible and the end of the world for EU companies, and then there’s other people who think they didn’t go far enough. Like, there’s going to be all sides to this, but the reality is, it is what it is, and now companies have got to figure out how to work within it.

[00:51:43] US State Laws on AI

[00:51:43] Mike Kaput: So the U.S. is actually also making a couple of moves on AI legislation as well. This is all at the state level because, you know, we don’t have like a comprehensive AI Act in the

[00:51:54] Mike Kaput: works at the federal level. So first, we just saw signed into law the Colorado [00:52:00] AI Act, and state legislators say it’s, quote, the first comprehensive and risk-based approach to AI regulation in the U.S. It basically aims to establish guardrails around preventing discriminatory outcomes from the use of

[00:52:17] Mike Kaput: AI. So to do that, it basically mandates a series of obligations for what it calls high-risk AI systems. These are systems that, when they’re deployed, are making consequential decisions in things like access to education, employment, financial services, government services, health care, etc. They’re all defined kind of in the context of the bill, and basically it mandates a range of requirements that developers and deployers

[00:52:46] Mike Kaput: of the technology must follow in order to protect consumers as part of the process of building and rolling out this technology. Now, this law does not go into effect until [00:53:00] 2026. Once

[00:53:01] Mike Kaput: it does, the state’s attorney general is basically able to bring actions against businesses that violate it. What’s interesting, though, is kind of the commentary and controversy around this and the other bill we’ll talk

[00:53:13] Mike Kaput: about, because the governor signed the bill with reservations. He said, quote, I’m concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state. Basically,

[00:53:25] Mike Kaput: he calls for government regulation applied at the federal level, not at the state level in this kind of patchwork. This is not the only state bill drawing controversy.

[00:53:36] Mike Kaput: There is what you might call a mini uproar over a California state bill called SB 1047. We’ve talked about this before. It’s an AI safety bill called the Safe and Secure Innovation for Frontier Artificial Intelligence Models

[00:53:51] Mike Kaput: Act. Quite a mouthful. But basically, it establishes a framework for regulating frontier models, and it has [00:54:00] some pretty interesting components here that have tech and innovation people pretty up in arms.

[00:54:05] Mike Kaput: So it kind of arbitrarily defines what large-scale models are, so some people worry it could end up covering open source models. It has some really harsh penalties that critics say will stifle innovation. It requires, for instance, that you have a kill switch in your model that allows it to be fully shut down.

[00:54:28] Paul Roetzer: Which you can’t do in open source, by the way.

[00:54:29] Mike Kaput: Which, yeah.

[00:54:30] Paul Roetzer: Which is, which is why they worry that it would just make, make open source models illegal.

[00:54:35] Mike Kaput: And third, it creates this new body called the Frontier Model Division, which basically enforces

[00:54:40] Mike Kaput: rules. And people are worried this is way overreaching and possibly encourages regulatory capture. Now, that’s a lot, but basically,

[00:54:50] Mike Kaput: these two bills have come out and they seem extremely controversial. The California one was actually endorsed publicly by Geoffrey Hinton and Yoshua Bengio, who are two godfathers of [00:55:00] AI. So people who are experts in this space have, it seems like, pretty differing opinions on this. Like, how

[00:55:06] Mike Kaput: encouraged or, like, worried are you about the approach of these two bills and just state-by-state legislation?

[00:55:13] Paul Roetzer: I guess it’s probably like how I feel about the safety frameworks from the labs: it doesn’t work without some, like, standardization. I’m not necessarily, like, a massive advocate of federal regulation, but yeah, this is going to be a mess. It’s like

[00:55:27] Mike Kaput: Sounds like it already.

[00:55:28] Paul Roetzer: Yeah. I mean, the amount of time it’s going to take for, for businesses to understand all these different state laws related to this.

[00:55:36] Paul Roetzer: It’s like, my goodness. I don’t know. I feel like we could get in our own way really fast here by letting these states all just pick their own stuff when my confidence level of their understanding of

[00:55:49] Paul Roetzer: the technology is so low. And it’s like, we’re just letting these people pick this stuff. Like, goodness.

[00:55:55] Mike Kaput: So yeah,

[00:55:56] Paul Roetzer: I don’t know. I just feel like this is not the right, this is not the way to go. Like, [00:56:00] I don’t think this ends well.

[00:56:02] Thomson Reuters CoCounsel

[00:56:02] Mike Kaput: All right. So our next news item today. You may have heard of the company Thomson Reuters, mostly in the context of the company that owns

[00:56:11] Mike Kaput: Reuters, the news service, but they also provide a ton of other information services and technology to industries like legal, tax, accounting, risk and fraud, and others. And we wanted to highlight that they’ve released something that looks really interesting called CoCounsel. This is a chat-based generative AI product; it’s basically an assistant for legal, tax, and accounting, and the

[00:56:39] Mike Kaput: company says it makes quick work of routine tasks, analyzing information and delivering answers so you can maximize the impact of your time, energy, and expertise.

[00:56:48] Mike Kaput: It displays on its website a few ways this tool works. In legal, for instance, Thomson Reuters says that CoCounsel harnesses the knowledge of 1,500 legal [00:57:00] experts to surface info across your organization, analyze and search documents, and answer legal and contractual questions. One testimonial I actually found interesting from a legal company

[00:57:11] Mike Kaput: on the site says, quote, It’s like having an associate lawyer ready to take on an endless number of delegated tasks and deliver it all in minutes. They also showcase some similar capabilities in tax and accounting. So

[00:57:26] Mike Kaput: Paul, I think we had kind of discussed, we love just seeing and talking about these vertical specific examples of generative AI. Like, can you maybe talk to us about the implications of this one in particular?

[00:57:38] Paul Roetzer: Yeah, it’s really disruptive in that industry. I mean, I’ve met with law firms, I’ve met with hundreds of accounting firms, like, leaders of hundreds of accounting firms. There are industries where, if, if this kind of technology is accurate, if it’s not dealing with the same issues as AI Overviews currently is, like, if it’s [00:58:00] trustworthy and accurate information, there is a lot of data-driven, repetitive work done in those industries,

[00:58:07] Paul Roetzer: and associates in particular. Like, I’ve gotten the question from heads of law firms of, do we need as many associates in the future?

[00:58:16] Paul Roetzer: And my answer to them is probably not. Probably not, but who are your future partners? It’s, it’s like disruptive to your whole model. Like, there’s a lot of what associates do that you’re going to be able to do with generative AI, for

[00:58:27] Paul Roetzer: sure. You can probably do a good chunk of it now and two years from now, probably most of it realistically.

[00:58:32] Paul Roetzer: But what’s, where’s the future of the legal industry if you don’t have associates who spend 20 years learning everything? This is why there are hard questions to answer for every industry. It’s like, this

[00:58:41] Paul Roetzer: stuff’s going to be possible. What, what does it mean? Like, yeah, it helps in the near term and it might solve some of your talent gaps. Like, the accounting industry has massive talent gaps;

[00:58:49] Paul Roetzer: they don’t have enough accountants, so it can help with that. But in the process of helping with that gap, do you accelerate the demise of the rest of [00:59:00] them? This is the debate every, every industry needs to be having. So yeah, like, this makes total sense if it’s accurate and trustworthy.

[00:59:08] Paul Roetzer: It’s going to, it’s going to be just very, very disruptive to the industries in the near future.

[00:59:14] OpenAI Inks Deal With Wall Street Journal Publisher News Corp.

[00:59:14] Mike Kaput: So we have a little more OpenAI news this week. They actually just signed a deal with News Corp to let the company use content from more than a dozen of News Corp’s publications in OpenAI’s products, including, of course, ChatGPT.

[00:59:29] Mike Kaput: So as part of this deal, OpenAI will be able to actually display news from publications like the Wall Street Journal, Barron’s, and

[00:59:38] Mike Kaput: some others in its products and services. According to some reporting on the deal in the Wall Street Journal, the deal could be worth more than $250 million over five years.

[00:59:49] Mike Kaput: News Corp also said that as part of the deal, it will, quote, share journalistic expertise to help ensure the highest journalism standards are present across OpenAI’s offering. Now, Paul, this [01:00:00] is not the only deal that OpenAI has signed like this. They’ve done a bunch of other deals with other publications.

[01:00:06] Mike Kaput: At the same time, they’re being sued in a huge lawsuit by the New York Times

[01:00:10] Mike Kaput: for allegedly stealing their material. Can you kind of just talk about this dynamic? It seems like two very different paths here with media companies.

[01:00:19] Paul Roetzer: Yeah, so again, like, just to recap why this matters: GPT-4o goes through a big training run, learns from all the internet data, and then it stops. The training run stops. And the model doesn’t know anything past the date that the training run stops. So if you want ChatGPT to be a tool that people use for real-time information, you need sources where it’s going to get it from. You can get it from search through Bing,

[01:00:44] Paul Roetzer: or it can get it through licensing deals with companies that are publishing real-time information, which is what’s happening here. So by doing deals like this, you now have access to real-time information that you can inject into

[01:00:57] Paul Roetzer: ChatGPT or whatever future search product that you may or [01:01:00] may not be building if you’re OpenAI. So this is what we’re seeing, and you know, you’re going to see it in AI Overviews. They did a deal with Reddit, which is part of their problem; it’s

[01:01:08] Paul Roetzer: surfacing stuff from Reddit. You see this with Twitter slash X. So the first functional use of Grok that I’ve actually seen,

[01:01:17] Paul Roetzer: Grok, their AI tool, is that it’s now writing summaries in your search.

[01:01:23] Paul Roetzer: So if you go to the For You tab, or if you go to, like, Top or Trending, whatever, Grok is summarizing stuff there. It deals with the same issue as AI Overviews. Sometimes it’s really good. Sometimes it’s completely useless; many times it’s completely useless, because it’s just pulling from all these sources.

[01:01:42] Paul Roetzer: So yeah, I mean, this is, this is the future of generative AI. It’s like these models stop their learning at some point and then they need new information. The new information comes from data

[01:01:52] Paul Roetzer: and you either steal that data from people or you do licensing deals with them. And because of all these lawsuits, they have no choice but to do [01:02:00] licensing deals with them.
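
As a rough sketch of what injecting licensed, real-time content into a model’s context can look like, here is a minimal retrieval-style example. The function name and article structure are hypothetical; none of the labs have published their actual pipelines:

```python
# Hypothetical sketch of retrieval-augmented generation with licensed
# news content: recent articles are prepended to the user's question so
# the model can answer past its training cutoff. The article structure
# here is a stand-in, not any lab's real API.

def build_prompt(question: str, articles: list[dict]) -> str:
    sources = "\n\n".join(
        f"[{a['publication']}] {a['headline']}\n{a['summary']}"
        for a in articles
    )
    return (
        "Answer using the licensed news excerpts below. "
        "Cite the publication for any fact you use.\n\n"
        f"{sources}\n\nQuestion: {question}"
    )

articles = [{"publication": "Wall Street Journal",
             "headline": "Example headline",
             "summary": "Example summary of a recent story."}]
print(build_prompt("What happened this week?", articles))
```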

[01:02:01] Paul Roetzer: So I, I mean, I think you’re just gonna, you’re gonna see more, more of these. What is the alternative for these media companies?

[01:02:09] Paul Roetzer: You’ve got to play the game. Like, would you just, like, not be a part of the future of search? If you think ChatGPT solves search and it becomes, like, a dominant player and you’re not even there, no chance of traffic coming to you, no licensing deals,

[01:02:24] Paul Roetzer: not, not, not great. These companies aren’t in a great place. They don’t, they don’t necessarily have enormous leverage, per se. So I think, I think at this point, they both need each other, basically. The media companies need the money,

[01:02:38] Paul Roetzer: and ChatGPT, OpenAI, and Gemini, Google, like, they need the real-time information. Maybe it ends up being symbiotic and everybody wins.

[01:02:46] Apple Podcasts AI Transcripts

[01:02:46] Mike Kaput: So, Paul, you had recently posted about some of your experiences with the language model that Apple appears to be using for

[01:02:53] Mike Kaput: its automatically generated transcripts in Apple Podcasts. And you wrote: I’m not sure what language model Apple [01:03:00] Podcasts is using for its automatically generated transcripts, but it’s highly accurate. I’ve gone through hours of transcripts and the error rate is extremely low. A hint of what’s to come

[01:03:10] Mike Kaput: with Siri and LLMs on iPhone? Could you maybe walk us through that?

[01:03:15] Paul Roetzer: Yeah, it was kind of like a random thought, but I think I shared recently how I prep for podcasts sometimes, or, like, how I consume information. So what I’ll do, and it works especially well, like, on your iPad where you can do split screens, which is how I’ll do it a lot of times, is if you have the transcript up in Apple Podcasts, you can follow along, and you can put the podcast at, like, 2x speed and you can quickly read along.

[01:03:43] Paul Roetzer: It actually works really well. And then I had this, like, realization. I was like, shoot, man, I haven’t seen a typo in six hours of content. Like,

[01:03:51] Paul Roetzer: names, names of products. Like, it used to get ChatGPT wrong; it’s a pretty common one. It’s getting that right. Like, I’m not training it. [01:04:00] There’s no, like, did you find any edits, where you’re, you’re changing things.

[01:04:03] Paul Roetzer: So there’s no reinforcement learning that I’m aware of. I mean, they’re probably doing it behind the scenes at Apple, but like, I’m not doing reinforcement learning for it. And I just realized, like, I don’t remember the last time I saw an error in the transcript. That’s hard to do. That, that tells me that Apple probably is doing more with language models behind the scenes and on device than maybe we’re aware of. And if they’ve solved transcription

[01:04:32] Paul Roetzer: with a high level of accuracy, that could bode really well for whatever they do next. Like, just think about when you’re making a voice note or setting a reminder on your phone, or doing a

[01:04:42] Paul Roetzer: text message. Like, it’s like a Tesla now. Like, if I use the full self-driving, I’ve got to, like, deactivate the thing 10 times in a seven-mile ride, ’cause it makes mistakes. That’s how transcription is, like, you know, voice to text. If I’m doing it in Google Docs or I’m doing it in Apple, like, [01:05:00] it always makes mistakes. This would tell me they have a language model that doesn’t, and I’m not seeing that language model in my text messaging or in Apple Notes or anywhere else right now, but they obviously have it.

[01:05:13] Paul Roetzer: And so it just got me thinking, like, oh, wait, they may already have a really, really advanced voice-to-text model, which means they have a

[01:05:22] Paul Roetzer: really advanced language model on device. Or, I mean, maybe they’re running it in the cloud right now, but it could be on device soon. So it just made me kind of even more curious about June 10th, because there is some tech there that is making that transcription insanely accurate.

[01:05:40] Mike Kaput: There will certainly be more to come in just a few weeks here.

[01:05:46] AI Financials/Funding Rapid Fire

[01:05:46] Mike Kaput: All right, so for our last piece of rapid-fire news this week, we’re going to run through a few quick updates related to the funding and financials

[01:05:54] Mike Kaput: of some significant AI companies. So first up, Suno, which is a [01:06:00] popular AI music generator. We’ve covered it in the past.

[01:06:03] Mike Kaput: They have raised $125 million in a round led by Lightspeed Ventures. This comes just six months after this company even launched its product. And while they’re flying high, another AI startup we’ve talked about before has flown far too close to

[01:06:18] Mike Kaput: the sun. Because Humane, the company behind the AI Pin wearable, which we talked about flopping very spectacularly after they had this really botched launch with technical issues, Bloomberg reports they are now exploring a possible sale.

[01:06:34] Mike Kaput: One person quoted by Bloomberg said they are seeking between $750 million and a billion dollars for the business. And

[01:06:43] Mike Kaput: third, all of this is literally chump change compared to the rocket ship that is our friend NVIDIA, because NVIDIA destroyed Wall Street expectations in its latest earnings report this past week.

[01:06:57] Mike Kaput: Profits jumped 628 [01:07:00] percent compared to the same time last year. Revenue jumped 268 percent for the same period, and the stock is, as of right now, just on a tear. I think it popped

[01:07:09] Mike Kaput: like 7 percent immediately, and it’s just been going. So Paul, do any of these jump out at you as particularly interesting?

[01:07:16] Paul Roetzer: Dude, if Humane gets a billion dollars, I

[01:07:19] Mike Kaput: Yeah, what?

[01:07:20] Paul Roetzer: Yeah, that might be an unreliable source. That’s one of the most absurd things I’ve ever heard.

[01:07:24] Mike Kaput: For what, I guess?

[01:07:26] Paul Roetzer: I, that’s like, I’m like, what in the world are you selling? Oh my god. Um, Suno, some VCs are sure betting that the copyright issues are gonna go away.

[01:07:38] Paul Roetzer: Yeah. You gotta, you gotta have some really good lawyers convince you to put that kind of money into a company that is gonna have a line of legal problems. Cool tech, but they’re going to get sued nonstop.

[01:07:54] Mike Kaput: Yeah.

[01:07:55] Paul Roetzer: I mean, a hundred million of that could be their legal bills.

[01:07:58] Paul Roetzer: [01:08:00] And yeah, NVIDIA, unreal, man. Unreal. I just, I started buying NVIDIA when it was $86, like, years ago. And there’s always that, like, gosh, should I sell some? And then I

[01:08:11] Paul Roetzer: just always like, nah, I’ll just buy more. I, I don’t know. That stock is wild.

[01:08:17] Paul Roetzer: We’ve never seen anything like it in, like, stock market history. Like, the stuff that they’re doing, and there just appears to be no end in sight. Like, they’re, they’re unreal, and people just keep buying more of it. Like Microsoft, Satya Nadella was praising

[01:08:33] Paul Roetzer: NVIDIA. Every major tech company at their conference gives NVIDIA their flowers on stage. Like, they’re just like, yeah, we’re doing a deal with NVIDIA, and yeah, we’re buying 50,000 more GPUs. Like, they’re, they’re just the darling of the tech and business world right now. It is an un, unbelievable

[01:08:51] Paul Roetzer: company, and Jensen is an unbelievable CEO. Like, yeah, just historically good. What they’re [01:09:00] doing is unparalleled, really, what they’re doing. So yeah, buyer beware on the billion dollars there. Yeah.

[01:09:11] Mike Kaput: Alright, well, Paul, thanks for breaking down everything in AI for us this week. As always, I’m sure myself and our listeners all appreciate it.

[01:09:21] Mike Kaput: As a quick couple of announcements here at the end of the episode: we are doing a special episode 100 that is dropping on Thursday, May 30th, where we are answering the audience’s questions. Should be a great episode, so just mark your calendar to take a look at that when it comes out. As always, our newsletter contains all of the news we didn’t

[01:09:43] Mike Kaput: get to today and a deeper dive into all the topics we talked about. So go check that out at MarketingAIInstitute.com forward

[01:09:52] Mike Kaput: slash newsletter. Paul, thanks again.

[01:09:55] Paul Roetzer: Thank you, Mike. And we will talk to you all again in a couple days, if you’re [01:10:00] listening to this on the day this drops. All right. Have a, have a great week, everybody. We’ll talk to you again soon.

[01:10:05] Paul Roetzer: Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey, and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.

[01:10:28] Paul Roetzer: Until next time, stay curious and explore AI.


