Llama 3.1, Runway Steals YouTube Videos, Is There an AI Bubble?, DeepMind AI Math Breakthrough, Altman Op-Ed & SearchGPT


AI models are getting more powerful, but they are also dealing with some big ethical and financial hurdles. Join our hosts as they unpack Meta’s release of Llama 3.1, dive into the controversy surrounding Runway’s AI training practices, and examine the potential AI bubble looming over the industry. Plus, take a look at the latest updates from Apple, OpenAI, and Google DeepMind.

Listen Now

Watch the Video

 

Timestamps

00:03:50 — Meta Releases Llama 3.1

00:25:44 — Runway and YouTube

00:34:09 — Are We In an AI Bubble?

00:46:46 — DeepMind AlphaProof and AlphaGeometry 2

00:51:48 — Sam Altman Op-Ed Future of AI Warning

00:58:08 — GPT-4o Mini LMSYS Evaluation

01:01:09 — OpenAI Announces SearchGPT AI

01:03:51 — Presidential Moves on AI

01:08:58 — Ethan Mollick Warns about RAG Talk-to-Document Systems

01:11:54 — Apple’s AI Features Will Roll Out Later Than Expected

Summary

Meta Releases Llama 3.1

Meta has released Llama 3.1, its latest and most advanced open large language model. Llama 3.1 405B is touted as the world’s largest and most capable openly available foundation model, rivaling top closed-source AI models in various capabilities.

Meta says it “rivals the top AI models when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use, and multilingual translation.”

It also has an expanded 128K token context window and improved support for 8 languages.

Training Llama 3.1 405B involved significant technical challenges: it required over 16,000 H100 GPUs and processed more than 15 trillion tokens.

Importantly, Meta has also said multiple times that the new models will open up new applications like “synthetic data generation to enable the improvement and training of smaller models.”

Runway and YouTube

AI video generation company Runway has come under scrutiny for using YouTube videos without permission to train its new model Gen-3.

According to an internal spreadsheet obtained by 404 Media, the company trained this model by scraping thousands of videos without permission from various sources, including popular YouTube creators and brands, as well as pirated films.

Overall, the leaked document shows a company-wide effort to collect training data from unauthorized sources. This raises significant questions about the ethics and legality of Runway’s data collection practices, especially given the company’s high profile in the AI industry.

Runway raised $141 million from investors including Google (which owns YouTube) and Nvidia last year, reaching a $1.5 billion valuation.

This also comes on the heels of reports that companies like Apple, Salesforce, and Anthropic have, to some degree, done the same thing with YouTube videos.

Are We In an AI Bubble?

This is a question increasingly on the minds of AI watchers as even the most successful AI companies appear to be losing eye-watering amounts of money—and generative AI firms as a whole are jockeying to create sustainable, profitable business models.

The highest profile example of this is OpenAI, which, despite its popularity and prominence, could be facing staggering losses of up to $5 billion this year, according to a recent analysis by The Information. The analysis reveals the enormous costs associated with developing and running large language models.

OpenAI’s AI training and inference costs could reach a whopping $7 billion in 2024, with $4 billion dedicated to running ChatGPT and $3 billion for training new models.

Add to this an estimated $1.5 billion in staffing costs for its rapidly growing workforce of about 1,500 employees, and OpenAI’s total operating costs could soar to $8.5 billion this year.

On the revenue side, things look a bit brighter, but not enough to offset these massive expenses. ChatGPT is set to make $2 billion annually, with added income from its API services. The company’s total revenue is expected to be between $3.5 billion and $4.5 billion this year.

However, this still leaves OpenAI with a potential loss of $4 billion to $5 billion for the year.

Leaders like Anthropic are also burning through cash. The Information estimates it’s burning about $2.7 billion this year, with only about one-tenth to one-fifth the revenue of OpenAI.

Links Referenced in the Show

  • Llama 3.1
  • Runway and YouTube
  • Are We In an AI Bubble?
  • DeepMind AlphaProof and AlphaGeometry 2
  • Sam Altman Op-Ed
  • GPT-4o Mini LMSYS Eval
  • SearchGPT
  • Presidential Moves on AI
  • Warnings About RAG Talk-to-Document Systems
  • Apple’s AI Features Will Roll Out Later Than Expected

This week’s episode is brought to you by MAICON, our 5th annual Marketing AI Conference, happening in Cleveland, Sept. 10–12. Early bird pricing ends Friday. If you’re thinking about registering, now is the best time. The code POD200 saves $200 on all pass types.

For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: The risk we would take by not doing this is far greater than if we’re wrong. If we spend $100 billion and AI ends up not delivering value and we don’t ever get close to AGI and this was all just a bubble, who cares? Like it’s, it’s insignificant.

[00:00:15] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I’m the founder and CEO of Marketing AI Institute, and I’m your host. Each week, I’m joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:45] Paul Roetzer: Join us as we accelerate AI literacy for all.

[00:00:53] Paul Roetzer: Welcome to episode 107 of the Artificial Intelligence Show. I’m your host, Paul Roetzer, along with my co-host, Mike [00:01:00] Kaput. We are recording this on July 29th, 10 a.m. Eastern time. I think that timestamp may be important this week

[00:01:06] Paul Roetzer: because after last week, I feel like we’re just kind of entering the busy season here.

[00:01:12] Paul Roetzer: This week’s interesting. So there was some big announcements, big news items, but

[00:01:18] Paul Roetzer: in prepping for the podcast today, there’s like the first four or five topics that are all very significant. And I think it’s very macro-level significant. And what we try really hard to do on this show is take these macro-level topics and break them down so they make sense.

[00:01:39] Paul Roetzer: So business leaders, marketers, entrepreneurs, educators, people who listen to this show regularly. We try and make this stuff mean something to you. So on the surface, an article about, you know, a trillion dollars in data centers doesn’t sound that interesting to a marketer, but as we will explain today, [00:02:00] all of this is connected.

[00:02:01] Paul Roetzer: So. This is one where like, I woke up on a Monday morning early and you start prepping for the show. I’m like, oh man, I need that third cup of

[00:02:08] Paul Roetzer: coffee today. This is a lot. So, we got a ton to talk about. Mike’s done a great job as always curating the stuff into the main topics and rapid fire items.

[00:02:19] Paul Roetzer: So we got a lot to go through, so, I’ll do the quick intro up front. Today’s episode is brought to us by MAICON, our marketing AI conference, the fifth annual Marketing AI Conference. It’s going to be happening in Cleveland, September 10th to the 12th. There are more than 60 sessions, including keynotes from Mike Walsh, the author of The Algorithmic Leader,

[00:02:40] Paul Roetzer: my friend and world-renowned marketing speaker, Andrew Davis, Liz Grennan, partner at QuantumBlack at McKinsey, and dozens of other incredible speakers. So, still got time. There’s about, let’s see, 43 days until the event, I think. I saw a countdown recently. So if you’re thinking about joining us, we’ve [00:03:00] got a promo code for you.

[00:03:01] Paul Roetzer: Pod200, that’s POD200. That’ll get $200

[00:03:06] Paul Roetzer: off any pass you choose, from just a straight main conference pass to an all-access pass with workshops. Mike and I are both running half-day workshops on September 10th, so when you register, you can join one of those workshops. Jim Sterne’s also doing a workshop, so we have amazing workshops.

[00:03:22] Paul Roetzer: And then just the two days of incredible sessions, again, over 60 sessions focused on both applied AI and strategic AI. So big picture stuff. So go to MAICON.ai, that’s M-A-I-C-O-N.ai, and click register to use your POD200 code at checkout. All right, Mike. Let’s, we talked a little bit about Llama 3.1 last week.

[00:03:45] Paul Roetzer: It was rumored it was coming out. It did, and that is our lead topic today.

[00:03:50] Llama 3.1

[00:03:50] Mike Kaput: Absolutely. So yeah, last week we talked about how Meta had been talking about, or it had leaked, that it was going to release Llama 3.1.

[00:03:57] Mike Kaput: It indeed has done that, which is its [00:04:00] latest and most advanced open large language model. Now Llama 3.1 405B has 405 billion parameters. It’s touted as the world’s largest and

[00:04:10] Mike Kaput: most capable openly available foundation model, and Meta says that it, quote, rivals the top AI models when it comes

[00:04:18] Mike Kaput: to state of the art capabilities in general knowledge, steerability, math, tool use, and multilingual translation.

[00:04:26] Mike Kaput: This model also has an expanded 128K token context window and improved support for eight languages. Now, interestingly, training Llama 3.1 405B involved significant technical challenges. They used over 16,000 H100 GPUs and processed more than 15 trillion tokens during training. So importantly, and we’ll talk about this in a little bit here, Meta has also said that in addition to just being a generally open and powerful large language model,

[00:04:59] Mike Kaput: [00:05:00] that Llama 3.1 405B is also going to open up new applications like, quote, synthetic data generation to enable the improvement and training of smaller models. So remember that point as we get a little deeper into this conversation. But before we get into that, Paul, I want to first cover the basics. Maybe can you talk us through at a high level?

[00:05:22] Mike Kaput: What’s so significant about this model’s capabilities? And I think in the process, if we can, it would be helpful to clarify what Meta means when they call this model, let’s say, quote, open or, quote, open source, because they’re using that, I believe, in a little different way than maybe it’s typically been used.

[00:05:38] Paul Roetzer: I think that they’ve just decided that they’re going to define open source as what they’re doing. So, you do get a lot of arguments about this, and it actually reminds me, you know, seven, eight years ago, when the traditionalist AI researchers, you know, wanted to control the definition of AI, and they just sort of, like, [00:06:00] lost

[00:06:00] Paul Roetzer: the control of that, and it just became what we believe it to be today. I feel like open source is very quickly heading in this direction. So I think traditional, traditionalists from an open source perspective feel like the data should be shared as well. The data that was used to train it, you’re not going to get that.

[00:06:17] Paul Roetzer: Like they’re, as we will learn with Runway in a minute, like, there are a number of reasons why these companies, even the supposed open companies, are not sharing the data used to train these models. Some are competitive, some because they’re stealing copyrighted material and don’t want to admit to it yet.

[00:06:36] Paul Roetzer: there’s a lot of reasons, I guess. But they are very clearly considering what they’re doing open source. Whether traditionalists want them to call it that or not, it is, it is going to be known as that, and that just means they’re sharing the weights of the model, basically. They’re sharing the technical details of how it was trained, what the weights were.

[00:06:56] Paul Roetzer: Um, they, Meta goes into great detail about the technical [00:07:00] infrastructure used to train the models. So they’re being very open. They’re just not sharing the, the training data.

[00:07:08] Mike Kaput: So maybe talk to me about the bigger picture implications of a model of this size and power that is open. Like, you had written this past week on LinkedIn, kind of as part of an in-depth breakdown, you said that Zuckerberg has made the strategic decision to spend tens, if not hundreds of billions of dollars in coming years to commoditize the frontier model market and undercut the core revenue channels for the major players,

[00:07:38] Mike Kaput: closed model companies. So, like, can you unpack what you’re talking about there and why this is such a big deal in that context?

[00:07:46] Paul Roetzer: Yeah, let me, I’ll go into, like, three aspects here that I think are really relevant. So the first is Zuckerberg’s letter that came out on Tuesday when they released the model, that is really a manifesto for open source. And [00:08:00] since we talk so much on the show about open versus closed models, and that there’s really no

[00:08:05] Paul Roetzer: There’s no clear path forward as to, like, what wins, but this is kind of the battle that we’re in. The second component I want to talk about is the fact that much larger models are

[00:08:16] Paul Roetzer: coming soon. Um, as big as this is, this is child’s play compared to what’s going to be coming in the next 12 to 18 months. And then I’ll kind of wrap my thoughts with some additional context from that LinkedIn post you mentioned.

[00:08:28] Paul Roetzer: So, Zuckerberg’s letter, which I would recommend people read, if you’re, again, if you’re interested in this thread of AI and you want to understand the bigger stuff that’s going on here: he published a letter called Open Source AI Is the Path Forward.

[00:08:41] Paul Roetzer: He leads with an analogy of Linux, an open source operating system, as an argument for the future being open versus closed. He uses this Linux analogy in every interview he does. Like, this is his main argument. So, his premise is that what Linux did, and what open source AI will do, is make [00:09:00] things more affordable, and over time, things become more advanced, more secure, because there’s a broader ecosystem of people, developers, working within Linux

[00:09:09] Paul Roetzer: and these models, and developing their capabilities. But he also says very early on in his letter: starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency. And I believe he’s previously acknowledged Llama 4 was already in training,

[00:09:32] Paul Roetzer: And I’ll come back to that in a minute. They announced in Zuck’s letter that widespread distribution was already in place. So with the launch, all major cloud companies, including AWS, Microsoft Azure, Google, Oracle, and more, you can access

[00:09:50] Paul Roetzer: the model through them. You can get it through Perplexity already. Pretty sure you can actually choose 3.1 to be your model.

[00:09:57] Paul Roetzer: They also said, Zuckerberg said, companies like Scale AI, [00:10:00] Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data. This is a huge play for them, to work with these service provider companies who can actually go in and build these fine-tuned models for enterprises.

[00:10:13] Paul Roetzer: He went on to say Meta is committed to open source AI. He outlined why he believes open source is the best development stack, why open sourcing Llama is good for Meta, and why open source AI is good for the world. And then this really becomes the focus of the letter and makes his case for the future.

[00:10:30] Paul Roetzer: So I, I think it’s important to go through these, not just because Zuck is saying it, but because this is very clearly, as I already alluded to, a manifesto for open source. So this is putting a stake in the ground for anyone who wants to push this idea of open source being key to the future. This, this letter can be used as a basis for that.

[00:10:50] Paul Roetzer: So I’ll just kind of hit some of his key points. So, the first is why open source is good for developers, and then in turn good for brands and businesses and enterprises who can work [00:11:00] with these developers to build these models. He says we need to train, fine-tune, and distill our own models, every organization, meaning versus closed, like buying OpenAI’s closed model.

[00:11:10] Paul Roetzer: Or, you know, Google’s Gemini, for example. So, we need to train, fine-tune, and distill our own models, speaking as brands. Every organization has different needs that are best met with models of different sizes that are trained and fine-tuned with their specific data. On-device tasks and classification

[00:11:26] Paul Roetzer: tasks require smaller models, while more complicated tasks require larger models. Um, next he said we need to control our destiny and not get locked into a closed vendor. He’s taking

[00:11:37] Paul Roetzer: a shot at OpenAI, obviously. A lot of this, directly or indirectly, is against OpenAI. And then he directly goes at Apple as well.


[00:11:45] Paul Roetzer: Um, he says we need to protect our data. Many organizations handle sensitive data that they need to keep secure and can’t send to closed models over cloud APIs. We’ve definitely heard that from enterprises we talked to. We need a model that is more efficient and affordable to [00:12:00] run. Developers can run inference on Llama.

[00:12:02] Paul Roetzer: Again, inference is the moment when you actually use the model, the, you know, the output it’s giving you, not the training of the model itself. And this enables it to be, you know, bigger and faster. So now, why is it good for Meta? Everyone’s always saying, like, why would you give this away? You’re spending billions of dollars on infrastructure and then training these models and running them.

[00:12:21] Paul Roetzer: Why are you giving this away? So he says, we have to ensure we’re not locking into a competitor’s closed ecosystem where they can restrict what we build. This leads into his comments on Apple. He says, and he talks about this in interviews all the time, one of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between

[00:12:40] Paul Roetzer: the way they tax developers, which I think is like 40 percent or something of every app sale goes to Apple, the arbitrary rules they apply, and all the product innovations they block from shipping, it’s clear that Meta and many other companies would be freed up to build much better services. Now this, I thought, was interesting.

[00:12:58] Paul Roetzer: And again, I listened to the [00:13:00] interview he did, um,

[00:13:02] Paul Roetzer: where he talked about this. And I think this alludes to: they’re going to go insanely hard at glasses, like these Ray-Ban glasses they have, and like Project Astra that we saw previewed from Google. I could see them spending billions on that in the next, like, 18 to 24 months, because what he wants to do is create a new operating system.

[00:13:23] Paul Roetzer: And it actually probably explains why he spent $10 billion on trying to create the metaverse over the last, you know, decade or so. It’s because he’s trying to get off of the iPhone. He, he built his apps within an ecosystem controlled by someone else that dictates to him and Meta what they can and can’t do.

[00:13:42] Paul Roetzer: And he doesn’t want that in the future. So the metaverse was his first play to get around that. His second play is now to build glasses, most likely, and then to build their own language models, build their own intelligence into those devices, that no longer require Apple. [00:14:00] So that’s why they invested and bought Oculus and tried to build a metaverse through Oculus.

[00:14:03] Paul Roetzer: Now they’ll try and do it through these devices. So again, he didn’t say that explicitly, but that is absolutely what he’s implying here. He also said he expects AI development will continue to be very competitive, which means that open sourcing any given model isn’t giving away a massive advantage over the next best models at that point.

[00:14:20] Paul Roetzer: The path for Llama to become the industry standard is by being consistently competitive, efficient, and open, generation after generation. This one actually goes to one of the points I’ll finish with from my LinkedIn post, because he validated exactly what I was thinking there. And then he says a key difference between Meta and closed model providers

[00:14:38] Paul Roetzer: is that selling access to AI models isn’t our business. That means openly releasing Llama doesn’t undercut our revenue, sustainability, or ability to invest in research like it does for closed providers. Meaning, they’re not making their money on, you know, selling the models for 20 bucks a month, or the cloud services for, you know, for everything to be hosted.

[00:14:56] Paul Roetzer: And then he did say, and this was a direct shot at regulatory capture from [00:15:00] OpenAI and Gemini: this is one reason several closed providers consistently lobby governments against open source. And then the final notes from his letter, why open source is good for the world: he says AI has potential, more potential than any modern technology, to increase human productivity, creativity, and quality of life,

[00:15:18] Paul Roetzer: and to accelerate economic growth while unlocking progress in medical and scientific research. He also says there’s an ongoing debate about the safety of open source AI models, and his view is that open source AI will be safer than the alternatives. Um, he also says my framework for understanding safety is that we need to

[00:15:37] Paul Roetzer: protect against two categories of harm, unintentional and intentional. Now, some people may roll their eyes at this point, because the whole idea that we’re entrusting Zuckerberg and Facebook, of all people, with security and safety. They

[00:15:52] Paul Roetzer: have a questionable history of how they have used data and how their products have impacted [00:16:00] people. So, that all being aside, The Ugly Truth.

[00:16:05] Paul Roetzer: Is that the book about Facebook? I think that was it. I read it a few years ago. I’ll look it up when you’re, when Mike’s talking next, but there was a book, probably about three or four years ago, that told kind of a rather unsavory history of Facebook. But anyway, so then he goes on to say open source should be significantly safer, since the systems are more transparent and can be widely scrutinized; historically, open source software has been more secure.

[00:16:29] Paul Roetzer: So my feeling here is, “should be significantly safer” is doing a lot of work. I don’t know that this will stay true, and he has not committed to all future models being open

[00:16:41] Paul Roetzer: source, like he has said in interviews. There may come a point where this isn’t it. And even, you know, when they first announced 405B a few months ago, they didn’t say it was going to be open source.

[00:16:50] Paul Roetzer: Like, I don’t know that they were sure yet if it was safe to do it. And I do think that’s probably why we haven’t seen GPT-5 yet. I don’t know that they’ve managed to make it safe enough yet to [00:17:00] release

[00:17:00] Paul Roetzer: it.

[00:17:01] Paul Roetzer: So, I do think that there’s a time where Meta changes direction here and does not open source future models, but maybe he’s leaving himself open to that.

[00:17:09] Paul Roetzer: And then he goes into just some government stuff. So then he ends with: the bottom line is that open source AI represents the world’s best shot at harnessing this technology to create the greatest economic opportunity and security for everyone. So, if you’re for or against open source, or if you’re neutral on the topic,

[00:17:23] Paul Roetzer: go read the full article, it’s very helpful.

[00:17:26] Paul Roetzer: The second point I wanted to make, and that he alludes to, is much bigger models are coming very soon. So, they trained Llama 3.1 405B on 16,000 H100s from NVIDIA. That’s about $480 million in chips. Not the cost to do the training, but just the infrastructure, roughly. They don’t know the final price of these things, but roughly $30,000 per chip. Uh, in March of this year, Meta announced that they had created two 24,000-GPU [00:18:00] clusters that were being used to train Llama 3. So this is only 16K of the 48,000 that they already acknowledged was for Llama 3 specifically. So they didn’t even train this on the most powerful infrastructure they had already in place.

[00:18:17] Paul Roetzer: They also at that time said that by the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100 GPUs as part of our portfolio that will feature compute power equivalent to nearly 600,000 H100s. So again,

[00:18:38] Paul Roetzer: Llama 3.1 405B, which in some ways is on par with the most powerful language models in the world,

[00:18:44] Paul Roetzer: was trained on 16,000. They plan to have 600,000 by the end of this year, 600,000 equivalent H100s, which I think is because they’re counting the new Blackwell chip, the

[00:18:58] Paul Roetzer: B200, in there, [00:19:00] because that’ll be, you know, whatever, 5, 10, 20 times more powerful than the H100s. That’s $18 billion in chips. That’s not the data centers and the training, that’s just the chips they’re buying from NVIDIA.

[00:19:13] Paul Roetzer: So they’re planning on having 600,000. We know xAI is going to, you know, claim to have a hundred thousand by the end of the year. Who knows how many OpenAI has? Like, they are very good at keeping that a secret.

[00:19:24] Paul Roetzer: Yeah. The rumors are that GPT-4 was trained on 25,000 A100s, which is a previous generation of NVIDIA chips.

[00:19:32] Paul Roetzer: So probably the equivalent of, I don’t know, 10,000 GPUs or H100s, let’s say. So we’re, we’re like, we know all the frontier model companies have hundreds of thousands more GPUs than they’ve used to train the versions we currently have. And so as long as these things keep getting smarter the more data they’re given, the more GPUs they’re given, the more time, we’re heading toward this massive leap forward.

[00:19:56] Paul Roetzer: Now this doesn’t even account for the fact that these are just large language models. [00:20:00] Like Yann LeCun, the chief scientist and, what is his title, executive vice president or something of AI. So he’s the chief AI scientist and an EVP at Meta. He doesn’t even think language models are the path to intelligence.

[00:20:12] Paul Roetzer: So like all this spending and stuff isn’t even what he envisions being the path to

[00:20:16] Paul Roetzer: intelligence. So with that being said, I will wrap up my thoughts on this section with what I posted on LinkedIn, because this is what I find myself thinking about all the time.

[00:20:27] Paul Roetzer: So Meta released Llama 3.1 405B. Early reports are that it is very impressive, very powerful, and most importantly, open source.

[00:20:35] Paul Roetzer: The excerpt you read already. In essence, Zuckerberg has made the strategic decision to spend tens, if not hundreds of billions of dollars in the coming years to commoditize the frontier model market and undercut the core revenue channels for Anthropic and OpenAI, and the emerging market potential for Google, Microsoft, Amazon, by giving the technology away.

[00:20:54] Paul Roetzer: Why? Because he can. They are printing money without a cloud or enterprise [00:21:00] software business. They don’t need the AI models to grow their revenue. They can just infuse their AI into their existing social networks, which I believe is about 4 billion people or so have Microsoft, or not Microsoft, have Meta applications like WhatsApp and Instagram and Facebook.

[00:21:15] Paul Roetzer: And their future AR glasses, metaverse, whatever it is, cash cows. It’s an attempt to dominate the AI model market by winning with developers. This drives innovation and makes them look like

[00:21:25] Paul Roetzer: the Robin Hood of AI. But here’s what I can’t stop thinking about: how big of a leap will GPT-5, or whatever they name it, be when

[00:21:34] Paul Roetzer: it finally launches this year? GPT-4 was released in March 2023. That’s 17 months ago, which in AI timelines is a lifetime.

[00:21:43] Paul Roetzer: It’s taken this long for all of these other companies, who have been spending billions of dollars on research and development, talent, GPUs, and training, to create comparable models to GPT-4 and GPT-4o, the GPT-4 class of models, we’ll say.

[00:21:59] Paul Roetzer: In [00:22:00] nearly one and a half years, no one has built a definitively better model, but everyone seems to have caught up, which takes me back to GPT-5. OpenAI knows their competitors are already working on bigger, more generally capable, multimodal models, as well as smaller, more efficient models.

[00:22:17] Paul Roetzer: Meta is training Llama 4, xAI is training Grok 2 and Grok 3 that we talked about on episode 106.

[00:22:23] Paul Roetzer: Anthropic is probably training Claude 4, Google is probably training Gemini 2. So does OpenAI just build a model that gets surpassed three months later by the next frontier model, as Zuckerberg proposes in his letter?

[00:22:36] Paul Roetzer: Or do they release something that sets the standard that everyone chases for one to two years again?

[00:22:42] Paul Roetzer: And if they do, what are the breakthroughs that would need to be achieved in order to give them the pole position in the pursuit of AGI? In short, have, and this is really critical here, have AI frontier models been commoditized, as Zuckerberg would present that they are, and the future [00:23:00] is simply continuous parallel innovations across leading labs?

[00:23:05] Paul Roetzer: Or does OpenAI still have an unknown competitive advantage that gives them a unique ability to build the most advanced models possible? I guess only time will tell, but I don’t think we’ll have to wait long to find out. So I think this is the critical thing, and again, I’ll kind of read that line again, because I think this is the foundation of it all.

[00:23:26] Paul Roetzer: Have AI frontier models been commoditized, and the future is simply continuous parallel innovations across leading labs? Or does OpenAI, or Google with their new math product that we’re going to talk about, has someone found a breakthrough? Or will Meta, with Yann LeCun, be the one to unlock something that actually leapfrogs everybody, and they can’t catch up for a couple of years?

[00:23:48] Paul Roetzer: And we don’t know the answer to this. Zuckerberg seems to be on the path of, yeah, it’s just basically parallel innovation. And all these researchers are jumping lab to lab, and they’re just taking the innovation with them and giving it to the [00:24:00] other lab. And everybody’s just going to keep, like, every three to six months, one-upping each other.

[00:24:03] Paul Roetzer: And there’s going to be no definitive most powerful model in the world. I don’t know if I believe that, but that’s what they’re presenting.

[00:24:11] Mike Kaput: So, as kind of a coda to this topic, just really quickly, could you maybe talk a little bit, just kind of in a near-term practical sense, about this whole thing they’re talking about, using synthetic data to train smaller models, and like, why that’s an important kind of near-term piece here?

[00:24:27] Paul Roetzer: Well, they’re like, so the one concern people have is we run out of data, that there’s only so much data in the world that’s publicly available on the internet, whether it’s, you know, audio files, video files, text files, images, whatever it may be. That at some point they run out of data to give these models and they need to create new data.

[00:24:48] Paul Roetzer: and so by creating these bigger models, those models can create new data that’s derivative of human intelligence. And then that data can be used to train the models. [00:25:00] And research is sort of conflicted at the moment on whether or not this will actually work. And as you scale up this synthetic data, does it actually start to break down?

[00:25:09] Paul Roetzer: So, so we don’t know. But, like, we know Anthropic is very big on this. I believe OpenAI is too. Like, a lot of the research labs seem to think that there is definitely a path forward for this, where synthetic data becomes the primary training data.

[00:25:23] Mike Kaput: And so kind of the core thesis here is you give these models more compute and more data, and so far they appear to get smarter. And we just talked about all the chips. Clearly we’ve got a long runway of more compute to give them. So if you can crack the code on the data piece of this, it seems like we could be in for some significant improvement.

[00:25:42] Paul Roetzer: That is the assumption.

[00:25:44] Runway and YouTube

[00:25:44] Mike Kaput: So our second big topic this week is that AI video generation company Runway has come under some not-so-great scrutiny for using YouTube videos without permission to train its new model, Gen [00:26:00] 3.

[00:26:01] Mike Kaput: According to an internal spreadsheet that was obtained by 404 Media, the company trained this model by scraping thousands of videos without permission from different sources, including popular YouTube creators and brands, as well as pirated films.

[00:26:17] Mike Kaput: So overall, this leaked document kind of portrays a company wide effort to collect training data from unauthorized sources.

[00:26:24] Mike Kaput: So, this obviously raises pretty significant questions about the ethics and legality of Runway’s data collection practices. I mean, they’re a giant in their space of the industry.

[00:26:35] Mike Kaput: They’ve raised $141 million from investors that include Google, which owns YouTube, and companies like NVIDIA. They’ve got a $1.5 billion valuation. And this comes on the heels of stories about companies like Apple, Salesforce, Anthropic doing the same thing using YouTube videos, to some degree, which we did cover a bit on last week’s episode.

[00:26:58] Mike Kaput: So Paul, first up, like, [00:27:00] what’s going on here? Is this as bad as it seems? It certainly seems like one of the more extensive exposés on this practice, though obviously Runway’s not the only one doing it.

[00:27:10] Paul Roetzer: So, this is, I think I called it on LinkedIn when I posted about it, like, the uncomfortable part of all of this is we love these tools. Like we love Runway. I’ve been using Runway for five years. I know how it’s trained. Like, this isn’t a surprise to me. I know how Sora is trained. Like,

[00:27:26] Paul Roetzer: so I think what’s happening now is because generative AI has become so mainstream that there’s more awareness now outside of the AI research community of how this all works.

[00:27:38] Paul Roetzer: This isn’t even the ugly stuff. Like this is, this is like table stakes, how everyone is doing this. So basic premise here, and I would suggest reading the article, because if you, if you don’t understand how these models are trained, this is really illustrative

[00:27:53] Paul Roetzer: of how it works. In essence, what they do is they say, like, okay, we want to

[00:27:57] Paul Roetzer: make sure this model is really good [00:28:00] at, like, drone footage, that it can, like, follow a car on the road and create, like, a view. So someone goes and searches YouTube to find 20 videos of a drone following a car on the road. And then they basically take those videos, that someone has a copyright to, and they train their model on it.

[00:28:19] Paul Roetzer: And then you can go in and the model all of a sudden has this amazing ability to, like, give a drone view. Well, it’s because they use somebody’s data for it. There’s examples where they had specific YouTubers, like influencers, who, who pay money to like travel places and film things in specific ways and styles.

[00:28:35] Paul Roetzer: And you could go in and give Runway that influencer’s name. And just like the name and say video, like say Mike Kaput video, and it would generate a video that looks almost exactly like an actual YouTube video from that person because they didn’t red team it out of it. So what they do is they train it on all this copyrighted data.

[00:28:57] Paul Roetzer: And so these things learn to emulate [00:29:00] what they see, create videos that look almost exactly like what they see. But what they then do is they go in and put in restrictions. So if you say a Pixar-like movie, it won’t create it, because they know they’ll get sued by Pixar. So they strip the stuff out for the big brands, so that if you keyword search for it, it’s trained not to output that, so that when their lawyers go and check it, they don’t find stuff.

[00:29:24] Paul Roetzer: Doesn’t mean it’s not capable of doing it. It’s 100 percent capable of doing it, because they trained it on Pixar movies, or shorts, and Disney movies, and everything.

[00:29:33] Paul Roetzer: And so, this is how this works. They, these things have to learn from something, so they learn from human creativity.

[00:29:41] Paul Roetzer: Now, the argument of the lawyers is: that’s how humans learn.

[00:29:44] Paul Roetzer: So like, if I wanted to learn how to shoot something like Mike Kaput’s YouTube channel, I’m going to go study Mike’s YouTube channel, and then I’m going to use Mike’s style as an influence for what I create. So that’s going to be the argument of the lawyers on the side of them. Now, the catch [00:30:00] for all of this, and the weird part: why don’t they just admit it?

[00:30:03] Paul Roetzer: Like, we know that they’re doing it. Everyone knows they’re doing it. But yet, when, like, OpenAI was specifically asked, how is Sora trained? The CTO just dodged it. Like, I’m not sure if it used YouTube videos. Yeah, you are. Like,

[00:30:18] Paul Roetzer: and we know you are. But Google knows all this too, and it’s against the terms of use for YouTube videos to be downloaded and used to train models, and yet Google knows people are doing it. YouTube’s aware of this. They’re not stopping anybody, because they probably did it too. And so it’s just this really weird thing. And so I’ll, you know, I’d put some comments about this on, on LinkedIn, and a friend of mine, Sharon Torek, who’s an IP attorney, actually commented. So I’ll share her perspective as an IP attorney.

[00:30:50] Paul Roetzer: She says big AI is rolling the dice that they will be too big to fail by the time a high court determines that their use of copyrighted works to train their models is unlawful if one [00:31:00] does. To me, again, this is Sharon’s words, the lack of transparency about their training practices is telling. Of course, they’re privately owned entities who are protecting their assets and shareholders and slowing down isn’t a practical option for them.

[00:31:14] Paul Roetzer: My view is that they’re moving forward, and IP owners will probably have to make licensing deals with them to protect and monetize their works. And then someone commented, and she replied to that one and said, many rights holders won’t have the resources to prove damages. Others that do, like big entertainment IP owners, will make the business decision to license rather than deal with the infringement.

[00:31:36] Paul Roetzer: So we all know it’s unethical. We all know it’s probably illegal. But at the end of the day, it’s going to be a better business decision for these big companies to just accept that it was done and this is what happens and just make a deal and try and make money in the process rather than trying to sue them and prove damages.

[00:31:51] Paul Roetzer: Now the alternative opinion, which was provided by a friend of mine, Brad Reynolds, who’s the chief strategy officer and senior vice president of AI at Expedient, which is a [00:32:00] data center and cloud service provider. He said, not a lawyer here, but there’s plenty of, of, or there’s a pretty famous case of hiQ versus LinkedIn.

[00:32:09] Paul Roetzer: hiQ was a data analytics company that scraped a bunch of public LinkedIn data and used it in their tool, but

[00:32:15] Paul Roetzer: not as exact copies. The case: after much expensive legal investment by both sides, it was determined that if the info was public and used in a transformative way, LinkedIn had no claim. It would be different if it was behind some sort of user registration that requires you to log in and agree to terms and conditions.

[00:32:34] Paul Roetzer: But once you post it in a public space, it is free for others to consume, assuming they don’t just copy and paste it and claim it as their own, which is where the transformative nature comes in. In terms of large language models and transformative, I would say: where, uh, say in Llama 3’s source code, can you find the exact content?

[00:32:53] Paul Roetzer: So he’s just saying, like, it’s transformative. In terms of why they are cagey: in our extremely litigious culture, and one in [00:33:00] which the loser has no downside other than time invested, even a small probability of success can send a certain set of lawyers to roll the dice on a class action lawsuit. Giving those folks extra ammunition, if only on a 5 percent roll, is dangerous because it can cause a lot of operational distraction.

[00:33:17] Paul Roetzer: So in essence, like, the tech side of this is they’re just rolling the dice, and the IP attorney side is basically kind of similar. And at some point, someone’s got to decide that they want to take down this industry. Like, was it Peter Thiel who took down, like, that one media

[00:33:33] Mike Kaput: Right, Gawker, yeah.

[00:33:34] Paul Roetzer: Yeah, someone needs to decide, like, I’m willing to spend a hundred billion on taking down these foundation models. If that happens, then we, then it’s going to be a really interesting time. But I still err on the side of what Sharon is saying, and what I’ve said all along. I just feel like at some point, this is going to become the norm.

[00:33:53] Paul Roetzer: People will talk more openly about how they’ve been doing it, and licensing deals will be struck, and [00:34:00] unfortunately, the small creators, the innovators who won’t have a say in any of this, aren’t going to make any money as a result, and I don’t hear anybody solving for that.

[00:34:09] Are We In an AI Bubble?

[00:34:09] Mike Kaput: Alright,

[00:34:10] Mike Kaput: so the third big topic we want to cover this week is really trying to kind of pick apart and answer the following question, which people are increasingly asking, which is: is there an AI bubble? So this is increasingly on the minds of AI watchers, as even the most successful AI companies appear to be losing

[00:34:31] Mike Kaput: eye-watering amounts of money. And generative AI companies as a whole are kind of jockeying to figure out sustainable, profitable business models. The highest profile example of this is OpenAI, which, despite its popularity and prominence, could be facing staggering losses of up to $5 billion this year, according to a recent analysis by The Information.

[00:34:58] Mike Kaput: Now, this is based on previously [00:35:00] undisclosed internal financial data and insights from individuals close to the business. And their analysis reveals just these enormous costs that are associated with developing and running large language models. Like, for instance, OpenAI’s AI training and inference costs could reach

[00:35:17] Mike Kaput: $7 billion in 2024. That’s roughly $4 billion dedicated to running ChatGPT and $3 billion for training new models. There’s an estimated $1.5 billion in staffing costs for its growing workforce of about 1,500 employees. So their total kind of operating costs could soar to $8.5 billion, according to The Information. But they are on track to generate $2 billion annually from ChatGPT, and their total revenue for the year, with some additional income coming in from API services, is projected to maybe land between $3.5 and $4.5

[00:35:53] Mike Kaput: billion. So that’s kind of where they’re getting the $5 billion number, which leaves OpenAI losing [00:36:00] anywhere from $4 to $5 billion this year. They’re not the only one doing this. There are other leading companies like Anthropic that are burning through cash. The Information also estimates they’re burning about $2.7

[00:36:12] Mike Kaput: billion this year, and they estimate they only have about one-tenth to one-fifth the revenue of OpenAI. Now, Paul, that sounds dire, but there’s actually probably some more going on here, because people are definitely tempted to pile on when they read stories like this, you know, because they can validate what plenty of people believe or want to believe about generative AI: that, you know, it’s overhyped, overvalued, no real use cases or business models.

[00:36:39] Mike Kaput: But you and I have kind of talked offline about how there’s more to the story here. Can you kind of zoom us out and put into perspective what’s going on here?

[00:36:50] Paul Roetzer: As crazy as that sounds to most business leaders, maybe most people who would listen to this podcast, losing $5 billion or more in a year is insignificant. [00:37:00] Like, in the grand scheme of what they’re pursuing, the market potential they’re pursuing, $5 billion is nothing. I mean, we just talked about the GPU, you know, the cost of GPUs. Like, $18 billion is what Meta’s

[00:37:12] Paul Roetzer: spending just on GPUs, like, not even the training and the inference and everything else.

[00:37:18] Paul Roetzer: $5 billion is, is nothing. I don’t know, off the top of my head, the R&D budgets of NVIDIA and Google. Maybe you could find them real quick, Mike, but I want to say it’s like $50 billion a year, like, something like that.

[00:37:31] Paul Roetzer: I mean, they’re not spending insignificant amounts of money. They’re taking shots at future earnings. So, relatively, not a lot of money. Now, are we in a bubble? Maybe. Like, nobody really knows. I’m 100 percent confident that these hyperscalers are just getting started on the spending. So, I saw a great tweet from, let’s see, who’s this from?

[00:37:53] Paul Roetzer: Doug Clinton. So he is with Deepwater Management, an investment firm, um, [00:38:00] at Doug Clinton on Twitter. And he shared this, like, really cool visual of what he was calling a Pascalian wager. So I had to look up who Blaise Pascal was, but basically a 17th century French philosopher, mathematician, and physicist.

[00:38:17] Paul Roetzer: And in the 1600s, he suggested that people make a life-defining gamble when deciding whether or not to believe in God. And so it creates basically this grid, like the four squares, and the argument is based on the idea that if God exists, then believing in him is infinitely beneficial. But if he doesn’t, then the loss is finite for both believers and non-believers.

[00:38:38] Paul Roetzer: So Pascal believed that since it’s impossible to prove or disprove God’s existence, it’s better to bet that God does exist and live accordingly. So he took this basic premise and he put it into a grid of: is AI profound or not, basically? And so again, it’s like, it’s, it’s a really great one to go look at, but basically in the grid, he says [00:39:00] AI is overblown.

[00:39:00] Paul Roetzer: So let’s say that these hyperscalers like Amazon and Microsoft and OpenAI and Google,

[00:39:07] Paul Roetzer: um, believe AI is profound, and it ends up being overblown. His point there is, well, who cares? You waste hundreds of billions of dollars on AI infrastructure that you can eventually use for something else anyway. So you don’t really, like, lose anything.

[00:39:22] Paul Roetzer: If you get it wrong, then there’s the “AI is overblown” belief, and, and, and AI actually is overblown. So there’s the belief it’s overblown, and it is. He says: lame reward. You save wasting hundreds of billions of dollars on AI infrastructure. So you decide, eh, we’re just not going to do it. Like, it’s not worth it.

[00:39:40] Paul Roetzer: We’re not going to go and compete. And so you save a couple hundred billion dollars over the next five, ten years. But then you have no potential to unlock the trillion-dollar-plus market. So that’s the “AI ends up being overblown” argument. AI ends up being profound: so if you, if AI is profound and you believe it, he calls it heaven.

[00:39:59] Paul Roetzer: You [00:40:00] capture hundreds of billions in incremental annual revenue because you were willing to spend the money now to do it. AI

[00:40:06] Paul Roetzer: ends up being profound and you don’t do anything about it? Well, you’re in hell. You lose significant ground to competition; maybe you’re no longer a megacap company. So, in essence, what they’re saying, which, which then gets played out in comments made, um, this past week by Sundar Pichai and Mark Zuckerberg.

[00:40:27] Paul Roetzer: So, David Cahn, who is with Sequoia, tweeted: Google and Meta CEOs, both in the last 24 hours, now agreeing with my AI arms race narrative, which was based on something else he’d written. There are four major players in the game theory problem, and two are now on the record about strategies. What he calls out, he

[00:40:44] Paul Roetzer: says, Google and Meta CEOs gave the same answer yesterday when asked about the disconnect between aggressive AI capital expenditure spending and lack of revenue monetization.

[00:40:54] Paul Roetzer: So, here we go: $5 billion lost by OpenAI, and they’re not realizing the money back yet. So here’s why [00:41:00] they’re doing it.

[00:41:00] Paul Roetzer: So Google CEO Sundar Pichai spoke on the earnings call and said, quote, the risk of underinvesting is dramatically greater than the risk of overinvesting. Over

[00:41:10] Paul Roetzer: investing scenarios still leave you with infrastructure that has a long, useful life that you can apply across the business and work through, while not investing to be at the frontier has more significant downsides.

[00:41:22] Paul Roetzer: So basically, the risk we would take by not doing this is far greater than if we’re wrong.

[00:41:29] Paul Roetzer: If we spend $100 billion and AI ends up not delivering value, and we don’t ever get close to AGI, and this was all just a bubble, who cares? Like it’s, it’s insignificant. And then Zuckerberg said basically the same thing.

[00:41:43] Paul Roetzer: AI will be fundamental if products are able to grow massively over the next few years, which I think there’s a very good chance of. I’d much rather over-invest and play for that outcome than save money by developing more slowly. He acknowledges a meaningful chance a lot of companies are overbuilding now and will look back and [00:42:00] see they’ve spent some number of billions more than they had to.

[00:42:02] Paul Roetzer: But they’re all making rational decisions, because the downside of being behind leaves you out of position for the most important technology over the next 10 to 15 years. So I would connect this to your own business. Like, if you’re sitting on the sidelines waiting to see if AI will be profound or not, you’ve lost.

[00:42:22] Paul Roetzer: Like, you will lose. Like, so you’re basically betting it’s not going to be profound. But your competitors are betting it’s going to be profound,

[00:42:31] Paul Roetzer: and they’re willing to invest in the infrastructure and upskill their talent and, you know, figure AI out, pilot it and scale it. You’re going to have no chance.

[00:42:42] Paul Roetzer: And that’s where I think a lot of this comes down to: like, where are we really in the adoption curve, the understanding curve, and is it playing out the way people think it is? And, and I just wouldn’t want to be on the side that’s betting it’s not going to be profound, from a business standpoint. [00:43:00] Then there was one other article we’ll link to. This is from The Economist, I believe. I had this, let me double-check this source. Yeah, so this is The Economist, and it’s called, what are the threats to the $1 trillion artificial intelligence boom? So in this, it talks about, it says Alphabet’s capital spending is expected to grow by about half this year, to $48 billion.

[00:43:26] Paul Roetzer: Much will be spent on AI-related gear. This is basically talking about the infrastructure layer, where all this is going. It estimates that Alphabet, Amazon, Meta, and Microsoft will together splurge $104 billion on building AI data centers this year. Add in spending by smaller tech firms and other industries, and the total AI data center binge between 2023 and 2027 could reach $1.4

[00:43:50] Paul Roetzer: trillion. So to understand where the market is going, you have to follow the supply chain. You’ve got to look not just back and forth at those major players, but where [00:44:00] else is it coming from? So in this article, it says AI investment can broadly be split into two. Half goes to chip makers like NVIDIA.

[00:44:07] Paul Roetzer: The rest is spent on equipment that keeps chips whirring, ranging from networking gear to cooling systems. The Economist looked at 60 such companies. So again, a really cool analysis if you want to understand this a little deeper. Oh, like for example, Eaton, which is actually about 30 minutes from our home in Cleveland, like, it’s a Cleveland-based company, an American maker of industrial machinery, said that in the past year it saw a more than fourfold increase in customer inquiries related to AI data center products.

[00:44:37] Paul Roetzer: So basically we’re at this point where everyone is betting that AI will be profound, from the, the suppliers to whoever. And if you want to get a sense of whether or not that belief, that conviction, remains, you’ve got to follow the supply chain. Because if the supply chain for all the data center, uh, gear, everything related to this infrastructure, including [00:45:00] energy, continues to accelerate, and their earnings continue

[00:45:03] Paul Roetzer: to project out increases over the next 12 months, that means demand is still there for this. And so this is, like, going back to, you know, a few weeks ago when I announced that we launched SmarterX, our AI research firm. This is why. Like, the story of AI is so much broader, and to understand what it means to our businesses and our own lives, we have to, like, understand a more holistic approach to this, which is what we try and do, as I said in the lead-in to this.

[00:45:30] Paul Roetzer: These macro trends, we’re trying to help everyone understand them in a way that makes sense to you in your career, because you might hear about some earnings from some company in Taiwan and think, what does that matter? Well, it matters because it’s an indicator that the big companies we all use to do our jobs, like Google and Microsoft and OpenAI, still believe that there’s massive scale ahead.

[00:45:52] Paul Roetzer: It’s okay, cool. Like, the models are getting, what it basically means is, at the end of the day, it looks like the models are going to keep getting bigger and smarter. So [00:46:00] everything we’ve now talked about over the first 40 minutes here, that’s what it comes down to: all indications, based on this, are that we are not in a bubble and

[00:46:09] Paul Roetzer: that models are going to keep getting bigger and smarter, and then those bigger and smarter models are going to train smaller, more efficient models. And off we go.

[00:46:17] Mike Kaput: That’s an awesome analysis. And yeah, just like really emphasizing one of those points Zuckerberg made: not only are we probably not in a bubble, this behavior is very rational when you start unpacking it. And again, we’ll see what the outcomes are. But yeah, once you really start scratching below the surface of some of this

[00:46:36] Mike Kaput: stuff, you start to see, okay, like, here’s exactly the thought process going into it, which I think is extremely valuable for any business owner or leader to be keeping tabs on.

[00:46:46] DeepMind AlphaProof and AlphaGeometry 2

[00:46:46] Mike Kaput: Alright

[00:46:46] Mike Kaput: so what’s really cool is this

[00:46:48] Mike Kaput: next topic, the beginning of our rapid fire, kind of shows us how these models might be getting smarter, because Google DeepMind has achieved a significant milestone in [00:47:00] mathematical reasoning with their latest AI systems. One is called AlphaProof, the other is called AlphaGeometry 2.

[00:47:08] Mike Kaput: These breakthrough models just demonstrated their capabilities by solving complex problems from the 2024 International Mathematical Olympiad, the IMO, which is one

[00:47:18] Mike Kaput: of the most prestigious mathematical competitions for pre-college students. The combined AI systems solved 4 out of 6 problems from this year's IMO, earning a score of 28 out of 42 possible points, since each of the six problems is worth 7 points.

[00:47:31] Mike Kaput: Now, that puts them just shy of the gold medal threshold of 29 points, so basically the AI systems are at the same level as a silver medalist in this competition. So, AlphaProof, which is a reinforcement learning based system for

[00:47:47] Mike Kaput: formal math reasoning, tackled two algebra problems and one number theory problem. It also solved the hardest problem in the competition.

[00:47:55] Mike Kaput: AlphaGeometry 2 is an improved version of the geometry-solving [00:48:00] system, and it proved a geometry problem in just 19 seconds. The systems' performance was evaluated by prominent mathematicians as well, and they were quite impressed with how these systems were able to actually reason through mathematical problems and theorems in non-obvious ways.

[00:48:19] Mike Kaput: Now, Google DeepMind sees this as a step towards a future where AI tools actually assist mathematicians in exploring hypotheses, solving problems that have not yet been solved, and expediting the time-consuming elements of the process of solving those problems. So,

[00:48:37] Mike Kaput: Paul, like, on the surface, this is about AI doing really well in a math competition, but it is so much more than that.

[00:48:44] Mike Kaput: Like, why does this breakthrough in mathematical reasoning matter so much to where AI is going?

[00:48:51] Paul Roetzer: It makes the language models smarter, more generally capable. I think that's the biggest thing. We know that for these models to keep getting smarter, to [00:49:00] approach average human level, or superhuman level, in terms of their capabilities at cognitive work, they need more advanced reasoning capabilities, and math is a way to achieve that.

[00:49:11] Paul Roetzer: And so this is all in pursuit of AGI. So again, everything kind of comes back to that. People spend so much time analyzing where we are today and what these models can do, and sometimes they lose sight of: well, what if these research labs are right, and we can make these models significantly smarter, almost, you know, skilled, expert, human level, or beyond?

[00:49:31] Paul Roetzer: And that's what these kinds of breakthroughs potentially lead to. So, you know, quick recap: what do advanced reasoning and logical thinking abilities do? They enhance our ability to use these models to assist in decision making, drive more complicated problem solving, and help us solve deeper problems that might require System 2-type thinking, you know, not a quick solution.

[00:49:53] Paul Roetzer: We have to actually think through different steps, different processes. Certainly efficiency and productivity of the models in terms of applying [00:50:00] them to regular tasks and complex projects; improved accuracy and consistency, the ability to self-check their work is one example; and collaborative intelligence, where we're going to be working more closely with these things like we would with other humans.

[00:50:13] Paul Roetzer: So if Mike and I were working on a project together, I know Mike has tons of experience and knowledge and capabilities, and I also know Mike can think logically about problems. If I'm trying to solve a problem, I want to work through it with someone who's a good strategist, who thinks about things, who connects seemingly unconnected aspects of the problem.

[00:50:32] Paul Roetzer: That's what AIs don't do right now, but that's where this goes: this idea of a true strategist, not just a limited assistant that does things, but actually a smart strategist.

[00:50:45] Paul Roetzer: I think this is a known area that they're all working on. They're all trying to get there. Strawberry, which we talked about from OpenAI, I think it was last week, is the same basic concept. And the one thing I was kind of laughing at: [00:51:00] when the data came out, someone tweeted, Aiden, I don't know who this person is, but he's Topology AI on Twitter. He put a chart up of the Google breakthrough in terms of scoring on the IMO problems.

[00:51:14] Paul Roetzer: And he said, OpenAI has the opportunity to do the funniest thing. And he was showing, basically, achieving gold level. Well, Sam Altman replied to it.

[00:51:21] Paul Roetzer: So there's definitely a part of me that thinks that, again, Sam never tweets anything that doesn't have some deeper meaning to it.

[00:51:31] Paul Roetzer: My guess is whatever we see next from OpenAI probably blows these results away, or at least, at minimum, follows a similar trajectory and probably surpasses this outcome. But they're all working on the same basic problem.

[00:51:48] Sam Altman Op-Ed Future of AI Warning

[00:51:48] Mike Kaput: So, related to OpenAI, Sam Altman has just issued a stark warning about the future of artificial intelligence in a recent Washington Post [00:52:00] opinion piece. In this piece, Altman writes that we're at a critical juncture where the world must choose between two paths.

[00:52:07] Mike Kaput: And basically, those two paths are a democratic vision for AI, led by the United States and its allies, or an authoritarian one controlled by regimes that don't share democratic values. He actually writes, quote, there is no third option, and it's time to decide which path to take.

[00:52:26] Mike Kaput: So he points out that while the US currently leads in AI development, this position is not guaranteed, because authoritarian governments, particularly China and Russia, are investing heavily to catch up and potentially overtake the

[00:52:41] Mike Kaput: West. And he warns that if these regimes win the race, they will closely guard AI's benefits and use AI to cement their own power.

[00:52:50] Mike Kaput: So Altman says that to actually maintain the lead and ensure a democratic future for AI, we need to get four big things right. First, we need [00:53:00] robust security measures to protect AI intellectual property and data. Second, we need significant investment in physical infrastructure within the U.S. and democratic countries to support AI systems, which he believes

[00:53:13] Mike Kaput: will create a ton of jobs. He even went so far as to say,

[00:53:15] Mike Kaput: quote, infrastructure is destiny. It determines the direction the entire nation goes in. Third, he urges the development of a coherent commercial diplomacy policy for AI, basically getting smart about export controls and foreign investment rules. And finally, he suggests creating new models for establishing global norms in AI development and deployment, with a focus on safety and inclusivity.

[00:53:42] Mike Kaput: Now, Paul, these seem like pretty reasonable, interesting points from one of the top leaders in AI, but Sam Altman's a pretty busy guy. He's got, you could say, a lot on his plate. Like, why am I reading this now?

[00:53:54] Paul Roetzer: Right. So, obviously, the timing is intentional. I don't, you know, [00:54:00] pretend to know why Sam does exactly what he does, but a few things immediately came to mind for me. The elections are coming up, so, you know, he wants to start influencing how the different administrations might look at things in the United States.

[00:54:12] Paul Roetzer: We're in the election cycle now; I'll talk a little bit more about that in a moment. Legislation at state and national levels is evolving, and at the state level it's moving maybe a little faster than people in the AI research community would want. Specifically, SB 1047 in California is now through something like eight rounds of revisions and seems to be gaining more and more traction.

[00:54:33] Paul Roetzer: Obviously the open source movement is probably on Sam's mind, and while he's actually an advocate of open source in some ways, he has obvious concerns about it in others. I would imagine GPT-5 is nearing release at some point here, and they're trying to pave the way for what that might lead to.

[00:54:51] Paul Roetzer: And then I assume they're in the process of raising historic levels of money, and whether they find a way to eventually break from their nonprofit charter [00:55:00] and IPO, or just raise $100 billion, $500 billion, whatever the number is, it's going to be massive. And I think all of that is probably happening right now.

[00:55:12] Paul Roetzer: So I think we're probably going to see more editorials from Sam, probably more very public-facing interviews and thoughts. A few episodes ago, I shared my thoughts that the US government needed an Apollo-level mission or beyond

[00:55:26] Paul Roetzer: with AI, and not just on infrastructure. I agree, I think keeping the infrastructure in the United States is critical to safety and security.

[00:55:35] Paul Roetzer: Ironically, Sam's the one that was rumored to be talking to other governments about moving data centers into their countries. But maybe this is his play to try and force the US to keep it here, to do what needs to be done so he doesn't have to take it somewhere else. You know, he's probably playing both sides here. I also think the government needs to invest massively in education and training the next generation of the workforce. There probably needs to [00:56:00] be some level of government subsidies. We're not going to get into Sam's

[00:56:05] Paul Roetzer: universal basic income study, but we'll put a link to it in the newsletter and in the show notes. He just came out with, like, a seven-year study

[00:56:11] Paul Roetzer: that he led, to basically gauge how providing people with universal basic income, basically a stipend each month, would impact them. Because that's one of his solutions for when AI starts doing a lot of the knowledge work jobs: that we just give people money.

[00:56:26] Mike Kaput: Right.

[00:56:27] Paul Roetzer: I do think that at some point we're gonna

[00:56:30] Paul Roetzer: need something like that. There's precedent with the pandemic: the US did provide money to people, and they can go back and look at the data on how that impacted people. I think that's within the realm of possibility, and I think the government needs to be looking at everything as a possibility. There were two lines

[00:56:48] Paul Roetzer: that I bolded out of this, and then we'll kind of move on. I feel like this could have been a main topic for sure. He called ChatGPT and other current systems limited assistants, [00:57:00] and I think in the past he's called them embarrassing, but said, quote, more advances will soon follow and will usher in a decisive period in the story of human society.

[00:57:11] Paul Roetzer: Now, anyone who's heard me give keynotes before knows I often talk about Sam's Moore's Law for Everything blog post from March 2021. He laid out exactly what was going to happen, and it happened. He doesn't write things like this when he's not confident that he has seen the future already. And so I would put a lot of stock in this stuff.

[00:57:34] Paul Roetzer: Sam is not one that's out there using a lot of hyperbole and trying to exaggerate; he generally has a history of previewing the future for people because he's seen it already. And I think people should take stuff like this very seriously. And then he said, we are witnessing the birth and evolution of a technology I believe to be as momentous as electricity or the internet.

[00:57:57] Paul Roetzer: Sundar Pichai has echoed those words; I think [00:58:00] fire was the comparison Sundar used. And Sam said, AI can be the foundation of a new industrial base. It would be wise for our country to embrace it.

[00:58:08] GPT-4o Mini LMSYS Evaluation

[00:58:08] Mike Kaput: So, in some other OpenAI news this past week, OpenAI's new, smaller, cheaper model, GPT-4o Mini, now has some public benchmarking results on lmsys.org, which is a popular, credible chatbot ranking site we've talked about quite a bit. They use user feedback, in the form of head-to-head votes, along with Elo ratings, the same rating system used in chess, to rank the capabilities of all the major AI models out there.

[00:58:36] Mike Kaput: You can see all of these in their Chatbot Arena at lmsys.org.
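For those curious about the mechanics behind the chess-style ratings Mike mentions, here is a minimal sketch of a classic Elo update. The K-factor and starting ratings are illustrative assumptions, not LMSYS's actual parameters, and the arena's production methodology includes its own statistical refinements.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))


def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one head-to-head vote."""
    expected_a = expected_score(rating_a, rating_b)
    actual_a = 1.0 if a_won else 0.0
    delta = k * (actual_a - expected_a)
    return rating_a + delta, rating_b - delta


# Hypothetical example: a vote where the lower-rated model wins.
mini, big = 1000.0, 1100.0
mini, big = elo_update(mini, big, a_won=True)
print(round(mini), round(big))  # 1020 1080: an upset shifts ratings by ~20 points
```

The upshot: wins against higher-rated models move a rating up more than wins against lower-rated ones, and ratings only stabilize as votes accumulate, which is why the vote counts Mike cites next matter.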

[00:58:40] Mike Kaput: And according to these ratings, GPT-4o Mini delivers, because on the leaderboard as of right now, and it can change very quickly, it is currently sitting right around the top, joint with, or slightly behind depending on when you check, GPT-4o.

[00:58:56] Mike Kaput: Now, the key here is that this is while [00:59:00] GPT-4o Mini is 20x cheaper to operate. It only has about 7,000 user votes as of today. For context, GPT-4o has way more, around 65,000; it's been out longer. Others have over 100,000. So

[00:59:16] Mike Kaput: just take that with a grain of salt. But, Paul, it's super interesting that GPT-4o Mini is already delivering on this promise we've talked about: more powerful intelligence that gets cheaper and cheaper over time, a trend that is really going to change things. Do you see this as an example of that starting to play out a little bit?

[00:59:38] Paul Roetzer: Yeah, and I also think GPT-4o Mini is the prelude to GPT-5. If you go read the announcement post from July 18th, where they say, introducing our most cost-efficient small model, there are capabilities from this model that they haven't released yet. It specifically says that today GPT-4o Mini supports text and vision in the API, with support for text, image, video, [01:00:00] and audio inputs and outputs coming in the future.

[01:00:02] Paul Roetzer: And then down at the bottom, it talks about how they're basically just releasing these initial capabilities. So I do think there are hints, when you start piecing this stuff together, of what the next iteration of models will be. This is what I talked about,

[01:00:18] Paul Roetzer: I think it was last week, where we said, you know, GPT-4o right now seems amazing, but this stuff's going to be, like, free. Six months from now, when we have the next version of this, we'll look back like, oh, that was cute. Remember GPT-4o Mini? It

[01:00:33] Paul Roetzer: could pass all these exams. And this is the stuff they're just going to give away. And that's weird. It's very bizarre to think about a future where this level of intelligence is limited, in Sam's words, or embarrassing, and yet it's so powerful when applied to the right use cases in companies.

[01:00:54] Mike Kaput: And like you mentioned in a previous topic, it's kind of hard to remember that GPT-4 came out just 17 months [01:01:00] ago. Look how far we've come. I mean, 17 months is a long time, but it's not that long for the innovation that we've seen.

[01:01:08] Paul Roetzer: Yeah.

[01:01:09] OpenAI Announces SearchGPT AI

[01:01:09] Mike Kaput: All right. So one other piece of OpenAI news this week: OpenAI has unveiled a prototype of what they're calling Search

[01:01:17] Mike Kaput: GPT, which is an AI-powered search engine. SearchGPT uses the GPT-4 family of models to generate responses to user queries, presenting information with real-time facts in a conversational format and allowing for follow-up questions. It also features a sidebar with relevant links. The search engine includes a visual answers feature as well, and OpenAI plans to eventually integrate these search capabilities directly

[01:01:46] Mike Kaput: into ChatGPT. Right now, it is just a prototype available only to a small group of test users. But their approach, at least up front from what they've demonstrated, appears to place a real emphasis [01:02:00] on clear, prominent attribution and links to sources, and publishers will have options to manage how their content appears in search results.

[01:02:09] Mike Kaput: This appears to be a response to some of the recent controversies involving, among others, Perplexity, which has been lambasted for not showing prominent enough attribution to the sources it takes information from in search results. So, Paul, I guess, again, it's still very, very early, but how big a deal is it that they're previewing this right now?

[01:02:33] Paul Roetzer: We’ve known it was coming. This hasn’t been a big secret that they were exploring search. Sam’s been asked about it many times in interviews.

[01:02:42] Paul Roetzer: You can sign up for a waitlist. I don't think you can do it through a business account with ChatGPT, but you can go in through your personal account and sign up to get on the waitlist.

[01:02:51] Paul Roetzer: I think, you know, initially the biggest impact is going to be, if it's really good, I'd be really worried if I was Perplexity, for one. [01:03:00] Perplexity is great, but I don't know how many users it actually has. I would imagine it's in the

[01:03:04] Paul Roetzer: low millions, probably not more than 10 or 20 million. I might be way off on that, but I've got to guess. ChatGPT's? Who knows, 200 million users, and that's probably just natively in their app. That doesn't even account for usage through Bing and other platforms. So yeah, there's a distribution problem if you're Perplexity,

[01:03:23] Paul Roetzer: and if ChatGPT or OpenAI nails this, then that's bad news. It's also worrisome for Google. And then it just kind of came to mind: I wonder if this is the future of Bing, too. What if OpenAI creates a better search to power a better knowledge assistant and agent?

[01:03:43] Paul Roetzer: Does that just usurp the need to have Bing be a standalone product? I have no idea, but it just kind of crossed my mind as a possibility.

[01:03:51] Presidential Moves on AI

[01:03:51] Mike Kaput: So there are also a couple of major moves on AI happening at the presidential and presidential candidate [01:04:00] level.

[01:04:00] Mike Kaput: Allies of former U.S. President Donald Trump have drafted a sweeping executive order that could reshape the landscape of AI in the U.S. if he's

[01:04:11] Mike Kaput: reelected. This order proposes several key initiatives, including launching a series of, quote, Manhattan Projects to develop military AI technology. It involves immediately reviewing and potentially rolling back, quote, unnecessary and burdensome regulations on AI, and creating industry-led agencies to evaluate AI models

[01:04:32] Mike Kaput: and secure systems from foreign adversaries. This comes amidst a couple of other important pieces of news. There seems to be a broader political realignment going on in Silicon Valley, with some major executives starting to back Trump as a candidate. That includes Elon Musk and venture capitalists like Marc Andreessen and Ben Horowitz.

[01:04:53] Mike Kaput: It also comes amidst a broader shake-up in the presidential race, with VP Kamala Harris now the [01:05:00] presumptive Democratic nominee after Joe Biden declined to run again. And that's interesting as well, because Harris has been, in many ways, the administration's public face of AI over the last few years, and she played a crucial role in creating Biden's executive order on AI, which

[01:05:16] Mike Kaput: is a comprehensive order covering a ton of different aspects of safe and secure AI creation and usage in the U.

[01:05:24] Mike Kaput: S. And like you alluded to, Paul, it's also a time when increasing amounts of legislation are happening at the state level, too. So are you expecting AI to become a major direct issue in the election? Obviously AI will have a huge impact behind the scenes, like we've talked about, with deepfakes and things like that, but are you seeing it become a major policy pillar?

[01:05:48] Paul Roetzer: So one quick thought for everyone: we're not going to get into politics on this show.

[01:05:55] Mike Kaput: Right.

[01:05:56] Paul Roetzer: There are too many tech podcasts that, in my opinion, [01:06:00] have lost their initial value and audience because they've chosen to become political analysts. That is not going to be Mike and me.

[01:06:09] Paul Roetzer: We will talk about politics to the extent that it relates to the future of artificial intelligence and business. We will not be giving you our personal opinions on politics or political leaders. That is not what you're here for. That being said, what Trump's campaign is doing is courting some billionaires from Silicon Valley, including Elon Musk, David Sacks, and the one you didn't mention, Palmer Luckey, who, interestingly enough, was let go from Meta, it's believed, though there's

[01:06:45] Paul Roetzer: a little bit of both sides to that story, because he backed Trump in 2016. Palmer Luckey is the inventor of Oculus; he sold Oculus to Meta. He now runs a defense company called Anduril [01:07:00] Industries.

[01:07:00] Paul Roetzer: So when we talk about Manhattan Projects for military AI, my guess is it's Palmer Luckey that he's listening to when it comes to that stuff. A lot of what Trump's platform has become is whatever these billionaires want, so that they'll give him the money.

[01:07:16] Paul Roetzer: Musk originally supposedly committed $45 million a month. He then backed off and said it's not, in fact, $45 million a month. But prior to Harris entering the race, Trump was being backed by four or five Silicon Valley leaders, Peter Thiel is another one, who are basically setting the AI agenda, and then it's trumpeted out, no pun intended, on Trump's

[01:07:37] Paul Roetzer: platform. So that's what's happening there. Trump himself has no known opinion on AI or Bitcoin or any of it; it's all just about what these supporters want. What we know about the Biden administration that Harris is a part of is that they do have an executive order.

[01:07:54] Paul Roetzer: So there's actual policy out there about it. And ironically, they just [01:08:00] released, on July 26, a fact sheet that gave an update on AI actions and commitments related to AI. This is because it's been 270 days since the executive order went into place. So if you want an update on what's going on with the current executive order, it goes into pretty great detail, including the fact that Apple just signed on to the voluntary commitments.

[01:08:24] Paul Roetzer: They talk about the different advancements that were made and how they've pretty much met all the different executive order items related to risk, safety, consumer protection, and responsible AI innovation. So it's there to be read. This is a rapid-fire item, so I won't go through the entire analysis. But again, our perspective is that we've got to pay attention to what these two potential administrations in the United States think and say about AI, because it ends up affecting all of us eventually.

[01:08:55] Paul Roetzer: And I'll kind of leave it at that.

[01:08:58] Ethan Mollick Warns about RAG Talk-to-Document Systems

[01:08:58] Mike Kaput: All right. So we've got a [01:09:00] few more rapid fires here. Next up: AI expert Ethan Mollick, who we talk about all the time on this podcast, big fans of his work, has put out a timely warning for companies that are building retrieval augmented generation, or RAG, AI systems.

[01:09:16] Mike Kaput: This past week, he posted on LinkedIn, quote: Warnings for companies building talk-to-your-document RAG systems with AI, which is a lot of

[01:09:24] Mike Kaput: companies. 1. No one is testing the final LLM output enough. It can be both true and misleading. There are no automated benchmarks for this. 2. No one will ever check the primary source.

[01:09:35] Mike Kaput: Seriously, our research shows this. 3. Users really don't understand them well. Among many misunderstandings, they expect the RAG system to work like a search engine, not as a flawed, forgetful analyst. 4. LLM systems are persuasive, not passive. They want to make the user happy, and they will persuade them that the results are what they wanted if they aren't.

[01:09:59] Mike Kaput: [01:10:00] And five, users are used to type 1 errors in search. False negatives. For instance, the document that is there wasn’t found. They aren’t used to type 2 errors with false positive. Details are made up that aren’t there.

[01:10:14] Mike Kaput: So, Paul, why is he talking about this now? And why is this kind of warning important to keep in mind?

[01:10:23] Paul Roetzer: We've talked recently about this idea of precision, and large language models aren't precise. So if you're using them for a task internally that requires precision, data analysis is a good example, something where you need to know the actual numbers. You know, we just published the State of Marketing AI Report last week.

[01:10:48] Paul Roetzer: If we're using an LLM, like Code Interpreter within ChatGPT, to assist in the analysis of the data, a human has to double-check everything, because they're not [01:11:00] precise. They will make mistakes, and it'll seem correct. And if you ask it, did you check your work? It'll say, yes, I'm a hundred percent sure it's correct.

[01:11:07] Paul Roetzer: That's how these things work. And this goes back to AI literacy: because of the lack of understanding, the lack of education and training, you just give these tools to a salesperson or a business analyst or a financial analyst and say, hey, great, we've got this new system, and it only looks in the, you know, hundred documents we've given it, and it's great.

[01:11:24] Paul Roetzer: And you don't tell them it can just make stuff up and that it might actually be wrong. Ninety-eight percent of the time it's right, but the 2 percent of the time it's not is a massive problem for us. We're going to run into all kinds of issues with this going wrong. Now, retrieval augmented generation certainly seems to help, and it does improve accuracy and precision, but it doesn't necessarily mean we can just turn it over to people and have them assume it's going to be right, because for the foreseeable future, it's not.
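For readers who haven't built one, here is a minimal sketch of the kind of "talk to your documents" RAG pipeline Mollick and Paul are describing. The embed and generate helpers are hypothetical placeholders for whatever embedding model and LLM a team might use, not any vendor's actual API.

```python
from typing import Callable, List


def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def answer(question: str,
           chunks: List[str],
           embed: Callable[[str], List[float]],
           generate: Callable[[str], str],
           top_k: int = 3) -> str:
    """Retrieve the most relevant document chunks, then ask the LLM."""
    q_vec = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q_vec), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    prompt = ("Answer ONLY from the context below. If the answer is not "
              "in the context, say you cannot find it.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    # Mollick's warning in code terms: retrieval can miss the right chunk
    # (the Type 1 error users expect from search), and even with the
    # instruction above, generation can produce fluent details that are not
    # in the context (the Type 2 error they don't expect). The output still
    # needs human verification against the primary sources.
    return generate(prompt)
```

Even a well-built version of this pipeline has the two failure modes noted in the comments, which is why Paul stresses human review of anything precision-critical.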

[01:11:54] Apple’s AI Features Will Roll Out Later Than Expected

[01:11:54] Mike Kaput: Alright, so our final rapid-fire topic this week. [01:12:00] Apple has announced that its much-hyped Apple Intelligence generative AI features are going to roll out a little later than expected.

[01:12:10] Mike Kaput: Those features are now going to miss the launch of the new iPhone in September and will instead begin rolling out as part of software updates coming in October, according to Bloomberg.

[01:12:21] Mike Kaput: However, Apple is planning to make some of these features available to developers for early testing, possibly as soon as

[01:12:30] Mike Kaput: this week. So, Apple Intelligence is

[01:12:33] Mike Kaput: going to be a huge deal. We covered it when it was announced at WWDC. Is this delay at all worrisome to you, or indicative of anything more than them dotting the i's and crossing the t's on these products?

[01:12:48] Paul Roetzer: It probably fits in with the whole precision thing we just talked about. It doesn't worry me. It doesn't surprise me at all. There's a lot of testing that has to go into

[01:12:57] Paul Roetzer: this. So I would think by the [01:13:00] September unveiling, you know, they're going to have some insane previews of what this technology is going to do.

[01:13:06] Paul Roetzer: And they're for sure working through some stuff. I wouldn't be shocked if it gets delayed further than October, honestly. But my feeling is, that's okay. We're talking about pushing AI out to billions of people through these devices. Let's get it right. That's going to be a lot of consumers' first

[01:13:22] Paul Roetzer: known interaction with AI, where it becomes part of their daily workflows. So I want them to get it right, for sure. I personally will, you know, get my order in for a new iPhone and iPad the second I can, whenever they announce this stuff. So I'm looking forward to it.

[01:13:40] Mike Kaput: Alright,

[01:13:40] Mike Kaput: a couple quick housekeeping notes before we wrap up this week. As always, you can find all the stories we talked about in this podcast, deeper dives on those, as well as some topics we didn't get to cover, in our weekly newsletter.

[01:13:53] Mike Kaput: If you go to MarketingAIInstitute.com

[01:13:56] Mike Kaput: slash newsletter, you can sign up for This Week in AI, [01:14:00] which is a comprehensive brief on all the news you need to know each and every week. It's a great way to make sure you don't fall behind in keeping up with AI. Last but not least, if you have not left us a review on your podcast platform of choice, we would absolutely appreciate you doing so.

[01:14:17] Mike Kaput: It helps us improve the show, and it also helps us get in front of even more listeners, which is really, really helpful for spreading the message of AI literacy for all. So if you haven't given us a review yet, please, please, please take a minute to do so and let us know how we're doing.

[01:14:35] Mike Kaput: Paul, packed week. Thanks again for unpacking it all for us.

[01:14:39] Paul Roetzer: I just realized the next episode is going to be in August.

[01:14:42] Mike Kaput: Ha

[01:14:43] Paul Roetzer: Going fast. All right. Well, we'll see you all next week, in early August.

[01:14:49] Paul Roetzer: Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey and join more than 60,000 professionals and [01:15:00] business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community. Until next time, stay curious and explore AI.


