OpenAI pledges to give U.S. AI Safety Institute early access to its next model


OpenAI CEO Sam Altman says that OpenAI is working with the U.S. AI Safety Institute, a federal government body that aims to assess and address risks in AI platforms, on an agreement to provide early access to its next major generative AI model for safety testing.

The announcement, which Altman made in a post on X late Thursday evening, was light on details. But it — along with a similar deal with the U.K.’s AI safety body struck in June — appears to be intended to counter the narrative that OpenAI has deprioritized work on AI safety in the pursuit of more capable, powerful generative AI technologies.

In May, OpenAI effectively disbanded a unit working on the problem of developing controls to prevent “superintelligent” AI systems from going rogue. Reporting — including ours — suggested that OpenAI cast aside the team’s safety research in favor of launching new products, ultimately leading to the resignation of the team’s two co-leads, Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who started his own safety-focused AI company, Safe Superintelligence Inc.).

In response to a growing chorus of critics, OpenAI said it would eliminate the restrictive non-disparagement clauses that implicitly discouraged whistleblowing, create a safety commission, and dedicate 20% of its compute to safety research. (The disbanded safety team had been promised 20% of OpenAI’s compute for its work, but ultimately never received it.) Altman recommitted to the 20% pledge and reaffirmed that OpenAI had voided the non-disparagement terms for new and existing staff in May.

The moves did little to placate some observers, however — particularly after OpenAI staffed the safety commission entirely with company insiders including Altman and, more recently, reassigned a top AI safety executive to another org.

Five senators, including Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI’s policies in a recent letter addressed to Altman. OpenAI chief strategy officer Jason Kwon responded to the letter today, writing that OpenAI “[is] dedicated to implementing rigorous safety protocols at every stage of our process.”

The timing of OpenAI’s agreement with the U.S. AI Safety Institute seems a tad suspect in light of the company’s endorsement earlier this week of the Future of Innovation Act, a proposed Senate bill that would authorize the Safety Institute as an executive body that sets standards and guidelines for AI models. The moves together could be perceived as an attempt at regulatory capture — or at the very least an exertion of influence from OpenAI over AI policymaking at the federal level.

Not for nothing, Altman sits on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which provides recommendations for the “safe and secure development and deployment of AI” throughout the country’s critical infrastructure. And OpenAI has dramatically increased its spending on federal lobbying this year: $800,000 in the first six months of 2024 versus $260,000 in all of 2023.

The U.S. AI Safety Institute, housed within the Commerce Department’s National Institute of Standards and Technology, consults with a consortium of companies that includes Anthropic, as well as big tech firms like Google, Microsoft, Meta, Apple, Amazon and Nvidia. The industry group is tasked with working on actions outlined in President Joe Biden’s October AI executive order, including developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.

Microsoft’s AI-powered Canva-like Designer app lands on iOS and Android


Microsoft announced on Wednesday that its AI-powered Designer app is officially coming out of preview and is now available to all users on iOS and Android. The Canva-like app lets people generate images and designs with text prompts to create things like stickers, greeting cards, invitations, collages and more.

Designer is now accessible in more than 80 languages on the web, as a free mobile app, and as an app on Windows.

The app features “prompt templates” that are designed to help jumpstart the creative process. The templates include styles and descriptions that you can experiment with and customize, and you can share templates with others in order to build on each other’s ideas.

In addition to stickers, you can create emojis, clip art, wallpapers, monograms, avatars and more with text prompts.

You can also use Designer to edit and restyle images with AI. For instance, you can upload a selfie and then choose from a set of styles and write in any extra details you want to see to transform your photo.


Soon, Designer will include a “replace background” feature that will allow you to use text prompts to swap out an image’s background.

With the launch of the standalone Designer app, Microsoft shared that it’s bringing the service to apps like Word and PowerPoint through Copilot. People who have a Copilot Pro subscription can create images and designs directly in their workflow. Soon, users will get the option to create a banner for their Word document based on its content.

As part of Wednesday’s announcement, Microsoft revealed that Microsoft Photos on Windows 11 is getting a deeper integration with Designer, letting users edit photos with AI without leaving the Photos app. You can now do things like erase objects, remove backgrounds and auto-crop images directly within the app.

In spite of hype, many companies are moving cautiously when it comes to generative AI


Vendors would have you believe that we are in the midst of an AI revolution, one that is changing the very nature of how we work. But the truth, according to several recent studies, suggests that it’s much more nuanced than that.

Companies are extremely interested in generative AI as vendors push its potential benefits, but moving from a proof of concept to a working product is proving much more challenging: They’re running up against the technical complexity of implementation, whether that’s due to technical debt from an older technology stack or simply a lack of people with the appropriate skills.

In fact, a recent study by Gartner found that the top two barriers to implementing AI solutions were difficulty estimating and demonstrating value (cited by 49% of respondents) and a lack of talent (42%). These two elements could turn out to be key obstacles for companies.

Consider that a study by LucidWorks, an enterprise search technology company, found that just 1 in 4 of those surveyed reported successfully implementing a generative AI project.

Aamer Baig, senior partner at McKinsey and Company, speaking at the MIT Sloan CIO Symposium in May, said his company has also found in a recent survey that just 10% of companies are implementing generative AI projects at scale. He also reported that just 15% were seeing any positive impact on earnings. That suggests that the hype might be far ahead of the reality most companies are experiencing.

What’s the holdup?

Baig sees complexity as the primary factor slowing companies down: Even a simple project requires 20 to 30 technology elements, with the right LLM being just the starting point. Companies also need things like proper data and security controls, and employees may have to learn new skills like prompt engineering and how to implement IP controls, among other things.

Ancient tech stacks can also hold companies back, he says. “In our survey, one of the top obstacles that was cited to achieving generative AI at scale was actually too many technology platforms,” Baig said. “It wasn’t the use case, it wasn’t data availability, it wasn’t path to value; it was actually tech platforms.”

Mike Mason, chief AI officer at consulting firm Thoughtworks, says his firm spends a lot of time getting companies ready for AI — and their current technology setup is a big part of that. “So the question is, how much technical debt do you have, how much of a deficit? And the answer is always going to be: It depends on the organization, but I think organizations are increasingly feeling the pain of this,” Mason told TechCrunch.

It starts with good data

A big part of that readiness deficit is data: 39% of respondents to the Gartner survey cited a lack of data as a top barrier to successful AI implementation. “Data is a huge and daunting challenge for many, many organizations,” Baig said. He recommends focusing on a limited set of data with an eye toward reuse.

“A simple lesson we’ve learned is to actually focus on data that helps you with multiple use cases, and that usually ends up being three or four domains in most companies that you can actually get started on and apply it to your high-priority business challenges with business values and deliver something that actually gets to production and scale,” he said.

Mason says a big part of being able to execute AI successfully is related to data readiness, but that’s only part of it. “Organizations quickly realize that in most cases they need to do some AI readiness work, some platform building, data cleansing, all of that kind of stuff,” he said. “But you don’t have to do an all-or-nothing approach, you don’t have to spend two years before you can get any value.”

When it comes to data, companies also have to respect where the data comes from — and whether they have permission to use it. Akira Bell, CIO at Mathematica, a consultancy that works with companies and governments to collect and analyze data related to various research initiatives, says her company has to move carefully when it comes to putting that data to work in generative AI.

“As we look at generative AI, certainly there are going to be possibilities for us, and looking across the ecosystem of data that we use, but we have to do that cautiously,” Bell told TechCrunch. That’s partly because the company holds a lot of private data governed by strict data use agreements, and partly because it sometimes deals with vulnerable populations and has to be cognizant of that.

“I came to a company that really takes being a trusted data steward seriously, and in my role as a CIO, I have to be very grounded in that, both from a cybersecurity perspective, but also from how we deal with our clients and their data, so I know how important governance is,” she said.

She says right now it’s hard not to feel excited about the possibilities that generative AI brings to the table; the technology could provide significantly better ways for her organization and its customers to understand the data they are collecting. But it’s also her job to move cautiously without getting in the way of real progress, a challenging balancing act.

Finding the value

Much like when the cloud was emerging a decade and a half ago, CIOs are naturally cautious. They see the potential that generative AI brings, but they also need to take care of basics like governance and security. They also need to see real ROI, which is sometimes hard to measure with this technology.

In a January TechCrunch article on AI pricing models, Juniper CIO Sharon Mandell said that it was proving challenging to measure return on generative AI investment.

“In 2024, we’re going to be testing the genAI hype, because if those tools can produce the types of benefits that they say, then the ROI on those is high and may help us eliminate other things,” she said. So she and other CIOs are running pilots, moving cautiously and trying to find ways to measure whether there is truly a productivity increase to justify the increased cost.

Baig says that it’s important to have a centralized approach to AI across the company and avoid what he calls “too many skunkworks initiatives,” where small groups are working independently on a number of projects.

“You need the scaffolding from the company to actually make sure that the product and platform teams are organized and focused and working at pace. And, of course, it needs the visibility of top management,” he said.

None of that is a guarantee that an AI initiative is going to be successful or that companies will find all the answers right away. Both Mason and Baig said it’s important for teams to avoid trying to do too much, and both stress reusing what works. “Reuse directly translates to delivery speed, keeping your businesses happy and delivering impact,” Baig said.

However companies execute generative AI projects, they shouldn’t become paralyzed by the challenges related to governance, security and technology. But neither should they be blinded by the hype: There are going to be obstacles aplenty for just about every organization.

The best approach could be to get something going that works and shows value, and to build from there. And remember that, in spite of the hype, many other companies are struggling, too.