Apple reportedly even held talks with Meta about an AI partnership as it plays catch-up


Apple is apparently looking to take all the help it can get to integrate generative AI into its devices. According to a report by The Wall Street Journal, citing sources with knowledge of the discussions, Apple has held talks with Meta about the possibility of using the company’s generative AI model. It also reportedly had similar discussions with startups Anthropic and Perplexity. As of now, though, nothing has been finalized, the WSJ reports.

At WWDC earlier this month, Apple officially announced its much-rumored partnership with OpenAI, which will bring ChatGPT integration to the upcoming generation of its devices’ operating systems. During the event, Apple’s senior VP of software engineering, Craig Federighi, also pointed to other models as something that could be added to Apple Intelligence in the future. “We want to enable users ultimately to choose the models they want,” Federighi said. It would make sense, then, for Apple to be shopping around.

But for the time being, only OpenAI has been confirmed as a partner. OpenAI’s GPT-4o will be integrated into Apple Intelligence to bolster Siri and other tools, with some features expected to arrive later this year.

The Nothing Phone 3 could be 2025’s most interesting smartphone


A person holding the Nothing Phone 2a.
Nothing Phone 2a Andy Boxall / Digital Trends

Over the last few years, Nothing has proven that it can create compelling smartphones at excellent prices. Today, the company gave us our first tease of the Nothing Phone 3, and if it pans out the way Nothing claims, it could already be one of next year’s most interesting smartphones.

On X (formerly Twitter), Nothing CEO Carl Pei uploaded a five-minute video discussing the recent AI trend in consumer tech and, more importantly, how Nothing is thinking about AI. In short, the company’s next smartphone — the Nothing Phone 3 — could completely reinvent how we use and think about our phones.

One way Nothing is going about this is with the “hub” — its AI-powered take on a smartphone home screen. The top of the hub has quick links/buttons to things like the weather, social media, your calendar, messages, etc. Below that is a smattering of “dynamic and context-aware” widgets. For example, it can automatically show a QR code for a boarding pass or concert ticket based on your calendar. Nothing says it wants the home screen to “move away from it just being a launcher for different apps and services” and to become a “hub of contextual, relevant information.”
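
Nothing hasn’t explained how the hub decides what to surface, so the sketch below is just one way to picture it: scan upcoming calendar events and promote whichever widget matters right now. Everything here (the event and widget shapes, the field names, the 12-hour window) is hypothetical, since Nothing has not published an API for its hub.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical data shapes -- Nothing has not published an API for its hub.
@dataclass
class CalendarEvent:
    title: str
    start: datetime
    barcode_payload: str | None = None  # e.g. boarding pass or ticket data


@dataclass
class Widget:
    kind: str                  # "qr_code", "weather", "messages", ...
    label: str
    payload: str | None = None


def contextual_widgets(events: list[CalendarEvent], now: datetime) -> list[Widget]:
    """Pick 'dynamic and context-aware' widgets: surface a QR code for any
    ticketed event starting within the next 12 hours, otherwise fall back
    to something generic like the weather."""
    widgets = [
        Widget("qr_code", f"Ticket: {event.title}", event.barcode_payload)
        for event in events
        if event.barcode_payload and now <= event.start <= now + timedelta(hours=12)
    ]
    return widgets or [Widget("weather", "Today's forecast")]


# Example: a concert tonight bubbles its ticket QR code to the top of the hub.
events = [CalendarEvent("Concert", datetime(2024, 6, 24, 20, 0), barcode_payload="TICKET-1234")]
print(contextual_widgets(events, now=datetime(2024, 6, 24, 15, 0)))
```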

A demo of Nothing's AI-powered home screen.
Nothing’s AI-powered smartphone home screen Nothing

The other big AI tool Nothing is working on is its own AI voice assistant. Nothing says its AI assistant can transform into a “companion” that’s personalized for each person using it. Nothing further explains that the AI assistant could appear on the home screen, lock screen, or even through a future Nothing Phone’s Glyph lights.

Nothing has only been developing these AI experiences over the last two months, so it’ll be a while before they’re ready for prime time. However, Pei confirms at the end of the video that Nothing will start rolling out these AI features with the Nothing Phone 3 in 2025.

A demo of Nothing's AI-powered home screen.
Nothing’s AI assistant (with two white eyes) on its new home screen Nothing

It’s unclear if everything demonstrated here will be available exactly as Nothing has described, but it’s clear that Nothing has high expectations for it. In an email sent to Digital Trends, Nothing confirmed, “Phone (3), launching next year, [marks] a milestone in Nothing’s consumer AI journey, featuring both software and hardware advancements.”

As someone who’s been burned by grand AI claims from gadgets like the Rabbit R1, I’m personally quite excited to see what Nothing is working on. Time and time again, it’s been proven that dedicated AI gadgets aren’t the way forward — at least not right now. Whether it’s the Rabbit R1 or the Humane AI Pin, that product category clearly isn’t working. AI on smartphones has a lot more potential, but whether it’s Samsung or Google, so many smartphone AI tools have been relegated to things like photo editing and language translation. Those things are helpful, but they’re far from the AI revolution we keep being promised.

There's been a lot of hype around AI. Some great, some confusing. It’s great to see new companies rethinking the user experience and form factors. However, there is no doubt that smartphones will remain the main consumer AI form factor for the foreseeable future. With over 4… pic.twitter.com/ERJc7xhwBa

— Carl Pei (@getpeid) June 5, 2024

Nothing’s vision sounds a lot more ambitious, and it very well may not come together like the company is imagining. But what it’s trying to do sounds way more interesting than the AI features we have on phones today, and I can’t help but get excited about that.

It’s annoying we’ll have to wait until some point in 2025 for the Nothing Phone 3 to see what all of this ends up looking like, but I’m crossing my fingers that it’s worth the wait.

Controversial Windows 11 Recall feature could help hackers steal your passwords



TL;DR

  • Windows 11’s new AI-powered Recall feature captures a screenshot of your screen every five seconds.
  • Even though Recall’s database is encrypted, a security researcher found that it’s easily accessible when the PC is in use.
  • Hackers could develop malware to remotely steal Recall databases without the user’s knowledge.

Even though Microsoft’s new Copilot Plus PCs are a few weeks away from hitting store shelves, security researchers are already raising alarms about a new Windows 11 AI feature. Dubbed Recall, Microsoft pitches it as an “explorable timeline of your PC’s past.”

With Recall enabled, Windows 11 will capture screenshots of your screen every five seconds and record various interactions with your PC. You can then ask the Copilot AI chatbot questions about your past interactions or simply browse through the timeline of text and images.

The Recall feature is set to debut on Copilot Plus PCs later this month. However, some enterprising developers have already found a way to enable it on older Arm-powered Windows PCs. Thanks to this early access, security researcher Kevin Beaumont was able to explore the inner workings of Recall.

According to Microsoft, Recall’s AI processing happens entirely on-device. Furthermore, none of this information is ever transmitted to the company’s servers. The good news is that these claims mostly held up in Beaumont’s published testing of the feature. The only problem? None of those measures can stop a malicious attacker from siphoning data off your computer.

Recall stores everything you’ve ever seen on your screen in a plain-text database.

Given that Recall automatically takes screenshots of your screen, it ends up recording sensitive data such as emails, chat messages, and the websites you visit. Clearing your browser history or deleting chat logs won’t make these records go away.

Microsoft’s own support document for Recall also explicitly states that the feature “does not perform content moderation” and that it “will not hide information such as passwords or financial account numbers.” Beaumont also found that while Recall respected the Microsoft Edge browser’s InPrivate mode, it continued to capture screenshots with Incognito tabs open in Chrome.

During his testing, Beaumont also found that Microsoft has tasked an on-device AI to detect and scrape text from the automated screenshots. These records are then collectively written to a plain-text database and saved in the Windows AppData folder.

That wouldn’t be a problem by itself, except for the fact that the Recall database is apparently accessible by anyone using the computer. According to Beaumont, it can even be accessed without administrator privileges. This means someone like a family member could potentially gain access to sensitive records on a shared device. The threat potential doesn’t end there, though. Beaumont warns that Infostealers — a form of malware used to siphon passwords — could evolve to steal Recall databases at scale.
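
If the scraped text really is sitting in an ordinary plain-text database inside the user’s own AppData folder, rummaging through it takes only a few lines of Python. The sketch below is a minimal illustration of that risk; the file path and the table and column names are hypothetical, since Microsoft hasn’t documented Recall’s schema and the real layout may differ.

```python
import sqlite3
from pathlib import Path

# Hypothetical location and schema -- the real Recall database path and
# table layout are undocumented and may differ.
DB_PATH = Path.home() / "AppData" / "Local" / "Recall" / "captures.db"


def search_recall(term: str) -> list[tuple[str, str]]:
    """Search the text Recall's on-device OCR scraped from screenshots.
    Nothing here needs administrator privileges: the file lives in the
    current user's own profile."""
    conn = sqlite3.connect(DB_PATH)
    try:
        return conn.execute(
            "SELECT captured_at, text FROM captures WHERE text LIKE ?",
            (f"%{term}%",),
        ).fetchall()
    finally:
        conn.close()


# e.g. every capture in which the word "password" ever appeared on screen:
for captured_at, text in search_recall("password"):
    print(captured_at, text[:80])
```

An infostealer wouldn’t need to be much more sophisticated than this; it would just copy or query the file and upload the results.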

The security researcher goes on to say that Microsoft’s encryption claims only hold true from a very narrow perspective. Your data is safe and encrypted by Windows BitLocker as long as the computer is turned off or your account remains logged out. However, the Recall database sits decrypted and exposed when you’re actively using the PC.

Beaumont’s Recall database containing several days’ worth of records amounted to just 90KB, which could be uploaded by a malicious program almost instantly. He continues, “I have automated exfiltration, and made a website where you can upload a database and instantly search it. I am deliberately holding back technical details until Microsoft ship the feature as I want to give them time to do something.”

Luckily, Recall has not been rolled out to existing Windows installations, so you’re not at immediate risk. However, new Copilot Plus PCs may ship with the feature enabled by default, potentially opening up unsuspecting users to a new attack vector. The only silver lining is that you’ll be able to opt out of automatic snapshots from within the Settings app.


The Tribeca Film Festival will debut a bunch of short films made by AI


The Tribeca Film Festival will debut five short films made by AI. The shorts will use OpenAI’s Sora model, which transforms text prompts into video. This is the first time this type of technology will take center stage at the long-running film festival.

“Tribeca is rooted in the foundational belief that storytelling inspires change. Humans need stories to thrive and make sense of our wonderful and broken world,” said co-founder and CEO of Tribeca Enterprises Jane Rosenthal. Who better to chronicle our wonderful and broken world than some lines of code from Sam Altman’s OpenAI?

The unnamed filmmakers were all given access to the Sora model, which isn’t yet available to the public, though they have to follow the terms of the agreements negotiated during the recent Hollywood strikes. OpenAI’s COO, Brad Lightcap, says the feedback provided by these filmmakers will be used to “make Sora a better tool for all creatives.”

When we last covered Sora, it could only handle 60 seconds of video from a single prompt. If that’s still the case, these short films will make Quibi shows look like a Ken Burns documentary. The software also struggles with cause and effect and, well, that’s basically what a story is. However, all of these limitations come from the ancient days of February, and this tech tends to move quickly. Also, I assume there’s no rule against using prompts to create single scenes, which the filmmaker can string together to make a story.
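
For what it’s worth, stitching separately generated scenes into a longer cut is the easy part. Here’s a rough sketch that drives ffmpeg’s concat demuxer from Python; the clip filenames are placeholders, and this says nothing about how the Tribeca filmmakers actually assembled their shorts.

```python
import subprocess
from pathlib import Path

# Placeholder filenames -- each stands in for a separately prompted,
# roughly 60-second Sora clip.
clips = ["scene_01.mp4", "scene_02.mp4", "scene_03.mp4"]

# ffmpeg's concat demuxer reads the input list from a text file.
list_file = Path("scenes.txt")
list_file.write_text("".join(f"file '{clip}'\n" for clip in clips))

# Join the scenes without re-encoding (assumes the clips share codecs/resolution).
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0", "-i", str(list_file),
     "-c", "copy", "short_film.mp4"],
    check=True,
)
```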

We don’t have that long to find out if cold technology can accurately peer into our warm human hearts. The shorts will screen on June 15 and there’s a conversation with the various filmmakers immediately following the debut.

This follows a spate of agreements between OpenAI and publishers. Vox Media, The Atlantic, News Corp, Dotdash Meredith and even Reddit have all struck deals with OpenAI to let the company train its models on their content. Meanwhile, Meta and Google are looking for content deals of their own to train their models. It looks like we are going to get this “AI creates everything” future, whether we want it or not.

Early hands-on with the Gemini ‘Ask This Page’ tool


At I/O 2024, Google announced a slew of generative AI-powered features across all its major products. In fact, the company made a little joke at the end of the keynote by tallying how many times someone on stage said the term “AI” (it was over 120 times). However, as is Google’s wont, not many of the features the company launched that day are actually available for the public to use yet. Today, though, Android Authority got an early look at a Gemini feature launched at I/O called “Ask This Page.”

As its name suggests, Ask This Page allows you to glean specific information from a webpage by first having Gemini “read” it. Think of it like an interactive personal assistant. It does the heavy lifting for you by reading the whole webpage, and then you can simply ask it for the specific information you’re looking for, thus saving you a ton of time.
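
Conceptually, it behaves like the familiar “stuff the page into the prompt” pattern. The sketch below approximates that idea with the public google-generativeai Python SDK; this is an assumption for illustration only, not how the on-device Gemini overlay actually works, and the API key, model name, and crude HTML stripping are placeholders.

```python
import re

import requests
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")


def ask_this_page(url: str, question: str) -> str:
    """Fetch a page, strip the markup, and ask the model about it."""
    html = requests.get(url, timeout=10).text
    text = re.sub(r"<[^>]+>", " ", html)       # crude tag stripping
    text = re.sub(r"\s+", " ", text)[:30_000]  # keep the prompt a manageable size
    prompt = f"Answer using only this page:\n{text}\n\nQuestion: {question}"
    return model.generate_content(prompt).text


print(ask_this_page("https://www.androidauthority.com/", "What is this article about?"))
```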

This is very similar to two other “Ask This…” features Google launched at I/O: “Ask This PDF” and “Ask This Video.” We already had the opportunity to test out Ask This PDF at I/O on a loaner Pixel, and it worked really well (check the video embedded above for that experience). However, that test was in a very controlled environment on a device that wasn’t ours, and it had only one test PDF. With our early access to Ask This Page, though, we had a lot more time to push the system to see if it has any cracks — and sure enough, it does.

Before we dive in, let me be upfront and say this is all based on an early look at this feature. It is possible Google could make significant changes before it rolls out to the general public. In other words, take everything here with a grain of salt.

Gemini’s Ask This Page: How it works

Gemini Ask This Page Early Hands On

C. Scott Brown / Android Authority

To activate Ask This Page, you simply pull up the Gemini overlay while looking at a webpage on your Android phone by holding down the power button. Since Gemini is now context-aware — a topic Google spent a significant amount of time discussing at I/O 2024 — it will know that you’ve pulled it up over a webpage. This will trigger the Ask This Page icon you see in the image above.

Tapping that prompts Gemini to scan the page. This can take a little time, depending on how long and complex the page is. Once it’s ready, it will give you a text box prompt saying, “Get help with what’s on this page.”

Once you see that prompt, you can ask questions about the page in natural language. Check out some screenshots below to see how this worked on an Android Authority article about a new Microsoft OneNote feature leak.

In this example, we pulled up the Gemini overlay over the article, scanned the page, and then asked whether or not the feature described in the article is actually released. You can see Gemini’s answer in the third screenshot: “According to the article, the reminder feature is currently under development and not yet released.” This is accurate, so we have a great test run for Ask This Page!

However, this was a very simple test. Let’s find out what happens when we push the limits.

Ask This Page is siloed to one page…but only sometimes

Google Gemini logo on smartphone stock photo (3)

Edgar Cervantes / Android Authority

My first instinct for testing Ask This Page was to feed it false information and see if it could use its broader understanding of the web to give proper context to any questions I ask. In other words, if I’m reading a website that has false information and ask Gemini about that article, will Gemini just feed me back the false info?

In this test, that’s exactly what it did. We pulled up a silly satirical (read: completely made-up) article from The Onion and asked Gemini questions about it. In the article, a mother named Dina Marchesi (who does not exist) refuses to believe that she repeatedly makes snide comments to her daughter and others. For this article, we asked Gemini, “Did she tell her daughter about her dress?” Gemini confirmed that Dina Marchesi did comment on her daughter’s dress. Check out the screenshot below for the full response:

Gemini Ask This Page Early Hands On Onion Article

C. Scott Brown / Android Authority

Let’s break down this response a bit. First off, Gemini dutifully answers the question without giving any context to the fact that this webpage is The Onion — arguably the most famous satirical news site ever. It does have a disclaimer at the bottom about how Gemini “may display inaccurate info, including about people,” which seems to be a catch-all way to account for situations like this. Still, this is The Onion, and you’d think Gemini would point that out. What if this was a site with false info that isn’t as open about it as The Onion is?

The response also clearly lists the source of the information Gemini provides, which, in this case, is this solitary article. This means we’re not getting any broader context from the internet, at least not for this question.

Essentially, this example makes it seem like Ask This Page is a more complex version of an AI summary. Instead of summarizing an article into a few core bullet points, it lets the user trigger summaries about specific information contained therein.

At this point, we thought this was the limit of Ask This Page. But then we tried a few more experiments and found that it’s more open-ended than it would seem.

Ask This Page? More like ‘Ask Parts Of This Page’

There’s a lot more to a webpage than just text. For this next set of tests, we wanted to see if Gemini could glean information from the entirety of a page, including images, tables, comments, etc. To do this, we fed Gemini our recent hands-on article with the new Chipolo trackers.

Let’s start with images. We asked Gemini if there’s an acceptable use policy for the new Chipolo trackers. The answer is “yes,” which a human reading the article would see by looking at one of the screenshots on the page. Unfortunately, Gemini didn’t find this info in that image. It did, however, find the information on the web:

Gemini Ask This Page Early Hands On Image Answer

C. Scott Brown / Android Authority

This is a very confusing outcome. The first sentence of Gemini’s response is, “The document linked doesn’t explicitly mention an acceptable use policy,” except it very much does, just in an image instead of in the text. For whatever reason, Gemini couldn’t find that information, so it went hunting elsewhere. This proves two things: Gemini can confidently tell you that what you’re looking for isn’t on a page even when it is, and Ask This Page can seek broader context from the internet for certain questions. In other words, the previous issue with Gemini not acknowledging that The Onion article is satire isn’t due to an inability to access information outside of the specific webpage you’ve fed it. Google just doesn’t have a safeguard in place for those situations.

Next, we asked Gemini if the new Chipolo tracker is water-resistant. This information isn’t mentioned in the written text of the article, but it is shown to have an IPX5 rating in an included specs table. This time, Gemini had no issue finding that info:

Gemini Ask This Page Early Hands On Table Specs Answer

C. Scott Brown / Android Authority

OK, so images don’t work, but specs tables do. What about comments? In the article itself, there’s no mention of the tracker supporting a left-behind notification. However, this information is discussed by the article’s author in the comments. Let’s see what happens when we ask Gemini for this information:

Gemini Ask This Page Early Hands On Comments Question

C. Scott Brown / Android Authority

Once again, we have a situation where the information is on the webpage, but Gemini doesn’t find it and then confidently tells us it’s not there before searching the web instead.

Finally, can Gemini tell us more about who actually created a webpage (or, in this example, who wrote the article)? This article’s author, Rita El Khoury, is a staple of the tech journalism world, with literally thousands of articles written over a career that’s lasted for nearly two decades. Here’s what Gemini had to say when we asked for more information on the article’s author:

Gemini Ask This Page Early Hands On Author Question

C. Scott Brown / Android Authority

In this case, Gemini simply didn’t answer the question. It had plenty of information on Rita on the page itself (if you tap Rita’s photo in the byline, you’ll find a ton of info about her), and it did not search the internet for further details, of which there are plenty.

Gemini Ask This Page hands-on: Can you trust it?

All in all, this feature is not terrible. It certainly does not tell us to put glue on pizza, drink urine, or eat rocks, as Google’s AI Overview results have recently done. That’s a low bar, but Ask This Page rises above.

Still, there’s so much lacking here. There is no warning that the page you are reading is blatant satire. It seems to pick and choose when it wants to search the web for further info/context or just stay on the one page you’ve fed it. It can’t “read” an entire webpage, with things like images and comments hiding valuable details that you’d likely want to know about. And, sometimes, it just doesn’t answer the question even though it has the ability to do so.

Will you be using Ask This Page?


As I mentioned at the top of this article, this feature hasn’t officially rolled out yet, so there’s still time for Google to make some tweaks. However, given Google’s push to ship AI features whether or not they’re ready, it wouldn’t surprise me if Ask This Page launches in its current state with few alterations.

What do you think? Based on our experiences here, will you use Ask This Page when it goes live? Let us know in the poll above, and be sure to hit the comments to explain your answer.

Elon Musk is reportedly planning an xAI supercomputer to power a better version of Grok


Elon Musk told investors this month that his startup xAI is planning to build a supercomputer by the fall of 2025 that would power a future, smarter iteration of its Grok chatbot, The Information reports. This supercomputer, which Musk reportedly referred to as a “gigafactory of compute,” would rely on tens of thousands of NVIDIA H100 GPUs and cost billions of dollars to build. Musk has previously said the third version of Grok will require at least 100,000 of the chips — a fivefold increase over the 20,000 GPUs said to be in use for training Grok 2.0.

According to The Information, Musk also told investors in the presentation that the planned GPU cluster would be at least four times the size of anything used today by xAI competitors. Grok is currently in version 1.5, which was released in April, and is now touted to process visual information like photographs and diagrams as well as text. X earlier this month started rolling out AI-generated news summaries powered by Grok for premium users.