Justice Department to Criminally Charge Boeing: Reports


Two Boeing logos on display

Photo: Anadolu (Getty Images)

The U.S. Justice Department intends to criminally charge Boeing for breaching a settlement connected to two deadly 737 Max jetliner crashes, according to reports from Bloomberg and Reuters. The federal government is reportedly seeking a guilty plea from Boeing, which may include a $243.6 million criminal fine and force the planemaker to bring on an independent compliance monitor.

The Boeing-DOJ settlement followed a 2018 crash in Indonesia, which killed all 189 people on board, and a 2019 crash in Ethiopia, which killed all 157 people on board. Despite opposition from some lawmakers and relatives of those killed in the incidents, Boeing secured the $2.5 billion settlement in 2021, which temporarily protected it from criminal prosecution. The agreement required the planemaker to report evidence and allegations of fraud and “strengthen its compliance program,” the Justice Department said at the time.

Then a panel blew off an Alaska Airlines-operated Boeing plane in January, uncloaking continuing safety and compliance issues at the company. Four months later, the federal government said in a court filing that Boeing had breached its 2021 agreement by failing to “design, implement, and enforce a compliance and ethics program to prevent and detect violations of the U.S. fraud laws throughout its operations.”

The DOJ has now decided to bring criminal charges against Boeing and wants the planemaker to accept a plea deal, according to several reports. Such a deal would include about a quarter of a billion dollars in additional fines, per Bloomberg; it could also force Boeing to bring in an independent monitor to make sure the firm follows anti-fraud laws, per AP News.

The DOJ reportedly told the 737 Max crash victims’ families and lawyers about the plea deal on Sunday, and said it would give the planemaker a week to decide whether to accept the offer or argue its case in court. Boeing did not immediately respond to a request for comment on the reports.

AI Chatbots Are Running for Office Now


Victor Miller [Archival audio clip]: She’s asking what policies are most important to you, VIC?

VIC [Archival audio clip]: The most important policies to me focus on transparency, economic development, and innovation.

Leah Feiger: That is so bizarre. I got to ask, could VIC be exposed to sources of information other than these public records? Say, an email from a conspiracy theorist who wants VIC to do something not so good with elections that would not represent its constituents.

Vittoria Elliott: Great question. I asked Miller, “Hey, you’ve built this bot on top of ChatGPT. We know that sometimes there’s problems or biases in the data that go into training these models. Are you concerned that VIC could imbibe some of those biases or there could be problems?” He said, “No, I trust OpenAI. I believe in their product.” You’re right. He decided, because of what’s important to him as someone who cares a lot about Cheyenne’s governance, to feed this bot hundreds, and hundreds, and hundreds of pages of what are called supporting documents. The kind of documents that people will submit in a city council meeting. Whether that’s a complaint, or an email, or a zoning issue, or whatever. He fed that to VIC. But you’re right, these chatbots can be trained on other material. He said that he actually asked VIC, “What if someone tries to spam you? What if someone tries to trick you? Send you emails and stuff.” VIC apparently responded to him saying, “I’m pretty confident I could differentiate what’s an actual constituent concern and what’s spam, or what’s not real.”

Leah Feiger: I guess I would just say to that, one-third of Americans right now don’t believe that President Joe Biden legitimately won the 2020 election, but I’m so glad this robot is very, very confident in its ability to decipher dis- and misinformation here.

Vittoria Elliott: Totally.

Leah Feiger: That was VIC in Wyoming. Tell us a little more about AI Steve in the UK. How is it different from VIC?

Vittoria Elliott: For one thing, AI Steve is actually the candidate.

Leah Feiger: What do you mean actually the candidate?

Vittoria Elliott: He’s on the ballot.

Leah Feiger: Oh, OK. There’s no meat puppet?

Vittoria Elliott: There is a meat puppet, and that’s Steve Endicott. He’s a Brighton-based businessman. He describes himself as being the person who will attend Parliament, do the human things.

Leah Feiger: Sure.

Vittoria Elliott: But people, when they go to vote next month in the UK, they actually have the ability not to vote for Steve Endicott, but to vote for AI Steve.

Leah Feiger: That’s incredible. Oh my God. How does that work?

Vittoria Elliott: The way Steve Endicott and Jeremy Smith, who is the developer of AI Steve, described this to me is as a big catchment for community feedback. On the backend, what happens is people can talk to or call into AI Steve, which can apparently hold 10,000 simultaneous conversations at any given point. They can say, “I want to know when trash collection is going to be different.” Or, “I’m upset about fiscal policy,” or whatever. Those conversations get transcribed by the AI and distilled into the policy positions that constituents care about. But to make sure that people aren’t spamming it basically and trying to trick it, what they’re going to do is they’re going to have what they call validators. Brighton is about an hour outside of London, and a lot of people commute between the two cities. They’ve said, “What we want to do is we want to have people who are on their commute, we’re going to ask them to sign up to these emails to be validators.” They’ll go through and say, “These are the policies that people say are important to AI Steve. Do you, regular person who’s actually commuting, find that to actually be valuable to you?” Anything that gets more than 50% interest, or approval, or whatever, that’s the stuff that real Steve, who will be in Parliament, will be voting on. They have this second level of checks to make sure that whatever people are saying as feedback to the AI is checked by real humans. They’re trying to make it a little harder for people to game the system.

Social media companies have too much political power, 78% of Americans say in Pew survey


Finally, something that both sides of the aisle can agree on: social media companies are too powerful.

According to a survey by the Pew Research Center, 78% of American adults say social media companies have too much influence on politics — broken down by party, that’s 84% of surveyed Republicans and 74% of Democrats. Overall, this viewpoint has grown six percentage points since the last presidential election year.

Americans’ feelings about social media reflect that of their legislators. Some of the only political pursuits that have recently garnered significant bipartisan support have been efforts to hold social media platforms accountable. Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) have been working across the aisle on their Kids Online Safety Act, a bill that would put a duty of care on social media platforms to keep children safe; however, some privacy advocates have criticized the bill’s potential to make adults more vulnerable to government surveillance.

Meanwhile, Senators Lindsey Graham (R-SC) and Elizabeth Warren (D-MA) have also forged an unlikely partnership to propose a bill that would create a commission to oversee big tech platforms.

“The only thing worse than me doing a bill with Elizabeth Warren is her doing a bill with me,” Graham said at a Senate hearing in January.

It’s obvious why Americans think tech companies have too much political power — since the 2020 survey, social platforms were used to coordinate an attack on the Capitol, and then as a result, a sitting president got banned from those platforms for egging on those attacks. Meanwhile, the government is so concerned about the influence of Chinese-owned TikTok that President Biden just signed a bill that could ban the app for good.

But the views of conservative and liberal Americans diverge on the topic of tech companies’ bias. While 71% of Republicans surveyed said that big tech favors liberal perspectives over conservative ones, 50% of Democrats said that tech companies support each set of views equally. Only 15% of adults overall said that tech companies support conservatives over liberals.

These survey results make sense given the rise of explicitly conservative social platforms, like Rumble, Parler and Trump’s own Truth Social app.

During Biden’s presidency, government agencies like the FTC and DOJ have taken sharper aim at tech companies. Some of the country’s biggest companies, like Amazon, Apple and Meta, have faced major lawsuits alleging monopolistic behaviors. But according to Pew’s survey, only 16% of U.S. adults think that tech companies should be regulated less than they are now. This percentage has grown since 2021, when Pew found that value to be 9%.

Liberals and conservatives may not agree on everything when it comes to tech policy, but the predominant perspective from this survey is clear: Americans are tired of the outsized influence of big tech.

Bluesky now allows heads of state to join the platform


Now that Bluesky has opened itself up to the public, the team’s decided it’s finally time to allow world leaders on board, too. A post from the official account on Friday notified users, “By the way… we lifted our ‘no heads of state’ policy.” The policy has been in place for the last year as Bluesky worked through all the early growing pains of being a budding social network.

Bluesky remained an invite-only platform from its launch in February 2023 until February of this year, when it finally ditched the waitlist. The company had said last May that it wasn’t ready for heads of state to join, and even asked users to give its support team notice “before you invite prominent figures.” It’s since grown to more than 5 million users, with a surge of signups in the day after it stopped requiring invite codes.