Tech Brew // Morning Brew // Update
Plus: Live, laugh, longevity.

Nothing says Valentine's Day like sitting across a candlelit table from your phone. On February 13, a New York bar ditched its regular clientele for one night to host something a little different: an AI dating cafe. Organized by an AI relationship app, the pop-up featured dim lighting, sultry decor, and one bistro-style chair at every table—with a smartphone stand for your AI companion, of course.

Attendees could meet a new AI partner or pick up with one they'd already been seeing—and at least one guest left the organizers with some specific feedback: "The biggest problems I have is that you cannot have X-Rated conversations. I want the X-Rated conversation, I want the intellectual stimulation too."

Also in today's newsletter:

  • Is the Pentagon going to war against Anthropic?
  • Bryan Johnson launches an even more expensive way to age like the tech elite.
  • How to start vibe coding.

—Whizy Kim and Saira Mueller

THE DOWNLOAD

Photo illustration of the White House breaking the Anthropic logo into pieces.

Morning Brew Design, Adobe Stock

TL;DR: Over the long weekend, the Pentagon reportedly threatened to cut off Anthropic after it refused to allow Claude to be used for mass surveillance of Americans or fully autonomous weaponry. Claude is the only AI model currently available on the military's classified networks. Defense Secretary Pete Hegseth is apparently close to branding Anthropic a “supply chain risk,” which would require contractors to ditch Claude. The fight is an early look at how the US government might strong-arm AI labs into abandoning their own ethical standards, and at whether those labs will resist.

What happened: It's one thing to beef with a rival AI company, and another to lock horns with a government department that controls the largest military budget on Earth. Anthropic is reportedly now in a fast-escalating standoff with the Pentagon over where national security ends and company-imposed ethical guardrails begin. Axios reported that Hegseth is close to cutting Anthropic off entirely, and even designating it a “supply chain risk,” a move usually reserved for foreign adversaries, not a domestic tech company.

In recent months, the Pentagon has pushed Anthropic, OpenAI, Google, and xAI—all of whose models it uses in some capacity—to loosen restrictions on their models so they can be used for “all lawful purposes,” including weapons development, intelligence collection, and battlefield operations, with essentially no company-imposed limits. But Anthropic has put its foot down on two red lines: no mass surveillance of Americans and no fully autonomous weaponry. After months of back-and-forth, the Pentagon is now threatening to sever all ties with the AI company. "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this," a senior official told Axios.

Per another Axios report, tensions rose after the military used Claude in its January operation to capture former Venezuelan President Nicolás Maduro. According to a senior administration official, an Anthropic exec asked Palantir about Claude’s use; the Pentagon read that as implied disapproval. Anthropic denies this, saying it hasn’t commented on specific operations.

The Pentagon partnership: Anthropic signed a contract worth up to $200 million with the Department of Defense last July, making Claude the first frontier AI model used on the Pentagon's classified networks. Claude Gov, the version of the AI model tailored to national security work, has received high praise internally, with an official telling Axios that other models "are just behind." But OpenAI, Google, and xAI have all reportedly shown more openness to the Pentagon's demands, and the department is confident all three will agree to its "all lawful purposes" language, a senior official told Axios.

Why it matters: A "supply chain risk" designation would require Pentagon contractors to certify that they don’t use Claude in their workflows. The penalty sends a strong message: Imposing your own ethical limits on military use could carry severe economic consequences.

This standoff could also be a defining test for the AI industry. Anthropic's CEO Dario Amodei has argued AI should support national defense "in all ways except those which would make us more like our autocratic adversaries." One Pentagon insider described Anthropic as the most "ideological" of the AI labs. Internally, the pressure is showing, too—last week, Anthropic's head of safeguards research resigned, warning that "the world is in peril."

What's next: While the Pentagon is threatening to hit Anthropic where it hurts, the AI company says it's having "productive conversations, in good faith" with the DoD. —WK

From The Crew

A stylized banner image that says Signal or Noise.

I tried a $150 “creamy” keyboard for the first time

As the first-time owner of a fancy mechanical keyboard, I suddenly have a whole new language to describe how typing feels and sounds—clicky, clacky, thocky, even creamy. I tried the premium NuPhy Gem80, which retails at $149.95, and came away convinced that the appeal isn’t “better tech,” exactly. It’s the satisfaction of putting it together yourself: opening the kit, layering the foam and plating, inserting the (pre-lubed) switches, closing the top plate, then popping in the keycaps. There’s also a subtle pleasure in tactile, responsive keys that press more deeply than your typical laptop keyboard. And yes, the marbly sound of the keys makes typing feel more virtuosic.

The NuPhy Gem80 keyboard on a colorful background

NuPhy

But it took too much fiddling to get it working correctly: I got a lot of key “chattering,” where a single press produced double or triple presses, until a firmware update fixed it. Plus, no matter how pretty it is or how good it sounds, it’s hard for me to justify such a high price. When the core function of the product is simply… typing, are the premium feel and sound really worth over $100?

The Good: The Gem80 kit came with clear instructions. The keys feel nice to press.

The Bad: It wasn’t a pure plug-and-play experience after assembly, which is what one might expect from a product with a $150 price tag.

Verdict: Noise —WK

Disclosure: Companies may send us products to test, but they never pay for our opinions. Our recommendations are unbiased and unfiltered, and Tech Brew may earn a commission if you buy through our links.

If you have a gadget you love, let us know and we may feature it in a future edition.


THE ZEITBYTE

Portrait of Bryan Johnson, man with pushed back hair, black tee, and gold chain.

Tech Brew/Getty Images

Bryan Johnson, the millionaire biohacker known for his ever-more-absurd experiments to live forever, announced a new venture last week: a $1 million-per-year longevity program called Immortals, in which just three lucky winners will get to follow his exact “don’t die” regimen. For that seven-figure annual fee, you get round-the-clock access to “BryanAI” (a custom, always-on AI concierge), continuous biomarker tracking, and the exact protocol Johnson has followed for the past five years: an ascetic life that requires being in bed by 8:30pm, drinking a lot of green sludge, and never touching alcohol. Oh, and swallowing over 100 pills a day. Still, he says over 1,500 people applied in the first 30 hours.

At this point, Johnson is less a wellness entrepreneur and more a living Silicon Valley parable of man’s folly. Since selling his payments company to PayPal in 2013, he’s tried to become the “most measured man in human history,” tracking not just blood biomarkers but also his nocturnal erections. He even injected himself with his teenage son's blood plasma in his anti-aging pursuit (it apparently didn’t work) and launched a competition to see who could “reverse age” the most.

If you’re worried about getting priced out of eternal life, fret not. Though the Immortals program isn’t taking interest-free monthly payments yet, Johnson says cheaper $60,000 tiers—and even free ones—will come as it scales. The long game, apparently, is to democratize immortality. We’ll check back in another 40 years to see how that’s going. —WK

Chaos Brewing Meter: /5

A stylized image with the words open tabs.

Readers’ most-clicked story was this one about a shirtless streaker who was wearing Meta glasses when he jumped onto the field during the Super Bowl, filming the entire run in first-person (the footage is absolutely worth watching).

SHARE THE BREW


Share the Brew, watch your referral count climb, and unlock brag-worthy swag.

Your friends get smarter. You get rewarded. Win-win.

Your referral count: 0

Click to Share

Or copy & paste your referral link to others:
techbrew.com/r/?kid=073f0919

ADVERTISE // CAREERS // SHOP // FAQ

Update your email preferences or unsubscribe here.
View our privacy policy here.

Copyright © 2026 Morning Brew Inc. All rights reserved.
22 W 19th St, 4th Floor, New York, NY 10011