AI & the Law
A Bluesky feed by David Colarusso

This feed is sourced from curated lists of lawyers, tech folks, and others, and pulls out posts that seem to be talking about AI and the law. It is limited to about 100 posts at a time, sorted by a mix of engagement and newness.
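For the curious, here is a minimal sketch of what "sorted by a mix of engagement and newness" could look like in Python. It is an illustration only, assuming a simple engagement-over-age score; the field names, weights, and gravity constant are hypothetical, not SkyFeed's actual formula.

import time

# Hypothetical post records; field names are assumptions for illustration.
posts = [
    {"uri": "at://example/post/1", "likes": 855, "reposts": 205, "replies": 50,
     "created_at": time.time() - 1 * 3600},   # about 1 hour old
    {"uri": "at://example/post/2", "likes": 16, "reposts": 2, "replies": 0,
     "created_at": time.time() - 2 * 3600},   # about 2 hours old
]

def score(post, gravity=1.8):
    # Blend engagement with newness: engaged posts rise, older posts decay.
    engagement = post["likes"] + 2 * post["reposts"] + post["replies"]
    age_hours = (time.time() - post["created_at"]) / 3600
    return engagement / (age_hours + 2) ** gravity

# Rank by the blended score and cap the feed at about 100 posts.
feed = sorted(posts, key=score, reverse=True)[:100]
for post in feed:
    print(f"{post['uri']}  score={score(post):.1f}")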

Feed on Bluesky

Feed Stats

  • 💙 Liked by 53 users
  • 📅 Updated about 1 month ago
  • ⚙️ Provider skyfeed.me

[Chart: AI & the Law likes over time]

Like count prediction
The feed AI & the Law has not gained any likes in the last month.

Feed Preview for AI & the Law

Ed Zitron
@edzitron.com
about 1 hour ago
So this is a very, very desperate move - a very silly app of generative AI slop that will be incredibly expensive for OpenAI to run, and on top of it, they're requiring rights holders to *opt out* once they see violations. You can't do it in advance. They're washed, and trying everything to survive.
50 replies · 205 reposts · 855 likes
The New York Times
@nytimes.com
about 2 hours ago
California now has one of the strongest sets of rules about A.I. in the U.S. Read more about the measure Gov. Gavin Newsom signed into law on Monday.
California’s Gavin Newsom Signs Major AI Safety Law (nyti.ms)

Gavin Newsom signed a major safety law on artificial intelligence, creating one of the strongest sets of rules about the technology in the nation.

4 replies · 43 reposts · 245 likes
Dare Obasanjo
@carnage4life.bsky.social
about 1 hour ago
OpenAI realizes they don’t have enough training data to compete with Google’s Veo being trained on all of YouTube’s content. The solution? They will now train on copyrighted videos unless content owners opt-out per video. 😬
Exclusive | OpenAI’s New Sora Video Generator to Require Copyright Holders to Opt Out (www.wsj.com)

Executives at the company notified talent agencies and studios over the last week.

8 replies · 13 reposts · 45 likes
The Questionable Authority
@questauthority.bsky.social
about 2 hours ago
This is OpenAI attempting to force the law to develop in their chosen direction. Opt-out is simply not how American copyright law is currently structured.

Also: “OpenAI is planning to release a new version of its Sora video generator that creates videos featuring copyright material unless copyright holders opt out of having their work appear, according to people familiar with the matter.”

www.wsj.com

7 replies · 27 reposts · 83 likes
Missing The Point
@missingthept.bsky.social
about 3 hours ago
I’m withdrawing money out of Sam Altman’s checking account unless he has specifically opted out.
Exclusive | OpenAI’s New Sora Video Generator to Require Copyright Holders to Opt Out (www.wsj.com)

Executives at the company notified talent agencies and studios over the last week.

0 replies · 18 reposts · 67 likes
Reuters Legal
@legal.reuters.com
about 2 hours ago
California Governor Gavin Newsom signed a state law on Monday that would require ChatGPT developer OpenAI and other big players to disclose their approach to mitigating potential catastrophic risk from their cutting-edge AI models.
California's Newsom signs law requiring AI safety disclosures (reut.rs)

0 replies · 2 reposts · 16 likes
Free Law Project ⚖
@free.law
about 1 hour ago
Today we and other legal technology providers¹ filed² an amicus brief in Thomson Reuters v. ROSS Intelligence. It's simple: Headnotes cannot be copyrighted and ROSS's use was fair. storage.courtlistener.com…. — ¹,² See below for these excellent orgs and those that helped!

A screenshot from the summary of the argument in the PDF:

“[N]o one can own the law. ‘Every citizen is presumed to know the law,’ and ‘it needs no argument to show . . . that all should have free access’ to its contents.” Georgia v. Public.Resource.Org, Inc., 590 U.S. 255, 265 (2020) (citation omitted). So too for judicial opinions. Id. (“[Judges] cannot be the ‘author[s]’ of the works they prepare ‘in the discharge of their judicial duties.’”). The same should be true of the headnotes at issue in this case, which serve a discrete and limited purpose as often near-verbatim summaries and verbatim quotes that faithfully and accurately describe a specific point of law from a judicial opinion.

Individual headnotes are uncopyrightable because they lack the originality required for copyright protection, and the district court erred in finding to the contrary. And, because there are no or only a few other ways to concisely and precisely express the specific individual legal points stated in an opinion, the expression in a headnote cannot be distinguished from the underlying legal idea it aims to convey. The district court therefore also erred by rejecting the merger defense.

Another screenshot:

But even if headnotes like those in Westlaw’s platform were copyrightable, the district court erred in concluding on summary judgment that Appellant ROSS Intelligence’s indirect use of those headnotes as inputs to train a new artificial intelligence (AI) legal research tool was not fair use. Instead, a correct application of the four fair use factors should have concluded that ROSS’s use was highly transformative and would not serve as a market substitute for headnotes.

A third:

The district court came to that conclusion after analogizing the headnote author’s editorial judgment “to that of a sculptor.” D.E. 770 at 7. The court’s logic is: just as a sculptor takes an uncopyrightable block of marble and creates copyrightable expression by choosing what to cut away and what to leave in place, Appellees’ Westlaw creates protectable expression by taking a court opinion and “identifying which words matter and chiseling away the surrounding mass.” Id.

The sculpture analogy, however, crumbles upon closer inspection. Its most fundamental flaw is the notion that a court opinion is somehow equivalent to an untouched, blank block of marble. Not so. The more accurate analogy is that a judicial opinion is the final product of a judge taking a block of marble and carefully, skillfully chiseling away the surrounding mass to create a host of precise details, each of which reveal a specific point of law or fact. The resulting opinion looks nothing like the initial block of marble; it is instead a highly sculpted work made up entirely of many discrete bits of expressive (but uncopyrightable) content that are very directly tailored to the specific case.
1 reply · 1 repost · 6 likes
Eileen Clancy 🧿
@clancyny.bsky.social
about 3 hours ago
OpenAI's new product requires that a copyright owner opt-out if they don't want their work included. The "startup began notifying talent agencies and studios over the past week about the product, which it plans to release in the coming days, and the opt-out process."

this is not gonna go down well www.reuters.com/technology/o...

www.reuters.com

7 replies · 41 reposts · 47 likes
Mike Lissner
@michaeljaylissner.com
about 1 hour ago
I contacted about 40 legal tech orgs to ask for their signature on this amicus brief against Thomson Reuters. 12 said they were interested. Five signed. They deserve a lot of credit. They are: Cicerai, Dispute Resolution AI, Juristai, Paxton AI, and Trellis Research.

Today we and other legal technology providers¹ filed² an amicus brief in Thomson Reuters v. ROSS Intelligence. It's simple: Headnotes cannot be copyrighted and ROSS's use was fair. storage.courtlistener.com/recap/gov.us... — ¹,² See below for these excellent orgs and those that helped!

0 replies · 0 reposts · 3 likes
Ed Zitron
@edzitron.com
about 6 hours ago
It is now inevitable the AI bubble bursts. Deutsche Bank and Bain both have said that this era cannot last. Every single "vibe coding" or "power of AI" story perpetuates a myth that will savage our markets and hurt retail investors. It's going to be horrible. www.wheresyoured.at/the-c….
I Need You To Listen To Me
I do apologize for the length of this piece, but the significance of this bubble requires depth.

There is little demand, little real money, and little reason to continue, and the sheer lack of responsibility and willingness to kneel before the powerful fills me full of angry bile. I understand many journalists are not in a position where they can just write “this shit sounds stupid,” but we have entered a deeply stupid era, and by continuing to perpetuate the myth of AI, the media guarantees that retail investors and regular people’s 401Ks will suffer.

It is now inevitable that this bubble bursts. Deutsche Bank has said the AI boom is unsustainable outside of tech spending “remaining parabolic,” which it says “is highly unlikely,” and Bain Capital has said that $2 trillion in new revenue is needed to fund AI’s scaling, and even that math is completely fucked as it talks about “AI-related savings”:

Even if companies in the US shifted all of their on-premise IT budgets to cloud and reinvested the savings from applying AI in sales, marketing, customer support, and R&D into capital spending on new data centers, the amount would still fall short of the revenue needed to fund the full investment, as AI’s compute demand grows at more than twice the rate of Moore’s Law, Bain notes. 
Even when stared in the face by a ridiculous idea — $2 trillion of new revenue in a global software market that’s expected to be around $817 billion in 2025 — Bain still oinks out some nonsense about the “savings from applying AI in sales, marketing, customer support and R&D,” yet another myth perpetuated I assume to placate the fucking morons sinking billions into this.

Every single “vibe coding is the future,” “the power of AI,” and “AI job loss” story written perpetuates a myth that will only lead to more regular people getting hurt when the bubble bursts. Every article written about OpenAI or NVIDIA or Oracle that doesn’t explicitly state that the money doesn’t…
I also believe that the way to stop this happening again is to have a thorough and well-sourced explanation of everything as it happens, ripping down the narratives as they’re spun and making it clear who benefits from them and how and why they’re choosing to do so. When things collapse, we need to be clear about how many times people chose to look the other way, or to find good faith ways to interpret bad faith announcements and leaks.

So, how could we have seen this coming?

I don’t know. Did anybody try to fucking look?
3 replies · 36 reposts · 167 likes
Kevin M. Kruse
@kevinmkruse.bsky.social
about 10 hours ago
In the past week or so, President Dementia has posted to social media (1) what seemed to be a DM ordering his attorney general to prosecute his enemies, (2) an obviously fake AI video of him announcing "medbeds" are real, and (3) this video sent by a company lobbying him on medicinal weed.

i don’t think he was supposed to post this. this is a company lobbying for medical cannabis, specifically for medicare coverage of it, and i think they made a whole commercial for an audience of one. “YOU will…” “cementing YOUR legacy,” millions will thank you… this was just for trump

35 replies · 224 reposts · 847 likes
Techmeme
@techmeme.com
about 3 hours ago
California Governor Gavin Newsom signs SB 53 into law; the first-in-the-nation AI safety law requires AI companies to disclose their safety testing regimes (Chase DiFeliciantonio/Politico) Main Link | Techmeme Permalink
0 replies · 3 reposts · 15 likes
The Wall Street Journal
@wsj.com
about 4 hours ago
Exclusive: OpenAI is planning to release a new version of its Sora video generator, which creates videos featuring copyrighted material unless copyright holders opt out of having their work appear.
OpenAI’s New Sora Video Generator to Require Copyright Holders to Opt Out (on.wsj.com)

Executives at the startup notified talent agencies and studios over the last week.

75 replies · 33 reposts · 48 likes
Reuters Legal
@legal.reuters.com
about 2 hours ago
OpenAI is rolling out parental controls for ChatGPT on the web and mobile on Monday, following a lawsuit by the parents of a teen who died by suicide after the artificial intelligence startup's chatbot allegedly coached him on methods of self-harm.
OpenAI launches parental controls in ChatGPT after California teen's suicide (reut.rs)

2 replies · 4 reposts · 4 likes
Ed Zitron
@edzitron.com
about 6 hours ago
The AI bubble inflated through mythology that LLMs and generative AI were "always getting more powerful," without ever defining what that meant. In reality, these probabilistic models never amounted to the promises that boosters and investors had made. www.wheresyoured.at/the-c….
In 2022, a (kind-of) company called OpenAI surprised the world with a website called ChatGPT that could generate text that sort-of sounded like a person using a technology called Large Language Models (LLMs), which can also be used to generate images, video and computer code. 

Large Language Models require entire clusters of servers connected with high-speed networking, all containing this thing called a GPU — graphics processing units. These are different to the GPUs in your Xbox, or laptop, or gaming PC. They cost much, much more, and they’re good at doing the processes of inference (the creation of the output of any LLM) and training (feeding masses of training data to models, or feeding them information about what a good output might look like, so they can later identify a thing or replicate it).

These models showed some immediate promise in their ability to articulate concepts or generate video, visuals, audio, text and code. They also immediately had one glaring, obvious problem: because they’re probabilistic, these models can’t actually be relied upon to do the same thing every single time.

So, if you generated a picture of a person that you wanted to, for example, use in a story book, every time you created a new page, using the same prompt to describe the protagonist, that person would look different — and that difference could be minor (something that a reader should shrug off), or it could make that character look like a completely different person.

Moreover, the probabilistic nature of generative AI meant that whenever you asked it a question, it would guess as to the answer, not because it knew the answer, but rather because it was guessing on the right word to add in a sentence based on previous training data. As a result, these models would frequently make mistakes — something which we later referred to as “hallucinations.”
And that’s not even mentioning the cost of training these models, the cost of running them, the vast amounts of computational power they required, the fact that the legality of using material scraped from books and the web without the owner’s permission was (and remains) legally dubious, or the fact that nobody seemed to know how to use these models to actually create profitable businesses. 

These problems were overshadowed by something flashy, and new, and something that investors — and the tech media — believed would eventually automate the single thing that’s proven most resistant to automation: namely, knowledge work and the creative economy. 

This newness and hype and these expectations sent the market into a frenzy, with every hyperscaler immediately creating the most aggressive market for one supplier I’ve ever seen. NVIDIA has sold over $200 billion of GPUs since the beginning of 2023, becoming the largest company on the stock market and trading at over $170 as of writing this sentence only a few years after being worth $19.52 a share.

Sidenote: those figures reflect the fact that Nvidia’s stock split 10-to-1 in 2024 — or, said plainly, if you held one share before the split, you’d hold ten shares afterwards, changing the unit price of the company’s equity (making it cheaper to buy a share, and thus, more accessible to retail investors) without changing the absolute value of the company. 

This bit isn’t necessarily important to what I’ve written, but given the subject of this newsletter, I think it’s important to lean towards being as explicit as possible about the numbers I share.
While I’ve talked about some of the propelling factors behind the AI wave — automation and novelty — that’s not a complete picture. A huge reason why everybody decided to “do AI” was because the software industry’s growth was slowing, with SaaS (Software As A Service) company valuations stalling or dropping, resulting in the terrifying prospect of companies having to “under promise and over deliver” and “be efficient.”

Things that normal companies — those whose valuations aren’t contingent on ever-increasing, ever-constant growth — don’t have to worry about, because they’re normal companies. 

Suddenly, there was the promise of a new technology — Large Language Models — that were getting exponentially more powerful, which was mostly a lie but hard to disprove because “powerful” can mean basically anything, and the definition of “powerful” depended entirely on whoever you asked at any given time, and what that person’s motivations were. 

The media also immediately started tripping on its own feet, mistakenly claiming OpenAI’s GPT-4 model tricked a Taskrabbit into solving a CAPTCHA (it didn’t — this never happened), or saying that “people who don’t know how to code already [used] bots to produce full-fledged games,” and if you’re wondering what “full-fledged” means, it means “pong” and a cobbled-together rolling demo of SkyRoads, a game from 1993.

The media (and investors) helped peddle the narrative that AI was always getting better, could do basically anything, and that any problems you saw today would be inevitably solved in a few short months, or years, or, well, at some point I guess.
LLMs were touted as a digital panacea, and the companies building them offered traditional software companies the chance to plug these models into their software using an API, thus allowing them to ride the same generative AI wave that every other company was riding. 

The model companies similarly started going after individual and business customers, offering software and subscriptions that promised the world, though this mostly boiled down to chatbots that could generate stuff, and then doubled down with the promise of “agents” — a marketing term that’s meant to make you think “autonomous digital worker” but really means “broken digital product.”

Throughout this era, investors and the media spoke with a sense of inevitability that they never really backed up with data. It was an era based on confidently-asserted “vibes.” Everything was always getting better and more powerful, even though there was never much proof that this was truly disruptive technology, other than in its ability to disrupt apps you were using with AI — making them worse by, for example, suggesting questions on every Facebook post that you could ask Meta AI, but which Meta AI couldn’t answer.

“AI” was omnipresent, and it eventually grew to mean everything and nothing. OpenAI would see its every move lorded over like a gifted child, its CEO Sam Altman called the “Oppenheimer of Our Age,” even if it wasn’t really obvious why everyone was impressed. GPT-4 felt like something a bit different, but was it actually meaningful? 

The thing is, Artificial Intelligence is built and sold on not just faith, but a series of myths that the AI boosters expect us to believe with the same certainty that we treat things like gravity, or the boiling point of water.
1 reply · 5 reposts · 77 likes
Hyo Yoon Kang 강효윤
@hyoyoonkang.bsky.social
about 2 hours ago
Some BS decoded:
“Copyright guardrail”: there’s none, you flipped the burden of proof for protection
“Ecosystem”: BS word for LLM
“Copyright and image right treated differently at Open AI”: it’s different applicable laws, not your ingenious invention. One IP and the other personality & privacy
0 replies · 1 repost · 3 likes
Lia Russell
@liaoffleash.bsky.social
about 3 hours ago
🆕Gavin Newsom just signed the US’s first state AI regulations as Silicon Valley is going all in, despite overstated claims about what the tech can do and its environmental impacts www.sacbee.com/news/politic...
Gov. Gavin Newsom signs AI regulations, bucking Big Tech

www.sacbee.com

Gov. Gavin Newsom signs AI regulations, bucking Big Tech

After months of lobbying on both sides from Big Tech, safety advocates and Hollywood A-listers, Gov. Gavin Newsom signed off on artificial intelligence regulations that are expected to be a model for ...

1 reply · 0 reposts · 8 likes
Ed Zitron
@edzitron.com
about 6 hours ago
Generative AI is a failure. Across every major AI company and hyperscaler selling models or software or compute, there's only $61 billion of revenue in 2025 - on hundreds of billions of dollars of capex and investment. Every AI company is losing money. www.wheresyoured.at/the-c….
Every AI Company Is Unprofitable, Struggling To Grow, And Generative AI's Revenues Are Pathetic (around $61 billion in 2025 across all companies) Compared To Their Costs (hundreds of billions)
Mea Culpa! I have said a few times “$40 billion” is the total amount of AI revenue in 2025, and I need to correct the record. $35 billion is what hyperscalers will make this year (roughly), and when you include OpenAI, Anthropic and other startups, the amount is around $55 billion. If you include neoclouds, this number increases by about $6.1 billion. In any case, this doesn’t dramatically change my thesis. 
As I covered on my premium newsletter a few weeks ago, everybody is losing money on generative AI, in part because the cost of running AI models is increasing, and in part because the software itself doesn’t do enough to warrant the costs associated with running them, which are already subsidized and unprofitable for the model providers. 

Outside of OpenAI (and to a lesser extent Anthropic), nobody seems to be making much revenue, with the most “successful” company being Anysphere, makers of AI coding tool Cursor, which hit $500 million “annualized” (so $41.6 million in one month) a few months ago, just before Anthropic and OpenAI jacked up the prices for “priority processing” on enterprise queries, raising its operating costs as a result.

In any case, that’s some piss-poor revenue for an industry that’s meant to be the future of software. Smartwatches are projected to make $32 billion this year, and as mentioned, the Magnificent Seven expects to make $35 billion or so in revenue from AI this year.
Even Anthropic and OpenAI seem a little lethargic, both burning billions of dollars while making, by my estimates, no more than $2 billion and $6.26 billion in 2025 so far, despite projections of $5 billion and $13 billion respectively. 

Outside of these two, AI startups are floundering, struggling to stay alive and raising money in several-hundred million dollar bursts as their negative-gross-margin businesses flounder. 

As I dug into a few months ago, I could find only 12 AI-powered companies making more than $8.3 million a month, with two of them slightly improving their revenues, specifically AI search company Perplexity (which has now hit $150 million ARR, or $12.5 million in a month) and AI coding startup Replit (which also hit $150 million ARR in September). 

Both of these companies burn ridiculous amounts of money. Perplexity burned 164% of its revenue on Amazon Web Services, OpenAI and Anthropic last year, and while Replit hasn’t leaked its costs, The Information reports its gross margins in July were 23%, which doesn’t include the costs of its free users, which you simply have to do with LLMs as free users are capable of costing you a hell of a lot of money.

Problematically, your paid users can also cost you more than they bring in as well. In fact, every user loses you money in generative AI, because it’s impossible to do cost control in a consistent manner.
1 reply · 9 reposts · 61 likes
Ed Zitron
@edzitron.com
about 6 hours ago
Beneath the surface of the AI compute story lies a dirty secret: most of the revenue is hyperscalers like Microsoft or OpenAI, with less than $1 billion of actual revenue in selling AI compute to other companies after hundreds of billions of dollars of capex. www.wheresyoured.at/the-c….
There Is Less Than A Billion Dollars In AI Compute Revenue Outside of NVIDIA, Hyperscalers and OpenAI, And NVIDIA Is Using Its Compute Deals To Help Neoclouds Raise More Debt To Buy More GPUs
As I went into recently on my premium newsletter, NVIDIA funds and sustains Neoclouds as a way of funnelling revenue to itself, as well as partners like Supermicro and Dell, resellers that take NVIDIA GPUs and put them in servers to sell pre-built to customers. These two companies made up 39% of NVIDIA’s revenues last quarter. 

Yet when you remove hyperscaler revenue — Microsoft, Amazon, Google, OpenAI and NVIDIA — from the revenues of these neoclouds, there’s barely $1 billion in revenue combined, across CoreWeave, Nebius and Lambda. CoreWeave’s $5.35 billion revenue is predominantly made up of its contracts with NVIDIA, Microsoft (offering compute for OpenAI), Google (hiring CoreWeave to offer compute for OpenAI), and OpenAI itself, which has promised CoreWeave $22.4 billion in business over the next few years.

This is all a lot of stuff, so I’ll make it really simple: there is no real money in offering AI compute, but that isn’t Jensen Huang’s problem, so he will simply force NVIDIA to hand money to these companies so that they have contracts to point to when they raise debt to buy more NVIDIA GPUs.
Neoclouds are effectively giant private equity vehicles that exist to raise money to buy GPUs from NVIDIA, or for hyperscalers to move money around so that they don’t increase their capital expenditures and can, as Microsoft did earlier in the year, simply walk away from deals they don’t like. Nebius’ “$17.4 billion deal” with Microsoft even included a clause in its 6-K filing that Microsoft can terminate the deal in the event the capacity isn’t built by the delivery dates, and Nebius has already used the contract to raise $3 billion to… build the data center to provide compute for the contract.

Here, let me break down the numbers:

  • CoreWeave: Microsoft (60% of revenue in 2024, providing compute for OpenAI), NVIDIA (15% of its 2024 revenue), Meta, OpenAI (which picked up Microsoft's option for more future capacity), Google (providing compute for OpenAI)
  • Lambda: Half of its revenue comes from Amazon and Microsoft, and now $1.5 billion over four years will come from NVIDIA, which at its current revenue ($250m in the first half of 2025) would make NVIDIA its largest customer.
  • Nebius: With similar revenue to Lambda, Nebius' largest customer is now Microsoft, though it reports a number of smaller companies and institutions on their customers page.
From my analysis, it appears that CoreWeave, despite expectations to make that $5.35 billion this year, has only around $500 million of non-Magnificent Seven or OpenAI AI revenue in 2025, with Lambda estimated to have around $100 million in AI revenue, and Nebius around $250 million without Microsoft’s share, and that’s being generous.
Here’s Why This Is Bad
I dunno man, let’s start simple: $50 billion a quarter of data center funding is going into an industry that has less revenue than Genshin Impact. That feels pretty bad. Who’s gonna use these data centers? How are they going to even make money on them? Private equity firms don’t typically hold onto assets, they sell them or take them public. Doesn’t seem great to me!

Anyway, if AI was truly the next big growth vehicle, neoclouds would be swimming in diverse global revenue streams. Instead, they’re heavily-centralized around the same few names, one of which (NVIDIA) directly benefits from their existence not as a company doing business, but as an entity that can accrue debt and spend money on GPUs. These Neoclouds are entirely dependent on a continual flow of private credit from firms like Goldman Sachs (Nebius, CoreWeave, Lambda for its IPO), JPMorgan (Lambda, Crusoe, CoreWeave), and Blackstone (Lambda, CoreWeave), who have in a very real sense created an entire debt-based infrastructure to feed billions of dollars directly to NVIDIA, all in the name of an AI revolution that's yet to arrive.

The fact that the rest of the neocloud revenue stream is effectively either a hyperscaler or OpenAI is also concerning. Hyperscalers are, at this point, the majority of data center capital expenditures, and have yet to prove any kind of success from building out this capacity, outside, of course, Microsoft’s investment in OpenAI, which has succeeded in generating revenue while burning billions of dollars. 

Hyperscaler revenue is also capricious, but even if it isn’t, why are there no other major customers? Why, across all of these companies, does there not seem to be one major customer who isn’t OpenAI? 

The answer is obvious: nobody that wants it can afford it, and those who can afford it don’t need it.
1 reply · 7 reposts · 53 likes
Ed Newton-Rex
@ednewtonrex.bsky.social
about 5 hours ago
Genuinely emotional reading this. $1.5 billion to authors, in the biggest copyright settlement in history. Big tech is not above the law. This is just the start. www.nytimes.com/2025/09/2….
0 replies · 6 reposts · 22 likes