Calling Nvidia niche feels a bit wild given their status quo right now, but from a foundry perspective, it seems true. Apple is the anchor tenant that keeps the lights on across 12 different mature and leading-edge fabs.
Nvidia is the high-frequency trader hammering the newest node until the arb closes. Stability usually trades at a discount during a boom, but Wei knows the smartphone replacement cycle is the only predictable cash flow. Apple is smart. If the AI capex cycle flattens in late '27 as models hit diminishing returns, does Apple regain pricing power simply by being the only customer that can guarantee wafer commits five years out?
So let's say TSMC reciprocated Apple's consistency as a customer by giving them preferential treatment for capacity. It's good business, after all.
However, everyone knows that good-faith reciprocity at that scale is not rewarded. Apple is ruthless. There are probably thousands of untold stories of how hard Apple has hammered its suppliers over the years.
While Apple has good consumer brand loyalty, they arguably treat their suppliers relatively poorly compared to the gold standard, like Costco.
At this scale and volume, it's not really about good faith.
Changing fabs is non-trivial. If TSMC pushed Apple to a point where they had to find an alternative (which is another story) and Apple did switch, TSMC would have to work extra hard to win them back in the future. Apple wouldn't want to invest twice in changing back and forth.
On the other hand, TSMC knows that changing fabs is not really an option and Apple doesn't want to do it anyway, so they have leverage to squeeze.
At this level, everyone knows it's just business and it comes down to optimizing long-term risk/reward for each party.
Apple has used both Samsung and TSMC for its chips in the past. Until the A7 it was Samsung, A8 was TSMC, and the A9 was dual-sourced by both! Apple is used to switching between suppliers fairly often for a tech company; it's not that it's too hard for them to switch fab, it's that TSMC is the only competitive fab right now.
There are rumours that Intel might win some business from them within two years. I could totally see Apple turning to Intel for the Mac chips, since they're much lower volume. I know it sounds crazy, we just got rid of Intel, but I'm talking about using Intel as a fab, not going back to x86. Those days are done.
But wasn't the reason they split with Samsung that, in Jobs's view, Samsung copied the iPhone (to which he reacted with thermonuclear threats)?
They did have the expertise to build it, after all. What would happen if TSMC now built an M1 clone? I doubt this is a path anyone wants to go down, but it seems to me an implied threat that gets priced in.
Jobs's thermonuclear threats were about Android and Google, not Samsung; Schmidt was on Apple's board during the development of Android.
> "I will spend my last dying breath if I need to, and I will spend every penny of Apple’s $40 billion in the bank, to right this wrong. I’m going to destroy Android, because it’s a stolen product. I’m willing to go thermonuclear war on this."
The falling out with Samsung was related, but more about the physical look of the phone.
Doesn't seem likely, TBH. Never mind the legal agreements they would be violating, TSMC fabs Qualcomm's Snapdragon line of ARM processors. The M1 is good, but not that good (it's a couple of generations old by this point, for one). Samsung had a phone line of their own to put it in as well. TSMC does not.
At the end of the month, laptops with Intel's latest processors will start shipping. These use Intel's 18A process for the CPU chiplet. That makes Intel the first fab to ship a process using backside power delivery. There's no third party testing yet to verify if Intel is still far behind TSMC when power, performance and die size are all considered, but Intel is definitely making progress, and their execs have been promising more for the future, such as their 14A process.
Apple is the company that made a strategic move to remove Intel from their supply chain by purchasing a semiconductor firm and licensing ARM. Managing 'painful' transitions is a core competency of theirs.
I think you’re correct that they’re good at just ripping the band-aid off, but the details seem off. AFAIK, Apple has always had a license with ARM and a very unique one since they were one of the initial investors when it was spun out from Acorn. In fact, my understanding is that Apple is the one that insisted they call themselves Advanced RISC Machines Ltd. because they did not want Acorn (a competitor) in the name of a company they were investing in.
Not all of Apple's chips need to be fabbed at the smallest size; those could certainly go elsewhere. I'm sure they already do.
Is there anyone who can match TSMC at this point for the top of the line M or A chips? Even if Intel was ready and Apple wanted to would they be able to supply even 10% of what Apple needs for the yearly iPhone supply?
I would imagine they could split their orders between different fabricators; they can put in orders for the most cutting edge chips for the latest Macs and iPhones at TSMC and go elsewhere for less cutting edge chips?
Presumably they already do that (since non-cutting-edge chip fab is likely to be more competitive and less expensive), so, given they are already doing that, this problem refers to the cutting-edge allocations, which are getting scarce, as exemplified at least by Nvidia's growth.
It's ridiculous that a trillion dollar company feels beholden to a supplier. With that kind of money, it should be trivial to switch. People forget Nvidia didn't even exist 35 years ago. It would probably take like 3 to 5 years to catch up with the benefit of hindsight and existing talent and tools?
And anyway consumers don't really need beefy devices nowadays. Running local LLM on a smartphone is a terrible idea due to battery life and no graphics card; AI is going to be running on servers for quite some time if not forever.
It's almost as if there is a constant war to suppress engineer wages... That's the only variable being affected here which could benefit from increased competition.
If tech sector is so anti-competitive, the government should just seize it and nationalize it. It's not capitalism when these megacorps put all this superficial pressure but end up making deals all the time. We need more competition, no deals! If they don't have competition, might as well have communism.
I know you are maybe joking but I don't think the government nationalizing the tech sector would be a good idea. They can pull down the salaries even more if they want. It can become a dead end job with you stuck on archaic technology from older systems.
Government jobs should only be an option if there are enough social benefits.
I'm joking yes but as an engineer who has seen the bureaucracy in most big tech companies, the joke is getting less funny over time.
I've met many software engineers who call themselves communists. I can kind of understand. This kind of communist-like bureaucracy doesn't work well in a capitalist environment.
It's painful to work in tech. It's like our hands are tied and are forced to do things in a way we know is inefficient. Companies use 'security' as an excuse to restrict options (tools and platforms), treat engineers as replaceable cogs as an alternative to trusting them to do their job properly... And the companies harvest what they sow. They get reliable cogs, well versed in compliance and groupthink and also coincidentally full-blown communists; they're the only engineers remaining who actually enjoy the insane bureaucracy and the social climbing opportunities it represents given the lack of talent.
I'm going through a computer engineering degree at the moment, but I am thinking about pursuing Law later on.
Looking at other paths: Medicine requires expensive schooling and isn't really an option after a certain age and law, on the other hand, opened its doors too widely and now has a large underclass of people with third-tier law degrees.
Perhaps you can try to accept the realities of the system while trying to live the best life that you can?
Psyching yourself all the way, trying to find some sort of escape towards a good life with freedom later on...
It can be interpreted a different way too. Apple is just a channel for TSMC's technology. Also, the cost to build a fab that advanced on, say, a 3-year horizon, let alone immediately available, is not one even Apple can commit to without cannibalising its core business.
About 17 years ago I worked at a company that was clamoring to get products into Costco, when we did I was shocked at the fees they charged us for returns. If they're the gold standard for supplier relations it's a wonder anyone bothers being a supplier.
Apple loaned TSMC money in order to build manufacturing capacity back around the M1 era. They’ve done that for a number of suppliers and the “interest payments” were priority access to capacity. Everyone was complaining about how Apple got ARM chips while others had to wait in line.
That said, they did that for a sapphire glass supplier for the Apple Watch and when their machines had QC problems they dropped them like a rock and went back to Corning.
But is that really any different from any other supplier? And who tf do you think they’re going to drop TSMC for right now? They are the cock of the walk.
The counterargument is: is Nvidia friendly to their supply chain? I have to think that maybe they are, with their massive margins, because they can be - their end buyers are currently willing to absorb costs regardless of expense. But I don't know, and that will change as their business changes.
Your underlying statement implies that whoever is replacing apple is a better buyer which I don't think is necessarily true.
> The only complete package integrator that manages to make a relationship work with Nvidia is Nintendo.
And that's probably because Nintendo isn't adding pressure on either TSMC's or Nvidia's capacity; iirc Nintendo uses something like Maxwell or Pascal on really mature processes for the Switch chips/SoCs.
And also, from Nvidia's perspective, the Switch 1 was just the hardware for an Nvidia Shield tablet, without the downside of managing the customer-facing side and with the greater volume of Nintendo's market reach. (Not that it wasn't more than that for consumers or Nintendo; just talking Nvidia here.)
I think that works out tremendously well for Nintendo, especially when you look at the Wii-U vs the Switch.
I shot a video at CNET in probably 2011 of a single touchscreen display (the APX 2500 prototype, iirc) and it had precisely the same dimensions as the Switch 1.
Nintendo was reluctantly a hardware company... they're a game company who can make hardware, but they know they're best when they own the stack.
> EVGA Terminates Relationship With Nvidia, Leaves GPU Business
> According to Han, Nvidia has been difficult to work with for some time now. Like all other GPU board partners, EVGA is only told the price of new products when they're revealed to everyone on stage, making planning difficult when launches occur soon after. Nvidia also has tight control over the pricing of GPUs, limiting what partners can do to differentiate themselves in a competitive market.
If your customers are known to be antagonistic to business partners, the correct answer is to diversify them as much as you can, even at reasonable cost to everything else.
Yep, you can be close allies with a nation and have many shared interests, and even a trade deficit with them as we in Britain did, and then they stab you in the back with tariffs.
Even if Apple isn't very good at reciprocating faithful service from its suppliers, there's also the matter of how it treats suppliers who cause it problems instead.
Suppliers really hate working with Costco. They're slow to pay, allow for only small margins, and often take too high a percentage of a business's revenue, all of which is not friendly towards suppliers.
Agreed, TSMC can do whatever they want. In 2027 no other fab will match what TSMC has today, and anything that requires the latest process node is going to get more expensive - so your Apple silicon and your AMD chips.
No public company will be loyal or nice to their suppliers. That is just not in the playbook for public companies. They have "fiduciary duty", not human duty.
Private companies can be nice to their suppliers. Owners can choose to stay loyal to suppliers they went to high school with, even if it isn't the most cost-efficient.
> they arguably treat their suppliers relatively poorly compared to the Gold standard like Costco.
I'm not saying you're wrong, but your previous paragraph sounded like you were wondering if it was the case, whereas here you're saying it's known. Is this all true? Do they have a reputation for hammering their suppliers?
It felt like a more confident statement and I was legitimately asking. I have little love for Apple. Ditched my Mac Studio earlier this year for a Linux only build after 20 years of being on Macs. I say this because I think folks think I am trying to sealion/“just ask questions:tm:” or some nonsense, when I am legitimately asking if this is a documented practice and what the extent is. I am not finding it easy to find info on this.
> Meanwhile, Vietnam will be the chief manufacturing hub "for almost all iPad, Mac, Apple Watch and AirPods product sold in the US".
> "We do expect the majority of iPhones sold in US will have India as their country of origin," Mr Cook said.
Still not made in the US and no plan to change that. They will be selling products made in India/Vietnam domestically and products made in China internationally.
I tend to agree with you, feels to me like the root of this is essentially whether foundries will "go all in" on AI like the rest of the S&P 500. But why trade away one trillion-dollar customer for another trillion-dollar customer if the first one is never going away, and the second one might?
I think it is less of a trade and more of a symbiotic capital cycle, if I can call it that?
Nvidia's willingness to pay exorbitant prices for early 2nm wafers subsidizes the R&D and the brutal yield-learning curve for the entire node. But you can't run a sustainable gigafab solely on GPUs...the defect density math is too punishing. You need a high-volume, smaller-die customer (Apple) to come in 18 months later, soak up the remaining 90% of capacity and amortize that depreciation schedule over a decade.
That is the traditional textbook yield curve logic, if I'm not wrong? Smaller area = higher probability of a surviving die on a dirty wafer.
But I wonder if the sheer margin on AI silicon basically breaks that rule? If Nvidia can sell a reticle-sized package for 25k-30k USD, they might be perfectly happy paying for a wafer that only yields 30-40% good dies.
Apple OTOH operates at consumer electronics price points. They need mature yields (>90%) to make the unit economics of an iPhone work. There's also the binning factor I am curious about. Nvidia can disable 10% of the cores on a defective GPU and sell it as a lower SKU. Does Apple have that same flexibility with a mobile SoC where the thermal or power envelope is so tightly coupled to the battery size?
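The textbook yield-curve logic in these comments can be sketched with the classic Poisson yield model, Y = exp(-D0 * A). All numbers below (defect density, die areas) are illustrative assumptions, not real TSMC figures:

```python
import math

# Assumed, illustrative numbers only; real defect densities and die
# sizes for current nodes are not public.
WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.1          # assumed defect density on a maturing node

def die_yield(die_area_mm2, d0_per_cm2=DEFECTS_PER_CM2):
    """Poisson yield model: Y = exp(-D0 * A), with A converted to cm^2."""
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)

def good_dies_per_wafer(die_area_mm2):
    """Rough good-die count (ignores edge loss and scribe lines)."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    candidates = int(wafer_area // die_area_mm2)
    return int(candidates * die_yield(die_area_mm2))

# ~100 mm^2 mobile SoC vs a near-reticle-limit ~800 mm^2 GPU die
soc_yield = die_yield(100)     # roughly 0.90
gpu_yield = die_yield(800)     # roughly 0.45
```

Under these assumptions the small mobile die yields above 90% while the reticle-sized die lands in the 40-50% range, which is exactly the "30-40% good dies might still be fine at $25k a package" trade-off described above.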
I am curious about the binning factor too since in the past, AMD and Intel have both made use of defect binning to still sell usable chips by disabling cores. Perhaps Apple is able to do the same with their SoCs? It's not likely to be as granular as Nvidia who can disable much smaller areas of the silicon for each of their cores. On the other hand, the specifics of the silicon and the layout of the individual cores, not to mention the spread of defects over the die might mitigate that advantage.
They do bin their chips. Across the range (A- and M-series) they have the same chip with fewer / disabled CPU and GPU cores. You pay a premium for ones with more cores. Unsure about the chip frequencies - Apple doesn't disclose those openly from what I know.
With current AI pricing for silicon, I think the math’s gone out the window.
For Apple, they have binning flexibility, with Pro/Max/Ultra, all the way down to iPads - and that’s after the node yields have been improved via the gazillion iPhone SoC dies.
NVIDIAs flexibility came from using some of those binned dies for GeForce cards, but the VRAM situation is clearly making that less important, as they’re cutting some of those SKUs for being too vram heavy relative to MSRP.
Datacenter GPU dies cannot be binned for GeForce because they lack fixed-function graphics features. Raytracing acceleration in particular must take non-trivial area that you wouldn't want to spend on a datacenter die. Not to mention the data fabric is probably pretty different.
The A40, L40S and Blackwell 6000 Pro Server have RT cores. 3 datacenter GPUs.
If you want binning in action, the RTX cards other than the top ones are it. Look at the A30 too, which I was surprised had no successor. Either they had better yields on Hopper or they didn't get enough out of the A30...
> For Apple, they have binning flexibility, with Pro/Max/Ultra, all the way down to iPads
The Pro and Max chips are different dies, and the Ultra currently isn't even the same generation as the Max. And the iPads have never used any of those larger dies.
> NVIDIAs flexibility came from using some of those binned dies for GeForce cards
NVIDIA's datacenter chips don't even have display outputs, and have little to no fixed-function graphics hardware (raster and raytracing units), and entirely different memory PHYs (none of NVIDIA's consumer cards have ever used HBM).
The desktop 4090 uses the AD102 die, the laptop 4090 and desktop 4080 use the AD103 die, and the laptop 4080 uses the AD104 die. I'm not at all denying that binning is a thing, but you and other commenters are exaggerating the extent of it and underestimating how many separate dies are designed to span a wide product line like GPUs or Apple's computers/tablets/phones.
"Ultra" isn't even binned - it's just 2x "Max" chips connected together.
Otherwise, yes, if a chip doesn't make M4 Max, it can make M4 Pro. If not, M4. If not, A18 Pro. If not that, A18.
And even all of the above mentioned marketing names come in different core configurations. M4 Max can be 14 CPU Cores / 32 GPU cores, and it can also be 16 CPU cores and 40 GPU cores.
So yeah, I'd agree that Apple has _extreme_ binning flexibility. It's likely also the reason why we got A19 / A19 Pro / M5 first, and we still don't have M5 Pro or M5 Max yet. Yields not high enough for M5 Max yet.
Unfortunately I don't think they bin down even lower (say, to S chips used in Apple Watches), but maybe in the future they will.
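The within-die binning described above (same die, cores fused off, sold as a lower SKU) can be sketched as a simple grading loop. The SKU names and thresholds mirror the 32- and 40-GPU-core M4 Max configurations mentioned in this thread; the actual fusing rules are not public, so treat this as a toy model:

```python
# Toy binning sketch: a tested die is graded by how many of its GPU
# cores passed, and sold as the highest SKU it qualifies for.
# Thresholds are hypothetical; Apple's real binning rules are not public.
SKUS = [                        # (marketing name, min working GPU cores), best first
    ("M4 Max 40-core", 40),
    ("M4 Max 32-core", 32),
]

def bin_die(working_gpu_cores: int) -> str:
    """Return the best SKU this die qualifies for, else mark it unusable."""
    for name, needed in SKUS:
        if working_gpu_cores >= needed:
            return name
    return "scrap / other use"
```

So a die with 35 working GPU cores ships as the 32-core part with three extra cores fused off, which is how a defect stops costing a whole die.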
In retrospect, Apple ditching Intel was truly a gamechanging move. They didn't even have to troll everyone by putting an Intel i9 into a chassis that couldn't even cool an i7 to boost the comparison figures, but I guess they had to hedge their bet.
> yes, if a chip doesn't make M4 Max, it can make M4 Pro. If not, M4. If not, A18 Pro. If not that, A18.
No, that's entirely wrong. All of those are different dies. The larger chips wouldn't even fit in phones, or most iPad motherboards, and I'm not sure a M4 Max or M4 Pro SoC package could even fit in a MacBook Air.
As a general rule, if you think a company might ever be selling a piece of silicon with more than half of it disabled, you're probably wrong and need to re-check your facts and assumptions.
There are two levels of Max chip, but think of a Max as two Pros on a die (this is a simplification; you can also think of a Pro as being two base chips tied together), so a bad Max can't be binned into a Pro. But a high-spec Max can be binned into a low-spec Max.
Why are foundries going 'all in' on AI? They fab chips for customers; it doesn't matter what chips they are or who the customer is. 'Who will pay the most for us to make their chips first' is the only question TSMC will be asking. The market of the customer is irrelevant.
Still more predictable than GPU buys in the current climate. Power-connector melting aside, GPUs in most cases get replaced less frequently than cell phones, unless of course you have lots of capital/profit infusion and, for whatever reason, need to stay ahead of the game.
Heck, if Apple wanted to be super cheeky, they could probably still pivot on the reserved capacity to do something useful (e.g. a revised older design for whatever node they reserved, where they can get more chips per wafer for cheaper models).
NVDA on the other hand is burning a lot of good-will in their consumer space, and if a competitor somehow is able to outdo them it could be catastrophic.
AI capex may or may not flatten in the near future (and I don't necessarily see a reason why it would). But smartphone capex already has.
Like smartphones, AI chips also have a replacement cycle. AI chips depreciate quickly -- not because the old ones go bad, but because the new ones are so much better in performance and efficiency than the previous generation. While smartphones aren't making huge leaps every year like they used to, AI chips still are -- meaning there's a stronger incentive to upgrade every cycle for these chips than smartphone processors.
Lifetime curve is something they can control. If they can predict replacement rate, makes sense to make chips go bad on the same schedule, saving on manufacturing costs.
Nvidia have been using TSMC since the Riva 128. That's before Apple started making any of their own silicon. GPUs are easily as predictable as mobile phones.
On the other hand, it's not like Apple can just switch fabs without any cost or difficulty. Sure, TSMC is undoubtedly happy to have a customer with predictable needs, but Apple is also subject to some level of lock-in.
Is the argument that Apple will go out of business? AAPL?
Wait,
> one player has a short-term ability to vastly outspending all the rest.
I assure you, Apple has the long-term and short-term ability to spend like a drunken sailor all day and all night, indefinitely, and still not go out of business. Of course they’d prefer not to. But there is no ‘ability to pay’ gap here between these multi-trillion-dollar companies.
Apple will be forced to match or beat the offer coming from whoever is paying more. It will cost them a little bit of their hilariously-high margins. If they don’t, they’ll have to build less advanced chips or something. But their survival is not in doubt and TSMC knows that.
If it takes 4 years to build a new fab and Apple is willing to commit to paying the price of an entire fab, for chips to be delivered in 4 years' time - why not take the order and build the capacity?
> Not a system that necessarily works all that well if one player has a short-term ability to vastly outspending all the rest.
Well yeah, people were identifying that back when Apple bought out the entirety of the 5nm node for iPhones and e-tchotchkes. It was sorta implicitly assumed that any business that builds better hardware than Apple would boot them out overnight.
> It was sorta implicitly assumed that any business that builds better hardware than Apple would boot them out overnight
It's not "build better hardware" though, it's "continue to ship said hardware for X number of years". If someone buys out the entire fab capacity and then goes under next year, TSMC is left holding the bag
It's not that, either. Low-margin, high-volume contracts are the worst business you can take. It devalues TSMC's work and creates an unnatural downward force on the price of cutting-edge silicon. By ignoring Apple's demands they're creating natural competition that raises the value of their entire portfolio.
It really is about making better hardware. Apple would be out-bidding Nvidia right now, but only if the iPhone had equivalent value-add to Nvidia hardware. Alas, iPhones are overpriced and underpowered, most people will agree.
Yup; or potentially just purchasing a fab from them, given that Intel has signaled they want to leverage TSMC more, and much of Intel's remaining value is wrapped up in server-grade chips that Apple wouldn't be interested in.
But also, Apple is one of the very few companies at their size that seems to have the political environment to make, and more importantly succeed at, decade-long investments. The iPhone wasn't an obvious success for 5 or 6 years. They started designing their own iPhone chips around the iPhone 4, iirc, and pundits remarked that it wasn't a good idea; today, the M5 in the iPad Pro outperforms every chip made by EVERYONE else in the world by 25%, at a tenth the power draw and with no active cooling (e.g. vs the 9950X3D). Apple Maps (enough said). We're seeing similar investments today - things we could call "failures" that in 10 years we'll think were obviously going to be successful (cough, Vision Pro).
I thought the prediction was that the scaling of LLMs making them better would plateau, not that all advancement would stop? And that has pretty much happened as all the advancements over the last year or more have been architectural, not from scaling up.
You say that, but to me they seem roughly the same as they've been for a good while. Wildly impressive technology, very useful, but also clearly and confidently incorrect a lot. Most of the improvement seems to have come from other avenues - search engine integration, image processing (still blows my mind every time I send a screenshot to a LLM and it gets it) and stuff like that.
Sure maybe they do better in some benchmarks, but to me the experience of using LLMs is and has been limited by their tendency to be confidently incorrect which betrays their illusion of intelligence as well as their usefulness. And I don't really see any clear path to getting past this hurdle, I think this may just be about as good as they're gonna get in that regard. Would be great if they prove me wrong.
"Apple is smart. If the AI capex cycle flattens in late '27 as models hit diminishing returns, does Apple regain pricing power simply by being the only customer that can guarantee wafer commits five years out?"
That's the take I would pursue if I were Apple.
A quiet threat of "We buy wafers on consumer demand curves. You’re selling them on venture capital and hype"
Why should that change TSMC's decision-making even a little?
The reality is that TSMC has no competition capable of shipping an equivalent product. If AI fizzles out completely, the only way Apple can choose to not use TSMC is if they decide to ship an inferior product.
A world where TSMC drains all the venture capital out of all the AI startups, using Nvidia as an intermediary, and then the bubble pops and they all go under, is a perfectly happy place for TSMC. In these market conditions they are asking for cash upfront. The worst that can happen is that they overbuild capacity using other people's money that they don't have to pay back, leaving them in an even more dominant position in the crash that follows.
Nvidia is not a venture capital outlet. They are a self-sustaining business with several high-margin customers that will buy out their whole product line faster than any iPhone or Mac.
From TSMC's perspective, Apple is the one that needs financial assistance. If they wanted the wafers more than Nvidia, they'd be paying more. But they don't.
But Nvidia has had high-profile industry partners for decades. Nintendo isn't "venture capital and hype" nor is PC gaming and HPC datacenter workloads.
But Nvidia wasn't able to compete with Apple for capacity on new process nodes with Nintendo volumes (the concept is laughable; compare Apple device unit volumes to game console unit volumes). What has changed in the semiconductor industry is overwhelming demand for AI focused GPUs, and that is paid for largely with speculative VC money (at this point, at least; AI companies are starting to figure out monetization).
This article repeatedly cites revenue growth numbers as an indicator of Nvidia and Apple’s relative health, which is a very particular way of looking at things. By way of another one, Apple had $416Bn in revenue, which was a 6% increase from the prior year, or about $25Bn, or about all of Nvidia’s revenue in 2023. Apple’s had slow growth in the last 4 years following a big bump during the early pandemic; their 5 year revenue growth, though, is still $140Bn, or about $10Bn more than Nvidia’s 2025 revenues. Nvidia has indeed grown like a monster in the last couple years - 35Bn increase from 23-24 and 70Bn increase from 24-25. Those numbers would be 8% and 16% increases for Apple respectively, which I’m sure would make the company a deeply uninteresting slow-growth story compared to new upstarts.
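A quick back-of-envelope check of the framing above, using the figures as stated in the comment (in $Bn; not independently verified):

```python
# Figures as quoted in the comment above, in $Bn.
apple_revenue = 416                    # Apple FY revenue
apple_growth_abs = apple_revenue * 0.06  # ~$25Bn: one year of Apple growth,
                                         # on the order of Nvidia's 2023 revenue

# Nvidia's recent absolute revenue jumps, restated as a share of Apple's base
jump_23_24 = 35 / apple_revenue * 100  # roughly 8%
jump_24_25 = 70 / apple_revenue * 100  # roughly 17%
```

Which is the point: the same raw dollar jumps read as monster growth for Nvidia and as single-to-low-double-digit percentages against Apple's base.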
I get why the numbers are presented the way they are, but it always gets weird when talking about companies of Apple’s size - percent increases that underwhelm Wall Street correspond to raw numbers that most companies would sacrifice their CEO to a volcano to attain, and sales flops in Apple’s portfolio mean they only sold enough product to supply double-digit percentages of the US population.
I think there's something about both the myth of the unicorn and of the hero founder/CEO in tech that forces a push towards legibility and easy narratives for a company - it means that, to a greater degree than other industries, large tech companies are a storytelling exercise, and "giant corporate blob that sprawls into everything" isn't a sexy story, nor is "consistent 3% YoY gains," even when that's translating into "we added the GDP of a medium-sized country to our cash pile again this year."
Every time a CEO or company board says "focus," an interesting product line loses its wings.
It's because of the storytelling needed for Wall Street. It's the only way to get sky-high revenue multiples - selling a dream - because if you're a conglomerate, all you can do is sell the P&L; it's like selling an index.
If you have a business division that does exceedingly well compared to the rest, you make more money by spinning it off.
I think Asian companies are much less dependent on public markets and have strong private control (the chaebols in South Korea, for example - Samsung, LG, Hyundai etc).
If you look at US companies that are under "family control" you might see a similar sprawl, like Cargill, Koch, I'd even put Berkshire in this class even though it's not "family controlled" in the literal sense, it's still associated with two men and not a professional CEO.
It might matter that Nvidia sells graphics cards and Apple sells computers and computer-like devices with cases and peripherals and displays and software and services. TSMC is responsible for a much larger proportion of Nvidia's product than Apple's.
I dislike this dramatization in reporting of mundane facts.
So report the facts but sentences like "What Wei probably didn’t tell Cook is that Apple may no longer be his largest client" make it personal, they make you take sides, feel sorry for somebody, feel schadenfreude... (as you can observe in the comments)
Doesn't seem like LLM generated text to me. Even prior to ChatGPT some journalists preferred to write in a novel-style with extraneous fluff like that.
A lot of people don't actually learn good writing at their fancy schools - but they do learn the stylistic quirks that signal one went to the fancy school.
How do you think it got in the LLM training set in the first place?
For the last time.. Word (the program very popularly used by many reporters across the world to write articles) automatically autocorrects hyphens to em-dashes according to the default loaded grammar rules for En-US. The existence of em-dashes in an article does NOT immediately imply GenAI slop.
It seems a bit odd that data center operators aren’t willing to put their money where their mouth is.
Data center operators say: expand more quickly.
TSMC says: we need long term demand to justify that.
And all the data center guys say is: don’t worry that won’t be an issue, trust us.
I would think that if they were serious they would commit to cofinancing new foundries or signing long term minimum purchasing agreements.
The semiconductor industry is extremely cyclical. One of the reasons TSMC survived the previous boom-bust cycles is their caution. If you overexpand, you risk going out of business in the next downturn.
AFAIK only Apple has been committing to wafers up to 3 years in the future. It would be a crazy bet for Nvidia to do the same, as they don't know how big the business will be.
If the long-term demand disappears, there may not be anyone left for TSMC to collect from on those MPAs. This somewhat undermines their utility as a security.
> I would think that if they were serious they would commit to cofinancing new foundries or signing long term minimum purchasing agreements.
That would ruin TSMC and others' independence.
Nvidia already did buy Intel shares so it is a thing.
Nvidia has discussed more capacity with TSMC many times. It's not about financing or minimum purchasing agreements. TSMC played along during COVID and got hit.
How do you figure? Demand for electronics skyrocketed when everyone working from home bought new laptops, webcams, and tablets.
There was a fire on a TSMC manufacturing line that caused a shortage early on but capacity recovered, demand stayed strong throughout and there was a massive spike at the end when car manufacturers needed to ramp back up to handle all the paused orders.
As far as I know there was never a demand dip at any point in there.
And what of the natural resources sustaining all of this? This conglomerate of data centers, gpus and other chips will surely have to push manufacturers to the maximum in other industries. I don't think sustainable energy, recycling and carbon credits will be enough to cover for it.
Explains why Apple is looking to diversify their fabs with Intel. If Intel can stay on their current trajectory and become a legitimate alternative they will do very well as a fab with additional available capacity.
The key here is that Intel is embracing the idea of operating their fabs for external customers (foundry services). What they're doing with specific fabs or processes is less important than their bigger emphasis on working for a client like Apple.
In some areas they may be shifting resources. But a lot has happened since last summer. They have received some cash infusions and 18a is in full production with yields, apparently, at acceptable levels. Rumors are Apple has already signed on.
New CEO said he'll continue with Foundry provided he gets significant customers to justify the cost. In a recent comment/press release, Intel said they are continuing production on 14A. Ergo, they have external customers (or Trump is bullying him into it, but I suspect it's mostly the former).
> Apple-TSMC: The Partnership That Built Modern Semiconductors
In 2013, TSMC made a $10 billion bet on a single customer. Morris Chang committed to building 20nm capacity with uncertain economics on the promise that Apple would fill those fabs. “I bet the company, but I didn’t think I would lose,” Chang later said. He was right. Apple’s A8 chip launched in 2014, and TSMC never looked back.
That's great! Apple has the resources to incentivize and invest in alternate production capacity(Intel, Samsung, or others). Sure, it will take years, but a thousand mile journey begins with one step...
Fabs are in kind of a catch-22: they need big business to improve, and to get lots of business they need to be competitive. I'm mostly familiar with that narrative in terms of Intel's current uphill battle; was it really the same for TSMC? I guess there was a similar dynamic, except the playing field was more even at the time, so it was a bit less of a catch-22.
The 2027 date was a guideline for their military to be "ready", which they may not be either. That is a far cry from the decision to actually make a move. They will only do that if they're certain it will work out for them, and as things stand, it is very risky for Xi.
I'm not sure how true it is or not but I heard that TSMC has the ability to remotely destroy all of their main fab equipment in the event the Chinese are invading Taiwan.
The most advanced ASML machines also cost something like $300-400M each and I am willing to bet if configured wrong can heavily damage themselves and the building they are in.
Alternatively, China could make progress fabricating and exporting its own chips and designing its own GPUs. The entire chip sector could go the way of solar panels and EVs with prices dropping and margins collapsing to near zero.
Yup, they're also like 5-10 years out from their own lithography machines as well. China wanted Taiwan before TSMC was a thing, by the time they take Taiwan back they won't need TSMC.
Buy in-demand fab output today, even at a premium price and even if you can't install or power it all, expecting shortages tomorrow. Which is pretty much the way the tech economy is already working.
So no, no hedge. NVIDIA's customers already beat you to it.
TSMC is already producing at their first one in Arizona (N4 process), second one comes online for N3 in 2028, and third one (N2) broke ground in April 2025 (online date 2029-30)
The projects seem to go well and then union bosses threaten to shut the whole thing down.
Then the essential skilled personnel can’t come train people because the visa process was created by and is operated by the equivalent of four year olds with learning disabilities. Sometimes companies say fuck it we’re doing it anyway and then ice raids their facility and shuts it down.
I’d post the news articles about the above, but your googling thumbs work as well as mine.
As a heavy user of OpenAI, Anthropic, and Google AI APIs, I’m increasingly tempted to buy a Mac Studio (M3 Ultra or M4 Pro) as a contingency in case the economics of hosted inference change significantly.
Don't buy anything physical, benchmark the models you could run on your potential hardware on (neo) cloud provider like HuggingFace. Only if you believe the quality is up to your expectation then do it. The test itself should take you $100 and few hours top.
The 8-10kW isn’t a big deal anymore given the prevalence of electric vehicles and charging them at home. A decade ago very few homes had this kind of hookup. Now it’s reasonably common, and if not, electricians wouldn’t bat an eye at installing it.
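Even if the hookup itself is easy, the running cost of a sustained 8-10 kW draw is worth a back-of-envelope check. A minimal sketch, assuming a hypothetical 8 kW sustained load and an illustrative $0.15/kWh rate (both are assumptions, not quotes from anyone's bill):

```python
# Rough monthly electricity cost for a home inference rig.
# kw and usd_per_kwh are illustrative assumptions, not measured figures.
def monthly_power_cost(kw: float, usd_per_kwh: float = 0.15,
                       hours: float = 24 * 30) -> float:
    """Cost in USD of drawing `kw` continuously for `hours` at `usd_per_kwh`."""
    return kw * hours * usd_per_kwh

# 8 kW running 24/7 for a 30-day month at $0.15/kWh
print(f"${monthly_power_cost(8.0):.0f}/month")  # around $864/month
```

Of course a rig only draws peak power under load; idle draw is far lower, so real bills land well under this ceiling.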
FWIW the M5 appears to be an actual large leap for LLM inference with the new GPU and Neural Accelerator. So I'd wait for the Pro/Max before jumping on an M3 Ultra.
You'd want to get something like a RTX Pro 6000 (~ $8,500 - $10,000) or at least a RTX 5090 (~$3,000). That's the easiest thing to do or cluster of some lower-end GPUs. Or a DGX Spark (there are some better options by other manufacturers than just Nvidia) (~$3000).
Yes, I also considered the RTX 6000 Pro Max-Q, but it’s quite expensive and probably only makes sense if I can use it for other workloads as well. Interestingly, its price hasn’t gone up since last summer, here in Germany.
I have a Mac Studio with 512GB RAM, 2x DGX Spark, and an RTX 6000 Pro WS (planning to buy a few of those in the Max-Q version next). I am wondering if we'll ever see local inference as "cheap" as we see it right now, given RAM/SSD price trends.
Good grief. I'm here cautiously telling my workplace to buy a couple of dgx sparks for dev/prototyping and you have better hardware in hand than my entire org.
What kind of experiments are you doing? Did you try out exo with a dgx doing prefill and the mac doing decode?
I'm also totally interested in hearing what you have learned working with all this gear. Did you buy all this stuff out of pocket to work with?
Yeah, Exo was one of the first things to do - MacStudio has a decent throughput at the level of 3080, great for token generation and Sparks have decent compute, either for prefill or for running non-LLM models that need compute (segment anything, stable diffusion etc). RTX 6000 Pro just crushes them all (it's essentially like having 4x3090 in a single GPU). I bought 2 sparks to also play with Nvidia's networking stack and learn their ecosystem though they are a bit of a mixed bag as they don't expose some Blackwell-specific features that make a difference. I bought it all to be able to run local agents (I write AI agents for living) and develop my own ideas fully. Also I was wrapping up grad studies at Stanford so they came handy for some projects there. I bought it all out of pocket but can amortize them in taxes.
That you are writing AI agents for a living is fascinating to hear. We aren't even really looking at how to use agents internally yet. I think local agents are incredibly off the radar at my org despite some really good additions as supplement resources for internal apps.
What's deployment look like for your agents? You're clearly exploring a lot of different approaches . . .
The thing is, GLM 4.7 is easily doing the work Opus was doing for me, but to run it fully you'll need much bigger hardware than a Mac Studio. $10k buys you a lot of API calls from z.ai or Anthropic. It's just not economically viable to run a good model at home.
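The "economically viable" question reduces to a simple break-even calculation. A sketch, where the $10k rig cost, the monthly API bill, and the electricity figure are all illustrative assumptions (plug in your own numbers):

```python
# Back-of-envelope break-even between buying local hardware and paying
# for hosted API calls. All dollar figures are illustrative assumptions.
def breakeven_months(hardware_usd: float, api_usd_per_month: float,
                     power_usd_per_month: float = 0.0) -> float:
    """Months until the hardware cost is recouped by avoided API spend."""
    saved_per_month = api_usd_per_month - power_usd_per_month
    if saved_per_month <= 0:
        return float("inf")  # local never pays off at these rates
    return hardware_usd / saved_per_month

# e.g. a $10k rig vs. a $400/month API bill, minus $100/month electricity
months = breakeven_months(10_000, 400, 100)
print(f"{months:.1f} months")  # ~33 months
```

This ignores depreciation and the quality gap between open-weight and frontier models, both of which push the real break-even further out.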
You can cluster Mac Studios using Thunderbolt connections and enable RDMA for distributed inference. This will be slower than a single node but is still the best bang-for-the-buck wrt. doing inference on very-large-sized models.
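Before clustering, it's worth sizing how many nodes a given model actually needs. A sketch assuming weights-only memory (it ignores KV cache and activation overhead, which add a meaningful margin on top); the 1T-parameter example is hypothetical:

```python
import math

# How many nodes of `node_gb` unified memory does a model's weights need?
# Weights-only estimate: ignores KV cache and activation overhead.
def nodes_needed(params_b: float, bytes_per_param: float,
                 node_gb: float = 512) -> int:
    """params_b is the parameter count in billions; 1B params at
    1 byte/param is roughly 1 GB of weights."""
    model_gb = params_b * bytes_per_param
    return math.ceil(model_gb / node_gb)

# A hypothetical 1T-parameter model at 4-bit quantization (0.5 bytes/param)
print(nodes_needed(1000, 0.5))  # 1 (500 GB of weights fits one 512 GB node)
print(nodes_needed(1000, 2.0))  # 4 (FP16 weights need ~2 TB)
```

The context window is the catch: a single node can hold the weights and still run out of room for the KV cache at long contexts, which is part of why multi-node setups remain attractive even when the weights technically fit.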
True — I think local inference is still far more expensive for my use case due to batching effects and my relatively sporadic, hourly usage. That said, I also didn’t expect hardware prices (RTX 5090, RAM) to rise this quickly.
M3 Ultra with DGX Spark is right now what M5 Ultra will be in who knows when. You can just buy those two, connect them together using Exo and have M5 Ultra performance/memory right away. Who knows what M5 Ultra will cost given RAM/SSD price explosion?
yes, I'm using smaller models on a Mac M2 Ultra 32GB and they work well, but larger models and coding use might be not a good fit for the architecture, after all.
the sad part of this is that volume/priority at TSMC shifting from consumer chips that get sold to you and me, to corporate chips which likely will get sold to OpenAI/Amazon/MS or some other corporate datacenter, means that the un-democratization of computing power is well underway....
mirroring, come to think of it, the movement to un-democratize of modern governments...
(I would be happier if the news behind Nvidia's strength was sales of good, reasonably priced consumer GPU cards...but it's clearly not. I can walk down the street and buy anything from Tim Cook, but 9 out of 10 times, I cannot buy a 5080/5090 FE card from Jensen Huang).
I'm surprised that Apple is not considering opening up its own fabs. Tim Cook is all about vertical integration, and they have a mountain of cash that they could use to fund the initial startup capex.
Semiconductor manufacturing is not an incremental step for Apple. It's an entirely new kind of vertical. They do not have the resources to do this. If they could they would have by now.
> Taiwan Semiconductor Manufacturing Co. plans to spend a record of up to $56 billion this year to feed the world’s insatiable appetite for chips, as it grapples with pressure to build more factories outside Taiwan, especially in the U.S. [0]
Apple has less cash available than TSMC plans to burn this year. TSMC is not spending 50 billion dollars just because it's fun to do so. This is how much it takes just to keep the wheels on the already existing bus. Starting from zero is a non-starter. It just cannot happen anymore. So, no one in their right mind would sell Apple their leading edge foundry at a discount either.
There was a time when companies like Apple could have done this. That time was 15+ years ago. It's way too late now.
Designing CPUs also wasn't their core business and they did it anyway. Apple probably won't care that much about price hikes but if they ever feel TSMC can't guarantee steady supply then all bets are off.
I wonder what will happen in the future as we get closer to the physical "wall". Will it allow other fabs to catch up, or will the opposite happen, with even small improvements being valued by customers?
Apple has very much wanted absolute flexibility to adopt major technology changes, so much so that they've tried hard not to be the sole customer of a supplier and to deal with the political ramifications (source: Apple in China / Patrick McGee)
Closer to $40B for a new fab, for an established company doing it all correctly. It's a much bigger investment to open a fab without ever having done it before, then continually use the brain power and institutional knowledge you've built up to stay near the forefront of fab tech, and then have the weird incentive of building a foundry for only your own products rather than the world at large.
You're setting yourself up for making a huge part of your future revenue stream being set aside for ongoing chipfab capex and research engineering. And that's a huge gamble, since getting this all setup is not guaranteed to succeed.
Is that true? I guess what I mean is, is it $40B if you are trying to replicate the scale of a TSMC fab? Or could you do it for considerably less if the fab is initially designed to the needs of single customer (Apple)?
Closer to $40B for some of the latest fabs from TSMC you're seeing, yes. While there could be huge simplification in SoC and packaging processes if it was focused on a single product, Apple's needs will likely still be about having cutting edge processors, so it would still be pretty high even if they were to just buy TSMC.
oh, darn. my least favorite walled garden / vertical monopoly / rentseeker will have to raise prices. I'm sure they can spin this as a quality improvement.
I am very unhappy with the increased RAM prices - and now general increase in prices for hardware. To me this is collusion, a de-facto monopoly. Governments that don't stop this practice are also part of the mafia.
We really need many more smaller, more independent manufacturers. All the big guns, from NVIDIA, Apple, Intel, AMD, etc... have massively disappointed about 99.9% of us here now.
Quite the opposite actually, way too many people treat LLMs as oracles, all the while they are fundamentally unreliable at knowledge storage and retrieval. If there was legitimate doubt in the early days as to whether a collaborative encyclopedia could self organise and self censor into a reliable source, the engineering of LLMs makes the opposite a certainty.
How much new capacity is under construction? Seems like it should be a lot, but other than Arizona and Ohio and a few other places I'm not reading about a ton of cutting-edge node fab construction happening.
I find that my cell phone which is 4 generations old and my desktop computer which is 2 generations old are totally adequate for everything I need to do, and I do not need faster processing
I really don't care about most new phone features and for my laptop the M1 Max is still a really decent chip.
I do want to run local LLM agents though and I think a Mac Studio with an M5 Ultra (when it comes out) is probably how I'm going to do that. I need more RAM.
I bet I'm not the only one looking at that kind of setup now that was previously happy with what they had..
Apple has made some good progress on memory sharing over thunderbolt. If they could get that ironed out you maybe could run a good LLM on a cluster of Mac minis.
Again you cannot today but people are working on it. One guy might have gotten it to work but it’s not ready for prime time yet.
> Apple has made some good progress on memory sharing over thunderbolt
The only reason that Thunderbolt exists is to expose DMA over an artificial PCI channel. I'd hope they've made progress on it, Thunderbolt has only been around for fourteen years after all.
But do you use any ai services like chat gpt, Claude, Gemini? If so you’re offloading your compute from a local stack to a high performance nvidia gpu stack operated by one of the big five.
It’s not that you aren’t using new hardware, it’s that you shifted the load from local to centralized.
I’m not saying this is bad or anything, it’s just another iteration of the centralized vs decentralized pendulum swing that has been happening in tech since the beginning (mainframes with dumb terminals, desktops, the cloud, mobile) etc.
Apple might experience a slowdown in hardware sales because of it. Nvidia might experience a sales boom because of it. The future could very well bring a swing back. Imagine you could run a stack of Mac minis that replaced your monthly Claude code bill. Might pay for itself in 6mo (this doesn’t exist yet but it theoretically could happen)
> Imagine you could run a stack of Mac minis that replaced your monthly Claude code bill. Might pay for itself in 6mo (this doesn’t exist yet but it theoretically could happen)
You don't have to imagine. You can, today, with a few (major) caveats: you'll only match Claude from roughly ~6 months ago (open-weight models roughly lag behind the frontier by ~half a year), and you'd need to buy a couple of RTX 6000 Pros (each one is ~$10k).
Technically you could also do this with Macs (due to their unified RAM), but the speed won't be great so it'd be unusable.
I feel like China invading Taiwan isn't happening in our lifetimes. Yes, they stand to benefit from it, but I doubt any of the people in charge of decision making are that interested in rocking the boat. There's nobody forcing their hand and the country is doing great without needing to invade anyone.
Let's hope China doesn't get a leader like Donald Trump in our lifetimes, then I think your prediction will apply. Despite the political tensions, China and Taiwan are so deeply integrated economically that an invasion would hurt not only Taiwan and the global economy, but also China (directly and indirectly). The EU and the US are making efforts to re-shore some semiconductor manufacturing, but TSMC and others will probably still keep a sizable amount of manufacturing in Taiwan, so I don't think this interconnectedness will change anytime soon...
It seems that their leaders are and have been planning to take over Taiwan for decades. At least according to most of what I’ve read on the topic from all the various sources.
If or when China’s economic and/or demographics issues become problematic is exactly when the CCP likely would want to strike. At least seems to me like it’d be a good time to foment national pride.
Of course hopefully I’m wrong and you’re right.
Many of these larger geopolitical things are decades in the making. Even Trump’s Venezuela action has been a long time brewing. So much so that “US troops in Venezuela” has become a trope in military sci-fi. The primary change with Trump is how he presents and/or justifies it, or rather doesn’t.
"They stand to benefit from it" how!? The only thing they'll get is immediate geopolitical scorn which could very well escalate to mass military action considering how much TSMC now means for the world's economy. A single temporary shutdown of the fabs would mean a global economic apocalypse. They'd be inviting all powers of the world to attack them for no upside whatsoever, because it will all be over by the time they figure out how to leverage the fabs themselves.
There's some intersection point between long term decreasing in China's ability (demographic collapse) and long term increase in China's ability (their current build up of military hardware in air, land, and sea that is currently outpacing America's). Maybe somewhere in 10-20 years where their regional military power is much higher than America can project across the Atlantic but they still have a lot of military aged men.
Atlantic? IDK if China even has aspirations to play World Police like the US. Military protection of things like their interests and the stability of Belt and Road, sure, but I don’t see China trying something like the Gulf War or OEF.
It’s very possible that they will be able to dominate South China Sea and their zone of the Pacific, even now, given the proximity advantages and ship/missile production; and I think that would be satisfactory to them.
20 years from now, China’s sphere and America’s sphere are separate, with China having a lead in competing for Africa, and Europe in a very weird place socially, economically, demographically, and WRT Russia/US competition.
My point is that China can sustain a naval blockade of Taiwan nearly indefinitely, and at some point Taiwan will have to decide whether they want to live under siege forever (poor, cold, getting everything via scarce and expensive air freight), or give up and come to a political solution.
I'm not like, rooting for this, I'm just trying to be realistic.
That's exactly what the USA has been doing to Cuba since 1959, and they're still (barely) hanging around. If we go by that example, it'll only end with an actual invasion (which is what will happen to Cuba within one to two years).
The US has an embargo that doesn't impact other countries that want to trade with Cuba. China is going to put an actual cordon around Taiwan.
Also, the US has no historical reason for claiming Cuba and has no real domestic pressure to do so (nobody in either party is asking for it). China has been very clear they see Taiwan as a part of China and will reunite with it not for economic or strategic reasons, but for nationalistic ones.
The leader of China literally publicly told his military to have “all options for reunification of Taiwan ready by 2027”
What options do you suppose the military might be working on?
Training to surround, and blockade? (Check)
Information warfare? (Check)
Building high numbers of landing craft? (Check)
Building high numbers of modular weapon systems that can rapidly increase the number of offensive ships? (Check)
Building numerous high volume drone warfare ships and airborne launchers? (Check)
Keep in mind that there are public language cues that preceded invasion such as declarations of the invalidity of the other country’s sovereignty, declarations that the other country is already part of the invading country.
Have you seen any signs of that?
Your persistent doubts require ignorance of strong evidence.
This tired meme gets posted about the PRC bluffing, but the context behind it illustrates the opposite. The warnings were against US/TW-based U-2 overflights, which the PRC was both warning against and acting on, actively attempting shoot-downs despite inferior capabilities. The chef's kiss is that this is a USSR meme: the PRC shot down more U-2s using modified Soviet hardware than the USSR itself. Even more so when you consider the PRC issued an actual final warning to the USSR that ended in border skirmishes. The PRC's actual final warning is "don't say we didn't warn you", which historically predicts PRC kinetic action with high certainty: the USSR and India border skirmishes, the Korean war against the UN. The PRC also directly supported North Vietnam against the French, and threatened the UK when they hinted at granting HK independence under Thatcher. That's every NPT nuclear state, over territorial/security issues less important than TW. It doesn't always lead to immediate action, but it has consistently been a prelude to it.
The US has its own TSMC supply (insert comments about it not being cutting edge). And the US will stand-down and let China take Taiwan with no serious conflict in exchange for supply agreements. Not more than 5-10 years out at this point.
The US can't even remotely come close to stopping China in its own backyard today, in another 5-10 years they'll just have that much larger of a Navy. The US knows that's the situation. The US can supply a large one week bombing campaign against China and that's it, based on inventory levels. The US will exhaust its cruise missile supply instantly and the US has almost no meaningful drone-bomb supply. China can build cheap missiles by the tens of thousands perpetually, train them to the coast, and flatten Taiwan and any opponents as necessary. China is the only country that can sustain a multi-year WW2 style bombing campaign today, thanks to its manufacturing capabilities. Imagine them on a full war footing.
Yeah, I just don’t know that there’s the will to blow up the world economy for which flag flies over Taiwan.
China absorbing Taiwan (especially to Americans) just doesn’t seem like a radical, terrifying concept.
A Hong Kong style negotiated transfer might be best for the world - Taiwanese that want to leave can, the US can build up a parallel source of semiconductors, China gets Taiwan without firing a shot.
Is it better than the alternative? Do you think TSMC wants to see a Dongfeng or ATACMS headed for their fab, if the alternative is a negotiated handover?
> The US has its own TSMC supply (insert comments about it not being cutting edge)
USA has been strategically re-homing TSMC to the US mainland for a long time now. 30% of all 2nm and better technologies are slated to be produced in Arizona by 2030.
The real loser in all of this will be the EU which will be completely without the ability to produce or acquire chips. They'll just end up buying from China and USA, which will only further deepen their dependence on those countries.
That's announcing 40k WSPMs of eventual capacity spread across 28nm and 16nm nodes. I mean, it's a start, and I'm sure automakers are totally stoked given the Nexperia debacle, but the EU will remain completely dependent on foreign advanced node semiconductors.
Compare to TSMC's Arizona project, which will supply 30% of TSMC's 2nm and smaller process output. Already just one of the six planned TSMC fabs in Arizona is pumping out ~30k WSPMs at 5nm or smaller.
And that doesn't even get into CoWoS packaging, which is essential for all the highest-performance and highest-margin parts.
The fact is: In semiconductors, Europe is getting left in the dust. Sure they can fab some mature node chips for industrial uses--and that's not nothing--but Smartphone SoCs, "AI" accelerators, DRAM, even boring CPUs simply cannot be made any more in Europe, and to the limited extent that they can, they will be horrendously uncompetitive on the market and outclassed in every performance metric by Chinese and American chips.
EU is on a big sovereignty kick right now, which makes sense given that their foreign dependencies keep blowing up in their faces. So it's strange that EU is so complacent about their foreign dependency on advanced node semiconductors.
Has the Ukraine situation not shown that the EU has relegated itself to second fiddle?
It’s too old, too complacent, and too broke. Even compared to the US and our level of discord, there’s no unity across divisions.
The US absurdly threatens Greenland, but Denmark/EU’s response is “Sanction US tech or kick out US military bases on Europe”, rather than be able to rattle a saber back and show some credible backbone.
Without San Diego based Cymer they can't move forward on their latest and greatest. As far as I know they still do R&D in San Diego even after purchase.
"Our system produces 4X more power that enables better lithographic patterning, which is necessary to manufacture chips with smaller and more efficient feature sizes. In addition to being more powerful, our FEL system has programmable light characteristics that improve current capabilities and enable next-generation lithography (e.g., shorter wavelengths) - uniquely enabling the extension of Moore’s Law for decades. Connecting existing ASML scanners to an xLight FEL significantly improves the tool’s capabilities, delivering next-version scanner performance without the cost and complexities."
Is it supposed to work independently of other technology at some point?
Then again: multilateral cooperation is at the heart of scientific progress anyway. It's fitting that ASML is in a country that is culturally strongly influenced by its history of seafaring and trade.
We'll see how the brain drain, caused by people not wanting to live their lives in a society that doesn't share values like these, will influence that whole technological arms race.
Some people in Japan are coming up with a successor to EUV as far as I remember, what was their name again?
Taiwan's TSMC foundries are their nuclear currency: they must keep them to remain protected by others, and yet the others never built interchangeable, resilient capacity elsewhere for what is essential to them, even though they had the money to do so.
So now Apple, Nvidia, AMD (possibly), and most car manufacturers will be up a creek without a paddle when China invades in 1-2 years. That is unless China's Xi is bluffing to mollify domestic war hawks and reunification zealots by going through the motions of building an army of war machines without intent to use them, but I don't think that's probable. It's possible that Trump already made agreements with Xi to cede "Oceania" if they allow the US to take Greenland and South America for empire-building neocolonialism.
Tim Cook failing on the Cook doctrine ("We believe that we need to own and control the primary technologies behind the products that we make") is ironic.
I'm sure if Apple could manage to run a fab with the quality and costs they get with TSMC, they would. I have little doubt they've been pushing forward on that mission.
Owning a leading-edge fab is not practical for most companies, even some huge ones like Apple.
Intel has even struggled with it since they traditionally didn’t sell capacity to other buyers. It worked for Intel because they traditionally had a near-monopoly over the laptop, desktop, and server chip market.
Apple certainly has the money to spin up their own chip fabricator, but there’s no guarantee it would be as good as TSMC, it would cost billions, and they would have less of an ability to sell capacity to other customers.
At the end of that effort they could be left with a chip fab that produces chips that still cost the same or more than what TSMC manufactures them for. It might just be cheaper to try and outbid Nvidia for priority.
Per that very article, Sherman will be for support chips for power and peripherals, on legacy 45nm+ nodes.
Apple's investing heavily in the TSMC fab in Arizona, due to open in 2027, to have 3nm capabilities for its flagship chips, but it's unlikely that would ever cover a majority of that chipmaking.
Is it karma or is it just normal business activities? When you're a large player like this you get pricing power. If another large player moves in and also has pricing power then negotiations and things like that take place. Business deals, profits, &c. all ebb and flow and this is no different.
Weird take. If you want to undertake approximately a bajillion dollars in capex to prove out and scale up a new node, it is extremely valuable to have one massive anchor customer who will promise well in advance to offtake basically the entire thing for a bit, and whose creditworthiness is exceeded by few non-sovereign entities, which makes it easy to write contracts against which lenders will lend. Also, this customer makes little chips (when your defect rate is higher) and bigger chips (when your defect rate is lower). Of course you don't try to synthesize this profile out of a bajillion tiny customers.
I think you are referring to thunderbolt cables with their signal conditioning chips, and if that’s the case then I would like to say that TSMC isn’t making those chips. Afaik Intel and maybe some others make the chips that go into thunderbolt cables.
Stock buy-backs can be part of an illegal scheme but, in general, they are one of the few mechanisms in corporate actions through which the regular joe shareholder doesn't get the short end of the stick.
How is owning a larger share of a company, with proportionally less cash and a higher price per share than what you could have sold it at before, bad?
Have you looked at precious metal charts as of late? Do 1/x and that's the value of the cash these companies are trading for a valuable business.
Here's what G AI estimates when asked about "base on public data, estimate how many mm^2 of apple/Nvdia silicon are produce in TSMC for the past 3 years."
There are rumours that Intel might win some business from them in two years. I could totally see Apple turning to Intel for the Mac chips, since they're much lower volume. I know it sounds crazy, we just got rid of Intel, but I'm talking about using Intel as a fab, not going back to x86. Those days are done.
They did have the expertise from building it, after all. What would happen if TSMC now built an M1 clone? I doubt anyone wants to go that way, but it seems like an implied threat that gets priced in.
> "I will spend my last dying breath if I need to, and I will spend every penny of Apple’s $40 billion in the bank, to right this wrong. I’m going to destroy Android, because it’s a stolen product. I’m willing to go thermonuclear war on this."
The falling out with Samsung was related, but more about the physical look of the phone.
Is there anyone who can match TSMC at this point for the top of the line M or A chips? Even if Intel was ready and Apple wanted to would they be able to supply even 10% of what Apple needs for the yearly iPhone supply?
And anyway consumers don't really need beefy devices nowadays. Running local LLM on a smartphone is a terrible idea due to battery life and no graphics card; AI is going to be running on servers for quite some time if not forever.
It's almost as if there is a constant war to suppress engineer wages... That's the only variable being affected here which could benefit from increased competition.
If tech sector is so anti-competitive, the government should just seize it and nationalize it. It's not capitalism when these megacorps put all this superficial pressure but end up making deals all the time. We need more competition, no deals! If they don't have competition, might as well have communism.
Government jobs should only be an option if there are enough social benefits.
I've met many software engineers who call themselves communists. I can kind of understand. This kind of communist-like bureaucracy doesn't work well in a capitalist environment.
It's painful to work in tech. It's like our hands are tied and we're forced to do things in a way we know is inefficient. Companies use 'security' as an excuse to restrict options (tools and platforms) and treat engineers as replaceable cogs instead of trusting them to do their job properly... And the companies reap what they sow. They get reliable cogs, well versed in compliance and groupthink, and also, coincidentally, full-blown communists; they're the only engineers remaining who actually enjoy the insane bureaucracy and the social-climbing opportunities it represents given the lack of talent.
I'm going through a computer engineering degree at the moment, but I am thinking about pursuing Law later on.
Looking at other paths: Medicine requires expensive schooling and isn't really an option after a certain age and law, on the other hand, opened its doors too widely and now has a large underclass of people with third-tier law degrees.
Perhaps you can try to accept the realities of the system while trying to live the best life that you can?
Psyching yourself all the way, trying to find some sort of escape towards a good life with freedom later on...
That said, they did that for a sapphire glass supplier for the Apple Watch and when their machines had QC problems they dropped them like a rock and went back to Corning.
But is that really any different from any other supplier? And who tf do you think they’re going to drop TSMC for right now? They are the cock of the walk.
Don't look now: https://www.macrumors.com/2025/11/28/intel-rumored-to-supply...
Your underlying statement implies that whoever is replacing Apple is a better buyer, which I don't think is necessarily true.
The only complete package integrator that manages to make a relationship work with Nvidia is Nintendo.
And that's probably because Nintendo isn't adding any pressure on either TSMC or Nvidia capacity-wise; IIRC Nintendo uses something like Maxwell or Pascal on really mature processes for the Switch chips/SoCs.
I shot a video at CNET in probably 2011 of a single-touchscreen device (I think it was the APX 2500 prototype, IIRC?) and it had precisely the dimensions of the Switch 1.
Nintendo was reluctantly a hardware company... they're a game company who can make hardware, but they know they're best when they own the stack.
> According to Han, Nvidia has been difficult to work with for some time now. Like all other GPU board partners, EVGA is only told the price of new products when they're revealed to everyone on stage, making planning difficult when launches occur soon after. Nvidia also has tight control over the pricing of GPUs, limiting what partners can do to differentiate themselves in a competitive market.
https://www.gamespot.com/articles/evga-terminates-relationsh...
https://www.semiaccurate.com/2010/07/11/investigation-confir...
That means deprioritizing your largest customer.
Also there's the devil you know and the devil you don't know.
Private companies can be nice to their suppliers. Owners can choose to stay loyal to suppliers they went to high school with, even if it isn't the most cost-efficient.
I'm not saying you're wrong, but your previous paragraph sounded like you were wondering whether it was the case, whereas here you're saying it's known. Is this all true? Do they have a reputation for hammering their suppliers?
Apple has responded and has started moving a lot of manufacturing out of China. It just makes sense for risk management.
> China will remain the country of origin for the vast majority of total products sold outside the US, he added.
And international sales are a solid majority of Apple's revenue.
> Meanwhile, Vietnam will be the chief manufacturing hub "for almost all iPad, Mac, Apple Watch and AirPods product sold in the US".
> We do expect the majority of iPhones sold in US will have India as their country of origin," Mr Cook said.
Still not made in the US and no plan to change that. They will be selling products made in India/Vietnam domestically and products made in China internationally.
The tariffs are not bringing these jobs home.
Nvidia's willingness to pay exorbitant prices for early 2nm wafers subsidizes the R&D and the brutal yield-learning curve for the entire node. But you can't run a sustainable gigafab solely on GPUs...the defect density math is too punishing. You need a high-volume, smaller-die customer (Apple) to come in 18 months later, soak up the remaining 90% of capacity and amortize that depreciation schedule over a decade.
Apple OTOH operates at consumer electronics price points. They need mature yields (>90%) to make the unit economics of an iPhone work. There's also the binning factor I am curious about. Nvidia can disable 10% of the cores on a defective GPU and sell it as a lower SKU. Does Apple have that same flexibility with a mobile SoC where the thermal or power envelope is so tightly coupled to the battery size?
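The defect-density math alluded to above is easy to sketch. Here's a minimal Poisson yield model; the defect density (0.2 defects/cm²) and die areas (~750 mm² for a reticle-sized GPU, ~100 mm² for a phone SoC) are illustrative assumptions, not TSMC figures:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of dice with zero defects under a simple Poisson defect model."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative early-node defect density of 0.2 defects/cm^2.
d0 = 0.2
gpu_yield = poisson_yield(d0, 7.5)   # ~750 mm^2 reticle-limited GPU die
soc_yield = poisson_yield(d0, 1.0)   # ~100 mm^2 mobile SoC die

print(f"GPU die yield: {gpu_yield:.1%}")   # ~22%
print(f"SoC die yield: {soc_yield:.1%}")   # ~82%
```

Even at identical defect density, the big die loses most of its candidates while the small one keeps the vast majority, which is exactly why a high-volume small-die customer is what matures a node.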
For example the regular M4 can have 4 P-cores / 6 E-cores / 10 GPU cores, or 3/6/10 cores, or 4/4/8 cores, depending on the device.
They even do it on the smaller A-series chips - the A15 could be 2/4/5, 2/4/4, or 2/3/5.
For Apple, they have binning flexibility, with Pro/Max/Ultra, all the way down to iPads - and that’s after the node yields have been improved via the gazillion iPhone SoC dies.
NVIDIA's flexibility came from using some of those binned dies for GeForce cards, but the VRAM situation is clearly making that less important, as they're cutting some of those SKUs for being too VRAM-heavy relative to MSRP.
If you want binning in action, the RTX cards other than the top-end ones are it. Look at the A30 too, which to my surprise never got a successor. Either they had better yields on Hopper or they didn't get enough out of the A30...
As you cut SMs from a die you move from the 3090 down the stack, for instance. That’s yield management right there.
The Pro and Max chips are different dies, and the Ultra currently isn't even the same generation as the Max. And the iPads have never used any of those larger dies.
> NVIDIAs flexibility came from using some of those binned dies for GeForce cards
NVIDIA's datacenter chips don't even have display outputs, and have little to no fixed-function graphics hardware (raster and raytracing units), and entirely different memory PHYs (none of NVIDIA's consumer cards have ever used HBM).
Not binning an M4 Max for an iPhone, but an M4 Pro with a few GPU or CPU cores disabled is clearly a thing.
Same for NVIDIA. The 4080 is a 4090 die with some SMs disabled.
The desktop 4090 uses the AD102 die, the laptop 4090 and desktop 4080 use the AD103 die, and the laptop 4080 uses the AD104 die. I'm not at all denying that binning is a thing, but you and other commenters are exaggerating the extent of it and underestimating how many separate dies are designed to span a wide product line like GPUs or Apple's computers/tablets/phones.
Otherwise, yes, if a chip doesn't make M4 Max, it can make M4 Pro. If not, M4. If not, A18 Pro. If not that, A18.
And even all of the above mentioned marketing names come in different core configurations. M4 Max can be 14 CPU Cores / 32 GPU cores, and it can also be 16 CPU cores and 40 GPU cores.
So yeah, I'd agree that Apple has _extreme_ binning flexibility. It's likely also the reason why we got A19 / A19 Pro / M5 first, and we still don't have M5 Pro or M5 Max yet. Yields not high enough for M5 Max yet.
Unfortunately I don't think they bin down even lower (say, to S chips used in Apple Watches), but maybe in the future they will.
In retrospect, Apple ditching Intel was truly a gamechanging move. They didn't even have to troll everyone by putting an Intel i9 into a chassis that couldn't even cool an i7 to boost the comparison figures, but I guess they had to hedge their bet.
No, that's entirely wrong. All of those are different dies. The larger chips wouldn't even fit in phones, or most iPad motherboards, and I'm not sure a M4 Max or M4 Pro SoC package could even fit in a MacBook Air.
As a general rule, if you think a company might ever be selling a piece of silicon with more than half of it disabled, you're probably wrong and need to re-check your facts and assumptions.
There are two levels of Max chip, but think of a Max as two Pros on one die (this is a simplification; you can also think of a Pro as two base dies tied together), so a bad Max can't be binned into a Pro. But a high-spec Max can be binned into a low-spec Max.
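That within-family cascade is easy to express as a sketch. The core counts (16 CPU / 40 GPU and 14 CPU / 32 GPU) come from the M4 Max configurations mentioned elsewhere in the thread; the function itself and its thresholds are hypothetical:

```python
# Hypothetical sketch of within-family binning: a defective high-spec Max can
# only be sold as a low-spec Max, never as a Pro (which is a different die).
def bin_max_die(good_cpu_cores: int, good_gpu_cores: int) -> str:
    """Assign a tested Max die to a SKU based on how many cores passed."""
    if good_cpu_cores >= 16 and good_gpu_cores >= 40:
        return "M4 Max (16 CPU / 40 GPU)"   # fully functional die
    if good_cpu_cores >= 14 and good_gpu_cores >= 32:
        return "M4 Max (14 CPU / 32 GPU)"   # a few cores fused off
    return "scrap"                          # no fallback to Pro: different die

print(bin_max_die(16, 40))  # fully good die -> top SKU
print(bin_max_die(15, 35))  # a few dead cores -> binned-down SKU
print(bin_max_die(10, 20))  # too many defects -> scrapped
```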
people are holding onto their phones for longer: https://www.cnbc.com/2025/11/23/how-device-hoarding-by-ameri...
Heck, if Apple wanted to be super cheeky, they could probably still pivot on the reserved capacity to do something useful (e.g. a revised older design for whatever node they reserved, where they can get more chips per wafer for cheaper models).
NVDA on the other hand is burning a lot of good-will in their consumer space, and if a competitor somehow is able to outdo them it could be catastrophic.
Like smartphones, AI chips also have a replacement cycle. AI chips depreciate quickly -- not because the old ones go bad, but because the new ones are so much better in performance and efficiency than the previous generation. While smartphones aren't making huge leaps every year like they used to, AI chips still are -- meaning there's a stronger incentive to upgrade every cycle for these chips than smartphone processors.
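As a toy model of that incentive: assume (purely illustratively) that AI accelerators improve perf-per-watt ~40% a year while smartphone SoCs manage ~15%. The relative value of last generation's chip then erodes much faster on the AI side:

```python
# Illustrative only: how fast a chip's relative performance-per-watt value
# erodes if each new generation improves by a fixed annual rate.
def relative_value(annual_improvement: float, years: int) -> float:
    """Value of an old chip relative to the current generation."""
    return 1.0 / ((1.0 + annual_improvement) ** years)

for label, rate in [("AI accelerator (~40%/yr, assumed)", 0.40),
                    ("smartphone SoC (~15%/yr, assumed)", 0.15)]:
    print(label, [round(relative_value(rate, y), 2) for y in range(1, 4)])
```

With these made-up rates, a three-year-old accelerator is worth roughly a third of a current one, while a three-year-old phone SoC still retains about two-thirds of its relative performance.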
I've heard that it's exactly that, reports of them burning out every 2-3 years. Haven't seen any hard numbers though.
They really, absolutely, are not.
It's not about "will there be new hardware", it's about "is their order quantity predictable".
If Nvidia pays more, Apple has to match.
Not a system that necessarily works all that well if one player has a short-term ability to vastly outspend all the rest.
You can't let all your other customers die just because Nvidia is flush with cash this quarter...
Is the argument that Apple will go out of business? AAPL?
Wait,
> one player has a short-term ability to vastly outspending all the rest.
I assure you, Apple has the long-term and short-term ability to spend like a drunken sailor all day and all night, indefinitely, and still not go out of business. Of course they’d prefer not to. But there is no ‘ability to pay’ gap here between these multi-trillion-dollar companies.
Apple will be forced to match or beat the offer coming from whoever is paying more. It will cost them a little bit of their hilariously-high margins. If they don’t, they’ll have to build less advanced chips or something. But their survival is not in doubt and TSMC knows that.
TSMC isn't running a charity, it sells capacity to the highest bidder.
Of course customers as big as Apple will have a relationship and insane volumes that guarantee them important quotas regardless.
If it takes 4 years to build a new fab and Apple is willing to commit to paying the price of an entire fab for chips to be delivered in 4 years' time, why not take the order and build the capacity?
But Nvidia has also spent billions/year in TSMC for more than a decade and this just keeps increasing.
Well yeah, people were identifying that back when Apple bought out the entirety of the 5nm node for iPhones and e-tchotchkes. It was sorta implicitly assumed that any business that builds better hardware than Apple would boot them out overnight.
It's not "build better hardware" though, it's "continue to ship said hardware for X number of years". If someone buys out the entire fab capacity and then goes under next year, TSMC is left holding the bag
It really is about making better hardware. Apple would be out-bidding Nvidia right now, but only if the iPhone had equivalent value-add to Nvidia hardware. Alas, iPhones are overpriced and underpowered, most people will agree.
But also; Apple is one of the very few companies at their size that seems to have the political environment to make, and more importantly succeed, at decade investments. The iPhone wasn't an obvious success for 5 or 6 years. They started designing their own iPhone chips ~the iPhone 4 iirc, and pundits remarked: this isn't a good idea; today, the M5 in the iPad Pro outperforms every chip made by EVERYONE else in the world, by 25%, at a tenth the power draw and no active cooling (e.g. 9950X3D). Apple Maps (enough said). We're seeing similar investments today, things we could call "failures" that in 10 years we'll think were obviously going to be successful (cough vision pro).
The flat line prediction is now 2 years old...
Sure maybe they do better in some benchmarks, but to me the experience of using LLMs is and has been limited by their tendency to be confidently incorrect which betrays their illusion of intelligence as well as their usefulness. And I don't really see any clear path to getting past this hurdle, I think this may just be about as good as they're gonna get in that regard. Would be great if they prove me wrong.
That's the take I would pursue if I were Apple.
A quiet threat of "We buy wafers on consumer demand curves. You’re selling them on venture capital and hype"
The reality is that TSMC has no competition capable of shipping an equivalent product. If AI fizzles out completely, the only way Apple can choose to not use TSMC is if they decide to ship an inferior product.
A world where TSMC drains all the venture capital out of all the AI startups, using NVidia as an intermediary, and then the bubble pops and they all go under, is a perfectly happy place for TSMC. In these market conditions they are asking for cash upfront. The worst that can happen is that they overbuild capacity using other people's money that they don't have to pay back, leaving them in an even more dominant position in the crash that follows.
Business is a little more nuanced than this audience thinks, and it’s silly to think Apple has no leverage.
From TSMC's perspective, Apple is the one that needs financial assistance. If they wanted the wafers more than Nvidia, they'd be paying more. But they don't.
This is the "venture capital and hype" being referred to, not Nvidia themselves.
That line is purified cope.
I get why the numbers are presented the way they are, but it always gets weird when talking about companies of Apple’s size - percent increases that underwhelm Wall Street correspond to raw numbers that most companies would sacrifice their CEO to a volcano to attain, and sales flops in Apple’s portfolio mean they only sold enough product to supply double-digit percentages of the US population.
The giant conglomerates in Asia seem more able to do it.
Google has somewhat tried but then famously kills most everything even things that could be successful if smaller businesses.
Every time a CEO or company board says "focus," an interesting product line loses its wings.
I think Asian companies are much less dependent on public markets and have as strong private control (chaebols in South Korea for example - Samsung, LG, Hyundai etc).
If you look at US companies that are under "family control" you might see a similar sprawl, like Cargill, Koch, I'd even put Berkshire in this class even though it's not "family controlled" in the literal sense, it's still associated with two men and not a professional CEO.
And ironically, Apple acts like a small contender the moment they feel some heat, after a decade of relatively easy wins seemingly everywhere.
So finally there is a company that gives Apple some much needed heat.
That’s why I in absolute terms side with NVIDIA, the small contender in this case.
PS: I had one key moment in my career when I was at Google and a speaker mentioned the unit “NBU”. It stands for next billion units.
This was ten years ago, and it started my mental journey into large-scale manufacturing and production, including all the processes involved.
The fascination never left. It was a mind bender for me and totally get why people miss everything that large.
At Google it was just a milestone expected to be hit - not one time but as the word next indicates multiple times.
Mind blowing and eye opening to me ever since. Fantastic inspiration thinking about software, development and marketing.
So report the facts but sentences like "What Wei probably didn’t tell Cook is that Apple may no longer be his largest client" make it personal, they make you take sides, feel sorry for somebody, feel schadenfreude... (as you can observe in the comments)
Okay, but this isn't a news article, it's an opinion piece on some guy's substack.
How do you think it got in the LLM training set in the first place?
AFAIK only Apple has been committing to wafers up to 3 years in the future. It would be a crazy bet for NVidia to do the same, as they don't know how big the business will be.
https://youtu.be/K86KWa71aOc?t=483
That would ruin TSMC and others' independence.
Nvidia already did buy Intel shares so it is a thing.
Nvidia did discuss with TSMC for more capacity many times. It's not about financing or minimum purchasing agreements. TSMC played along during COVID and got hit.
As far as I know there was never a demand dip at any point in there.
Which barely impacts TSMC. Most of their revenue and focus is on the advanced nodes - not the mature 1s.
> As far as I know there was never a demand dip at any point in there.
When did I imply there was a demand dip? I said they built out too much capacity.
https://www.manufacturingdive.com/news/intel-layoffs-25-perc...
> Apple-TSMC: The Partnership That Built Modern Semiconductors
In 2013, TSMC made a $10 billion bet on a single customer. Morris Chang committed to building 20nm capacity with uncertain economics on the promise that Apple would fill those fabs. “I bet the company, but I didn’t think I would lose,” Chang later said. He was right. Apple’s A8 chip launched in 2014, and TSMC never looked back.
https://newsletter.semianalysis.com/p/apple-tsmc-the-partner...
Apple can and should do it again!
I don’t know the hedge to position against this but I’m pretty sure China will make good on its promise.
The 2027 date was a guideline for their military to be "ready", which they may not be either. That is a far cry from the decision to actually make a move. They will only do that if they're certain it will work out for them, and as things stand, it is very risky for Xi.
The most advanced ASML machines also cost something like $300-400M each and I am willing to bet if configured wrong can heavily damage themselves and the building they are in.
Buy in-demand fab output today, even at a premium price and even if you can't install or power it all, expecting shortages tomorrow. Which is pretty much the way the tech economy is already working.
So no, no hedge. NVIDIA's customers already beat you to it.
I know about the existence of the initiative but I don't know how it is progressing / what is actually going on on that front.
There's ~a dozen in the works or under construction
TSMC plans to have 2-3nm fabs operational in the next 2-3 years
So we're 2-3 years behind the standard (currently 2nm), and further behind on the bleeding edge sub-2nm fabs
Are the majority of the staff still shipped in from Asia?
https://www.tsmc.com/static/abouttsmcaz/index.htm
Then the essential skilled personnel can’t come train people because the visa process was created by and is operated by the equivalent of four year olds with learning disabilities. Sometimes companies say fuck it we’re doing it anyway and then ice raids their facility and shuts it down.
I’d post the news articles about the above, but your googling thumbs work as well as mine.
Any other time and place? The power to run it, plus the power to cool it.
What kind of experiments are you doing? Did you try out exo with a dgx doing prefill and the mac doing decode?
I'm also totally interested in hearing what you have learned working with all this gear. Did you buy all this stuff out of pocket to work with?
That you are writing AI agents for a living is fascinating to hear. We aren't even really looking at how to use agents internally yet. I think local agents are incredibly off the radar at my org despite some really good additions as supplement resources for internal apps.
What's deployment look like for your agents? You're clearly exploring a lot of different approaches . . .
Just look at what people are actually using. Don't rely on a few people who tested a few short prompts with short completions.
mirroring, come to think of it, the movement to un-democratize modern governments...
(I would be happier if the news behind Nvidia's strength was sales of good, reasonably priced consumer GPU cards... but it's clearly not. I can walk down the street and buy anything from Tim Cook, but 9 times out of 10 I cannot buy a 5080/5090 FE card from Jensen Huang.)
NVidia gets the capacity because they're willing to pay more. If Apple wants to, they can pay more to get it back.
Or they could buy out Intel and sell off their cpu design division
At this point it would be corporate suicide if they were not outlining a strategy to own their own fab(s).
Apple has less cash available than TSMC plans to burn this year. TSMC is not spending 50 billion dollars just because it's fun to do so. This is how much it takes just to keep the wheels on the already existing bus. Starting from zero is a non-starter. It just cannot happen anymore. So, no one in their right mind would sell Apple their leading edge foundry at a discount either.
There was a time when companies like Apple could have done this. That time was 15+ years ago. It's way too late now.
[0]: https://www.wsj.com/business/earnings/tsmc-ends-2025-with-a-...
I wonder what will happen in the future as we get closer to the physical "wall". Will it allow other fabs to catch up, or will the opposite happen, with even small improvements being valued by customers?
You're setting yourself up for making a huge part of your future revenue stream being set aside for ongoing chipfab capex and research engineering. And that's a huge gamble, since getting this all setup is not guaranteed to succeed.
As would almost innumerable others.
We really need many more smaller, more independent manufacturers. All the big guns, from NVIDIA, Apple, Intel, AMD, etc... have massively disappointed about 99.9% of us here now.
Also Nvidia's margins are higher which means that they will be willing to pay a higher unit price.
This seems like an open-and-shut case from TSMC's side.
More likely they will not use the leading-edge fab process, which TBH is fine for the vast majority.
I really don't care about most new phone features and for my laptop the M1 Max is still a really decent chip.
I do want to run local LLM agents though and I think a Mac Studio with an M5 Ultra (when it comes out) is probably how I'm going to do that. I need more RAM.
I bet I'm not the only one looking at that kind of setup now who was previously happy with what they had.
The only reason that Thunderbolt exists is to expose DMA over an artificial PCI channel. I'd hope they've made progress on it, Thunderbolt has only been around for fourteen years after all.
I’m not saying this is bad or anything, it’s just another iteration of the centralized vs decentralized pendulum swing that has been happening in tech since the beginning (mainframes with dumb terminals, desktops, the cloud, mobile) etc.
Apple might experience a slowdown in hardware sales because of it. Nvidia might experience a sales boom because of it. The future could very well bring a swing back. Imagine you could run a stack of Mac minis that replaced your monthly Claude code bill. Might pay for itself in 6mo (this doesn’t exist yet but it theoretically could happen)
You don't have to imagine. You can, today, with a few (major) caveats: you'll only match Claude from roughly ~6 months ago (open-weight models roughly lag behind the frontier by ~half a year), and you'd need to buy a couple of RTX 6000 Pros (each one is ~$10k).
Technically you could also do this with Macs (due to their unified RAM), but the speed won't be great so it'd be unusable.
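For what it's worth, the "pays for itself" math is straightforward to sketch. The $10k-per-GPU figure is from the comment above; everything else (host cost, monthly hosted-LLM bill, power draw, electricity price) is my assumption:

```python
# Back-of-envelope payback for a local rig vs. a hosted subscription.
gpus = 2 * 10_000          # two RTX 6000 Pro cards (figure from the thread)
host = 3_000               # assumed workstation around them
hardware = gpus + host

monthly_bill = 500         # assumed hosted-LLM spend per month
kwh_price = 0.15           # assumed $/kWh
power_kw = 1.2             # assumed average draw under load
hours_per_day = 8
monthly_power = power_kw * hours_per_day * 30 * kwh_price  # $43.20/mo

payback_months = hardware / (monthly_bill - monthly_power)
print(f"electricity: ${monthly_power:.0f}/mo, payback: {payback_months:.0f} months")
```

Under these particular assumptions it's closer to four years than six months, but the break-even obviously moves fast with the size of the monthly bill.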
I wish I were in that situation, but I find myself able to use lots more compute than I have. And it seems like many others feel the same.
Data is saying demand >>>>> supply.
They would benefit in what way?
Because their government seems to benefit a lot from Taiwan existing and being an enemy.
If or when China’s economic and/or demographics issues become problematic is exactly when the CCP likely would want to strike. At least seems to me like it’d be a good time to foment national pride.
Of course hopefully I’m wrong and you’re right.
Many of these larger geopolitical things are decades in the making. Even Trump’s Venezuela action has been a long time brewing. So much so that “US troops in Venezuela” has become a trope in military sci-fi. The primary change with Trump is how he presents and/or justifies it, or rather doesn’t.
It will likely be a naval plus air blockade to force a political solution to avoid the messiness of an invasion, but time is on China's side there.
Long term: demographics are worsening for China relative to now or 5 years ago.
Short term: China doesn’t yet have viable homegrown replacements for ASML, TSMC, etc.
Really short term: China blockading Taiwan and suffering the economic fallout would be much more painful than US blockading Cuba/Venezuela/etc.
A decisive kinetic action or a very soft political action, rather than a blockade seems more viable in the current state.
It’s very possible that they will be able to dominate South China Sea and their zone of the Pacific, even now, given the proximity advantages and ship/missile production; and I think that would be satisfactory to them.
20 years from now, China’s sphere and America’s sphere are separate, with China having a lead in competing for Africa, and Europe in a very weird place socially, economically, demographically, and WRT Russia/US competition.
I'm not like, rooting for this, I'm just trying to be realistic.
The US has an embargo that doesn't impact other countries that want to trade with Cuba. China is going to put an actual cordon around Taiwan.
Also, the US has no historical reason for claiming Cuba and has no real domestic pressure to do so (nobody in either party is asking for it). China has been very clear they see Taiwan as a part of China and will reunite with it not for economic or strategic reasons, but for nationalistic ones.
What options do you suppose the military might be working on? Training to surround, and blockade? (Check) Information warfare? (Check) Building high numbers of landing craft? (Check) Building high numbers of modular weapon systems that can rapidly increase the number of offensive ships? (Check) Building numerous high volume drone warfare ships and airborne launchers? (Check)
Keep in mind that there are public language cues that preceded invasion such as declarations of the invalidity of the other country’s sovereignty, declarations that the other country is already part of the invading country. Have you seen any signs of that?
Your persistent doubts require ignorance of strong evidence.
The US can't even remotely come close to stopping China in its own backyard today, in another 5-10 years they'll just have that much larger of a Navy. The US knows that's the situation. The US can supply a large one week bombing campaign against China and that's it, based on inventory levels. The US will exhaust its cruise missile supply instantly and the US has almost no meaningful drone-bomb supply. China can build cheap missiles by the tens of thousands perpetually, train them to the coast, and flatten Taiwan and any opponents as necessary. China is the only country that can sustain a multi-year WW2 style bombing campaign today, thanks to its manufacturing capabilities. Imagine them on a full war footing.
China absorbing Taiwan (especially to Americans) just doesn’t seem like a radical, terrifying concept.
A Hong Kong style negotiated transfer might be best for the world - Taiwanese that want to leave can, the US can build up a parallel source of semiconductors, China gets Taiwan without firing a shot.
USA has been strategically re-homing TSMC to the US mainland for a long time now. 30% of all 2nm and better technologies are slated to be produced in Arizona by 2030.
The real loser in all of this will be the EU which will be completely without the ability to produce or acquire chips. They'll just end up buying from China and USA, which will only further deepen their dependence on those countries.
Compare to TSMC's Arizona project, which will supply 30% of TSMC's 2nm-and-smaller process output. Already, just one of the six planned TSMC fabs in Arizona is pumping out ~30k wafer starts per month (WSPM) at 5nm or smaller.
And that doesn't even get into CoWoS packaging, which is essential for all the highest-performance and highest-margin parts.
The fact is: In semiconductors, Europe is getting left in the dust. Sure they can fab some mature node chips for industrial uses--and that's not nothing--but Smartphone SoCs, "AI" accelerators, DRAM, even boring CPUs simply cannot be made any more in Europe, and to the limited extent that they can, they will be horrendously uncompetitive on the market and outclassed in every performance metric by Chinese and American chips.
EU is on a big sovereignty kick right now, which makes sense given that their foreign dependencies keep blowing up in their faces. So it's strange that EU is so complacent about their foreign dependency on advanced node semiconductors.
It’s too old, too complacent, and too broke. Even compared to the US and our level of discord, there’s no unity across divisions.
The US absurdly threatens Greenland, but Denmark/EU's best response is to sanction US tech or kick out US military bases in Europe, rather than being able to rattle a saber back and show some credible backbone.
They sent warships to Greenland. What level of saber rattling do you expect?
Is it supposed to work independently of other technology at some point?
Then again: multilateral cooperation is at the heart of scientific progress anyway. It's fitting that ASML is in a country culturally shaped by its history of seafaring and trade. We'll see how the brain drain, caused by people not wanting to live their lives in a society that doesn't share values like these, influences that whole technological arms race.
Some people in Japan are working on a successor to EUV, as far as I remember; what was their name again?
[1] https://spectrum.ieee.org/nanoimprint-lithography [2] https://www.rapidus.inc/en/
My conspiracy theory is that there is some kind of "gentleman agreement" on this topic between the US and China.
As soon as Taiwan is no longer needed by the US for chip fabrication, the US will at the very least lose its grip on it.
Note to commenters: that's my theory, does not mean I endorse it in any way.
So now Apple, Nvidia, AMD (possibly), and most car manufacturers will be up a creek without a paddle when China invades in 1-2 years. That is unless China's Xi is bluffing to mollify domestic war hawks and reunification zealots by going through the motions of building an army of war machines without intent to use them, but I don't think that's probable. It's possible that Trump already made agreements with Xi to cede "Oceania" if they allow the US to take Greenland and South America for empire-building neocolonialism.
I mean this is pretty fantastic.
Intel has even struggled with it since they traditionally didn’t sell capacity to other buyers. It worked for Intel because they traditionally had a near-monopoly over the laptop, desktop, and server chip market.
Apple certainly has the money to spin up their own chip fab, but there's no guarantee it would be as good as TSMC, it would cost billions, and they would have less ability to sell capacity to other customers.
At the end of that effort they could be left with a chip fab that produces chips that still cost the same or more than what TSMC manufactures them for. It might just be cheaper to try and outbid Nvidia for priority.
https://appleinsider.com/articles/25/08/22/apple-chips-to-be...
Apple is investing heavily in the TSMC fab in Arizona, due to open in 2027 with 3nm capability for its flagship chips, but it's unlikely that would ever cover a majority of its chipmaking.
https://www.aztechcouncil.org/tucson-chipmaker-tsmc-arizona-...
https://wccftech.com/tsmc-plans-to-bring-3nm-production-to-t...
(Apple is well known for shoving "lesser vendors" out of the way at TSMC)
How is it bad to own a larger share of a company with proportionally less cash and a higher price per share than what you could have sold it for before?
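The arithmetic behind that claim is easy to check. A toy sketch with entirely made-up numbers (nothing company-specific): after a buyback, a holder who doesn't sell owns a larger slice of the business, while cash per remaining share falls.

```python
# Toy buyback arithmetic with made-up numbers (illustrative only).
shares_out = 1_000_000      # shares outstanding before the buyback
price = 100.0               # share price before the buyback
cash = 20_000_000.0         # cash on the balance sheet

my_shares = 1_000           # a holder who doesn't sell

# The company spends 10% of its cash buying back stock at the market price.
spend = 0.10 * cash
bought = spend / price

shares_after = shares_out - bought
cash_after = cash - spend

stake_before = my_shares / shares_out
stake_after = my_shares / shares_after

print(f"stake before:         {stake_before:.4%}")          # 0.1000%
print(f"stake after:          {stake_after:.4%}")           # 0.1020% (larger slice)
print(f"cash/share before:    {cash / shares_out:.2f}")     # 20.00
print(f"cash/share after:     {cash_after / shares_after:.2f}")  # 18.37
```

So both halves of the comment are literally true: the stake grows and the cash backing each share shrinks; whether that trade is "bad" depends on what you think the operating business is worth versus the cash.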
Have you looked at precious metal charts as of late? Take 1/x of those and that's the value of the cash these companies are trading away for a valuable business.
Also, https://aramzs.xyz/thoughts/dont-post-ai-at-me/