Apr. 19 at 3:44 PM
$GOOGL x $MRVL — this is how the AI war is really being fought… not just models, but silicon.
Reports point to 2 custom chips in development:
Memory Processing Unit (MPU) to supercharge TPU throughput
Next-gen TPU optimized specifically for inference workloads
Everyone’s been chasing training (hello $NVDA), but the real scale economics sit in inference. That’s where margins expand and hyperscalers win.
Partnering with Marvell here is key — they’ve been quietly becoming the backbone for custom data center silicon.
If this lands, Google tightens vertical integration across:
Compute (TPUs) + Memory + Software (Gemini stack)
That’s not just competing… that’s owning the stack.
Watch the second-order effect:
Less reliance on external GPUs = margin leverage + strategic independence.
Market still pricing $GOOGL like a search company.
Tape might start pricing it like an AI infrastructure player.