I suppose, in fairness to the explanation it does give, the other thing that footprint allows is a shorter path for the pins that would otherwise sit near the ends of the daughterboard (e.g. on a DIMM): they can all run roughly straight across (on multiple layers) instead of taking a longer diagonal depending on how far off centre they are. But even if that's it, that's what I mean by the explanation seeming incomplete. :)
All the traces going into the slot need to be length-matched to obscene precision, and both the physical width of the slot and the room required by the "wiggles" added to the middle traces to length-match them restrict how close to the memory controller you can put the slot. Most modern boards already place it as close as possible.
LPCAMM2 fixes this by having a lot of the length-matching done in the connector.
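To get a feel for why the length matching is so tight, here is a rough back-of-envelope calculation. The propagation delay, transfer rate, and skew budget below are typical assumed values, not figures from the comments above:

```python
# Rough feel for why DDR traces need tight length matching.
# Assumed numbers (typical ballpark figures, not official specs):
#   - signal propagation in FR4 PCB material: ~6.7 ps per mm of trace
#   - LPDDR5X at 7500 MT/s -> one unit interval (UI) = 1e6/7500 ps

PS_PER_MM = 6.7           # assumed FR4 propagation delay
TRANSFER_RATE_MTS = 7500  # assumed LPDDR5X transfer rate

ui_ps = 1_000_000 / TRANSFER_RATE_MTS     # one bit time in picoseconds
skew_budget_ps = ui_ps * 0.05             # assume 5% of the UI may go to mismatch
mismatch_mm = skew_budget_ps / PS_PER_MM  # allowed trace-length difference

print(f"bit time: {ui_ps:.0f} ps")
print(f"5% skew budget: {skew_budget_ps:.1f} ps -> ~{mismatch_mm:.1f} mm of trace")
```

With these assumptions the entire matching budget is on the order of a single millimetre of trace, which is why the "wiggles" are unavoidable and why moving the matching into the connector helps.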
They are going to lose money when people buy new RAM rather than a whole new laptop. While processor speeds and sizes haven't plateaued yet, it's going to take a while to deliver significant new speed upgrades, and in the meantime the only other upgrade is disk size / long-term storage, which, Apple aside, they don't fully control.
So, why should they relinquish that to the user?
It makes sense that the first ones to use this new standard would be Dell and Lenovo. They both have "business" lines of computers, which usually offer on-site repairs (they send the parts and a technician to your office) for a fairly long time (often 3 or 5 years). To them, it's a cost advantage to make these computers easier to repair. Having the memory (a part that fails not infrequently) in a separate module means they don't have to replace and refurbish the whole logic board, and making it easy to remove and replace means less time spent by the on-site technician (replacing the main logic board or the chassis often means dismantling nearly everything before it can be removed).
You're thinking about this the wrong way around.
Suppose the user has $800 to buy a new laptop. That's enough to get one with a faster processor than they have right now or more memory, but not both. If they buy one and it's not upgradable, that's not worth it. Wait another year, save up another $200, then buy the one that has both.
Whereas if it can be upgraded, you buy the new one with the faster CPU right away and upgrade the memory in a year. Manufacturer gets your money now instead of later, meanwhile the manufacturer who didn't offer this not only doesn't sell to you in a year, they just lost your business to the competition.
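The argument above is really just timing-of-revenue arithmetic. A toy model, with all dollar figures hypothetical:

```python
# Toy model of the parent comment's argument (all prices are made up).
budget_now = 800    # assumed savings today
laptop_full = 1000  # assumed price with fast CPU *and* more RAM
laptop_base = 800   # assumed price with fast CPU only
ram_upgrade = 200   # assumed price of a memory module a year later

# Non-upgradable laptop: the buyer waits, so no sale this year,
# one full sale next year (if they haven't gone to a competitor).
maker_revenue_locked = [0, laptop_full]

# Upgradable laptop: the maker books a sale now; the RAM upgrade
# is a separate purchase next year.
maker_revenue_upgradable = [laptop_base, ram_upgrade]

print("locked-down:", maker_revenue_locked, "upgradable:", maker_revenue_upgradable)
```

The total spend is the same either way; the difference is that the upgradable design captures the sale a year earlier and removes the window in which the buyer defects to a competitor.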
- the manufacturer themselves benefits from easier-to-repair machines. If Dell can replace the RAM and send the laptop back in a matter of minutes, instead of replacing the whole motherboard and having it salvaged somewhere else, it's a clear win.
- prosumers will be willing to invest more in a laptop that has a better chance of surviving a few years. Right now we all expect parts to fail within 2 to 3 years on the higher end, and budget accordingly. You need a serious reason to buy a 3000$/€ laptop that might be dead in 2 years. Knowing it could weather a RAM failure without a manufacturer repair is a plus.
So it's a chicken-and-egg situation: if it turns out to matter to consumers, the rest of the industry may end up playing catch-up.
From what I gathered, it's around a watt per module when idling (which is when it matters most): the sources I found indicate that DDR5 always runs at 1.1 V (or more, though probably not in laptops), while LPDDR5 can be down-volted. That's roughly an extra 10% idle power consumption per module.
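That ~10% figure is consistent with idle power scaling roughly linearly with rail voltage at similar current. A sanity check, assuming (my assumption, not stated above) a down-volted LPDDR5 rail of 1.0 V for the comparison; real DRAM idle power also depends on refresh behaviour and low-power states:

```python
# Sanity check on the "extra ~10% idle power" claim, under the
# simplification that idle power scales linearly with rail voltage
# at similar current draw.
ddr5_vdd = 1.10    # DDR5 VDD, per the comment above
lpddr5_vdd = 1.00  # assumed down-volted LPDDR5 rail (illustrative)

extra = ddr5_vdd / lpddr5_vdd - 1
print(f"~{extra:.0%} more idle power per module")
```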
I love the disclosure at the bottom:
Full Disclosure: iFixit has prior business relationships with both Micron and Lenovo, and we are hopelessly biased in favor of repairable products.
FYI, the '2' at the end is because this isn't the first time this has been done. :)
LPCAMM spec has been out for a while. LPCAMM2 is the spec for next-generation parts.
Don't expect either to become mainstream. It's relatively more expensive and space-consuming to build an LPCAMM motherboard versus dropping the RAM chips directly onto the motherboard.
I kind of wish we could establish a new level in the memory hierarchy. Like, just make a slot where you can add slower more power hungry DDR RAM that acts as a big cache for the NVM storage, or that the OS can offload some of the stuff in main memory if it's not used much. It could be unpopulated in base models, and then you can buy an upgrade to stick in there to get some extra performance later if needed.
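The tier being described is essentially a cache with eviction between a small fast store and a big slow one. A toy sketch of the idea, with least-recently-used eviction; all names and sizes here are made up for illustration:

```python
# Toy sketch of a two-tier memory hierarchy: a small "fast" store
# (standing in for main RAM) backed by a large "slow" one (standing
# in for the hypothetical DDR expansion slot), with LRU eviction.
from collections import OrderedDict

class TieredStore:
    def __init__(self, fast_capacity, slow_store):
        self.fast = OrderedDict()   # fast tier, LRU-ordered
        self.fast_capacity = fast_capacity
        self.slow = slow_store      # slow backing tier

    def read(self, key):
        if key in self.fast:             # hit: mark as most recently used
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.slow[key]           # miss: fetch from the slow tier
        self.write(key, value)
        return value

    def write(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)
        if len(self.fast) > self.fast_capacity:
            evicted, old = self.fast.popitem(last=False)  # demote the LRU entry
            self.slow[evicted] = old

store = TieredStore(fast_capacity=2, slow_store={"a": 1, "b": 2, "c": 3})
store.read("a"); store.read("b"); store.read("c")
print("a" in store.fast, "a" in store.slow)  # "a" was demoted to the slow tier
```

An OS could do the same transparently with page migration, which is more or less what swap-to-NVMe already is, just with a much larger latency gap between the tiers.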
Too bad Apple is almost guaranteed not to adopt the standard. I miss being able to upgrade the RAM in MacBooks.
Apple would require multiple LPCAMM2 modules to provide the bus width necessary for their chips. Up to 4 x LPCAMM2 modules depending on the processor.
Each LPCAMM2 module is almost as big as an entire Apple SoC combined with its unified RAM chips, so putting 2-4 LPCAMM2 modules on the board is completely infeasible without significantly increasing the size of the laptop.
Remember, the Apple architecture is a combined CPU/GPU architecture and has memory bandwidth to match. It's closer to your GPU than to the CPU in your non-Mac machine. Asking for upgradeable RAM on Apple laptops is akin to asking for upgradeable RAM on your GPU (which would not be cheap or easy).
For every 1 person who thinks they'd want a bigger MacBook Pro if it enabled memory upgrades, there are many, many more people who would gladly take the smaller size of the integrated solution we have today.
Can I please have upgradeable RAM on GPU? Pwetty pwease?
The non-Pro/Max versions (e.g. M3) use a 128-bit bus, and are arguably the kind of notebook that most needs a later upgrade, since they commonly come with only 8GB of RAM.
Even the Pro versions (e.g. M3 Pro) use at most a 256-bit bus; that would be 2 x LPCAMM2 modules, which seems plausible.
For the M3 Max in the MacBook Pro, yes, 4 x LPCAMM2 would (probably) be impossible. But I think something like the Mac Studio could have them, and that's arguably also the kind of device where you'd want to add memory later.
Is it feasible to fit memory bandwidth like the M3 Max (512 bits wide LPDDR5-6400) with LPCAMM2 in a thin/light laptop?
Heck, make it only run full tilt when on an active cooling dock. Let it run half power when unassisted.
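The module-count and bandwidth arithmetic in this subthread is easy to check. The 128-bit width per LPCAMM2 module and the per-chip bus widths below are taken from the comments above; treat them as assumptions rather than official specs:

```python
# Back-of-envelope numbers for the thread above.
import math

MODULE_WIDTH_BITS = 128  # one LPCAMM2 module, per the comments

def modules_needed(bus_width_bits):
    """How many LPCAMM2 modules it takes to fill a given bus width."""
    return math.ceil(bus_width_bits / MODULE_WIDTH_BITS)

def bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Peak bandwidth: bits/transfer * megatransfers/s -> GB/s."""
    return bus_width_bits * transfer_rate_mts / 8 / 1000

# Bus widths as cited in the comments (not verified against Apple specs).
for name, width in [("base M3", 128), ("M3 Pro", 256), ("M3 Max", 512)]:
    print(f"{name}: {modules_needed(width)} module(s), "
          f"{bandwidth_gbs(width, 6400):.1f} GB/s at 6400 MT/s")
```

Under these assumptions the 512-bit Max part needs four modules for ~410 GB/s, while the base chip's 128 bits fit in a single module, which is why the base and Pro configurations look far more plausible than the Max.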
There are engineering differences. Depending on who you ask, it may or may not be worth it.
As far as bandwidth goes, you would only need one or two LPCAMM2 modules to match or exceed the bandwidth of non-Max M-series chips. Accommodating Max chips in a MacBook with LPCAMM2 would definitely be a difficult packaging problem.
https://www.anandtech.com/show/17024/apple-m1-max-performanc...
https://www.anandtech.com/show/17047/the-intel-12th-gen-core...
[1] https://www.pcworld.com/article/693366/dell-defends-its-cont...
iFixit also links to a repair guide:
RAM is nice to upgrade, for sure, and so is the SSD, but an upgradeable CPU is still a must. I would even suggest upgradeable GPUs, but I don't think the money is there for the manufacturers. Why let you upgrade when they could sell you a whole new laptop?
RAM/Storage are great upgrades because 5 years from now you can pop in 4x the capacity at a bargain since it's the "old slow type". CPUs don't really get the same growth in a socket's lifespan.
The technical differences between sockets aren't usually huge. Upgrade the memory standard here, add or remove PCIe lanes there. Using new cores with an older memory controller may or may not be doable, but it's quite simple to not connect all the PCIe lanes the die supports.
Because you can't swap the motherboard, your options for CPUs are going to be quite limited. Generally, only higher-tier CPUs of that same generation - which draw more power and require more cooling.
Generally a laptop is designed to provide a specific power budget to the CPU and has a limited amount of cooling.
Even if you could swap out the CPU, it wouldn't work properly if the laptop couldn't provide the necessary power or cooling.
And the good thing about mobile CPUs is that they have almost the same TDP across the various dual/quad-core versions (or whatever the norm is today).
> Because you can't swap the motherboard,
https://frame.work/ has entered the chat.
In a way, I don't mind having non-replaceable RAM as an option in the Framework ecosystem, simply because the motherboard itself is modular and needs to be swapped to upgrade the CPU anyway. At that point, though, I would prefer an integrated RAM/CPU/GPU package.
Looks like the best card they have out with MXM right now is a Quadro RTX 5000 Mobile, which seems to be going for ~$1000 on eBay.
This way, you could keep power consumption low and still be able to upgrade the CPU to a new generation.
And soldering stuff to the board is the default way to make something when upgradeability isn't a feature.
Why can't the signaling channels use a higher voltage, with control circuitry on the memory stick stepping the levels up and down to access the memory module?
It worries me.
EDIT: article about a tech demo of it on a laptop actually, hadn't seen this before: https://www.techradar.com/pro/even-a-laptop-can-run-ram-exte...
This[1] Anandtech article from last year has a better look at how the LPCAMM module works. Especially note how the connectors are now densely packed directly under the memory chips, significantly reducing the trace length needed. Not just on the memory module itself but also on the motherboard due to the more compact memory module. It also allows for more pins to be connected, thus higher bandwidth (more bits per cycle).
[1]: https://www.anandtech.com/show/21069/modular-lpddr-becomes-a...
Yet another standard for memory will just fail.
And you'd be able to have a lot more than 192GB.
Unless “impossibly far profit margin” is a technical requirement.
There is: Apple uses flash memory as swap to get away with low RAM specs, and the latency and speed required for that purpose all but necessitate putting the flash memory directly next to the SoC.
I do hope that more widespread use of compression attachment gives us some development in an area where projects promising modular devices failed (remember those 'modular' phone concepts? the available physical interconnects were one of the failure points...). Sockets for BGAs have existed for a while but were never really end-user friendly (not that LGA or PGA are that amazing), so maybe my hope is misplaced and many-contact connections will always be worse than direct attachment (be it PCB or SiP/SoC/CPU shared substrate).
As much as I like socketed / user-replaceable parts, fact is that soldering down a BGA is a very reliable way to make those many connections.
On devices like smartphones & tablets RAM would hardly ever be upgraded even if possible. On laptops most users don't bother. On Raspberry Pi style SBCs it's not doable.
Desktops, workstations & servers are the exception here.
Basically the high-speed parts of a system need to be as close together as physically possible. Especially if low power consumption is important.
Want easy upgrades? Then compute module + carrier board setups might be the way to go. Keep your I/O connectors / display / SSD etc, swap out the CPU/GPU/RAM part.
Not for the average person.
Until you hit custom, undocumented, unobtainium proprietary chips. Good luck repairing anything with those.
There's lots of value in tight integration. Improved signal integrity (ie, faster), improved reliability, better thermal flow, smaller packaging, and lower cost. Do I really want to compromise all of those things just to make RAM upgrades easier?
And how many times do I need to upgrade the RAM in a laptop, really? Twice? Why make all those sacrifices for a connector instead of just reworking the DRAM parts? A robotic reflow machine is not so complex that a small repair shop couldn't afford one, which is what you see if you go to parts of the world where repair is taken seriously. Why do I need to be able to do it at home? I can't re-machine my engine at home. This is the most advanced nanotechnology humanity can produce; why is a $5k repair setup unreasonable?
This is not to mention the direction things are really going, DRAM on Package/Die. The signaling speed and bus widths possible with co-packaged memory and HBM are impossible to avoid, and I'm not going to complain about the fact that I can't upgrade the RAM separately from the CPU, any more than I complain about not being able to upgrade my L2 cache today. The memory is part of the compute, in the same way the GPU memory is part of the GPU.
I hope players like iFixit and Framework aren't too stubborn in opposing the tight integration of modern platforms. "Repairable" doesn't need to mean the same thing it did 10 years ago, and there are so many repairability battles that are actually worth fighting, that being stubborn about the SOTA isn't productive.
I don't know, I would say the reverse: workstations might need the performance of DRAM on package/die, but I don't believe that's the case for the mainstream user.
> A robotic reflow machine
Maybe the same to service enterprise customers, but probably way too expensive for the mainstream.
I certainly hope that players continue to oppose tight integration and I'll try to support them. I value the ability that anyone can swap ram and disks to easily upgrade or repair their device more than an increase of performance or even battery life.
I recently cobbled together a computer for a friend's child from components of three different computers; any additional cost would have made the exercise worthless.