Clones tend to be vastly different - different technology node, architecture, die size, etc. That's because they are generally functional clones, not mask clones.
(also, as a general shoutout to the low tech sandpaper technique for exploratory work, here's a sanded down RP2350 thrown under a clapped out SEM: https://object.ceph-eu.hswaw.net/q3k-personal/484e7b33dbdbd9... https://object.ceph-eu.hswaw.net/q3k-personal/3290eef9b6b9ad... )
In my experience, it's mostly mislabeling, rebinning, and passing off obvious QC rejects.
example from many years ago: https://www.youtube.com/watch?v=e6DfBuPwAAA
That's the better method, of course (results-wise), but it's not nearly as accessible - hence my recent evangelism for the virtues of 2000-grit sandpaper.
I don't think that's quite accurate for reasonably modern MCUs. You can typically shake 10+ bits out of them, but you need to take a lot of precautions, such as providing very stable external reference voltage and shutting down unneeded subsystems of the chip.
They're still not as good as standalone ADCs, but they're at a point where you can actually use them for 90% of things that require an ADC.
In cases where you need more bits, there's a lot more that must go into the design, which is what gives me pause about the article. There's nothing about the PSU the author is using or how he managed MCU noise and RFI. So I don't know if the finding here is that these are knock-off devices with worse specs, or that his overhead LED lamp is causing a lot of interference.
The RP2350 has 9.2 ENOB on a 12-bit ADC. Sure, you might be able to decimate multiple samples to get more bits out of them, but the spec sheet supports the author's claim (https://www.raspberrypi.com/documentation/pico-sdk/hardware....). There are even lower-cost MCUs like the CH32V003 that have even worse ADC performance.
On the other hand, some MCUs can definitely do 10+ bits, such as the STM32H7 line which gets 13+ ENOB from a 16 bit ADC. This is impressive, but the H7 MCUs are literally an order of magnitude more expensive than the RP2350, so they might not be something the author tinkers with much. https://www.st.com/resource/en/application_note/dm00628458-g...
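(If you want to try the decimation route, it's cheap to sketch. A minimal example, assuming the Pico SDK's hardware_adc API: averaging 4^n samples buys roughly n extra bits, but only if the noise is uncorrelated - which is exactly where the PSU/layout questions above come in.)

    #include "pico/stdlib.h"
    #include "hardware/adc.h"

    // Oversample-and-decimate: sum 4^n raw 12-bit readings, then
    // shift right by n. Gains ~n ENOB only if the noise is white.
    static uint32_t adc_read_oversampled(unsigned extra_bits) {
        const unsigned n = 1u << (2 * extra_bits);  // 4^extra_bits samples
        uint32_t sum = 0;
        for (unsigned i = 0; i < n; i++)
            sum += adc_read();                      // raw 12-bit sample
        return sum >> extra_bits;                   // (12 + extra_bits)-bit result
    }

    int main(void) {
        adc_init();
        adc_gpio_init(26);                          // GPIO26 = ADC0
        adc_select_input(0);
        while (true) {
            uint32_t v = adc_read_oversampled(2);   // ~14-bit result
            (void)v;
        }
    }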
With an RP2040 (and an LDO for the supply), using two such channels for a pseudo-differential measurement (the second one just tracks threshold noise), I typically obtain 16 ENOB at 1 kHz, and more at DC.
It is critical to avoid any periodic activity on the chip, though. Putting cores to sleep and then waking them up again causes huge spurs; one has to e.g. sleep for random intervals to spread them around. Same with flash. USB can be used; its noise doesn't normally exceed -100 dB for me.
Fun stuff!
PS: I have not tested DC accuracy. One would likely use a channel with reference and hope that GPIOs are well matched. Could be used to e.g. sense CC lines on USB or analog joysticks and other non-critical, low accuracy stuff.
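Roughly what I mean by the two-channel trick, as a hedged sketch against the Pico SDK (channel choices here are made up; the second input is tied to the same bias point, so it sees mostly the shared reference/threshold noise):

    #include "hardware/adc.h"

    // Pseudo-differential read: channel 0 carries the signal,
    // channel 1 sees only the shared bias/reference noise.
    // Subtracting cancels the common-mode part; averaging many
    // such differences then grinds down what remains.
    static int32_t pseudo_diff_read(int n_avg) {
        int64_t acc = 0;
        for (int i = 0; i < n_avg; i++) {
            adc_select_input(0);
            int32_t sig = adc_read();
            adc_select_input(1);
            int32_t ref = adc_read();
            acc += sig - ref;
        }
        return (int32_t)(acc / n_avg);
    }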
Is this essentially trading time resolution for voltage resolution? Would just doing an exponentially weighted moving average in firmware achieve the same results?
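By EWMA I mean the usual one-line fixed-point filter, i.e. something like:

    // Classic fixed-point EWMA with alpha = 1/2^K; the accumulator
    // holds the average in Q(K) fixed point so no fraction is lost.
    #define K 4
    static uint32_t acc;                     // avg * 2^K
    static uint16_t ewma_update(uint16_t sample) {
        acc += sample - (acc >> K);          // avg += alpha * (sample - avg)
        return (uint16_t)(acc >> K);
    }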
The chip has a 12-bit SAR ADC. Layout and board design mattered a lot, but even the worst ones had 10 bits' worth, and the best one had nearly 12 effective bits.
That was without doing much on the software side: the other modules weren't running, aside from a single serial output. On the bad boards the serial traffic affected it; on the good board, very little.
The paragraph ending with "Compare that with a microcontroller ADC with a fixed 3.3 V range: 9 ENOB steps are ~6 mV" also seems to insinuate that no MCU has an analog reference that's independent from the supply, which just isn't true at all. Hell, NXP has a few that have a built-in programmable reference.
Those also depend on the noise of the supply; see the PSRR graph if available.
The vast majority of counterfeit chips I've seen came from ghost shifts, but IIRC TI fabs all their analog parts in-house, so I doubt these are ghost-shift parts or failed QC.
I think it's probably a relabeled ADS1015.
(also interestingly the STM32 clones I've seen had stacked die flash because they didn't fab them in a technology that could also do flash, so you can easily tell the counterfeit from sanding down the package and looking for an extra set of bonding wires; it's also a cool place to access the internal flash bus if you wanna bypass some readout protection :) )
I don’t usually buy from electronics markets in Shenzhen either so that probably helps.
Getting full-spec performance out of an ADC requires good layout, power supply design, routing, etc.
I would transplant the chips from PCB A to PCB B and vice versa. See if the performance follows the chip or the PCB.
Also check power consumption before / after board swaps. If they are fakes, that would be significantly different.
On the current project we started with an MCP3208 via SPI. It did the job but only has 8 channels and it's slow (100K samples per sec).
To get something faster we switched to ADS7953. It has 16 channels and runs 10 times faster. It's somewhat more complex to code, and you can only get the highest sample rate if you scan the inputs in a predictable order. But it sure flies.
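For reference, the MCP3208 transaction fits in a few lines. A sketch assuming a generic full-duplex spi_transfer() helper (made up here; substitute whatever your platform provides):

    #include <stdint.h>

    // Platform-specific: clocks tx[0..len-1] out while reading rx.
    extern void spi_transfer(const uint8_t *tx, uint8_t *rx, int len);

    // MCP3208 single-ended read: start bit, SGL/DIFF=1, three channel
    // bits, then the 12-bit conversion is clocked back out.
    static uint16_t mcp3208_read(uint8_t ch) {            // ch = 0..7
        uint8_t tx[3] = {
            (uint8_t)(0x06 | (ch >> 2)),    // 00000 1 1 D2: start + single-ended
            (uint8_t)(ch << 6),             // D1 D0, then don't-care bits
            0x00
        };
        uint8_t rx[3];
        spi_transfer(tx, rx, 3);
        return (uint16_t)(((rx[1] & 0x0F) << 8) | rx[2]); // 12-bit result
    }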
To me, these chips feel like cars. The ADS7953 is something of a Ferrari, whereas the MCP3208 feels like a Toyota: simple to use, unimpressive performance.
I'd love to know the industry background about how these varieties of ADC chips came to be and carved their own space in the world, and how widely they are used (millions? billions?).
I recall reading about a project at CERN to design a 12-bit ADC chip that could sample at tens of GHz, maybe 50 or more.
I was perplexed at how they could achieve this.
Turned out it was the same we programmers do. Parallel processing.
They had taken a 12-bit SAR unit which ran at MHz-ish rates, and just cloned it many times. They then had a large analog multiplexer in front to route the signal to the active ADC unit in a round-robin fashion.
That takes a lot of chip real estate, and the analog mux had to be carefully designed.
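The bookkeeping side of the round-robin idea is trivial, which is the point - all the difficulty lives in the analog mux and the unit-to-unit calibration. A toy sketch with made-up names:

    #include <stdint.h>

    #define N_UNITS 64

    // Hypothetical per-unit read of one slow SAR converter.
    extern uint16_t sar_unit_convert(int unit);

    // Time interleaving: unit (i % N_UNITS) converts sample i, so
    // the aggregate rate is N_UNITS * unit_rate.
    static void capture(uint16_t *buf, int n_samples) {
        for (int i = 0; i < n_samples; i++)
            buf[i] = sar_unit_convert(i % N_UNITS);
    }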
For a simpler approach to speed there are flash ADCs [1], which kinda brute-force it.
For precision I know multi-slope ADCs[2] are often used.
Sadly I don't know much about the history, and would also love to learn more about it. Bound to be some fascinating stories there.
[1]: https://en.wikipedia.org/wiki/Flash_ADC
[2]: https://www.analog.com/media/en/training-seminars/tutorials/...
What about the AD9226? It only has a single channel but can do up to 65 MSa/s at 12 bits. I bought one as a module for around $12 on AliExpress to experiment with software decoding of analog video. I only run it at 20 MSa/s and only use 8 bits because, funnily enough, the limiting factor is the speed at which I could get the data into my laptop. I connected it to a Raspberry Pi Zero and use the SMI peripheral as described here: https://iosoft.blog/2020/07/16/raspberry-pi-smi/
They need a lot of pins toggled, otherwise they spit out no data. All that manual pin-wrangling makes them super DMA-unfriendly, and you need DMA for high-speed stuff.
I'm currently working on a PLC program, replacing the PLC's basic cyclic input sampling (max 2K samples/sec) with a harder-to-use mechanism that lets you access the raw data off its 12-bit ADC at 10K samples/sec, which we consider unusually speedy.
We have another project that measures levels in water tanks. A sample rate of 1 sample per minute is plenty.
The edge case of the 1-bit conversion scheme used in the SACD format is compelling from a few perspectives. The idea is to run the sampling rate in the megahertz region. SACD achieves 120 dB of dynamic range with an extended frequency response up to ~100 kHz; CD audio only achieves 96 dB of range up to 20 kHz with its 16-bit PCM scheme. From the analog hardware complexity standpoint, a bitstream converter is much simpler than a multi-bit converter. The 16-bit ADC might be cheaper due to the insane manufacturing volumes.
Trading bit depth for sample rate is a very compelling offer in many cases. The 3D graphics version of this is SSAA, where you sample more pixels than your monitor needs in order to resolve higher-frequency information.
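The trade is easy to demonstrate with a toy first-order delta-sigma modulator: a 1-bit quantizer wrapped in an integrator pushes the quantization error up to high frequencies, and plain averaging of the fast bitstream recovers the slow signal with far more than one bit of resolution. A sketch (not SACD's actual modulator, which is higher order):

    #include <stdio.h>

    int main(void) {
        double input = 0.37;      // slow "analog" value in [-1, 1]
        double integ = 0.0, fb = 0.0;
        long ones = 0;
        const long n = 1000000;   // ~MHz-rate 1-bit samples

        for (long i = 0; i < n; i++) {
            integ += input - fb;            // integrate the error
            int bit = integ >= 0.0;         // 1-bit quantizer
            fb = bit ? 1.0 : -1.0;          // 1-bit DAC feedback
            ones += bit;
        }
        // Decimate by averaging: bit density maps back to [-1, 1].
        printf("recovered: %f\n", 2.0 * (double)ones / n - 1.0);
        return 0;
    }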
Btw, the Western list price is just an indicative upper bound anyway. Even a small-sized project gets discounted prices once you start talking to a sales rep.
If you want a flash ADC that can do 16 bits (and do them at 100 MHz), however, you'll probably have to mortgage your house.
Does it work? Well, does your design power up during factory testing, and then pass whatever things your rig (hope you made a few!) has in mind? Well, then, yes, in fact it does...
Also, and perhaps more importantly, the test rig is a lot simpler and a lot cheaper if you can generally trust manufacturer data. Sure, send off a few samples (likely prototypes with parts from Digikey instead of LCSC) to run extended testing in an environmental chamber with thermal imaging, build an endurance test rig that pushes the button once a second for four weeks to simulate once-daily use for years, whatever you want to do...but after that, if TI says it's good from -40 to +125, you're going to trust them on a lot of the edge cases.
Do 100% testing of the things you can test in-circuit if you can - power it up at room temperature and make sure it works once - but that doesn't mean you get the actual rated performance across all published environmental conditions.
Of interest from early in the article: I'm curious how these external ones compare to onboard ADCs, e.g. the STM32's. Btw, the TI one listed is actually pretty simple to use in comparison. The ST integrated ones have more config and hardware considerations, e.g. complicated calibration procedures, external VREF (mentioned), etc. So, if you do all the config, is the integrated one as good?
The integrated ones usually have nice ways to integrate with timers and other onboard periphs.
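On the ST side, "doing the config" mostly means remembering the calibration call before the first conversion. A sketch assuming the STM32 HAL and a CubeMX-generated hadc1; the exact calibration signature varies by family:

    // hadc1 comes from the usual CubeMX-generated init code.
    extern ADC_HandleTypeDef hadc1;

    uint32_t adc_setup_and_read(void) {
        // Calibrate once before the first conversion. On L4/G4/H7 the
        // call takes a single-ended/differential argument; on F1 it
        // takes only the handle.
        HAL_ADCEx_Calibration_Start(&hadc1, ADC_SINGLE_ENDED);
        HAL_ADC_Start(&hadc1);
        HAL_ADC_PollForConversion(&hadc1, HAL_MAX_DELAY);
        uint32_t raw = HAL_ADC_GetValue(&hadc1);
        HAL_ADC_Stop(&hadc1);
        return raw;
    }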
LCSC is a grey-market distributor whose sources of supply are of untraceable, dubious provenance. They are neither an ECIA member nor a participating distributor.
"LLMs produce AI slop on regular basis. This piece of code I found somewhere on the web sure is AI slop, I never saw any code by James Bowman but _I very much think these are made with the_ AI slop generators _or similar_". And the follow up is "Remember James Bowman? _I’m still curious about_ his code. I might look at it in something or other at some undetermined point in the future to see how it performs." Does that sound even remotely ok?
It's not even guilt by association. It's guilt by gut feeling? Prejudice?
Single-cycle readings defeat the point of sigma-delta ADC setups.
You're taking many high noise samples and averaging them over time to get a better picture of the average voltage.
The chip's internal delta-sigma modulator takes a lot of samples at a much higher modulation frequency and presents them as a single output value.
You do not get the direct delta-sigma output from an ADC like this. The internal logic handles that for you. It's okay to take single samples of the output.
Natively/internally, it runs at 860 samples per second, and you can configure it to deliver data at lower sample rates with lower noise by averaging multiple readings together internally.
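Concretely, that's the DR field in the 16-bit config register at 0x01. A sketch assuming a made-up i2c_write_reg16() helper standing in for your platform's I2C write:

    #include <stdint.h>

    #define ADS1115_ADDR   0x48   // ADDR pin tied to GND
    #define REG_CONFIG     0x01

    // Platform-specific: writes a 16-bit register over I2C (made up).
    extern void i2c_write_reg16(uint8_t addr, uint8_t reg, uint16_t val);

    // Config word: OS=1 (start), MUX=AIN0/GND (100), PGA=+/-2.048 V
    // (010), MODE=single-shot (1), DR in bits [7:5], comparator off.
    static void ads1115_start(uint16_t dr_bits) {   // 0 = 8 SPS .. 7 = 860 SPS
        uint16_t cfg = (1u << 15) | (4u << 12) | (2u << 9) | (1u << 8)
                     | ((dr_bits & 7u) << 5) | 0x0003;
        i2c_write_reg16(ADS1115_ADDR, REG_CONFIG, cfg);
    }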
And if you want to see CERN's multimeter, check this out.
Marco Reps is a treasure.
Non-linear as hell - and there are evil side effects once you use the calibration curves.
There are a bunch of reasons but the primary reason is that good ADCs are made using a different mixed signal process than microcontrollers. MCU ADCs are capacitive charge-balancing successive-approximation type which limits their sensitivity and precision.
Standalone ADCs also eliminate significant noise sources like temperature fluctuations and electrical noise (the digital logic on the chip often runs at less than 1 MHz, for example).
Only ugly two-chip solutions or hyper exotic stuff with no community.
Weirdly honest deal, haha.
There you are. Assumptions. Nothing to do with LCSC.
The fact that the ADS1115 costs <$1 on LCSC means LCSC buys millions of them every year. They are one of the biggest trustworthy players in Asia.
I have access to our internal STM32 pricing. You'd be shocked.
One supplier I developed a relationship with showed us their internal numbers and it was $1,000-3,000 per wafer for 130nm-180nm nodes with a minimum order of 25 wafers. Once the part is designed and the mask is made, the cost is mostly just the setup plus whatever they want for the IP. The silicon itself is often cheaper than the packaging around it.
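(To put that in per-die terms, with made-up but plausible numbers: a 200 mm wafer has roughly 31,000 mm² of area, so at $3,000 a wafer, a 4 mm² die works out to around $0.40 of silicon before yield loss, test, and packaging.)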
It takes an analog signal from something like light or sound and converts it into a digital signal.
Acronyms introduced in an article should be spelled out at least once, please.