Not to oversimplify: first you have to flatten the rock and put lightning inside it.
Source: https://twitter.com/daisyowl/status/841802094361235456
Is the "rock" "thinking" though? What _is_ "thinking"? Another way to look at a CPU is as a rock that we "carved" to conduct electricity in a specific way. It doesn't think - we interpret how the electricity flew and "think".
I think this'll happen at some point, as silicon manufacturing hits final roadblocks and becomes increasingly commoditized, but it'd be nice if it were sooner rather than later.
(This would be nice for self-sufficient decentralized communities being able to produce their own microelectronics as well.)
Plus an electron microscope!
https://blog.classycode.com/implementing-a-cpu-in-vhdl-part-...
One just needs to avoid leaning too hard on the similarities with programming, and instead think in terms of signals and keep the clock in mind.
The worst part is the tooling; e.g., I expected Quartus to crash or hang at any moment, so early on I wrote a TCL script to set up the project and the FPGA pin mapping, so that I could scrap the Quartus project at any time and just recreate it in seconds without losing any work (or my mind).
There's ghdl (open source), which I found quite nice, but it's only for simulation and (at least back then) had no support for producing a bitstream to load onto a real FPGA, so I mostly used it for test benches and quicker development iterations, and only loaded the design onto the actual FPGA every so often.
The main issue is creating the masks. I'm not sure where you would get rubylith and the associated machinery for contact printing in this day and age.
As a side note: the thing stopping commodity VLSI is the CAD tools, not the silicon. Silicon runs are around $50K or so for an old node, and fab shuttle/MPW (multi-project wafer) runs are often under $10K.
https://www.electronicsweekly.com/news/business/diy-chip-10k... https://www.hackster.io/news/efabless-google-and-skywater-ar...
If you are interested in self-sufficient decentralized communities, microchips are not essential for a good society or a long life. https://aeon.co/ideas/think-everyone-died-young-in-ancient-s... They're useful for making non-specialized hardware that can run general programs and that many people can support. Microchips do calculations and are more useful at scale; analog computers can take over some roles, but producing specialized hardware for each task is more wasteful.
I don't know whether vacuum tubes need globalization to make, but you are not going to make decentralized microchips from local goods; they are not fungible raw materials like food.
However, it is plausible not to rely too much on microchips in general: a potentiometer can control fan speed with older motors, though we lose brushless fans. You may really enjoy post-collapse guides if you are interested in what can be made locally.
http://www.survivorlibrary.com/
There are also great YouTube videos on things like primitive technology, making things from scratch. Even steel cannot be made locally without the proper materials, so there needs to be some sort of global/larger community, unless the community lives in a resource-rich environment (the US does not have metals the way Eurasia does).
Both are educational and cool, and neither makes economic sense.
Also, I wonder what other materials might work, even if they are much slower, lower power, etc. Copper Oxide, for example?
Basically, we need a functional understanding of all components, and hierarchical functional equivalents/isomorphisms. Aka "What is the end function of logic gates?" "To do X." "Are there simpler, easier, faster, more feasible structures that do X? Any way whatsoever? It does not need to relate to the current discipline or our established approaches." And iterate that question over all components and steps of the chip-making process.
I think it can be done and it would be fun. We have to filter out people who have a vested interest in chip manufacturers, because they may try to overcomplicate the process to protect their purse. So something like a vouch system, where we know the people coming in have the right heart and won't purposely screw up morale.
Are you talking about distribution? Because I would have thought the small size/weight of wafers would mean those costs would be fairly small. Maybe (silicon) packaging could be done locally, but even then packaged chips weigh little.
https://cdn.aff.yokogawa.com/9/400/details/minimal-fab-one-s...
*I added a reference link
Having a thorough understanding of the process, I thought this was hilarious. But if you really want to understand the process, it's pretty terrible. It spends 10 steps on making a wafer, and then the bulk of the actual process is condensed to 16.
I'm going to guess at a number and say you're probably looking at $10-20MM all in.
As far as hiring good designers, you really need the relevant technical background to screen them. Assuming you need a team, I would suggest starting with someone in a director position at a large company with experience in your area of interest. They would be able to more concretely define the project and determine human resource requirements/allocation.
If your ASIC is just large but not complex (meaning lots of repeated structures), and you can get away with just a few strong designers, I would suggest hiring a consultant to help you define the project and screen candidates. EE profs at your local university might be a good start.
Feel free to shoot me an email (in my profile) if you want to describe your project in more depth and I'd be happy to offer what advice I can.
7nm logic mask set costs are estimated at $10.5M. Of course you could always use a slightly less leading-edge node and save a lot of money.
Slide 13:
https://semiwiki.com/wp-content/uploads/2020/03/Lithovision-...
To put it in perspective, a company a friend founded that is doing a very large but very regular ASIC for TSMC 10nm raised twice what you're ballparking and will end up doing another raise before tapeout. Chips nowadays are _expensive_ if you're anywhere near state of the art. The people who do very high performance chips are also very expensive.
But I'm also guessing that unless you're doing a bitcoin hashing chip, AI chip, or GPU, all of which are regular, your scale estimate is probably very off. If you're doing any of those, just don't.
That said, there are SOC vendors who do core plus, so it's possible that if this is just an accelerator, no matter how wide, you might be able to outsource the whole thing.
Integrated circuits are appealing to industry because they're integrated - they can be smaller and they reduce cost (long term; still need the upfront investment). These are important things for many products, especially in RF, but they aren't really driving factors for hobbyists.
That being said, people are trying! http://sam.zeloof.xyz/second-ic/
If it means designing inside an EDA environment, where the design gets submitted and shows up realized, then that is possible now. And it's not all that expensive. eFabless chipIgnite is quoting ~$10K for 100 QFNs in 130nm. That's getting into beater-car territory. https://efabless.com/chipignite/2110C
If it means actually fabricating, then I think there is no way, because a DIYer won't have the scale to compete on price and won't be able to bring any custom processing step to justify operating at boutique scale. Think about PCBs. We used to make them ourselves with chemical etching. Now I don't even use breadboards, because a custom bare fab is $5 and the components cost more than that. I also get a much better electrical result, and it doesn't fall apart if I look at it funny.
The main problem with ASICs is the amount of skill/time that it takes to do it right. Floor planning, track planning, closing timing, etc. etc. on an ASIC is much harder than an FPGA. You don't even have to do half those things on an FPGA.
With an FPGA you can almost get compile-and-go if you're willing to be loose on area and performance. ASIC CAD tooling is nowhere near that at the moment, closed or open source.
Then you do the lithography (photoresist developing, etching, etc) to expose specific regions on the silicon that you want to dope to create devices like transistors, for example. So first you might expose all of the p-type transistor (PMOS) diffusion areas and dope them. Then you'd remove all the photoresist, repeat the procedure to expose n-type diffusions and dope that. And so on for the various needs of that particular wafer.
PN junctions are created simply by having p-type doped silicon adjacent to n-type doped silicon. The boundary between the two is the PN junction. In practice what I usually see is a square of one type with a ring around it of the other, but these devices are not frequently used.
Do you use any countries or specific factories that do better refinement, or are the raw materials shipped directly to the manufacturing country? What do you usually make from the wafers? Are certain sizes much harder to make? I know, for example, that larger sensors for digital cameras are much harder to make. I've also heard of redundant circuits being used to increase chip yields; how often is this done, and when is it most useful versus less so?
Another issue is the equipment used for manufacturing. It's very hard to come by, and the classic example is ASML (Netherlands), which dominates the market for lithography equipment.
I work on the design side, not in a fab, so I can't tell you much about sourcing or refining the silicon for wafers. Wafers are used to make every single microchip you can imagine. There has been a slow but continuous push toward using larger wafers, since it's more cost-effective. I imagine it's more difficult, but I couldn't tell you any specifics.
As far as manufacturing each individual integrated circuit: yes, larger is harder to manufacture because there is more physical space for a defect to occur. There are some design challenges as well when you get very large, but it's not a significant overhead because you're usually doing your design in sub-pieces anyways.
Some designs do use redundancy, as you mentioned. This is more often the case for very large, very uniform structures, like DRAM, flash, CPU cache, etc. But there's a tradeoff because you waste money on that redundancy for every chip that comes out with no defects. And there's overhead to actually testing the part in order to utilize the redundancy. In my experience, yields are targeted at the high 90%s these days, so the redundancy would have to be very cheap to be worth it. For almost all RF, analog, and mixed-signal circuits, there is no redundancy. I'd say most digital circuits, except the largest, also don't have any.
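To make the size/redundancy tradeoff above concrete, here's a back-of-the-envelope sketch using a simple Poisson yield model. The defect density, die areas, area overhead, and "one spare repairs one defect" assumption are all made up for illustration; real fabs use more sophisticated models.

```python
import math

D0 = 0.1  # hypothetical defect density, defects per cm^2

def yield_no_redundancy(area_cm2):
    """Fraction of dies with zero defects (Poisson model)."""
    return math.exp(-area_cm2 * D0)

def yield_with_one_spare(area_cm2, overhead=0.05):
    """Spend ~5% extra area on spares and assume one defect can be repaired."""
    lam = area_cm2 * (1 + overhead) * D0
    return math.exp(-lam) * (1 + lam)  # P(0 defects) + P(1 defect)

for area in (0.25, 1.0, 4.0):
    base = yield_no_redundancy(area)
    spare = yield_with_one_spare(area)
    # "cost" per good die is roughly proportional to die area / yield
    print(f"{area:4.2f} cm^2: yield {base:.1%} vs {spare:.1%} with spares, "
          f"cost/good die {area/base:.2f} vs {area*1.05/spare:.2f}")
```

With these made-up numbers, the spare area only pays off for the big die; for the small, already-high-yield die it's pure overhead, which is exactly the tradeoff described above.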
Also - does the US really have microchip supremacy? The highest-tech fabs are non-US (Samsung and TSMC).
>Soviet computer software and hardware designs were often on par with Western ones, but the country's persistent inability to improve manufacturing quality meant that it could not make practical use of theoretical advances. *Quality control*, in particular, was a major weakness of the Soviet computing industry.
https://en.wikipedia.org/wiki/History_of_computing_in_the_So...
On Linux, they are making one with an Allwinner chip called the D1. https://www.hackster.io/news/sipeed-teases-linux-capable-64-...
RISC-V is not inherently better or more secure; it's a different instruction set with no license fees, so anyone can make one, and it's entirely possible for an implementation to be less secure.
If I had the skills I would immediately investigate how to couple a RISC-V CPU with some open GPU on that platform!
https://www.youtube.com/watch?v=NGFhc8R_uO4
Lots longer, but far more informative and less fluffy.
For large digital circuits (e.g. CPUs), it's all automated. There's a lot of human involvement, but ultimately a computer is placing all the transistors and wiring them together.
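For a flavor of what "a computer placing everything" means, here's a toy placer in Python: random cells on a grid, half-perimeter wirelength as the cost, and random swaps kept only when they help. The tiny netlist is invented, and real place-and-route tools additionally handle timing, congestion, legalization, and routing, so treat this as a cartoon of the idea rather than how production tools actually work.

```python
import random

random.seed(0)

# Hypothetical tiny "netlist": each net is a group of cells that must be wired together.
NUM_CELLS = 20
GRID = 8  # 8x8 grid of placement slots
nets = [random.sample(range(NUM_CELLS), k=3) for _ in range(15)]

# Random starting placement: cell index -> (x, y) slot, all distinct.
slots = [(x, y) for x in range(GRID) for y in range(GRID)]
random.shuffle(slots)
place = {cell: slots[cell] for cell in range(NUM_CELLS)}

def wirelength(placement):
    """Half-perimeter wirelength: a cheap, standard estimate of routing cost."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

best = wirelength(place)
print("initial wirelength:", best)

# Improve by random swaps, keeping only moves that don't make things worse
# (production tools use simulated annealing or analytic solvers instead).
for _ in range(20000):
    a, b = random.sample(range(NUM_CELLS), 2)
    place[a], place[b] = place[b], place[a]
    cost = wirelength(place)
    if cost <= best:
        best = cost
    else:
        place[a], place[b] = place[b], place[a]  # undo the swap

print("final wirelength:", best)
```

The "improve a cost function by local moves" loop is the essence; everything else in a real tool is domain-specific machinery layered on top.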
https://en.wikipedia.org/wiki/Silicon_photonics https://en.wikipedia.org/wiki/Integrated_quantum_photonics
If Minecraft taught me anything, it's that ore doesn't spontaneously turn into ingots.
I haven't been in school for a while, so I'm not sure what's current. I really liked Baker's book, CMOS Circuit Design. It had a decent overview of the manufacturing process from the perspective of a designer, as well as good introductions to the major design topics.
Unfortunately, with modern processes, most of the textbook design equations and learning no longer apply so it becomes as much learning an art as it is science.
Yeah of course when it comes to how things are actually done it's hard to know without actually working in the field. But I just wanted an overview.
Problem is, I don't remember what the game is called, and no amount of searching seems to help me. Anyone know what it was called?
- NandGame (2-dimensional circuit building, mission-based): https://nandgame.com/
- Logic World (3-dimensional circuit building, no missions/goals yet): https://store.steampowered.com/app/1054340/Logic_World/
>Print and explore the TIS-100 manual, which details the inner-workings of the TIS-100 while evoking the aesthetics of a vintage computer manual!
>Solve more than 20 puzzles, competing against your friends and the world to minimize your cycle, instruction, and node counts.
>Design your own challenges in the TIS-100’s 3 sandboxes, including a “visual console” that lets you create your own games within the game!
>Uncover the mysteries of the TIS-100… who created it, and for what purpose?
https://zachtronics.com/shenzhen-io/
>Build circuits using a variety of components from different manufacturers, like microcontrollers, memory, and logic gates. Write code in a compact and powerful assembly language where every instruction can be conditionally executed.
>Read the included manual, which includes over 30 pages of original datasheets, reference guides, and technical diagrams.
>Get to know the colorful cast of characters at your new employer, located in the electronics capital of the world.
>Get creative! Build your own games and devices in the sandbox. Engineering is hard! Take a break and play a brand-new twist on solitaire.
I didn't know an assembly game could be made; it's a pretty hard game that only programmers and very logical people would enjoy.
Thanks
It would be difficult to learn on your own, as explained in the article: you need a lot of specialized equipment, a cleanroom with a high cleanliness class, and a lot of very dangerous chemicals. (My brother once described what the hydrofluoric acid he used semi-regularly does to a person, and completely horrified our parents.)
Downside of this field is that there are very few job opportunities without relocating. If you're in the US, you can work at Intel... or Intel. Unless you're willing to move to Taiwan and work at TSMC.
It's been a while, but I remember the biggest danger isn't the acidity itself, not even being a strong acid, but fluorine's tendency to "deep dive". It just sort of slowly eats into things and creates layers that are comparatively hard to remove. So if you spill hydrochloric acid or whatever on yourself, you wash it off, maybe get some severe tissue damage, but it's localized and washes off.
On the other hand, the HF tends to stick around, and as a fun side-effect, the fluoride salts it creates are poisonous to the body. And HF is tame compared to some fluorine chemicals used in chip etching/production...
Only if we're talking specifically about cutting edge logic. There's Texas Instruments and GlobalFoundries in the US on the trailing edge for logic. In memory, where the fabrication techniques are similar, there's Micron, IMFT in Utah, and Samsung in Texas. Not to mention that TSMC is building a leading edge logic fab in Arizona.
And then of course there are all of the capital equipment suppliers, where the US punches way above its weight. Applied Materials, Lam Research, and KLA are all headquartered in California and employ a lot of the same talent that Intel does.
I guess the point I was trying to make in my trite (and I admit, inaccurate) statement was that it's not as accessible of a career path as, say, coding. It's more like becoming a rocket scientist: there are very few companies to pick from in that field. And they're not typically in the same place geographically.
Unfortunately the dot bomb happened right after I graduated, and the anti-intellectual backlash of the early 2000s killed independent research during the outsourcing era, which never recovered.
Sadly from my perspective, very little has changed in 20 years. Computers only reached about 3-4 GHz, and kept doubling down on single-threaded performance for so long that companies like Intel missed out on multicore. Only Apple with their M1 seems to have any will to venture outside of the status quo. The future is going to be 256+ symmetric cores with local memories that are virtualized to appear as a single coherent address space. But that could take another 20 years to get here.
Meanwhile we're stuck with SIMD now instead of MIMD, so we can't explore the interesting functional paradigms. Basically I see the world from a formal/academic standpoint, so I think in terms of stuff like functional programming, synchronous blocking communication, ray tracing, genetic algorithms, stuff like that. But the world went with imperative programming, nondeterministic async, rasterization, neural nets... just really complicated and informal systems that are difficult to scale and that personally I don't think much of. Like with software, honestly so much is wrong with the hardware world right now that it's ripe for disruption.
Also hardware was a dying industry 20 years ago. We wanted fully programmable FPGAs to make our own processors, but they got mired in proprietary nonsense. There really isn't a solution right now. Maybe renting time at AWS blah.
I feel a bit personally responsible for the lackluster innovation, because I wasn't there to help. I wasted those years working a bunch of dead-end jobs, trying to make rent like the rest of you. And writing text-wall rants on forums that nobody will ever read anyway. So ya, don't be like me. Get involved, go work for a startup or a struggling company that has the resources to fix chips, and most importantly, have fun.
What about multi-core/multi-threading combined with massively out-of-order CPUs? Intel's and AMD's chips have a dozen or so execution ports, so you can have your PADD running on one port and a PMUL on another. It all just happens behind the scenes.
Intel tried a VLIW architecture with Itanium, but it was a flop for a variety of reasons. One of which was the lack of “sufficiently smart compilers”. There’s also the benefit to all the nuances of execution being in hardware: programs benefit from new CPUs without having to be recompiled. It has a much more intimate knowledge of how things are going than the software does (or even the compiler).
I find this an interesting opinion, considering that the M1 is really just "the same, but a bit larger", i.e. slightly higher performance at a higher cost.
What exactly do you see with the M1 that makes it so different?
But I don't want all that. I just want a flat 2D array of the same core, each with its own local memory. Then just run OpenGL or Vulkan or Metal or TensorFlow or whatever the new hotness is in software. All Turing-complete computation is inherently the same, so I feel that working in DSLs is generally a waste of time.
Arm is a relatively simple core so scaling an M1 to over say 64 cores is probably straightforward, at least on the hardware side. People complain that chips like that are hard to program, but it's only because we're stuck in C-style languages. GNU Octave or MATLAB or any vector language is trivial to parallelize. Functional languages like Julia would also have no trouble with them.
Once we aren't compute-bound, a whole host of computer science problems become tractable. But we can't get there with current technology. At least not without a lot of pain and suffering. What we're going through now isn't normal, and reminds me a lot of the crisis that desktop software reached in the mid 90s with languages like Java just before web development went mainstream.
> But the world went with imperative programming, nondeterministic async, rasterization, neural nets... just really complicated and informal systems that are difficult to scale
...wat?
I've never heard of this. Could you elaborate, please?
America decided to double down on neoliberalism with the war on terror, so we've had endless bizarre legislation like the DMCA and PATRIOT act coinciding with our exploitation of developing countries and fear of the other. But we've only had a handful of the really important innovations like blue LEDs, lithium iron phosphate batteries, and enough Moore's Law to miniaturize computers into smart phones. We needed moonshots for stuff like cheap solar panels and mRNA vaccines a long time ago. We needed pure research that we didn't have. Yes we have these things today, but to me, having to wait around seemingly forever for them when we had the technology for this stuff in the 1980s, that looks like 20-40 years of unnecessary suffering.
For example, academia warned about the dangers of GMO foods and unpredictable side effects like autoimmune disease. Nobody ever listens or cares. Nobody cared when they warned about global warming or leaded gasoline either. But I am hopeful that this prolonged period of anti-intellectualism is finally ending and maybe the people standing in the way of progress are finally retiring. I've largely given up on real innovation from the tech world, so I've got my attention fixed on solarpunk now.
Generally the degrees are Electrical Engineering, with classes along the lines of https://ocw.mit.edu/courses/electrical-engineering-and-compu... (note that's from 2003, just an example)
There's also a ton of physics, and chemical and industrial engineering in the process steps.
I got hired on the architecture side for GPUs after working a few years. My academic background was computer graphics, with a focus on parallel algorithms and performance optimisation. After a couple of years working mostly on low-level code (C/C++ and assembly), I got a call from a recruiter.
The semiconductor industry is larger than just Intel and AMD. Like any job, taking some time to look around the career pages should give you a good idea what skills they are interested in.
https://www.nand2tetris.org/ is a nice introduction to how processors are put together. Book-wise, Hennessy and Patterson's books, Computer Organization and Design and Computer Architecture: A Quantitative Approach, are good for background. I never did much on the layout side, but learning Verilog and/or VHDL would be helpful, though not essential.
https://www.hackster.io/news/efabless-google-and-skywater-ar... https://www.electronicsweekly.com/news/business/diy-chip-10k...
Places like Shenzhen have a very good environment for this as well. https://www.youtube.com/watch?v=taZJblMAuko
Perhaps by studying Electronics Engineering (also called Computer Engineering, which is different from Computer Science).
At my university in USA, I remember recruiters from Intel setting up a stall or something to recruit students. It was in the building which mostly has computer science and computer engineering students, so I guess that's who they were looking for.
This was less than 5 years ago.
It isn't! The book "Code" by Charles Petzold is a great introduction to digital electronics and computer architecture. There's also the "Nand to Tetris" course (which I didn't take but people here are always recommending). You can build a simple CPU in a digital circuit simulator. If you're feeling adventurous you can write it in Verilog and simulate it, and even get it to run on a FPGA. This is all stuff you can teach yourself.
Of course this is not quite enough to make you a chip designer at AMD, but you'll know enough to get over the feeling that a microprocessor is an inscrutable artifact of alien technology brought from Alpha Centauri.
The relevant disciplines are physics (mostly condensed matter), inorganic chemistry, industrial engineering, and electronics.
Find a school that teaches semiconductor engineering and take their courses through VLSI and ASICs.
Then land a job at a fab and the rest is learn on the job training.
It’s like the difference between a PC board fab and an electronics design engineer, taken to the googol power.
One of the guys I did my senior design project with ended up working with AMD on processor stuff, so there are educational opportunities, I think you just need to be more on the CompEng/EE side of things and make it your focus.
Most of the students in these classes were graduate students, so with our normal course load as seniors in engineering, this was a tremendous effort. For a four-credit class I would sometimes have to work 20+ hours a week just on one class.
But, it was a good stepping stone to get into the industry - my first job was at LSI Logic executing physical design, timing closure, etc for their customers. I learned a lot but eventually stepped away from it to focus on software and startups - I didn't want to die at that desk - the designs and teams were getting bigger and the design cycles longer. I did not relish the idea of working for 3 years on a single project.
I do look back on it fondly though as it was closer to what I consider 'real' engineering - we did a ton of verification work and if you screwed up, it might be a million in mask costs and 3 months of time to fix. We did screw up from time to time and the customer often had some fixes, so on a new design, there were expected to be a couple iterations of prototypes before you went to production. I think the last design I taped out was in the 110nm node - ancient by today's standards.
https://www.howtoinventeverything.com/
What I got from this is that I could make homemade coal by myself, MAYBE. I don't know if there's any climate on Earth where I could eke out a net energy return on primitive crops. If there was no one telling me what to do, I would surely starve in early agricultural times. But hey, that's what the Pharaoh's for, amirite?
Basic bronze tools are a mind-numbing mess to mentally process.
https://bootstrapping.miraheze.org/wiki/Main_Page also feels relevant
Also, it took me two passes through to notice the toothbrush in step 16.
In fact, if you want to make something mostly flat with small features, some variation of the photo-etching process tends to be the easiest and most repeatable way to go about it.
>3) Now you have 98% concentrated silicon dioxide. Purify it to 99.9% pure silicon dioxide.
>4) Purify it further to 99.9999999% polysilicon metal.
>While cutting-edge nanometer scale features are not likely to be accessible for a hobbyist, micron-scale amateur chip fabrication does appear to be quite feasible. I have not tried this myself, but Sam Zeloof has, and you should definitely check out his YouTube channel. I think you could probably even build some basic chips with far less equipment than he has if you get the optics right. You could probably make it a hobby business selling custom chips to other tech people!
>A Word Of Caution: In case it wasn't already clear, I don't advise that anyone actually attempt making integrated circuits in their apartment in the manner shown in this video. The 'photoresist' and 'developer solution' in this video is just a colored prop. The real chemicals are usually hazardous and you should only work with them with proper safety gear in a well ventilated area or in a fume hood.
It's outdated, and in reality you would go to Shenzhen or use a custom fab to make custom-designed chips, with raw materials that are special, exotic, and only make sense for a scaled operation.
I highlighted steps 3 and 4 because that's not how it's done at all. High-grade silicon is obtained in a pure state and doped for the chips, rather than obtaining random grades and refining them.
It's not even easy compared to homemade nuclear reactors, which need a lot of natural uranium sources to enrich but can be done; the refinement is more akin to that of older germanium chips.
As for the design, one way is to re-use existing IP and join it together, e.g. see https://efabless.com/ etc.
High purity polysilicon is still produced by zone refining.
Even today China cannot match the quality of American chips and relies on US raw materials from Spruce Pine for manufacturing chips. It isn't optional; it's like saying you can make a 1:1 steak with nothing but beef bones or chicken. https://ashvegas.com/bbc-report-spruce-pines-high-quality-qu...
>This ultra-pure mineral is essential for building most of the world’s silicon chips – without which you wouldn’t be reading this article.
(A note in passing: the semi industry doesn’t use hyperpure silicon. They use a lesser grade and add epitaxial layers.)
The crucible can stand the high temperature of molten silicon. The purified polycrystalline silicon is melted in the crucible. Then a single crystal ‘seed’ is dipped in the molten silicon and slowly withdrawn, while rotating. That’s how you make a high purity single crystal silicon ingot.
Before it’s zone refined, the poly is synthesized by reducing high purity silane gas (SiH4), which was in turn produced from quartz sand.
It would be interesting to know if the industry is still using natural quartz crucibles - the latest wafer size is now 450 mm - nearly 18 inches. Maybe someone else here can comment whether the traditional pulling process will be used at 450 mm.
Fetch decode execute cycle. Registers. Memory. An instruction set. An assembler. And plug it all into an emulator to watch your factorial(n) at work!
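As a sketch of what that can look like, here's a minimal toy machine in Python: a few registers, a tiny made-up instruction set, a fetch-decode-execute loop, and a hand-assembled program that computes factorial(N). All of the opcodes and register names are invented for illustration.

```python
# Toy fetch-decode-execute emulator. The ISA is made up: each instruction is a
# tuple (opcode, operands...). Registers are R0..R3; no real memory needed here.

def run(program, max_steps=10_000):
    regs = {"R0": 0, "R1": 0, "R2": 0, "R3": 0}
    pc = 0  # program counter
    for _ in range(max_steps):
        op, *args = program[pc]          # fetch + decode
        pc += 1
        if op == "LOADI":                # LOADI Rd, imm      -> Rd = imm
            regs[args[0]] = args[1]
        elif op == "MUL":                # MUL Rd, Rs         -> Rd = Rd * Rs
            regs[args[0]] *= regs[args[1]]
        elif op == "ADDI":               # ADDI Rd, imm       -> Rd = Rd + imm
            regs[args[0]] += args[1]
        elif op == "JGT":                # JGT Rs, imm, addr  -> jump if Rs > imm
            if regs[args[0]] > args[1]:
                pc = args[2]
        elif op == "HALT":
            return regs
        else:
            raise ValueError(f"unknown opcode {op}")
    raise RuntimeError("program ran too long")

# factorial(N): R0 is the accumulator, R1 counts down from N.
N = 6
factorial = [
    ("LOADI", "R0", 1),      # acc = 1
    ("LOADI", "R1", N),      # i = N
    ("MUL",   "R0", "R1"),   # loop: acc *= i
    ("ADDI",  "R1", -1),     #       i -= 1
    ("JGT",   "R1", 1, 2),   #       if i > 1 goto loop (address 2)
    ("HALT",),
]

print(run(factorial)["R0"])  # -> 720
```

From there you can bolt on memory, a proper assembler, and I/O, and it quickly turns into something like the simulator linked below.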
Here’s one someone else made earlier:
https://www.peterhigginson.co.uk/AQA/
It’s rubbish. So is yours, but you’ve got to build it first before you can brag.
It's fantastic and I highly recommend watching it.
Jokes aside, it's fascinating to see how complex computers are under the many layers of abstraction that we've built on top of them.
Gave up on step 3: purify. Glad I did: each step after got more and more ridiculous.