If you’ve been around the internet for longer than Jayden Smith, you’re probably familiar with Moore’s Law. It’s often misquoted and often misunderstood, but its “law” status is rarely questioned. The most general way to state Moore’s Law is this: computing power tends to double approximately every two years. It gained prominence because people like laws that let them predict the future of one of the world’s biggest industries, but the physical basis for the principle makes it slightly different, and less reliable, than many people believe.
Though he did not give it that name, Moore’s Law was first proposed in a 1965 magazine article by Intel co-founder Gordon E. Moore. What it actually says is that the number of transistors that can be packed into a given unit of space will roughly double every two years. That prediction has held up impressively well, enabling everything from pocket-sized smartphones to Crysis 3, along with the continuing computerization of the economy.
Yet, restated as a prediction about the limits of human ability in physical manufacturing, and divorced from rather airy ideas like “computing power,” it becomes clear why Moore’s Law won’t necessarily always hold true. Remember that Moore originally predicted a doubling every year, then quickly amended this to every two years. Physical limitations on chip manufacturing could easily push that number back to five years or more, effectively invalidating Moore’s Law forever and revealing it to be nothing more than Moore’s Very Good But Ultimately Limited Prediction (MVGBULP).
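The doubling period matters enormously because the growth is exponential. A minimal sketch, using arbitrary round numbers rather than real chip data, shows how much the projection changes when the period slips from one year to two to five:

```python
# Illustrative only: how the doubling period changes projected growth.
# The 20-year span is an arbitrary round number, not a real roadmap.

def projected_growth(years, doubling_period_years):
    """Growth factor after `years` if capacity doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

span = 20  # years
for period in (1, 2, 5):
    factor = projected_growth(span, period)
    print(f"double every {period} yr -> x{factor:,.0f} after {span} years")
```

Over 20 years, doubling annually compounds to a factor of about a million, every two years to about a thousand, and every five years to a mere sixteen. Same "law," wildly different futures.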
Today, all consumer processors are made of silicon, the second most abundant element in the Earth’s crust after oxygen. But silicon is not a perfect conductor, and limits to the mobility of the electrons it carries impose a hard limit on how small, and how densely packed, silicon transistors can be. Not only does power consumption become a huge issue, but below a certain thickness an effect called quantum tunneling makes it impossible to keep electrons contained.
Outside of research facilities, silicon transistors don’t currently get smaller than 14 nanometers, and while some 10-nanometer chip designs might someday reach the market, it’s seen as a foregone conclusion that keeping up with Moore’s Law over the long term will require newer and better materials as the basis of next-generation computers.
One oft-cited example is graphene, or the rolled-up tubes of graphene called carbon nanotubes. Graphene is “atomically thin,” often called two-dimensional, so it allows a huge increase in how densely transistors could be packed. On the other hand, graphene has no useful bandgap: the energy difference between the valence and conduction bands that must be overcome to bump electrons between conducting and non-conducting states. That switching on and off is how silicon transistors work, and it is the entire basis for their method of computation.
If this problem can’t be offset in some way, a graphene computer would have to pioneer a whole new logical method for computing. One graphene chip from IBM proved to be incredibly fast, reportedly 10,000 times faster than a comparable silicon chip, but it was not a general-purpose processor. Since graphene transistors can’t easily be switched on and off at scale, we can’t simply swap graphene in for silicon and carry on with modern chip architectures.
Other materials may offer more practical reductions in size and electrical resistance, and could actually allow Moore’s Law to continue unbroken, but only if they hit the market quickly enough. Silicon-germanium, or even germanium alone, has been talked about for some time but has yet to materialize in any affordable form. It was recently discovered that a material called titanium trisulfide can provide many of the same physical advantages as graphene while offering an achievable bandgap. Such a super-material might be exactly what’s needed, but graphene-like manufacturing problems then rear their ugly heads.
Quantum computing could be another answer, but research is still so preliminary that it’s doubtful. Some believe quantum computers will offer such a huge and immediate upgrade over modern processors that computer encryption will come tumbling down. However, quantum computing won’t necessarily arrive in the form of a programmable digital computer; early quantum computers won’t be able to run Windows, even if they are more than fast enough in a theoretical sense. Of all the possible “solutions” to the looming problems with Moore’s Law, quantum computing is probably the least realistic. It has a lot of potential for specific applications, but quantum PCs are still too far out to be worth considering.
Moore himself admitted in a 2005 interview that his Law “can’t continue forever.” It’s the nature of exponential functions, he said: they can run on indefinitely in the purely hypothetical world of mathematics, but in the real world they eventually hit a wall. It could be that Moore’s Law will hold up when viewed on the century scale, zoomed out to diminish the importance of any small fluctuations between new technologies. But the fact remains that right now, we’re entering a lull as we wait for the next great processing tech to arrive.
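Moore’s point about walls can be pictured with a toy comparison, using entirely made-up units rather than real transistor counts: a pure exponential next to a logistic curve, the standard textbook model for growth that tracks an exponential early on and then flattens against a physical ceiling.

```python
# Illustrative only: unchecked exponential growth versus logistic growth,
# which saturates as it approaches a hard ceiling. Units are arbitrary.

def exponential(t, doubling_period=2.0):
    """Pure doubling every `doubling_period` time units."""
    return 2 ** (t / doubling_period)

def logistic(t, ceiling=1000.0, doubling_period=2.0):
    """Tracks the exponential at first, then flattens toward `ceiling`."""
    e = 2 ** (t / doubling_period)
    return ceiling * e / (ceiling - 1 + e)

for t in (0, 10, 20, 40):
    print(f"t={t:2d}  exponential={exponential(t):>12,.0f}  logistic={logistic(t):8.1f}")
```

Early on the two curves are nearly indistinguishable, which is why a trend can look like a law for decades; the divergence only shows up once the ceiling is near.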