
SteamOS, Ubuntu, or Windows 10: Which is fastest for gaming?

For years, game support on Linux has seriously lagged behind Windows, to the point that the OS was basically a non-option for anyone who wanted to game on a PC. In recent years, that's begun to change, thanks largely to Valve and SteamOS. From the beginning, Valve claimed that OpenGL could outperform D3D on Windows, and it has recently put a hefty push behind Vulkan, the Mantle-derived successor to OpenGL.

Two new stories took Linux gaming out for a spin against Windows 10, on a mixture of Intel and Nvidia hardware. Ars Technica dusted off its Steam Machine for a comparison running the most recent version of SteamOS, while Phoronix tested Intel's Skylake Core i5-6600K and its HD Graphics 530 under both operating systems. The results, unfortunately, point in the same direction: SteamOS and Ubuntu simply can't keep up with Windows 10 in most modern titles.

Benchmark data from Ars Technica

Ars tested multiple titles, but we've included the Source-based results here, because these are the games that the industry titan has direct control over. In theory, Valve's own games should show the clearest signs of any OpenGL advantage, if one existed. Obviously, it doesn't — L4D2 shows similar performance on both platforms, but TF2, Portal, and DOTA 2 all run clearly faster on Windows 10.

That doesn't mean Linux gaming hasn't come a long way in a relatively short period of time. All of these titles return playable frame rates, even at 2560×1600. There's a huge difference between "Windows 10 is faster than Linux" and "We can't compare Linux and Windows 10 because Linux and gaming are a contradiction in terms." It's also possible that Valve is throwing most of its weight behind Vulkan, and that future games using that API will be on a much stronger footing against DX12 titles on Windows.

The penguinistas at Phoronix also took Windows and Ubuntu out for a spin with Intel’s HD Graphics 530 and a Skylake processor. Again, the results are anything but pretty for Team Penguin — while some titles, like OpenArena, ran nearly identically, most 3D applications showed a significant gain for Windows 10. Again, driver support is a major issue; Intel’s Linux drivers remain limited to OpenGL 3.3, though OpenGL 4.2 support is theoretically forthcoming by the end of the year. Under Windows, OGL 4.4 is supported, which gives that OS a decided advantage in these types of comparisons.

A complex situation

There are two equally valid ways of looking at this situation. First, there's the fact that if you want to game first and foremost, Windows remains a superior OS to Mac or Linux, full stop. There is no Linux distribution or version of Mac OS X that can match the capabilities of Windows for PC gaming across the entire spectrum of titles, devices, and hardware — especially if you care about compatibility with older games, which can be persnickety in the best of times.

That conclusion, however, ignores the tremendous progress that we’ve seen in Linux gaming over a relatively short period of time. There are now more than a thousand titles available for Linux via Steam. If you’re primarily a Linux user, you’ve got options that never existed before — and as someone who hates dual-booting between operating systems and refuses to do so save when necessary for articles, I feel the pain of anyone who prefers to game in their own native OS rather than switching back and forth.

Furthermore, it’s probably not realistic to expect Valve to close the gap between Windows and Linux gaming. Not only does that assume that Valve can magically control the entire driver stack (and it obviously can’t), it also assumes that Valve does anything within a 1-2 year time frame (it doesn’t). The launch of Vulkan means that Linux users will get feature-parity and very similar capabilities to DX12 gamers on Windows, but Nvidia, AMD, and Intel will need to provide appropriate driver support to enable it. Hopefully, since Vulkan is based on Mantle, AMD will be able to offer support in short order.

In short, it’s not surprising to see that Windows still has a strategic and structural advantage over Linux, and we shouldn’t let that fact obscure the tremendous progress we’ve seen in just a handful of years.


Intel reportedly prepping 10-core Broadwell-E processors with 25MB L3 cache

In August 2014, Intel released the first Haswell-E processor, the Core i7-5960X. Unlike its predecessors, the Core i7-5960X jumped to eight cores and 16 threads — but the lower clock speeds this required paradoxically made the chip a less-than-great option for gamers. In many titles, the 4.4GHz Core i7-4790K was a better gaming CPU than the 5960X, whose eight cores came with a much lower top-end clock (3GHz base, 3.5GHz Turbo).

Intel is now working on the successor to Haswell-E, and if recent rumors are true, the company is going to address this discrepancy with the upcoming Broadwell-E. The upcoming family will launch with multiple SKUs that should address the needs of both gamers and other high-end users who have more use for threads and less for clock speed. According to Chinese site XFastest, the Core i7-6950X will be a 10-core, 20-thread CPU with a base clock of 3GHz, an unknown Turbo frequency, and 25MB of L3 cache. That’s two more cores than the current Core i7-5960X, with an equivalent clock speed and the same cache allocation on a per-core basis.

Data from XFastest

Haswell-E had three SKUs: the 5960X at 3GHz base / 3.5GHz Turbo with eight cores, the 5930K at 3.5GHz base / 3.7GHz Turbo with six cores, and the 5820K at 3.3GHz base / 3.6GHz Turbo, also with six cores. The 5960X had 20MB of L3, while the other two chips had 15MB each. The new chart implies that Intel will subdivide the market further, with an eight-core, 20MB chip at 3.3GHz and a brace of six-core chips at 3.6GHz and 3.4GHz respectively, each with 15MB of L3 cache. If true, this suggests that Intel wants to target enthusiasts hunting for more clock speed as well as those who may benefit from having more threads.

When Intel first announced that Haswell-E would move to an eight-core top-end configuration, there was some speculation that Intel might bring six-core chips to the conventional desktop line. So far, that hasn’t happened, and it’s not clear if it will, given the current realities of CPU design and the overall state of multi-threading in desktop applications. The four-core / eight-thread configuration that Intel has preferred since it launched Nehalem back in 2008 continues to offer an excellent overall balance of clock speed and performance, even if performance gains have materialized more slowly than we like. There’s little point, however, in pushing end-users towards higher core-counts for their own sake.

DirectX 12 could improve games' ability to spread work across many CPU cores. But given that laptops outnumber desktops, and that laptops are still almost entirely dual-core with Hyper-Threading, we don't see developers falling over themselves to make games that take advantage of 10-12 cores.

Broadwell-E is expected to be compatible with Haswell-E motherboards, though we will likely see a chipset refresh and a renewed push from the usual suspects. Broadwell-E should be a drop-in replacement for Haswell-E, but Skylake-E, when it eventually appears, will likely require a new motherboard.


Chip-making giant Intel reports 6% fall in net income

The world’s biggest chipmaker, Intel, reported a 6% fall in net income for the three months to September and cut its fourth quarter outlook for its important server-chip business.

Net income at the personal computer giant fell to $3.11bn (£2.03bn) from a year ago.

As its PC business has continued to slow, the firm has relied on sales of the chips that go into data servers.

But the firm said demand for its server-chips was slowing.

However, Intel said its latest quarterly numbers were largely in line with expectations and that the results were “solid”.

“We executed well in the third quarter and delivered solid results in a challenging economic environment,” said Intel’s chief executive Brian Krzanich.

The US-based firm also noted the introduction of its “breakthrough 3D XPoint technology, the industry’s first new memory category in more than two decades.”

Acquisitions

Reports have said that Intel's bid to buy Altera Corp for $16.7bn, in an attempt to expand parts of its chip business, could be given the go-ahead from the EU as soon as this week.

The deal had been cleared by the US Department of Justice, but there were several antitrust issues surrounding it.

Intel hoped that its buy-up of Altera would help boost its higher-margin chip business, particularly for data servers – and help it focus on chips for cars and watches, among other devices.

In a report released alongside its latest quarterly results, the firm said its outlook for the fourth quarter "does not include the potential impact of any business combinations, asset acquisitions, divestitures, strategic investments and other significant transactions that may be completed after October 13."


Apple responds to battery life concerns with its A9 SoCs

Yesterday, we covered reports from concerned iPhone 6s and 6s Plus owners, who have seen markedly different results between those devices built on Samsung’s 14nm node and those using TSMC’s 16nm. Apple has since released a statement covering these concerns in greater detail than we initially alluded to yesterday, and it’s worth considering how the company’s statements fit into the overall picture. Apple’s statement is reprinted below:

With the Apple-designed A9 chip in your iPhone 6s or iPhone 6s Plus, you are getting the most advanced smartphone chip in the world. Every chip we ship meets Apple’s highest standards for providing incredible performance and deliver great battery life, regardless of iPhone 6s capacity, color, or model.

Certain manufactured lab tests which run the processors with a continuous heavy workload until the battery depletes are not representative of real-world usage, since they spend an unrealistic amount of time at the highest CPU performance state. It’s a misleading way to measure real-world battery life. Our testing and customer data show the actual battery life of the iPhone 6s and iPhone 6s Plus, even taking into account variable component differences, vary within just 2-3% of each other.

Of benchmarks and battery life

Apple has a point when it says that benchmarks often don't track the real-world experience of actually using a device. The primary purpose of most benchmarks is to gather performance data, and the advent of modern benchmarking has its roots firmly in the pre-smartphone era, when battery life wasn't relevant to desktops and workstations. Even now, many battery life tests amount to "Repeat this workload until the phone dies."

Whether you use a light or heavy workload on a phone can have a profound impact on its battery life — and, by extension, on how the phone tests in comparison to other devices. Anandtech made this point in their own investigation:

Battery life comparison data from Anandtech

Compare the iPhone 5s against the iPhone 6. The iPhone 6's battery is 16% larger than the iPhone 5s's, but the iPhone 6's light-usage run time is almost 30% longer. Clearly, the newer silicon is more power efficient. Under heavy load, however, the iPhone 6's larger battery only manages to equal the iPhone 5s's total run time — not exceed it. Meanwhile, the iPhone 6 Plus's heavy run time is worse than the Galaxy Note 5's, but more than 90 minutes better in light usage.
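
To see why this implies an efficiency gain, note that run time scales roughly with battery capacity divided by average power draw. Here's a minimal back-of-the-envelope sketch using the approximate percentages above (the linear scaling assumption is ours, not Anandtech's):

```python
# Rough check on the run-time figures above (illustrative only).
battery_growth = 1.16  # iPhone 6 battery ~16% larger than the 5s's
runtime_growth = 1.30  # light-usage run time ~30% longer

# Run time ~ capacity / average draw, so the implied light-use draw is:
power_ratio = battery_growth / runtime_growth
print(f"Light use: ~{power_ratio:.2f}x the iPhone 5s's average draw")

# Under heavy load, run time merely matched the 5s despite the larger
# battery, implying draw grew roughly in step with capacity:
print(f"Heavy use: ~{battery_growth:.2f}x the iPhone 5s's average draw")
```

In other words, the newer chip draws roughly 10% less power at light loads, while spending its larger battery on higher peak draw under sustained load.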

This is why it’s impossible to dismiss Apple’s response as “You’re holding it wrong,” despite the tone-deaf way the company communicated its statement. If a battery test doesn’t accurately capture the way people use the phone, it’s a bad benchmark. It may accurately measure power consumption between two devices in a stated workload, but the entire point of such workloads is to actually capture real-world conditions.

Thus far, the battery tests that have been floated have involved looping a JavaScript test and Geekbench's fixed-load test, which apparently stresses the iPhone 6s Plus at a fairly constant 30% CPU load. Neither of these is particularly representative of real-world conditions. In fact, in the one test we've seen where real-world loading was performed (a 60-minute video playback test), both iPhones lost the same amount of battery life. This implies that in at least some conditions, power consumption between the two devices is basically identical.

Heat and variability

There are two potential factors that could be causing Samsung devices to exhibit poorer performance under load than their TSMC equivalents. The first, which we alluded to in our initial article, is heat. Transistors that are packed together more tightly naturally concentrate more heat into smaller areas. There's a clear and known relationship between temperature and power consumption (leakage current rises as a chip gets hotter), and while the exact relationship varies from chip to chip and node to node, the impact is significant.

Image by Anandtech forum user idontcare

The second factor that comes into play here is variability. It's important to understand that while we talk about Apple building an A9 processor in the same way that we might discuss Ford building an engine, there are some critical differences between the two. When TSMC, Intel, or Samsung builds a wafer of chips, it doesn't automatically "know" what kind of chips it has. Each company tests its silicon to determine how good (or bad) the wafer is. Good chips are those that can run at the target voltage and clock speeds with the desired power consumption. Great chips are those that can run at dramatically lower power consumption or hit higher clock speeds, while bad chips are those that consume too much power or simply can't reach target frequencies.

Each company has different methods of recovering useful dies from poor samples, whether that means disabling some of the cache, disabling one of the cores, or using the chip in a desktop system where battery power isn't such a concern. The important thing to understand is that variability has been getting steadily worse with every product generation. To understand why, consider a hypothetical scenario in which a "good" transistor contains between 100 and 200 atoms of a dopant material, a "great" transistor contains between 140 and 160 atoms, and a bad transistor (one that won't meet desired specifications) has either fewer than 100 or more than 200. In this example, these numbers correspond to an older process node — say, 45nm.


Now, imagine this same situation, but with very different numbers. In our second example, a good transistor contains between 20 and 40 atoms of the doping material, a great transistor has between 28 and 32 atoms, and a bad transistor is any transistor with fewer than 20 or more than 40. It's much, much harder to control the distribution of 20 atoms than it is to control the distribution of 100 atoms. Remember that, since 14nm chips have far more transistors than 45nm chips, it's not just a question of tighter control — the process has to be closer to perfect to keep fail rates under control. This is why modern chips are sometimes designed with built-in logic redundancy: if one component of a chip doesn't pass muster, there are duplicate units ready to go.
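
A quick Monte Carlo sketch makes this concrete. The atom counts below mirror the hypothetical 45nm and 14nm examples above (they are illustrative numbers, not real process data); dopant counts are roughly Poisson-distributed, so the spread only falls with the square root of the mean while the acceptance window shrinks proportionally:

```python
# Monte Carlo sketch of why fewer dopant atoms means more variability.
import random

def fail_rate(mean_atoms, lo, hi, trials=100_000):
    """Fraction of transistors whose dopant count falls outside [lo, hi]."""
    fails = 0
    for _ in range(trials):
        # Gaussian approximation to a Poisson draw (sigma = sqrt(mean))
        count = random.gauss(mean_atoms, mean_atoms ** 0.5)
        if not lo <= count <= hi:
            fails += 1
    return fails / trials

print(f"'45nm' example: {fail_rate(150, 100, 200):.4%} fall outside 100-200")
print(f"'14nm' example: {fail_rate(30, 20, 40):.4%} fall outside 20-40")
```

Both acceptance windows span the same ±33% around the mean, yet the fail rate jumps from a few per hundred thousand to several percent, and that's before accounting for the far larger number of transistors per die.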

Here's what this means, in aggregate: While we are certain that Apple still strictly targets certain ranges for its parts, we'd expect to see greater variation in run time and battery life between TSMC and Samsung hardware, because even a company as legendarily strict as Apple has to accept the laws of physics.

What does this mean for TSMC vs. Samsung?

Thus far, Apple's official position is that there is no difference between TSMC and Samsung devices. We suspect that if the company breaks from this stance, it will be because of heat differences between the two devices rather than performance metrics. There are subtle ways to adjust performance to cut down on skin temperature, and it may be possible to create power rules for the Samsung devices that are different from those used for TSMC parts.

The one thing we'll stick to is that this variation is almost certainly why Apple was forced to dual-source its hardware in the first place. What will be interesting is seeing whether or not this issue continues with later iterations of the phone. Samsung and TSMC are both steadily improving yields on 16/14nm, and we'll see those improvements reflected in devices — even if Apple never announces that its later products have better power consumption or lower temperatures than the earlier ones.


Smaller, Faster, Cheaper, Over: The Future of Computer Chips

At the inaugural International Solid-State Circuits Conference held on the campus of the University of Pennsylvania in Philadelphia in 1960, a young computer engineer named Douglas Engelbart introduced the electronics industry to the remarkably simple but groundbreaking concept of “scaling.”

Dr. Engelbart, who would later help develop the computer mouse and other personal computing technologies, theorized that as electronic circuits were made smaller, their components would get faster, require less power and become cheaper to produce — all at an accelerating pace.

Sitting in the audience that day was Gordon Moore, who went on to help found the Intel Corporation, the world’s largest chip maker. In 1965, Dr. Moore quantified the scaling principle and laid out what would have the impact of a computer-age Magna Carta. He predicted that the number of transistors that could be etched on a chip would double annually for at least a decade, leading to astronomical increases in computer power.


A wafer of Nehalem processors, introduced by Intel in 2008. Credit: Intel

His prediction appeared in Electronics magazine in April 1965 and was later called Moore’s Law. It was never a law of physics, but rather an observation about the economics of a young industry that ended up holding true for a half-century.

One transistor, about as wide as a cotton fiber, cost roughly $8 in today’s dollars in the early 1960s; Intel was founded in 1968. Today, billions of transistors can be squeezed onto a chip the size of a fingernail, and transistor costs have fallen to a tiny fraction of a cent.

That improvement — the simple premise that computer chips would do more and more and cost less and less — helped Silicon Valley bring startling advances to the world, from the personal computer to the smartphone to the vast network of interconnected computers that power the Internet.

In recent years, however, the acceleration predicted by Moore’s Law has slipped. Chip speeds stopped increasing almost a decade ago, the time between new generations is stretching out, and the cost of individual transistors has plateaued.

Technologists now believe that new generations of chips will come more slowly, perhaps every two and a half to three years. And by the middle of the next decade, they fear, there could be a reckoning, when the laws of physics dictate that transistors, by then composed of just a handful of molecules, will not function reliably. Then Moore’s Law will come to an end, unless a new technological breakthrough occurs.

To put the condition of Moore’s Law in anthropomorphic terms, “It’s graying, it’s aging,” said Henry Samueli, chief technology officer for Broadcom, a maker of communications chips. “It’s not dead, but you’re going to have to sign Moore’s Law up for AARP.”

In 1995, Dr. Moore revised the doubling rate to two-year intervals. Still, he remains impressed by the longevity of his forecast: “The original prediction was to look at 10 years, which I thought was a stretch,” he said recently at a San Francisco event held to commemorate the 50th anniversary of Moore’s Law.
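
Even at the slower two-year cadence, the compounding is staggering. Here's a minimal sketch, using the roughly 2,300 transistors of Intel's 4004 as a 1971 baseline (our choice of starting point, for illustration):

```python
# Compound doubling at a two-year cadence, starting from the roughly
# 2,300 transistors of Intel's 4004 in 1971 (figures are approximate).
start_year, start_count = 1971, 2300
doublings = (2015 - start_year) // 2          # 22 doublings by 2015
projection = start_count * 2 ** doublings
print(f"{doublings} doublings -> ~{projection / 1e9:.1f} billion transistors")
# ~9.6 billion: the right order of magnitude for today's largest chips.
```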

But the ominous question is what will happen if that magic combination of improving speeds, collapsing electricity demand and lower prices cannot be sustained.

The impact will be felt far beyond the computer industry, said Robert P. Colwell, a former Intel electrical engineer who helped lead the design of the Pentium microprocessor when he worked as a computer architect at the chip maker from 1990 to 2000.

“Look at automobiles, for example,” Dr. Colwell said. “What has driven their innovations over the past 30 years? Moore’s Law.” Most automotive industry innovations in engine controllers, antilock brakes, navigation, entertainment and security systems have come from increasingly low-cost semiconductors, he said.

These fears run contrary to the central narrative of an eternally youthful Silicon Valley. For more than three decades, the industry has argued that computing will get faster, achieve higher capacity and become cheaper at an accelerating rate. It has been described as "Internet time" and even as the Singularity, a point at which computing power surpasses human intelligence, an assertion held with near-religious conviction by many in Silicon Valley.


When you’re thinking that big, bumping into the limits of physics could be a most humbling experience.

“I think the most fundamental issue is that we are way past the point in the evolution of computers where people auto-buy the next latest and greatest computer chip, with full confidence that it would be better than what they’ve got,” Dr. Colwell said.

The Limits of Physics

Chips are made from metal wires and semiconductor-based transistors — tiny electronic switches that control the flow of electricity. The most advanced transistors and wires are smaller than the wavelength of light, and the most advanced electronic switches are smaller than a biological virus.

Chips are produced in a manufacturing process called photolithography. Since it was invented in the late 1950s, photolithography has constantly evolved. Today, ultraviolet laser light is projected through glass plates that are coated with a portion of a circuit pattern expressed in a metal mask that looks like a street map.

Each map makes it possible to illuminate a pattern on the surface of the chip in order to deposit or etch away metal and semiconducting materials, leaving an ultrathin sandwich of wires, transistors and other components.

The masks are used to expose hundreds of exact copies of each chip, which are in turn laid out on polished wafers of silicon about a foot in diameter.

Machines called steppers, which currently cost about $50 million each, move the mask across the wafer, repeatedly exposing each circuit pattern to the surface of the wafer, alternately depositing and etching away metal and semiconducting components.

A finished computer chip may require as many as 50 exposure steps, and the mask must be aligned with astonishing accuracy. Each step raises the possibility of infinitesimally small errors.
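
That compounding of per-step risk is easy to quantify: if each exposure step independently succeeds with some probability, the overall yield is that probability raised to the number of steps. A minimal sketch with invented per-step figures (not real fab data):

```python
# Why 50 exposure steps is so demanding: per-step yields compound
# multiplicatively, so tiny per-step error rates add up fast.
def cumulative_yield(per_step_yield: float, steps: int = 50) -> float:
    """Probability a die survives every exposure step without a defect."""
    return per_step_yield ** steps

for per_step in (0.999, 0.995, 0.99):
    print(f"{per_step} per step -> {cumulative_yield(per_step):.1%} after 50 steps")
# 0.999 -> 95.1%, 0.995 -> 77.8%, 0.99 -> 60.5%
```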

“I’ve worked on many parts of the semiconductor process,” said Alan R. Stivers, a physicist whose career at Intel began in 1979 and who helped introduce a dozen new semiconductor generations before retiring in 2007. “By far, lithography is the hardest.”

To build devices that are smaller than the wavelength of light, chip makers have added a range of tricks like “immersion” lithography, which uses water to bend light waves sharply and enhance resolution. They also have used a technique called “multiple pattern” lithography, which employs separate mask steps to sharpen the edges and further thin the metal wires and other chip components.


As the size of components and wires has shrunk to just a handful of molecules, engineers have turned to computer simulations that require tremendous computational power. "You are playing tricks on the physics," said Walden C. Rhines, chief executive of Mentor Graphics, a Wilsonville, Ore., design automation software firm.

If that scaling first described by Dr. Engelbart ends, how can big chip companies avoid the Moore’s Law endgame? For one, they could turn to software or new chip designs that extract more computing power from the same number of transistors.

And there is hope that the same creativity that has extended Moore’s Law for so long could keep chip technology advancing.

If silicon is, in the words of David M. Brooks, a Harvard University computer scientist, “the canvas we paint on,” engineers can do more than just shrink the canvas.

Silicon could also give way to exotic materials for making faster and smaller transistors and new kinds of memory storage as well as optical rather than electronic communications links, said Alex Lidow, a physicist who is chief executive of Efficient Power Conversion Corporation, a maker of special-purpose chips in El Segundo, Calif.

There are a number of breakthrough candidates, like quantum computing, which — if it became practical — could vastly speed processing time, and spintronics, which in the far future could move computing to atomic-scale components.

Recently, there has been optimism about a new manufacturing technique known as extreme ultraviolet, or EUV, lithography. If it works, EUV, whose 13.5-nanometer wavelength is roughly one-thirtieth that of the shortest visible light waves, will permit even smaller wires and features while simplifying the chip-making process.

But the technology still has not been proved in commercial production.

Earlier this year ASML, a Dutch stepper manufacturer partly owned by Intel, said it had received a large order for EUV steppers from a United States customer that most people in the industry believe to be Intel. That could mean Intel has a jump on the rest of the chip-making industry.

Intel executives, unlike major competitors such as Samsung and Taiwan Semiconductor Manufacturing Company, or TSMC, insist the company will be able to continue to make ever-cheaper chips for the foreseeable future. And they dispute the notion that the price of transistors has reached a plateau.

Yet while Intel remains confident that it can continue to resist the changing reality of the rest of the industry, it has not been able to entirely defy physics.


“Intel doesn’t know what to do about the impending end of Moore’s Law,” said Dr. Colwell.

In July, Intel said it would push back the introduction of 10-nanometer technology (a human hair, by comparison, is about 75,000 nanometers wide) to 2017. The delay is a break with the company’s tradition of introducing a generation of chips with smaller wires and transistors one year, followed by adding new design features the next.

“The last two technology transitions have signaled that our cadence is closer to two and a half years than two years,” Brian Krzanich, Intel’s chief executive, said in a conference call with analysts.

No More ‘Free Ride’

The glass-is-half-full view of these problems is that the slowdown in chip development will lead to more competition and creativity. Many semiconductor makers do not have the state-of-the-art factories now being designed by four chip manufacturers, GlobalFoundries, Intel, Samsung and TSMC.

The delays might allow the trailing chip makers to compete in markets that don’t require the most bleeding-edge performance, said David B. Yoffie, a professor at Harvard Business School.

And even if shrinking transistor size doesn’t make chips faster and cheaper, it will lower the power they require.

Ultra-low-power computer chips that will begin to appear at the end of this decade will in some cases not even require batteries — they will be powered by solar energy, vibration, radio waves or even sweat. Many of them will be sophisticated new kinds of sensors, wirelessly woven into centralized computing systems in the computing cloud.

What products might those chips lead to? No one knows yet, but product designers will be forced to think differently about what they’re building, rather than play a waiting game for chips to get more powerful. Thanks to Moore’s Law, computers have gotten smaller and smaller but have essentially followed the same concept of chips, hardware and software in a closed box.

“In the past, designers were lazy,” said Tony Fadell, an electrical engineer who headed the team that designed the original iPod, and led the hardware design of the iPhone before founding Nest Labs, a maker of smart home devices like thermostats and smoke alarms.

Carver Mead, the physicist who actually coined the term Moore’s Law, agrees. “We’ve basically had a free ride,” he said. “It’s really nuts, but that’s what paid off.”

Indeed, a graying Moore’s Law could be alive and well for at least another decade. And if it is not, humans will just have to get more creative.


Intel to End Sponsorship of Science Talent Search

Intel, the world’s largest maker of semiconductors, is dropping its longtime support of the most prestigious science and mathematics competition for American high school students.

The contest, called the Science Talent Search, brings 40 finalists to Washington for meetings with leaders in government and industry and counts among its past competitors eight Nobel Prize winners, along with chief executives, university professors and award-winning scientists.

Over the years, the award for work in so-called STEM fields — science, technology, engineering and mathematics — has made national headlines and been an important indicator of America’s educational competitiveness and national priorities. When it was started as an essay competition in 1942, its first topic was “How science can help win the war.” The male winner, or “Top Boy,” went on to develop an artificial kidney. The “Top Girl” became an ophthalmologist. A single winner was first named in 1949.

“When I was a finalist in 1961, it was the Sputnik generation, when America was competing with Russia to get into space,” said Mary Sue Coleman, a former president of the University of Michigan and a current member of the board of the Society for Science and the Public, which administers the contest. “It was a national obsession. People in school cheered us on like we were star athletes. I got letters from the heads of corporations.”


Dropping support for the high school contest is a puzzling decision by Intel, since the contest costs about $6 million a year — about 0.01 percent of Intel's $55.6 billion in revenue last year — and it generates significant goodwill for the sponsoring organization. Intel has also increased the size and scope of the award, giving more than $1.6 million annually to students and schools, compared with $207,000 when it began its sponsorship in 1998.

The Silicon Valley giant took over sponsorship of the award with great fanfare from Westinghouse, becoming only the second company to back the prize in its 73-year history. At the time it was seen as something of a passing of the torch in American industry, to a company then at the heart of the Information Age from one renowned for industrial work in things like nuclear power plants.

Craig Barrett, a former chief executive of Intel, is even a member of the board of the Society for Science and the Public. He said he was “surprised and a little disappointed” by Intel’s decision.

“It’s such a premier event in terms of young people and technology,” Mr. Barrett said. “But they appear to be more interested in applied things, like” Maker Faire, an all-ages event that showcases homemade engineering projects.

Mr. Barrett said he had talked with Brian M. Krzanich, Intel’s chief executive for the last two years, about the contest. Though Mr. Barrett thought it was inappropriate to aggressively lobby his old employer, he termed the annual cost “a rounding error” against Intel’s finances.

“My only comment to Brian was that we’d move forward,” said Mr. Barrett, who became Intel’s chief executive in 1998 and retired as chairman of Intel’s board in 2009. He now runs a chain of charter schools, called Basis, from Phoenix.

There is little indication that the contest has lost its prestige. Applications have held steady at around 1,800 a year for a decade. And in March, President Obama met with the Talent Search finalists at the White House.

Gail Dundas, a spokeswoman for Intel, could not say why it was ending its support, but she said the company, which has struggled with a shift to mobile computing devices but is still one of the tech industry’s most influential names, is “proud of its legacy” in supporting the award.

The Science Talent Search is open to any student in the United States or its territories in his or her last year of secondary school. Independent individual research by thousands of students is narrowed down to 300 semifinalists. Of those, 40 finalists are chosen.

Previous finalists include Ray Kurzweil, a well-known author and director of engineering at Google, and Brian Greene, a best-selling science writer. Thomas Leighton, the chief executive of the Internet company Akamai, was a finalist and is now on the society’s board.

The finalists travel to Washington, where they present their work, meet government and private sector leaders and have their projects reviewed by a panel of judges. There were nine top awards in 2015, worth $35,000 to $150,000.


This year, Intel gave out three first prizes to highlight the variety of the research conducted. One student developed an algorithm to study adaptive mutations across the human genome. Another studied how phonons, the basic particles of sound, interact with electrons.

“They have been an excellent partner for almost 20 years, but their corporate priorities have changed,” said Maya Ajmera, president of the Society for Science and the Public.

According to more recent winners, Intel received a benefit besides publicity — it got to teach the young stars more about Intel.

"They showed us stuff they were doing with wearable technologies and machine learning," a type of artificial intelligence, said Noah Golowich, a freshman at Harvard. He shared this year's prize for his work in a branch of mathematics known as Ramsey theory, which finds structure in complex systems. "I didn't know much about all the things Intel does before I went to Washington."

Ms. Ajmera said her group would start looking for a new corporate sponsor on Wednesday. “We pride ourselves on recognizing thousands of leaders in science and technology and hope to keep doing so,” she said.

Other board members expressed confidence that national competition would produce another corporate sponsor.

Ms. Coleman was a finalist in 1961 for researching drug-resistant bacteria. First prize that year was awarded to a study of bowing in the courtship behavior of the male ring dove.

She said she was “very aware” that Larry Page, co-founder and chief executive of Google, is a Michigan graduate and that Google might be a candidate. “This isn’t a huge amount of money for what it represents,” she said. “I assume another corporation will step up to this.”

Intel informed the group of its decision about 18 months ago, she said, and it will continue to support the award through 2017, in keeping with an earlier contract.

Intel will continue to support a separate talent search aimed at international student competition at least through 2019, which is Intel’s contractual term, said Ms. Dundas, the Intel spokeswoman.

In addition to the Intel-sponsored prize, the society also runs a science and technology competition for middle school students, financed by the Broadcom Foundation. Although Broadcom, another semiconductor company, was bought this year, the Broadcom Foundation is independent and will continue to support the prize.

“Intel’s interests have changed,” said Ms. Coleman. “But we still think this is a very attractive prize to a number of corporations. It is still really important for the nation.”


Intel to invest $50 million in quantum computer research

Intel CEO Brian Krzanich released an open letter today, pledging to dedicate $50 million to long-term research of quantum computing. The CPU giant is partnering with TU Delft, the largest and oldest Dutch public technical university, and will work with QuTech, TU Delft’s quantum research institute. Intel is also pledging to dedicate its own resources and engineers to solving the problems of quantum computing.

It might seem odd to see Intel pumping so much money into quantum computing research, given that D-Wave's systems have been tested and largely verified to be quantum computers. D-Wave's devices, however, have some significant limitations. The number of qubits has grown fairly quickly, but the total number of connections between the qubits hasn't scaled at the same rate — and it's the connections between qubits that dictate the complexity and nature of the problems the computer can actually solve. D-Wave's systems are sparsely connected, which vastly simplifies routing and construction but also limits the real-world use cases of the computer.
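
To see how fast that gap opens, compare a sparse layout, in which each qubit couples to a fixed handful of neighbors, against full connectivity. The sketch below uses a simplified fixed-degree model (the degree-6 figure is our rough approximation of D-Wave's Chimera graph, not its exact topology):

```python
# Illustrative comparison of coupler counts: a sparse layout where each
# qubit touches a fixed number of neighbors vs. full connectivity.
def sparse_couplers(n_qubits: int, degree: int = 6) -> int:
    """Couplers when every qubit connects to a fixed number of neighbors."""
    return n_qubits * degree // 2

def full_couplers(n_qubits: int) -> int:
    """Couplers needed to connect every qubit to every other."""
    return n_qubits * (n_qubits - 1) // 2

for n in (128, 512, 1024):
    print(f"{n:5d} qubits: sparse ~{sparse_couplers(n):5d}, full {full_couplers(n):7d}")
```

Sparse coupler counts grow linearly with qubit count while full connectivity grows quadratically, which is why adding qubits alone doesn't expand the class of problems the machine can embed.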


D-Wave's devices are one type of quantum computer, called an annealer, but an annealer is not the only type of quantum computer that might theoretically be constructed, nor the best for every kind of potential task. The challenges of building these devices, however, are considerable. Because quantum computation is extremely easy to disrupt, D-Wave chills its hardware to near absolute zero with helium-based dilution refrigerators. Intel hasn't stated which kind of devices it wants to investigate, but room-temperature quantum computing isn't possible (at least, not as far as we know).

These types of computers, then, aren’t the kind of hardware that slots into a smartphone or that you’re likely to have sitting on your desk. In some ways, a functional quantum computer would resemble the hardware of the 1950s and 60s — huge installations with enormous power needs, fixed locations, and high operating costs. The reason that Intel and other manufacturers are so interested in building them anyway is because quantum computers can be used to solve certain problems that are so fiendishly difficult, it would require billions or trillions of years to accurately answer them using traditional transistors and cutting-edge equipment.

Intel's infographic on quantum computing applications

Even if you think Moore's Law will pick up steam again at some point, the time scales involved make conventional transistors ill-suited to the task. As the Intel-provided infographic above points out, there are a number of other specialized applications for quantum computing as well, such as theoretically unbreakable cryptography (with the side effect that most existing public-key encryption schemes could be broken by a full-scale quantum computer).

As early quantum computers come online, we're beginning to get a basic sense of how quickly they can operate and what types of problems they solve best. Ars Technica recently covered updates to ongoing efforts to benchmark D-Wave systems, which illustrate how understanding how a quantum computer works, and what kinds of answers it can provide, significantly changes the way we benchmark and test such systems. Ongoing research into the practical systems we can build today will guide further work on the blue-sky projects of tomorrow. As Krzanich notes, "Fifty years ago, when Gordon Moore first published his famous paper, postulating that the number of transistors on a chip would double every year (later amending to every two years), nobody thought we'd ever be putting more than 8 billion of them on a single piece of silicon. This would have been unimaginable in 1965, and yet, 50 years later, we at Intel do this every day."

The physics of cryogenic cooling make it unlikely that we'll have quantum smartphones 50 years from now — but that doesn't mean quantum hardware won't be pushing the frontiers of human knowledge and our understanding of the universe.


Intel’s new 5×5: Tiny form factor, socketed CPU

For years, mini-ITX has been the smallest mainstream system form factor that enthusiasts could reasonably buy. Intel wants to change that with its new 5×5 initiative, and it’s offering the new platform with socketed CPUs rather than relying solely on soldered parts. This could prove a potent selling point, since soldered systems are often less attractive to customers who want the option to upgrade the integrated CPU.

Despite the 5×5 name, the board actually measures about 5.5 by 5.8 inches, but it packs a number of features. The board can handle chips with TDPs up to 65W, offers two SO-DIMM slots for memory and M.2 support for storage, and includes both wired and wireless networking options. The 5×5 can also use an external DC power supply. Intel says the mounting holes are "standard," which implies that they conform to existing hardware mounts, but we haven't seen hardware yet to verify exactly which cases will be able to mount a 5×5 board.

Socket support, higher TDPs improve value proposition

If you consider Intel's 5×5 in the context of Skylake's increased graphics performance, the offering makes more sense. Intel's GPU performance has been growing substantially faster than its CPU performance, but that's not much of a benefit in a soldered system without PCI-Express slots. With Skylake, Intel is continuing to push the graphics envelope — and a socketed system theoretically offers lower-end gamers the ability to buy a CPU today and upgrade later to a chip with faster integrated graphics.


The reason that gain is theoretical, however, is that Intel doesn't exactly have a great track record when it comes to supporting multiple generations of processors on the same platform. It's not clear yet whether current Skylake motherboards will support Kaby Lake, the 14nm refresh now scheduled for 2016. A solution like this would be more exciting if we knew that Intel would offer multiple generations of parts with improved integrated graphics — especially if those parts would fit into a 65W TDP.

The 5×5 probably won’t revolutionize small system design — Intel NUCs and the existing mini-ITX standard fill the low-power market fairly well — but the ability to upgrade to a faster socketed processor in the miniature form factor could sell some living room computer enthusiasts on Skylake as a Steam Box processor. Intel claims that the smallest system form factors will fit into 0.85L worth of volume, which should let it slip inconspicuously into a living room or entertainment system.


Intel: Putting Innovation Back in the Hands of the Innovators

Intel is moving to empower makers of all types with its tools, and with funding for things like the America's Greatest Makers TV show. There is a realization that Intel was created by some of the greatest makers who ever lived, and that there's a revolution pushing innovation into homes and garages all over the world. At its heart, IDF this year was all about that. Makers rule!!

We are living in an amazing time, but many of us seem to take it for granted. We have private spaceships (although they blow up more often than I’d like). Self-driving cars are on the road, even though we can’t buy them yet, and there are plans for a 12-mile-high inflatable building.

Granted, a number of us are kind of convinced it will end up looking like a giant version of the inflatable fan-blown stick man that you often see outside of car dealerships. It kind of sounds like one of those ideas folks come up with when they've been partying too much.

The Intel Developer Forum kind of reminded me of a mini-World’s Fair this year, with three floors of ongoing entertainment and demonstrations. It was actually a ton of fun; it started with the new CEO giving a killer keynote and ended with my friend Genevieve Bell — Intel’s secret weapon futurist — talking about how makers are helping treat Ebola.

Following are some of the highlights and a look into our future.

Intel TV

Intel TV isn't like Apple TV — I'm talking about an Intel TV show. Yep, Intel is funding a reality show contest with a prize of, wait for it, one million dollars. It sounds a little like American Idol, but it will be focused on makers. America's Greatest Makers should do for geeks what American Idol did for a few artists: put them on the map.

It probably will have a little bit of Shark Tank in the judging, but we’ll see kids and adults from all over competing over who can build the coolest — and likely the most marketable — gadget. Were the decision up to me, I’d just focus on cool, forward-looking and entertaining. That would be a ton more fun, and the more practical inventions already have crowdfunding as a more reliable way to get lots of cash.

It will be interesting to see how Intel develops an invention taxonomy, though, because if the products get too diverse or too practical, the show might become confusing or boring.

3D Printers Everywhere

Almost everywhere you looked at IDF, there were 3D-printed objects scanned with Intel's RealSense camera, which is moving from tablets to laptops. They ranged from robotic spiders that followed commands (and did kind of look like they wanted to rebel after being made to dance to music for hours), to a huge mother spider about the size of a small pony (clearly these folks didn't watch Stargate), to robotic scarabs (they didn't watch The Mummy either), to a pair of cool robotic owls (I don't have a problem with Harry Potter).

The owls responded to tweets the inventor was concerned about — in this instance IDF and shark attacks. At this show, I would have picked killer robotic spiders or flesh-eating robotic scarabs, but that’s just me. If I were to get a vote, I’d vote for robotic puppies or kittens next year.

In any case, this just showcased what you could do with a few actuators, lights and a controller. There were dancing armies of spiders and scarabs, and no one was bitten or eaten during the entire event — though Brian Krzanich did say, several times, that these robots wouldn't end the world. Come to think of it, that didn't mean Intel didn't have upgraded models that weren't being mentioned.

There was one station where you could get scanned and then printed in a block of laser-etched clear plastic. That was pretty amazing, and at several points during the event, the wait was more than three hours long.

Collaborative Cancer Cloud

One of the most important announcements at the show was the formation of the Collaborative Cancer Cloud with Oregon Health & Science University.

Cancer scares the hell out of me, because one out of two men and one out of three women will get it, and a lot of us won’t survive the experience. Right now there are cures that aren’t getting to people, because the cancer research centers’ massive databases aren’t connected or broadly searchable. The people who most need these massive storehouses of knowledge can’t access them.

The Collaborative Cancer Cloud, which will launch next year, initially will connect three of these large cancer centers, and it is designed to connect all of them eventually. When it's complete, we should be able to bring to bear the full power of our collective knowledge in this area, and far more of us will survive — or maybe never even get — cancer.

Gaming Machines to Die For

Both Intel and Microsoft seemed to abandon the PC gaming market when the Xbox launched, and I personally thought it was a huge mistake — epic, actually. Well, the good news is both companies are reinvesting in it, and I saw the result of Intel’s renewed focus at IDF.

Intel showed off some amazing gaming rigs. One that was custom-built in Sacramento looked like a sculpture, and I got to talk to the guy who commissioned it. The labor cost alone was US$5,000 — and that was without any of the parts. Water-cooled using a Fiat radiator, it’s one of the most amazing machines I’ve ever seen.


There was an F1 driving simulator that I really wanted — the only problem was that puppy cost $84,000 (no, I didn’t add a zero — eighty four thousand dollars). That’s more than the price of a Jaguar V6S F-Type. But man, was it realistic.


It actually required the same force to steer that a real F1 car does, and if you hit a virtual wall with your arms locked up, it likely would break them — but man, was it cool! It had three huge 4K screens with actuators at all four corners and high-end interfaces. I was nearly drooling when I left the booth. My wife still won't let me buy one, even after I suggested we could combine birthday and Christmas gifts.

Intel Unite

One of the interesting little technologies at the show was Intel Unite — a little microcomputer that would connect to your laptop wirelessly, so that multiple people could collaborate during a presentation.

Schools apparently are going crazy for this thing, because it costs less and is easier to put in than an extended HDMI cable. At IDF, there always are a few little things like this that most folks miss.

Smart Everything

There were smart glasses and goggles, smartwatches and smart wristbands, smart BMX bicycles and smart exercise machines, smart sensors and smart home controls. It really got to the point where everyplace I turned it seemed that Intel was showcasing yet one more device that would capture data, could adapt itself to individual use, and could improve one or more things we do for work, entertainment and exercise.

Folks were viewing, riding, playing with, and talking to most every type of thing I could think of, and it was a little daunting. The more stuff I saw, the smaller my virtual bank account got. It was kind of like wandering through a giant adult technology toy store. I really loved it.

Wrapping Up: A View of the Future

Genevieve Bell wrapped up the event nicely. Intel was created by some of the greatest makers who ever lived, and there's a revolution pushing innovation back into homes and garages all over the world. Intel clearly wants to empower that movement with its tools and its funding, and at its heart, IDF this year was all about that. Makers rule!!

Rob Enderle's Product of the Week

I joke around a lot, but one thing I don’t find funny at all is bullies and bullying. This can be kid to kid, adult to kid, or even adult to adult, and I think we need to do everything we can to protect the victims and stamp this out.

Stop Attack, a new app launched last week, instantly turns your phone into a device that will record a bully’s deeds. You can use it to capture the action instantly, to protect yourself or someone else, and possibly bring the bully to justice.

For a fee of $1 a year, your phone sends what it records to a secure cloud repository, so that even if the attacker takes or breaks your phone, at least part of the attack is in a permanent record.


The app is simple to set up and very quick to execute: Just tap on an icon, and suddenly what your smartphone sees and hears is captured for posterity and for law enforcement — even if it is the police you are capturing. A few years back I was attacked, and I wished for an app like this. As a result, the Stop Attack smartphone app is my product of the week.


Intel Israel engineers Android phone 3D camera coup

Integration between Intel and Google technology will bring 3D and virtual reality apps to Android smartphones this year.

Intel’s RealSense technology is finally ready for prime time after Intel’s Haifa-based Israel team integrated the 3D tech with Google’s Project Tango.

RealSense, the new iteration of Perceptual Computing, is Intel's contribution to the growing number of 3D and virtual reality platforms. It gives developers a library of pre-programmed routines they can integrate into devices and applications to take advantage of the capabilities of 3D cameras. Project Tango does 3D motion and depth sensing, enabling cameras to see the world in a far more advanced – and human-like – manner than 2D cameras.

Together, the two technologies will deliver a camera that lets users experience their surroundings in 3D, opening the door to true virtual reality games and apps on mobile devices. In a demonstration on Tuesday at IDF, Intel CEO Brian Krzanich showed off an app that allows users to scan their environment and use the resulting 3D image in games, apps, and other environments.

The integration of RealSense and Project Tango, and the resulting SDK that will enable developers to build smartphone apps using the system, was engineered by an Intel Israel tech team, the Haifa office announced Tuesday.

“As a result, Android developers will now be able to create new applications and experiences for the Intel RealSense technology and Project Tango ecosystems including 3-D scanning, indoor navigation, depth-enabled photography and video, measurements and immersive augmented reality and virtual reality,” Intel Israel said. The SDK is set to be released to Android developers by the end of 2015.

First out of the gate, the company said, will be an Atom-powered smartphone that uses a long-range Intel RealSense camera to provide depth-mapping at VGA resolution and 60fps. It also includes a wide field-of-view feature-tracking camera and a high-precision inertial motion unit – a combined gyroscope and accelerometer – all required for the Google Project Tango Product Development Kit (PDK) to work properly, aided by the Intel RealSense SDK add-on for Android. The software suite gives developers quick and easy access to high-precision sensor data to create a new class of end-user applications.

“The combination brings a wide-ranging set of computer vision technologies into a single mobile platform,” the company added.

"This complementary set of technologies enables Android developers to experiment with and create a new class of end-user applications on a single mobile platform," the company said.
