Tag Archives: computers

Dell to fix flaw of its own making that puts its computers at risk

The company acknowledges it recently sold PCs loaded with a form of identification that could make them vulnerable to cyberattack.

Computer maker Dell warned late Monday of a security hole affecting recently shipped computers that could leave users vulnerable to hackers.

The issue affects computers made by Dell that come with a particular preinstalled customer service program. Through a certificate that would identify the computer to Dell support staff, this program makes the computers vulnerable to intrusions and could allow hackers to access encrypted messages to and from the machines, Dell said. There is also a risk that attackers could attempt to reroute Internet traffic to sites that look genuine but are in fact dangerous imitations.

Dell said that customers should take steps to remove the certificate from their laptops, offering instructions on how to do that manually. Starting Tuesday, it also plans to push a software update to computers to check for the certificate and then remove it.
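For readers who want to check a machine themselves, here is a minimal Python sketch of that kind of inspection. It assumes a Windows PC with the standard certutil tool, and the certificate name used below is only a placeholder, since the article does not identify the actual certificate; Dell's own instructions remain the authoritative guide.

```python
import subprocess

# Hypothetical check for a vendor-installed root certificate on Windows.
# "ExampleRootCA" is a placeholder name; Dell's advisory identifies the real one.
SUSPECT = "ExampleRootCA"

# "certutil -store Root" lists every certificate in the machine's trusted root store.
listing = subprocess.run(
    ["certutil", "-store", "Root"], capture_output=True, text=True
).stdout

if SUSPECT in listing:
    print(f"'{SUSPECT}' is present in the trusted root store. It can be removed with:")
    print(f"  certutil -delstore Root {SUSPECT}")
else:
    print(f"'{SUSPECT}' was not found in the trusted root store.")
```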

“Customer security and privacy is a top concern and priority,” the Round Rock, Texas-based company said in a statement. Dell did not respond to a request for more information.

Security researcher Brian Krebs said that the problem affects all new Dell desktops and laptops shipped since August. That would mean a vast number of computers are at risk. In the third quarter, Dell shipped more than 10 million PCs around the world, according to market researcher IDC.

The disclosure by Dell is another sign of the dangers that lurk as we check our bank accounts online, go shopping via Amazon and share personal information over Facebook. While big data breaches at retailers like Target and Home Depot affect thousands of people all at once, consumers can also be hit much closer to home through their own laptops and smartphones.

Even as they’ve become attuned to taking security precautions, though, consumers typically don’t have to worry about brand-new technology they’ve just brought home from the store. For sure, some programs that computer manufacturers install can prove irritating or cumbersome. The revelation that one might be genuinely dangerous has the potential to erode trust in the computer in one’s hands and in the company that supplied it.

This isn’t the first time this year that out-of-the-box PCs have contained vulnerabilities. Some Lenovo laptops were found to have a similar security flaw thanks to a preloaded program called Superfish. This software altered search results to show different ads, but it also tampered with the computer’s security. It was eventually fixed with a specially released tool.

Dell said that its certificate isn’t adware or malware, nor was it used to collect personal information.

The program in question is being removed from all new Dell computers, the company said, and once it is properly removed according to the recommended process, it will not reinstall itself.


To infinity: How Pixar brought computers to the movies

“Toy Story,” the first full-length computer-animated movie, turns 20 this month. Behind Woody and Buzz are a bunch of computer graphics geeks who, with help from Steve Jobs, changed movies forever.

Ed Catmull’s office could be a window into the brain of Pixar.

Catmull, president of Walt Disney and Pixar Animation Studios, sits at a round wooden table at Pixar’s whimsical headquarters in Emeryville, California. To his right, the walls are filled with items that inspire creativity. There’s a plaster mold of his left hand: the star of the first computer-animated short he made in 1972 as a graduate student at the University of Utah. There are also toys galore, a collection of old watches, and trinkets that look like they were picked up at souvenir stands around the world.

To his left, though, it’s all business: a dual-monitor Mac, two elegant gray armchairs and a row of framed, understated drawings from Pixar movies, featuring friends like Woody and Buzz Lightyear.

The room is a metaphorical manifestation of the cerebral hemispheres — fitting for the co-founder of a studio that melded computer algorithms with art in a way no one had ever done before.


Twenty years ago this month, Pixar ushered in a new era in cinema with “Toy Story,” the first full-length feature film created entirely with computers. Critics praised the animated film, with Roger Ebert calling it “a visionary roller-coaster ride of a movie.”

What stands out for Catmull is that nearly all of the critics devoted only a sentence or two to its breakthrough computer animation. “The rest of the review was about the movie itself,” Catmull recalls. “I took immense pride in that.”

In the past two decades, Pixar has become a celebrated art house, with other groundbreaking films to its credit, including “Monsters, Inc.,” “Up,” “Wall-E” and, most recently, “Inside Out.” (Pixar will release its newest film, “The Good Dinosaur,” later this month.) But Pixar’s achievement hasn’t just been a game changer for animation; it’s been course-altering for all of film.

“Toy Story” wouldn’t have been possible without groundbreaking software from Pixar. Called RenderMan, the program let animators create 3D scenes that were photorealistic. The idea: Generate, or “render,” images that look so real you could put them in a movie alongside live-action footage — and no one could tell the difference.

Pixar, which licenses RenderMan to other film studios, boasts that 19 of the last 21 Academy Award winners for visual effects used the software. They include “Titanic,” the “Lord of the Rings” trilogy and “Avatar.”

But film experts point to three movies from the mid-’90s that signaled the sea change for digital moviemaking: “Toy Story,” “Jurassic Park” and “Terminator 2.” RenderMan had a part in all of them.

“Before those three movies, the idea of making a movie with a computer was ridiculous,” says Tom Sito, chair of animation at the USC School of Cinematic Arts. “After those movies, the idea of making a movie without a computer was ridiculous.”

Still light-years away

Things might have turned out very differently.

In 1975, Catmull hired Alvy Ray Smith, a charismatic computer graphics pioneer from New Mexico, to join his new Computer Graphics Lab at the New York Institute of Technology. The lab was based on Long Island, not far from the environs of Jay Gatsby, the fictional millionaire from “The Great Gatsby.” Catmull and Smith’s research was bankrolled by their own eccentric multimillionaire, the institute’s president, Alex Shure. From the beginning, Catmull and Smith had a specific goal: Make the first computer-animated feature.


If there’s one striking thing about how Pixar came to be, it’s that there was always a rich guy keeping the dream alive. After Shure, it was George Lucas, fresh from the success of “Star Wars” in 1977. Lucas poached the team to start a computer division at his production studio, Lucasfilm. Then Steve Jobs — down and out after being ousted from Apple — stepped into the picture as he was looking for a comeback. Jobs bought the team from Lucasfilm for $5 million. Catmull, Smith and Jobs co-founded Pixar in February 1986.

Back then, Pixar wasn’t in the movie business. Instead, the company was hawking computers specifically for visual effects. Pixar created short films — and honed its animation skills in the process — to show potential customers what its computers could do.

But the company was sinking. The technology just wasn’t advanced enough to produce full-length films. During the early years, Jobs put in $50 million, a significant chunk of his fortune at the time, to keep it afloat. “The only reason we didn’t officially fail was because Steve didn’t want to be embarrassed,” recalls Smith.

When it came to making short films, though, the team was world-class. There were two reasons for that success. First, Catmull and Smith put John Lasseter — a young, visionary animator the company picked up while still with Lucasfilm — in charge of Pixar’s creative process. (Where the office of Pixar’s technical maestro Catmull has only one wall of toys, Lasseter’s bursts at the seams with them.)

The other reason was RenderMan.

Pixar developed the software with one simple goal: Create images good enough for Lucasfilm to use, says Catmull. At the time, animators could only cram about 500,000 polygons onto the screen when creating a scene. In computer graphics, a polygon is a flat, two-dimensional object drawn with at least three sides. Adding more polygons lets artists create more realistic-seeming 3D objects.


Catmull’s goal was 80 million polygons.
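To get a feel for that jump, here is a small back-of-the-envelope sketch (not Pixar's code): starting from a 20-triangle icosahedron, each round of subdivision splits every triangle into four, so it takes 8 rounds to pass the 500,000-polygon practical limit of the day and 11 rounds to pass Catmull's 80 million target.

```python
# Back-of-the-envelope sketch: how many rounds of triangle subdivision are
# needed to pass 500,000 polygons (the practical limit of the day) and
# Catmull's 80 million target, starting from a 20-faced icosahedron.
# Each round splits every triangle into four smaller ones.
triangles, rounds = 20, 0
milestones = {500_000: None, 80_000_000: None}

while any(r is None for r in milestones.values()):
    triangles *= 4
    rounds += 1
    for target, reached in milestones.items():
        if reached is None and triangles >= target:
            milestones[target] = rounds

for target, r in milestones.items():
    print(f"{target:>11,} polygons: {r} subdivision rounds ({20 * 4**r:,} triangles)")
```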

“Nothing could really handle the complexity of what we were trying to do,” says Rob Cook, one of the original authors of RenderMan. “We were setting the bar.”

They were so successful that in 2001, the Academy of Motion Picture Arts and Sciences’ board of governors honored Cook, Catmull and Pixar engineer Loren Carpenter with an Academy Award of Merit “for significant advancements to the field of motion picture rendering.” It was the first Oscar awarded to a software package.

“The thinking was, ‘If we could control this, we could make animated movies the way we think they should be made,'” says Jerry Beck, a historian of animated films.

The big break

Back in Catmull’s office, we stare at my iPad. I’ve asked him to take me through “Tin Toy,” a 1988 Pixar short about a marching-band toy trying to escape a slobbering baby. “You look at it now and it looks really crude,” he says. (Even cruder than normal. We’re watching the short on YouTube, and Catmull, ever the perfectionist, remarks that the upload quality has degraded the picture.)

He rewinds the video to a scene of the baby waving. The animators deliberately blurred his hand in motion, so it would look more natural to the human eye. That, he said, was a breakthrough.
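The underlying idea is simple to sketch: instead of drawing a moving object at a single frozen position, the renderer samples it at several instants within the frame's exposure and averages them. The toy one-dimensional example below illustrates the principle only; it is not Pixar's code.

```python
import numpy as np

# Toy 1-D motion blur: sample a moving object at many instants during the
# frame's exposure and average them, producing the smeared streak the eye
# expects from fast motion instead of a crisp, strobed dot.
width, samples, travel = 64, 32, 8          # image width, sub-frame samples, pixels moved
frame = np.zeros(width)

for s in range(samples):
    pos = 20 + travel * s / samples         # object position at this instant
    sharp = np.zeros(width)
    sharp[int(pos)] = 1.0                   # the crisp, unblurred object
    frame += sharp / samples                # accumulate the exposure

print(np.round(frame[18:32], 3))            # intensity spread along the motion path
```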

For sure, “Tin Toy” was a milestone: It won the first Academy Award for a computer-animated short. But its role in movie history is bigger than that. “Tin Toy” inspired that other Pixar movie about toys.


When Pixar started pitching projects, Catmull didn’t think the team was capable of creating a whole movie. Instead, he pitched a 30-minute TV show. And because it had to do with toys, the show would be a Christmas special. Peter Schneider, the Walt Disney producer responsible for hits like “The Little Mermaid” and “Beauty and the Beast,” thought otherwise.

“If you can do a half-hour, you can do 70 minutes,” Catmull recalls Schneider saying. “So I thought about it for about one nanosecond — like, ‘Yeah, you’re right.'”

“Toy Story” was released November 22, 1995.

It was the first of many milestones. Pixar’s CTO Steve May says RenderMan delivered another breakthrough for the 2001 film “Monsters, Inc.” when it allowed animators to render individual strands of Sulley’s blue hair. Chris Ford, Pixar’s RenderMan business director, thinks the studio will raise the bar again with the water scenes in “Finding Dory,” the “Finding Nemo” sequel due out in 2016.

The release of “Toy Story” also fulfilled an early goal of the team that developed the software. “We wanted to make it so people everywhere could use it,” says Pat Hanrahan, the former Pixar engineer who came up with the name RenderMan. In 2015 Pixar started offering the noncommercial version of RenderMan for free. (The paid version is about $500.) The number of people using the free software is in “seven figures,” says Ford.

It’s been more than 40 years since Catmull made one of the world’s first computer-rendered films, starring his left hand. Before I leave his office, he shows me the mold under a glass-domed cylinder casing, kind of like the rose from “Beauty and the Beast.” Two of the fingers have broken off.

Time has been more forgiving to “Toy Story” than it has been to Catmull’s plaster mold. He hopes Pixar will continue to age gracefully. “As you bring people into a successful company, you have different kinds of challenges,” he says, then pauses.

“They can’t make the first computer-animated film over again. How do they do the first of something? And how do they own that?”


Steve Jobs’ legacy includes the women he inspired

Women played a key role in helping create the Macintosh. Some of the women on the original Mac team share how Jobs pushed them to extraordinary levels of creativity.

The lore of Apple’s success goes something like this. Steve Jobs and Steve Wozniak start Apple in a Silicon Valley garage, with the crazy goal of building the first personal computer for regular people. Eight years later, Jobs introduces the Macintosh, shocking the world with its intuitive, iconic interface and creating a cult following. After his exile from Apple, Jobs returns to reinvent and popularize the digital music player, smartphone and tablet. Apple literally changes how we interact with the world.

But that story often leaves out all the others, including dozens of women, involved in Jobs’ first big bet, 1984’s Macintosh. Like everyone else on the original Mac team, these 20-somethings put in grueling hours to create a machine that could live up to the vision of Apple’s brilliant and volatile leader.

Graphic designer Susan Kare dreamed up the Mac’s icons and created some of its original typefaces, including the Chicago, Geneva and Monaco fonts. Joanna Hoffman focused on a “user experience” that made people feel as if they could, for the first time, make the computer do what they wanted. Other women oversaw manufacturing, finance, marketing and public relations.

“The bottom line is, Steve just cared if you were insanely great or not,” said Guy Kawasaki, who joined Apple in 1983 and was the Mac’s first chief evangelist. “He didn’t care about sex, color, creed — anything like that. You were either great or you’re not. You’re either great or you sucked. That’s it. That’s all he cared about.”

Apple didn’t provide a comment for this report.

The new movie “Steve Jobs” by screenwriter Aaron Sorkin and director Danny Boyle hints at some of the contributions of the women of the Mac team, primarily through the character of Hoffman, who was Apple’s head of international marketing. Played by Kate Winslet, Hoffman was Jobs’ confidante and colleague, able to challenge him when no one else could.

“Joanna was the one who represented all of us in learning how to stand up to Steve,” said Debi Coleman, who joined Apple in 1981 as controller for Jobs’ Macintosh project. “That’s one of the reasons she’s a heroine to me.”

Hoffman declined to be interviewed, but three prominent women from the team agreed to talk about the movie, Jobs’ impact on their lives and what it was like working with Jobs, who died in 2011 at the age of 56. The trio are Coleman, who later became head of Macintosh manufacturing; Susan Barnes, controller of the Macintosh division; and Andrea “Andy” Cunningham, who, as an account executive for the Regis McKenna public relations firm, planned what turned out to be the tech industry’s biggest PR campaign at the time.

On Monday, Cunningham hosted a panel in Palo Alto, California, where the three women talked about how Jobs challenged, infuriated and pushed them to achieve great things. They were joined by Hoffman and Barbara Koalkin Barza, a former product marketing manager for the Mac and later director of marketing at Pixar, the animation studio Jobs bought after being fired from Apple in 1985.

Jobs “made it possible for you to do anything you wanted,” Cunningham said. The women of the Mac team “had the freedom to do what we were good at doing.”

Hoffman, speaking during the panel Monday, said “what is true is that so often Steve was so enthusiastic and so brilliant and visionary and not necessarily reasonable.” And Barza noted that “Steve had a laser focus on details,” which is something she has taken to heart throughout her career.

Here are a few of their other stories.

Music’s charms

“Billie Jean is not my lover/She’s just a girl who claims that I am the one/But the kid is not my son/She says I am the one/But the kid is not my son.” — lyrics to Michael Jackson’s “Billie Jean.”

Introducing a major product is a lot like planning a crucial battle. Both can succeed or fail on the campaign’s logistics. For the January 24, 1984, introduction of the Mac, those logistics included the then-unheard-of idea of “multiple exclusives,” in which Apple served up different slices of information to leading US publications.

About two weeks before the Mac’s launch, Cunningham and Jobs flew to the tony Carlyle Hotel on New York’s Upper East Side. They had reserved a suite for several days of one-on-one interviews and photo shoots.

There was just one wrinkle: Jobs “absolutely hated” having his picture taken and would turn “surly and kinda nasty” with the photographer, recalled Cunningham.

The soothing sounds of music came to the rescue.

“I discovered he loved Michael Jackson and the song ‘Billie Jean,'” she said. “And I discovered that when I played it on a cassette player, he became really docile and friendly and smiled for the cameraman. As soon as the song was over, he would go back to his snarling self.”

The cassette player got plenty of exercise. (His musical choice is ironic given that he was in a paternity battle with the mother of his eldest daughter, Lisa, before the Mac was unveiled.)

“While we were doing the shoot, I was constantly rewinding, rewinding, rewinding,” Cunningham said. “It calmed the waters.”

The waters had been seething since 10 p.m. the night before, when Jobs, Cunningham and Cunningham’s colleague, Jane Anderson, arrived at the hotel. For Jobs, the suite didn’t have the right vibe for the interviews.

So Cunningham and Anderson rearranged the furniture to Jobs’ liking, even pushing the suite’s baby grand piano to where he said it needed to be to create the best atmosphere for the meetings.

“Finally, at 2 a.m. he says to me, ‘I want a vase of those flowers that have the green stems, and they’re really long and at the top they’re kind of white and are very simple and flare out like this,'” Cunningham recalled. “I’d say, ‘Oh, OK, you want Calla lilies.’ And he’s like, ‘No! That’s not what I want.’ And he goes on to describe them again.”

Cunningham did find the Calla lilies that early January morning in New York.

Adult supervision

Silicon Valley joke circa 1981: “What’s the difference between Apple Computer and the Boy Scouts? The Boy Scouts have adult supervision.”

Coleman likes to recall that joke when describing her early years at Apple.

Jobs was a “tall, thin, unkempt, Jesus-freak looking guy” when she was introduced to him. The setting was The Good Earth, then one of Silicon Valley’s most popular restaurants. It’s where Coleman ran into Jobs and Trip Hawkins, an Apple employee who had been her college classmate and would later found video game maker Electronic Arts.

“Trip introduced me and told him, ‘She’s not your usual bean counter.’ With those words, Steve chased me for six months to join his team, which I did not realize was an unsanctioned project.”

It didn’t take long, though, for Coleman to discover that Apple didn’t share the famously genteel corporate culture at Hewlett-Packard, where she had previously worked.

“[Steve] would come marching down the hall or skipping down the hall, calling…’What an idiot. I can’t believe you did this stupid thing.'”

Coleman said it took her a year to learn how to confront Jobs. She credits Hoffman for serving as her teacher. “Joanna said, ‘Look him in the eye. You’ve got to stand up.’ From that point on — I’m not saying he wasn’t tough, totally demanding and totally critical — but he was totally wonderful to me.”

Sounding board

In some ways, that ability to stand up to Jobs was as critical for him as it was for the person confronting his verbal abuse.

“You had to stand up to him,” Barnes said. “He knew he was forming ideals and gelling them, and you had to be able to be his sounding board.”


Despite the insults, those who learned to interact with Jobs describe the experience as intellectually stimulating, compelling and fun.

“His real skill was knowing which buttons to push,” said Barnes. “The thing that kept me going with him was the intellectual spark. He could get so much out of you. He drove a high standard.”

“You weren’t judged as a woman,” she added. “You didn’t have to worry about what you wore and how you wore it. It was about your intellect, your brain and your contributions.”

Jobs put Coleman in charge of Mac manufacturing in 1984, making her one of the highest-ranking women in the computer industry. She later became chief financial officer of all of Apple in 1987, after Jobs had left the company. Coleman most recently served as co-founder and co-managing partner at venture capital firm SmartForest Ventures from 2000 to June 2015.

Barnes co-founded NeXT Computer with Jobs and became its chief financial officer. She went into investment banking after leaving NeXT and later served as financial chief at Intuitive Surgical. Barnes currently holds that same title at Pacific Biosciences, a DNA sequencing company. Cunningham left Regis McKenna to form her own PR firm and helped Jobs launch Pixar. She currently runs Cunningham Collective, a consulting firm.

“When you’re in an environment where you’re respected for what you do and not…your gender or age, it’s really refreshing,” Cunningham said. “That’s what Steve offered back then.”

For more on the team behind the Mac, check out CNET’s Macintosh 30th anniversary package from January 2014.


Can HP’s split help it beat the PC slump?

As HP prepares to split into two companies, the company’s president of Personal Systems tells Sophie Curtis it plans to be at the forefront of creating new product categories

On the first day of next month, Hewlett-Packard (HP), the American technology giant that was famously founded in a garage in Palo Alto in 1939, is splitting in two. The company’s data centre infrastructure business – comprising servers, storage and networking – will become HP Enterprise, and its PC and printer business will become HP Inc.

It is arguably the biggest upheaval in the company’s history. While HP has bought and spun off many subsidiaries in its 76 years – from 2001’s $24bn acquisition of Compaq to the $12bn purchase of Autonomy in 2011 that turned into a major corporate scandal – it has never overhauled its core structure in quite such a dramatic fashion.

The two companies will be of practically equal size: HP estimates that HP Enterprise will have revenues of $58bn and operating profits of $6bn, while HP Inc will have revenues of $57bn and operating profits of $5bn.

The man responsible for the lion’s share of the latter is Ron Coughlin. Under the new structure he will be president of HP Inc’s Personal Systems business, putting him in charge of PCs, tablets, accessories and their related services – a unit that currently brings in $35bn of revenues. Coughlin will report to Dion Weisler, HP Inc’s new chief executive, with HP’s current chief, Meg Whitman, taking over the other side of the company when the two split.

“What makes us most excited about the new HP Inc is the ability to have the creativity and speed of an entrepreneur and the scale of a Fortune 100 company,” said Mr Coughlin. “This means that when we have a great idea, we’re going to move fast, and we’re going to create new categories, but at the same time, when we get that idea, we can put it in every country, every city, every province in the world.”

Some argue that the split is long overdue. Former chief executive Leo Apotheker proposed spinning off the company’s PC business in 2011, but the plan was dropped following pressure from shareholders, which ultimately led to Apotheker’s departure.

Meg Whitman, who took over as HP's chief executive in September 2011, has warned investors about the scale of HP's challenges.

Ms Whitman led a reorganisation of the company in 2012 that saw the PC business combined with the Imaging and Printing Group, helping to pave the way for the current separation plan.

HP is now taking a big leaf out of the book of IBM, another major IT supplier, which realised years ago that it was never going to be a one-stop shop for corporates, and started to sell off “non-core” businesses such as printers, PCs and servers. Rather than gradually divesting businesses though, it is splitting, which offers several advantages. Existing shareholders will get shares in both companies, and the two have agreed not to compete with each other for three years after the split. They will partner to buy supplies, jointly sell products to customers, and share patents and other intellectual property.

Mr Coughlin said part of the reason that the previous attempt to spin off the PC business was unsuccessful was because of “brand dis-synergies”. In other words, HP had decided that the PC business would suffer if it was not able to use its famous brand to sell its products.

This time, both companies will get to keep HP in their name, which Mr Coughlin said “took a lot of the dis-synergy out, and really unlocked the value”.


HP Inc will focus on three key areas – “core”, which is about using HP’s clout and scale to offer high-end features at the lowest possible cost, “growth”, which is about focusing on fast-growing categories like all-in-one PCs and convertibles, and “future” – creating new product categories.

At its consumer event in Barcelona this week, the company unveiled a slew of new devices with this new strategy in mind, including the Spectre x2, an elegant, lightweight tablet with a detachable metal keyboard, an all-in-one PC with a huge 34-inch curved display, and a range of colourful HP Stream cloud-based Windows laptops.

The company is also looking to the future with its recently launched Sprout computing platform, which combines an all-in-one desktop computer with two touch displays along with a scanner, depth sensor, high-resolution camera and projector, to create a 3D computing experience – or in HP’s parlance, “Blended Reality”.

“We have been on a drumbeat of innovation. It started in October 2014, when we launched our Blended Reality,” says Mr Coughlin. “It was a vision last October, it’s becoming a reality now. Today if you go on HP.com, we sell Sprout. We also sell Sprout bundled with a 3D printer – 40pc of Sprout sales come bundled with a 3D printer.”


However, splitting the company into two comes at a price. HP stands to lose $400m to $450m during the split. The company hopes its cost-cutting efforts will help offset that number. Last month, HP said it expects to cut about 33,300 jobs over the next three years, on top of the 55,000 layoffs previously announced. The latest cuts represent a 10pc reduction in the company’s total workforce.

Revenue from the PC and printer business – what will become HP Inc after the split – fell 11.5pc in its fiscal third quarter, which ended on July 31. HP has said it expects the market for PCs and printers to remain tough for “several quarters”: the growing popularity of smartphones and tablets, as well as sluggish business investment, has meant both corporate customers and consumers holding back on buying computers.

Despite this, Mr Coughlin is optimistic about the future of the PC market, and confident of HP’s place within it. The addressable market for personal systems – hardware ranging from PCs to printers to mobile devices – is worth $340bn, he said, with half of that market growing at nine per cent.

“We have a scale market with significant growth pockets, and we’ve proven we can gain share in the PC category,” he said. “I’m also optimistic because we have momentum. We are blessed with being able to go public with one of the world’s strongest brands. Our marketing engine has never revved stronger.”

On the whole, analysts tend to agree that HP Inc has an opportunity to take a lead in the development of innovative hardware, in partnership with Microsoft, which under its chief executive, Satya Nadella, is undergoing something of a revolution focused on its new Windows 10 software. However, many believe that it has to look beyond its traditional markets, having seen the smartphone revolution pass it by almost completely.

The growing “internet of things” industry – connecting everyday devices to mobile or Wi-Fi connections to improve efficiency – could be one of these, especially given HP’s reputation among enterprises.

Coughlin says that operating as a separate company will allow HP Inc to invest more money in emerging product categories, and to create categories of its own, rather than simply pouring all of its research and development budget into improving its existing technology. This is not just a stated intent but something built into the new company’s framework, he says, pointing out that Mr Weisler was announced as HP Inc’s new chief executive at the same event where the company unveiled its next-generation Sprout technology a year ago.

“It’s not by chance,” Coughlin says. “It highlights the fact that innovation is going to be a centrepiece of the company under his leadership.”

Forrester analyst Peter Burris said that HP Inc will need to “place some big bets” and “run like the wind” if it hopes to play a role in shaping the consumer device and services industry. However, Mr Coughlin said that HP Inc will only play in markets where it can add value, both in terms of experience and from a shareholder perspective.

“Our focus is commercial mobility. In January we launched eight new devices in four vertical focus areas – retail, field service, education and healthcare. A great example is our healthcare tablet that is antimicrobial, it has a camera to scan the patient’s tag, so the whole idea is mobilising work flows, and that’s our focus. We can add value there, we believe there’s a need, we don’t believe our competitors serve that market well and we believe that there’s profit pools there,” he said.

“We don’t chase share for share’s sake. We don’t believe that we can do well in the $79 tablet market. I don’t think that anybody is going to do well there. In commercial, the good news is that the whole category has profit, and is such that you can offer great experience and great return for shareholders.”


Microfluidic cooling yields huge performance benefits in FPGA processors

As microprocessors have grown in size and complexity, it’s become increasingly difficult to increase performance without skyrocketing power consumption and heat. Intel’s CPU clock speeds have remained mostly flat for years, while AMD’s FX-9590 and its R9 Nano GPU both illustrate dramatic power consumption differences as clock speeds change. One of the principal barriers to increasing CPU clocks is that it’s extremely difficult to move heat out of the chip. New research into microfluidic cooling could help solve this problem, at least in some cases.

Microfluidic cooling has existed for years; we covered IBM’s Aquasar cooling system back in 2012, which uses microfluidic channels — tiny microchannels etched into a metal block — to cool the SuperMUC supercomputer. Now, a new research paper on the topic has described a method of cooling modern FPGAs by etching cooling channels directly into the silicon itself. Previous systems, like Aquasar, still relied on a metal transfer plate between the coolant flow and the CPU itself.

Here’s why that’s so significant. Modern microprocessors generate tremendous amounts of heat, but they don’t generate it evenly across the entire die. If you’re performing floating-point calculations using AVX2, it’ll be the FPU that heats up. If you’re performing integer calculations, or thrashing the cache subsystems, it generates more heat in the ALUs and L2/L3 caches, respectively. This creates localized hot spots on the die, and CPUs aren’t very good at spreading that heat out across the entire surface area of the chip. This is why Intel specifies lower turbo clocks if you’re performing AVX2-heavy calculations.


By etching channels directly on top of a 28nm Altera FPGA, the research team was able to bring cooling much closer to the CPU cores and eliminate the intervening gap that makes water-cooling less effective than it would otherwise be. According to the Georgia Institute of Technology, the research team focused on 28nm Altera FPGAs. After removing their existing heatsink and thermal paste, the group etched 100 micron silicon cylinders into the die, creating cooling passages. The entire system was then sealed using silicon and connected to water tubes.

“We believe we have eliminated one of the major barriers to building high-performance systems that are more compact and energy efficient,” said Muhannad Bakir, an associate professor and ON Semiconductor Junior Professor in the Georgia Tech School of Electrical and Computer Engineering. “We have eliminated the heat sink atop the silicon die by moving liquid cooling just a few hundred microns away from the transistors. We believe that reliably integrating microfluidic cooling directly on the silicon will be a disruptive technology for a new generation of electronics.”

Could such a system work for PCs?

The team claims that using these microfluidic channels with water at 20C cut the on-die temperature of their FPGA to just 24C, compared with 60C for an air-cooled design. That’s a significant achievement, particularly given the flow rate (147 milliliters per minute). Clearly this approach can yield huge dividends — but whether or not it could ever scale to consumer hardware is a very different question.
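As a rough sanity check on that flow rate, the sketch below estimates how much heat 147 milliliters of water per minute can carry away for a few assumed coolant temperature rises; those temperature rises are illustrative assumptions, not figures from the paper.

```python
# Rough estimate of the heat a 147 mL/min water loop can carry away:
# Q = mass_flow * specific_heat * temperature_rise. The temperature rises
# below are assumptions for illustration, not numbers from the paper.
flow_ml_per_min = 147.0
density = 1.0                 # g/mL for water
specific_heat = 4.18          # J/(g*K) for water

mass_flow = flow_ml_per_min * density / 60.0        # grams per second
for delta_t in (5, 10, 20):                          # assumed coolant rise in kelvin
    watts = mass_flow * specific_heat * delta_t
    print(f"coolant rise of {delta_t:2d} K -> roughly {watts:5.1f} W removed")
```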

As the feature image shows, the connection points for the hardware look decidedly fragile and easily dislodged or broken. The amount of effort required to etch a design like this into an Intel or AMD CPU would be non-trivial, and the companies would have to completely change their approach to CPU heat spreaders and cooling technology. Still, technologies like this could find application in HPC clusters or any market where computing power is at an absolute premium. Removing that much additional heat from a CPU die would allow for substantially higher clocks, even with modern power consumption scaling.


New quantum dot could make quantum communications possible

A new form of quantum dot has been developed by an international team of researchers that can produce identical photons at will, paving the way for multiple revolutionary new uses for light. Many upcoming quantum technologies will require a source of multiple lone photons with identical properties, and for the first time these researchers may have an efficient way to make them. With these quantum dots at their disposal, engineers might be able to start thinking about new, large-scale quantum communications networks.

The reason we need identical photons for quantum communication comes back to the non-quantum idea of key distribution. From a mathematical perspective, it’s trivially easy to encrypt a message so that nobody can read it, but much harder to encrypt it so that only select individuals can read it and nobody else. The sticking point is key distribution: if everybody who needs to decrypt a message already has the associated key, there’s no problem. The hard part is getting the key to everyone who needs it without letting anyone else obtain it along the way.


Quantum key distribution uses the ability of quantum physics to provide evidence of surveillance. Rather than making it impossible to intercept the key, and thus decrypt the message, quantum key distribution simply makes it impossible to secretly intercept the key, thus giving the sender of the message warning that they should try again with a new key until one gets through successfully. Once you’re sure that your intended recipient has the key, and just as importantly that nobody else has it, then you could send the actual encrypted file via smoke signal if you really wanted to — at that point, the security of the transmission itself really shouldn’t matter.
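That tamper-evidence is easy to see in a toy simulation of BB84, the textbook quantum key distribution protocol. The sketch below illustrates the general principle rather than the new quantum-dot hardware: an eavesdropper who measures and resends photons in randomly chosen bases corrupts roughly a quarter of the sifted key, which the two parties can detect by comparing a sample of their bits.

```python
import random

# Toy BB84 simulation: if the preparation and measurement bases match, the
# bit is recovered faithfully; otherwise the measurement outcome is random.
def measure(bit, prep_basis, meas_basis):
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def bb84(n=4000, eavesdrop=False):
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.randint(0, 1) for _ in range(n)]

    # Optional interception: Eve measures in her own random bases and resends.
    channel_bits, channel_bases = alice_bits, alice_bases
    if eavesdrop:
        eve_bases = [random.randint(0, 1) for _ in range(n)]
        channel_bits = [measure(b, pb, eb)
                        for b, pb, eb in zip(alice_bits, alice_bases, eve_bases)]
        channel_bases = eve_bases

    bob_bases = [random.randint(0, 1) for _ in range(n)]
    bob_bits  = [measure(b, pb, mb)
                 for b, pb, mb in zip(channel_bits, channel_bases, bob_bases)]

    # Sifting: keep only the positions where Alice and Bob chose the same basis.
    sifted = [(a, b) for a, b, pa, pb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
              if pa == pb]
    return sum(a != b for a, b in sifted) / len(sifted)

print("error rate without eavesdropper:", round(bb84(eavesdrop=False), 3))  # ~0.0
print("error rate with eavesdropper:   ", round(bb84(eavesdrop=True), 3))   # ~0.25
```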

There has been some promising research in this field — it’s not to be confused with the much more preliminary work on using quantum entanglement to transfer information in such a way that it literally does not traverse the intervening space. That may come along someday, but not for a long, long time.

Regardless, one of the big problems with implementing quantum key distribution is that the optical technology necessary to get these surveillance-aware signals from sender to recipient just isn’t there. In particular, the wavelength of photons changes as they move down an optical fiber — not good, since creating photons with precise attributes is the whole source of quantum security.


So, unless you’re within direct range of the person you want to talk to, quantum security wouldn’t work; a theoretical quantum repeater would insert too much uncertainty about the wavelength of any light it ferried along. With this technology, it could be possible to ferry quantum cryptographic information across real-world distances, across or even between continents, in the same networked way as regular digital internet traffic.

These quantum dots achieve essentially perfect single-photon emission by being super-cooled so that the emitting atoms do not fluctuate. Such fluctuations result in very slightly different emission wavelengths, so suppressing them with cryogenic temperatures reduces the signal noise. This should allow the re-emission of quantum key information in a reliable-enough form to preserve the quantum security setup.

Of course, quantum security isn’t perfect. You can still listen in on either the sender or receiver directly, or perhaps even find a way to surveil these quantum dots themselves, reading each photon as it’s absorbed and reemitted. Potential attackers could install optical splitters so they get and invalidate one copy of the key, while another arrives unmolested at its destination.

Short of telepathy, there will never be perfect communication security — not even quantum physics can change that.


Acer’s Revo Build Is a Puzzle of a PC

Acer’s modular Revo Build is “innovative, but the glory days for the desktop PC are gone,” said tech analyst Jim McGregor. Most businesses “won’t like the idea of having a PC in pieces, and consumers have a plethora of other options — and it’s hard to beat something that’s mobile. Processors in the high-end smartphones will probably offer similar or better performance and connectivity.”

Acer last week announced the latest in its family of Revo small-form-factor PCs at the IFA 2015 trade show in Berlin. The Revo Build M1-601 consists of a cuboid base unit with a footprint measuring about 5 inches square and a set of easily attachable modules.

There are two versions — one with an Intel Pentium processor and the other with an Intel Celeron CPU. Both have integrated Intel HD graphics and up to 8 GB of DDR4 RAM.

Acer plans to launch the Revo Build in Europe in October and in China in December. It has not indicated when it will be available in the United States.

Pricing reportedly will be about US$220 in Europe and about $315 in China.

Easy to Build

Modular blocks for the Revo Build attach magnetically and connect through pogo pins. The blocks can work independently or with other PCs.

A 500 GB/1 TB hot-swappable portable hard drive module will be available at launch.

Acer eventually will roll out a power bank for wireless charging, an audio block that will incorporate speakers and microphones, and other expansion blocks.

Been There, Done That

“IBM has tried to build a modular PC at least twice that I know of, and neither made it to market,” said Rob Enderle, principal analyst at the Enderle Group.

“Toshiba actually did create a modular desktop for business, but that went so badly they exited desktop PCs altogether,” he told TechNewsWorld.

Enderle participated as a board member about a year ago, when IBM made its last attempt to revive the IBM modular computer.

“They wanted me as CEO, but I’m not that nuts,” he said. “You have to figure out a way around the economics.”

Acer has a line of Revo products, available at Amazon, eBay, NewEgg and elsewhere, though none is modular.

Will Acer’s Revo Build Fly?

The Revo Build has had a mixed reception so far. Some people like it, while others aren’t sure where they stand. Several people have likened it to Google’s Project Ara.

“It’s innovative, but the glory days for the desktop PC are gone,” observed Jim McGregor, principal analyst at Tirias Research.

Most businesses “won’t like the idea of having a PC in pieces, and consumers have a plethora of other options — and it’s hard to beat something that’s mobile,” he told TechNewsWorld.

“Processors in the high-end smartphones will probably offer similar or better performance and connectivity,” he suggested. “Who needs a larger hard drive when you can connect to the Internet anywhere, any time?”

Modular Computing’s Godzilla

There are other modular computers on the market, however.

There’s Razer’s Project Christine, which has a PCI Express architecture and lets consumers choose modules on the fly, in any combination, and plug them in.

The modules are sealed and self-contained, and they offer active liquid cooling and noise cancellation. Components can be overclocked without voiding warranties.

Xi3 offers the X7A, which consists of a three-board system that can handle three independent monitors. The X7A runs on 30 watts. Old I/O and processor boards can be swapped out for new ones.

The X7A has a quad-core AMD Trinity Series processor of up to 3.2 GHz, a Radeon HD 7660G GPU with 384 programmable cores, 8 GB of DDR3 RAM, an mSATA SSD of 64 GB to 1 TB capacity, two Mini DisplayPorts, one combination HDMI/DisplayPort, and 12 other ports — four each of eSATAp-III ports (which also support USB 2.0), USB 2.0 ports and USB 3.0 ports. It also has one Gigabit Ethernet port.

The X7A measures 4.3 x 3.6 x 3.6 inches. It comes with a three-year warranty and runs Windows 7 Pro, which costs $137 extra, or openSUSE.

The chassis is priced at $100 to $600, depending on the internal storage.


MIT unveils world’s first ‘crash proof’ computer

The first computer ‘mathematically guaranteed’ not to lose any data has been unveiled by researchers at MIT’s Computer Science and Artificial Intelligence Lab.

The research proves the viability of an entirely new type of file system, which is logically unable to forget information accidentally. The work is founded on a process known as formal verification, which involves describing the limits of operation for a computer program, and then proving the program can’t break those boundaries.

The computer system is not necessarily unable to crash, but the data contained within it cannot be lost.

“What many people worry about is building these file systems to be reliable, both when they’re operating normally but also in the case of crashes, power failure, software bugs, hardware errors, what have you,” Nickolai Zeldovich, a CSAIL principal investigator who co-authored the new paper, said in a press statement.

“Making sure that the file system can recover from a crash at any point is tricky because there are so many different places that you could crash. You literally have to consider every instruction or every disk operation and think, ‘Well, what if I crash now? What now? What now?’ And so empirically, people have found lots of bugs in file systems that have to do with crash recovery, and they keep finding them.”
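To make the “what if I crash now?” exercise concrete, here is a small, hypothetical Python model of the empirical approach Zeldovich describes: an update goes through a toy write-ahead log, the test crashes after every individual step, runs recovery, and checks that the visible data is never left half-applied. MIT's contribution is to replace this kind of exhaustive testing with a machine-checked Coq proof that covers every possible crash point.

```python
# A toy model of the "what if I crash now?" exercise: apply an update through
# a tiny write-ahead log, simulate a crash after every individual step, run
# recovery, and check that the visible data is never left half-updated.
# This is an illustrative sketch, not MIT's verified file system or its Coq proof.

def apply_update(disk, log, crash_after):
    steps = [
        lambda: log.update(new=1, valid=True),   # 1. record the intent in the log
        lambda: disk.update(data=1),             # 2. apply the change to the disk
        lambda: log.update(valid=False),         # 3. mark the update as committed
    ]
    for i, step in enumerate(steps):
        if i == crash_after:
            return                               # simulated crash: later steps never run
        step()

def recover(disk, log):
    if log.get("valid"):                         # intent logged but not committed: redo it
        disk["data"] = log["new"]
        log["valid"] = False

for crash_after in range(4):                     # crash before step 1, 2, 3, or not at all
    disk, log = {"data": 0}, {}
    apply_update(disk, log, crash_after)
    recover(disk, log)
    assert disk["data"] in (0, 1) and not log.get("valid")
    print(f"crash point {crash_after}: data = {disk['data']} (consistent)")
```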

MIT’s new system, which will be demonstrated in full at a symposium this autumn, is unable to lose track of its data even during a crash.

Previous research has proven on paper that a crash-proof system should be possible — at a high level — but MIT’s work is the first to prove it works with the actual code of the file system itself.

Zeldovich and colleagues used a tool called a “proof assistant”, which is a formal proving environment that includes programming code within it. By starting from first principles with the assistant, known as Coq, they defined within the code everything from ‘what is a disk?’ to ‘what is a bit?’. As the proof developed Coq ensured everything within it was logically consistent.

Using this the team was able to prove their lossless concept works, and did so within the code of the file system itself. The result is a relatively slow system — for now — but one which provides the groundwork for future computers which would be much more resilient against problems or interference.

“It’s not like people haven’t proven things in the past,” said Ulfar Erlingsson, lead manager for security research at Google, in a statement. “But usually the methods and technologies, the formalisms that were developed for creating the proofs, were so esoteric and so specific to the problem that there was basically hardly any chance that there would be repeat work that built up on it.

“But I can say for certain that Adam’s stuff with Coq, and separation logic, this is stuff that’s going to get built on and applied in many different domains. That’s what’s so exciting.”

It is hoped the work at MIT could lead to more reliable, efficient computers. “It’s not like you could look up a paper that says, ‘This is the way to do it.'” said Frans Kaashoek, the Charles A. Piper Professor at MIT’s Department of Electrical Engineering and Computer Science, who also worked on the paper. “But now you can read our paper and presumably do it a lot faster.”


We can’t ignore that technology is changing our brains

An unbiased assessment of the effects of digital technology on humans is hardly ‘scaremongering’

Do computer games and internet use wreak havoc on young brains? That’s the debate I seem to have been dragged into, thanks to the potentially adverse effects of information technology (IT) that I described in my recent book Mind Change.

As the Telegraph detailed, an article in the British Medical Journal has now accused me of scaremongering. But its authors (one of whom is a colleague of mine at Oxford University) have got the wrong end of the stick.

I have never suggested that reasonable use of the internet “harms” the adolescent brain. But it’s worth noting that recent research shows that some teens are using IT for up to 18 hours per day, when media multitasking is taken into account. Palatable or not, other research suggests that intense use of the internet and video games may lead to microstructural abnormalities in the brain, and the effects may be comparable to those of drug abuse.

A further item on the BMJ charge sheet is that I have not published my claims in peer-reviewed scientific literature. But I am not pretending to conduct experiments that test a specific, falsifiable hypothesis. Rather I set out to provide a review – accessible to everyone – of the current state of all the research in this field.

This research is continuously evolving, just as the technologies it evaluates are doing, and spans a host of complex and diverse disciplines: neuroscience, psychology, sociology, among others. Nonetheless, Mind Change does cite some 250 peer-reviewed papers – as such it has been described by the distinguished neurologist Richard Cytowic as “the most reference-dense and annotated work of its kind I can recall”.


The fundamental point is that the increasing impact of information technology on our lives raises a host of issues that we must unpack: the impact of social networking on identity; the effects of gaming on attention spans; the effects of search engines on memory and learning … the list goes on. To demand that a definitive, one-stop-shop experiment be carried out to determine whether all this is “good” or “bad” is naive to say the least.

Where we can be specific, however, we can begin to make judgements. On the potentially deleterious effects of social networking, for example, many studies point to an associated trend for a more “volatile” identity, increased narcissism, low self-esteem and concealment of the “real” self. Moreover, recent research suggests that an environmental component to autistic-like behaviours cannot be completely discounted, and the environment of the screen could in some cases be relevant.

In America, more than half of children on the autistic spectrum are five years old or above when first diagnosed. When reports suggest that almost all children access smartphones from the age of two, it is understandable that some studies draw parallels between this behaviour and autistic spectrum-like disorders. These studies may raise sensitive and worrying issues about the interaction of our children with technology and screens, but surely those issues should be aired openly, not buried.

Let’s be clear, I am not a Luddite wishing away technology. On video games, for example, I have described how action video gaming actually improves visuo-spatial ability and have taken care to make clear that, in my mind, the debate about links between aggression and violent video games has not been settled one way or another. There are other academics, including those behind some of the 136 papers on video games and aggression recently reviewed, who go further.


Finally, the BMJ article takes exception to my suggestion that reliance on search engines and surfing the internet for information may favour superficial mental processing at the expense of deep knowledge and understanding. Yet there are serious academic studies, in such journals as The Neuroscientist, which indicate that precisely this may well be happening.

I am trying not to make value judgments, but simply wish to bring to public attention the diverse ways in which new technologies may soon affect society – as soon as the middle of this century. What may prove to be an unpleasant outcome for some – let’s say diminished communication skills or shorter attention span – may be a perfectly acceptable part of modern life for others. It is for this reason that black and white terms like “harm” are not very useful.

Because the digital world offers a way of life that is unprecedented and multifaceted, we need to be fully aware of all the opportunities it can offer. Above all we should surely aim to empower the individual so that the technologies are a means to an end, not the end itself.

My detractors conclude: “There is already much research into the many concerns about digital technology, and the public deserves to participate in the debate fully informed of all the evidence”. I could not agree more. That is why I wrote Mind Change – because we must confront the real possibility that the radically new ways in which we are living our lives today may well have profound, potentially disturbing, consequences.


Can IBM’s LinuxONE mainframe compete with cloud computing?

IBM has announced two mainframe computers under the LinuxONE branding that will eventually be able to run Canonical’s popular Ubuntu Linux operating system. This latest move is part of a near-30-year history of IBM running UNIX and, later, Linux-based operating systems on its hardware products. IBM’s first UNIX-like product for its mainframes, AIX/370, appeared way back in 1988. While the ability to run Ubuntu on a mainframe may have been the news that attracted attention, the announcement really has several parts.

First, there are two LinuxONE mainframe models. The LinuxONE Emperor is designed for large enterprises. IBM claims it can run up to 8,000 virtual servers, tens of thousands of containers, and 30 billion RESTful web interactions per day supporting millions of active users. The Emperor can have up to 141 processors, 10 terabytes of shared memory, and 640 dedicated I/O (input/output) processors. And IBM claims it can provide all this at a cost that’s half that of a public cloud infrastructure solution.

On the lower end, the LinuxONE Rockhopper model is an entry-level mainframe aimed at mid-sized businesses, and it can be upgraded to an Emperor system. Both LinuxONE systems support KVM (Kernel-based Virtual Machine), with the initial work being done by SUSE (best known for its SUSE Linux distribution).

Next, IBM and Canonical announced “plans to create an Ubuntu distribution for LinuxONE and z Systems.” Note that Ubuntu is not currently available for the LinuxONE mainframes; however, both Red Hat Linux and SUSE Linux are currently supported. IBM also announced its role in forming the Open Mainframe Project, whose other founding members include ADP, CA Technologies, and SUSE. The Open Mainframe Project is a non-profit Linux Foundation Collaborative Project.

Finally, IBM announced the LinuxONE developer cloud, which gives developers access to a virtual IBM LinuxONE mainframe. Marist College and Syracuse University’s School of Information Studies plan to host developer clouds that will be free to use. IBM didn’t indicate if this free access will be restricted to, for example, developers at educational institutions. IBM itself plans to create developer clouds for independent software vendors at IBM sites in Dallas, Beijing, and Boeblingen, Germany that will provide “free trials.”

IBM’s continued expanding support of Linux and Open Source projects makes sense given that these platforms and tools provide much of today’s connected infrastructure. And, the move away from in-house servers to cloud-based ones is a direct threat to IBM’s mainframe business. The company’s claim that it can provide a fast, reliable, and secure alternative at half the cost of cloud-based solutions is bound to get the attention of those with large-scale projects who want to control costs.
