Tag Archives: new tech

Not ready for its close-up: Virtual reality makes presidential debate virtually unwatchable

CNN’s experiment to make the Democratic presidential debate available live shows limits of virtual reality entering the mainstream.

Since Richard Nixon sweated and scowled his way through the first televised presidential debate with John Kennedy in 1960, TV’s importance in American politics has been well established: Being a successful politician, or at least getting elected, requires knowing how to look good on camera.

But if the first presidential debate streamed live in virtual reality is any guide, it’s doubtful that this burgeoning technology will have much impact on politics at all.

On Tuesday night, CNN partnered with virtual reality startup NextVR to make the Democratic presidential debate in Las Vegas available in real time in virtual reality to audiences anywhere, as long as they had a Samsung Gear VR headset.

If political debates don’t usually excite you, this experience probably wouldn’t have either. Unless, of course, you happen to be into watching five barely recognizable candidates face off for two hours through an overheating Samsung smartphone held inches from your face by an awkward-looking, heavy headset made by the Korean electronics giant.

This was supposed to be one of the big splashes that pushed virtual reality beyond gaming and into the mainstream. But that’s not how it will likely go down.

To be sure, after more than two decades of little more than talk, VR is having its day in the sun. Smartphone makers such as Samsung and HTC plan to release VR devices this holiday season. Sony and Facebook have their own devices in the works for 2016, when industry watcher Juniper Research expects about 3 million headsets to be sold. By 2020, Juniper expects that number to hit 30 million.

However, CNN’s experiment with Laguna Beach, California-based NextVR revealed the current shortcomings of virtual reality, casting serious doubt on exactly how popular those devices will really be for anyone who’s not a hardcore gamer.

In theory, virtual reality is supposed to give viewers deeply immersive experiences, allowing them to choose where they want to look and when. And for that reason, it’s often spoken of as a potential game-changer for everything from gaming and entertainment to journalism.

But it’s a stretch to say virtual reality viewers were very immersed in the debate, which pitted former Secretary of State Hillary Clinton against US Sen. Bernie Sanders as well as long-shots former Maryland Gov. Martin O’Malley, former US Sen. Jim Webb and Lincoln Chafee, the former governor of Rhode Island.

CNN VR viewers got to see the stage and audience from three or four different angles, each providing 180-degree views. But those wide angles came at a cost: no close-ups.

When CNN’s average TV viewers saw Clinton’s pearl earrings, all I saw was a blonde-haired blob on stage in a blue pantsuit. And when everyone else was watching Sanders sporting those 1970s-chic glasses, all I could see was an old white guy in a suit who appeared mildly fond of moving his hands.

And the audience? Forget about them. Individual reactions to the debate were completely impossible to read.

So while the candidates and the audience were barely recognizable on stage, several huge CNN logos were always in sight. I counted nine in one shot. Anderson Cooper fans had something to be happy about: besides his camera operators, who appeared below me a few times, Cooper was as close as you got to seeing a human up close in VR, and even that was from about 15 feet away.

Still, NextVR, whose cameras and technology CNN used to capture and stream the event, did blunt some of the usual critiques of VR. That is, there were few glitches or delays, even as I moved quickly from the stage to the audience and back. And I didn’t feel like throwing up while doing it, which is progress in VR world.

“I tell people that it’s like the first brick cell phone,” NextVR co-founder DJ Roller said before the event. “But it’s still pretty good.”

Um, I’m not so sure. But Roller offers at least one good reason to stay hopeful.

“That’s as bad as it’s going to get.”

A new way to profit from the rise of the robots

Are robots really about to fulfil the potential to change our lives that we’ve been hearing about for decades? And can you profit if they do?

From the fictional androids of TV shows such as Lost in Space and Buck Rogers to the labour-saving gadgets helping around the house featured on Tomorrow’s World, a breakthrough in robotics seems to have been perpetually just around the corner.

Now, though, confidence is higher than it’s ever been that a step-change in robotics really is here.

This is thanks to recent technological developments, but also to our growing need for the work robots can do.

These developments have prompted investment group Pictet Asset Management to add to the small number of funds that enable investors to back the companies that will gain from the rise of the robots.

“We are facing a problem of productivity,” said Karen Kharmandarian, a senior investment manager at Pictet.

“We have to find new ways to maintain economic growth as our population ages, particularly in the developed world, and robots have the potential to fill the gap.”

We are not talking, of course, about butler-style humanoids, but rather giant industrial factory lines, driverless cars, crop-spraying drones and unmanned lawn mowers and vacuum cleaners.

High-profile examples include Google’s driverless cars and Amazon’s delivery drones.

Mr Kharmandarian explained that the real breakthroughs were coming in the computer processors needed to make sophisticated robots, as well as in power storage and efficiency.

Machines can work with less supporting infrastructure and they can be made less power-hungry, which makes them cheaper and safer for humans to work alongside, because, for example, they emit less heat.

Lower costs allow a greater variety of businesses to make use of robotics, allowing them to meet a greater number of needs for the same investment of capital. A robotic production line, for instance, can be reprogrammed to make products with a greater number of specifications.

This has led forecasters at the Boston Consulting Group to predict that the robotics sector will grow by 10pc a year for the next 10 years.
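
For a sense of scale, here's a quick compounding check of what that forecast implies (a rough sketch, nothing more):

```python
# Compounding check: 10pc annual growth sustained for 10 years.
growth, years = 1.10, 10
print(f"{growth ** years:.2f}x")  # -> 2.59x, i.e. a sector roughly
# two and a half times its current size a decade out
```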

The fund, Pictet-Robotics, is Luxembourg-based and will invest in companies from around the world. At launch, half come from the US, a quarter from Japan and the rest from Europe and emerging markets.

The annual management fee is 0.8pc, although other costs mean that the total you pay is likely to be higher than this.

The ride from such a specialised area is likely to be bumpy, so a fund that invests in it is strictly for sophisticated investors. More general exposure can be gained through technology funds.

The best performer in the tech fund sector is a tracker, the Close FTSE TechMark fund, which has an ongoing charge of 0.79pc.

Exploding Chip Could Thwart Cyberthieves

Self-destructing electronics have applications in a civilian context, especially for mobile devices. Encryption keys on mobile devices are nearly impossible to crack, “but they are often protected by relatively weak passwords,” said security analyst Chris Camejo. Placing encryption keys on a self-destructing chip would make it much more difficult for a thief to extract information.

Researchers at Xerox PARC have developed a self-destructing mechanism for microchips embedded on a hardened glass surface.

The glass can self-destruct upon command and could be used to secure personal data such as health and banking records. It also can be used to destroy encryption keys stored on memory chips in standard consumer, enterprise and government electronic devices.

The research is part of the Defense Advanced Research Projects Agency’s Vanishing Programmable Resources project. The researchers demonstrated the self-destructing glass at DARPA’s Wait, What? event in St. Louis earlier this month.

The technology has reached the prototype stage, but no commercial or government uses for the self-destructing electronics are in the works, according to Gregory Whiting, senior scientist at PARC and the project’s lead scientist.

“We have working prototypes now that demonstrate the technology. They use techniques that are similar to what a product would be, but they do not have the functionality that a product would have. We haven’t settled on any particular type of product yet,” he told TechNewsWorld.

“People are interested in materials that can be transient in the environment. A simple example would be bioabsorbable sutures. For electronics, we usually do not have transience as a property. People work very hard to make sure that electronics components stay static and never change,” Whiting explained.

How It Works

In demonstrating the technology, Whiting’s team used a strengthened glass similar to Gorilla Glass. They used processes similar to those manufacturers use when tempering glass for windshields — that is, building in stress on the outside.

With the very thin glass used in electronics, the researchers applied a process called “ion exchange tempering.”

“We build in a lot more stress, so when it breaks, it releases all the stress, causing the cracks to spider out very rapidly into lots of little particles. We use a chemical process rather than a thermal process and add resistive heating,” said Whiting.

In essence, the process ruptures the material, with everything exploding from the center.

“Our idea was to build a substrate that contained all of this stress energy. When the substrate was triggered to do so, it would break itself apart and disintegrate. Those particles would disperse, then the crack would propagate into the electronics and pull it apart as well,” explained Whiting.

Speed Is the Key

When someone initiates a crack in one of these glasses, it propagates at about 1,500 meters per second, according to Whiting.

“The time between issuing the self-destruct command and it actually happening — we are talking about no more than a couple of seconds. It is very responsive,” he said.
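
As a rough illustration of why the fracture itself is near-instantaneous, here is a back-of-the-envelope calculation using the quoted propagation speed and some assumed substrate sizes:

```python
# Rough sanity check (substrate sizes are assumptions): how long does a
# crack front take to cross a chip-scale glass substrate at 1,500 m/s?
crack_speed_m_s = 1_500                 # propagation speed quoted by Whiting
for substrate_mm in (10, 50, 100):      # assumed substrate sizes, in mm
    t_us = (substrate_mm / 1_000) / crack_speed_m_s * 1e6
    print(f"{substrate_mm} mm substrate: ~{t_us:.0f} microseconds")
# 10 mm -> ~7 us, 50 mm -> ~33 us, 100 mm -> ~67 us. The "couple of
# seconds" is dominated by the trigger electronics, not the fracture.
```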

A variety of triggers can issue the destruct command: a photodiode, a radio signal, a laser, a voice phrase or a mechanical switch.

Useful Technology

Self-destructing electronics have applications in a civilian context, especially for mobile devices.

Encryption keys on mobile devices are usually very long random numbers that are nearly impossible to crack, “but they are often protected by relatively weak passwords chosen by their owners,” said Chris Camejo, director of threat and vulnerability analysis for NTT Com Security US. “Various forensics tools rely on this principle to crack the encryption technology included on most smartphones.”

Placing encryption keys on a self-destructing chip that could be activated if a device were lost or stolen would make it much more difficult for a thief to extract information, he told TechNewsWorld.
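
To see why the password, rather than the key itself, becomes the practical attack surface, here is a minimal Python sketch. It is purely illustrative and is not the PARC or NTT design; the PIN, the XOR-style wrapping and the low iteration count are all assumptions made for brevity:

```python
import hashlib, os

# Illustrative only: a strong random 256-bit device key is wrapped under
# a key derived from a short PIN, so an attacker targets the PIN instead.
device_key = os.urandom(32)
salt = os.urandom(16)
check = hashlib.sha256(device_key).digest()  # verifier a forensic tool would use

def kek(pin: str) -> bytes:
    # Key-encryption key derived from the PIN (iterations kept low for the demo)
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 10_000)

wrapped = bytes(a ^ b for a, b in zip(device_key, kek("4831")))

# Trying every 4-digit PIN is only 10,000 derivations -- minutes of work,
# no matter how strong the wrapped key itself is.
for i in range(10_000):
    candidate = bytes(a ^ b for a, b in zip(wrapped, kek(f"{i:04d}")))
    if hashlib.sha256(candidate).digest() == check:
        print("PIN recovered:", f"{i:04d}")
        break
```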

The self-destruction tech also could be useful for digital rights management technologies. All it takes for a DRM scheme to fail is one hacker extracting keys and sharing them, Camejo said. A self-destructing chip could make that much more difficult by eliminating stored encryption keys if tampering were detected.

The technology could be rolled out on mobile devices or as part of a next-generation DRM scheme, he said. Apple has a track record of taking security seriously on the iOS platform with memory encryption, hardware-encrypted fingerprint readers (as opposed to the software encryption used on many Android devices) and its lost phone functionality.

“The movie and recording industries have also shown a willingness to embrace just about any technology, including those seen as intrusive — as demonstrated by the Sony BMG rootkit scandal in 2005 — if they believe it will help them protect their copyrights,” Camejo added.

No Strings Attached

The research project does not have secret government or military objectives, PARC’s Whiting said. DARPA’s interest is very much in commercial applications.

“They want to find commercial uses in things like environmental sensing. It would be much cheaper to have the sensors simply go away when we finish with them rather than have to go collect them and then store them,” he said.

The agency is interested in securing health information and banking records. With all the personal information that roams the Internet, DARPA wants to find ways to remove that information quickly and reliably.

“DARPA often finds military applications for things, but that is not something that we are interested in. I am amazed at DARPA’s push to find really useful commercial applications for the research that they do,” Whiting said.

DARPA is funding the research without restrictions or demands. PARC is a purely commercial entity, and if it were to turn this research into a business, DARPA would support it, he added.

Cost of Research

Standard methods and materials make the technology accessible and affordable for commercial products.

“It would have a reasonable cost,” said Whiting.

“We are applying a process that is already used to produce strengthened glass like Gorilla Glass,” he said. “It is essentially that process with similar materials. Onto that glass we build electronics in a pretty conventional way. So the combination of those very commercial approaches with a few tweaks our way would not be something out of the reach of ordinary people.”

WiFi, Move Over – Here Comes LiFi

Disney researchers last week demonstrated Linux Light Bulbs — a protocol for a communications system that transmits data using visible light communication, or VLC, technology.

Linux Light Bulbs can communicate with each other and with other VLC devices — such as toys, wearables and clothing — over the Internet Protocol, according to Disney scientists Stefan Schmid, Theodoros Bourchas, Stefan Mangold and Thomas R. Gross, who coauthored a report on their work. In essence, they could establish a LiFi network that would function in much the same way that WiFi works.

Scientists have been experimenting with the concept of using light to channel data transmissions for years. Previously, however, VLC supported only simple communication between devices. Linux Light Bulbs may take that process one step further by enabling networking on VLC devices.

However, the throughput is tiny compared with other visible light approaches, and the technology suffers from proximity limitations, noted James T. Heires, a consultant at QSM.

“Visible light technology is viable for the Internet of Things, but only on a limited basis. This is due to the physical limitations of visible light,” he told LinuxInsider. The transmitter and receiver “must be within line of sight of each other.”

How It Works

Modern light-emitting diode light bulbs, or LEDs, can provide a foundation for networking using visible light as a communication medium, according to the Disney researchers’ report.

The team modified common commercial LED light bulbs to send and receive visible light signals. They built a system on a chip, or SoC, running the Linux operating system, a VLC controller module with the protocol software, and an additional power supply for the added electronics.

The key to the project’s success was the Linux software that enabled the signals to work with the Internet Protocol. The VLC-enabled bulbs served as broadcast beacons, which made it possible to detect the location of objects on the network and to communicate with them.

The Linux connection is at the software level. The Linux kernel driver module integrates the VLC protocol’s PHY and MAC layers into the Linux networking stack.

The VLC firmware on a separate microcontroller communicates with the Linux platform over a serial interface, the report notes.

Slow Going

The drawback is the speed. The network’s throughput maxed out at 1 kilobit per second, noted Seshu Kiran, founder of XAir.

“A data rate of 1 Kbps means a maximum 2 to 3 pixels of a good photograph can be transmitted per second,” he told LinuxInsider. “Good luck with an entire photo. For half of an HD photo to go, it will take 10.66 days.”
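
Kiran's figures depend heavily on how large an "HD photo" is assumed to be, but the order of magnitude is easy to check. A quick back-of-the-envelope at a few assumed file sizes:

```python
# Transfer time over a 1 kbit/s link for a few assumed image sizes.
rate_bps = 1_000
for label, mb in [("typical compressed photo", 5),
                  ("uncompressed 12MP image", 36),
                  ("very large raw image", 230)]:
    days = mb * 8e6 / rate_bps / 86_400
    print(f"{label} ({mb} MB): {days:.2f} days")
# The 230 MB case takes ~21.3 days, so half of it takes ~10.66 days --
# evidently the assumption behind Kiran's figure.
```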

The technology may not be fast enough to compete with other technologies. WiFi operates around 3 GHz, and invisible light frequency starts at 3 THz. That is some 1,000 times higher than the WiFi frequency.

“Technically, it should [seem] that light has a better promise in delivering data. It is true in theory — but electronics and circuits say otherwise,” said Kiran.

Made for IoT

Developers have proposed a wide range of applications for VLC tech — using LiFi in place of anything currently supported by commercial wireless technologies such as WiFi.

“The Disney effort is fairly limited in terms of performance, but other projects suggest that broadband quality data transfer performance is possible,” said Charles King, principal analyst at Pund-IT.

“The real issue driving VLC is the pervasiveness of the base technology,” he told LinuxInsider.

Data transfer solutions like WiFi require specialized equipment, installation and maintenance. However, light fixtures are virtually everywhere.

“Since LED represents the future of commercial lighting, developers are suggesting that VLC capabilities could easily be enabled in existing homes and businesses without the need for expensive extraneous systems,” King said.

“On the IoT side, VLC would provide an easy way of connecting endpoint sensors to back-end systems without needing to build expensive, dedicated networks,” he pointed out.

The Disney researchers developed hardware peripherals that effectively turn a consumer LED fixture into a Linux host, including a kernel module that integrates the VLC’s physical and MAC (media access control) protocol layers with a Linux-based networking stack, King added.

Trying Times

Light has been used as a communication medium for decades. Major uses include fiber optics and infrared devices, noted Heires. Auto industry researchers have been investigating the incorporation of VLC tech into headlights and sensors to allow cars to communicate with each other and thus avoid collisions.

“Applications such as using light to extend the range of a WiFi signal are within reason. However, since light does not travel through solid objects, such as walls or floors, light is impractical for applications such as TV control, sensor monitoring or security,” he said.

Brighter Ideas to Come

One of the lowest data rate uses for VLC and the IoT is an automatic door opener equipped with a light sensor at the lock. Point your smartphone at the door and let a modulated-light app flash a specific code to open it.

Such a system would work for homes, hotels, garages and more.
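
Here is a hypothetical sketch of how such an opener might encode its unlock code as light pulses. The scheme, timings and code below are invented for illustration, not taken from any shipping product:

```python
# Simple on-off keying: a short preamble so the lock's photodiode can
# sync, followed by the code's bits as flash on/off slots.
BIT_MS = 50            # assumed duration of each pulse slot
PREAMBLE = [1, 1, 1, 0]

def to_pulses(code: int, nbits: int = 16) -> list[int]:
    """Flash on/off schedule for a numeric unlock code."""
    bits = [(code >> i) & 1 for i in range(nbits - 1, -1, -1)]
    return PREAMBLE + bits

schedule = to_pulses(0xB33F)
print(f"{len(schedule)} slots of {BIT_MS} ms:", schedule)
```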

Another use is modulating streetlights to deliver specific information, such as alerts and emergencies, across an entire city.

It also could be used to safeguard top secret communications between coworkers.

If a light bulb in the garden could deliver commands for the automated sprinkler … that would be “a definite possibility,” Kiran suggested. “Data rates are not yet crucial there.”

Samsung’s IoT Products Make Themselves at Home

Samsung last week plunged into the Internet of Things for the home market, unveiling a new hub to control connected gadgets, home and sleep monitors, and a smart washing machine. The company made the announcements at IFA 2015, a large European trade show for consumer electronics and home appliances.

The SmartThings Hub for home devices is based on technology Samsung acquired a year ago when it purchased SmartThings. It is built around a powerful processor that enables video monitoring, and it includes a battery backup that lasts up to 10 hours in case of a power outage. The unit’s video-monitoring capability allows people to access a live stream at any time, yet only records video when an unexpected event, such as motion by the front door, is detected.

“Putting a processor in the hub, so all the devices aren’t required to be connected to the cloud at all times, is an absolutely essential piece of this,” said Bill Morelli, director of IoT, M2M and connectivity at IHS.

“It can transmit to the cloud as needed, but it doesn’t require a connection to the cloud to function,” he told TechNewsWorld.

That’s important, because “if you have 25 to 50 devices sending data to the cloud on a continual basis, that’s a ridiculous waste of bandwidth,” Morelli continued. “Some of that data can be processed at the hub, and only important data sent to the cloud and the owner notified of it.”
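
To put rough numbers on that point, here is an illustrative comparison; the per-device traffic figures are assumptions, not measurements:

```python
# 50 devices streaming telemetry to the cloud nonstop vs. a local hub
# that uploads only when an event is worth reporting.
devices = 50
stream_kbps = 10                    # assumed continuous rate per device
events_per_day, event_kb = 20, 50   # assumed event-driven traffic

continuous_gb = devices * stream_kbps * 86_400 * 30 / 8 / 1e6
event_gb = devices * events_per_day * event_kb * 30 / 1e6
print(f"continuous streaming: {continuous_gb:.0f} GB/month")  # ~162
print(f"event-driven uploads: {event_gb:.1f} GB/month")       # ~1.5
```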

Watches While You Sleep

Samsung also announced its Smart Home Monitor. The device provides unified home monitoring and protection against intrusion, and it reacts to smoke, fire, leaks, floods and other common household issues by delivering real-time notifications and video clips from multiple cameras via the SmartThings mobile app.

Another Samsung monitor watches over you while you sleep. SleepSense is a flat disk that slips under a mattress to monitor your heart rate, respiratory rate and movements during sleep, all without touching you.

What’s more, it can “talk” to other devices, so when you go to sleep, it can turn off the television, and adjust the heating or air conditioning automatically for a perfect sleep environment.

Samsung also pulled the wraps off its AddWash washing machine. Associated Android and iOS apps let you remotely monitor and operate the machine, as well as receive a range of notifications straight to your smartphone.

New Openness?

Samsung’s IoT strategy is rooted in openness and cross-industry collaboration, and it’s centered on human usage, the company said.

That hasn’t always been the case in the past, noted Ross Rubin, principal analyst with Reticle Research.

“SmartThings, before being acquired by Samsung, took a very open approach, trying to accommodate many different standards without playing favorites,” he told TechNewsWorld. “That’s in contrast to Samsung, which has come out in favor of some IoT standards and not others.”

Openness hasn’t been one of Samsung’s strong suits, observed Patrick Moorhead, principal analyst at Moor Insights and Strategy.

“Like Apple, Samsung has historically operated in a very insular fashion, as they can make nearly all their components and end products,” he told TechNewsWorld. “I would like to see Samsung support all the IoT standards — like AllSeen, OIC, Apple HomeKit, and Google Brillo and Nest. Now that would be open and awesome.”

More Than Me-Too

While Samsung’s new products give the brand an IoT presence in the home, whether they can distinguish themselves from competitors in the market remains to be seen.

“Samsung’s push into the IoT space is intriguing, but the new solutions the company introduced have — at least on the surface — many similarities with other vendors’ offerings,” said Charles King, principal analyst at Pund-IT.

“That may simply reflect the nature of the consumer IoT market, which focuses largely on commonplace events, processes and home appliances,” he told TechNewsWorld.

If Samsung wants to succeed in the IoT home market, it needs to deliver on its human-centric product pledge, suggested Greg Sterling, vice president of strategy and insight at the Local Search Association.

“The issue is whether Samsung can actually deliver products that live up to the hype,” he told TechNewsWorld.

“Their smartwatches, for example, have been inelegant and generally offered a poor user experience,” Sterling said, “but the fact that there are so many Samsung TVs, computers and smartphones in the world gives them a headstart regarding consumer awareness and brand loyalty.”

Costly Pastime

Even if Samsung delivers on its promises, uncertainties are still swirling around the IoT home market.

“You have to have a strong value proposition for consumers to do this,” said Jim McGregor, principal analyst at Tirias Research. “Some of these smart light switches run (US)$30. How many consumers want to spend $30 on a light switch?”

IoT easily can become an expensive pastime for consumers, said Bob O’Donnell, chief analyst at Technalysis Research.

“If someone bought all the things supported by SmartThings, it would cost thousands of dollars,” he told TechNewsWorld. “SmartThings is a good product, but the category is really lacking. Even Apple hasn’t been able to do anything in it.”

Security is also a problem within the category.

“With all these things, you want security built-in from the beginning,” said Ciaran Bradley, chief product officer with AdaptiveMobile.

“Unfortunately, with some of the devices we’re seeing, security is an afterthought,” he told TechNewsWorld.

Perhaps calling it an “afterthought” was putting it too nicely.

When it comes to security, “most of the IoT solutions out there suck,” Tirias’ McGregor told TechNewsWorld. “When I ask these companies how they’re protecting the data they’re streaming, none of them have an answer.”

Asynchronous compute, AMD, Nvidia, and DX12: What we know so far

Ever since DirectX 12 was announced, AMD and Nvidia have jockeyed for position over which of them would offer better support for the new API and its various features. One capability AMD has talked up extensively is GCN’s support for asynchronous compute, which allows all GPUs based on AMD’s GCN architecture to perform graphics and compute workloads simultaneously. Last week, an Oxide Games employee reported that, contrary to general belief, Nvidia hardware couldn’t perform asynchronous compute and that the performance impact of attempting to do so was disastrous on the company’s hardware.

This announcement kicked off a flurry of research into what Nvidia hardware did and did not support, as well as anecdotal claims that people would (or already did) return their GTX 980 Ti’s based on Ashes of the Singularity performance. We’ve spent the last few days in conversation with various sources working on the problem, including Mahigan and CrazyElf at Overclock.net, as well as parsing through various data sets and performance reports. Nvidia has not responded to our request for clarification as of yet, but here’s the situation as we currently understand it.

Nvidia, AMD, and asynchronous compute

When AMD and Nvidia talk about supporting asynchronous compute, they aren’t talking about the same hardware capability. The Asynchronous Command Engines in AMD’s GPUs (between two and eight, depending on which card you own) are capable of executing new workloads at latencies as low as a single cycle. A high-end AMD card has eight ACEs, and each ACE has eight queues. Maxwell, in contrast, has two pipelines, one of which is a high-priority graphics pipeline. The other has a queue depth of 31 — but Nvidia can’t switch contexts anywhere near as quickly as AMD can.

[Slide: Nvidia preemption restrictions, GDC 2015]

According to a talk given at GDC 2015, there are restrictions on Nvidia’s preemption capabilities. Additional text below the slide explains that “the GPU can only switch contexts at draw call boundaries” and “On future GPUs, we’re working to enable finer-grained preemption, but that’s still a long way off.” To explore the various capabilities of Maxwell and GCN, users at Beyond3D and Overclock.net have used an asynchronous compute test that evaluates the capability on both AMD and Nvidia hardware. The benchmark has been revised multiple times over the past week, so early results aren’t comparable to the data we’ve seen in later runs.

Note that this is a test of asynchronous compute latency, not performance. It doesn’t measure overall throughput — in other words, how long the work takes to execute — but whether asynchronous compute is occurring at all. Because this is a latency test, lower numbers (closer to the yellow “1” line) mean the results are closer to ideal.
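
For the curious, here is our reading of that pass/fail logic as a short sketch; it is not the benchmark's actual code:

```python
# Time compute alone, graphics alone, then both submitted together.
# True concurrency puts the combined time near the max of the two;
# serialized execution puts it near their sum.
def classify(t_compute: float, t_graphics: float, t_both: float) -> str:
    overlap = (t_compute + t_graphics - t_both) / min(t_compute, t_graphics)
    if overlap > 0.8:
        return "concurrent: async compute is working"
    if overlap < 0.2:
        return "serialized: no async gain"
    return "partial overlap"

print(classify(1.0, 1.0, 1.05))  # -> concurrent
print(classify(1.0, 1.0, 1.95))  # -> serialized
```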

Radeon R9 290

Here’s the R9 290’s performance. The yellow line is perfection — that’s what we’d get if the GPU switched and executed instantaneously. The y-axis shows performance normalized to 1x, which is where we’d expect perfect asynchronous latency to sit. The red line is what we’re most interested in: it shows GCN performing nearly ideally in the majority of cases, holding performance steady even as thread counts rise. Now, compare this to Nvidia’s GTX 980 Ti.

Attempting to execute graphics and compute concurrently on the GTX 980 Ti causes dips and spikes in performance and little in the way of gains. Right now, there are only a few thread counts where Nvidia matches ideal performance (latency, in this case) and many cases where it doesn’t. Further investigation has indicated that Nvidia’s asynch pipeline appears to lean on the CPU for some of its initial steps, whereas AMD’s GCN handles the job in hardware.

Right now, the best available evidence suggests that when AMD and Nvidia talk about asynchronous compute, they are talking about two very different capabilities. “Asynchronous compute,” in fact, isn’t necessarily the best name for what’s happening here. The question is whether or not Nvidia GPUs can run graphics and compute workloads concurrently. AMD can, courtesy of its ACE units.

It’s been suggested that AMD’s approach is more like Hyper-Threading, which allows the GPU to work on disparate compute and graphics workloads simultaneously without a loss of performance, whereas Nvidia may be leaning on the CPU for some of its initial setup steps and attempting to schedule simultaneous compute + graphics workloads for ideal execution. Obviously that process isn’t working well yet. Since our initial article, Oxide has stated the following:

“We actually just chatted with Nvidia about Async Compute, indeed the driver hasn’t fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute.”

Here’s what that likely means, given Nvidia’s own presentations at GDC and the various test benchmarks that have been assembled over the past week. Maxwell does not have a GCN-style configuration of asynchronous compute engines and it cannot switch between graphics and compute workloads as quickly as GCN. According to Beyond3D user Ext3h:

“There were claims originally that Nvidia GPUs wouldn’t even be able to execute async compute shaders in an async fashion at all; this myth was quickly debunked. What became clear, however, is that Nvidia GPUs preferred a much lighter load than AMD cards. At small loads, Nvidia GPUs would run circles around AMD cards. At high load, well, quite the opposite, up to the point where Nvidia GPUs took such a long time to process the workload that they triggered safeguards in Windows. Which caused Windows to pull the trigger and kill the driver, assuming that it got stuck.

“Final result (for now): AMD GPUs are capable of handling a much higher load. About 10x what Nvidia GPUs can handle. But they also need about 4x the pressure applied before they get to play out their capabilities.”

Ext3h goes on to say that preemption in Nvidia’s case is only used when switching between graphics contexts (1x graphics + 31 compute mode) and “pure compute context,” but claims that this functionality is “utterly broken” on Nvidia cards at present. He also states that while Maxwell 2 (GTX 900 family) is capable of parallel execution, “The hardware doesn’t profit from it much, though, since it has only little ‘gaps’ in the shader utilization either way. So in the end, it’s still just sequential execution for most workloads, though if you did manage to stall the pipeline in some way by constructing an unfortunate workload, you could still profit from it.”

Nvidia, meanwhile, has told Oxide that it can implement asynchronous compute and that the capability simply wasn’t fully enabled in drivers. Like Oxide, we’re going to wait and see how the situation develops. The analysis thread at Beyond3D makes it very clear that this is an incredibly complex question, and much of what Nvidia and Maxwell may or may not be doing is unclear.

Earlier, we mentioned that AMD’s approach to asynchronous computing superficially resembled Hyper-Threading. There’s another way in which that analogy may prove accurate: When Hyper-Threading debuted, many AMD fans asked why Team Red hadn’t copied the feature to boost performance on K7 and K8. AMD’s response at the time was that the K7 and K8 processors had much shorter pipelines and very different architectures, and were intrinsically less likely to benefit from Hyper-Threading as a result. The P4, in contrast, had a long pipeline and a relatively high stall rate. If one thread stalled, HT allowed another thread to continue executing, which boosted the chip’s overall performance.

GCN-style asynchronous computing is unlikely to boost Maxwell performance, in other words, because Maxwell isn’t really designed for these kinds of workloads. Whether Nvidia can work around that limitation (or implement something even faster) remains to be seen.

What does this mean for gamers and DX12?

There’s been a significant amount of confusion over what this difference in asynchronous compute means for gamers and DirectX 12 support. Despite what some sites have implied, DirectX 12 does not require any specific implementation of asynchronous compute. That aside, it currently seems that AMD’s ACEs could give the company a leg up in future DX12 performance. Whether Nvidia can perform a different type of optimization and gain similar benefits for itself is still unknown. Regarding the usefulness of asynchronous compute (AMD’s definition) itself, Oxide’s Kollock notes:

“First, though we are the first D3D12 title, I wouldn’t hold us up as the prime example of this feature. There are probably better demonstrations of it. This is a pretty complex topic and to fully understand it will require significant understanding of the particular GPU in question that only an IHV can provide. I certainly wouldn’t hold Ashes up as the premier example of this feature.”

Given that AMD hardware powers both the Xbox and PS4 (and possibly the upcoming Nintendo NX), it’s absolutely reasonable to think that AMD’s version of asynchronous compute could be important to the future of the DX12 standard. Talk of returning already-purchased NV cards in favor of AMD hardware, however, is rather extreme. Game developers optimize for both architectures and we expect that most will take the route that Oxide did with Ashes — if they can’t get acceptable performance from using asynchronous compute on Nvidia hardware, they simply won’t use it. Game developers are not going to throw Nvidia gamers under a bus and simply stop supporting Maxwell or Kepler GPUs.

Right now, the smart thing to do is wait and see how this plays out. I stand by Ashes of the Singularity as a solid early look at DX12 performance, but it’s one game, on early drivers, in a just-released OS. Its developers readily acknowledge that it should not be treated as the be-all, end-all of DX12 performance, and I agree with them. If you’re this concerned about how DX12 will evolve, wait another 6-12 months for more games, as well as AMD and Nvidia’s next-generation cards on 14/16nm before making a major purchase.

If AMD cards have an advantage in both hardware and upcoming title collaboration, as a recent post from AMD’s Robert Hallock stated, then we’ll find that out in the not-too-distant future. If Nvidia is able to introduce a type of asynchronous computing for its own hardware and largely match AMD’s advantage, we’ll see evidence of that, too. Either way, leaping to conclusions about which company will “win” the DX12 era is extremely premature. Those looking for additional details on the differences between asynchronous compute between AMD and Nvidia may find this post from Mahigan useful as well.  If you’re fundamentally confused about what we’re talking about, this B3D post sums up the problem with a very useful analogy.

iPhone 6s to feature animated backgrounds like the Apple Watch

The iPhone 6s will feature animated wallpapers like the Apple Watch, according to new reports

Apple is planning to bring the Apple Watch’s artfully animated wallpapers to the forthcoming iPhone 6s and iPhone 6s Plus, according to new reports.

While Apple Watch owners are given a choice of animated jellyfish, butterfly and flower backgrounds, the new phone wallpapers are reported to feature koi fish, flowers and coloured powdery smoke, according to 9to5Mac.

Pictures of what appears to be a yet-to-be-assembled iPhone 6s Plus box have been posted online on Chinese site cnBeta, complete with an image of a gold and black butterfly koi.

Images of what is alleged to be an unassembled 6s Plus box have surfaced online

The iPhone 6s and 6s Plus are likely to benefit from a new 12MP camera capable of shooting 4K video, with the front camera gaining flash support and the ability to record 1080p video at 60fps and slow-motion video at 240fps.

The new phones are also expected to sport Force Touch technology, which detects how hard the user is pressing and allows different actions to be carried out accordingly. According to reports, the technology will allow menu “shortcuts” that enable users to find options on menus more quickly.

One of the most prevalent rumours is that the new generation of iPhones will see the introduction of a new rose-gold model.

Apple has sent out media invitations for a press event on Wednesday September 9, where chief executive Tim Cook is largely expected to introduce a revamped Apple TV set-top box alongside the new phones.

The new box is expected to feature a touch-pad remote, extra inbuilt storage and Siri voice control for browsing and selecting programmes and films to watch, in light of a recent patent filing.

The new box is expected to run a TV-optimised version of iOS 9 with a refreshed interface and, for the first time, may be opened up to the app community, with Apple expected to unveil a software development kit (SDK) that allows developers to build apps. At present, only a handful of third-party programs feature on the Apple TV and introducing an SDK and App Store would expand this significantly.

It will also support HomeKit, after Apple confirmed that Apple TV would act as a central hub for its connected home appliances, including ecobee thermostats, lighting kits and smart sensors.

A leaked internal email from Vodafone staff suggests the new handsets will go on sale on September 25, with pre-orders being accepted from September 18.

Apple recently brought the Watch’s animated flower wallpapers to life with hand-sculpted resin models in all 24 windows of Selfridges in London. Ranging from 200mm to 1.8 metres in height, the flowers included purple passion flowers, pink peonies and white nigella.

Apple Watch Selfridges takeover  Photo: Tristan Fewings / Getty Images

Revealed: the first hydrogen-powered battery that will charge your Apple iPhone for a week

British firm Intelligent Energy has claimed a breakthrough in building smartphones that run for a week on a single cartridge

A British technology company has claimed a major smartphone breakthrough by developing an iPhone that can go a week without recharging, running instead off a built-in hydrogen fuel cell.

Intelligent Energy has made a working iPhone 6 prototype containing both a rechargeable battery and its own patented technology, which creates electricity by combining hydrogen and oxygen, producing only small amounts of water and heat as waste.

The company is believed to be working closely with Apple. In what it claims is a world first, it has incorporated a fuel cell system into the current iPhone 6 without any alteration to the size or shape of the device. The only cosmetic differences compared with other handsets are rear vents so an imperceptible amount of water vapour can escape.

Intelligent Energy said it is now considering the cartridges’ sale price.

Intelligent Energy’s miniaturised fuel cell fits inside the existing iPhone 6 chassis alongside the battery  Photo: iFixit

Executives believe that with cartridges priced at about the cost of a latte, a market worth as much as £300bn a year could open up.

Henri Winand, chief executive of Intelligent Energy, who refused to comment on rumours of Apple involvement, said: “To our knowledge this has never been done before.

“We have now managed to make a fuel cell so thin we can fit it to the existing chassis without alterations and retaining the rechargeable battery. This is a major step because if you are moving to a new technology you have to give people a path they are comfortable with.”

On the prototype fuel cell iPhone, seen by The Telegraph at Intelligent Energy’s Loughborough headquarters, hydrogen gas is refuelled via an adapted headphone socket.

For the commercial launch the company is developing a disposable cartridge that would slot into the bottom of future smartphones and contain enough hydrogen-releasing powder for a week of normal use without recharging.

Mark Lawson-Statham, the company’s corporate finance chief, said: “Our view is that this is a couple of years out but really it’s about how quickly does our partner want to press the button and get on with it?”

Apple declined to comment.

This 3D printer creates objects from 10 materials at once

A new 3D printer capable of manufacturing objects with up to ten materials at once has been unveiled.

MIT’s (apparently very busy) Computer Science and Artificial Intelligence Laboratory (CSAIL) released video of the printer, which they say is cheaper and “more user-friendly” than previous multi-material printers.

In a paper accepted at the SIGGRAPH computer-graphics conference, the team showed off the printer, which requires no human intervention while in operation. It can print at a resolution of 40 microns — less than half the width of a human hair — can self-calibrate and self-correct, and can print circuits directly within objects.

“The platform opens up new possibilities for manufacturing, giving researchers and hobbyists alike the power to create objects that have previously been difficult or even impossible to print,” said Javier Ramos, a research engineer at CSAIL who co-authored the paper along with members of Professor Wojciech Matusik’s Computational Fabrication Group.

The machine was constructed with components costing just $7,000, which compares favourably to the price of commercial multi-material printers, which MIT say is usually closer to $250,000 (for just three materials).

The key to the system is its ability to scan and print directly onto other objects — the team says you could put an iPhone into the machine and print a case directly onto it. Instead of extrusion — in which components are layered onto each other like icing on a cake — the machine “mixes microscopic droplets of photopolymers together that are then sent through inkjet printheads similar to the ones you see in office printers”. This requires more computing power — gigabytes of data at a time — but can more easily be scaled up with multiple materials.
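
A rough calculation, with an assumed object size, shows why droplet-level control implies gigabytes of data:

```python
# Voxelize a 10 cm cube (assumed) at the printer's 40-micron resolution,
# with one byte per voxel to index up to 10 materials.
side_m, voxel_m = 0.10, 40e-6
voxels = (side_m / voxel_m) ** 3           # 2,500^3 ~= 1.56e10 voxels
print(f"{voxels:.2e} voxels -> ~{voxels / 1e9:.1f} GB at 1 byte/voxel")
```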

Ramos said that the future of this printer doesn’t necessarily require everyone who wants to use one to own one.

“Picture someone who sells electric wine-openers, but doesn’t have $7,000 to buy a printer like this. In the future they could walk into a FedEx with a design and print out batches of their finished product at a reasonable price,” said Ramos. “For me, a practical use like that would be the ultimate dream.”

Gadget Ogling: Swimming Aids, Drinks Fixers, and Sleep Rings

Welcome to Gadget Dreams and Nightmares, the column that searches for the gadget announcements that’ll put a spring in our step while attempting to steer clear of those that’ll make us drag our heels.

On the trail this week are a fitness tracker for swimmers, clever goggles to keep you on track, a Bluetooth stick for cocktails, and a ring that tracks sleep.

Note — these are not reviews, and the ratings relate only to how much I’d ever wish to try out any of these items.

Butterfly Boost

As an occasional swimmer, I’d like a better way of monitoring how well I’m doing in the pool. It’s not as easy to use a stopwatch on my phone there as it is while out for a jog. So, Misfit is looking to bring wearables to the water with a new version of its Shine activity tracker, developed in collaboration with Speedo.

The Speedo Shine apparently can monitor your progress on laps across all types of strokes. It transmits its lap and distance data to your iOS or Android device. It also can track your other physical activity and sleep patterns. You won’t have to worry too much about recharging, as the battery lasts for up to six months.

While I’m largely unconvinced about wearables for personal use, I like the Shine’s ability to track activity both in and out of the pool. The alleged six-month battery life is attractive as well — the lack of a screen helps conserve power — so this is one fitness tracker I’m actually interested in trying. Just make sure I get out of the pool at some point before dark, please.

Direct Route

Staying the course on aquatic wearables, OnCourse is a pair of goggles designed to help swimmers stay on track in open water.

When wearers view their destination and click the middle of the goggles, the system apparently keeps them on a straight line as they head toward their target. An LED on each lens informs a wearer who is going the wrong way.

Judging by the Kickstarter campaign, these are aimed at triathletes — but I have difficulty staying in a straight line in the pool sometimes, so they might prove a boon to both me and those with whom I share the gym’s swim facilities.

Honestly, I think there could be something in pairing this with the Speedo Shine to help keep swimmers on track and help them reach that personal best they’re looking for. A smart idea, here.

Rating: 4 out of 5 Murky Depths

Quick Mix

Of course, once you’re done with a long day at the pool or the lake, chances are you’ll want to relax with a tasty beverage. If, like me, your cocktail-mixing skills leave something to be desired, however, or you’re too beat from all that working out to focus properly, then you might need a little help in concocting the perfect blend of booze.

So, here’s the MixStik to help out. Right off the bat, I like the name — it’s to the point, not too clever, and it gets at what this item is all about. It’s a Bluetooth stick that you connect to your smartphone, and once you’ve decided which drink you’d like to make, it seemingly will help you get the right proportions of each ingredient.

The LEDs on the stick show how much of each component you should add. Just in case you’re using an odd-shaped glass, there’s a ruler function to show MixStik what you’re working with. There’s also an option to tell the app what you have in stock at home, and it will suggest cocktails for you to try.
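
Here is a hypothetical sketch of the kind of arithmetic an app like MixStik's presumably performs; the recipe, glass size and fill factor are invented:

```python
# Scale a drink's ratio to the measured glass, leaving headroom for ice.
def pour_plan(ratio: dict[str, float], glass_ml: float, fill: float = 0.8):
    total = sum(ratio.values())
    return {ing: round(glass_ml * fill * part / total, 1)
            for ing, part in ratio.items()}

margarita = {"tequila": 2, "triple sec": 1, "lime juice": 1}
print(pour_plan(margarita, glass_ml=200))
# -> {'tequila': 80.0, 'triple sec': 40.0, 'lime juice': 40.0}
```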

I think many of us would enjoy cocktails more if they had the proper proportions. I’m not overly enthusiastic about using measuring spoons and cups to get the right quantities of ingredients, because I guess I’m a lazy millennial (give me a break, I’ve been swimming all day). Give me a thing I can throw in a glass that tells me what to do.

Rating: 4 out of 5 Bottoms Ups

Sleep Tight

When you’re ready to nap from all that working out and relaxing, you might want to learn how effective your sleep is. Sleep trackers are everywhere — though here’s one that stays on your finger rather than on your wrist.

Smart ring Oura apparently detects when you fall asleep and tracks your heartbeat, motion and temperature while you’re in the land of nod. It sends that data to a smartphone app to monitor sleep patterns.

I feel like I should hate it, given how much I despised the loved-one-only-notification smart ring I discussed last week, but somehow Oura seems useful and unobtrusive enough to take for a spin.

It looks like a passable piece of jewelry that you wouldn’t quite pick up on as a connected device at first glance.

It’s a shame that, at US$229 for the lowest Kickstarter reward tier, it seems too expensive. I’d need to forgo sleep to work more to justify the cost, which would defeat the purpose.

Rating: 3 out of 5 Sandmen
