Monthly Archives: January 2015

Nokia threatens London start-up over ‘HERE’

Nokia has threatened legal action against a small UK technology firm over its use of the word “HERE”.

Lowdownapp – a digital personal assistant – allows users to tell friends they have arrived at a location by pressing the “HERE” button.

Lowdownapp has also released a standalone app, called HERE, for checking in to locations.

Nokia said the name would confuse the general public into thinking it was part of Nokia’s own HERE range.

Nokia’s HERE is the Finnish firm’s brand name for apps and software relating to mapping and navigation.

The company said it had so far invested $12m (£8m) in promoting the HERE brand.

In a letter seen by the BBC, Nokia gave London-based Lowdownapp a deadline of 10 February 2015 to rebrand the “HERE” function of the apps.

“Our client has invested heavily in building and promoting the HERE brand since launch,” the letter from Nokia’s lawyers to Lowdownapp read.

The firm said it had registered trademarks for the word when it related to computer software, such as apps.

‘David versus Goliath’

The letter added: “Your use of the HERE sign is likely to deceive members of the relevant public such that they will believe your business is connected with or part of our client’s business, when that is not the case.

“This amounts to a misrepresentation that will cause damage to our client’s goodwill in the UK and amounts to passing off.”

David Senior, chief executive of Lowdownapp, described the threat as a real-life David versus Goliath.

“It’s ludicrous – people say, ‘I’m here,’ to announce their arrival, which is why we have it as a service.

“As a small start-up trying to deliver value to users we don’t think a multi-billion dollar company will be affected by this.

“Life is hard enough without Goliaths squashing Davids – maybe they should focus on creating a better mapping service than Google or Apple than squishing a minuscule business.”

Mr Senior said he would probably remove the HERE standalone app from relevant app stores – but was taking further legal advice on whether to remove the “HERE” button from Lowdownapp.

A spokesman for Nokia could not be reached for further comment.

[update] Be warned: Google enlists Chrome in push for encrypted Web

Google has taken its first step toward flagging ordinary sites like Wikipedia and CNN with a security warning because they are unencrypted, leaving all data transmissions open to the prying eyes of hackers or governments.

Google just gave Chrome something of an insecurity complex.

That’s because the company has enlisted Chrome — the No. 2 desktop browser worldwide — in its effort to make secure, encrypted connections on the Web the rule rather than the exception. Encryption scrambles data during transmission to protect users from identity thieves and prying governments. This week, Google built a feature into a test version of Chrome to explicitly warn people about Web pages that are delivered without encryption.

As the feature spreads to mainstream versions of Chrome, it could alarm people who thought Web pages were working fine and could impose new costs on Web site operators who don’t want their users fretting that something is wrong. But in Google’s view, the problem needs fixing.

“We know that active tampering and surveillance attacks, as well as passive surveillance attacks, are not theoretical but are in fact commonplace on the Web,” Chris Palmer, a security programmer on Google’s Chrome team, said last month in a mailing list post explaining the plan.

Moving toward encryption by default is a monumental change for the Web. With unencrypted pages, somebody like an Internet service provider, taxi or airport Wi-Fi operator, or a malicious hacker offering a “free Wi-Fi” hot spot can read all the data sent to and from a computer. A hacker can also modify a Web page, and an ISP can insert its own advertising. To guard against that kind of eavesdropping and tampering, Google encrypted its Gmail connections and search site in 2010, and Yahoo and Microsoft have followed suit.

But countless Web pages aren’t offered over a secure connection, including Wikipedia, Instagram, Craigslist, Imgur, China Daily, CNN and Amazon product pages. Indeed, 55 percent of the Web’s top million sites don’t offer encryption, according to a 2014 analysis.
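For readers curious how such surveys probe a site, a minimal sketch in Python: attempt a TLS handshake and an HTTPS request against a hostname. The function name and timeout are illustrative, not from any survey’s actual tooling.

```python
import http.client
import ssl

def offers_https(host, timeout=5):
    """Return True if `host` completes a TLS handshake and answers an
    HTTPS request; False on any connection or certificate error."""
    try:
        conn = http.client.HTTPSConnection(
            host, timeout=timeout, context=ssl.create_default_context())
        conn.request("HEAD", "/")
        conn.getresponse()
        conn.close()
        return True
    except Exception:
        return False
```

A real crawl would also need to distinguish sites that merely answer on port 443 from sites that serve their full content over HTTPS, but the handshake test above is the first cut.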

“In general the principle is sound,” said Robert Duncan, a manager at Internet services and research firm Netcraft. But actually turning the principle into practice will mean many difficulties. “For smaller Web sites, many webmasters won’t have any idea what security is and how to go about doing it, even if it’s free.”

Google has been pushing for an encrypted Web for years, but former National Security Agency contractor Edward Snowden’s revelations about NSA surveillance have lent new urgency to the cause. In 2013, Snowden showed the massive extent of government surveillance both through official channels like subpoenas and the interception of communications traffic.

The first step toward bringing the encryption plan to fruition came this week, and it will directly affect almost nobody. The bleeding-edge Canary version of Chrome — not stable or tested enough for ordinary users — now offers a manual setting that enables the warning about unencrypted pages. A person visiting an unencrypted page will see in Chrome’s address bar a padlock with a red X over it.

As the year progresses, expect the change to spread to mainstream Chrome. Google hasn’t declared a schedule for activating the feature, but suggested one option could be to add the warning once encrypted connections reach a certain threshold of commonness.

To enable the feature now, a person has to install Chrome Canary and activate the “mark non-secure origins as non-secure” option in Chrome’s chrome://flags interface.

Google suggests a phased transition to the warnings, but in the long run, the company expects a reversal in browser behavior. Today, green lock icons denote secure pages while unencrypted pages are plain. In the future, as encrypted pages become the norm, they could get the plain treatment while unencrypted sites could sport a red warning sign.

HTTPS advocacy

Encrypted Web pages are sent using the HTTPS (Hypertext Transfer Protocol Secure) technology. HTTPS arrived not long after unencrypted HTTP helped begin the Web revolution 25 years ago; the main incentive for adding HTTPS was preventing password eavesdropping on login pages and keeping credit card numbers secret for e-commerce.

Google has worked to counter one perception standing in the way of HTTPS: that HTTPS requires more powerful and therefore expensive hardware for Web site operators. But SSL/TLS, the encryption standard underlying HTTPS, “is not computationally expensive any more,” Google security expert Adam Langley argued back in 2010. “Ten years ago it might have been true, but it’s just not the case any more. You too can afford to enable HTTPS for your users.”

Snowden’s revelations helped marshal more allies to Google’s cause.

For example, the Electronic Frontier Foundation (EFF), an advocate of personal freedoms on the Net and outspoken critic of government snooping, has advocated HTTPS for years. But it increased its efforts after Snowden’s leaks.

The EFF and partners including Firefox developer Mozilla, network equipment maker Cisco Systems, and content distributor Akamai Technologies launched a project late last year called Let’s Encrypt to make it easier for Web site operators to move to HTTPS. Specifically, Let’s Encrypt will offer free certificates, the electronic credentials required to encrypt a Web site connection.

Mozilla support

Another ally for Google’s HTTPS plan is Mozilla.

“In general, this proposal seems like a good idea,” said Richard Barnes, the nonprofit organization’s cryptographic engineering manager. “Adding security to the Web is a core part of our mission…We strongly support the deployment of HTTPS as widely as possible.”

He specifically supports one facet of Google’s proposal: showing the warnings only once HTTPS-encrypted Web pages have become the ordinary case. Being more aggressive could cause confusion and other undesirable side effects.

“We wouldn’t want to turn on a warning light that’s on all the time — that just trains users to ignore it,” Barnes said. “An indicator of HTTP being insecure should be thought of as a way to move the state of HTTPS from ‘dominant’ to ‘universal,’ not from ‘bare majority’ to ‘universal.'”

Speed bumps and stop signs

Yandex, a Russian search rival to Google that now also offers a Web browser, sees user privacy and security benefits to Google’s plan, but it has its own ideas about warning users about unencrypted Web connections.

The Internet industry isn’t ready to deliver HTTPS connections at the scale they deliver HTTP connections today, said Anton Karpov, Yandex’s head of information security. Web site operators have to worry that HTTPS connections are sometimes blocked in areas like airports and that, contrary to Google’s position, HTTPS does require beefier hardware to handle the encryption calculations.

Another hitch is the content delivery network (CDN) business, in which companies armed with global network capacity and servers help Web site operators distribute their content the world over. CDNs can offer HTTPS connections — but they often charge a premium.

Outside the tech industry, there’s another kind of opposition. For example, in January, UK Prime Minister David Cameron pledged to ban encrypted communication software that’s unbreakable by the government in order to more effectively combat terrorism.

Web encryption could help thwart legislative ambitions to ban smartphone apps whose encryption comes with a government-accessible back door. For example, a person could point a browser at an encrypted online chat site in a different country.

Overall, the momentum toward encryption is powerful, as seen in Apple’s decision to encrypt data stored on iPhones and iPads and Google’s parallel move with its Android mobile operating system. New network technologies, including Google’s SPDY and the related HTTP/2 standard, will in practice require encryption in some common instances.

Moving to an encrypted Web won’t happen quickly, but Google has momentum on its side.

In a hurry? Let a robot valet park your car

Throwing your keys at the parking valet as you sashay onto a flight may seem like the stuff of James Bond films, but already a robotic valet is taking the sweat out of getting on a plane at Germany’s Dusseldorf airport.

Rather than getting behind the wheel, however, this robotic valet physically lifts your three tons of road machinery and slots it into pre-designated robot parking bays.

Nicknamed RAY by its creators, the automated forklift truck is the brainchild of Germany’s Serva Transport.

Aimed at business travelers in a hurry, the automated parking system can be controlled and booked via an app. All travelers have to do is drop the car off in a designated area, go to a nearby touch screen to confirm the car is empty, and RAY does the rest.

RAY uses sensors to measure and photograph the car; it then gently lifts the vehicle and takes it to one of 249 parking spots reserved for the robot forklifts.

The company claims that its space-saving system — which uses lasers and sensors to measure not just the height and width of the cars but accessories such as wing mirrors and fenders — can park 60% more cars than a human driver.

The system is also connected to the airport’s flight data system: RAY will retrieve the car based on flight itineraries. The app also lets car owners communicate with RAY if there are any flight delays.

The airport charges €29 a day ($40) or €4 ($5.50) an hour for the service, which the airport’s management said was likely to appeal to time-strapped corporates.

“Our product is especially appealing to business travelers, who arrive at the airport shortly before the flight, seek efficient parking, and return within a few days,” Thomas Schnalke, the airport’s managing director, said in a statement.

A new tie-up with Volkswagen announced this month aims to increase the efficiency of RAY by getting the car and the parking robot to communicate with each other.

“Our jointly developed technology exchanges data automatically between RAY and Volkswagen cars via Bluetooth and thus facilitates the parking progress,” said Rupert Kock, the managing director at Serva Transport Systems.

But RAY is not the only robot valet on the block.

A New Jersey startup called Boomerang also aims to take parking to the next level by using an automated parking system that can park hundreds of cars without human intervention.

By shuffling cars like the squares of a giant Rubik’s Cube in garages that need no light and little ventilation, the system, the company says, not only saves on energy but can fit more cars into a smaller space, freeing up valuable land for other real estate.

According to Boomerang CEO Mark Patterson, the advantage of his system is that it is designed with multiple entry bays, multiple robots and multiple lifts, so there is no single point of failure.

“If any one thing goes down, we can still operate the system,” he told CNN.

Drivers put their car into a parking bay that places the car on a large steel tray. Robotic wheeled platforms slide under the vehicle and then transport it to the bays following buried wires in the floor of the carpark.

Patterson says the system’s increased throughput means the bays can be filled and emptied more quickly than in a conventional carpark.

“Our system is installed in a garage with level concrete floors so there’s total fire separation between floors like in a conventional garage – most legacy systems are steel rack structures with no separation between floors,” he said.

“Developers like it because you can park 100% more cars in the same space and that’s a big value proposition.”

The other advantage is that the carpark is a ‘sterile’ environment that has no need for human intervention.

“The cars are not running in these garages so there’s a big savings on air handling equipment,” Patterson said. “You need seven or eight air changes an hour with traditional carparks versus just one or two with this system.”

Similarly, there’s also no need to illuminate the building to the sort of levels that would deter muggers or other attackers that lurk in the gloom of multi-story carparks.

“Robots don’t care if it’s dark,” Patterson said.

Google Gives WebView the Cold Shoulder

Google has decided not to fix vulnerabilities in WebView for Android 4.3 and older, sparking heated discussions among developers.

Those versions of WebView are based on the WebKit engine. Fixing them “required changes to significant portions of the code and was no longer practical to do so safely,” Adrian Ludwig, lead engineer for Android security, explained last week in a post.

Ludwig recommended steps users and developers can take to mitigate the potential exploitation of WebView vulnerabilities without updating to Lollipop, or Android 5.0.

The decision will leave 930 million users of Android devices in the lurch, security researcher Tod Beardsley warned earlier this month.

Let ‘Em Eat Cake!

Users should employ a browser that has its own content renderer and is regularly updated, Ludwig suggested.

Chrome and Firefox are securely updated through Google Play, he pointed out. Firefox is supported on Android 2.3 and higher, while Chrome is supported on Android 4.0 and higher.

Consumers should load content only from trusted sources, Ludwig advised.

Developers should “confirm that only trusted content … is displayed within WebViews in their application,” he said. They should consider providing their own renderer on Android 4.3 and earlier so they can update it with the latest security patches.

Everybody’s Going for Shiny New Stuff

“With the advances in Android 4.4, the number of users that are potentially affected by legacy WebKit security issues is shrinking every day as more and more people upgrade or get new devices,” Ludwig observed.

Android 4.4, aka “KitKat,” introduced a new WebView component based on the Chromium open source project. It includes an updated version of the V8 JavaScript engine and support for modern Web standards not in the earlier version of WebView.

However, Google’s own statistics tell a different tale.

Figures from a seven-day period ending Jan. 5 posted on the Android Developers Dashboard indicate Jelly Bean had 46 percent of the market and KitKat 39 percent. Ice Cream Sandwich had 6.7 percent and Gingerbread 7.8 percent. Lollipop didn’t make the cut for the dashboard, which doesn’t display any versions with less than 0.1 percent distribution.

In other words, a good 60 percent of Android users are at risk from WebView flaws.
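The arithmetic behind that figure can be checked directly from the dashboard numbers quoted above, since Jelly Bean, Ice Cream Sandwich and Gingerbread all predate the Chromium-based WebView:

```python
# Android version shares (percent) from the dashboard figures quoted above.
# These releases all ship the legacy WebKit-based WebView that Google
# will no longer patch (Android 4.3 and older).
at_risk_shares = {
    "Jelly Bean (4.1-4.3)": 46.0,
    "Ice Cream Sandwich (4.0)": 6.7,
    "Gingerbread (2.3)": 7.8,
}
total = round(sum(at_risk_shares.values()), 1)
print(total)  # 60.5 percent of devices at risk from WebView flaws
```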

Still, “generally speaking, Google can’t go back and support all the old versions,” said Al Hilwa, a research program director at IDC.

“You have to have a cutoff at some point and go forward,” he told TechNewsWorld. “That’s pretty normal for the industry.”

Reactions to Ludwig’s Ideas

“Telling app developers to just provide your renderer rather than you guys handling your own screw-ups? What a joke,” wrote Jake Weisz in response to Ludwig’s post. Stating the fix is expensive or difficult “is not an excuse because it’s Google’s responsibility.”

Also, “as a developer of an app that renders content from the open Web, I feel like [the suggestion devs provide their own renderer] badly misrepresents and underestimates the work involved in such a task,” Chris Lacy wrote. “Building and shipping a Web renderer is an absolutely massive task.”

From a developer perspective, “it isn’t right for Google to not provide backward compatibility or at least a support library for most of the vulnerabilities,” said Anirudh Pothani, head of Android development at Copper Mobile.

“This isn’t the first time Google has done something to make developers’ lives hard by not providing backward compatibility,” he told TechNewsWorld.

In most cases, developers “might require a custom implementation of the WebView” to patch the vulnerability, Pothani said.

However, most developers might not do anything to fix the problem, because the independents might not have the time to write their own WebView, he noted, while for corporate devs, most companies “do not provide adequate time to fix issues which might need them to rewrite the core framework being used in their app.”

China’s Great Firewall Gets Taller

Internet Filter Makes It Harder to Circumvent Blocks to Services Like Google and Facebook

China’s government has unveiled a smarter and stricter Internet filter, riling web users and widening the divide between China’s Internet and the World Wide Web.

A recent upgrade to China’s web filters, commonly referred to as the Great Firewall, has made it more difficult to use services called virtual private networks to circumvent the country’s blocks on U.S. services like Google and Facebook.

Chinese officials confirmed a crackdown on VPNs this week, saying that new measures were needed as the Internet evolved. In the past week, major VPN providers such as Astrill have reported disruptions to their services.

The move is further indication of China’s desire to create a parallel Internet environment that it can more easily control. The web filters serve a dual purpose of screening out content critical of the Chinese government and providing protection for China’s own growing web firms against stronger overseas rivals.

The upgraded firewall also comes as Beijing is calling for U.S. technology companies to submit to intrusive security inspections, according to U.S. business groups.

This time, China appears to have made the blocking of VPN connections more automated and dynamic, said Liviu, who runs a VPN service based in Romania and asked that his surname be withheld to avoid reprisal. Whereas China’s firewall previously blocked connections known to be VPNs, since late last year it also appears to automatically find and block connections that it thinks are likely to be VPNs, he said.

“Now it seems they are doing it automatically,” he said. “You can apply some clever rules for the firewalls that will not trigger blocks.”

The crackdown has complicated life for business people in China who rely on global services such as Gmail and Twitter to communicate with clients and collaborators.

Christopher Dobbing, director of Vogmask China, which sells pollution masks, said the disruptions to VPNs have made it difficult to connect to Gmail, Facebook and other services that he uses to correspond with clients.

“I couldn’t run my business without it (VPN),” he said. “I understand the government needs to protect itself against risks, but I’m just trying to do my work.”

Liheng Bai, an independent college counselor based in Shanghai, said the VPN crackdown has also presented challenges to educators and students. Ms. Bai says she searches for information about U.S. colleges online and helps students log into admissions portals when they apply for U.S. colleges—a slow process without a VPN.

“In the long run, it really affects Chinese students’ access to the latest information in education, science and literature. It’s very narrowing and limits their world view,” she said, adding that search results from Google and Chinese search engine Baidu are very different.

Kestrel Lee, a Shanghai-based marketing consultant who is active on social media, says that he used to use Gmail as his primary email, but has switched to Hotmail due to blocks and disruptions.

“All of us who use Gmail have created new accounts by now,” he said. “It’s no use trying to fight this.”

The VPN disruptions, added to already slower connection speeds for loading foreign websites in China, mean decreased productivity for Chinese researchers, engineers and others whose work involves keeping tabs on global developments.

But state media has argued that the benefits for China’s tech sector outweigh such costs. A Global Times column on Wednesday said the success of China’s Internet giants could be credited to the firewall.

“The firewall blocks certain overseas websites in a targeted fashion, rather than isolating China’s Internet from the overseas one,” the column said.

Others disagree. Peking University professor Wu Bihu took to the microblog platform Weibo to complain about recent crackdowns on the Internet and other media.

“What do you want to do?” he wrote. “The Ministry of Industry and Information Technology closes and cuts off the global Internet, the State Administration of Press, Publication, Radio, Film and Television rudely censors TV dramas, the State Administration for Industry and Commerce wanted to screw Alibaba without checking the source of the fakes or good intention…China has reformed and opened up for decades but now it’s back to the impasse of seclusion. Isn’t it sad!”

While the blocks are unlikely to be reversed by China’s government, it could spur VPN providers to come up with new and better ways to get around the firewall, analysts say. VPN providers pointed to a similar crackdown in 2012 that resulted in stealthier wall-hopping techniques.

Corrections & Amplifications

Liviu, who runs a VPN service based in Romania and requested his surname to be withheld to avoid reprisal, said “Now it seems they are doing it automatically,” and added that “you can apply some clever rules for the firewalls that will not trigger blocks.” An earlier version of this article omitted the word “not” in the second part of the quote.

—Alyssa Abkowitz in Beijing and Fanfan Wang and Colum Murphy in Shanghai contributed to this article.

Flexible nanogenerator harvests muscle movement to power mobile devices

The consumer world is becoming powered by mobile devices, but those devices are still powered by being tethered to a wall or a reserve power pack. What if you could generate power for your mobile devices simply by moving your body, and the power source was almost unnoticeable? A new device developed at the National University of Singapore aims to fulfill both of those requirements.

The flexible nanogenerator resembles a small, stamp-sized patch that attaches to your skin. It uses your skin as a source of static electricity, and converts it to electrical energy — reportedly enough to power a small electronic device, like a wearable. The device, presented at the MEMS 2015 conference last week, can generate 90 volts of open-circuit voltage when tapped by a finger. The researchers presented the patch as a self-powered device that can track the wearer’s motion.


The power comes from the triboelectric effect, in which certain materials become electrically charged through contact and friction with another material — in this case, the patch gains its charge through friction with human skin. When the two materials are pulled apart, they generate a current that can be harvested. An electrode is needed to harvest the current, so the research team installed a 50nm-thick gold film to get the job done. The gold film sits below a silicone rubber layer composed of thousands of tiny pillars that increase the surface area for skin contact, which in turn creates more friction.
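For a rough sense of scale, a single tap can be modeled as discharging a charged capacitor (E = ½CV²). The 90-volt figure is from the report above; the capacitance value is a purely illustrative assumption, not a measured property of the device.

```python
# Back-of-envelope energy per tap, modeling the patch as a capacitor.
C = 1e-9   # assumed device capacitance of ~1 nF (illustrative, not measured)
V = 90.0   # open-circuit voltage reported for a finger tap
energy_j = 0.5 * C * V ** 2
print(f"{energy_j * 1e6:.2f} microjoules per tap")  # ~4 microjoules
```

A few microjoules per tap is tiny, but it is in the right ballpark for briefly flashing a handful of LEDs, consistent with the demonstration described below.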

Thanks to the triboelectric effect, creating the device is easier as well — the skin is one of the triboelectric layers that helps produce the effect, so that layer doesn’t need to be built into the device itself, saving time, money, and materials. It also removes something that can go wrong with the device — having one less layer built in means that’s one less part that can break.

In the researchers’ test, a finger-tap on the device was able to generate enough current to power 12 commercial LEDs.

Aside from the obvious benefit of being able to, in theory, indefinitely power a device so long as you keep moving, this type of generator could remove the need for batteries in certain mobile devices — your smartwatch or fitness tracker could be made even thinner and lighter. Who knows — one day this type of generator could even generate enough energy to power your smartphone, perhaps even removing the battery entirely, which is one of the biggest constraints to smartphone development and design.

IBM joins forces with Mars, taps genomics, to boost food safety

Tech heavyweight IBM has joined forces with food manufacturing giant Mars in an attempt to boost global food safety.

Scientists from the two companies have founded the Consortium for Sequencing the Food Supply Chain, tapping advances in genomics to gain a better understanding of food safety.

IBM says that researchers will investigate the genetic fingerprints of living organisms such as bacteria, fungi, and viruses, examining how they grow in environments such as countertops, factories, and raw materials. The data will be used to investigate how bacteria interact, with scientists hopeful the results will improve food safety management across the supply chain.

“It’s becoming extremely complex with the global supply chain,” Jeff Welser, lab director of IBM’s Almaden research center in San Jose, Calif., said. “A small problem in one place can travel quickly.”

The consortium will conduct the largest ever study of metagenomics, aiming to better categorize and understand micro-organisms and their impact in factory environments.

“[Metagenomics] is using genomics to identify the microorganisms in a sample to determine whether they are healthy or not,” said Welser. “It could be a piece of food or an ingredient for food, or it could be a swab taken from a machine in a food processing plant.”

“We have to figure out what microorganisms there are,” he added. “Are the population sites healthy? Are they normal?”

The first samples will be taken from Mars-owned production facilities. Scientists from U.C. Davis will sequence the sample data, which will then be sent to IBM. “We take it to do the work on the analytics and algorithms,” said Welser. “Over time, we will build a database that we will use as a reference for what is normal.”

IBM is no stranger to handling vast quantities of complex data. Last year, for example, the company enhanced its Watson supercomputer, famous for its appearance on the quiz show “Jeopardy,” in an attempt to speed up the pace of scientific breakthroughs.

Welser said that IBM is actively pursuing other partners to join the consortium, with the support of Mars. “Food safety is not a competitive issue, all companies want food to be safe,” he said.

SpaceX Video Stirs Excitement for Falcon Heavy

SpaceX on Thursday released a computer-generated animation demonstrating how the three Falcon 9 cores of its Falcon Heavy rocket, scheduled for launch later this year, would return to Earth.

The boosters would land vertically at a selected site.

Trimming Costs

The Falcon Heavy will be the most powerful rocket in the world, able to lift more than 53 metric tons, or 117,000 pounds, into low Earth orbit. That’s equivalent to a Boeing 737 fully loaded with passengers, crew, luggage and fuel, and it is more than twice the payload of the next closest operational space vehicle, the Delta IV Heavy, SpaceX said.

Lifting that payload with the Falcon Heavy will cost just one-third of the cost of doing it with the Delta IV Heavy, according to SpaceX.

It will cost US$85 million to launch a payload of up to 14,100 pounds to geosynchronous transfer orbit using the Falcon Heavy.
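Those figures can be sanity-checked with a quick unit conversion; the conversion factor is standard, while the dollar and mass numbers are the ones quoted above.

```python
LB_PER_KG = 2.20462  # pounds per kilogram

# Payload comparison: 53 metric tons to low Earth orbit.
leo_payload_lb = 53 * 1000 * LB_PER_KG
print(round(leo_payload_lb))  # ~116,845 lb, i.e. the "117,000 pounds" quoted

# Implied cost per kilogram for the $85M launch of up to 14,100 lb to GTO.
gto_payload_kg = 14100 / LB_PER_KG
cost_per_kg = 85e6 / gto_payload_kg
print(round(cost_per_kg))  # roughly $13,000 per kilogram to GTO
```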

If the Falcon Heavy works as planned and can be recovered for reuse, “it could reduce the cost per kilo to transport [a payload] to orbit from millions of dollars to thousands,” said Mike Jude, manager of the Stratecast consumer communication services program at Frost & Sullivan.

The Falcon Heavy is a variant of the Falcon 9 v1.1. It will consist of a standard Falcon 9 nine-engine core with Merlin 1D engines, flanked by two additional Falcon 9 first stages serving as strap-on boosters, for three cores in all.

The Heavy stands 224 feet tall and is 38 feet wide. It has a mass of more than 3.2 million pounds. The Falcon Heavy’s maximum GTO payload is 46,700 pounds.

Crashing and Burning

SpaceX has been experimenting with rocket recovery, testing first-stage boosters that relight their engines to slow their descent through the atmosphere. They have fins to help their descent, and legs for a stable touchdown.

SpaceX’s attempt earlier this month to bring part of a Falcon 9 rocket down to a floating platform — a barge less than 100m wide — failed, in that the first stage hit the barge hard instead of making a controlled return. However, it was pitch dark and foggy at the time.

The fins “ran out of hydraulic fluid right before landing,” SpaceX CEO Elon Musk tweeted.

An attempt in August, using the prototype Falcon 9 Reusable, failed when the vehicle blew up in mid-air over McGregor, Texas, after its instruments detected an anomaly.

Getting Rockets Down to Earth

“General Dynamics Space Systems was working on this idea in 1991,” said Jim McGregor, principal analyst at Tirias Research, who worked on that project. “This was a government-sponsored program where General Dynamics was working on one project and Boeing on another.”

The program “was one of the programs killed by the Clinton administration,” he told TechNewsWorld. “If the United States government hadn’t killed NASA in the early ’90s, we would have had [a reusable rocket] by the end of the ’90s.”

“The challenge is landing a rocket without wrecking it because it’s coming down at high speed. You almost have to do it with a parachute assist,” said McGregor.

Also, a rocket that returns to Earth will require fuel, which means it can carry less cargo.

“The more mass you devote to recovery and re-entry, the less you have for the payload,” Frost’s Jude told TechNewsWorld, but SpaceX “is trying to get around some of this by distributing recovery mass to each stage.”

Development of the technology to achieve it “is more of a software problem” than a materials issue, suggested Jude, and SpaceX “will probably be successful on the next attempt.”
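The payload penalty Jude describes follows from the Tsiolkovsky rocket equation: any propellant held in reserve for a landing burn is mass that cannot be spent lifting cargo. A minimal back-of-the-envelope sketch (the stage dry mass, delta-v budget, and specific impulse below are illustrative assumptions, not SpaceX figures):

```python
import math

def propellant_for_delta_v(dry_mass_kg, delta_v_ms, isp_s=282.0, g0=9.80665):
    """Propellant needed to give `dry_mass_kg` a velocity change of
    `delta_v_ms`, via the Tsiolkovsky rocket equation. The default
    Isp is a rough sea-level value for a Merlin-class kerosene engine."""
    ve = isp_s * g0  # effective exhaust velocity, m/s
    return dry_mass_kg * (math.exp(delta_v_ms / ve) - 1.0)

# Illustrative numbers: a ~25 t empty first stage that must shed
# ~2 km/s across its boostback, re-entry, and landing burns.
stage_dry_mass = 25_000.0   # kg (assumed)
landing_delta_v = 2_000.0   # m/s (assumed)

reserve = propellant_for_delta_v(stage_dry_mass, landing_delta_v)
print(f"Propellant reserved for recovery: {reserve / 1000:.1f} t")
```

Even with these rough inputs, the reserve comes out to tens of tonnes of propellant at launch, which is exactly the recovery mass that trades directly against payload.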


Will nanotechnology soon allow you to ‘swallow the doctor’?


Imagine a swarm of microscopic robots, so tiny that a teaspoon can hold billions of them.

They are ready to be injected into the most delicate areas of a human body — the heart and the brain — to deliver drugs with extreme precision or work like an army of nano surgeons, operating from within.

If it all sounds like science fiction, that’s because it is: the plot of the 1966 sci-fi classic Fantastic Voyage revolves largely around this concept.

In the film, four people board a miniaturized submarine to enter the bloodstream of an American scientist, left comatose by the Russians as a result of a Cold War quarrel over the technology. They only have an hour to remove a life-threatening blood clot before they return to full size. The crew manage to escape the body in the nick of time via a teardrop.

But reality has a way of catching up with our fantasies, and nanotechnology is yet another field of science that bears that promise.

At the Swiss Federal Institute of Technology in Zurich, mechanical engineer Brad Nelson and his team have worked on nanobots for a decade, and are now ready to think big: “We’re making microscopic robots that are guided by externally generated magnetic fields for use in the human body,” he told CNN.

A little knife

The first to suggest that you could one day “swallow the surgeon” was the beloved physicist and Nobel laureate Richard Feynman. He introduced the idea in his provocative 1959 talk “There’s Plenty of Room at the Bottom”, which is widely considered the first conceptual argument for nanotechnology.

“You put the mechanical surgeon inside the blood vessel and it goes into the heart and ‘looks’ around,” Feynman said. “It finds out which valve is the faulty one and takes a little knife and slices it out.”

Nelson’s microrobots might not yet have a little knife, but they sure have something special: their shape is inspired by the common E. coli bacterium, which is propelled by a rotating “tail” called a flagellum.

“Bacteria have a rotary motor,” he explains, “Now, we can’t make that motor, we don’t have the technology for that, but we can use magnetism to move these things, so we actually take these flagella and we magnetize them, which allows them to swim.”

The nanobots have already been tested “in vivo” in an extremely delicate environment, the eye. They can swim through the vitreous humor — the clear gel that fills the eyeball — and deliver drugs in the retinal area to treat age-related diseases such as macular degeneration, which can cause blindness.

At the heart of the matter

The robots are made in a “clean room” environment to keep them sterile, much in the same way as computer chips.

Nelson says that the tests done with eyes have inspired other potential applications, such as the treatment of heart conditions. In this case the nanobots would be guided through a catheter, 2 to 3 millimeters in diameter, to reach the specific part of the tissue that needs to be treated.

The catheter technique could also be used to reach the brain, and other target areas include the small intestine and the urinary tract: all difficult-to-reach areas where precision is a must. For that very reason, nanotechnology has long been touted as our best future weapon against cancer.

But how would surgeons operate with nanobots?

“They would need training to learn how to use them,” says Nelson, “but it’s kind of an intuitive interface, and the nanobots would be guided with a joystick.”

The technology is ready for the first clinical tests on human patients, which will begin to take place this year, according to Nelson.

Beyond medicine

“More recently people in the field have been looking at other applications like water treatment or environmental cleanup, where you might be able to operate hundreds, thousands, millions of these devices and have them swim through polluted water, catalyze pollutants, and then collect them back,” he says.

This could be applied for example to oil spills: “There have been some recent publications that have shown how they can actually attach to oil droplets and move them to other locations.”

But the most outlandish prediction on the use of nanotechnology comes from MIT’s digital guru Nicholas Negroponte, who believes that in the future we will receive information and knowledge directly from nanobots that will swim up to our brain from within our bloodstream.

We’d love to hear what Richard Feynman would have had to say about “swallowing the teacher.”


Sorry, your broadband Internet technically isn’t broadband anymore

The FCC has raised the benchmark for broadband speed to 25 megabits per second, above the speed that many Americans receive with their home connection.

The Federal Communications Commission on Thursday rewrote the definition of high-speed Internet, and chances are, your connection isn’t up to snuff.

The FCC commissioners voted 3 to 2 in support of the change. They are (left to right): Ajit Pai, Mignon Clyburn, Tom Wheeler (chairman), Jessica Rosenworcel, and Michael O’Rielly. (Photo: FCC)

The FCC, tasked with overseeing the rules that govern the Internet, raised the download standard for broadband to 25 megabits per second from 4 Mbps, while raising the upload standard to 3 Mbps from 1 Mbps. The commissioners voted 3 to 2 in support of the change, though the dissenting Republican commissioners blasted the move as “overreaching.” The move comes as the FCC published its 2015 Broadband Progress Report, which is what Congress uses to assess the US broadband market.

The new definition effectively means that millions of Americans subscribing to Internet service that clocks in at less than 25 Mbps are no longer considered “broadband” subscribers. The average speed of service delivered in the US is 10 Mbps. Using this new threshold, the agency determined in its report that true broadband speeds are not being delivered in a timely fashion.
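To put the thresholds in perspective, idealized download time scales inversely with line rate. A quick comparison of the old 4 Mbps floor, the 10 Mbps US average, and the new 25 Mbps benchmark (the 5 GB file size is an illustrative assumption, and protocol overhead is ignored):

```python
def download_seconds(size_gigabytes, rate_mbps):
    """Idealized download time: file size in bytes converted to bits,
    divided by the line rate. Ignores TCP/protocol overhead and congestion."""
    bits = size_gigabytes * 1e9 * 8
    return bits / (rate_mbps * 1e6)

# An assumed 5 GB HD movie at the three speeds discussed in the report:
for rate_mbps in (4, 10, 25):
    minutes = download_seconds(5, rate_mbps) / 60
    print(f"{rate_mbps:>2} Mbps: {minutes:.0f} min")
```

Under these assumptions, moving from the old 4 Mbps definition to the new 25 Mbps one cuts the idealized download time by a factor of 6.25.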

The agency’s report found that 55 million Americans, or 17 percent of the population, lack access to advanced broadband services. The bulk of Americans who do not have access to such speeds are in rural areas. The report indicates that 53 percent of rural Americans lack broadband with download speeds of 25 Mbps. This is compared to 8 percent of Americans living in urban areas. The report also indicates that 20 percent of rural Americans don’t even have access to the previous standard of 4 Mbps downloads.

A contentious decision

The move has once again pitted Chairman Tom Wheeler, a Democrat appointed by President Barack Obama, against the two Republicans on the commission, Ajit Pai and Michael O’Rielly, selected by the Republican-controlled Congress. Wheeler says the change in definition is an aspirational target that makes sense given broadband providers’ own marketing claims that higher speeds are necessary for the ever-increasing demands of consumers.

“Our challenge is not to hide behind self-serving lobbying statements, but to recognize reality,” Wheeler said at the meeting. “And our challenge is to help make that reality available to all.”

But the two dissenters on the FCC called the move an overreach of FCC authority and argued the new standard was arbitrary. Pai said that a better judge of what is considered broadband is to look at the services that consumers actually purchase.

“Seventy-one percent of consumers who can purchase fixed 25 Mbps service — over 70 million households — choose not to,” he said.

The FCC noted in the Broadband Progress Report that the previous definition for broadband adopted in 2010 was “inadequate.” And it redefined broadband services as 25 Mbps for downloads and 3 Mbps for uploads. The previous standard had been 4 Mbps for downloads and 1 Mbps for uploads.

Strong opinions on both sides

Consumer groups hailed the report’s findings.

“The FCC’s reevaluation of the broadband marketplace is long overdue,” Edyael Casaperalta, Internet Rights Fellow at Public Knowledge, said in a statement. “For too long, the FCC has gotten by with an outdated standard for broadband, and as a result its analysis of the marketplace grew increasingly antiquated.”

The Communications Workers of America union also applauded the effort, arguing the updated standard will create more jobs.

“Our nation’s economic strength and social welfare — as well as the future of good jobs in the telecom sector — requires world leadership in the quality and capacity of our communications networks, and today’s action by the FCC will move us forward toward regaining that leadership,” the labor union said in a statement.

But conservative groups derided the move.

“The FCC has been playing political games with the ‘706 report’ since 2010, when it suddenly declared deployment inadequate in order to justify its Net neutrality regulations,” said Berin Szoka, president of TechFreedom, a Washington, DC-based think tank that generally backs the efforts of broadband providers.

Foundation for Net neutrality battle?

The Republican commissioners believe the broadband vote is just a setup for the FCC’s intent to settle the Net neutrality fight with new regulations and to push local municipalities to go around state laws and build their own Internet networks. (Net neutrality is the principle of treating all Internet traffic the same.) They believe the stricter speed guidelines paint the broadband industry as less competitive, justifying the FCC moves.

“The ultimate goal is to seize new, virtually limitless authority to regulate the broadband marketplace,” Pai said during the agency’s meeting Thursday. “A thriving marketplace must be found to have failed so that the agency can regulate it back to health.”

At the heart of the Net neutrality debate is a provision in the new rules that will reclassify broadband as a Title II service under the Telecommunications Act. This reclassification will essentially allow the FCC to apply regulation originally established for the traditional telephone network to broadband infrastructure. While Net neutrality supporters hail this move for putting the new rules on firmer legal ground, opponents, such as large Internet service providers and conservative Republicans, say it will stifle investment in networks.

The FCC is also set to rule on two petitions that will override state laws in North Carolina and Tennessee that prohibit municipalities from building or expanding broadband networks.

The agency will vote on both the Net neutrality proposal and the municipal broadband initiative at its February 26 meeting.
