SpaceX and NASA announced that the propulsion system designed to safely abort the upcoming crewed Dragon capsule — dubbed SuperDraco — has been successfully fired 27 times and completed development testing. The SuperDraco thrusters are scaled-up versions of the small Draco thrusters used for maneuvering and attitude control on the upper stage of the Falcon 9 rocket, the upcoming Falcon Heavy, and the Dragon spacecraft. SuperDraco provides roughly 200x more thrust than its little brother and is designed for a variety of use cases. Each spacecraft will be fitted with eight SuperDraco thrusters, and each thruster provides roughly 1/9 the performance of a single Merlin 1D. For comparison, the Falcon 9 launches with nine Merlin 1D engines on its first stage.
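To get a feel for those ratios, here is a quick back-of-the-envelope sketch in Python. The Merlin 1D thrust value is an assumption (around 650 kN is a commonly cited figure for the early Merlin 1D); the SuperDraco and Draco numbers simply follow from the ratios quoted above.

```python
# Rough thrust bookkeeping from the ratios quoted in the article.
MERLIN_1D_KN = 650.0                  # assumed Merlin 1D thrust, kN
superdraco_kn = MERLIN_1D_KN / 9      # "roughly 1/9 the performance of a Merlin 1D"
draco_n = superdraco_kn * 1000 / 200  # SuperDraco is ~200x the Draco

capsule_total_kn = 8 * superdraco_kn      # eight SuperDracos per capsule
first_stage_total_kn = 9 * MERLIN_1D_KN   # nine Merlin 1Ds on a Falcon 9

print(f"SuperDraco ~{superdraco_kn:.0f} kN, Draco ~{draco_n:.0f} N")
print(f"Capsule ~{capsule_total_kn:.0f} kN vs first stage ~{first_stage_total_kn:.0f} kN")
```

Under these assumptions, the eight abort engines together deliver roughly a tenth of a Falcon 9 first stage's thrust, more than enough to pull a lightweight capsule clear of a failing booster.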
SuperDraco is a 3D-printed engine designed to throttle from 20% to 100% of thrust and restart multiple times. The engines will be used to ensure that a crew capsule can abort a mission safely and either land or splash down. Spacecraft carrying the SuperDraco system will also have redundant parachutes, so the crew's survival doesn't depend on a single mechanism, and the SuperDraco engines have enough thrust to abort a mission safely even with one engine failure.
One of the unique capabilities of the SuperDraco is its ability to perform what's known as "propulsive landing." When Curiosity landed on Mars in 2012, it was far too heavy to land on parachutes alone in the thin Martian atmosphere, so NASA designed a rocket-powered "sky crane" to lower it to the surface. SuperDraco could perform a similar maneuver with a much heavier payload. The engine is designed to use a storable liquid propellant, meaning it doesn't need to be kept cryogenically cold.
The extensive abort capabilities of the Dragon V2 passenger-rated capsule are a departure from NASA's traditional philosophy. The Space Shuttle may have been an icon of human exploration for thirty years, but it had limited abort capabilities, no crew ejection mechanism, and no way to safely return a crew to Earth if a problem developed in orbit. The investigation into Challenger's destruction indicated that the crew survived the initial breakup and may have been alive and conscious until the crew cabin slammed into the ocean at more than 200 miles per hour. There was no way to save Columbia's crew short of an emergency attempt to bring Atlantis to flight readiness, a plan that was never attempted.
NASA’s goal with the new Dragon V2 capsule and the crew module being worked on at Boeing is to avoid ever having to face such scenarios again and to safeguard against multiple failure modes that could lead to the death of a crew. The flexibility and capability of the new SuperDraco should help achieve that goal.
NASA is run by a lot of smart people, but sometimes the smartest thing you can do is ask for some help. That’s what NASA is doing in its quest to design the next generation of space suit technology. NASA is asking the public to come up with ideas on how to test prototype Mars space suit materials for durability without actually going all the way to Mars. The agency plans to give away $15,000 in prizes for the best ideas.
We're all familiar with the current space suit design, which astronauts have used for decades. The problem with these suits is that they're optimized for low-Earth orbit. They have some damage resistance, but they aren't built to be worn while walking around on the surface of a planet — there's no walking at all in low-Earth orbit. Mars astronauts will need new suits.
Any future Mars mission would likely include many extravehicular activities (EVAs). If you manage to safely land a crew on Mars after months in space, they won't just take one stroll on the surface and go home. There's serious science to be done out there, and that means a real risk of damage to suits during EVAs. NASA has reason to be worried, too. Some astronauts who walked on the Moon reported damage to the outer layers of their suits that impaired the insulation. They were only a few days from home, but Mars is far more remote.
Analysis of the Apollo suits by NASA has revealed that the abrasiveness of lunar dust likely caused the damage. Mars could be even more perilous in this respect. It’s quite dusty, and it has an atmosphere that can blow that dust around at upward of 60 miles per hour. The terrain of Mars is also much more complex with plenty of rocky outcroppings to run into.
NASA doesn’t have a standard way to test for this sort of damage, thus the crowdsourcing approach. The agency suggests interested parties consider innovative ways to subject materials to a simulated Martian environment. The procedures will be judged based on how well they can be matched to fiber damage of samples exposed to lunar dust, which is the closest analog we currently have. The proposed processes should also be able to quantify the number and size of particles that migrate through the fabrics and analyze physical damage in the form of cuts, tears, and so on.
NASA expects to make up to three awards of $5,000 each for a total of $15,000. The challenge is being run by NineSigma Inc. as part of the NASA Tournament Lab. This program has previously solicited suggestions for improving email in space and the design of reusable seals for use in EVAs. Submissions have to be in by December 3rd and winners will be announced in late January 2016.
Dark, narrow streaks going downhill at four locations on Mars are evidence of water flowing on the planet, NASA confirmed Monday.
Called “recurring slope lineae,” the streaks are approximately the length of a football field, according to NASA’s Jet Propulsion Laboratory. They are believed to have been formed by the seasonal flow of water.
Lujendra Ojha of the Georgia Institute of Technology noticed these lineae as an undergraduate student at the University of Arizona in 2010. He and seven coauthors wrote a report on the research, which was published Monday in the journal Nature Geoscience.
All four locations, including the walls of the Garni and Hale craters, show evidence of hydrated salts, most likely magnesium perchlorate, magnesium chlorate and sodium perchlorate, Ojha wrote.
The findings strongly support the team’s hypothesis that recurring slope lineae form as a result of current water activity on Mars, he said.
“They’re likely 95 percent correct,” said William Newman, a professor of earth and space sciences at UCLA.
“We know there’s water in [Mars’] polar caps — that’s irrefutable — so the picture they paint is plausible,” he told TechNewsWorld.
Figuring Out the Proof
Ojha noticed back in 2010 that the lineae appeared during Mars’ warm seasons, when temperatures were above -23 degrees Celsius, and seemed to indicate the downhill flow of some liquid. They would fade in cooler seasons.
NASA’s Mars Reconnaissance Orbiter, which is equipped with the High Resolution Imaging Science Experiment camera, made the initial observations. HiRISE observations have documented recurring slope lineae at dozens of sites on Mars, NASA said.
Ojha’s study pairs HiRISE observations with mineral mapping by the Compact Reconnaissance Imaging Spectrometer for Mars on the Mars Reconnaissance Orbiter.
The researchers found the hydrated salts only when the seasonal features were widest, Ojha said, suggesting that either the dark streaks themselves, or a process that formed them, were the source of the hydration.
Where’s the Water Coming From?
Mars’ atmosphere consists of about 95.3 percent carbon dioxide and 2.7 percent nitrogen. Water is a combination of hydrogen and oxygen, which raises the question of where the water could come from.
Perchlorate salts are powerful oxidizing agents, which would explain how the oxygen in the carbon dioxide might be freed to combine with hydrogen — but where’s the hydrogen?
Hydrogen “exists as a trace element, but it’s highly reactive and would likely bond to any free oxygen to form water,” said Mike Jude, a research manager at Frost & Sullivan.
“Even Earth has a hard time retaining free hydrogen,” he told TechNewsWorld.
The major source of water in the inner solar system is comets, and Mars could have retained “a significant fraction” of that water, UCLA’s Newman speculated.
Daily temperature changes are severe on Mars and could cause liquids to freeze and crack the surface, creating more places for liquid to collect, as happens in alpine terrains on Earth, he said.
The Meaning of Water
The existence of water raises the possibility of life on Mars, although other requirements would have to be fulfilled.
Still, a non-oxygen-breathing life form could well exist on Mars.
Here on Earth, scientists in 2010 discovered three animal species that live their entire lives without oxygen. They belong to the phylum Loricifera.
Further, just what passes for water on Mars has yet to be determined.
While explanations for the lineae revolve around water, “the nature of the water involved is subject to some debate,” Frost’s Jude said.
Whether that will impact any native life forms on Mars, and how, remains to be seen.
As scientists continue to wrestle with the vexing problem of how to get humans to Mars and bring them safely home, robotic exploration of the Red Planet has already yielded many amazing discoveries. However, our missions to the planet's surface have studied only a tiny fraction of the land area, and rovers aren't likely to get much faster in the future. NASA has just completed preliminary testing on a novel wing design that could one day allow a Mars probe to soar through the planet's thin atmosphere and cover great distances.
There are two projects operating in tandem, both based on the same high-lift boomerang-shaped wing design. There’s the Preliminary Research Aerodynamic Design to Lower Drag (Prandtl-d) and the forward-looking Preliminary Research Aerodynamic Design to Land on Mars (Prandtl-m). NASA scientists have been testing the Prandtl-d design for some time now, but it has only recently been subjected to a full battery of wind tunnel tests. This is essential to understand how the wing will perform in a variety of conditions, including those on Mars if the design is carried over to a space mission.
The wind tunnel scale model testing of Prandtl-d was carried out jointly by NASA's Armstrong Flight Research Center and Langley Research Center. According to the data, the boomerang wing is remarkably stable, even when it's completely stalled. That could save a Mars exploration plane from a catastrophic failure when it's a few million miles away from the nearest repair crew. The airflow patterns over the boomerang wing proved to be entirely new to the team, which could account for its ability to generate high lift and remain stable.
The next step for Prandtl-m is a high-altitude test of the wing design that will take place later this year. A small prototype of the plane will be released at an altitude of 100,000 feet. The atmosphere at that altitude is a close approximation of the Martian atmosphere, so it's important to know whether Prandtl-m can generate enough lift to stay aloft in such conditions. If the test goes well, that could be huge for future Mars missions.
NASA doesn't expect it will have to design an entire mission around Prandtl-m. The beauty of this design is that it could ride to Mars in a 3U CubeSat (about 10 by 10 by 34 centimeters, roughly a foot long) attached to the aeroshell of a Mars rover. This module could be ejected when the rover begins its descent, allowing the plane to deploy and fly a tremendous distance before gliding to the surface. It could be used for geological surveys, imaging, and scouting future landing sites up close. The additional weight of the Prandtl-m craft wouldn't add much of anything to the launch cost either.
NASA believes Prandtl-d could morph into Prandtl-m in time for inclusion on the 2020-era Mars rover. That mission could reach the Red Planet as soon as 2022-2024.
NASA’s Ames Research Center has developed a draft proposal for a mission that would retrieve soil samples from Mars and deliver them back to Earth. It’s ambitious to be sure, but NASA scientists are optimistic about the so-called “Red Dragon” proposal, so named because it would rely on a modified version of SpaceX’s Dragon capsule. According to the team, this mission could be feasible in the early 2020s, just in time for NASA’s next Mars rover mission.
Repurposing near-Earth spacecraft for longer voyages is usually a bad idea that never gets past the initial design stages. However, SpaceX designed the Dragon capsule to be highly adaptable. After all, the manned Dragon is essentially the same vessel that’s already in operation as an automated cargo transport.
CEO Elon Musk says the Falcon Heavy is powerful enough to take the fully loaded Dragon to Mars, provided it is not needed for the return trip. A lighter payload could make it all the way to Jupiter. He and SpaceX were not involved in the design of the Red Dragon mission, but Musk has since come out in favor of the basic idea, noting that the Dragon vehicle is designed to land on any surface in the solar system.
Landing on the surface of Mars becomes an increasingly tricky problem as mass increases. The atmosphere is too thin for parachutes to do all the work, and delicate components don't take kindly to hard impacts. The 1-ton Curiosity rover was landed with the aid of a rocket-powered sky crane, but the Red Dragon would have a payload of at least 2 tons. Ames scientists think the Red Dragon can set down without any parachutes, using only the SuperDraco engines being developed for the emergency abort system on crewed Dragon capsules. This would allow Red Dragon to rendezvous with the planned 2020 NASA Mars rover, which will have already collected soil samples for the return mission.
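A rough sense of why parachutes struggle on Mars: terminal descent speed under a canopy scales as the square root of weight over air density. The constants below are textbook approximations (Mars gravity ~0.38 g, surface air density ~0.020 kg/m³ versus Earth's ~1.225 kg/m³), not figures from the Ames proposal.

```python
import math

# Terminal speed under a parachute: v ~ sqrt(2*m*g / (rho * Cd * A)).
# For the same craft and chute, the Mars/Earth speed ratio reduces to
# sqrt((g_mars / g_earth) * (rho_earth / rho_mars)).
g_ratio = 0.38        # Mars surface gravity as a fraction of Earth's
rho_earth = 1.225     # kg/m^3, sea level
rho_mars = 0.020      # kg/m^3, approximate Mars surface value
speed_ratio = math.sqrt(g_ratio * rho_earth / rho_mars)
print(f"The same chute descends roughly {speed_ratio:.1f}x faster on Mars")
```

Even with Mars's weaker gravity helping, the same parachute lets a lander fall several times faster than it would on Earth, which is why heavy payloads need retropropulsion.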
It would be inefficient to try to lift the whole Dragon capsule back off the Martian surface, so instead it would carry a small Mars ascent vehicle that would launch into orbit. The lower gravity and thinner atmosphere of Mars make it easier to reach orbit. This craft would line up for an Earth encounter, then release a smaller Earth return vehicle with the samples on board. Once that vehicle is in low-Earth orbit, a second Dragon capsule would be sent up to retrieve it.
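"Easier to reach orbit" can be put in numbers with the circular-orbit speed v = sqrt(GM/r). A minimal sketch using standard planetary constants (the 200 km orbit altitude is an illustrative choice):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
# (mass in kg, mean radius in m) — standard reference values
bodies = {"Earth": (5.972e24, 6.371e6), "Mars": (6.417e23, 3.390e6)}

speeds = {}
for name, (M, R) in bodies.items():
    v = math.sqrt(G * M / (R + 200e3))  # circular orbit ~200 km up
    speeds[name] = v
    print(f"{name}: ~{v / 1000:.1f} km/s to low orbit")
```

Low Mars orbit needs less than half the speed of low Earth orbit, before even counting the thinner atmosphere's lower drag losses.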
Getting samples of Martian soil back to Earth would be the best way to learn about the history and composition of Mars. There’s only so much a rover can do from millions of miles away, and if scientists come up with a new idea for a test, they have to wait for the next mission. Having fresh samples would accelerate things greatly. Maybe we’d finally be able to figure out if Mars has ever supported life.
Some experts believe that by 2050 machines will have reached human-level intelligence.
Thanks, in part, to a new era of machine learning, computers are already learning from raw data in much the same way a human infant learns from the world around her.
It means we are getting machines that can, for example, teach themselves how to play computer games and get incredibly good at them (work ongoing at Google’s DeepMind) and devices that can start to communicate in human-like speech, such as voice assistants on smartphones.
Computers are beginning to understand the world outside of bits and bytes.
Fei-Fei Li has spent the last 15 years teaching computers how to see.
First as a PhD student and later as director of the computer vision lab at Stanford University, she has pursued this painstakingly difficult goal, aiming ultimately to create electronic eyes that let robots and machines see and, more importantly, understand their environment.
Half of all human brainpower goes into visual processing even though it is something we all do without apparent effort.
“No one tells a child how to see, especially in the early years. They learn this through real-world experiences and examples,” said Ms Li in a talk at the 2015 Technology, Entertainment and Design (Ted) conference.
“If you consider a child’s eyes as a pair of biological cameras, they take one picture about every 200 milliseconds, the average time an eye movement is made. So by age three, a child would have seen hundreds of millions of pictures of the real world. That’s a lot of training examples,” she added.
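Li's arithmetic checks out. One frame every 200 ms, kept up for three years (ignoring sleep, so this is an upper bound), lands squarely in the hundreds of millions:

```python
# One "picture" every 200 ms over the first three years of life.
seconds_per_year = 365.25 * 24 * 3600
frames = (3 * seconds_per_year) / 0.2
print(f"~{frames:,.0f} training examples")  # roughly half a billion
```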
She decided to teach computers in a similar way.
“Instead of focusing solely on better and better algorithms, my insight was to give the algorithms the kind of training data that a child is given through experiences in both quantity and quality.”
Back in 2007, Ms Li and a colleague set about the mammoth task of sorting and labelling a billion diverse and random images from the internet to offer examples of the real world for the computer – the theory being that if the machine saw enough pictures of something, a cat for example, it would be able to recognise it in real life.
They used crowdsourcing platforms such as Amazon’s Mechanical Turk, calling on 50,000 workers from 167 countries to help label millions of random images of cats, planes and people.
Eventually they built ImageNet – a database of 15 million images across 22,000 classes of objects organised by everyday English words.
It has become an invaluable resource used across the world by research scientists attempting to give computers vision.
Each year Stanford runs a competition, inviting the likes of Google, Microsoft and Chinese tech giant Baidu to test how well their systems can perform using ImageNet. In the last few years they have got remarkably good at recognising images – with around a 5% error rate.
To teach the computer to recognise images, Ms Li and her team used neural networks, computer programs assembled from artificial brain cells that learn and behave in a remarkably similar way to human brains.
A neural network dedicated to interpreting pictures has anything from a few dozen to hundreds, thousands, or even millions of artificial neurons arranged in a series of layers.
Each layer recognises different elements of the picture – one layer might learn to pick out edges, another differences in the colours, a third shapes, and so on.
By the time it gets to the top layer – and today’s neural networks can contain up to 30 layers – it can make a pretty good guess at identifying the image.
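A toy forward pass makes the layering concrete. This is a minimal illustrative sketch with random (untrained) weights, not the Stanford system; in a real network the weights are learned from labelled examples like ImageNet.

```python
import math
import random

random.seed(0)

def layer(inputs, n_out):
    """One fully connected layer with a sigmoid activation.
    Weights are random here; a real network learns them from data."""
    outputs = []
    for _ in range(n_out):
        weights = [random.uniform(-1, 1) for _ in inputs]
        bias = random.uniform(-1, 1)
        z = sum(w * x for w, x in zip(inputs, weights)) + bias
        outputs.append(1 / (1 + math.exp(-z)))  # squash to (0, 1)
    return outputs

# A toy 4-pixel "image" passed through three stacked layers: each layer
# re-represents the previous one's output, loosely analogous to the
# edges -> colours -> shapes progression described above.
pixels = [0.1, 0.9, 0.8, 0.2]
h1 = layer(pixels, 6)   # low-level features
h2 = layer(h1, 4)       # mid-level features
out = layer(h2, 2)      # final guess, e.g. "cat" vs "not cat" scores
print(out)
```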
At Stanford, the image-reading machine now writes pretty accurate captions for a whole range of images, although it does still get things wrong – for instance, a picture of a baby holding a toothbrush was wrongly labelled "a young boy is holding a baseball bat".
Despite a decade of hard work, it still only has the visual intelligence level of a three-year-old, said Prof Li.
And, unlike a toddler, it doesn’t yet understand context.
“So far, we have taught the computer to see objects or even tell us a simple story when seeing a picture,” Prof Li said.
But when she asks it to assess a picture of her own son at a family celebration the machine labels it simply: “Boy standing next to a cake”.
She added: “What the computer doesn’t see is that this is a special Italian cake that’s only served during Easter time.”
That is the next step for the laboratory – to get machines to understand whole scenes, human behaviours and the relationships between objects.
The ultimate aim is to create "seeing" robots that can assist in surgical operations, search for and rescue people in disaster zones, and generally improve our lives, said Ms Li.
The work on visual learning at Stanford illustrates how complex just one aspect of creating a thinking machine can be, and it comes on the back of 60 years of fitful progress in the field.
Back in 1950, pioneering computer scientist Alan Turing wrote a paper speculating about a thinking machine and the term “artificial intelligence” was coined in 1956 by Prof John McCarthy at a gathering of scientists in New Hampshire known as the Dartmouth Conference.
After some heady days and big developments in the 1950s and 60s, during which both the Stanford lab and one at the Massachusetts Institute of Technology were set up, it became clear that the task of creating a thinking machine was going to be a lot harder than originally thought.
There followed what was dubbed the AI winter – a period of academic dead-ends when funding for AI research dried up.
But, by the 1990s, the focus in the AI community shifted from a logic-based approach – which basically involved writing a whole lot of rules for computers to follow – to a statistical one, using huge datasets and asking computers to mine them to solve problems for themselves.
In the 2000s, faster processing power and the ready availability of vast amounts of data created a turning point for AI and the technology underpins many of the services we use today.
It allows Amazon to recommend books, Netflix to suggest movies and Google to offer up relevant search results. Smart little algorithms began trading on Wall Street – sometimes going further than they should, as in the 2010 Flash Crash when a rogue algorithm was blamed for wiping billions off the New York stock exchange.
It also provided the foundations for the voice assistants, such as Apple’s Siri and Microsoft’s Cortana, on smartphones.
At the moment, such machines are learning rather than thinking. Whether a machine can ever be programmed to think is debatable, given that the nature of human thought has eluded philosophers and scientists for centuries.
And there will remain elements to the human mind – daydreaming for example – that machines will never replicate.
But increasingly they are evaluating their knowledge and improving it and most people would agree that AI is entering a new golden age where the machine brain is only going to get smarter.
1951 – The first neural net machine, SNARC, was built. In the same year, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.
1957 – The General Problem Solver was invented by Allen Newell and Herbert Simon.
1958 – AI pioneer John McCarthy came up with LISP, a programming language that allowed computers to operate on themselves.
1960 – Research labs built at MIT with a $2.2m grant from the Advanced Research Projects Agency – later known as Darpa.
1960 – Stanford AI project founded by John McCarthy.
1964 – Joseph Weizenbaum created the first chatbot, Eliza, which could briefly fool humans even though it mostly repeated back what was said to it.
1968 – Arthur C. Clarke and Stanley Kubrick immortalised Hal, that classic vision of a machine that would match or exceed human intelligence by 2001.
1973 – A report on AI research in the UK formed the basis for the British government to discontinue support for AI in all but two universities.
1979 – The Stanford Cart became the first computer-controlled autonomous vehicle when it circumnavigated the Stanford AI lab.
1981 – Danny Hillis designed a machine that utilised parallel computing to bring new power to AI.
1980s – The backpropagation algorithm allowed neural networks to start learning from their mistakes.
1985 – Aaron, an autonomous painting robot, was shown off.
1997 – Deep Blue, IBM's chess machine, beat then world champion Garry Kasparov.
1999 – Sony launched the AIBO, one of the first artificially intelligent pet robots.
2002 – The Roomba, an autonomous vacuum cleaner, was introduced.
2011 – IBM's Watson defeated champions from the TV game show Jeopardy!
2011 – Smartphones introduced natural language voice assistants – Siri, Google Now and Cortana.
2014 – Stanford and Google revealed computers that could interpret images.
Getting a satellite into orbit is only the first step in making it a useful piece of equipment. It also needs to arrive in the correct orbit and stay there — a task known as station-keeping. In the past this was accomplished with chemical propulsion, but more modern satellites have relied on a mix of chemical and electric propulsion. Now Boeing has announced that the first all-electric ion propulsion satellite is fully operational.
The satellite in question doesn’t have a snappy name — it’s a communications satellite called ABS-3A 702SP. It was launched last March aboard a SpaceX Falcon 9 rocket. It has just recently been handed over to its owner, Bermuda-based telecommunications company ABS. Because ABS-3A is a communications satellite, it needs to remain in a geosynchronous orbit. Thus, station-keeping is essential. When it can no longer maintain its orbit, it will cease being useful. Ion thrusters make a lot of sense in this scenario.
Ion engines operate on the same basic principle of physics as chemical thrusters: expel mass from a nozzle to push the craft in the opposite direction. Instead of combusting volatile chemicals, though, ion engines ionize chemically inert xenon gas. An electrostatic field accelerates the ions out of the nozzle, propelling the craft forward. This is the same type of thruster technology used on NASA's Dawn spacecraft, which is currently studying the dwarf planet Ceres.
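The physics is simple enough to sketch. A singly charged xenon ion falling through an electrostatic potential V leaves the thruster at v = sqrt(2qV/m). The grid voltage below is an assumed, typical value for gridded ion thrusters, not a published XIPS figure:

```python
import math

# Exhaust speed of a singly charged xenon ion accelerated through an
# electrostatic grid: v = sqrt(2 * q * V / m).
q = 1.602e-19               # elementary charge, C
m_xe = 131.29 * 1.661e-27   # xenon atomic mass, kg
V = 1200.0                  # assumed grid voltage, volts
v = math.sqrt(2 * q * V / m_xe)
print(f"~{v / 1000:.0f} km/s exhaust, vs ~3-4.5 km/s for chemical rockets")
```

An exhaust stream roughly ten times faster than a chemical rocket's is exactly why the propellant budget is so small, even though the thrust itself is tiny.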
Ion thrusters are considerably more efficient than conventional rocket motors. In this case, Boeing claims the Xenon Ion Propulsion System (XIPS) design used for ABS-3A is ten times more efficient than liquid-fueled rockets. ABS-3A needs only 11 pounds (5 kg) of xenon gas per year for station-keeping, meaning it can remain operational much longer than a similar satellite with conventional thrusters. ABS expects the satellite to remain active for about 15 years. Ion thrusters are also considerably lighter than chemical engines, making launches cheaper. The drawback is an ion engine's very low thrust, which is why past satellites have carried conventional thrusters as well.
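The article's own numbers sketch the propellant budget. Taking Boeing's ten-times-efficiency claim at face value, a chemically propelled equivalent would need on the order of ten times the propellant mass for the same station-keeping impulse:

```python
xenon_per_year_kg = 5                     # ~11 lb/yr, from the article
lifetime_years = 15
xenon_total_kg = xenon_per_year_kg * lifetime_years
chemical_equiv_kg = xenon_total_kg * 10   # rough, per Boeing's 10x claim
print(xenon_total_kg, chemical_equiv_kg)  # 75 750
```

Carrying 75 kg of xenon instead of roughly 750 kg of chemical propellant is mass that can go to payload, or simply come off the launch bill.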
Upon delivery to orbit, ABS-3A used its ion thrusters to reach a geosynchronous orbit at 3 degrees west longitude. After being tested by Boeing, the satellite was turned over to ABS on August 31st. Now that the design has proven itself viable, Boeing is forging ahead with a second satellite for ABS using the same XIPS engines. This one will be blasted into space sometime next year.
NASA earlier this month entered an agreement with Arx Pax to use its Magnetic Field Architecture technology in hardware that will let astronauts move tiny satellites without touching them.
The Space Act Agreement marks a major milestone for Arx Pax, CEO Greg Henderson said. “It’s exciting to work hand in hand with NASA’s brilliant team of scientists and engineers. We’re thrilled about the potential impact we can make together.”
Henderson and his wife, Jill Henderson, last year launched a successful Kickstarter campaign to fund development of a functional hoverboard based on the technology.
Magnets in Space?
NASA has been seeking to create a magnetic tether that can be used to couple and uncouple microsatellites called “CubeSats.”
It’s interested in exploring whether this tech can be used in a space environment, said Luke Murchison, a project manager at NASA’s Langley Research Center.
NASA in the near term will work to identify the constraints of the magnetic tether technology with regard to its applications in low-Earth orbit, he said.
“In the long term, we are interested in developing technology to allow the autonomous assembly of small modular satellites,” Murchison told TechNewsWorld. “That would let us create entirely new satellite architectures.”
“They come in small 10-by-10-centimeter cubes, and they can be used for a variety of things,” he told TechNewsWorld. “A lot of them are used for tech demos in space.”
A tech firm will reach out to CubeSat researchers to test prototypes of their products in space, said Saunders. They’ll put them on a CubeSat, “and we’ll get data back to them so that they can say they tested it in space.”
Along with launching tech demos into space, CubeSats also are used to conduct scientific experiments in space.
“One of the projects we’ve worked on here at Cal Poly is ExoCubes, where we put a small mass spectrometer in space and are reading ions and neutral data for certain particles at a certain level in our atmosphere,” noted Saunders.
Coupling and uncoupling CubeSats using Magnetic Field Architecture would make a good tech demo and could serve as an important step to scaling up, he said.
Building It Out
Someday, even larger satellites could use magnetic tethering to dock or undock with space stations, Langley's Murchison suggested. This is just the start of MFA's use in space; NASA plans to iterate on the tech through its alliance with Arx Pax.
“We are currently developing a number of prototypes over the next one to two years,” he added, “and will be exploring alternative designs with this technology.”
A team of researchers from NASA's Jet Propulsion Laboratory and Caltech has developed a range of ultra-thin optical components capable of arbitrary manipulation of light. The devices, dubbed metasurfaces, are able to locally modify the properties of a light field in ways difficult to achieve with standard optics.
So what does it mean to manipulate light? Light, an electromagnetic field that propagates through space, can be completely described at one wavelength by its polarization, phase, and amplitude. If we know what the light field looks like now, we can accurately predict what it will look like in the future by knowing only these properties.
Any optical component that exists can be thought of as essentially modifying one or more of the above properties. For example, in free-space optical systems, polarization is modified using wave retarders, polarizers, and polarization beam-splitters; phase is shaped using lenses, curved mirrors, or spatial phase modulators; and amplitude is controlled via neutral-density absorptive or reflective filters. Therefore, by combining many components, we’re able to build systems that can manipulate the light-field to varying degrees.
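In free-space optics these manipulations are often tracked with Jones calculus: the field at one wavelength is a complex 2-vector (encoding polarization, phase, and amplitude), and each optical element is a 2x2 matrix acting on it. A generic textbook sketch, not code from the JPL/Caltech work:

```python
# Jones-calculus sketch: light as a complex 2-vector, elements as 2x2 matrices.

def apply(M, v):
    """Apply a 2x2 Jones matrix M to a Jones vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

horizontal = [1, 0]                      # linearly polarized along x
polarizer_45 = [[0.5, 0.5], [0.5, 0.5]]  # linear polarizer at 45 degrees
quarter_wave = [[1, 0], [0, 1j]]         # quarter-wave retarder, fast axis x

out = apply(quarter_wave, apply(polarizer_45, horizontal))
intensity = abs(out[0])**2 + abs(out[1])**2
print(out, intensity)  # half the power survives the 45-degree polarizer
```

Chaining matrices like this is the mathematical version of stacking bulky components on an optical bench; a metasurface aims to do the equivalent job within a single, nearly flat layer.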
However, as you may have guessed, in order for us to have full control, we normally need many components, each of which is usually bulky and expensive. Think of the optics in a telescope or DSLR for example.
What is a metasurface?
Metasurfaces are planar (~2D) structures that locally modify the polarization, phase, and amplitude of light in reflection or transmission, where each sub-pixel is smaller than the wavelength of light. When we say 2D, the vertical dimension is normally <100nm, or ~1,000x smaller than a human hair. Therefore, these flat, highly functional optical components can be manufactured in exactly the same way as state-of-the-art electronics, such as microchips, which use high resolution lithographic techniques.
What have they done?
The team of researchers has developed a new kind of metasurface composed of a single-layer array of amorphous silicon nanopillars, patterned into differently sized elliptical posts, all sitting upon what is essentially a glass surface. Seen under a scanning electron microscope, the metasurface appears as a cut forest where only the stumps remain.
Each silicon stump, or pillar, has an elliptical cross section, and hence has a different effective refractive index associated with the two different modes that can be excited across the structure. By carefully varying the diameters of each pillar and rotating them around their axes, the scientists were able to simultaneously manipulate the phase and polarization of passing light.
Using the elliptical nanopillar as a sub-wavelength pixel, the researchers produced a range of optical devices, from polarization beam splitters and lenses to phase holograms, all operating at a near-infrared wavelength of 915nm. (Visible light is 400-700nm.)
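To build a lens this way, each sub-wavelength pixel must impose the phase a curved glass surface would have, namely phi(r) = -(2*pi/lam) * (sqrt(r^2 + f^2) - f), so that all paths arrive at the focus in phase. The 915 nm wavelength is from the article; the focal length and pixel pitch below are illustrative assumptions, not values from the paper.

```python
import math

lam = 915e-9      # operating wavelength, m (from the article)
f = 200e-6        # assumed focal length, m
pitch = 400e-9    # assumed sub-wavelength pixel pitch, m

# Target phase at the first few pixel positions along a radius.
phases = []
for i in range(5):
    r = i * pitch
    phi = -(2 * math.pi / lam) * (math.sqrt(r * r + f * f) - f)
    phases.append(phi)
    print(f"pixel {i}: r = {r * 1e9:.0f} nm, phase = {phi:.6f} rad")
```

The designer's job is then inverse: for each pixel, pick the pillar diameters and rotation that produce the target phase (and polarization) at that point.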
Should I care?
To be sure, this is an incremental advance. There is a plethora of research worldwide on metasurfaces, and nearly every week a new device appears that will supposedly "revolutionize" the field of optics and be applicable to every field one might think of (a nifty selling tactic for getting your work published). In reality, the research is a small incremental step based on a nanostructured surface that manipulates a light field to some degree. For example, the work uses infrared light, because doing this at shorter, visible wavelengths runs into all sorts of fabrication problems. Also, nanorods and other geometries have been used previously to do much the same thing.
The work in and of itself is not bad, and is a nice window into the world of metasurfaces. However, until someone cracks full, re-configurable phase control at visible wavelengths, the impact of such work will be short-lived.
Space is big and mostly empty, but it’s the small part that isn’t empty that ends up being an issue for space exploration. Even a tiny piece of debris from a derelict satellite or ancient bit of space rock can cause damage to a spacecraft, and that damage can expose your fragile atmosphere-loving body to the harsh vacuum of space in a real hurry. Researchers from the University of Michigan working with NASA have developed a material that might add an extra layer of protection from space debris, a material that can heal itself to seal hull breaches.
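To see why even tiny debris is dangerous, consider the kinetic energy involved. Closing speeds in low-Earth orbit are typically around 10 km/s; the one-gram mass below is illustrative:

```python
# Kinetic energy of a small piece of orbital debris: KE = 1/2 * m * v^2.
m = 0.001        # kg — a bolt-fragment-scale piece of debris
v = 10_000.0     # m/s — typical LEO closing speed
ke = 0.5 * m * v ** 2
print(f"{ke / 1000:.0f} kJ")
```

Fifty kilojoules from a one-gram fleck, roughly a dozen high-power rifle rounds' worth of energy, is why shielding and self-sealing materials both matter.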
The International Space Station is the most heavily shielded craft ever built, a necessity given that it's designed to operate for years in orbit. The current design relies on a series of impact shields known as Whipple bumpers or Whipple shields. These bumpers are essentially thin layers of material that stand off from the hull of the station by at least several centimeters. When a small object strikes the station, the impact with the Whipple bumper slows it down and may even cause it to break up. The result is a lower force spread over a larger surface area of the actual hull.
If the bumpers were to fail, the station would have a weak spot that could lead to a hull rupture. The work by U of M scientists might offer an added layer of protection. This new material is composed of a type of liquid resin called thiol-ene-trialkylborane. It's sandwiched between two polymer panels to form an airtight seal. The resin remains liquid as long as that seal remains unbroken. Should a projectile pierce the hull of a ship that includes this material, the seal is broken. The resin leaks out through the breach, and that's when the magic (science) happens.
On one side of the breach is vacuum, and as we've all learned from TV and movies, the air inside a spacecraft gets sucked out quickly. Oxygen in that escaping air reacts with the resin as it leaks out, causing it to harden into a solid plug that stops more atmosphere from escaping. This happens extremely fast — in the researchers' test footage, the resin hardens in just a few milliseconds.
The plug only has to hold one atmosphere of pressure inside the ship, so it doesn't have to be as strong as the undamaged hull. It just needs to be good enough to keep everyone alive while the crew makes proper repairs. While space is the main application, the researchers say the material could also be useful in automotive and building technology.
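That one-atmosphere requirement is modest. The force on the plug is just pressure times breach area; for a small puncture (the 5 mm diameter below is an illustrative assumption) it amounts to only a couple of newtons:

```python
import math

P = 101_325.0              # Pa, one standard atmosphere
d = 0.005                  # m, assumed 5 mm breach diameter
area = math.pi * (d / 2) ** 2
force = P * area
print(f"~{force:.1f} N holding force required")
```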