Monday, September 30, 2013

Largest, most accurate list of RNA editing sites

Researchers have compiled the largest and most rigorously validated list to date of genetic sites in fruit flies where the RNA transcribed from DNA is then edited by an enzyme to affect a wide variety of fundamental biological functions. The list yielded several biological insights and can aid further research on RNA transcription because flies are a common model in that work.
A research team centered at Brown University has compiled the largest and most stringently validated list of RNA editing sites in the fruit fly Drosophila melanogaster, a stalwart of biological research. Their research, which yielded several insights into the model organism’s fundamental biology, appears Sept. 29 in Nature Structural & Molecular Biology.
The “master list” totals 3,581 sites in which the enzyme ADAR might swap an “A” nucleotide for a “G” in an RNA molecule. Such a seemingly small tweak means a lot because it changes how genetic instructions in DNA are put into action in the fly body, affecting many fundamental functions including proper neural and gender development. In humans, perturbed RNA editing has been strongly implicated in diseases such as ALS and Aicardi-Goutières syndrome.
The new list of editing sites could therefore help thousands of researchers studying the RNA molecules that are transcribed from DNA, the so-called “transcriptome,” by providing reliable information about the thousands of editing changes that can occur.
“Drosophila serves as a model for all the organisms where people are studying transcriptomes,” said the paper’s corresponding author Robert Reenan, professor of biology in the Department of Molecular Biology, Cell Biology, and Biochemistry at Brown. “But in the early days of RNA editing research, the catalog of these sites was determined completely by chance – people working on genes of interest would discover a site. The number of sites grew slowly.”
In fact, Reenan was co-author of a paper in Science 10 years ago that made a splash with only 56 new editing sites, which, at the time, more than doubled the number of known sites in the entire field.
Validation means accuracy
Several more recent attempts to catalog RNA editing sites have yielded larger catalogs, but those contained many errors (the paper provides a comparison between the new list and previous efforts such as ModENCODE).
To avoid such mistakes, Reenan and colleagues, including lead author and graduate student Georges St. Laurent, painstakingly validated 1,799 of the sites. They worked with Charles Lawrence, professor of applied mathematics and the paper’s co-senior author, to predict another 1,782 sites and validated a statistically rigorous sampling of those.
In all, the team’s methodology allowed them to estimate that the combined list of 3,581 directly observed and predicted sites is 87 percent accurate.
“The sites that we validated, for anyone who wants to do the same experiment under the same conditions, the sites should be there,” said co-author and postdoctoral researcher Yiannis Savva. “In other papers, they just did sequencing to say there is an editing site there, but when you check, it’s not there.”
The researchers used the tried-and-true, decades-old Sanger method of sequencing to double-check all the candidate editing sites that they had found using the high-throughput technology called single molecule sequencing. They compared the sequenced RNA of a population of fruit flies to their sequenced DNA and to the RNA of another population of flies engineered to lack the ADAR editing enzyme. By comparing these three sequences they were able to see the A-to-G changes that could not be attributed to anomalies in DNA (i.e., mutations, or single-nucleotide polymorphisms) and that never occurred in flies incapable of editing.
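To make that three-way comparison concrete, here is a minimal sketch in Python (an illustration of the logic described above, not the authors' actual pipeline; the positions and base calls are hypothetical):

```python
# A minimal sketch (not the authors' code) of the three-way comparison: keep an
# A-to-G mismatch only if the genomic DNA still reads "A" at that position (so it
# is not a SNP or mutation) and the ADAR-null flies show no editing there.
def candidate_editing_sites(wt_rna, genomic_dna, adar_null_rna):
    """Each argument maps a genomic position to the consensus base observed there."""
    sites = []
    for pos, rna_base in wt_rna.items():
        if rna_base != "G":
            continue                        # editing reads out as an A-to-G change
        if genomic_dna.get(pos) != "A":
            continue                        # a G in the DNA would be a SNP, not editing
        if adar_null_rna.get(pos) == "G":
            continue                        # "edited" without ADAR -> likely artifact
        sites.append(pos)
    return sites

# Toy usage with hypothetical positions:
print(candidate_editing_sites(
    wt_rna={101: "G", 102: "G", 103: "A"},
    genomic_dna={101: "A", 102: "G", 103: "A"},
    adar_null_rna={101: "A", 102: "G", 103: "A"},
))  # -> [101]
```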
As they conducted their validations, they fed the results back into their prediction algorithm. Over several iterations, that computer model “learned” to make better and better predictions. They ultimately found 77 different variables that helped them to distinguish real editing sites from nucleotides that were conclusively not editing sites.
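The iterative, validation-driven "learning" can be pictured with a sketch like the one below. This is only an illustration under assumed data: the 77 variables in the study are features defined in the paper, while the feature matrix, batch sizes, and classifier here are stand-ins.

```python
# A hedged sketch of iterative retraining: each round, a batch of candidate sites
# is validated in the lab, the labels are added to the training set, the model is
# refit, and the remaining candidates are re-scored.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_features = 77                                    # the paper reports 77 informative variables
features = rng.normal(size=(5000, n_features))     # placeholder per-site feature matrix
candidates = list(range(5000))                     # indices of unvalidated candidate sites

def sanger_validate(site):
    """Stand-in for wet-lab Sanger validation; returns True if the site is real."""
    return bool(rng.integers(0, 2))

labeled_X, labeled_y = [], []
model = LogisticRegression(max_iter=1000)

for round_number in range(5):                      # several validation rounds
    batch, candidates = candidates[:200], candidates[200:]   # sites sent to the lab
    for site in batch:
        labeled_X.append(features[site])
        labeled_y.append(sanger_validate(site))
    model.fit(np.array(labeled_X), np.array(labeled_y))      # refit on all labels so far
    scores = model.predict_proba(features[candidates])[:, 1] # re-rank remaining candidates

print(f"{len(labeled_y)} sites validated; {len(candidates)} candidates still ranked by the model")
```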
Biological insights
The researchers then examined the implications of the patterns they saw in their data and gained several insights.
One was that a considerable amount of editing occurs in sections of RNA that do not code for making proteins. Editing is concentrated in a small number of RNAs, raising the question, Lawrence said, of what accounts for that selectivity.
“How does the cell go about choosing which ones are going to get edited and which aren’t is an interesting question this opens,” he said.
Where editing is found, the researchers discovered, there is usually more alternative splicing, which means the body is more often assembling a different recipe from its genetic instructions to make certain proteins.
The researchers also found that the RNAs that are most heavily edited tend to be expressed to a lesser extent, decreasing how often they are put into action in the body.
RNA editing helps explain why organisms are even more different from each other – and from themselves at different times — than DNA differences alone would suggest.
“RNA editing has emerged as a way to diversify not just the proteome but the transcriptome overall,” Reenan said.
In addition to Reenan, Lawrence, Savva, and St. Laurent, who is also affiliated with the St. Laurent Institute in Cambridge, Mass., the paper’s other authors are Michael Tackett, Sergey Nechkin, Dimitry Shtokalo, and Philipp Kapranov of the St. Laurent Institute, Denis Antonets of the State Research Center of Virology and Biotechnology in Russia, and Rachael Maloney, a Brown graduate now at the University of Massachusetts Medical School.
Reenan received funding from the Ellison Medical Research Foundation.

Saturday, September 28, 2013

New Energy Storage Capabilities Between the Layers of Two-Dimensional Materials

Drexel University researchers are continuing to expand the capabilities and functionalities of a family of two-dimensional materials they discovered that are just a few atoms thick, but have the potential to store massive amounts of energy. Their latest achievement has pushed the materials’ storage capacities to new levels while also allowing for their use in flexible devices.

About three years ago, Dr. Michel W. Barsoum and Dr. Yury Gogotsi, professors in Drexel’s College of Engineering, discovered atomically thin, two-dimensional materials, similar to graphene, that have good electrical conductivity and a hydrophilic surface that can hold liquids. They named these new materials “MXenes,” which hearkens to their genesis through the process of etching and exfoliating atomically thin layers of aluminum from layered carbide “MAX phases.” The latter were also discovered at Drexel, by Barsoum, about 15 years ago.

Since then, the pair, and their team of materials scientists, have forged ahead in exploring the potential uses of MXenes. Their latest findings are reported in the Sept. 27 issue of Science. In their paper, entitled “Cation Intercalation and High Volumetric Capacitance of Two-dimensional Titanium Carbide,” Gogotsi and Barsoum, along with Drexel researchers Maria Lukatskaya, Olha Mashtalir, Chang Ren, Yohan Dall’Agnese and Michael Naguib, and Patrick Rozier, Pierre-Louis Taberna and Dr. Patrice Simon from Université Paul Sabatier in France, explain how MXenes can accommodate various ions and molecules between their layers by a process known as intercalation.

Intercalation is sometimes a necessary step in order to exploit the unique properties of two-dimensional materials. For example, placing lithium ions between the MXene sheets makes them good candidates for use as anodes in lithium-ion batteries. The fact that MXenes can accommodate ions and molecules in this way is significant because it expands their ability to store energy.

“Currently, nine MXenes have been reported by our team, but there are likely many more that will be discovered - the MXene-and-ion combinations that have been tested to date are by no means an exhaustive demonstration of the material’s energy storage capabilities,” said Gogotsi, who is also director of the A.J. Drexel Nanotechnology Institute. “So even the impressive capacitances that we are seeing here are probably not the highest possible values to be achieved using MXenes. Intercalation of magnesium and aluminum ions that we observed may also pave the way to development of new kinds of metal ion batteries.”

Barsoum and Gogotsi’s report looks at intercalation of MXenes with a variety of ions, including lithium, sodium, magnesium, potassium, ammonium and aluminum ions. The resulting materials show high energy storage capacities and present another avenue of research in this branch of materials science.

“Two-dimensional, titanium carbide MXene electrodes show excellent volumetric supercapacitance of up to 350 F/cm3 due to intercalation of cations between its layers,” Barsoum said. “This capacity is significantly higher than what is currently possible with porous carbon electrodes. In other words, we can now store more energy in smaller volumes, an important consideration as mobile devices get smaller and require more energy.”
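As a rough sense of scale for a volumetric capacitance of 350 F/cm3, the back-of-the-envelope sketch below applies the standard capacitor energy relation E = ½CV². The roughly 1 V voltage window is an assumed illustrative value, not a number from the paper.

```python
# A back-of-the-envelope estimate (assumptions noted above, not from the paper).
capacitance_per_cm3 = 350.0     # F/cm^3, volumetric capacitance quoted in the article
voltage_window = 1.0            # V, assumed electrochemical window for illustration

energy_j_per_cm3 = 0.5 * capacitance_per_cm3 * voltage_window**2
energy_wh_per_liter = energy_j_per_cm3 * 1000 / 3600   # 1 L = 1000 cm^3, 1 Wh = 3600 J

print(f"{energy_j_per_cm3:.0f} J/cm^3  ~  {energy_wh_per_liter:.0f} Wh/L under these assumptions")
# -> about 175 J/cm^3, or roughly 49 Wh/L
```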

The researchers also reported on using MXene “paper” electrodes, instead of conventional rolled powder electrodes with a polymer binder. The flexibility of this paper suggests MXenes may also be useful in flexible and wearable energy storage devices, which is another major area of ongoing research at Drexel in collaboration with Professor Genevieve Dion’s Shima Seiki Haute Technology Laboratory.

Source: http://www.drexel.edu/now/news-media/releases/archive/2013/September/MXenes-Science/#sthash.8rtZ168M.dpuf

Researchers Demonstrate 'Accelerator on a Chip'


Technology could spawn new generations of smaller, less expensive devices for science, medicine

In an advance that could dramatically shrink particle accelerators for science and medicine, researchers used a laser to accelerate electrons at a rate 10 times higher than conventional technology in a nanostructured glass chip smaller than a grain of rice.

The achievement was reported today in Nature by a team including scientists from the U.S. Department of Energy’s (DOE) SLAC National Accelerator Laboratory and Stanford University.

“We still have a number of challenges before this technology becomes practical for real-world use, but eventually it would substantially reduce the size and cost of future high-energy particle colliders for exploring the world of fundamental particles and forces,” said Joel England, the SLAC physicist who led the experiments. “It could also help enable compact accelerators and X-ray devices for security scanning, medical therapy and imaging, and research in biology and materials science.”

Because it employs commercial lasers and low-cost, mass-production techniques, the researchers believe it will set the stage for new generations of "tabletop" accelerators.

At its full potential, the new “accelerator on a chip” could match the accelerating power of SLAC’s 2-mile-long linear accelerator in just 100 feet, and deliver a million more electron pulses per second.

This initial demonstration achieved an acceleration gradient, or amount of energy gained per length, of 300 million electronvolts per meter. That's roughly 10 times the acceleration provided by the current SLAC linear accelerator.

“Our ultimate goal for this structure is 1 billion electronvolts per meter, and we’re already one-third of the way in our first experiment,” said Stanford Professor Robert Byer, the principal investigator for this research.
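Putting the article's numbers together, the sketch below is a quick sanity check (an illustration, not the researchers' analysis) of how energy gain scales as gradient times length; a comparison with SLAC's full beam energy would also depend on the existing machine's effective gradient, which is not quoted here.

```python
# Energy gained by an electron is roughly the acceleration gradient times the length.
FOOT_M = 0.3048

demonstrated_gradient = 300e6      # eV/m, achieved in this first experiment
goal_gradient = 1e9                # eV/m, Byer's stated ultimate goal
chip_linac_length = 100 * FOOT_M   # "in just 100 feet" (~30.5 m)

energy_at_goal = goal_gradient * chip_linac_length                 # ~3.0e10 eV
energy_demonstrated = demonstrated_gradient * chip_linac_length    # ~9.1e9 eV

print(f"At the goal gradient:         {energy_at_goal / 1e9:.0f} GeV over 100 feet")
print(f"At the demonstrated gradient: {energy_demonstrated / 1e9:.1f} GeV over 100 feet")
print(f"Progress toward the goal:     {demonstrated_gradient / goal_gradient:.0%}")
```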


How It Works


Today’s accelerators use microwaves to boost the energy of electrons. Researchers have been looking for more economical alternatives, and this new technique, which uses ultrafast lasers to drive the accelerator, is a leading candidate.

Particles are generally accelerated in two stages. First they are boosted to nearly the speed of light. Then any additional acceleration increases their energy, but not their speed; this is the challenging part.

In the accelerator-on-a-chip experiments, electrons are first accelerated to near light-speed in a conventional accelerator. Then they are focused into a tiny, half-micron-high channel within a fused silica glass chip just half a millimeter long. The channel had been patterned with precisely spaced nanoscale ridges. Infrared laser light shining on the pattern generates electrical fields that interact with the electrons in the channel to boost their energy. (See the accompanying animation for more detail.)

Turning the accelerator on a chip into a full-fledged tabletop accelerator will require a more compact way to get the electrons up to speed before they enter the device.

A collaborating research group in Germany, led by Peter Hommelhoff at Friedrich Alexander University and the Max Planck Institute of Quantum Optics, has been looking for such a solution. It simultaneously reports in Physical Review Letters its success in using a laser to accelerate lower-energy electrons.
Multi-Use Accelerators

Applications for these new particle accelerators would go well beyond particle physics research. Byer said laser accelerators could drive compact X-ray free-electron lasers, comparable to SLAC’s Linac Coherent Light Source, that are all-purpose tools for a wide range of research.

Another possible application is small, portable X-ray sources to improve medical care for people injured in combat, as well as provide more affordable medical imaging for hospitals and laboratories. That’s one of the goals of the Defense Advanced Research Projects Agency’s (DARPA) Advanced X-Ray Integrated Sources (AXiS) program, which partially funded this research. Primary funding for this research is from the DOE’s Office of Science.

The study's lead authors were Stanford graduate students Edgar Peralta and Ken Soong. Peralta created the patterned fused silica chips in the Stanford Nanofabrication Facility. Soong implemented the high-precision laser optics for the experiment at SLAC’s Next Linear Collider Test Accelerator. Additional contributors included researchers from the University of California-Los Angeles and Tech-X Corp. in Boulder, Colo.

SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the U.S. Department of Energy Office of Science. To learn more, please visit www.slac.stanford.edu.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Source: http://www6.slac.stanford.edu/news/2013-09-27-accelerator-on-a-chip.aspx

Friday, September 27, 2013

Superfast switching of a quantum light source

Usually, an elementary light source – such as an excited atom or molecule – emits light of a particular color at an unpredictable instant in time. Recently, however, scientists from the MESA+ Institute for Nanotechnology of the University of Twente (UT), FOM and the Institute for Nanoscience and Cryogenics (CEA/INAC) in France have shown that a light source can be coaxed to emit light at a desired moment in time, within an ultrashort burst. The superfast switching of a light source has applications in fast stroboscopes without laser speckle, in the precise control of quantum systems and for ultrasecure communication using quantum cryptography. The theoretical results appeared on 25 September in Optics Express.

Spontaneous emission of light from excited sources, such as atoms, molecules or quantum dots, is a fundamental process with many applications in modern technology, such as LEDs and lasers. As the term 'spontaneous emission' indicates, the emission is random in nature and it is therefore impossible to predict the exact emission time of a photon. However, for several applications it is desirable to receive single photons exactly when they are needed with as little uncertainty as possible. This property is crucial for ultra-secure communication using quantum cryptography and in quantum computers. Therefore, the important goal is to fabricate a quantum light source such that it emits a single photon exactly at a desired moment in time.
Switching light emission
The average emission time of quantum light sources can be reduced by locating them in various nanostructures, like optical resonators or waveguides. But in a usual stationary environment the distribution of emission times is always exponential in time. In addition, the smallest uncertainty in the emission time is limited by both the maximum intensity in the resonator and the variations in the preparation time of the emitter. The Dutch-French team proposes to overcome these limitations by quickly switching the length of the resonator in which the light source is located. The time duration of the switch should be much shorter than the average emission time. The result is that the favored color of the resonator matches the emission color of the light source only within a short time interval. Only within this short time frame are the photons emitted by the light source into the resonator.
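A toy simulation makes the idea tangible (this is a sketch under simple assumptions, not the team's calculation): spontaneous emission times in a stationary environment follow an exponential distribution with mean lifetime tau, and a fast switch makes the resonator resonant with the emitter only during a short window.

```python
# Toy model: exponential emission times, gated by a short resonance window.
import numpy as np

rng = np.random.default_rng(1)
tau = 1.0                           # average emission time (arbitrary units)
t_switch, dt_switch = 0.5, 0.05     # start and duration of the resonance window

emission_times = rng.exponential(tau, size=100_000)   # stationary environment

in_window = (emission_times >= t_switch) & (emission_times < t_switch + dt_switch)
print(f"Fraction emitted inside the window without switching: {in_window.mean():.3f}")
# With the switched resonator, emission into the cavity is concentrated in this
# window, so the burst duration is set by dt_switch rather than by tau.
```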
Cartoon of the superfast emission of a light source. The light source is embedded in an optical resonator, where it spontaneously emits a photon. During the emission of the photon, the favored color of the resonator is quickly switched (symbolized by a hammer) to match the color of the light source. During this short interval the light source is triggered to emit an ultrashort burst of photons at a desired moment in time.


Ultrafast light source
The researchers propose to use quantum dot light sources, which can easily be integrated in semiconductor optical resonators with lengths on the order of microns. The switching of the resonator will be achieved by shining an ultrashort laser pulse at the micropillar resonator during the emission time of the quantum dots. This quickly changes the refractive index in the resonator and thereby the effective resonator length. The switching time can be directly controlled by the arrival time of the short laser pulse and by the lifetime of the excited electrons. These controlled light switches have great prospects for creating incoherent ultrafast light sources for fast stroboscopes without laser speckle, for quantum cryptography, for quantum information and for studying ultrafast cavity quantum electrodynamics.

The team
The research has been performed by FOM postdoc Dr. Henri Thyrrestrup, Dr. Alex Hartsuiker and FOM workgroup leader Prof.dr. Willem L. Vos from the Complex Photonic Systems (COPS) Chair at the MESA+ Institute for Nanotechnology of the University of Twente in Enschede, The Netherlands, in close collaboration with Prof.dr. Jean-Michel Gérard from the Institute for Nanoscience and Cryogenics (CEA/INAC) in Grenoble, France.

The spontaneous emission intensity from a light source as a function of time after excitation at time zero. The emission in a usual stationary environment follows an exponential curve (dashed curve), whereas the photons emitted by the light source placed in the switched optical resonator (red curve) can be bunched within a time window that is much shorter than the average emission time. The short, intense burst of light is marked by the red area.
Based on a press release by the Dutch Foundation for Fundamental Research on Matter FOM.

How to make ceramics that bend without breaking

New materials developed at MIT could lead to actuators on a chip and self-deploying medical devices.

Ceramics are not known for their flexibility: they tend to crack under stress. But researchers from MIT and Singapore have just found a way around that problem — for very tiny objects, at least.

The team has developed a way of making minuscule ceramic objects that are not only flexible, but also have a “memory” for shape: When bent and then heated, they return to their original shapes. The surprising discovery is reported this week in the journal Science, in a paper by MIT graduate student Alan Lai, professor Christopher Schuh, and two collaborators in Singapore. 

Shape-memory materials, which can bend and then snap back to their original configurations in response to a temperature change, have been known since the 1950s, explains Schuh, the Danae and Vasilis Salapatas Professor of Metallurgy and head of MIT’s Department of Materials Science and Engineering. “It’s been known in metals, and some polymers,” he says, “but not in ceramics.”

In principle, the molecular structure of ceramics should make shape memory possible, he says — but the materials’ brittleness and propensity for cracking have been a hurdle. “The concept has been there, but it’s never been realized,” Schuh says. “That’s why we were so excited.” 

The key to shape-memory ceramics, it turns out, was thinking small.

The team accomplished this in two key ways. First, they created tiny ceramic objects, invisible to the naked eye: “When you make things small, they are more resistant to cracking,” Schuh says. Then, the researchers concentrated on making the individual crystal grains span the entire small-scale structure, removing the crystal-grain boundaries where cracks are most likely to occur.

Those tactics resulted in tiny samples of ceramic material — samples with deformability equivalent to about 7 percent of their size. “Most things can only deform about 1 percent,” Lai says, adding that normal ceramics can’t even bend that much without cracking.

David Dunand, a professor of materials science and engineering at Northwestern University, says the MIT team “achieved something that was widely considered impossible,” finding “a clever solution, based on fundamental materials-science principles, to the Achilles’ heel of ceramics and other brittle materials.” 

“Usually if you bend a ceramic by 1 percent, it will shatter,” Schuh says. But these tiny filaments, with a diameter of just 1 micrometer — one millionth of a meter — can be bent by 7 to 8 percent repeatedly without any cracking, he says.
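As a rough geometric illustration (not a calculation from the paper), the surface strain of a fiber bent to a radius of curvature R is approximately the fiber radius divided by R, so a 7 percent strain on a 1-micrometer filament corresponds to a very tight bend:

```python
# Surface bending strain of a fiber is approximately r / R (fiber radius over bend radius).
fiber_diameter = 1e-6        # m, ~1 micrometer as in the article
target_strain = 0.07         # the 7 percent deformation reported

bend_radius = (fiber_diameter / 2) / target_strain
print(f"Bend radius for 7% surface strain: {bend_radius * 1e6:.1f} micrometers")
# -> about 7 micrometers, i.e. the filament tolerates being bent around a very small radius
```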

While a micrometer is pretty tiny by most standards, it’s actually not so small in the world of nanotechnology. “It’s large compared to a lot of what nanotech people work on,” Lai says. As such, these materials could be important tools for those developing micro- and nanodevices, such as for biomedical applications. For example, shape-memory ceramics could be used as microactuators to trigger actions within such devices — such as the release of drugs from tiny implants.

Compared to the materials currently used in microactuators, Schuh says, the strength of the ceramic would allow it to exert a stronger push in a microdevice. “Microactuation is something we think this might be very good for,” he says, because the ceramic material has “the ability to push things with a lot of force — the highest on record” for its size.

The ceramics used in this research were made of zirconia, but the same techniques should apply to other ceramic materials. Zirconia is “one of the most well-studied ceramics,” Lai says, and is already widely used in engineering. It is also used in fuel cells, considered a promising means of providing power for cars, homes and even for the electric grid. While there would be no need for elasticity in such applications, the material’s flexibility could make it more resistant to damage.

The material combines some of the best attributes of metals and ceramics, the researchers say: Metals have lower strength but are very deformable, while ceramics have much greater strength, but almost no ductility — the ability to bend or stretch without breaking. The newly developed ceramics, Schuh says, have “ceramiclike strength, but metallike ductility.”

Robert Ritchie, a professor of materials science and engineering at the University of California at Berkeley, says, “The very notion of superelastic ceramics is somewhat of a surprise. … We all know that ceramics invariably are extremely brittle.”

Ritchie, who was not connected with this work, points out that shape-memory metals are already used in satellite antennae and in self-expanding dental and cardiovascular prostheses. “Applying these concepts to ceramics, however,” he says, “is somewhat startling and raises many interesting possibilities.”

In addition to Schuh and Lai, the work was carried out by Zehui Du and Chee Lip Gan of Nanyang Technological University in Singapore.

Thursday, September 26, 2013

Nanoparticle vaccine offers better protection

Particles that deliver vaccines directly to mucosal surfaces could defend against many infectious diseases.


Many viruses and bacteria infect humans through mucosal surfaces, such as those in the lungs, gastrointestinal tract and reproductive tract. To help fight these pathogens, scientists are working on vaccines that can establish a front line of defense at mucosal surfaces.

Vaccines can be delivered to the lungs via an aerosol spray, but the lungs often clear away the vaccine before it can provoke an immune response. To overcome that, MIT engineers have developed a new type of nanoparticle that protects the vaccine long enough to generate a strong immune response — not only in the lungs, but also in mucosal surfaces far from the vaccination site, such as the gastrointestinal and reproductive tracts.

Such vaccines could help protect against influenza and other respiratory viruses, or prevent sexually transmitted diseases such as HIV, herpes simplex virus and human papilloma virus, says Darrell Irvine, an MIT professor of materials science and engineering and biological engineering and the leader of the research team. He is also exploring use of the particles to deliver cancer vaccines.

“This is a good example of a project where the same technology can be applied in cancer and in infectious disease. It’s a platform technology to deliver a vaccine of interest,” says Irvine, who is a member of MIT’s Koch Institute for Integrative Cancer Research and the Ragon Institute of Massachusetts General Hospital, MIT and Harvard University. 

Irvine and colleagues describe the nanoparticle vaccine in the Sept. 25 issue of Science Translational Medicine. Lead authors of the paper are recent PhD recipient Adrienne Li and former MIT postdoc James Moon.

Sturdier vaccines

Only a handful of mucosal vaccines have been approved for human use; the best-known example is the Sabin polio vaccine, which is given orally and absorbed in the digestive tract. There is also a flu vaccine delivered by nasal spray, and mucosal vaccines against cholera, rotavirus and typhoid fever.  

To create better ways of delivering such vaccines, Irvine and his colleagues built upon a nanoparticle they developed two years ago. The protein fragments that make up the vaccine are encased in a sphere made of several layers of lipids that are chemically “stapled” to one another, making the particles more durable inside the body. 

“It’s like going from a soap bubble to a rubber tire. You have something that’s chemically much more resistant to disassembly,” Irvine says.

This allows the particles to resist disintegration once they reach the lungs. With this sturdier packaging, the protein vaccine remains in the lungs long enough for immune cells lining the surface of the lungs to grab it and deliver it to T cells. Activating T cells is a critical step for the immune system to form a memory of the vaccine particles so it will be primed to respond again during an infection.

Stopping the spread of infection

In studies of mice, the researchers found that HIV or cancer antigens encapsulated in nanoparticles were taken up by immune cells much more successfully than vaccine delivered to the lungs or under the skin without being trapped in nanoparticles.  

HIV does not infect mice, so to test the immune response generated by the vaccines, the researchers infected the mice with a version of the vaccinia virus that was engineered to produce the HIV protein delivered by the vaccine. 

Mice vaccinated with nanoparticles were able to quickly contain the virus and prevent it from escaping the lungs. Vaccinia virus usually spreads to the ovaries soon after infection, but the researchers found that the vaccinia virus in the ovaries of mice vaccinated with nanoparticles was undetectable, while substantial viral concentrations were found in mice that received other forms of the vaccine. 

Mice that received the nanoparticle vaccine lost a small amount of weight after infection but then fully recovered, whereas the viral challenge was 100 percent lethal to mice that received the non-nanoparticle vaccine.

“Giving the vaccine at the mucosal surface in the nanocapsule form allowed us to completely block that systemic infection,” Irvine says. 

The researchers also found a strong memory T cell presence at distant mucosal surfaces, including in the digestive and reproductive tracts. “An important caveat is that although immunity at distant mucus membranes following vaccination at one mucosal surface has been seen in humans as well, it’s still being worked out whether the patterns seen in mice are fully reproduced in humans,” Irvine says. “It might be that it’s a different mucosal surface that gets stimulated from the lungs or from oral delivery in humans.”

Melissa Herbst-Kralovetz, an assistant professor of basic medical sciences at the University of Arizona College of Medicine, says the nanoparticles are “an exciting and effective strategy for inducing effector-memory T-cell responses to nonreplicating subunit vaccines through mucosal vaccination.”

“More research will need to be conducted to determine the delivery approach to be used in humans, but this vaccination strategy is particularly important for diseases that may require significant T cell-mediated protection, such as HIV,” says Herbst-Kralovetz, who was not part of the research team. 

Tumor defense

The particles also hold promise for delivering cancer vaccines, which stimulate the body’s own immune system to destroy tumors. 

To test this, the researchers first implanted the mice with melanoma tumors that were engineered to express ovalbumin, a protein found in egg whites. Three days later, they vaccinated the mice with ovalbumin. They found that mice given the nanoparticle form of the vaccine completely rejected the tumors, while mice given the uncoated vaccine did not.

Further studies need to be done with more challenging tumor models, Irvine says. In the future, tests with vaccines targeted to proteins expressed by cancer cells would be necessary. 

The research was funded by the National Cancer Institute, the Ragon Institute, the Bill and Melinda Gates Foundation, the U.S. Department of Defense and the National Institutes of Health.

The nanoparticle technology has been patented and licensed to a company called Vedantra, which is now developing infectious-disease and cancer vaccines.

MIT and Harvard create new state of matter: Photonic molecules

MIT and Harvard create new, lightsaber-like state of matter: Photonic molecules

The awesomely named Center for Ultracold Atoms, a joint Harvard and MIT venture, has created a new state of matter: photonic molecules. This new state of matter is surprising and interesting, as photons are considered to be massless and incapable of interacting with each other. According to the research group’s leader, who has the unbelievably coincidental surname Lukin, these photonic molecules behave somewhat like lightsabers from the Star Wars universe, with the photons pushing and deflecting each other, but staying linked.
Almost the entirety of our understanding of light is predicated on the knowledge that photons, the elementary particle that makes up the quantum of light and all other electromagnetic radiation, are massless and have no electric charge. If you shine two lasers at each other, because the streams of photons have no mass or charge, the streams of photons simply pass through each other without reacting. It is for this reason that light (and EMR in general) is such a great medium for transmitting data over long distances, and for perceiving visual stimuli with your eyes. If you used almost any other kind of particle to transmit data, it would react violently and fizzle in the atmosphere almost instantly.
Now, however, the Harvard and MIT researchers, led by Lukin, have managed to make photons behave almost as if they’re normal, massive particles. To do this, the researchers pump rubidium atoms into a vacuum chamber, and then cool the vacuum down until it’s a few degrees from absolute zero. Extremely weak laser light — a stream of single photons — is then shone through the rubidium-filled vacuum. As an individual photon travels through the medium, it loses energy to the rubidium atoms and slows down. When the researchers used the laser to fire two photons instead of one, they found that the photons had become a two-photon molecule by the time they left the medium.
A highly informative diagram, showing the attraction (F) between the two photons in a photonic molecule
These photonic molecules have been theorized to exist, through an effect called the Rydberg blockade, but this is the first time that this new state of matter has been physically realized. Unlike a normal molecule, where the constituent atoms are held together by chemical bonds caused by opposite electron or nuclei charges, these photonic molecules aren’t really held together. Basically, as each photon travels through the medium and pushes against the rubidium atoms, the atoms push the photons back towards each other, forcing the two photons to coexist. “It’s a photonic interaction that’s mediated by the atomic interaction,” Lukin said. “That makes these two photons behave like a molecule, and when they exit the medium they’re much more likely to do so together than as single photons.”
As with all new effects, and more so with new states of matter, Lukin and co. aren’t entirely sure what practical applications these photonic molecules might have. As we mentioned previously, the way these photonic molecules jostle against each other isn’t completely unlike the way two lightsabers clash in Star Wars. There’s also the fact that photons are our best bet for quantum networking — but performing logic with photons, because they don’t like to interact with each other, is hard. These photonic molecules might provide a solution to this problem. Being an entirely new state of matter, though, we really won’t know what’s possible until we perform a lot more research — which is exactly what the Center for Ultracold Atoms plans to do.

Laser treatments yield smoother metal surfaces

Optical microscope cross-sections of the alloy surface show that increasing the laser beam overlap during processing reduces the number of small cracks (top left, 25% overlap; top right, 50%; bottom left, 75%; bottom right, 90%). © 2013 Elsevier
The properties of metal surfaces, typically prone to corrosion, are now more controllable using laser processing.

Ever since the Bronze Age, metals have been cast in different shapes for different applications. Smooth surfaces that are resistant to corrosion are crucial for many of the present-day uses of cast metals, ranging from bio-implants to automotive parts. Yingchun Guan, from the A*STAR Singapore Institute of Manufacturing Technology (SIMTech), and her co-workers have shown how different laser-processing methods improve metal surfaces and protect them against corrosion.
Laser processing involves scanning a high-intensity laser beam multiple times across the surface of a metal. Each scan by the laser beam ‘writes’ a track in the surface, which partially melts the metal. Consecutive tracks can overlap, and the degree of overlap affects how well the melting caused by these tracks smooths the surface of the metal. The scanning speed can also affect the surface melt.
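For readers unfamiliar with the term, the sketch below shows how track overlap is commonly defined (an assumption about the usual convention, not a formula taken from the study): the fraction of the beam diameter shared by consecutive scan tracks.

```python
# Overlap between consecutive laser tracks, under the common definition
# overlap = 1 - (track spacing / beam diameter). Values below are hypothetical.
def track_overlap(beam_diameter_um, track_spacing_um):
    """Fraction of the beam diameter shared by consecutive tracks."""
    return max(0.0, 1.0 - track_spacing_um / beam_diameter_um)

beam = 100.0  # micrometers, hypothetical spot size
for spacing in (75.0, 50.0, 25.0, 10.0):
    print(f"spacing {spacing:5.1f} um -> overlap {track_overlap(beam, spacing):.0%}")
# -> 25%, 50%, 75%, 90%: the overlap levels compared in the figure above
```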
Guan and co-workers investigated how different degrees of overlap between the tracks affect the surface properties of AZ91D — a common magnesium alloy. “AZ91D is the most widely used magnesium alloy for the production of high-volume components for the automotive, electronics and telecommunications industries,” Guan explains.
By examining cross-sections of AZ91D samples post-melt, the researchers found that the greater the degree of overlap between the tracks, the fewer the number of small cracks that developed during solidification (see image). According to Guan, this finding should be considered when processing metals destined for exposure to fluids, such as those that will be used in bio-implants.
The researchers also detected alterations in the alloy’s composition through changes in the degree of laser-track overlap. Melted magnesium evaporates more readily than aluminum, and as the degree of laser-track overlap increased, it changed the composition of the alloy — particularly in the larger areas of melt. Theoretical calculations by Guan and her co-workers described these kinetics accurately.
According to the team’s model, a greater level of overlap provided a greater amount of heat, which improved the convection of the metals within the molten liquid and yielded a more homogeneous surface. Electrochemical tests by the team also confirmed that the more homogeneous the surface of a material, the more resistant it was to corrosion.
The team’s approach, particularly the theoretical model, is applicable to assess laser processing of other alloys and compounds, Guan notes. As the surface structures affect not only the mechanical and chemical properties but also the electronic, thermal and optical parameters, these findings will be of relevance to metals used in a variety of applications.

Wednesday, September 25, 2013

The First Carbon Nanotube Computer

A carbon nanotube computer processor is comparable to a chip from the early 1970s, and may be the first step beyond silicon electronics.

For the first time, researchers have built a computer whose central processor is based entirely on carbon nanotubes, an incredibly tiny form of carbon with remarkable material and electronic properties. The computer is slow and simple, but its creators, a group of Stanford University engineers, say it shows that carbon nanotube electronics are a viable potential replacement for silicon when it reaches its limits in ever-smaller electronic circuits.

The carbon nanotube processor is comparable in capabilities to the Intel 4004, that company’s first microprocessor, which was released in 1971, says Subhasish Mitra, an electrical engineer at Stanford and one of the project’s co-leaders. The computer, described today in the journal Nature, runs software using a subset of the MIPS instruction set. It can switch between multiple tasks (counting and sorting numbers) and keep track of them, and it can fetch data from and send it back to an external memory.
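To illustrate the kind of workload described, here is a hedged sketch in Python (not the MIPS program actually executed on the chip) of interleaving a counting task and a sorting task by switching between them, with state carried in memory between switches:

```python
# Round-robin "multitasking": each time slice runs one step of one task, and the
# shared state dictionary plays the role of external memory between switches.
def counting_task(state):
    state["count"] += 1                      # count one tick per time slice
    return state

def sorting_task(state):
    data = state["data"]                     # one bubble-sort pass per time slice
    for i in range(len(data) - 1):
        if data[i] > data[i + 1]:
            data[i], data[i + 1] = data[i + 1], data[i]
    return state

state = {"count": 0, "data": [5, 2, 9, 1, 7]}
tasks = [counting_task, sorting_task]
for step in range(10):                       # alternate between the two tasks
    state = tasks[step % len(tasks)](state)

print(state)   # -> {'count': 5, 'data': [1, 2, 5, 7, 9]}
```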

The nanotube processor is made up of 142 transistors, each of which contains carbon nanotubes that are about 10 to 200 nanometers long. The Stanford group says it has made six versions of carbon nanotube computers, including one that can be connected to external hardware—a numerical keypad that can be used to input numbers for addition.

Aaron Franklin, a researcher at the IBM Watson Research Center in Yorktown Heights, New York, says the comparison with the 4004 and other early silicon processors is apt. “This is a terrific demonstration for people in the electronics community who have doubted carbon nanotubes,” he says.

Franklin’s group has demonstrated that individual carbon nanotube transistors—smaller than 10 nanometers—are faster and more energy efficient than those made of any other material, including silicon. Theoretical work has also suggested that a carbon nanotube computer would be an order of magnitude more energy efficient than the best silicon computers. And the nanomaterial’s ability to dissipate heat suggests that carbon nanotube computers might run blisteringly fast without heating up—a problem that sets speed limits on the silicon processors in today’s computers.

Still, some people doubt that carbon nanotubes will replace silicon. Working with carbon nanotubes is a big challenge. They are typically grown in a way that leaves them in a tangled mess, and about a third of the tubes are metallic, rather than semiconducting, which causes short-circuits.

Over the past several years, Mitra has collaborated with Stanford electrical engineer Philip Wong, who has developed ways to sidestep some of the materials challenges that have prevented the creation of complex circuits from carbon nanotubes. Wong developed a method for growing mostly very straight nanotubes on quartz, then transferring them over to a silicon substrate to make the transistors. The Stanford group also covers up the active areas of the transistors with a protective coating, then etches away any exposed nanotubes that have gone astray.

Wong and Mitra also apply a voltage to turn all of the semiconducting nanotubes on a chip to “off.” Then they pulse a large current through the chip; the metallic ones heat up, oxidize, and disintegrate. All of these nanotube-specific fixes—and the rest of the manufacturing process—can be done on the standard equipment that’s used to make today’s silicon chips. In that sense, the process is scalable.

Late last month at Hot Chips, an engineering design conference hosted, coincidentally, at Stanford, the director of the Microsystems Technology Office at DARPA made a stir by discussing the end of silicon electronics. In a keynote, Robert Colwell, former chief architect at Intel, predicted that by as early as 2020, the computing industry will no longer be able to keep making performance and cost improvements by doubling the density of silicon transistors on chips every 18 to 24 months—a feat dubbed Moore’s Law after the Intel cofounder Gordon Moore, who first observed the trend.


Mitra and Wong hope their computer shows that carbon nanotubes may be a serious answer to the question of what comes next. So far no emerging technologies come close to touching silicon. Of all the emerging materials and new ideas held up as possible saviors—nanowires, spintronics, graphene, biological computers—no one has made a central processing unit based on any of them, says Mitra. In that context, catching up to silicon’s performance circa 1970, though it leaves a lot of work to be done, is exciting.

Victor Zhirnov, a specialist in nanoelectronics at the Semiconductor Research Corporation in Durham, North Carolina, is much more cautiously optimistic. The nanotube processor has 10 million times fewer transistors on it than today’s typical microprocessors, runs much more slowly, and operates at five times the voltage, meaning it uses about 25 times as much power (switching power scales roughly with the square of the voltage), he notes.

Some of the nanotube computer’s sluggishness is due to the conditions under which it was built—in an academic lab using what the Stanford group had access to, not an industry-standard factory. The processor is connected to an external hard drive, which serves as the memory, through a large bundle of electrical wires, each of which connects to a large metal pin on top of the nanotube processor. Each of the pins in turn connects to a device on the chip. This messy packaging means the data has to travel longer distances, which cuts into the efficiency of the computer.

With the tools at hand, the Stanford group also can’t make transistors smaller than about one micrometer—compare that with Intel’s announcement earlier this month that its next line of products will be built on 14-nanometer technology. If, however, the group were to go into a state-of-the-art fab, its manufacturing yields would improve enough to be able to make computers with thousands of smaller transistors, and the computer could run faster.

To reach the superb level of performance theoretically offered by nanotubes, researchers will have to learn how to build complex integrated circuits made up of pristine single-nanotube transistors. Franklin says device and materials experts like his group at IBM need to start working in closer collaboration with circuit designers like those at Stanford to make real progress.

“We are well aware that silicon is running out of steam, and within 10 years it’s coming to its end,” says Zhirnov. “If carbon nanotubes are going to become practical, it has to happen quickly.”


Tuesday, September 24, 2013

Amplituhedron: A Jewel at the Heart of Quantum Physics

Physicists have discovered a jewel-like geometric object that dramatically simplifies calculations of particle interactions and challenges the notion that space and time are fundamental components of reality.
“This is completely new and very much simpler than anything that has been done before,” said Andrew Hodges, a mathematical physicist at Oxford University who has been following the work.
The revelation that particle interactions, the most basic events in nature, may be consequences of geometry significantly advances a decades-long effort to reformulate quantum field theory, the body of laws describing elementary particles and their interactions. Interactions that were previously calculated with mathematical formulas thousands of terms long can now be described by computing the volume of the corresponding jewel-like “amplituhedron,” which yields an equivalent one-term expression.
“The degree of efficiency is mind-boggling,” said Jacob Bourjaily, a theoretical physicist at Harvard University and one of the researchers who developed the new idea. “You can easily do, on paper, computations that were infeasible even with a computer before.”
The new geometric version of quantum field theory could also facilitate the search for a theory of quantum gravity that would seamlessly connect the large- and small-scale pictures of the universe. Attempts thus far to incorporate gravity into the laws of physics at the quantum scale have run up against nonsensical infinities and deep paradoxes. The amplituhedron, or a similar geometric object, could help by removing two deeply rooted principles of physics: locality and unitarity.
“Both are hard-wired in the usual way we think about things,” said Nima Arkani-Hamed, a professor of physics at the Institute for Advanced Study in Princeton, N.J., and the lead author of the new work, which he is presenting in talks and in a forthcoming paper. “Both are suspect.”
Locality is the notion that particles can interact only from adjoining positions in space and time. And unitarity holds that the probabilities of all possible outcomes of a quantum mechanical interaction must add up to one. The concepts are the central pillars of quantum field theory in its original form, but in certain situations involving gravity, both break down, suggesting neither is a fundamental aspect of nature.
In keeping with this idea, the new geometric approach to particle interactions removes locality and unitarity from its starting assumptions. The amplituhedron is not built out of space-time and probabilities; these properties merely arise as consequences of the jewel’s geometry. The usual picture of space and time, and particles moving around in them, is a construct.
“It’s a better formulation that makes you think about everything in a completely different way,” said David Skinner, a theoretical physicist at Cambridge University.
The amplituhedron itself does not describe gravity. But Arkani-Hamed and his collaborators think there might be a related geometric object that does. Its properties would make it clear why particles appear to exist, and why they appear to move in three dimensions of space and to change over time.
Because “we know that ultimately, we need to find a theory that doesn’t have” unitarity and locality, Bourjaily said, “it’s a starting point to ultimately describing a quantum theory of gravity.”
Clunky Machinery
The amplituhedron looks like an intricate, multifaceted jewel in higher dimensions. Encoded in its volume are the most basic features of reality that can be calculated, “scattering amplitudes,” which represent the likelihood that a certain set of particles will turn into certain other particles upon colliding. These numbers are what particle physicists calculate and test to high precision at particle accelerators like the Large Hadron Collider in Switzerland.
The iconic 20th century physicist Richard Feynman invented a method for calculating probabilities of particle interactions using depictions of all the different ways an interaction could occur. Examples of “Feynman diagrams” were included on a 2005 postage stamp honoring Feynman. (Image: United States Postal Service)
The 60-year-old method for calculating scattering amplitudes — a major innovation at the time — was pioneered by the Nobel Prize-winning physicist Richard Feynman. He sketched line drawings of all the ways a scattering process could occur and then summed the likelihoods of the different drawings. The simplest Feynman diagrams look like trees: The particles involved in a collision come together like roots, and the particles that result shoot out like branches. More complicated diagrams have loops, where colliding particles turn into unobservable “virtual particles” that interact with each other before branching out as real final products. There are diagrams with one loop, two loops, three loops and so on — increasingly baroque iterations of the scattering process that contribute progressively less to its total amplitude. Virtual particles are never observed in nature, but they were considered mathematically necessary for unitarity — the requirement that probabilities sum to one.
“The number of Feynman diagrams is so explosively large that even computations of really simple processes weren’t done until the age of computers,” Bourjaily said. A seemingly simple event, such as two subatomic particles called gluons colliding to produce four less energetic gluons (which happens billions of times a second during collisions at the Large Hadron Collider), involves 220 diagrams, which collectively contribute thousands of terms to the calculation of the scattering amplitude.
In 1986, it became apparent that Feynman’s apparatus was a Rube Goldberg machine.
To prepare for the construction of the Superconducting Super Collider in Texas (a project that was later canceled), theorists wanted to calculate the scattering amplitudes of known particle interactions to establish a background against which interesting or exotic signals would stand out. But even 2-gluon to 4-gluon processes were so complex, a group of physicists had written two years earlier, “that they may not be evaluated in the foreseeable future.”
Stephen Parke and Tommy Taylor, theorists at Fermi National Accelerator Laboratory in Illinois, took that statement as a challenge. Using a few mathematical tricks, they managed to simplify the 2-gluon to 4-gluon amplitude calculation from several billion terms to a 9-page-long formula, which a 1980s supercomputer could handle. Then, based on a pattern they observed in the scattering amplitudes of other gluon interactions, Parke and Taylor guessed a simple one-term expression for the amplitude. It was, the computer verified, equivalent to the 9-page formula. In other words, the traditional machinery of quantum field theory, involving hundreds of Feynman diagrams worth thousands of mathematical terms, was obfuscating something much simpler. As Bourjaily put it: “Why are you summing up millions of things when the answer is just one function?”
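For reference, the compact expression Parke and Taylor guessed is now known as the maximally-helicity-violating (MHV) amplitude. In modern spinor-helicity notation it is usually written schematically (a standard textbook form, not quoted in this article, with overall couplings and the momentum-conserving delta function suppressed) as

$$ A_n^{\mathrm{MHV}}(i^-, j^-) \;\propto\; \frac{\langle i\,j\rangle^{4}}{\langle 1\,2\rangle \langle 2\,3\rangle \cdots \langle n\,1\rangle}, $$

where i and j label the two negative-helicity gluons and ⟨a b⟩ denotes the spinor product built from the momenta of gluons a and b.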
“We knew at the time that we had an important result,” Parke said. “We knew it instantly. But what to do with it?”
The Amplituhedron
The message of Parke and Taylor’s single-term result took decades to interpret. “That one-term, beautiful little function was like a beacon for the next 30 years,” Bourjaily said. It “really started this revolution.”
Twistor diagrams depicting an interaction between six gluons, in the cases where two (left) and four (right) of the particles have negative helicity, a property similar to spin. The diagrams can be used to derive a simple formula for the 6-gluon scattering amplitude. (Image: Arkani-Hamed et al.)
In the mid-2000s, more patterns emerged in the scattering amplitudes of particle interactions, repeatedly hinting at an underlying, coherent mathematical structure behind quantum field theory. Most important was a set of formulas called the BCFW recursion relations, named for Ruth Britto, Freddy Cachazo, Bo Feng and Edward Witten. Instead of describing scattering processes in terms of familiar variables like position and time and depicting them in thousands of Feynman diagrams, the BCFW relations are best couched in terms of strange variables called “twistors,” and particle interactions can be captured in a handful of associated twistor diagrams. The relations gained rapid adoption as tools for computing scattering amplitudes relevant to experiments, such as collisions at the Large Hadron Collider. But their simplicity was mysterious.
“The terms in these BCFW relations were coming from a different world, and we wanted to understand what that world was,” Arkani-Hamed said. “That’s what drew me into the subject five years ago.”
With the help of leading mathematicians such as Pierre Deligne, Arkani-Hamed and his collaborators discovered that the recursion relations and associated twistor diagrams corresponded to a well-known geometric object. In fact, as detailed in a paper posted to arXiv.org in December by Arkani-Hamed, Bourjaily, Cachazo, Alexander Goncharov, Alexander Postnikov and Jaroslav Trnka, the twistor diagrams gave instructions for calculating the volume of pieces of this object, called the positive Grassmannian.
Named for Hermann Grassmann, a 19th-century German linguist and mathematician who studied its properties, “the positive Grassmannian is the slightly more grown-up cousin of the inside of a triangle,” Arkani-Hamed explained. Just as the inside of a triangle is a region in a two-dimensional space bounded by intersecting lines, the simplest case of the positive Grassmannian is a region in an N-dimensional space bounded by intersecting planes. (N is the number of particles involved in a scattering process.)
It was a geometric representation of real particle data, such as the likelihood that two colliding gluons will turn into four gluons. But something was still missing.
The physicists hoped that the amplitude of a scattering process would emerge purely and inevitably from geometry, but locality and unitarity were dictating which pieces of the positive Grassmannian to add together to get it. They wondered whether the amplitude was “the answer to some particular mathematical question,” said Trnka, a post-doctoral researcher at the California Institute of Technology. “And it is,” he said.
A sketch of the amplituhedron representing an 8-gluon particle interaction. Using Feynman diagrams, the same calculation would take roughly 500 pages of algebra. (Image: Nima Arkani-Hamed)
Arkani-Hamed and Trnka discovered that the scattering amplitude equals the volume of a brand-new mathematical object — the amplituhedron. The details of a particular scattering process dictate the dimensionality and facets of the corresponding amplituhedron. The pieces of the positive Grassmannian that were being calculated with twistor diagrams and then added together by hand were building blocks that fit together inside this jewel, just as triangles fit together to form a polygon.
Like the twistor diagrams, the Feynman diagrams are another way of computing the volume of the amplituhedron piece by piece, but they are much less efficient. “They are local and unitary in space-time, but they are not necessarily very convenient or well-adapted to the shape of this jewel itself,” Skinner said. “Using Feynman diagrams is like taking a Ming vase and smashing it on the floor.”
Arkani-Hamed and Trnka have been able to calculate the volume of the amplituhedron directly in some cases, without using twistor diagrams to compute the volumes of its pieces. They have also found a “master amplituhedron” with an infinite number of facets, analogous to a circle in 2-D, which has an infinite number of sides. Its volume represents, in theory, the total amplitude of all physical processes. Lower-dimensional amplituhedra, which correspond to interactions between finite numbers of particles, live on the faces of this master structure.
“They are very powerful calculational techniques, but they are also incredibly suggestive,” Skinner said. “They suggest that thinking in terms of space-time was not the right way of going about this.”
Quest for Quantum Gravity
The seemingly irreconcilable conflict between gravity and quantum field theory enters crisis mode in black holes. Black holes pack a huge amount of mass into an extremely small space, making gravity a major player at the quantum scale, where it can usually be ignored. Inevitably, either locality or unitarity is the source of the conflict.
Puzzling Thoughts
Locality and unitarity are the central pillars of quantum field theory, but as the following thought experiments show, both break down in certain situations involving gravity. This suggests physics should be formulated without either principle.
Locality says that particles interact at points in space-time. But suppose you want to inspect space-time very closely. Probing smaller and smaller distance scales requires ever higher energies, but at a certain scale, called the Planck length, the picture gets blurry: So much energy must be concentrated into such a small region that the energy collapses the region into a black hole, making it impossible to inspect. “There’s no way of measuring space and time separations once they are smaller than the Planck length,” said Arkani-Hamed. “So we imagine space-time is a continuous thing, but because it’s impossible to talk sharply about that thing, then that suggests it must not be fundamental — it must be emergent.”
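For concreteness, the Planck length invoked here is the standard combination of fundamental constants (a textbook value, not quoted in the article):

$$ \ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \mathrm{m}, $$

which is roughly fifteen orders of magnitude below the distance scales probed at today's particle colliders.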
Unitarity says the quantum mechanical probabilities of all possible outcomes of a particle interaction must sum to one. To prove it, one would have to observe the same interaction over and over and count the frequencies of the different outcomes. Doing this to perfect accuracy would require an infinite number of observations using an infinitely large measuring apparatus, but the latter would again cause gravitational collapse into a black hole. In finite regions of the universe, unitarity can therefore only be approximately known.
“We have indications that both ideas have got to go,” Arkani-Hamed said. “They can’t be fundamental features of the next description,” such as a theory of quantum gravity.
String theory, a framework that treats particles as invisibly small, vibrating strings, is one candidate for a theory of quantum gravity that seems to hold up in black hole situations, but its relationship to reality is unproven — or at least confusing. Recently, a strange duality has been found between string theory and quantum field theory, indicating that the former (which includes gravity) is mathematically equivalent to the latter (which does not) when the two theories describe the same event as if it is taking place in different numbers of dimensions. No one knows quite what to make of this discovery. But the new amplituhedron research suggests space-time, and therefore dimensions, may be illusory anyway.
“We can’t rely on the usual familiar quantum mechanical space-time pictures of describing physics,” Arkani-Hamed said. “We have to learn new ways of talking about it. This work is a baby step in that direction.”
Even without unitarity and locality, the amplituhedron formulation of quantum field theory does not yet incorporate gravity. But researchers are working on it. They say scattering processes that include gravity particles may be possible to describe with the amplituhedron, or with a similar geometric object. “It might be closely related but slightly different and harder to find,” Skinner said.
Nima Arkani-Hamed, a professor at the Institute for Advanced Study, and his former student and co-author Jaroslav Trnka, who finished his Ph.D. at Princeton University in July and is now a post-doctoral researcher at the California Institute of Technology. (Photo courtesy of Jaroslav Trnka)
Physicists must also prove that the new geometric formulation applies to the exact particles that are known to exist in the universe, rather than to the idealized quantum field theory they used to develop it, called maximally supersymmetric Yang-Mills theory. This model, which includes a “superpartner” particle for every known particle and treats space-time as flat, “just happens to be the simplest test case for these new tools,” Bourjaily said. “The way to generalize these new tools to [other] theories is understood.”
Beyond making calculations easier or possibly leading the way to quantum gravity, the discovery of the amplituhedron could cause an even more profound shift, Arkani-Hamed said. That is, giving up space and time as fundamental constituents of nature and figuring out how the Big Bang and cosmological evolution of the universe arose out of pure geometry.
“In a sense, we would see that change arises from the structure of the object,” he said. “But it’s not from the object changing. The object is basically timeless.”
While more work is needed, many theoretical physicists are paying close attention to the new ideas.
The work is “very unexpected from several points of view,” said Witten, a theoretical physicist at the Institute for Advanced Study. “The field is still developing very fast, and it is difficult to guess what will happen or what the lessons will turn out to be.”