Friday, May 29, 2015

Physicists conduct most precise measurement yet of interaction between atoms and carbon surfaces

An illustration of atoms sticking to a carbon nanotube, affecting the electrons in its surface.
David Cobden and students

Physicists at the University of Washington have conducted the most precise and controlled measurements yet of the interaction between the atoms and molecules that comprise air and the type of carbon surface used in battery electrodes and air filters — key information for improving those technologies.

A team led by David Cobden, UW professor of physics, used a carbon nanotube — a seamless, hollow graphite structure a million times thinner than a drinking straw — acting as a transistor to study what happens when gas atoms come into contact with the nanotube’s surface. Their findings were published in May in the journal Nature Physics.

Cobden said he and his co-authors found that when an atom or molecule sticks to the nanotube, a tiny fraction of one electron’s charge is transferred to its surface, resulting in a measurable change in electrical resistance.

“This aspect of atoms interacting with surfaces has never been detected unambiguously before,” Cobden said. “When many atoms are stuck to the minuscule tube at the same time, the measurements reveal their collective dances, including big fluctuations that occur on warming analogous to the boiling of water.”

Lithium batteries involve lithium atoms sticking and transferring charges to carbon electrodes, and in activated charcoal filters, molecules stick to the carbon surface to be removed, Cobden explained.

“Various forms of carbon, including nanotubes, are considered for hydrogen or other fuel storage because they have a huge internal surface area for the fuel molecules to stick to. However, these technological situations are extremely complex and difficult to do precise, clear-cut measurements on.”

This work, he said, resulted in the most precise and controlled measurements of these interactions ever made, “and will allow scientists to learn new things about the interplay of atoms and molecules with a carbon surface,” important for improving technologies including batteries, electrodes and air filters.

Co-authors were Oscar Vilches, professor emeritus of physics; doctoral student Hao-Chun Lee; and research associate Boris Dzyubenko, all of the UW. The research was funded by the National Science Foundation.

Source: http://www.washington.edu/news/2015/05/28/physicists-conduct-most-precise-measurement-yet-of-interaction-between-atoms-and-carbon-surfaces/

Donuts, math, and superdense teleportation of quantum information






In superdense teleportation of quantum information, Alice (near) selects a particular set of states to send to Bob (far), using the hyperentangled pair of photons they share. The possible states Alice may send are represented as the points on a donut shape, here artistically depicted in sharp relief from the cloudy silhouette of general quantum state that surrounds them. To transmit a state, Alice makes a measurement on her half of the entangled state, which has four possible outcomes shown by red, green, blue, and yellow points. She then communicates the outcome of her measurement (in this case, yellow, represented by the orange streak connecting the two donuts) to Bob using a classical information channel. Bob then can make a corrective rotation on his state to recover the state that Alice sent.


Putting a hole in the center of the donut—a mid-nineteenth-century invention—allows the deep-fried pastry to cook evenly, inside and out. As it turns out, the hole in the center of the donut also holds answers for a more efficient and reliable type of quantum information teleportation, a critical goal for quantum information science.

Quantum teleportation is a method of communicating information from one location to another without moving the physical matter to which the information is attached. Instead, the sender (Alice) and the receiver (Bob) share a pair of entangled elementary particles—in this experiment, photons, the smallest units of light—that transmit information through their shared quantum state. In simplified terms, Alice encodes information in the form of the quantum state of her photon. She then sends a key to Bob over traditional communication channels, indicating what operation he must perform on his photon to prepare the same quantum state, thus teleporting the information.

Quantum teleportation has been achieved by a number of research teams around the globe since it was first theorized in 1993, but current experimental methods require extensive resources and/or only work successfully a fraction of the time.

Now, by taking advantage of the mathematical properties intrinsic to the shape of a donut—or torus, in mathematical terminology—a research team led by physicist Paul Kwiat of the University of Illinois at Urbana-Champaign has made great strides by realizing “superdense teleportation”. This new protocol, developed by coauthor physicist Herbert Bernstein of Hampshire College in Amherst, MA, effectively reduces the resources and effort required to teleport quantum information, while at the same time improving the reliability of the information transfer.

With this new protocol, the researchers have experimentally achieved 88 percent transmission fidelity, twice the classical upper limit of 44 percent. The protocol uses pairs of photons that are “hyperentangled”—simultaneously entangled in more than one state variable, in this case in polarization and in orbital angular momentum—with a restricted number of possible states in each variable. In this way, each photon can carry more information than in earlier quantum teleportation experiments.

At the same time, this method makes Alice’s measurements and Bob’s transformations far more efficient than their corresponding operations in quantum teleportation: the number of possible operations being sent to Bob as the key has been reduced, hence the term “superdense”.

Kwiat explains, “In classical computing, a unit of information, called a bit, can have only one of two possible values—it’s either a zero or a one. A quantum bit, or qubit, can simultaneously hold many values, arbitrary superpositions of 0 and 1 at the same time, which makes faster, more powerful computing systems possible.

“So a qubit could be represented as a point on a sphere, and to specify what state it is, one would need longitude and latitude. That’s a lot of information compared to just a 0 or a 1.”

“What makes our new scheme work is a restrictive set of states. The analog would be, instead of using a sphere, we are going to use a torus, or donut shape. A sphere can only rotate on an axis, and there is no way to get an opposite point for every point on a sphere by rotating it—because the axis points, the north and the south, don’t move. With a donut, if you rotate it 180 degrees, every point becomes its opposite. Instead of axis points you have a donut hole. Another advantage, the donut shape actually has more surface area than the sphere, mathematically speaking—this means it has more distinct points that can be used as encoded information.”

Lead author, Illinois physics doctoral candidate Trent Graham, comments, “We are constrained to sending a certain class of quantum states called ‘equimodular’ states. We can deterministically perform operations on this constrained set of states, which are impossible to perfectly perform with completely general quantum states. Deterministic describes a definite outcome, as opposed to one that is probabilistic. With existing technologies, previous photonic quantum teleportation schemes either cannot work every time or require extensive experimental resources. Our new scheme could work every time with simple measurements.”
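A rough sketch of the geometry, not drawn from the paper itself: a general qubit needs two angles to specify (the latitude and longitude Kwiat describes), whereas an equimodular state fixes every amplitude’s magnitude and leaves only the relative phases free, and a set of free phase angles is exactly a torus.

```latex
% General qubit: a point on a sphere, set by two angles (\theta, \phi)
\lvert\psi\rangle = \cos\tfrac{\theta}{2}\,\lvert 0\rangle
                  + e^{i\phi}\sin\tfrac{\theta}{2}\,\lvert 1\rangle

% Equimodular state (illustrative form): all magnitudes pinned to 1/2,
% only the phases (\phi_1,\phi_2,\phi_3) vary, so the state space is a torus
\lvert\psi\rangle = \tfrac{1}{2}\bigl(\lvert 1\rangle
                  + e^{i\phi_1}\lvert 2\rangle
                  + e^{i\phi_2}\lvert 3\rangle
                  + e^{i\phi_3}\lvert 4\rangle\bigr)
```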

This research team is part of a broader collaboration that is working toward realizing quantum communication from a space platform, such as the International Space Station, to an optical telescope on Earth. The collaboration—Kwiat, Graham, Bernstein, physicist Jungsang Kim of Duke University in Durham, NC, and scientist Hamid Javadi of NASA’s Jet Propulsion Laboratory in Pasadena, CA—recently received funding from NASA Headquarters’ Space Communication and Navigation program (with project directors Badri Younes and Barry Geldzahler) to explore the possibility.

“It would be a stepping stone toward building a quantum communications network, a system of nodes on Earth and in space that would enable communication from any node to any other node,” Kwiat explains. “For this, we’re experimenting with different quantum state properties that would be less susceptible to air turbulence disruptions.”

The team’s recent experimental findings are published in the May 28, 2015 issue of Nature Communications, and represent the collaborative effort of Kwiat, Graham, and Bernstein, as well as physicist Tzu-Chieh Wei of the State University of New York at Stony Brook and mathematician Marius Junge of the University of Illinois.

This research is funded by NSF Grant No. PHY-0903865, the NASA NIAC Program, and NASA Grant No. NNX13AP35A. It is partially supported by National Science Foundation Grants No. DMS-1201886, No. PHY-1314748, and No. PHY-1333903.
______________________

Contact: Siv Schwink, communications coordinator, Department of Physics, 217/300-2201.

Paul Kwiat, Department of Physics, University of Illinois at Urbana-Champaign.

Image by Precision Graphics, copyright Paul Kwiat, University of Illinois at Urbana-Champaign.

Source: http://engineering.illinois.edu/news/article/11151?

Thursday, May 28, 2015

Spinning a new version of silk


Microscope images of lab-produced fibers confirm the results of the MIT researchers' simulations of spider silk. At top are optical microscope images, and, at bottom, are scanning electron microscope images. At left are fibers 8 micrometers across, and, at right, are thinner, 3-micrometer fibers.
Courtesy of the researchers

Simulations and experiments aim to improve on spiders in creating strong, resilient fibers.
After years of research decoding the complex structure and production of spider silk, researchers have now succeeded in producing samples of this exceptionally strong and resilient material in the laboratory. The new development could lead to a variety of biomedical materials — from sutures to scaffolding for organ replacements — made from synthesized silk with properties specifically tuned for their intended uses.
The findings are published this week in the journal Nature Communications by MIT professor of civil and environmental engineering (CEE) Markus Buehler, postdocs Shangchao Lin and Seunghwa Ryu, and others at MIT, Tufts University, Boston University, and in Germany, Italy, and the U.K.
The research, which involved a combination of simulations and experiments, paves the way for “creating new fibers with improved characteristics” beyond those of natural silk, says Buehler, who is also the department head in CEE. The work, he says, should make it possible to design fibers with specific characteristics of strength, elasticity, and toughness.
The new synthetic fibers’ proteins — the basic building blocks of the material — were created by genetically modifying bacteria to make the proteins normally produced by spiders. These proteins were then extruded through microfluidic channels designed to mimic the effect of an organ, called a spinneret, that spiders use to produce natural silk fibers.
No spiders needed
While spider silk has long been recognized as among the strongest known materials, spiders cannot practically be bred to produce harvestable fibers — so this new approach to producing a synthetic, yet spider-like, silk could make such strong and flexible fibers available for biomedical applications. By their nature, spider silks are fully biocompatible and can be used in the body without risk of adverse reactions; they are ultimately simply absorbed by the body.
The researchers’ “spinning” process, in which the constituent proteins dissolved in water are extruded through a tiny opening at a controlled rate, causes the molecules to line up in a way that produces strong fibers. The molecules themselves are a mixture of hydrophobic and hydrophilic compounds, blended so as to naturally align to form fibers much stronger than their constituent parts. “When you spin it, you create very strong bonds in one direction,” Buehler says.
The team found that getting the blend of proteins right was crucial. “We found out that when there was a high proportion of hydrophobic proteins, it would not spin any fibers, it would just make an ugly mass,” says Ryu, who worked on the project as a postdoc at MIT and is now an assistant professor at the Korea Advanced Institute of Science and Technology. “We had to find the right mix” in order to produce strong fibers, he says.
Closing the loop
This project represents the first use of simulations to understand silk production at the molecular level. “Simulation is critical,” Buehler explains: Actually synthesizing a protein can take several months; if that protein doesn’t turn out to have exactly the right properties, the process would have to start all over.
Using simulations makes it possible to “scan through a large range of proteins until we see changes in the fiber stiffness,” and then home in on those compounds, says Lin, who worked on the project as a postdoc at MIT and is now an assistant professor at Florida State University.
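To illustrate that screening workflow in the loosest possible terms, here is a hypothetical sketch; simulate_stiffness is a stand-in for a real molecular simulation and is not the team's code or model.

```python
# Hypothetical sketch of simulation-driven screening of protein blends.
# simulate_stiffness() is a placeholder, NOT the MIT group's model.

def simulate_stiffness(hydrophobic_fraction: float) -> float:
    """Placeholder: predicted fiber stiffness (arbitrary units) for a blend."""
    # Toy response: stiffness peaks at an intermediate hydrophobic fraction
    # and collapses when the blend is too hydrophobic to spin fibers at all.
    if hydrophobic_fraction > 0.8:
        return 0.0
    return 10.0 * hydrophobic_fraction * (1.0 - hydrophobic_fraction)

def screen_blends(fractions):
    """Rank candidate blends by predicted stiffness before any wet-lab work."""
    results = [(f, simulate_stiffness(f)) for f in fractions]
    return sorted(results, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    candidates = [i / 20 for i in range(21)]  # hydrophobic fraction from 0.0 to 1.0
    for fraction, stiffness in screen_blends(candidates)[:5]:
        print(f"hydrophobic fraction {fraction:.2f} -> stiffness {stiffness:.2f}")
```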
Controlling the properties directly could ultimately make it possible to create fibers that are even stronger than natural ones, because engineers can choose characteristics for a particular use. For example, while spiders may need elasticity so their webs can capture insects without breaking, those designing fibers for use as surgical sutures would need more strength and less stretchiness. “Silk doesn’t give us that choice,” Buehler says.
The processing of the material can be done at room temperature using water-based solutions, so scaling up manufacturing should be relatively easy, team members say. So far, the fibers they have made in the lab are not as strong as natural spider silk, but now that the basic process has been established, it should be possible to fine-tune the material and improve its strength, they say.
“Our goal is to improve the strength, elasticity, and toughness of artificially spun fibers by borrowing bright ideas from nature,” Lin says. This study could inspire the development of new synthetic fibers — or of any material requiring enhanced properties in a particular direction, such as electrical or thermal transport.
“This is an amazing piece of work,” says Huajian Gao, a professor of engineering at Brown University who was not involved in this research. “This could lead to a breakthrough that may allow us to directly explore engineering applications of silk-like materials.”
Gao adds that the team’s exploration of variations in web structure “may have practical impacts in improving the design of fiber-reinforced composites by significantly increasing their strength and robustness without increasing the weight. The impact on material innovation could be particularly important for aerospace and industrial applications, where light weight is essential.”
The research was supported by the National Institutes of Health, the National Science Foundation, the Office of Naval Research, the National Research Foundation of Korea, and the European Research Council.

Tuesday, May 26, 2015

A new kind of wood chip: a biodegradable computer chip

A cellulose nanofibril (CNF) computer chip rests on a leaf. Photo: Yei Hwan Jung, Wisconsin Nano Engineering Device Laboratory

Portable electronics — typically made of non-renewable, non-biodegradable and potentially toxic materials — are discarded at an alarming rate in consumers' pursuit of the next best electronic gadget.

In an effort to alleviate the environmental burden of electronic devices, a team of University of Wisconsin-Madison researchers has collaborated with researchers in the Madison-based U.S. Department of Agriculture Forest Products Laboratory (FPL) to develop a surprising solution: a semiconductor chip made almost entirely of wood.

The research team, led by UW-Madison electrical and computer engineering professor Zhenqiang "Jack" Ma, described the new device in a paper published today (May 26, 2015) by the journal Nature Communications. The paper demonstrates the feasibility of replacing the substrate, or support layer, of a computer chip with cellulose nanofibril (CNF), a flexible, biodegradable material made from wood.

"The majority of material in a chip is support. We only use less than a couple of micrometers for everything else," Ma says. "Now the chips are so safe you can put them in the forest and fungus will degrade it. They become as safe as fertilizer."

Zhiyong Cai, project leader for an engineering composite science research group at FPL, has been developing sustainable nanomaterials since 2009.

"If you take a big tree and cut it down to the individual fiber, the most common product is paper. The dimension of the fiber is in the micron stage," Cai says. "But what if we could break it down further to the nano scale? At that scale you can make this material, very strong and transparent CNF paper."

"You don't want it to expand or shrink too much. Wood is a natural hydroscopic material and could attract moisture from the air and expand," Cai says. "With an epoxy coating on the surface of the CNF, we solved both the surface smoothness and the moisture barrier."Working with Shaoqin "Sarah" Gong, a UW-Madison professor of biomedical engineering, Cai's group addressed two key barriers to using wood-derived materials in an electronics setting: surface smoothness and thermal expansion.

Gong and her students also have been studying bio-based polymers for more than a decade. CNF offers many benefits over current chip substrates, she says.
"The advantage of CNF over other polymers is that it's a bio-based material and most other polymers are petroleum-based polymers. Bio-based materials are sustainable, bio-compatible and biodegradable," Gong says. "And, compared to other polymers, CNF actually has a relatively low thermal expansion coefficient."

The group's work also demonstrates a more environmentally friendly process that showed performance similar to existing chips. The majority of today's wireless devices use gallium arsenide-based microwave chips due to their superior high-frequency operation and power handling capabilities. However, gallium arsenide can be environmentally toxic, particularly in the massive quantities of discarded wireless electronics.

Yei Hwan Jung, a graduate student in electrical and computer engineering and a co-author of the paper, says the new process greatly reduces the use of such expensive and potentially toxic material.

"I've made 1,500 gallium arsenide transistors in a 5-by-6 millimeter chip. Typically for a microwave chip that size, there are only eight to 40 transistors. The rest of the area is just wasted," he says. "We take our design and put it on CNF using deterministic assembly technique, then we can put it wherever we want and make a completely functional circuit with performance comparable to existing chips."

While the biodegradability of these materials will have a positive impact on the environment, Ma says the flexibility of the technology can lead to widespread adoption of these electronic chips.

"Mass-producing current semiconductor chips is so cheap, and it may take time for the industry to adapt to our design," he says. "But flexible electronics are the future, and we think we're going to be well ahead of the curve."

Source: http://www.news.wisc.edu/23805

Monday, May 25, 2015

Single-Molecule Diode Could Lead To Breakthroughs In Nanoscale Devices





Researchers created a single-molecule diode, which has been sought after since the 1970s.

Scientists have designed a new way to create a single-molecule diode that performs 50 times better than past models.

These single-molecule diodes are the first that could be used for real-world applications in nanoscale devices, Columbia University's School of Engineering and Applied Science reported. The idea of creating a single-molecule diode was first proposed in the 1970s by Arieh Aviram and Mark Ratner, who theorized that a molecule could act as a "rectifier" to conduct one-way currents.

Building a device whose active element is a single molecule has been a goal of molecular electronics "ever since its inception with Aviram and Ratner's 1974 seminal paper," and it "represents the ultimate in functional miniaturization that can be achieved for an electronic device," said Latha Venkataraman, associate professor of applied physics at Columbia Engineering.

Since the 1974 paper, scientists have shown that single molecules attached to metal electrodes can act as a variety of circuit elements, including switches, resistors, and diodes. A diode works as an "electricity valve," and requires an asymmetrical structure in order to create different environments for electricity flowing in each direction.

"While such asymmetric molecules do indeed display some diode-like properties, they are not effective," said Brian Capozzi, a PhD student working with Venkataraman and lead author of the paper. "A well-designed diode should only allow current to flow in one direction-the 'on' direction-and it should allow a lot of current to flow in that direction. Asymmetric molecular designs have typically suffered from very low current flow in both 'on' and 'off' directions, and the ratio of current flow in the two has typically been low. Ideally, the ratio of 'on' current to 'off' current, the rectification ratio, should be very high."

To remedy this, the researchers worked to develop asymmetry in the environment around the molecular junction. They accomplished this by surrounding the active molecule with an ionic solution and by using gold metal electrodes of different sizes to contact the molecule. The method led to rectification ratios as high as 250, which is 50 times higher than earlier designs.
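For reference, the figure of merit here is the rectification ratio, the on-current divided by the off-current; the numbers in this article imply roughly the following comparison.

```latex
% Rectification ratio: on-current over off-current
R = \frac{I_{\mathrm{on}}}{I_{\mathrm{off}}}
% Reported here: R \approx 250, versus roughly 250 / 50 = 5 for the earlier
% asymmetric-molecule designs (per the "50 times higher" comparison above).
```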

"It's amazing to be able to design a molecular circuit, using concepts from chemistry and physics, and have it do something functional," Venkataraman said. "The length scale is so small that quantum mechanical effects are absolutely a crucial aspect of the device. So it is truly a triumph to be able to create something that you will never be able to physically see and that behaves as intended."

The findings were published in a recent edition of the journal Nature Nanotechnology.

Source: http://engineering.columbia.edu/news-archive

The Coming Merge of Human and Machine Intelligence



Technology now exists to connect people’s brains to the Internet, and it’s giving rise to a new way of thinking, according to an alum's best-selling book

For most of the past two million years, the human brain has been growing steadily. But something has recently changed. In a surprising reversal, human brains have actually been shrinking for the last 20,000 years or so. We have lost nearly a baseball-sized amount of matter from a brain that isn’t any larger than a football.

The descent is rapid and pronounced. The anthropologist John Hawks describes it as a “major downsizing in an evolutionary eye­blink.” If this pace is maintained, scientists predict that our brains will be no larger than those of our forebears, Homo erectus, within another 2,000 years.

The reason that our brains are shrinking is simple: our biology is focused on survival, not intelligence. Larger brains were necessary to allow us to learn to use language, tools and all of the innovations that allowed our species to thrive. But now that we have become civilized—domesticated, if you will—certain aspects of intelligence are less necessary.
This is actually true of all animals: domesticated animals, including dogs, cats, hamsters and birds, have 10 to 15 percent smaller brains than their counterparts in the wild. Because brains are so expensive to maintain, large brain sizes are selected out when nature sees no direct survival benefit. It is an inevitable fact of life.

Fortunately, another influence has evolved over the past 20,000 years that is making us smarter even as our brains are shrinking: technology. Technology has allowed us to leapfrog evolution, enabling our brains and bodies to do things that were otherwise impossible biologically. We weren’t born with wings, but we’ve created airplanes, helicopters, hot air balloons and hang gliders. We don’t have sufficient natural strength or speed to bring down big game, but we’ve created spears, rifles and livestock farms.

Now, as the Internet revolution unfolds, we are seeing not merely an extension of mind but a unity of mind and machine, two networks coming together as one. Our smaller brains are in a quest to bypass nature’s intent and grow larger by proxy. It is not a stretch of the imagination to believe we will one day have all of the world’s information embedded in our minds via the Internet.

Psychics and Physics

In the late 1800s, a German astronomer named Hans Berger fell off a horse and was nearly trampled by cavalry. He narrowly escaped injury, but was forever changed by the incident, owing to the reaction of his sister. Though she was miles away at the time, Berger’s sister was instantly overcome with a feeling that Hans was in trouble. Berger took this as evidence of the mind’s psychic ability and dedicated the rest of his life to finding certain proof.

Berger abandoned his study of astronomy and enrolled in medical school to gain an understanding of the brain that would allow him to prove a “correlation between objective activity in the brain and subjective psychic phenomena.” He later joined the University of Jena in Germany as professor of neurology to pursue his quest.

At the time, psychic interest was relatively high. There were numerous academics devoted to the field, studying at prestigious institutions such as Stanford and Duke, Oxford and Cambridge. Still, it was largely considered bunk science, with most credible academics focused on dispelling, rather than proving, claims of psychic ability. But one of those psychic beliefs happened to be true.

That belief is the now well-understood notion that our brains communicate electrically. This was a radical idea at the time; after all, the electromagnetic field had only been discovered in 1865. But Berger found proof. He invented a device called the electroencephalogram (you probably know it as an EEG) that recorded brain waves. Using his new EEG, Berger was the first to demonstrate that our neurons actually talk to one another, and that they do so with electrical pulses. He published his results in 1929.

The New Normal

As often happens with revolutionary ideas, Berger’s EEG results were either ignored or lambasted as trickery. This was, after all, preternatural activity. But over the next decade, enough independent scholars verified the results that they became widely accepted. Berger saw his findings as evidence of the mind’s potential for “psychic” activity, and he continued searching for more evidence until the day he hanged himself in frustration. The rest of the scientific community went back to what it had always been doing, “good science,” and largely forgot about the electric neuron.

That was the case until the biophysicist Eberhard Fetz came along in 1969 and elaborated on Berger’s discovery. Fetz reasoned that if brains were controlled by electricity, then perhaps we could use our brains to control electrical devices. In a small primate lab at the University of Washington in Seattle, he connected the brain of a rhesus monkey to an electrical meter and then watched in amazement as the monkey learned how to control the level of the meter with nothing but its thoughts.

While incredible, this insight didn’t have much application in 1969. But with the rapid development of silicon chips, computers and data networks, the technology now exists to connect people’s brains to the Internet, and it’s giving rise to a new breed of intelligence.
Scientists in labs across the globe are busy perfecting computer chips that can be implanted in the human brain. In many ways, the results, if successful, fit squarely in the realm of “psychics.” There may be no such thing as paranormal activity, but make no mistake that all of the following are possible and on the horizon: telepathy, no problem; telekinesis, absolutely; clairvoyance, without question; ESP, oh yeah. While not psychic, Hans Berger may have been right all along.

The Six Million Dollar Man, For Real

Jan Scheuermann lifted a chocolate bar to her mouth and took a bite. A grin spread across her face as she declared, “One small nibble for a woman, one giant bite for BCI.”
BCI stands for brain-computer interface, and Jan is one of only a few people on earth using this technology, through two implanted chips attached directly to the neurons in her brain. The first human brain implant was conceived of by John Donoghue, a neuroscientist at Brown University, and implanted in a paralyzed man in 2004.

These dime-sized computer chips use a technology called BrainGate that directly connects the mind to computers and the Internet. Having served as chairman of the BrainGate company, I have personally witnessed just how profound this innovation is.

BrainGate is an invention that allows people to control electrical devices with nothing but their thoughts. The BrainGate chip is implanted in the brain and attached to connectors outside of the skull, which are hooked up to computers that, in Jan Scheuermann’s case, are linked to a robotic arm. As a result, Scheuermann can feed herself chocolate by controlling the robotic arm with nothing but her thoughts.

A smart, vibrant woman in her early 50s, Scheuermann has been unable to use her arms and legs since she was diagnosed with a rare genetic disease at the age of 40. “I have not moved things for about 10 years . . . . This is the ride of my life,” she said. “This is the roller coaster. This is skydiving.” Other patients use brain-controlled implants to communicate, control wheelchairs, write emails and connect to the Internet.

The technology is surprisingly simple to understand. BrainGate is merely tapping into the brain’s electrical signals in the same way that Berger’s EEG and Fetz’s electrical meter did. The BrainGate chip, once attached to the motor cortex, reads the brain’s electrical signals and sends them to a computer, which interprets them and sends along instructions to other electrical devices like a robotic arm or a wheelchair.

In that respect, it’s not much different from using your television remote to change the channel. Potentially the technology will enable bionics, restore communication abilities and give disabled people previously unimaginable access to the world.
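To make that signal path concrete, here is a deliberately simplified sketch of such a loop; it is not BrainGate's software, and the channel count, decoder, and device interface are all invented placeholders.

```python
# Illustrative brain-computer interface loop: read electrode activity,
# decode an intended movement, and forward a command to an external device.
# This is NOT BrainGate code; every number and interface here is a stand-in.

import numpy as np

N_CHANNELS = 96  # assumed electrode count in a motor-cortex array
decoder_weights = np.random.randn(2, N_CHANNELS) * 0.01  # firing rates -> (vx, vy)

def read_firing_rates() -> np.ndarray:
    """Placeholder for the implanted chip's readout of neural activity."""
    return np.random.poisson(lam=5.0, size=N_CHANNELS).astype(float)

def decode_velocity(rates: np.ndarray) -> np.ndarray:
    """Linear decoder: interpret the population's activity as a 2-D velocity."""
    return decoder_weights @ rates

def send_to_robot_arm(velocity: np.ndarray) -> None:
    """Placeholder for the instruction sent on to a robotic arm or wheelchair."""
    print(f"move arm: vx={velocity[0]:+.2f}, vy={velocity[1]:+.2f}")

if __name__ == "__main__":
    for _ in range(3):
        rates = read_firing_rates()        # 1. read the brain's electrical signals
        velocity = decode_velocity(rates)  # 2. the computer interprets them
        send_to_robot_arm(velocity)        # 3. instructions go to the device
```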

Mind Meld

But imagine the ways in which the world will change when any of us, disabled or not, can connect our minds to computers.

Computers have been creeping closer to our brains since their invention. What started as large mainframes became desktops, then laptops, then tablets and smartphones that we hold only inches from our faces, and now Google Glass, which (albeit undergoing a redesign) delivers the Internet in a pair of eyeglasses.

Back in 2004, Google’s founders told Playboy magazine that one day we’d have direct access to the Internet through brain implants, with “the entirety of the world’s information as just one of our thoughts.”

A decade later, the road map is taking shape. While it may be years before implants like BrainGate are safe enough to be commonplace—they require brain surgery, after all—there are a host of brainwave sensors in development for use outside of the skull that will be transformational for all of us: caps for measuring driver alertness, headbands for monitoring sleep, helmets for controlling video games. This could lead to wearable EEGs, implantable nanochips or even technology that can listen to our brain signals using the electromagnetic waves that pervade the air we breathe.

Just as human intelligence is expanding in the direction of the Internet, the Internet itself promises to get smarter and smarter. In fact, it could prove to be the basis of the machine intelligence that scientists have been racing toward since the 1950s.

The pursuit of artificial intelligence has been plagued by problems. For one, we keep changing the definition of intelligence. In the 1960s, we said a computer that could beat a backgammon champion would surely be intelligent. But in the 1970s, when Gammonoid beat Luigi Villa—the world champion backgammon player—by a score of 7-1, we decided that backgammon was too easy, requiring only straightforward calculations.

We changed the rules to focus on games of sophisticated rules and strategies, like chess. Yet when IBM’s Deep Blue computer beat the reigning chess champion, Garry Kasparov, in 1997, we changed the rules again. No longer were sophisticated calculations or logical decision-making acts of intelligence.

Perhaps when computers could answer human knowledge questions, then they’d be intelligent. Of course, we had to revise that theory in 2011 when IBM’s Watson computer soundly beat the best humans at Jeopardy. But all of these computers were horribly bad sports: they couldn’t say hello, shake hands or make small talk of any kind. Each time a machine defies our definition of intelligence we move to a new definition.

What Makes Us Human?

We’ve done the same thing in nature. We once argued that what set us apart from other animals was our ability to use tools. Then we saw primates and crows using tools. So we changed our minds and said that what makes us intelligent is our ability to use language. Then biologists taught the first chimpanzee how to use sign language, and we decided that intelligence couldn’t be about language after all.

Next came self-consciousness and awareness, until experiments unequivocally proved that dolphins are self-aware. With animal intelligence as well as machine intelligence, we keep changing the goalposts.

There are those who believe we can transcend the moving goalposts. These bold adventurers have most recently focused on brain science, attempting to reverse engineer the brain. As the theory goes, once we understand all of the brain’s parts, we can recreate them to build an intelligent system.

But there are two problems with this approach. First, the inner workings of the brain are largely a mystery. Neuroscience is making tremendous progress, but it is still early.
The second issue with reverse engineering the brain is more fundamental. Just as the Wright brothers didn’t learn to fly by dissecting birds, we will not learn to create intelligence by recreating a brain. It is pretty clear that an intelligent machine will look nothing like a three-pound wrinkly lump of clay, nor will it have cells or blood or fat.

Daniel Dennett, University Professor and Austin B. Fletcher Professor of Philosophy at Tufts—whom I consider a mentor and a guide on the quest to solving the mysteries of the mind—was an advocate of reverse engineering at one point. But he recently changed course, saying “I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart.”

Dennett’s mistake was to reduce the brain to the neuron in an attempt to rebuild it. That is reducing the brain one step too far, pushing us from the edge of the forest to deep into the trees. This is the danger in any kind of reverse engineering. Biologists reduced ant colonies down to individuals, but we have now learned that the ant network, the colony, is the critical level. Reducing flight to the feathers of a bird would not have worked, but reducing it to wingspan did the trick. Feathers are one step too far, just as are ants and neurons.

Scientists have oversimplified the function of a neuron, treating it as a predictable switching device that fires on and off. That would be incredibly convenient if it were true. But neurons are only logical when they work—and a neuron misfires up to 90 percent of the time. Artificial intelligence almost universally ignores this fact.

The New Intelligence

Focusing on a single neuron’s on/off switch misses what is happening with the network of neurons, which performs amazing feats. The faultiness of the individual neuron allows for the plasticity and adaptive nature of the network as a whole. Intelligence cannot be replicated by creating a bunch of switches, faulty or not. Instead, we must focus on the network.
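A toy simulation, offered here as illustration rather than as the author's model, makes the point that units which usually fail to fire can still signal reliably as a population.

```python
# Toy illustration: unreliable units, reliable network. Each "neuron" fires on
# a real signal only 30% of the time (and 5% of the time spontaneously), yet
# counting firings across the population separates signal from no-signal
# almost perfectly. The numbers are arbitrary, not taken from the article.

import random

def population_response(signal_present: bool, n_neurons: int = 100) -> int:
    """Count how many unreliable neurons fire on one trial."""
    p_fire = 0.30 if signal_present else 0.05
    return sum(random.random() < p_fire for _ in range(n_neurons))

def detect(signal_present: bool, threshold: int = 15) -> bool:
    """The network's verdict: did enough neurons fire?"""
    return population_response(signal_present) >= threshold

if __name__ == "__main__":
    trials = 10_000
    hits = sum(detect(True) for _ in range(trials)) / trials
    false_alarms = sum(detect(False) for _ in range(trials)) / trials
    print(f"detected real signals: {hits:.1%}, false alarms: {false_alarms:.1%}")
```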

Neurons may be good analogs for transistors and maybe even computer chips, but they’re not good building blocks of intelligence. The neural network is fundamental. The BrainGate technology works because the chip attaches not to a single neuron, but to a network of neurons. Reading the signals of a single neuron would tell us very little; it certainly wouldn’t allow BrainGate patients to move a robotic arm or a computer cursor. Scientists may never be able to reverse engineer the neuron, but they are increasingly able to interpret the communication of the network.

It is for this reason that the Internet is a better candidate for intelligence than are computers. Computers are perfect calculators composed of perfect transistors; they are like neurons as we once envisioned them. But the Internet has all the quirkiness of the brain: it can work in parallel, it can communicate across broad distances, and it makes mistakes.

Even though the Internet is at an early stage in its evolution, it can leverage the brain that nature has given us. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines. It took millions of years for humans to gain intelligence, but with the human mind as a guide, it may only take a century to create Internet intelligence.

This article was first published in the Winter 2015 issue of Tufts Magazine.
Jeff Stibel, A95, is CEO of Dun & Bradstreet Credibility Corporation and was previously CEO of Web.com, Inc. He is the New York Times best-selling author of Breakpoint (Palgrave Macmillan), from which this article is adapted, and Wired for Thought (Harvard University Press). At Tufts, he sits on the Gordon Institute’s Entrepreneurial Leadership Advisory Board.

Source: http://now.tufts.edu/articles/coming-merge-human-and-machine-intelligence

Identification of hidden key behind liquid-liquid transition

Structural origin of the liquid-liquid transition.
© 2015 Ken-ichiro Murata, Hajime Tanaka

A University of Tokyo research group has successfully identified a microstructural unit that controls liquid-liquid transition between two phases in a single substance with multiple liquid phases. Identifying this unit is key to understanding liquid-liquid transitions.
It is widely known that even a single-component substance can have more than one crystalline form, as in the case of carbon (diamond and graphene) and water. By contrast, because a liquid is a disordered state, it was long thought that a single-component substance could have only one liquid state. A liquid-liquid transition in such a substance has therefore attracted considerable attention as a new type of phase transition that overturns the conventional view of liquids. However, although much suggestive evidence has been gathered, the existence of liquid-liquid transitions remains debated because of experimental difficulties. Proving that they exist requires experimentally identifying the microscopic structure that governs the transition.
Professor Hajime Tanaka’s research group at the Institute of Industrial Science has identified a structural unit that controls a liquid-liquid transition by studying an organic liquid, triphenyl phosphite, which undergoes such a transition at ambient pressure. The group probed the liquid with X-rays and found that the new liquid formed after the transformation has a higher density of clusters composed of several molecules.
Professor Tanaka says, “A liquid state is one of the fundamental states of matter besides gas and solid, and an important physical state universal to a wide range of materials including metals, semiconductors, and organic materials. Thus, our finding not only contributes to our understanding of the underlying mechanism of liquid-liquid transition, but also provides a new insight into the liquid phase, which has been believed to be uniform and random, and leads to a deeper understanding of the very nature of the liquid state.”

Paper

Ken-ichiro Murata and Hajime Tanaka, “Microscopic identification of the order parameter governing liquid-liquid transition in a molecular liquid,” Proceedings of the National Academy of Sciences of the United States of America, published online April 27, 2015 (Japan time), doi: 10.1073/pnas.1501149112