Superconducting synapse may be missing piece for ‘artificial brains’

Researchers at the National Institute of Standards and Technology (NIST) have built a superconducting switch that “learns” like a biological system and could connect processors and store memories in future computers operating like the human brain.

The NIST switch, described in Science Advances, is called a synapse, like its biological counterpart, and it supplies a missing piece for so-called neuromorphic computers. Envisioned as a new type of artificial intelligence, such computers could boost perception and decision-making for applications such as self-driving cars and cancer diagnosis.

A synapse is a connection or switch between two brain cells. NIST’s artificial synapse — a squat metallic cylinder 10 micrometers in diameter — is like the real thing because it can process incoming electrical spikes to customize spiking output signals. This processing is based on a flexible internal design that can be tuned by experience or its environment. The more firing between cells or processors, the stronger the connection. Both the real and artificial synapses can thus maintain old circuits and create new ones.

Even better than the real thing, the NIST synapse can fire much faster than the human brain — 1 billion times per second, compared to a brain cell’s 50 times per second — using just a whiff of energy, about one ten-thousandth as much as a human synapse. In technical terms, the spiking energy is less than 1 attojoule, lower than the background energy at room temperature and on a par with the chemical energy bonding two atoms in a molecule.

“The NIST synapse has lower energy needs than the human synapse, and we don’t know of any other artificial synapse that uses less energy,” NIST physicist Mike Schneider said.

The new synapse would be used in neuromorphic computers made of superconducting components, which can transmit electricity without resistance, and therefore, would be more efficient than other designs based on semiconductors or software. Data would be transmitted, processed and stored in units of magnetic flux. Superconducting devices mimicking brain cells and transmission lines have been developed, but until now, efficient synapses — a crucial piece — have been missing.

The brain is especially powerful for tasks like context recognition because it processes data both in sequence and simultaneously and stores memories in synapses all over the system. A conventional computer processes data only in sequence and stores memory in a separate unit.

The NIST synapse is a Josephson junction, long used in NIST voltage standards. These junctions are a sandwich of superconducting materials with an insulator as a filling. When an electrical current through the junction exceeds a level called the critical current, voltage spikes are produced. The synapse uses standard niobium electrodes but has a unique filling made of nanoscale clusters of manganese in a silicon matrix.

The nanoclusters — about 20,000 per square micrometer — act like tiny bar magnets with “spins” that can be oriented either randomly or in a coordinated manner.

“These are customized Josephson junctions,” Schneider said. “We can control the number of nanoclusters pointing in the same direction, which affects the superconducting properties of the junction.”

The synapse rests in a superconducting state, except when it’s activated by incoming current and starts producing voltage spikes. Researchers apply current pulses in a magnetic field to boost the magnetic ordering, that is, the number of nanoclusters pointing in the same direction. This magnetic effect progressively reduces the critical current level, making it easier to create a normal conductor and produce voltage spikes.

The critical current is the lowest when all the nanoclusters are aligned. The process is also reversible: Pulses are applied without a magnetic field to reduce the magnetic ordering and raise the critical current. This design, in which different inputs alter the spin alignment and resulting output signals, is similar to how the brain operates.
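To make the mechanism concrete, here is a toy numerical sketch of the idea. The constants and the linear order-to-critical-current relation are illustrative assumptions, not NIST's device physics: a synaptic "weight" is represented by the fraction of aligned nanoclusters, the critical current falls as that fraction grows, and the junction spikes whenever the drive current exceeds the critical current.

```python
# Toy model of a magnetically tunable Josephson-junction synapse.
# Illustrative only: the constants and the linear order-to-critical-current
# relation are assumptions, not the device physics reported by NIST.

I_C_MAX = 100.0   # critical current with randomly oriented nanoclusters (arb. units)
I_C_MIN = 20.0    # critical current with fully aligned nanoclusters (arb. units)

def critical_current(order):
    """More magnetic order (0..1) -> lower critical current."""
    return I_C_MAX - order * (I_C_MAX - I_C_MIN)

def apply_pulse(order, in_field=True, step=0.05):
    """A training pulse raises the ordering in a magnetic field, lowers it without one."""
    delta = step if in_field else -step
    return min(1.0, max(0.0, order + delta))

order = 0.0              # start with randomly oriented nanoclusters
drive_current = 60.0     # constant input current (arb. units)

for pulse in range(20):
    order = apply_pulse(order, in_field=True)
    spiking = drive_current > critical_current(order)
    print(f"pulse {pulse:2d}  order={order:.2f}  "
          f"I_c={critical_current(order):5.1f}  spiking={spiking}")
```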

Synapse behavior can also be tuned by changing how the device is made and its operating temperature. By making the nanoclusters smaller, researchers can reduce the pulse energy needed to raise or lower the magnetic order of the device. Raising the operating temperature slightly from minus 271.15 degrees C (minus 456.07 degrees F) to minus 269.15 degrees C (minus 452.47 degrees F), for example, results in more and higher voltage spikes.

Crucially, the synapses can be stacked in three dimensions (3-D) to make large systems that could be used for computing. NIST researchers created a circuit model to simulate how such a system would operate.

The NIST synapse’s combination of small size, superfast spiking signals, low energy needs and 3-D stacking capability could provide the means for a far more complex neuromorphic system than has been demonstrated with other technologies, according to the paper.

The work was supported by the Intelligence Advanced Research Projects Activity’s Cryogenic Computing Complexity Program.

Drones learn to navigate autonomously by imitating cars and bicycles

All of today’s commercial drones use GPS, which works fine above rooftops and at high altitudes. But what happens when a drone has to navigate autonomously at low altitude among tall buildings, or in dense, unstructured city streets with cars, cyclists or pedestrians suddenly crossing its path? So far, commercial drones have not been able to react quickly to such unforeseen events.

Integrating autonomously navigating drones into everyday life

Researchers at the University of Zurich and the National Centre of Competence in Research NCCR Robotics have developed DroNet, an algorithm that can safely drive a drone through the streets of a city. Designed as a fast eight-layer residual network, it produces two outputs for each single input image: a steering angle to keep the drone navigating while avoiding obstacles, and a collision probability to let the drone recognise dangerous situations and promptly react to them. “DroNet recognises static and dynamic obstacles and can slow down to avoid crashing into them. With this algorithm we have taken a step forward towards integrating autonomously navigating drones into our everyday life,” says Davide Scaramuzza, Professor for Robotics and Perception at the University of Zurich.
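As a rough illustration of the two-headed design described above, the following PyTorch sketch builds a small residual network that maps a camera frame to a steering angle and a collision probability. It is deliberately smaller than the published eight-layer DroNet, and the layer sizes are placeholders rather than the authors' architecture.

```python
# Minimal sketch of a DroNet-style network: a small residual CNN with two
# heads, one regressing a steering angle and one predicting a collision
# probability. Layer counts and sizes are illustrative, not the published
# eight-layer DroNet architecture.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class TwoHeadNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU())
        self.body = nn.Sequential(ResBlock(32), ResBlock(32), nn.AdaptiveAvgPool2d(1))
        self.steer = nn.Linear(32, 1)     # steering angle (regression head)
        self.collide = nn.Linear(32, 1)   # collision logit (classification head)

    def forward(self, x):
        h = self.body(self.stem(x)).flatten(1)
        return self.steer(h), torch.sigmoid(self.collide(h))

net = TwoHeadNet()
frame = torch.randn(1, 1, 200, 200)       # one grayscale camera frame
angle, p_collision = net(frame)
print(angle.item(), p_collision.item())
```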

Powerful artificial intelligence algorithm

Instead of relying on sophisticated sensors, the drone developed by Swiss researchers uses a normal camera like that of every smartphone, and a very powerful artificial intelligence algorithm to interpret the scene it observes and react accordingly. The algorithm consists of a so-called Deep Neural Network. “This is a computer algorithm that learns to solve complex tasks from a set of ‘training examples’ that show the drone how to do certain things and cope with some difficult situations, much like children learn from their parents or teachers,” says Prof. Scaramuzza.

Cars and bicycles are the drones’ teachers

One of the most difficult challenges in Deep Learning is to collect several thousand ‘training examples’. To gain enough data to train their algorithms, Prof. Scaramuzza and his team collected data from cars and bicycles driving in urban environments. By imitating them, the drone automatically learned to respect safety rules, such as how to follow the street without crossing into the oncoming lane, and how to stop when obstacles like pedestrians, construction works, or other vehicles block its way. Even more interestingly, the researchers showed that their drones learned to navigate not only through city streets, but also in completely different environments where they were never taught to do so. Indeed, the drones learned to fly autonomously in indoor environments, such as parking lots and office corridors.
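In training terms, the two outputs call for two kinds of supervision: a regression loss for the steering angle and a classification loss for the collision probability. The sketch below shows one plausible way to combine them; the weighting and the exact loss terms are assumptions, not the authors' published recipe.

```python
# Sketch of combining the two supervision signals during training: a
# regression loss for the steering angle and a classification loss for the
# collision label. The weighting 'beta' is an assumption, not the published
# DroNet training recipe.

import torch
import torch.nn.functional as F

def dronet_style_loss(pred_angle, true_angle, pred_coll, true_coll, beta=0.1):
    steer_loss = F.mse_loss(pred_angle, true_angle)           # imitate recorded steering
    coll_loss = F.binary_cross_entropy(pred_coll, true_coll)  # imitate collision labels
    return steer_loss + beta * coll_loss

# Dummy batch of eight frames' worth of targets and predictions
pred_angle, true_angle = torch.randn(8, 1), torch.randn(8, 1)
pred_coll = torch.rand(8, 1)
true_coll = torch.randint(0, 2, (8, 1)).float()
print(dronet_style_loss(pred_angle, true_angle, pred_coll, true_coll).item())
```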

Toward fully autonomous drones

This research opens up possibilities for monitoring and surveillance or parcel delivery in cluttered city streets, as well as for rescue operations in disaster-stricken urban areas. Nevertheless, the research team warns against exaggerated expectations of what lightweight, cheap drones can do. “Many technological issues must still be overcome before the most ambitious applications can become reality,” says PhD student Antonio Loquercio.

Story Source:

Materials provided by University of Zurich. Note: Content may be edited for style and length.

Engineers design artificial synapse for ‘brain-on-a-chip’ hardware

When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses — the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

The design, published today in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwritten recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
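To get a feel for what a 4 percent device-to-device and 1 percent cycle-to-cycle variation means for recognition accuracy, here is a small, self-contained experiment using scikit-learn's bundled handwritten-digits set as a stand-in for the dataset the team used. It is not the researchers' simulation; the network and the noise model are purely illustrative.

```python
# Illustrative experiment (not the MIT team's simulation): train a small
# software network on scikit-learn's bundled handwritten-digits set, then
# perturb its weights with ~4% device-to-device and ~1% cycle-to-cycle
# variation, roughly the uniformity figures quoted for the silicon-germanium
# synapses, and see how recognition accuracy holds up.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X / 16.0, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
net.fit(X_tr, y_tr)
print("accuracy with ideal weights:    ", net.score(X_te, y_te))

rng = np.random.default_rng(0)
for layer in net.coefs_:                   # emulate analog-synapse imperfections
    layer *= 1 + rng.normal(0.0, 0.04, layer.shape)   # device-to-device spread
    layer *= 1 + rng.normal(0.0, 0.01, layer.shape)   # cycle-to-cycle noise
print("accuracy with perturbed weights:", net.score(X_te, y_te))
```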

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.

Piecework at the nano assembly line

Scientists at the Technical University of Munich (TUM) have developed a novel electric propulsion technology for nanorobots. It allows molecular machines to move a hundred thousand times faster than with the biochemical processes used to date. This makes nanobots fast enough to do assembly line work in molecular factories. The new research results appeared today as the cover story in the scientific journal Science.

Up and down, up and down. The points of light alternate back and forth in lockstep. They are produced by glowing molecules affixed to the ends of tiny robot arms. Prof. Friedrich Simmel observes the movement of the nanomachines on the monitor of a fluorescence microscope. A simple mouse click is all it takes for the points of light to move in another direction.

“By applying electric fields, we can arbitrarily rotate the arms in a plane,” explains the head of the Chair of Physics of Synthetic Biological Systems at TU Munich. His team has for the first time managed to control nanobots electrically and has at the same time set a record: The new technique is 100,000 times faster than all previous methods.

DNA-Origami robots for the manufacturing plants of tomorrow

Scientists around the world are working on new technologies for the nanofactories of the future. They hope these will one day be used to analyse biochemical samples or produce active medical agents. The required miniature machines can already be produced cost-effectively using the DNA-origami technique.

One reason these molecular machines have not been deployed on a large scale to date is that they are too slow. The building blocks are activated with enzymes, strands of DNA or light to then perform specific tasks, for example to gather and transport molecules.

However, traditional nanobots take minutes, sometimes even hours, to carry out these actions. For all practical purposes, efficient molecular assembly lines cannot be built with these methods.

Electronic speed boost

“Building up a nanotechnological assembly line calls for a different kind of propulsion technology. We came up with the idea of dropping biochemical nanomachine switching completely in favour of the interactions between DNA structures and electric fields,” explains TUM researcher Simmel, who is also the co-coordinator of the Excellence Cluster Nanosystems Initiative Munich (NIM).

The principle behind the propulsion technology is simple: DNA molecules have negative charges. The biomolecules can thus be moved by applying electric fields. Theoretically, this should allow nanobots made of DNA to be steered using electrical fields.

Robotic movement under the microscope

To determine whether and how fast the robot arms would line up with an electric field, the researchers affixed several million nanobot arms to a glass substrate and placed this into a sample holder with electrical contacts designed specifically for the purpose.

Each of the miniature machines produced by the lead author Enzo Kopperger comprises a 400 nanometer arm attached to a rigid 55 by 55 nanometer base plate with a flexible joint made of unpaired bases. This construction ensures that the arms can rotate arbitrarily in the horizontal plane.

In collaboration with fluorescence specialists headed by Prof. Don C. Lamb of the Ludwig Maximillians University Munich, the researchers marked the tips of the robot arms using dye molecules. They observed their motion using a fluorescence microscope. They then changed the direction of the electric field. This allowed the researchers to arbitrarily alter the orientation of the arms and control the locomotion process.
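The control principle lends itself to a very small sketch: because the charged arm lines up with the in-plane field, commanding a target angle amounts to setting the field's x and y components. The magnitude, the sign convention and the electrode geometry below are placeholders, not the TUM setup.

```python
# Minimal sketch of the control idea: a charged DNA arm aligns with the
# in-plane electric field, so steering it to an angle amounts to setting the
# field's x and y components. Magnitude, sign convention (along or against
# the field depends on electrode wiring) and geometry are placeholders, not
# the TUM setup.

import math

def field_components(theta_deg, magnitude=1.0):
    """Return (Ex, Ey) that would orient the arm at theta_deg in the plane."""
    theta = math.radians(theta_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

for target in (0, 45, 90, 180, 270):
    ex, ey = field_components(target)
    print(f"target {target:3d} deg ->  Ex={ex:+.2f}  Ey={ey:+.2f}")
```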

“The experiment demonstrated that molecular machines can be moved, and thus also driven electrically,” says Simmel. “Thanks to the electronic control process, we can now initiate movements on a millisecond time scale and are thus 100,000 times faster than with previously used biochemical approaches.”

On the road to a nanofactory

The new control technology is suited not only for moving around dye molecules and nanoparticles. The arms of the miniature robots can also apply force to molecules. These interactions can be utilized for diagnostics and in pharmaceutical development, emphasizes Simmel. “Nanobots are small and economical. Millions of them could work in parallel to look for specific substances in samples or to synthesize complex molecules — not unlike an assembly line.”

Engineers invent tiny vision processing chip for ultra-small smart vision systems and IoT applications

A team of researchers from the National University of Singapore (NUS) has developed a novel microchip, named EQSCALE, which can capture visual details from video frames at extremely low power consumption. The video feature extractor uses 20 times less power than existing best-in-class chips and hence requires a battery 20 times smaller, which could reduce the size of smart vision systems down to the millimetre range. For example, it can be powered continuously by a millimetre-sized solar cell without the need for battery replacement.

Led by Associate Professor Massimo Alioto from the Department of Electrical and Computer Engineering at the NUS Faculty of Engineering, the team’s discovery is a major step forward in developing millimetre-sized smart cameras with near-perpetual lifespan. It will also pave the way for cost-effective Internet of Things (IoT) applications, such as ubiquitous safety surveillance in airports and key infrastructure, building energy management, workplace safety, and elderly care.

“IoT is a fast-growing technology wave that uses massively distributed sensors to make our environment smarter and human-centric. Vision electronic systems with long lifetime are currently not feasible for IoT applications due to their high power consumption and large size. Our team has addressed these challenges through our tiny EQSCALE chip, and we have shown that ubiquitous and always-on smart cameras are viable. We hope that this new capability will accelerate the ambitious endeavour of embedding the sense of sight in the IoT,” said Assoc Prof Alioto.

Tiny vision processing chip that works non-stop

A video feature extractor captures visual details taken by a smart camera and turns them into a much smaller set of points of interest and edges for further analysis. Video feature extraction is the basis of any computer vision system that automatically detects, classifies and tracks objects in the visual scene. It needs to be performed on every single frame continuously, thus defining the minimum power of a smart vision system and hence the minimum system size.

The power consumption of previous state-of-the-art chips for feature extraction ranges from a few milliwatts to hundreds of milliwatts, which is roughly the average power consumption of a smartwatch and a smartphone, respectively. To enable near-perpetual operation, devices can be powered by solar cells that harvest energy from natural lighting in living spaces. However, such devices would require solar cells of centimetre scale or larger, posing a fundamental limit to the miniaturisation of such vision systems. Shrinking them down to the millimetre scale requires reducing the power consumption to well below one milliwatt.

The NUS Engineering team’s microchip, EQSCALE, can perform continuous feature extraction at 0.2 milliwatts — 20 times lower in power consumption than any existing technology. This translates into a major advancement in the level of miniaturisation for smart vision systems. The novel feature extractor is smaller than a millimetre on each side, and can be powered continuously by a solar cell that is only a few millimetres in size.

Assoc Prof Alioto explained, “This technological breakthrough is achieved through the concept of energy-quality scaling, where the trade-off between energy consumption and quality in the extraction of features is adjusted. This mimics the dynamic change in the level of attention with which humans observe the visual scene, processing it with different levels of detail and quality depending on the task at hand. Energy-quality scaling allows correct object recognition even when a substantial number of points of interests are missed due to the degraded quality of the target.”
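The energy-quality trade-off can be illustrated with a deliberately crude sketch: a gradient-based interest-point detector whose quality knob simply skips rows of pixels, so that lower quality means proportionally less arithmetic per frame (a stand-in for energy). This is not the EQSCALE feature extractor, only the scaling idea.

```python
# Sketch of the energy-quality scaling idea (not the EQSCALE design): a crude
# gradient-based interest-point detector whose quality knob skips rows of
# pixels. Examining fewer pixels stands in for spending less energy; the
# detector itself is a placeholder, not the chip's feature extractor.

import numpy as np

def interest_points(frame, quality=1.0, threshold=0.5):
    """Return (row, col) points with strong gradients, examining only a
    'quality' fraction of the rows to cut the work done per frame."""
    gy, gx = np.gradient(frame.astype(float))
    strength = np.hypot(gx, gy)
    step = max(1, int(round(1.0 / quality)))        # quality 0.25 -> every 4th row
    rows = np.arange(0, frame.shape[0], step)
    mask = strength[rows] > threshold * strength.max()
    r_idx, c_idx = np.nonzero(mask)
    return list(zip(rows[r_idx], c_idx))

frame = np.zeros((120, 160))
frame[40:80, 60:100] = 1.0                          # a bright square as a test scene
for q in (1.0, 0.5, 0.25):
    print(f"quality={q:4.2f}  points found={len(interest_points(frame, quality=q))}")
```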

Next steps

The development of EQSCALE is a crucial step towards the future demonstration of millimetre-sized vision systems that could operate indefinitely. The NUS research team is looking into developing a miniaturised computer vision system that comprises smart cameras equipped with vision capabilities enabled by the microchip, as well as a machine learning engine that comprehends the visual scene. The ultimate goal of the NUS research team is to enable massively distributed vision systems for wide-area and ubiquitous visual monitoring, vastly exceeding the traditional concept of cameras.

Story Source:

Materials provided by National University of Singapore. Note: Content may be edited for style and length.

AI ‘scientist’ finds that toothpaste ingredient may help fight drug-resistant malaria

An ingredient commonly found in toothpaste could be employed as an anti-malarial drug against strains of malaria parasite that have grown resistant to one of the currently-used drugs. This discovery, led by researchers at the University of Cambridge, was aided by Eve, an artificially-intelligent ‘robot scientist’.

When a mosquito infected with malaria parasites bites someone, it transfers the parasites into their bloodstream via its saliva. These parasites work their way into the liver, where they mature and reproduce. After a few days, the parasites leave the liver and hijack red blood cells, where they continue to multiply, spreading around the body and causing symptoms, including potentially life-threatening complications.

Malaria kills over half a million people each year, predominantly in Africa and south-east Asia. While a number of medicines are used to treat the disease, malaria parasites are growing increasingly resistant to these drugs, raising the spectre of untreatable malaria in the future.

Now, in a study published today in the journal Scientific Reports, a team of researchers employed the Robot Scientist ‘Eve’ in a high-throughput screen and discovered that triclosan, an ingredient found in many toothpastes, may help the fight against drug-resistance.

When used in toothpaste, triclosan prevents the build-up of plaque bacteria by inhibiting the action of an enzyme known as enoyl reductase (ENR), which is involved in the production of fatty acids.

Scientists have known for some time that triclosan also inhibits the growth in culture of the malaria parasite Plasmodium during the blood-stage, and assumed that this was because it was targeting ENR, which is found in the liver. However, subsequent work showed that improving triclosan’s ability to target ENR had no effect on parasite growth in the blood.

Working with ‘Eve’, the research team discovered that in fact, triclosan affects parasite growth by specifically inhibiting an entirely different enzyme of the malaria parasite, called DHFR. DHFR is the target of a well-established antimalarial drug, pyrimethamine; however, resistance to the drug among malaria parasites is common, particularly in Africa. The Cambridge team showed that triclosan was able to target and act on this enzyme even in pyrimethamine-resistant parasites.

“Drug-resistant malaria is becoming an increasingly significant threat in Africa and south-east Asia, and our medicine chest of effective treatments is slowly depleting,” says Professor Steve Oliver from the Cambridge Systems Biology Centre and the Department of Biochemistry at the University of Cambridge. “The search for new medicines is becoming increasingly urgent.”

Because triclosan inhibits both ENR and DHFR, the researchers say it may be possible to target the parasite at both the liver stage and the later blood stage.

Lead author Dr Elizabeth Bilsland, now an assistant professor at the University of Campinas, Brazil, adds: “The discovery by our robot ‘colleague’ Eve that triclosan is effective against malaria targets offers hope that we may be able to use it to develop a new drug. We know it is a safe compound, and its ability to target two points in the malaria parasite’s lifecycle means the parasite will find it difficult to evolve resistance.”

Robot scientist Eve was developed by a team of scientists at the Universities of Manchester, Aberystwyth, and Cambridge to automate — and hence speed up — the drug discovery process: it automatically develops and tests hypotheses to explain observations, runs experiments using laboratory robotics, interprets the results to amend its hypotheses, and then repeats the cycle, automating high-throughput hypothesis-led research.
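The cycle Eve automates can be caricatured in a few lines of Python. The sketch below uses a random stand-in for a laboratory assay and a trivially simple "hypothesis" (a table of measured potencies); none of Eve's actual compound library, models or robotics is reproduced.

```python
# Schematic sketch of the hypothesise -> experiment -> update cycle described
# above. The 'assay' is a random stand-in for a laboratory measurement;
# nothing about Eve's actual compound library, models or robotics is
# reproduced here.

import random
random.seed(1)

compounds = {f"compound_{i}": random.random() for i in range(200)}  # hidden potency

def run_assay(name):                  # stand-in for a robotic lab experiment
    return compounds[name] + random.gauss(0, 0.05)

model = {}                            # current hypothesis: measured potency per compound

for cycle in range(5):
    # re-test the most promising compounds so far, plus a batch of new ones
    promising = sorted(model, key=model.get, reverse=True)[:5]
    fresh = random.sample([c for c in compounds if c not in model], 10)
    for c in promising + fresh:
        model[c] = run_assay(c)       # experiment, then amend the hypothesis
    best = max(model, key=model.get)
    print(f"cycle {cycle}: {len(model):3d} compounds tested, current best hit = {best}")
```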

Professor Ross King from the Manchester Institute of Biotechnology at the University of Manchester, who led the development of Eve, says: “Artificial intelligence and machine learning enables us to create automated scientists that do not just take a ‘brute force’ approach, but rather take an intelligent approach to science. This could greatly speed up the drug discovery progress and potentially reap huge rewards.”

Small but fast: A miniaturized origami-inspired robot combines micrometer precision with high speed

Because of their high precision and speed, Delta robots are deployed in many industrial processes, including pick-and-place assemblies, machining, welding and food packaging. Starting with the first version developed by Reymond Clavel for a chocolate factory to quickly place chocolate pralines in their packages, Delta robots use three individually controlled and lightweight arms that guide a platform to move fast and accurately in three directions. The platform is either used as a stage, similar to the ones being used in flight simulators, or coupled to a manipulating device that can, for example, grasp, move, and release objects in prescribed patterns. Over time, roboticists have designed smaller and smaller Delta robots for tasks in limited workspaces, yet shrinking them further to the millimeter scale with conventional manufacturing techniques and components has proven fruitless.

Reported in Science Robotics, a new design, the milliDelta robot, developed by Robert Wood’s team at Harvard’s Wyss Institute for Biologically Inspired Engineering and John A. Paulson School of Engineering and Applied Sciences (SEAS) overcomes this miniaturization challenge. By integrating their microfabrication technique with high-performance composite materials that can incorporate flexural joints and bending actuators, the milliDelta can operate with high speed, force, and micrometer precision, which make it compatible with a range of micromanipulation tasks in manufacturing and medicine.

In 2011, inspired by pop-up books and origami, Wood’s team developed a micro-fabrication approach that enables the assembly of robots from flat sheets of composite materials. Pop-up MEMS (short for “microelectromechanical systems”) manufacturing has since been used for the construction of dynamic centimeter-scale machines that can simply walk away, or, as in the case of the RoboBee, can fly. In their new study, the researchers applied their approach to develop a Delta robot measuring a mere 15 mm-by-15 mm-by-20 mm.

“The physics of scaling told us that bringing down the size of Delta robots would increase their speed and acceleration, and pop-up MEMS manufacturing with its ability to use any material or combination of materials seemed an ideal way to attack this problem,” said Wood, who is a Core Faculty member at the Wyss Institute and co-leader of its Bioinspired Robotics platform. Wood is also the Charles River Professor of Engineering and Applied Sciences at SEAS. “This approach also allowed us to rapidly go through a number of iterations that led us to the final milliDelta.”

The milliDelta design incorporates a composite laminate structure with embedded flexural joints that approximate the more complicated joints found in large scale Delta robots. “With the help of an assembly jig, this laminate can be precisely folded into a millimeter-scale Delta robot. The milliDelta also utilizes piezoelectric actuators, which allow it to perform movements at frequencies 15 to 20 times higher than those of other currently available Delta robots,” said first-author Hayley McClintock, a Wyss Institute Staff Researcher on Wood’s team.

In addition, the team demonstrated that the milliDelta can operate in a workspace of about seven cubic millimeters and that it can apply forces and exhibit trajectories that, together with its high frequencies, could make it ideal for micromanipulations in industrial pick-and-place processes and microscopic surgeries such as retinal microsurgeries performed on the human eye.

Putting the milliDelta’s potential for microsurgeries and other micromanipulations to a first test, the researchers explored their robot as a hand tremor-cancelling device. “We first mapped the paths that the tip of a toothpick circumscribed when held by an individual, computed those, and fed them into the milliDelta robot, which was able to match and cancel them out,” said co-first author Fatma Zeynep Temel, Ph.D., a SEAS Postdoctoral Fellow in Wood’s team. The researchers think that specialized milliDelta robots could either be added on to existing robotic devices, or be developed as standalone devices like, for example, platforms for the manipulation of cells in research and clinical laboratories.
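The tremor-cancelling demonstration boils down to recording a trajectory and driving the platform along its negation. The sketch below does this with a synthetic tremor signal and an assumed 2-millisecond actuation lag; the study's measured trajectories and control loop are not reproduced.

```python
# Sketch of the tremor-cancelling idea: record the path traced by a hand-held
# tip, then drive the platform along the negated path so the two motions sum
# to (nearly) zero. The tremor signal is synthetic and the 2 ms actuation lag
# is an assumption; the study's measured trajectories are not reproduced.

import numpy as np

t = np.linspace(0, 1, 1000)                        # 1 s sampled at 1 kHz
rng = np.random.default_rng(0)
tremor = 0.2 * np.sin(2 * np.pi * 9 * t) + 0.05 * rng.normal(size=t.size)  # mm, ~9 Hz

command = -np.roll(tremor, 2)      # platform driven opposite, with ~2 ms of lag
residual = tremor + command        # what the tool tip would actually see

print("tremor RMS (mm):  ", round(float(np.sqrt(np.mean(tremor**2))), 4))
print("residual RMS (mm):", round(float(np.sqrt(np.mean(residual**2))), 4))
```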

“The work by Wood’s team demonstrating the enhanced speed and control of their milliDelta robot at the millimeter scale opens entirely new avenues of development for industrial and medical robots, which are currently beyond the reach of existing technologies. It’s yet another example of how our Bioinspired Robotics platform is leading the way into the future,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at HMS and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at SEAS.

Quick quick slow is no-go in crab courtship dance

Female fiddler crabs are sensitive to changes in the speed of a male’s courtship display, significantly preferring displays that accelerate over those performed at a constant speed or that slow down.

The new research is published in the journal Biology Letters and was carried out by Dr Sophie Mowles of Anglia Ruskin University, alongside academics from the Australian National University in Canberra.

Male fiddler crabs (Uca mjoebergi) wave their larger claw during courtship displays, which is a demanding exercise that leads to fatigue. A previous study by Dr Mowles has shown that male crabs that perform more vigorous, energetically costly, waving displays have higher physical fitness, meaning this signal is worth paying attention to from the perspective of a female crab.

To study the importance of tempo in mating decisions, Dr Mowles and her colleagues constructed robotic male fiddler crab arms that could move at a constant speed, slow down or speed up as the encounter progressed.

Female fiddler crabs were then introduced to these robotic crabs, and the research showed that females demonstrated a strong preference for escalating robots.

Even at points when all three robotic crabs were waving at the same frequency, the female crabs preferred the male robot whose speed was increasing. This indicates that the females realise the male might be on a trajectory to increase their wave rate further, while also demonstrating that they can conserve energy until necessary.

Dr Mowles, Lecturer in Animal and Environmental Biology at Anglia Ruskin University, said: “Dynamic signal repetition, as seen in the courtship display of male fiddler crabs, can incur significant energetic costs that reveal the quality of the displaying individual.

“They allow females to select physically fit mates as these ‘signals of stamina’ will reflect a male’s ability to perform other demanding activities associated with survival, and reduce the risk of mating with weaker signallers that might be diseased.

“A key feature of demanding displays is that they can change in intensity: energetically costly displays are likely to escalate when a male attempts to persuade a female to mate by increasing his signalling effort, but can eventually de-escalate as he becomes fatigued.

“Our findings show that females not only take into account the current level of courtship signal production, but also any changes in rate which might provide information about a male’s quality. For example, a decreasing rate might indicate that the male, despite appearing to be a vigorous and effective signaller, has exhausted his energy reserves.”

Story Source:

Materials provided by Anglia Ruskin University. Note: Content may be edited for style and length.

No evidence to support link between violent video games and behavior

Researchers at the University of York have found no evidence to support the theory that video games make players more violent.

In a series of experiments, with more than 3,000 participants, the team demonstrated that video game concepts do not ‘prime’ players to behave in certain ways and that increasing the realism of violent video games does not necessarily increase aggression in game players.

The dominant model of learning in games is built on the idea that exposing players to concepts, such as violence in a game, makes those concepts easier to use in ‘real life’.

This is known as ‘priming’, and is thought to lead to changes in behaviour. Previous experiments on this effect, however, have so far provided mixed conclusions.

Researchers at the University of York used a larger number of participants than previous studies and compared different types of gaming realism to explore whether more conclusive evidence could be found.

In one study, participants played a game where they had to either be a car avoiding collisions with trucks or a mouse avoiding being caught by a cat. Following the game, the players were shown various images, such as a bus or a dog, and asked to label them as either a vehicle or an animal.

Dr David Zendle, from the University’s Department of Computer Science, said: “If players are ‘primed’ through immersing themselves in the concepts of the game, they should be able to categorise the objects associated with this game more quickly in the real world once the game had concluded.

“Across the two games we didn’t find this to be the case. Participants who played a car-themed game were no quicker at categorising vehicle images, and indeed in some cases their reaction time was significantly slower.”

In a separate, but connected study, the team investigated whether realism influenced the aggression of game players. Research in the past has suggested that the greater the realism of the game the more primed players are by violent concepts, leading to antisocial effects in the real world.

Dr Zendle said: “There are several experiments looking at graphic realism in video games, but they have returned mixed results. There are, however, other ways that violent games can be realistic, besides looking like the ‘real world’, such as the way characters behave for example.

“Our experiment looked at the use of ‘ragdoll physics’ in game design, which creates characters that move and react in the same way that they would in real life. Human characters are modelled on the movement of the human skeleton and how that skeleton would fall if it was injured.”

The experiment compared player reactions to two combat games, one that used ‘ragdoll physics’ to create realistic character behaviour and one that did not, in an animated world that nevertheless looked real.

Following the game the players were asked to complete word puzzles called ‘word fragment completion tasks’, where researchers expected more violent word associations would be chosen for those who played the game that employed more realistic behaviours.

They compared the results of this experiment with another test of game realism, where a single bespoke war game was modified to form two different games. In one of these games, enemy characters used realistic soldier behaviours, whilst in the other game they did not employ realistic soldier behaviour.

Dr Zendle said: “We found that the priming of violent concepts, as measured by how many violent concepts appeared in the word fragment completion task, was not detectable. There was no difference in priming between the game that employed ‘ragdoll physics’ and the game that didn’t, as well as no significant difference between the games that used ‘real’ and ‘unreal’ soldier tactics.

“The findings suggest that there is no link between these kinds of realism in games and the kind of effects that video games are commonly thought to have on their players.

“Further study is now needed into other aspects of realism to see if this has the same result. What happens when we consider the realism of by-standing characters in the game, for example, and the inclusion of extreme content, such as torture?

“We also only tested these theories on adults, so more work is needed to understand whether a different effect is evident in children players.”

Robotic implants spur tissue regeneration inside the body

An implanted, programmable medical robot can gradually lengthen tubular organs by applying traction forces — stimulating tissue growth in stunted organs without interfering with organ function or causing apparent discomfort, report researchers at Boston Children’s Hospital.

The robotic system, described today in Science Robotics, induced cell proliferation and lengthened part of the esophagus in a large animal by about 75 percent, while the animal remained awake and mobile. The researchers say the system could treat long-gap esophageal atresia, a rare birth defect in which part of the esophagus is missing, and could also be used to lengthen the small intestine in short bowel syndrome.

The most effective current operation for long-gap esophageal atresia, called the Foker process, uses sutures anchored on the patient’s back to gradually pull on the esophagus. To prevent the esophagus from tearing, patients must be paralyzed in a medically induced coma and placed on mechanical ventilation in the intensive care unit for one to four weeks. The long period of immobilization can also cause medical complications such as bone fractures and blood clots.

“This project demonstrates proof-of-concept that miniature robots can induce organ growth inside a living being for repair or replacement, while avoiding the sedation and paralysis currently required for the most difficult cases of esophageal atresia,” says Russell Jennings, MD, surgical director of the Esophageal and Airway Treatment Center at Boston Children’s Hospital, and a co-investigator on the study. “The potential uses of such robots are yet to be fully explored, but they will certainly be applied to many organs in the near future.”

The motorized robotic device is attached only to the esophagus, so it would allow a patient to move freely. Covered by a smooth, biocompatible, waterproof “skin,” it includes two attachment rings, placed around the esophagus and sewn into place with sutures. A programmable control unit outside the body applies adjustable traction forces to the rings, slowly and steadily pulling the tissue in the desired direction.

The device was tested in the esophagi of pigs (five received the implant and three served as controls). The distance between the two rings (pulling the esophagus in opposite directions) was increased by small, 2.5-millimeter increments each day for 8 to 9 days. The animals were able to eat normally even with the device applying traction to the esophagus, and showed no sign of discomfort.

On day 10, the segment of esophagus had increased in length by 77 percent on average. Examination of the tissue showed a proliferation of the cells that make up the esophagus. The organ also maintained its normal diameter.
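For readers who want to check the arithmetic behind those figures, the schedule works out as follows; the 25-millimeter starting length used below is purely illustrative, not a measurement from the study.

```python
# Quick check of the figures quoted above: 2.5 mm of added ring separation per
# day over 8-9 days, and a segment about 77% longer by day 10. The 25 mm
# starting length is purely illustrative, not a measurement from the study.

daily_increment_mm = 2.5
for days in (8, 9):
    print(f"{days} days of traction -> {daily_increment_mm * days:.1f} mm added separation")

start_mm = 25.0
print(f"a 77% increase on {start_mm} mm -> {start_mm * 1.77:.1f} mm")
```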

“This shows we didn’t simply stretch the esophagus — it lengthened through cell growth,” says Pierre Dupont, PhD, the study’s senior investigator and Chief of Pediatric Cardiac Bioengineering at Boston Children’s.

The research team is now starting to test the robotic system in a large animal model of short bowel syndrome. While long-gap esophageal atresia is quite rare, the prevalence of short bowel syndrome is much higher. Short bowel can be caused by necrotizing enterocolitis in the newborn, Crohn’s disease in adults, or a serious infection or cancer requiring a large segment of intestine to be removed.

“Short bowel syndrome is a devastating illness requiring patients to be fed intravenously,” says gastroenterologist Peter Ngo, MD, a coauthor on the study. “This, in turn, can lead to liver failure, sometimes requiring a liver or multivisceral (liver-intestine) transplant, outcomes that are both devastating and costly.”

The team hopes to get support to continue its tests of the device in large animal models, and eventually conduct clinical trials. They will also test other features.

“No one knows the best amount of force to apply to an organ to induce growth,” explains Dupont. “Today, in fact, we don’t even know what forces we are applying clinically. It’s all based on surgeon experience. A robotic device can figure out the best forces to apply and then apply those forces precisely.”

Making the internet of things possible with a new breed of ‘memristors’

The internet of things is coming, that much we know. But it won’t arrive until we have components and chips that can handle the explosion of data that comes with IoT. By 2020, there will already be 50 billion industrial internet sensors in place all around us. A single autonomous device — a smart watch, a cleaning robot, or a driverless car — can produce gigabytes of data each day, whereas an Airbus may have over 10,000 sensors in one wing alone.

Two hurdles need to be overcome. First, the transistors in computer chips must be miniaturized to the size of only a few nanometres; the problem is that they no longer work at that scale. Second, analysing and storing unprecedented amounts of data will require equally huge amounts of energy. Sayani Majumdar, Academy Fellow at Aalto University, along with her colleagues, is designing technology to tackle both issues.

Majumdar and her colleagues have designed and fabricated the basic building blocks of future components for what are called “neuromorphic” computers, inspired by the human brain. It is a field of research in which the largest ICT companies in the world, as well as the EU, are investing heavily. Still, no one has yet come up with a nano-scale hardware architecture that could be scaled up to industrial manufacture and use.

“The technology and design of neuromorphic computing is advancing more rapidly than its rival revolution, quantum computing. There is already wide speculation both in academia and company R&D about ways to inscribe heavy computing capabilities in the hardware of smart phones, tablets and laptops. The key is to achieve the extreme energy-efficiency of a biological brain and mimic the way neural networks process information through electric impulses,” explains Majumdar.

Basic components for computers that work like the brain

In their recent article in Advanced Functional Materials, Majumdar and her team show how they have fabricated a new breed of “ferroelectric tunnel junctions,” that is, few-nanometre-thick ferroelectric thin films sandwiched between two electrodes. They have abilities beyond existing technologies and bode well for energy-efficient and stable neuromorphic computing.

The junctions operate at low voltages of less than five volts and with a variety of electrode materials — including the silicon used in the chips of most of our electronics. They can also retain data for more than 10 years without power and can be manufactured under normal conditions.

Until now, tunnel junctions have mostly been made of metal oxides and require temperatures of 700 degrees Celsius and high vacuum to manufacture. These ferroelectric materials also contain lead, which makes them — and all our computers — a serious environmental hazard.

“Our junctions are made out of organic hydrocarbon materials and they would reduce the amount of toxic heavy metal waste in electronics. We can also make thousands of junctions a day at room temperature without them suffering from the water or oxygen in the air,” explains Majumdar.

What makes ferroelectric thin film components great for neuromorphic computers is their ability to switch between not only binary states — 0 and 1 — but a large number of intermediate states as well. This allows them to ‘memorise’ information not unlike the brain: to store it for a long time with minute amounts of energy and to retain the information they have once received — even after being switched off and on again.

We are no longer talking about transistors, but ‘memristors’. They are ideal for computation similar to that in biological brains. Take, for example, the Mars 2020 rover that is about to go and chart the composition of another planet. For the rover to work and process data on its own, using only a single solar panel as an energy source, the unsupervised algorithms in it will need to use an artificial brain in the hardware.

“What we are striving for now, is to integrate millions of our tunnel junction memristors into a network on a one square centimetre area. We can expect to pack so many in such a small space because we have now achieved a record-high difference in the current between on and off-states in the junctions and that provides functional stability. The memristors could then perform complex tasks like image and pattern recognition and make decisions autonomously,” says Majumdar.
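The behaviour described above, many stable intermediate states that persist without power, can be caricatured with a toy model. The number of levels, the step size and the retention in this sketch are made up for illustration and are not measurements from the Aalto junctions.

```python
# Toy model of a multi-level, non-volatile memristive weight. The number of
# levels, step size and retention here are made up for illustration, not
# measurements from the Aalto tunnel junctions.

class Memristor:
    def __init__(self, levels=32):
        self.levels = levels
        self.state = levels // 2              # start mid-range

    def pulse(self, polarity):
        """polarity +1 potentiates, -1 depresses; the state saturates at the ends."""
        self.state = min(self.levels - 1, max(0, self.state + polarity))

    @property
    def conductance(self):
        return self.state / (self.levels - 1)  # normalised analog weight, 0..1

m = Memristor()
for _ in range(10):
    m.pulse(+1)
print("after 10 potentiating pulses:", m.conductance)
for _ in range(4):
    m.pulse(-1)
print("after 4 depressing pulses:  ", m.conductance)
# power off ... power on: the state simply persists (non-volatile)
print("retained conductance:       ", m.conductance)
```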

Story Source:

Materials provided by Aalto University. Note: Content may be edited for style and length.

Artificial muscles power up with new gel-based robotics

Scientists are one step closer to artificial muscles. Orthotics have come a long way since their initial wood and strap designs, yet innovation lapsed when it came to compensating for muscle power — until now.

A collaborative research team has designed a wearable robot to support a person’s hip joint while walking. The team, led by Minoru Hashimoto, a professor of textile science and technology at Shinshu University in Japan, published the details of their prototype in Smart Materials and Structures, a journal published by the Institute of Physics.

“With a rapidly aging society, an increasing number of elderly people require care after suffering from stroke and other age-related disabilities. Various technologies, devices, and robots are emerging to aid caretakers,” wrote Hashimoto, noting that several technologies meant to assist a person with walking are often cumbersome to the user. “[In our] current study, [we] sought to develop a lightweight, soft, wearable assist wear for supporting activities of daily life for older people with weakened muscles and those with mobility issues.”

The wearable system consists of plasticized polyvinyl chloride (PVC) gel, mesh electrodes, and applied voltage. The mesh electrodes sandwich the gel, and when voltage is applied, the gel flexes and contracts, like a muscle. It’s a wearable actuator, the mechanism that causes movement.

“We thought that the electrical mechanical properties of the PVC gel could be used for robotic artificial muscles, so we started researching the PVC gel,” said Hashimoto. “The ability to add voltage to PVC gel is especially attractive for high speed movement, and the gel moves with high speed with just a few hundred volts.”

In a preliminary evaluation, a stroke patient with some paralysis on one side of his body walked with and without the wearable system.

“We found that the assist wear enabled natural movement, increasing step length and decreasing muscular activity during straight line walking,” wrote Hashimoto. The researchers also found that adjusting the charge could change the level of assistance the actuator provides.

The robotic system earned first place in demonstrations of its multilayer PVC gel artificial muscle at the 24th International Symposium on Smart Structures and Materials & Nondestructive Evaluation and Health Monitoring, held by SPIE, the international society for optics and photonics.

Next, the researchers plan to create a string actuator using the PVC gel, which could potentially lead to the development of fabric capable of providing more manageable external muscular support with ease.

Story Source:

Materials provided by Shinshu University. Note: Content may be edited for style and length.

Robotic weeders: To a farm near you?

The future of weeding is here, and it comes in the form of a robot.

The popularity of robotic weeders for specialty crops has grown partly out of necessity, says Steven Fennimore, an extension specialist at the University of California, Davis. Specialty crops are vegetables like lettuce, broccoli, tomatoes, and onions. They are not mass-produced like corn, soybeans, and wheat.

The need for robotic weeders stems from two issues. One is a lack of herbicides available for use in specialty crops. Another is the fact that hand-weeding has become more and more expensive. Without effective herbicides, growers have had to hire people to hand-weed vast fields.

Hand-weeding is slow and increasingly expensive: it can cost $150-$300 per acre. That motivates some growers to look to robotic weeders.

“I’ve been working with robotic weeders for about 10 years now, and the technology is really just starting to come into commercial use,” Fennimore says. “It’s really an economic incentive to consider them.”

Fennimore works with university scientists and companies to engineer and test the weeders. The weeders utilize tiny blades that pop in and out to uproot weeds without damaging crops. He says that although the technology isn’t perfect, it’s getting better and better.

The weeders are programmed to recognize a pattern and can tell the difference between a plant and the soil. However, they currently have trouble telling the difference between a weed and a crop.
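One widely used way to separate plants from soil in a camera image is the excess-green index, 2G minus R minus B, thresholded into a vegetation mask. The sketch below shows that generic technique on a synthetic scene; it is not the algorithm used by any particular commercial weeder.

```python
# One widely used way to separate plants from soil in a camera image: the
# excess-green index (2G - R - B), thresholded into a vegetation mask. This is
# a generic technique shown on a synthetic scene, not the algorithm used by
# any particular commercial weeder.

import numpy as np

def vegetation_mask(rgb, threshold=0.1):
    """rgb: float array (H, W, 3) scaled to 0..1. Returns a boolean plant mask."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                  # plants are much greener than bare soil
    return exg > threshold

scene = np.zeros((50, 50, 3))
scene[...] = (0.45, 0.30, 0.20)          # brown 'soil' background
scene[20:30, 20:30] = (0.20, 0.55, 0.15) # a leafy green patch

mask = vegetation_mask(scene)
print("plant pixels found:", int(mask.sum()), "of", mask.size)
```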

That said, Fennimore explains how some companies are training the machines to tell a lettuce plant from a weed. He’s also working with university engineers on a system to tag the crop plant so the weeders will avoid it.

“The problem with the machines right now is that they are version 1.0, and there’s tremendous room for improvement,” he says. “The inability to be able to tell the difference between a weed and a crop requires the grower to be very exact when using them. The rows have to be a little straighter, cleaner, and more consistent because the machines aren’t that sophisticated yet. The robots don’t like surprises.”

The robotic weeders currently on the market cost between $120,000 and $175,000. For some California growers, it is a better long-term option than expensive hand-weeding. Others think it’s a lot of money for a new technology, and are waiting for it to get better and cheaper.

Fennimore believes robotic weeders are the future of weeding in specialty crops. Because of higher labor costs and more incentives to grow organically with fewer pesticides, European growers have been using robotic weeders for some time.

Fennimore is focusing his work on physical control of weeds because it offers the best option. He’s also started working in crops besides lettuce, such as tomatoes and onions. He adds that each crop will require a different system.

“I believe what makes the robotic weeders better than herbicides is that this electronic-based technology is very flexible and can be updated easily,” he says. “We all update our phones and computers constantly, which is a sign of a robust and flexible technology.”

Story Source:

Materials provided by American Society of Agronomy. Note: Content may be edited for style and length.

Vision, sensory and motor testing could predict best batters in baseball

New research from Duke Health suggests baseball scouts looking for a consistent, conscientious hitter may find clues not only in their performance on the field, but also in front of a computer screen.

In a study of 252 baseball professionals published today in the journal Scientific Reports, Duke researchers found that players with higher scores on a series of vision and motor tasks, completed on large touch-screen machines called Nike Sensory Stations, had better on-base percentages, more walks and fewer strikeouts — collectively referred to as plate discipline — compared to their peers.

“There has been a data revolution in the game of baseball over the past decade with the introduction of technologies that track the speed and movement of every pitch, the location of players in the field, and other tools that can quantify player performance like never before,” said lead author Kyle Burris, a Duke statistician and Ph.D. candidate.

“In this study, we wanted to quantify the links between an athlete’s senses such as eyesight and motor control using task scores and game performance,” he said. “We found positive relationships between several tasks and performance for hitters, but not for pitchers.”

The players were on U.S. major and minor league teams. They used large touch-screen stations to complete nine exercises, many of them resembling two-dimensional video games where users track or touch flat shapes as they scoot across the screen.

The tasks test a person’s ability to glean information from a faint object or in a split second — akin to deciphering a pitcher’s grip the moment before he hurls an 80-mph curveball — plus skills such as reaction time and hand-eye coordination, said Burris, who will further his interest in baseball statistics this summer as an intern with the Cleveland Indians.

The researchers found that overall, better performance on tasks predicted better batting performance for measures of plate discipline, such as on-base percentage, strikeout rate and walk rate, but not slugging percentage or pitching statistics.

In particular, high scores on a perception-span task, which measured the player’s ability to remember and recreate visual patterns, were associated with an increased ability to get on base. High scores in hand-eye coordination and reaction time were associated with an increased ability to draw walks, while better scores in spatial recognition, such as the ability to shift attention between near and far targets, were associated with fewer strikeouts.
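
The kind of association reported here can be illustrated with an ordinary least-squares regression of a plate-discipline statistic on task scores. The sketch below uses simulated data and made-up coefficients purely for illustration; it is not the statistical model used in the Duke paper.

```python
import numpy as np

# Hypothetical data: rows are players, columns are task scores
# (e.g. perception span, hand-eye coordination, reaction time).
rng = np.random.default_rng(0)
task_scores = rng.normal(size=(252, 3))            # standardized task scores
on_base_pct = 0.320 + task_scores @ np.array([0.010, 0.004, 0.006]) \
              + rng.normal(scale=0.02, size=252)   # simulated outcome

# Ordinary least squares: OBP ~ intercept + task scores.
X = np.column_stack([np.ones(len(task_scores)), task_scores])
coef, *_ = np.linalg.lstsq(X, on_base_pct, rcond=None)
print("intercept and per-task associations:", coef)
```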

“We can’t say there’s a causal relationship between higher scores on the tasks and performance in games, but there was an association in the real-world data we evaluated,” Burris said. “Regardless, this information could be useful in scouting, as well as providing possible training targets to improve on-field performance.”

Although research on the effectiveness of visual training for athletes is limited, sports teams have used the strategy for decades.

“The marketplace for devices had an early genesis in analog tools,” said cognitive neuroscientist L. Greg Appelbaum, Ph.D., the study’s senior author and associate professor in psychiatry and behavioral sciences at Duke. For example, trainers might challenge players to read words off a baseball being flung their way, or lead drills where players snap their attention from a close ball to one that’s far away, he said.

“In the past five years or so, we’ve moved to a digital realm where there are all kinds of new tools that provide new context for training, such as virtual reality, perceptual learning video games and brain training,” Appelbaum said. “The Sensory Station is one such device that can be used to link visual skills to on-field performance and provide information to individuals about how their skills compare to peers who might play the same sport and position at the same level.”

Scientists fiercely debate whether superior visual processing is hardwired, or whether it can be improved by training. Appelbaum and colleagues have recently launched the Duke Sports Vision Center, a clinic and lab where researchers will evaluate the effectiveness of visual training through technologies currently being used by professional athletes. The lab includes new versions of the sensory stations in the study (now made by Senaptec), immersive virtual reality and more.

Virtual twin in ten minutes

Avatars — virtual persons — are a core element of ICSpace, the virtual fitness and movement environment at Bielefeld University’s Cluster of Excellence Cognitive Interaction Technology (CITEC). The system makes it possible to practise and improve motion sequences by providing individualised feedback in real time. The system is embodied by a virtual person acting as a coach. In addition, users see themselves as avatars — virtual copies of themselves in the mirror of the virtual room. The creation of such personalised avatars used to take several days, but CITEC researchers have now developed an accelerated process.

In order to create avatars for the ICSpace system, the researchers “scan” people. The computer scientists use a circular array of 40 DSLR cameras to photograph the respective person from all sides and use these images to compute several million three-dimensional sample points on the person’s body. A generic virtual human model is fitted to this data in such a way that it corresponds to the shape and appearance of the person scanned. “Our virtual human model was generated from more than one hundred 3D scans and contains statistical knowledge about human body shape and movement,” says Professor Dr. Mario Botsch, head of the Computer Graphics and Geometry Processing research group and one of the coordinators of the ICSpace project. “Only through this model are we able to create avatars quickly and automatically.”
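
Conceptually, fitting the generic human model to a scan means finding the shape parameters of a statistical body model that best explain the measured surface points. Below is a minimal sketch of that idea, assuming a PCA-style shape space and known point correspondences, neither of which is detailed in the article.

```python
import numpy as np

def fit_shape_coefficients(scan_points, mean_shape, shape_basis, reg=1e-2):
    """Least-squares fit of PCA shape coefficients to scanned 3D points.

    scan_points : (N, 3) points sampled from the photographed person
    mean_shape  : (N, 3) corresponding points on the average body model
    shape_basis : (K, N, 3) principal directions of body-shape variation
    All three inputs are hypothetical stand-ins for the CITEC model's data.
    """
    residual = (scan_points - mean_shape).ravel()              # (3N,)
    basis = shape_basis.reshape(shape_basis.shape[0], -1).T    # (3N, K)
    # Ridge-regularized normal equations keep the fit close to the mean shape.
    A = basis.T @ basis + reg * np.eye(basis.shape[1])
    coeffs = np.linalg.solve(A, basis.T @ residual)
    fitted = mean_shape + (basis @ coeffs).reshape(-1, 3)
    return fitted, coeffs
```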

The resulting virtual people can be animated in detail: they can move all joints, even individual fingers, and communicate through facial expressions, speech and gestures. “The most important feature, though, is that they reflect the user photorealistically,” says Botsch. This is crucial because personalised avatars are much more readily accepted by users, as shown in a study carried out by computer graphics researchers from Bielefeld in cooperation with Professor Dr. Marc Latoschik from the University of Würzburg. “The study shows that users identify better with such a custom-tailored, individualised avatar than with an avatar that does not resemble them, even if it looks similarly realistic,” says Latoschik, who holds the chair for Human-Computer Interaction in Würzburg.

“Until a few months ago, the individual processing steps for creating avatars were scarcely automated,” says Botsch. The new process has changed this.

For the current study, his team has developed algorithms that accelerate the complete processing of the photo data right up to the animatable avatar. “This way, we can now generate the avatar of any person within ten minutes,” says Jascha Achenbach, lead author of the resulting publication. “We create the virtual avatars in a format that is also used by the computer games industry,” says Thomas Waltemate, who, like Achenbach, works in Botsch’s research group. This makes the avatar generation also interesting for commercial use.

The researchers presented their development of accelerated avatar generation a month ago in Gothenburg (Sweden) at the conference “ACM Symposium on Virtual Reality Software and Technology.” The study on how personalised avatars are accepted by users will be presented at the IEEE Conference on Virtual Reality and 3D User Interfaces, the world’s leading conference on virtual reality, in the spring of 2018.

The virtual training environment ICSpace is a joint development by six research groups of the Excellence Cluster CITEC. ICSpace stands for “Intelligent Coaching Space.” The system analyses the movements of athletes and rehabilitation patients and helps to correct them. It is based on an open space with two projection walls (front and floor), known as Cave Automatic Virtual Environment (CAVE). The CAVE makes it possible to simulate a walk-in, computer-generated virtual environment. Test subjects wear 3D glasses similar to those worn in the cinema. It is the first system of its kind worldwide to simulate the complete training process and adapt flexibly to the user’s behaviour. Mario Botsch coordinates the project together with computer scientist Professor Dr. Stefan Kopp and sport and cognitive scientist Professor Dr. Thomas Schack.

ICSpace is one of four large-scale projects at CITEC. The Excellence Cluster CITEC is providing the project with a total of 1.6 million euros in funding until the end of 2017. The other projects are the robot service apartment, the walking robot Hector and the self-learning grasp system “Famula.” CITEC will be funded by the German Research Foundation (DFG) on behalf of the German federal and state governments (EXC 277) as part of the Excellence Initiative until the end of 2018. In the new Excellence Strategy of the federal and state governments, Bielefeld University is applying for a cluster based on the research of the current Excellence Cluster CITEC.

Story Source:

Materials provided by Universitaet Bielefeld. Note: Content may be edited for style and length.

Memristors power quick-learning neural network

A new type of neural network made with memristors can dramatically improve the efficiency of teaching machines to think like humans.

The network, called a reservoir computing system, could predict words before they are said during conversation, and help predict future outcomes based on the present.

The research team that created the reservoir computing system, led by Wei Lu, professor of electrical engineering and computer science at the University of Michigan, recently published their work in Nature Communications.

Reservoir computing systems, which improve on a typical neural network’s capacity and reduce the required training time, have been created in the past with larger optical components. However, the U-M group created their system using memristors, which require less space and can be integrated more easily into existing silicon-based electronics.

Memristors are a special type of resistive device that can both perform logic and store data. This contrasts with typical computer systems, where processors perform logic separate from memory modules. In this study, Lu’s team used a special memristor that memorizes events only in the near history.

Inspired by brains, neural networks are composed of neurons, or nodes, and synapses, the connections between nodes.

To train a neural network for a task, the network takes in a large set of questions and the answers to those questions. In this process, called supervised learning, the connections between nodes are weighted more heavily or lightly to minimize the amount of error in achieving the correct answer.

Once trained, a neural network can then be tested without knowing the answer. For example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.

“A lot of times, it takes days or months to train a network,” Lu said. “It is very expensive.”

Image recognition is also a relatively simple problem, as it doesn’t require any information apart from a static image. More complex tasks, such as speech recognition, can depend highly on context and require neural networks to have knowledge of what has just occurred, or what has just been said.

“When transcribing speech to text or translating languages, a word’s meaning and even pronunciation will differ depending on the previous syllables,” Lu said.

This requires a recurrent neural network, which incorporates loops within the network that give the network a memory effect. However, training these recurrent neural networks is especially expensive, Lu said.

Reservoir computing systems built with memristors, however, can skip most of the expensive training process and still provide the network the capability to remember. This is because the most critical component of the system — the reservoir — does not require training.

When a set of data is inputted into the reservoir, the reservoir identifies important time-related features of the data, and hands it off in a simpler format to a second network. This second network then only needs training like simpler neural networks, changing weights of the features and outputs that the first network passed on until it achieves an acceptable level of error.

“The beauty of reservoir computing is that while we design it, we don’t have to train it,” Lu said.
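
That division of labor, a fixed reservoir feeding a small trained readout, is the same one used in software echo state networks. The sketch below shows a conventional software reservoir with a ridge-regression readout; it is offered only as an analogy, since the U-M system realizes the reservoir physically in memristors rather than in a random weight matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_reservoir(inputs, n_reservoir=100, leak=0.3):
    """Drive a fixed random reservoir with an input sequence.

    inputs: (T, n_in) time series. Returns (T, n_reservoir) reservoir states.
    Neither the reservoir weights nor the input weights are ever trained.
    """
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1
    states = np.zeros((len(inputs), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(inputs):
        # Leaky update: the state keeps a fading memory of recent inputs.
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

def train_readout(states, targets, reg=1e-6):
    """Only this linear readout is trained, here by ridge regression."""
    A = states.T @ states + reg * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)
```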

The team proved the reservoir computing concept using a test of handwriting recognition, a common benchmark among neural networks. Numerals were broken up into rows of pixels, and fed into the computer with voltages like Morse code, with zero volts for a dark pixel and a little over one volt for a white pixel.

Using only 88 memristors as nodes to identify handwritten versions of numerals, compared to a conventional network that would require thousands of nodes for the task, the reservoir achieved 91 percent accuracy.

Reservoir computing systems are especially adept at handling data that varies with time, like a stream of data or words, or a function depending on past results.

To demonstrate this, the team tested a complex function that depended on multiple past results, which is common in engineering fields. The reservoir computing system was able to model the complex function with minimal error.

Lu plans on exploring two future paths with this research: speech recognition and predictive analysis.

“We can make predictions on natural spoken language, so you don’t even have to say the full word,” Lu said. “We could actually predict what you plan to say next.”

In predictive analysis, Lu hopes to use the system to take in signals with noise, like static from far-off radio stations, and produce a cleaner stream of data.

“It could also predict and generate an output signal even if the input stopped,” he said.

Robot drummer posts pictures of jamming sessions on Facebook

Scientists have developed a drumming robot that plays along with human keyboard players and posts pictures of the sessions on Facebook.

The study, by researchers at Queen Mary University of London, looks at how humans interact with robots over time and in particular how social media can enhance that relationship.

Relationships between humans and robots require both long-term engagement and a feeling of believability, or social presence, towards the robot. The researchers contend that music can provide this engagement and developed a robotic drummer, called Mortimer, who is able to compose music in real time in response to human pianists.

To help trigger a sense of believability, the researchers extended Mortimer’s capabilities to allow him to take pictures during sessions and post them with a supporting comment to Facebook while also tagging the keyboard player.

The study was published in IEEE Transactions on Cognitive and Developmental Systems.

Lead author Louis McCallum, from Queen Mary’s School of Electronic Engineering and Computer Science, said: “We’d previously uncovered new and exciting findings that suggested open-ended creative activities could be a strong bedrock to build long-term human-robot relationships. This particular research sought to examine whether the relationships that were initially developed face-to-face, but under lab conditions, could be extended to the more open, but virtual, realm of social media.”

During the study, two groups of participants were chosen. One group was sent a Facebook friend request from Mortimer allowing the robot to tag them in pictures taken during the session. The other group was not sent a request and had no contact with the robot outside of the sessions.

Participants took part in six weekly sessions in a controlled studio environment and were instructed to stay for a minimum of 20 minutes but could optionally stay for up to 45.

They were greeted by Mortimer, who communicates via speech synthesis software, and used a tablet to interact with him.

During each session, a picture of Mortimer and the participant playing was taken automatically by a webcam in the lab and an accompanying comment was generated. In some instances, the participants also took a selfie with Mortimer and posted it to their own Facebook accounts.

From the Facebook data, there were considerably more ‘likes’ for posts made by a user as opposed to one of Mortimer’s posts that the user was tagged in.

Dr McCallum said: “One of the most interesting findings was that posts by human participants about the music sessions between them carried significantly more weight within their networks than posts by the robot itself. This suggests a discerning approach to generated posts that is especially relevant in today’s world of social media bots, automated content and fake news.”

The researchers found that the time spent with the robot increased over the study, but session length for the group who were Facebook friends with Mortimer decreased over time. They suggest this may be because the participants had additional contact with Mortimer outside the sessions.

Co-author of the study Professor Peter McOwan added: “There are signs of high engagement, such as high self-reported repeat interaction, across all participants that strengthen previous results about the use of music as a good base for improving long-term human-robot relationships. But we found the effects of extending the relationship into the virtual world were less pronounced than expected. This doesn’t mean that virtual interaction doesn’t help, but maybe the quality of the interaction needs to be improved.”

The study was funded by The Engineering and Physical Sciences Research Council (EPSRC).

Story Source:

Materials provided by Queen Mary University of London. Note: Content may be edited for style and length.

Inner workings of victorious AI revealed by researchers

Libratus, an artificial intelligence that defeated four top professional poker players in no-limit Texas Hold’em earlier this year, uses a three-pronged approach to master a game with more decision points than atoms in the universe, researchers at Carnegie Mellon University report.

In a paper being published online by the journal Science, Tuomas Sandholm, professor of computer science, and Noam Brown, a Ph.D. student in the Computer Science Department, detail how their AI achieved superhuman performance by breaking the game into computationally manageable parts and, based on its opponents’ game play, fixing potential weaknesses in its strategy during the competition.

AI programs have defeated top humans in checkers, chess and Go — all challenging games, but ones in which both players know the exact state of the game at all times. Poker players, by contrast, contend with hidden information — what cards their opponents hold and whether an opponent is bluffing.

In a 20-day competition involving 120,000 hands at Rivers Casino in Pittsburgh during January 2017, Libratus became the first AI to defeat top human players at heads-up no-limit Texas Hold’em — the primary benchmark and long-standing challenge problem for imperfect-information game-solving by AIs.

Libratus beat each of the players individually in the two-player game and collectively amassed more than $1.8 million in chips. Measured in milli-big blinds per hand (mbb/hand), a standard used by imperfect-information game AI researchers, Libratus decisively defeated the humans by 147 mbb/hand. In poker lingo, this is 14.7 big blinds per 100 hands.
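
For scale, that margin squares with the chip total: 147 mbb/hand is 0.147 big blinds per hand, so over 120,000 hands it comes to roughly 17,600 big blinds, which, assuming a $100 big blind (a detail not stated in this article), is on the order of the $1.8 million quoted above.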

“The techniques in Libratus do not use expert domain knowledge or human data and are not specific to poker,” Sandholm and Brown said in the paper. “Thus they apply to a host of imperfect-information games.” Such hidden information is ubiquitous in real-world strategic interactions, they noted, including business negotiation, cybersecurity, finance, strategic pricing and military applications.

Libratus includes three main modules, the first of which computes an abstraction of the game that is smaller and easier to solve than by considering all 10^161 (the number 1 followed by 161 zeroes) possible decision points in the game. It then creates its own detailed strategy for the early rounds of Texas Hold’em and a coarse strategy for the later rounds. This strategy is called the blueprint strategy.

One example of these abstractions in poker is grouping similar hands together and treating them identically.

“Intuitively, there is little difference between a King-high flush and a Queen-high flush,” Brown said. “Treating those hands as identical reduces the complexity of the game and thus makes it computationally easier.” In the same vein, similar bet sizes also can be grouped together.
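
As a toy illustration of action abstraction (not Libratus’ actual code), similar bet sizes can be snapped to the nearest member of a small menu of pot fractions; similar hands are grouped into buckets in an analogous way.

```python
def abstract_bet(bet, pot, allowed_fractions=(0.5, 0.75, 1.0, 2.0)):
    """Map an observed bet to the nearest bet size in the abstraction.

    Bets are compared as fractions of the current pot, a common way to
    group similar bet sizes together; the menu of fractions is illustrative.
    """
    fraction = bet / pot
    return min(allowed_fractions, key=lambda f: abs(f - fraction)) * pot

# Example: a 0.8-pot bet is treated like a 0.75-pot bet.
print(abstract_bet(bet=80, pot=100))   # 75.0
```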

But in the final rounds of the game, a second module constructs a new, finer-grained abstraction based on the state of play. It also computes a strategy for this subgame in real-time that balances strategies across different subgames using the blueprint strategy for guidance — something that needs to be done to achieve safe subgame solving. During the January competition, Libratus performed this computation using the Pittsburgh Supercomputing Center’s Bridges computer.

Whenever an opponent makes a move that is not in the abstraction, the module computes a solution to this subgame that includes the opponent’s move. Sandholm and Brown call this nested subgame solving.

DeepStack, an AI created by the University of Alberta to play heads-up, no-limit Texas Hold’em, also includes a similar algorithm, called continual re-solving; DeepStack has yet to be tested against top professional players, however.

The third module is designed to improve the blueprint strategy as competition proceeds. Typically, Sandholm said, AIs use machine learning to find mistakes in the opponent’s strategy and exploit them. But that also opens the AI to exploitation if the opponent shifts strategy.

Instead, Libratus’ self-improver module analyzes opponents’ bet sizes to detect potential holes in Libratus’ blueprint strategy. Libratus then adds these missing decision branches, computes strategies for them, and adds them to the blueprint.

In addition to beating the human pros, Libratus was evaluated against the best prior poker AIs. These included Baby Tartanian8, a bot developed by Sandholm and Brown that won the 2016 Annual Computer Poker Competition held in conjunction with the Association for the Advancement of Artificial Intelligence Annual Conference.

Whereas Baby Tartanian8 beat the next two strongest AIs in the competition by 12 (plus/minus 10) mbb/hand and 24 (plus/minus 20) mbb/hand, Libratus bested Baby Tartanian8 by 63 (plus/minus 28) mbb/hand. DeepStack has not been tested against other AIs, the authors noted.

“The techniques that we developed are largely domain independent and can thus be applied to other strategic imperfect-information interactions, including non-recreational applications,” Sandholm and Brown concluded. “Due to the ubiquity of hidden information in real-world strategic interactions, we believe the paradigm introduced in Libratus will be critical to the future growth and widespread application of AI.”

The technology has been exclusively licensed to Strategic Machine, Inc., a company founded by Sandholm to apply strategic reasoning technologies to many different applications.

Computer systems predict objects’ responses to physical forces

Josh Tenenbaum, a professor of brain and cognitive sciences at MIT, directs research on the development of intelligence at the Center for Brains, Minds, and Machines, a multiuniversity, multidisciplinary project based at MIT that seeks to explain and replicate human intelligence.

Presenting their work at this year’s Conference on Neural Information Processing Systems, Tenenbaum and one of his students, Jiajun Wu, are co-authors on four papers that examine the fundamental cognitive abilities that an intelligent agent requires to navigate the world: discerning distinct objects and inferring how they respond to physical forces.

By building computer systems that begin to approximate these capacities, the researchers believe they can help answer questions about what information-processing resources human beings use at what stages of development. Along the way, the researchers might also generate some insights useful for robotic vision systems.

“The common theme here is really learning to perceive physics,” Tenenbaum says. “That starts with seeing the full 3-D shapes of objects, and multiple objects in a scene, along with their physical properties, like mass and friction, then reasoning about how these objects will move over time. Jiajun’s four papers address this whole space. Taken together, we’re starting to be able to build machines that capture more and more of people’s basic understanding of the physical world.”

Three of the papers deal with inferring information about the physical structure of objects, from both visual and aural data. The fourth deals with predicting how objects will behave on the basis of that data.

Two-way street

Something else that unites all four papers is their unusual approach to machine learning, a technique in which computers learn to perform computational tasks by analyzing huge sets of training data. In a typical machine-learning system, the training data are labeled: Human analysts will have, say, identified the objects in a visual scene or transcribed the words of a spoken sentence. The system attempts to learn what features of the data correlate with what labels, and it’s judged on how well it labels previously unseen data.

In Wu and Tenenbaum’s new papers, the system is trained to infer a physical model of the world — the 3-D shapes of objects that are mostly hidden from view, for instance. But then it works backward, using the model to resynthesize the input data, and its performance is judged on how well the reconstructed data matches the original data.

For instance, using visual images to build a 3-D model of an object in a scene requires stripping away any occluding objects; filtering out confounding visual textures, reflections, and shadows; and inferring the shape of unseen surfaces. Once Wu and Tenenbaum’s system has built such a model, however, it rotates it in space and adds visual textures back in until it can approximate the input data.
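
The “works backward” step amounts to training with a reconstruction objective: the inferred scene model is re-rendered and compared with the original input. Below is a minimal PyTorch-style sketch of that loop, with placeholder encoder and renderer modules standing in for the papers’ actual networks.

```python
import torch
import torch.nn as nn

class AnalysisBySynthesis(nn.Module):
    def __init__(self, encoder: nn.Module, renderer: nn.Module):
        super().__init__()
        self.encoder = encoder      # image -> latent scene representation
        self.renderer = renderer    # latent scene representation -> image

    def forward(self, image):
        scene = self.encoder(image)             # infer a model of the world
        reconstruction = self.renderer(scene)   # resynthesize the input
        return reconstruction

def training_step(model, image, optimizer):
    optimizer.zero_grad()
    reconstruction = model(image)
    # Performance is judged by how well the resynthesized data matches
    # the original input, not by comparison against human labels.
    loss = nn.functional.mse_loss(reconstruction, image)
    loss.backward()
    optimizer.step()
    return loss.item()
```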

Indeed, two of the researchers’ four papers address the complex problem of inferring 3-D models from visual data. On those papers, they’re joined by four other MIT researchers, including William Freeman, the Perkins Professor of Electrical Engineering and Computer Science, and by colleagues at DeepMind, ShanghaiTech University, and Shanghai Jiao Tong University.

Divide and conquer

The researchers’ system is based on the influential theories of the MIT neuroscientist David Marr, who died in 1980 at the tragically young age of 35. Marr hypothesized that in interpreting a visual scene, the brain first creates what he called a 2.5-D sketch of the objects it contained — a representation of just those surfaces of the objects facing the viewer. Then, on the basis of the 2.5-D sketch — not the raw visual information about the scene — the brain infers the full, three-dimensional shapes of the objects.

“Both problems are very hard, but there’s a nice way to disentangle them,” Wu says. “You can do them one at a time, so you don’t have to deal with both of them at the same time, which is even harder.”

Wu and his colleagues’ system needs to be trained on data that include both visual images and 3-D models of the objects the images depict. Constructing accurate 3-D models of the objects depicted in real photographs would be prohibitively time consuming, so initially, the researchers train their system using synthetic data, in which the visual image is generated from the 3-D model, rather than vice versa. The process of creating the data is like that of creating a computer-animated film.

Once the system has been trained on synthetic data, however, it can be fine-tuned using real data. That’s because its ultimate performance criterion is the accuracy with which it reconstructs the input data. It’s still building 3-D models, but they don’t need to be compared to human-constructed models for performance assessment.

In evaluating their system, the researchers used a measure called intersection over union, which is common in the field. On that measure, their system outperforms its predecessors. But a given intersection-over-union score leaves a lot of room for local variation in the smoothness and shape of a 3-D model. So Wu and his colleagues also conducted a qualitative study of the models’ fidelity to the source images. Of the study’s participants, 74 percent preferred the new system’s reconstructions to those of its predecessors.
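
Intersection over union for 3-D shapes is usually computed on occupancy grids: the volume two models share, divided by the volume either one occupies. A small sketch, assuming both models are given as boolean voxel grids:

```python
import numpy as np

def voxel_iou(pred, target):
    """Intersection over union of two boolean occupancy grids of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0
```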

All that fall

In another of Wu and Tenenbaum’s papers, on which they’re joined again by Freeman and by researchers at MIT, Cambridge University, and ShanghaiTech University, they train a system to analyze audio recordings of an object being dropped, to infer properties such as the object’s shape, its composition, and the height from which it fell. Again, the system is trained to produce an abstract representation of the object, which, in turn, it uses to synthesize the sound the object would make when dropped from a particular height. The system’s performance is judged on the similarity between the synthesized sound and the source sound.

Finally, in their fourth paper, Wu, Tenenbaum, Freeman, and colleagues at DeepMind and Oxford University describe a system that begins to model humans’ intuitive understanding of the physical forces acting on objects in the world. This paper picks up where the previous papers leave off: It assumes that the system has already deduced objects’ 3-D shapes.

Those shapes are simple: balls and cubes. The researchers trained their system to perform two tasks. The first is to estimate the velocities of balls traveling on a billiard table and, on that basis, to predict how they will behave after a collision. The second is to analyze a static image of stacked cubes and determine whether they will fall and, if so, where the cubes will land.

Wu developed a representational language he calls scene XML that can quantitatively characterize the relative positions of objects in a visual scene. The system first learns to describe input data in that language. It then feeds that description to something called a physics engine, which models the physical forces acting on the represented objects. Physics engines are a staple of both computer animation, where they generate the movement of clothing, falling objects, and the like, and of scientific computing, where they’re used for large-scale physical simulations.
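
At its simplest, a physics engine advances object states with update rules such as the one below, which gives the post-collision velocities for two equal-mass billiard balls. This is the generic textbook rule, not the engine used in the paper.

```python
import numpy as np

def elastic_collision(p1, v1, p2, v2):
    """Post-collision velocities for two equal-mass, frictionless balls.

    p1, p2: 2-D positions at the moment of contact; v1, v2: velocities.
    The velocity components along the line connecting the centers are
    exchanged; the tangential components are unchanged.
    """
    n = (p2 - p1) / np.linalg.norm(p2 - p1)   # unit vector between centers
    v1_new = v1 - np.dot(v1 - v2, n) * n
    v2_new = v2 - np.dot(v2 - v1, n) * n
    return v1_new, v2_new

# Head-on example: the moving ball stops and the stationary ball takes its speed.
print(elastic_collision(np.array([0., 0.]), np.array([1., 0.]),
                        np.array([1., 0.]), np.array([0., 0.])))
```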

After the physics engine has predicted the motions of the balls and boxes, that information is fed to a graphics engine, whose output is, again, compared with the source images. As with the work on visual discrimination, the researchers train their system on synthetic data before refining it with real data.

In tests, the researchers’ system again outperformed its predecessors. In fact, in some of the tests involving billiard balls, it frequently outperformed human observers as well.

Software enables robots to be controlled in virtual reality

Even as autonomous robots get better at doing things on their own, there will still be plenty of circumstances where humans might need to step in and take control. New software developed by Brown University computer scientists enables users to control robots remotely using virtual reality, which helps users to become immersed in a robot’s surroundings despite being miles away physically.

The software connects a robot’s arms and grippers as well as its onboard cameras and sensors to off-the-shelf virtual reality hardware via the internet. Using handheld controllers, users can control the position of the robot’s arms to perform intricate manipulation tasks just by moving their own arms. Users can step into the robot’s metal skin and get a first-person view of the environment, or can walk around the robot to survey the scene in the third person — whichever is easier for accomplishing the task at hand. The data transferred between the robot and the virtual reality unit is compact enough to be sent over the internet with minimal lag, making it possible for users to guide robots from great distances.

“We think this could be useful in any situation where we need some deft manipulation to be done, but where people shouldn’t be,” said David Whitney, a graduate student at Brown who co-led the development of the system. “Three examples we were thinking of specifically were in defusing bombs, working inside a damaged nuclear facility or operating the robotic arm on the International Space Station.”

Whitney co-led the work with Eric Rosen, an undergraduate student at Brown. Both work in Brown’s Humans to Robots lab, which is led by Stefanie Tellex, an assistant professor of computer science. A paper describing the system and evaluating its usability was presented this week at the International Symposium on Robotics Research in Chile.

Watch video of the system in use here: https://www.youtube.com/watch?v=e3jUbQKciC4

Even highly sophisticated robots are often remotely controlled using some fairly unsophisticated means — often a keyboard or something like a video game controller and a two-dimensional monitor. That works fine, Whitney and Rosen say, for tasks like driving a wheeled robot around or flying a drone, but can be problematic for more complex tasks.

“For things like operating a robotic arm with lots of degrees of freedom, keyboards and game controllers just aren’t very intuitive,” Whitney said. And mapping a three-dimensional environment onto a two-dimensional screen could limit one’s perception of the space the robot inhabits.

Whitney and Rosen thought virtual reality might offer a more intuitive and immersive option. Their software links together a Baxter research robot with an HTC Vive, a virtual reality system that comes with hand controllers. The software uses the robot’s sensors to create a point-cloud model of the robot itself and its surroundings, which is transmitted to a remote computer connected to the Vive. Users can see that space in the headset and virtually walk around inside it. At the same time, users see live high-definition video from the robot’s wrist cameras for detailed views of manipulation tasks to be performed.
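
On the robot side, the pipeline resembles a standard ROS setup: subscribe to the sensor topics and stream them to the remote machine driving the headset. The sketch below shows only the subscription half, with a hypothetical topic name and a placeholder forwarding step rather than the actual ros_reality code.

```python
import rospy
from sensor_msgs.msg import PointCloud2

def forward_to_vr(cloud_msg):
    # Placeholder: in the real system the cloud would be compressed and
    # streamed over the internet to the computer connected to the Vive.
    rospy.loginfo("received point cloud with %d bytes of data", len(cloud_msg.data))

def main():
    rospy.init_node("vr_teleop_bridge")
    # Hypothetical topic name; the robot publishes its own sensor topics.
    rospy.Subscriber("/camera/depth/points", PointCloud2, forward_to_vr)
    rospy.spin()

if __name__ == "__main__":
    main()
```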

For their study, the researchers showed that they could create an immersive experience for users while keeping the data load small enough that it could be carried over the internet without a distracting lag. A user in Providence, R.I., for example, was able to perform a manipulation task — the stacking of plastic cups one inside the other — using a robot 41 miles away in Cambridge, Mass.

In additional studies, 18 novice users were able to complete the cup-stacking task 66 percent faster in virtual reality compared with a traditional keyboard-and-monitor interface. Users also reported enjoying the virtual interface more, and they found the manipulation tasks to be less demanding compared with keyboard and monitor.

Rosen thinks the increased speed in performing the task was due to the intuitiveness of the virtual reality interface.

“In VR, people can just move the robot like they move their bodies, and so they can do it without thinking about it,” Rosen said. “That lets people focus on the problem or task at hand without the increased cognitive load of trying to figure out how to move the robot.”

The researchers plan to continue developing the system. The first iteration focused on a fairly simple manipulation task with a robot that was stationary in the environment. They’d like to try more complex tasks and later combine manipulation with navigation. They’d also like to experiment with mixed autonomy, where the robot does some tasks on its own and the user takes over for other tasks.

The researchers have made the system freely available on the web (github.com/h2r/ros_reality). They hope other robotics researchers might give it a try and take it in new directions of their own.

In addition to Whitney, Rosen and Tellex, other authors on the paper were Elizabeth Phillips, a postdoctoral researcher with Brown’s Humanity Centered Robotics Initiative, and George Konidaris, an assistant professor of computer science. The work was funded in part by the Defense Advanced Research Projects Agency (DARPA) (W911NF-15-1-0503, YFA: D15AP00104, YFA: GR5245014 and D15AP00102) and NASA (GR5227035).

Engineers program tiny robots to move, think like insects

While engineers have had success building tiny, insect-like robots, programming them to behave autonomously like real insects continues to present technical challenges. A group of Cornell engineers has been experimenting with a new type of programming that mimics the way an insect’s brain works, which could soon have people wondering if that fly on the wall is actually a fly.

For a robot to sense a gust of wind using tiny hair-like metal probes embedded on its wings, adjust its flight accordingly, and plan its path as it attempts to land on a swaying flower, the necessary computer processing power would require it to carry a desktop-size computer on its back. Silvia Ferrari, professor of mechanical and aerospace engineering and director of the Laboratory for Intelligent Systems and Controls, sees the emergence of neuromorphic computer chips as a way to shrink a robot’s payload.

Unlike traditional chips that process combinations of 0s and 1s as binary code, neuromorphic chips process spikes of electrical current that fire in complex combinations, similar to how neurons fire inside a brain. Ferrari’s lab is developing a new class of “event-based” sensing and control algorithms that mimic neural activity and can be implemented on neuromorphic chips. Because the chips require significantly less power than traditional processors, they allow engineers to pack more computation into the same payload.
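
The spiking computation these chips perform is commonly modeled in software as a leaky integrate-and-fire neuron: a membrane potential integrates input current, leaks over time, and emits a spike when it crosses a threshold. The sketch below is a generic simulation of that model, not Ferrari’s event-based algorithms.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, threshold=1.0, reset=0.0):
    """Leaky integrate-and-fire neuron driven by an input current trace."""
    v = reset
    spikes = np.zeros(len(input_current), dtype=bool)
    for t, i_in in enumerate(input_current):
        v += dt / tau * (-v + i_in)   # leak toward rest, integrate the input
        if v >= threshold:            # fire a spike and reset the potential
            spikes[t] = True
            v = reset
    return spikes

# A constant 1.2-unit input produces a regular spike train.
print(simulate_lif(np.full(200, 1.2)).sum(), "spikes in 200 ms")
```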

Ferrari’s lab has teamed up with the Harvard Microrobotics Laboratory, which has developed an 80-milligram flying RoboBee outfitted with a number of vision, optical flow and motion sensors. While the robot currently remains tethered to a power source, Harvard researchers are working on eliminating the restraint with the development of new power sources. The Cornell algorithms will help make RoboBee more autonomous and adaptable to complex environments without significantly increasing its weight.

“Getting hit by a wind gust or a swinging door would cause these small robots to lose control. We’re developing sensors and algorithms to allow RoboBee to avoid the crash, or if crashing, survive and still fly,” said Ferrari. “You can’t really rely on prior modeling of the robot to do this, so we want to develop learning controllers that can adapt to any situation.”

To speed development of the event-based algorithms, a virtual simulator was created by Taylor Clawson, a doctoral student in Ferrari’s lab. The physics-based simulator models the RoboBee and the instantaneous aerodynamic forces it faces during each wing stroke. As a result, the model can accurately predict RoboBee’s motions during flights through complex environments.

“The simulation is used both in testing the algorithms and in designing them,” said Clawson, who helped develop an autonomous flight controller for the robot using biologically inspired programming that functions as a neural network. “This network is capable of learning in real time to account for irregularities in the robot introduced during manufacturing, which make the robot significantly more challenging to control.”

Aside from greater autonomy and resiliency, Ferrari said her lab plans to help outfit RoboBee with new micro devices such as a camera, expanded antennae for tactile feedback, contact sensors on the robot’s feet and airflow sensors that look like tiny hairs.

“We’re using RoboBee as a benchmark robot because it’s so challenging, but we think other robots that are already untethered would greatly benefit from this development because they have the same issues in terms of power,” said Ferrari.

One robot that is already benefiting is the Harvard Ambulatory Microrobot, a four-legged machine just 17 millimeters long and weighing less than 3 grams. It can scamper at a speed of 0.44 meters per second, but Ferrari’s lab is developing event-based algorithms that will help complement the robot’s speed with agility.

Ferrari is continuing the work using a four-year, $1 million grant from the Office of Naval Research. She’s also collaborating with leading research groups from a number of universities fabricating neuromorphic chips and sensors.

Story Source:

Materials provided by Cornell University. Original written by Syl Kacapyr. Note: Content may be edited for style and length.

Tailgating doesn’t get you there faster: Study

We’ve all experienced “phantom traffic jams” that arise without any apparent cause. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently showed that we’d have fewer if we made one small change to how we drive: no more tailgating.

Specifically, the team’s new journal article argues that if we all kept an equal distance between the cars in front of and behind us — an approach that MIT professor Berthold Horn describes as “bilateral control” — we would all get where we’re going almost twice as quickly.

“We humans tend to view the world in terms of what’s ahead of us, both literally and conceptually, so it might seem counter-intuitive to look backwards,” says Horn, who co-authored the article with postdoctoral associate Liang Wang. “But driving like this could have a dramatic effect in reducing travel time and fuel consumption without having to build more roads or make other changes to infrastructure.”

Horn concedes that drivers themselves are unlikely to change their forward-looking ways anytime soon, so he suggests that car companies update their adaptive cruise-control systems and add sensors to both their front and rear bumpers. (Most of today’s systems only have front sensors.)

According to Horn, traffic would get noticeably better even if just a small percentage of all cars were outfitted with such systems. In future work funded in part by Toyota, he plans to do simulations to test whether this method is not just faster for drivers, but also safer.

The team’s work has been inspired in part by how flocks of starling birds move in tandem.

“Birds have been doing this for centuries,” says Horn. “To program this behavior, you’d want to look at the birds all around you and not just the ones in front of you.”

According to the CSAIL team, for decades there have been hundreds of academic papers looking at the problem of traffic flow, but very few about how to actually solve it.

One proposed approach is to electronically connect vehicles together to coordinate their distances between each other. But so-called “platooning” methods require detailed coordination and a massive network of connected vehicles. In contrast, the CSAIL team’s approach would simply require new software and some inexpensive hardware updates.

Horn first proposed the concept of “bilateral control” in 2013 at the level of a single car and the cars directly surrounding it. With the new paper, he has taken a more macro-level view, looking at the density of entire highways and how miles of traffic patterns can be affected by individual cars changing speeds (which his team refers to as “perturbations”).

“Our work shows that, if drivers all keep an equal distance between the cars on either side of them, such ‘perturbations’ would disappear as they travel down a line of traffic, rather than amplify to create a traffic jam,” says Horn.
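
That behavior can be written as a simple feedback law in which each car accelerates in proportion to how unequal its front and rear gaps, and the corresponding relative speeds, are. The sketch below is schematic, with illustrative gains rather than values from the paper.

```python
def bilateral_acceleration(x_prev, x_self, x_next, v_prev, v_self, v_next,
                           kd=0.5, kv=0.5):
    """Acceleration command for the middle car of a three-car window.

    x_prev / x_next: positions of the cars behind and ahead (x_next > x_self).
    The command drives the front and rear gaps, and the corresponding
    relative speeds, toward equality; the gains kd and kv are illustrative.
    """
    front_gap = x_next - x_self
    rear_gap = x_self - x_prev
    gap_error = front_gap - rear_gap
    speed_error = (v_next - v_self) - (v_self - v_prev)
    return kd * gap_error + kv * speed_error
```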

The team’s article will be published this week in the journal IEEE Transactions on Intelligent Transportation Systems.

Story Source:

Materials provided by Massachusetts Institute of Technology, CSAIL. Note: Content may be edited for style and length.