Transparent eel-like soft robot can swim silently underwater

An innovative, eel-like robot developed by engineers and marine biologists at the University of California can swim silently in salt water without an electric motor. Instead, the robot uses artificial muscles filled with water to propel itself. The foot-long robot, which is connected to an electronics board that remains on the surface, is also virtually transparent.

The team, which includes researchers from UC San Diego and UC Berkeley, details its work in the April 25 issue of Science Robotics. Researchers say the bot is an important step toward a future when soft robots can swim in the ocean alongside fish and invertebrates without disturbing or harming them. Today, most underwater vehicles designed to observe marine life are rigid, submarine-like craft powered by electric motors with noisy propellers.

“Instead of propellers, our robot uses soft artificial muscles to move like an eel underwater without making any sound,” said Caleb Christianson, a Ph.D. student at the Jacobs School of Engineering at UC San Diego.

One key innovation was using the salt water in which the robot swims to help generate the electrical forces that propel it. The bot is equipped with cables that apply voltage both to the salt water surrounding it and to pouches of water inside its artificial muscles. The robot’s electronics then deliver negative charges in the water just outside the robot and positive charges inside the robot that activate the muscles. The electrical charges cause the muscles to bend, generating the robot’s undulating swimming motion. The charges are located just outside the robot’s surface and carry very little current, so they are safe for nearby marine life.

“Our biggest breakthrough was the idea of using the environment as part of our design,” said Michael T. Tolley, the paper’s corresponding author and a professor of mechanical engineering at the Jacobs School at UC San Diego. “There will be more steps to creating an efficient, practical, untethered eel robot, but at this point we have proven that it is possible.”

Previously, other research groups had developed robots with similar technology. But to power these robots, engineers were using materials that need to be held in constant tension inside semi-rigid frames. The Science Robotics study shows that the frames are not necessary.

“This is in a way the softest robot to be developed for underwater exploration,” Tolley said.

The robot was tested inside salt-water tanks filled with jellyfish, coral and fish at the Birch Aquarium at the Scripps Institution of Oceanography at UC San Diego and in Tolley’s lab.

The conductive chambers inside the robot’s artificial muscles can be loaded with fluorescent dye (as shown in the video accompanying the study and this release). In the future, the fluorescence could be used as a kind of signaling system.

Next steps also include improving the robot’s reliability and its geometry. Researchers need to improve ballast, equipping the robot with weights so that it can dive deeper. For now, engineers have improvised ballast weights with a range of objects, such as magnets. In future work, researchers envision building a head for their eel robot to house a suite of sensors.

The research was supported with a grant from the Office of Naval Research. Christianson is supported by a National Science Foundation Graduate Research Fellowship.

Videos:

http://bit.ly/eelbot (feature)

http://bit.ly/UCSDScienceRobotics (research video)

Story Source:

Materials provided by University of California – San Diego. Original written by Ioana Patringenaru. Note: Content may be edited for style and length.

Turning deep-learning AI loose on software development

Computer scientists at Rice University have created a deep-learning, software-coding application that can help human programmers navigate the growing multitude of often-undocumented application programming interfaces, or APIs.

Known as Bayou, the Rice application was created through an initiative funded by the Defense Advanced Research Projects Agency aimed at extracting knowledge from online source code repositories like GitHub. A paper on Bayou will be presented May 1 in Vancouver, British Columbia, at the Sixth International Conference on Learning Representations, a premier outlet for deep learning research. Users can try it out at askbayou.com.

Designing applications that can program computers is a long-sought grail of the branch of computer science called artificial intelligence (AI).

“People have tried for 60 years to build systems that can write code, but the problem is that these methods aren’t that good with ambiguity,” said Bayou co-creator Swarat Chaudhuri, associate professor of computer science at Rice. “You usually need to give a lot of details about what the target program does, and writing down these details can be as much work as just writing the code.

“Bayou is a considerable improvement,” he said. “A developer can give Bayou a very small amount of information — just a few keywords or prompts, really — and Bayou will try to read the programmer’s mind and predict the program they want.”

Chaudhuri said Bayou trained itself by studying millions of lines of human-written Java code. “It’s basically studied everything on GitHub, and it draws on that to write its own code.”

Bayou co-creator Chris Jermaine, a professor of computer science who co-directs Rice’s Intelligent Software Systems Laboratory with Chaudhuri, said Bayou is particularly useful for synthesizing examples of code for specific software APIs.

“Programming today is very different than it was 30 or 40 years ago,” Jermaine said. “Computers today are in our pockets, on our wrists and in billions of home appliances, vehicles and other devices. The days when a programmer could write code from scratch are long gone.”

Bayou architect Vijay Murali, a research scientist at the lab, said, “Modern software development is all about APIs. These are system-specific rules, tools, definitions and protocols that allow a piece of code to interact with a specific operating system, database, hardware platform or another software system. There are hundreds of APIs, and navigating them is very difficult for developers. They spend lots of time at question-answer sites like Stack Overflow asking other developers for help.”

Murali said developers can now begin asking some of those questions at Bayou, which will give an immediate answer.

“That immediate feedback could solve the problem right away, and if it doesn’t, Bayou’s example code should lead to a more informed question for their human peers,” Murali said.

Jermaine said the team’s primary goal is to get developers to try to extend Bayou, which has been released under a permissive open-source license.

“The more information we have about what people want from a system like Bayou, the better we can make it,” he said. “We want as many people to use it as we can get.”

Bayou is based on a method called neural sketch learning, which trains an artificial neural network to recognize high-level patterns in hundreds of thousands of Java programs. It does this by creating a “sketch” for each program it reads and then associating this sketch with the “intent” that lies behind the program.

When a user asks Bayou questions, the system makes a judgment call about what program it’s being asked to write. It then creates sketches for several of the most likely candidate programs the user might want.

“Based on that guess, a separate part of Bayou, a module that understands the low-level details of Java and can do automatic logical reasoning, is going to generate four or five different chunks of code,” Jermaine said. “It’s going to present those to the user like hits on a web search. ‘This one is most likely the correct answer, but here are three more that could be what you’re looking for.'”
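In rough outline, the flow Jermaine describes can be pictured as a small ranking pipeline: keywords go in, candidate sketches and code snippets come out, ordered by confidence. The Python sketch below is only an illustration of that flow; the sketches, scores and Java snippets are invented stand-ins, not Bayou’s actual model or output.

```python
# Toy illustration of the "keywords -> candidate sketches -> ranked code" flow
# described above. All sketches, scores and snippets here are made up for
# illustration; Bayou's real model is a neural network trained on GitHub Java code.

from dataclasses import dataclass

@dataclass
class Candidate:
    sketch: str       # high-level program pattern (the "sketch")
    score: float      # model's confidence that this matches the user's intent
    snippet: str      # concrete Java-like code synthesized from the sketch

# Stand-in for the learned model: map a few keywords to plausible sketches.
FAKE_MODEL = {
    ("file", "read"): [
        Candidate("open -> readLine loop -> close", 0.82,
                  'BufferedReader br = new BufferedReader(new FileReader(path));'),
        Candidate("Files.readAllLines", 0.61,
                  'List<String> lines = Files.readAllLines(Paths.get(path));'),
    ],
}

def synthesize(keywords):
    """Return candidate snippets ranked by score, like hits on a web search."""
    candidates = FAKE_MODEL.get(tuple(sorted(keywords)), [])
    return sorted(candidates, key=lambda c: c.score, reverse=True)

if __name__ == "__main__":
    for cand in synthesize(["read", "file"]):
        print(f"[{cand.score:.2f}] {cand.sketch}\n    {cand.snippet}")
```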

Story Source:

Materials provided by Rice University. Original written by Jade Boyd. Note: Content may be edited for style and length.

Researchers design ‘soft’ robots that can move on their own

If Star Wars’ R2-D2 is your idea of a robot, think again. Researchers led by a University of Houston engineer have reported a new class of soft robot, composed of ultrathin sensing and actuating electronics and a temperature-sensitive artificial muscle, that can adapt to the environment and crawl, similar to the movement of an inchworm or caterpillar.

Cunjiang Yu, Bill D. Cook Assistant Professor of mechanical engineering, said potential applications range from surgery and rehabilitation to search and rescue in natural disasters or on the battlefield. Because the robot body changes shape in response to its surroundings, it can slip through narrow crevices to search for survivors in the rubble left by an earthquake or bombing, he said.

“They sense the change in environment and adapt to slip through,” he said.

These soft robots, made of soft artificial muscle and ultrathin deformable sensors and actuators, have significant advantages over the traditional rigid robots used for automation and other physical tasks.

The researchers said their work, published in the journal Advanced Materials, took its inspiration from nature. “Many creatures, such as inchworms that have completely soft compliant bodies without any rigid components (e.g., bones), exhibit unprecedented abilities in adapting their shapes and morphologies and unique locomotion behaviors,” they wrote.

Traditional soft robots lack the ability to adapt to their environments or move on their own.

The prototype adaptive soft robot includes a liquid crystal elastomer, doped with carbon black nanoparticles to enhance thermal conductivity, as the artificial muscle, combined with ultrathin, mesh-shaped stretchable thermal actuators and silicon-based light sensors. The thermal actuators provide heat to activate the robot.

The prototype is small — 28.6 millimeters in length, or just over one inch — but Yu said it could easily be scaled up. That’s the next step, along with experimenting with various types of sensors. While the prototype uses heat-sensitive sensors, it could employ smart materials activated by light or other cues, he said.

“This is the first of its kind,” Yu said. “You can use other sensors, depending on what you want it to do.”

Video of robot in motion: https://www.youtube.com/watch?time_continue=3&v=fUqPPdl9ujk

Story Source:

Materials provided by University of Houston. Original written by Jeannie Kever. Note: Content may be edited for style and length.

Robot developed for automated assembly of designer nanomaterials

A current area of intense interest in nanotechnology is van der Waals heterostructures, which are assemblies of atomically thin two-dimensional (2D) crystalline materials that display attractive conduction properties for use in advanced electronic devices.

A representative 2D material is graphene, which consists of a honeycomb lattice of carbon atoms that is just one atom thick. The development of van der Waals heterostructures has been restricted by the complicated and time-consuming manual operations required to produce them. That is, the 2D crystals typically obtained by exfoliation of a bulk material need to be manually identified, collected, and then stacked by a researcher to form a van der Waals heterostructure. Such a manual process is clearly unsuitable for industrial production of electronic devices containing van der Waals heterostructures.

Now, a Japanese research team led by the Institute of Industrial Science at The University of Tokyo has solved this issue by developing an automated robot that greatly speeds up the collection of 2D crystals and their assembly to form van der Waals heterostructures. The robot consists of an automated high-speed optical microscope that detects crystals, the positions and parameters of which are then recorded in a computer database. Customized software is used to design heterostructures using the information in the database. The heterostructure is then assembled layer by layer by robotic equipment that follows the computer-designed stacking sequence. The findings were reported in Nature Communications.
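The detect, catalogue, design and stack workflow described above can be pictured as a simple software pipeline. The following Python sketch is a toy illustration under assumed data structures; the flake records, search criteria and stacking rules are hypothetical, and the real system drives an optical microscope and a transfer robot inside a glove box.

```python
# Minimal sketch of the detect -> catalogue -> design -> stack workflow described
# above. The flake records, selection criteria and stacking plan are hypothetical.

from dataclasses import dataclass

@dataclass
class Flake:
    material: str      # e.g. "graphene" or "hBN"
    x_um: float        # stage position of the flake (micrometres)
    y_um: float
    area_um2: float    # detected flake area

# 1) Detection: in practice the microscope scans the wafer and fills this database.
database = [
    Flake("graphene", 120.0, 340.0, 900.0),
    Flake("hBN",      450.0, 210.0, 1500.0),
    Flake("graphene", 800.0, 660.0, 1100.0),
]

def pick(material, min_area, used):
    """Choose an unused flake of the requested material that is large enough."""
    for flake in database:
        if flake.material == material and flake.area_um2 >= min_area and id(flake) not in used:
            used.add(id(flake))
            return flake
    raise LookupError(f"no suitable {material} flake left")

def design_stack(sequence, min_area=800.0):
    """Turn a desired layer sequence into a concrete pick-and-place plan."""
    used, plan = set(), []
    for layer, material in enumerate(sequence, start=1):
        plan.append((layer, pick(material, min_area, used)))
    return plan

if __name__ == "__main__":
    # Alternating graphene / hBN stack, assembled layer by layer.
    for layer, flake in design_stack(["graphene", "hBN", "graphene"]):
        print(f"layer {layer}: place {flake.material} from ({flake.x_um}, {flake.y_um})")
```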

“The robot can find, collect, and assemble 2D crystals in a glove box,” study first author Satoru Masubuchi says. “It can detect 400 graphene flakes an hour, which is much faster than the rate achieved by manual operations.”

When the robot was used to assemble graphene flakes into van der Waals heterostructures, it could stack up to four layers an hour with just a few minutes of human input required for each layer. The robot was used to produce a van der Waals heterostructure consisting of 29 alternating layers of graphene and hexagonal boron nitride (another common 2D material). The record layer number of a van der Waals heterostructure produced by manual operations is 13, so the robot has greatly increased our ability to access complex van der Waals heterostructures.

“A wide range of materials can be collected and assembled using our robot,” co-author Tomoki Machida explains. “This system provides the potential to fully explore van der Waals heterostructures.”

The development of this robot will greatly facilitate production of van der Waals heterostructures and their use in electronic devices, taking us a step closer to realizing devices containing atomic-level designer materials.

Story Source:

Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

An AI that makes road maps from aerial images

Map apps may have changed our world, but they still haven’t mapped all of it yet. In particular, mapping roads can be tedious: even after taking aerial images, companies like Google still have to spend many hours manually tracing out roads. As a result, they haven’t yet gotten around to mapping the vast majority of the more than 20 million miles of roads across the globe.

Gaps in maps are a problem, particularly for systems being developed for self-driving cars. To address the issue, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created RoadTracer, an automated method to build road maps that’s 45 percent more accurate than existing approaches.

The team says that RoadTracer, which works from aerial images, is not just more accurate but also more cost-effective than current approaches. MIT professor Mohammad Alizadeh says that this work will be useful both for tech giants like Google and for smaller organizations without the resources to curate maps and correct large numbers of errors.

“RoadTracer is well-suited to map areas of the world where maps are frequently out of date, which includes both places with lower population and areas where there’s frequent construction,” says Alizadeh, one of the co-authors of a new paper about the system. “For example, existing maps for remote areas like rural Thailand are missing many roads. RoadTracer could help make them more accurate.”

In tests looking at aerial images of New York City, RoadTracer could correctly map 44 percent of its road junctions, which is more than twice as effective as traditional approaches based on image segmentation that could map only 19 percent.

The paper, which will be presented in June at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah, is a collaboration between MIT CSAIL and the Qatar Computing Research Institute (QCRI).

Alizadeh’s MIT co-authors include graduate students Favyen Bastani and Songtao He, and professors Hari Balakrishnan, Sam Madden, and David DeWitt. QCRI co-authors include senior software engineer Sofiane Abbar and Sanjay Chawla, who is the research director of QCRI’s Data Analytics Group.

How it works

Current efforts to automate maps involve training neural networks to look at aerial images and identify individual pixels as either “road” or “not road.” Because aerial images can often be ambiguous and incomplete, such systems also require a post-processing step that’s aimed at trying to fill in some of the gaps.

Unfortunately, these so-called “segmentation” approaches are often imprecise: if the model mislabels a pixel, that error will get amplified in the final road map. Errors are particularly likely if the aerial images have trees, buildings or shadows that obscure where roads begin and end. (The post-processing step also requires making decisions based on assumptions that may not always hold up, like connecting two road segments simply because they are next to each other.)

Meanwhile, RoadTracer creates maps step-by-step. It starts at a known location on the road, and uses a neural network to examine the surrounding area to determine which point is most likely to be the next part on the road. It then adds that point and repeats the process to gradually trace out the road one step at a time.

“Rather than making thousands of different decisions at once about whether various pixels represent parts of a road, RoadTracer focuses on the simpler problem of figuring out which direction to follow when starting from a particular spot that we know is a road,” says Bastani. “This is in many ways actually a lot closer to how we as humans construct mental models of the world around us.”
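The step-by-step idea can be captured in a few lines of code: keep a current point, ask a model for the direction of the next stretch of road, take a fixed-length step, and repeat. In the Python sketch below the direction “model” is a stand-in that follows a synthetic straight road; RoadTracer’s real decisions come from a convolutional neural network looking at the surrounding aerial imagery.

```python
# Minimal sketch of the iterative tracing idea described above: start at a known
# road point, repeatedly ask a model which direction to step, and append the new
# point to the map. The direction "model" below is a placeholder, not RoadTracer's CNN.

import math

STEP_METERS = 15.0

def fake_direction_model(point):
    """Stand-in for the neural network: always suggests heading due east."""
    return 0.0  # angle in radians, 0 = east

def trace_road(start, n_steps=10):
    """Grow a polyline one step at a time from a known starting location."""
    path = [start]
    for _ in range(n_steps):
        x, y = path[-1]
        angle = fake_direction_model((x, y))
        path.append((x + STEP_METERS * math.cos(angle),
                     y + STEP_METERS * math.sin(angle)))
    return path

if __name__ == "__main__":
    for x, y in trace_road((0.0, 0.0), n_steps=5):
        print(f"({x:6.1f}, {y:6.1f})")
```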

The team trained RoadTracer on aerial images of 25 cities across six countries in North America and Europe, and then evaluated its mapping abilities on 15 other cities.

“It’s important for a mapping system to be able to perform well on cities it hasn’t trained on, because regions where automatic mapping holds the most promise are ones where existing maps are non-existent or inaccurate,” says Balakrishnan.

Bastani says that RoadTracer’s 45 percent lower error rate is essential to making automatic mapping systems practical for companies like Google.

“If the error rate is too high, then it is more efficient to map the roads manually from scratch versus removing incorrect segments from the inferred map,” says Bastani.

Still, implementing something like RoadTracer wouldn’t take people completely out of the loop: The team says that they could imagine the system proposing road maps for a large region and then having a human expert come in to double-check the design.

“That said, what’s clear is that with a system like ours you could dramatically decrease the amount of tedious work that humans would have to do,” Alizadeh says.

Indeed, one advantage to RoadTracer’s incremental approach is that it makes it much easier to correct errors — human supervisors can simply correct them and re-run the algorithm from where they left off, rather than continue to use imprecise information that trickles down to other parts of the map.

Of course, aerial images are just one piece of the puzzle. They don’t give you information about roads that have overpasses and underpasses, since those are impossible to ascertain from above. As a result, the team is also separately developing algorithms that can create maps from GPS data, and working to merge these approaches into a single system for mapping.

This project was supported in part by the Qatar Computing Research Institute.

Two robots are better than one: 5G antenna measurement research

Researchers at the National Institute of Standards and Technology (NIST) continue to pioneer new antenna measurement methods, this time for future 5G wireless communications systems.

NIST’s new Large Antenna Positioning System (LAPS) has two robotic arms designed to position “smart” or adaptable antennas, which can be mounted on base stations that handle signals to and from huge numbers of devices. Future 5G systems will operate at higher frequencies and offer more than 100 times the data-carrying capacity of today’s cellphones, while connecting billions of mobile broadband users in complex, crowded signal environments.

Among its many special capabilities, the LAPS can test transmissions to and from antennas located on fast-moving mobile devices, which requires coordination between the timing of communication signals and robot motion.

“Measurements of antenna signals are a great use for robotics,” NIST electronics engineer Jeff Guerrieri said. “The robotic arms provide antenna positioning that would be constrained by conventional measurement systems.”

NIST researchers are still validating the performance of the LAPS and are just now beginning to introduce it to industry. The system was described at a European conference last week.

Today’s mobile devices such as cell phones, consumer Wi-Fi systems and public safety radios mostly operate at frequencies below 3 gigahertz (GHz), a crowded part of the spectrum. Next-generation mobile communications are starting to use the more open frequency bands at millimeter wavelengths (30-300 GHz), but these signals are easily distorted and more likely to be affected by physical barriers such as walls or buildings. Solutions will include transmitter antenna arrays with tens to hundreds of elements that focus the antenna power into a steerable beam that can track mobile devices.

For decades, NIST has pioneered testing of high-end antennas for radar, aircraft, communications and satellites. Now, the LAPS will help foster the development of 5G wireless and spectrum-sharing systems. The dual-robot system will also help researchers understand the interference problems created by ever-increasing signal density.

The new facility is the next generation of NIST’s Configurable Robotic Millimeter-Wave Antenna (CROMMA) Facility, which has a single robotic arm. CROMMA, developed at NIST, has become a popular tool for high-frequency antenna measurements. Companies that integrate legacy antenna measurement systems are starting to use robotic arms in their product lines, facilitating the transfer of this technology to companies like The Boeing Co.

CROMMA can measure only physically small antennas. NIST developed the LAPS concept of a dual robotic arm system, one robot in a fixed position and the other mounted on a large linear rail slide to accommodate larger antennas and base stations. The system was designed and installed by NSI-MI Technologies. The LAPS also has a safety unit, including radar designed to prevent collisions of robots and antennas within the surrounding environment, and to protect operators.

The LAPS’ measurement capabilities for 5G systems include flexible scan geometries, beam tracking of mobile devices and improved accuracy and repeatability in mobile measurements.

The LAPS has replaced NIST’s conventional scanners and will be used to perform near-field measurement of basic antenna properties for aerospace and satellite companies requiring precise calibrations and performance verification. The near-field technique measures the radiated signal very close to the antenna in a controlled environment and, using mathematical algorithms developed at NIST, calculates the antenna’s performance at its operating distance, known as the far field.
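As a rough illustration of the near-field idea, the textbook planar method recovers the far-field pattern (up to constants and probe corrections) from the two-dimensional Fourier transform of the field sampled on a plane close to the antenna. The Python sketch below applies that standard transform to a synthetic aperture field; it is not NIST’s algorithm, which includes probe compensation and detailed error analysis.

```python
# Minimal sketch of the textbook planar near-field-to-far-field idea: the far-field
# angular spectrum is (up to constants and probe correction) the 2D Fourier
# transform of the field sampled on a plane near the antenna. The synthetic
# aperture field below is hypothetical.

import numpy as np

WAVELENGTH = 0.01                 # 30 GHz -> 1 cm wavelength
SAMPLE_SPACING = WAVELENGTH / 2   # half-wavelength sampling on the scan plane
N = 64

# Synthetic near-field samples: a uniformly illuminated 10 cm square aperture.
x = (np.arange(N) - N / 2) * SAMPLE_SPACING
X, Y = np.meshgrid(x, x)
aperture = ((np.abs(X) < 0.05) & (np.abs(Y) < 0.05)).astype(complex)

# Plane-wave spectrum via 2D FFT; its magnitude approximates the far-field pattern.
spectrum = np.fft.fftshift(np.fft.fft2(aperture))
pattern_db = 20 * np.log10(np.abs(spectrum) / np.abs(spectrum).max() + 1e-12)

print("peak (boresight) level:", pattern_db[N // 2, N // 2], "dB")
print("first samples along one cut:", np.round(pattern_db[N // 2, N // 2:N // 2 + 5], 1))
```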

But the ultimate goal for the LAPS is to perform dynamic, over-the-air tests of future 5G communication systems. Initial validation shows that basic mechanical operation of the LAPS is within the specified design tolerances for still and moving tests to at least 30 GHz. Final validation is ongoing.


Face recognition technology that works in the dark

Army researchers have developed an artificial intelligence and machine learning technique that produces a visible face image from a thermal image of a person’s face captured in low-light or nighttime conditions. This development could lead to enhanced real-time biometrics and post-mission forensic analysis for covert nighttime operations.

Thermal cameras like FLIR, or Forward Looking Infrared, sensors are actively deployed on aerial and ground vehicles, in watch towers and at check points for surveillance purposes. More recently, thermal cameras are becoming available for use as body-worn cameras. The ability to perform automatic face recognition at nighttime using such thermal cameras is beneficial for informing a Soldier that an individual is someone of interest, like someone who may be on a watch list.

The motivations for this technology — developed by Drs. Benjamin S. Riggan, Nathaniel J. Short and Shuowen “Sean” Hu, from the U.S. Army Research Laboratory — are to enhance both automatic and human-matching capabilities.

“This technology enables matching between thermal face images and existing biometric face databases/watch lists that only contain visible face imagery,” said Riggan, a research scientist. “The technology provides a way for humans to visually compare visible and thermal facial imagery through thermal-to-visible face synthesis.”

He said under nighttime and low-light conditions, there is insufficient light for a conventional camera to capture facial imagery for recognition without active illumination such as a flash or spotlight, which would give away the position of such surveillance cameras; however, thermal cameras that capture the heat signature naturally emanating from living skin tissue are ideal for such conditions.

“When using thermal cameras to capture facial imagery, the main challenge is that the captured thermal image must be matched against a watch list or gallery that only contains conventional visible imagery from known persons of interest,” Riggan said. “Therefore, the problem becomes what is referred to as cross-spectrum, or heterogeneous, face recognition. In this case, facial probe imagery acquired in one modality is matched against a gallery database acquired using a different imaging modality.”

This approach leverages advanced domain adaptation techniques based on deep neural networks. The fundamental approach is composed of two key parts: a non-linear regression model that maps a given thermal image into a corresponding visible latent representation and an optimization problem that projects the latent projection back into the image space.
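The two parts can be pictured as follows: one network regresses from the thermal image to a latent description of the visible face, and a separate optimization searches image space for a visible image whose encoding matches that target. The PyTorch sketch below illustrates this structure only; the tiny networks, the encoder and the loss are placeholders, not the ARL models.

```python
# Minimal sketch of the two-part approach described above: (1) a regression
# network maps a thermal image to a latent representation of the visible face,
# and (2) an optimization step searches image space for a visible image whose
# encoding matches that latent target. Sizes and losses are placeholders.

import torch
import torch.nn as nn

IMG = 64  # tiny images for illustration

thermal_to_latent = nn.Sequential(           # part 1: thermal -> visible latent
    nn.Flatten(), nn.Linear(IMG * IMG, 256), nn.ReLU(), nn.Linear(256, 128))

visible_encoder = nn.Sequential(             # fixed encoder used by part 2
    nn.Flatten(), nn.Linear(IMG * IMG, 256), nn.ReLU(), nn.Linear(256, 128))

def synthesize_visible(thermal, steps=200, lr=0.1):
    """Optimize a visible image so its encoding matches the predicted latent."""
    with torch.no_grad():
        target = thermal_to_latent(thermal)
    visible = torch.zeros(1, 1, IMG, IMG, requires_grad=True)
    opt = torch.optim.Adam([visible], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((visible_encoder(visible) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return visible.detach()

if __name__ == "__main__":
    fake_thermal = torch.rand(1, 1, IMG, IMG)
    print(synthesize_visible(fake_thermal).shape)  # torch.Size([1, 1, 64, 64])
```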

Details of this work were presented in March in a technical paper, “Thermal to Visible Synthesis of Face Images using Multiple Regions,” at the IEEE Winter Conference on Applications of Computer Vision, or WACV, in Lake Tahoe, Nevada, a technical conference that brings together scholars and scientists from academia, industry and government.

At the conference, Army researchers demonstrated that combining global information, such as features from across the entire face, with local information, such as features from discriminative fiducial regions, for example, the eyes, nose and mouth, enhanced the discriminability of the synthesized imagery. They showed how the thermal-to-visible mapped representations from both global and local regions in the thermal face signature could be used in conjunction to synthesize a refined visible face image.

The optimization problem for synthesizing an image attempts to jointly preserve the shape of the entire face and appearance of the local fiducial details. Using the synthesized thermal-to-visible imagery and existing visible gallery imagery, they performed face verification experiments using a common open source deep neural network architecture for face recognition. The architecture used is explicitly designed for visible-based face recognition. The most surprising result is that their approach achieved better verification performance than a generative adversarial network-based approach, which previously showed photo-realistic properties.

Riggan attributes this result to the fact that the game-theoretic objective for GANs immediately seeks to generate imagery that is sufficiently similar in dynamic range and photo-like appearance to the training imagery, while sometimes neglecting to preserve identifying characteristics. The approach developed by ARL preserves identity information to enhance discriminability, for example, increased recognition accuracy for both automatic face recognition algorithms and human adjudication.

As part of the paper presentation, ARL researchers showcased a near real-time demonstration of this technology. The proof of concept demonstration included the use of a FLIR Boson 320 thermal camera and a laptop running the algorithm in near real-time. This demonstration showed the audience that a captured thermal image of a person can be used to produce a synthesized visible image in situ. This work received a best paper award in the faces/biometrics session of the conference, out of more than 70 papers presented.

Riggan said he and his colleagues will continue to extend this research under the sponsorship of the Defense Forensics and Biometrics Agency to develop a robust nighttime face recognition capability for the Soldier.

Artificial intelligence helps to predict likelihood of life on other worlds

Developments in artificial intelligence may help us to predict the probability of life on other planets, according to new work by a team based at Plymouth University. The study uses artificial neural networks (ANNs) to classify planets into five types, estimating a probability of life in each case, which could be used in future interstellar exploration missions. The work is presented at the European Week of Astronomy and Space Science (EWASS) in Liverpool on 4 April by Mr Christopher Bishop.

Artificial neural networks are systems that attempt to replicate the way the human brain learns. They are one of the main tools used in machine learning, and are particularly good at identifying patterns that are too complex for a biological brain to process.

The team, based at the Centre for Robotics and Neural Systems at Plymouth University, have trained their network to classify planets into five different types, based on whether they are most like the present-day Earth, the early Earth, Mars, Venus or Saturn’s moon Titan. All five of these objects are rocky bodies known to have atmospheres, and are among the most potentially habitable objects in our Solar System.

Mr Bishop comments, “We’re currently interested in these ANNs for prioritising exploration for a hypothetical, intelligent, interstellar spacecraft scanning an exoplanet system at range.”

He adds, “We’re also looking at the use of large area, deployable, planar Fresnel antennas to get data back to Earth from an interstellar probe at large distances. This would be needed if the technology is used in robotic spacecraft in the future.”

Atmospheric observations — known as spectra — of the five Solar System bodies are presented as inputs to the network, which is then asked to classify them in terms of the planetary type. As life is currently known only to exist on Earth, the classification uses a ‘probability of life’ metric which is based on the relatively well-understood atmospheric and orbital properties of the five target types.

Bishop has trained the network with over a hundred different spectral profiles, each with several hundred parameters that contribute to habitability. So far, the network performs well when presented with a test spectral profile that it hasn’t seen before.
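As an illustration of this kind of pipeline, the Python sketch below trains a small neural network on synthetic “spectra” for the five planet types and converts the predicted class probabilities into a single habitability score. The spectra, network size and per-type life weights are all invented; the Plymouth team’s network and training data are far richer.

```python
# Minimal sketch of the classification idea described above: a small neural
# network maps a spectrum-like feature vector to one of five planet types, and
# a "probability of life" is read off from hypothetical per-type weights.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
TYPES = ["present Earth", "early Earth", "Mars", "Venus", "Titan"]
LIFE_WEIGHT = np.array([1.0, 0.7, 0.2, 0.05, 0.1])  # invented habitability weights

# Synthetic "spectra": each type gets a characteristic mean profile plus noise.
N_FEATURES, N_PER_TYPE = 50, 40
means = rng.normal(size=(len(TYPES), N_FEATURES))
X = np.vstack([means[t] + 0.3 * rng.normal(size=(N_PER_TYPE, N_FEATURES))
               for t in range(len(TYPES))])
y = np.repeat(np.arange(len(TYPES)), N_PER_TYPE)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

# Classify an unseen spectrum and combine the class probabilities into one score.
test_spectrum = means[0] + 0.3 * rng.normal(size=N_FEATURES)
probs = clf.predict_proba([test_spectrum])[0]
print("predicted type:", TYPES[int(np.argmax(probs))])
print("probability-of-life score: %.2f" % float(probs @ LIFE_WEIGHT))
```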

“Given the results so far, this method may prove to be extremely useful for categorising different types of exoplanets using results from ground-based and near-Earth observatories,” says Dr Angelo Cangelosi, the supervisor of the project.

The technique may also be ideally suited to selecting targets for future observations, given the increase in spectral detail expected from upcoming space missions such as ESA’s Ariel Space Mission and NASA’s James Webb Space Telescope.

Story Source:

Materials provided by Royal Astronomical Society. Note: Content may be edited for style and length.

Teaching machines to spot the essential

Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics.

Over the past decade machine learning has enabled ground-breaking advances in computer vision, speech recognition and translation. More recently, machine learning has also been applied to physics problems, typically for the classification of physical phases and the numerical simulation of ground states. Maciej Koch-Janusz, a researcher at the Institute for Theoretical Physics at ETH Zurich, Switzerland, and Zohar Ringel of the Hebrew University of Jerusalem, Israel, have now explored the exciting possibility of harnessing machine learning not as a numerical simulator or a ‘hypothesis tester’, but as an integral part of the physical reasoning process.

One important step in understanding a physical system consisting of a large number of entities — for example, the atoms making up a magnetic material — is to identify among the many degrees of freedom of the system those that are most relevant for its physical behaviour. This is traditionally a step that relies heavily on human intuition and experience. But now Koch-Janusz and Ringel demonstrate a machine-learning algorithm based on an artificial neural network that is capable of doing just that, as they report in the journal Nature Physics. Their algorithm takes data about a physical system without any prior knowledge about it and extracts those degrees of freedom that are most relevant to describe the system.

Technically speaking, the machine performs one of the crucial steps of one of the conceptually most profound tools of modern theoretical physics, the so-called renormalization group. The algorithm of Koch-Janusz and Ringel provides a qualitatively new approach: the internal data representations discovered by suitably designed machine-learning systems are often considered to be ‘obscure’, but the results yielded by their algorithm provide fundamental physical insight, reflecting the underlying structure of the physical system. This raises the prospect of employing machine learning in science in a collaborative fashion, combining the power of machines to distil information from vast data sets with human creativity and background knowledge.

Story Source:

Materials provided by ETH Zurich Department of Physics. Note: Content may be edited for style and length.

Pipe-crawling robot will help decommission DOE nuclear facility

A pair of autonomous robots developed by Carnegie Mellon University’s Robotics Institute will soon be driving through miles of pipes at the U.S. Department of Energy’s former uranium enrichment plant in Piketon, Ohio, to identify uranium deposits on pipe walls.

The CMU robot has demonstrated it can measure radiation levels more accurately from inside the pipe than is possible with external techniques. In addition to savings in labor costs, its use significantly reduces hazards to workers who otherwise must perform external measurements by hand, garbed in protective gear and using lifts or scaffolding to reach elevated pipes.

DOE officials estimate the robots could save tens of millions of dollars in completing the characterization of uranium deposits at the Portsmouth Gaseous Diffusion Plant in Piketon, and save perhaps $50 million at a similar uranium enrichment plant in Paducah, Kentucky.

“This will transform the way measurements of uranium deposits are made from now on,” predicted William “Red” Whittaker, robotics professor and director of the Field Robotics Center.

Heather Jones, a senior project scientist, will present two technical papers about the robot on Wednesday at the Waste Management Conference in Phoenix, Arizona. CMU also will be demonstrating a prototype of the robot during the conference.

CMU is building two of the robots, called RadPiper, and will deliver the production prototype units to DOE’s sprawling 3,778-acre Portsmouth site in May. RadPiper employs a new “disc-collimated” radiation sensor invented at CMU. The CMU team, led by Whittaker, began the project last year. The team worked closely with DOE and Fluor-BWXT Portsmouth, the decommissioning contractor, to build a prototype on a tight schedule and test it at Portsmouth last fall.

Shuttered since 2000, the plant began operations in 1954 and produced enriched uranium, including weapons-grade uranium. With 10.6 million square feet of floor space, it is DOE’s largest facility under roof, with three large buildings containing enrichment process equipment that span the size of 158 football fields. The process buildings contain more than 75 miles of process pipe.

Finding the uranium deposits, necessary before DOE decontaminates, decommissions and demolishes the facility, is a herculean task. In the first process building, human crews over the past three years have performed more than 1.4 million measurements of process piping and components manually and are close to declaring the building “cold and dark.”

“With more than 15 miles of piping to be characterized in the next process building, there is a need to seek a smarter method,” said Rodrigo V. Rimando, Jr., director of technology development for DOE’s Office of Environmental Management. “We anticipate a labor savings on the order of an eight-to-one ratio for the piping accomplished by RadPiper.” Even with RadPiper, nuclear deposits must be identified manually in some components.

RadPiper will operate initially in pipes measuring 30 inches and 42 inches in diameter and will characterize radiation levels in each foot-long segment of pipe. Those segments with potentially hazardous amounts of uranium-235, the fissile isotope of uranium used in nuclear reactors and weapons, will be removed and decontaminated. The vast majority of the plant’s piping will remain in place and will be demolished safely along with the rest of the facility.

The tetherless robot moves through the pipe at a steady pace atop a pair of flexible tracks. Though the pipe is in straight sections, the autonomous robot is equipped with a lidar and a fisheye camera to detect obstructions ahead, such as closed valves, Jones said. After completing a run of pipe, the robot automatically returns to its launch point. Integrated data analysis and report generation frees nuclear analysts from time-consuming calculations and makes reports available the same day.

The robot’s disc-collimated sensing instrument uses a standard sodium iodide sensor to count gamma rays. The sensor is positioned between two large lead discs. The lead discs block gamma rays from uranium deposits that lie beyond the one-foot section of pipe that is being characterized at any given time. Whittaker said CMU is seeking a patent on the instrument.
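The foot-by-foot characterization can be sketched as a simple binning and flagging step: as the robot advances, gamma counts are grouped into one-foot sections, and sections above a threshold are marked for removal. The Python sketch below is illustrative only; the counts, the threshold and the report format are hypothetical, and the real system applies calibrated physics models to estimate uranium-235 deposits.

```python
# Minimal sketch of the per-segment characterization idea described above: gamma
# counts are binned into one-foot sections of pipe as the robot advances, and
# sections above a (hypothetical) threshold are flagged for removal.

SEGMENT_FT = 1.0
FLAG_THRESHOLD_CPS = 250.0   # hypothetical counts-per-second alarm level

def characterize(run):
    """run: list of (position_ft, counts_per_second) samples along the pipe."""
    segments = {}
    for position_ft, cps in run:
        seg = int(position_ft // SEGMENT_FT)
        segments.setdefault(seg, []).append(cps)
    report = []
    for seg, samples in sorted(segments.items()):
        mean_cps = sum(samples) / len(samples)
        report.append((seg, mean_cps, mean_cps > FLAG_THRESHOLD_CPS))
    return report

if __name__ == "__main__":
    run = [(0.2, 40.0), (0.7, 55.0), (1.3, 300.0), (1.8, 290.0), (2.4, 40.0)]
    for seg, mean_cps, flagged in characterize(run):
        status = "FLAG for removal" if flagged else "ok"
        print(f"segment {seg:>2} ft: {mean_cps:6.1f} cps  {status}")
```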

The Robotics Institute and Whittaker have extensive experience with robots in nuclear facilities, including the design and construction of robots to aid with the cleanup of the damaged Three Mile Island reactor building in Pennsylvania and the crippled Chernobyl reactor in Ukraine.

DOE has paid CMU $1.4 million to develop the robots as part of what CMU calls the Pipe Crawling Activity Measurement System.

In addition to the Portsmouth and Paducah plants, robots could be useful elsewhere in DOE’s defense nuclear cleanup program, which is not even half complete, Rimando said. Other sites where robots might be used are the Savannah River Site in Aiken, South Carolina, and the Hanford Site in Richland, Washington.

“With at least 50 more years of nuclear cleanup to be performed, the Robotics Institute could serve as a major pipeline of roboticists for DOE’s next several workforce generations,” he added.

Origami-inspired self-locking foldable robotic arm

A research team at Seoul National University led by Professor Kyu-Jin Cho has developed an origami-inspired robotic arm that is foldable, self-assembling and also highly rigid.

They developed the novel robotic arm using a concept of variable stiffness. The arm can change its shape with a single wire, raising the possibility of practical use of the origami structure. The robotic arm is lightweight, folds flat, extends like an automatic umbrella and can even become instantly stiff.

The key principle is a collapsible locker, which enables the robotic arm to overcome the main drawbacks of origami-inspired structures: they struggle to withstand external forces and are difficult to actuate.

The variable stiffness mechanism is based on an origami principle of perpendicular folding; two perpendicular fold lines constrain each other’s movement. By using this principle, a hexagonal structure (40 × 40 × 100 mm) which weighs less than 30 g can withstand more than 12 kg of compressive load. On the other hand, the lockers can be easily unlocked and the structure is folded flat by pulling a single wire with a small force.

Benefits of the foldable robotic arm are greatest when it is attached to drones, where the weight and size constraints are most extreme. In one trial, the drone unfolded the robotic arm, picked up an object in a ditch and filmed among the trees. When the robotic arm is not in use, it folds flat for convenient maneuvering and easy take-off and landing. The proposed variable stiffness mechanism can be applied to other types of robots and structures in extreme environments such as polar regions, deserts, underwater and space.

Professor Cho said, “Soft robots have great advantages in their flexible movement, but they have a limitation in that they cannot support high load without deformation. This robotic arm uses the variable stiffness technology which gains merits of both rigid and soft robots. With this property, the robotic arm can be folded flat when not in use and can be stiff when necessary. In addition, the arm is made of composite of tough ripstop fabric and specially handled strong PET film for the practical use.”

(The researchers include Suk-Jun Kim, Dae-Young Lee and Gwang-Pil Jung, a professor at SeoulTech.)

Story Source:

Materials provided by Seoul National University. Note: Content may be edited for style and length.

Is your smile male or female?

The dynamics of how men and women smile differs measurably, according to new research, enabling artificial intelligence (AI) to automatically assign gender purely based on a smile.

Although automatic gender recognition is already available, existing methods use static images and compare fixed facial features. The new research, by the University of Bradford, is the first to use the dynamic movement of the smile to automatically distinguish between men and women.

Led by Professor Hassan Ugail, the team mapped 49 landmarks on the face, mainly around the eyes, mouth and down the nose. They used these to assess how the face changes as we smile, driven by the underlying muscle movements, including both changes in the distances between the different points and the ‘flow’ of the smile: how much, how far and how fast the different points on the face moved as the smile was formed.

They then tested whether there were noticeable differences between men and women — and found that there were, with women’s smiles being more expansive.

Lead researcher, Professor Hassan Ugail from the University of Bradford said: “Anecdotally, women are thought to be more expressive in how they smile, and our research has borne this out. Women definitely have broader smiles, expanding their mouth and lip area far more than men.”

The team created an algorithm using their analysis and tested it against video footage of 109 people as they smiled. The computer was able to correctly determine gender in 86% of cases and the team believe the accuracy could easily be improved.
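A stripped-down version of this analysis might look like the Python sketch below: from per-frame landmark positions, compute how much the mouth widens and how far the points travel over the clip, then feed those numbers to a simple classifier. The two-landmark “faces”, synthetic clips and logistic-regression classifier are stand-ins for the 49-landmark analysis and the classifier used in the study.

```python
# Minimal sketch of the feature idea described above: from per-frame landmark
# positions, compute (a) how much the mouth-corner distance expands during the
# smile and (b) a simple "flow" measure of total landmark movement, then feed
# the two numbers to a classifier. Everything here is a simplified stand-in.

import numpy as np
from sklearn.linear_model import LogisticRegression

def smile_features(frames):
    """frames: array (n_frames, n_landmarks, 2) of landmark x, y positions."""
    left, right = frames[:, 0, :], frames[:, 1, :]       # two mouth corners
    width = np.linalg.norm(right - left, axis=1)
    expansion = width[-1] - width[0]                      # how much the mouth widens
    flow = np.abs(np.diff(frames, axis=0)).sum()          # total landmark movement
    return np.array([expansion, flow])

# Tiny synthetic training set: label 1 = "broader smile", label 0 = "narrower".
rng = np.random.default_rng(1)
def make_clip(broad):
    start = rng.normal(0, 0.01, size=(2, 2)) + np.array([[-1.0, 0.0], [1.0, 0.0]])
    growth = 0.6 if broad else 0.3
    return np.stack([start + np.array([[-growth, 0], [growth, 0]]) * t / 9
                     for t in range(10)])

X = np.array([smile_features(make_clip(broad)) for broad in [0, 1] * 20])
y = np.array([0, 1] * 20)
clf = LogisticRegression().fit(X, y)
print("predicted class for a broad-smile clip:",
      clf.predict([smile_features(make_clip(1))])[0])
```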

“We used a fairly simple machine classification for this research as we were just testing the concept, but more sophisticated AI would improve the recognition rates,” said Professor Ugail.

The underlying purpose of this research is more about trying to enhance machine learning capabilities, but it has raised a number of intriguing questions that the team hopes to investigate in future projects.

One is how the machine might respond to the smile of a transgender person and the other is the impact of plastic surgery on recognition rates.

“Because this system measures the underlying muscle movement of the face during a smile, we believe these dynamics will remain the same even if external physical features change, following surgery for example,” said Professor Ugail. “This kind of facial recognition could become a next-generation biometric, as it’s not dependent on one feature, but on a dynamic that’s unique to an individual and would be very difficult to mimic or alter.”

The research is published in The Visual Computer: International Journal of Computer Graphics.

Story Source:

Materials provided by University of Bradford. Note: Content may be edited for style and length.

Ag robot speeds data collection, analyses of crops as they grow

A new lightweight, low-cost agricultural robot could transform data collection and field scouting for agronomists, seed companies and farmers.

The TerraSentia crop phenotyping robot, developed by a team of scientists at the University of Illinois, will be featured at the 2018 Energy Innovation Summit Technology Showcase in National Harbor, Maryland, on March 14.

Traveling autonomously between crop rows, the robot measures the traits of individual plants using a variety of sensors, including cameras, transmitting the data in real time to the operator’s phone or laptop computer. A custom app and tablet computer that come with the robot enable the operator to steer the robot using virtual reality and GPS.

TerraSentia is customizable and teachable, according to the researchers, who currently are developing machine-learning algorithms to “teach” the robot to detect and identify common diseases, and to measure a growing variety of traits, such as plant and corn ear height, leaf area index and biomass.

“These robots will fundamentally change the way people are collecting and utilizing data from their fields,” said U. of I. agricultural and biological engineering professor Girish Chowdhary. He is leading a team of students, engineers and postdoctoral researchers in development of the robot.

At 24 pounds, TerraSentia is so lightweight that it can roll over young plants without damaging them. The 13-inch-wide robot is also compact and portable: An agronomist could easily toss it on a truck seat or in a car trunk to transport it to the field, Chowdhary said.

Automating data collection and analytics has the potential to improve the breeding pipeline by unlocking the mysteries of why plant varieties respond in very different ways to environmental conditions, said U. of I. plant biology professor Carl Bernacchi, one of the scientists collaborating on the project.

Data collected by the crop-scouting robot could help plant breeders identify the genetic lineages likely to produce the best quality and highest yields in specific locations, Bernacchi said.

He and Stephen P. Long, a Stanley O. Ikenberry Endowed Chair and the Gutgsell Endowed University Professor of Crop Sciences and Plant Biology at Illinois, helped determine which plant characteristics were important for the robot to measure.

“It will be transformative for growers to be able to measure every single plant in the field in a short period of time,” Bernacchi said. “Crop breeders may want to grow thousands of different genotypes, all slightly different from one another, and measure each plant quickly. That’s not possible right now unless you have an army of people — and that costs a lot of time and money and is a very subjective process.

“A robot or swarm of robots could go into a field and do the same types of things that people are doing manually right now, but in a much more objective, faster and less expensive way,” Bernacchi said.

TerraSentia fills “a big gap in the current agricultural equipment market” between massive machinery that cultivates or sprays many acres quickly and human workers who can perform tasks requiring precision but move much more slowly, Chowdhary said.

“There’s a big market for these robots not only in the U.S., where agriculture is a profitable business, but also in developing countries such as Brazil and India, where subsistence farmers struggle with extreme weather conditions such as monsoons and harsh sunlight, along with weeds and pests,” Chowdhary said.

As part of a phased introduction process, several major seed companies, large U.S. universities and overseas partners are field testing 20 of the TerraSentia robots this spring through an early adopter program. Chowdhary said the robot is expected to become available to farmers in about three years, with some models costing less than $5,000.

“We’re getting this technology into the hands of the users so they can tell us what’s working for them and what we need to improve,” Chowdhary said. “We’re trying to de-risk the technology and create a product that’s immediately beneficial to growers and breeders in the state of Illinois and beyond.”

The robot is being made available to crop scientists and commercial crop breeders for the 2018 breeding season through EarthSense Inc., a startup company that Chowdhary co-founded with Chinmay P. Soman.

Six-legged robots get closer to nature

A study led by researchers at Tokyo Institute of Technology (Tokyo Tech) has uncovered new ways of driving multi-legged robots by means of a two-level controller. The proposed controller uses a network of so-called non-linear oscillators that enables the generation of diverse gaits and postures, which are specified by only a few high-level parameters. The study inspires new research into how multi-legged robots can be controlled, including in the future using brain-computer interfaces.

In the natural world, many species can walk over slopes and irregular surfaces, reaching places inaccessible even to the most advanced rover robots. It remains a mystery how complex movements are handled so seamlessly by even the tiniest creatures.

What we do know is that even the simplest brains contain pattern-generator circuits (CPGs)[1], which are wired up specifically for generating walking patterns. Attempts to replicate such circuits artificially have so far had limited success, due to poor flexibility.

Now, researchers in Japan and Italy propose a new approach to walking pattern generation, based on a hierarchical network of electronic oscillators arranged over two levels, which they have demonstrated using an ant-like hexapod robot. The achievement opens new avenues for the control of legged robots. Published in IEEE Access, the research is the result of collaboration between scientists from Tokyo Tech, in part funded by the World Research Hub Initiative, the Polish Academy of Sciences in Krakow, Poland, and the University of Catania, Italy.

The biologically-inspired controller consists of two levels. At the top, it contains a CPG[1], responsible for controlling the overall sequence of leg movements, known as gait. At the bottom, it contains six local pattern generators (LPGs)[2], responsible for controlling the trajectories of the individual legs.
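One common way to picture such a two-level controller is a ring of coupled phase oscillators whose fixed phase offsets set the gait, with a per-leg stage turning each phase into a foot trajectory. The Python sketch below uses Kuramoto-style coupling and a tripod phase pattern purely for illustration; the actual controller is built from analog non-linear oscillators on FPAAs, not from this model.

```python
# Minimal sketch of the two-level idea described above: a central pattern
# generator (six coupled phase oscillators) holds fixed phase offsets between
# legs to set the gait, and a per-leg "local" stage turns each phase into a
# simple foot trajectory. Coupling scheme and parameters are illustrative only.

import numpy as np

N_LEGS, DT, FREQ = 6, 0.01, 1.0                      # 1 Hz stepping
TRIPOD = np.array([0, np.pi, 0, np.pi, 0, np.pi])    # alternating tripod phase offsets
K = 2.0                                              # coupling gain

def cpg_step(phases):
    """One integration step of the coupled oscillators (the gait level)."""
    dphi = 2 * np.pi * FREQ + K * np.sum(
        np.sin(phases[None, :] - phases[:, None] - (TRIPOD[None, :] - TRIPOD[:, None])),
        axis=1)
    return (phases + DT * dphi) % (2 * np.pi)

def leg_trajectory(phase, step_len=0.03, lift=0.01):
    """Local pattern generator: map a leg's phase to a foot (x, z) position."""
    x = step_len * np.cos(phase)
    z = lift * max(0.0, np.sin(phase))               # lift the foot only during swing
    return x, z

phases = np.random.default_rng(0).uniform(0, 2 * np.pi, N_LEGS)
for _ in range(2000):                                # let the network settle into the gait
    phases = cpg_step(phases)
print("relative leg phases (rad):", np.round((phases - phases[0]) % (2 * np.pi), 2))
print("leg 0 foot target:", leg_trajectory(phases[0]))
```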

The lead author of the study, Ludovico Minati, who is also affiliated with the Polish Academy of Sciences in Krakow, Poland, and was invited to Tokyo Tech’s Institute of Innovative Research (IIR) through the World Research Hub Initiative, explains that insects can rapidly adapt their gait depending on a wide range of factors, particularly their walking speed. Some gaits are observed frequently and are considered canonical, but in reality a near-infinite number of gaits are available, and different insects such as ants and cockroaches realize similar gaits in very different postures.

Difficulties have been encountered when trying to condense so much complexity into artificial pattern generators. The proposed controller shows an extremely high level of versatility thanks to an implementation based on field-programmable analog arrays (FPAAs)[3], which allow on-the-fly reconfiguration and tuning of all circuit parameters. It builds on years of previous research on non-linear and chaotic electronic networks, which has demonstrated their ability to replicate phenomena observed in biological brains, even when wired up in very simple configurations.

“Perhaps the most exciting moment in the research was when we observed the robot exhibit phenomena and gaits which we neither designed nor expected, and later found out also exist in biological insects,” says Minati. Such emergent phenomena arise particularly because the network is realized with analog components and allows a certain degree of self-organization, representing an approach that differs vastly from conventional engineering, where everything is designed a priori and fixed. “This takes us so much closer to the way biology works,” he adds.

Yasuharu Koike, also based at the IIR, comments: “An important aspect of the controller is that it condenses so much complexity into only a small number of parameters. These can be considered high-level parameters, in that they explicitly set the gait, speed, posture, etc. Because they can be changed dynamically, in the future it should be easy to vary them in real-time using a brain-computer interface, allowing the control of complex kinematics otherwise impossible to dominate with current approaches.”

And Natsue Yoshimura, also based at the IIR, says: “As the controller responds gradually and embodies a biologically plausible approach to pattern generation, we think that it may be more seamless to drive compared to systems which decode discrete commands. This may have practical implications, and our lab has substantial know-how in this area.”

Technical terms

[1] Pattern-generator circuits (CPGs): CPG stands for Central Pattern Generator. A network that autonomously generates rhythmic gait patterns, here referring to the sequence of leg movements.

[2] Local Pattern Generator (LPG): A sub-network that transforms each CPG output into the trajectory of the joints of the corresponding leg.

[3] Field-Programmable Analog Array (FPAA): An integrated circuit containing a variety of analog blocks, which can be reconfigured under digital control.

Personalizing wearable devices

When it comes to soft, assistive devices — like the exosuit being designed by the Harvard Biodesign Lab — the wearer and the robot need to be in sync. But every human moves a bit differently and tailoring the robot’s parameters for an individual user is a time-consuming and inefficient process.

Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering have developed an efficient machine learning algorithm that can quickly tailor personalized control strategies for soft, wearable exosuits.

The research is described in Science Robotics.

“This new method is an effective and fast way to optimize control parameter settings for assistive wearable devices,” said Ye Ding, a postdoctoral fellow at SEAS and co-first author of the research. “Using this method, we achieved a huge improvement in metabolic performance for the wearers of a hip extension assistive device.”

When humans walk, we constantly tweak how we move to save energy (also known as metabolic cost).

“Before, if you had three different users walking with assistive devices, you would need three different assistance strategies,” said Myunghee Kim, a postdoctoral research fellow at SEAS and co-first author of the paper. “Finding the right control parameters for each wearer used to be a difficult, step-by-step process because not only do all humans walk a little differently, but the experiments required to manually tune parameters are complicated and time-consuming.”

The researchers, led by Conor Walsh, the John L. Loeb Associate Professor of Engineering and Applied Sciences, and Scott Kuindersma, Assistant Professor of Engineering and Computer Science at SEAS, developed an algorithm that can cut through that variability and rapidly identify the control parameters that work best for minimizing the metabolic cost of walking.

The researchers used so-called human-in-the-loop optimization, which uses real-time measurements of human physiological signals, such as breathing rate, to adjust the control parameters of the device. As the algorithm homed in on the best parameters, it directed the exosuit on when and where to deliver its assistive force to improve hip extension. The Bayesian optimization approach used by the team was first reported in a paper last year in PLOS ONE.
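The loop can be illustrated with a toy Bayesian optimization over a single assistance-timing parameter: propose a setting, “measure” the wearer’s metabolic cost, update a Gaussian-process model, and pick the next setting by expected improvement. In the Python sketch below the cost measurement is a noisy synthetic function standing in for respirometry, and the parameter range, kernel and noise level are invented.

```python
# Minimal sketch of a human-in-the-loop Bayesian optimization loop: propose
# assistance-timing parameters, "measure" metabolic cost (a noisy synthetic
# function stands in for respirometry), fit a Gaussian process, and pick the
# next setting by expected improvement. All numbers here are invented.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)   # normalized peak-assistance timing

def measure_metabolic_cost(timing):
    """Stand-in for a short walking bout with respirometry (lower is better)."""
    return (timing - 0.62) ** 2 * 10.0 + 3.0 + rng.normal(0, 0.1)

X = [[0.2], [0.8]]                                  # two initial conditions to seed the model
y = [measure_metabolic_cost(x[0]) for x in X]

for _ in range(8):                                  # a handful of short evaluations
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=0.1 ** 2).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = min(y)
    imp = best - mu                                 # expected improvement (minimization)
    z = imp / (sigma + 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    nxt = float(grid[np.argmax(ei)][0])
    X.append([nxt]); y.append(measure_metabolic_cost(nxt))

print("best timing found: %.2f  (cost %.2f)" % (X[int(np.argmin(y))][0], min(y)))
```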

The combination of the algorithm and suit reduced metabolic cost by 17.4 percent compared to walking without the device. This was a more than 60 percent improvement compared to the team’s previous work.

“Optimization and learning algorithms will have a big impact on future wearable robotic devices designed to assist a range of behaviors,” said Kuindersma. “These results show that optimizing even very simple controllers can provide a significant, individualized benefit to users while walking. Extending these ideas to consider more expressive control strategies and people with diverse needs and abilities will be an exciting next step.”

“With wearable robots like soft exosuits, it is critical that the right assistance is delivered at the right time so that they can work synergistically with the wearer,” said Walsh. “With these online optimization algorithms, systems can learn how to achieve this automatically in about twenty minutes, thus maximizing benefit to the wearer.”

Next, the team aims to apply the optimization to a more complex device that assists multiple joints, such as hip and ankle, at the same time.

“In this paper, we demonstrated a high reduction in metabolic cost by just optimizing hip extension,” said Ding. “This goes to show what you can do with a great brain and great hardware.”

This research was supported by the Defense Advanced Research Projects Agency’s Warrior Web Program, the Wyss Institute and the Harvard John A. Paulson School of Engineering and Applied Sciences.

Don’t want to lose a finger? Let a robot give a hand

Every year thousands of carpenters injure their hands and fingers doing dangerous tasks like sawing.

In an effort to minimize injury and let carpenters focus on design and other bigger-picture tasks, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has created AutoSaw, a system that lets you customize different items that can then be constructed with the help of robots.

Users can choose from a range of carpenter-designed templates for chairs, desks and other furniture — and eventually could use AutoSaw for projects as large as a deck or a porch.

“If you’re building a deck, you have to cut large sections of lumber to length, and that’s often done on site,” says CSAIL postdoc Jeffrey Lipton, who was a lead author on a related paper about the system. “Every time you put a hand near a blade, you’re at risk. To avoid that, we’ve largely automated the process using a chop-saw and jigsaw.”

The system also gives general users more flexibility in designing furniture to be able to fit space-constrained houses and apartments. For example, it could allow you to modify a desk to squeeze into an L-shaped living room, or customize a table to fit in your micro-kitchen.

“Robots have already enabled mass production, but with artificial intelligence (AI) they have the potential to enable mass customization and personalization in almost everything we produce,” says CSAIL director and co-author Daniela Rus. “AutoSaw shows this potential for easy access and customization in carpentry.”

The paper, which will be presented in May at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia, was co-written by Lipton, Rus and PhD student Adriana Schulz. Other co-authors include MIT professor Wojciech Matusik, PhD student Andrew Spielberg and undergraduate Luis Trueba.

How it works

Software isn’t a foreign concept for many carpenters. “Computer Numerical Control” (CNC) can convert designs into numbers that are fed to specially programmed tools to execute. However, the machines used for CNC fabrication are usually large and cumbersome, and users are limited to the size of the existing CNC tools.

As a result, many carpenters continue to use chop-saws, jigsaws, and other hand tools that are low cost, easy to move, and simple to use. These tools, while useful for customization, still put people at a high risk of injury.

AutoSaw draws on expert knowledge for designing, and robotics for the more risky cutting tasks. Using the existing CAD system OnShape with an interface of design templates, users can customize their furniture for things like size, sturdiness, and aesthetics. Once the design is finalized, it’s sent to the robots to assist in the cutting process using the jigsaw and chop-saw.
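
As a purely hypothetical illustration of the hand-off from a parametric template to cutting instructions, here is a short Python sketch that expands a stool template into a chop-saw cut list. The field names, dimensions and stock size are invented; AutoSaw's real pipeline runs through OnShape templates and the robots' own planners.

    # Hypothetical sketch: expand a parametric stool template into a cut list.
    # Field names and dimensions are invented; AutoSaw's real pipeline goes
    # through OnShape templates and robot-specific planning.
    def stool_cut_list(seat_width_cm, seat_depth_cm, height_cm, stock_section="4x4cm"):
        parts = [
            {"part": "leg",       "length_cm": height_cm,     "count": 4},
            {"part": "seat_rail", "length_cm": seat_width_cm, "count": 2},
            {"part": "seat_rail", "length_cm": seat_depth_cm, "count": 2},
        ]
        # Each entry becomes one or more chop-saw operations on standard stock.
        return [{"stock": stock_section, **p} for p in parts]

    for cut in stool_cut_list(40, 30, 45):
        print(f'{cut["count"]} x {cut["part"]}: cut {cut["stock"]} stock to {cut["length_cm"]} cm')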

To cut lumber the team used motion tracking software and small mobile robots — an approach that takes up less space and is more cost-effective than large robotic arms.

Specifically, the team used a modified jigsaw-rigged Roomba to cut lumber of any shape on a plank. For the chopping, the team used two Kuka youBots to lift the beams, place them on the chop saw, and cut.

“We added soft grippers to the robots to give them more flexibility, like that of a human carpenter,” says Lipton. “This meant we could rely on the accuracy of the power tools instead of the rigid-bodied robots.”

After the robots finish with cutting, the user then assembles their new piece of furniture using step-by-step directions from the system.

When testing the system, the team’s simulations showed that they could build a chair, shed, and deck. Using the robots, the team also made a table with an accuracy comparable to that of a human, without a real hand ever getting near a blade.

“There have been many recent AI achievements in virtual environments, like playing Go and composing music,” says Hod Lipson, a professor of mechanical engineering and data science at Columbia University. “Systems that can work in unstructured physical environments, such as this carpentry system, are notoriously difficult to make. This is truly a fascinating step forward.”

While AutoSaw is still a research platform, in the future the team plans to use materials like wood, and integrate complex tasks like drilling and gluing.

“Our aim is to democratize furniture-customization,” says Schulz. “We’re trying to open up a realm of opportunities so users aren’t bound to what they’ve bought at Ikea. Instead, they can make what best fits their needs.”

This work was supported in part by the National Science Foundation, grant number CMMI-1644558.

Smart heat control of microchips

Thermal images. Credit: Karlsruhe Institute of Technology

Technological progress in the electronics sector, such as higher speeds, lower costs, and smaller sizes, results in entirely new possibilities for automation and industrial production, without which “Industry 4.0” would not be feasible. Miniaturization in particular has advanced considerably in recent years; today, the flow of just a few electrons is enough to execute software. But this progress also has its dark side. Processors used in industrial production with feature sizes below 10 nanometers are highly sensitive. By deliberately overloading them with incorrect control commands, hackers could trigger an artificial aging process that destroys the processors within a few days. To defend against such attacks on industrial facilities in the future, researchers at KIT are now working on a smart self-monitoring system.

The new approach is based on identifying thermal patterns during normal operation of processors. “Every chip produces a specific thermal fingerprint,” explains Professor Jörg Henkel, who heads the team at the Chair for Embedded Systems (CES). “Calculations are carried out, something is stored in the main memory or retrieved from the hard disk. All these operations produce short-term heating and cooling in various areas of the processor.” Henkel’s team monitored this pattern with sensitive infrared cameras and detected changes in the control routine from minute temperature differences or timing deviations of a few milliseconds. The setup with infrared cameras was used to demonstrate the feasibility of such thermal monitoring. In the future, on-chip sensors are planned to take over the function of the cameras. “We already have temperature sensors on chips. They are used for overheat protection,” Jörg Henkel says. “We will increase the number of sensors and use them for cybersecurity purposes for the first time.” In addition, the scientists want to equip the chips with neural networks to identify thermal deviations and to monitor the chip in real time.
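
As a rough illustration of what "identifying thermal deviations" could look like in software, here is a hedged Python sketch: a per-sensor statistical fingerprint is learned from normal operation, and new readings are flagged when they deviate too far from it. The sensor count, temperatures and threshold are invented, and the simple z-score test merely stands in for the neural networks KIT plans to use.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated per-sensor temperature traces during known-good operation
    # (in the planned system these would come from the chip's own thermal sensors).
    normal_runs = rng.normal(loc=55.0, scale=0.8, size=(200, 16))   # 200 samples, 16 sensors

    # "Thermal fingerprint": per-sensor mean and spread under normal load.
    mu, sigma = normal_runs.mean(axis=0), normal_runs.std(axis=0)

    def looks_anomalous(sample, z_threshold=4.0):
        """Flag a reading whose deviation from the fingerprint is implausibly large.
        A plain z-score test stands in here for the neural network KIT envisions."""
        z = np.abs((sample - mu) / sigma)
        return bool((z > z_threshold).any())

    print(looks_anomalous(rng.normal(55.0, 0.8, 16)))    # typical reading -> False
    print(looks_anomalous(np.full(16, 61.0)))            # sustained hot spot -> True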

The researchers think that their smart heat control will be applied in industrial facilities first. Because mostly static control routines are executed there, deviations are easier to identify than in a smartphone, for instance. However, industrial computers are also exposed to dynamic threats. “As soon as hackers know that we monitor temperature, they will adapt,” explains computer scientist Hussam Amrouch, who works in Jörg Henkel’s team. “They will write smaller or slower programs, whose heating profiles will be more difficult to identify.” Right from the start, the neural networks will therefore be trained to identify even modified threats.

Story Source:

Materials provided by Karlsruhe Institute of Technology. Note: Content may be edited for style and length.


Novel 3-D printing method embeds sensing capabilities within robotic actuators

Researchers at Harvard University have built soft robots inspired by nature that can crawl, swim, grasp delicate objects and even assist a beating heart, but none of these devices has been able to sense and respond to the world around them.

That’s about to change.

Inspired by our bodies’ sensory capabilities, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences and the Wyss Institute for Biologically Inspired Engineering have developed a platform for creating soft robots with embedded sensors that can sense movement, pressure, touch, and even temperature.

The research is published in Advanced Materials.

“Our research represents a foundational advance in soft robotics,” said Ryan Truby, first author of the paper and recent Ph.D. graduate at SEAS. “Our manufacturing platform enables complex sensing motifs to be easily integrated into soft robotic systems.”

Integrating sensors within soft robots has been difficult in part because most sensors, such as those used in traditional electronics, are rigid. To address this challenge, the researchers developed an organic ionic liquid-based conductive ink that can be 3D printed within the soft elastomer matrices that comprise most soft robots.

“To date, most integrated sensor/actuator systems used in soft robotics have been quite rudimentary,” said Michael Wehner, former postdoctoral fellow at SEAS and co-author of the paper. “By directly printing ionic liquid sensors within these soft systems, we open new avenues to device design and fabrication that will ultimately allow true closed loop control of soft robots.”

Wehner is now an assistant professor at the University of California, Santa Cruz.

To fabricate the device, the researchers relied on an established 3D printing technique developed in the lab of Jennifer Lewis, the Hansjorg Wyss Professor of Biologically Inspired Engineering at SEAS and Core Faculty Member of the Wyss Institute. The technique — known as embedded 3D printing — seamlessly and quickly integrates multiple features and materials within a single soft body.

“This work represents the latest example of the enabling capabilities afforded by embedded 3D printing — a technique pioneered by our lab,” said Lewis.

“The function and design flexibility of this method is unparalleled,” said Truby. “This new ink combined with our embedded 3D printing process allows us to combine both soft sensing and actuation in one integrated soft robotic system.”

To test the sensors, the team printed a soft robotic gripper composed of three soft fingers or actuators. The researchers tested the gripper’s ability to sense inflation pressure, curvature, contact, and temperature. They embedded multiple contact sensors, so the gripper could sense light and deep touches.
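
As a hypothetical illustration of how such readings might be turned into touch categories, the sketch below thresholds the change in a contact sensor's resistance. The baseline value and cut-offs are invented; the paper characterizes the printed ionic-liquid sensors experimentally rather than with fixed thresholds like these.

    # Hypothetical post-processing of one embedded contact sensor's readings.
    # The resistance values and thresholds are invented for illustration.
    BASELINE_OHMS = 12000.0

    def classify_touch(resistance_ohms, light_delta=300.0, deep_delta=1500.0):
        delta = abs(resistance_ohms - BASELINE_OHMS)
        if delta < light_delta:
            return "no contact"
        return "light touch" if delta < deep_delta else "deep touch"

    for reading in (12050.0, 12600.0, 14500.0):
        print(reading, "->", classify_touch(reading))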

“Soft robotics are typically limited by conventional molding techniques that constrain geometry choices, or, in the case of commercial 3D printing, material selection that hampers design choices,” said Robert Wood, the Charles River Professor of Engineering and Applied Sciences at SEAS, Core Faculty Member of the Wyss Institute, and co-author of the paper. “The techniques developed in the Lewis Lab have the opportunity to revolutionize how robots are created — moving away from sequential processes and creating complex and monolithic robots with embedded sensors and actuators.”

Next, the researchers hope to harness the power of machine learning to train these devices to grasp objects of varying size, shape, surface texture, and temperature.

The research was coauthored by Abigail Grosskopf, Daniel Vogt and Sebastien Uzel. It was supported in part by the Harvard MRSEC and the Wyss Institute for Biologically Inspired Engineering.

Snake-inspired robot uses kirigami to move

Who needs legs? With their sleek bodies, snakes can slither at up to 14 miles per hour, squeeze into tight spaces, scale trees and swim. How do they do it? It’s all in the scales. As a snake moves, its scales grip the ground and propel the body forward — similar to how crampons help hikers establish footholds in slippery ice. This so-called friction-assisted locomotion is possible because of the shape and positioning of snake scales.

Now, a team of researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) has developed a soft robot that uses those same principles of locomotion to crawl without any rigid components. The soft robotic scales are made using kirigami — an ancient Japanese paper craft that relies on cuts, rather than origami folds, to change the properties of a material. As the robot stretches, the flat kirigami surface is transformed into a 3D-textured surface, which grips the ground just like snakeskin.

The research is published in Science Robotics.

“There has been a lot of research in recent years into how to fabricate these kinds of morphable, stretchable structures,” said Ahmad Rafsanjani, a postdoctoral fellow at SEAS and first author of the paper. “We have shown that kirigami principles can be integrated into soft robots to achieve locomotion in a way that is simpler, faster and cheaper than most previous techniques.”

The researchers started with a simple, flat plastic sheet. Using a laser cutter, they embedded an array of centimeter-scale cuts, experimenting with different shapes and sizes. Once cut, the researchers wrapped the sheet around a tube-like elastomer actuator, which expands and contracts with air like a balloon.

When the actuator expands, the kirigami cuts pop-out, forming a rough surface that grips the ground. When the actuator deflates, the cuts fold flat, propelling the crawler forward.
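
A toy numeric model, not the paper's mechanics, of why this anchor-and-release cycle produces net motion: when the popped-out scales grip well, the tail stays put while the body extends and the head stays put while it contracts, so each inflate/deflate cycle advances the crawler by roughly one stroke. The stroke length and grip parameter below are arbitrary.

    # Toy model of anchored crawling (illustrative only, not the paper's mechanics).
    def crawl(cycles, stroke=1.0, grip=0.9):
        """grip in [0, 1]: how well the scales resist backward sliding.
        grip=1 -> perfect anchoring; grip=0 -> symmetric friction, no net motion."""
        tail, head = 0.0, 1.0
        for _ in range(cycles):
            # Inflation: the body extends by `stroke`. With good grip the tail
            # stays put and the head moves forward; with poor grip both ends slide.
            head += stroke * (0.5 + 0.5 * grip)
            tail -= stroke * (0.5 - 0.5 * grip)
            # Deflation: the body shortens again. Now the head tends to stay put
            # and the tail is dragged forward.
            tail += stroke * (0.5 + 0.5 * grip)
            head -= stroke * (0.5 - 0.5 * grip)
        return tail, head

    tail, head = crawl(10)
    print(f"after 10 cycles the crawler has advanced {tail:.2f} units "
          f"(body length back to {head - tail:.2f})")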

The researchers built a fully untethered robot, with its integrated onboard control, sensing, actuation and power supply packed into a tiny tail. They tested it crawling throughout Harvard’s campus.

The team experimented with various cut shapes, including triangular, circular and trapezoidal. They found that trapezoidal cuts, which most closely resemble the shape of snake scales, gave the robot a longer stride.

“We show that the locomotive properties of these kirigami-skins can be harnessed by properly balancing the cut geometry and the actuation protocol,” said Rafsanjani. “Moving forward, these components can be further optimized to improve the response of the system.”

“We believe that our kirigami-based strategy opens avenues for the design of a new class of soft crawlers,” said Katia Bertoldi, the William and Ami Kuan Danoff Professor of Applied Mechanics and senior author of the paper. “These all-terrain soft robots could one day travel across difficult environments for exploration, inspection, monitoring and search and rescue missions or perform complex, laparoscopic medical procedures.”

Bertoldi is also an Associate Faculty of the Wyss Institute for Biologically Inspired Engineering at Harvard University.

This research was co-authored by Yuerou Zhang, Bangyuan Liu and Shmuel M. Rubinstein, Associate Professor of Applied Physics at SEAS. It was supported by the National Science Foundation.

All-terrain microbot moves by tumbling over complex topography

A new type of all-terrain microbot that moves by tumbling could help usher in tiny machines for various applications.

The “microscale magnetic tumbling robot,” or μTUM (microTUM), is about 400 by 800 microns (millionths of a meter), smaller than the head of a pin. A continuously rotating magnetic field propels the microbot in an end-over-end or sideways tumbling motion, which helps the microbot traverse uneven surfaces such as bumps and trenches, a difficult feat for other forms of motion.

“The μTUM is capable of traversing complex terrains in both dry and wet environments,” said David Cappelleri, an associate professor in Purdue University’s School of Mechanical Engineering and director of Purdue’s Multi-Scale Robotics and Automation Lab.

Findings are detailed in a research paper published online Feb. 3 in the journal Micromachines. The paper was authored by Purdue graduate student Chenghao Bi; postdoctoral research associate Maria Guix; doctoral student Benjamin V. Johnson; Wuming Jing, an assistant professor of mechanical engineering at Lawrence Technological University; and Cappelleri.

The flat, roughly dumbbell-shaped microbot is made of a polymer and has two magnetic ends. A non-magnetic midsection might be used to carry cargo such as medications. Because the bot functions well in wet environments, it has potential biomedical applications.

“Robotics at the micro- and nano-scale represent one of the new frontiers in intelligent automation systems,” Cappelleri said. “In particular, mobile microrobots have recently emerged as viable candidates for biomedical applications, taking advantage of their small size, manipulation, and autonomous motion capabilities. Targeted drug delivery is one of the key applications of these nano- and microrobots.”

Drug-delivery microbots might be used in conjunction with ultrasound to guide them to their destination in the body.

Researchers studied the machine’s performance when traversing inclines as steep as 60 degrees, demonstrating an impressive climbing capability in both wet and dry environments.

“The ability to climb is important because surfaces in the human body are complex,” Guix said. “It’s bumpy, it’s sticky.”

The ideal technology for many applications would be an untethered microrobot that is adaptable to various environments and is simple to operate. Microbots animated through magnetic fields have shown promise, Cappelleri said.

While concepts explored thus far have required complex designs and microfabrication methods, the μTUM is produced with standard photolithography techniques used in the semiconductor industry. The new paper focuses on the microrobot design, fabrication, and use of rotating magnetic fields to operate them in a strategy to negotiate complex terrains.

One critical factor in the development of such microbots is the effect of electrostatic and van der Waals forces between molecules that are prevalent on the scale of microns but not on the macroscale of everyday life. The forces cause “stiction” between tiny components that affect their operation. The researchers modeled the effects of such forces.

“Under dry conditions, these forces make it very challenging to move a microbot to its intended location in the body,” Guix said. “They perform much better in fluid media.”

Because the tiny bots contain such a small quantity and surface area of magnetic material, it takes a relatively strong magnetic field to move them. At the same time, biological fluids or surfaces resist motion.

“This is problematic because for microscale robots to operate successfully in real working environments, mobility is critical,” Cappelleri said.

One way to overcome the problem is with a tumbling locomotion, which requires a lower magnetic-field strength than otherwise needed. Another key to the bot’s performance is the continuously rotating magnetic field.

“Unlike the microTUM, other microscale robots use a rocking motion under an alternating magnetic field, where contact between the robot and the surface is continually lost and regained,” Bi said. “Though the continuously rotating field used for the μTUM is harder to implement than an alternating field, the trade-off is that the tumbling robot always has a point in contact with the ground, provided that there are no sharp drop-offs or cliffs in its path. This sustained contact means that the μTUM design can take advantage of the constant adhesion and frictional forces between itself and the surface below it to climb steep inclined terrains.”
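
To see why a continuously rotating field produces cumulative tumbling while an alternating field only produces rocking, here is a toy, overdamped Python model of a magnetic body whose orientation relaxes toward the applied field direction. The rate constant, frequencies and angles are invented for illustration and are not taken from the paper.

    import numpy as np

    # Illustrative overdamped model (not from the paper): a magnetic body's
    # orientation theta relaxes toward the applied field direction phi with
    # rate k, i.e. d(theta)/dt = k * sin(phi - theta).
    def simulate(field_angle_fn, steps=4000, dt=0.001, k=50.0):
        theta = 0.0
        for i in range(steps):
            phi = field_angle_fn(i * dt)
            theta += dt * k * np.sin(phi - theta)
        return theta

    rotating = lambda t: 2 * np.pi * 2.0 * t                        # field rotates continuously at 2 Hz
    alternating = lambda t: 1.0 if int(t * 4) % 2 == 0 else -1.0    # field rocks between two angles

    print("rotating field    -> net rotation (rev):", round(simulate(rotating) / (2 * np.pi), 2))
    print("alternating field -> net rotation (rev):", round(simulate(alternating) / (2 * np.pi), 2))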

The microbot was tested on a dry paper surface, and in both water and silicone oil to gauge and characterize its capabilities in fluid environments of varying viscosity. Findings showed that highly viscous fluids such as silicone oil limit the robot’s maximum speed, while low-density media such as air limit how steep an incline it can climb.

The microTUM might be upgraded with “advanced adhesion” capabilities to perform drug-delivery for biomedical applications.

Future work will focus on dynamic modeling of the μTUM to predict its motion trajectories over complex terrains, as well as addressing the unique challenges present at the interface of distinct environments. Additional goals include developing a “vision-based” control system that uses cameras or sensors for precise navigation and for using such bots to finely manipulate objects for potential industrial applications. Alternate designs for the mid-section of the robot will be explored as well.

“For all the design configurations considered, the midsection of the robot was kept non-magnetized in order to explore the future possibility of embedding a payload in this area of the robot,” Cappelleri said. “Replacing this area with a compliant material or a dissolvable payload could lead to improved dynamic behavior, and in-vivo drug delivery, respectively, with far-reaching potential in micro-object manipulation and biomedical applications.”

A YouTube video is available at https://www.youtube.com/watch?v=obwvH78hGLY

Neural networks everywhere

Most recent advances in artificial-intelligence systems such as speech- or face-recognition programs have come courtesy of neural networks, densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training data.

But neural nets are large, and their computations are energy intensive, so they’re not very practical for handheld devices. Most smartphone apps that rely on neural nets simply upload data to internet servers, which process it and send the results back to the phone.

Now, MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption 94 to 95 percent. That could make it practical to run neural networks locally on smartphones or even to embed them in household appliances.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” says Avishek Biswas, an MIT graduate student in electrical engineering and computer science, who led the new chip’s development.

“Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption. But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?”

Biswas and his thesis advisor, Anantha Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, describe the new chip in a paper that Biswas is presenting this week at the International Solid State Circuits Conference.

Back to analog

Neural networks are typically arranged into layers. A single processing node in one layer of the network will generally receive data from several nodes in the layer below and pass data to several nodes in the layer above. Each connection between nodes has its own “weight,” which indicates how large a role the output of one node will play in the computation performed by the next. Training the network is a matter of setting those weights.

A node receiving data from multiple nodes in the layer below will multiply each input by the weight of the corresponding connection and sum the results. That operation — the summation of multiplications — is the definition of a dot product. If the dot product exceeds some threshold value, the node will transmit it to nodes in the next layer, over connections with their own weights.

A neural net is an abstraction: The “nodes” are just weights stored in a computer’s memory. Calculating a dot product usually involves fetching a weight from memory, fetching the associated data item, multiplying the two, storing the result somewhere, and then repeating the operation for every input to a node. Given that a neural net will have thousands or even millions of nodes, that’s a lot of data to move around.
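
The node computation described above can be written in a few lines. This is just the textbook operation, not the chip's analog circuitry, and the numbers are arbitrary.

    # What one node computes: a dot product of its inputs with its connection
    # weights, followed by a threshold check.
    def node_output(inputs, weights, threshold=0.0):
        total = sum(x * w for x, w in zip(inputs, weights))   # the dot product
        return total if total > threshold else 0.0

    print(node_output([0.2, 0.9, 0.5], [0.7, -0.1, 0.4]))     # 0.25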

But that sequence of operations is just a digital approximation of what happens in the brain, where signals traveling along multiple neurons meet at a “synapse,” or a gap between bundles of neurons. The neurons’ firing rates and the electrochemical signals that cross the synapse correspond to the data values and weights. The MIT researchers’ new chip improves efficiency by replicating the brain more faithfully.

In the chip, a node’s input values are converted into electrical voltages and then multiplied by the appropriate weights. Only the combined voltages are converted back into a digital representation and stored for further processing.

The chip can thus calculate dot products for multiple nodes — 16 at a time, in the prototype — in a single step, instead of shuttling between a processor and memory for every computation.

All or nothing

One of the keys to the system is that all the weights are either 1 or -1. That means they can be implemented within the memory itself as simple switches that either close a circuit or leave it open. Recent theoretical work suggests that neural nets trained with only two possible weight values should lose little accuracy — somewhere between 1 and 2 percent.

Biswas and Chandrakasan’s research bears that prediction out. In experiments, they ran the full implementation of a neural network on a conventional computer and the binary-weight equivalent on their chip. Their chip’s results were generally within 2 to 3 percent of the conventional network’s.
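
A short sketch of the binary-weight simplification the chip exploits: with weights restricted to +1 and -1, the multiply-accumulate reduces to plain additions and subtractions, which is what makes an implementation with simple switches inside the memory possible. The numbers are arbitrary, and this digital code is only an analogy for the chip's analog circuit.

    # With weights restricted to +1 or -1, the multiply-accumulate collapses into
    # additions and subtractions of the inputs.
    def binary_weight_dot(inputs, weights):          # weights contain only +1 and -1
        return sum(x if w > 0 else -x for x, w in zip(inputs, weights))

    print(binary_weight_dot([0.2, 0.9, 0.5], [1, -1, 1]))    # 0.2 - 0.9 + 0.5 = -0.2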

Can a cockroach teach a robot how to scurry across rugged terrain?

When they turn up in family pantries or restaurant kitchens, cockroaches are commonly despised as ugly, unhealthy pests and are quickly killed. But in the name of science, Johns Hopkins researchers have put these unwanted bugs to work.

In a crowded, windowless lab, scholars and students are coaxing the insects to share some crucial locomotion tips that could help future robotic vehicles traverse treacherous terrain.

Picture the aftermath of an earthquake or the cluttered, unexplored surface of another planet. Human teams might hesitate to enter such hazard-strewn regions. But what earthly lifeform — other than the one jokingly said to be able to survive even an atomic bomb blast — is more likely to persist on a dangerous alien landscape?

For missions like these, the Johns Hopkins researchers want to build robots that behave more like cockroaches. The team’s early findings are the subject of two related research papers published in the Feb. 2 issue of the journal Bioinspiration & Biomimetics.

Sean W. Gart, a postdoctoral fellow who puts the roaches through their paces, was lead author of the two papers. The senior author was Chen Li, an assistant professor of mechanical engineering who directs the Terradynamics Lab, which focuses on movement science at the interface of biology, robotics and physics.

Inside the lab, cockroaches scurry along tracks laden with two types of obstacles: large “bumps” and equally large “gaps.” These mimic the holes and barriers that the roaches might have encountered in their rugged natural habitat. The bugs contort their heads, torsos and legs until they find a way to get themselves over or across the obstacles in order to remain on course.

High-speed cameras capture the body and leg motions used by these roaches, a Central American species with bodies about 2 inches long. These videos can later be slowed down to help the researchers learn the precise travel tactics that small robots could use to surmount the same type of obstacles. The roaches, native to a rainforest region highly cluttered with vegetation, need these skills.

“Where they live, you have all sorts of stuff around you, like dense vegetation or fallen leaves or branches or roots,” Li said. “Wherever they go, they run into these obstacles. We’re trying to understand the principles of how they go through such a complex terrain, and we hope to then transfer those principles to advanced robots.”

Some of these roach-inspired improvements have already materialized. Li’s team has constructed a multi-legged robot to replicate the insect’s running patterns. After carefully reviewing their bug videos to discover the underlying physics principles, the researchers added a “tail” to help the robots replicate body positions that helped the real roaches get past the large bumps and gaps on the lab track. This simple change increased the largest gap size that the robot could traverse by 50 percent and the largest bump size it could traverse by 75 percent.

“We are just beginning to understand how these critters move through a cluttered 3-D terrain where you have obstacles that are larger than or comparable to the animal or robot’s size,” Li said.

The next step will be to determine whether their findings will also apply to movement through more randomly scattered terrain such as rubble from a demolished building.

But don’t expect the team to abandon its use of cockroaches in unraveling the mysteries of animal movement. Li has been working with them since 2012 when he became a UC Berkeley postdoctoral fellow studying animal locomotion.

“I knew I would be working with these animals, and I was a little scared at first because they just run so fast, and they were so creepy,” Li recalls. “But as soon as I started working in the lab, I learned that it’s actually very easy to work with them, and they’re actually a very nice, fantastic model organism. Not just because they’re so robust and move so fast, but also because they’re very easy to handle and motivate to run and very easy to care for. So, they’re currently one of the main species in our lab, serving as a model system.”

Co-authors on the journal article about traversal of large gaps were graduate students Changxin Yan and Ratan Othayoth and undergraduate Zhiyi Ren, all from the Department of Mechanical Engineering.

The research was funded by a Burroughs Wellcome Fund Career Award at the Scientific Interface, a U.S. Army Research Office Young Investigator Award, and The Johns Hopkins University Whiting School of Engineering.

Story Source:

Materials provided by Johns Hopkins University. Original written by Phil Sneiderman. Note: Content may be edited for style and length.

3-D vision discovered in praying mantis

Miniature glasses have revealed a new form of 3D vision in praying mantises that could lead to simpler visual processing for robots.

Publishing their latest research in Current Biology, the team at Newcastle University, UK has discovered that mantis 3D vision works differently from all previously known forms of biological 3D vision.

3D or stereo vision helps us work out the distances to the things we see. Each of our eyes sees a slightly different view of the world. Our brains merge these two views to create a single image, while using the differences between the two views to work out how far away things are.

But humans are not the only animals with stereo vision. Others include monkeys, cats, horses, owls and toads, but the only insect known to have it is the praying mantis.

Going bug-eyed?

A team at the Institute of Neuroscience at Newcastle University, funded by the Leverhulme Trust, has been investigating whether praying mantis 3D vision works in the same way as humans’.

To investigate this they created special insect 3D glasses which were temporarily glued on with beeswax.

In their insect 3D cinema, they could show the mantis a movie of tasty prey, apparently hovering right in front of the mantis. The illusion is so good the mantises try to catch it.

The scientists could now show the mantises not only simple movies of bugs, but the complex dot-patterns used to investigate human 3D vision. This enabled them to compare human and insect 3D vision for the first time.

Humans are incredibly good at seeing 3D in still images. We do this by matching up the details of the picture seen in each eye. But mantises only attack moving prey so their 3D doesn’t need to work in still images. The team found mantises don’t bother about the details of the picture but just look for places where the picture is changing.

This makes mantis 3D vision very robust. Even if the scientists made the two eyes’ images completely different, mantises can still match up the places where things are changing. They did so even when humans couldn’t.
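
The matching strategy described here can be illustrated with a toy, one-dimensional Python sketch: instead of matching image details, each eye's view is reduced to a map of where it changed between two moments, and the shift that best aligns the two change maps gives the disparity. The scene, shift and noise are invented; this is an illustration of the idea, not the researchers' model.

    import numpy as np

    # Toy 1-D illustration of change-based matching.
    def change_map(frame_t0, frame_t1):
        return np.abs(frame_t1 - frame_t0)

    rng = np.random.default_rng(2)
    scene_t0 = rng.random(60)
    scene_t1 = scene_t0.copy()
    scene_t1[30:34] = rng.random(4)          # something moved or flickered around position 30

    true_disparity = 5                       # the right eye's view is shifted by 5 pixels
    left_t0, left_t1 = scene_t0, scene_t1
    right_t0, right_t1 = np.roll(scene_t0, true_disparity), np.roll(scene_t1, true_disparity)

    left_change = change_map(left_t0, left_t1)
    right_change = change_map(right_t0, right_t1)

    # Find the shift that best aligns the two change maps.
    scores = [np.dot(left_change, np.roll(right_change, -d)) for d in range(15)]
    print("estimated disparity:", int(np.argmax(scores)))    # recovers 5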

“This is a completely new form of 3D vision as it is based on change over time instead of static images,” said behavioural ecologist, Dr Vivek Nityananda at Newcastle University. “In mantises it is probably designed to answer the question ‘is there prey at the right distance for me to catch?'”

As part of the wider research, a Newcastle University engineering student developed an electronic mantis arm which mimics the distinct striking action of the insect.

Fellow team-member from the School of Engineering, Dr Ghaith Tarawneh adds, “Many robots use stereo vision to help them navigate, but this is usually based on complex human stereo. Since insect brains are so tiny, their form of stereo vision can’t require much computer processing. This means it could find useful applications in low-power autonomous robots.”

Story Source:

Materials provided by Newcastle University. Note: Content may be edited for style and length.

Researchers help robots ‘think’ and plan in the abstract

Researchers from Brown University and MIT have developed a method for helping robots plan for multi-step tasks by constructing abstract representations of the world around them. Their study, published in the Journal of Artificial Intelligence Research, is a step toward building robots that can think and act more like people.

Planning is a monumentally difficult thing for robots, largely because of how they perceive and interact with the world. A robot’s perception of the world consists of nothing more than the vast array of pixels collected by its cameras, and its ability to act is limited to setting the positions of the individual motors that control its joints and grippers. It lacks an innate understanding of how those pixels relate to what we might consider meaningful concepts in the world.

“That low-level interface with the world makes it really hard to decide what to do,” said George Konidaris, an assistant professor of computer science at Brown and the lead author of the new study. “Imagine how hard it would be to plan something as simple as a trip to the grocery store if you had to think about each and every muscle you’d flex to get there, and imagine in advance and in detail the terabytes of visual data that would pass through your retinas along the way. You’d immediately get bogged down in the detail. People, of course, don’t plan that way. We’re able to introduce abstract concepts that throw away that huge mass of irrelevant detail and focus only on what is important.”

Even state-of-the-art robots aren’t capable of that kind of abstraction. When we see demonstrations of robots planning for and performing multistep tasks, “it’s almost always the case that a programmer has explicitly told the robot how to think about the world in order for it to make a plan,” Konidaris said. “But if we want robots that can act more autonomously, they’re going to need the ability to learn abstractions on their own.”

In computer science terms, these kinds of abstractions fall into two categories: “procedural abstractions” and “perceptual abstractions.” Procedural abstractions are programs made out of low-level movements composed into higher-level skills. An example would be bundling all the little movements needed to open a door — all the motor movements involved in reaching for the knob, turning it and pulling the door open — into a single “open the door” skill. Once such a skill is built, you don’t need to worry about how it works. All you need to know is when to run it. Roboticists — including Konidaris himself — have been studying how to make robots learn procedural abstractions for years, he says.
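
A hypothetical sketch of a procedural abstraction in Python: a handful of low-level motion commands, with invented function names standing in for calls to a real motion controller, bundled into one reusable "open the door" skill.

    # Hypothetical illustration of a procedural abstraction. The low-level
    # commands below are invented stand-ins for a real motion controller.
    def reach_for(target):       print(f"reaching for {target}")
    def grasp(target):           print(f"grasping {target}")
    def turn(target, degrees):   print(f"turning {target} by {degrees} degrees")
    def pull(target, distance):  print(f"pulling {target} {distance} cm")

    def open_the_door():
        """Once this skill exists, a planner only needs to know WHEN to run it,
        not how the individual motor movements work."""
        reach_for("door knob")
        grasp("door knob")
        turn("door knob", 45)
        pull("door", 40)

    open_the_door()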

But according to Konidaris, there’s been less progress in perceptual abstraction, which has to do with helping a robot make sense of its pixelated surroundings. That’s the focus of this new research.

“Our work shows that once a robot has high-level motor skills, it can automatically construct a compatible high-level symbolic representation of the world — one that is provably suitable for planning using those skills,” Konidaris said.

Learning abstract states of the world

For the study, the researchers introduced a robot named Anathema Device (or Ana, for short) to a room containing a cupboard, a cooler, a switch that controls a light inside the cupboard, and a bottle that could be left in either the cooler or the cupboard. They gave Ana a set of high-level motor skills for manipulating the objects in the room — opening and closing both the cooler and the cupboard, flipping the switch and picking up a bottle. Then they turned Ana loose to try out her motor skills in the room, recording the sensory data from her cameras and actuators before and after each skill execution. Those data were fed into the machine-learning algorithm developed by the team.

(See video of the process here: https://www.youtube.com/watch?v=lY4PKBqp9ZM)

The researchers showed that Ana was able to learn a very abstract description of the environment that contained only what was necessary for her to be able to perform a particular skill. For example, she learned that in order to open the cooler, she needed to be standing in front of it and not holding anything (because she needed both hands to open the lid). She also learned the proper configuration of pixels in her visual field associated with the cooler lid being closed, which is the only configuration in which it’s possible to open it.

She learned similar abstractions associated with her other skills. She learned, for example, that the light inside the cupboard was so bright that it whited out her sensors. So in order to manipulate the bottle inside the cupboard, the light had to be off. She also learned that in order to turn the light off, the cupboard door needed to be closed, because the open door blocked her access to the switch. The resulting abstract representation distilled all that knowledge down from high-definition images to a text file just 126 lines long.

“These were all the important abstract concepts about her surroundings,” Konidaris said. “Doors need to be closed before they can be opened. You can’t get the bottle out of the cupboard unless it’s open, and so on. And she was able to learn them just by executing her skills and seeing what happens.”
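
Here is a sketch of the kind of symbolic operator such a representation might contain, written as a small Python structure with preconditions and effects. The predicate names are invented; the paper's learned representation is generated automatically from sensor data and is far richer than this.

    # A sketch of a learned symbolic operator (names and predicates invented).
    open_cooler = {
        "preconditions": {"in_front_of_cooler", "hands_free", "cooler_closed"},
        "add_effects":   {"cooler_open"},
        "del_effects":   {"cooler_closed"},
    }

    def applicable(operator, state):
        return operator["preconditions"] <= state        # all preconditions hold in the state

    def apply_operator(operator, state):
        return (state - operator["del_effects"]) | operator["add_effects"]

    state = {"in_front_of_cooler", "hands_free", "cooler_closed", "light_on"}
    if applicable(open_cooler, state):
        state = apply_operator(open_cooler, state)
    print(sorted(state))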

Planning in the abstract

Once Ana was armed with her learned abstract representation, the researchers asked her to do something that required some planning: take the bottle from the cooler and put it in the cupboard.

As they hoped she would, Ana navigated to the cooler and opened it to reveal the bottle. But she didn’t pick it up. Instead, she planned ahead. She realized that if she had the bottle in her gripper, she wouldn’t be able to open the cupboard, because doing so requires both hands. So after she opened the cooler, she navigated to the cupboard. There she saw that the light switch was in the “on” position, and she realized that opening the cupboard would block the switch. So she turned the switch off before opening the cupboard, then returned to the cooler, retrieved the bottle, and finally placed it in the cupboard. In short, she planned ahead, identifying problems and fixing them before they could occur.

“We didn’t provide Ana with any of the abstract representations she needed to plan for the task,” Konidaris said. “She learned those abstractions on her own, and once she had them, planning was easy. She found that plan in only about four milliseconds.”

Konidaris says the research provides an important theoretical building block for applying artificial intelligence to robotics. “We believe that allowing our robots to plan and learn in the abstract rather than the concrete will be fundamental to building truly intelligent robots,” he said. “Many problems are often quite simple, if you think about them in the right way.”

Konidaris’ coauthors on the paper were Leslie Pack Kaelbling and Tomas Lozano-Perez from MIT. The research was supported by an award from the Defense Advanced Research Projects Agency and by MIT’s Intelligence Initiative.

Robotic fish can ‘see’ and mimic live fish

For more than a decade, biomimetic robots have been deployed alongside live animals to better understand the drivers of animal behavior, including social cues, fear, leadership, and even courtship. The encounters have always been unidirectional; the animals observe and respond to the robots. But in the lab of Maurizio Porfiri, a professor of mechanical and aerospace engineering at the NYU Tandon School of Engineering, the robots can now watch back.

Porfiri and a team of collaborators tapped advances in real-time tracking software and robotics to design and test the first closed-loop control system featuring a bioinspired robotic replica interacting in three dimensions with live zebrafish. The system allows the robotic replica to both “see” and mimic the behavior of live zebrafish in real time. The results of these experiments, which represent the first of their kind with zebrafish, were published in Scientific Reports.

The team tested the interaction of the robotic replica and live zebrafish under several different experimental conditions, but in all cases, the replica and the live fish were separated by a transparent panel. In preference tests, zebrafish showed greater affinity — and, importantly, no signs of anxiety or fear — toward a robotic replica that mirrored their own behavior rather than a robot that followed a pre-set pattern of swimming.

Porfiri noted that while mirroring is a basic, limited form of social interaction, these experiments are a powerful first step toward enriching the exchange between robots and live animals. “This form of mirroring is a very simple social behavior, in which the replica seeks only to stay as close as possible to the live animal. But this is the baseline for the types of interactions we’re hoping to build between animals and robots,” Porfiri said. “We now have the ability to measure the response of zebrafish to the robot in real time, and to allow the robot to watch and maneuver in real time, which is significant.”
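
In control terms, mirroring of this simple kind amounts to a tracking loop that repeatedly nudges the replica toward the fish's tracked position. The sketch below is a hypothetical minimal version; the gain, coordinates and function names are invented and do not describe the NYU system's actual controller.

    # Minimal mirroring loop (gain and names invented; the real system couples
    # tracking software to the replica's actuators).
    def mirror_step(replica_pos, fish_pos, gain=0.3):
        """Move the replica a fraction of the way toward the tracked fish position."""
        return [r + gain * (f - r) for r, f in zip(replica_pos, fish_pos)]

    replica = [0.0, 0.0, 0.0]
    fish_track = [[5.0, 1.0, 2.0], [5.5, 1.2, 2.1], [6.0, 1.5, 2.0]]   # tracked positions (cm)
    for fish in fish_track:                  # one iteration per video frame
        replica = mirror_step(replica, fish)
    print([round(x, 2) for x in replica])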

The researchers are now investigating social interactions among live zebrafish to better understand the animals’ natural cues and responses. “We are learning what really matters in zebrafish social interactions, and we can use this information to help the robot interpret and respond appropriately, rather than just copying what it sees,” he said.

Story Source:

Materials provided by NYU Tandon School of Engineering. Note: Content may be edited for style and length.

Crowd workers, AI make conversational agents smarter

Conversational agents such as Siri, Alexa and Cortana are great at giving you the weather, but they are flummoxed when asked for unusual information or follow-up questions. By adding humans to the loop, Carnegie Mellon University researchers have created a conversational agent that is tough to stump.

The chatbot system, called Evorus, is not the first to use human brainpower to answer a broad range of questions. What sets it apart, says Jeff Bigham, associate professor in the Human-Computer Interaction Institute, is that humans are simultaneously training the system’s artificial intelligence, making it gradually less dependent on people.

Like an earlier CMU agent called Chorus, Evorus recruits crowd workers on demand from Amazon Mechanical Turk to answer questions from users, with the crowd workers voting on the best answer. Evorus also keeps track of questions asked and answered and, over time, begins to suggest these answers for subsequent questions. The researchers also have developed a process by which the AI can help to approve a message with less crowd worker involvement.
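
A hedged sketch of the reuse idea: answers that win the crowd vote are remembered and suggested for similar later questions, so the crowd is needed less often. The word-overlap matching below is a deliberately crude stand-in for Evorus's actual answer-suggestion machinery, and the example questions are invented.

    # Crowd voting plus answer reuse (illustrative stand-in, not Evorus itself).
    answer_memory = []   # (question_words, answer) pairs approved by crowd vote

    def crowd_round(question, candidate_answers, votes):
        """Pick the answer with the most crowd votes and remember it."""
        best = max(candidate_answers, key=lambda a: votes.get(a, 0))
        answer_memory.append((set(question.lower().split()), best))
        return best

    def suggest_from_memory(question, min_overlap=2):
        words = set(question.lower().split())
        for past_words, answer in answer_memory:
            if len(words & past_words) >= min_overlap:
                return answer            # reuse: no crowd work needed this time
        return None

    crowd_round("what time does the campus library close today",
                ["It closes at 10 pm.", "Not sure."],
                {"It closes at 10 pm.": 3, "Not sure.": 1})
    print(suggest_from_memory("when does the library close"))   # reuses the stored answer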

“Companies have put a lot of effort into teaching people how to talk to conversational agents, given the devices’ limited command of speech and topics,” Bigham said. “Now, we’re letting people speak more freely and it’s the agent that must learn to accommodate them.”

The system isn’t in its final form, but it is available for download and use by anyone willing to be part of the research effort: http://talkingtothecrowd.org/.

A research paper on Evorus, already available online, will be presented by Bigham’s research team later this year at CHI 2018, the Conference on Human Factors in Computing Systems in Montreal.

Totally automated conversational agents can do well answering simple, common questions and commands, and can converse in depth when the subject is relatively narrow, such as advising on bus schedules. Systems with people in the loop can answer a wide variety of questions, Bigham said, but, with the exception of concierge or travel services for which users are willing to pay, agents that depend on humans are too expensive to be scaled up for wide use. A session on Chorus costs an average of $2.48.

“With Evorus, we’ve hit a sweet spot in the collaboration between the machine and the crowd,” Bigham said. The hope is that as the system grows, the AI is able to handle an increasing percentage of questions, while the number of crowd workers necessary to respond to “long tail” questions will remain relatively constant.

Keeping humans in the loop also reduces the risk that malicious users will manipulate the conversational agent inappropriately, as occurred when Microsoft briefly deployed its Tay chatbot in 2016, said Ting-Hao Huang, a Ph.D. student in the Language Technologies Institute (LTI). Huang developed Evorus with Bigham and Joseph Chee Chang, also a Ph.D. student in LTI.

During Evorus’ five-month deployment with 80 users and 181 conversations, automated responses to questions were chosen 12 percent of the time, crowd voting was reduced by almost 14 percent and the cost of crowd work for each reply to a user’s message dropped by 33 percent.

Evorus is a text chatbot, but is deployed via Google Hangouts, which can accommodate voice input, as well as access from computers, phones and smartwatches. To enhance its scalability, Evorus uses a software architecture that can accept automated question-answering components developed by third parties.

This research is supported by Project InMind, a Carnegie Mellon effort sponsored by Yahoo!/Oath to develop advanced technologies for personalized digital assistants.

Worm ‘uploaded’ to computer and taught amazing tricks

It is not much to look at: the nematode C. elegans is about one millimetre in length and is a very simple organism. But for science, it is extremely interesting. C. elegans is the only living being whose neural system has been analysed completely. It can be drawn as a circuit diagram or reproduced by computer software, so that the neural activity of the worm is simulated by a computer program.

Such an artificial C. elegans has now been trained at TU Wien (Vienna) to perform a remarkable trick: The computer worm has learned to balance a pole at the tip of its tail.

The Worm’s Reflexive behaviour as Computer Code

C. elegans has to get by with only 300 neurons. But they are enough to make sure that the worm can find its way, eat bacteria and react to certain external stimuli. It can, for example, react to a touch on its body. A reflexive response is triggered and the worm squirms away.

This behaviour can be perfectly explained: it is determined by the worm’s nerve cells and the strength of the connections between them. When this simple reflex-network is recreated on a computer, then the simulated worm reacts in exactly the same way to a virtual stimulation — not because anybody programmed it to do so, but because this kind of behaviour is hard-wired in its neural network.

“This reflexive response of such a neural circuit, is very similar to the reaction of a control agent balancing a pole,” says Ramin Hasani (Institute of Computer Engineering, TU Wien). This is a typical control problem which can be solved quite well by standard controllers: a pole is fixed on its lower end on a moving object, and it is supposed to stay in a vertical position. Whenever it starts tilting, the lower end has to move slightly to keep the pole from tipping over. Much like the worm has to change its direction whenever it is stimulated by a touch, the pole must be moved whenever it tilts.

Mathias Lechner, Radu Grosu and Ramin Hasani wanted to find out whether the neural system of C. elegans, uploaded to a computer, could solve this problem — without adding any nerve cells, just by tuning the strength of the synaptic connections. This basic idea (tuning the connections between nerve cells) is also the characteristic feature of any natural learning process.

A Program without a Programmer

“With the help of reinforcement learning, a method also known as ‘learning based on experiment and reward’, the artificial reflex network was trained and optimized on the computer,” Mathias Lechner explains. And indeed, the team succeeded in teaching the virtual nerve system to balance a pole. “The result is a controller which can solve a standard control problem — stabilizing a pole balanced on its tip. But no human being has written even one line of code for this controller; it just emerged by training a biological nerve system,” says Radu Grosu.
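
To make the control problem concrete, here is a simplified stand-in, not the study's method: instead of the worm's simulated 300-neuron circuit trained with reinforcement learning, a four-weight linear "network" is tuned by plain random search on a hand-coded cart-pole. What it shares with the study is the core idea that the only thing adjusted during learning is the set of connection strengths.

    import math, random

    def pole_episode(weights, max_steps=500):
        """Run one balancing attempt; return how many steps the pole stayed up."""
        g, m_cart, m_pole, half_len, force_mag, dt = 9.8, 1.0, 0.1, 0.5, 10.0, 0.02
        total_m, pml = m_cart + m_pole, m_pole * half_len
        x = x_dot = theta_dot = 0.0
        theta = random.uniform(-0.05, 0.05)              # small random initial tilt
        for step in range(max_steps):
            state = (x, x_dot, theta, theta_dot)
            # The "network": a weighted sum of the sensed state decides push direction.
            force = force_mag if sum(w * s for w, s in zip(weights, state)) > 0 else -force_mag
            cos_t, sin_t = math.cos(theta), math.sin(theta)
            temp = (force + pml * theta_dot ** 2 * sin_t) / total_m
            theta_acc = (g * sin_t - cos_t * temp) / (half_len * (4.0 / 3.0 - m_pole * cos_t ** 2 / total_m))
            x_acc = temp - pml * theta_acc * cos_t / total_m
            x, x_dot = x + dt * x_dot, x_dot + dt * x_acc
            theta, theta_dot = theta + dt * theta_dot, theta_dot + dt * theta_acc
            if abs(theta) > 0.21 or abs(x) > 2.4:        # pole fell over or cart ran off the track
                return step
        return max_steps

    random.seed(0)
    best_w, best_score = None, -1
    for _ in range(100):                                  # "learning": keep the best-scoring weights
        w = [random.uniform(-1, 1) for _ in range(4)]
        score = pole_episode(w)
        if score > best_score:
            best_w, best_score = w, score
    print("best weights found:", [round(v, 2) for v in best_w], "balanced for", best_score, "steps")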

The team is going to explore the capabilities of such control circuits further. The project raises the question of whether there is a fundamental difference between living nerve systems and computer code. Are machine learning and the activity of our brain the same on a fundamental level? At least we can be pretty sure that the simple nematode C. elegans does not care whether it lives as a worm in the ground or as a virtual worm on a computer hard drive.

Story Source:

Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

Lightweight robots harvest cucumbers

Automation-intensive sectors such as the automotive industry are not the only ones to rely on robots. In more and more agricultural settings, automation systems are superseding strenuous manual labor. As part of the EU’s CATCH project, the Fraunhofer Institute for Production Systems and Design Technology IPK is developing and testing a dual-arm robot for the automated harvesting of cucumbers. This lightweight solution has the potential to keep crop cultivation commercially viable in Germany.

In Germany, cucumbers destined for pickle jars are harvested by hand with the aid of “cucumber flyers” — farm vehicles with wing-like attachments. Seasonal workers lie on their stomachs on the vehicle’s wings and pluck the ripe cucumbers. This labor-intensive and energy-sapping type of manual harvesting is increasingly becoming uneconomical. In addition, the per-unit costs of harvesting have risen since Germany introduced a minimum wage. Many of the country’s agricultural regions consequently face an uncertain future; cucumber farming has already begun relocating to Eastern Europe and India. There is thus an urgent need for improved harvesting technologies to maintain the economic viability of cucumber farming in Germany. Experts from Fraunhofer IPK in Berlin, along with other German and Spanish researchers, are studying the potential for automating cucumber harvests in the scope of the EU project CATCH, which stands for “Cucumber Gathering — Green Field Experiments.” Project partners are the Leibniz Institute for Agricultural Engineering and Bioeconomy in Germany and the CSIC-UPM Centre for Automation and Robotics (CAR) in Spain.

CATCH researchers want to develop and test a dual-arm robot system consisting of inexpensive lightweight modules. The ultimate aim: this system could be used for automated cucumber farming and other agricultural applications. The robotic picker would have to be cost-effective, high-performance and dependable. Even in adverse weather, it would need to be capable of first identifying ripe cucumbers and then using its two gripper arms to gently pick and store them. To this end, cutting-edge control methods equip the robot with tactile perception and enable it to adapt to ambient conditions. These methods also make it possible for the dual-arm robot system to imitate human movements. In particular, researchers want to make sure that the robot does not damage crops — or pull them and their roots out of the soil. But that is not all. The automated harvester must be at least as efficient as its experienced human counterpart, who can pick as many as 13 cucumbers per minute.

High success rate

It is a considerable challenge to design autonomous systems capable of optical and tactile sensing, assessing and evaluating. The challenge is only compounded by cucumber harvesting: a robot must identify green objects camouflaged by green surroundings. In addition, cucumbers are randomly distributed throughout a field, and some are concealed by vegetation. Varying light conditions make the mission all the more difficult. It should be possible to use multispectral cameras and intelligent image processing to help locate cucumbers and guide the robot’s gripper arms to pluck them. This part of the CATCH project is overseen by CSIC-UPM, the Spanish project partner. A special camera system helps ensure that the robot detects and locates approximately 95 percent of cucumbers, an impressive success rate. The goal, of course, is to advance the technology so that the robot picks all the ripe cucumbers to foster growth of new ones. Fraunhofer IPK has developed robot arms with five degrees of freedom on the basis of hardware modules developed by igus GmbH in Cologne.
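
To make the detection step concrete, here is an invented Python sketch of one general approach to separating green produce from green foliage with multispectral data: pixels are scored with a vegetation-style index computed from near-infrared and red reflectance and then thresholded. The spectra, index choice and threshold are assumptions for illustration, not the CSIC-UPM pipeline.

    import numpy as np

    # Illustrative only: synthetic multispectral reflectance maps, with a
    # "cucumber" patch given slightly different spectra than the leafy background.
    rng = np.random.default_rng(3)
    h, w = 60, 80
    nir = rng.normal(0.55, 0.02, (h, w))      # near-infrared reflectance of leafy background
    red = rng.normal(0.10, 0.015, (h, w))     # red reflectance of leafy background
    nir[20:30, 30:50] = rng.normal(0.40, 0.02, (10, 20))    # pretend cucumber patch
    red[20:30, 30:50] = rng.normal(0.16, 0.015, (10, 20))

    ndvi = (nir - red) / (nir + red + 1e-9)   # a standard vegetation-style index
    candidate_mask = ndvi < 0.5               # cucumber pixels score lower in this toy setup
    ys, xs = np.nonzero(candidate_mask)
    print("candidate pixels:", len(ys), " bounding box:",
          (ys.min(), xs.min(), ys.max(), xs.max()))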

In search of human inspiration

The IPK project experts are tasked with developing three gripper prototypes: a gripper based on vacuum technology, a set of bionic gripper jaws (Fin Ray®) and a customized “cucumber hand” based on OpenBionics robot hands. They are relying on insights acquired during a previous European research project, in which they developed a dual-arm robot control system with efficient task-oriented programming for Workerbot I — a humanoid robot capable of industrial assembly. Project experts from IPK are enhancing this system so that it can plan, program and control the behavior of robots harvesting cucumbers. These preprogrammed behavioral patterns make bimanual searching possible, meaning the robot can look for cucumbers as a human would. Dr. Dragoljub Surdilovic, a scientist at Fraunhofer IPK, explains: “The robot can, for example, push leaves to the side using symmetrical or asymmetrical movements, or congruent and incongruent movements. As a result, it can automatically change directions on the fly to approach and then grasp a cucumber.” The researchers’ goal is to create an intelligent control system capable of making judgment calls: assigning a certain task to a certain gripper arm, monitoring cucumber picking and dealing with exceptions.

In July 2017, the Leibniz Institute for Agricultural Engineering and Bioeconomy used various types of cucumbers to conduct initial field testing of the robot system at its test site. The institute also tested harvesting new types of cucumbers with distinguishing features that make them easier to pick. In short, the first round of testing validated basic functionality. Since fall 2017, project partners have been conducting additional tests in a Leibniz Institute greenhouse. Researchers are especially eager to scrutinize the extent to which interference or malfunctions affect the efficiency and robustness of the system. Once testing of the lightweight robot has been completed, project partners will strive to make it commercially viable. Companies, cucumber farmers and agricultural associations have expressed considerable interest in the dual-arm robot. In November 2017, the CATCH project was unveiled to the general public at Agritechnica, the world’s leading trade fair for agricultural technology. The German Agricultural Society (DLG e.V.) exhibited the robot at its Agritechnica booth, eliciting enthusiastic feedback from agricultural specialists and numerous companies.

Story Source:

Materials provided by Fraunhofer-Gesellschaft. Note: Content may be edited for style and length.

Letting molecular robots swarm like birds

A team of researchers from Hokkaido University and Kansai University has developed DNA-assisted molecular robots that autonomously swarm in response to chemical and physical signals, paving the way for developing future nano-machines.

The world’s smallest “swarm robot” measures 25 nanometers in diameter and 5 micrometers in length, and exhibits swarming behavior resembling that of motile organisms such as fish, ants and birds.

“Swarm robots are one of the most elusive subjects in robotics,” says Akira Kakugo of the research team at Hokkaido University. “Fish schools, ant colonies and bird flocks show fascinating features that cannot be achieved by individuals acting alone. These include the formation of complex structures, distinct divisions of labor, robustness and flexibility, all of which emerge through local interactions among the individuals without the presence of a leader.” Inspired by these characteristics, researchers have been working to develop micro-scale swarm robots.

In the present study, Kakugo and his collaborators have built a molecular system that is composed of the three essential components of a robot: sensors, information processors and actuators. They used cellular proteins called microtubules and kinesins as the actuators, and DNA as the information processor. Microtubules are filamentous proteins that serve as the railways in the cellular transportation system, while kinesins are motor proteins that run on those railways by consuming chemical energy obtained from the hydrolysis of adenosine triphosphate (ATP). The team inverted the usual arrangement and built a system in which microtubules glide randomly across a kinesin-coated surface.

A major challenge in swarm robotics is the construction of a large number of individual robots capable of programmable self-assembly. The team addressed this issue by introducing DNA molecules, which hybridize when their sequences are complementary, into the system. Chemically synthesized DNA molecules encoding specific programs in their sequences were conjugated to microtubules labeled with green or red fluorescent dye.

The team then monitored the motions of the DNA-conjugated microtubules gliding on a kinesin-coated surface. Initially, five million microtubules moved without interacting with each other. The researchers then added a single-strand linker DNA (l-DNA) programmed to initiate interactions among the DNA-attached microtubules. Upon introduction of the l-DNA, the microtubules began to assemble, forming swarms much larger than any individual microtubule. When another single-strand DNA (d-DNA), programmed to dissociate the swarms, was added, the swarms quickly dispersed. This demonstrated that the swarming of a large number of microtubules can be reversibly regulated by selectively supplying input DNA signals to the system.
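As a purely numerical illustration of this reversible regulation, the toy model below tracks the fraction of microtubules bound into swarms as linker (l-DNA) and dissociator (d-DNA) signals are supplied in turn. The mean-field form and rate constants are assumptions for illustration, not the study’s analysis.

    # Toy sketch of reversible, DNA-signal-controlled swarming. The mean-field
    # model and rate constants are illustrative assumptions, not the study's data.
    import numpy as np

    def simulate(signal_schedule, steps=300, k_on=0.05, k_off=0.05):
        """Return the fraction of microtubules in swarms at each time step.

        signal_schedule: function mapping a step index to 'linker',
        'dissociator' or None.
        """
        swarm_fraction = 0.0
        history = []
        for t in range(steps):
            signal = signal_schedule(t)
            if signal == "linker":         # l-DNA present: hybridization drives assembly
                swarm_fraction += k_on * (1.0 - swarm_fraction)
            elif signal == "dissociator":  # d-DNA present: links are broken again
                swarm_fraction -= k_off * swarm_fraction
            history.append(swarm_fraction)
        return np.array(history)

    # Supply linker DNA for the first half of the run, then dissociator DNA.
    trace = simulate(lambda t: "linker" if t < 150 else "dissociator")
    print(f"peak swarm fraction: {trace.max():.2f}, final: {trace[-1]:.2f}")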

Moreover, the team added a light-sensitive sensor to the system: azobenzene attached to the DNA molecules. Azobenzene isomerizes reversibly under visible or ultraviolet light, and the researchers exploited this to switch the interaction between DNA molecules on or off. This enabled light-induced switching between the solitary and swarm states of the microtubules. The team also demonstrated that the swarms of microtubules move with a translational or rotational motion depending on the rigidity of the microtubules.

“This is the first evidence showing that the swarming behavior of molecular robots can be programmed by DNA computing. The system acts as a basic computer by executing simple mathematical operations, such as AND or OR operations, leading to various structures and complex motions. It is expected that such a system will contribute to developing artificial muscles and gene diagnostics, as well as to building nano-machines in the future,” Kakugo commented.
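The quote above mentions AND and OR operations. At the truth-table level, the idea can be sketched as follows; in the real system the logic is carried out chemically by DNA hybridization and the azobenzene photoswitch, and the signal names and mapping below are assumptions for illustration.

    # Truth-table sketch of the AND-style logic mentioned above, combined with
    # the azobenzene light switch. Signal names and the mapping to swarming are
    # illustrative assumptions; the real logic is implemented chemically.
    def swarm_state(linker_a: bool, linker_b: bool, uv_on: bool) -> str:
        # AND over the two DNA inputs; ultraviolet light flips azobenzene to a
        # form that blocks hybridization, switching the interaction off.
        interaction_allowed = not uv_on
        return "swarm" if (linker_a and linker_b) and interaction_allowed else "solitary"

    for a in (False, True):
        for b in (False, True):
            for uv in (False, True):
                print(f"l-DNA A={a}, l-DNA B={b}, UV={uv} -> {swarm_state(a, b, uv)}")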

Story Source:

Materials provided by Hokkaido University. Note: Content may be edited for style and length.

Applying machine learning to the universe’s mysteries

Computers can beat chess champions, simulate star explosions, and forecast global climate. We are even teaching them to be capable problem-solvers and fast learners.

And now, physicists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and their collaborators have demonstrated that computers are ready to tackle the universe’s greatest mysteries. The team fed thousands of images from simulated high-energy particle collisions to train computer networks to identify important features.

The researchers programmed powerful arrays known as neural networks to serve as a sort of hivelike digital brain in analyzing and interpreting the images of the simulated particle debris left over from the collisions. During this test run the researchers found that the neural networks had up to a 95 percent success rate in recognizing important features in a sampling of about 18,000 images.
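The release does not give the network architecture or training details, so the following is only a minimal sketch, in PyTorch, of the general approach: a small convolutional network trained to classify labeled images of simulated collision debris. The input size, the two equation-of-state classes and all hyperparameters are assumptions for illustration.

    # Minimal sketch of a convolutional classifier for simulated collision images,
    # written in PyTorch. Input size, class count and hyperparameters are
    # assumptions for illustration, not the study's actual network.
    import torch
    import torch.nn as nn

    class CollisionNet(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),   # 48x48 -> 24x24
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),   # 24x24 -> 12x12
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 12 * 12, 64), nn.ReLU(),
                nn.Linear(64, n_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    # One training step on a random stand-in batch; real inputs would be
    # simulated collision images labeled by their underlying equation of state.
    model = CollisionNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    images = torch.randn(8, 1, 48, 48)
    labels = torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"batch loss: {loss.item():.3f}")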

The study was published Jan. 15 in the journal Nature Communications.

The next step will be to apply the same machine learning process to actual experimental data.

Powerful machine learning algorithms allow these networks to improve in their analysis as they process more images. The underlying technology is used in facial recognition and other types of image-based object recognition applications.

The images used in this study — relevant to particle-collider nuclear physics experiments at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider and CERN’s Large Hadron Collider — recreate the conditions of a subatomic particle “soup”: a superhot fluid state known as the quark-gluon plasma, which is believed to have existed just millionths of a second after the birth of the universe. Berkeley Lab physicists participate in experiments at both of these sites.

“We are trying to learn about the most important properties of the quark-gluon plasma,” said Xin-Nian Wang, a nuclear physicist in the Nuclear Science Division at Berkeley Lab who is a member of the team. Some of these properties are so short-lived and occur at such tiny scales that they remain shrouded in mystery.

In experiments, nuclear physicists use particle colliders to smash together heavy nuclei, like gold or lead atoms that are stripped of electrons. These collisions are believed to liberate particles inside the atoms’ nuclei, forming a fleeting, subatomic-scale fireball that breaks down even protons and neutrons into a free-floating form of their typically bound-up building blocks: quarks and gluons.

Researchers hope that by learning the precise conditions under which this quark-gluon plasma forms, such as how much energy is packed in, and its temperature and pressure as it transitions into a fluid state, they will gain new insights about its component particles of matter and their properties, and about the universe’s formative stages.

But exacting measurements of these properties — the so-called “equation of state” involved as matter changes from one phase to another in these collisions — have proven challenging. The initial conditions in the experiments can influence the outcome, so it’s challenging to extract equation-of-state measurements that are independent of these conditions.
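For readers unfamiliar with the term, an equation of state is a formula relating a system’s pressure to its energy density and temperature. As a rough textbook illustration, not a result from this study, the relation for an ideal gas of massless quarks and gluons is simply

    % Illustration only (standard textbook result, not taken from the study):
    p(\varepsilon) = c_s^2 \, \varepsilon, \qquad c_s^2 = \tfrac{1}{3}

The interacting quark-gluon plasma deviates from this ideal value, and pinning down that deviation from experimental data is what the measurements described here aim to do.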

“In the nuclear physics community, the holy grail is to see phase transitions in these high-energy interactions, and then determine the equation of state from the experimental data,” Wang said. “This is the most important property of the quark-gluon plasma we have yet to learn from experiments.”

Researchers also seek insight about the fundamental forces that govern the interactions between quarks and gluons, what physicists refer to as quantum chromodynamics.

Long-Gang Pang, the lead author of the latest study and a Berkeley Lab-affiliated postdoctoral researcher at UC Berkeley, said that in 2016, while he was a postdoctoral fellow at the Frankfurt Institute for Advanced Studies, he became interested in the potential for artificial intelligence (AI) to help solve challenging science problems.

He saw that one form of AI, known as a deep convolutional neural network — with architecture inspired by the image-handling processes in animal brains — appeared to be a good fit for analyzing science-related images.

“These networks can recognize patterns and evaluate board positions and selected movements in the game of Go,” Pang said. “We thought, ‘If we have some visual scientific data, maybe we can get an abstract concept or valuable physical information from this.'”

Wang added, “With this type of machine learning, we are trying to identify a certain pattern or correlation of patterns that is a unique signature of the equation of state.” After training, the network can pinpoint on its own the portions of an image, and the correlations within it, that are most relevant to the problem scientists are trying to solve, if any exist.
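One simple way to read out which parts of an image a trained network considers important is a gradient-based saliency map: the gradient of the predicted class score with respect to the input pixels highlights the regions that most influence the decision. The tiny model below is a stand-in for illustration, not the study’s network or its actual interpretation method.

    # Illustrative gradient-based saliency sketch. The tiny model is a stand-in;
    # the study's network and interpretation method are not reproduced here.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(8 * 48 * 48, 2),
    )
    model.eval()

    image = torch.randn(1, 1, 48, 48, requires_grad=True)
    score = model(image)[0].max()   # score of the most likely class
    score.backward()                # gradients flow back to the input pixels

    saliency = image.grad.abs().squeeze()   # large values = influential pixels
    top = torch.topk(saliency.flatten(), k=5).indices
    print("most influential pixel indices:", top.tolist())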

Generating the data needed for the analysis can be very computationally intensive, Pang said; in some cases it took about a full day of computing time to create just one image. When the researchers employed an array of GPUs working in parallel — graphics processing units were first created to enhance video game effects and have since found a wide variety of other uses — they cut that time down to about 20 minutes per image.

They used computing resources at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC) in their study, with most of the computing work focused at GPU clusters at GSI in Germany and Central China Normal University in China.

A benefit of using sophisticated neural networks, the researchers noted, is that they can identify features that weren’t even sought in the initial experiment, like finding a needle in a haystack when you weren’t even looking for it. And they can extract useful details even from fuzzy images.

“Even if you have low resolution, you can still get some important information,” Pang said.

Discussions are already underway to apply the machine learning tools to data from actual heavy-ion collision experiments, and the simulated results should be helpful in training neural networks to interpret the real data.

“There will be many applications for this in high-energy particle physics,” Wang said, beyond particle-collider experiments.