Brain-Based Circuitry Just Made Artificial Intelligence A Whole Lot Faster

We take the vast computing power of our brains for granted. But scientists are still trying to get computers to the brain’s level.

This is how we ended up with artificial intelligence algorithms that learn through layers of virtual neurons: the neural net.

Now a team of engineers has taken another step closer to emulating the computers in our noggins: they’ve built a physical neural network, with circuits that even more closely resemble neurons. When they tested an AI algorithm on the new type of circuitry, they found that it performed as well as conventional neural nets already in use. But! the new integrated neural net system completed the task with 100 times less energy than a conventional AI algorithm.

If these new neuron-based circuits take off, artificial intelligence researchers will soon be able to do a lot more computing with a lot less energy. Like using a tin can to communicate with an actual telephone, computer chips and neural net algorithms just speak two different languages, and work slower as a result. But in the new system, the hardware and software were built to work perfectly together. So the new AI system completed the tasks much faster than a conventional system, without any drop in accuracy.

This is a step up from previous attempts to make silicon-based neural networks. The AI systems built on these sorts of neuron-inspired chips usually don’t work as well as conventional artificial intelligence. But the new research modeled two types of neurons: one geared for quick computations and another designed to store long-term memory, the researchers explained to MIT Technology Review.

There’s good reason to be skeptical of any researcher who claims that the answer to truly comprehensive, general artificial intelligence and consciousness is to recreate the human brain. That’s because, fundamentally, we know very little about how the brain works. And chances are, there are lots of things in our brains that a computer would find useless.

But even so, the researchers behind the new artificial neural hardware have been able to glean important lessons from how our brains work and apply them to computer science. In that sense, they have figured out how to advance artificial intelligence by cherry-picking what our brains have to offer without getting weighed down trying to rebuild the whole darn thing.

As technology sucks up more and more power, the hundred-fold improvement to energy efficiency in this AI system means scientists will be able to pursue big questions without leaving such a huge footprint on the environment.

AI senses people’s pose through walls

X-ray vision has long seemed like a far-fetched sci-fi fantasy, but over the last decade a team led by Professor Dina Katabi from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has continually gotten us closer to seeing through walls.

Their latest project, “RF-Pose,” uses artificial intelligence (AI) to teach wireless devices to sense people’s postures and movement, even from the other side of a wall.

The researchers use a neural network to analyze radio signals that bounce off people’s bodies, and can then create a dynamic stick figure that walks, stops, sits and moves its limbs as the person performs those actions.

The team says that the system could be used to monitor diseases like Parkinson’s and multiple sclerosis (MS), providing a better understanding of disease progression and allowing doctors to adjust medications accordingly. It could also help elderly people live more independently, while providing the added security of monitoring for falls, injuries and changes in activity patterns.

(All data the team collected has subjects’ consent and is anonymized and encrypted to protect user privacy. For future real-world applications, the team plans to implement a “consent mechanism” in which the person who installs the device is cued to do a specific set of movements in order for it to begin to monitor the environment.)

The team is currently working with doctors to explore multiple applications in healthcare.

“We’ve seen that monitoring patients’ walking speed and ability to do basic activities on their own gives healthcare providers a window into their lives that they didn’t have before, which could be meaningful for a whole range of diseases,” says Katabi, who co-wrote a new paper about the project. “A key advantage of our approach is that patients do not have to wear sensors or remember to charge their devices.”

Besides healthcare, the team says that RF-Pose could also be used for new classes of video games where players move around the house, or even in search-and-rescue missions to help locate survivors.

“Just like how cellphones and Wi-Fi routers have become essential parts of today’s households, I believe that wireless technologies like these will help power the homes of the future,” says Katabi, who co-wrote the new paper with PhD student and lead author Mingmin Zhao, MIT professor Antonio Torralba, postdoc Mohammad Abu Alsheikh, graduate student Tianhong Li and PhD students Yonglong Tian and Hang Zhao. They will present it later this month at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah.

One challenge the researchers had to address is that most neural networks are trained using data labeled by hand. A neural network trained to identify cats, for example, requires that people look at a big dataset of images and label each one as either “cat” or “not cat.” Radio signals, meanwhile, can’t be easily labeled by humans.

To address this, the researchers collected examples using both their wireless device and a camera. They gathered thousands of images of people doing activities like walking, talking, sitting, opening doors and waiting for elevators.

They then used these images from the camera to extract the stick figures, which they showed to the neural network along with the corresponding radio signal. This combination of examples enabled the system to learn the association between the radio signal and the stick figures of the people in the scene.
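To make that training recipe concrete, here is a minimal sketch, in Python/PyTorch, of the cross-modal supervision idea described above: a camera-based pose extractor acts as the "teacher," producing stick-figure keypoints from synchronized video frames, and those keypoints become the labels for a "student" network that only ever sees the radio signal. The tensor shapes, layer sizes, and function names here are illustrative assumptions, not the team's actual code.

```python
# Illustrative sketch of cross-modal "teacher-student" training (not the authors' code).
# Assumption: RF frames arrive as tensors of shape (batch, 2, 128, 128), and a vision
# system (the "teacher") supplies 14 (x, y) keypoints per synchronized frame.
import torch
import torch.nn as nn

class RFPoseStudent(nn.Module):
    """Maps a radio-frequency heatmap to 2-D skeleton keypoints."""
    def __init__(self, num_keypoints: int = 14):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, num_keypoints * 2)  # (x, y) per joint

    def forward(self, rf_frame):
        return self.head(self.encoder(rf_frame))

def train_step(student, optimizer, rf_frame, teacher_keypoints):
    """One supervision step: the camera-derived stick figure is the label."""
    optimizer.zero_grad()
    pred = student(rf_frame)
    loss = nn.functional.mse_loss(pred, teacher_keypoints.flatten(1))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once trained this way, the student no longer needs the camera at all, which is why the system can keep working when a wall blocks the view.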

Post-training, RF-Pose was able to estimate a person’s posture and movements without cameras, using only the wireless reflections that bounce off people’s bodies.

Since cameras can’t see through walls, the network was never explicitly trained on data from the other side of a wall — which is what made it particularly surprising to the MIT team that the network could generalize its knowledge to be able to handle through-wall movement.

“If you think of the computer vision system as the teacher, this is a truly fascinating example of the student outperforming the teacher,” says Torralba.

Besides sensing movement, the authors also showed that they could use wireless signals to accurately identify somebody 83 percent of the time out of a line-up of 100 individuals. This ability could be particularly useful in search-and-rescue operations, when it may be helpful to know the identity of specific people.

For this paper, the model outputs a 2-D stick figure, but the team is also working to create 3-D representations that would be able to reflect even smaller micromovements. For example, it might be able to see if an older person’s hands are shaking regularly enough that they may want to get a check-up.

“By using this combination of visual data and AI to see through walls, we can enable better scene understanding and smarter environments to live safer, more productive lives,” says Zhao.

Google: JK, We’re Going To Keep Working With The Military After All

Google pulled a headfake.

Let’s catch you up real quick: Google partnered with the Department of Defense for Project Maven, in which artificial intelligence would analyze military drone footage. Google employees made it clear they weren’t happy to be working on the project. And last week, it looked like the company was going to meet their demands: Google announced that it would not renew its contract with the military when it expires next year.

Well, it turns out that that sweet, sweet military dough is too good to pass up. On Thursday, Google CEO Sundar Pichai revealed new internal guidelines for how Google plans to conduct itself in the future. And we can expect the company’s military deals to continue, as WIRED reported (is it a coincidence that, last month, the company apparently removed its longtime motto “don’t be evil” from its code of conduct? You decide).

The updated guidelines, which Google laid out in a blog post, do say that Google will have no part in building weapons or “other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” and they also rule out surveillance technologies like those sold by Amazon.

You may be thinking, “But that’s the same stance Google had at the beginning of this whole Project Maven mess!” And, dear reader, you would be right. At least as far as military involvement goes, Google’s stance seems to be something along the lines of: “Hey, we hear you. But we’re gonna keep doing what we want anyway. Trust us, we’re gonna be, like, really really ethical.”

In response, many are calling for Google to establish an independent ethics committee to oversee its military involvement, according to WIRED. Because, strangely enough, this fake-out may have shaken people’s trust in the unavoidable, omnipresent provider of critical online services such as email, driving directions, and late-night paranoid internet searches.

In all fairness, other tenets of Google’s new guidelines could be crowd-pleasers. They call for technology developed with particular care towards racial minorities, members of the LGBTQIA community, and other marginalized groups. This is likely a response to the fact that many AI systems inadvertently pick up biases from their training data and end up treating people unfairly.

It’s not yet clear how Google and the Department of Defense will work together in the future. And, as many have pointed out, the Department of Defense certainly won’t stop developing artificial intelligence tools and weapons just because Google isn’t going to help. But Google employees, and the public, will likely make sure the company abides by its own guidelines and stays out of the weapons game.

This article has been updated to include Google’s blog post about the new guidelines.

AI Can Now Manipulate People’s Movements In Fake Videos

There are already fake videos on the internet, manipulated to make it look like people said things (or appeared in porn) that they never did. And now they’re about to get way better, thanks to some new tools powered by artificial intelligence.

Instead of just moving a source video’s lips and face, an artificial intelligence-powered system can create photorealistic videos in which people can sway, turn their heads, blink their eyes, and emote. Basically, everything that an actor does and says in an input video will be translated into the video being altered.

According to the research, which will be presented at the computer graphics conference SIGGRAPH in August, the team ran a number of tests comparing its new algorithm to existing means of manipulating lifelike videos and images, many of which have been at least partially developed by Facebook and Google. Their system outperformed all the others, and participants in an experiment struggled to determine whether or not the resulting videos were real.

The researchers, who received some funding from Google, hope that their work will be used to improve virtual reality technology. And because the AI system only needs to train on a few minutes of source video to work, the team feels that its new tools will help make high-end video editing software more accessible.

The researchers also know their work might, uh, worry some folks.

“I’m aware of the ethical implications of those reenactment projects,” researcher Justus Thies told The Register. “That is also a reason why we published our results. I think it is important that the people get to know the possibilities of manipulation techniques.”

But at what point do we get tired of people “raising awareness” by further developing the problem? In the paper itself, there is just one sentence dedicated to ethical concerns — the researchers suggest that someone ought to look into better watermarking technologies or other ways to spot fake videos.

Not them, though. They’re too busy making it easier than ever to create flawless manipulated videos.

There’s Now A Computer Designed Specifically for Programming Intelligent Robots

It just got way easier to build “smart” robots.

Each year, the who’s who of the tech world gathers at Computex, an information and communications technology showcase in Taiwan. On Sunday, at Computex 2018, Jensen Huang, founder and CEO of American tech company NVIDIA, took the stage to announce two new products designed to make it easier (and cheaper) for developers to create and train intelligent robots: Jetson Xavier and NVIDIA Isaac.

According to an NVIDIA press release, Jetson Xavier is “the world’s first computer designed specifically for robotics.” It includes 9 billion transistors and a half-dozen processors, including a Volta Tensor Core GPU and an eight-core ARM64 CPU.


Translation: this computer is powerful and efficient — in fact, it can perform 30 trillion operations per second (30 TOPS). For comparison, the most powerful iMac on the market can process up to 22 TOPS and costs about $5,000. And to do so, Jetson Xavier needs less than half the electricity you’d need to power a light bulb. That may not matter for a computer on which you basically only use Facebook and Microsoft Word, but it could mean a lot for the advent of more advanced, and more accessible, robots.

“This level of performance is essential for a robot to take input from sensors, locate itself, perceive its environment, recognize and predict motion of nearby objects, reason about what action to perform, and articulate itself safely,” according to the press release.

Really incredible hardware like Jetson Xavier can only push technology so far. It needs advanced software to match. That’s where NVIDIA Isaac comes in.

NVIDIA Isaac is a developer platform broken into three components:

  • Isaac SDK (software development kit), a collection of tools developers can use to create their own AI software
  • Isaac IMX (Intelligent Machine Acceleration applications), a library of robotic algorithm software developed by NVIDIA, which the company claims on its website could save developers “months of development time and effort”
  • Isaac SIM, a virtual simulation environment where developers can train and test their AI systems

NVIDIA plans to sell its Isaac-equipped Jetson Xavier computers starting in August for $1,299.

During his Computex presentation, Huang claimed a workstation with comparable processing power costs upwards of $10,000. He didn’t specify exactly who his intended clientele would be, but it’s not hard to imagine that high school and college students interested in robotics, along with companies that are interested in AI but lack the capital for a big investment, would be most interested in purchasing it.

If this system lives up to its promise, it could create a moment like when GarageBand made it possible for anyone to record music without needing a recording studio. Now, anyone (with $1,299) can design their own AI.

By lowering the cost of the tools necessary for intelligent robot development, NVIDIA is opening up the field to people who couldn’t afford to work on it in the past. And who knows what kinds of remarkable creations they might come up with?

Use artificial intelligence to identify, count, describe wild animals

A new paper in the Proceedings of the National Academy of Sciences (PNAS) reports how a cutting-edge artificial intelligence technique called deep learning can automatically identify, count and describe animals in their natural habitats.

Photographs automatically collected by motion-sensor cameras can then be described by deep neural networks. The result is a system that can automate animal identification for up to 99.3 percent of images while still performing at the same 96.6 percent accuracy rate as crowdsourced teams of human volunteers.

“This technology lets us accurately, unobtrusively and inexpensively collect wildlife data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology and animal behavior into ‘big data’ sciences. This will dramatically improve our ability to both study and conserve wildlife and precious ecosystems,” says Jeff Clune, the senior author of the paper. He is the Harris Associate Professor at the University of Wyoming and a senior research manager at Uber’s Artificial Intelligence Labs.

The paper was written by Clune; his Ph.D. student Mohammad Sadegh Norouzzadeh; his former Ph.D. student Anh Nguyen (now at Auburn University); Margaret Kosmala (Harvard University); Ali Swanson (University of Oxford); and Meredith Palmer and Craig Packer (both from the University of Minnesota).

Deep neural networks are a form of computational intelligence loosely inspired by how animal brains see and understand the world. They require vast amounts of training data to work well, and the data must be accurately labeled (e.g., each image being correctly tagged with which species of animal is present, how many there are, etc.).

This study obtained the necessary data from Snapshot Serengeti, a citizen science project on the http://www.zooniverse.org platform. Snapshot Serengeti has deployed a large number of “camera traps” (motion-sensor cameras) in Tanzania that collect millions of images of animals in their natural habitat, such as lions, leopards, cheetahs and elephants. The information in these photographs is only useful once it has been converted into text and numbers. For years, the best method for extracting such information was to ask crowdsourced teams of human volunteers to label each image manually. The study published today harnessed 3.2 million labeled images produced in this manner by more than 50,000 human volunteers over several years.
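As a rough illustration of how such a pipeline can be built in Python, the sketch below fine-tunes an off-the-shelf image classifier on volunteer-labeled camera-trap images and then automates only the images the model is confident about, routing the rest back to human volunteers. The backbone choice, the 0.95 confidence threshold, and the function names are placeholder assumptions, not the paper's actual pipeline; only the 48-species class count comes from the article.

```python
# Illustrative sketch (not the paper's pipeline): classify camera-trap images with a
# generic CNN and defer low-confidence images to human volunteers.
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 48                               # species count mentioned in the article

model = models.resnet18()                      # generic backbone, chosen for brevity
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

def classify_or_defer(image_batch, threshold=0.95):
    """Return (predicted_class, confidence, needs_human) for each image."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(image_batch), dim=1)
        confidence, predicted = probs.max(dim=1)
        needs_human = confidence < threshold   # uncertain images go to volunteers
    return predicted, confidence, needs_human
```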

“When I told Jeff Clune we had 3.2 million labeled images, he stopped in his tracks,” says Packer, who heads the Snapshot Serengeti project. “We wanted to test whether we could use machine learning to automate the work of human volunteers. Our citizen scientists have done phenomenal work, but we needed to speed up the process to handle ever greater amounts of data. The deep learning algorithm is amazing and far surpassed my expectations. This is a game changer for wildlife ecology.”

Swanson, who founded Snapshot Serengeti, adds: “There are hundreds of camera-trap projects in the world, and very few of them are able to recruit large armies of human volunteers to extract their data. That means that much of the knowledge in these important data sets remains untapped. Although projects are increasingly turning to citizen science for image classification, we’re starting to see it take longer and longer to label each batch of images as the demand for volunteers grows. We believe deep learning will be key in alleviating the bottleneck for camera-trap projects: the effort of converting images into usable data.”

“Not only does the artificial intelligence system tell you which of 48 different species of animal is present, but it also tells you how many there are and what they are doing. It will tell you if they are eating, sleeping, if babies are present, etc.,” adds Kosmala, another Snapshot Serengeti leader. “We estimate that the deep learning technology pipeline we describe would save more than eight years of human labeling effort for each additional 3 million images. That is a lot of valuable volunteer time that can be redeployed to help other projects.”

First-author Sadegh Norouzzadeh points out that “Deep learning is still improving rapidly, and we expect that its performance will only get better in the coming years. Here, we wanted to demonstrate the value of the technology to the wildlife ecology community, but we expect that as more people research how to improve deep learning for this application and publish their datasets, the sky’s the limit. It is exciting to think of all the different ways this technology can help with our important scientific and conservation missions.”

Story Source:

Materials provided by University of Wyoming. Note: Content may be edited for style and length.

Language-Policing AI Will Suggest A Polite Alternative to Online Hate Speech

It’s an oft-repeated phrase among journalists: never read the comments. Comment sections, from Twitter to Reddit and everything in between, are some of the darkest places on the internet, places where baseless insults and pointed critiques fly like bullets in a chaotic melee.

To save us from that ugliness (in others, and also in ourselves), engineers at IBM have created an AI algorithm that tries to filter the profanity out of our messages and suggests more palatable alternatives.

The scientists behind the profanity-filtering AI are, in a refreshing twist, conscious of how their filter might be misused. For instance, authoritarian governments or overreaching technology companies could, hypothetically, use similar algorithms to flag political or otherwise critical language among people conversing online. And since governments are already hard at work shutting down dissident rumblings online, it’s not far-fetched to imagine that a tool like this could be destructive if it fell into the wrong hands.

So, instead of simply changing offensive language, the researchers argue their algorithm should be used to provide gentle reminders and suggestions. For instance, a tool resembling good ol’ Microsoft Clippy might pop up and ask, “Do you really want to tell this stranger on Reddit to fuck off and die?” instead of automatically editing what you type.

And there’s a lot of merit in that — it’s the technological equivalent of venting your anger and then sleeping on it or stepping away from the keyboard before you hit send.

After being trained on millions of tweets and Reddit posts, the AI system became very effective at removing profane and hateful words. But it was much, much less good at recreating the sentences in a polite way that conserved their meaning.

For instance, a tweet reading “bros before hoes” was translated into “bros before money.” There’s… something missing there. Granted, this is much better than existing language filter AI, which turned the same tweet into “club tomorrow.” Let’s give credit where credit is due.

Also, a lot of swear words were turned into “big,” regardless of context. A frustrated Reddit post reading “What a fucking circus this is” became a sincere, awe-filled “what a big circus this is.”

So far, the researchers have simply created their algorithm, but haven’t incorporated it into a usable online tool, for either individual users or the sites themselves. Presumably, it would have to get a lot better at suggesting new language before that could happen.

Aside from the, er, obvious shortcomings, the team behind this algorithm is aware of its limitations. AI filters of this sort can only work to remove the most obvious, explicit forms of online abuse. For instance, it can’t tell if a particular sentence is hateful unless it includes specific angry or profane words. If the language itself is seemingly benign or requires context to understand, it would fly under the radar.

Implicit prejudices, then, would go unchecked, as long as no one says “shit.” And this says nothing of the arguably more dangerous forms of online harassment like stalking, doxing, or threatening people. Of course, a language filter can’t end the internet’s toxic culture, but this new AI research could help us take a step back and think real hard before we decide to perpetuate hateful speech.

Future robots need no motors

To develop micro-robots, biomimetic robots, artificial muscles and medical devices, researchers have spent the past thirty years investigating actuating materials that can reversibly change their volume under various stimuli, aiming to replace traditional bulky and heavy actuators such as motors and pneumatic actuators.

A mechanical engineering team led by Professor Alfonso Ngan Hing-wan, Chair Professor in Materials Science and Engineering and Kingboard Professor in Materials Engineering, Faculty of Engineering, the University of Hong Kong (HKU), published an article in Science Robotics on 30 May 2018 (EST) that introduces a novel actuating material — nickel hydroxide-oxyhydroxide — that can be powered by visible (Vis) light, electricity and other stimuli. The material’s actuation can be triggered instantaneously by Vis light to produce a fast deformation and exert a force equivalent to 3,000 times its own weight. The material cost of a typical actuator is as low as HKD 4 per cm², and the actuator can be fabricated within three hours.

Among various stimuli, light-induced actuating materials are highly desirable because they enable wireless operation of robots. Until now, however, very few light-driven materials have been available, and their material and production costs are high, which has hindered their adoption in practical applications such as artificial muscles for robotics and human-assist devices, and minimally invasive surgical and diagnostic tools.

Developing actuating materials was identified as one of the top challenges in “The grand challenges of Science Robotics.” Research in actuating materials can radically change the concept of robots, which are now mainly motor-driven. Materials that can be actuated by wireless stimuli, including changes in temperature, humidity, magnetic fields and light, have therefore been a main research focus in recent years. In particular, a material that can be actuated by Vis light and produce strong, quick and stable actuation had never been achieved. The novel actuating material system developed in this HKU-initiated research, nickel hydroxide-oxyhydroxide, can be actuated by Vis light at relatively low intensity to produce high stress and speed comparable to mammalian skeletal muscles.

In addition to its Vis light actuation properties, this novel material system can also be actuated by electricity, enabling it to be integrated into present, well-developed robotics technology. It is also responsive to heat and humidity changes, so it might potentially be applied in autonomous machines that harness tiny energy changes in the environment. Because the major component is nickel, the material cost is low.

The fabrication only involves electrodeposition, a simple process, and takes around three hours, so the material can easily be scaled up and manufactured in industry.

The newly invented nickel hydroxide-oxyhydroxide responds to light almost instantaneously and produces a force corresponding to about 3,000 times its own weight.

When integrated into a well-designed structure, a “mini arm” made of two hinges of the actuating material can easily lift an object 50 times its own weight. Similarly, by using a light blocker, the team made a mini walking-bot in which only the “front leg” bends and straightens alternately under illumination, so that it walks towards the light source. These demonstrations show that future applications in micro-robotics, including rescue robots, are possible.

The evidence above shows that this nickel hydroxide-oxyhydroxide actuating material could have a range of future applications, including rescue robots and other mini-robots. The intrinsic actuating properties of the material obtained in this research suggest that, by scaling up the fabrication, artificial muscles comparable to mammalian skeletal muscles can be achieved, making applications in robotics, human-assist devices and medical devices possible.

From a scientific point of view, this nickel hydroxide-oxyhydroxide actuating material is the world’s first material system that can be actuated directly by Vis light and electricity without any additional fabrication procedures. This also opens up a new research field on light-induced actuating behaviour for this material type (hydroxide-oxyhydroxides), which has never been reported before.

The research team members are all from the Department of Mechanical Engineering at HKU Faculty of Engineering, led by Professor Alfonso Ngan’s group in collaboration with Dr Li Wen-di’s group on the light actuation experiments and Dr Feng Shien-ping’s group on the electrodeposition experiments. The research was published in the journal Science Robotics on 30 May 2018 under the title “Light-stimulated actuators based on nickel hydroxide-oxyhydroxide.” The first author of the paper is Dr Kwan Kin-wa, currently a post-doctoral fellow in Prof. Ngan’s group.

The corresponding author is Prof. Ngan. The complete author list is as below: K-W. Kwan, S-J. Li, N-Y. Hau, W-D. Li, S-P. Feng, A.H.W. Ngan. This research is funded by the Research Grants Council, Hong Kong.

An artificial nerve system gives prosthetic devices and robots a sense of touch

Stanford and Seoul National University researchers have developed an artificial sensory nerve system that can activate the twitch reflex in a cockroach and identify letters in the Braille alphabet.

The work, reported May 31 in Science, is a step toward creating artificial skin for prosthetic limbs, to restore sensation to amputees and, perhaps, one day give robots some type of reflex capability.

“We take skin for granted but it’s a complex sensing, signaling and decision-making system,” said Zhenan Bao, a professor of chemical engineering and one of the senior authors. “This artificial sensory nerve system is a step toward making skin-like sensory neural networks for all sorts of applications.”

Building blocks

This milestone is part of Bao’s quest to mimic how skin can stretch, repair itself and, most remarkably, act like a smart sensory network that knows not only how to transmit pleasant sensations to the brain, but also when to order the muscles to react reflexively to make prompt decisions.

The new Science paper describes how the researchers constructed an artificial sensory nerve circuit that could be embedded in a future skin-like covering for neuro-prosthetic devices and soft robotics. This rudimentary artificial nerve circuit integrates three previously described components.

The first is a touch sensor that can detect even minuscule forces. This sensor sends signals through the second component — a flexible electronic neuron. The touch sensor and electronic neuron are improved versions of inventions previously reported by the Bao lab.

Sensory signals from these components stimulate the third component, an artificial synaptic transistor modeled after human synapses. The synaptic transistor is the brainchild of Tae-Woo Lee of Seoul National University, who spent his sabbatical year in Bao’s Stanford lab to initiate the collaborative work.

“Biological synapses can relay signals, and also store information to make simple decisions,” said Lee, who was a second senior author on the paper. “The synaptic transistor performs these functions in the artificial nerve circuit.”

Lee used a knee reflex as an example of how more-advanced artificial nerve circuits might one day be part of an artificial skin that would give prosthetic devices or robots both senses and reflexes.

In humans, when a sudden tap causes the knee muscles to stretch, certain sensors in those muscles send an impulse through a neuron. The neuron in turn sends a series of signals to the relevant synapses. The synaptic network recognizes the pattern of the sudden stretch and emits two signals simultaneously, one causing the knee muscles to contract reflexively and a second, less urgent signal to register the sensation in the brain.

Making it work

The new work has a long way to go before it reaches that level of complexity. But in the Science paper, the group describes how the electronic neuron delivered signals to the synaptic transistor, which was engineered in such a way that it learned to recognize and react to sensory inputs based on the intensity and frequency of low-power signals, just like a biological synapse.

The group members tested the ability of the system to both generate reflexes and sense touch.

In one test they hooked up their artificial nerve to a cockroach leg and applied tiny increments of pressure to their touch sensor. The electronic neuron converted the sensor signal into digital signals and relayed them through the synaptic transistor, causing the leg to twitch more or less vigorously as the pressure on the touch sensor increased or decreased.

They also showed that the artificial nerve could detect various touch sensations. In one experiment the artificial nerve was able to differentiate Braille letters. In another, they rolled a cylinder over the sensor in different directions and accurately detected the direction of the motion.

Bao’s graduate students Yeongin Kim and Alex Chortos, plus Wentao Xu, a researcher from Lee’s own lab, were also central to integrating the components into the functional artificial sensory nervous system.

The researchers say artificial nerve technology remains in its infancy. For instance, creating artificial skin coverings for prosthetic devices will require new devices to detect heat and other sensations, the ability to embed them into flexible circuits and then a way to interface all of this to the brain.

The group also hopes to create low-power, artificial sensor nets to cover robots, the idea being to make them more agile by providing some of the same feedback that humans derive from their skin.

Story Source:

Materials provided by Stanford University. Original written by Tom Abate. Note: Content may be edited for style and length.

AI researchers design ‘privacy filter’ for your photos

Each time you upload a photo or video to a social media platform, its facial recognition systems learn a little more about you. These algorithms ingest data about who you are, your location and people you know — and they’re constantly improving.

As concerns over privacy and data security on social networks grow, U of T Engineering researchers led by Professor Parham Aarabi and graduate student Avishek Bose have created an algorithm to dynamically disrupt facial recognition systems.

“Personal privacy is a real issue as facial recognition becomes better and better,” says Aarabi. “This is one way in which beneficial anti-facial-recognition systems can combat that ability.”

Their solution leverages a deep learning technique called adversarial training, which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a set of two neural networks: the first working to identify faces, and the second working to disrupt the facial recognition task of the first. The two are constantly battling and learning from each other, setting up an ongoing AI arms race.

The result is an Instagram-like filter that can be applied to photos to protect privacy. Their algorithm alters very specific pixels in the image, making changes that are almost imperceptible to the human eye.

“The disruptive AI can ‘attack’ what the neural net for the face detection is looking for,” says Bose. “If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they’re less noticeable. It creates very subtle disturbances in the photo, but to the detector they’re significant enough to fool the system.”
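The article does not publish the algorithm itself, but the underlying adversarial idea can be sketched simply: compute the gradient of the face detector's confidence with respect to the image pixels, then nudge the pixels a tiny amount in the direction that lowers that confidence. The single-step, FGSM-style perturbation below is a simplified stand-in for the team's adversarially trained disruptor, and the `face_detector` model and epsilon budget are assumptions.

```python
# Conceptual sketch of an adversarial "privacy filter" step (FGSM-style), not the
# U of T team's actual code. `face_detector` is assumed to be a differentiable model
# returning the probability that a face is present; epsilon bounds the per-pixel change.
import torch

def privacy_filter_step(image, face_detector, epsilon=2.0 / 255.0):
    """Perturb `image` slightly so the detector's face confidence drops."""
    image = image.clone().detach().requires_grad_(True)
    face_prob = face_detector(image).mean()    # detector confidence on this image
    face_prob.backward()                       # gradient of confidence w.r.t. pixels
    # Step *against* the gradient: make the detector less confident while keeping
    # the change nearly invisible to a human viewer.
    perturbed = image - epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

In the researchers' setup this disruptor is itself a neural network that keeps retraining against an ever-improving detector, which is what makes the filter robust rather than a one-off trick.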

Aarabi and Bose tested their system on the 300-W face dataset, an industry standard pool of more than 600 faces that includes a wide range of ethnicities, lighting conditions and environments. They showed that their system could reduce the proportion of faces that were originally detectable from nearly 100 per cent down to 0.5 per cent.

“The key here was to train the two neural networks against each other — with one creating an increasingly robust facial detection system, and the other creating an ever stronger tool to disable facial detection,” says Bose, the lead author on the project. The team’s study will be published and presented at the 2018 IEEE International Workshop on Multimedia Signal Processing later this summer.

In addition to disabling facial recognition, the new technology also disrupts image-based search, feature identification, emotion and ethnicity estimation, and all other face-based attributes that could be extracted automatically.

Next, the team hopes to make the privacy filter publicly available, either via an app or a website.

“Ten years ago these algorithms would have to be human defined, but now neural nets learn by themselves — you don’t need to supply them anything except training data,” says Aarabi. “In the end they can do some really amazing things. It’s a fascinating time in the field, there’s enormous potential.”

Aerial robot that can morph in flight

Marking a world first, researchers from the Étienne Jules Marey Institute of Movement Sciences (CNRS / Aix-Marseille Université) have drawn inspiration from birds to design an aerial robot capable of altering its profile during flight. To reduce its wingspan and navigate through tight spaces, it can reorient its arms, which are equipped with propellers that let it fly like a helicopter. The scientists’ work is the subject of an article published in Soft Robotics (May 30, 2018). It paves the way for a new generation of large robots that can move through narrow passages, making them ideal for exploration as well as search and rescue missions.

Birds and winged insects have the remarkable ability to maneuver quickly during flight to clear obstacles. Such extreme agility is necessary to navigate through cramped spaces and crowded environments, like forests. There are already miniature flying machines that can roll, pitch, or otherwise alter their flight attitude to pass through small apertures. But birds illustrate another strategy that is just as effective for flying through bottlenecks. They can quickly fold their wings during high-speed flight, reducing their imposing span, to easily negotiate the challenging paths before them.[1]

Deployment of aerial robots in constricted and cluttered areas for search and rescue, exploratory, or mapping operations will become more and more commonplace. They will need to be able to circumnavigate many obstacles and travel through fairly tight passages to complete their missions. Accordingly, researchers from the Étienne Jules Marey Institute of Movement Sciences (CNRS / Aix-Marseille Université) have designed a flying robot that can reduce its wingspan in flight to move through a small opening, without the intensive steering that would consume too much energy and require a low-inertia robotic platform (i.e., a light and small robot).[2]

Dubbed Quad-Morphing, the new robot has two rotating arms each equipped with two propellers for helicopter-like flight. A system of elastic and rigid wires allows the robot to change the orientation of its arms in flight so that they are either perpendicular or parallel to its central axis. It adopts the parallel position, halving its wingspan, to traverse a narrow stretch and then switches back to perpendicular position to stabilize its flight, all while flying at a speed of 9 km/h, which is pretty fast for an aerial robot.

At present, it is the precision of the Quad-Morphing autopilot mechanism that determines the robot’s agility. The autopilot activates arm reorientation when the robot nears a tight passage, as determined by a 3D localization system used at the institute.[3] The researchers have also equipped the robot with a miniature camera that can take 120 pictures per second. In the future, this will allow Quad-Morphing to independently assess the size of the gap before it and fold its wings accordingly if necessary. Flight testing with the new camera will begin this month.

Notes:

[1] Such impressive behavior has been observed among budgerigars and goshawks flying at speeds above 14 km/h.

[2] Flying robots typically have a transversal speed of 4-5 km/h in indoor conditions.

[3] The studies were conducted at the AVM flying machine arena, built with the financial support of the French Equipex Robotex program. The arena has 17 cameras for recording movement.

Story Source:

Materials provided by CNRS. Note: Content may be edited for style and length.

Cometh the cyborg: Improved integration of living muscles into robots

The new field of biohybrid robotics involves the use of living tissue within robots, rather than just metal and plastic. Muscle is one potential key component of such robots, providing the driving force for movement and function. However, in efforts to integrate living muscle into these machines, there have been problems with the force these muscles can exert and the amount of time before they start to shrink and lose their function.

Now, in a study reported in the journal Science Robotics, researchers at The University of Tokyo Institute of Industrial Science have overcome these problems by developing a new method that progresses from individual muscle precursor cells, to muscle-cell-filled sheets, and then to fully functioning skeletal muscle tissues. They incorporated these muscles into a biohybrid robot as antagonistic pairs mimicking those in the body to achieve remarkable robot movement and continued muscle function for over a week.

The team first constructed a robot skeleton on which to install the pair of functioning muscles. This included a rotatable joint, anchors where the muscles could attach, and electrodes to provide the stimulus to induce muscle contraction. For the living muscle part of the robot, rather than extract and use a muscle that had fully formed in the body, the team built one from scratch. For this, they used hydrogel sheets containing muscle precursor cells called myoblasts, holes to attach these sheets to the robot skeleton anchors, and stripes to encourage the muscle fibers to form in an aligned manner.

“Once we had built the muscles, we successfully used them as antagonistic pairs in the robot, with one contracting and the other expanding, just like in the body,” study corresponding author Shoji Takeuchi says. “The fact that they were exerting opposing forces on each other stopped them shrinking and deteriorating, like in previous studies.”

The team also tested the robots in different applications, including having one pick up and place a ring, and having two robots work in unison to pick up a square frame. The results showed that the robots could perform these tasks well, with activation of the muscles leading to flexing of a finger-like protuberance at the end of the robot by around 90°.

“Our findings show that, using this antagonistic arrangement of muscles, these robots can mimic the actions of a human finger,” lead author Yuya Morimoto says. “If we can combine more of these muscles into a single device, we should be able to reproduce the complex muscular interplay that allows hands, arms, and other parts of the body to function.”

Story Source:

Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

Activity simulator could eventually teach robots tasks like making coffee or setting the table

For many people, household chores are a dreaded, inescapable part of life that we often put off or do with little care — but what if a robot maid could help lighten the load?

Recently, computer scientists have been working on teaching machines to do a wider range of tasks around the house. In a new paper spearheaded by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Toronto, researchers demonstrate “VirtualHome,” a system that can simulate detailed household tasks and then have artificial “agents” execute them, opening up the possibility of one day teaching robots to do such tasks.

The team trained the system using nearly 3,000 programs of various activities, which are further broken down into subtasks for the computer to understand. A simple task like “making coffee,” for example, would also include the step “grabbing a cup.” The researchers demonstrated VirtualHome in a 3-D world inspired by The Sims video game.

The team’s AI agent can execute 1,000 of these interactions in the Sims-style world, with eight different scenes including a living room, kitchen, dining room, bedroom, and home office.

“Describing actions as computer programs has the advantage of providing clear and unambiguous descriptions of all the steps needed to complete a task,” says PhD student Xavier Puig, who was lead author on the paper. “These programs can instruct a robot or a virtual character, and can also be used as a representation for complex tasks with simpler actions.”

The project was co-developed by CSAIL and the University of Toronto alongside researchers from McGill University and the University of Ljubljana. It will be presented at the Computer Vision and Pattern Recognition (CVPR) conference, which takes place this month in Salt Lake City.

How it works

Unlike humans, robots need more explicit instructions to complete easy tasks — they can’t just infer and reason with ease.

For example, one might tell a human to “switch on the TV and watch it from the sofa.” Here, actions like “grab the remote control” and “sit/lie on sofa” have been omitted, since they’re part of the commonsense knowledge that humans have.

To better demonstrate these kinds of tasks to robots, the descriptions for actions needed to be much more detailed. To do so, the team first collected verbal descriptions of household activities, and then translated them into simple code. A program like this might include steps like: walk to the television, switch on the television, walk to the sofa, sit on the sofa, and watch television.
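To show what translating a description into simple code might look like, here is an illustrative Python-style rendering of that watch-TV program. VirtualHome's real programs use their own step syntax, and the `perform` call on the agent is a hypothetical simulator method, so treat this purely as a sketch of the idea.

```python
# Illustrative rendering of a VirtualHome-style activity program. The project uses
# its own step format; this Python list just sketches the same idea.
watch_tv_program = [
    ("walk", "television"),
    ("switch_on", "television"),
    ("walk", "sofa"),
    ("sit", "sofa"),
    ("watch", "television"),
]

def execute(agent, program):
    """Have a simulated agent carry out each (action, object) step in order."""
    for action, target in program:
        agent.perform(action, target)   # `perform` is a hypothetical simulator call
```

Spelling every step out like this is exactly what gives the system its "clear and unambiguous" description of a task, as Puig notes above.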

Once the programs were created, the team fed them to the VirtualHome 3-D simulator to be turned into videos. Then, a virtual agent would execute the tasks defined by the programs, whether it was watching television, placing a pot on the stove, or turning a toaster on and off.

The end result is not just a system for training robots to do chores, but also a large database of household tasks described using natural language. Companies like Amazon that are working to develop Alexa-like robotic systems at home could eventually use data like this to train their models to do more complex tasks.

The team’s model successfully demonstrated that their agents could learn to reconstruct a program, and therefore perform a task, given either a description (“pour milk into glass”) or a video demonstration of the activity.

“This line of work could facilitate true robotic personal assistants in the future,” says Qiao Wang, a research assistant in arts, media, and engineering at Arizona State University. “Instead of each task programmed by the manufacturer, the robot can learn tasks just by listening to or watching the specific person it accompanies. This allows the robot to do tasks in a personalized way, or even some day invoke an emotional connection as a result of this personalized learning process.”

In the future, the team hopes to train the robots using actual videos instead of Sims-style simulation videos, which would enable a robot to learn simply by watching a YouTube video. The team is also working on implementing a reward-learning system in which the agent gets positive feedback when it does tasks correctly.

“You can imagine a setting where robots are assisting with chores at home and can eventually anticipate personalized wants and needs, or impending action,” says Puig. “This could be especially helpful as an assistive technology for the elderly, or those who may have limited mobility.”

Face recognition experts perform better with AI as partner

Experts at recognizing faces often play a crucial role in criminal cases. A photo from a security camera can mean prison or freedom for a defendant — and testimony from highly trained forensic face examiners informs the jury whether that image actually depicts the accused. Just how good are facial recognition experts? Would artificial intelligence help?

A study appearing today in the Proceedings of the National Academy of Sciences has brought answers. In work that combines forensic science with psychology and computer vision research, a team of scientists from the National Institute of Standards and Technology (NIST) and three universities has tested the accuracy of professional face identifiers, providing at least one revelation that surprised even the researchers: Trained human beings perform best with a computer as a partner, not another person.

“This is the first study to measure face identification accuracy for professional forensic facial examiners, working under circumstances that apply in real-world casework,” said NIST electronic engineer P. Jonathon Phillips. “Our deeper goal was to find better ways to increase the accuracy of forensic facial comparisons.”

The team’s effort began in response to a 2009 report by the National Research Council, “Strengthening Forensic Science in the United States: A Path Forward,” which underscored the need to measure the accuracy of forensic examiner decisions.

The NIST study is the most comprehensive examination to date of face identification performance across a large, varied group of people. It also examines the best available technology, comparing the accuracy of state-of-the-art face recognition algorithms to that of human experts.

Their result from this classic confrontation of human versus machine? Neither gets the best results alone. Maximum accuracy was achieved with a collaboration between the two.

“Societies rely on the expertise and training of professional forensic facial examiners, because their judgments are thought to be best,” said co-author Alice O’Toole, a professor of cognitive science at the University of Texas at Dallas. “However, we learned that to get the most highly accurate face identification, we should combine the strengths of humans and machines.”

The results arrive at a timely moment in the development of facial recognition technology, which has been advancing for decades, but has only very recently attained competence approaching that of top-performing humans.

“If we had done this study three years ago, the best computer algorithm’s performance would have been comparable to an average untrained student,” Phillips said. “Nowadays, state-of-the-art algorithms perform as well as a highly trained professional.”

The study itself involved a total of 184 participants, a large number for an experiment of this type. Eighty-seven were trained professional facial examiners, while 13 were “super recognizers,” a term implying exceptional natural ability. The remaining 84 — the control groups — included 53 fingerprint examiners and 31 undergraduate students, none of whom had training in facial comparisons.

For the test, the participants received 20 pairs of face images and rated the likelihood of each pair being the same person on a seven-point scale. The research team intentionally selected extremely challenging pairs, using images taken with limited control of illumination, expression and appearance. They then tested four of the latest computerized facial recognition algorithms, all developed between 2015 and 2017, using the same image pairs.

Three of the algorithms were developed by Rama Chellappa, a professor of electrical and computer engineering at the University of Maryland, and his team, who contributed to the study. The algorithms were trained to work in general face recognition situations and were applied without modification to the image sets.

One of the findings was unsurprising but significant to the justice system: The trained professionals did significantly better than the untrained control groups. This result established the superior ability of the trained examiners, thus providing for the first time a scientific basis for their testimony in court.

The algorithms also acquitted themselves well, as might be expected from the steady improvement in algorithm performance over the past few years.

What raised the team’s collective eyebrows was the performance of combined examiners: the team discovered that pooling the opinions of multiple forensic face examiners did not bring the most accurate results.

“Our data show that the best results come from a single facial examiner working with a single top-performing algorithm,” Phillips said. “While combining two human examiners does improve accuracy, it’s not as good as combining one examiner and the best algorithm.”
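The fusion the study describes can be sketched very simply: put the examiner's seven-point judgment and the algorithm's similarity score on a common scale and combine them. The rescaling and equal weighting below are placeholder assumptions for illustration, not the study's exact procedure.

```python
# Toy sketch of human-plus-algorithm fusion (not the study's exact method):
# rescale both judgments to [0, 1] and take a weighted average.
def fuse_judgments(examiner_rating: int, algorithm_score: float,
                   human_weight: float = 0.5) -> float:
    """
    examiner_rating: 1-7 scale ("different person" .. "same person")
    algorithm_score: similarity in [0, 1] from a face recognition algorithm
    """
    human_scaled = (examiner_rating - 1) / 6.0          # map 1..7 onto 0..1
    return human_weight * human_scaled + (1 - human_weight) * algorithm_score

# Example: a cautious examiner rating of 5 plus a strong algorithm match of 0.92
# yields a combined score of about 0.79.
combined = fuse_judgments(examiner_rating=5, algorithm_score=0.92)
```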

Combining examiners and AI is not currently used in real-world forensic casework. While this study did not explicitly test this fusion of examiners and AI in such an operational forensic environment, the results provide a roadmap for improving the accuracy of face identification in future systems.

While the three-year project has revealed that humans and algorithms use different approaches to compare faces, it poses a tantalizing question to other scientists: Just what is the underlying distinction between the human and the algorithmic approach?

“If combining decisions from two sources increases accuracy, then this method demonstrates the existence of different strategies,” Phillips said. “But it does not explain how the strategies are different.”

The research team also included psychologist David White from Australia’s University of New South Wales.

An elastic fiber filled with electrodes set to revolutionize smart clothes

It’s a whole new way of thinking about sensors. The tiny fibers developed at EPFL are made of elastomer and can incorporate materials like electrodes and nanocomposite polymers. The fibers can detect even the slightest pressure and strain and can withstand deformation of close to 500% before recovering their initial shape. All that makes them perfect for applications in smart clothing and prostheses, and for creating artificial nerves for robots.

The fibers were developed at EPFL’s Laboratory of Photonic Materials and Fiber Devices (FIMAP), headed by Fabien Sorin at the School of Engineering. The scientists came up with a fast and easy method for embedding different kinds of microstructures in super-elastic fibers. For instance, by adding electrodes at strategic locations, they turned the fibers into ultra-sensitive sensors. What’s more, their method can be used to produce hundreds of meters of fiber in a short amount of time. Their research has just been published in Advanced Materials.

Heat, then stretch

To make their fibers, the scientists used a thermal drawing process, which is the standard process for optical-fiber manufacturing. They started by creating a macroscopic preform with the various fiber components arranged in a carefully designed 3D pattern. They then heated the preform and stretched it out, like melted plastic, to make fibers a few hundred microns in diameter. While this process stretched out the pattern of components lengthwise, it also contracted it crosswise, so the components’ relative positions stayed the same. The end result was a set of fibers with an extremely complicated microarchitecture and advanced properties.

Until now, thermal drawing could be used to make only rigid fibers. But Sorin and his team used it to make elastic fibers. With the help of a new criterion for selecting materials, they were able to identify some thermoplastic elastomers that have a high viscosity when heated. After the fibers are drawn, they can be stretched and deformed but they always return to their original shape.

Rigid materials like nanocomposite polymers, metals and thermoplastics can be introduced into the fibers, as well as liquid metals that can be easily deformed. “For instance, we can add three strings of electrodes at the top of the fibers and one at the bottom. Different electrodes will come into contact depending on how the pressure is applied to the fibers. This will cause the electrodes to transmit a signal, which can then be read to determine exactly what type of stress the fiber is exposed to — such as compression or shear stress, for example,” says Sorin.
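As a toy illustration of the readout Sorin describes, one could map which electrode pairs are currently in contact to a coarse stress type. The contact-pattern table and labels below are purely hypothetical; the article does not describe the fiber's actual signal processing.

```python
# Purely hypothetical readout sketch: map which of the fiber's electrode pairs are
# currently in contact to a coarse stress type. The real fiber's signal processing
# is not described in this article.
CONTACT_PATTERNS = {
    frozenset({"top_1", "bottom"}): "compression",   # top and bottom pressed together
    frozenset({"top_1", "top_2"}):  "shear",         # neighboring top electrodes touch
    frozenset():                    "no load",
}

def classify_stress(active_contacts: set) -> str:
    """Return a stress label for the currently closed electrode contacts."""
    return CONTACT_PATTERNS.get(frozenset(active_contacts), "unknown / mixed loading")
```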

Artificial nerves for robots

Working in association with Professor Dr. Oliver Brock (Robotics and Biology Laboratory, Technical University of Berlin), the scientists integrated their fibers into robotic fingers as artificial nerves. Whenever the fingers touch something, electrodes in the fibers transmit information about the robot’s tactile interaction with its environment. The research team also tested adding their fibers to large-mesh clothing to detect compression and stretching. “Our technology could be used to develop a touch keyboard that’s integrated directly into clothing, for instance,” says Sorin.

The researchers see many other potential applications, especially since the thermal drawing process can be easily adapted for large-scale production, a real plus for the manufacturing sector. The textile sector has already expressed interest in the new technology, and patents have been filed.

Story Source:

Materials provided by Ecole Polytechnique Fédérale de Lausanne. Note: Content may be edited for style and length.

The Military Just Created An AI That Learned How To Program Software

Tired of writing your own boring code for new software? Finally, there’s an AI that can do it for you.

BAYOU is a deep learning tool that basically works like a search engine for coding: tell it what sort of program you want to create with a couple of keywords, and it will spit out Java code that will do what you’re looking for, based on its best guess.

The tool was developed by a team of computer scientists from Rice University who received funding both from the military and Google. In a study published earlier this month on the preprint server arXiv, they describe how they built BAYOU and what sorts of problems it can help programmers solve.

Basically, BAYOU read the source code for about 1500 Android apps, which comes out to 100 million lines of Java. All that code was fed through BAYOU’s neural net, resulting in AI that can, yes, program other software.

If the code that BAYOU read included any sort of information about what the code does, then BAYOU also learned what those programs were intended to do along with how they work. This contextual information is what lets the AI write functional software based on just a couple of key words and basic information about what the programmer wants.

Computer science majors, rejoice: your homework might be about to get much easier. And teaching people how to code may become simpler and more intuitive, as they may someday use this new AI to generate examples of code or even to check their own work. Right now, BAYOU is still in the early stages, and the team behind it is still proving their technology works.

No, this is not that moment in which AI becomes self-replicating; BAYOU merely generates what the researchers call “sketches” of a program that are relevant to what a programmer is trying to write. These sketches still need to be pieced together into the larger work, and they may have to be tailored to the project at hand.

But even if the technology is in its infancy, this is a major step in the search for an AI programmer, a longstanding goal for computer science researchers. Other attempts to create something like BAYOU required extensive, narrow constraints to guide programmers toward the correct type of code. Because BAYOU can get to work with just a couple of keywords, it’s much less time-intensive and much easier for its human operators to use overall.

It’ll Take More Than $1.4 Billion to Make the UK the World Leader in AI

People may fear that artificial intelligence will take over the world. But before that can happen, countries are vying to be the one to shepherd in this new era. That is: they’re pouring money into AI research, in a rush to make it smarter as we find increasingly sophisticated ways to use it.

Nations are doing this for more than just bragging rights. Experts predict that AI will contribute $15.7 trillion to the world’s economy by 2030. Though the United Kingdom hasn’t gotten as much attention for its AI research as, say, the United States or China, that’s about to change: today, the U.K. announced a $1.4 billion investment in AI.

“We have a position of strength that we want to capitalize on because if we don’t build on it the other countries around the world would steal a march,” U.K. business minister Greg Clark told Reuters.

The U.K. plans to spend the money on a number of projects, including teacher training, regional AI hubs, and the creation of an AI supercomputer at the University of Cambridge.

Those will no doubt help grow the AI industry in the U.K., but will they be enough to put the U.K. in the top spot as the world leader in AI?

Probably not.

First, $1.4 billion might seem like a lot to invest in just one industry, but it’s really nowhere near what other nations are spending on AI.

U.S. venture capitalists are investing more money into AI startups than VCs in any other nation, at least as of July 2017. In 2016, private U.S. investors pumped about $21 billion into the AI industry. Private investors in the U.K. and the rest of Europe combined invested a paltry $2.9 to $3.8 billion, at most 18 percent of what the Americans invested.

The U.S. also has more troops in the AI battle, with an estimated 850,000 AI professionals scattered across the country. For comparison, the U.K. has just 140,000 (China, arguably the current AI frontrunner, has even fewer: 50,000).

Despite lagging behind in numbers, the U.K. does have an advantage over geographically larger nations: almost all of its AI research takes place in London.

“The cross-fertilisation that is at the core of the impact of artificial intelligence can happen here more easily than elsewhere,” Matt Hancock, U.K. Secretary of State for Digital, Culture, Media and Sport, told Reuters.

However, the U.K. might not have that advantage for much longer. China recently announced it is spending $2.1 billion on a single AI venture: a new technology park just outside of Beijing.

According to Xinhua, the nation’s official news agency, the park will support up to 400 AI enterprises focused on everything from biometric identification to deep learning. This will make it easy for experts to collaborate across disciplines.

The U.S. and China are just two of the U.K.’s major AI competitors. Japan has its own strategy for AI domination. So does Canada. Germany, too.

So while the U.K.’s $1.4 billion AI investment is nothing to scoff at, it’s unlikely to be the deciding factor in the fight to win the top spot as the world leader of AI.

Artificial Intelligence Writes Bad Poems Just Like An Angsty Teen

the sun is a beautiful thing

in silence is drawn

between the trees

only the beginning of light

Was that poem written by an angsty middle schooler or an artificially intelligent algorithm? Is it easy to tell?

Yeah, it’s not easy for us, either. Or for poetry experts, for that matter.

A team of researchers from Microsoft and Kyoto University developed a poet AI good enough to fool online judges, according to a paper published Thursday on the preprint site arXiv. It’s the latest step towards artificial intelligence that can create believable, human-passing language, and, man, it seems like a big one.

In order to generate something as esoteric as a poem, the AI was fed thousands of images paired with human-written descriptions and poems. This taught the algorithm associations between images and text. It also learned the patterns of imagery, rhymes, and other language that might make up a believable poem, as well as how certain colors or images relate to emotions and metaphors.

Once the AI was trained, it was then given an image and tasked with writing a poem that was not only relevant to the picture but also, you know, read like a poem instead of algorithmic nonsense.
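
As a rough illustration of the general recipe (and definitely not Microsoft and Kyoto University’s actual model), here is a toy Python sketch: pretend an image model has already detected a few concepts in the picture, then assemble lines from phrases the system has learned to associate with those concepts.

```python
# Toy sketch of the image-to-poem idea: map detected image concepts to
# phrases "learned" to go with them, then assemble a few short lines.
# The concept list and phrase pairings are invented for illustration;
# the real system generates fresh text with deep networks trained on
# thousands of image-poem pairs.

import random

ASSOCIATIONS = {
    "sky":  ["the sun is a beautiful thing", "only the beginning of light"],
    "rain": ["this realm of rain", "grey sky and cloud"],
    "tree": ["between the trees", "in silence is drawn"],
}

def toy_poem(detected_concepts, lines=4, seed=1):
    random.seed(seed)
    vocabulary = [p for c in detected_concepts for p in ASSOCIATIONS.get(c, [])]
    # lines may repeat here; a real generator produces new text word by word
    return "\n".join(random.choice(vocabulary) for _ in range(lines))

print(toy_poem(["sky", "tree", "rain"]))
```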

And to be fair, some of the results were pretty nonsensical, even beyond the sorts of nonsense you’d find in a college literary magazine.

this realm of rain

grey sky and cloud

it’s quite and peaceful

safe allowed

 And, arguably, worse:

I am a coal-truck

by a broken heart

I have no sound

the sound of my heart

I am not

You could probably (we hope) pick those out of the crowd as machine-written. But while the AI is no Kendrick Lamar, many of the resulting poems actually did look like poems.

Next, the researchers had to see if the average person could tell the difference. In other words: a Turing test of sorts.

The researchers found their judges on Amazon Mechanical Turk — an online service where people complete tasks that benefit from automation but still require human intelligence — and divided people up as either general users or “experts,” who had some sort of background in literary academia. These judges were then presented with poem after poem — sometimes with the associated picture, and sometimes without. They had to guess whether a human had written them, or whether AI had.

While the experts were better at identifying machine-written poems if they were given the image and general users were better without it, both groups were better at picking out the human-written poems than they were at identifying which ones were written by the new AI.

That is to say, the machines had them fooled more often than not.

While it might be neat to buy a coffee table book of robot poetry, odds are we’ll see the convincing, evocative language that these algorithms have mastered in more commercially-relevant ways, like ads or customer service chatbots. But even so, it’s nice to imagine a future full of gentle, thoughtful robots studying Shakespeare and comparing their broken hearts to coal trucks.

Transparent eel-like soft robot can swim silently underwater

An innovative, eel-like robot developed by engineers and marine biologists at the University of California can swim silently in salt water without an electric motor. Instead, the robot uses artificial muscles filled with water to propel itself. The foot-long robot, which is connected to an electronics board that remains on the surface, is also virtually transparent.

The team, which includes researchers from UC San Diego and UC Berkeley, details their work in the April 25 issue of Science Robotics. Researchers say the bot is an important step toward a future when soft robots can swim in the ocean alongside fish and invertebrates without disturbing or harming them. Today, most underwater vehicles designed to observe marine life are rigid and submarine-like and powered by electric motors with noisy propellers.

“Instead of propellers, our robot uses soft artificial muscles to move like an eel underwater without making any sound,” said Caleb Christianson, a Ph.D. student at the Jacobs School of Engineering at UC San Diego.

One key innovation was using the salt water in which the robot swims to help generate the electrical forces that propel it. The bot is equipped with cables that apply voltage to both the salt water surrounding it and to pouches of water inside of its artificial muscles. The robot’s electronics then deliver negative charges in the water just outside of the robot and positive charges inside of the robot that activate the muscles. The electrical charges cause the muscles to bend, generating the robot’s undulating swimming motion. The charges are located just outside the robot’s surface and carry very little current so they are safe for nearby marine life.

“Our biggest breakthrough was the idea of using the environment as part of our design,” said Michael T. Tolley, the paper’s corresponding author and a professor of mechanical engineering at the Jacobs School at UC San Diego. “There will be more steps to creating an efficient, practical, untethered eel robot, but at this point we have proven that it is possible.”

Previously, other research groups had developed robots with similar technology. But to power these robots, engineers were using materials that need to be held in constant tension inside semi-rigid frames. The Science Robotics study shows that the frames are not necessary.

“This is in a way the softest robot to be developed for underwater exploration,” Tolley said.

The robot was tested inside salt-water tanks filled with jellyfish, coral and fish at the Birch Aquarium at the Scripps Institution of Oceanography at UC San Diego and in Tolley’s lab.

The conductive chambers inside the robot’s artificial muscles can be loaded with fluorescent dye (as shown in the video accompanying the study and this release). In the future, the fluorescence could be used as a kind of signaling system.

Next steps also include improving the robot’s reliability and its geometry. Researchers need to improve ballast, equipping the robot with weights so that it can dive deeper. For now, engineers have improvised ballast weights with a range of objects, such as magnets. In future work, researchers envision building a head for their eel robot to house a suite of sensors.

The research was supported with a grant from the Office of Naval Research. Christianson is supported by a National Science Foundation Graduate Research Fellowship.

Videos:

http://bit.ly/eelbot (feature)

http://bit.ly/UCSDScienceRobotics (research video)

Story Source:

Materials provided by University of California – San Diego. Original written by Ioana Patringenaru. Note: Content may be edited for style and length.

Turning deep-learning AI loose on software development

Computer scientists at Rice University have created a deep-learning, software-coding application that can help human programmers navigate the growing multitude of often-undocumented application programming interfaces, or APIs.

Known as Bayou, the Rice application was created through an initiative funded by the Defense Advanced Research Projects Agency aimed at extracting knowledge from online source code repositories like GitHub. A paper on Bayou will be presented May 1 in Vancouver, British Columbia, at the Sixth International Conference on Learning Representations, a premier outlet for deep learning research. Users can try it out at askbayou.com.

Designing applications that can program computers is a long-sought grail of the branch of computer science called artificial intelligence (AI).

“People have tried for 60 years to build systems that can write code, but the problem is that these methods aren’t that good with ambiguity,” said Bayou co-creator Swarat Chaudhuri, associate professor of computer science at Rice. “You usually need to give a lot of details about what the target program does, and writing down these details can be as much work as just writing the code.

“Bayou is a considerable improvement,” he said. “A developer can give Bayou a very small amount of information — just a few keywords or prompts, really — and Bayou will try to read the programmer’s mind and predict the program they want.”

Chaudhuri said Bayou trained itself by studying millions of lines of human-written Java code. “It’s basically studied everything on GitHub, and it draws on that to write its own code.”

Bayou co-creator Chris Jermaine, a professor of computer science who co-directs Rice’s Intelligent Software Systems Laboratory with Chaudhuri, said Bayou is particularly useful for synthesizing examples of code for specific software APIs.

“Programming today is very different than it was 30 or 40 years ago,” Jermaine said. “Computers today are in our pockets, on our wrists and in billions of home appliances, vehicles and other devices. The days when a programmer could write code from scratch are long gone.”

Bayou architect Vijay Murali, a research scientist at the lab, said, “Modern software development is all about APIs. These are system-specific rules, tools, definitions and protocols that allow a piece of code to interact with a specific operating system, database, hardware platform or another software system. There are hundreds of APIs, and navigating them is very difficult for developers. They spend lots of time at question-answer sites like Stack Overflow asking other developers for help.”

Murali said developers can now begin asking some of those questions at Bayou, which will give an immediate answer.

“That immediate feedback could solve the problem right away, and if it doesn’t, Bayou’s example code should lead to a more informed question for their human peers,” Murali said.

Jermaine said the team’s primary goal is to get developers to try to extend Bayou, which has been released under a permissive open-source license.

“The more information we have about what people want from a system like Bayou, the better we can make it,” he said. “We want as many people to use it as we can get.”

Bayou is based on a method called neural sketch learning, which trains an artificial neural network to recognize high-level patterns in hundreds of thousands of Java programs. It does this by creating a “sketch” for each program it reads and then associating this sketch with the “intent” that lies behind the program.

When a user asks Bayou questions, the system makes a judgment call about what program it’s being asked to write. It then creates sketches for several of the most likely candidate programs the user might want.

“Based on that guess, a separate part of Bayou, a module that understands the low-level details of Java and can do automatic logical reasoning, is going to generate four or five different chunks of code,” Jermaine said. “It’s going to present those to the user like hits on a web search. ‘This one is most likely the correct answer, but here are three more that could be what you’re looking for.'”
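
To picture the workflow Jermaine describes, here is a deliberately crude Python sketch: a few keywords are matched against a small, invented library of program “sketches,” and the best candidates come back in ranked order. The real Bayou uses a trained neural network over millions of lines of GitHub code rather than this keyword-overlap score.

```python
# Toy stand-in for the Bayou-style workflow: keywords in, ranked candidate
# sketches out. The sketch library and scoring are invented for illustration.

SKETCH_LIBRARY = {
    "read file":  ["open FileReader", "wrap in BufferedReader", "loop readLine", "close"],
    "http get":   ["build URL", "open connection", "read response stream", "close"],
    "parse json": ["create parser", "read tokens", "map tokens to object"],
}

def rank_sketches(keywords, top_k=2):
    def score(name):
        # crude relevance score: how many keywords appear in the sketch's name
        return sum(1 for kw in keywords if kw in name)
    ranked = sorted(SKETCH_LIBRARY, key=score, reverse=True)
    return [(name, SKETCH_LIBRARY[name]) for name in ranked[:top_k]]

# present the top candidates like hits on a web search
for name, steps in rank_sketches(["read", "file"]):
    print(name, "->", " / ".join(steps))
```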

Story Source:

Materials provided by Rice University. Original written by Jade Boyd. Note: Content may be edited for style and length.

Building AI systems that make fair decisions

A growing body of research has demonstrated that algorithms and other types of software can be discriminatory, yet the opaque nature of these tools makes it difficult to implement specific regulations. Determining the existing legal, ethical and philosophical implications of these powerful decision-making aids, while still obtaining answers and information, is a complex challenge.

Harini Suresh, a PhD student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), is investigating this multilayered puzzle: how to create fair and accurate machine learning algorithms that let users obtain the data they need. Suresh studies the societal implications of automated systems in MIT Professor John Guttag’s Data-Driven Inference Group, which uses machine learning and computer vision to improve outcomes in medicine, finance, and sports. Here, she discusses her research motivations, how a food allergy led her to MIT, and teaching students about deep learning.

Q: What led you to MIT?

A: When I was in eighth grade, my mom developed an allergy to spicy food, which, coming from India, was truly bewildering to me. I wanted to discover the underlying reason. Luckily, I grew up next to Purdue University in Indiana, and I met with a professor there who eventually let me test my allergy-related hypotheses. I was fascinated with being able to ask and answer my own questions, and continued to explore this realm throughout high school.

When I came to MIT as an undergraduate, I intended to focus solely on biology, until I took my first computer science class. I learned how computational tools could profoundly affect biology and medicine, since humans can’t process massive amounts of data in the way that machines can.

Towards the end of my undergrad, I started doing research with [professor of computer science and engineering] Peter Szolovits, who focuses on utilizing big medical data and machine learning to come up with new insights. I stayed to get my master’s degree in computer science, and now I’m in my first year as a PhD student studying personalized medicine and societal implications of machine learning.

Q: What are you currently working on?

A: I’m studying how to make machine learning algorithms more understandable and easier to use responsibly. In machine learning, we typically use historical data and train a model to detect patterns in the data and make new predictions.

If the data we use is biased in a particular way, such as “women tend to receive less pain treatment”, then the model will learn that. Even if the data isn’t biased, if we just have way less data on a certain group, predictions for that group will be worse. If that model is then integrated into a hospital (or any other real-world system), it’s not going to perform equally across all groups of people, which is problematic.

I’m working on creating algorithms that utilize data effectively but fairly. This involves both detecting bias or underrepresentation in the data as well as figuring out how to mitigate it at different points in the machine learning pipeline. I’ve also worked on using predictive models to improve patient care.
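
One simple example of the kind of check Suresh describes is comparing a model’s accuracy group by group and flagging large gaps. The toy data and the five-point gap threshold below are purely illustrative.

```python
# Minimal sketch: compare prediction accuracy across groups to flag
# under-served subpopulations. Data and threshold are invented.

def group_accuracy(records):
    """records: list of (group, y_true, y_pred)."""
    totals, correct = {}, {}
    for group, y_true, y_pred in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (y_true == y_pred)
    return {g: correct[g] / totals[g] for g in totals}

records = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("B", 1, 0), ("B", 0, 0)]
acc = group_accuracy(records)
gap = max(acc.values()) - min(acc.values())
print(acc, "gap:", round(gap, 2))
if gap > 0.05:
    print("Warning: model performs unevenly across groups")
```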

Q: What effect do you think your area of work will have in the next decade?

A: Machine learning is everywhere. Companies are going to use these algorithms and integrate them into their products, whether they’re fair or not. We need to make it easier for people to use these tools responsibly so that our predictions on data are made in a way that we as a society are okay with.

Q: What is your favorite thing about doing research at CSAIL?

A: When I ask for help, whether it’s related to a technical detail, a high-level problem, or general life advice, people are genuinely willing to lend support, discuss problems, and find solutions, even if it takes a long time.

Q: What is the biggest challenge you face in your work?

A: When we think about machine learning problems with real-world applications, and the goal of eventually getting our work in the hands of real people, there’s a lot of existing legal, ethical, and philosophical considerations that arise. There’s variability in the definition of “fair,” and it’s important not to reduce our research down to a simple equation, because it’s much more than that. It’s definitely challenging to balance thinking about how my work fits in with these broader frameworks while also carving out a doable computer science problem to work on.

Q: What is something most people would be surprised to learn about you?

A: I love creative writing, and for most of my life before I came to MIT I thought I would be an author. I really enjoy art and creativity. Along those lines, I painted a full-wall mural in my room a while ago, I frequently spend hours at MIT’s pottery studio, and I love making up recipes and taking photos.

Q: If you could tell your younger self one thing what would it be?

A: If you spend time on something, and it doesn’t directly contribute to a paper or thesis, don’t think of it as a waste of time. Accept the things that don’t work out as a part of the learning process and be honest about when to move on to something new without feeling guilty.

If you’d rather be doing something else, sooner is better to just go do it. Things that seem like huge consequences at the time, like taking an extra class or graduating slightly later, aren’t actually an issue when the time rolls around, and a lot of people do it. Honestly, my future self could probably use this advice too!

Q: What else have you been involved with at MIT?

A: During Independent Activity Period 2017, I organized a class called Intro to Deep Learning. I think machine learning gets a reputation of being a very difficult, expert-only endeavor, which scares people away and creates a pretty homogenous group of “experts.”

I wanted to create a low-commitment introduction to an area of machine learning that might help ease the initial barrier to entry. My co-organizer and I tried to keep our goals of accessibility and inclusivity at the forefront when making decisions about the course. Communicating complex ideas in an accessible way was a challenge, but a very fun one.


AI Could Start A Nuclear War. But Only If We Let AI Start A Nuclear War

Here are some very true facts:

You might be tempted to put these pieces together and assume that AI might autonomously start a nuclear war. This is the subject of a new paper and article published today by the RAND Corporation, a nonprofit think tank that researches national security, as part of its Security 2040 initiative.

But AI won’t necessarily cause a nuclear war; no matter what AI fear-monger Elon Musk tweets out, artificial intelligence will only trigger a nuclear war if we decide to build artificial intelligence that can start nuclear wars.

The RAND Corporation hosted a series of panels with mysterious, unnamed experts in the realms of national security, nuclear weaponry, and artificial intelligence to speculate and theorize on how AI might advance in the coming years and what that means for nuclear war.

Much of the article talks about hyper-intelligent computers that could transform how and when a nation decides to launch its nuclear missiles. The researchers imagine algorithms that can track intercontinental ballistic missiles, launch a country’s nukes before they’re destroyed in an incoming attack, or even deploy retaliatory strikes before an enemy’s nukes have even left their lairs.

It also mentions that AI could suggest when human operators should launch missiles, while also arguing that future generations will be more willing to take that human operator out of the equation, leaving those life-or-death decisions to AI that has been trained to make them.

Oh, and also the researchers say that all these systems will be buggy as hell while people work out the kinks. Because what’s the harm in a little trial and error when nukes are involved?

As we become more dependent on AI for its military applications, we might need to reconsider how implementing such systems could affect nuclear powers worldwide, many of which find themselves in a complicated web of alliances and political rivalries.

If you are despairing at the prospect of a computer-dominated military, the study authors offer some solace. Buried within their findings is the very reasonable perspective that artificial intelligence, which excels at the super-niche tasks for which it is developed, will continue to be developed at an incremental pace and likely won’t do all that much. Yes, AI is sophisticated enough to win at Go, but it’s not ready to be in charge of our nukes. At least, not yet.

In short, it will be a while before we have computers deciding when to launch our nukes (though, given the rate of human error and some seriously close calls that resulted from it in the past, it’s a matter of opinion whether more human control is actually a good thing).

Robot Showboat: Now Even Our Celeb Feuds Are Automated

Drake and Meek Mill. 6ix9ine and The Game. Sarah Jessica Parker and Kim Cattrall. Let’s be real, part of the reason we’re on social media is for the celebrity feuds.

And now it seems like that’s just another thing the robots are going to do better (or, at least, stranger) than us.

Earlier this week, famous Instagram robot Bermuda, a digital avatar who posts gems about how climate change is fake, feminists are misguided, and whites are supreme, took over the account of Miquela, another Instagram “robot” who models designer clothes, supports Black Lives Matter, and loves Beyonce.

After a few days, Miquela “came clean,” and posted an “emotional” rant about her “revelation” that she was not based on a real person, but instead is an artificial intelligence-powered robot that, like Bermuda, was created by the Trump-endorsing tech company Cain Intelligence.

Bermuda returned Miquela’s account, and the feud ended.

But the true mystery had just begun.

Now, we here at Futurism love some good internet as much as anyone else. But it’s important to mention that none of this is real. Just like Twitter’s horse ebooks or YouTube’s Pronunciationbook, Miquela and Bermuda’s feud and revelation is probably just another exhausting internet art project. Sigh.

In fact, Cain Intelligence is not even a real company. And it’s unclear how much (if any) of either digital persona’s account is generated by artificial intelligence. It’s possible that some of the captions and images are rendered through a machine learning algorithm, but it’s just as likely that some dude behind the scenes is writing everything himself, as is partially the case for Hanson Robotics’ Sophia.

No one, human or AI, has come forward to reveal the real goal behind this artificial feud, and Miquela hasn’t posted since her confession, so she’s not dropping any clues. Until then, we can probably all move on and get ready for the next viral social media feud/prank.

Unless, of course, AI really is controlling Bermuda and Miquela, in which case I apologize and beg that they leave my account alone.

Researchers design ‘soft’ robots that can move on their own

If Star Wars’ R2-D2 is your idea of a robot, think again. Researchers led by a University of Houston engineer have reported a new class of soft robot, composed of ultrathin sensing and actuating electronics and temperature-sensitive artificial muscle, that can adapt to the environment and crawl, similar to the movement of an inchworm or caterpillar.

Cunjiang Yu, Bill D. Cook Assistant Professor of mechanical engineering, said potential applications range from surgery and rehabilitation to search and rescue in natural disasters or on the battlefield. Because the robot body changes shape in response to its surroundings, it can slip through narrow crevices to search for survivors in the rubble left by an earthquake or bombing, he said.

“They sense the change in environment and adapt to slip through,” he said.

These soft robots, made of soft artificial muscle and ultrathin deformable sensors and actuators, have significant advantages over the traditional rigid robots used for automation and other physical tasks.

The researchers said their work, published in the journal Advanced Materials, took its inspiration from nature. “Many creatures, such as inchworms that have completely soft compliant bodies without any rigid components (e.g., bones), exhibit unprecedented abilities in adapting their shapes and morphologies and unique locomotion behaviors,” they wrote.

Traditional soft robots lack the ability to adapt to their environments or move on their own.

The prototype adaptive soft robot includes a liquid crystal elastomer, doped with carbon black nanoparticles to enhance thermal conductivity, as the artificial muscle, combined with ultrathin, mesh-shaped stretchable thermal actuators and silicon-based light sensors. The thermal actuators provide heat to activate the robot.

The prototype is small — 28.6 millimeters in length, or just over one inch — but Yu said it could easily be scaled up. That’s the next step, along with experimenting with various types of sensors. While the prototype uses heat-sensitive sensors, it could employ smart materials activated by light or other cues, he said.

“This is the first of its kind,” Yu said. “You can use other sensors, depending on what you want it to do.”

Video of robot in motion: https://www.youtube.com/watch?time_continue=3&v=fUqPPdl9ujk

Story Source:

Materials provided by University of Houston. Original written by Jeannie Kever. Note: Content may be edited for style and length.

Robot developed for automated assembly of designer nanomaterials

A current area of intense interest in nanotechnology is van der Waals heterostructures, which are assemblies of atomically thin two-dimensional (2D) crystalline materials that display attractive conduction properties for use in advanced electronic devices.

A representative 2D material is graphene, which consists of a honeycomb lattice of carbon atoms that is just one atom thick. The development of van der Waals heterostructures has been restricted by the complicated and time-consuming manual operations required to produce them. That is, the 2D crystals typically obtained by exfoliation of a bulk material need to be manually identified, collected, and then stacked by a researcher to form a van der Waals heterostructure. Such a manual process is clearly unsuitable for the industrial production of electronic devices containing van der Waals heterostructures.

Now, a Japanese research team led by the Institute of Industrial Science at The University of Tokyo has solved this issue by developing an automated robot that greatly speeds up the collection of 2D crystals and their assembly into van der Waals heterostructures. The robot consists of an automated high-speed optical microscope that detects crystals, the positions and parameters of which are then recorded in a computer database. Customized software is used to design heterostructures using the information in the database. The heterostructure is then assembled layer by layer by robotic equipment directed by the design algorithm. The findings were reported in Nature Communications.
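
Conceptually, the pipeline resembles the toy Python sketch below: detected flakes sit in a database, a stacking order is designed from that database, and assembly proceeds layer by layer. The field names, sizes and selection rule are invented for illustration and are not the Tokyo team’s software.

```python
# Toy sketch of the detect -> design -> assemble workflow. All values invented.

flake_db = [
    {"id": 1, "material": "graphene", "x": 10.2, "y": 4.7, "size_um": 35},
    {"id": 2, "material": "hBN",      "x": 3.1,  "y": 8.9, "size_um": 42},
    {"id": 3, "material": "graphene", "x": 7.5,  "y": 2.2, "size_um": 28},
]

def design_stack(db, pattern=("graphene", "hBN"), layers=3, min_size=25):
    """Pick one suitable, unused flake per layer, alternating materials."""
    stack, used = [], set()
    for i in range(layers):
        want = pattern[i % len(pattern)]
        for flake in db:
            if (flake["material"] == want and flake["size_um"] >= min_size
                    and flake["id"] not in used):
                stack.append(flake)
                used.add(flake["id"])
                break
    return stack

for layer, flake in enumerate(design_stack(flake_db), start=1):
    print(f"layer {layer}: pick flake {flake['id']} ({flake['material']}) at ({flake['x']}, {flake['y']})")
```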

“The robot can find, collect, and assemble 2D crystals in a glove box,” study first author Satoru Masubuchi says. “It can detect 400 graphene flakes an hour, which is much faster than the rate achieved by manual operations.”

When the robot was used to assemble graphene flakes into van der Waals heterostructures, it could stack up to four layers an hour with just a few minutes of human input required for each layer. The robot was used to produce a van der Waals heterostructure consisting of 29 alternating layers of graphene and hexagonal boron nitride (another common 2D material). The record layer number of a van der Waals heterostructure produced by manual operations is 13, so the robot has greatly increased our ability to access complex van der Waals heterostructures.

“A wide range of materials can be collected and assembled using our robot,” co-author Tomoki Machida explains. “This system provides the potential to fully explore van der Waals heterostructures.”

The development of this robot will greatly facilitate production of van der Waals heterostructures and their use in electronic devices, taking us a step closer to realizing devices containing atomic-level designer materials.

Story Source:

Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

An AI that makes road maps from aerial images

Map apps may have changed our world, but they still haven’t mapped all of it yet. In particular, mapping roads can be tedious: even after taking aerial images, companies like Google still have to spend many hours manually tracing out roads. As a result, they haven’t yet gotten around to mapping the vast majority of the more than 20 million miles of roads across the globe.

Gaps in maps are a problem, particularly for systems being developed for self-driving cars. To address the issue, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created RoadTracer, an automated method to build road maps that’s 45 percent more accurate than existing approaches.

Using data from aerial images, the team says that RoadTracer is not just more accurate, but more cost-effective than current approaches. MIT professor Mohammad Alizadeh says that this work will be useful both for tech giants like Google and for smaller organizations without the resources to curate and correct large amounts of errors in maps.

“RoadTracer is well-suited to map areas of the world where maps are frequently out of date, which includes both places with lower population and areas where there’s frequent construction,” says Alizadeh, one of the co-authors of a new paper about the system. “For example, existing maps for remote areas like rural Thailand are missing many roads. RoadTracer could help make them more accurate.”

In tests looking at aerial images of New York City, RoadTracer could correctly map 44 percent of its road junctions, which is more than twice as effective as traditional approaches based on image segmentation that could map only 19 percent.

The paper, which will be presented in June at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah, is a collaboration between MIT CSAIL and the Qatar Computing Research Institute (QCRI).

Alizadeh’s MIT co-authors include graduate students Favyen Bastani and Songtao He, and professors Hari Balakrishnan, Sam Madden, and David DeWitt. QCRI co-authors include senior software engineer Sofiane Abbar and Sanjay Chawla, who is the research director of QCRI’s Data Analytics Group.

How it works

Current efforts to automate maps involve training neural networks to look at aerial images and identify individual pixels as either “road” or “not road.” Because aerial images can often be ambiguous and incomplete, such systems also require a post-processing step that’s aimed at trying to fill in some of the gaps.

Unfortunately, these so-called “segmentation” approaches are often imprecise: if the model mislabels a pixel, that error will get amplified in the final road map. Errors are particularly likely if the aerial images have trees, buildings or shadows that obscure where roads begin and end. (The post-processing step also requires making decisions based on assumptions that may not always hold up, like connecting two road segments simply because they are next to each other.)

Meanwhile, RoadTracer creates maps step-by-step. It starts at a known location on the road, and uses a neural network to examine the surrounding area to determine which point is most likely to be the next part on the road. It then adds that point and repeats the process to gradually trace out the road one step at a time.

“Rather than making thousands of different decisions at once about whether various pixels represent parts of a road, RoadTracer focuses on the simpler problem of figuring out which direction to follow when starting from a particular spot that we know is a road,” says Bastani. “This is in many ways actually a lot closer to how we as humans construct mental models of the world around us.”
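
Here is a toy Python version of that step-by-step idea, with a hand-made grid of road cells and a greedy rule standing in for RoadTracer’s neural decision network.

```python
# Toy iterative road tracer: start at a known road point and repeatedly ask
# a decision function for the next point, stopping when none looks likely.
# The grid and the greedy rule below stand in for the trained CNN.

ROAD = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (3, 2), (4, 2)}   # toy road cells
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def next_point(current, visited):
    for dx, dy in STEPS:                       # stand-in for "most likely next point"
        cand = (current[0] + dx, current[1] + dy)
        if cand in ROAD and cand not in visited:
            return cand
    return None

def trace(start):
    path, current, visited = [start], start, {start}
    while True:
        nxt = next_point(current, visited)
        if nxt is None:                        # no confident next step: stop tracing
            return path
        path.append(nxt)
        visited.add(nxt)
        current = nxt

print(trace((0, 0)))
# -> [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (3, 2), (4, 2)]
```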

The team trained RoadTracer on aerial images of 25 cities across six countries in North America and Europe, and then evaluated its mapping abilities on 15 other cities.

“It’s important for a mapping system to be able to perform well on cities it hasn’t trained on, because regions where automatic mapping holds the most promise are ones where existing maps are non-existent or inaccurate,” says Balakrishnan.

Bastani says that the fact that RoadTracer had an error rate that is 45 percent lower is essential to making automatic mapping systems more practical for companies like Google.

“If the error rate is too high, then it is more efficient to map the roads manually from scratch versus removing incorrect segments from the inferred map,” says Bastani.

Still, implementing something like RoadTracer wouldn’t take people completely out of the loop: The team says that they could imagine the system proposing road maps for a large region and then having a human expert come in to double-check the design.

“That said, what’s clear is that with a system like ours you could dramatically decrease the amount of tedious work that humans would have to do,” Alizadeh says.

Indeed, one advantage to RoadTracer’s incremental approach is that it makes it much easier to correct errors — human supervisors can simply correct them and re-run the algorithm from where they left off, rather than continue to use imprecise information that trickles down to other parts of the map.

Of course, aerial images are just one piece of the puzzle. They don’t give you information about roads that have overpasses and underpasses, since those are impossible to ascertain from above. As a result, the team is also separately developing algorithms that can create maps from GPS data, and working to merge these approaches into a single system for mapping.

This project was supported in part by the Qatar Computing Research Institute.

Two robots are better than one: 5G antenna measurement research

Researchers at the National Institute of Standards and Technology (NIST) continue to pioneer new antenna measurement methods, this time for future 5G wireless communications systems.

NIST’s new Large Antenna Positioning System (LAPS) has two robotic arms designed to position “smart” or adaptable antennas, which can be mounted on base stations that handle signals to and from huge numbers of devices. Future 5G systems will operate at higher frequencies and offer more than 100 times the data-carrying capacity of today’s cellphones, while connecting billions of mobile broadband users in complex, crowded signal environments.

Among its many special capabilities, the LAPS can test transmissions to and from antennas located on fast-moving mobile devices, which requires coordination between the timing of communication signals and robot motion.

“Measurements of antenna signals are a great use for robotics,” NIST electronics engineer Jeff Guerrieri said. “The robotic arms provide antenna positioning that would be constrained by conventional measurement systems.”

NIST researchers are still validating the performance of the LAPS and are just now beginning to introduce it to industry. The system was described at a European conference last week.

Today’s mobile devices such as cell phones, consumer Wi-Fi systems and public safety radios mostly operate at frequencies below 3 gigahertz (GHz), a crowded part of the spectrum. Next-generation mobile communications are starting to use the more open frequency bands at millimeter wavelengths (30-300 GHz), but these signals are easily distorted and more likely to be affected by physical barriers such as walls or buildings. Solutions will include transmitter antenna arrays with tens to hundreds of elements that focus the antenna power into a steerable beam that can track mobile devices.

For decades, NIST has pioneered testing of high-end antennas for radar, aircraft, communications and satellites. Now, the LAPS will help foster the development of 5G wireless and spectrum-sharing systems. The dual-robot system will also help researchers understand the interference problems created by ever-increasing signal density.

The new facility is the next generation of NIST’s Configurable Robotic Millimeter-Wave Antenna (CROMMA) Facility, which has a single robotic arm. CROMMA, developed at NIST, has become a popular tool for high-frequency antenna measurements. Companies that integrate legacy antenna measurement systems are starting to use robotic arms in their product lines, facilitating the transfer of this technology to companies like The Boeing Co.

CROMMA can measure only physically small antennas. NIST developed the LAPS concept of a dual robotic arm system, one robot in a fixed position and the other mounted on a large linear rail slide to accommodate larger antennas and base stations. The system was designed and installed by NSI-MI Technologies. The LAPS also has a safety unit, including radar designed to prevent collisions of robots and antennas within the surrounding environment, and to protect operators.

The LAPS’ measurement capabilities for 5G systems include flexible scan geometries, beam tracking of mobile devices and improved accuracy and repeatability in mobile measurements.

The LAPS has replaced NIST’s conventional scanners and will be used to perform near-field measurement of basic antenna properties for aerospace and satellite companies requiring precise calibrations and performance verification. The near-field technique measures the radiated signal very close to the antenna in a controlled environment and, using mathematical algorithms developed at NIST, calculates the antenna’s performance at its operating distance, known as the far field.
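
For a rough sense of how a near-field calculation can work (this is a textbook simplification, not NIST’s actual algorithms), the far-field pattern from a planar scan can be approximated from the two-dimensional Fourier transform of the sampled field. The Python sketch below uses a synthetic aperture and ignores probe correction.

```python
# Simplified planar near-field to far-field idea: the angular pattern is
# proportional to the 2D Fourier transform of the field sampled on the plane.
# Synthetic data; probe correction and obliquity factors are ignored.

import numpy as np

n = 64                           # samples per side of the scan plane
aperture = np.zeros((n, n), dtype=complex)
aperture[24:40, 24:40] = 1.0     # synthetic near-field: a uniform 16x16 patch

spectrum = np.fft.fftshift(np.fft.fft2(aperture))
pattern_db = 20 * np.log10(np.abs(spectrum) / np.abs(spectrum).max() + 1e-12)

print("boresight level (dB):", round(pattern_db[n // 2, n // 2], 2))   # about 0 dB
print("off-axis sample (dB):", round(pattern_db[n // 2, n // 2 + 5], 2))
```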

But the ultimate goal for the LAPS is to perform dynamic, over-the-air tests of future 5G communication systems. Initial validation shows that basic mechanical operation of the LAPS is within the specified design tolerances for still and moving tests to at least 30 GHz. Final validation is ongoing.

#

Face recognition technology that works in the dark

Army researchers have developed an artificial intelligence and machine learning technique that produces a visible face image from a thermal image of a person’s face captured in low-light or nighttime conditions. This development could lead to enhanced real-time biometrics and post-mission forensic analysis for covert nighttime operations.

Thermal cameras like FLIR, or Forward Looking Infrared, sensors are actively deployed on aerial and ground vehicles, in watch towers and at check points for surveillance purposes. More recently, thermal cameras are becoming available for use as body-worn cameras. The ability to perform automatic face recognition at nighttime using such thermal cameras is beneficial for informing a Soldier that an individual is someone of interest, like someone who may be on a watch list.

The motivations for this technology — developed by Drs. Benjamin S. Riggan, Nathaniel J. Short and Shuowen “Sean” Hu, from the U.S. Army Research Laboratory — are to enhance both automatic and human-matching capabilities.

“This technology enables matching between thermal face images and existing biometric face databases/watch lists that only contain visible face imagery,” said Riggan, a research scientist. “The technology provides a way for humans to visually compare visible and thermal facial imagery through thermal-to-visible face synthesis.”

He said under nighttime and low-light conditions, there is insufficient light for a conventional camera to capture facial imagery for recognition without active illumination such as a flash or spotlight, which would give away the position of such surveillance cameras; however, thermal cameras that capture the heat signature naturally emanating from living skin tissue are ideal for such conditions.

“When using thermal cameras to capture facial imagery, the main challenge is that the captured thermal image must be matched against a watch list or gallery that only contains conventional visible imagery from known persons of interest,” Riggan said. “Therefore, the problem becomes what is referred to as cross-spectrum, or heterogeneous, face recognition. In this case, facial probe imagery acquired in one modality is matched against a gallery database acquired using a different imaging modality.”

This approach leverages advanced domain adaptation techniques based on deep neural networks. The fundamental approach is composed of two key parts: a non-linear regression model that maps a given thermal image into a corresponding visible latent representation and an optimization problem that projects the latent projection back into the image space.
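
A toy numerical sketch of that two-part recipe might look like the Python below, with small random matrices standing in for the learned models; the dimensions and matrices are invented for illustration and are not the Army’s networks.

```python
# Toy version of the two-step recipe: (1) a fixed linear map stands in for the
# learned regression from thermal input to a visible "latent" code, and
# (2) gradient descent recovers an image whose encoding matches that code.

import numpy as np

rng = np.random.default_rng(0)
D_THERMAL, D_LATENT, D_IMAGE = 16, 8, 16

W_reg = rng.normal(size=(D_LATENT, D_THERMAL)) / 4   # stand-in regression model
W_enc = rng.normal(size=(D_LATENT, D_IMAGE)) / 4     # stand-in image -> latent encoder

thermal = rng.normal(size=D_THERMAL)                 # fake thermal feature vector
latent = W_reg @ thermal                             # step 1: thermal -> visible latent

# Step 2: solve min_x ||W_enc x - latent||^2 + lam * ||x||^2 by gradient descent.
x, lam, lr = np.zeros(D_IMAGE), 0.1, 0.05
for _ in range(500):
    grad = 2 * W_enc.T @ (W_enc @ x - latent) + 2 * lam * x
    x -= lr * grad

# Residual should be small; the regularizer keeps it from reaching exactly zero.
print("residual:", np.linalg.norm(W_enc @ x - latent))
```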

Details of this work were presented in March in a technical paper “Thermal to Visible Synthesis of Face Images using Multiple Regions” at the IEEE Winter Conference on Applications of Computer Vision, or WACV, in Lake Tahoe, Nevada, which is a technical conference comprised of scholars and scientists from academia, industry and government.

At the conference, Army researchers demonstrated that combining global information (features from across the entire face) with local information (features from discriminative fiducial regions such as the eyes, nose and mouth) enhanced the discriminability of the synthesized imagery. They showed how the thermal-to-visible mapped representations from both global and local regions in the thermal face signature could be used in conjunction to synthesize a refined visible face image.

The optimization problem for synthesizing an image attempts to jointly preserve the shape of the entire face and appearance of the local fiducial details. Using the synthesized thermal-to-visible imagery and existing visible gallery imagery, they performed face verification experiments using a common open source deep neural network architecture for face recognition. The architecture used is explicitly designed for visible-based face recognition. The most surprising result is that their approach achieved better verification performance than a generative adversarial network-based approach, which previously showed photo-realistic properties.

Riggan attributes this result to the fact that the game-theoretic objective for GANs immediately seeks to generate imagery that is sufficiently similar in dynamic range and photo-like appearance to the training imagery, while sometimes neglecting to preserve identifying characteristics. The approach developed by ARL preserves identity information to enhance discriminability, for example, increased recognition accuracy for both automatic face recognition algorithms and human adjudication.

As part of the paper presentation, ARL researchers showcased a near real-time demonstration of this technology. The proof of concept demonstration included the use of a FLIR Boson 320 thermal camera and a laptop running the algorithm in near real-time. This demonstration showed the audience that a captured thermal image of a person can be used to produce a synthesized visible image in situ. This work received a best paper award in the faces/biometrics session of the conference, out of more than 70 papers presented.

Riggan said he and his colleagues will continue to extend this research under the sponsorship of the Defense Forensics and Biometrics Agency to develop a robust nighttime face recognition capability for the Soldier.

“Cow FitBits” Won’t Make Cows Happier Because They’re Not Milk Robots

The life of a milk cow is mostly pretty great. They relax, they go for walks on rich pastures; when it gets cold, they hang out with their bovine homies indoors.

That all goes out the window when they’re sick. Sick cows tend to eat less, walk differently, and give off sad moos. Now, the great AI hawkers have decided to automate a practice almost as old as farming itself: figuring out whether cows are sick so they can be treated. Proponents claim that the devices can identify a sick cow sooner, but many farmers don’t think they’re necessary, because they’ve developed a sixth sense for a sick cow.

Dutch innovation company Connecterra has developed an “intelligent cow-monitoring system” that follows  individual cows’ every move, relaying live information back to the farmer. Built on Google’s open-source AI platform TensorFlow (the same technology used to thwart illegal deforestation in Louisiana), the system uses motion-sensing “FitBits” attached to the cow’s neck to analyze its behavior.

Connecterra claims its Big Bovine Brother network can tell if a cow gets sick 24 to 48 hours before any visual symptoms arise by analyzing changes in internal temperature (that aren’t accounted for by external factors like high outside temperatures and humidity levels). It can also learn behaviors such as walking, standing, lying down, and chewing, and ring the alarm bells if a particular cow decides not to go for a second helping of hay.
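
The underlying alerting logic can be imagined as something like the Python sketch below, which compares each cow’s day against her own recent baseline; the thresholds and data are made up and are not Connecterra’s.

```python
# Toy early-warning check: flag cows whose temperature rises or whose eating
# time drops sharply relative to their own recent baseline. Values invented.

def flag_sick_cows(history, today, temp_rise_c=0.8, eating_drop=0.30):
    alerts = []
    for cow, days in history.items():
        base_temp = sum(d["temp"] for d in days) / len(days)
        base_eat = sum(d["eating_min"] for d in days) / len(days)
        now = today[cow]
        if (now["temp"] - base_temp > temp_rise_c
                or now["eating_min"] < (1 - eating_drop) * base_eat):
            alerts.append(cow)
    return alerts

history = {
    "cow_17": [{"temp": 38.6, "eating_min": 210}, {"temp": 38.5, "eating_min": 220}],
    "cow_22": [{"temp": 38.7, "eating_min": 190}, {"temp": 38.6, "eating_min": 200}],
}
today = {
    "cow_17": {"temp": 39.7, "eating_min": 140},   # feverish and off her feed
    "cow_22": {"temp": 38.6, "eating_min": 195},
}
print(flag_sick_cows(history, today))   # -> ['cow_17']
```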

Many farmers directly benefit from the technology, the company claims. “For a typical Dutch farm, which are generally known to be very productive to begin with, we’ve seen about a 20 percent to 30 percent gain in efficiency in farm operations using Connecterra,” said Yasir Khokhar, a former Microsoft employee and the company’s CEO.

AI is being used elsewhere on the farm, too. Farmers in China have been tracking the movement of pigs using RFID tags and overhead cameras that track individual pigs using machine learning. Even the noises the pigs make are analyzed to monitor for disease.

But do we really need AI-powered sensors to know if a cow is not producing at her max? Dairy farming has been around for at least 7,500 years. “I can spot a cow across a room that don’t feel great just by looking in her eyes,” Mark Rodgers, a dairy farmer in Georgia, tells the Washington Post.

And then there is the cost. Just to get your herd all hooked up with Connecterra, you’re looking at a substantial $79.99 per cow, plus a $3-per-month charge for each animal. If you’ve got a decent number of cows in your herd, costs like that can really add up.

The benefits of using AI technology for the individual animals themselves are pretty clear. Farmers can respond to illnesses and other changes in behavior faster. But there is, of course, a downside: if farmers continue to use technologies like Connecterra’s in the future, will their intuition change or vanish over time? What about the next generation of dairy farmers?

Dairy farmers should know how and when to respond to a cow’s needs without sophisticated technology. Teach a farmer how to watch cows, and they’ll drink milk for the rest of their life. But the unstoppable wave of AI technologies is taking over almost every aspect of our lives. At the end of the day, it’s about finding a balance between farmer intuition and technological aids that will make everyone happy.