AI Coaches Are Here To Unleash Your Inner LeBron

A coach is indispensable to the serious athlete — everyone from Olympians to up-and-coming youth athletes needs experts who can spot the strengths and weaknesses of an athlete’s style and cater to their personal needs. But AI systems are now almost sophisticated enough to do the job as well as — in some ways better than — the old human experts.

HomeCourt, an iPhone app that basketball players can use to track their shots, might be the first of its kind. If the phone’s camera is propped up and aimed at them while they practice, the app will track the position and success rate of each shot. As The Wall Street Journal reported, the free app offers users real-time feedback, complete with an automatically spliced video recording of every shot the athlete takes so they can check their form. At least, it does for 300 shots per month — beyond that, a user is prompted to pay $8 for a subscription.

There are other apps that coaches and athletes use, of course. Coach’s Eye, for instance, lets athletes review and annotate their footage. But while many of them help athletes film themselves, none use AI to help improve performance. Without an expert there to review the footage, these athletes may not even know what they’re looking for.

HomeCourt isn’t yet as sophisticated as a real-life human coach — right now, the app’s AI gets confused if there’s more than one person on the court. But David Lee, the co-founder and CEO of NEX Team, the company behind HomeCourt, is optimistic about how AI will be able to serve athletes in the future.

“In the future, we believe we can provide a platform where coaches and trainers can be actively training and coaching their players through the app from anywhere, anytime,” Lee told Futurism. He added that some athletes are already using HomeCourt to work remotely with their coaches when one of them is on the road. That way, athletes can get feedback from coaches based on what the AI saw during a solo practice session.

HomeCourt’s AI, rudimentary as it might be, represents an important first step. Artificial intelligence and apps — cheap compared to the elite coaches that kids are expected to hire if they want to break into travel leagues or thrive in a highly-competitive sport — could democratize the way that people can train and improve.

In the future, Lee plans to make the app capable of new measurements so it can glean even more about a player, some of which a human coach can’t readily discern. For basketball players, HomeCourt would look for things like jump height, speed, and release time, and analyze how each factor plays a role in an athlete’s accuracy.

“From the data, we can extract what shooting form has the highest consistency and success specifically for you,” says Lee. “The idea here is not to identify the perfect shot, but your perfect shot.”

He also hopes to bring HomeCourt’s level of analysis to other sports. Tennis might be a natural next step, since the court is similarly marked with clear lines that help the AI gauge where people are standing. But other sports and activities — even unconventional ones like yoga — may see AI coaches in the near future as well.

“We can track a person’s poses for something like downward dog and provide instant feedback about adjustments to help a yogi improve their poses,” says Lee. “Simply seeing ourselves doing yoga along with actionable insights could revolutionize yoga since most people don’t currently get any feedback about their poses and how they can improve.”

Having your smartphone film you while you stretch and balance may ruin yoga’s relaxing elements for some, but AI could be a great learning tool for the many yoga practitioners who only do yoga at home, instructed by a video. Once people get the fundamentals down, they would presumably be able to unplug and enjoy yoga’s meditative side.

Whether or not HomeCourt (or a similar AI system) reaches a given sport, it’s clear that sports technology is becoming more sophisticated than ever before. Athletes and coaches have access to an incredible amount of analytics and data, which helps them find specific ways to improve their games that wouldn’t have been imaginable in the past. The key to improving sports through AI, of course, is to make sure that these technologies are available to everyone. Otherwise, tools like HomeCourt will only help the privileged few who already have access to the best tools.

While the premium membership to HomeCourt isn’t unaffordable for most, it’s yet another subscription to keep track of. Meanwhile, the price tags for other advanced sports technology can easily climb into the triple digits, or even higher.

It’s easy to wonder whether these high-tech systems can really deliver on the promise of bringing competitive sports to people who have historically been priced out. Until more competitors arrive in the artificial intelligence coaching space, we may have to wait and see.

While one can’t expect a single company like HomeCourt to solve the problem of wealth inequality in sports, what we can all hope for is increased outreach to make sure that those who would actually benefit from smarter sports tech get to use it.

More on how technology can give athletes a boost: The Next Revolution of Sports Cheating: Rewriting Athletes’ Genetic Codes

Amazon Rekognition Falsely Matched 28 Members of Congress to Mugshots

YOU ARE NOT A MATCH. The American Civil Liberties Union (ACLU) just spent $12.33 to test a system that could quite literally cost people their lives. On Thursday, the nonprofit organization, which focuses on preserving the rights of U.S. citizens, published a blog post detailing its test of Rekognition, a facial identification tool developed and sold by Amazon.

Using Rekognition’s default setting of 80 percent confidence (meaning the system was 80 percent certain it was correct when it signaled a match), the ACLU scanned a database of 25,000 publicly available mugshots, looking to match them to photos of every sitting member of Congress, in both the Senate and the House of Representatives.

Rekognition matched 28 Congresspeople to mugshots. It was wrong, but it found matches anyway.

THE POLICE AND P.O.C. Not only did Rekognition mistakenly believe that those 28 members of Congress were the same people in the mugshots, but the people it wrongfully matched were also disproportionately people of color: while people of color make up just 20 percent of Congress, according to the ACLU, they accounted for 39 percent of the false matches.

“People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that,” wrote the ACLU in its post. “An identification — whether accurate or not — could cost people their freedom or even their lives.”

AMAZON’S RESPONSE. An Amazon spokesperson told The Verge that poor calibration was likely the reason Rekognition falsely matched so many members of Congress. “While 80 percent confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty,” said the spokesperson. They said Amazon recommends at least a 95 percent confidence threshold for any situation where a match might have significant consequences, such as when used by law enforcement agencies.
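
That threshold arithmetic is easy to sketch. Here is a minimal illustration, using invented similarity scores rather than Rekognition’s actual API or output, of how raising the cutoff from 80 to 95 percent prunes borderline matches:

```python
# Hypothetical similarity scores between probe photos and a mugshot
# database -- illustrative numbers only, not Rekognition output.
candidate_matches = [
    ("member_a", "mugshot_101", 0.97),
    ("member_b", "mugshot_245", 0.88),
    ("member_c", "mugshot_312", 0.83),
    ("member_d", "mugshot_077", 0.81),
    ("member_e", "mugshot_590", 0.62),
]

def matches_above(candidates, threshold):
    """Keep only the pairs whose confidence meets the threshold."""
    return [(probe, mug) for probe, mug, score in candidates
            if score >= threshold]

# At the 80 percent default, four pairs count as "matches"...
print(len(matches_above(candidate_matches, 0.80)))  # 4
# ...but at the recommended 95 percent, only one survives.
print(len(matches_above(candidate_matches, 0.95)))  # 1
```

A stricter threshold trades false positives for missed matches; in the ACLU’s test, all 28 matches returned at the default setting were false.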

HALT! While anyone can use the system relatively cheaply, as the ACLU did (Rekognition, according to its website, “charges you only for the images processed, minutes of video processed, and faces stored”), Amazon is actively marketing Rekognition to government and law enforcement agencies. Several are already using the service.

The ACLU isn’t the only organization actively petitioning against this use; Amazon shareholders and employees have urged the company to stop providing Rekognition to government agencies. So have dozens of civil rights groups and tens of thousands of members of the public.

So far, Amazon has not indicated that it plans to comply with these requests. But perhaps, if members of Congress see just how flawed the system really is, they’ll be compelled to take action, placing a halt on any law enforcement use of facial recognition software, as the ACLU requests in its blog post.

READ MORE: Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots [ACLU]

More on Rekognition: Police Surveillance Is Getting a Helping Hand From…Amazon!

The “Biometric Mirror” Judges You The Way We’ve Taught It To: With Bias

FIRST IMPRESSIONS. When we see someone for the first time, we make internal snap judgments about them. We can’t help it; we’re just judgmental like that. After looking at the person for just a few seconds, we might note their gender, race, and age, or decide whether or not we think they’re attractive, trustworthy, or kind.

After actually getting to know the person, we might find out that our initial perception of them was wrong. No big deal, right?

Well, it’s a very big deal when you consider how our assumptions could shape how the artificial intelligence (AI) of the future makes increasingly important decisions.

In an effort to illustrate this issue to the public, researchers from the University of Melbourne created Biometric Mirror.

PUBLIC PERCEPTION. Biometric Mirror is an AI that analyzes a person’s face and then displays 14 characteristics about them, including their age, race, and perceived level of attractiveness.

To teach the system to do this, the Melbourne researchers started by asking human volunteers to judge thousands of photos for the same characteristics. This became the dataset Biometric Mirror referenced when analyzing new faces. Because the information these volunteers provided was subjective, so was Biometric Mirror’s output. If most of the human respondents thought people with beards seemed less trustworthy, that would influence how the Biometric Mirror judged people with beards.
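
The mechanism is easy to see in miniature. This toy sketch (invented numbers and a deliberately crude model, not the researchers’ actual pipeline) shows how a system that merely averages crowd ratings inherits whatever prejudice the raters held:

```python
# Toy illustration: crowd-sourced "trustworthiness" ratings (1-10) for
# photos, each tagged by a single visible feature. Invented numbers.
ratings = [
    ({"beard": True},  4), ({"beard": True},  3), ({"beard": True},  5),
    ({"beard": False}, 7), ({"beard": False}, 8), ({"beard": False}, 6),
]

def train(labeled):
    """'Learn' by averaging the crowd's ratings for each feature value."""
    totals = {}
    for features, score in labeled:
        totals.setdefault(features["beard"], []).append(score)
    return {key: sum(vals) / len(vals) for key, vals in totals.items()}

def predict(model, features):
    """Score a new face by looking up the learned average."""
    return model[features["beard"]]

model = train(ratings)
# The system now scores any bearded face as less trustworthy -- not
# because beards mean anything, but because the raters judged them so.
print(predict(model, {"beard": True}))   # 4.0
print(predict(model, {"beard": False}))  # 7.0
```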

THE ETHICS OF AI. To use Biometric Mirror, a person just has to stand in front of the system for a few seconds. It quickly scans their face and then lists their perceived characteristics on a screen. The AI then asks the person to think about how they’d feel if it shared that information with others. How would they feel if they didn’t get a job because the AI ranked them as having a low level of trustworthiness? Or if law enforcement officials decided to target them because they ranked highly for aggression?

“Our study aims to provoke challenging questions about the boundaries of AI. It shows users how easy it is to implement AI that discriminates in unethical or problematic ways which could have societal consequences,” lead researcher Niels Wouters said in a press release. “By encouraging debate on privacy and mass-surveillance, we hope to contribute to a better understanding of the ethics behind AI.”

ELIMINATING BIAS. A system as biased as Biometric Mirror could have major consequences as AI becomes more widely used and makes increasingly important decisions. And this isn’t just some future possibility, either; we’re already seeing examples show up in today’s systems. While researchers work on figuring out ways to ensure future systems don’t contain those same flaws, it’s important that the public consider the potential impact of biased AI on society. Biometric Mirror could help them do just that.

READ MORE: Holding a Black Mirror up to Artificial Intelligence [University of Melbourne]

More on biased AI: Microsoft Announces Tool to Catch Biased AI Because We Keep Making Biased AI

DARPA Is Funding Research Into AI That Can Explain What It’s “Thinking”

LOOKING AHEAD. Researchers will hold the next wave of artificial intelligence (AI) systems to the same standard as high school math students everywhere: no credit if you don’t show your work.

On Friday, the Defense Advanced Research Projects Agency (DARPA), a Department of Defense (DoD) agency focused on breakthrough technologies, announced its Artificial Intelligence Exploration (AIE) program. The program will streamline the agency’s process for funding AI research and development, with a focus on third wave AI technologies — the kinds that can understand and explain how they arrived at an answer.

NEXT-LEVEL AI. Most of the AI in use today falls under the category of first wave. These AIs follow clear, logical rules (think: chess-playing AIs). Second wave AIs are the kind that use statistical learning to arrive at an answer for a certain type of problem (think: image recognition systems).

A system in the third wave of AI will not only be able to do what a second wave system can do (for example, correctly identify a picture of a dog), it’ll be able to explain why it decided the image is of a dog. For example, it might note that the animal’s four legs, tail, and spots align with its understanding of what a dog should look like.

In other words, it’ll be able to do more than give the right answer — it’ll be able to show us how it got there.
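
The dog example can be caricatured in a few lines. This toy rule-based classifier (invented traits and thresholds, nothing like a real third wave system) returns both a verdict and the evidence behind it:

```python
# Toy sketch of "showing your work": return an answer together with the
# features that drove it. Rules are invented for illustration only.
DOG_TRAITS = {"four legs", "tail", "spots", "fur"}

def classify(observed_traits, required=3):
    """Call it a dog if enough known dog traits are present, and say why."""
    evidence = sorted(DOG_TRAITS & observed_traits)
    verdict = "dog" if len(evidence) >= required else "not sure"
    return verdict, evidence

verdict, why = classify({"four legs", "tail", "spots"})
print(verdict)  # dog
print(why)      # ['four legs', 'spots', 'tail']
```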

As John Launchbury, the Director of DARPA’s Information Innovation Office (I2O), noted in a DARPA video, these systems should be able to learn from far smaller datasets than second wave systems. For example, instead of feeding an AI 100,000 meticulously labeled images to teach it how to recognize handwriting, we might be able to show it just one or two generative examples — examples that show how to form each letter. It can then use that contextual information to identify handwriting in the future.

These third wave systems would “think” rather than just churn out answers based on whatever datasets they’re fed (and as we’ve seen in the past, those datasets can include the biases of their creators). Ultimately, this is the next step toward creating AIs that can reason and engage in abstract thought, which could improve how both the military and everyone else makes use of AI.

FUNDING THE THIRD WAVE. Here’s how DARPA’s AIE program hopes to speed up the arrival of third wave AI. First, the agency will periodically publish a notice it’s calling an “AIE Opportunity.” This notice will highlight an area of third wave AI research of particular interest to the military.

Researchers can then submit proposals for projects to DARPA, which will review them and potentially choose to award the researcher with up to $1 million in funding. The goal is to have researchers get started on projects within 90 days of the AIE Opportunity announcement and determine whether a concept is feasible within 18 months.

AI ON THE BRAIN. This is just the latest example of the U.S. military’s growing interest in AI. Recent projects include everything from AIs that analyze footage to improve drone strikes to systems that function like the human brain. Just last month, the DoD launched the Joint Artificial Intelligence Center (JAIC), a center designed to help the department integrate AI into both its business and military practices.

Both that center and the AIE program put a premium on speed, a wise move for the DoD given that nations all across the globe are racing to be the world leader in military AI.

READ MORE: DARPA Pushes for AI That Can Explain Its Decisions [Engadget]

More on U.S. military AI: U.S. Department of Defense Established a Center to Better Integrate AI

Google’s AI Can Predict When A Patient Will Die

AI knows when you’re going to die. But unlike in sci-fi movies, that information could end up saving lives.

A new paper published in Nature suggests that feeding electronic health record data to a deep learning model could substantially improve the accuracy of projected outcomes. In trials using data from two U.S. hospitals, researchers showed that these algorithms could predict not only a patient’s length of stay and time of discharge, but also their time of death.

The neural network described in the study uses an immense amount of data, such as a patient’s vitals and medical history, to make its predictions. A new algorithm lines up the previous events in each patient’s records into a timeline, allowing the deep learning model to pinpoint future outcomes, including time of death. The neural network even includes handwritten notes, comments, and scribbles on old charts in making its predictions. And it performs all of these calculations in record time, of course.
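
The timeline step is the easiest part to picture. Here is a minimal sketch, with invented records and field names and nothing like the scale of the model in the paper, of lining up a patient’s scattered events chronologically:

```python
# Minimal sketch of lining up one patient's records into a timeline.
# The events, timestamps, and field names are invented for illustration.
records = [
    {"patient": "p1", "time": "2018-03-02T09:15", "event": "admitted"},
    {"patient": "p1", "time": "2018-03-01T22:40", "event": "ER triage note"},
    {"patient": "p1", "time": "2018-03-02T11:00", "event": "vitals: BP 140/90"},
]

def timeline(events, patient_id):
    """Collect one patient's events and sort them chronologically.

    ISO-8601 timestamps sort correctly as plain strings."""
    mine = [e for e in events if e["patient"] == patient_id]
    return sorted(mine, key=lambda e: e["time"])

for e in timeline(records, "p1"):
    print(e["time"], e["event"])
```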

What can we do with this information, besides fear the inevitable? Hospitals could find new ways to prioritize patient care, adjust treatment plans, and catch medical emergencies before they even occur. It could also free up healthcare workers, who would no longer have to manipulate the data into a standardized, legible format.

AI, of course, already has a number of other applications in healthcare. A pair of recently developed algorithms could diagnose lung cancer and heart disease even more accurately than human doctors. Health researchers have also fed retinal images to AI algorithms to determine the chances a patient could develop one (or more) of three major eye diseases.

But those early trials operated on a much smaller scale than what Google is trying to do. More and more of our health data is being uploaded to centralized computer systems, but most of these databases exist independently, spread across various healthcare systems and government agencies.

Funneling all of this personal data into a single predictive model owned by one of the largest private corporations in the world is a solution, but it’s not an appealing one. Electronic health records of millions of patients in the hands of a small number of private companies could quickly allow the likes of Google to exploit health industries, and become a monopoly in healthcare.

Just last week, Alphabet-owned DeepMind Health came under scrutiny by the U.K. government over concerns it was able to “exert excessive monopoly power,” according to TechCrunch. And their relationship was already frayed over allegations that DeepMind Health broke U.K. laws by collecting patient data without proper consent in 2017.

Healthcare professionals are already concerned about the effect that AI will have on medicine once it’s truly embedded — and about what could happen if we don’t take precautions for transparency before then. The American Medical Association acknowledges in a statement that combining AI with human clinicians can bring significant benefits, but states that AI tools must “strive to meet several key criteria, including being transparent, standards-based, and free from bias.” The Health Insurance Portability and Accountability Act (HIPAA) passed by Congress in 1996 — 22 years is an eternity in technology terms — just won’t cut it.

Without an effective regulatory framework that encourages transparency in the U.S., it will be nearly impossible to hold these companies accountable. It may be up to private companies to ensure that AI technology has an impact on healthcare that benefits patients, not just the companies themselves.

Faster analysis of medical images

Medical image registration is a common technique that involves overlaying two images, such as magnetic resonance imaging (MRI) scans, to compare and analyze anatomical differences in great detail. If a patient has a brain tumor, for instance, doctors can overlap a brain scan from several months ago onto a more recent scan to analyze small changes in the tumor’s progress.

This process, however, can often take two hours or more, as traditional systems meticulously align each of potentially a million pixels in the combined scans. In a pair of upcoming conference papers, MIT researchers describe a machine-learning algorithm that can register brain scans and other 3-D images more than 1,000 times more quickly using novel learning techniques.

The algorithm works by “learning” while registering thousands of pairs of images. In doing so, it acquires information about how to align images and estimates some optimal alignment parameters. After training, it uses those parameters to map all pixels of one image to another, all at once. This reduces registration time to a minute or two using a normal computer, or less than a second using a GPU, with accuracy comparable to state-of-the-art systems.
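
That all-at-once mapping amounts to applying a displacement field as a single function. Here is a bare-bones 2-D version in pure Python, with nearest-neighbor sampling and a hand-written field; the real system learns its field and operates on 3-D volumes with subvoxel interpolation:

```python
def warp(image, field):
    """Resample `image` through a displacement field.

    field[y][x] = (dy, dx): output pixel (y, x) takes its value from
    input pixel (y + dy, x + dx); nearest-neighbor, zero outside."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = field[y][x]
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = image[sy][sx]
    return out

# Shift a tiny image one pixel left by sampling from (y, x + 1).
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
shift_left = [[(0, 1)] * 3 for _ in range(3)]
print(warp(img, shift_left))  # [[2, 3, 0], [5, 6, 0], [8, 9, 0]]
```

The heavy lifting in the actual method is learning a good field for each new pair of scans; applying it, as above, is the cheap part.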

“The tasks of aligning a brain MRI shouldn’t be that different when you’re aligning one pair of brain MRIs or another,” says co-author on both papers Guha Balakrishnan, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Electrical Engineering and Computer Science (EECS). “There is information you should be able to carry over in how you do the alignment. If you’re able to learn something from previous image registration, you can do a new task much faster and with the same accuracy.”

The papers are being presented at the Conference on Computer Vision and Pattern Recognition (CVPR), held this week, and at the Medical Image Computing and Computer Assisted Interventions Conference (MICCAI), held in September. Co-authors are: Adrian Dalca, a postdoc at Massachusetts General Hospital and CSAIL; Amy Zhao, a graduate student in CSAIL; Mert R. Sabuncu, a former CSAIL postdoc and now a professor at Cornell University; and John Guttag, the Dugald C. Jackson Professor in Electrical Engineering at MIT.

Retaining information

MRI scans are basically hundreds of stacked 2-D images that form massive 3-D images, called “volumes,” containing a million or more 3-D pixels, called “voxels.” Therefore, it’s very time-consuming to align all voxels in the first volume with those in the second. Moreover, scans can come from different machines and have different spatial orientations, meaning matching voxels is even more computationally complex.

“You have two different images of two different brains, put them on top of each other, and you start wiggling one until one fits the other. Mathematically, this optimization procedure takes a long time,” says Dalca, senior author on the CVPR paper and lead author on the MICCAI paper.

This process becomes particularly slow when analyzing scans from large populations. For neuroscientists analyzing variations in brain structures across hundreds of patients with a particular disease or condition, for instance, the process could take hundreds of hours.

That’s because those algorithms have one major flaw: They never learn. After each registration, they dismiss all data pertaining to voxel location. “Essentially, they start from scratch given a new pair of images,” Balakrishnan says. “After 100 registrations, you should have learned something from the alignment. That’s what we leverage.”

The researchers’ algorithm, called “VoxelMorph,” is powered by a convolutional neural network (CNN), a machine-learning approach commonly used for image processing. These networks consist of many nodes that process images and other information across several layers of computation.

In the CVPR paper, the researchers trained their algorithm on 7,000 publicly available MRI brain scans and then tested it on 250 additional scans.

During training, brain scans were fed into the algorithm in pairs. Using a CNN and a modified computation layer called a spatial transformer, the method captures similarities between voxels in one MRI scan and voxels in the other. In doing so, the algorithm learns information about groups of voxels — such as anatomical shapes common to both scans — which it uses to calculate optimized parameters that can be applied to any scan pair.

When fed two new scans, a simple mathematical “function” uses those optimized parameters to rapidly calculate the exact alignment of every voxel in both scans. In short, the algorithm’s CNN component gains all necessary information during training so that, during each new registration, the entire registration can be executed using one, easily computable function evaluation.

The researchers found their algorithm could accurately register all of their 250 test brain scans — those registered after the training set — within two minutes using a traditional central processing unit, and in under one second using a graphics processing unit.

Importantly, the algorithm is “unsupervised,” meaning it doesn’t require additional information beyond image data. Some registration algorithms incorporate CNN models but require a “ground truth,” meaning another traditional algorithm is first run to compute accurate registrations. The researchers’ algorithm maintains its accuracy without that data.

The MICCAI paper develops a refined VoxelMorph algorithm that “says how sure we are about each registration,” Balakrishnan says. It also guarantees the registration “smoothness,” meaning it doesn’t produce folds, holes, or general distortions in the composite image. The paper presents a mathematical model that validates the algorithm’s accuracy using something called a Dice score, a standard metric to evaluate the accuracy of overlapped images. Across 17 brain regions, the refined VoxelMorph algorithm scored the same accuracy as a commonly used state-of-the-art registration algorithm, while providing runtime and methodological improvements.
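
The Dice score mentioned above is simple to compute: twice the overlap between two segmentations, divided by their combined size. A minimal version for flat binary masks (illustrative, not the paper’s evaluation code):

```python
def dice(mask_a, mask_b):
    """Dice coefficient of two equal-length binary masks:
    1.0 means perfect overlap, 0.0 means none."""
    assert len(mask_a) == len(mask_b)
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:   # both masks empty: treat as perfect agreement
        return 1.0
    return 2 * intersection / total

a = [1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 0]
print(dice(a, b))  # 2 * 2 / (3 + 3) = 0.666...
```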

Beyond brain scans

The speedy algorithm has a wide range of potential applications in addition to analyzing brain scans, the researchers say. MIT colleagues, for instance, are currently running the algorithm on lung images.

The algorithm could also pave the way for image registration during operations. Various scans of different qualities and speeds are currently used before or during some surgeries. But those images are not registered until after the operation. When resecting a brain tumor, for instance, surgeons sometimes scan a patient’s brain before and after surgery to see if they’ve removed all the tumor. If any bit remains, they’re back in the operating room.

With the new algorithm, Dalca says, surgeons could potentially register scans in near real-time, getting a much clearer picture of their progress. “Today, they can’t really overlap the images during surgery, because it will take two hours, and the surgery is ongoing,” he says. “However, if it only takes a second, you can imagine that it could be feasible.”

Was That Script Written By A Human Or An AI? Here’s How To Spot The Difference.

If you spent time on social media today, you probably came across that script for an Olive Garden commercial allegedly written by an artificial intelligence algorithm. The commercial is a hilarious trip into the absurd in which patrons enjoy classic staples of the kind of Italian dining we’ve come to expect from America’s 15th favorite chain restaurant, such as “warm and defeated pasta nachos,” secret soup, Italian citizens, and “unlimited stick.”

The commercial was also, unfortunately, likely not written by AI at all. Instead, it was probably just a boring old human who claimed to have used a neural net for some sweet, sweet social media fame.

Last night, engineer Janelle Shane took to Twitter to lay out some of the telltale giveaways that the script was written by a person pretending to be an AI algorithm for kicks. You may recognize Shane as the person who trains neural nets to create jokes that devolve into nonsense or paint colors that almost sound real after being trained on thousands of actual examples. Yes, the AI-generated results are absurd, but they also highlight one key fact — the neural nets have no clue what the hell they’re talking about.

So how do you spot something written by an AI, anyhow?

“I’d say the clearest giveaways are a really short memory (maybe just a couple of sentences long) and a lack of understanding of meaning and context,” Shane told Futurism. “One characteristic of neural net text is it’ll tend to mimic the surface appearance of things without really getting the meaning behind them.”
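
That surface-level mimicry can be demonstrated with something far dumber than a neural net. This character-level Markov chain (a classic toy, not anything Shane uses) reproduces local letter patterns from a made-up corpus while understanding nothing:

```python
import random

def build_model(text, order=3):
    """Map each `order`-character context to the characters that follow it."""
    model = {}
    for i in range(len(text) - order):
        model.setdefault(text[i:i + order], []).append(text[i + order])
    return model

def babble(model, seed, length=60, order=3):
    """Generate text that locally resembles the training data."""
    out = seed
    while len(out) < length:
        followers = model.get(out[-order:])
        if not followers:   # dead end: no continuation seen in training
            break
        out += random.choice(followers)
    return out

# Tiny invented corpus in the spirit of the commercial.
corpus = ("the pasta nachos are warm and the breadsticks are "
          "unlimited and the soup is secret ")
model = build_model(corpus)
random.seed(0)
print(babble(model, "the"))
```

Every four-character window of the output appears somewhere in the training text, yet the output has no plot, no memory, and no idea what pasta is.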

The fun parts of these bot-written passages are the moments when they create impossibly surreal scenarios, but one telltale sign that something was actually written by a person is when those individual images still fit together. For instance, the cast of the Olive Garden commercial remains consistent; if an actual neural net had been the author, characters would have been introduced and abandoned willy-nilly.

A neural net trained just on commercials wouldn’t understand how to put together a cohesive narrative, but a human writer would keep an eye out for these things.

Take, for instance, this recipe written by a neural net trained on recipes:

The instructions, you notice, have absolutely nothing to do with the ingredients listed.

Right now, artificial intelligence excels at incredibly narrow tasks. It can generate cohesive grammar at the sentence level, but something like a script is still too complex.

“For many years yet, it will be the case that if you see a well-written story with a coherent plot and clever wordplay, it will be because a human did most of the work,” Shane said.

AI-written text will continue to grow in sophistication and prevalence, even if it’s mostly a source of humor today, Shane added. And that means it could grow more misleading. Today we’re just talking about who or what wrote a funny commercial online, but as artificial intelligence becomes more sophisticated, it could be used to write misinformation, like convincing (but fake) news articles. That could have very real consequences for people who fall for it.

Now, based on what Shane told us, we’ve come up with a list of telltale signs to look for if you want to know whether a particular text was penned by a human or a bot.

  1. Did it make sense? If something looks like it matches a classic joke convention but the content seems totally garbled, it likely wasn’t written by a person.
  2. How’s their attention span? If the author seems to have forgotten what they were talking about part of the way through, then you’re likely witnessing a neural net’s inherently short attention span. Meanwhile, if the text seems sophisticated and clever, or it seems as though someone actually put care into the structure of their sentences, then you’re looking at signs of a human’s touch.
  3. Did they show their work? As Shane pointed out, whoever managed to write an Olive Garden commercial with nothing but an AI algorithm would be gloating about it much more — we’d see a whole lot more about how they trained their neural net and how they managed to make everything come together. The fact that no one is geeking out about the technical side of this neural net suggests that it doesn’t exist.

The truth at the bottom of all this? We need to know what AI is actually capable of. Because how can we appreciate a parody if we don’t understand what it’s mimicking?

“There’s definitely a place for parodies of AI-generated text,” added Shane, “but the parodies only work if you know what the real stuff is like.”

Google Created AI That Just Needs A Few Snapshots To Make 3D Models Of Its Surroundings

Google’s new type of artificial intelligence algorithm can figure out what things look like from all angles — without needing to see them.

After viewing something from just a few different perspectives, the Generative Query Network was able to piece together an object’s appearance, even as it would appear from angles not analyzed by the algorithm, according to research published today in Science. And it did so without any human supervision or training. That could save a lot of time as engineers prepare increasingly advanced algorithms for technology, but it could also extend the abilities of machine learning to give robots (military or otherwise) greater awareness of their surroundings.

The Google researchers intend for their new type of artificial intelligence system to take away one of the major time sucks of AI research — going through and manually tagging and annotating images and other media that can be used to teach an algorithm what’s what. If the computer can figure all that out on its own, scientists would no longer need to spend so much time gathering and sorting data to feed into their algorithm.

According to the research, the AI system could create a full render of a 3D setting based on just five separate virtual snapshots. It learned about objects’ shape, size, and color independently of one another and then combined all of its findings into an accurate 3D model. Once the algorithm had that model, researchers could use the algorithm to create entirely new scenes without having to explicitly lay out what objects should go where.
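To make that concrete, here is a deliberately tiny, hypothetical sketch of the idea: a 1D "camera" observes a hidden 2D point from five angles, a least-squares solve plays the role of the representation network, and re-projection plays the role of the generation network. The geometry and every name below are illustrative assumptions, not details from the Science paper.

```python
import math
import numpy as np

def camera(scene_xy, angle):
    """Project a 2D point onto a 1D image plane at the given view angle."""
    x, y = scene_xy
    return math.cos(angle) * x + math.sin(angle) * y

true_scene = (2.0, -1.0)                  # hidden object the system never sees directly
view_angles = [0.1, 0.9, 1.7, 2.5, 3.3]   # five snapshots, as in the paper
observations = [camera(true_scene, a) for a in view_angles]

# "Representation network" stand-in: recover the scene from the observed views.
A = np.array([[math.cos(a), math.sin(a)] for a in view_angles])
b = np.array(observations)
inferred_scene, *_ = np.linalg.lstsq(A, b, rcond=None)

# "Generation network" stand-in: render a viewpoint that was never observed.
novel_angle = 5.0
predicted = camera(tuple(inferred_scene), novel_angle)
actual = camera(true_scene, novel_angle)
print(f"predicted view: {predicted:.3f}, actual: {actual:.3f}")
```

The real system infers far richer scene properties (shape, size, color) with learned networks, but the structure is the same: a few views in, an internal scene representation, and novel views out.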

While the tests were conducted in a virtual room, the Google scientists suspect that their work will give rise to machines that can autonomously learn about their surroundings without any researchers poring through an expansive set of data to make it happen.

It’s easy to imagine a world where this sort of artificial intelligence is used to enhance surveillance programs (good thing Google updated its code of ethics to allow work with the military). But the Generative Query Network isn’t quite that sophisticated yet — the algorithm can’t guess what your face looks like after seeing the back of your head or anything like that. So far, this technology has only faced simple tests with basic objects, not anything as complex as a person.

Instead, this research is likely to boost existing applications for machine learning, like enhancing the precision of assembly line robots to give them a better understanding of their surroundings.

No matter what practical applications emerge from this early, proof-of-concept research, it does show that we’re getting closer to truly autonomous machines that are able to perceive and understand their surroundings, just the way humans do.

A New AI Algorithm Can Track Your Movements Through A Wall

For decades, prestigious scientists have been searching for a way to see through walls. And, in the past few years, they’ve succeeded — they created technology that uses WiFi to sense people through walls. Only, the signal it returns is really scant. Now, researchers at MIT have developed a new machine learning algorithm that not only detects people’s movements, it also models what they are actually doing.

Fortunately for people who value the modest privacy walls provide, the new technology just recreates a bare-bones stick figure that matches the person’s pose and movements.

When we say something can “see” through walls, we don’t actually mean, like, vision per se. The artificial intelligence-powered system, dubbed RF-Pose, bounces WiFi signals through the walls and off people on the other side of them, and analyzes the patterns as they come back. From these blobs of reflected signals, the technology is able to reconstruct a 2D stick figure. It’s like how a bat sees by echolocation, except the “image” rendered in its head is a childish drawing.

The stick figure, of course, doesn’t have a face. So if the system has never seen you before, it won’t know who you are. But after 100 participants trained the system, it could correctly identify which researchers were which 83 percent of the time, based on “their style of moving,” the researchers write.

The scientists behind this new technology hope that it will improve care for people with neurodegenerative diseases like Alzheimer’s or Parkinson’s and for the elderly in assisted living facilities.

If people agree to let technology like this new wall-penetrating scanner monitor them, it could help scientists better understand how certain neurological conditions affect posture and gait over time. If the technology continues to improve, it could even be used to detect a person’s tremors at a distance to help doctors monitor how well a certain treatment is working.

RF-Pose also stands to improve video game technology and police safety — officers could tell whether someone on the other side of a wall is holding a weapon rather than merely knowing that someone is there — but the researchers told Motherboard that they hope to focus their efforts on medical applications.

And in a refreshing twist on the usual news we hear about potentially-creepy surveillance technology, the researchers behind RF-Pose have already taken steps to protect those who might be monitored. They’ve worked to anonymize and encrypt all of the data; they’ve also designed it so that people who would be monitored by it would be able to turn the device on or off with a particular motion, the physical equivalent of waking a smart speaker by saying “Alexa” or “OK, Google.” Some may even prefer to have RF-Pose keep an eye on them over an assisted living worker.

That doesn’t necessarily guarantee that people’s privacy will be respected, and some people with more advanced neurodegenerative diseases may not be able to fully understand the implications of the technology. Still, it’s nice to see that engineers are thinking about the people who might be abused by their inventions, and that scientific research can progress with those protections in place.

This Artificial Intelligence System Predicts The Future To Help You Around The House

There’s an artificial intelligence system that can peer into the future and anticipate what you’re about to do.

No, we’re not talking about anything like Roko’s Basilisk — it can’t travel through time or take over the world (at least not yet). Instead, fortune-telling AI can only guess what someone will do within the next few minutes. As long as those next few minutes definitely involve cooking. And the past few minutes also involved cooking.

OK, look, we’re just going to come out and say it: scientists created a salad- and breakfast-food-predictor. New research that will be presented at the IEEE Conference on Computer Vision and Pattern Recognition later this month details how a machine learning algorithm was trained with hours of videos of people preparing salads and breakfast food. Now it can predict the next few steps of all sorts of salad recipes when the system sees someone start to make one.

The scientists behind the algorithm envision a future in which smart home devices can recognize what you’re doing and automatically step in to help. Maybe your smart speaker could remind you that you skipped a step of a recipe, or a futuristic Bluetooth stove could adjust the heat as you put each ingredient on the skillet. In short, your kitchen might be able to anticipate exactly how you plan to use it.

This does not mean that machines have developed the ability to recognize people’s intentions as conscious entities. Rather, this machine learning algorithm has simply learned to predict the next steps of a process (again, right now that’s limited to just making breakfast or salad) based on what someone has already done. And even then, it’s not particularly great at it yet.

Right now, the AI system still needs a little help: someone needs to tag the actions in the first 20 to 30 percent of a video for it to catch on. Even with that jump start, and looking no more than a few minutes into the future, the AI predicted someone’s next move only about 40 percent of the time. And that level of accuracy holds only about three minutes into the future; beyond that, the algorithm’s accuracy dropped to 15 percent, the researchers explained in a press release.

But since process-oriented, future-predicting artificial intelligence is such a new technology, the researchers define a long-term prediction as “anything more than a few seconds.” Surely, bringing AI’s predictive abilities into the realm of minutes is a notable advance, even if the process isn’t yet flawless (and only works for videos of breakfast food and salads).

Of course, in order for such a system to work, you would have to be okay with setting up all sorts of internet-connected gadgets in your home, and based on one Gizmodo reporter’s experience, that might not be worth the AI-generated kitchen tips.

Brain-Based Circuitry Just Made Artificial Intelligence A Whole Lot Faster

We take the vast computing power of our brains for granted. But scientists are still trying to get computers to the brain’s level.

This is how we ended up with artificial intelligence algorithms that learn through virtual neurons: the neural net.

Now a team of engineers has taken another step closer to emulating the computers in our noggins: they’ve built a physical neural network, with circuits that even more closely resemble neurons. When they tested an AI algorithm on the new type of circuitry, they found that it performed as well as conventional neural nets already in use. But! the new integrated neural net system completed the task with 100 times less energy than a conventional AI algorithm.

If these new neuron-based circuits take off, artificial intelligence researchers will soon be able to do a lot more computing with a lot less energy. Like using a tin can to communicate with an actual telephone, computer chips and neural net algorithms just speak two different languages, and work slower as a result. But in the new system, the hardware and software were built to work perfectly together. So the new AI system completed the tasks much faster than a conventional system, without any drop in accuracy.

This is a step up from previous attempts to make silicon-based neural networks. The AI systems built on these sorts of neuron-inspired chips usually don’t work as well as conventional artificial intelligence. But the new research modeled two types of neurons: one geared for quick computations and another designed to store long-term memory, the researchers explained to MIT Technology Review.

There’s good reason to be skeptical of any researcher who claims that the answer to truly comprehensive, general artificial intelligence and consciousness is to recreate the human brain. That’s because, fundamentally, we know very little about how the brain works. And chances are, there are lots of things in our brains that a computer would find useless.

But even so, the researchers behind the new artificial neural hardware have been able to glean important lessons from how our brains work and apply it to computer science. In that sense, they have figured out how to further artificial intelligence by cherry-picking what our brains have to offer without getting weighed down trying to rebuild the whole darn thing.

As technology sucks up more and more power, the hundred-fold improvement to energy efficiency in this AI system means scientists will be able to pursue big questions without leaving such a huge footprint on the environment.

AI senses people’s pose through walls

X-ray vision has long seemed like a far-fetched sci-fi fantasy, but over the last decade a team led by Professor Dina Katabi from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has continually gotten us closer to seeing through walls.

Their latest project, “RF-Pose,” uses artificial intelligence (AI) to teach wireless devices to sense people’s postures and movement, even from the other side of a wall.

The researchers use a neural network to analyze radio signals that bounce off people’s bodies, and can then create a dynamic stick figure that walks, stops, sits and moves its limbs as the person performs those actions.

The team says that the system could be used to monitor diseases like Parkinson’s and multiple sclerosis (MS), providing a better understanding of disease progression and allowing doctors to adjust medications accordingly. It could also help elderly people live more independently, while providing the added security of monitoring for falls, injuries and changes in activity patterns.

(All data the team collected has subjects’ consent and is anonymized and encrypted to protect user privacy. For future real-world applications, the team plans to implement a “consent mechanism” in which the person who installs the device is cued to do a specific set of movements in order for it to begin to monitor the environment.)

The team is currently working with doctors to explore multiple applications in healthcare.

“We’ve seen that monitoring patients’ walking speed and ability to do basic activities on their own gives healthcare providers a window into their lives that they didn’t have before, which could be meaningful for a whole range of diseases,” says Katabi, who co-wrote a new paper about the project. “A key advantage of our approach is that patients do not have to wear sensors or remember to charge their devices.”

Besides healthcare, the team says that RF-Pose could also be used for new classes of video games where players move around the house, or even in search-and-rescue missions to help locate survivors.

“Just like how cellphones and Wi-Fi routers have become essential parts of today’s households, I believe that wireless technologies like these will help power the homes of the future,” says Katabi, who co-wrote the new paper with PhD student and lead author Mingmin Zhao, MIT professor Antonio Torralba, postdoc Mohammad Abu Alsheikh, graduate student Tianhong Li and PhD students Yonglong Tian and Hang Zhao. They will present it later this month at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah.

One challenge the researchers had to address is that most neural networks are trained using data labeled by hand. A neural network trained to identify cats, for example, requires that people look at a big dataset of images and label each one as either “cat” or “not cat.” Radio signals, meanwhile, can’t be easily labeled by humans.

To address this, the researchers collected examples using both their wireless device and a camera. They gathered thousands of images of people doing activities like walking, talking, sitting, opening doors and waiting for elevators.

They then used these images from the camera to extract the stick figures, which they showed to the neural network along with the corresponding radio signal. This combination of examples enabled the system to learn the association between the radio signal and the stick figures of the people in the scene.
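The cross-modal supervision described above, in which camera-derived stick figures serve as labels for radio signals, can be caricatured in a few lines of Python. Everything here (a linear model standing in for the deep network, the dimensions, the synthetic data) is an illustrative assumption rather than the team's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES, RF_DIM, N_KEYPOINTS = 500, 64, 14   # e.g., 14 joints in a 2D stick figure

# Synthetic stand-ins for radio reflections and their true mapping to poses.
true_W = rng.normal(size=(RF_DIM, N_KEYPOINTS * 2))
rf_signals = rng.normal(size=(N_SAMPLES, RF_DIM))
# The camera pipeline's output: 2D keypoints extracted from synchronized images.
camera_keypoints = rf_signals @ true_W + 0.01 * rng.normal(size=(N_SAMPLES, N_KEYPOINTS * 2))

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# Train a linear "student" to predict keypoints from radio signals alone.
W = np.zeros((RF_DIM, N_KEYPOINTS * 2))
lr = 0.01
initial_loss = mse(rf_signals @ W, camera_keypoints)
for _ in range(200):
    pred = rf_signals @ W
    grad = 2 * rf_signals.T @ (pred - camera_keypoints) / N_SAMPLES
    W -= lr * grad
final_loss = mse(rf_signals @ W, camera_keypoints)
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

Once trained this way, the "student" needs only the radio signal at inference time, which is exactly why the cameras can be removed afterward.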

Post-training, RF-Pose was able to estimate a person’s posture and movements without cameras, using only the wireless reflections that bounce off people’s bodies.

Since cameras can’t see through walls, the network was never explicitly trained on data from the other side of a wall — which is what made it particularly surprising to the MIT team that the network could generalize its knowledge to be able to handle through-wall movement.

“If you think of the computer vision system as the teacher, this is a truly fascinating example of the student outperforming the teacher,” says Torralba.

Besides sensing movement, the authors also showed that they could use wireless signals to accurately identify somebody 83 percent of the time out of a line-up of 100 individuals. This ability could be particularly useful for the application of search-and-rescue operations, when it may be helpful to know the identity of specific people.

For this paper, the model outputs a 2-D stick figure, but the team is also working to create 3-D representations that would be able to reflect even smaller micromovements. For example, it might be able to see if an older person’s hands are shaking regularly enough that they may want to get a check-up.

“By using this combination of visual data and AI to see through walls, we can enable better scene understanding and smarter environments to live safer, more productive lives,” says Zhao.

Google: JK, We’re Going To Keep Working With The Military After All

Google pulled a headfake.

Let’s catch you up real quick: Google partnered with the Department of Defense for Project Maven, in which artificial intelligence would analyze military drone footage. Google employees made it clear they weren’t happy to be working on the project. And last week, it looked like the company was going to meet their demands: Google announced that it would not renew its contract with the military when it expires next year.

Well, it turns out that that sweet, sweet military dough is too good to pass up. On Thursday, Google CEO Sundar Pichai revealed new internal guidelines for how Google plans to conduct itself in the future. And we can expect the company’s military deals to continue, as WIRED reported (is it a coincidence that, last month, the company apparently removed its longtime motto “don’t be evil” from its code of conduct? You decide).

The updated guidelines, which Google laid out in a blog post, do say that Google will have no part in building weapons or “other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” and also ruled out surveillance technologies like those sold by Amazon.

You may be thinking, “But that’s the same stance Google had at the beginning of this whole Project Maven mess!” And, dear reader, you would be right. At least as far as military involvement goes, Google’s stance seems to be something along the lines of: “Hey, we hear you. But we’re gonna keep doing what we want anyway. Trust us, we’re gonna be, like, really really ethical.”

As a response, many are calling for Google to establish an independent ethics committee to oversee its military involvement, according to WIRED. Because, strangely enough, this fake out may have shaken people’s trust in the unavoidable, omnipresent provider of critical online services such as emails, driving directions, and late-night paranoid internet searches.

In all fairness, other tenets of Google’s new guidelines could be crowd-pleasers. They call for technology developed with particular care toward racial minorities, members of the LGBTQIA community, and other marginalized groups. This is likely a response to the fact that most AI systems are biased since they have been inadvertently trained to treat people unfairly.

It’s not yet clear how Google and the Department of Defense will work together in the future. And, as many have pointed out, the Department of Defense certainly won’t stop developing artificial intelligence tools and weapons just because Google isn’t going to help. But Google employees, and the public, will likely make sure the company abides by its own guidelines and stays out of the weapons game.

This article has been updated to include Google’s blog post about the new guidelines.

AI Can Now Manipulate People’s Movements In Fake Videos

There are already fake videos on the internet, manipulated to make it look like people said things (or appeared in porn) that they never did. And now they’re about to get way better, thanks to some new tools powered by artificial intelligence.

Instead of just moving a source video’s lips and face, an artificial intelligence-powered system can create photorealistic videos in which people can sway, turn their heads, blink their eyes, and emote. Basically, everything that an actor does and says in an input video will be translated into the video being altered.

According to the research, which will be presented at the computer graphics conference SIGGRAPH in August, the team ran a number of tests comparing its new algorithm to existing means of manipulating lifelike videos and images, many of which have been at least partially developed by Facebook and Google. Their system outperformed all the others, and participants in an experiment struggled to determine whether or not the resulting videos were real.

The researchers, who received some funding from Google, hope that their work will be used to improve virtual reality technology. And because the AI system only needs to train on a few minutes of source video to work, the team feels that its new tools will help make high-end video editing software more accessible.

The researchers also know their work might, uh, worry some folks.

“I’m aware of the ethical implications of those reenactment projects,” researcher Justus Thies told The Register. “That is also a reason why we published our results. I think it is important that the people get to know the possibilities of manipulation techniques.”

But at what point do we get tired of people “raising awareness” by further developing the problem? In the paper itself, there is just one sentence dedicated to ethical concerns — the researchers suggest that someone ought to look into better watermarking technologies or other ways to spot fake videos.

Not them, though. They’re too busy making it easier than ever to create flawless manipulated videos.

There’s Now A Computer Designed Specifically for Programming Intelligent Robots

It just got way easier to build “smart” robots.

Each year, the who’s who of the tech world gathers at Computex 2018, an information and communications technology showcase in Taiwan. On Sunday, Jensen Huang, founder and CEO of American tech company NVIDIA, took the stage at the conference to announce two new products designed to make it easier (and cheaper) for developers to create and train intelligent robots: Jetson Xavier and NVIDIA Isaac.

According to an NVIDIA press release, Jetson Xavier is “the world’s first computer designed specifically for robotics.” It includes 9 billion transistors and a half-dozen processors, including a Volta Tensor Core GPU and an eight-core ARM64 CPU.

Image Credit: NVIDIA

Translation: this computer is powerful and efficient — in fact, it can process 30 trillion operations per second (TOPS) (for comparison, the most powerful iMac on the market can process up to 22 TOPS, and costs about $5,000). And to do so, it needs less than half the electricity you’d need to power a light bulb. And while that may not be necessary for a computer on which you basically only use Facebook and Microsoft Word, it could mean a lot for the advent of more advanced, and more accessible, robots.

“This level of performance is essential for a robot to take input from sensors, locate itself, perceive its environment, recognize and predict motion of nearby objects, reason about what action to perform, and articulate itself safely,” according to the press release.

Really incredible hardware like Jetson Xavier can only push technology so far. It needs advanced software to match. That’s where NVIDIA Isaac comes in.

NVIDIA Isaac is a developer platform broken into three components:

  • Isaac SDK (software development kit), a collection of tools developers can use to create their own AI software
  • Isaac IMX (Intelligent Machine Acceleration applications), a library of robotic algorithm software developed by NVIDIA, which the company claims on its website could save developers “months of development time and effort”
  • Isaac SIM, a virtual simulation environment where developers can train and test their AI systems

NVIDIA plans to sell its Isaac-equipped Jetson Xavier computers starting in August for $1,299.

During his Computex presentation, Huang claimed a workstation with comparable processing power costs upwards of $10,000. He didn’t specify exactly who his intended clientele would be, but it’s not hard to imagine that high school and college students interested in robotics, along with any company interested in AI but perhaps lacking the capital to make a big investment, would be most interested in purchasing it.

If this system lives up to its promise, it could create a moment like when GarageBand made it possible for anyone to record music without needing a recording studio. Now, anyone (with $1,299) can design their own AI.

By lowering the cost of the tools necessary for intelligent robot development, NVIDIA is opening up the field to people who couldn’t afford to work on it in the past. And who knows what kinds of remarkable creations they might come up with?

Use artificial intelligence to identify, count, describe wild animals

A new paper in the Proceedings of the National Academy of Sciences (PNAS) reports how a cutting-edge artificial intelligence technique called deep learning can automatically identify, count and describe animals in their natural habitats.

Photographs that are automatically collected by motion-sensor cameras can then be automatically described by deep neural networks. The result is a system that can automate animal identification for up to 99.3 percent of images while still performing at the same 96.6 percent accuracy rate of crowdsourced teams of human volunteers.
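One way to read those two numbers together is as a confidence-based triage: the network auto-labels images it is sure about and forwards the rest to human volunteers. The sketch below illustrates that idea with a toy stand-in for the model; the threshold, species, and example images are assumptions for illustration, not figures from the PNAS study.

```python
CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff for auto-accepting a label

def fake_model(image_id):
    """Stand-in for the trained deep network: returns (species, confidence)."""
    predictions = {
        "img_001": ("zebra", 0.99),
        "img_002": ("wildebeest", 0.97),
        "img_003": ("lion", 0.62),   # ambiguous shot: too uncertain to auto-label
        "img_004": ("elephant", 0.98),
    }
    return predictions[image_id]

def triage(image_ids, threshold=CONFIDENCE_THRESHOLD):
    """Auto-label confident predictions; queue the rest for volunteers."""
    auto_labeled, needs_human = {}, []
    for image_id in image_ids:
        species, confidence = fake_model(image_id)
        if confidence >= threshold:
            auto_labeled[image_id] = species
        else:
            needs_human.append(image_id)
    return auto_labeled, needs_human

auto, manual = triage(["img_001", "img_002", "img_003", "img_004"])
print(auto)    # high-confidence labels accepted automatically
print(manual)  # low-confidence images forwarded to volunteers
```

The practical payoff is exactly what the quotes below describe: volunteer effort gets concentrated on the small slice of genuinely hard images.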

“This technology lets us accurately, unobtrusively and inexpensively collect wildlife data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology and animal behavior into ‘big data’ sciences. This will dramatically improve our ability to both study and conserve wildlife and precious ecosystems,” says Jeff Clune, the senior author of the paper. He is the Harris Associate Professor at the University of Wyoming and a senior research manager at Uber’s Artificial Intelligence Labs.

The paper was written by Clune; his Ph.D. student Mohammad Sadegh Norouzzadeh; his former Ph.D. student Anh Nguyen (now at Auburn University); Margaret Kosmala (Harvard University); Ali Swanson (University of Oxford); and Meredith Palmer and Craig Packer (both from the University of Minnesota).

Deep neural networks are a form of computational intelligence loosely inspired by how animal brains see and understand the world. They require vast amounts of training data to work well, and the data must be accurately labeled (e.g., each image being correctly tagged with which species of animal is present, how many there are, etc.).

This study obtained the necessary data from Snapshot Serengeti, a citizen science project. Snapshot Serengeti has deployed a large number of “camera traps” (motion-sensor cameras) in Tanzania that collect millions of images of animals in their natural habitat, such as lions, leopards, cheetahs and elephants. The information in these photographs is only useful once it has been converted into text and numbers. For years, the best method for extracting such information was to ask crowdsourced teams of human volunteers to label each image manually. The study published today harnessed 3.2 million labeled images produced in this manner by more than 50,000 human volunteers over several years.

“When I told Jeff Clune we had 3.2 million labeled images, he stopped in his tracks,” says Packer, who heads the Snapshot Serengeti project. “We wanted to test whether we could use machine learning to automate the work of human volunteers. Our citizen scientists have done phenomenal work, but we needed to speed up the process to handle ever greater amounts of data. The deep learning algorithm is amazing and far surpassed my expectations. This is a game changer for wildlife ecology.”

Swanson, who founded Snapshot Serengeti, adds: “There are hundreds of camera-trap projects in the world, and very few of them are able to recruit large armies of human volunteers to extract their data. That means that much of the knowledge in these important data sets remains untapped. Although projects are increasingly turning to citizen science for image classification, we’re starting to see it take longer and longer to label each batch of images as the demand for volunteers grows. We believe deep learning will be key in alleviating the bottleneck for camera-trap projects: the effort of converting images into usable data.”

“Not only does the artificial intelligence system tell you which of 48 different species of animal is present, but it also tells you how many there are and what they are doing. It will tell you if they are eating, sleeping, if babies are present, etc.,” adds Kosmala, another Snapshot Serengeti leader. “We estimate that the deep learning technology pipeline we describe would save more than eight years of human labeling effort for each additional 3 million images. That is a lot of valuable volunteer time that can be redeployed to help other projects.”

First-author Sadegh Norouzzadeh points out that “Deep learning is still improving rapidly, and we expect that its performance will only get better in the coming years. Here, we wanted to demonstrate the value of the technology to the wildlife ecology community, but we expect that as more people research how to improve deep learning for this application and publish their datasets, the sky’s the limit. It is exciting to think of all the different ways this technology can help with our important scientific and conservation missions.”

Story Source:

Materials provided by University of Wyoming. Note: Content may be edited for style and length.

Language-Policing AI Will Suggest A Polite Alternative to Online Hate Speech

It’s an oft-repeated phrase among journalists: never read the comments. Comment sections, from Twitter to Reddit and everything in between, are some of the darkest places on the internet, places where baseless insults and pointed critiques fly like bullets in a chaotic melee.

To save us from that ugliness (in others, and also in ourselves), engineers at IBM have created an AI algorithm that tries to filter the profanity out of our messages and suggests more palatable alternatives.

The scientists behind the profanity-filtering AI are, in a refreshing twist, conscious of how their filter might be misused. For instance, authoritarian governments or overreaching technology companies could, hypothetically, use similar algorithms to flag political or otherwise critical language among people conversing online. And since governments are already hard at work shutting down dissident rumblings online, it’s not far-fetched to imagine that a tool like this would be destructive if in the wrong hands.

So, instead of simply changing offensive language, the researchers argue their algorithm should be used to provide gentle reminders and suggestions. For instance, a tool resembling good ol’ Microsoft Clippy might pop up and ask, “Do you really want to tell this stranger on Reddit to fuck off and die?” instead of automatically editing what you type.

And there’s a lot of merit in that — it’s the technological equivalent of venting your anger and then sleeping on it or stepping away from the keyboard before you hit send.

After being trained on millions of tweets and Reddit posts, the AI system became very effective at removing profane and hateful words. But it’s much, much less good at recreating the sentences in a polite way that conserves their meaning.

For instance, a tweet reading “bros before hoes” was translated into “bros before money.” There’s… something missing there. Granted, this is much better than existing language filter AI, which turned the same tweet into “club tomorrow.” Let’s give credit where credit is due.

Also, a lot of swear words were turned into “big,” regardless of context. A frustrated Reddit post reading “What a fucking circus this is” became a sincere, awe-filled “what a big circus this is.”
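That “big” failure is easy to reproduce: even a toy substitution table, paired with the Clippy-style prompt the researchers favor, behaves the same way. Everything below is a hypothetical illustration, not IBM’s system, which is a trained model rather than a hard-coded lookup:

```python
# Toy stand-in for the trained filter: a crude word-substitution table.
POLITE_ALTERNATIVES = {
    "fucking": "big",
    "shit": "mess",
}

def suggest_rewrite(message):
    """Return a softened version of the message, or None if it's already clean."""
    flagged = False
    rewritten = []
    for word in message.split():
        stripped = word.strip(".,!?").lower()
        if stripped in POLITE_ALTERNATIVES:
            flagged = True
            rewritten.append(POLITE_ALTERNATIVES[stripped])
        else:
            rewritten.append(word)
    return " ".join(rewritten) if flagged else None

def clippy_prompt(message):
    """Nudge the user with a suggestion instead of silently editing their text."""
    suggestion = suggest_rewrite(message)
    if suggestion is None:
        return message
    return f'Do you really want to send that? How about: "{suggestion}"'

print(clippy_prompt("What a fucking circus this is"))
```

Note that even this sketch preserves the key design choice: the user sees a prompt and keeps the final say, rather than having their words rewritten behind their back.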

So far, the researchers have simply created their algorithm, but haven’t incorporated it into a usable online tool, for either individual users or the sites themselves. Presumably, it would have to get a lot better at suggesting new language before that could happen.

Aside from the, er, obvious shortcomings, the team behind this algorithm is aware of its limitations. AI filters of this sort can only work to remove the most obvious, explicit forms of online abuse. For instance, it can’t tell if a particular sentence is hateful unless it includes specific angry or profane words. If the language itself is seemingly benign or requires context to understand, it would fly under the radar.

Implicit prejudices, then, would go unchecked, as long as no one says “shit.” And that says nothing of the arguably more dangerous forms of online harassment, like stalking, doxing, or threatening people. Of course, a language filter can’t end the internet’s toxic culture, but this new AI research can help us take a step back and think real hard before we decide to perpetuate hateful speech.

Future robots need no motors

To develop micro-robots, biomimetic robots, artificial muscles, and medical devices, researchers have spent the past thirty years searching for actuating materials that can reversibly change their volume under various stimuli, aiming to replace traditional bulky and heavy actuators such as motors and pneumatic actuators.

A mechanical engineering team led by Professor Alfonso Ngan Hing-wan, Chair Professor in Materials Science and Engineering, and Kingboard Professor in Materials Engineering, Faculty of Engineering, the University of Hong Kong (HKU) published an article in Science Robotics on 30 May 2018 (EST) that introduces a novel actuating material — nickel hydroxide-oxyhydroxide — that can be powered by visible (Vis) light, electricity, and other stimuli. The material’s actuation can be instantaneously triggered by Vis light, producing a fast deformation and exerting a force equivalent to 3,000 times its own weight. The material cost of a typical actuator is as low as HKD 4 per cm², and the actuator can be easily fabricated within three hours.

Among various stimuli, light-induced actuating materials are highly desirable because they enable wireless operation of robots. However, few light-driven materials have been available to date, and their material and production costs are high, which hinders their use in practical applications such as artificial muscles for robotics, human-assist devices, and minimally invasive surgical and diagnostic tools.

Developing actuating materials was identified as the top of the 10 challenges in “The grand challenges of Science Robotics.” Research into actuating materials could radically change the concept of robots, which are currently mainly motor-driven. Materials that can be actuated by wireless stimuli, including changes in temperature, humidity, magnetic fields, and light, have therefore been a main research focus in recent years. In particular, a material that can be actuated by Vis light to produce strong, quick, and stable actuation had never been achieved. The novel actuating material system developed in this HKU-initiated research, nickel hydroxide-oxyhydroxide, can be actuated by Vis light at relatively low intensity to produce stress and speed comparable to mammalian skeletal muscles.

In addition to its Vis light actuation properties, this novel material system can also be actuated by electricity, enabling it to be integrated into present, well-developed robotics technology. It is also responsive to heat and humidity changes, so it could potentially be applied in autonomous machines that harness tiny energy changes in the environment. Because its major component is nickel, the material cost is low.

Fabrication involves only electrodeposition, a simple process that takes around three hours, so the material can easily be scaled up and manufactured industrially.

The newly invented nickel hydroxide-oxyhydroxide responds to light almost instantaneously and produces a force corresponding to about 3,000 times its own weight.

When integrated into a well-designed structure, a “mini arm” made from two hinges of the actuating material can easily lift an object 50 times its weight. Similarly, by using a light blocker so that only the “front leg” bends and straightens alternately under illumination, the team built a mini walking-bot that walks toward the light source. These demonstrations show that future applications in micro-robotics, including rescue robots, are possible.

The evidence above reveals that this nickel hydroxide-oxyhydroxide actuating material could have many future applications, including rescue robots and other mini-robots. The intrinsic actuating properties obtained in this research show that, by scaling up the fabrication, artificial muscles comparable to mammalian skeletal muscles can be achieved, making applications in robotics, human-assist devices, and medical devices possible.

From a scientific point of view, this nickel hydroxide-oxyhydroxide actuating material is the world’s first material system that can be actuated directly by Vis light and electricity without any additional fabrication procedures. This also opens up a new research field on light-induced actuating behaviour for this material type (hydroxide-oxyhydroxides) because it has never been reported before.

The research team members are all from the Department of Mechanical Engineering at HKU Faculty of Engineering, led by Professor Alfonso Ngan’s group in collaboration with Dr Li Wen-di’s group on the light actuation experiments and Dr Feng Shien-ping’s group on the electrodeposition experiments. The research was published in the journal Science Robotics on 30 May 2018 under the title “Light-stimulated actuators based on nickel hydroxide-oxyhydroxide.” The first author of the paper, Dr Kwan Kin-wa, is currently a post-doctoral fellow in Prof. Ngan’s group.

The corresponding author is Prof. Ngan. The complete author list is as below: K-W. Kwan, S-J. Li, N-Y. Hau, W-D. Li, S-P. Feng, A.H.W. Ngan. This research is funded by the Research Grants Council, Hong Kong.

An artificial nerve system gives prosthetic devices and robots a sense of touch

Stanford and Seoul National University researchers have developed an artificial sensory nerve system that can activate the twitch reflex in a cockroach and identify letters in the Braille alphabet.

The work, reported May 31 in Science, is a step toward creating artificial skin for prosthetic limbs, to restore sensation to amputees and, perhaps, one day give robots some type of reflex capability.

“We take skin for granted but it’s a complex sensing, signaling and decision-making system,” said Zhenan Bao, a professor of chemical engineering and one of the senior authors. “This artificial sensory nerve system is a step toward making skin-like sensory neural networks for all sorts of applications.”

Building blocks

This milestone is part of Bao’s quest to mimic how skin can stretch, repair itself and, most remarkably, act like a smart sensory network that knows not only how to transmit pleasant sensations to the brain, but also when to order the muscles to react reflexively to make prompt decisions.

The new Science paper describes how the researchers constructed an artificial sensory nerve circuit that could be embedded in a future skin-like covering for neuro-prosthetic devices and soft robotics. This rudimentary artificial nerve circuit integrates three previously described components.

The first is a touch sensor that can detect even minuscule forces. This sensor sends signals through the second component — a flexible electronic neuron. The touch sensor and electronic neuron are improved versions of inventions previously reported by the Bao lab.

Sensory signals from these components stimulate the third component, an artificial synaptic transistor modeled after human synapses. The synaptic transistor is the brainchild of Tae-Woo Lee of Seoul National University, who spent his sabbatical year in Bao’s Stanford lab to initiate the collaborative work.

“Biological synapses can relay signals, and also store information to make simple decisions,” said Lee, who was a second senior author on the paper. “The synaptic transistor performs these functions in the artificial nerve circuit.”

Lee used a knee reflex as an example of how more-advanced artificial nerve circuits might one day be part of an artificial skin that would give prosthetic devices or robots both senses and reflexes.

In humans, when a sudden tap causes the knee muscles to stretch, certain sensors in those muscles send an impulse through a neuron. The neuron in turn sends a series of signals to the relevant synapses. The synaptic network recognizes the pattern of the sudden stretch and emits two signals simultaneously, one causing the knee muscles to contract reflexively and a second, less urgent signal to register the sensation in the brain.

Making it work

The new work has a long way to go before it reaches that level of complexity. But in the Science paper, the group describes how the electronic neuron delivered signals to the synaptic transistor, which was engineered in such a way that it learned to recognize and react to sensory inputs based on the intensity and frequency of low-power signals, just like a biological synapse.

The group members tested the ability of the system to both generate reflexes and sense touch.

In one test they hooked up their artificial nerve to a cockroach leg and applied tiny increments of pressure to their touch sensor. The electronic neuron converted the sensor signal into digital signals and relayed them through the synaptic transistor, causing the leg to twitch more or less vigorously as the pressure on the touch sensor increased or decreased.
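As a toy caricature of that signal chain (all constants below are invented, not from the paper), pressure can be mapped to a pulse rate, and pulse rate to twitch vigor:

```python
# Toy caricature of the described pipeline: the touch sensor's pressure
# reading is converted by the "electronic neuron" into a pulse train whose
# frequency rises with pressure; twitch vigor then tracks the pulse rate.
# Gain and saturation values are illustrative only.

def neuron_pulse_rate(pressure_kpa: float, gain_hz_per_kpa: float = 2.0) -> float:
    """Map sensor pressure to an output pulse frequency (Hz)."""
    return max(0.0, pressure_kpa) * gain_hz_per_kpa

def twitch_vigor(pulse_rate_hz: float, saturation_hz: float = 50.0) -> float:
    """Normalize pulse rate to a 0-1 'twitch vigor' that saturates, loosely
    like a synapse."""
    return min(pulse_rate_hz / saturation_hz, 1.0)

for pressure in (1.0, 10.0, 40.0):
    print(pressure, twitch_vigor(neuron_pulse_rate(pressure)))
```

The real circuit is analog hardware, but the monotonic pressure-to-vigor relationship is the behavior the experiment demonstrated.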

They also showed that the artificial nerve could detect various touch sensations. In one experiment the artificial nerve was able to differentiate Braille letters. In another, they rolled a cylinder over the sensor in different directions and accurately detected the direction of the motion.

Bao’s graduate students Yeongin Kim and Alex Chortos, plus Wentao Xu, a researcher from Lee’s own lab, were also central to integrating the components into the functional artificial sensory nervous system.

The researchers say artificial nerve technology remains in its infancy. For instance, creating artificial skin coverings for prosthetic devices will require new devices to detect heat and other sensations, the ability to embed them into flexible circuits and then a way to interface all of this to the brain.

The group also hopes to create low-power, artificial sensor nets to cover robots, the idea being to make them more agile by providing some of the same feedback that humans derive from their skin.

Story Source:

Materials provided by Stanford University. Original written by Tom Abate. Note: Content may be edited for style and length.

AI researchers design ‘privacy filter’ for your photos

Each time you upload a photo or video to a social media platform, its facial recognition systems learn a little more about you. These algorithms ingest data about who you are, your location and people you know — and they’re constantly improving.

As concerns over privacy and data security on social networks grow, U of T Engineering researchers led by Professor Parham Aarabi and graduate student Avishek Bose have created an algorithm to dynamically disrupt facial recognition systems.

“Personal privacy is a real issue as facial recognition becomes better and better,” says Aarabi. “This is one way in which beneficial anti-facial-recognition systems can combat that ability.”

Their solution leverages a deep learning technique called adversarial training, which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a set of two neural networks: the first working to identify faces, and the second working to disrupt the facial recognition task of the first. The two are constantly battling and learning from each other, setting up an ongoing AI arms race.

The result is an Instagram-like filter that can be applied to photos to protect privacy. Their algorithm alters very specific pixels in the image, making changes that are almost imperceptible to the human eye.

“The disruptive AI can ‘attack’ what the neural net for the face detection is looking for,” says Bose. “If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they’re less noticeable. It creates very subtle disturbances in the photo, but to the detector they’re significant enough to fool the system.”
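A minimal sketch of that idea, with a toy linear "detector" standing in for a face-detection network and a finite-difference gradient standing in for the trained disruptor network:

```python
import numpy as np

# Sketch of the adversarial idea: nudge pixels in the direction that lowers
# a detector's confidence score. The "detector" here is a toy stand-in, and
# the gradient is estimated numerically; the real system trains a neural
# network attacker against a neural detector.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))            # toy detector weights
image = rng.uniform(0, 1, size=(8, 8))

def detector_score(img):
    return float((w * img).sum())      # stand-in for detection confidence

eps = 0.01                             # tiny per-pixel perturbation budget
grad = np.zeros_like(image)
for i in range(8):
    for j in range(8):
        bumped = image.copy()
        bumped[i, j] += 1e-5
        grad[i, j] = (detector_score(bumped) - detector_score(image)) / 1e-5

# Step each pixel slightly against the gradient, keeping values valid.
adversarial = np.clip(image - eps * np.sign(grad), 0, 1)
print(detector_score(adversarial) < detector_score(image))  # True: score drops
```

The changes are tiny per pixel, which is why such perturbations can be nearly invisible to humans while still confusing the detector.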

Aarabi and Bose tested their system on the 300-W face dataset, an industry standard pool of more than 600 faces that includes a wide range of ethnicities, lighting conditions and environments. They showed that their system could reduce the proportion of faces that were originally detectable from nearly 100 per cent down to 0.5 per cent.

“The key here was to train the two neural networks against each other — with one creating an increasingly robust facial detection system, and the other creating an ever stronger tool to disable facial detection,” says Bose, the lead author on the project. The team’s study will be published and presented at the 2018 IEEE International Workshop on Multimedia Signal Processing later this summer.

In addition to disabling facial recognition, the new technology also disrupts image-based search, feature identification, emotion and ethnicity estimation, and all other face-based attributes that could be extracted automatically.

Next, the team hopes to make the privacy filter publicly available, either via an app or a website.

“Ten years ago these algorithms would have to be human defined, but now neural nets learn by themselves — you don’t need to supply them anything except training data,” says Aarabi. “In the end they can do some really amazing things. It’s a fascinating time in the field, there’s enormous potential.”

Aerial robot that can morph in flight

Marking a world first, researchers from the Étienne Jules Marey Institute of Movement Sciences (CNRS / Aix-Marseille Université) have drawn inspiration from birds to design an aerial robot capable of altering its profile during flight. To reduce its wingspan and navigate through tight spaces, it can reorient its arms, which are equipped with propellers that let it fly like a helicopter. The scientists’ work is the subject of an article published in Soft Robotics (May 30, 2018). It paves the way for a new generation of large robots that can move through narrow passages, making them ideal for exploration as well as search and rescue missions.

Birds and winged insects have the remarkable ability to maneuver quickly during flight to clear obstacles. Such extreme agility is necessary to navigate through cramped spaces and crowded environments, like forests. There are already miniature flying machines that can roll, pitch, or otherwise alter their flight attitude to pass through small apertures. But birds illustrate another strategy that is just as effective for flying through bottlenecks. They can quickly fold their wings during high-speed flight, reducing their imposing span, to easily negotiate the challenging paths before them.[1]

Deployment of aerial robots in constricted and cluttered areas for search and rescue, exploratory, or mapping operations will become more and more commonplace. They will need to be able to circumnavigate many obstacles and travel through fairly tight passages to complete their missions. Accordingly, researchers from the Étienne Jules Marey Institute of Movement Sciences (CNRS / Aix-Marseille Université) have designed a flying robot that can reduce its wingspan in flight to move through a small opening, without intensive steering that would consume too much energy and require a low-inertia robotic platform (that is, a light and small robot).[2]

Dubbed Quad-Morphing, the new robot has two rotating arms each equipped with two propellers for helicopter-like flight. A system of elastic and rigid wires allows the robot to change the orientation of its arms in flight so that they are either perpendicular or parallel to its central axis. It adopts the parallel position, halving its wingspan, to traverse a narrow stretch and then switches back to perpendicular position to stabilize its flight, all while flying at a speed of 9 km/h, which is pretty fast for an aerial robot.

At present, it is the precision of the Quad-Morphing autopilot mechanism that determines the robot’s agility. The autopilot activates arm reorientation when the robot nears a tight passage, as determined by a 3D localization system used at the institute.[3] The researchers have also equipped the robot with a miniature camera that can take 120 pictures per second. In the future, this will allow Quad-Morphing to independently assess the size of the gap before it and fold its wings accordingly if necessary. Flight testing with the new camera will begin this month.


[1] Such impressive behavior has been observed among budgerigars and goshawks flying at speeds above 14 km/h.

[2] Flying robots typically have a transversal speed of 4-5 km/h in indoor conditions.

[3] The studies were conducted at the AVM flying machine arena, built with the financial support of the French Equipex Robotex program. The arena has 17 cameras for recording movement.

Story Source:

Materials provided by CNRS. Note: Content may be edited for style and length.

Cometh the cyborg: Improved integration of living muscles into robots

The new field of biohybrid robotics involves the use of living tissue within robots, rather than just metal and plastic. Muscle is one potential key component of such robots, providing the driving force for movement and function. However, in efforts to integrate living muscle into these machines, there have been problems with the force these muscles can exert and the amount of time before they start to shrink and lose their function.

Now, in a study reported in the journal Science Robotics, researchers at The University of Tokyo Institute of Industrial Science have overcome these problems by developing a new method that progresses from individual muscle precursor cells, to muscle-cell-filled sheets, and then to fully functioning skeletal muscle tissues. They incorporated these muscles into a biohybrid robot as antagonistic pairs mimicking those in the body to achieve remarkable robot movement and continued muscle function for over a week.

The team first constructed a robot skeleton on which to install the pair of functioning muscles. This included a rotatable joint, anchors where the muscles could attach, and electrodes to provide the stimulus to induce muscle contraction. For the living muscle part of the robot, rather than extract and use a muscle that had fully formed in the body, the team built one from scratch. For this, they used hydrogel sheets containing muscle precursor cells called myoblasts, holes to attach these sheets to the robot skeleton anchors, and stripes to encourage the muscle fibers to form in an aligned manner.

“Once we had built the muscles, we successfully used them as antagonistic pairs in the robot, with one contracting and the other expanding, just like in the body,” study corresponding author Shoji Takeuchi says. “The fact that they were exerting opposing forces on each other stopped them from shrinking and deteriorating, as happened in previous studies.”

The team also tested the robots in different applications, including having one pick up and place a ring, and having two robots work in unison to pick up a square frame. The results showed that the robots could perform these tasks well, with activation of the muscles leading to flexing of a finger-like protuberance at the end of the robot by around 90°.

“Our findings show that, using this antagonistic arrangement of muscles, these robots can mimic the actions of a human finger,” lead author Yuya Morimoto says. “If we can combine more of these muscles into a single device, we should be able to reproduce the complex muscular interplay that allows hands, arms, and other parts of the body to function.”

Story Source:

Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

Activity simulator could eventually teach robots tasks like making coffee or setting the table

For many people, household chores are a dreaded, inescapable part of life that we often put off or do with little care — but what if a robot maid could help lighten the load?

Recently, computer scientists have been working on teaching machines to do a wider range of tasks around the house. In a new paper spearheaded by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Toronto, researchers demonstrate “VirtualHome,” a system that can simulate detailed household tasks and then have artificial “agents” execute them, opening up the possibility of one day teaching robots to do such tasks.

The team trained the system using nearly 3,000 programs of various activities, which are further broken down into subtasks for the computer to understand. A simple task like “making coffee,” for example, would also include the step “grabbing a cup.” The researchers demonstrated VirtualHome in a 3-D world inspired by the Sims video game.

The team’s AI agent can execute 1,000 of these interactions in the Sims-style world, with eight different scenes including a living room, kitchen, dining room, bedroom, and home office.

“Describing actions as computer programs has the advantage of providing clear and unambiguous descriptions of all the steps needed to complete a task,” says PhD student Xavier Puig, who was lead author on the paper. “These programs can instruct a robot or a virtual character, and can also be used as a representation for complex tasks with simpler actions.”

The project was co-developed by CSAIL and the University of Toronto alongside researchers from McGill University and the University of Ljubljana. It will be presented at the Computer Vision and Pattern Recognition (CVPR) conference, which takes place this month in Salt Lake City.

How it works

Unlike humans, robots need more explicit instructions to complete easy tasks — they can’t just infer and reason with ease.

For example, one might tell a human to “switch on the TV and watch it from the sofa.” Here, actions like “grab the remote control” and “sit/lie on sofa” have been omitted, since they’re part of the commonsense knowledge that humans have.

To better demonstrate these kinds of tasks to robots, the descriptions for actions needed to be much more detailed. To do so, the team first collected verbal descriptions of household activities, and then translated them into simple code. A program like this might include steps like: walk to the television, switch on the television, walk to the sofa, sit on the sofa, and watch television.
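Such a program can be represented as a simple list of atomic (action, target) steps. The step vocabulary below is illustrative; VirtualHome's actual action set may differ:

```python
# A household activity expressed as an explicit program of atomic steps,
# in the spirit of VirtualHome's programs. Action names are illustrative.
watch_tv = [
    ("walk", "television"),
    ("switch_on", "television"),
    ("walk", "sofa"),
    ("sit", "sofa"),
    ("watch", "television"),
]

def execute(program):
    """Stand-in executor: a real agent would perform these in the 3-D
    simulator; here we just render each step as a readable call."""
    return [f"{action}({target})" for action, target in program]

print(execute(watch_tv))
```

Writing activities this way makes every commonsense substep explicit, which is exactly what a robot (unlike a human) needs.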

Once the programs were created, the team fed them to the VirtualHome 3-D simulator to be turned into videos. Then, a virtual agent would execute the tasks defined by the programs, whether it was watching television, placing a pot on the stove, or turning a toaster on and off.

The end result is not just a system for training robots to do chores, but also a large database of household tasks described using natural language. Companies like Amazon that are working to develop Alexa-like robotic systems at home could eventually use data like this to train their models to do more complex tasks.

The team’s model successfully demonstrated that their agents could learn to reconstruct a program, and therefore perform a task, given either a description (“pour milk into glass”) or a video demonstration of the activity.

“This line of work could facilitate true robotic personal assistants in the future,” says Qiao Wang, a research assistant in arts, media, and engineering at Arizona State University. “Instead of each task programmed by the manufacturer, the robot can learn tasks just by listening to or watching the specific person it accompanies. This allows the robot to do tasks in a personalized way, or even some day invoke an emotional connection as a result of this personalized learning process.”

In the future, the team hopes to train the robots using actual videos instead of Sims-style simulation videos, which would enable a robot to learn simply by watching a YouTube video. The team is also working on implementing a reward-learning system in which the agent gets positive feedback when it does tasks correctly.

“You can imagine a setting where robots are assisting with chores at home and can eventually anticipate personalized wants and needs, or impending action,” says Puig. “This could be especially helpful as an assistive technology for the elderly, or those who may have limited mobility.”

Face recognition experts perform better with AI as partner

Experts at recognizing faces often play a crucial role in criminal cases. A photo from a security camera can mean prison or freedom for a defendant — and testimony from highly trained forensic face examiners informs the jury whether that image actually depicts the accused. Just how good are facial recognition experts? Would artificial intelligence help?

A study appearing today in the Proceedings of the National Academy of Sciences has brought answers. In work that combines forensic science with psychology and computer vision research, a team of scientists from the National Institute of Standards and Technology (NIST) and three universities has tested the accuracy of professional face identifiers, providing at least one revelation that surprised even the researchers: Trained human beings perform best with a computer as a partner, not another person.

“This is the first study to measure face identification accuracy for professional forensic facial examiners, working under circumstances that apply in real-world casework,” said NIST electronic engineer P. Jonathon Phillips. “Our deeper goal was to find better ways to increase the accuracy of forensic facial comparisons.”

The team’s effort began in response to a 2009 report by the National Research Council, “Strengthening Forensic Science in the United States: A Path Forward,” which underscored the need to measure the accuracy of forensic examiner decisions.

The NIST study is the most comprehensive examination to date of face identification performance across a large, varied group of people. It also examines the best available technology, comparing the accuracy of state-of-the-art face recognition algorithms to human experts.

Their result from this classic confrontation of human versus machine? Neither gets the best results alone. Maximum accuracy was achieved with a collaboration between the two.

“Societies rely on the expertise and training of professional forensic facial examiners, because their judgments are thought to be best,” said co-author Alice O’Toole, a professor of cognitive science at the University of Texas at Dallas. “However, we learned that to get the most highly accurate face identification, we should combine the strengths of humans and machines.”

The results arrive at a timely moment in the development of facial recognition technology, which has been advancing for decades, but has only very recently attained competence approaching that of top-performing humans.

“If we had done this study three years ago, the best computer algorithm’s performance would have been comparable to an average untrained student,” Phillips said. “Nowadays, state-of-the-art algorithms perform as well as a highly trained professional.”

The study itself involved a total of 184 participants, a large number for an experiment of this type. Eighty-seven were trained professional facial examiners, while 13 were “super recognizers,” a term implying exceptional natural ability. The remaining 84 — the control groups — included 53 fingerprint examiners and 31 undergraduate students, none of whom had training in facial comparisons.

For the test, the participants received 20 pairs of face images and rated the likelihood of each pair being the same person on a seven-point scale. The research team intentionally selected extremely challenging pairs, using images taken with limited control of illumination, expression and appearance. They then tested four of the latest computerized facial recognition algorithms, all developed between 2015 and 2017, using the same image pairs.

Three of the algorithms were developed by Rama Chellappa, a professor of electrical and computer engineering at the University of Maryland, and his team, who contributed to the study. The algorithms were trained to work in general face recognition situations and were applied without modification to the image sets.

One of the findings was unsurprising but significant to the justice system: The trained professionals did significantly better than the untrained control groups. This result established the superior ability of the trained examiners, thus providing for the first time a scientific basis for their testimony in court.

The algorithms also acquitted themselves well, as might be expected from the steady improvement in algorithm performance over the past few years.

What raised the team’s collective eyebrows was the performance of multiple examiners working together. The team discovered that combining the opinions of multiple forensic face examiners did not bring the most accurate results.

“Our data show that the best results come from a single facial examiner working with a single top-performing algorithm,” Phillips said. “While combining two human examiners does improve accuracy, it’s not as good as combining one examiner and the best algorithm.”
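One simple way such a fusion could work (the study's exact fusion rule may differ) is to put both sources on a common scale and average them; the ratings and scores below are invented for illustration:

```python
import statistics

# Sketch of fusing a human examiner's ratings with an algorithm's similarity
# scores: z-score each source, then average. Data are hypothetical.
examiner = [3, 6, 1, 7, 2]                    # 7-point ratings for 5 image pairs
algorithm = [0.41, 0.92, 0.13, 0.88, 0.30]    # algorithm similarity scores

def zscores(xs):
    """Standardize a list so both sources share a common scale."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

fused = [(h + a) / 2 for h, a in zip(zscores(examiner), zscores(algorithm))]
# Higher fused score = stronger combined evidence that the pair matches.
print([round(f, 2) for f in fused])
```

Standardizing first matters because a 7-point rating and a 0-1 similarity score are not directly comparable.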

Combining examiners and AI is not currently used in real-world forensic casework. While this study did not explicitly test this fusion of examiners and AI in such an operational forensic environment, its results provide a roadmap for improving the accuracy of face identification in future systems.

While the three-year project has revealed that humans and algorithms use different approaches to compare faces, it poses a tantalizing question to other scientists: Just what is the underlying distinction between the human and the algorithmic approach?

“If combining decisions from two sources increases accuracy, then this method demonstrates the existence of different strategies,” Phillips said. “But it does not explain how the strategies are different.”

The research team also included psychologist David White from Australia’s University of New South Wales.

An elastic fiber filled with electrodes set to revolutionize smart clothes

It’s a whole new way of thinking about sensors. The tiny fibers developed at EPFL are made of elastomer and can incorporate materials like electrodes and nanocomposite polymers. The fibers can detect even the slightest pressure and strain and can withstand deformation of close to 500% before recovering their initial shape. All that makes them perfect for applications in smart clothing and prostheses, and for creating artificial nerves for robots.

The fibers were developed at EPFL’s Laboratory of Photonic Materials and Fiber Devices (FIMAP), headed by Fabien Sorin at the School of Engineering. The scientists came up with a fast and easy method for embedding different kinds of microstructures in super-elastic fibers. For instance, by adding electrodes at strategic locations, they turned the fibers into ultra-sensitive sensors. What’s more, their method can be used to produce hundreds of meters of fiber in a short amount of time. Their research has just been published in Advanced Materials.

Heat, then stretch

To make their fibers, the scientists used a thermal drawing process, which is the standard process for optical-fiber manufacturing. They started by creating a macroscopic preform with the various fiber components arranged in a carefully designed 3D pattern. They then heated the preform and stretched it out, like melted plastic, to make fibers a few hundred microns in diameter. And while this process stretched out the pattern of components lengthwise, it also contracted it crosswise, meaning the components’ relative positions stayed the same. The end result was a set of fibers with an extremely complicated microarchitecture and advanced properties.
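The drawing geometry follows from volume conservation: stretching the preform by a draw ratio k shrinks its cross-sectional area by k, so every transverse dimension scales by 1/sqrt(k), which is why the components' relative positions are preserved. A quick sketch with illustrative numbers:

```python
import math

# Volume conservation during thermal drawing: length grows by the draw
# ratio, so cross-sectional area shrinks by the same factor and each
# transverse dimension shrinks by its square root. Numbers are illustrative.
def drawn_diameter(preform_diameter_mm: float, draw_ratio: float) -> float:
    return preform_diameter_mm / math.sqrt(draw_ratio)

# e.g. a 25 mm preform drawn 10,000x in length comes out at 0.25 mm (250 um),
# in the "few hundred microns" range described above.
print(drawn_diameter(25.0, 10_000))
```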

Until now, thermal drawing could be used to make only rigid fibers. But Sorin and his team used it to make elastic fibers. With the help of a new criterion for selecting materials, they were able to identify some thermoplastic elastomers that have a high viscosity when heated. After the fibers are drawn, they can be stretched and deformed but they always return to their original shape.

Rigid materials like nanocomposite polymers, metals and thermoplastics can be introduced into the fibers, as well as liquid metals that can be easily deformed. “For instance, we can add three strings of electrodes at the top of the fibers and one at the bottom. Different electrodes will come into contact depending on how the pressure is applied to the fibers. This will cause the electrodes to transmit a signal, which can then be read to determine exactly what type of stress the fiber is exposed to — such as compression or shear stress, for example,” says Sorin.
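As a toy illustration of the decoding Sorin describes, one could map which top-bottom electrode pairs close under load to a coarse stress label. Everything below — the electrode names and the mapping itself — is our hypothetical sketch, not the actual device logic:

```python
# Hypothetical decoder: with several electrodes along the top of the fiber
# and one along the bottom, the set of top electrodes that make contact
# hints at how the pressure was applied. Names and mapping are invented.

CONTACT_PATTERNS = {
    frozenset({"top_left", "top_mid", "top_right"}): "uniform compression",
    frozenset({"top_left"}): "shear toward the left",
    frozenset({"top_right"}): "shear toward the right",
    frozenset(): "no load",
}

def classify_stress(contacts):
    """Map a set of closed top-bottom contacts to a coarse stress label."""
    return CONTACT_PATTERNS.get(frozenset(contacts), "unrecognized pattern")

print(classify_stress({"top_left", "top_mid", "top_right"}))  # uniform compression
print(classify_stress({"top_right"}))                         # shear toward the right
```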

Artificial nerves for robots

Working in association with Professor Dr. Oliver Brock (Robotics and Biology Laboratory, Technical University of Berlin), the scientists integrated their fibers into robotic fingers as artificial nerves. Whenever the fingers touch something, electrodes in the fibers transmit information about the robot’s tactile interaction with its environment. The research team also tested adding their fibers to large-mesh clothing to detect compression and stretching. “Our technology could be used to develop a touch keyboard that’s integrated directly into clothing, for instance,” says Sorin.

The researchers see many other potential applications, especially since the thermal drawing process can be easily tweaked for large-scale production, a real plus for the manufacturing sector. The textile sector has already expressed interest in the new technology, and patents have been filed.

Story Source:

Materials provided by Ecole Polytechnique Fédérale de Lausanne. Note: Content may be edited for style and length.

The Military Just Created An AI That Learned How To Program Software

Tired of writing your own boring code for new software? Finally, there’s an AI that can do it for you.

BAYOU is a deep learning tool that basically works like a search engine for coding: tell it what sort of program you want to create with a couple of keywords, and it will spit out Java code that will do what you’re looking for, based on its best guess.

The tool was developed by a team of computer scientists from Rice University who received funding both from the military and Google. In a study published earlier this month on the preprint server arXiv, they describe how they built BAYOU and what sorts of problems it can help programmers solve.

Basically, BAYOU read the source code for about 1500 Android apps, which comes out to 100 million lines of Java. All that code was fed through BAYOU’s neural net, resulting in AI that can, yes, program other software.

If the code that BAYOU read included any sort of information about what the code does, then BAYOU also learned what those programs were intended to do along with how they work. This contextual information is what lets the AI write functional software based on just a couple of key words and basic information about what the programmer wants.

Computer science majors, rejoice: your homework might be about to get much easier. And teaching people how to code may become simpler and more intuitive, as they may someday use this new AI to generate examples of code or even to check their own work. Right now, BAYOU is still in the early stages, and the team behind it is still proving their technology works.

No, this is not that moment in which AI becomes self-replicating; BAYOU merely generates what the researchers call “sketches” of a program that are relevant to what a programmer is trying to write. These sketches still need to be pieced together into the larger work, and they may have to be tailored to the project at hand.
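To make the "sketch" idea concrete, here is a deliberately tiny stand-in of our own, not a depiction of BAYOU's internals: a sketch is a code skeleton with named holes that the programmer still has to fill in and stitch into the larger project.

```python
# Toy illustration of a program "sketch": a skeleton with named holes.
# The sketch lines and hole names here are invented for illustration.

SKETCH = [
    "reader = open({path})",   # hole: which file does the project read?
    "data = reader.read()",
    "reader.close()",
]

def concretize(sketch, **holes):
    """Fill the named holes in a sketch with project-specific values."""
    return "\n".join(line.format(**holes) for line in sketch)

# The programmer supplies the project-specific detail:
print(concretize(SKETCH, path='"config.json"'))
```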

But even if the technology is in its infancy, this is a major step in the search for an AI programmer, a longstanding goal for computer science researchers. Other attempts to create something like BAYOU required extensive, narrow constraints to guide programmers towards the correct type of code. Because BAYOU can get to work with just a couple of keywords, it’s much less time-intensive, and much easier to use overall, for the human operators.

It’ll Take More Than $1.4 Billion to Make the UK the World Leader in AI

People may fear that artificial intelligence will take over the world. But before that can happen, countries are vying to be the one to shepherd in this new era. That is: they’re pouring money into AI research, in a rush to make it smarter as we find increasingly sophisticated ways to use it.

Nations are doing this for more than just bragging rights. Experts predict that AI will contribute $15.7 trillion to the world’s economy by 2030. Though the United Kingdom hasn’t gotten as much attention for its AI research as, say, the United States or China, that’s about to change: today, the U.K. announced a $1.4 billion investment in AI.

“We have a position of strength that we want to capitalize on because if we don’t build on it the other countries around the world would steal a march,” U.K. business minister Greg Clark told Reuters.

The U.K. plans to spend the money on a number of projects, including teacher training, regional AI hubs, and the creation of an AI supercomputer at the University of Cambridge.

Those will no doubt help grow the AI industry in the U.K., but will they be enough to put the U.K. in the top spot as the world leader in AI?

Probably not.

First, $1.4 billion might seem like a lot to invest in just one industry, but it’s really nowhere near what other nations are spending on AI.

U.S. venture capitalists are investing more money into AI startups than VCs in any other nation, at least as of July 2017. In 2016, private U.S. investors pumped about $21 billion into the AI industry. Private investors in the U.K. and the rest of Europe combined invested a paltry $2.9 to $3.8 billion, at most 18 percent of what the Americans invested.

The U.S. also has more troops in the AI battle, with an estimated 850,000 AI professionals scattered across the country. For comparison, the U.K. has just 140,000 (China, arguably the current AI frontrunner, has even fewer: 50,000).

Despite lagging behind in numbers, the U.K. does have an advantage over geographically larger nations: almost all of its AI research takes place in London.

“The cross-fertilisation that is at the core of the impact of artificial intelligence can happen here more easily than elsewhere,” Matt Hancock, U.K. Secretary of State for Digital, Culture, Media and Sport, told Reuters.

However, the U.K. might not have that advantage for much longer. China recently announced it is spending $2.1 billion on a single AI venture: a new technology park just outside of Beijing.

According to Xinhua, the nation’s official news agency, the park will support up to 400 AI enterprises focused on everything from biometric identification to deep learning. This will make it easy for experts to collaborate across disciplines.

The U.S. and China are just two of the U.K.’s major AI competitors. Japan has its own strategy for AI domination. So does Canada. Germany, too.

So while the U.K.’s $1.4 billion AI investment is nothing to scoff at, it’s unlikely to be the deciding factor in the fight to win the top spot as the world leader of AI.

Artificial Intelligence Writes Bad Poems Just Like An Angsty Teen

the sun is a beautiful thing

in silence is drawn

between the trees

only the beginning of light

Was that poem written by an angsty middle schooler or an artificially intelligent algorithm? Is it easy to tell?

Yeah, it’s not easy for us, either. Or for poetry experts, for that matter.

A team of researchers from Microsoft and Kyoto University developed a poet AI good enough to fool online judges, according to a paper published Thursday on the preprint site arXiv. It’s the latest step towards artificial intelligence that can create believable, human-passing language, and, man, it seems like a big one.

In order to generate something as esoteric as a poem, the AI was fed thousands of images paired with human-written descriptions and poems. This taught the algorithm associations between images and text. It also learned the patterns of imagery, rhymes, and other language that might make up a believable poem, as well as how certain colors or images relate to emotions and metaphors.
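A drastically simplified stand-in for that pipeline — our own toy, not the Microsoft and Kyoto University model, which uses neural networks — is to remember (image tags, poem line) pairs from training and emit the remembered lines whose tags best overlap a new image's keywords:

```python
# Toy image-to-poem association. The training pairs are invented
# (the lines are borrowed from the poems quoted in this article).

TRAINING_PAIRS = [
    ({"sun", "trees"}, "only the beginning of light"),
    ({"rain", "sky"}, "grey sky and cloud"),
    ({"sea", "night"}, "in silence is drawn"),
]

def compose(image_tags, n_lines=2):
    """Rank remembered lines by tag overlap with the image; take the top n."""
    scored = sorted(TRAINING_PAIRS,
                    key=lambda pair: len(pair[0] & image_tags),
                    reverse=True)
    return [line for _, line in scored[:n_lines]]

print(compose({"rain", "trees"}))
# -> ['only the beginning of light', 'grey sky and cloud']
```

The real model generalizes far beyond remembered lines, of course; this sketch only shows the shape of the image-text association being learned.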

Once the AI was trained, it was then given an image and tasked with writing a poem that was not only relevant to the picture but also, you know, read like a poem instead of algorithmic nonsense.

And to be fair, some of the results were pretty nonsensical, even beyond the sorts of nonsense you’d find in a college literary magazine.

this realm of rain

grey sky and cloud

it’s quite and peaceful

safe allowed

And, arguably, worse:

I am a coal-truck

by a broken heart

I have no sound

the sound of my heart

I am not

You could probably (we hope) pick those out of the crowd as machine-written. But while the AI is no Kendrick Lamar, many of the resulting poems actually did look like poems.

Next, the researchers had to see whether the average person could tell the difference. In other words: a Turing test of sorts.

The researchers found their judges on Amazon Mechanical Turk — an online service where people complete tasks that benefit from automation but still require human intelligence — and divided people up as either general users or “experts,” who had some sort of background in literary academia. These judges were then presented with poem after poem — sometimes with the associated picture, and sometimes without. They had to guess whether a human had written them, or whether AI had.

While the experts were better at identifying machine-written poems if they were given the image and general users were better without it, both groups were better at picking out the human-written poems than they were at identifying which ones were written by the new AI.
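The comparison boils down to per-group guessing accuracy. Here is a minimal sketch of the metric involved, with invented guesses rather than the study's data:

```python
# Accuracy of a judge at attributing poems to "human" or "ai".
# The truth labels and guesses below are invented for illustration.

def accuracy(guesses, truth):
    """Fraction of poems whose author was guessed correctly."""
    correct = sum(g == t for g, t in zip(guesses, truth))
    return correct / len(truth)

truth   = ["ai", "human", "ai", "human", "ai", "human"]
guesses = ["ai", "human", "human", "human", "human", "human"]  # a lenient judge

print(accuracy(guesses, truth))  # 4 of 6 correct overall
```

Note how this hypothetical judge gets every human poem right but misses most of the machine ones, the same asymmetry the study reports.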

That is to say, the machines had them fooled more often than not.

While it might be neat to buy a coffee table book of robot poetry, odds are we’ll see the convincing, evocative language that these algorithms have mastered in more commercially-relevant ways, like ads or customer service chatbots. But even so, it’s nice to imagine a future full of gentle, thoughtful robots studying Shakespeare and comparing their broken hearts to coal trucks.

Transparent eel-like soft robot can swim silently underwater

An innovative, eel-like robot developed by engineers and marine biologists at the University of California can swim silently in salt water without an electric motor. Instead, the robot uses artificial muscles filled with water to propel itself. The foot-long robot, which is connected to an electronics board that remains on the surface, is also virtually transparent.

The team, which includes researchers from UC San Diego and UC Berkeley, details their work in the April 25 issue of Science Robotics. Researchers say the bot is an important step toward a future when soft robots can swim in the ocean alongside fish and invertebrates without disturbing or harming them. Today, most underwater vehicles designed to observe marine life are rigid and submarine-like and powered by electric motors with noisy propellers.

“Instead of propellers, our robot uses soft artificial muscles to move like an eel underwater without making any sound,” said Caleb Christianson, a Ph.D. student at the Jacobs School of Engineering at UC San Diego.

One key innovation was using the salt water in which the robot swims to help generate the electrical forces that propel it. The bot is equipped with cables that apply voltage to both the salt water surrounding it and to pouches of water inside of its artificial muscles. The robot’s electronics then deliver negative charges in the water just outside of the robot and positive charges inside of the robot that activate the muscles. The electrical charges cause the muscles to bend, generating the robot’s undulating swimming motion. The charges are located just outside the robot’s surface and carry very little current so they are safe for nearby marine life.
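One way to picture the drive signal is a phase-shifted voltage per muscle pouch, so that activation travels down the body as a wave. The control sketch below is our own simplification, not the published controller:

```python
# Illustrative undulation drive (our simplification, not the paper's
# controller): each muscle pouch gets a sinusoidal voltage offset in phase
# from its neighbor, so peak activation travels down the body as a wave.

import math

def muscle_voltages(n_muscles, t, v_max=1.0, wavelength=4, freq_hz=1.0):
    """Phase-shifted sinusoidal drive for each pouch at time t (seconds)."""
    return [v_max * math.sin(2 * math.pi * (freq_hz * t - i / wavelength))
            for i in range(n_muscles)]

# At any instant, neighboring muscles sit at different phases of the wave:
print([round(v, 2) for v in muscle_voltages(4, 0.0)])
```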

“Our biggest breakthrough was the idea of using the environment as part of our design,” said Michael T. Tolley, the paper’s corresponding author and a professor of mechanical engineering at the Jacobs School at UC San Diego. “There will be more steps to creating an efficient, practical, untethered eel robot, but at this point we have proven that it is possible.”

Previously, other research groups had developed robots with similar technology. But to power these robots, engineers were using materials that need to be held in constant tension inside semi-rigid frames. The Science Robotics study shows that the frames are not necessary.

“This is in a way the softest robot to be developed for underwater exploration,” Tolley said.

The robot was tested inside salt-water tanks filled with jellyfish, coral and fish at the Birch Aquarium at the Scripps Institution of Oceanography at UC San Diego and in Tolley’s lab.

The conductive chambers inside the robot’s artificial muscles can be loaded with fluorescent dye (as shown in the video accompanying the study and this release). In the future, the fluorescence could be used as a kind of signaling system.

Next steps also include improving the robot’s reliability and its geometry. Researchers need to improve ballast, equipping the robot with weights so that it can dive deeper. For now, engineers have improvised ballast weights with a range of objects, such as magnets. In future work, researchers envision building a head for their eel robot to house a suite of sensors.

The research was supported with a grant from the Office of Naval Research. Christianson is supported by a National Science Foundation Graduate Research Fellowship.


Story Source:

Materials provided by University of California – San Diego. Original written by Ioana Patringenaru. Note: Content may be edited for style and length.

Turning deep-learning AI loose on software development

Computer scientists at Rice University have created a deep-learning, software-coding application that can help human programmers navigate the growing multitude of often-undocumented application programming interfaces, or APIs.

Known as Bayou, the Rice application was created through an initiative funded by the Defense Advanced Research Projects Agency aimed at extracting knowledge from online source code repositories like GitHub. A paper on Bayou will be presented May 1 in Vancouver, British Columbia, at the Sixth International Conference on Learning Representations, a premier outlet for deep learning research. Users can try it out at

Designing applications that can program computers is a long-sought grail of the branch of computer science called artificial intelligence (AI).

“People have tried for 60 years to build systems that can write code, but the problem is that these methods aren’t that good with ambiguity,” said Bayou co-creator Swarat Chaudhuri, associate professor of computer science at Rice. “You usually need to give a lot of details about what the target program does, and writing down these details can be as much work as just writing the code.

“Bayou is a considerable improvement,” he said. “A developer can give Bayou a very small amount of information — just a few keywords or prompts, really — and Bayou will try to read the programmer’s mind and predict the program they want.”

Chaudhuri said Bayou trained itself by studying millions of lines of human-written Java code. “It’s basically studied everything on GitHub, and it draws on that to write its own code.”

Bayou co-creator Chris Jermaine, a professor of computer science who co-directs Rice’s Intelligent Software Systems Laboratory with Chaudhuri, said Bayou is particularly useful for synthesizing examples of code for specific software APIs.

“Programming today is very different than it was 30 or 40 years ago,” Jermaine said. “Computers today are in our pockets, on our wrists and in billions of home appliances, vehicles and other devices. The days when a programmer could write code from scratch are long gone.”

Bayou architect Vijay Murali, a research scientist at the lab, said, “Modern software development is all about APIs. These are system-specific rules, tools, definitions and protocols that allow a piece of code to interact with a specific operating system, database, hardware platform or another software system. There are hundreds of APIs, and navigating them is very difficult for developers. They spend lots of time at question-answer sites like Stack Overflow asking other developers for help.”

Murali said developers can now begin asking some of those questions at Bayou, which will give an immediate answer.

“That immediate feedback could solve the problem right away, and if it doesn’t, Bayou’s example code should lead to a more informed question for their human peers,” Murali said.

Jermaine said the team’s primary goal is to get developers to try to extend Bayou, which has been released under a permissive open-source license.

“The more information we have about what people want from a system like Bayou, the better we can make it,” he said. “We want as many people to use it as we can get.”

Bayou is based on a method called neural sketch learning, which trains an artificial neural network to recognize high-level patterns in hundreds of thousands of Java programs. It does this by creating a “sketch” for each program it reads and then associating this sketch with the “intent” that lies behind the program.

When a user asks Bayou questions, the system makes a judgment call about what program it’s being asked to write. It then creates sketches for several of the most likely candidate programs the user might want.

“Based on that guess, a separate part of Bayou, a module that understands the low-level details of Java and can do automatic logical reasoning, is going to generate four or five different chunks of code,” Jermaine said. “It’s going to present those to the user like hits on a web search. ‘This one is most likely the correct answer, but here are three more that could be what you’re looking for.'”
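A toy version of that ranked presentation, with a scoring function that is ours rather than Bayou's: order candidate snippets by how well their descriptions match the query keywords, like hits on a web search.

```python
# Toy ranking of candidate code snippets by keyword overlap with their
# descriptions. The candidates and scoring are invented for illustration.

CANDIDATES = [
    ("read a file line by line",  "for line in open(path): ..."),
    ("write bytes to a file",     "open(path, 'wb').write(data)"),
    ("read a file into a string", "data = open(path).read()"),
]

def rank(query_keywords):
    """Order candidates by how many query keywords their descriptions share."""
    def score(item):
        desc, _ = item
        return len(set(desc.split()) & set(query_keywords))
    return sorted(CANDIDATES, key=score, reverse=True)

# The best match is shown first, like the top hit on a web search:
best_desc, best_code = rank({"read", "file", "string"})[0]
print(best_desc)  # read a file into a string
```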

Story Source:

Materials provided by Rice University. Original written by Jade Boyd. Note: Content may be edited for style and length.