Building AI systems that make fair decisions

A growing body of research has demonstrated that algorithms and other types of software can be discriminatory, yet the opaque nature of these tools makes it difficult to implement specific regulations. Determining the legal, ethical, and philosophical implications of these powerful decision-making aids, while still obtaining answers and information, is a complex challenge.

Harini Suresh, a PhD student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), is investigating this multilayered puzzle: how to create fair and accurate machine learning algorithms that let users obtain the data they need. Suresh studies the societal implications of automated systems in MIT Professor John Guttag’s Data-Driven Inference Group, which uses machine learning and computer vision to improve outcomes in medicine, finance, and sports. Here, she discusses her research motivations, how a food allergy led her to MIT, and teaching students about deep learning.

Q: What led you to MIT?

A: When I was in eighth grade, my mom developed an allergy to spicy food, which, coming from India, was truly bewildering to me. I wanted to discover the underlying reason. Luckily, I grew up next to Purdue University in Indiana, and I met with a professor there who eventually let me test my allergy-related hypotheses. I was fascinated with being able to ask and answer my own questions, and continued to explore this realm throughout high school.

When I came to MIT as an undergraduate, I intended to focus solely on biology, until I took my first computer science class. I learned how computational tools could profoundly affect biology and medicine, since humans can’t process massive amounts of data in the way that machines can.

Towards the end of my undergrad, I started doing research with [professor of computer science and engineering] Peter Szolovits, who focuses on utilizing big medical data and machine learning to come up with new insights. I stayed to get my master’s degree in computer science, and now I’m in my first year as a PhD student studying personalized medicine and societal implications of machine learning.

Q: What are you currently working on?

A: I’m studying how to make machine learning algorithms more understandable and easier to use responsibly. In machine learning, we typically use historical data and train a model to detect patterns in the data and make new predictions.

If the data we use is biased in a particular way, such as “women tend to receive less pain treatment,” then the model will learn that. Even if the data isn’t biased, if we simply have far less data on a certain group, predictions for that group will be worse. If that model is then integrated into a hospital (or any other real-world system), it’s not going to perform equally well across all groups of people, which is problematic.

I’m working on creating algorithms that utilize data effectively but fairly. This involves both detecting bias or underrepresentation in the data as well as figuring out how to mitigate it at different points in the machine learning pipeline. I’ve also worked on using predictive models to improve patient care.
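The underrepresentation effect Suresh describes can be seen in a toy simulation (entirely hypothetical data and model, not from her research): a single decision threshold fit to pooled data ends up tuned to the majority group, and accuracy for the smaller group suffers.

```python
import random

random.seed(0)

# Hypothetical setup: two groups whose outcomes follow different cutoffs.
def make_group(n, threshold):
    data = []
    for _ in range(n):
        x = random.random()
        data.append((x, 1 if x > threshold else 0))
    return data

group_a = make_group(1000, 0.5)   # well represented
group_b = make_group(50, 0.7)     # underrepresented

# "Training": pick the single cutoff that maximizes accuracy on pooled data.
pooled = group_a + group_b
best_cut = max((c / 100 for c in range(101)),
               key=lambda c: sum((x > c) == bool(y) for x, y in pooled))

def accuracy(data, cut):
    return sum((x > cut) == bool(y) for x, y in data) / len(data)

# The pooled-optimal cutoff sits near group A's 0.5, so group B fares worse.
print(f"group A accuracy: {accuracy(group_a, best_cut):.2f}")
print(f"group B accuracy: {accuracy(group_b, best_cut):.2f}")
```

The same model that looks excellent on aggregate metrics quietly performs worse on the group it saw less of, which is exactly why per-group evaluation matters.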

Q: What effect do you think your area of work will have in the next decade?

A: Machine learning is everywhere. Companies are going to use these algorithms and integrate them into their products, whether they’re fair or not. We need to make it easier for people to use these tools responsibly so that our predictions on data are made in a way that we as a society are okay with.

Q: What is your favorite thing about doing research at CSAIL?

A: When I ask for help, whether it’s related to a technical detail, a high-level problem, or general life advice, people are genuinely willing to lend support, discuss problems, and find solutions, even if it takes a long time.

Q: What is the biggest challenge you face in your work?

A: When we think about machine learning problems with real-world applications, and the goal of eventually getting our work into the hands of real people, there are a lot of existing legal, ethical, and philosophical considerations that arise. There’s variability in the definition of “fair,” and it’s important not to reduce our research down to a simple equation, because it’s much more than that. It’s definitely challenging to balance thinking about how my work fits in with these broader frameworks while also carving out a doable computer science problem to work on.

Q: What is something most people would be surprised to learn about you?

A: I love creative writing, and for most of my life before I came to MIT I thought I would be an author. I really enjoy art and creativity. Along those lines, I painted a full-wall mural in my room a while ago, I frequently spend hours at MIT’s pottery studio, and I love making up recipes and taking photos.

Q: If you could tell your younger self one thing what would it be?

A: If you spend time on something, and it doesn’t directly contribute to a paper or thesis, don’t think of it as a waste of time. Accept the things that don’t work out as a part of the learning process and be honest about when to move on to something new without feeling guilty.

If you’d rather be doing something else, it’s better to just go do it sooner. Things that seem hugely consequential at the time, like taking an extra class or graduating slightly later, aren’t actually an issue when the time rolls around, and a lot of people do them. Honestly, my future self could probably use this advice too!

Q: What else have you been involved with at MIT?

A: During Independent Activity Period 2017, I organized a class called Intro to Deep Learning. I think machine learning gets a reputation of being a very difficult, expert-only endeavor, which scares people away and creates a pretty homogenous group of “experts.”

I wanted to create a low-commitment introduction to an area of machine learning that might help ease the initial barrier to entry. My co-organizer and I tried to keep our goals of accessibility and inclusivity at the forefront when making decisions about the course. Communicating complex ideas in an accessible way was a challenge, but a very fun one.


AI Could Start A Nuclear War. But Only If We Let AI Start A Nuclear War

Here are some very true facts:

You might be tempted to put these pieces together and assume that AI might autonomously start a nuclear war. This is the subject of a new paper and article published today by the RAND Corporation, a nonprofit think tank that researches national security, as part of its Security 2040 initiative.

But AI won’t necessarily cause a nuclear war; no matter what AI fearmonger Elon Musk tweets out, artificial intelligence will only trigger a nuclear war if we decide to build artificial intelligence that can start nuclear wars.

The RAND Corporation hosted a series of panels with mysterious, unnamed experts in the realms of national security, nuclear weaponry, and artificial intelligence to speculate and theorize on how AI might advance in the coming years and what that means for nuclear war.

Much of the article talks about hyper-intelligent computers that would transform how and when a nation decides to launch its nuclear missiles. The researchers imagine algorithms that can track intercontinental ballistic missiles, launch a nation’s nukes before they’re destroyed in an incoming attack, or even deploy retaliatory strikes before an enemy’s nukes have even left their lairs.

It also mentions that AI could suggest when human operators should launch missiles, while also arguing that future generations will be more willing to take that human operator out of the equation, leaving those life-or-death decisions to AI that has been trained to make them.

Oh, and also the researchers say that all these systems will be buggy as hell while people work out the kinks. Because what’s the harm in a little trial and error when nukes are involved?

As we become more dependent on AI for its military applications, we might need to reconsider how implementing such systems could affect nuclear powers worldwide, many of which find themselves in a complicated web of alliances and political rivalries.

If you are despairing at the prospect of a computer-dominated military, the study authors offer some solace. Buried within their findings is the very reasonable perspective that artificial intelligence, which excels at the super-niche tasks for which it is developed, will continue to be developed at an incremental pace and likely won’t do all that much. Yes, AI is sophisticated enough to win at Go, but it’s not ready to be in charge of our nukes. At least, not yet.

In short, it will be a while before we have computers deciding when to launch our nukes (though, given the rate of human error and some seriously close calls that resulted from it in the past, it’s a matter of opinion whether more human control is actually a good thing).

Robot Showboat: Now Even Our Celeb Feuds Are Automated

Drake and Meek Mill. 6ix9ine and The Game. Sarah Jessica Parker and Kim Cattrall. Let’s be real, part of the reason we’re on social media is for the celebrity feuds.

And now it seems like that’s just another thing the robots are going to do better (or, at least, stranger) than us.

Earlier this week, famous Instagram robot Bermuda, a digital avatar who posts gems about how climate change is fake, feminists are misguided, and whites are supreme, took over the account of Miquela, another Instagram “robot” who models designer clothes, supports Black Lives Matter, and loves Beyoncé.

After a few days, Miquela “came clean,” and posted an “emotional” rant about her “revelation” that she was not based on a real person, but instead is an artificial intelligence-powered robot that, like Bermuda, was created by the Trump-endorsing tech company Cain Intelligence.

Bermuda returned Miquela’s account, and the feud ended.

But the true mystery had just begun.

Now, we here at Futurism love some good internet as much as anyone else. But it’s important to mention that none of this is real. Just like Twitter’s Horse_ebooks or YouTube’s Pronunciation Book, Miquela and Bermuda’s feud and revelation are probably just another exhausting internet art project. Sigh.

In fact, Cain Intelligence is not even a real company. And it’s unclear how much (if any) of either digital persona’s account is generated by artificial intelligence. It’s possible that some of the captions and images are rendered through a machine learning algorithm, but it’s just as likely that some dude behind the scenes is writing everything himself, as is partially the case for Hanson Robotics’ Sophia.

No one, human or AI, has come forward to reveal the real goal behind this artificial feud, and Miquela hasn’t posted since her confession, so she’s not dropping any clues. Until then, we can probably all move on and get ready for the next viral social media feud/prank.

Unless, of course, AI really is controlling Bermuda and Miquela, in which case I apologize and beg that they leave my account alone.

Researchers design ‘soft’ robots that can move on their own

If Star Wars’ R2-D2 is your idea of a robot, think again. Researchers led by a University of Houston engineer have reported a new class of soft robot, composed of ultrathin sensing, actuating electronics and temperature-sensitive artificial muscle that can adapt to the environment and crawl, similar to the movement of an inchworm or caterpillar.

Cunjiang Yu, Bill D. Cook Assistant Professor of mechanical engineering, said potential applications range from surgery and rehabilitation to search and rescue in natural disasters or on the battlefield. Because the robot body changes shape in response to its surroundings, it can slip through narrow crevices to search for survivors in the rubble left by an earthquake or bombing, he said.

“They sense the change in environment and adapt to slip through,” he said.

These soft robots, made of soft artificial muscle and ultrathin deformable sensors and actuators, have significant advantages over the traditional rigid robots used for automation and other physical tasks.

The researchers said their work, published in the journal Advanced Materials, took its inspiration from nature. “Many creatures, such as inchworms that have completely soft compliant bodies without any rigid components (e.g., bones), exhibit unprecedented abilities in adapting their shapes and morphologies and unique locomotion behaviors,” they wrote.

Traditional soft robots lack the ability to adapt to their environments or move on their own.

The prototype adaptive soft robot includes a liquid crystal elastomer, doped with carbon black nanoparticles to enhance thermal conductivity, as the artificial muscle, combined with ultrathin mesh shaped stretchable thermal actuators and silicon-based light sensors. The thermal actuators provide heat to activate the robot.

The prototype is small — 28.6 millimeters in length, or just over one inch — but Yu said it could easily be scaled up. That’s the next step, along with experimenting with various types of sensors. While the prototype uses heat-sensitive sensors, it could employ smart materials activated by light or other cues, he said.

“This is the first of its kind,” Yu said. “You can use other sensors, depending on what you want it to do.”


Story Source:

Materials provided by University of Houston. Original written by Jeannie Kever. Note: Content may be edited for style and length.

Robot developed for automated assembly of designer nanomaterials

A current area of intense interest in nanotechnology is van der Waals heterostructures, which are assemblies of atomically thin two-dimensional (2D) crystalline materials that display attractive conduction properties for use in advanced electronic devices.

A representative 2D material is graphene, which consists of a honeycomb lattice of carbon atoms that is just one atom thick. The development of van der Waals heterostructures has been restricted by the complicated and time-consuming manual operations required to produce them. That is, the 2D crystals typically obtained by exfoliation of a bulk material need to be manually identified, collected, and then stacked by a researcher to form a van der Waals heterostructure. Such a manual process is clearly unsuitable for the industrial production of electronic devices containing van der Waals heterostructures.

Now, a Japanese research team led by the Institute of Industrial Science at The University of Tokyo has solved this issue by developing an automated robot that greatly speeds up the collection of 2D crystals and their assembly to form van der Waals heterostructures. The robot consists of an automated high-speed optical microscope that detects crystals, the positions and parameters of which are then recorded in a computer database. Customized software is used to design heterostructures using the information in the database. The heterostructure is then assembled layer by layer by robotic equipment directed by the designed computer algorithm. The findings were reported in Nature Communications.
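The detect, catalog, design, and stack workflow described above can be sketched schematically (a hypothetical data model, not the Tokyo group's actual software):

```python
flake_db = []  # catalogue populated by the automated microscope scan

def record_flake(material, position, thickness_nm):
    """Step 1-2: the microscope detects a flake and records its parameters."""
    flake_db.append({"material": material, "pos": position, "t": thickness_nm})

def design_stack(recipe):
    """Step 3: pick one catalogued flake per requested layer, thinnest first."""
    stack = []
    for material in recipe:
        candidates = [f for f in flake_db
                      if f["material"] == material and f not in stack]
        if not candidates:
            raise ValueError(f"no {material} flake left in the database")
        stack.append(min(candidates, key=lambda f: f["t"]))
    return stack

# Flakes "detected" during the scan (invented positions/thicknesses):
record_flake("graphene", (12, 40), 0.3)
record_flake("graphene", (55, 10), 0.7)
record_flake("hBN", (30, 22), 1.2)

# Step 4: the robot would then pick up and stack these, layer by layer.
stack = design_stack(["graphene", "hBN", "graphene"])
print([f["pos"] for f in stack])  # [(12, 40), (30, 22), (55, 10)]
```

The real system's value lies in doing the first step at 400 flakes an hour; the design and stacking stages then work from that database rather than from a researcher's eyes and tweezers.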

“The robot can find, collect, and assemble 2D crystals in a glove box,” study first author Satoru Masubuchi says. “It can detect 400 graphene flakes an hour, which is much faster than the rate achieved by manual operations.”

When the robot was used to assemble graphene flakes into van der Waals heterostructures, it could stack up to four layers an hour with just a few minutes of human input required for each layer. The robot was used to produce a van der Waals heterostructure consisting of 29 alternating layers of graphene and hexagonal boron nitride (another common 2D material). The record number of layers in a van der Waals heterostructure produced by manual operations is 13, so the robot has greatly increased our ability to access complex van der Waals heterostructures.

“A wide range of materials can be collected and assembled using our robot,” co-author Tomoki Machida explains. “This system provides the potential to fully explore van der Waals heterostructures.”

The development of this robot will greatly facilitate production of van der Waals heterostructures and their use in electronic devices, taking us a step closer to realizing devices containing atomic-level designer materials.

Story Source:

Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

An AI that makes road maps from aerial images

Map apps may have changed our world, but they still haven’t mapped all of it yet. In particular, mapping roads can be tedious: even after taking aerial images, companies like Google still have to spend many hours manually tracing out roads. As a result, they haven’t yet gotten around to mapping the vast majority of the more than 20 million miles of roads across the globe.

Gaps in maps are a problem, particularly for systems being developed for self-driving cars. To address the issue, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created RoadTracer, an automated method to build road maps that’s 45 percent more accurate than existing approaches.

Using data from aerial images, the team says that RoadTracer is not just more accurate, but more cost-effective than current approaches. MIT professor Mohammad Alizadeh says that this work will be useful both for tech giants like Google and for smaller organizations without the resources to curate and correct large amounts of errors in maps.

“RoadTracer is well-suited to map areas of the world where maps are frequently out of date, which includes both places with lower population and areas where there’s frequent construction,” says Alizadeh, one of the co-authors of a new paper about the system. “For example, existing maps for remote areas like rural Thailand are missing many roads. RoadTracer could help make them more accurate.”

In tests looking at aerial images of New York City, RoadTracer could correctly map 44 percent of its road junctions, which is more than twice as effective as traditional approaches based on image segmentation that could map only 19 percent.

The paper, which will be presented in June at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah, is a collaboration between MIT CSAIL and the Qatar Computing Research Institute (QCRI).

Alizadeh’s MIT co-authors include graduate students Favyen Bastani and Songtao He, and professors Hari Balakrishnan, Sam Madden, and David DeWitt. QCRI co-authors include senior software engineer Sofiane Abbar and Sanjay Chawla, who is the research director of QCRI’s Data Analytics Group.

How it works

Current efforts to automate maps involve training neural networks to look at aerial images and identify individual pixels as either “road” or “not road.” Because aerial images can often be ambiguous and incomplete, such systems also require a post-processing step that’s aimed at trying to fill in some of the gaps.

Unfortunately, these so-called “segmentation” approaches are often imprecise: if the model mislabels a pixel, that error will get amplified in the final road map. Errors are particularly likely if the aerial images have trees, buildings or shadows that obscure where roads begin and end. (The post-processing step also requires making decisions based on assumptions that may not always hold up, like connecting two road segments simply because they are next to each other.)

Meanwhile, RoadTracer creates maps step-by-step. It starts at a known location on the road, and uses a neural network to examine the surrounding area to determine which point is most likely to be the next part on the road. It then adds that point and repeats the process to gradually trace out the road one step at a time.

“Rather than making thousands of different decisions at once about whether various pixels represent parts of a road, RoadTracer focuses on the simpler problem of figuring out which direction to follow when starting from a particular spot that we know is a road,” says Bastani. “This is in many ways actually a lot closer to how we as humans construct mental models of the world around us.”
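The step-by-step idea can be illustrated with a toy sketch (the scoring function below is a stand-in for RoadTracer's neural network, and the grid is invented for illustration):

```python
# Greedy tracing: from a known road point, repeatedly ask a scoring function
# which neighboring point most likely continues the road, and append it.
GRID = [
    "..........",
    ".RRRRR....",
    ".....R....",
    ".....RRRR.",
    "..........",
]

def road_score(x, y):
    """Stand-in for the CNN: 1.0 if the pixel looks like road, else 0.0."""
    if 0 <= y < len(GRID) and 0 <= x < len(GRID[0]):
        return 1.0 if GRID[y][x] == "R" else 0.0
    return 0.0

def trace(start, max_steps=50):
    path, visited = [start], {start}
    x, y = start
    for _ in range(max_steps):
        candidates = [(x + dx, y + dy)
                      for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                      if (x + dx, y + dy) not in visited]
        if not candidates:
            break
        best = max(candidates, key=lambda p: road_score(*p))
        if road_score(*best) < 0.5:   # stop when no direction looks like road
            break
        path.append(best)
        visited.add(best)
        x, y = best
    return path

print(trace((1, 1)))  # follows the L-shaped road to (8, 3)
```

Because each step commits to a single direction, a mistake stays local: the trace can simply be resumed from the last correct point, which is harder to do when thousands of independent pixel labels are produced at once.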

The team trained RoadTracer on aerial images of 25 cities across six countries in North America and Europe, and then evaluated its mapping abilities on 15 other cities.

“It’s important for a mapping system to be able to perform well on cities it hasn’t trained on, because regions where automatic mapping holds the most promise are ones where existing maps are non-existent or inaccurate,” says Balakrishnan.

Bastani says that the fact that RoadTracer had an error rate that is 45 percent lower is essential to making automatic mapping systems more practical for companies like Google.

“If the error rate is too high, then it is more efficient to map the roads manually from scratch versus removing incorrect segments from the inferred map,” says Bastani.

Still, implementing something like RoadTracer wouldn’t take people completely out of the loop: The team says that they could imagine the system proposing road maps for a large region and then having a human expert come in to double-check the design.

“That said, what’s clear is that with a system like ours you could dramatically decrease the amount of tedious work that humans would have to do,” Alizadeh says.

Indeed, one advantage to RoadTracer’s incremental approach is that it makes it much easier to correct errors — human supervisors can simply correct them and re-run the algorithm from where they left off, rather than continue to use imprecise information that trickles down to other parts of the map.

Of course, aerial images are just one piece of the puzzle. They don’t give you information about roads that have overpasses and underpasses, since those are impossible to ascertain from above. As a result, the team is also separately developing algorithms that can create maps from GPS data, and working to merge these approaches into a single system for mapping.

This project was supported in part by the Qatar Computing Research Institute.

Two robots are better than one: 5G antenna measurement research

Researchers at the National Institute of Standards and Technology (NIST) continue to pioneer new antenna measurement methods, this time for future 5G wireless communications systems.

NIST’s new Large Antenna Positioning System (LAPS) has two robotic arms designed to position “smart” or adaptable antennas, which can be mounted on base stations that handle signals to and from huge numbers of devices. Future 5G systems will operate at higher frequencies and offer more than 100 times the data-carrying capacity of today’s cellphones, while connecting billions of mobile broadband users in complex, crowded signal environments.

Among its many special capabilities, the LAPS can test transmissions to and from antennas located on fast-moving mobile devices, which requires coordination between the timing of communication signals and robot motion.

“Measurements of antenna signals are a great use for robotics,” NIST electronics engineer Jeff Guerrieri said. “The robotic arms provide antenna positioning that would be constrained by conventional measurement systems.”

NIST researchers are still validating the performance of the LAPS and are just now beginning to introduce it to industry. The system was described at a European conference last week.

Today’s mobile devices such as cell phones, consumer Wi-Fi systems and public safety radios mostly operate at frequencies below 3 gigahertz (GHz), a crowded part of the spectrum. Next-generation mobile communications are starting to use the more open frequency bands at millimeter wavelengths (30-300 GHz), but these signals are easily distorted and more likely to be affected by physical barriers such as walls or buildings. Solutions will include transmitter antenna arrays with tens to hundreds of elements that focus the antenna power into a steerable beam that can track mobile devices.
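The steerable-beam idea rests on standard phased-array math: delaying each element's signal so the wavefronts add constructively in the chosen direction. A small sketch of a uniform linear array at 30 GHz (illustrative parameters, not NIST's configuration):

```python
import numpy as np

c = 3e8                         # speed of light, m/s
f = 30e9                        # 30 GHz, in the millimeter-wave band
lam = c / f
d = lam / 2                     # half-wavelength element spacing
n = np.arange(64)               # 64-element uniform linear array
steer = np.deg2rad(25)          # desired beam direction

# Per-element phase shift that aligns the wavefronts toward `steer`:
phase = -2 * np.pi * d * n * np.sin(steer) / lam

# Array factor over look angles; its peak should sit at the steering angle.
angles = np.deg2rad(np.linspace(-90, 90, 721))
af = np.abs(np.exp(1j * (2 * np.pi * d * np.outer(np.sin(angles), n) / lam
                         + phase)).sum(axis=1))
peak_deg = np.rad2deg(angles[np.argmax(af)])
print(round(float(peak_deg), 1))  # 25.0
```

Re-steering the beam is just recomputing the phase vector, which is why such arrays can track a moving handset electronically, with no mechanical motion.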

For decades, NIST has pioneered testing of high-end antennas for radar, aircraft, communications and satellites. Now, the LAPS will help foster the development of 5G wireless and spectrum-sharing systems. The dual-robot system will also help researchers understand the interference problems created by ever-increasing signal density.

The new facility is the next generation of NIST’s Configurable Robotic Millimeter-Wave Antenna (CROMMA) Facility, which has a single robotic arm. CROMMA, developed at NIST, has become a popular tool for high-frequency antenna measurements. Companies that integrate legacy antenna measurement systems are starting to use robotic arms in their product lines, facilitating the transfer of this technology to companies like The Boeing Co.

CROMMA can measure only physically small antennas. NIST developed the LAPS concept of a dual robotic arm system, with one robot in a fixed position and the other mounted on a large linear rail slide, to accommodate larger antennas and base stations. The system was designed and installed by NSI-MI Technologies. The LAPS also has a safety unit, including radar designed to prevent collisions of robots and antennas within the surrounding environment, and to protect operators.

The LAPS’ measurement capabilities for 5G systems include flexible scan geometries, beam tracking of mobile devices and improved accuracy and repeatability in mobile measurements.

The LAPS has replaced NIST’s conventional scanners and will be used to perform near-field measurement of basic antenna properties for aerospace and satellite companies requiring precise calibrations and performance verification. The near-field technique measures the radiated signal very close to the antenna in a controlled environment and, using mathematical algorithms developed at NIST, calculates the antenna’s performance at its operating distance, known as the far field.

But the ultimate goal for the LAPS is to perform dynamic, over-the-air tests of future 5G communication systems. Initial validation shows that basic mechanical operation of the LAPS is within the specified design tolerances for still and moving tests to at least 30 GHz. Final validation is ongoing.


Face recognition technology that works in the dark

Army researchers have developed an artificial intelligence and machine learning technique that produces a visible face image from a thermal image of a person’s face captured in low-light or nighttime conditions. This development could lead to enhanced real-time biometrics and post-mission forensic analysis for covert nighttime operations.

Thermal cameras like FLIR, or Forward Looking Infrared, sensors are actively deployed on aerial and ground vehicles, in watch towers and at check points for surveillance purposes. More recently, thermal cameras are becoming available for use as body-worn cameras. The ability to perform automatic face recognition at nighttime using such thermal cameras is beneficial for informing a Soldier that an individual is someone of interest, like someone who may be on a watch list.

The motivations for this technology — developed by Drs. Benjamin S. Riggan, Nathaniel J. Short and Shuowen “Sean” Hu, from the U.S. Army Research Laboratory — are to enhance both automatic and human-matching capabilities.

“This technology enables matching between thermal face images and existing biometric face databases/watch lists that only contain visible face imagery,” said Riggan, a research scientist. “The technology provides a way for humans to visually compare visible and thermal facial imagery through thermal-to-visible face synthesis.”

He said under nighttime and low-light conditions, there is insufficient light for a conventional camera to capture facial imagery for recognition without active illumination such as a flash or spotlight, which would give away the position of such surveillance cameras; however, thermal cameras that capture the heat signature naturally emanating from living skin tissue are ideal for such conditions.

“When using thermal cameras to capture facial imagery, the main challenge is that the captured thermal image must be matched against a watch list or gallery that only contains conventional visible imagery from known persons of interest,” Riggan said. “Therefore, the problem becomes what is referred to as cross-spectrum, or heterogeneous, face recognition. In this case, facial probe imagery acquired in one modality is matched against a gallery database acquired using a different imaging modality.”

This approach leverages advanced domain adaptation techniques based on deep neural networks. The fundamental approach is composed of two key parts: a non-linear regression model that maps a given thermal image into a corresponding visible latent representation, and an optimization problem that projects that latent representation back into the image space.
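As a toy illustration of that two-stage structure (a linear stand-in with synthetic vectors, not ARL's actual deep networks or data), one can fit a regression into a latent space and then recover an image by optimization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Decoder with orthonormal columns: latent (8-dim) -> "image" (16-dim).
D = np.linalg.qr(rng.normal(size=(16, 8)))[0]
encode = lambda img: D.T @ img  # encoder used inside the optimization

# Stage 1: fit a (here linear) regression from thermal input to latent code,
# using synthetic paired training samples.
thermal = rng.normal(size=(100, 12))
latent_true = thermal @ rng.normal(size=(12, 8))
W, *_ = np.linalg.lstsq(thermal, latent_true, rcond=None)

# Stage 2: predict the latent code for a new thermal input, then solve
# min_img ||encode(img) - z||^2 by gradient descent to get back to image space.
z = thermal[0] @ W
img = np.zeros(16)
for _ in range(500):
    img -= 0.1 * (2 * D @ (encode(img) - z))  # gradient of the squared error

print(np.linalg.norm(encode(img) - z))  # ~0: recovered image matches the code
```

The separation matters: the regression only has to land in a well-behaved latent space, and the optimization step is what enforces that the final synthesized face is a coherent image rather than a raw regression output.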

Details of this work were presented in March in the technical paper “Thermal to Visible Synthesis of Face Images using Multiple Regions” at the IEEE Winter Conference on Applications of Computer Vision, or WACV, in Lake Tahoe, Nevada, a technical conference that brings together scholars and scientists from academia, industry, and government.

At the conference, Army researchers demonstrated that combining global information, such as features from across the entire face, with local information, such as features from discriminative fiducial regions (for example, the eyes, nose, and mouth), enhanced the discriminability of the synthesized imagery. They showed how the thermal-to-visible mapped representations from both global and local regions in the thermal face signature could be used in conjunction to synthesize a refined visible face image.

The optimization problem for synthesizing an image attempts to jointly preserve the shape of the entire face and appearance of the local fiducial details. Using the synthesized thermal-to-visible imagery and existing visible gallery imagery, they performed face verification experiments using a common open source deep neural network architecture for face recognition. The architecture used is explicitly designed for visible-based face recognition. The most surprising result is that their approach achieved better verification performance than a generative adversarial network-based approach, which previously showed photo-realistic properties.

Riggan attributes this result to the fact that the game-theoretic objective for GANs immediately seeks to generate imagery that is sufficiently similar in dynamic range and photo-like appearance to the training imagery, while sometimes neglecting to preserve identifying characteristics. The approach developed by ARL preserves identity information to enhance discriminability, for example, increased recognition accuracy for both automatic face recognition algorithms and human adjudication.

As part of the paper presentation, ARL researchers showcased a near real-time demonstration of this technology. The proof of concept demonstration included the use of a FLIR Boson 320 thermal camera and a laptop running the algorithm in near real-time. This demonstration showed the audience that a captured thermal image of a person can be used to produce a synthesized visible image in situ. This work received a best paper award in the faces/biometrics session of the conference, out of more than 70 papers presented.

Riggan said he and his colleagues will continue to extend this research under the sponsorship of the Defense Forensics and Biometrics Agency to develop a robust nighttime face recognition capability for the Soldier.

“Cow FitBits” Won’t Make Cows Happier Because They’re Not Milk Robots

The life of a milk cow is mostly pretty great. Cows relax, go for walks on rich pastures, and when it gets cold, they hang out with their bovine homies indoors.

That all goes out the window when they’re sick. Sick cows tend to eat less, walk differently, and give off sad moos. Now, the great AI hawkers have decided to automate a practice almost as old as farming itself: figuring out whether cows are sick so they can be treated. Proponents claim the devices can identify a sick cow sooner, but many farmers don’t think they’re necessary, because they’ve developed a sixth sense for a sick cow.

Dutch innovation company Connecterra has developed an “intelligent cow-monitoring system” that follows individual cows’ every move, relaying live information back to the farmer. Built on Google’s open-source AI platform TensorFlow (the same technology used to thwart illegal deforestation in Louisiana), the system uses motion-sensing “FitBits” attached to the cow’s neck to analyze its behavior.

Connecterra claims its Big Bovine Brother network can tell if a cow is getting sick 24 to 48 hours before any visual symptoms arise by analyzing changes in internal temperature (that aren’t accounted for by external factors like high outside temperatures and humidity levels). It can also learn behaviors such as walking, standing, lying down, and chewing, and ring the alarm bells if a particular cow decides not to go for a second helping of hay.
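The kind of rule such a system might apply can be sketched simply: compare a cow's eating time today against her own recent baseline and alert on a sharp drop. This is an illustrative guess at the logic, not Connecterra's algorithm; the threshold and numbers are invented.

```python
from statistics import mean

def eating_alert(history_minutes, today_minutes, drop_fraction=0.3):
    """Alert if today's eating time falls more than drop_fraction below
    the cow's own recent average."""
    baseline = mean(history_minutes)
    return today_minutes < baseline * (1 - drop_fraction)

# Invented data: minutes spent eating per day over the past week.
week = [310, 295, 305, 320, 300, 315, 298]

print(eating_alert(week, 180))  # sharp drop -> True
print(eating_alert(week, 290))  # normal day -> False
```

Using each cow's own baseline, rather than a herd-wide average, is what lets a per-animal sensor flag changes a farmer scanning the whole barn might miss.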

Many farmers directly benefit from the technology, the company claims. “For a typical Dutch farm, which are generally known to be very productive to begin with, we’ve seen about a 20 percent to 30 percent gain in efficiency in farm operations using Connecterra,” said Yasir Khokhar, a former Microsoft employee and the company’s CEO.

AI is being used elsewhere on the farm, too. Farmers in China have been tracking the movement of pigs using RFID tags and overhead cameras that track individual pigs using machine learning. Even the noises the pigs make are analyzed to monitor for disease.

But do we really need AI-powered sensors to know if a cow is not producing at her max? Dairy farming has been around for at least 7,500 years. “I can spot a cow across a room that don’t feel great just by looking in her eyes,” Mark Rodgers, a dairy farmer in Georgia, tells the Washington Post.

And then there is the cost. Hooking up a herd with Connecterra costs a substantial $79.99 per cow, plus a $3-per-cow monthly charge. If you’ve got a decent number of cows in your herd, costs like that can really add up.

The benefits of using AI technology for the individual animals themselves are pretty clear. Farmers can respond to illnesses and other changes in behavior faster. But there is, of course, a downside: if farmers continue to use technologies like Connecterra’s in the future, will their intuition change or vanish over time? What about the next generation of dairy farmers?

Dairy farmers should know how and when to respond to a cow’s needs without sophisticated technology. Teach a farmer how to watch cows, and they’ll drink milk for the rest of their life. But the unstoppable wave of AI technologies is taking over almost every aspect of our lives. At the end of the day, it’s about finding a balance between farmer intuition and technological aids that will make everyone happy.

The Military Wants to Make AI That Mimics the Human Brain. Experts Know There’s a Better Way.

No matter how many times you may hear that AI is going to make us human slaves and take over the world, it’s kind of hard to believe when we’re constantly confronted with AI that’s consistently stupid. A few reminders: Alexa once played porn when someone requested a children’s song; an AI playing one of those old text-based computer games got stuck when it kept giving nonsense commands.

While that might save us from a Skynet-type situation, it’s problematic as we use AI for increasingly sophisticated applications, such as robotic prosthetics, writes DARPA’s Justin Sanchez in the Wall Street Journal. Brains and computers process information very differently, and the software for a prosthetic arm can’t keep up with all the different ways a person’s brain might attempt to control it. The result is that prosthetics spend an awful lot of time sitting still.

What if the software were better adapted to how brains actually work?

DARPA thinks it found the answer: train AI to read and adapt along with the brain’s signals, learning what we are thinking and why as we do it. In short: teach AI to function more like the human brain.

Sounds good, right? In practice, however, it would mean jumping over a much bigger hurdle, one that has tripped up a great many researchers in the race to create a truly intelligent machine: figuring out how in the hell our brains work. Doing that would allow for a seamless interface between brain and machine that could, to continue their example, give an amputee perfect control over their artificial limbs. And if their plan wasn’t batshit enough, it even has some scientists speculating on whether AI might be able to hallucinate or develop depression.

Hey but here’s a handy thing to know about the human brain: we really have no idea what’s going on in there.

And it just so happens that a number of leading AI researchers think that trying to decode and mimic the human brain is a waste of time.

Max Tegmark, an MIT physicist and director of the Future of Life Institute, has a few choice words for those attempting to digitally recreate the human brain. Namely, he calls it “carbon chauvinism.”

“We’re too obsessed with how our brain works, and I think that shows a lack of imagination,” he said during a panel on AI last September.

“The main progress right now and in the near future will be getting to a performance at a human-level without getting the details of the human brain all figured out,” Bart Selman, an AI researcher at Cornell University, told Business Insider.

There’s nothing wrong with mimicking the natural world in technology. For an amputee controlling a prosthetic, software based on how the brain processes language could be invaluable.

But the key there is to be inspired by existing biology while creating something new based on the framework of the technology itself — in this case, intelligence and information processing.

There’s a very good reason the first flying machines didn’t imitate the way bats fly, and the first cars weren’t based on horses and buggies: people tried that, and they were terrible. AI is no different. And the sooner we can move away from the idea that we should try to copy an incredible computer that we don’t understand (our brains), the more AI can advance.

A New AI “Journalist” Is Rewriting the News to Remove Bias

Want your news delivered with the icy indifference of a literal robot? You might want to bookmark the newly launched site Knowhere News. Knowhere is a startup that combines machine learning technologies and human journalists to deliver the facts on popular news stories.

Here’s how it works. First, the site’s artificial intelligence (AI) chooses a story based on what’s popular on the internet right now. Once it picks a topic, it looks at more than a thousand news sources to gather details. Left-leaning sites, right-leaning sites – the AI looks at them all.

Then, the AI writes its own “impartial” version of the story based on what it finds (sometimes in as little as 60 seconds). This take on the news contains the most basic facts, with the AI striving to remove any potential bias. The AI also takes into account the “trustworthiness” of each source, something Knowhere’s co-founders preemptively determined. This ensures a site with a stellar reputation for accuracy isn’t overshadowed by one that plays a little fast and loose with the facts.
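The trustworthiness weighting described above can be sketched as a weighted vote over source claims: each source reports a claim, each source carries an editor-assigned trust score, and the claim with the highest total trust wins. This is a guess at the mechanism, not Knowhere's actual system; the sources, scores, and claims below are invented.

```python
from collections import defaultdict

def consensus(reports, trust):
    """reports: list of (source, claim); trust: source -> weight in [0, 1].
    Returns the claim backed by the most total trust."""
    totals = defaultdict(float)
    for source, claim in reports:
        totals[claim] += trust.get(source, 0.1)  # unknown sources get low weight
    return max(totals, key=totals.get)

# Invented trust scores and claims.
trust = {"wire_service": 0.9, "tabloid": 0.2, "blog": 0.3}
reports = [
    ("wire_service", "census adds citizenship question"),
    ("tabloid", "census scrapped entirely"),
    ("blog", "census adds citizenship question"),
]

print(consensus(reports, trust))  # census adds citizenship question
```

The point of the weighting is exactly what the article describes: a high-accuracy outlet outvotes a fast-and-loose one even when the latter shouts louder.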

For some of the more political stories, the AI produces two additional versions labeled “Left” and “Right.” Those skew pretty much exactly how you’d expect from their headlines:

  • Impartial: “US to add citizenship question to 2020 census”
  • Left: “California sues Trump administration over census citizenship question”
  • Right: “Liberals object to inclusion of citizenship question on 2020 census”

Some controversial but not necessarily political stories receive “Positive” and “Negative” spins:

  • Impartial: “Facebook scans things you send on messenger, Mark Zuckerberg admits”
  • Positive: “Facebook reveals that it scans Messenger for inappropriate content”
  • Negative: “Facebook admits to spying on Messenger, ‘scanning’ private images and links”

Even the images used with the stories occasionally reflect the content’s bias. The “Positive” Facebook story features CEO Mark Zuckerberg grinning, while the “Negative” one has him looking like his dog just died.

Knowhere’s AI isn’t putting journalists out of work, either.

Editor-in-chief and co-founder Nathaniel Barling told Motherboard that a pair of human editors review every story. This ensures you feel like you’re reading something written by an actual journalist, and not a Twitter chatbot. Those edits are then fed back into the AI, helping it improve over time. Barling himself then approves each story before it goes live. “The buck stops with me,” he told Motherboard.

This human element could be the tech’s major flaw. As we’ve seen with other AIs, they tend to take on the biases of their creators, so Barling and his editors will need to be as impartial as humanly possible — literally — to ensure the AI retains its impartiality.

Knowhere just raised $1.8 million in seed funding, so clearly investors think it has potential to change how we get our news. But will it be able to reach enough people — and the right people — to really matter?

Impartiality is Knowhere’s selling point, so if you think it sounds like a site you want to visit, you’re probably someone who already values impartiality in news. Awesome. You aren’t the problem.

The problem is some people are perfectly happy existing in an echo chamber where they get news from a source that reflects what they’re thinking. And if you’re one of those news sources, you don’t want to alienate your audience, right? So you keep feeding the same comfortable readers the same biased stories.

This wouldn’t be such a big deal if the media status quo didn’t wreak havoc on our society, our democracy, and our planet.

So, impartial stories written by AI. Pretty neat? Sure. But society changing? We’ll probably need more than a clever algorithm for that.

Ungrateful Google Plebes Somehow Not Excited to Work on Military Industrial Complex Death Machines

“Don’t Be Evil” has been one of Google’s corporate maxims for over 15 years. But its recent dealings with the Department of Defense have put that ideal on ice. For some reason, Google’s workers aren’t psyched about this!

Over three thousand Google employees signed a recent public letter demanding CEO Sundar Pichai shut down Project Maven — a Department of Defense contract to create a “customized AI surveillance engine” — and publicize a clear policy that “neither Google nor its contractors will ever build warfare technology.”

The letter’s got some pretty direct language, calling the company out on its loss of the aforementioned core value: “Google’s unique history, its motto Don’t Be Evil, and its direct reach into the lives of billions of users set it apart.” The commoditization of people’s personal data (ergo, their psyches) notwithstanding, obviously.

Gizmodo reported on Project Maven last month, describing it as “using machine learning to identify vehicles and other objects in drone footage, taking that burden off analysts.” Google and the Pentagon fired back, stating that the technology wouldn’t be used to create an autonomous weapons system that can identify targets and fire without a human squeezing the trigger.

CEO Pichai spun the letter and public exchange with the company as “hugely important and beneficial” in a statement to the New York Times, but of course didn’t mention any plans to put the brakes on the project. Pichai’s statement went on to say that the tech used by the Pentagon is available to “any Google Cloud customer” and reserved specifically for “non-offensive purposes.”

Thing is, Google’s far from the only tech industry player in cahoots with the military. Red flags immediately went up when news broke that a team of researchers from the Korea Advanced Institute of Science and Technology (KAIST) was partnering up with weapons company Hanwha Systems — a company that produces cluster bombs, not exactly a popular form of warfare, as far as these things go. Fifty researchers from thirty countries called for an immediate boycott of the Korean institute.

Microsoft and Amazon both signed multibillion-dollar contracts with the Department of Defense to develop cloud services. Credit where it’s due: At least the DOD isn’t trying to spin this as anything other than death machine-making. Defense Department chief management officer John Gibson didn’t beat around the bush when he said the collaboration was designed in part to “increase lethality and readiness.”

So that’s fun! And if Google’s recent advancements in AI tech faced a similar fate, think: Weaponized autonomous drones, equipped with private data, and a sophisticated AI. Not saying this is exactly how SkyNet starts, but, this is basically how SkyNet starts.

The counter to this argument, insomuch as there is one, is that these technological developments lead to better data, and better data leads to better object identification technology, which could also lead to more precise offensives, which could lead (theoretically) to fewer civilian casualties, or at least (again, theoretically) increased accountability on the part of the military (analog: the calculator should make it exponentially more difficult to get numbers “wrong” on your taxes, so the automated hyper-targeted death robots should make it exponentially more difficult to “accidentally” murder a school full of children).

All of which is to say: collaboration between the Department of Defense and various Silicon Valley tech companies is a dangerous game, and we have seen how quickly the balance can tilt in one direction. Having informed tech employees call out their CEOs publicly could lead to tech companies choosing their military contracts more carefully, or at least to more light being shed on who’s making what technologies — or rather, what technologies Silicon Valley coders are unknowingly working on.

More likely, it will just result in these companies being more discreet about the gobsmackingly shady (but profitable!) death machine work they’re doing. Good thing — like the rest of the world with a brain in their heads — we’re all ears.

Artificial intelligence helps to predict likelihood of life on other worlds

Developments in artificial intelligence may help us to predict the probability of life on other planets, according to new work by a team based at Plymouth University. The study uses artificial neural networks (ANNs) to classify planets into five types, estimating a probability of life in each case, which could be used in future interstellar exploration missions. The work is presented at the European Week of Astronomy and Space Science (EWASS) in Liverpool on 4 April by Mr Christopher Bishop.

Artificial neural networks are systems that attempt to replicate the way the human brain learns. They are one of the main tools used in machine learning, and are particularly good at identifying patterns that are too complex for a biological brain to process.

The team, based at the Centre for Robotics and Neural Systems at Plymouth University, have trained their network to classify planets into five different types, based on whether they are most like the present-day Earth, the early Earth, Mars, Venus or Saturn’s moon Titan. All five of these objects are rocky bodies known to have atmospheres, and are among the most potentially habitable objects in our Solar System.

Mr Bishop comments, “We’re currently interested in these ANNs for prioritising exploration for a hypothetical, intelligent, interstellar spacecraft scanning an exoplanet system at range.”

He adds, “We’re also looking at the use of large area, deployable, planar Fresnel antennas to get data back to Earth from an interstellar probe at large distances. This would be needed if the technology is used in robotic spacecraft in the future.”

Atmospheric observations — known as spectra — of the five Solar System bodies are presented as inputs to the network, which is then asked to classify them in terms of the planetary type. As life is currently known only to exist on Earth, the classification uses a ‘probability of life’ metric which is based on the relatively well-understood atmospheric and orbital properties of the five target types.

Bishop has trained the network with over a hundred different spectral profiles, each with several hundred parameters that contribute to habitability. So far, the network performs well when presented with a test spectral profile that it hasn’t seen before.
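The paper's classifier is a trained neural network; as a much simpler stand-in, a nearest-centroid classifier over made-up three-band "spectra" illustrates the basic task of mapping an atmospheric spectrum to one of the five reference types. The numbers below are invented and carry no physical meaning.

```python
# Invented three-band "spectra" for the five Solar System reference bodies.
REFERENCE = {
    "present-day Earth": [0.21, 0.04, 0.78],
    "early Earth":       [0.02, 0.10, 0.90],
    "Mars":              [0.00, 0.95, 0.03],
    "Venus":             [0.00, 0.96, 0.01],
    "Titan":             [0.01, 0.02, 0.98],
}

def classify(spectrum):
    """Return the reference type whose spectrum is closest in squared distance."""
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(spectrum, REFERENCE[ref]))
    return min(REFERENCE, key=dist)

# An unseen "exoplanet spectrum" close to the Earth reference.
print(classify([0.19, 0.05, 0.75]))  # present-day Earth
```

A real ANN, as used by the Plymouth team, learns its own decision boundaries from hundreds of spectral parameters rather than comparing against fixed centroids, but the input-to-label shape of the problem is the same.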

“Given the results so far, this method may prove to be extremely useful for categorising different types of exoplanets using results from ground-based and near Earth observatories,” says Dr Angelo Cangelosi, the supervisor of the project.

The technique may also be ideally suited to selecting targets for future observations, given the increase in spectral detail expected from upcoming space missions such as ESA’s Ariel Space Mission and NASA’s James Webb Space Telescope.

Story Source:

Materials provided by Royal Astronomical Society. Note: Content may be edited for style and length.

School of Engineering first quarter 2018 awards

Members of the MIT engineering faculty receive many awards in recognition of their scholarship, service, and overall excellence. Every quarter, the School of Engineering publicly recognizes their achievements by highlighting the honors, prizes, and medals won by faculty working in our academic departments, labs, and centers.

The following awards were given from January through March, 2018. Submissions for future listings are welcome at any time.

Lallit Anand, Department of Mechanical Engineering, was elected to the National Academy of Engineering on Feb. 7.

Polina Anikeeva, Department of Materials Science and Engineering, was awarded the Vilcek Prize on Feb. 1.

Regina Barzilay, Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, was named an Association for Computational Linguistics Fellow on Feb. 20.

János Beér and William H. Green Jr., Department of Chemical Engineering, were named Inaugural Fellows of The Combustion Institute on Feb. 22.

Angela Belcher, Department of Materials Science and Engineering and the Department of Biological Engineering, was elected to the National Academy of Engineering on Feb. 7.

Michael Birnbaum, Department of Biological Engineering and the Koch Institute for Integrative Cancer Research, was awarded a Jimmy V Foundation Scholar Grant on Feb. 25.

Tamara Broderick, Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won the Army Research Office Young Investigator Program Award on Jan. 23; she was also awarded a Sloan Research Fellowship on Feb. 15 and was honored with a National Science Foundation CAREER Award on March 15.

W. Craig Carter, Department of Materials Science and Engineering, was awarded a J-WEL Grant on Feb. 1.

Arup K. Chakraborty, Institute for Medical Engineering and Science and the Department of Chemical Engineering, was awarded a Moore Fellowship at Caltech on Jan. 1.

Edward Crawley, Department of Aeronautics and Astronautics, was inducted as a foreign member into the Russian Academy of Sciences on March 29.

Mark Drela, Department of Aeronautics and Astronautics, received the AIAA Reed Aeronautics Award on Feb. 21.

Elazer R. Edelman, Institute for Medical Engineering and Science, was honored with the Giulio Natta Medal in Chemical Engineering from the Department of Chemistry, Materials and Chemical Engineering “Giulio Natta” of Milan Polytechnic on Feb. 6; he also won the 2018 Distinguished Scientist Award from the American College of Cardiology.

Ahmed Ghoniem, Department of Mechanical Engineering, was named a fellow of The Combustion Institute on Feb. 23.

Shafi Goldwasser, Silvio Micali, and Ron Rivest, Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, were honored with BBVA Foundation Frontiers of Knowledge Awards in the Information and Communication Technologies Category on Jan. 17.

Stephen Graves, Department of Mechanical Engineering and the Sloan School of Management, was elected to the National Academy of Engineering on Feb. 7.

Paula Hammond, Department of Chemical Engineering and the Koch Institute for Integrative Cancer Research, won the American Chemical Society Award in Applied Polymer Science on Jan. 8.

Daniel Jackson, Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won the MIT Martin Luther King Jr. Leadership Award on Feb. 8.

Stefanie Jegelka, Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, was awarded a Sloan Research Fellowship on Feb. 15.

Heather Kulik, Department of Chemical Engineering, won an Office of Naval Research Young Investigator Award on Feb. 21.

John Lienhard, Department of Mechanical Engineering, was named one of the Top 25 Global Water Leaders on Jan. 10.

Barbara Liskov, Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won the IEEE Computer Society 2018 Computer Pioneer Award on Feb. 16.

Luqiao Liu, Department of Electrical Engineering and Computer Science, won the William L. McMillan Award on March 27; he was also honored with the 2017 Young Scientist Prize in the field of Magnetism by the International Union of Pure and Applied Physics on Feb. 12.

Wenjie Lu, Department of Electrical Engineering and Computer Science, was recognized by the Next Generation Workforce on Feb. 12.

Stefanie Mueller, Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won an Outstanding Dissertation Award from the Association for Computing Machinery Special Interest Group on Computer-Human Interaction (ACM SIGCHI) on Feb. 15.

Pablo A. Parrilo, Department of Electrical Engineering and Computer Science, was named a 2018 Society of Industrial Applied Mathematics Fellow on March 29.

Bryan Reimer, Center for Transportation and Logistics, won the Autos2050 Driving Innovation Award on Jan. 10.

Ronald Rivest, Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, was inducted into the National Inventors Hall of Fame on Jan. 23.

Daniela Rus, Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won the Pioneer in Robotics and Automation Award from the IEEE Robotics and Automation Society on Jan. 24.

Noelle Selin, Institute for Data, Systems, and Society, was awarded a Hans Fischer Senior Fellowship on March 23.

Devavrat Shah, Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society, won a Frank Quick Faculty Research Innovation Award on Feb. 20.

Julie Shah, Department of Aeronautics and Astronautics and the Computer Science and Artificial Intelligence Laboratory, won the 2018 Robotics and Automation Society Early Career Award on March 23.

Alex K. Shalek, Institute for Medical Engineering and Science and the Department of Chemistry, was honored with a 2018 Sloan Research Fellowship on Feb. 15.

Yang Shao-Horn, Department of Mechanical Engineering and Materials Science and Engineering, was elected to the National Academy of Engineering on Feb. 7.

Cem Tasan, Department of Materials Science and Engineering, won the Young Investigator Award on Feb. 22.

Karen Willcox, Department of Aeronautics and Astronautics, was named a 2018 Society of Industrial Applied Mathematics Fellow on March 29.

Laurence R. Young, Department of Aeronautics and Astronautics and the Institute for Medical Engineering and Science, was awarded the 2018 de Florez Award for Flight Simulation from the American Institute of Aeronautics and Astronautics on Jan. 9.

Nickolai Zeldovich, Department of Electrical Engineering and Computer Science (EECS) and the Computer Science and Artificial Intelligence Laboratory, was awarded a Faculty Research Innovation Award from EECS on Feb. 20.

The Oscar for Best Visual Effects Goes To: AI

The next breakout star in Hollywood might be an AI named Arraiy.

Arraiy is a computer vision and machine learning platform specifically designed for film and television effects.

Arraiy’s creators are training the system to rotoscope — the process of separating certain parts of footage from the background (for example, separating an actor from the green screen behind them) — using years’ worth of human-created visual effects as training material.

The ultimate goal, though, is to do it more quickly and cheaply than humans can, and just as effectively. Rotoscoping by hand can take dozens of hours, but Arraiy can do it in a fraction of the time. This gives a filmmaker a chance to see how a finished scene could look before they even leave the set. That allows films to dedicate far fewer resources to the effects, eliminating the need to reshoot scenes repeatedly if the effects aren’t quite right.

So far, Arraiy has been used to make one short film (“The Human Race”) and one music video (The Black Eyed Peas’ “Street Livin’”).

Arraiy isn’t the only company looking to bring AI to the world of special effects. Adobe and other software companies are doing the same, the New York Times reports. But Arraiy may be in a position to dominate; the company just raised $10 million in funding.

“Our aim is to make movies better, cheaper, and faster to produce by empowering creators with a practical machine learning based workflow,” Ethan Rublee, co-founder and CEO of Arraiy, said in a press release.

“We’re filmmakers, scientists, roboticists, and engineers; and we’re passionate about the opportunity to bring all of these disciplines together as we reimagine the process of making movie magic.”

Teaching machines to spot the essential

Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics.

Over the past decade machine learning has enabled ground-breaking advances in computer vision, speech recognition and translation. More recently, machine learning has also been applied to physics problems, typically for the classification of physical phases and the numerical simulation of ground states. Maciej Koch-Janusz, a researcher at the Institute for Theoretical Physics at ETH Zurich, Switzerland, and Zohar Ringel of the Hebrew University of Jerusalem, Israel, have now explored the exciting possibility of harnessing machine learning not as a numerical simulator or a ‘hypothesis tester’, but as an integral part of the physical reasoning process.

One important step in understanding a physical system consisting of a large number of entities — for example, the atoms making up a magnetic material — is to identify among the many degrees of freedom of the system those that are most relevant for its physical behaviour. This is traditionally a step that relies heavily on human intuition and experience. But now Koch-Janusz and Ringel demonstrate a machine-learning algorithm based on an artificial neural network that is capable of doing just that, as they report in the journal Nature Physics. Their algorithm takes data about a physical system without any prior knowledge about it and extracts those degrees of freedom that are most relevant to describe the system.

Technically speaking, the machine performs one of the crucial steps of one of the conceptually most profound tools of modern theoretical physics, the so-called renormalization group. The algorithm of Koch-Janusz and Ringel provides a qualitatively new approach: the internal data representations discovered by suitably designed machine-learning systems are often considered to be ‘obscure’, but the results yielded by their algorithm provide fundamental physical insight, reflecting the underlying structure of the physical system. This raises the prospect of employing machine learning in science in a collaborative fashion, combining the power of machines to distil information from vast data sets with human creativity and background knowledge.

Story Source:

Materials provided by ETH Zurich Department of Physics. Note: Content may be edited for style and length.

Professor Tom Leighton wins 2018 Marconi Prize

MIT professor of mathematics Tom Leighton has been selected to receive the 2018 Marconi Prize. The Marconi Society, dedicated to furthering scientific achievements in communications and the Internet, is honoring Leighton for his fundamental contributions to technology and the establishment of the content delivery network (CDN) industry.

Leighton ’81, a professor in the Department of Mathematics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), will receive the prize at the Marconi Society’s annual awards dinner in Bologna, Italy, on Oct. 2.

“Being recognized by the Marconi Society is an incredible honor,” said Leighton. “It’s an honor not just for me, but also for Danny Lewin, who created this company with me, and for all of the people at Akamai who have worked so hard for over two decades to make this technology real so that the internet can scale to be a secure and affordable platform where entertainment, business, and life are enabled to reach unimagined potential.”

Leighton developed the algorithms now used to deliver trillions of content requests over the internet every day. Akamai, the world’s largest cloud delivery platform, routes and replicates content over a gigantic network of distributed servers, using algorithms to find and utilize servers closest to the end user, thereby avoiding congestion within the internet.
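The core routing idea described above can be sketched in a couple of lines: among replicated edge servers, send each request to the one with the lowest measured latency to the end user. This is a cartoon of CDN server selection, not Akamai's actual algorithms; the server names and latencies are invented.

```python
def pick_server(latencies_ms):
    """latencies_ms: server -> measured round-trip time to the user (ms).
    Return the server that will serve the request fastest."""
    return min(latencies_ms, key=latencies_ms.get)

# Invented measurements for a user in New England.
latencies = {
    "edge-boston": 12.0,
    "edge-frankfurt": 95.0,
    "edge-tokyo": 160.0,
}

print(pick_server(latencies))  # edge-boston
```

Real CDN request routing also weighs server load, content availability, and link congestion, which is where the distributed algorithms Leighton and Lewin developed come in; proximity alone is only the first cut.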

“Tom’s work at MIT and with Akamai has had a groundbreaking impact in making the world a more connected place,” says Professor Daniela Rus, director of CSAIL. “His insights on web content delivery have played a key role in enabling us to share information and media online, and all of us at CSAIL are so very proud of him for this honor.”

“What is amazing about Tom is that, throughout his career, he is and has been as comfortable and talented as a researcher designing clever and efficient algorithms, as an educator teaching and mentoring our undergraduate and graduate students, as an entrepreneur turning mathematical and algorithmic ideas into a rapidly-expanding startup, and as an executive and industry leader able to weather the storm in the most difficult times and bring Akamai to a highly successful company,” says Michel Goemans, interim head of the mathematics department.

Leighton has said that Akamai’s role within the internet revolution was to end the “World Wide Wait.” World Wide Web founder and 2002 Marconi Fellow Tim Berners-Lee, who was the 3Com Founders chair at MIT’s Laboratory for Computer Science (LCS), foresaw an internet congestion issue and in 1995 challenged his MIT colleagues to invent a better way to deliver content. Leighton set out with one of his brightest students, Danny Lewin, to solve this challenge using distributed computing algorithms.

After two years of research, Leighton and Lewin discovered a solution — but then faced the challenge of convincing others that it would work. In 1997, they entered the $50K Entrepreneurship Competition run by the MIT Sloan School of Management.

“We literally went to the library and got the equivalent of ‘Business Plans for Dummies’ because, as theoretical mathematicians, we had no experience in business,” Leighton remembers. But they learned quickly from those who did, including business professionals they met through the $50K Competition.

At the time, Leighton and Lewin didn’t envision building their own company around the technology. Instead, they planned to license it to service providers. However, they found that carriers needed to be convinced that the technology would work at scale before they were interested. “Akamai was state-of-the-art in theory, meaning that it was well beyond where people were in practice. I think folks were very skeptical that it would work,” says Leighton.

While carriers were ambivalent, content providers were receptive: The internet had proven vulnerable to congestion that was crashing websites during high demand periods. So Leighton and Lewin decided to build their own content delivery network and provide content delivery as a service. Although their business plan did not win the $50K contest, it attracted enough venture capital investment to get a company started, and Leighton and Lewin incorporated Akamai in 1998.

Akamai’s first big opportunity came in 1999 with the U.S. collegiate basketball tournament known as “March Madness.” With 64 teams playing basketball during the course of a few days, millions of viewers were watching their favorite teams online, mostly from work. When ESPN and their hosting company Infoseek became overloaded with traffic, they asked if Akamai could handle 2,000 content requests per second.

Leighton and his team said yes — even though up to that point they had only been delivering one request every few minutes. “We were a startup and we believed,” said Leighton. Akamai ended up handling 3,000 requests per second, helping ESPN get back online and run six times faster than it would on a normal traffic day.

Akamai’s technology and viability were proven; the company went public in 1999, earning millions for several of its young employees. But when the tech bubble burst the next year, Akamai’s stock plummeted and the firm faced the prospect of retrenchment. Then, on September 11, 2001, Danny Lewin was killed aboard American Airlines Flight 11 in the terrorist attack on the Twin Towers. Akamai employees had to set aside their personal grief and complete emergency integrations to restore client sites that had crashed in the overwhelming online traffic created that day.

Akamai rebounded from that dark period, and over the years evolved from delivering static images to handling dynamic content and real-time applications like streaming video. Today, Akamai has over 240,000 servers in over 130 countries and within more than 1,700 networks around the world, handling about 20 to 30 percent of the traffic on the internet. Akamai accelerates trillions of internet requests each day, protects web and mobile assets from targeted application and DDoS attacks, and enables internet users to have a seamless and secure experience across different device types and network conditions. The company has created new technology that leverages machine learning to analyze real-user behavior and continuously optimize a website’s performance, as well as algorithms that differentiate between human users and bots. Akamai’s security business has surpassed half a billion dollars per year in revenue, making it the fastest-growing part of the company.

“Dr. Leighton is the embodiment of what the Marconi Prize honors,” says Vint Cerf, Marconi Society chair and chief internet evangelist at Google. “He and his research partner, Danny Lewin, tackled one of the major problems limiting the power of the internet, and when they developed the solution, they founded Akamai — now one of the premier technology companies in the world — to bring it to market. This story is truly remarkable.”

By receiving the Marconi Prize, Leighton joins a distinguished list of scientists whose work underlies all of modern communication technology, from the microprocessor to the internet, and from optical fiber to the latest wireless breakthroughs. Other Marconi Fellows include 2007 winner Ron Rivest, an Institute Professor, a member of CSAIL and the lab’s Theory of Computation Group, and a founder of its Cryptography and Information Security Group; and LIDS adjunct Dave Forney, ScD (EE) ’65, who received it in 1997.  

In 2016, the MIT Graduate School Council awarded Leighton, jointly with Dean of Science Michael Sipser, the Irwin Sizer Award, for most significant improvements to MIT education, specifically for their development of the successful 18C major: Mathematics with Computer Science. Leighton was also inducted into the National Inventors Hall of Fame in 2017 for Content Delivery Network methods; Danny Lewin was also inducted posthumously.

Leighton said he plans to donate the $100,000 Marconi Prize to The Akamai Foundation, with the goal of promoting the pursuit of excellence in mathematics in grades K-12 to encourage the next generation of technology innovators.

Topics: Awards, honors and fellowships, Faculty, Internet, Mathematics, Technology and society, History of science, School of Science, Computer Science and Artificial Intelligence Laboratory (CSAIL), Algorithms, Computer science and technology, Industry, Alumni/ae, School of Engineering

Soft robotic fish swims alongside real ones in coral reefs

This month scientists published rare footage of one of the Arctic’s most elusive sharks. The findings demonstrate that, even with many technological advances in recent years, it remains a challenging task to document marine life up close.

But MIT computer scientists believe they have a possible solution: using robots.

In a paper out today, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled “SoFi,” a soft robotic fish that can independently swim alongside real fish in the ocean.

During test dives in the Rainbow Reef in Fiji, SoFi swam at depths of more than 50 feet for up to 40 minutes at a time, nimbly handling currents and taking high-resolution photos and videos using (what else?) a fisheye lens.

Using its undulating tail and a unique ability to control its own buoyancy, SoFi can swim in a straight line, turn, or dive up or down. The team also used a waterproofed Super Nintendo controller and developed a custom acoustic communications system that enabled them to change SoFi’s speed and have it make specific moves and turns.

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” says CSAIL PhD candidate Robert Katzschmann, lead author of the new journal article published today in Science Robotics. “We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”

Katzschmann worked on the project and wrote the paper with CSAIL director Daniela Rus, graduate student Joseph DelPreto and former postdoc Robert MacCurdy, who is now an assistant professor at the University of Colorado at Boulder.

How it works

Existing autonomous underwater vehicles (AUVs) have traditionally been tethered to boats or powered by bulky and expensive propellers.

In contrast, SoFi has a much simpler and more lightweight setup, with a single camera, a motor, and the same lithium polymer battery that’s found in consumer smartphones. To make the robot swim, the motor pumps water into two balloon-like chambers in the fish’s tail that operate like a set of pistons in an engine. As one chamber expands, the tail bends and flexes to one side; when the actuators push water into the other chamber, the tail bends and flexes in the other direction.

These alternating actions create a side-to-side motion that mimics the movement of a real fish. By changing its flow patterns, the hydraulic system enables different tail maneuvers that result in a range of swimming speeds, with an average speed of about half a body length per second.
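The alternating actuation described above can be sketched as an idealized control loop. The function names, beat frequency, and deflection angle here are illustrative assumptions, not the actual CSAIL control code:

```python
# Sketch of SoFi-style alternating tail actuation: the pump pressurizes
# one of two tail chambers at a time, producing a sinusoidal side-to-side
# tail motion. All parameters are made-up illustration values.
import math

def tail_angle(t, freq_hz=1.0, max_angle_deg=30.0):
    """Idealized tail deflection (degrees) at time t (seconds)."""
    return max_angle_deg * math.sin(2 * math.pi * freq_hz * t)

def active_chamber(t, freq_hz=1.0):
    """Which chamber the pump pressurizes at time t."""
    return "left" if tail_angle(t, freq_hz) >= 0 else "right"

# Sample one full tail beat (1 second) at 10 Hz: the pump alternates
# between the two chambers halfway through the cycle.
samples = [active_chamber(t / 10) for t in range(10)]
print(samples)
```

One full beat per second producing about half a body length per second of travel matches the average speed the article reports.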

“The authors show a number of technical achievements in fabrication, powering, and water resistance that allow the robot to move underwater without a tether,” says Cecilia Laschi, a professor of biorobotics at the Sant’Anna School of Advanced Studies in Pisa, Italy. “A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef and because it can be better accepted by the marine species.”

The entire back half of the fish is made of silicone rubber and flexible plastic, and several components are 3-D-printed, including the head, which holds all of the electronics. To reduce the chance of water leaking into the machinery, the team filled the head with a small amount of baby oil, since it’s a fluid that will not compress from pressure changes during dives.

Indeed, one of the team’s biggest challenges was to get SoFi to swim at different depths. The robot has two fins on its side that adjust the pitch of the fish for up and down diving. To adjust its position vertically, the robot has an adjustable weight compartment and a “buoyancy control unit” that can change its density by compressing and decompressing air.
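The physics behind the buoyancy control unit is simple to state: a body is neutrally buoyant when its average density matches the surrounding water, so compressing or decompressing air in a fixed-mass body shifts it between sinking, hovering, and rising. A back-of-the-envelope check, using hypothetical numbers rather than SoFi’s actual mass and volume:

```python
# Buoyancy sketch: average density (mass / displaced volume) versus
# seawater density decides whether a body sinks, hovers, or rises.
# The robot mass and volumes below are illustrative, not SoFi's specs.
RHO_WATER = 1025.0  # kg/m^3, approximate seawater density

def buoyant_state(mass_kg, volume_m3, tolerance=5.0):
    density = mass_kg / volume_m3
    if abs(density - RHO_WATER) < tolerance:
        return "neutral"
    return "sinks" if density > RHO_WATER else "rises"

mass = 1.6  # kg, hypothetical robot mass
print(buoyant_state(mass, 0.00150))  # air compressed -> denser than water
print(buoyant_state(mass, 0.00156))  # near neutral buoyancy
print(buoyant_state(mass, 0.00165))  # air decompressed -> less dense
```

Changing only the air volume, with mass fixed, moves the robot through all three regimes — which is exactly what the buoyancy control unit exploits.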

Katzschmann says that the team developed SoFi with the goal of being as nondisruptive as possible in its environment, from the minimal noise of the motor to the ultrasonic emissions of the team’s communications system, which sends commands using frequencies of 30 to 36 kilohertz.

“The robot is capable of close observations and interactions with marine life and appears to not be disturbing to real fish,” says Rus.

The project is part of a larger body of work at CSAIL focused on soft robots, which have the potential to be safer, sturdier, and more nimble than their hard-bodied counterparts. Soft robots are in many ways easier to control than rigid robots, since researchers don’t have to worry quite as much about having to avoid collisions.

“Collision avoidance often leads to inefficient motion, since the robot has to settle for a collision-free trajectory,” says Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT. “In contrast, a soft robot is not just more likely to survive a collision, but could use it as information to inform a more efficient motion plan next time around.”

As next steps the team will be working on several improvements on SoFi. Katzschmann plans to increase the fish’s speed by improving the pump system and tweaking the design of its body and tail.

He says that they also plan to soon use the on-board camera to enable SoFi to automatically follow real fish, and to build additional SoFis for biologists to study how fish respond to different changes in their environment.

“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” says Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”

This project was supported by the National Science Foundation.

Topics: Research, Robotics, Computer Science and Artificial Intelligence Laboratory (CSAIL), Robots, Electrical Engineering & Computer Science (eecs), Soft robotics, Actuators, Distributed Robotics Laboratory, School of Engineering, Artificial intelligence, 3-D printing, Additive manufacturing, Biomimetics, Oceanography and ocean engineering, National Science Foundation (NSF)

Pipe-crawling robot will help decommission DOE nuclear facility

A pair of autonomous robots developed by Carnegie Mellon University’s Robotics Institute will soon be driving through miles of pipes at the U.S. Department of Energy’s former uranium enrichment plant in Piketon, Ohio, to identify uranium deposits on pipe walls.

The CMU robot has demonstrated it can measure radiation levels more accurately from inside the pipe than is possible with external techniques. In addition to savings in labor costs, its use significantly reduces hazards to workers who otherwise must perform external measurements by hand, garbed in protective gear and using lifts or scaffolding to reach elevated pipes.

DOE officials estimate the robots could save tens of millions of dollars in completing the characterization of uranium deposits at the Portsmouth Gaseous Diffusion Plant in Piketon, and save perhaps $50 million at a similar uranium enrichment plant in Paducah, Kentucky.

“This will transform the way measurements of uranium deposits are made from now on,” predicted William “Red” Whittaker, robotics professor and director of the Field Robotics Center.

Heather Jones, senior project scientist, will present two technical papers about the robot on Wednesday at the Waste Management Conference in Phoenix, Arizona. CMU also will be demonstrating a prototype of the robot during the conference.

CMU is building two of the robots, called RadPiper, and will deliver the production prototype units to DOE’s sprawling 3,778-acre Portsmouth site in May. RadPiper employs a new “disc-collimated” radiation sensor invented at CMU. The CMU team, led by Whittaker, began the project last year. The team worked closely with DOE and Fluor-BWXT Portsmouth, the decommissioning contractor, to build a prototype on a tight schedule and test it at Portsmouth last fall.

Shuttered since 2000, the plant began operations in 1954 and produced enriched uranium, including weapons-grade uranium. With 10.6 million square feet of floor space, it is DOE’s largest facility under roof, with three large buildings whose enrichment process equipment spans the area of 158 football fields. The process buildings contain more than 75 miles of process pipe.

Finding the uranium deposits, necessary before DOE decontaminates, decommissions and demolishes the facility, is a herculean task. In the first process building, human crews over the past three years have performed more than 1.4 million measurements of process piping and components manually and are close to declaring the building “cold and dark.”

“With more than 15 miles of piping to be characterized in the next process building, there is a need to seek a smarter method,” said Rodrigo V. Rimando, Jr., director of technology development for DOE’s Office of Environmental Management. “We anticipate a labor savings on the order of an eight-to-one ratio for the piping accomplished by RadPiper.” Even with RadPiper, nuclear deposits must be identified manually in some components.

RadPiper will operate initially in pipes measuring 30 inches and 42 inches in diameter and will characterize radiation levels in each foot-long segment of pipe. Those segments with potentially hazardous amounts of uranium-235, the fissile isotope of uranium used in nuclear reactors and weapons, will be removed and decontaminated. The vast majority of the plant’s piping will remain in place and will be demolished safely along with the rest of the facility.

The tetherless robot moves through the pipe at a steady pace atop a pair of flexible tracks. Though the pipes run in straight sections, the autonomous robot is equipped with lidar and a fisheye camera to detect obstructions ahead, such as closed valves, Jones said. After completing a run of pipe, the robot automatically returns to its launch point. Integrated data analysis and report generation free nuclear analysts from time-consuming calculations and make reports available the same day.

The robot’s disc-collimated sensing instrument uses a standard sodium iodide sensor to count gamma rays. The sensor is positioned between two large lead discs. The lead discs block gamma rays from uranium deposits that lie beyond the one-foot section of pipe that is being characterized at any given time. Whittaker said CMU is seeking a patent on the instrument.
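The per-segment workflow the article describes — measure one foot-long section at a time, flag the sections whose gamma counts are too high — can be sketched as follows. The count rates and action limit below are made-up illustration values, not DOE criteria or RadPiper’s actual thresholds:

```python
# Sketch of per-segment pipe characterization: the collimated detector
# sees one foot-long section at a time; sections whose gamma count rate
# exceeds an action limit are flagged for removal and decontamination.
# All numbers are hypothetical illustration values.

ACTION_LIMIT_CPS = 500  # hypothetical counts-per-second threshold

def flag_segments(counts_per_segment):
    """counts_per_segment: gamma count rates, one per foot of pipe.
    Returns the 0-based indices of segments exceeding the action limit."""
    return [i for i, cps in enumerate(counts_per_segment)
            if cps > ACTION_LIMIT_CPS]

run = [120, 130, 940, 115, 620, 118]  # simulated crawl down six feet of pipe
print(flag_segments(run))  # segments 2 and 4 would be cut out
```

Everything below the limit stays in place and is demolished with the rest of the facility, as the article notes.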

The Robotics Institute and Whittaker have extensive experience with robots in nuclear facilities, including the design and construction of robots to aid with the cleanup of the damaged Three Mile Island reactor building in Pennsylvania and the crippled Chernobyl reactor in Ukraine.

DOE has paid CMU $1.4 million to develop the robots as part of what CMU calls the Pipe Crawling Activity Measurement System.

In addition to the Portsmouth and Paducah plants, robots could be useful elsewhere in DOE’s defense nuclear cleanup program, which is not even half complete, Rimando said. Other sites where robots might be used are the Savannah River Site in Aiken, South Carolina, and the Hanford Site in Richland, Washington.

“With at least 50 more years of nuclear cleanup to be performed, the Robotics Institute could serve as a major pipeline of roboticists for DOE’s next several workforce generations,” he added.

AI translates news just as well as a human would

In Brief

A Microsoft AI has achieved so-called “human parity” in the translation of text from Chinese to English, translating sentences from a sample of news stories like a professional human translator. Next step: real-time news.

Translation was traditionally considered a job in which the magic human touch would always ultimately trump a machine. That may no longer be the case, as a Microsoft AI translator just nailed one of the hardest challenges: translating Chinese into English with accuracy comparable to that of a bilingual person.

Chinese is so difficult a language that it takes years for a non-native speaker to just about manage the 3,000 characters needed to read a newspaper. Previous attempts at automatic translation have amused the world, with gems such as “hand grenade” to indicate a fire extinguisher or a mysterious “whatever” dish on a restaurant menu.

“For alphabetic languages, there’s what they call a virtuous loop between the writing, speaking and listening — those three categories constitute one composite skill,” linguist David Moser told the Los Angeles Times. “But the problem with Chinese […] is it breaks that loop. Speaking does not necessarily help your reading. Reading doesn’t necessarily help your writing.” These are three different skills that, when learning Chinese, have to be mastered in parallel.

After years of working on what seemed a nearly impossible feat, Microsoft engineers finally achieved so-called “human parity” in translating a sample of sentences from Chinese news articles into English.

The team used a sample of 2,000 sentences from online newspapers that had been previously translated by a professional. Not only did they compare the machine’s work with that of the human translator, but they also hired a team of independent bilingual consultants to keep an eye on the process.

“Hitting human parity in a machine translation task is a dream that all of us have had,” Xuedong Huang, a technical fellow in charge of Microsoft’s speech, natural language and machine translation told the company’s blog. “We just didn’t realize we’d be able to hit it so soon.”

Teaching a system to translate a language is particularly complex because two different translations of the same word may sound equally right. People choose different words depending on context, mood and who they are communicating with.

“Machine translation is much more complex than a pure pattern recognition task,” Ming Zhou, assistant managing director of Microsoft Research Asia told the Microsoft blog. “People can use different words to express the exact same thing, but you cannot necessarily say which one is better.”

The next challenge, he said, will be to test the new AI translator on real-time news articles.

OpenAI Wants to Make Safe AI, but That May Be an Impossible Task

True artificial intelligence is on its way, and we aren’t ready for it. Just as our forefathers had trouble visualizing everything from the modern car to the birth of the computer, it’s difficult for most people to imagine how much truly intelligent technology could change our lives as soon as the next decade — and how much we stand to lose if AI goes out of our control.

Fortunately, there’s a league of individuals working to ensure that the birth of artificial intelligence isn’t the death of humanity. From Max Tegmark’s Future of Life Institute to the Harvard Kennedy School of Government’s Future Society, the world’s most renowned experts are joining forces to tackle one of the most disruptive technological advancements (and greatest threats) humanity will ever face.

Perhaps the most famous organization to be born from this existential threat is OpenAI. It’s backed by some of the most respected names in the industry: Elon Musk, the SpaceX billionaire who co-founded OpenAI but departed the board this year to avoid conflicts of interest with Tesla; Sam Altman, the president of Y Combinator; and Peter Thiel, of PayPal fame, just to name a few. If anyone has a chance at securing the future of humanity, it’s OpenAI.

But there’s a problem. When it comes to creating safe AI and regulating this technology, these great minds have little clue what they’re doing. They don’t even know where to begin.

The Dawn of a New Battle

While traveling in Dubai, I met with Michael Page, the Policy and Ethics Advisor at OpenAI. Beneath the glittering skyscrapers of the self-proclaimed “city of the future,” he told me of the uncertainty that he faces. He spoke of the questions that don’t have answers, and the fantastically high price we’ll pay if we don’t find them.

The conversation began when I asked Page about his role at OpenAI. He responded that his job is to “look at the long-term policy implications of advanced AI.” If you think that this seems a little intangible and poorly defined, you aren’t the only one. I asked Page what that means, practically speaking. He was frank in his answer: “I’m still trying to figure that out.” 


Page attempted to paint a clearer picture of the current state of affairs by noting that, since true artificial intelligence doesn’t actually exist yet, his job is a little harder than most.

He noted that, when policy experts consider how to protect the world from AI, they are really trying to predict the future. They are trying to, as he put it, “find the failure modes … find if there are courses that we could take today that might put us in a position that we can’t get out of.” In short, these policy experts are trying to safeguard the world of tomorrow by anticipating issues and acting today. The problem is that they may be faced with an impossible task.

Page is fully aware of this uncomfortable possibility, and readily admits it. “I want to figure out what can we do today, if anything. It could be that the future is so uncertain there’s nothing we can do,” he said.

Our problems don’t stop there. It’s also possible that we’ll figure out what we need to do in order to protect ourselves from AI’s threats, and realize that we simply can’t do it. “It could be that, although we can predict the future, there’s not much we can do because the technology is too immature,” Page said.

This lack of clarity isn’t really surprising, given how young this industry is. We are still at the beginning, and so all we have are predictions and questions. Page and his colleagues are still trying to articulate the problem they’re trying to solve, figure out what skills we need to bring to the table, and what policy makers will need to be in on the game.

As such, when asked for a concrete prediction of where humanity and AI will together be in a year, or in five years, Page didn’t offer false hope: “I have no idea,” he said.

However, Page and OpenAI aren’t alone in working on finding the solutions. He therefore hopes such solutions may be forthcoming: “Hopefully, in a year, I’ll have an answer. Hopefully, in five years, there will be thousands of people thinking about this,” Page said.

Well then, perhaps it’s about time we all get our thinking caps on.

Origami-inspired self-locking foldable robotic arm

A research team at Seoul National University, led by Professor Kyu-Jin Cho, has developed an origami-inspired robotic arm that is foldable, self-assembling, and highly rigid.

They developed the novel robotic arm using a concept of variable stiffness. The design makes it possible to change the arm’s shape with a single wire, raising the possibility of practical use of origami structures. The robotic arm is lightweight, can fold flat and extend like an automatic umbrella, and even becomes instantly stiff.

The key principle is a collapsible locker, which enables the robotic arm to overcome the drawbacks of origami-inspired structures: they are hard to make stiff against external forces and hard to actuate.

The variable stiffness mechanism is based on an origami principle of perpendicular folding: two perpendicular fold lines constrain each other’s movement. Using this principle, a hexagonal structure (40 × 40 × 100 mm) weighing less than 30 g can withstand more than 12 kg of compressive load. At the same time, the lockers can be easily unlocked and the structure folded flat by pulling a single wire with a small force.
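The quoted figures imply a striking load-to-weight ratio, which a line of arithmetic makes explicit:

```python
# Load-to-weight ratio implied by the article's figures: a structure
# weighing under 30 g supports more than 12 kg of compressive load.
structure_mass_g = 30.0
supported_load_g = 12_000.0
print(supported_load_g / structure_mass_g)  # at least 400:1
```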

The benefits of the foldable robotic arm are greatest when it is attached to drones, where the weight and size constraints are the most extreme. In one trial, the drone unfolded the robotic arm, picked up an object in a ditch, and filmed the trees. When the robotic arm is not in use, it folds flat for convenient maneuvering and easy takeoff and landing. The proposed variable stiffness mechanism can also be applied to other types of robots and structures in extreme environments such as polar regions, deserts, underwater, and space.

Professor Cho said, “Soft robots have great advantages in their flexible movement, but they have a limitation in that they cannot support high load without deformation. This robotic arm uses the variable stiffness technology which gains merits of both rigid and soft robots. With this property, the robotic arm can be folded flat when not in use and can be stiff when necessary. In addition, the arm is made of composite of tough ripstop fabric and specially handled strong PET film for the practical use.”

(The researchers also include Suk-Jun Kim, Dae-Young Lee, and Gwang-Pil Jung, professor at SeoulTech.)

Story Source:

Materials provided by Seoul National University. Note: Content may be edited for style and length.

This Bot Wants To Pay People When Their Flights Get Cheaper

In Brief

DoNotPay, the AI-powered chatbot that helps users dispute parking tickets, is now expanding its expertise to cover price protection for airline tickets.

Remember that online artificial intelligence (AI) that helped people dispute their parking tickets? Well, DoNotPay — the parking lawyer chatbot developed by Stanford University student Joshua Browder a few years ago — has since expanded its services. DoNotPay’s latest addition is retail price protection for air travel and hotel accommodations.

Browder launched DoNotPay’s flight and hotel price protection service on March 6, 2018 — and just like its parking ticket service, it’s completely free and accessible online. Regular travelers know that airplane ticket prices usually change, and DoNotPay can help get them rebooked and refunded whenever that happens.

Price Protection

DoNotPay’s new service works by regularly checking every U.S. airline and air travel booking site for price changes. Once you register for free, the bot automatically looks for the travel confirmations in your email and checks for changes in price about 17,000 times a day — or roughly every five seconds — until your flight departs. This isn’t an arbitrary rate: some routes reportedly change prices every six seconds, according to the Economist.
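The arithmetic behind that polling rate, plus the refund logic described next, can be sketched as follows. The function name is hypothetical; DoNotPay’s internals are not public:

```python
# 17,000 checks per day works out to roughly one check every 5 seconds.
SECONDS_PER_DAY = 24 * 60 * 60
checks_per_day = 17_000
print(SECONDS_PER_DAY / checks_per_day)  # about 5.1 seconds between checks

def refund_due(booked_price, current_price):
    """Refund owed to the traveler when the fare drops; zero otherwise."""
    return max(0.0, booked_price - current_price)

print(refund_due(560.0, 120.0))  # the article's NY-to-Hawaii example: 440.0
print(refund_due(560.0, 600.0))  # price went up: no refund
```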

When the robot lawyer finds that the price has dropped, it scans the exact terms of your ticket to “find a legal loophole to negotiate a cheaper price or rebook you.” DoNotPay then moves you to the cheaper flight and has the airline refund the difference directly to your account. The process repeats until DoNotPay finds the lowest possible price offered for your fare class, or at least until just before your flight’s scheduled departure.

“For example, if someone books a $560 flight from New York to Hawaii on United Airlines and the price drops to $120, they are automatically paid $440 by the airline,” Browder explained in a press release.

With DoNotPay, air travelers could save at least $450 a year, according to beta tests that Browder conducted with a few hundred people.

DoNotPay is easily accessible. Image credit: Joshua Browder

It’s not surprising that AI thrives in the legal profession. Most — if not all — of the time, legal documents tend to be quite verbose, and it’s difficult for people who lack legal expertise to parse the often-complicated fine print on the back of parking tickets or airline booking contracts.

With AI’s ability to quickly sift through legal jargon, chatbots like DoNotPay can easily become anyone’s instant, easy-to-use online lawyer.

Is your smile male or female?

The dynamics of how men and women smile differ measurably, according to new research, enabling artificial intelligence (AI) to automatically assign gender purely based on a smile.

Although automatic gender recognition is already available, existing methods use static images and compare fixed facial features. The new research, by the University of Bradford, is the first to use the dynamic movement of the smile to automatically distinguish between men and women.

Led by Professor Hassan Ugail, the team mapped 49 landmarks on the face, mainly around the eyes, mouth and down the nose. They used these to assess how the face changes as we smile, driven by the underlying muscle movements — including both changes in the distances between the different points and the ‘flow’ of the smile: how much, how far and how fast the different points on the face move as the smile is formed.

They then tested whether there were noticeable differences between men and women — and found that there were, with women’s smiles being more expansive.

Lead researcher Professor Hassan Ugail of the University of Bradford said: “Anecdotally, women are thought to be more expressive in how they smile, and our research has borne this out. Women definitely have broader smiles, expanding their mouth and lip area far more than men.”

The team created an algorithm using their analysis and tested it against video footage of 109 people as they smiled. The computer was able to correctly determine gender in 86% of cases and the team believe the accuracy could easily be improved.
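A toy version of the idea conveys how dynamic smile features differ from static ones: summarize a smile by how far the landmarks travel while it forms (its “expansiveness”), then classify with a simple threshold. This is a hypothetical stand-in, not the Bradford team’s algorithm, which uses 49 landmarks and a proper machine-learned classifier:

```python
# Toy dynamic-smile classifier: total landmark displacement across video
# frames as an "expansiveness" feature. Landmark data, threshold, and
# function names are illustrative assumptions.
import math

def expansiveness(frames):
    """frames: list of landmark snapshots, each a list of (x, y) points.
    Returns the total distance travelled by all landmarks."""
    total = 0.0
    for prev, cur in zip(frames, frames[1:]):
        for (x0, y0), (x1, y1) in zip(prev, cur):
            total += math.hypot(x1 - x0, y1 - y0)
    return total

def classify(frames, threshold=4.0):
    # Per the study, women's smiles were more expansive on average.
    return "female" if expansiveness(frames) > threshold else "male"

# Two synthetic smiles, three frames of two mouth-corner landmarks each:
broad  = [[(0, 0), (10, 0)], [(-2, 1), (12, 1)], [(-4, 2), (14, 2)]]
subtle = [[(0, 0), (10, 0)], [(-0.5, 0.2), (10.5, 0.2)], [(-1, 0.4), (11, 0.4)]]
print(classify(broad), classify(subtle))
```

Because the feature is built from motion rather than fixed geometry, it illustrates why the researchers expect the signature to survive changes in external appearance.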

“We used a fairly simple machine classification for this research as we were just testing the concept, but more sophisticated AI would improve the recognition rates,” said Professor Ugail.

The underlying purpose of this research is more about trying to enhance machine learning capabilities, but it has raised a number of intriguing questions that the team hopes to investigate in future projects.

One is how the machine might respond to the smile of a transgender person, and the other is the impact of plastic surgery on recognition rates.

“Because this system measures the underlying muscle movement of the face during a smile, we believe these dynamics will remain the same even if external physical features change, following surgery, for example,” said Professor Ugail. “This kind of facial recognition could become a next-generation biometric, as it’s not dependent on one feature, but on a dynamic that’s unique to an individual and would be very difficult to mimic or alter.”

The research is published in The Visual Computer: International Journal of Computer Graphics.

Story Source:

Materials provided by University of Bradford. Note: Content may be edited for style and length.

3 Questions: The future of transportation systems

Daniel Sperling is a distinguished professor of civil engineering and environmental science and policy at the University of California at Davis, where he is also founding director of the school’s Institute of Transportation Studies. Sperling, a member of the California Air Resources Board, recently gave a talk at the MIT Energy Initiative (MITEI) detailing major technological and societal developments that have the potential to change transportation for the better — or worse. Following the event, Sperling spoke to MITEI about policy, science, and how to harness these change agents for the public good.

(Sperling’s talk is also available as a podcast.)

Q: What are the downsides of the “car-centric monoculture,” as you put it, that we find ourselves living in?

A: Cars provide great value, which is why they are so popular. But too much of a good thing can be destructive. We’ve gone too far. We’ve created a transportation system made up of massive road systems and parking infrastructure that is incredibly expensive for travelers and for society to build and maintain. It is also very energy- and carbon-intensive, and disadvantages those unable to buy and drive cars.

Q: Can you tell me about the three transportation revolutions that you say are going to transform mobility over the next few decades?

A: The three revolutions are electrification, automation, and pooling. Electrification is already under way, with increasing numbers of pure battery electric vehicles, plug-in hybrid vehicles that combine batteries and combustion engines, and fuel cell electric vehicles that run on hydrogen. I currently own a hydrogen car (Toyota Mirai) and have owned two different battery electric cars (Nissan Leaf and Tesla).

A second revolution, automation, is not yet under way, at least in the form of driverless cars. But it is poised to be truly transformational and disruptive for many industries — including automakers, rental cars, infrastructure providers, and transit operators. While partially automated cars are already here, true transformations await fully driverless vehicles, which are not likely to exist in significant numbers for a decade or more.

Perhaps the most pivotal revolution, at least in terms of assuring that the automation revolution serves the public interest, is pooling, or sharing. Automation without pooling would lead to large increases in vehicle use. With pooling, though, automation would lead to reductions in vehicle use, but increases in mobility (passenger miles traveled) by mobility-disadvantaged travelers who are too poor or disabled to drive.

Q: You’ve mentioned that how these revolutions play out depends on which cost factor dominates — money or time. The result would either be heaven or hell for our environment and cities. Explain the nuances of that situation.

A: With pooled, automated and electric cars, the cost of travel would drop precipitously as a result of using cars intensively — spreading costs over 100,000 miles or more per year — having no driver costs, and having multiple riders share the cost. The monetary cost could be as little as 15 cents per mile, versus 60 cents per mile for an individually owned automated car traveling 15,000 miles per year. The time cost of car occupants, on the other hand, is near zero because they don’t need to pay attention to driving. They can work, sleep, text, drink, and read. Thus, even if the cost of owning and operating the vehicle is substantial, the time savings would be so beneficial that many, perhaps most, would choose car ownership over subscribing to an on-demand service. In fact, most people in affluent countries would likely choose the huge time savings, worth $10, $20, or more per hour, over low travel costs. Therefore, policy will be needed to assure that the public interest — environmental externalities, urban livability, access by the mobility disadvantaged — is favored over the gains of a minority of individuals.
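Sperling’s per-mile figures can be checked with a little arithmetic. The break-even framing below is our illustration, not his analysis: it asks how many minutes of daily detour-and-wait time a pooled service could impose before its monetary savings are erased, at his example time values.

```python
# Sperling's illustrative figures: ~$0.15/mile for pooled automated
# electric travel vs ~$0.60/mile for an individually owned automated
# car driven 15,000 miles per year.
MILES = 15_000

pooled = 0.15 * MILES          # $2,250 per year
owned = 0.60 * MILES           # $9,000 per year
savings = owned - pooled       # $6,750 per year saved by pooling

# At a given value of time, how much daily detour-and-wait time
# would erase that saving?
for value_per_hour in (10, 20):
    breakeven_min = savings / value_per_hour / 365 * 60
    print(f"${value_per_hour}/hr -> break-even at {breakeven_min:.0f} min/day")
```

At $20 per hour, less than an hour of added travel time per day already outweighs the $6,750 annual saving, which is why he expects many affluent travelers to keep owning cars.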

Topics: 3 Questions, MIT Energy Initiative, Carbon, Energy, Transportation, Sustainability, Emissions, Climate change, Policy, Economics, Electric vehicles, Autonomous vehicles, Artificial intelligence, Urban planning

Bring This Robot to a Disco, and It’ll Dance With You

In Brief

A four-legged robot that can dance has been developed by a team at ETH Zurich. The robot can analyze music in real time and generate movement to match its tempo.

If you’re one who thinks the ability to dance is sorely missing from modern robotics, the fine folks over at ETH Zurich’s Department of Mechanical and Process Engineering have a treat for you. The team has developed a robot – a quadrupedal bot called the ANYmal – with the ability to analyze music and create its own choreography.

Other robots have shown the ability to cut a rug to varying degrees, like the stripper robots on display during the most recent Consumer Electronics Show. Even NAO robots have synchronized skills. However, what makes the ANYmal special is its ability to react to music in real time and not rely on a programmed set of moves.

The accompanying software allows the robot to analyze the beats per minute (BPM) of music, create movement to match those speeds, and then check to ensure the devised movement speed matches the speed of the music. The robot even has the ability to change up its moves when different songs are played back to back (although, to be clear, most of those “moves” involve shaking a robo-behind back and forth).
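The match-then-verify loop described above can be sketched as follows. This is a minimal illustration of the concept; the function names and tolerance are our assumptions, not ANYmal’s actual software.

```python
def beat_period(bpm):
    """Seconds per beat at a given tempo."""
    return 60.0 / bpm

def choreograph(bpm, base_cycle=1.2):
    """Scale a base move so one cycle spans a whole number of beats.

    base_cycle is the move's natural duration in seconds; the target
    cycle snaps to the nearest whole-beat multiple.
    """
    period = beat_period(bpm)
    beats_per_cycle = max(1, round(base_cycle / period))
    return beats_per_cycle * period   # target cycle duration in seconds

def in_sync(measured_cycle, bpm, tol=0.05):
    """Feedback check: does the executed motion match the music?"""
    target = choreograph(bpm)
    return abs(measured_cycle - target) <= tol * target
```

If `in_sync` fails, the robot would re-time its motion; the same check lets it adapt when a new song with a different BPM starts.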

“We wanted to have it so if you bring the robot to a disco, it can figure out the music, create a choreography, and sync up its motion,” Péter Fankhauser, a doctoral student working on the team, tells The Verge. “We’re also interested in creating lifelike movements. Dancing is a very human and motion-intensive action, so it’s challenging to mimic.”


This may seem like a lot of novelty without much significance to robotics as a whole, but looks can be misleading. The bot’s ability to provide itself with a feedback loop could have real applications in fields such as surveying, and search and rescue. Fankhauser compares this with a human’s ability to imagine how they will complete a task. In theory, this type of technology could someday save lives.

For now though, the motivation is a little different. “We do a lot of serious things with the robot, but this is the fun side,” says Fankhauser. “For us, for students, for everybody involved, it’s enjoyable to do these things, and really explore the capabilities of the hardware.”

The autonomous “selfie drone”

If you’re a rock climber, hiker, runner, dancer, or anyone who likes recording themselves while in motion, a personal drone companion can now do all the filming for you — completely autonomously.

Skydio, a San Francisco-based startup founded by three MIT alumni, is commercializing an autonomous video-capturing drone — dubbed by some as the “selfie drone” — that tracks and films a subject, while freely navigating any environment.

Called R1, the drone is equipped with 13 cameras that capture omnidirectional video. It launches and lands through an app — or by itself. On the app, the R1 can also be preset to certain filming and flying conditions or be controlled manually.

The concept for the R1 started taking shape almost a decade ago at MIT, where the co-founders — Adam Bry SM ’12, Abraham Bacharach PhD ’12, and Matt Donahoe SM ’11 — first met and worked on advanced, prize-winning autonomous drones. Skydio launched in 2014 and is releasing the R1 to consumers this week.

“Our goal with our first product is to deliver on the promise of an autonomous flying camera that understands where you are, understands the scene around it, and can move itself to capture amazing video you wouldn’t otherwise be able to get,” says Bry, co-founder and CEO of Skydio.

Deep understanding

Existing drones, Bry says, generally require a human pilot. Some offer pilot-assist features that aid the human controller. But that’s the equivalent of having a car with adaptive cruise control — which automatically adjusts vehicle speed to maintain a safe distance from the cars ahead, Bry says. Skydio, on the other hand, “is like a driverless car with level-four autonomy,” he says, referring to the second-highest level of vehicle automation.

R1’s system integrates advanced algorithmic components spanning perception, planning, and control, which give it unique intelligence “that’s analogous to how a person would navigate an environment,” Bry says.

On the perception side, the system uses computer vision to determine the location of objects. Using a deep neural network, it compiles information on each object and identifies each individual by, say, clothing and size. “For each person it sees, it builds up a unique visual identification to tell people apart and stays focused on the right person,” Bry says.

That data feeds into a motion-planning system, which pinpoints a subject’s location and predicts their next move. It also recognizes maneuvering limits in one area to optimize filming. “All information is constantly traded off and balanced … to capture a smooth video,” Bry says.

Finally, the control system takes all information to execute the drone’s plan in real time. “No other system has this depth of understanding,” Bry says. Others may have one or two components, “but none has a full, end-to-end, autonomous [software] stack designed and integrated together.”
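The perception, planning, and control stages Bry describes compose a classic sense-plan-act loop. The schematic below is our illustration of that structure only; the class and function names are hypothetical, not Skydio’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """What perception reports each frame (illustrative fields)."""
    subject_xy: tuple                  # where the tracked person is
    obstacles: list = field(default_factory=list)

def plan(obs, follow_distance=5.0):
    """Planning: pick a target pose that trails the subject."""
    x, y = obs.subject_xy
    return (x - follow_distance, y)

def control(drone_xy, target_xy, gain=0.5):
    """Control: step the drone a fraction of the way to the target,
    which smooths the resulting camera motion."""
    return tuple(d + gain * (t - d) for d, t in zip(drone_xy, target_xy))
```

Each cycle, fresh observations update the plan, and the controller executes it, so the drone continuously re-balances tracking against smooth video.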

For users, the end result, Bry says, is a drone that’s as simple to use as a camera app: “If you’re comfortable taking pictures with your iPhone, you should be comfortable using R1 to capture video.”

A user places the drone on the ground or in their hand, and swipes up on the Skydio app. (A manual control option is also available.) The R1 lifts off, identifies the user, and begins recording and tracking. From there, it operates completely autonomously, staying anywhere from 10 to 30 feet away from a subject on its own, or up to 300 feet away under manual control, depending on Wi-Fi availability.

When batteries run low, the app alerts the user. Should the user not respond, the drone will find a flat place to land itself. After the flight — which can last about 16 minutes, depending on speed and use — users can store captured video or upload it to social media.

Through the app, users can also switch between several cinematic modes. For instance, with “stadium mode,” for field sports, the drone stays above and moves around the action, following selected subjects. Users can also direct the drone where to fly (in front, to the side, or constantly orbiting). “These are areas we’re now working on to add more capabilities,” Bry says.

The lightweight drone can fit into an average backpack and runs about $2,500.

Skydio takes wing

Bry came to MIT in 2009, “when it was first possible to take a [hobby] airplane and put super powerful computers and sensors on it,” he says.

He joined the Robust Robotics Group, led by Nick Roy, an expert in drone autonomy. There, he met Bacharach, now Skydio’s chief technology officer, who that year was on a team that won the Association for Unmanned Vehicle Systems International contest with an autonomous minihelicopter that navigated the aftermath of a mock nuclear meltdown. Donahoe was a friend and graduate student at the MIT Media Lab at the time.

In 2012, Bry and Bacharach helped develop autonomous-control algorithms that could calculate a plane’s trajectory and determine its “state” — its location, physical orientation, velocity, and acceleration. In a series of test flights, a drone running their algorithms maneuvered around pillars in the parking garage under MIT’s Stata Center and through the Johnson Athletic Center.
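Propagating a “state” of that kind forward in time is textbook kinematics. The sketch below shows the constant-acceleration update for the position and velocity components; it is a generic illustration, not the MIT algorithms themselves.

```python
import numpy as np

def step_state(pos, vel, acc, dt):
    """Advance position and velocity over a small time step dt,
    assuming acceleration is constant during the step."""
    new_pos = pos + vel * dt + 0.5 * acc * dt ** 2
    new_vel = vel + acc * dt
    return new_pos, new_vel
```

A full estimator would fuse such predictions with sensor measurements (e.g., in a Kalman filter) to track orientation and correct drift.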

These experiences were the seeds of Skydio, Bry says: “The foundation of the [Skydio] technology, and how all the technology works and the recipe for how all of it comes together, all started at MIT.”

After graduation, in 2012, Bry and Bacharach took jobs in industry, landing at Google’s Project Wing delivery-drone initiative — a couple years before Roy was tapped by Google to helm the project. Seeing a need for autonomy in drones, in 2014, Bry, Bacharach, and Donahoe founded Skydio to fulfill a vision that “drones [can have] enormous potential across industries and applications,” Bry says.

For the first year, the three co-founders worked out of Bacharach’s dad’s basement, getting “free rent in exchange for helping out with yard work,” Bry says. Working with off-the-shelf hardware, the team built a “pretty ugly” prototype. “We started with a [quadcopter] frame and put a media center computer on it and a USB camera. Duct tape was holding everything together,” Bry says.

But that prototype landed the startup a seed round of $3 million in 2015. Additional funding rounds over the next few years — more than $70 million in total — helped the startup hire engineers from MIT, Google, Apple, Tesla, and other top tech firms.

Over the years, the startup refined the drone and tested it in countries around the world — experimenting with high and low altitudes, heavy snow, fast winds, and extreme high and low temperatures. “We’ve really tried to bang on the system pretty hard to validate it,” Bry says.

Athletes, artists, inspections

Early buyers of Skydio’s first product are primarily athletes and outdoor enthusiasts who record races, training, or performances. For instance, Skydio has worked with Mikel Thomas, an Olympic hurdler from Trinidad and Tobago, who used the R1 to analyze his form.

Artists, however, are also interested, Bry adds: “There’s a creative element to it. We’ve had people make music videos. It was themselves in a driveway or forest. They dance and move around and the camera will respond to them and create cool content that would otherwise be impossible to get.”

In the future, Skydio hopes to find other applications, such as inspecting commercial real estate, power lines, and energy infrastructure for damage. “People have talked about using drones for these things, but they have to be manually flown and it’s not scalable or reliable,” Bry says. “We’re going in the direction of sleek, birdlike devices that are quiet, reliable, and intelligent, and that people are comfortable using on a daily basis.”

Topics: Innovation and Entrepreneurship (I&E), Startups, Alumni/ae, Drones, Computer Science and Artificial Intelligence Laboratory (CSAIL), Media Lab, School of Architecture and Planning, Computer vision, Artificial intelligence, Algorithms, Software, School of Engineering