New 3-D printing approach makes cell-scale lattice structures

System could provide fine-scale meshes for growing highly uniform cultures of cells with desired properties.

A new way of making scaffolding for biological cultures could make it possible to grow cells that are highly uniform in shape and size, and potentially with certain functions. The new approach uses an extremely fine-scale form of 3-D printing, in which an electric field draws fibers one-tenth the width of a human hair.

The system was developed by Filippos Tourlomousis, a postdoc at MIT’s Center for Bits and Atoms, and six others at MIT and the Stevens Institute of Technology in New Jersey. The work is being reported today in the journal Microsystems & Nanoengineering.

Many functions of a cell can be influenced by its microenvironment, so a scaffold that allows precise control over that environment may open new possibilities for culturing cells with particular characteristics, for research or eventually even medical use.

While ordinary 3-D printing produces filaments as fine as 150 microns (millionths of a meter), Tourlomousis says, it’s possible to get fibers down to widths of 10 microns by adding a strong electric field between the nozzle extruding the fiber and the stage on which the structure is being printed. The technique is called melt electrowriting.

“If you take cells and put them on a conventional 3-D-printed surface, it’s like a 2-D surface to them,” he explains, because the cells themselves are so much smaller. But in a mesh-like structure printed using the electrowriting method, the structure is at the same size scale as the cells themselves, and so their sizes and shapes and the way they form adhesions to the material can be controlled by adjusting the porous microarchitecture of the printed lattice structure.

“By being able to print down to that scale, you produce a real 3-D environment for the cells,” Tourlomousis says.

He and the team then used confocal microscopy to observe cells grown in various configurations of fine fibers, some random, some precisely arranged in meshes of different dimensions. The resulting images were then analyzed and classified using artificial intelligence methods, to correlate the cell types and their variability with the kinds of microenvironment, with different spacings and arrangements of fibers, in which they were grown.

Cells form proteins known as focal adhesions at the places where they attach themselves to the structure. “Focal adhesions are the way the cell communicates with the external environment,” Tourlomousis says. “These proteins have measurable features across the cell body allowing us to do metrology. We quantify these features and use them to model and classify quite precisely individual cell shapes.”
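The modeling step Tourlomousis describes can be pictured with a toy sketch. The feature names and numbers below are hypothetical stand-ins, not the study's actual measurements; the point is only that per-cell feature vectors can be compared against known shape classes:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def classify(cell, centroids):
    """Assign a cell to the class whose feature centroid is closest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(cell, centroids[label]))

# Hypothetical (adhesion_count, spread_area_um2) examples per shape class.
training = {
    "elongated": [(12, 900), (14, 950)],
    "rounded":   [(5, 300), (6, 350)],
}
centroids = {label: centroid(vs) for label, vs in training.items()}
print(classify((13, 920), centroids))  # a cell resembling the elongated class
```

A cell whose measured features sit near a class's average is assigned to that class; the actual study uses richer features and machine-learning classifiers rather than this nearest-centroid toy.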

For a given mesh-like structure, he says, “we show that cells acquire shapes that are directly coupled with the architecture of the melt electrowritten substrates,” promoting a high degree of uniformity compared to nonwoven, randomly structured substrates. Such uniform cell populations could potentially be useful in biomedical research, he says: “It is widely known that cell shape governs cell function and this work suggests a shape-driven pathway for engineering and quantifying cell responses with great precision,” and with great reproducibility.

He says that in recent work, he and his team have shown that certain types of stem cells grown in such 3-D-printed meshes survived without losing their properties for much longer than those grown on a conventional two-dimensional substrate. Thus, there may be medical applications for such structures, perhaps as a way to grow large quantities of human cells with uniform properties that might be used for transplantation or to provide the material for building artificial organs, he says. The material being used for the printing is a polymer melt that has already been approved by the FDA.

The need for tighter control over cell function is a major roadblock to getting tissue engineering products to the clinic. Any steps to tighten specifications on the scaffold, and thereby reduce the variance in cell phenotype, are much needed by this industry, Tourlomousis says.

The printing system might have other applications as well, Tourlomousis says. For example, it might be possible to print “metamaterials” — synthetic materials with layered or patterned structures that can produce exotic optical or electronic properties.

The team included Thrasyvoulos Karydis and Andreas Mershin at MIT, and Chao Jia, Hongjun Wang, Dilhan Kalyon, and Robert Chang at the Stevens Institute of Technology in Hoboken, New Jersey. The work was funded by the National Science Foundation.

Kicking neural network design automation into high gear

Algorithm designs optimized machine-learning models up to 200 times faster than traditional methods.

A new area in artificial intelligence involves using algorithms to automatically design machine-learning systems known as neural networks, producing designs that are more accurate and efficient than those developed by human engineers. But this so-called neural architecture search (NAS) technique is computationally expensive.

A state-of-the-art NAS algorithm recently developed by Google to run on a cluster of graphics processing units (GPUs) took 48,000 GPU hours to produce a single convolutional neural network, which is used for image classification and detection tasks. Google has the wherewithal to run hundreds of GPUs and other specialized hardware in parallel, but that’s out of reach for many others.

In a paper being presented at the International Conference on Learning Representations in May, MIT researchers describe an NAS algorithm that can directly learn specialized convolutional neural networks (CNNs) for target hardware platforms — when run on a massive image dataset — in only 200 GPU hours, which could enable far broader use of these types of algorithms.

Resource-strapped researchers and companies could benefit from the time- and cost-saving algorithm, the researchers say. The broad goal is “to democratize AI,” says co-author Song Han, an assistant professor of electrical engineering and computer science and a researcher in the Microsystems Technology Laboratories at MIT. “We want to enable both AI experts and nonexperts to efficiently design neural network architectures with a push-button solution that runs fast on a specific hardware.”

Han adds that such NAS algorithms will never replace human engineers. “The aim is to offload the repetitive and tedious work that comes with designing and refining neural network architectures,” says Han, who is joined on the paper by two researchers in his group, Han Cai and Ligeng Zhu.

“Path-level” binarization and pruning

In their work, the researchers developed ways to delete unnecessary neural network design components, cutting computing times and using only a fraction of the hardware memory needed to run a NAS algorithm. An additional innovation ensures each outputted CNN runs more efficiently on specific hardware platforms — CPUs, GPUs, and mobile devices — than those designed by traditional approaches. In tests, the researchers’ CNNs were 1.8 times as fast, measured on a mobile phone, as traditional gold-standard models with similar accuracy.

A CNN’s architecture consists of layers of computation with adjustable parameters, called “filters,” and the possible connections between those filters. Filters process image pixels in grids of squares — such as 3x3, 5x5, or 7x7 — with each filter covering one square. The filters essentially move across the image and combine all the colors of their covered grid of pixels into a single pixel. Different layers may have different-sized filters, and connect to share data in different ways. The output is a condensed image — from the combined information from all the filters — that can be more easily analyzed by a computer.
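The filter arithmetic described above can be sketched in a few lines of Python (an illustration, not the researchers' code):

```python
# How a single convolution filter slides across an image, combining each
# covered grid of pixels into one output pixel of a condensed image.

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution over nested lists of numbers."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

image = [[5 * r + c for c in range(5)] for r in range(5)]  # toy 5x5 "image"
kernel = [[1 / 9.0] * 3 for _ in range(3)]                 # 3x3 averaging filter
condensed = convolve2d(image, kernel)
print(len(condensed), len(condensed[0]))  # 3 3: a smaller, condensed output
```

Here a 3x3 averaging filter turns a 5x5 image into a 3x3 condensed one; in a real CNN the kernel values are the adjustable parameters learned during training rather than being fixed in advance.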

Because the number of possible architectures to choose from — called the “search space” — is so large, applying NAS to create a neural network on massive image datasets is computationally prohibitive. Engineers typically run NAS on smaller proxy datasets and transfer their learned CNN architectures to the target task. This generalization method reduces the model’s accuracy, however. Moreover, the same outputted architecture also is applied to all hardware platforms, which leads to efficiency issues.

The researchers trained and tested their new NAS algorithm on an image classification task directly on the ImageNet dataset, which contains millions of images in a thousand classes. They first created a search space that contains all possible candidate CNN “paths” — meaning how the layers and filters connect to process the data. This gives the NAS algorithm free rein to find an optimal architecture.

This would typically mean all possible paths must be stored in memory, which would exceed GPU memory limits. To address this, the researchers leverage a technique called “path-level binarization,” which stores only one sampled path at a time and saves an order of magnitude in memory consumption. They combine this binarization with “path-level pruning,” a technique that traditionally learns which “neurons” in a neural network can be deleted without affecting the output. Instead of discarding neurons, however, the researchers’ NAS algorithm prunes entire paths, which completely changes the neural network’s architecture.

In training, all paths are initially given the same probability for selection. The algorithm then traces the paths — storing only one at a time — to note the accuracy and loss (a numerical penalty assigned for incorrect predictions) of their outputs. It then adjusts the probabilities of the paths to optimize both accuracy and efficiency. In the end, the algorithm prunes away all the low-probability paths and keeps only the path with the highest probability — which is the final CNN architecture.
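The training loop described above can be sketched in miniature. This illustrates the dynamics only, with hypothetical paths and made-up rewards, not the authors' implementation:

```python
import random

random.seed(0)

# Hypothetical candidate paths and a made-up accuracy/efficiency reward.
paths = ["A", "B", "C", "D"]
reward = {"A": 0.2, "B": 0.9, "C": 0.5, "D": 0.1}
probs = {p: 1.0 / len(paths) for p in paths}  # all paths start equal

for step in range(500):
    # Path-level binarization: sample exactly one path per step, so only
    # that path needs to be held in memory and evaluated.
    p = random.choices(paths, weights=[probs[q] for q in paths])[0]
    # Nudge the sampled path's probability up or down by its reward.
    probs[p] *= 1.0 + 0.05 * (reward[p] - 0.5)
    total = sum(probs.values())
    probs = {q: v / total for q, v in probs.items()}  # renormalize

# Path-level pruning: discard everything except the most probable path.
final = max(probs, key=probs.get)
print(final)
```

Because only the one sampled path is touched per step, memory use stays small regardless of the size of the search space; the final argmax is the pruning step that fixes the architecture.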

Hardware-aware

Another key innovation was making the NAS algorithm “hardware-aware,” Han says, meaning it uses the latency on each hardware platform as a feedback signal to optimize the architecture. To measure this latency on mobile devices, for instance, big companies such as Google will employ a “farm” of mobile devices, which is very expensive. The researchers instead built a model that predicts the latency using only a single mobile phone.

For each chosen layer of the network, the algorithm samples the architecture on that latency-prediction model. It then uses that information to design an architecture that runs as quickly as possible, while achieving high accuracy. In experiments, the researchers’ CNN ran nearly twice as fast as a gold-standard model on mobile devices.
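The idea of the latency predictor can be pictured as a simple sum of per-operation timings. The operation names and millisecond figures below are hypothetical, and the researchers' actual model is more sophisticated than a lookup table:

```python
# Hypothetical per-operation latencies (ms), measured once on one phone.
measured_ms = {"conv3x3": 1.2, "conv5x5": 2.1, "conv7x7": 3.4, "identity": 0.0}

def predict_latency(architecture):
    """Predict whole-network latency as the sum of its layers' latencies."""
    return sum(measured_ms[op] for op in architecture)

def score(accuracy, architecture, tradeoff=0.01):
    """Feedback signal for the search: accuracy minus a latency penalty."""
    return accuracy - tradeoff * predict_latency(architecture)

net_a = ["conv3x3", "conv3x3", "conv3x3"]
net_b = ["conv7x7", "identity", "conv3x3"]
print(round(predict_latency(net_a), 1))  # 3.6
print(round(predict_latency(net_b), 1))  # 4.6
```

Scoring candidates this way lets the search prefer architectures that run fast on the specific target device without ever timing them on a device farm.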

One interesting result, Han says, was that their NAS algorithm designed CNN architectures that were long dismissed as being too inefficient — but, in the researchers’ tests, they were actually optimized for certain hardware. For instance, engineers have essentially stopped using 7x7 filters, because they’re computationally more expensive than multiple, smaller filters. Yet, the researchers’ NAS algorithm found that architectures with some layers of 7x7 filters ran optimally on GPUs. That’s because GPUs have high parallelization — meaning they compute many calculations simultaneously — so they can process a single large filter at once more efficiently than processing multiple small filters one at a time.

“This goes against previous human thinking,” Han says. “The larger the search space, the more unknown things you can find. You don’t know if something will be better than the past human experience. Let the AI figure it out.”

The work was supported, in part, by the MIT Quest for Intelligence, the MIT-IBM Watson AI Lab, SenseTime, and Xilinx.

“Particle robot” works as a cluster of simple units

Loosely connected disc-shaped “particles” can push and pull one another, moving en masse to transport objects.

Taking a cue from biological cells, researchers from MIT, Columbia University, and elsewhere have developed computationally simple robots that connect in large groups to move around, transport objects, and complete other tasks.

This so-called “particle robotics” system — based on a project by MIT, Columbia Engineering, Cornell University, and Harvard University researchers — comprises many individual disc-shaped units, which the researchers call “particles.” The particles are loosely connected by magnets around their perimeters, and each unit can only do two things: expand and contract. (Each particle is about 6 inches in diameter in its contracted state and about 9 inches when expanded.) That motion, when carefully timed, allows the individual particles to push and pull one another in coordinated movement. On-board sensors enable the cluster to gravitate toward light sources.

In a Nature paper published today, the researchers demonstrate a cluster of two dozen real robotic particles and a virtual simulation of up to 100,000 particles moving through obstacles toward a light bulb. They also show that a particle robot can transport objects placed in its midst.

Particle robots can form into many configurations and fluidly navigate around obstacles and squeeze through tight gaps. Notably, none of the particles directly communicate with or rely on one another to function, so particles can be added or subtracted without any impact on the group. In their paper, the researchers show particle robotic systems can complete tasks even when many units malfunction.

The paper represents a new way to think about robots, which are traditionally designed for one purpose, comprise many complex parts, and stop working when any part malfunctions. Robots made up of these simple components, the researchers say, could enable more scalable, flexible, and robust systems.

“We have small robot cells that are not so capable as individuals but can accomplish a lot as a group,” says Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “The robot by itself is static, but when it connects with other robot particles, all of a sudden the robot collective can explore the world and control more complex actions. With these ‘universal cells,’ the robot particles can achieve different shapes, global transformation, global motion, global behavior, and, as we have shown in our experiments, follow gradients of light. This is very powerful.”

Joining Rus on the paper are: first author Shuguang Li, a CSAIL postdoc; co-first author Richa Batra and corresponding author Hod Lipson, both of Columbia Engineering; David Brown, Hyun-Dong Chang, and Nikhil Ranganathan of Cornell; and Chuck Hoberman of Harvard.

At MIT, Rus has been working on modular, connected robots for nearly 20 years, including an expanding and contracting cube robot that could connect to others to move around. But the square shape limited the robots’ group movement and configurations.

In collaboration with Lipson’s lab, where Li was a postdoc until coming to MIT in 2014, the researchers went for disc-shaped mechanisms that can rotate around one another. They can also connect and disconnect from each other, and form into many configurations.

Each unit of a particle robot has a cylindrical base, which houses a battery, a small motor, sensors that detect light intensity, a microcontroller, and a communication component that sends out and receives signals. Mounted on top is a children’s toy called a Hoberman Flight Ring — its inventor is one of the paper’s co-authors — which consists of small panels connected in a circular formation that can be pulled to expand and pushed back to contract. Two small magnets are installed in each panel.

The trick was programming the robotic particles to expand and contract in an exact sequence to push and pull the whole group toward a destination light source. To do so, the researchers equipped each particle with an algorithm that analyzes broadcast information about light intensity from every other particle, without the need for direct particle-to-particle communication.

The sensors of a particle detect the intensity of light from a light source; the closer the particle is to the light source, the greater the intensity. Each particle constantly broadcasts a signal that shares its perceived intensity level with all other particles. Say a particle robotic system measures light intensity on a scale of levels 1 to 10: Particles closest to the light register a level 10 and those furthest will register level 1. The intensity level, in turn, corresponds to a specific time that the particle must expand. Particles experiencing the highest intensity — level 10 — expand first. As those particles contract, the next particles in order, level 9, then expand. That timed expanding and contracting motion happens at each subsequent level.
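The intensity-to-timing scheme just described can be sketched as follows (an illustration of the described behavior, not the authors' code):

```python
def quantize(intensity, lo, hi, levels=10):
    """Map a raw light reading onto a discrete level 1..levels."""
    frac = (intensity - lo) / (hi - lo) if hi > lo else 0.0
    return max(1, min(levels, 1 + int(frac * (levels - 1) + 0.5)))

def expansion_schedule(readings, levels=10):
    """Group particle indices by quantized level, brightest level first,
    giving the order in which particles expand on the shared clock."""
    lo, hi = min(readings), max(readings)
    lvls = [quantize(r, lo, hi, levels) for r in readings]
    schedule = []
    for level in range(levels, 0, -1):
        group = [i for i, l in enumerate(lvls) if l == level]
        if group:
            schedule.append(group)
    return schedule

readings = [0.9, 0.7, 0.5, 0.3, 0.1]  # higher reading = closer to the light
print(expansion_schedule(readings))   # brightest particle expands first
```

The particle nearest the light (index 0) expands on the first tick, and the expansion-contraction wave then propagates level by level toward the dimmest particle, dragging the cluster along.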

“This creates a mechanical expansion-contraction wave, a coordinated pushing and dragging motion, that moves a big cluster toward or away from environmental stimuli,” Li says. The key component, Li adds, is the precise timing from a shared synchronized clock among the particles that enables movement as efficiently as possible: “If you mess up the synchronized clock, the system will work less efficiently.”

In videos, the researchers demonstrate a particle robotic system comprising real particles moving and changing directions toward different light bulbs as they’re flicked on, and working its way through a gap between obstacles. In their paper, the researchers also show that simulated clusters of up to 10,000 particles maintain locomotion, at half their speed, even when up to 20 percent of the units have failed.

“It’s a bit like the proverbial ‘gray goo,’” says Lipson, a professor of mechanical engineering at Columbia Engineering, referencing the science-fiction concept of a self-replicating robot that comprises billions of nanobots. “The key novelty here is that you have a new kind of robot that has no centralized control, no single point of failure, no fixed shape, and its components have no unique identity.”

The next step, Lipson adds, is miniaturizing the components to make a robot composed of millions of microscopic particles.

"The work points toward an innovative new direction in modular and distributed robotics,” says Mac Schwager, an assistant professor of aeronautics and astronautics and director of the Multi-robot Systems Lab at Stanford University. “The authors use collectives of simple stochastic robotic cells, and leverage the statistics of the collective to achieve a global motion. This has some similarity to biological systems, in which the cells of an organism each follow some random process, while the bulk effect of this low-level randomness leads to a predictable behavior for the whole organism. The hope is that such robot collectives will yield robust and adaptable behaviors, similar to the robustness and adaptability we see in nature."

Exercises in amazement: Discovering deep learning

A popular student-coordinated class draws a capacity crowd from across the MIT campus and beyond.

It was standing-room only in the Stata Center’s Kirsch Auditorium when some 300 attendees showed up for the opening lectures of MIT’s intensive, student-designed course 6.S191 (Introduction to Deep Learning).

Nathan Rebello, a first-year graduate student in chemical engineering, was among those who were excited about the class, coordinated by Alexander Amini ’17 and Ava Soleimany ’16 during MIT’s Independent Activities Period (IAP) in January.

“I hope to go into either industry or academia and to apply deep learning techniques for the design of new materials,” Rebello says. He signed up for 6.S191 to learn more about deep learning with the intention of applying it to the design of bio-inspired polymeric materials, adding: “I also wanted to network with students and faculty to explore their ways of thinking on this topic.”

There were plenty of people available for networking. “We want the class to be open and accessible to the broader community,” says Soleimany, an MIT and Harvard University graduate student, who, with Amini, also served as an instructor for the course. “We welcome people from outside MIT. There were many students from surrounding universities in Boston and even specialized physicians from Mass General Hospital. We had people fly in from California and from outside the country, from Turkey and China, to attend the lectures.”

The for-credit course has been offered for the past three years. A subset of artificial intelligence (AI), deep learning focuses on building predictive models automatically from big data. Each class consisted of technical lectures followed by software labs where students could immediately apply what they had learned. Technical lectures spanned state-of-the-art techniques in deep learning, and included lectures on computer vision, reinforcement learning, and natural language processing given by Amini and Soleimany, as well as guest lectures by leading AI researchers from Google, IBM, and Nvidia.

“This year, we remade the software labs totally from scratch and collaborated very closely with the Google Brain team to reflect the newest version of the framework TensorFlow, the language which we were using for the labs,” says Amini, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory. “TensorFlow is the most popular machine learning and deep learning framework out there.”

One specific lab featured research that Amini and Soleimany recently published in the Association for the Advancement of Artificial Intelligence/Association for Computing Machinery Conference on Artificial Intelligence, Ethics, and Society. “The focus is on building facial detection systems and using deep learning to make them unbiased with respect to things like gender and race,” Soleimany says. “This is a really exciting piece of work, but it’s also really pragmatic work, because there’s been a lot of news recently on AI being biased towards certain underrepresented minorities. To have students not only understand why that bias might arise, but also try to use deep learning to actually remove some of that bias was really cool. It’s cutting-edge work.”

For final projects, 6.S191 students could either write a brief review of a new deep learning paper or present a three-minute oral proposal for a deep learning application, to be judged by industry representatives.

This year, some 20 groups comprising two to four people completed projects, competing for high-end graphics processing units (GPUs) provided by Nvidia, each worth more than $1,000, and AI home assistants provided by Google.

One winning team proposed using deep learning to detect deformation in 2-D materials on a micro scale or even smaller. A second group proposed using it to design new catalysts for chemistry applications. The final group proposed using deep learning to analyze the X-rays of scoliosis patients.

“We thought that these three projects stood out in terms of their immediate applications and that these teams would take the GPUs and really put them to use,” Soleimany says.

Rebello, who had a basic knowledge of neural nets and TensorFlow before he enrolled in the course, was on the team that presented “Advanced Scoliosis Detection with Deep Neural Nets.”

“Even though my teammates and I were from different disciplines, we pooled our knowledge and interests to propose the award-winning idea of a merger of convolutional neural networks with scoliosis detection, potentially enabling doctors to detect subtle abnormal features from X-rays in the early stages of scoliosis and classify the severity of the condition over time,” Rebello says.

“The project was a fun way to think outside of the box,” says another member of the winning team, Eric A. Magliarditi, a graduate student in aeronautics and astronautics. The third team member, Sandra Liu, who is studying for a master's degree in mechanical engineering, said she had little knowledge of deep learning before the class but was eager to learn about its applications to soft robotics, her academic interest. “The highlights of the course were the labs,” she says. “In one, we got to complete the code for a neural net that could generate Irish folk songs. It was fun to be able to do ‘hands-on’ projects and also to learn more about real-life applications of deep learning.”

Magliarditi had a real-life interest in the topic the trio explored. “I had advanced scoliosis — I had surgery to fix it in 2014 — so this topic was extremely relevant and interesting to me,” he says. “I am not entirely sure if our idea could work, but it is something I want to investigate further because it has some interesting consequences if it were to work.”

Not every idea presented was so practical. “One project was an AI personal assistant,” says Amini. “And though it may be far-fetched, a full-fledged AI assistant, essentially a micro-drone the size of an insect that would fly around the house and keep track of your personal belongings, would be pretty amazing.”

Amini and Soleimany plan to teach the deep learning course again during IAP 2020. In the meantime, the lectures from the 2019 class can be found on the course website.

Robot hand is soft and strong

Gripper device inspired by “origami magic ball” can grasp wide array of delicate and heavy objects.

Fifty years ago, the first industrial robot arm (called Unimate) assembled a simple breakfast of toast, coffee, and champagne. While it might have looked like a seamless feat, every movement and placement was coded with careful consideration.

Even with today’s more intelligent and adaptive robots, this task remains difficult for machines with rigid hands. They tend to work only in structured environments with predefined shapes and locations, and typically can’t cope with uncertainties in placement or form.

In recent years, though, roboticists have come to grips with this problem by making fingers out of soft, flexible materials like rubber. This pliability lets these soft robots pick up anything from grapes to boxes and empty water bottles, but they’re still unable to handle large or heavy items.

To give these soft robots a bit of a hand, researchers from MIT and Harvard University have developed a new gripper that’s both soft and strong: a cone-shaped origami structure that collapses in on objects, much like a Venus flytrap, to pick up items that are as much as 100 times its weight. This motion lets the gripper grasp a much wider range of objects — such as soup cans, hammers, wine glasses, drones, and even a single broccoli floret.

“One of my moonshots is to create a robot that can automatically pack groceries for you,” says MIT Professor Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and one of the senior authors of a new paper about the project.

“Previous approaches to the packing problem could only handle very limited classes of objects — objects that are very light, or objects that conform to shapes such as boxes and cylinders — but with the Magic Ball gripper system we’ve shown that we can do pick-and-place tasks for a large variety of items ranging from wine bottles to broccoli, grapes and eggs,” says Rus. “In other words, objects that are heavy and objects that are light. Objects that are delicate, or sturdy, or that have regular or free-form shapes.”

The project is one of several in recent years that has researchers thinking outside the box with robot design. Ball-shaped grippers, for example, can handle a wider range of objects than fingers, but still have the issue of limited angles. Softer robotic fingers typically use compressed air, but aren’t strong enough to pick up heavier objects.

The structure of this new gripper, meanwhile, takes an entirely different form. Cone-shaped, hollow, and vacuum-powered, the device was inspired by the “origami magic ball” and can envelop an entire object and successfully pick it up.

The gripper has three parts: the origami-based skeleton structure, the airtight skin to encase the structure, and the connector. The team created it using a mechanical rubber mold and a special heat-shrinking plastic that self-folds at high temperatures.

The magic ball’s skeleton is covered by either a rubber balloon or a thin fabric sheet, not unlike the team’s previous research on fluid-driven origami-inspired artificial muscles, which consisted of an airtight skin surrounding a foldable skeleton and fluid.

The team used the gripper with a standard robot to test its strength on different objects. The gripper could grasp and lift objects up to 70 percent of its own diameter, which allowed it to pick up and hold a variety of soft foods without causing damage. It could also pick up bottles weighing over four pounds.

“Companies like Amazon and JD want to be able to pick up a wider array of delicate or irregular-shaped objects, but can’t with finger-based and suction-cup grippers,” says Shuguang Li, a joint postdoc at CSAIL and Harvard’s John A. Paulson School of Engineering and Applied Sciences. “Suction cups can’t pick up anything with holes — and they’d need something much stronger than a soft-finger-based gripper.”

The robot currently works best with cylindrical objects like bottles or cans, which could someday make it an asset for production lines in factories. Not surprisingly, the shape of the gripper makes it more difficult for it to grasp something flat, like a sandwich or a book.

“One of the key features of this approach to manipulator construction is its simplicity,” says Robert Wood, co-author and professor at Harvard’s School of Engineering and Wyss Institute for Biologically Inspired Engineering. “The materials and fabrication strategies used allow us to rapidly prototype new grippers, customized to object or environment as needed.”  

In the future, the team hopes to solve the problem of angle and orientation by adding computer vision that would let the gripper “see,” making it possible to grasp specific parts of objects.

“This is a very clever device that uses the power of 3-D printing, a vacuum, and soft robotics to approach the problem of grasping in a whole new way,” says Michael Wehner, an assistant professor of robotics at the University of California at Santa Cruz, who was not involved in the project. “In the coming years, I could imagine seeing soft robots gentle and dexterous enough to pick a rose, yet strong enough to safely lift a hospital patient.”

Other co-authors of the paper include MIT undergraduates John Stampfli, Helen Xu, Elian Malkin, and Harvard Research Experiences for Undergraduates student Evelin Villegas Diaz from St. Mary's University. The team will present their paper at the International Conference on Robotics and Automation in Montreal, Canada, this May.

This project was supported in part by the Defense Advanced Research Projects Agency, the National Science Foundation, and Harvard's Wyss Institute.

Combining artificial intelligence with their passions

Research projects show creative ways MIT students are connecting computing to other fields.

Computational thinking will be the mark of an MIT education when the MIT Stephen A. Schwarzman College of Computing opens this fall, and glimpses of what's to come were on display during the final reception of a three-day celebration of the college Feb. 26-28.

In a tent filled with electronic screens, students and postdocs took turns explaining how they had created something new by combining computing with topics they felt passionate about, including predicting panic selling on Wall Street, analyzing the filler ingredients in common drugs, and developing more energy-efficient software and hardware. The poster session featured undergraduates, graduate students, and postdocs from each of MIT’s five schools. Eight projects are highlighted here.

Low-cost screening tool for genetic mutations linked to autism

Autism is thought to have a strong genetic basis, but few of the genetic mutations responsible have been found. In collaboration with Boston Children’s Hospital and Harvard Medical School, MIT researchers are using AI to explore autism’s hidden origins. 

Working with his advisors, Bonnie Berger and Po-Ru Loh, professors of math and medicine at MIT and Harvard, respectively, graduate student Maxwell Sherman has helped develop an algorithm to detect previously unidentified mutations in people with autism that cause some cells to carry too much or too little DNA. 

The team has found that up to 1 percent of people with autism carry the mutations, and that inexpensive consumer genetic tests can detect them with a mere saliva sample. Hundreds of U.S. children who carry the mutations and are at risk for autism could be identified this way each year, researchers say.  

“Early detection of autism gives kids earlier access to supportive services,” says Sherman, “and that can have lasting benefits.” 

Can deep learning models be trusted?

As AI systems automate more tasks, the need to evaluate their decisions and alert the public to possible failures has taken on new urgency. In a project with the MIT-IBM Watson AI Lab, graduate student Lily Weng is helping to build an efficient, general framework for quantifying how easily deep neural networks can be tricked or misled into making mistakes.

Working with a team led by Pin-Yu Chen, a researcher at IBM, and Luca Daniel, a professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), Weng developed a method that reports how much each individual input can be altered before the neural network makes a mistake. The team is now expanding the framework to larger and more general neural networks, and developing tools to quantify their level of vulnerability based on different ways of measuring input alteration. The work has spawned a series of papers, summarized in a recent MIT-IBM blog post.
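The quantity such a framework certifies can be illustrated with a toy case. For a plain linear classifier, the smallest input change that can flip the decision has a closed form, and certification methods for deep networks generalize bounds of this kind. The weights and input below are invented for illustration and are not drawn from the team's framework:

```python
import numpy as np

# Toy linear classifier: predict positive if w.x + b > 0 (hypothetical values).
w = np.array([3.0, 4.0])
b = -2.0
x = np.array([2.0, 1.0])   # classified positive: score = 3*2 + 4*1 - 2 = 8

score = w @ x + b
# Minimal L2 perturbation that can flip a linear decision: |score| / ||w||.
# Any input change smaller than this radius provably cannot change the label.
radius = abs(score) / np.linalg.norm(w)
print(radius)  # → 1.6

# Sanity check: stepping just past the radius along -w flips the decision.
delta = -(radius * 1.001) * w / np.linalg.norm(w)
print(w @ (x + delta) + b < 0)  # → True
```

For a deep network no such closed form exists, which is why efficient certified bounds on this radius are a research problem in their own right.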

Mapping the spread of Ebola virus

By the time the Ebola virus spread from Guinea and Liberia to Sierra Leone in 2014, the country's government was prepared. It quickly closed its schools and shut its borders with the two countries. Still, relative to its population, Sierra Leone fared worse than its neighbors, with 14,000 suspected infections and 4,000 deaths.

Marie Charpignon, a graduate student in the MIT Institute for Data, Systems, and Society (IDSS), wanted to know why. Her search became a final project for Network Science and Models, a class taught by Patrick Jaillet, the Dugald C. Jackson Professor in EECS. 

In a network analysis of trade, migration, and World Health Organization data, Charpignon discovered that a severe shortage of medical resources seemed to explain why Ebola had caused relatively more devastation in Sierra Leone, despite the country’s precautions.

“Sierra Leone had one doctor for every 30,000 residents, and the doctors were the first to be infected,” she says. “That further reduced the availability of medical help.” 

If Sierra Leone had not acted as decisively, she says, the outbreak could have been far worse. Her results suggest that epidemiology models should factor in where hospitals and medical staff are clustered to better predict how an epidemic will unfold.

An AI for sustainable, economical buildings

When labor is cheap, buildings are designed to use fewer materials, but as labor costs rise, design choices shift to inefficient but easily constructed buildings. That’s why much of the world today favors buildings made of standardized steel-reinforced concrete, says graduate student Mohamed Ismail.

AI is now changing the design equation. In collaboration with TARA, a New Delhi-based nonprofit, Ismail and his advisor, Caitlin Mueller, an associate professor in the Department of Architecture and the Department of Civil and Environmental Engineering, are using computational tools to reduce the amount of reinforced concrete in India’s buildings.

“We can, once again, make structural performance part of the architectural design process, and build exciting, elegant buildings that are also efficient and economical,” says Ismail. 

The work involves calculating how much load a building can bear as the shape of its design shifts. Ismail and Mueller developed an optimization algorithm to compute a shape that would maximize efficiency and provide a sculptural element. The hybrid nature of reinforced concrete, which is both liquid and solid, brittle and ductile, was one challenge they had to overcome. Making sure the models would translate on the ground, by staying in close contact with the client, was another.

“If something didn’t work, I could remotely connect to my computer at MIT, adjust the code, and have a new design ready for TARA within an hour,” says Ismail. 
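The flavor of the calculation can be sketched with a toy sizing problem, which is not the team's actual algorithm: choose the shallowest rectangular section whose elastic bending capacity still meets the demanded moment, since the shallowest adequate section uses the least concrete. The allowable stress and loads below are illustrative assumptions:

```python
def min_depth(m_demand, width, sigma_allow=10.0e6):
    """Minimum beam depth (m) so the section can carry the demanded moment.

    Elastic bending of a rectangular section: M_cap = sigma * b * d^2 / 6,
    so the smallest adequate depth is d = sqrt(6 * M / (sigma * b)).
    sigma_allow is an illustrative allowable stress in Pa.
    """
    return (6.0 * m_demand / (sigma_allow * width)) ** 0.5

# A 0.25 m wide beam that must carry a 50 kN*m bending moment:
d = min_depth(m_demand=50.0e3, width=0.25)
print(round(d, 3))  # → 0.346 (metres)
```

A real structural optimizer iterates this kind of capacity check over many candidate geometries at once, but the principle is the same: the load path fixes a minimum amount of material, and everything beyond it is waste.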

Robots that understand language

The more that robots can engage with humans, the more useful they become. That means asking for feedback when they get confused and seamlessly absorbing new information as they interact with us and their environment. Ideally, this means moving to a world in which we talk to robots instead of programming them. 

In a project led by Boris Katz, a researcher at the Computer Science and Artificial Intelligence Laboratory, and Nicholas Roy, a professor in MIT’s Department of Aeronautics and Astronautics, graduate student Yen-Ling Kuo has designed a set of experiments to understand how humans and robots can cooperate and what robots must learn to follow commands.

In one video game experiment, volunteers are asked to drive a car full of bunnies through an obstacle course of walls and pits of flames. It sounds like “absurdist comedy,” Kuo admits, but the goal is straightforward: to understand how humans plot a course through hazardous conditions while interpreting the actions of others around them. Data from the experiments will be used to design algorithms that help robots to plan and explain their understanding of what others are doing.

A deep learning tool to unlock your inner artist 

Creativity is thought to play an important role in healthy aging, with research showing that creative people are better at adapting to the challenges of old age. The trouble is, not everyone is in touch with their inner artist. 

“Maybe they were accountants, or worked in business and don’t see themselves as creative types,” says Guillermo Bernal, a graduate student at the MIT Media Lab. “I started to think, what if we could leverage deep learning models to help people explore their creative side?”

With Media Lab professor Pattie Maes, Bernal developed Paper Dreams, an interactive storytelling tool that uses generative models to give the user a shot of inspiration. As a sketch unfolds, Paper Dreams imagines how the scene could develop further and suggests colors, textures, and new objects for the artist to add. A “serendipity dial” lets the artist decide how off-beat they want the suggestions to be.

“Seeing the drawing and colors evolve in real-time as you manipulate them is a magical experience,” says Bernal, who is exploring ways to make the platform more accessible.

Preventing maternal deaths in Rwanda

The top cause of death for new mothers in Rwanda is infection following a caesarean section. To identify at-risk mothers sooner, researchers at MIT, Harvard Medical School, Brigham and Women’s Hospital, and Partners in Health, Rwanda, are developing a computational tool to predict whether a mother’s post-surgical wound is likely to be infected.

Researchers gathered C-section wound photos from 527 women, using health workers who captured the pictures with their smartphones 10 to 12 days after surgery. Working with his advisor, Richard Fletcher, a researcher in MIT’s D-Lab, graduate student Subby Olubeko helped train a pair of models to pick out the wounds that developed into infections.  When they tested the logistic regression model on the full dataset, it gave almost perfect predictions. 

The color of the wound’s drainage, and how bright the wound appears at its center, are two of the features the model picks up on, says Olubeko. The team plans to run a field experiment this spring to collect wound photos from a more diverse group of women and to shoot infrared images to see if they reveal additional information.
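The second stage of such a pipeline can be sketched with a plain logistic regression fit on synthetic stand-ins for the two features mentioned, drainage color and brightness at the wound center. This is an illustration of the model class, not the team's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two image-derived features:
# column 0 ~ drainage color intensity, column 1 ~ brightness at wound center.
n = 200
X = rng.normal(size=(n, 2))
# Invented labeling rule for the toy data: a weighted sum of the features
# determines whether the wound becomes infected (1) or not (0).
y = (1.2 * X[:, 0] + 0.8 * X[:, 1] > 0).astype(float)

# Logistic regression fit by gradient descent on the log-loss.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted infection probability
    w -= lr * (X.T @ (p - y)) / n            # gradient step on weights
    b -= lr * np.mean(p - y)                 # gradient step on intercept

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print((pred == y).mean())  # near-perfect on this linearly separable toy data
```

On real photos the hard part is upstream: extracting reliable color and brightness features from smartphone images taken in uncontrolled field conditions.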

Do native ads shape our perception of the news?

The migration of news to the web has given advertisers the ability to place ever more personalized, engaging ads amid high-quality news stories. Often masquerading as legitimate news, so-called “native” ads, pushed by content recommendation networks, have brought badly needed revenue to the struggling U.S. news industry. But at what cost?

“Native ads were supposed to help the news industry cope with the financial crisis, but what if they’re reinforcing the public’s mistrust of the media and driving readers away from quality news?” says graduate student Manon Revel.

Claims of fake news dominated the 2016 U.S. presidential elections, but politicized native ads were also common. Curious to measure their reach, Revel joined a project led by Adam Berinsky, a professor in MIT’s Department of Political Science; Munther Dahleh, a professor in EECS and director of IDSS; Dean Eckles, a professor at MIT’s Sloan School of Management; and Ali Jadbabaie, a CEE professor who is associate director of IDSS.

Analyzing a sample of native ads that popped up on readers’ screens before the election, they found that 25 percent could be considered highly political, and that 75 percent fit the description of clickbait. A similar trend emerged when they looked at coverage of the 2018 midterm elections. The team is now running experiments to see how exposure to native ads influences how readers rate the credibility of real news. 

Computing the future

Fireside chat brings together six Turing Award winners to reflect on their field and the MIT Stephen A. Schwarzman College of Computing.

As part of the public launch of the Stephen A. Schwarzman College of Computing, MIT hosted a special fireside chat Wednesday, Feb. 27, at Kresge Auditorium that brought together six MIT professors who have received the Association for Computing Machinery’s esteemed A.M. Turing Award, often described as “the Nobel Prize for computing.”

Moderated by Professor Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the conversation included Tim Berners-Lee, the 3Com Founders Professor of Engineering; Shafi Goldwasser, the RSA Professor of Electrical Engineering and Computer Science; Butler Lampson, technical fellow at the Microsoft Corporation and adjunct professor of computer science at MIT; Barbara Liskov, Institute Professor; Ronald Rivest, Institute Professor; and Michael Stonebraker, CTO of Paradigm4 and of Tamr Inc. and adjunct professor of computer science at MIT. (Other MIT Turing Award winners include Professor Silvio Micali, Professor Emeritus Fernando “Corby” Corbato, and the late Professor Marvin Minsky.)

Rus first briefly highlighted the accomplishments of the Turing winners, from Lampson’s contributions to the growth of personal computers to how Berners-Lee and Rivest’s work has fundamentally transformed global commerce.

“Imagine what the world would be like without these achievements in AI, databases, cryptography, and more,” said Rus. “Just try to imagine a day without the World Wide Web and all that it enables — no online news, no electronic transactions, no social media.”

Coming off less as a panel than a casual conversation among friends, the wide-ranging dialogue reflected the CSAIL colleagues’ infectious enthusiasm for each other’s work. One theme was the serendipity of computer science and how often the panelists’ breakthroughs in one area of research ended up having major impacts in other, completely unexpected domains. For example, Goldwasser discussed her work on zero-knowledge proofs and their use in fields such as cloud computing and machine learning that didn’t even exist when she and Micali first dreamed them up. Rivest later joked that the thriving study of quantum computing has been largely driven by the desire to “break” his RSA (named for Rivest-Shamir-Adleman) encryption algorithm.

With a broad lens looking toward the future, panelists also discussed how to create more connections between their work and topics such as climate change and brain research. Liskov cited medical technology, and how more effective data collection could allow doctors to spend less time on their computers and more time with patients. Lampson spoke of the importance of developing more specialized hardware, like Google has with its tensor processing unit.

Another recurring theme during the panel was a hope that the new college can also keep MIT at the center of the conversation about the potential adverse effects of computing technologies.

“The future of the field isn’t just building new functionality for the good, but thinking about how it can be abused,” Rivest said. “It will be crucially important to teach our students how to think more like adversaries.”

The group also reminisced on the letter they penned in the Tech student newspaper in 2017 calling for the creation of a computing school.

“Since we wrote that letter, the MIT administration has created a college and raised $1 billion for a new building and 50 professors,” said Stonebraker. “The fact that they’ve done this all from a standing start in 16 months is truly remarkable.”

The laureates agreed that one of MIT’s core goals should be to teach computational skills in a bidirectional way: that is, for MIT’s existing schools to inform the college’s direction, and for the college to also teach concepts of “computational thinking” that are more generalizable than any one programming language or algorithmic framework.

“I think we do a reasonable job of training computer scientists, but one mission of the college will be to teach the right kinds of computing skills to the rest of campus,” said Stonebraker. “One of the big challenges the new dean is going to face is how to organize all that.”

The panelists also reflected on MIT’s unique positioning to be able to continue to study tough “moonshot” problems in computing that require more than just incremental progress.

“As the world’s leading technological university, MIT has an obligation to lead the forefront of research rather than follow industry,” Goldwasser said. “What separates us from industrial product — and even from other research labs — is our ability to pursue basic research as a pure metric rather than for dollar signs.”

Solve launches 2019 global challenges

Anyone can submit tech-based solution applications until July 1.

At the Feb. 28 Hello World, Hello MIT event celebrating the MIT Stephen A. Schwarzman College of Computing, MIT Solve Executive Director Alex Amouyel announced the launch of Solve’s newest set of global challenges: Circular Economy, Community-Driven Innovation, Early Childhood Development, and Healthy Cities.

Solve now seeks tech-based solutions from innovators around the world that address these four challenges, and anyone can submit a relevant solution by the July 1 deadline.

“Through open innovation, we find the most promising tech-based social innovators from all around the world, including those already right here at MIT,” said Amouyel. “These Solver teams use AI, machine learning, and many other technologies to positively improve the lives of thousands already, and hopefully millions more in the future.”

Finalists will be invited to pitch their solutions to Solve’s Challenge Leadership Group — a judging panel of cross-sector leaders and MIT faculty — at Solve Challenge Finals on Sept. 22 in New York City during U.N. General Assembly Week.

The most promising solutions will be selected to form the 2019 Solver Class, and Solve will then deploy its global community of private, public, and nonprofit leaders to build the partnerships needed to scale their work.

To date, Solve’s community has committed more than $7 million in funding to Solver teams, in addition to in-kind support such as mentorship, technical expertise, media and conference exposure, and business and entrepreneurship training.

Over the past six months, Solve staff consulted more than 500 leaders and experts to determine the 2019 global challenges. Solve hosted 14 Challenge Design Workshops in eight countries — in cities ranging from New York to Hong Kong to Abu Dhabi, United Arab Emirates, to Monterrey, Mexico — to collect feedback from communities around the world. More than 30,000 votes were cast online through Solve’s open innovation platform to influence challenge themes.

  1. Circular Economy: How can people create and consume goods that are renewable, repairable, reusable, and recyclable?

  2. Community-Driven Innovation: How can citizens and communities create and improve social inclusion and shared prosperity?

  3. Early Childhood Development: How can all children under age 5 develop the critical learning and cognitive skills they need to reach their full potential?

  4. Healthy Cities: How can urban residents design and live in environments that promote physical and mental health?

As a marketplace for social impact, Solve finds tech entrepreneurs from around the world and brokers partnerships across its community to scale their innovative work — driving lasting, transformational change. Organizations interested in joining the Solve community can learn more and apply for membership here.

Addressing the promises and challenges of AI

Final day of the MIT Schwarzman College of Computing celebration explores enthusiasm, caution about AI’s rising prominence in society.

A three-day celebration this week for the MIT Stephen A. Schwarzman College of Computing focused on the Institute’s new role in helping society navigate a promising yet challenging future for artificial intelligence (AI), as it seeps into nearly all aspects of society.

On Thursday, the final day of the event, a series of talks and panel discussions by researchers and industry experts conveyed enthusiasm for AI-enabled advances in many global sectors, but emphasized concerns — on topics such as data privacy, job automation, and personal and social issues — that accompany the computing revolution. The day also included a panel called “Computing for the People: Ethics and AI,” whose participants agreed collaboration is key to make sure artificial intelligence serves the public good.

Kicking off the day’s events, MIT President Rafael Reif said the MIT Schwarzman College of Computing will train students in an interdisciplinary approach to AI. It will also train them to take a step back and weigh potential downsides of AI, which is poised to disrupt “every sector of our society.”

“Everyone knows pushing the limits of new technologies can be so thrilling that it’s hard to think about consequences and how [AI] too might be misused,” Reif said. “It is time to educate a new generation of technologists in the public interest, and I’m optimistic that the MIT Schwarzman College [of Computing] is the right place for that job.”

In opening remarks, Massachusetts Governor Charlie Baker gave MIT “enormous credit” for focusing its research and education on the positive and negative impact of AI. “Having a place like MIT … think about the whole picture in respect to what this is going to mean for individuals, businesses, governments, and society is a gift,” he said.

Personal and industrial AI

In a panel discussion titled “Computing the Future: Setting New Directions,” MIT alumnus Drew Houston ’05, co-founder of Dropbox, described an idyllic future in which, by 2030, AI could take over many tedious professional tasks, freeing humans to be more creative and productive.

Workers today, Houston said, spend more than 60 percent of their working lives organizing emails, coordinating schedules, and planning various aspects of their job. As computers start refining skills — such as analyzing and answering queries in natural language, and understanding very complex systems — each of us may soon have AI-based assistants that can handle many of those mundane tasks, he said.

“We’re on the eve of a new generation of our partnership with machines … where machines will take a lot of the busy work so people can … spend our working days on the subset of our work that’s really fulfilling and meaningful,” Houston said. “My hope is that, in 2030, we’ll look back on now as the beginning of a revolution that freed our minds the way the industrial revolution freed our hands. My last hope is that … the new [MIT Schwarzman College of Computing] is the place where that revolution is born.”   

Speaking with reporters before the panel discussion “Computing for the Marketplace: Entrepreneurship and AI,” Eric Schmidt, former executive chairman of Alphabet and a visiting innovation fellow at MIT, also spoke of a coming age of AI assistants. Smart teddy bears could help children learn language, virtual assistants could plan people’s days, and personal robots could ensure the elderly take medication on schedule. “This model of an assistant … is at the basis of the vision of how people will see a difference in our lives every day,” Schmidt said.

He noted many emerging AI-based research and business opportunities, including analyzing patient data to predict risk of diseases, discovering new compounds for drug discovery, and predicting regions where wind farms produce the most power, which is critical for obtaining clean-energy funding. “MIT is at the forefront of every single example that I just gave,” Schmidt said.

When asked by panel moderator Katie Rae, executive director of The Engine, what she thinks is the most significant aspect of AI in industry, iRobot co-founder Helen Greiner cited supply chain automation. Robots could, for instance, package goods more quickly and efficiently, and driverless delivery trucks could soon deliver those packages, she said: “Logistics in general will be changed” in the coming years.

Finding an algorithmic utopia

For Institute Professor Robert Langer, another panelist in “Computing for the Marketplace,” AI holds great promise for early disease diagnoses. With enough medical data, for instance, AI models can identify biological “fingerprints” of certain diseases in patients. “Then, you can use AI to analyze those fingerprints and decide what … gives someone a risk of cancer,” he said. “You can do drug testing that way too. You can see [a patient has] a fingerprint that … shows you that a drug will treat the cancer for that person.”

But in the “Computing the Future” section, David Siegel, co-chair of Two Sigma Investments and founding advisor for the MIT Quest for Intelligence, addressed issues with data, which is at the heart of AI. With the aid of AI, Siegel has seen computers go from helpful assistants to “routinely making decisions for people” in business, health care, and other areas. While AI models can benefit the world, “there is a fear that we may move in a direction that’s far from an algorithmic utopia.”

Siegel drew parallels between AI and the popular satirical film “Dr. Strangelove,” in which an “algorithmic doomsday machine” threatens to destroy the world. AI algorithms must be made unbiased, safe, and secure, he said. That involves dedicated research in several important areas, at the MIT Schwarzman College of Computing and around the globe, “to avoid a Strangelove-like future.”

One important area is data bias and security. Data bias, for instance, leads to inaccurate and untrustworthy algorithms. And if researchers can guarantee the privacy of medical data, he added, patients may be more willing to contribute their records to medical research.

Siegel noted a real-world example where, due to privacy concerns, the Centers for Medicare and Medicaid Services years ago withheld patient records from a large research dataset being used to study substance misuse, which is responsible for tens of thousands of U.S. deaths annually. “That omission was a big loss for researchers and, by extension, patients,” he said. “We are missing the opportunity to solve pressing problems because of the lack of accessible data. … Without solutions, the algorithms that drive our world are at high risk of becoming data-compromised.”

Seeking humanity in AI

In a panel discussion earlier in the day, “Computing: Reflections and the Path Forward,” Sherry Turkle, the Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology, called on people to avoid “friction-free” technologies — those that help people avoid the stress of face-to-face interactions.

AI is now “deeply woven into this [friction-free] story,” she said, noting that there are apps that help users plan walking routes, for example, to avoid people they dislike. “But who said a life without conflict … makes for the good life?” she said.

She concluded with a “call to arms” for the new college to help people understand the consequences of the digital world where confrontation is avoided, social media are scrutinized, and personal data are sold and shared with companies and governments: “It’s time to reclaim our attention, our solitude, our privacy, and our democracy.”

Speaking in the same section, Patrick H. Winston, the Ford Professor of Engineering at MIT, concluded on an equally humanistic — and optimistic — message. After walking the audience through the history of AI at MIT, including his run as director of the Artificial Intelligence Laboratory from 1972 to 1997, he told the audience he was going to discuss the greatest computing innovation of all time.

“It’s us,” he said, “because nothing can think like we can. We don’t know how to make computers do it yet, but it’s something we should aspire to. … In the end, there’s no reason why computers can’t think like we [do] and can’t be ethical and moral like we aspire to be.”

For founders of new college of computing, the human element is paramount

Stephen A. Schwarzman and MIT President L. Rafael Reif discuss the Institute’s historic new endeavor.

The new MIT Stephen A. Schwarzman College of Computing is destined to become a major center of artificial intelligence research. But a public conversation between the college’s founders on Thursday helped illuminate the very human impulses guiding it.

“It’s a remarkable expression of the human spirit that you have here,” said Stephen A. Schwarzman during a dialogue with MIT President L. Rafael Reif at MIT’s Kresge Auditorium.

Schwarzman is the principal benefactor of the college, which is intended to drive forward research in computing and artificial intelligence, and link computing to every other discipline at the Institute. Its formation represents the biggest change to MIT’s institutional structure since the 1950s. Schwarzman has delivered a $350 million gift to the Institute, as part of the roughly $1 billion effort to establish the college.

To help launch the MIT Schwarzman College of Computing, MIT held a three-day celebration this week, with dozens of speakers appearing from Tuesday to Thursday, among other campus events.

“Anybody from MIT who takes what you do for granted, it’s just that you’ve been here too long,” Schwarzman added. “Every speaker is like magic.”

To an extent, Schwarzman said, the impulse behind the founding of the college came from trips he had taken to China, where he observed intensified Chinese investment in artificial intelligence, and wanted to make sure the U.S. was also on the leading edge of AI.

“What I was interested in was taking U.S. competitiveness and really punching it up,” said Schwarzman.

And as Schwarzman and Reif recounted, the college’s origins stemmed in part from something eternally human as well: an ongoing series of conversations between them about how to increase the tempo of computing advances. After their initial discussion, Schwarzman said, he encouraged MIT to think bigger about the possible scope of the project.

As Reif noted, he knew that faculty and staff were increasingly emphasizing a need for researchers to be “bilingual,” in terms of knowing their own disciplines, and understanding how computing could help that disciplinary research.

Moreover, Reif added, while China is very strong in certain applied areas of artificial intelligence, he understood that the U.S. has unrivaled strengths in education and research, making the idea of a new computing college at MIT all the more likely to succeed.

Referring to the U.S., Reif said, “We are extremely strong in human capital in this space. Let’s just invest in ourselves and see what happens.”

Thursday afternoon’s events also included an onstage conversation about artificial intelligence between former U.S. Secretary of State Henry Kissinger and columnist Thomas L. Friedman of The New York Times.

Kissinger expressed general concern about the potentially unpredictable consequences of artificial intelligence, extending a point he raised in an essay in The Atlantic last summer.

“Working in this field is a tremendous responsibility and a tremendous challenge,” Kissinger said.

For his part, Friedman, mostly serving as an interlocutor, suggested that these advanced technologies mean humans “have never been more Godlike” than they are now. He added that “at a minimum a simple golden rule” — of equitable ethical treatment among people — “is going to be essential” for society.

At the start of the session, Robert Millard, chair of the MIT Corporation, introduced Reif and Schwarzman while noting the significance of the new college’s launch. Millard called Reif “an inspired leader who has in his tenure become the senior spokesman for higher education, in science and technology generally.”

Schwarzman, Millard observed, has “accelerated MIT further into the future” with his support for the college.

One of the celebration’s closing events was a panel called “Computing for the People: AI and Ethics.” In his discussion with Reif, Schwarzman also offered his thoughts on the ethics and social impact of innovation.

The curriculum of the new college will include ethics, and, as Schwarzman noted, it will always be important to “have a focus on the workforce … [and] the people who get dislocated” by technology.  

“The technology is going to affect the whole world, and we have to get it right,” Schwarzman said.