Welcome to the Machine

AI is redefining how we fight climate change and combat disease. It is even revolutionizing the systems responsible for its own evolution.

Jun 5, 2020


By Michael Blanding for The Engine
Illustrations by Harol Bustos

We have a romantic vision of the scientific discovery process. A white-coated chemist spends long hours in the lab, titrating, pipetting, centrifuging chemicals, until — by accident or design — they stumble upon a new molecule that might do something useful in the world. Then there are longer nights spent testing, refining, and optimizing the synthesis process, hoping that one day that chemical can be commercialized. In the luckiest of scenarios, that process might take 5 years — often it takes 10 or 20.

For Alan Aspuru-Guzik, that’s 20 years too long. “Every time a human is involved, you’re really wasting a lot of time,” says the University of Toronto chemistry professor, who wears big, grey-rimmed glasses and usually an impish grin. “You really want to have a machine doing as many things as possible.” Four years ago, while a professor at Harvard University, he succeeded in producing the first novel chemicals discovered by computer — materials for organic light-emitting diodes — selected out of 1.6 million possible candidates. From start to finish, the process took just a few months.

Since moving to Toronto two years ago, he’s expanded his vision to create a self-driving laboratory, with robotic arms doing all of the hard work of discovery, development, synthesis, and optimization, all at the beck and call of its human overlords and working much faster than human hands ever could. Once the lab is fully up and running, it could create new chemical materials at the rate of one every hour, an incredible leap forward in productivity.

That advance is all because of one thing: the power of artificial intelligence (AI). “The chemical industry is one of the last industries that has not really been taking advantage of AI,” Aspuru-Guzik says. “Of course, many other industries, like buying stuff on the Internet, have already been affected by AI, but chemistry is very hard to do because it has a very low rate of discovery and many complicated development processes.” New advances in computing power and algorithmic capability, however, have put AI on the cusp of transforming chemical discovery, as well as the innovation process of many tough technology industries, with AI serving as a virtual assistant that dramatically speeds up and improves what humans are capable of inventing on their own.

The idea that machines might possess intelligence to rival a human’s goes back to the golems and automatons of antiquity. It didn’t start becoming a reality until the 1950s, when Dartmouth computer scientist John McCarthy coined the term “artificial intelligence” [2] to describe a new class of computers capable of reasoning and making decisions for themselves when faced with novel situations. The first AI programs mastered simple tasks, such as playing tic-tac-toe or chess. Over the decades, however, the field has exploded into a variety of areas, including image processing, speech recognition, and natural language processing.

A subset of AI called machine learning (ML) promises to transform innovation. Simply put, ML searches through massive amounts of data to find patterns and then uses those patterns to make future predictions. These processes power everything from the predictions you get from shopping websites to the results you get from online search engines. “If you go to Google and type Disney, Florida, and Delta Airlines, then it’s pretty clear you are going to the Magic Kingdom,” says Scott Stern, a business professor at MIT Sloan School of Management and co-author of a recent National Bureau of Economic Research paper on AI and innovation. ML excels at predicting patterns too complex for humans to uncover, at least without a lot of bookkeeping. “Machine learning dramatically enhances the ability to predict things that are relatively rare in isolation, but once you aggregate them, they become clear,” Stern says.

(Left) Alan Aspuru-Guzik, Professor of Chemistry and Computer Science at the University of Toronto; Co-Founder of Zapata Computing; Co-Founder of Kebotix. (Right) Scott Stern, David Sarnoff Professor of Management of Technology; Professor of Technological Innovation, Entrepreneurship, and Strategic Management at MIT.

The process relies on algorithms, the sets of rules and instructions that the artificial brain uses to sift through data, supplemented by heuristics, or “rules of thumb,” that help narrow down the list of possibilities. This process has been significantly enhanced since the development in the mid-2000s of artificial neural networks, a family of algorithms that mirror the structure of the human brain, with neurons that pass information from one to another to enable “deep learning.” Since then, there has been an explosion in the applications of AI — from stock picking to facial recognition to routing your Uber. That’s been especially true in the past five years or so, as the advent of cloud computing has enabled researchers and companies to use vast amounts of computing power cheaply.
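For readers who want to see what “neurons passing information” looks like in practice, here is a minimal sketch in Python, using nothing but NumPy and an invented toy dataset (it is not drawn from any of the systems described in this article): a tiny two-layer network passes signals forward, measures its error, and nudges its connection weights backward.

```python
# A tiny two-layer neural network trained on a toy problem (XOR), written with
# plain NumPy. Signals pass forward through "neurons"; the error then flows
# backward to adjust the connection weights. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden connections
W2 = rng.normal(size=(8, 1))   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    hidden = sigmoid(X @ W1)              # forward pass: neurons "fire"
    output = sigmoid(hidden @ W2)
    error = output - y                    # how wrong the prediction is
    # backward pass: nudge every weight in the direction that reduces the error
    delta_out = error * output * (1 - output)
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ delta_out
    W1 -= 0.5 * X.T @ delta_hidden

print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))  # should approach 0, 1, 1, 0
```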

AI generally comes in two main types. The most common, supervised learning, happens when someone gives the AI an example of a target and then asks it to sift through data to find other examples of that target; unsupervised learning happens when the AI is given unstructured data and asked to find its own patterns, employing techniques such as “nearest neighbor search” to group similar objects into categories. The new frontier of AI is reinforcement learning, in which the machine brain tries different combinations by trial and error, gradually homing in on an answer based on pre-defined positive and negative feedback. Google made a splash three years ago with its AlphaGo software, which used reinforcement learning to beat the best human players at Go, a Chinese game vastly more difficult than chess, discovering new strategies no human had thought of.
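A toy contrast of the first two categories, sketched with scikit-learn on made-up data (the numbers and groupings are invented for illustration only):

```python
# Supervised vs. unsupervised learning on made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two blobs of points in a 2-D "feature space".
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[3, 3], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

# Supervised: hand the algorithm labeled examples of the target...
labels = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, labels)
print(clf.predict([[2.8, 3.1]]))   # ...and it finds more examples of that target

# Unsupervised: no labels at all; the algorithm groups similar points on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(clusters))       # two clusters it discovered by itself
```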

It’s this capacity for discovery that makes AI so exciting, not only for optimizing products and services but also for learning how to create new ones, such as the chemicals in Aspuru-Guzik’s self-driving laboratory. “If you could get predictions on unstructured data to be good enough, you could not simply apply them to existing things and optimize them,” Stern says, “but you could undertake a resequencing of the innovation process itself for the creation of fundamentally novel products, services, and inventions.” He compares the technology to lenses, which were used to make eyeglasses in the 1400s. “People got good at grinding lenses so that people could see,” Stern says. “But Galileo took those lenses and built the first telescope and was immediately able to resolve the existence of moons around planets. It allowed us to literally resolve phenomena we couldn’t even imagine, and ask new types of questions.”

Stern, who is on the board of Atomwise, a company using AI to discover new drugs, makes the distinction between AI as a “general purpose technology” that can be used to enable other technologies, and AI as a “method of invention” that allows for the creation of fundamentally new products separate from the AI itself. Both uses could impact the development of tough technologies, allowing dramatically new breakthroughs and capabilities. Among other industries, AI is poised to transform the fields of health, climate science, and advanced computing itself.

Depending on whom you talk to, the number of possible chemicals in the world is anywhere from 10⁶⁰ to 10⁸⁰ — somewhere, in other words, between the number of atoms in our solar system and the number of atoms in the visible universe. Any one of those chemicals could hold the key to the development of a new drug or other therapy. While human researchers often have a good intuition of where to explore, the sheer vastness of the space leaves ample room for AI to assist in discovering brand-new possibilities that humans might not have even considered.

“In drug discovery, you’re going to have all of these ideas of what could be a good candidate molecule,” says Connor Coley, an assistant professor of chemical engineering at MIT. “You know what could have good bioactivity, good potency, good solubility, and other properties — but in order to test it, you actually have to make it.” Creating drugs can be a complicated process, requiring multiple chemical reactions run in a specific sequence and under specific conditions, done at scale in order to synthesize the desired amount of drug to test.

Coley previously worked in the laboratory of Klavs Jensen, an MIT professor applying those principles to the synthesis and discovery of small molecules, which form the basis of many medicines. Using supervised learning techniques, he and his colleagues have used AI to simulate the testing of candidate molecules, varying catalysts, temperature, and concentration and predicting likely outcomes. “All of those variables will have some effect on the yield, efficiency, and cost of running that reaction,” says Coley. “These adaptive algorithms help you design experiments and tell you what are the most useful experiments to run.”
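The sketch below, using scikit-learn on fabricated numbers rather than any data from Jensen’s lab, shows the general shape of such an approach: fit a supervised model to past reaction conditions and their yields, then ask it which untried conditions look most promising.

```python
# Sketch: a supervised model that predicts reaction yield from conditions,
# then suggests promising untried conditions. All numbers are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
catalyst = rng.integers(0, 3, size=n)          # which of 3 catalysts was used
temperature = rng.uniform(20, 120, size=n)     # degrees C
concentration = rng.uniform(0.1, 2.0, size=n)  # mol/L

# Pretend "true" yield depends on all three variables, plus experimental noise.
yield_pct = (40 + 10 * catalyst + 0.2 * temperature
             - 8 * (concentration - 1.0) ** 2 + rng.normal(0, 3, size=n))

X = np.column_stack([catalyst, temperature, concentration])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, yield_pct)

# Ask the trained model which untried conditions look most promising.
candidates = np.column_stack([rng.integers(0, 3, 500),
                              rng.uniform(20, 120, 500),
                              rng.uniform(0.1, 2.0, 500)])
best = candidates[np.argmax(model.predict(candidates))]
print("Most promising (catalyst, temperature, concentration):", best)
```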

More recently, Coley and his colleagues have been using supervised machine learning on a huge corpus of chemical reactions collated by Elsevier and other companies to help identify new molecules to test. “We’re taking information chemists have been publishing for the last couple hundred years, and then trying to apply that to new molecules,” Coley says. “Then we put out a full synthetic recipe about how you might be able to make it — everything you should buy, how you should combine them, and what steps you should take to produce the molecule of interest.” Coley and his colleagues, including Jensen, have also worked on creating a robotic platform [3] similar to Aspuru-Guzik’s self-driving laboratory to at least partially automate that process. “The idea is to make the research and development process a little bit easier and a little bit faster,” Coley says.

Coley is a member of MIT’s Machine Learning for Pharmaceutical Discovery and Synthesis Consortium, a group that includes both MIT researchers and representatives from 14 pharmaceutical and biotech companies to help better train algorithms to discover and create new drugs. The companies provide financial support as well as feedback from their own processes to the labs, which produce models and algorithms for the use of members. “We benefit tremendously from understanding what are the real pain points in their workflows, and how well the existing solutions work to alleviate those pain points,” Coley says. In return, such techniques could help companies to quantitatively determine which candidates for drug discovery they should pursue for testing.

Another consortium member, MIT computer science professor Regina Barzilay, recently led the use of deep learning techniques to discover a new antibiotic. Historically, antibiotics have been discovered by screening vast numbers of soil-dwelling microbes or by creating synthetic variants of those discoveries, as she, James Collins (a Professor of Medical Engineering and Science at MIT), and colleagues wrote in a paper published in Cell in February 2020 [4]. As the number of new molecules found this way has shrunk, however, pharmaceutical companies have turned to screening large chemical libraries in search of novel drugs. Barzilay’s team trained a deep neural network on more than 2,000 molecules to learn which molecular features inhibited the growth of E. coli bacteria, then applied their model to libraries with more than 100 million chemicals, ranking the results.
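The screen-then-rank idea can be sketched in a few lines of Python. Everything below is a stand-in: molecules are represented as made-up feature vectors rather than real chemical fingerprints, and the model is a generic classifier, not the network Barzilay’s team used.

```python
# Sketch of "train on a small labeled set, then rank a huge unlabeled library."
# Molecules here are fake fixed-length feature vectors, not real fingerprints.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_train, n_library, n_features = 2000, 100_000, 64

X_train = rng.random((n_train, n_features))
inhibits_growth = (X_train[:, :4].sum(axis=1) > 2.2).astype(int)  # invented label

model = GradientBoostingClassifier().fit(X_train, inhibits_growth)

library = rng.random((n_library, n_features))   # the big unscreened library
scores = model.predict_proba(library)[:, 1]     # predicted probability of activity
top_hits = np.argsort(scores)[::-1][:100]       # the 100 best candidates to test in the lab
print("Top candidate indices:", top_hits[:5])
```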

This process resulted in the identification of a drug with the unwieldy name c-Jun N-terminal kinase inhibitor SU3327, which the researchers renamed Halicin. Not only was it effective against E. coli, but they also found that it inhibited other pathogens, including A. baumannii, a bacterium that has developed antibiotic resistance and is one of the World Health Organization’s top pathogens to target. In addition, they discovered two other antibiotics effective against antibiotic-resistant E. coli. “The machine learning model can explore, in silico, large chemical spaces that can be prohibitively expensive for traditional experimental approaches,” Barzilay told MIT News [5]. “This work highlights the significant impact that machine learning can have on early antibiotic discovery efforts by simultaneously increasing the accuracy rate of lead compound identification and decreasing the cost of screening efforts,” the researchers wrote.

Another veteran of Jensen’s lab, Dale Thomas, has spun out a company to synthesize molecules for commercialization. Called MyTide, the startup focuses on peptides, short chains of amino acids, including insulin and other hormones, that drive reactions inside the body. Among other uses, they can help in regulating obesity, developing vaccines, and targeting cancer through personalized peptide therapeutics. “You can develop a peptide specifically tuned to cancer within a patient’s body and trigger an immune response that allows the body to recognize a tumor as something that is not natural,” Thomas says.

(L-R) Connor Coley, Assistant Professor of Chemical Engineering at MIT; Regina Barzilay, Delta Electronics professor in the Department of Electrical Engineering at MIT; Dale Thomas, Co-founder of mytide therapeutics; Finale Doshi-Velez, John L. Loeb associate professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences

With 20 different amino acids to choose from, however, the number of potential peptides that could be used in therapies is overwhelming. “There are over a trillion different peptides that can be manufactured — and that’s an understatement,” Thomas says. In addition, each amide bond between amino acids is unique, requiring different conditions to form, and with dangerous side reactions if you get it wrong. “You have to understand not only the conditions for linking one amino acid to another, which is 400 different types of reactions, but also how previous couplings affect the current coupling. If you do not have the correct data collection and ability to use in-line analytics, you don’t have the ability to capture the data of this building of peptides.”

In order to solve the problem, the company has built a synthesizing machine that can move amino acids and other reagents through a reactor at the rate of 3,000 reactions a day. As the reactions occur, the system collects second-by-second data, including traces of UV light absorbed at different wavelengths, to show the extent of coupling of molecules. “That enables us to leverage machine learning, because we are building high-quality datasets that feed into machine learning models,” says Thomas. Using the output from those models, the machine operators can then adjust variables such as the type of reagents, reaction time, and temperature, to optimize molecule synthesis, as well as to create new molecules for new purposes.

“After we run 5,000 molecules through this platform, we have 150,000 data points on the output. And we can feed that back in, in this closed loop manner, in order to redefine the next set of peptides that we should manufacture, and then carry through again to find the right target.” Using these techniques, the company has been able to reduce the time required to synthesize non-naturally occurring peptides from nine weeks to four days — and use fewer reagents overall. “Our company’s real aim is to start building in more complex chemistries than have previously been able to be manufactured,” Thomas says. For example, the company is experimenting with building using D-amino acids, mirror images of naturally occurring L-amino acids that show increased bioactivity. Other projects include adding polymer side chains to peptides to create cardiac medicines and oils to treat eczema.
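In rough outline, the closed loop Thomas describes looks something like the following sketch, with an invented stand-in for the synthesizer and fabricated numbers throughout:

```python
# Sketch of the closed loop: synthesize a batch, measure the outcome, retrain a
# model on everything measured so far, and let the model choose the next batch.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def run_synthesis(conditions):
    """Stand-in for the real instrument: returns a noisy 'yield' per condition."""
    reagent, time_min, temp = conditions.T
    return (80 - 5 * (reagent - 1) ** 2 - 0.1 * (temp - 40) ** 2
            + 0.2 * time_min + rng.normal(0, 2, size=len(conditions)))

# Initial batch: 20 random combinations of (reagent type, time, temperature).
measured_X = rng.uniform([0, 5, 20], [3, 60, 80], size=(20, 3))
measured_y = run_synthesis(measured_X)

for cycle in range(5):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(measured_X, measured_y)
    # Propose candidates and keep the ten the model predicts will perform best.
    candidates = rng.uniform([0, 5, 20], [3, 60, 80], size=(500, 3))
    next_batch = candidates[np.argsort(model.predict(candidates))[-10:]]
    measured_X = np.vstack([measured_X, next_batch])
    measured_y = np.concatenate([measured_y, run_synthesis(next_batch)])

print("Best measured yield so far:", round(measured_y.max(), 1))
```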

In addition to drug development, researchers are using AI to create other new therapies as well. Cellino, one of The Engine’s portfolio companies, is developing cell and tissue therapies by guiding the development of stem cells using lasers. Induced pluripotent stem cells (iPSCs) are grown on a well plate with a 20-nanometer undercoating of titanium. A laser shot at the bottom of the plate can target an individual cell, forming a tiny bubble that opens a pore in the cell membrane through which a liquid-borne cargo of growth factors or other components can enter, or — if the bubble is big enough — kills the cell outright. Using this control, company scientists can precisely direct the differentiation of cells into therapeutic cell types to treat various disorders.

“We are positioning ourselves as a cell and tissue foundry,” says Marinna Madrid, cofounder of Cellino. “But what’s unique about our technology is that we’re able to do that with single-cell control because of the laser-based technology we use.” The company hopes to partner with pharma and biotech companies that are differentiating cells for therapeutic applications but are having difficulty in achieving uniform, high-quality cells for implantation.

One therapy the company has been working on, for example, is the implantation of retinal pigment epithelial cells in order to treat age-related macular degeneration (AMD). As it grows these cells, the company has worked with biologists to identify which cells or regions of tissue to keep or enhance, killing the others with the laser. “There are a few things that correlate with the cells being good at doing their job in the body,” says chief science officer Arnaldo Pereira, “for example, the amount of melanin granules in them, or how easily they can phagocytose the garbage that the photoreceptors sitting atop them slough off.”

(Left) Marinna Madrid, Co-founder of Cellino (Right) Arnaldo Pereira, Principal Machine Learning Engineer at Cellino

Ordinarily, it would take a Ph.D.-level biologist painstaking hours at a microscope to identify these cells. Using AI, however, Pereira and his team have been able to train an algorithm on microscope images to quickly identify which cells to save. “If you can remove the underperforming cells and leave space for good cells to grow in that space, then you can increase your yield, and also increase the variability of cells that you get in the end,” Pereira says. The company is working on a way to fully automate the workflow, including cell detection and lysing with the laser, in order to streamline the process.

The same technique could be used for other cell therapies, for example, transplanting dopaminergic neurons to treat Parkinson’s disease. Some studies have shown that transplanting such cells at an earlier progenitor stage produces better outcomes, but it can be difficult to determine at that stage which cells will succeed in fully differentiating into mature cells. “We plan to develop machine learning algorithms that can predict at earlier and earlier times whether or not the cell is going to be functional down the road,” Madrid says.

In addition to killing under-performing cells, the process can also manipulate cells: by controlling the strength of the laser, it can poke just a small hole through which liquid-borne cargo can enter, without killing the cell. The process could be used to insert growth factors or gene modulators that turn genes on or off within cells, controlling their growth individually with a high degree of precision. “So if you see that a cell is starting to differentiate down an incorrect pathway, but it’s not completely lost, you could deliver a gene modulator into that cell to turn the right genes on or off to kind of course correct,” Madrid says.

How AI is Personalizing Medicine

Doctors are already starting to use AI in limited ways as a virtual medical assistant with special training for pattern recognition. Radiology has been an early pioneer in this area, using AI’s ability to recognize complex patterns in images [6]. The FDA has already approved more than 50 algorithms [7] for use with AI platforms to help detect early signs of lung, breast, and brain cancer. In Uganda, an AI Lab at Makerere University uses a smartphone app and a microscope to detect malaria in blood samples, compensating for a lack of trained pathologists in the country.

In the coming years, however, AI has the potential to transform every aspect of clinical diagnosis and treatment. “AI is not the decision-maker,” says Finale Doshi-Velez, associate professor of computer science at Harvard, who has pioneered such efforts, “but it can be the tool that provides the information and potentially the recommendations or anti-recommendations based on a patient’s history to help doctors make decisions under conditions of uncertainty.”

With new legislation on Electronic Medical Records (EMR) under consideration to let patients better share their medical information digitally, new apps could help doctors sift through a patient’s history quickly in order to arrive at more effective diagnoses and treatments. “You have a lot of data about the patient in the health record, but the doctor just doesn’t have the time to look at it,” says Doshi-Velez. An app could flag, for example, when a cardiac medication a doctor prescribes might produce unforeseen side effects. “Errors may get made simply because the doctor didn’t know enough about the patient.”

Doshi-Velez is currently working on more advanced algorithms to personalize treatment for diseases such as HIV, which relies on complex cocktails of drugs given in individualized combinations, dosages, and sequences. Using data on efficacy in past patients, an algorithm could analyze a new patient’s history, including resistance mutations in the virus and previously prescribed drugs, to recommend an optimal cocktail for that patient. “The trick is to look at the data and then propose alternate strategies that have not been tested, and be able to assess how well you think your new proposal will do,” says Doshi-Velez.

Not all conditions may require such highly individualized treatments, however. Doshi-Velez’s lab is also working on algorithms to produce more generalized treatment protocols, using past data to sort patients into categories based on the symptoms they present. “My hunch is that a lot of times that stratification is going to get us a long way; if you could split people among the right variables, then you could put them into one of 300 bins, or 10 bins,” Doshi-Velez says. Once the AI has helped create those bins, she says, humans could help determine which treatment bins patients fall into. In an ICU setting, AI could even develop treatment protocols that might fit on just a few pieces of paper, differentiating patients based on temperature, blood pressure, and other symptoms. “They could say, whenever you’re making this decision ask a patient about these five variables and follow the flow chart.”
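As a rough illustration of the stratification idea (not a clinical tool, and using synthetic numbers), a clustering algorithm can sort patients into a small number of bins from routine measurements, and a new patient can then be assigned to the nearest bin:

```python
# Sketch: cluster patients into "bins" from routine measurements, then assign a
# new patient to the nearest bin. Synthetic data; not a clinical tool.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Five made-up variables per patient: temperature, systolic blood pressure,
# heart rate, respiratory rate, oxygen saturation.
patients = np.column_stack([
    rng.normal(37.2, 0.8, 300), rng.normal(115, 18, 300), rng.normal(85, 15, 300),
    rng.normal(17, 4, 300), rng.normal(96, 3, 300),
])

scaler = StandardScaler().fit(patients)          # put all variables on one scale
bins = KMeans(n_clusters=10, n_init=10, random_state=0).fit(scaler.transform(patients))

new_patient = scaler.transform([[39.0, 90, 120, 28, 88]])
print("Assign to bin:", bins.predict(new_patient)[0])  # clinician looks up that bin's protocol
```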

In addition to its applications in the energy industry, a burgeoning new field for machine learning is materials production, similar to the way it has been used for drug design. Instead of molecules with clinical effects, however, scientists are looking to create materials with specific chemical properties. MIT professor of mechanical engineering Tonio Buonassisi, for example, has increasingly devoted resources in his lab to creating materials for use in solar panels and other environmentally sustainable applications. “I think of machine learning as a cognitive assistant that helps the researcher do their job more efficiently and faster,” he says.

In 2018, he started a program in Singapore through the Singapore-MIT Alliance for Research and Technology (SMART), with $18 million (USD) in funding from the Advanced Manufacturing & Engineering (AME) domain of the National Research Foundation (NRF) of Singapore, specifically to use AI to speed up the creation of solar cells and other environmental materials. Much of the challenge in producing more efficient solar cells lies in optimizing several variables at once. “It’s not enough that it absorbs a lot of light, it’s not enough that it has the ability to extract a lot of carriers, it’s not enough that it’s cheap, it has to be all of those things at the same time — and more,” Buonassisi says. “In mathematics, we call it a heavily constrained optimization problem.”

The human brain isn’t good at keeping track of so many different variables at once, so machine learning can help solve these complex problems. “It’s a matter of bookkeeping,” Buonassisi says. “Our brains can’t really handle these very high dimensional spaces very well, but the machine learning algorithms can.” To test the approach, Buonassisi applied it to a “toy problem,” which sought to tune the size and shape of silver nanoparticles so that they would change to a desired color at a certain temperature. “With only 100 points of training data, we could predict this output parameter within just a few degrees Celsius,” he says. “It was a beautiful case study of creating materials with user-defined properties that nobody had ever made before. The algorithm told us what to make.”
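A sketch of how roughly 100 measurements can drive such a prediction, using a Gaussian-process surrogate model in scikit-learn; the nanoparticle “data” here are fabricated, and the model is an illustrative stand-in rather than the one Buonassisi’s group used:

```python
# Sketch: fit a surrogate model to ~100 measured nanoparticle designs, then ask
# which design should change color at a target temperature. Fabricated data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
size_nm = rng.uniform(10, 100, 100)          # particle size
aspect_ratio = rng.uniform(1, 5, 100)        # particle shape
transition_C = 60 + 0.3 * size_nm - 4 * aspect_ratio + rng.normal(0, 1.5, 100)

X = np.column_stack([size_nm, aspect_ratio])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0), alpha=2.0,
                              normalize_y=True).fit(X, transition_C)

# Scan a grid of candidate designs for one predicted to switch color at 75 C.
grid = np.column_stack([g.ravel() for g in np.meshgrid(np.linspace(10, 100, 50),
                                                       np.linspace(1, 5, 50))])
pred, std = gp.predict(grid, return_std=True)
idx = np.argmin(np.abs(pred - 75.0))
print("Suggested size (nm) and aspect ratio:", grid[idx],
      "predicted within about", round(std[idx], 1), "degrees C")
```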

For their next experiment, Buonassisi’s lab created a convolutional neural network that could predict stable crystals made from a set of 28 precursor solutions that could be used in a particular type of solar cell. “It is a very slow technique,” Buonassisi says. “It took us three to five hours per sample to acquire the data and for the human to analyze it. And we developed a machine-learning algorithm that cut that time down to five minutes per sample.” In the end, the process discovered two new materials, including a novel alloy, and four crystals that had only been grown in bulk before but could now be spread as a thin film. “In the old way of doing science, this would’ve been six independent research papers that each would have taken three or four months to produce, but we’re doing it here in two months. The rate at which you are learning is increasing thanks to the availability of these tools.”

The crystal films might be useful for optoelectronic applications, such as light-emitting diodes, Buonassisi says. In order to create more efficient materials for solar panels, however, you’d need to optimize not just for stability but for long-range carrier transport as well. “To really change the world, you need to move from single-objective optimization to multi-objective simultaneous optimization, and that’s where some of our experiments are going at the moment,” he says. “Now we are starting to push stability and performance at the same time.” The AI zooms in on the region of multi-variable parameter space that’s most promising, then churns through samples in that region and does the final optimization from there. That emergent toolset should, knock on wood, yield many materials with user-defined properties, and hopefully a lot of them will succeed in addressing climate and sustainability needs within the next two to five years.

Over the longer term, the process might even be able to create mass-customized materials, for example, solar materials tailored to the locations where they will be used. “We know, for example, that a material with a given band gap would perform better in Singapore than it would in Boston because the temperature and humidity are different,” Buonassisi says. “If you had the power to tailor a solar cell for one region versus another, you could squeeze out a few extra percent energy yield, which is a really big deal in terms of profitability for companies.”

Another organization tackling similar problems of designing new materials to solve climate challenges is Mission Innovation, a worldwide nonprofit funded in part by Bill Gates to help spur research towards the goals of the Paris Climate Accords. Among its eight “challenges” are such fields as batteries, smart grids, and biofuels, as well as one on Clean Energy Materials [9] specifically dedicated to using AI to create new environment-friendly materials. “Alan was the brainchild behind it,” says Mark Kozdras, a program manager at Natural Resources Canada and the co-lead of the challenge. After proposing the initiative, Aspuru-Guzik helped lead the first international conference dedicated to the concept in Mexico City in September 2017.

Since then, Kozdras has led the effort, with an initial $10.8 million grant from the Canadian government and research support from several Canadian universities, to create a self-driving laboratory specifically engineered for sustainable materials, called a Materials Acceleration Platform, or MAP. One of its most successful efforts to date is a robotic platform created by researchers at the University of British Columbia (UBC) along with Aspuru-Guzik called Ada, which uses machine learning to design and synthesize thin-film materials. In a video on the Project Ada website, a white robotic arm swivels around a platform, selecting test tubes full of reagents, twirling them in a spin coater, and pipetting them to combine on a well plate. As a proof of concept, the UBC lab showed how they could use the platform to create a new hole transport material, a component of certain types of solar cells, optimizing it on two parameters: stability and connectivity. In a working paper published in March of this year, the team described how the AI platform shrunk the number of experiments needed to discover the material from 1,000 to 30.

(L-R) Tonio Buonassisi, Professor of Mechanical Engineering at MIT; Corey Capasso, Founder and CEO of Urbint; Mark Kozdras, Program Manager at Natural Resources Canada

“They’re taking experiments that could take a year and doing them in a week,” Kozdras says, “and with a tenth of the chemicals.” The Canadian government has now dedicated in excess of $28 million to build a new two-floor laboratory dedicated specifically to MAPs that can solve a variety of chemical problems. In addition to photovoltaic materials, Kozdras says, the lab is planning to explore batteries and thermoelectric materials that could transform waste heat to energy. Another candidate is concrete, which is similarly created by mixing materials that vary in composition from place to place. An AI brain could optimize production for different areas based on their unique raw materials. “If you could increase the strength of cement, then you could use less material,” Kozdras says. “Even a 1 percent reduction could have a huge environmental impact.”

Aspuru-Guzik also co-founded a company called Kebotix designed specifically to use AI and self-driving laboratories to custom-design new materials based on previously undiscovered molecules. “Our tagline really is ‘Materials for tomorrow today,’” says Jill Becker, the company’s CEO, a Harvard chemistry Ph.D. and formerly CEO of the successful company Cambridge NanoTech. “What we want to be is the DuPont of the 21st century.” Currently, the Cambridge-based company employs 17 people and recently closed an $11.4 million Series A round.

Along with developing novel chemicals to suit the specific needs of clients, Kebotix is consciously focused on environmental sustainability. “We have aligned ourselves with several of the United Nations sustainability goals, and are really interested in making chemicals and materials green,” says Becker, an irreverent entrepreneur given to wearing a leather jacket onstage. “Like one day I’m envisioning the future where I can walk up to one of our self-driving labs and say, ‘Hey, Kebotix platform, make me a chemical that will make me a truly recyclable coffee cup and won’t leach out nasty shit that gives people stomach cancer.’”

Along with neural networks and other techniques, the company uses a type of unsupervised AI for unstructured data called a generative adversarial network (GAN), refined by Aspuru-Guzik, which can explore a space of hundreds of millions of potential chemical candidates in search of viable molecules. Shown the structure of a particular molecule, say ibuprofen, the GAN crawls the space, generating other molecules it thinks will behave similarly, then automatically challenging and rejecting them as it refines its choices. Alternatively, the GAN can explore the space between multiple molecules at once. Over video chat, Becker puts up a slide with an image of a woman wearing eye-shadow and another woman smoking a cigarette, along with a half-dozen images between them that show a graded combination of the two.

“We call this the road from aspirin to caffeine,” Becker says. “Aspirin’s on the left — that would be the lady with the cigarette — and caffeine is on the right — that’s the lady with the eye-shadow, and instead of a roadmap of pictures of women that this narcissistic society doesn’t need, it dreams up molecules in the latent space. Then we can go back in there and say, hey, I’ve never seen you before, can I make you? What else do you do? Can I patent you? What are your properties?”
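The “road” Becker describes is an interpolation in a generative model’s latent space. The sketch below shows the general mechanics; encode() and decode() are placeholders standing in for a trained model, not a real Kebotix API:

```python
# Sketch: walk through the latent space of a generative model between two known
# molecules and decode the points along the way. encode() and decode() are
# placeholders for a trained model; they are not a real API.
import numpy as np

def encode(molecule_name):
    """Placeholder: a trained model would map a molecule to a latent vector."""
    seed = sum(ord(c) for c in molecule_name)
    return np.random.default_rng(seed).normal(size=32)

def decode(latent_vector):
    """Placeholder: a trained model would return a candidate molecular structure."""
    return f"candidate-molecule[{np.round(latent_vector[:2], 2)}...]"

z_start, z_end = encode("aspirin"), encode("caffeine")

for alpha in np.linspace(0.0, 1.0, 6):
    z = (1 - alpha) * z_start + alpha * z_end   # a point part-way along the "road"
    print(f"{alpha:.1f} -> {decode(z)}")        # each decoded point is a new candidate to vet
```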

So far, the company has both worked with clients and dreamed up and patented its own unique molecules. Kebotix gave one company, for example, some 2,300 possible chemical compounds, all ranked against certain target properties, including boiling point, vapor pressure, and melt viscosity. “They liked four of them, and made three of them,” Becker says. “That saved them years and hundreds of thousands of dollars.” In its own explorations, the company searched through a dataset of 7 million chemicals in order to find a new electrochromic material. “So it should be transparent, and when you send a current through it, it should give off light, but not heat,” Becker says. The company has identified one such material it is in the process of patenting for smart windows, which can change color and transparency to conserve heat. “You can imagine the savings in terms of cost of electricity,” Becker says.

The Utility of AI

Another major challenge that AI is beginning to tackle is the threat global warming poses to our climate. Already, AI is being used in “smart grids” that distribute the energy load more efficiently, requiring less standing energy at power plants, and in “smart buildings” that draw electricity for heating and cooling systems at off-peak times to put less pressure on the system as a whole. A new company, Urbint, is using a vast number of variables to help utilities cut down on direct leakage of greenhouse gases into the atmosphere. Among other energy infrastructure, Urbint targets natural gas pipelines, which transport methane gas — 84 times more heat-trapping than carbon dioxide — and are prone to corrosion and breakage.

The United States has over 2 million miles of natural gas pipelines and countless more local pipes, many over 100 years old. “Just last summer, a local utility replaced a gas pipe in Chicago that had been installed in 1859, the year before Lincoln was elected president, and had leaked 30 times in the past 30 years,” says Urbint founder and CEO Corey Capasso. A successful software entrepreneur, Capasso helped found Urbint two years ago to help companies manage their whole infrastructure using AI. Most of the threats to energy assets come from outside, such as fallen trees from wildfires, corrosion from flooding, or a backhoe hitting an underground pipe. “If we could predict and pinpoint these very specific risks, then the utility could anticipate those risks and take action,” Capasso says.

The company was launched in 2018 with $25 million in funding, mostly from Energy Impact Partners and National Grid Partners, and now serves 40 clients, including the 10 largest natural gas companies. For each company, Urbint gathers data on the location of assets and historical data on damages. “We get three or four rows of data from the utility, and then we add hundreds and hundreds of columns of our own to that,” Capasso says. Among the data it adds are soil conditions, weather conditions, and construction activity in the area. “We even realized that moon phase has something to do with corrosion,” Capasso says. “In some areas, the moon affects the tides, which affect underground aquifers, which can lead to increased corrosion in certain types of soil conditions.”

The company then applies AI models to identify, in real time, the patterns and sequences of events that could lead to specific types of leaks or other incidents, and then helps the utility incorporate that prediction into its workflow. “If we predict, ‘Hey, this excavator is going to hit a gas pipe,’ the utility can drive on site and talk to the excavator in person and tell him why his job is extra-risky.” So far, he says, the company calculates that it has been able to help large utilities reduce third-party damages by more than 30 percent. “The whole goal is to make a prediction and then take an intervention to reduce the risk,” Capasso says.

Artificial intelligence is using advanced computing to transform every other industry, so perhaps it was inevitable that AI would be used to optimize computing itself. Earlier this year, Google announced that it was using AI to help determine the optimal placement of components on chips for its Tensor Processing Unit, a processor designed to accelerate Google’s machine learning framework TensorFlow — in other words, using AI to develop a better chip for AI [12]. Ordinarily, human designers determine chip placement using a set of ingrained heuristics to reduce the amount of wire needed to connect hundreds of components.

(L-R) Jill Becker, CEO of Kebotix; Jeff Dean, Lead of Google AI; Mike Davies, Director of Neuromorphic Computing Lab at Intel; Christopher Savoie, CEO and Founder of Zapata Computing; Yudong Cao, CTO of Zapata Computing

In a paper released in March of this year, the company describes how it is using a different type of machine learning called deep reinforcement learning — the same kind of AI that Google used to create its AlphaGo program — to solve the problem. Instead of predicting results based on a pre-trained set of data, deep reinforcement learning picks solutions to problems based on how closely they meet a pre-defined “reward function” — in this case, a composite of different measures of performance. As it generates different solutions, the model uses positive and negative reinforcement to guide it towards the best solution.
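To make the reward-function idea concrete, here is a heavily simplified, invented example of reinforcement learning for placement: five components are arranged on a line so that connected ones end up close together, with the reward defined as negative total wire length. It illustrates the basic REINFORCE recipe in miniature and is not Google’s method:

```python
# Invented miniature of reinforcement learning with a reward function: place 5
# components in 5 slots on a line so that wired-together components sit close.
# Reward = negative total wire length; REINFORCE nudges the policy toward
# placements that score better. Not Google's method, just the general recipe.
import numpy as np

rng = np.random.default_rng(0)
n = 5
nets = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]   # which components are wired together
theta = np.zeros((n, n))                          # policy scores: component -> slot

def sample_placement(theta):
    slots_left, placement, trace = list(range(n)), [], []
    for comp in range(n):
        scores = theta[comp, slots_left]
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        k = rng.choice(len(slots_left), p=probs)
        trace.append((comp, list(slots_left), k, probs))
        placement.append(slots_left.pop(k))
    return placement, trace

def reward(placement):
    return -sum(abs(placement[a] - placement[b]) for a, b in nets)  # shorter wires are better

baseline = None
for step in range(2000):
    placement, trace = sample_placement(theta)
    r = reward(placement)
    baseline = r if baseline is None else 0.95 * baseline + 0.05 * r
    for comp, slots, k, probs in trace:            # policy-gradient (REINFORCE) update
        grad = -probs
        grad[k] += 1.0
        theta[comp, slots] += 0.1 * (r - baseline) * grad

final, _ = sample_placement(theta)
print("Learned placement:", final, "total wire length:", -reward(final))
```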

While Google engineers acknowledge the difficulty of accurately programming such models, they have been able to use an algorithm to generate tens of thousands of possible solutions to chip placement, gradually homing in on the best one. At the International Solid-State Circuits Conference in San Francisco this February, Google’s head of AI, Jeff Dean, described how, within 24 hours, one algorithm came up with a more efficient solution than designers who had spent up to eight weeks on the task.

Intel, meanwhile, has been using the principles of neural networks to design the very architecture of a new type of “neuromorphic” computer. Two years ago, the company released its Loihi neuromorphic research chip, which, instead of the traditional architecture of parallel processors attached to memory, is arranged around artificial neurons and synapses that learn and store information in the same way the brain does. “This is about finding a new computer architecture that is inspired more directly from our modern understanding of neuroscience of how brains process and compute information, which is dramatically different than our standard way of computing,” said Mike Davies during a press call in March to announce the release of a new neuromorphic research system, Pohoiki Springs, which integrates 768 chips containing nearly 100 million artificial neurons — roughly the number in the brain of a small mammal. Notably, the quirky system names come from geographical features on the Hawaiian Islands.

For its first application, Intel has trained an olfactory sensing system using machine learning to recognize smells. As an odor arrives, sensory neurons “spike” in a telltale pattern that is then compared to stored patterns based on training data. Unlike vision, which relies on three colors — red, green, and blue — smell integrates thousands of different olfactory components, making it a formidable problem. With only a single sample of a particular chemical, however, the system was able to outperform a deep learning AI system by 40%. It takes 3,000 samples, says Davies, for the deep learning system to catch up to the neuromorphic one, though he concedes that the traditional system eventually surpassed the new design. The speed with which it can make judgments could be an advantage in applications where a quick answer is important.
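The basic unit of this style of computing can be sketched in a few lines: a leaky integrate-and-fire neuron that accumulates input, spikes when it crosses a threshold, and stays silent otherwise. This is a generic textbook model, not Intel’s Loihi neuron:

```python
# A generic leaky integrate-and-fire neuron: it accumulates input current,
# "spikes" when it crosses a threshold, then resets. Silence costs nothing.
# Textbook illustration, not Intel's Loihi neuron model.
import numpy as np

rng = np.random.default_rng(0)
steps = 200                                   # simulate 200 one-millisecond steps
v, threshold, leak = 0.0, 1.0, 0.05
input_current = np.where(rng.random(steps) < 0.3, 0.15, 0.0)  # sparse input events

spike_times = []
for t in range(steps):
    v = v * (1 - leak) + input_current[t]     # integrate the input, leak charge away
    if v >= threshold:
        spike_times.append(t)                 # the neuron fires...
        v = 0.0                               # ...and resets
print("Spike times (ms):", spike_times)
```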

In addition, says Davies, the system uses significantly less energy than traditional computing systems, since unlike them, the neuromorphic system doesn’t store a series of 1’s and 0’s. Individual neurons only spike with 1’s, while the 0 state takes no energy to maintain. “Once you have an architecture that is actually exploiting the fact that one of these two binary states can save energy, then you can skew your whole algorithmic formulations to prefer zero states, and get gains that way,” Davies says. “The downside is you have to rethink the algorithms because all of our algorithms don’t have that property by default.” In November, the company announced the formation of a Neuromorphic Research Community, with 500 corporate members, including Accenture, Airbus, GE, and Hitachi, to experiment with the system. The company hopes that in the future, it might show promise in other machine learning problems, such as image recognition and route optimization.

Other designers are using AI to help better design next-generation quantum computers, which use quantum superpositions to short-cut thousands of cumbersome computations to arrive at solutions to problems exponentially faster [see Tough Tech 01 “The Future is Quantum”]. While a variety of different techniques exist to create quantum computers, including superconducting circuits and trapped ions, they share a common difficulty: random errors that must be corrected to get an accurate result. Until quantum computers are able to surpass the power of conventional silicon computers, AI can help bridge the gap.

“There have been a few proposals about using machine learning to detect and classify the errors in these near-term devices,” says Chris Savoie, CEO of Zapata, a quantum software company co-founded by Aspuru-Guzik. AI could help correct those errors without even necessarily knowing their source. Even without correcting errors, machine learning could help improve the accuracy of quantum computers in the near term, says Zapata chief technology officer Yudong Cao. “A quantum computer is essentially a sample generator,” he says. “A machine-learning model could capture the distribution of samples and use that as a proxy for the actual distribution.”

Among the applications that quantum computing could potentially revolutionize is artificial intelligence itself, using quantum’s superior computing power to come up with better solutions to ML optimization problems. In other words, using machine-learning techniques to optimize quantum computers would once again be like using AI to create better AI. No matter how “intelligent” artificial intelligence gets, the dystopian fantasies of sentient machines taking over the planet are likely to stay forever in the realm of science fiction. Instead, AI is likely to remain a powerful complement to human intelligence, allowing us to create breakthroughs in health, climate, computing, and other industries that we could never dream up alone.

What if Driverless Cars Could Think?

Autonomous driving couldn’t exist without artificial intelligence; the complicated nature of driving a car on a public street requires that a driving system be able to make split-second decisions about when to speed up, slow down, and turn to stay on the proper route and avoid obstacles. So far, however, autonomous driving has not lived up to the hype, with existing systems unable to show the control needed for real-life conditions on public roads.

(Left) Yibao Zhao, Co-Founder and CEO of isee (Right) Chris Baker, Co-Founder and Chief Scientist of isee

The Engine portfolio company isee has a theory for why that is so — it’s not the self-driving cars that are the problem, it’s the other cars on the road. “If we could completely remove other humans, and just have the autonomous driver out on the streets, that should already be possible,” says Yibao Zhao, co-founder and CEO. “But we can’t remove those humans, so we have to have an autonomous vehicle co-exist with other human-driven vehicles and pedestrians, and they are the most unpredictable part of the environment.”

Zhao and his co-founder Chris Baker are implementing a different kind of AI based on their previous work at MIT’s Computational Cognitive Science Lab, which focused more on making an artificial brain aware of the intelligence of others. “When humans make decisions, we have a lot of common sense knowledge that helps us to have a much better understanding of the environment in order to understand events that are happening,” Zhao says. One of those senses, known as “theory of mind,” allows us to put ourselves in the mind of another person to better understand what they are thinking and feeling and predict what they might do because of it.

“It’s fundamentally a kind of probabilistic reasoning, where we realize we don’t have certainty about the causes of events, or know what is going to happen next,” says Baker. “But for humans, uncertainty is pervasive, and yet we can make intelligent decisions and gauge risk and live healthy, successful lives.” In essence, he says, we do this by building models of the world based on possible causes and their future effects.

The company is working to train autonomous driving brains not only to anticipate the behavior of other drivers, but also to understand the causes of that behavior. “If we see someone change lanes, we ask why they did that,” Zhao says. “Are they getting ready to exit, for example, or is there something in the road? Answering that question could better determine whether we change lanes as well. It’s important that causal reasoning be a part of this. Building a sophisticated model of the environment can help us understand uncertainty and reason about the ‘what-if’ in the future.”
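The flavor of that reasoning can be sketched with a simple application of Bayes’ rule, using invented probabilities rather than anything from isee’s models: observe a lane change, update beliefs about its likely cause, and let the updated beliefs inform the next decision.

```python
# Toy Bayesian update: a nearby car changes lanes; what is the most likely cause,
# and what should we plan for? All probabilities are invented for illustration.

priors = {"preparing_to_exit": 0.2, "avoiding_obstacle": 0.1, "ordinary_pass": 0.7}
# How likely a lane change would be under each hypothesized cause.
likelihood = {"preparing_to_exit": 0.8, "avoiding_obstacle": 0.9, "ordinary_pass": 0.3}

# Bayes' rule: P(cause | lane change) is proportional to P(lane change | cause) * P(cause)
unnormalized = {cause: priors[cause] * likelihood[cause] for cause in priors}
total = sum(unnormalized.values())
posterior = {cause: round(p / total, 2) for cause, p in unnormalized.items()}
print(posterior)

# If "avoiding_obstacle" is now plausible, the planner can reason about the
# what-ifs: should we slow down, or change lanes too, before the obstacle is visible?
```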

That’s a dramatic departure from other autonomous driving companies, which are pouring their energies into creating better sensors, cameras, and handling in order to help cars better control their environment. By contrast, isee’s focus on using deep learning to teach its autonomous autos “common sense” helps them better understand their environment, and anticipate and adapt to changing circumstances on the road.

After raising a first round of funding from The Engine and a second round last year of $15 million from Founders Fund, isee has been able to design an artificial brain with enough common sense to safely put cars onto public roads, hauling freight in Texas and California. “If we can move freight more efficiently, and reduce two days to one, or one day to five hours, that’s a huge benefit to customers,” Zhao says.

“The supply chain is full of opportunities that we can use AI to optimize.”


Built by MIT, we help founders create the next generation of world-changing companies. Visit us at engine.xyz.