I’m always looking for ambitious ideas on ways we can generate huge amounts of energy with minimal carbon emissions. It’s one of the defining technical challenges of our time. The urgency of the need means we have to try lots of different ideas and figure out quickly whether each idea will actually work. Part 3 of the Netflix series Inside Bill’s Brain gives a cameo role to a tool that I rely on to get those answers: computer modeling. This tool is enormously powerful today in part because of what happened in computing during my first career. It’s important enough that I think everyone should know more about it.
Many years ago, I used to visit a friend who had a house near Kitty Hawk, the “birthplace of flight” in North Carolina. As I walked the beach there, I would picture a scene from another era: two young bicycle mechanics running down the hill after the flying machine they had built, chasing their dream of an airplane that could carry people for miles without crashing.
Earlier pioneers in aviation had taken bold leaps of faith with their inventions. A few had made that leap literally—and plunged to their deaths. But the bike mechanics Orville and Wilbur Wright did something smarter. They built a rudimentary wind tunnel. The replica I saw at Kitty Hawk was just a coffin-size box with a gas-powered fan at one end.
The brothers used their wind tunnel to experiment with model elevators, rudders, propellers, and wings they cut from saw blades and spare parts. They tried long, skinny wings and squarish, nearly flat ones, some that were curled and others that were thicker on the leading edge. None were bigger than your hand.
Wilbur and Orville built their models to help answer a hard but crucial question: Will it fly? The results let them make a leap not of faith, but of reason. The Wright Flyer worked, as the wind-tunnel experiments had predicted. And flying machines quickly put humanity on a new trajectory.
A few years before visiting Kitty Hawk, I made my own leap, starting Microsoft on the belief that a technological revolution was about to make computing easier, cheaper, and more powerful—and this change was going to happen with astonishing speed. That bet paid off in spades, including by helping scientists and engineers advance further and faster than ever before. Using models built from software and supercomputers, rather than steel and wood, they can now run thousands of elaborate virtual experiments in a day, without fear of hurting anyone or going broke.
That also makes computer modeling the perfect tool for much of what I do now, as I search for innovations and ideas that can bend the path of history again. In areas like drug discovery, the eradication of disease, and breakthroughs in energy production, bold ideas tend to come with long timelines, huge costs, and lots of uncertainty. Modeling helps on all three fronts.
Take energy. I’ve invested in companies trying to tap heat from the earth, build solar-friendly electrical grids, and develop next-generation nuclear reactors, among many other ideas. Peek inside those labs, and you’ll find clever engineers harnessing the predictive power of computer modeling. It’s played an especially important role in my work in nuclear energy that’s depicted in the documentary.
Back in 2005, Nathan Myhrvold, a former Microsoft colleague, showed me a long scientific paper on an innovative nuclear reactor and introduced me to the lead author, an inventive physicist named Lowell Wood, who would go on to beat Thomas Edison’s record for the most U.S. patents in history. Lowell claimed that this reactor could satisfy “much of humanity’s requirements for electricity in the 21st century.”
I was skeptical, but also intrigued. On the one hand, I’d grown up in the Cold War and remembered the accidents at Three Mile Island and Chernobyl—evidence of the dangers of nuclear power mishandled. On the other hand, I’d learned to listen to Nathan’s ideas. A physicist who did research with Stephen Hawking, he was Microsoft’s CTO in the 1990s and created Microsoft’s R&D lab. He has a rare talent for identifying visionary technology, and this paper described a reactor designed specifically to avoid the problems that nuclear reactors had experienced in the past.
In 2006, I started drilling into the details of the idea with Nathan and Lowell. Could the reactor really be made provably safe, with the laws of physics—not fallible human operators—keeping temperatures under control, even after an earthquake, tsunami, or plane crash? Could it truly reduce existing stockpiles of hazardous waste by burning depleted uranium as fuel?
They claimed these reactors could run for decades between refuelings, lowering electricity costs. They said that countries could run them without enriching uranium, so the technology wouldn’t raise the risk of weapons proliferation.
That all sounded great—but would it actually work?
I gave Nathan’s lab seed money to recruit a team of nuclear engineers. Step one: buy an $800 piece of nuclear modeling software from Oak Ridge National Laboratory, and optimize it until it was literally a thousand times faster and a million times more accurate.
Next, they assembled a supercomputer, which they nicknamed after the physicist Enrico Fermi, who built the first nuclear reactor. Within months, Enrico was simulating millions of neutrons bouncing around the reactor core, splitting atoms and releasing the energy of nuclear fission. It was our 21st-century version of the Wright brothers’ wind tunnel.
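To give a flavor of what a program like Enrico does, here is a heavily simplified sketch in Python. It follows each neutron on a random walk through a one-dimensional “core,” rolling the dice at every collision to decide whether the neutron scatters, is captured, or triggers a fission that releases new neutrons, and it uses the generation-to-generation growth of the population to estimate whether the chain reaction sustains itself. Every number here is invented for illustration; the real models use measured nuclear data and full 3-D geometry.

```python
import math
import random

# One-group, one-dimensional toy model. All values are invented for
# illustration; real codes use measured cross sections and 3-D geometry.
SIGMA_TOTAL = 0.30   # collisions per cm of neutron travel
P_SCATTER   = 0.60   # fraction of collisions that just change direction
P_FISSION   = 0.25   # fraction of collisions that split an atom
NU          = 2.4    # average neutrons released per fission
SLAB_WIDTH  = 100.0  # cm; a neutron outside [0, SLAB_WIDTH] has leaked away
POPULATION  = 5000   # neutron histories tracked per generation

def run_generation(positions):
    """Follow one generation of neutrons; return where new neutrons are born."""
    fission_sites = []
    for x in positions:
        direction = random.choice((-1.0, 1.0))
        while True:
            # Sample an exponentially distributed distance to the next collision
            x += direction * (-math.log(1.0 - random.random()) / SIGMA_TOTAL)
            if not 0.0 <= x <= SLAB_WIDTH:
                break                                    # leaked out of the core
            roll = random.random()
            if roll < P_SCATTER:
                direction = random.choice((-1.0, 1.0))   # scattered; keep going
            elif roll < P_SCATTER + P_FISSION:
                # Fission: bank NU new neutrons on average at this site
                born = int(NU) + (random.random() < NU - int(NU))
                fission_sites.extend([x] * born)
                break
            else:
                break                                    # captured without fission
    return fission_sites

# Power iteration over generations: k is the neutron multiplication factor.
# k > 1 means the chain reaction grows; k < 1 means it dies out.
neutrons = [random.uniform(0.0, SLAB_WIDTH) for _ in range(POPULATION)]
for generation in range(10):
    sites = run_generation(neutrons)
    if not sites:
        print("chain reaction died out")
        break
    print(f"generation {generation}: k = {len(sites) / len(neutrons):.3f}")
    neutrons = [random.choice(sites) for _ in range(POPULATION)]
```

The principle, though, is the same one Enrico exploited: follow enough random neutron histories, and the statistics converge on the behavior of the whole core.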
The model proved its worth right off the bat. We had hoped that the reactor could run on thorium, which produces less hazardous waste than uranium does in this kind of reactor. To our disappointment, modeling showed that thorium-fueled fission would eventually fizzle out. But the models also demonstrated that if we ran the reactor mainly on depleted uranium, a cheap and abundant alternative, it could hum along at high power for decades on a single load of fuel.
Over the next few years, engineers and physicists combined a bunch of different models on the computer to create a digital prototype of a nuclear plant. They experimented with thousands of different configurations of the plant, continuously making small tweaks—from the diameter of a piece of fuel to how heat and coolant flow through the plant—to see how those changes (and all sorts of worst-case scenarios like earthquakes and plane crashes) could affect the plant over its life of 60 or more years.
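In spirit, that process is a giant search over a design space: choose a value for each knob, run the models under every scenario, discard any design that violates a safety limit, and compare the survivors on cost. Here is a toy Python version of that loop; the design knobs, the stand-in physics, and the 700-degree limit are all assumptions I’ve made up for illustration, since each real evaluation is a full plant simulation.

```python
import itertools

# Hypothetical design knobs and stress scenarios. The stand-in simulate()
# below is pure arithmetic; in reality each call would be a physics run.
PIN_DIAMETERS_CM = [0.8, 0.9, 1.0, 1.1]
COOLANT_FLOWS    = [1.0, 1.2, 1.4]      # relative to a baseline flow rate
SCENARIOS        = ["normal", "earthquake", "station_blackout"]
TEMP_LIMIT_C     = 700                  # assumed safety limit, made up

def simulate(diameter, flow, scenario):
    """Stand-in model: returns (peak fuel temperature in C, relative cost)."""
    temp = 550 + 120 * (diameter - 0.8) - 80 * (flow - 1.0)
    temp += {"normal": 0, "earthquake": 60, "station_blackout": 140}[scenario]
    cost = 1.0 + 0.3 * (flow - 1.0) - 0.1 * (diameter - 0.8)
    return temp, cost

best = None
for diameter, flow in itertools.product(PIN_DIAMETERS_CM, COOLANT_FLOWS):
    # A configuration qualifies only if it stays safe in its worst scenario
    worst_temp = max(simulate(diameter, flow, s)[0] for s in SCENARIOS)
    if worst_temp > TEMP_LIMIT_C:
        continue
    _, cost = simulate(diameter, flow, "normal")
    if best is None or cost < best[0]:
        best = (cost, diameter, flow, worst_temp)

cost, diameter, flow, worst_temp = best
print(f"cheapest safe design: pin {diameter} cm, flow {flow}x, "
      f"worst-case {worst_temp:.0f} C, relative cost {cost:.2f}")
```

The payoff of doing this digitally is the sheer number of iterations: a loop like this can evaluate thousands of candidate designs in the time it would take to draft blueprints for one.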
Step by step, the models helped optimize the design and give us confidence that it could overcome real-world obstacles that had blocked nuclear energy for decades. Safer? Cleaner? Cheaper? Proliferation-resistant? Yes on all counts. We spun out the work into a venture called TerraPower, I put in more money, and we started looking for partners to build a demonstration plant.
Meanwhile, our foundation had started investing in ending polio and exploring whether other diseases could be eradicated. I funded a small team at Nathan’s lab to model how infectious diseases like malaria, TB, and HIV spread. That team grew into an institute in Seattle that has helped us more accurately target polio in places like Nigeria, where we had struggled to stay ahead of the disease.
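The workhorse of that kind of epidemiological modeling is the compartmental model, which divides a population into groups, such as susceptible, infectious, and recovered, and tracks how people move between them. Here is a minimal SIR sketch in Python with illustrative parameters; the institute’s actual models are far richer, layering on geography, demographics, mobility, and intervention strategies.

```python
# Minimal SIR model: susceptible (S), infectious (I), recovered (R),
# as fractions of the population. Parameters are illustrative and not
# fit to any real disease.
beta  = 0.30          # transmission rate per day at full mixing
gamma = 0.10          # recovery rate: 1 / (average infectious days)
S, I, R = 0.999, 0.001, 0.0
dt = 0.1              # time step in days

for step in range(int(200 / dt) + 1):       # simulate 200 days
    if step % int(25 / dt) == 0:
        print(f"day {step * dt:5.1f}: S={S:.3f} I={I:.3f} R={R:.3f}")
    new_infections = beta * S * I * dt      # dS/dt = -beta * S * I
    new_recoveries = gamma * I * dt         # dR/dt =  gamma * I
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries
```

Even a model this crude shows the key lever: push the ratio beta/gamma below one, through vaccination or reduced contact, and an outbreak can no longer sustain itself.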
Models are incredibly useful tools, but they are just tools. They can’t predict whether our reactor will get the financing it needs, or foresee, as the documentary shows, that a trade spat between the U.S. and China would derail our initial plan to build one. The real world is full of surprises. It’s fundamentally hard to model geopolitics, macroeconomics, or really anything that hinges on human decisions rather than the laws of nature.
But computational science does help us make leaps of reason that can bring us closer to a world with abundant clean energy, where poverty and preventable disease are things of the past. As the Wright brothers showed, the right invention at the right time can change the world.