Every great magic trick consists of three parts or acts. The first part is called "The Pledge". The magician shows you something ordinary: a deck of cards, a bird or a man. He shows you this object. Perhaps he asks you to inspect it to see if it is indeed real, unaltered, normal. But of course... it probably isn't. The second act is called "The Turn". The magician takes the ordinary something and makes it do something extraordinary. Now you're looking for the secret... but you won't find it, because of course you're not really looking. You don't really want to know. You want to be fooled. But you wouldn't clap yet. Because making something disappear isn't enough; you have to bring it back. That's why every magic trick has a third act, the hardest part, the part we call "The Prestige"
- Cutter, played by Sir Michael Caine, in the movie “The Prestige”
There’s an eerie similarity between magic and frontier technology. The three acts, though, are separated by long periods of time. In 2003, Elon Musk joined a fledgling automotive startup making an electric car. They called it Tesla Motors. The pledge: to build an electric car company that was also a technology company. This was against the backdrop of GM, one of the world’s largest car manufacturers, having shut down its electric car project. The world ignored Tesla Motors. It was just another car company. Who would even drive an electric car? Why would they succeed where GM hadn’t?
Yet, Tesla chugged along. The next time we heard of them was in 2008. It was their second act: the turn. They had shown that they could make a pretty good car. It surprised people. It was expensive, but it had the performance of a luxury sports car. Tesla declared that the revenue from its sales was being used to produce the car that would change everything. Two years later, Tesla launched their IPO, the first by an American car company since Ford in 1956, selling shares for $3.40 (split adjusted) at a valuation of $1.7B. But they were not quite done yet.
Over the next 10 years, Tesla set the stage for their third act: the prestige. They went from luxury electric car maker to a technology-first, full-scale electrification and decarbonization company. How did this happen? Tesla pursued radical innovation across all facets important to the customer and made pioneering advances in:
Software: over-the-air firmware updates make your car perform better over time, Summon brings your parked car to you, and Autopilot lets your car do most of the driving for you
Hardware: the Supercharger adds 75 miles of range in 5 minutes, the world’s most efficient drivetrains give you more range per charge, and large-scale factory automation speeds up manufacturing
Materials science: solar tiles that blend in with your roof’s aesthetics, and batteries with more capacity that last much longer than any other
As a customer, you could buy a Tesla solar roof, a Powerwall for your house, and a car, and declare total energy independence from the carbon-fuelled grid. Today, Tesla is worth $750B.
Have you wondered how many Teslas might be showing us their pledge right now, while we aren’t paying attention?
Why is it hard to pay attention?
All of us want to learn about interesting technological developments and thereby expand the horizons of our awareness. But in the battle for our diminishing attention spans, deeply technical material has a few handicaps. It reads dry and unrelatable. Numbers are essential for describing frontier technology, yet your brain revolts against your best intentions and pleads with you to tune out if there are more than a few.
Complexity is inherent in these discussions. On occasion, tech coverage may do a good job of avoiding jargon. But it doesn’t spend nearly enough time breaking down the conceptual complexity.
We need a new lens to look through and make sense of it all.
A new lens for a new perspective
Over the last 10 years that I have spent dabbling in frontier technologies, I have come to realise that the best way to develop this new perspective is to:
Shift the frame of reference
Study multiple trend lines concomitantly, and not in isolation
Let me explain in detail what this means and how it translates into practice.
Shift the frame of reference:
Most discussions of deeply technical topics and domains have the tendency to devolve into semantics and jargon. In 2012, Google reported that it had taught an algorithm to identify a cat. I remember reading this news as a grad student and being mildly amused. Google’s blog announced the news to the world, saying, “neural networks are very computationally costly, so to date, most networks used in machine learning have used only 1 to 10 million connections. But we suspected that by training much larger networks, we might achieve significantly better accuracy. So we developed a distributed computing infrastructure for training large-scale neural networks. Then, we took an artificial neural network and spread the computation across 16,000 of our CPU cores (in our data centers), and trained models with more than 1 billion connections.”
As a non-expert in this then-nascent and specialised field of study called deep learning, would you have been able to recognise what exactly was accomplished? What did it mean for you? I may have told myself, “all that for teaching a computer how to identify a cat? Engineers at Google have a lot of time to play.”
The same blog adjusts the frame of reference at the very end, saying “this isn’t just about images—we’re actively working with other groups within Google on applying this artificial neural network approach to other areas such as speech recognition and natural language modelling. Someday this could make the tools you use every day work better, faster and smarter.”
Look at where we are 9 years later. I have dictated large parts of this piece you are reading to Google Keep. It captured my voice and converted it into text flawlessly.
It is clear now that, by all measures, the achievement of teaching an algorithm the cognitive task of recognition was a watershed moment. When we attain such moments, it is critical to shift frames of reference in our discussion of futuristic technology. Not only do we have to talk about how a particular piece works, but we also need to zoom out onto a bigger frame and connect the present to plausible outcomes in the future. That’s what makes it relatable.
Study multiple trend lines concomitantly, and not in isolation
Truly radical progress is rarely, if ever, unidimensional. Disparate technologies improve in their own lanes for years, then meld at a point in the future and make something radical possible. Viewed individually, they create little excitement. But place them in the context of other inflection points and you see miraculous progress.
Let me explain with an example. We will talk about the protein folding problem.
Proteins are the building blocks of life, responsible for numerous functions inside cells. The function of a protein and the mechanism by which it fulfils that function is determined by its 3D shape. Proteins adopt their shape without help (in most cases), guided only by fundamental laws of physics. Scientists have long wondered how a protein’s constituent parts — a string of different amino acids — lead to its eventual shape. The protein folding problem is this: if I told you the sequence of amino acids that make up a protein, can you tell me the shape the protein will fold into?
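To make the problem statement concrete, here is a minimal sketch in Python of the interface such an algorithm would have to fill: a sequence of amino-acid letters goes in, one 3D coordinate per residue comes out. The predictor below is a hypothetical placeholder that simply lays residues out on a straight line; a real solution would output coordinates matching the experimentally determined shape.

```python
# A toy sketch of the protein folding problem's interface.
# Input: a string over the 20 standard amino-acid letters.
# Output: one (x, y, z) coordinate per residue.
# The predictor is purely illustrative, NOT a real folding algorithm.
from typing import List, Tuple

Coord = Tuple[float, float, float]

AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")  # the 20 standard residues

def predict_structure(sequence: str) -> List[Coord]:
    """Placeholder: put each residue 3.8 angstroms apart along the x-axis
    (3.8 A is the typical spacing between consecutive alpha-carbons)."""
    if not set(sequence) <= AMINO_ACIDS:
        raise ValueError("sequence contains a non-standard amino acid code")
    return [(i * 3.8, 0.0, 0.0) for i in range(len(sequence))]

# One coordinate per residue; the hard part, which this stub skips entirely,
# is making those coordinates match the protein's true folded shape.
coords = predict_structure("MKTAYIAKQR")
print(len(coords))  # 10 residues -> 10 coordinates
```

The sequence used here is an arbitrary example, not a real protein of interest; the point is only the shape of the mapping that the rest of this section is about.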
We wanted to use computational advances to answer this question: an algorithm that outputs the structure of a protein for an input sequence of amino acids. But for decades, laboratory experiments were the only way to get good protein structures. To know the shape of a protein, you had to determine it experimentally. This is how it worked: X-ray beams were fired at crystallised proteins, the diffracted X-rays formed Rorschach-like patterns, and those patterns were then translated into the protein’s atomic coordinates, revealing its shape.
These experiments were painstakingly slow and prohibitively expensive. In the late 1990s, determining the structure of a protein took more than a year of effort and cost more than a million dollars per protein. This was a chicken-and-egg problem. We were not generating enough data for our algorithms to have enough examples to learn from. As a result, our algorithms weren’t getting materially better.
This changed in the 2010s. The workflow used to involve a great deal of highly skilled, painstaking manual labor. As robotics became more robust, reliable and precise, it made its way from industrial applications to niche spaces like the automation of laboratory procedures. It became possible to progressively abstract human effort away from the most troublesome parts of the process. Consider a critical part of the workflow: crystallising the protein itself. The very first step in determining a structure is to make crystals of the protein. Figuring out how to crystallise a protein could take up to a year of work. Automation made it possible to run thousands of experiments in parallel, monitored by computer vision at high throughput. That dropped the time to a few months. Once the crystals were formed, a human would have to manually mount and align each one to capture an X-ray diffraction image, which took 15 to 30 minutes. A robot could do the same in less than a minute.
Alongside, the technology behind X-ray sources improved exponentially, and image acquisition from X-ray diffraction saw significant improvements of its own. What used to take days of data collection started being accomplished in hours and now takes minutes. The software that converts patterns into structures grew increasingly powerful and automated, sometimes generating structures in a day, occasionally even in minutes. All of this happened in relative obscurity, bringing joy and excitement only to the people closest to it.
But when you put these developments together, the time spent on structure determination dropped from years to months, and even weeks if you were lucky. How long did it take us to determine the shape of the coronavirus spike protein? A few weeks. The cost of structure determination for a single protein had dropped to $150,000 by 2010 and then to $100,000 by 2015. This meant that scientists could now determine more structures than ever before. See the exponential growth in the chart below: it shows how many protein structures were determined and uploaded to the PDB, an open database of protein structures available to everyone.
Completely unrelated to this advance in protein structure determination, we were seeing a revolution in deep learning. We were generating more image, text and numerical data than we ever had in the past. Our algorithms had a data deluge to be trained on, get better and drive meaningful transformation in multiple sectors. Huge demand for deep learning drove an insatiable appetite for computing hardware. Supercomputing infrastructure was added at record paces. Companies like Nvidia and Google also obliged by churning out semiconductor chips primed to crunch through the massive mathematical manipulations that drive the “learning” behaviour of these algorithms.
Now look at all these seemingly unrelated trends together:
Protein structure determination times and costs dropping due to advances in automation, chemistry and software
Record levels of computational power and, more generally, increased availability of hardware primed for deep learning applications
About 200,000 experimentally determined protein structures to learn from
The stage was set for an exponential jump. In November 2020, DeepMind declared that AlphaFold had solved the protein folding problem. Tell it the constituent amino acid sequence, and it will give you the structure of the protein with accuracy indistinguishable from experimentally determined structures.
What does this mean for you? Remember, we talked about proteins being responsible for the majority of functions in health, disease and everything in between. If we know a protein’s structure, we can make educated guesses about its function. Think of a protein’s 3D structure as a lock. If we can map the shape of the lock, we can then make “keys” (therapeutics, for instance) to disrupt it. If you can go from sequence to shape to function without experimental determination, you save hundreds of thousands of dollars and months of time for EVERY protein you need to study. A serious discovery process typically requires studying hundreds.
When COVID-19 hit, DeepMind made a prediction, in March 2020, about the shape of a protein called ORF3a, which the SARS-CoV-2 virus uses to infect us. Three months later, experimentalists confirmed that the prediction was in fact very close to the actual structure of the protein.
We will see more such crossover moments in the near future, where seemingly unrelated technologies getting better, faster and more accessible meld with one another and create something radical that was not possible before. It will become increasingly important to study such technological progress concomitantly, and not in isolation.
Have you noticed?
What I have described above are NOT isolated examples of exponential progress.
Did any of us think, in 2011, that reusing rockets would be possible? In 2021, what surprises us is SpaceX failing to land a rocket on a drone ship out in the open ocean. We wonder what went wrong when they fail. We no longer wonder what had to go right for them to stick the landing. That is how the state of the art gets redefined. That is also how space freight went from costing $18,000/kg to $2,700/kg over the course of a decade. All of a sudden, the whole space economy is open for business. In 2021, we have constellations of satellites beaming down high-speed internet without the need to build undersea or overland infrastructure. We are building space factories, to manufacture in low Earth orbit precious goods that benefit from a near-zero-gravity manufacturing environment. We are, literally, printing rockets (or parts thereof) in days instead of building them over months. If this is where we are at $2,700/kg, where will we be at $500?
Vaccine development for public health challenges used to take years of research, and years more to test and tune vaccines to acceptable levels of efficacy. How would we have fared against COVID-19 if that were still our state of the art? Instead, we now have the capability to look at the genetic code of the pathogen, choose which part of the code to use for designing the vaccine, manufacture small amounts and make them available for trials in 25 days. The threat is mutating and there are new variants? No problem: we look at the changes and manufacture an updated version of the vaccine. We aren’t stopping there; we are also working on single-shot vaccines that confer immunity against a wide variety of viruses. For a threat that emerged just about a year ago, we have 95 vaccines in various stages of development, four of which are already approved and available for use. Let’s put this in context. Remember Ebola? The deadliest outbreak started in 2014, and it took us 6 years to develop, test and approve a vaccine against it. Six years for Ebola, 10 months for COVID-19. Where do we go from here?
In 2011, global demand for oil stood at 4.06B tons. By 2019, the last pre-pandemic year, it had risen to 4.47B tons. Not surprisingly, we went from 383 ppm of carbon dioxide in the air to 413 ppm over the same time frame. I know: we are setting the planet on fire, right? But did you notice that solar power went from costing 50c per unit in 2010 to 5c in 2017? Some countries noticed more than others. That’s why we went from 40 gigawatts of solar capacity in 2010 to 700 gigawatts in 2020. Granted, it is still a small fraction of what we need, but the march towards progress is unmistakable. If this is where we are at 5c, where will we be at 1c?
Notice how the new lens we discussed earlier brings you tangibly closer to the progress being made.
So why is any of this important?
We live in a world of information overload. Constantly bombarded with “breaking news”, we can no longer track what really matters. We have been primed to tune in only when things are sensational and to tune out immediately when they are not. Yet, the most exciting stuff happens when you aren’t paying attention: far from clickbait, slowly, and in relative obscurity from the hype machine.
Information has been democratized over the last decade. Access to information is no longer a critical determinant of outcomes. My hypothesis is that most of us are going to be knowledge workers in the future (if we aren’t already). The ability to synthesise new ideas by consuming divergent content (often from different domains, fields, subjects, industries), the ingenuity of your newly synthesised ideas and your ability to execute upon them (in some form) will be a superpower of sorts.
If you could develop a sense of what is going to be important in the future, what would you change in the present?
If you are a student: what are the subjects that make you most qualified to solve the problem you have identified? Who should you talk to? Where do they work?
If you are a graduate: where will the most meaningful jobs of the future be and what are the companies that are really pushing the envelope of possibility forward?
If you are an entrepreneur: what is the state of art in any particular field you are interested in and how will you break the mould?
If you are an investor: who are the people that are building the future that you will inevitably live in and how can you support them?
If you are at a BigCo: who are the people trying to disrupt you, can you work with them, can you stay ahead?
If you are just an interested reader looking to expand your awareness: what will the future look like? Wouldn’t you like to say to your friends and family, “I told you so, 10 years ago?”
If you are any of the above, this newsletter is for you.
This is 10 years from now.
Welcome aboard.