What is hardtech and what is different about building it?
In the previous post, I talked about how frontier technology moves slowly at first and then, over time, becomes indistinguishable from magic. This week, we continue that thought and build more perspective on the landscape of frontier tech in general. Specifically, we will talk about what constitutes frontier tech (or hardtech) and then about the struggles of building such technology.
What constitutes hardtech?
Any technology or product with extremely high technical risk is hardtech. With frontier tech, founders have less to worry about in terms of market risk: if the product can be built and offered at a reasonable cost, there will almost always be a market for it (exceptions apply). To be clear, this does not mean there is NO market risk — just that it is low relative to the technical risk. These technologies are usually built to service a massive unmet need. They are probably the truest embodiment of the adage, if we build it, they will come. In terms of market opportunity, you could probably go further and say, they have been waiting for us to build it.
The technical risk, however, is existential. That is to say, there is significant doubt whether the product can ever be built at all.
Over time, these kinds of technologies have gone by a few other names, for example, deep tech or frontier tech. To me, there is no real difference between these categories and the delineation is rather arbitrary. Two salient features set hardtech apart from other kinds of products and technologies:
Massive scale of impact: the solutions developed by hardtech ventures are almost always targeted at unresolved health, environmental or even social challenges. As such, the scale of impact they can have when successful is enormous. Often, if not always, hardtech ventures disrupt not only the particular industry they replace, but also the entire value and supply chain around it. In the process, these technologies expand the available market many times over and create new, hitherto unforeseen ones.
Absolute originality of the technology: hardtech offerings are invariably backed by several years of R&D. They tend to accumulate significant IP portfolios, which form the initial moat and barrier to entry for competitors. The very nature of hardtech is such that it spends a long time in relative obscurity — largely funded by public grants, philanthropic money or corporate research arms. To future users, customers and practitioners, hardtech (in its early stages of development) can seem like a pipe dream. Gene Cernan, commander of Apollo 17, said in a May 2010 Senate testimony that companies like SpaceX “do not yet know what they don’t know”. Ten years later, in May 2020, SpaceX carried out its first crewed mission for NASA.
Let’s look at some examples of hardtech building from this decade.
Exhibit 1:
Quantum computers: Rigetti Computing, D-Wave Systems, ColdQuanta
Problem: Many mathematical problems, like large-scale optimisation (important to finance, transport and logistics) and simulation of interactions between matter (important to materials, chemistry, medicine and protein folding), are intractable for classical computers. Even on supercomputers, the time needed to solve these problems would exceed the known age of the universe and is therefore clearly impractical. As a workaround, we use approximations to simplify the calculations and make them computationally tractable. But as a result of these approximations, our models fail to fully capture the “realism” of complex phenomena. In turn, this leads to reductionist models that do not succeed in predicting real-world behaviour.
Solution: Quantum computing is a natural fit for these kinds of problems.
Technical Risk: The qubit is the functional unit of quantum computing, as the bit is for classical computers. Qubits have some quirky properties which mean that a connected group of them can provide far more processing power than the same number of binary bits. However, unlike bits, which are reliable and stable, qubits are notoriously difficult to stabilize: they lose their quantum state at the slightest mechanical disturbance (vibrations) or temperature fluctuation, in a process called decoherence. This makes quantum computers far more error-prone than classical computers. It is postulated that we would need 100,000s of qubits to tackle real problems. Google wants to build a million-qubit computer, eventually. After 8 years of research (and an unknown amount of money), the highest number of qubits they have been able to stabilize is 72.
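To get a feel for why a group of qubits packs so much more punch than the same number of bits (and why simulating them classically is hopeless), consider a back-of-the-envelope calculation: the full state of an n-qubit system is described by 2^n complex amplitudes, each taking 16 bytes in double precision. This is a sketch for intuition, not a figure from any quantum-computing vendor:

```python
# Back-of-the-envelope: memory a classical computer would need to
# store the full state vector of an n-qubit system.
# Each of the 2**n complex amplitudes is complex128 (16 bytes).

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes needed to hold 2**n complex amplitudes at 16 bytes each."""
    return (2 ** n_qubits) * 16

for n in (30, 50, 72):
    gib = state_vector_bytes(n) / 2 ** 30
    print(f"{n} qubits -> {gib:,.0f} GiB")
```

At 30 qubits this is a modest 16 GiB; at 72 qubits (Google's best stabilized count above) it is already tens of trillions of GiB, far beyond any conceivable classical machine.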
Exhibit 2:
Human heart, grown in a pig: eGenesis
Problem: As you read this, there are about 120,000 Americans waiting for an organ transplant, thousands of them for a heart. It is not uncommon to wait over a year for a transplant, often in a precarious state of health.
Solution: What if we could grow a human heart in pigs instead? Anyone who needs a heart transplant would then get one when they need it.
Technical Risk: A pig naturally grows a pig heart. We would employ gene editing to make a pig’s heart more “human-like”. It is postulated that we would need to edit ~20,000 genes to grow a completely human heart in a pig. This would push our gene-editing abilities far beyond what has been demonstrated to be reliably possible: the highest number of edits publicly reported so far is 62.
Exhibit 3:
High-speed internet, delivered by satellite constellations: Astranis, Starlink
Problem: More than 4 billion people don’t have access to the high-speed internet we take for granted. Building network infrastructure in underserved areas offers companies no sustained economic incentive, and so these areas are ignored.
Solution: Delivering high-speed internet through constellations of low-orbiting satellites has emerged as an exciting alternative.
Technical Risk: To do this, we will need to develop satellites that are more capable while costing at least 10X less to manufacture and launch. On the user side, we will need antennas that can track these satellite constellations across the sky, remain extremely robust in all weather, and cost no more than a few hundred dollars. For perspective, the user terminals currently deployed by ships and emergency vehicles for accessing satellite internet cost more than $5,000.
Exhibit 4:
Meat, without the animal: Mosa Meat, Memphis Meats, Finless Foods, Wildtype
Problem: Factory farming uses half of all the water consumed every year in the US. Globally, raising animals for food releases more greenhouse gases than ALL the transportation in the world combined. Yet factory farming is responsible for meeting the protein needs of a growing population, projected to reach 9 billion by 2040.
Solution: What if we could get the meat we needed for meeting our dietary requirements but without the animal?
Technical Risk: The $325,000 burger of 2013 was the earliest step in this direction. Producing meat without the animal requires nothing short of a complete reimagination of the traditional “cell culture” technology commonly used in basic research and biotech/pharma. The industries that supply reagents, consumables and equipment for cell culture have enjoyed extravagantly high margins and relatively little pricing pressure for decades. Yet that same burger now costs about $50, thanks to disruptive innovation by startups in the domain. Can clean meat reach price parity with animal meat? Possibly; but, unsurprisingly, it will be hard.
Now, you can see why frontier tech is often called hardtech.
Why is hardtech hard?
Let’s tackle this with an analogy. Assume that the product you are trying to build is a house. It is a new kind of house, with some special features, but a house nonetheless. Over centuries of technological progress, the ecosystem of tools and services around the act of “building” has matured. As a builder, you can focus on “assembling” what is already available in a way that has never been done before. You can buy all the raw materials you need from stores or suppliers; you can hire contractors for specific functions; you can rent heavy equipment — and so on. Most technology is built this way: you are building “atop” several readily available layers and rarely, if ever, from scratch. Creators leverage existing tools and services to make something that didn’t exist before. This keeps the barrier to entry low, and the cost of each failed iteration is either negligible or, in the worst case, not high enough to bankrupt the company. Thanks to the democratisation of these technology layers and a vibrant ecosystem of services, there is very likely a large pool of talent you can draw on — tinkerers like yourself who have been developing new kinds of houses for a while. Sure, you are building a unique house, but there are people with largely transferable experience.
Now imagine an alternate world where you are the pioneer who has thought of a “house” before it is an accepted concept. People broadly agree that it sounds better, more scalable and more customisable than the caves everyone is used to. But no one has ever built one, and few have even heard of one. In this case, here’s how the task of building a house expands in scope:
Develop, test and refine many of the underlying structures, like beams, columns and windows.
Invest resources in designing and developing the tools you need for fairly routine tasks in the building process.
Buy expensive equipment you will use only a few times in the process.
Figure out who can supply you with the raw materials for making many houses. Remember, not a lot of people have heard of a house yet.
Find people who can help you build it. Again, very few people know anything about it.
All of a sudden, your house has become a lot more difficult to build.
In real terms, these represent the risks that a hardtech venture faces. By contrast, decades of progress in software have created the superpower of “abstraction” — the ability to pick and choose from a variety of built-for-purpose tooling and services. A single developer can build a fully functioning, revenue-generating application with barely any investment other than their time. This is impossible with hardtech.
Without the luxuries of abstraction, hardtech entrepreneurs have no choice but to build nearly everything required to develop the technology itself. This translates into high technical risk. Imagine there are 5 separate pieces that need to work together for a hardtech product to succeed. You now have 5 different, and not necessarily overlapping, sources of technical risk.
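To make the compounding effect concrete, here is a hypothetical illustration (the 80% figure is an assumption for the sake of the example, not a number from any venture): if each of those 5 pieces must work and each succeeds independently with some probability, the odds of the whole product working multiply together.

```python
# Hypothetical illustration: independent technical risks compound.
# If every one of n subsystems must work, and each succeeds with
# probability p, the product as a whole succeeds with probability p**n.

def joint_success(p: float, n_pieces: int) -> float:
    """Probability that all n independent pieces work."""
    return p ** n_pieces

print(f"{joint_success(0.8, 5):.1%}")  # 0.8**5 = 0.32768 -> prints "32.8%"
```

Even with a generous 80% chance per piece, the venture as a whole is more likely to fail than to succeed, which is why each source of risk has to be retired deliberately rather than left to chance.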
Developing hardware or wetware is orders of magnitude more expensive than developing software. The cost comes from a few sources:
Infrastructure: To develop the technology, the founding team will need a lab and some basic equipment. Depending on the location and the domain of the technology, the cost of lab space and equipment can run higher than $500K.
Services: Services for developing physical hardware prototypes in engineering pursuits, or for testing hypotheses in biological pursuits, are prohibitively expensive. For example, engaging CROs for early-stage lead generation in therapeutic discovery can run upwards of $100K. Similarly, if your mechanical designs require new tooling to manufacture (and they almost always do), that tooling won’t come cheap.
Iteration: When you are pushing the limits in the world of atoms, the deviation between your design and your prototype matters a lot. If your design demands tight engineering tolerances and your manufacturing process is not highly precise, you can have a failed iteration despite having the right hypothesis. Keep in mind, every iteration is expensive in itself!
Finding exceptional talent to help you build the product can be extremely difficult for a hardtech venture. More often than not, founding teams form at academic labs or corporate research arms and, more recently, out of hardtech startups themselves. If you are an outsider looking in, finding people who will take the journey with you is hard. The skills required to add value to an early-stage hardtech venture are always specific and rarely generalizable, and founders, pressed by a resource crunch in every conceivable direction, will not want people who can’t hit the ground running. Without being plugged into a network, the pool of talent can shrink to a drop.
In the delightful event that you do make it past the prototype stage, you will face a new challenge that your peers in non-hardtech sectors will not. The critical components you need to mass-manufacture your product either don’t exist at all or don’t exist in the form you need. Developing, and subsequently managing, a supply chain of high-value, confidential, IP-heavy goods and services is bewilderingly difficult. You can’t bring it all in-house, because that is neither feasible nor scalable. You can’t work with just anybody, because partners must both have the capability you need and be trustworthy in their handling of highly valuable IP. The set of service providers and manufacturers who fit these criteria will, most often, be tiny.
Death by a thousand cuts means something entirely different for hardtech startups. Each cut can kill, and you have to systematically mitigate thousands of them.
In the next part, we will learn more about:
The investors’ quandary about hardtech
Risks of investing in hardtech
Rewards of success in hardtech
If you have any feedback, I would love to hear from you.