Product Fundamentals
1.1: Chasing Waterfalls
May 24, 2023 Season 1 Episode 1
Jordan Phillips

To understand why we make software the way that we do, we start from the beginning, with the earliest programmable computers, exponential growth, NATO's first "software engineers," and Winston Royce's articulation of the Waterfall.

For full show transcripts, links to sources, and ways to contact me, please see the show site at https://www.prodfund.com.

Intro and outro music by Jesse Spillane.

Hello friends, and welcome back to the Product Fundamentals podcast, episode 1: Chasing Waterfalls. 

In this season, we are tracking the evolution of how we came to make software in the weird way we do, from the earliest origins of our methods, through to today.

In this episode, we’ll start at the very beginning of professional software engineering and understand the methodology everyone loves to hate: the Waterfall. If you don’t know what that means, don’t worry; you’ll understand it soon.

We’ll also establish some of the critical themes that will run through the whole story of the Path to Now, especially the challenge of how humans wrestle with exponential change. 

It’s easy enough to dismiss a way of work as something trivial that doesn’t really need its history told. 

Of course, there are obvious reasons to care about work, including the truism that we spend about a third of our lives working, and for many of us, work is a huge part of our sense of self.

But there are also vital human questions tied up in the history of how we work. Yes, “How can I provide a good living for myself and my family?” But also, 

  • How can we live meaningful lives, pushing back the frontier of what’s possible?
  • How can we live honestly, and make promises that we can keep?
  • How can we retain our autonomy and avoid being cogs in the machine?

It turns out software people have been wrestling with those questions from the beginning.


[The first computers]

So, let’s start in the way-back, and set the scene as we think about the early days of software development. The first general-purpose programmable computer was the Electronic Numerical Integrator and Computer, or ENIAC, which was built in 1945. Like so much of early computing, it was built for the military: ENIAC’s main task was to calculate artillery shell trajectories for the US Army. With a processing speed of about 500 floating point operations per second, ENIAC was much faster than humans. But ENIAC followed instructions that were represented on external media and in the wiring between components; it couldn’t be used to edit its own instructions. So, changing the program for different calculations was very time-consuming.

The first computers that stored their programs in memory, where they could be more easily edited, came in 1949. Mainframe computers on this model, which were programmable but still very inflexible, dominated the 1950s and early 1960s.

The physical act of writing software was very different in these early days. 

In the 1960s, programmers generally typed their code into the mainframe using an expensive terminal, which had a keyboard but no screen. Instead, the terminal had an attached printer, which would print out the code that the programmer was interacting with so he or she could see it.

Hardware at this time was incredibly expensive. When IBM introduced their System/360 line of mainframes in 1964, which was celebrated for taking advantage of mass-production and standardization to reduce costs compared to previous computers, the cheapest model still cost 1.3 million dollars in inflation-adjusted 2023 terms. The high-end models cost as much as 54 million dollars in 2023 terms (source).

That IBM System/360 line would go on to dominate the 1960s market and secure IBM’s lasting leadership, in part because it embraced compatibility and interoperability. IBM committed that peripherals would be widely compatible across its different products, including into future generations of its mainframes. IBM also enabled the new systems to emulate older IBM machines, allowing customers to easily port their existing programs from their current IBM hardware to run on the new S/360 machines.

The gamble for IBM in launching the System/360 line was huge, which in part explains the cost of the units – IBM invested the equivalent of $50 billion in 2023 terms into developing the line; that was more than IBM’s entire annual revenue at the time, all into one product line’s R&D (source).

IBM became a market leader with this new hardware, creating a common – though certainly not universal – foundation for lots of software to be built on through the late 1960s and 1970s. 

The 1964 launch of the S/360 line brought an important technology to programming: the CRT monitor was sold as an optional accessory to IBM’s mainframe computers, allowing the engineer to see the output of programs. However, it took time for screens to proliferate; each individual programmer contributing to the program on the mainframe would not have a screen to look at until the early 1970s, as screens gradually replaced printers on terminals.

During this period, while multiple programmers could work simultaneously on the same program using multiple terminals wired to the same mainframe, there was no version control software like git, so conflicts were easy to generate and mistakes could not be trivially rolled back.

As you can imagine, despite the early strides in standardization and compatibility created by IBM’s outsized success, programming in the 1960s was incredibly slow, laborious, and expensive, and it necessitated an abundance of coordination and separation of responsibilities to enable teams to work efficiently.  

[1968 NATO Conference]

Given these challenges and the insane prices of hardware, it is unsurprising that the civilian governments and the military, along with a few very large companies, were the only real clients with the resources and use cases to justify buying early computer systems. When they did buy them, hardware and software were often tightly bundled and built to order. 

In the mid-1960s, there weren’t yet organized schools of thought about how to build software. Indeed, the term “software engineering” itself wasn’t even coined until 1967, which is why I’ve only referred to “programmers” so far. “Software engineering” was first used by NATO’s Science Committee in the name of a conference in Germany to discuss the growing role of software in society. Yeah, the North Atlantic Treaty Organization – the European and American military alliance formed to contain the Soviet Union in Europe – created the term “software engineering”.

Before this NATO conference, which would actually take place in late 1968, workers in the software industry were generally just “programmers” and “program designers.” Now we think of the label “software engineer” as rather pedestrian and interchangeable with words like “coder,” “programmer,” and “developer,” but the organizers of the NATO conference chose this name to elevate the profession.

“The phrase ‘software engineering’ was deliberately chosen as being provocative, in implying the need for software manufacture to be based on the types of theoretical foundations and practical disciplines, that are traditional in the established branches of engineering.”

For scale, at the time that NATO’s Science Committee was pushing for software workers to see their profession as an engineering discipline, there were perhaps twenty or thirty thousand mainframe computers worldwide, many of them concentrated with a few very large customers.

The fact that this was a NATO organized conference is a reminder of how intimately tied computing was with the military. At this time, computers played a vital role in air defense, missile guidance, nuclear weapon design, and the Space Race, which was itself a proving ground for military technology.

It’s also worth noting that the organizers are treating software as an independent concern from hardware. This represents the maturation of a paradigm shift, from a time when hardware and software were nearly inseparable, to the beginning of a period when software could be thought of as a cleanly separated layer of abstraction above hardware. 

At this 1968 conference, about 50 academic computer scientists and commercial programmers discussed a wide variety of issues in software. I’ll link the official report from the conference in the show notes. It is full of interesting little tidbits, some of which reflect arguments that are still playing out between members of every software team around the world today. One of my favorites is a set of opinions from six experts over how the user’s desires should factor into software design. Perspectives range from, to paraphrase, “users are the best designers” to “the user doesn’t know what they want at all.” That certainly still sounds familiar today.

During this NATO conference, significant time went into discussing how to make software design and development more efficient. 

Taken for granted at the conference was that software development went through the following cycle:

  • First, a problem is recognized,
  • Then the problem is analyzed, resulting in a description of the problem,
  • Then a solution is designed, resulting in a complete system specification,
  • Then the solution is implemented, resulting in a working system,
  • Then the system is installed and accepted by the client, resulting in a completely operational system.
  • Finally, the system is maintained, until it becomes obsolete.

Note two things about this methodology:

  • First, it’s very linear. A leads to B leads to C; there’s no feedback loop; there’s nothing cyclical.
  • Second, it assumes knowability and intelligibility. The problem can be understood, a solution can be designed in one shot, and it can be implemented (with some testing to catch human errors in translating the design to code).

Based on the conference notes, some of the attendees seemed to come with lofty ambitions that they could just get everyone together, talk it through, and discover the right way to build software and settle the question. But unsurprisingly, they left without that grand revelation, only a commitment to keep talking about the problem.

Even in 1968, some of the experts at the conference recognized that they were in the early days of a booming new field, and simultaneously that they were at the limits of what their methodologies could achieve.

One remarked,

“We undoubtedly produce software by backward techniques… Software production today appears in the scale of industrialization somewhere below the more backward construction industries.”

Another adds, 

“Programming management will continue to deserve its current poor reputation for cost and schedule effectiveness until such time as a more complete understanding of the program design process is achieved.”

Another,

“One of the problems that is central to the software production process is to identify the nature of progress and to find some way of measuring it. Only one thing seems to be clear just now. It is that program construction is not always a simple progression in which each act of assembly represents a distinct forward step and that the final product can be described simply as the sum of many sub-assemblies.”

And finally,

“Today we tend to go on for years, with tremendous investments to find that the system, which was not well understood to start with, does not work as anticipated. We build systems like the Wright brothers built airplanes — build the whole thing, push it off the cliff, let it crash, and start over again.”

[Complexity & Moore’s Law]

This failure to find the one right way, even with the best and brightest to figure it out, brings us to one of the themes that we should always keep in mind, and that will be a significant part of the story of how we make software. That big idea is Moore’s Law, and the complexity creep that it implies.

In 1965, Gordon Moore, a semiconductor engineer and future co-founder of Intel, noticed that the number of transistors on an integrated circuit had doubled about every year over the preceding five years, and predicted that it would continue doubling into the future. He later revised the prediction, saying he expected the transistor count on a chip to double every two years, and that is essentially what has happened from 1965 through today. This trend, which others dubbed Moore’s Law, is critical to understanding the evolution of software.

The transistor count on a chip is a rough proxy for its computing power. So, if the transistor count doubles every two years, then the amount of computation a computer can do in a given period of time also doubles every two years. That means that in 10 years, computing power is multiplied 32 times.
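
To spell out that arithmetic (a quick worked check using nothing beyond the doubling rule above): ten years contains five two-year doubling periods, so the growth factor is

\[
2^{10/2} = 2^{5} = 2 \times 2 \times 2 \times 2 \times 2 = 32.
\]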

Moore’s Law combined with Kryder’s Law, which described an analogous exponential improvement in storage capacity, and with Dennard scaling, which described an exponential decline in power consumption per transistor in memory and processing chips; together, these phenomena drove computers to improve in overall capabilities at a stunning rate.

Any discussion of Moore’s Law requires a mind-numbing example, and here’s ours: The first commercially available microprocessor was the Intel 4004, released in 1971. It processed 92,000 instructions per second. A 2023 Intel Core i7 desktop computer processor retails for about the same $400 inflation-adjusted price, and processes 170 billion instructions per second. That’s 1.9 million times faster processing, and the bit width has grown from 4-bits to 64-bits, meaning the modern processor can manipulate much larger blocks of data with each instruction.

The consequences of this relentless exponential growth are hard to overstate, but at the same time, have come to be expected by most of us who grew up in the age of exponentiation. Stop for a moment, and consider how bizarre and unprecedented this scale of exponential growth is in any other area of human experience. At its absolute peak, the annual global population growth rate hit 2.2% in the 1960s; it’s just above 1% now. Global energy consumption has increased at less than 1% per year since the 1960s. A roaring developed economy compounds at 6% per year, and rarely sustains that for more than a few years at a time. Transformative economic changes like indoor plumbing and electrification had enormous impact, but they happened once; we’re not getting exponentially more and better water from our faucets every year.

By doubling roughly every two years, the building blocks of computing maintained a 41% compounding annual growth rate for decades. 
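
That 41% figure is just the annualized form of the same doubling rule: growing by a factor of 2 every two years means growing by a factor of the square root of 2 every year,

\[
2^{1/2} \approx 1.414 \quad \Rightarrow \quad \text{roughly } 41\% \text{ growth per year, since } 1.414 \times 1.414 = 2.
\]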

Ever-faster computers meant the same task could be completed more quickly and cheaply with each generation of hardware. But more importantly, faster processors, supported by more memory and more permanent storage, could grow in complexity. Each generation of computer could support more layers of abstraction which instruction-for-instruction might be less efficient than earlier approaches, but which were much more capable of representing a nuanced reality and tackling complicated problems.

These ever-growing new capabilities mean there have always been new problems that were just-barely-solvable with computers. Over time, as capabilities continue to grow, those just-barely-solvable problems become ordinary. In some cases, we come up with more elegant and easier solutions, but in many cases, we simply develop so much capability that the underlying hardness of the previous problem becomes a rounding error. 

As I record this in mid-2023, the cutting-edge of technology is generative AI, which is co-writing software and creating semi-original visual art. Performing these feats requires what currently seem like enormous processing power operating over a huge corpus of data. In twenty years, the same operation may require just as many calculations to perform, but the resources needed will seem unremarkable in the context of continued exponential growth in capabilities. Instead, something new that we can currently only imagine doing with computers will be the new just-barely-solvable frontier. As long as exponential growth continues, this growth in complexity will continue right along with it. 

The attendees of the NATO conference could see the complexity creep all around them. In the conference report, they included a diagram showing that the number of instructions – basically the lines of code – in a typical program was doubling every year. They knew that the way they were working had limits, but they didn’t understand what to do next.

It’s at this point, at the end of the 1960s, that the software industry reached a scale and complexity at which the question of how to build software became vital.

[Winston Royce & the Waterfall]

The first widely acknowledged landmark in the emergence of software methodology came in 1970, when a computer scientist and executive at the aerospace contractor TRW, Dr. Winston Royce, wrote a journal article titled “Managing the Development of Large Software Systems.” In the article, Royce does two things: first, he describes “how large software projects usually get done.” Then the paper turns prescriptive, with Royce advocating several improvements to the way software was built.

I promise that we won’t walk through other documents in such detail in the future, but Royce’s description of the Waterfall became such an important touchstone for the industry for decades to come that it’s worth really talking it through so that everything that comes after can make sense.

Royce describes how large software projects get built as a series of steps, which basically mirror the NATO Conference’s model and which should sound familiar to this day. 

Those steps are:

  • System requirements (by which he means the overall business needs from the customer),
  • Software requirements,
  • Analysis of those requirements,
  • Program design,
  • Coding,
  • Testing, and finally,
  • Operations

Royce illustrates these steps in a collection of figures throughout the paper, with each step as a box in a series that runs from the top left of the page down to the bottom right. The simplest illustration shows a series of thick curved arrows pointing sequentially from each box down and over to the next one, and if you squint just right, the arrows do look a bit like water cascading down a series of small drops. This is the origin of the label “Waterfall” for a linear single-shot development process. Royce never applied that label in the paper though, and it seems it wasn’t first used in print until 1976, when other authors used it to refer back to Royce’s diagram.

It’s important to note that the label “waterfall” was not initially pejorative; it was just an evocative label for a flow chart. Indeed, some saw Royce’s articulation of the steps of software development as a helpful step forward. As a prominent figure at a prominent organization, Royce was defining a “default” or base case against which everything else was measured. The methodology he described was a standard, systematic process for writing software, which could be mapped neatly onto the methods of better-established disciplines, like structural or mechanical engineering.

The simple, optimistic case illustrated by Royce in that waterfall diagram has a linear flow from one step to the next, with requirements leading to analysis leading to design leading to coding leading to testing leading to operations. We can all imagine this flow – it’s probably how most of us expect, or at least hope, a big project will go when it begins. Gather requirements, then lock the requirements. Design a solution, then lock the design. Code up the solution, then lock the code. That’s the dream.

If you have any work experience, you know that projects rarely play out this way; surprise requirements come up, certain features are harder to implement than expected, and so on.

This is where it’s important to re-emphasize the difference between description and prescription. In laying out how big software usually gets built, in this one-shot linear fashion, Royce is just describing a reality. But he’s a smart, experienced guy, and he knows this smooth waterfall flow often breaks down. So, in the prescriptive part of the paper, he advocates a number of improvements, and illustrates them with a series of increasingly complicated diagrams, which – it should be noted – really break the “waterfall” imagery.

First, he notes that the interaction between any two steps should really be a feedback loop. That is to say, even though software requirements are upstream of analysis, the analysis should be able to inform software requirements; if the analysis reveals requirements need to change, then the requirements should change and the analysis should be updated before the project moves on to the program design step.

Now, on top of these feedback loops between sequential stages of the project, he advocates for five more changes to improve implementation. 

Change 1: As soon as you’ve gathered requirements, do a preliminary program design. By this, he’s referring to a technical document that describes the database to be used, the functions needed, and the inputs and outputs of the software. But he also includes a general overview document, which should be 

“understandable, informative and current” because “Each and every worker must have an elemental understanding of the system.”

Put in modern parlance, he’s basically talking about a combined product spec and technical spec doc. But he’s placing it earlier than most of us would today: For Royce, this program design comes before really analyzing all the customer requirements. Presumably, this is an artifact of the time Royce is writing in: at this time, tech is hard, so you need to have some sense of your implementation approach before you can sign off on whether the requirements are even achievable.

Change 2: Document the Design.

In this paper, Royce is fanatical about documentation. Not only is there a spec document early in the process, but almost every stage of development needs its own doc; some need more than one. There are at least 8 documents in Royce’s process. I’ll list them for effect:

  1. Software requirements doc
  2. Preliminary design spec
  3. Interface design spec
  4. Final design spec
  5. Test plan spec
  6. Final design (as built)
  7. Test results
  8. Operating instructions

These aren’t short documents, either. I’ll quote Royce at some length here:

“Occasionally I am called upon to review the progress of other software design efforts. My first step is to investigate the state of the documentation. If the documentation is in serious default my first recommendation is simple. Replace project management. Stop all activities not related to documentation. Bring the documentation up to acceptable standards. Management of software is simply impossible without a very high degree of documentation. As an example, let me offer the following estimates for comparison. In order to procure a 5 million dollar hardware device, I would expect that a 30 page specification would provide adequate detail to control the procurement. In order to procure 5 million dollars of software I would estimate ~1,500 page specification is about right in order to achieve comparable control.”

Yes, you heard right. 1500 pages of documentation for a large project. 

Why so much documentation? It forces the designer to make concrete decisions about how all the pieces fit together, it will make developers more efficient, it will make testing more productive, it provides instructions to users who weren’t involved in building it, and it will act as a valuable reference for future iteration.

Change 3: Do It Twice

This is probably the most provocative of Royce’s process requirements. For a major project in a new area, he writes, we should allocate a quarter to a third of the project calendar time to building a crappy prototype that will be discarded, but which will reveal risks and lessons that can be applied to the real, second version of the project, which is what will actually be delivered to the customer. The prototype also allows for testing hypotheses, which presages a lot of modern software development.

 Change 4: Be Methodical in Testing

Royce advocates that every logic path in the application have defined, documented test cases, and that every bit of code be inspected and independently tested by someone who did not write it.

This likely sounds familiar today – teams that do pair programming and test-driven development should see their roots here, as should QA testers embedded with engineering teams. But the scale of what Royce is describing is enormous, and it reminds us of the time he’s living in. He writes,

“Without question the biggest user of project resources, whether it be manpower, computer time, or management judgment, is the test phase. It is the phase of greatest risk in terms of dollars and schedule. It occurs at the latest point in the schedule when backup alternatives are least available, if at all.”

When you’re developing software for the Apollo landing that is tightly coupled with inflexible custom-made hardware, testing is both mission critical and incredibly expensive.

Lastly, Royce’s Change 5: Involve the customer.

This is another bit of Royce’s thinking that may be more modern than you’d expect. The customer needs to be engaged to sign off on the preliminary design, to sign off on the final design, and to review the code after testing, before delivery.

 I’ll quote a brief excerpt from Royce that I wish I’d had beaten into me before my first product management job:

“For some reason, what a software design is going to do is subject to wide interpretation even after previous agreement. It is important to involve the customer in a formal way so that he has committed himself at earlier points before final delivery. To give the contractor free rein between requirement definition and operation is inviting trouble.”

Royce concludes his essay with a final graphic, which updates the original simple sequence of boxes with lots more boxes as well as loops to represent his total vision for a software development cycle. It is a doozy: I count 25 boxes, with 30 explicit interactions among them. It’s a lot, and it takes the “waterfall” image from a gentle cascade to a proper Iguazu Falls. Or, frankly, it looks to me like one of those old floor patterns that illustrates a complicated dance routine… but for a six-legged dancer. For whatever reason, history decided on “waterfall methodology” over “crab dance methodology,” so here we are.

[Reflecting on Royce]

Now, with all that history, what do we make of Royce’s model? What is its relevance to us today? 

We’ll hear a lot of criticism of the Waterfall method, even with Royce’s enhancements, throughout the rest of this series.

But before we get to those criticisms, it’s worth noting how many of his ideas persist in modern software development, mostly because they’re good ideas. Concretely,

  • For a big project, getting and maintaining customer buy-in is intuitively important.
  • Testing is good! While a lot of software projects today are in low-criticality fields that can afford to be error-prone, if you’re writing software for national defense, aerospace, finance, health care, and so on, you better be allocating a lot of time and resources to very thorough testing.
  • Documentation really is helpful! And while Royce’s 1500 pages of documentation for a single project may seem like overkill, he said that was for a project sold to a client for 5 million dollars. Adjusting for inflation, that’s about 150 million dollars in 2023. I’ve never sold packaged software to outside clients, but between all the internal requirements docs, Figma work, technical specs, Jira tickets, and wiki documentation, I’ve worked on major projects that would total at least a hundred pages of printed-out documentation. The scale might be extreme, but there’s a durable kernel here.
  • Finally, while “spend a third of your time on a prototype that you’ll throw away” isn’t a universal practice the way that Royce advocated, people certainly do software “spikes” to this day, where a small amount of dev time goes into a proof-of-concept for a project before the main work begins.

We should also understand Waterfall and Royce’s amendments to it as a product of that era.

This was the age of Big Science. The software engineers of the 1960s and 1970s were children when the resources of the entire nation were poured into the Manhattan Project and helped to win World War Two. In 1953, President Eisenhower gave his “Atoms for Peace” speech to the UN and began the process of declassifying nuclear reactor technology, foretelling an imminent future when nuclear engineers would build technology that brought all kinds of abundance. Sputnik launched in 1957, kicking off the Space Race. Alan Shepard became the first American in space by riding a Mercury rocket in 1961, and in 1969 Americans landed on the Moon. This is a time when engineers are taking on big, audacious projects at massive scale with massive investment. In 1968, the NATO conference organizers had deliberately named the emerging discipline “software engineering” to capture that same sense of boldness.

The Apollo landings were ongoing as Royce was writing. The culture was primed for a methodology of big projects that pool big resources under brilliant planners.

There’s also a technological moment at play. While exponential curves like those around Moore’s Law always look like they’re at a critical bend no matter what scale you view them at, there may really have been something special about the time when Royce was writing. Contractors like Royce are responding to concrete demands from their customers – often government agencies – who want very specific things. There is no consumer market for software, no small-business market, and not even much of a mid-size market. There appears to be a monolithic customer who should be able to explain what they need. Intuitively, it seems like a sufficiently smart planner should be able to solve the problem smoothly.

We’ll see soon that the belief in the wise planner was never universal, and that it will break down as it really gets tested. But it wasn’t an insane thing to think.

I’m reminded of the adage, popularized by Milton Friedman, that no one person can build a pencil, because the raw materials and tools and crafting skills are distributed across a huge number of people and a vast swath of geography. In his article, Winston Royce may have been writing at one of the last moments when it was still conceivable that a small group of people could design and build a software application from near the metal of the hardware all the way to the user interface. In Royce’s 1970 world, you still could, just barely, write a comprehensive spec doc defining every part of a large important program. It would take a year or more to complete a project, but it was still doable.

But even at this early time, Royce already has an understanding that not everything is foreseeable. There’s a bit too much complexity, a bit too much potential for error, so that it’s worth dedicating a quarter of the project’s time budget to a throwaway prototype, just to uncover the unknown. There’s no notion of that kind of disposability in the NATO Conference notes from just two years before. That’s an innovation.

[Wrapping up]

All right, that’s it for this episode. We’ve covered the early days of software development and the emergence of the “Waterfall” as a way to build software, which nobody seems to like very much, but which gave us a place to start. In future episodes, we’ll see that label “Waterfall” evolve into shorthand for a slow, painful, likely-to-fail way of working – but which will somehow become the dominant paradigm of the industry for decades. 

In parallel, a whole other approach to software has been evolving. Join me next episode when we talk about the people putting men into space and firing nuclear missiles from submarines with a very different way of work.

As always, your comments and feedback on this episode are very welcome. You can find an episode transcript and links to sources at prodfund.com. And if you like this series, and want to hear more, do me a favor and share it with someone you think would enjoy it too.

Thank you very much for listening.

