This morning a coworker, Mark Z., sent me another excellent paper,
this time Why Engineers Should Consider Formal Methods (1997) by C. Michael Holloway.
The paper examines the argument for using formal methods (which is to say, mathematics) in software development; most explicitly, it intends to divine whether the failure to adopt formal methods is a failure in the argument or a failure in the software developers who haven't adopted them.
The paper--although unquestionably interesting and on a topic that is increasingly dear to my heart--opens and closes a number of partially tangential doors without much investigation.
First, in physical systems smooth changes in inputs usually produce smooth changes in outputs. That is, most physical systems are continuous. This allows the behavior of the system to be determined by testing only certain inputs, and using extrapolation and interpolation to determine the behaviors for untested inputs.
Software systems are, by their very nature, discontinuous. A small change in input may change the outcomes at several decision points within the software, causing very different execution paths and major changes in output behavior.
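To make the discontinuity concrete (my own contrived example, not the paper's): a single-unit change in input can flip a guard and send execution down an entirely different branch. Here `rate/1` and its `10000` threshold are hypothetical names chosen purely for illustration.

```erlang
%% A contrived step function: one cent below the threshold takes a
%% different branch entirely, so the output jumps rather than varies
%% smoothly. Interpolating from tested inputs would mislead you here.
-module(discont).
-export([rate/1]).

rate(Amount) when Amount >= 10000 -> Amount * 0.05;
rate(Amount) when Amount >= 0     -> 0.0.
```

Testing `rate(9999)` and `rate(10001)` tells you nothing reliable about each other, which is exactly the property Holloway is pointing at.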
Generally this point seems reasonable, but it strikes me that approaches combining functional programming (here meaning "programming free from side-effects") and assertions make it entirely possible to predict accurately how a unit of functionality will operate across the range of possible inputs.
Consider for a moment this function written in Erlang:
prepend(Lst, X) when is_list(Lst) ->
    [X | Lst].
Here is a function which validates its inputs and is entirely free of side-effects. Would not a system composed of such atoms guarantee the same benefits?
Further, I'd posit that systems written in this style are amenable to meaningful testing, as they can be reliably tested in isolation and yet still assumed to operate sanely in conjunction.
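As a sketch of what testing in isolation might look like, here is the `prepend/2` function from above exercised with EUnit (Erlang's standard unit-testing library); the module name and the particular assertions are my own choices, not anything from the paper.

```erlang
%% Because prepend/2 validates its input and has no side-effects,
%% each assertion fully characterizes its behavior for that input:
%% there is no hidden state that could make the same call behave
%% differently elsewhere in a larger system.
-module(prepend_tests).
-include_lib("eunit/include/eunit.hrl").

prepend(Lst, X) when is_list(Lst) ->
    [X | Lst].

prepend_test_() ->
    [?_assertEqual([1, 2, 3], prepend([2, 3], 1)),
     ?_assertEqual([x], prepend([], x)),
     %% The guard rejects non-list input with a function_clause error.
     ?_assertError(function_clause, prepend(not_a_list, 1))].
```

Run with `eunit:test(prepend_tests).` in the shell; the guard failure is itself part of the function's contract, so it is tested like any other behavior.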
Must design flaws be handled? In computer games, VCRs, and personal entertainment systems, failing to handle design flaws might not have serious consequences. In avionics, reactor control, and anti-lock brakes, failing to
handle design flaws might have life threatening consequences. Thus, for the most important types of computer systems, this proposition is also indisputably true.
I find the reasoning here somewhat naive. For many developers it is a matter of morality (in the most extreme cases life and death, in less extreme ones professional responsibility, and in the mildest right and wrong) that software, or at least important pieces of software, must have its design flaws handled, but this is an extremely developer-centric mindset.
Unfortunately, once companies reach a certain critical mass, developers are rarely the ones making decisions about fixing design flaws. Rather, the fixing of design flaws is viewed as a business problem, and is decided upon by businessmen according to the peculiar logic of businesses.
Design flaws with provably negative costs on capital (requiring too many engineers, causing loss of face within an organization, preventing product functionality) may be fixed on a case-by-case basis, but it is certainly not the case, regardless of whether or not they must be fixed, that most design flaws will be fixed.
All in all, I definitely recommend a quick read (it's only seven pages) if you're working in the industry.