
The Programmers' Stone

Hi, and welcome to The Programmers' Stone. The purpose of this site is to recapture, explore and
celebrate the Art of Computer Programming. By so doing we hope to help the reader either become a
better programmer, understand what less experienced programmers are struggling with, or communicate
more effectively with other experienced programmers.

We know from work with individuals that by doing this we put the fun back into the work and greatly
extend the boundaries of the possible, so building much smarter and stronger systems.

The present structure is planned around an eight day course, delivered two days a week for four weeks.
Each chapter corresponds to the course notes for one day's material. The eighth day should be free
discussion, so no prepared notes, meaning that there are seven chapters. We've deliberately made each
chapter a single HTML page because it makes it much easier to print the text. Sorry there are no internal
anchors yet; there are big headings, so use your slider!

We'd very much like to hear from you!

Alan & Colston

alan@melloworld.com
colston@shotters.dircon.co.uk

Chapter 1 - Thinking about Thinking

● Roots of the Approach
● Mapping and Software Engineering
● Mapping and TQM
● Mandate Yourself!
● The Undiscovered Country
● Knowledge Packets, Daydreams, Maps and Understanding
● Mappers and Packers
● How to Regain Mapping
● The Ways of Mappers and Packers
● Packing as a Self-Sustaining Condition
● The Mapper/Packer Communication Barrier

Chapter 2 - Thinking about Programming

● What is Software Engineering For?
● Software Engineering is Distributed Programming
● What is Programming?
● Programming is a Mapper's Game
● General Tips on Mapping
● Mapping and the Process
● Angels, Dragons and the Philosophers' Stone
● Literary Criticism and Design Patterns
● Cognitive Atoms
● The Quality Plateau
● Knowledge, Not KLOCS
● Good Composition and Exponential Benefits

Chapter 3 - The Programmer at Work

● Approaches, Methodologies, Languages
● How to Write Documents
● The Knight's Fork
● The Personal Layered Process
● To See the World in a Line of Code
● Conceptual Integrity
● Mood Control
● Situation Rehearsals

Chapter 4 - Customs and Practices

● The Codeface Leads
● Who Stole My Vole?
● Reviews and Previews
● Code Inspections and Step Checks
● Coding Standards and Style Guides
● Meaningful Metrics
● Attitude to Tools
● Software Structures are Problem Structures
● Root Cause Analysis
● Complexity Matching and Incremental Boildown
● The Infinite Regress of `Software Architectures'
● The Quality Audit

Chapter 5 - Design Principles

● Simple and Robust Environments
● System Types
● Error Handling - a Program's Lymphatic System
● Modalism and Combinatorical Explosion
● Avoid Representative Redundancy
● Look at the State of That!
● The Reality of the System as an Object
● Memory Leak Detectors
● Timeouts
● Design for Test
● Dates, Money, Units and the Year 2000
● Security

Chapter 6 - Prudence and Safety

● Brain Overload
● Brain Overrun
● Overwork
● Cultural Interface Management
● Individual Responsibility and Leadership
● The False Goal of Deskilling
● Escape Roads
● New Member Integration

Chapter 7 - Some Weird Stuff...

● Richard Feynman
● George Spencer-Brown
● Physics Textbook as Cultural Construct
● Are Electrons Conscious?
● Teilhard de Chardin and Vernor Vinge
● Society of Mind
● Mapping and Mysticism
● Mapping and ADHD
● How The Approach Developed
● Complexity Cosmology
● The Prisoners' Dilemma, Freeware and Trust
● Predeterminism

Appendix

● Stoned! Sites
● User Reports
● Additional Materials
● Links
● References

Thinking about Thinking

Roots of the Approach


The work leading to this course was motivated by wondering why, in software engineering, there are
some people who are one or two orders of magnitude more useful than most people. If this were true of
bricklayers, the building industry would be very keen to find out why. The problem, of course, is that one
can film a bricklayer, and later analyse what is happening at leisure. One cannot even see what great
programmers do, and for some reason they cannot explain what the difference is themselves, although
most of them wish they could.

We knew that the elements of industry best practice alone are not enough. Management commitment to
investment and training is not enough. Innovative Quality programmes that explicitly include holistic
concepts such as Robert Pirsig's Zen and the Art of Motorcycle Maintenance, which much of the
industry would consider too radical to experiment with, are not enough. Years of experience are not
enough, nor are years of academic study.

There seemed to be only one way to continue the investigation if an industry dedicated to objective
metrics had not found the X factor: we needed to look at the subjective experience of the people
concerned.

Achieving understanding of what was happening took a long time, although the key ideas are known to
most of us already. On the way we learned a great deal about the mindset of successful programmers,
and were able to develop exercises that certainly helped many people.

Thus the material in this course has developed over several years, and is a mix of ideas empirically
justified by experiment and later fitted into the logical picture, and material derived from the logical
picture.

This course aims to address the element of `experience' or `judgment' referred to almost everywhere, but
rarely described. Many of the topics are the kind of thing programmers discuss over a beer. Perhaps it is
odd that no-one tends to ask how the issues that programmers see as most important relate to the `formal'
structures of modern engineering. Here, we do just that.

We have found that once we get into the swing of this, most programmers find they have an opportunity
to put issues they have wondered about for years into a clear work context, together with their
colleagues. We therefore ask you to relax, because you are supposed to be doing this, and have an
enjoyable time!
Mapping and Software Engineering
Software engineering is in a terrible pickle. The so-called `Software Crisis' was identified in 1968, but
despite thirty years of effort, with hundreds of supposedly fundamental new concepts published, the
general state of the industry is horrific. Projects run massively over-budget or collapse entirely in
unrecoverable heaps. Estimating is a black art, and too many projects solve the customers' problems of
yesterday, not today. The technical quality of most code is dreadful, leading to robustness problems in
service and high maintenance costs. And yet within the industry there exist individuals and groups who
enjoy staggering, repeatable successes. There are many ways of measuring the usefulness of
programmers, but some are rated as over a hundred times more useful than most, by several methods of
counting. If only the whole of the industry performed as well as the tiny minority of excellent workers,
the economic benefits would be immense. If it were possible to write sophisticated, reliable software
quickly and cheaply, the intelligence of society would increase, as everything from car sharing to
realistic social security regulations became possible.

Within this model, the problem can be understood. What is presented as socially conditioned
conventional thinking (called packing) is based on action. To be a good bricklayer, a packer must know
what a bricklayer does. What does a programmer do? The most developed packer model of
programming is the concept of the Software Factory. In this, statements of requirements from customers
go in one door, and are processed by workers following procedures written down in manuals. When the
production line has done its work, programs come out of the other door. It works in car factories.

The trouble is, the analogy with a car factory is sloppy. Most of the car factory is filled with workers
using machines to make cars, but around the back there is a little office where another worker
determines how to use the resources of the factory to make as many cars as possible, all alike.

The workers in a software shop are not like the factory floor workers. The shop floor workers can be
replaced with robots today, but the person who uses creativity to set up the factory is still needed. The
programmers are doing the same job as the office at the back of the factory, and we cannot learn
anything about what happens in there by playing at car factory shop floors.

Packers who advocate uncompromising process-based Software Factories are in fact claiming to be able
to implement an Artificial Intelligence that simulates a production line designer, and to be able to do it
by using humans pushing bits of paper around as their computer. Unfortunately, packing is just not up to
the job of understanding software production, and gets terribly confused. This means it says some very
silly things sometimes.

To understand what programmers really do, an alternative strategy of thinking (called mapping) is
necessary, because programming is essentially a process of internalising the capabilities of the system,
the nature of the problem, and the desire, and capturing the insight in a programming language. It is all
about exploring the details of our desires, and understanding them in such a way that we can keep track
of all the complexity. Mapper problem collapse can produce beautiful, tiny, elegant programs with no
room for bugs in them. Mapping can do programming, but how it does it cannot be explained in packer,
action-based language.

Packers therefore assert that hackers are `irresponsible' and discount their work, saying that complexity
is inherently not understandable and we must develop ever more complex procedures to abdicate our
responsibility to.

Fortunately, many organisations' managements continue to foster reflection on grounds of personal
intuition and empirical experience, without any justifications to place on action-based balance sheets.
This is a difficult thing to do, but is the only reason anything gets done.

It is important to recognise that mapping is not another procedural methodology to be applied in a
packer mindset. It is a different way of looking at things altogether. It is necessary to convince yourself
that it really is possible to take personal responsibility for an undertaking instead of abdicating in favour
of a procedure.

Programming is as near to pure mapping as you can get outside your skull. This is why it is fun. It is
endless discovery, understanding and learning.

Object Orientation (OO) and mapping have an interesting relationship. OO is often seen in very different
ways by mappers and packers. The mapper's map is a kind of object model that has a rich variety of
objects and associations. Mappers see OO as an elegant way to design software once they have
understood the problem. Packers seem to see OO as a way of wandering around the problem domain and
creating software objects, then just wiring them up as they are found. Thus OO is taken to be a
procedural mechanism for getting from problem to program without the intervening understanding. If it
were possible to capture absolutely every aspect of the problem domain and one did not care about
efficiency, this approach might even work. But in fact, good taste is always needed in object design and
categorisation, because it is necessary to design software objects that have a good mapping with real
world objects, but can be plugged together to construct a viable computer system. That takes
understanding, and is a strictly mapper job. This explains the OO projects that grind to a halt with the
product a tangle of real and utility objects using multiply redundant addressing schemes to communicate
via Object Request Brokers, with no clear conceptual integrity in instantiation, flattening and journaling.
Packer programmers often have so little control over their objects that they lose them, and end up with
memory leaks that cause the application to fail. The packer solution to this is to buy a memory leak
detection tool, rather than to regain control of their objects so that everything else works properly too.

Mapping and TQM


After WWII the Americans sent Dr. W. Edwards Deming to Japan to help sort out their manufacturing
industry, which was an odd mix of the medieval and industrial ages, and war-shattered. Deming
introduced ideas including collecting statistics from the mass production activities, asking the workers
who performed those processes to think of ways of improving them, and making sure that each worker
understood what he or she was doing. These ideas were later developed into what we today call `Total
Quality Management' (TQM).

The results (we are told) were extraordinary. Within a generation, Japanese industry soared and moved
from building bicycles in sheds to worldwide dominance of high-value industries like building ships,
cars and electronics. `Japanese Methods' were reimported to the West, and have been institutionalised in
ISO 9001, an international `Quality' standard that business has spent a fortune on, and which focuses on
defining procedures for everything with lots of ticking and checking. The expected benefits have not yet
been seen in general, and yet some organisations that have applied the work of Deming and his
successors have seen staggering benefits.

Recognising the importance of mapping suggests another way of looking at what has happened here.
Mapping can certainly be reawakened by trauma. One possible way to traumatise a person might be to:

1. Nuke them. Twice.
2. Rip apart their rigid, predictable feudal society.
3. Tell them the invader will be coming around tomorrow.
4. Leave them nothing for supper.

To eat tonight, this person is going to have to reawaken his ability to be imaginative. So by the time Dr.
Deming got to Japan, the population he was to work with was already mapping. All of them. At once.
Perhaps all Dr. Deming needed to do was take a leaf out of Bill and Ted's Excellent Adventure, stand on
a tea chest and shout, `Be most sensible to each other!'

When that worked so spectacularly, Dr. Deming and his colleagues would have naturally been
impressed, and so started to work on methods that their work-force could use to get even more sensible,
creating a culture which is an industrial powerhouse, but has the hidden requirement that it only works
for mappers!

During the early reintroduction of `Japanese Methods', mapper people from Japan came to America,
and with the characteristic enthusiasm and habits of mappers they showed the American workers how to
ask interesting questions about their work, collect data, interpret the data wisely and improve processes.
They showed them how to write down a description of their jobs, look at those descriptions and see if
there might be any problems lurking in there.

It worked wonderfully, but again it was accidentally teaching people mapping that had done the real
work.

When the TQM ideas became widespread, the accidental teaching of mapping just got lost. The ideas
were sold to packer industry on their results, but packer industry just couldn't see the key bits of what
they'd bought - the wisdom and reflection stuff.

Even creative managements of high tech industries can be thwarted by the communication barrier. To
many of their workforce, the manifest artifacts of TQM look just like the stuff that Frederick Taylor, the
father of scientific management, threw about the place. Taylor gave us mass production before we had
robots, by getting people to do the robots' jobs. Perhaps that is an odd way of looking at it, but at Los
Alamos, they simulated spreadsheet programs by sitting secretaries at grids of desks with adding
machines! He was such a control freak that he used to strap himself into bed every night to counter his
morbid fear of falling out. His slogan was, `Leave your brain outside and bring your body indoors'. Our
culture, from schools to legislation and concepts of status, is still riddled with Taylorism. In this
situation, the worst case result of introducing TQM without an explicit understanding of mapping will be
dumb Taylorism. The best will be that we are confused about why we do what we do.

In some organisations the results have been tragic. There is an obsession with micro-accounting,
dumbing-down and writing poorly-designed job descriptions that are taken as absolute behavioural
tramlines. Everything has to be done on the adversarial model of packing, not the intended co-operative
model of mapping. ISO 9001 auditors appear in the workplace and perform swoop raids on the
paperwork, aiming to catch workers out in trivialities of paperwork regulations, like a scene out of
Kafka. In some organisations, workers become more concerned with avoiding blame for microviolations
of paperwork regulations than the work at hand, which becomes completely obscured by the intervening
rituals. Think of Feynman's story of the six lines on the STS SRBs! Some people actually think that this
is the idea!

Good TQM captures experience in the workplace and condenses this knowledge into lists of things that
are worth considering. These checklists simply remind mappers of issues they should use their mapper
common sense to consider, where appropriate. The packer corruption is to regard the job as ticking the
boxes as quickly as excuses can be found to do so. How much consideration is `sufficient' to a packer?

As the proceduralist orgy has progressed under the banner of `Quality', in too many places it has driven
real quality, which is about doing one's imaginative best to do the best possible job for the customer,
completely out of the window.

Ironically, there are some organisations (all of which seem to be able to make intelligent use of
information technology) that have invented a kind of `real proceduralism'. Telephone banking
companies have dropped the pretense that they are offering an intelligent service from real people, and
openly acknowledged the anonymous, proceduralised nature of their business. This has allowed them to
think about their procedures clearly, and produce very good procedures that satisfy customers' needs
twenty-four hours a day at low cost. This contrasts favourably in many people's eyes with an offensive
counter-clerk performing a caricature of a pompous Dickensian undertaker and behaving as if the
ridiculous `regulations' he is applying are the customer's problem and not his.

Very successful financial organisations recognise that there are procedures that computers do well, and
judgements that experienced people do well. They analyse their markets with mathematics run by the
computers, and leave the final calls up to the people. They can use different criteria to describe the jobs
of both aspects of the overall system, and evaluate the effectiveness of different algorithms and traders.

This gives an opportunity to try a mappers' technique. If we have `Real TQM', `Fake TQM' and `Real
Proceduralism', can we say:

Real TQM      Real Proceduralism
Fake TQM      Fake Proceduralism

and ask if there are any examples of `Fake Proceduralism': organisations that swear blind that they are
mindless automatons while actually indulging in a frenzy of mapping? What about the British Army's
journey to Port Stanley in 1982? Remember, an army is an organisation that faces particularly difficult
challenges. Even those that abhor all conflict can learn how to make their world more co-operative by
understanding what makes an army more co-operative. The British Army are Fake Proceduralists? Now
that's an interesting mapper way of looking at things, because then we can look beyond the paper and the
language and see what the organisation does. The idea that they are all following rules all the time
makes the British Army in action hard to understand. Once we realise that there are a lot of mappers in
there, following the rules until the moment that they can see they won't work any more, things get
clearer. We can also compare the customs of the British Army with the US Army. The Americans have
always openly preferred an approach more like the `Real Proceduralism' of the telephone bankers. They
openly intend to do everything by procedure, and get their mappers to write the best procedures they
can, in readiness. When this works, it works very well indeed, as in the Gulf, but it is brittle because it
does not give the packers using the procedures much room to react to changing circumstances. This
leads to inefficiency, as in the Grenada invasion.

The lesson is simple. Without the underlying mapping, TQM turns into a black comedy. With mapping,
the Quality stuff can educate and provoke, and the enthusiasm and joy in work that the TQM advocates
talk about is nothing but general mapper high spirits!

In this model, the Systems Thinking approach advocated by Peter Senge in The Fifth Discipline can be
seen as a collection of useful mapper concepts and techniques, optimised for management problems.

Mandate Yourself!

There are many more packers than mappers alive today. One purpose of this course is to explain
effective mapping techniques, but others are to explain why for many of us, our insights do not seem to
be endorsed by others. We have to recognise when our concerns as artisan programmers are not
understood by packer colleagues, so that we can get them habituated to complex phenomena taking a
while to think about. We also have to accept that being right is not necessarily being popular, but that a
personal commitment to solid work often brings a more fulfilling and less stressful environment than
any ostrich behaviour could.
We must also recognise that it is possible to communicate effectively with mappers, even those who are
out of their domain. While accepting that there is a specific communication barrier with some, we must
also recognise that with others, communication is often much easier than we might expect.

We must also keep in mind a clear understanding of the boundaries of our own responsibility. When
talking to a customer about a subject which he does not seem to grasp the essential points of, remember
that our personal, self-imposed goal of finding the best answer does not necessarily mean forcing the
customer to accept that answer alone. Any contemplation that throws up one strategy usually throws up
several others as well, each with strengths and weaknesses. You can always summarise these, and
content yourself with the knowledge that you have done a good job of exploring the options and
explaining the choices to the customer. If, with full understanding, the customer makes what you would
see as a stupid choice, well how else can the customer organisation learn?

You don't have to save the world, just your bit and as much of the rest as you can reach!

The Undiscovered Country


In Tom DeMarco and Tim Lister's Peopleware, the authors suggest that gelled teams make great
software, and propose that initiatives are taken to assist the social cohesion of teams. Looking at gelled
teams, we can see the social ease which they exhibit, and the effectiveness in their work. But add the
concept of mapping into the equation, and the picture changes. Gelled teams look much more like
groups of mappers, communicating effectively with one another because they can refer to parts of their
shared mental map of the situation with a few, perhaps odd-sounding words. (There was once a
guaranteed delivery comms buffering subsystem that its creators called the `Spaghetti Factory'. It was to
do with loops of stuff flying unsupported through the air.)

They can't just exchange information about their maps quickly - they can all grab hold of chunks of their
maps and move them around. They can move chunks of each others' maps around. They can react, as a
team, very quickly. They all know what is going on, and they've all thrashed it to death, in their own
minds. They don't make cock-ups, and they don't waste time on unsynchronised activity. They respect
each other even though they may loathe each others' taste in music, politics and food. The performance
gains are breathtaking, as anyone who has had the pleasure of working on such a team knows.

What one has to do is take the time to ensure that everyone has a shared understanding of what is going
on, and life can be a more rewarding experience, because one has a sense of success at five o'clock.

Getting into this situation is not an accident, it is repeatable.

Knowledge Packets, Daydreams, Maps and Understanding


As software engineers, we might describe learning as forming associations between referents. The sky is
blue. The rain in Spain falls mainly on the plain. We might call these learned facts `knowledge packets':
little bits of truth (or errors) that we possess.

One can go a long way on knowledge packets. Early learning (as directed by adults) for most children
focuses almost entirely on the acquisition of knowledge packets. Things that one should or should not
do. Methods for performing tasks. Data to be retained and later recovered on demand.

The trick with knowledge packets is to identify key features of the situation, and determine what action
to take. One can get A Levels and degrees, drive cars, even chat up members of the opposite sex by
using knowledge packets. Very adept knowledge packet users can fill their heads with megabytes of
procedural tax law and become accountants earning six figure sums. Some politicians omit the pattern
recognition stage and use a single all-purpose knowledge packet for everything.

Of course, we don't just stack up knowledge packets like dinner plates in our heads. From our earliest
years our natural response to gaining each new knowledge packet is to ask, `Why?'

We attempt to connect up knowledge packets to create a structure within our knowledge, a mental map
that gives us understanding of the causes and effects within a situation. This understanding allows us to
derive a solution to any problem within the situation, instead of attempting to select a rote-learned
response.

In later life, we must spend periods of reflection, or daydreaming, where we trace through the
relationships between that which we know. This broadens our integrated map, and allows us to identify
structures in the map that apply in different areas. We can then get a deeper map, where what
mathematicians call `isomorphism' provides what software engineers call `inheritance', allowing us to
reapply knowledge.
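
As a concrete illustration (a toy of our own, in Python - the classes and numbers are invented), one
structure, exponential decay, can be captured once and then inherited wherever the isomorphism holds:

    # One structure, captured once, reapplied to two isomorphic domains.
    import math

    class ExponentialDecay:
        """Anything whose quantity falls at a rate proportional to itself."""
        def __init__(self, initial, rate):
            self.initial = initial
            self.rate = rate

        def value_at(self, t):
            # N(t) = N0 * exp(-rate * t) holds in every isomorphic domain.
            return self.initial * math.exp(-self.rate * t)

    class RadioactiveSample(ExponentialDecay):
        pass  # nuclei remaining in a sample over time

    class DischargingCapacitor(ExponentialDecay):
        pass  # voltage across an RC circuit over time

    print(RadioactiveSample(1000, 0.1).value_at(5.0))    # nuclei left after 5 seconds
    print(DischargingCapacitor(9.0, 0.5).value_at(2.0))  # volts left after 2 seconds

The subclasses add nothing new; the point is that whatever we establish about the parent structure is
reapplied, free of charge, in every domain that shares its form.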

We rearrange our mental maps to produce simpler expressions, and allow more understanding to be held
in the mind at once. When we find a simpler way of looking at things, we find it hard to remember what
it was like when the topic seemed complicated, and we ourselves have grown. With understanding,
where does the self end and the data begin? With knowledge packets, the division is clear.

We become adept at using techniques in reflection that allow us to explore our maps, and the knowledge
packets we have not yet connected. There are likely to be neurological underpinnings to what we do
when we reflect, but some kind of abstract pattern recognition activity must be under way. We learn to
use our brains.

Without understanding there can be little intelligent action. Without mental maps there can be no
understanding. Without reflection, there can be no mental maps, only knowledge packets.

There are computer data structures, called `ontologies', that hold vast numbers of truths in networks
associated by a form of predicate logic. The CYC database, for example, can use maps of the meanings
of natural language well enough to interpret photograph captions and find examples for pictures needed
by journalists.
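
To show the flavour of such a structure, here is a toy of our own devising (nothing like CYC's real scale
or interfaces): truths held as subject-predicate-object triples, queried by simple matching.

    # Truths as (subject, predicate, object) triples, queried by matching.
    facts = {
        ("sky", "has-colour", "blue"),
        ("albatross", "is-a", "bird"),
        ("bird", "is-a", "animal"),
    }

    def holds(subject, predicate, obj):
        # Direct lookup, plus transitive `is-a' reasoning (toy version:
        # assumes the network of facts contains no cycles).
        if (subject, predicate, obj) in facts:
            return True
        if predicate == "is-a":
            return any(holds(mid, "is-a", obj)
                       for (s, p, mid) in facts
                       if s == subject and p == "is-a")
        return False

    print(holds("albatross", "is-a", "animal"))  # True, inferred via bird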

Mappers and Packers


Or at least, all this should be true. Unfortunately, we are descended from industrial and agrarian societies
where one day was very much like another. Efficiency was dependent on getting everyone co-ordinated
into simple group activities. On the other hand, there really wasn't much call for inventiveness. We
developed social customs that teach people to stack knowledge packets and focus on action. Reflection
(`daydreaming') is discouraged during early school. We observe children closely and note deviations
from action-based behavioural norms with concern. One even hears parents who are concerned that their
children may have physiological abnormalities if they do not wish to play a particular sport.

One cannot easily teach reflection to a child. Unlike the performance of physically manifest tasks,
subjective experience must be discussed.

One cannot easily ascertain if reflection is proceeding well in a person. Only by careful discussion or
watching the long-term results of a child's mentation can effective daydreaming be identified.

So there is nothing in our social history that motivates parents or teachers to teach reflection. There is
nothing that makes teaching reflection in school a priority.

In fact, the reverse is true. When a child attempts to reflect, the consequent lack of manifest physical
activity is chastised. When questions prompted by reflection are asked by children, they are rarely
addressed by busy adults. Where reflection succeeds and understanding is gained, this can become a
handicap to the child. If there are another fifteen simple addition sums to do, the child will become
bored, be chastised, and labeled as incapable of performing the simple task, although nothing could be
further from the truth.

Notice that although adults chastise different effects on each occasion, what the child has been doing in
each case is reflecting. Many people have actually been conditioned to think that reflective thinking is,
in itself, socially unacceptable!

The traditional story is that thinking is taught at universities, but with a whole degree course of thirty
years ago packed into the first year of a modern course in most technical subjects, this rarely happens.

In the workplace, educated people are still regarded as able to think, and indeed all programmers must
be able to do it to some extent, just to accomplish anything. We are amongst the most reflective
people in society, but we are still a far from homogeneous group. Some of us are better at it or less
nervous about it than others. Again it is not taught, and with the workplace a part of the embedding
society, the cultural environment often remains based on knowledge packets and action, rather than
mental maps and understanding.

This leads to two distinct groups in society. Mappers predominantly adopt the cognitive strategy of
populating and integrating mental maps, then reading off the solution to any particular problem. They
quickly find methods for achieving their objectives by consulting their maps. Packers become adept at
retaining large numbers of knowledge packets. Their singular objective is performing the `correct'
action. Strategies for resolving `hash collisions', where more than one action might fit a circumstance,
are ad hoc.
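
To make the metaphor concrete, here is a toy model of our own (the situations and actions are invented):
situations act as hash keys, rote actions as values.

    # Toy model of packer decision-making as a hash lookup: a `collision'
    # is any situation that matches more than one stored knowledge packet.
    responses = {
        "customer complains": ["escalate to manager", "offer refund"],
        "build fails":        ["rerun the build"],
    }

    def packer_decide(situation):
        actions = responses.get(situation, [])
        # With no map to consult, collision resolution is ad hoc: just
        # take whichever packet happens to come first.
        return actions[0] if actions else "bluster"

    print(packer_decide("customer complains"))  # escalate to manager
    print(packer_decide("novel situation"))     # bluster

Nothing in the table says which colliding packet is right for this circumstance; knowing that would take
a map.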

How to Regain Mapping


Our species' principal advantage over others lies in our generality. We can survive a greater range of
temperatures than any other creature, but more importantly, we are inventive. Arthur C. Clarke and
Stanley Kubrick celebrated this inventiveness in the famous `thigh-bone to spaceship' fade in the film
2001.

We are all mappers, no matter how little we use the faculty. Those of you who spend time on solitary
walks, in heavy metal bars or whatever does it for you, feeling somehow uncomfortable until suddenly a
penny you didn't even know you were looking for drops, are already operational. You know who you
are!

Otherwise, there is an easy way to start. So easy that kids trying really hard to be natural mappers
often discover it. Get yourself an imaginary friend, as smart as you are, but totally ignorant of the world.
Whatever you feel you could relate to - you don't have to tell anyone that you find it easiest to talk to the
1960's cartoon character `Astronut' hovering about in his little UFO with a VHF television aerial on his
head. Or maybe Sean Connery's canny medieval investigator in The Name of the Rose would be more
fun. Explain everything to your imaginary friend. What it's for. Where it comes from. Where it's going.

At first your full attention is required for this exercise, but after a while the logic between knowledge
packets becomes as automatic as driving, and your attention is only drawn to unusual situations: pieces
of your map that need filling in or contradictions resolving. It works. With your maps building,
discussion of techniques is possible, because we all know what we are talking about.

The Ways of Mappers and Packers


It is a surprise to discover that there are two distinct states of mind around us. It is similar to the
experience of learning that someone you've known for months is illiterate. At first you are astonished:
this cannot be possible! But then you realise how someone else can live a life very different to yours,
that looks superficially almost the same.

In this section we look at traits of the two strategies. As we do so, many of the woes of the modern age,
particularly in high tech disciplines, will come into a simple picture - the mark of a useful theory!
Remember, most people, be they mappers or packers, have no reason to believe there is any other state
of mind but theirs.

What is packing? Well, you just stop yourself asking `Why?'. You never really clean up your map of the
world, so you don't find many of the underlying patterns that mappers use to `cheat'. You learn slower,
because you learn little pockets of knowledge that you can't check all the way through, so lots of little
problems crop up. You rarely get to the point where you've got so much of the map sorted out you can
just see how the rest of it develops. In thinking-intensive areas like maths and physics, mappers can
understand enough to get good GCSE grades in two weeks, while most schools have to spend three
years or more bashing the knowledge packets into rote-learned memory, where they sit unexamined
because the kids are good and do not daydream. It really isn't a very efficient way to go about things in
the Information Age.

With no map of the world that checks out against itself and explains just about everything you can see, it
is very hard to be confident about what to do. The approach you have to take in any situation is to cast
about frantically until you find a little packet of knowledge that kind of fits (everything has a little bit of
daydreaming at its core, but the confused objective is to stop it as soon as humanly possible). Then you
list the bits that kind of fit, and you assert that the situation is one of those, so the response is specified
by your `knowledge'.

Your friend has happened to grab another packet of `knowledge' and so you begin an `argument' where
your friend lists bits of your knowledge that don't fit and says that you are wrong and he is right, and you
do the same thing. You don't attempt to build a map that includes both your bits of knowledge and so
illuminates the true answer because you don't have access to the necessary faculty of mapping, and
anyway, without the experience, it is hard to believe that it is possible in the time allowed. Being devoid
of the clarity that comes from a half-way decent map, you would rather do something ineffective by the
deadline than something that might even work. Then when things go pear-shaped you say it is bad luck.

The consequences go further. Not having a big map means that you often don't understand what is
happening, even in familiar settings like your home or workplace. You assume that this means that you
do not possess the appropriate knowledge packet, and this may be seen as a moral failure on your part.
After all, you have been told since childhood that the good acquire knowledge packets and stack them
up in their heads like dinner plates, the lazy do not.

You are also overly concerned about certainty. Mappers have a rich, strong, self-connected structure
they can explore in detail and check the situation and their actions against. Logic for them is being true
to the map, and being honest when it stops working. It's not a problem, they just change it until it's
`logical' again. Without mapping, you have to use rickety chains of reasoning that are really only
supported at one end. Because they are rickety you get very worried that each link is absolute, certain,
totally correct (which you can never actually achieve). You have to discount evidence that is not
`certain' (although tragically it might be if your map was bigger), and often constrain your actions to
those that you can convince yourself are totally certain in an inherently uncertain world.

The issue of certainty then becomes dominant. People are unwilling to think about something (erect a
rickety chain) unless they are `certain' that the `procedure' will have a guaranteed payoff, because that,
they believe, is how the wise proceed.

You become absorbed by the fear of being found to be `in the wrong', because of the idea that the `good'
will have acquired the correct knowledge packet for dealing with any situation. The notion that the
world is a closed, fully understood (but not by you) thing kind of creeps in by implication there. The
idea of a new situation becomes so unlikely that you rarely spot one when it happens, although mappers
notice new situations all the time. Your approach becomes focussed on actions that you cannot be
`blamed' for, even though their futility or even counter-productivity is obvious. You insist on your
specific actions being specified in your job, even when your map is already easily good enough for you
to accept personal responsibility for the objectives that need to be achieved, which would be more in
keeping with your true dignity.

Some people have so little experience of direct understanding, produced by mapping over time, that they
cannot believe that anything can be achieved unless someone else spells out in exact detail how to do
absolutely everything. They believe that the only alternative to total regimentation is total anarchy, not a
bunch of people getting things done.

Now, if you are used to talking to your imaginary friend about your map of the world, and keep finding
holes and fixing them, you don't become very attached to the current state of it at any particular time.
You do sometimes, if you find an abstraction that was a wonderful surprise when you got it and has been
useful, but now needs to go. It's always important to remember that the fun only adds up: if finding
something was fun, finding something deeper is even more fun. Generally though, you don't mind your
imaginary friend knocking bits off the map if they don't work. So you don't mind real friends doing it
either! When you see things in different ways you try to understand each others' maps and work through
the differences. Two messy maps often point the way to a deeper way of seeing things.

Great thinkers are mappers. They rarely proceed by erecting edifices of great conceptual complexity.
Rather they show us how to see the world in a simpler way.

Mappers experience learning as an internal process in response to external and self-generated stimuli.
Packers experience learning as another task to be performed, usually in a classroom, using appropriate
equipment. Particularly in the early years, efficient mapper learning requires internal techniques for
exploring conceptual relationships and recognising truths, while efficient packer learning focuses on
memorisation skills.

Aspects of mapper learning require higher investment than packer learning, and this has consequences.
An emphasis on succinct, structured knowledge means that low structured off-topic considerations can
displace disproportionally larger issues from a problem the mapper is contemplating. If a child is trying
to understand a new idea in terms of as much as possible of what is already known, then likely the
child's awareness will be spread over as much `core knowledge' as possible already. The requirement to
then consider the question `Shall I take my library books back today?', bringing with it conceptually
networked questions such as `Where is my satchel?', `Will it rain?', `Will it rain tomorrow?' and so on is
an imposition on the mind that a packer child would simply not experience in apparently similar
circumstances. The packer child simply never has (for example) the form of the flows resulting from
economic supply and demand curves (which might also actually be the same representations that are
used to hold, say, parts of thermodynamic understanding) floating about to be displaced by a simple
question about a library book.

Accepting a fact and being ready for the next is also a different process in mapping and packing. The
mapper mind must explore the fact and compare it against core knowledge to see if it is a consequence
that already has a place in the mapper's conceptual model of the world, or if it is in fact new fundamental
knowledge that requires structural change.

Mappers are likely to be much more aware of the comparative reliability of information. Whereas
packers tend to regard knowledge as planar, a series of statements that are the case, mappers tend to
cross-index statements to verify and collapse them into more profound truths. Mappers are more likely
to work with contingent thinking of the form: `If X is true then Y must be true also, Z is certainly true,
and W is nonsense although everyone keeps saying it is the case'. Mappers are likely to be concerned
about the soundness of packer reasoning.

An aspect of packer thinking that drives mappers up the wall is that packers often seem to neither seek
out the flaws in their own logic, nor even hear them when they utter them. Worse, when flaws are
pointed out to them, they are likely to react by justifying following logic that they cheerfully admit is
flawed, on grounds of administrative convenience. The evidence of their own senses is not as important
as behaviour learned through repetition, and they seem to have no sense of proportion when performing
cost/benefit analyses. This is because packers do not create integrated conceptual pictures from as much
as possible of what they know. The mapper may point out a fact, but it is one fact amongst so many. The
packer does not have a conceptual picture of the situation that indicates the important issues, so the
principal source of guidance is a set of procedural responses that specify action to be taken. The
procedure that is selected to be followed will be something of a lottery. For the mapper, one fact that
should fit the map but doesn't, means the whole map is suspect. The error could wander around like a
lump in a carpet, and end up somewhere really important. Both parties agree that they should do the
`logical' thing, but two people can disagree about logic when one sees relationships that the other has
only ever been dissuaded from seeing.

Mappers have lots of good ideas based in profound insights into relationships that packers rarely have
the opportunity to recognise.

Part of mappers' extraordinary flexibility and learning speed comes from the benefits of seeking
understanding rather than data, but some of it comes from the sheer amount of playing with a topic they
do. It is quite usual for mappers to spend every spare moment for a week wandering around a topic in
their heads, and then spend all weekend focused on it. Mapper focus is a terrible thing. A few hours of it
can produce breathtaking results where a team of packers could strive for months. Every IT manager
who has ever had an effective mapper around knows this.

Mappers have a linguistic tendency to want to talk in terms of the form of the concentrated knowledge
they reduce experience into. Although mappers often use different internal representations of a sphere of
discourse, they are adept at negotiating mutually agreed terminology at the outset of discussions between
themselves, and this is one way that mappers are able to recognise one another. Mutual recognition
occurs because of this series of transactions where one party traces a route through the map, stops, and
invites the other to pick up where they left off. The objective of the exercise is to align mental maps, but
it also reveals the presence of the other's map in the first place!

Mappers advocate changing descriptions and approaches often, because they see simplification benefits
that are of high value to understanding, and whose map is it anyway? In social or administrative
situations, this can cause confusion because the mapper does not realise that the packers do not have a
map that they can move around in chunks. Mappers see packers as wilfully ignorant, packers see
mappers as confused. In software engineering contexts, this failure of communication leads to arguments
about `churn'. The mapper wants to move from a large mass of software to a smaller one that is more
robust because of its necessary and sufficient structure. The packers are not practiced at seeing the
proposed new structure, and see only a maniac who wants to change every single file in one go.

Mappers have a direct, hands-on awareness of the effectiveness of their reflections and so, in most areas,
they have a sense of the universe in some unseen way `playing fair' with them, even rewarding them
with wonderful surprises when they look deeply enough. This often gives rise to a `spiritual' or
`mystical' element to their character, and often to unusually high spirits, even in situations where packers
are despondent.

Mappers ensure that the known elements of a problem are held in their minds, before embarking on it.
They draw on their own strength of character to find the motivation to do the hard work involved in
keeping their background explorations going. To achieve a solution to a problem, a mapper engages all
his or her strengths, and is rewarded with elation or a sensation of betrayal if things do not work out
well. Mappers are `passionate' about `dry' subjects.

Mappers excel at conceptually challenging work such as complex problem-solving with many inter-
related elements. They can perform tasks requiring insight, or imagination, that packers simply cannot
do at all. Best quality software engineering, mathematics and physics, with genetics emerging as a likely
area of unique contribution, are amongst the science disciplines that challenge mappers. Amongst the
traditionally recognised arts, poetry and music are areas where the mapper faculty for manipulating
structure is of particular benefit, although there may be value in redefining the `Arts' as what mappers do
well. The very power of great art is only available to mapper thinking, because the artist uses a tone of
sound or light, itself representative of nothing, but triggering the recognition of a deep structure.
Pointing out the structure can then bring to mind instances of that structure, and the artist has added to
the audience's maps!

All these differences are simply consequences of one person having a big map built by a great deal of
disciplined daydreaming, and the other not. That these profound differences between two clearly distinct
groups of people exist is the major surprise of the approach proposed. It means that it is very unlikely
that either kind will have any appreciation of the other's state of mind.

Packing as a Self-Sustaining Condition


We live in an action oriented society. It's been that way since we invented agriculture and developed a
stable environment within which the tasks to be performed could be defined. Not much thinking was
needed. We have little experience of discussing and managing subjective, internal states - although they
are as much shared experiences as external objects visible to all. We have a general heuristic that says
we should confine our observations to the externally visible, which kicks in to prevent the exploration of
subjective phenomena even before they have had the chance to give results and justify themselves.

When things go wrong, we seek to clarify action, and capture better descriptions of more effective
actions. In situations where flexibility is an asset, this leads to reduced aspirations. If things are
proceeding according to the actions written on paper, they are deemed to be going well, and the
opportunity cost is not considered.

Worse, the behaviour of people trapped in lack of understanding can reinforce each other. If one person
just doesn't understand what is happening, they look about them and see others apparently knowing what
they are doing, feel vulnerable, because lack of knowledge packets is supposed to be a personal failure,
and therefore they bluster. They stick their noses in the air and waffle about `due consideration' and
`appropriate action' as if `undue consideration' or `inappropriate action' was also on the table, but don't
suggest what the appropriate action might be.

The thing is, everybody is doing it! So the silent conspiracy to maintain the etiquette of bluster develops.
If anyone violates the etiquette, that person will be assailed by inherently unclear objections and other
pressures to `conform', apparently for the sake of it. These cannot be countered in action-oriented terms,
only by reference to causal relationships that only one person is fully cognizant of. Mapping in a
packing world can be a painful and depressing experience, particularly if one does not understand the
shattered reality one's packing associates inhabit.

In pathological situations, this can lead to an infinite regress wherein every problem is addressed by
attempting to delegate it to someone else, a procedure, or a blame allocation mechanism. It's rather like
holding your toothbrush with chopsticks - if you are holding the chopsticks just like on the diagram, the
brush up your nose and the paste all over the mirror are not your responsibility!

Remember, we've described the causes of this misery not by waffling about `the human condition' or our
colleagues' `moral fibre', but practically, out of socially-conditioned avoidance of `daydreaming'!
The Mapper/Packer Communication Barrier

It's worth reiterating some key points here:

● Mapping and packing are very different strategies
● Packing is the strongly enforced social norm
● The world is set up for packers
● Business language is packer language
● The results of mapping are called `common sense'
● Common sense isn't so common
● Mappers think packers are cynical or lazy
● Packers think mappers are irrational
● Packers spend much of their time playing politics
● The last thing that counts in politics is reason
● Mappers are often wrong about packer psychology
● Packers are usually right about packer psychology
● Mappers are often wrong about mapper psychology
● Packers are always wrong about mapper psychology.
● Mappers do not have a culture to guide them
● Most mappers teach themselves, like Mowgli
● Mappers can teach themselves!
● Mappers can learn from others
● Mappers often face significant social challenges
● Mappers currently rarely fulfill their potential
● Once a situation is understood, it can be addressed.

Thinking about Programming

What is Software Engineering For?


Whenever we get confused, we must be able to see where we are going in order to know what action to
take. We must know what we are trying to achieve.

We are software engineers. Why? What is software engineering for? What do software engineers do?
We get some curious answers to this question. One chap said, `They follow the procedures of the
Software Engineering Standards!' Another said, `They transliterate a requirement!'

Oh dear. We suggest that software engineers ensure the programs their customers need are running on
their computers. That means our programs must do the right things. They must be robust. Sometimes we
must know for certain that they are robust, and sometimes we will need to be able to prove it. We'd
always like to be able to do all those things! The necessary programs must be running tomorrow as well,
which usually means that our programs today must be maintainable. We must do our work cost-
effectively, or we won't get the chance to write the programs in the first place. Our delivery must be
timely.

We use all our inventiveness and the experience contained within our discipline to attain these goals. All
our methodologies, standards, tools, languages are intended to assist us in attaining these goals.

We do nothing for the sake of it.

Software Engineering is Distributed Programming


The traditional view of the workplace is that the team is doing a job, and the individual is a part of this
effort. But as mappers we can try looking at things in all sorts of odd ways, to see if they are
informative. We can draw a system boundary around the programming team and notice that it does
nothing that an individual programmer couldn't do. Activities such as requirement elicitation, design,
implementation, test, management, review, build, archive and configuration management must all be
performed by a single programmer doing even a small job. So we can see software engineering activities
as the distribution of what a single individual could be doing quite effectively and responsibly in potter
mode in his or her study!

We distribute programming for the same reasons that we distribute any kind of processing: availability,
parallelism and specialisation.
This way of looking at things brings insights. We must select the divisions between tasks intelligently.
Sometimes we can get benefits from putting two tasks with one person, where we need not be concerned
if they remerge. For example, many organisations have a general practice of separating the identification
of software requirements and architecture, but when they are following Booch style object modelling
methodology, they take his advice and remerge these tasks. When we separate the skills of design and
test, we can actually get added benefits from the situation, by controlling communication between the
disciplines so that the test engineer's thinking is not compromised by the designer's. There was a project
manager who was very much a packer. He didn't have a clear understanding of what he was doing and
why, and had been led by the absence of any positive model of his job into thinking that a key objective
was preventing this communication. The testers didn't know how to set up the conditions for the
components they were to test, and the designers weren't allowed to tell them. Acrimonious arguments
continued for days. These things really happen when we lose sight of the big picture.

We must make sure that the communication between distributed tasks is efficient, and that means that
we must both agree a protocol and bear each others' needs in mind. Anything you'd need in your mind
when you have completed one task and are about to embark on another, your colleague needs in his or
hers. Your output will be no help to anyone if it doesn't tell your colleague what they will need to do the
next bit. We need to use our own ability to perform each others' jobs, no matter how naively, to monitor
our own performance.

The final insight we need to raise at this point is that the black box of an individual programmer still
exists in the team. The flow of information is not a linear series of transforms like a car factory; it is a
fan-in of issues to a designer and a fan-out of solutions. The insight of the designer has not yet been
distributed. Such an achievement would be a major result in AI.

What is Programming?
To understand software engineering we must understand a programmer. Let us allow a programmer to
specify the requirement (to be identical with the user), and examine a scenario which ends in the
construction of the simplest possible program: a single bit program.

Ada is sitting in a room.
In the evening the room becomes dark.
Ada turns on the light.

That is the fundamental act of programming. There is a problem domain (the room), which is dynamic
(gets dark). There is order to the dynamic problem domain (it will be dark until morning), permitting
analysis. There is a system that can operate within the problem domain (the light), and it has semantics
(the switch state).

There is a desire (that the room shall remain bright), and there is an insight (that the operation of the
switch will fulfill the desire).

Dynamic problem domains, systems and semantics are covered in detail elsewhere. On this course we
are concentrating on understanding more about the desire and the insight.

It is worth pointing out here what we mean by a `programmer'. A drone typing in the same RPG 3
invoicing system yet again might not be doing any real programming at all, but a project manager using
Excel to gain an intuitive understanding of when the budget will get squeezed and what the key drivers
are, most certainly is.

Programming is a Mapper's Game


We have a reasonable description of what programmers actually do, that makes sense. The two key
words, `desire' and `insight', are things that it is difficult to discuss sensibly in packer business language,
which concentrates on manifest `objective' phenomena. While this is a very good idea when possible, it
can hamper progress when applied as an absolute rule, which is how packers often apply rules.

It is worth making a philosophical point here. In order for any communication to take place, I must refer
to something that is already there in your head. One way a thing can get into your head is as an image of
something in the external world, and another is by being part of your own experience. If a part of your
experience is unique to you (perhaps an association between pipe smoke and the taste of Christmas
pudding, because of visits to your grandparents), we cannot speak of it without first defining terms. Even
then, I cannot have the experience of the association, only an imagining of an association. But if the part
of your experience is shared by all humans (perhaps our reaction to the sight of an albatross chick), we
can speak of it `objectively', as if the reaction to the chick was out there with the chick itself to be
weighed and measured.

It has been argued that it is necessary to restrict the language of the workplace to the `objective' because
that is a limitation of the legal framework of the workplace. This is just silly. How do journalists,
architects (of the civil variety) or even judges do it? This is the area where managers have to use their
own insight to control risk exposure.

We suggest that the real issue here is that we are not very good at software yet. We probably never will
be - our aspirations will always be able to rise. We are culturally constrained, and further influenced by
the mature objective metrics that our colleagues in the physical, rather than information, disciplines
routinely use.

To get anywhere with programming we must be free to discuss and improve subjective phenomena, and
leave the objective metrics to resultants such as bug reports.

First, desire. In the example above, Ada likely did not begin with a clear desire for greater light. Her
environment became non-optimal, perhaps uncomfortable, and she had to seek for a clear description of
exactly what she wanted. This clarifying of one's desire is usually a nested experience where incremental
refinement is possible, and proceeds in tandem with design. We will have more to say about the User
Requirements Document later - for now let us remember that the clarification of desire always has the
potential to turn into a journey of exploration together with the customer.

Next, insight. This is the moment of recognition when we see that the interaction of the problem and the
desire can be fulfilled by a given use of the semantics. It's kind of like very abstract vector addition with
a discontinuous solution space. Or to put it another way, it's like doing a jigsaw puzzle where you can
change the shape of the pieces as well as their positions. It is supremely intellectually challenging.

There is a pattern here that relates computer programming to every other creative art. We have three
phenomena, Problem, Semantics and Desire (heavy capitals to indicate Platonic essences and like that).
Problem and Semantics are of no great interest to the AI or Consciousness Studies people, but
Desire has something odd about it. These three phenomena are addressed or coupled to by three
activities of the programmer. Looking consists of internalising the features of the Problem. Seeing
comprehends the meaning of the Desire. Telling exerts the Semantics. Looking and Telling are domain
specific. The poet might observe commuters, while the ecologist samples populations. The poet writes
structured words, while the ecologist introduces carefully selected species. All of us do the same kind of
Seeing. Talk to any artist about the good bits of your job.

We need all those wonderful mapper faculties to handle this stuff.

Programming is a mapper's game.

General Tips on Mapping


Packers have a whole proceduralised culture that provides behavioural tramlines for just about
everything. It's so complete you don't even notice it until you solve a problem perfectly effectively one
day, by a method that's not on the list. It might be something as trivial as getting out of the car and
buying the Pay and Display ticket before driving along the car park and pulling into a space. Apparently
one is `supposed' to park the car, walk to the machine, and walk back again.

Mappers hardly ever get the upper hand on these cultural issues, but when it does happen it can be
hilarious. A packer gave a dinner party and it so happened that over half of the guests were mapper
types, IT workers and others. The host pulled a pile of warm plates from the oven, and started handing
them to the guy on his left.`Just pass them around!', he cried cheerfully. Everything went well until he
passed out the last plate. Then his expression changed from confusion, to amusement and a distinct
moment of fear before he realised he needed to shout `Stop!'

Or maybe it was just a plea from the heart.


Mappers don't have a general cultural context to learn from, so we are almost entirely self taught. Here
we have collected some observations gathered from talking to mappers. We can learn a great deal
about mapping by talking to others.

Problem Quake

After you've been telling yourself about what your customer needs to accomplish for a while, chasing
around the elements of the problem, how they are related, and the physical capabilities of the systems
available, the problem will suddenly collapse into something much simpler. For some reason, we rarely get it quite
right in that sudden moment of understanding. Be ready to shift your new understanding around, and
make the most of your aftershocks. This is a good time to express your new understanding to colleagues,
and allow them to look afresh at things you may have stopped seeing because of familiarity.

Incremental vs Catastrophic Change

Sudden realisations come when they are ready, and we can optimise conditions to produce them. They
have their problems. They are exhilarating, convincing, and sometimes wrong. When you get them,
check them through with respect to everything you know, and try your best to break them. A big quake
is always important, even if it doesn't bring an instant solution. On the other hand, we can often get a
great deal of reduction out of chunking the problem and just moving lumps around. Don't feel
embarrassed about thinking `crudely' - start doing it now and you might get to see something a week
next Tuesday. By which time people whose thinking consists of looking very serious will know nothing.

Boundaries

Focus on your boundaries. There are three classes of components to your problem. These are things you
care about, things that affect things you care about, and things you don't care about. One of the reasons
that mappers have an easier life than packers is that they take the initiative and try to identify all the
external effects that could give them problems, and they don't just concentrate on stuff listed on bits of
paper they've been handed. If you can find your boundaries, your problem is well defined and you can
start to solve it. If you can't you might need to talk to your customer again, or draw your own boundary,
which involves making assumptions that should be explicitly justifiable.

Explore Permutations

When you have a green duck, a pink lion and a green lion, ask yourself where the pink duck has got to.
Understanding trivial and impossible permutations can lead to greater overall understanding, and some
permutations are just plain useful in their own right.
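
A hedged sketch of how mechanical this can be - enumerate every permutation and ask which ones
the design actually accounts for (the colours and animals are of course just the example above):

    #include <stdio.h>

    /* Print every colour/animal permutation. Any combination missing
       from the design - the pink duck - is worth asking about.        */
    int main(void)
    {
        const char *colours[] = { "green", "pink" };
        const char *animals[] = { "duck", "lion" };
        int c, a;

        for (c = 0; c < 2; c++)
            for (a = 0; a < 2; a++)
                printf("%s %s\n", colours[c], animals[a]);
        return 0;
    }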

Work Backwards
We all know how to solve mazes in children's puzzle books, don't we! (Start at the goal and trace
backwards - many problems untangle the same way.)

Plate Spinning

You know when your unconscious mapping faculty is going because of a fidgety, uncomfortable, even
grouchy feeling. When that feeling eases off, it's your call. If you've got a date, leave it be! But if you
want results, just take a quick tour around your problem from a couple of different perspectives or
directions, and the fidgetiness will come back. It's like the way platespinners nip back to each plate and
spin it up again before it falls off its stick.

Ease Off

After a great deal of physical work, you can attempt to lift something, but no movement occurs. The
sensation of feebleness where you expected to be able to exert force is surprising and unmistakable. The
mental equivalent feels very similar. There is absolutely no point pushing harder, but switching to rest
mode instead of carrying on bashing away with your puny little neurons is not easy. This stuff runs on
autopilot. You must obtain physical sensory stimulation. A shower, a noisy bar, a band. Get out of your
surroundings. You can recover mental energy in a few hours if you stop when you know you can get no
further.

Break Loops

The fidgety feeling that comes from effective background thinking is different to a stale sensation,
sometimes even described as nauseous. Your brain has exhausted all the options it can find, and you
need new empirical input. Get more data. Talk to someone. You obviously don't have some key datum,
or your whole model is skew. So maybe you need to do a dragnet search of your problem. If it's a buggy
program, put a diagnostic after every single line and put the output in a file. Then read it in detail over a
cup of coffee. Sure it will take ages - do you have a better idea? If it's a hideous collection of
asynchronous events to be handled, write them out in a list by hand. This forces your attention onto one
event after another, and you'll probably have new lines of inquiry before you are half way through.
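
For the buggy program case, putting a diagnostic after every single line need not be laborious. Here is
a minimal sketch of such a dragnet macro, assuming a hypothetical trace.log output file:

    #include <stdio.h>

    /* Append the source file, line number and a message to trace.log.
       Sprinkle it after every line of the suspect code, then read the
       log over that cup of coffee.                                     */
    #define TRACE(msg) do { \
            FILE *f = fopen("trace.log", "a"); \
            if (f) { \
                fprintf(f, "%s:%d %s\n", __FILE__, __LINE__, (msg)); \
                fclose(f); \
            } \
        } while (0)

    int main(void)
    {
        int total = 0, i;

        TRACE("before loop");
        for (i = 0; i < 3; i++)
        {
            total += i;
            TRACE("inside loop");
        }
        TRACE("after loop");
        return 0;
    }

Opening and closing the file on every call is deliberately crude - it is slow, but the log survives a
crash.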

Fault to Swapping

There are kinds of stupidity that only mappers have access to. Mappers can be paralysed by trying to
optimise a sequence that is too big to fit in their heads. Perhaps they want to move the wedding cake
before they put the spare wheel in the car so their hands are clear, but the spare wheel is here and the
wedding cake is at Fred's, and so on. When this happens to a modern paged OS, it gets itself out of
thrashing pages by reverting to a swapping strategy. It just swaps out whole processes until the logjam
clears, and then goes back to paging. Don't get paralysed - just do any job and then look again.

Duvet Stuffing
Turn the cover inside out, put your arms into it, and grab the far corners from the inside. Then grab the
corners of the duvet with the corners of the cover, and shake the cover over the duvet. A bit of practice
and you can do a king size one in less than 30 seconds.

Mapping and the Process


The purpose of software engineering is ensuring that the programs our customers need are running on
their computers. Software engineering is distributed programming. From this perspective, we can define
the process as a protocol for communicating with our colleagues through time and space. It provides a
framework that tells our successors where to find design information they will need to do their jobs. By
changing the process we communicate our experience to the future. It tells our colleagues in other parts
of the team when we will meet, and provides a structure for our discussions. It provides common points
in our projects where we can compare like with like, and so discuss aspects of our approach that we have
varied.

The process is not a prescriptive meta-program for making other programs. While our activities must
map to the process, it is not in itself sufficient for making programs. We think within the structure of the
process, but there must always be a stage of interpreting the process definition in the light of any given
problem. Remember that one always interprets the definition - abdicating this activity simply selects an
arbitrary interpretation. One then usually ends up trying to manage the issues that would arise when
building, say, a futures trading system, when the problems that are emerging are those of the actual
project, say a graphics rendering system. So you end up arguing about how you'll do requirements
tracing for transaction journaling instead of worrying about the extra bits you need for the specular
reflections!

Angels, Dragons and the Philosophers' Stone


Our ancestors were as smart as we are, and when it got dark at four o'clock in the afternoon, the other
thing to do was play with the insides of their own heads. Understanding some puzzles from antiquity as
the thinking of past mappers is useful not only because it is interesting, but because it shows us what the
unaided human intellect is capable of. This is something we need to appreciate if we are to regain
control of our work from the processes we have handed our lives and careers to.

Infinity was a hot topic, and our ancestors had broken this notion down into three different kinds.
Conceptual infinity is easy - you just say `forever' and you've got it, for what it's worth. Next there is
potential infinity. You can give someone an instruction like, `Keep counting forever'. In theory you
could end up with an infinite collection of numbers that way, but could it ever really happen? Could you
ever actually get an infinite number of things right in front of you, to do amazing conjuring tricks with?
They realised that if an infinite collection of anything from cabbages to kings really existed, it would
take up infinite space, so if there was an infinite collection of anything with any size to it, anywhere in
the universe, we wouldn't be here. There would be nothing but cabbages - everywhere. We are here, so
there is no infinite collection of anything with any size to it, anywhere. But there is still the possibility of
an infinite collection of something infinitely small. If something can be infinitely small, then God (who
is handy to have around for thought experiments because he can accomplish anything that can be done in
this universe) should be able to get an infinite number of angels to dance on the head of a pin.

Our ancestors felt that this idea was ridiculous, and that therefore there is no actual infinity in this
universe. Today, we have two great theories of physics. One works at large scales and uses smooth
curves to describe the universe. The other works at small scales and uses steps. We haven't got the two
theories to mesh yet, so we don't know if the deeper theory behind them both uses steps to build curves,
like a newspaper picture, or if it uses curves to build steps, like a stair carpet. It might be something
we've not imagined yet of course, but if it's one or the other, our ancestors would guess the steps,
because of the angels on the head of a pin.

What about the dragons? They roar and belch flame. Their noise travels faster than the wind. They
collect precious jewels below the ground. They live in South America, China, Wales. They eat people.
They are worms, and an ancient symbol for the world is the great world worm. They are a conceptual
bucket in which our ancestors gathered together what we now call tectonic phenomena. They had no
idea that the world is covered by solid plates wandering around on a molten interior, but they had eventually
gathered all the effects together through mapping applied to direct observation. The dragon took the
place of the real thing in their mental maps until by wandering around they discovered the real
phenomena that produced the effects they tagged `dragon'.

And alchemy? The futile search for a procedure for turning base metals into gold and getting rich quick?
An alchemical or Hermetic journey consists of a series of operations (which may or may not have
physical manifestation such as a diagram or experiment, or may be just a thought experiment),
performed by the operator. The journey ends at the same place that it begins, and during the journey the
operator's perception of the world is changed. The operator's consciousness has been deepened and
enhanced, and it is he, not the stuff on his desk, that is transformed. The return to the beginning is
necessary because it is only then that he sees that that which was obscure is now clear. Alchemy is
mapping.

In the great cathedrals of Europe there are many arches holding up the roofs. These days we'd probably
get a symmetric multiprocessor to grind out a finite element analysis, but the builders didn't have the
hardware or the algorithms. They didn't have the nice equations we have in mechanics, or even Newton's
own Latin prose. Most of them were illiterate. But if you compute the optimal arch strength/mass curve
for the spans, you usually find they were bang on. They did this with the only tools to hand - their own
experience, and the ability we have to get a feel for anything with the neural net between our ears.

Make sure you have a realistic evaluation of your own capabilities. The usually necessary correction is
up! Getting good at anything takes practice, but given that you'll be doing the work anyway, it's nice to
know how good you can get.
Creative hacking and responsible engineering are orthogonal, not contradictory. We can have the
pleasure of stretching our faculties to their limits, and still fulfill our obligations to our colleagues.

Literary Criticism and Design Patterns


There is an important difference between intentionality and action. A scriptwriter might intend to tell us
that the bad guy is horrible, and will do it by writing scenes involving nasty deeds. Our intention might
be to signal that a memory page in cache is no longer valid, our action is to set the dirty flag.

To starkly expose this point, consider an assembly language. An opcode might perform the most
peculiar settings of the processor's outputs given the inputs, but we think of the opcode by its mnemonic,
say DAA (Decimal Adjust Accumulator). Even though there is an identity between opcode and
mnemonic, the high level intentionality of the mnemonic can mask the action of the opcode on the
accumulator, which just flips bits according to an algorithm. If we see the processing opportunities in the
opcode, are we `abusing' it? The answer depends on the circumstance.

Whenever we have a distinction between intentionality and action, we have the opportunity to look at
the effectiveness of the action, and ask what we can learn about the intent, or the domain of the intent,
from the structure of the selected action. Might another action have been better? Do problems in the
action reveal issues in the intentionality? When we do this with books it is called literary criticism, and
taken seriously. If we are to learn how to write better programs, we need to learn as much as possible
about our kind of lit crit, because that's the only way we'll be able to have a sensible discussion of the
interplay of structure and detail that characterises style. The really nice thing is, unlike prose lit crit,
program lit crit is informed by experimental evidence such as failure reports. This ups the gusto and cuts
the waffle, leaving the learning enhanced.

We can get a rigorous and elegant coding discipline out of the difference between intentionality and
action. Consider the following fragment:

// Search the list of available dealers and find those that
// handle the triggering stock. Send them notification of
// the event.

for(DealerIterator DI(DealersOnline); DI.more(); DI++)
    if(DI.CurrentDealer()->InPortfolio(TheEvent.GetStock()))
        DI.CurrentDealer()->HandleEvent(TheEvent);

The definition of the objects has allowed the intentionality of the use case to be expressed succinctly.
However, there is really no smaller granularity where we can cluster intentionality into comment and
action into code without the comments getting silly.

If we interleave comment and code at this natural level of granularity, we can ensure that all lines in the
program are interpreted in comment. We are motivated to design objects (or functions) that we can use
economically in this way. We find it easier to correct some inelegance than to explain it away.

By being conscious of the difference between intentionality and action, we can make both
simultaneously economical, and fulfill the goals of a detailed design document's pseudo code and an
implementation's comments, while helping the implementation's verifiability. By putting everything in
one place, we assist the coherence of the layers.

This concept is taken further in Donald Knuth's idea of `Literate Programming', which, to be done well,
really needs tool support from systems like his WEB environment (predating the World Wide Web). But
you don't need to buy all the gear to enjoy the sport - literate programming is more an attitude than a
tool.

It is at this level of programming lit crit that we can seriously benefit from studying design patterns.
These are chunks of architectural technique more complex than the usual flow control, stream
management with error handling and other typical kinds of idiom. They are extremely powerful, and
very portable. See the wonderful book by Gamma, Helm, Johnson and Vlissides, where they describe a
pattern as something that:

`... describes a problem which occurs over and over again in our environment, and then
describes the core of the solution to that problem, in such a way that you can use the
solution a million times over, without ever doing it the same way twice.'
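
As a taste of what such a pattern looks like in code, here is a minimal sketch of their Observer pattern
(the class and method names are our own hypothetical choices) - note that it is the same shape as the
dealer notification fragment above:

    #include <cstddef>
    #include <vector>

    // An observer registers an interest; the subject fans events out.
    class Observer
    {
    public:
        virtual ~Observer() {}
        virtual void HandleEvent() = 0;
    };

    class Subject
    {
        std::vector<Observer *> observers;
    public:
        void Attach(Observer *o) { observers.push_back(o); }
        void Notify()
        {
            // Tell everyone who registered an interest, in turn.
            for (std::size_t i = 0; i < observers.size(); i++)
                observers[i]->HandleEvent();
        }
    };

The pattern is not the code - it is the recurring relationship between the subject and its observers,
which is why it can be used a million times over without ever being the same twice.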

The theme that underlies all the issues discussed in this section is Aesthetical Quality. We all know a
mess when we see one, but too often we are in danger of being paralysed, unable to act on the evidence
of our own senses because there is no procedural translation of `It works but it's ugly.' When an
experienced professional feels aesthetical disquiet and cares enough to say so, we should always take
notice. Our standards of beauty change from generation to generation, and for some reason always
follow function. That is why making code beautiful exploits a huge knowledge base that we may not
have consciously integrated, and leads to cost effective solutions. The stuff is less likely to incur vast
maintenance costs downstream if it's beautiful. That's what beauty is. Aesthetical quality is probably the
only criterion against which one can honestly argue that the wrong language has been used. An attempt
to do an impressionist dawn in acrylic would be horrible even if the airbrush work were perfect.

We should be willing to look at the source code we produce not as the end product of a more interesting
process, but as an artifact in its own right. It should look good stuck up on the wall. The up-front costs of
actually looking at our code, and exploiting the mapping of the geometrical patterns of black and white,
the patterns in the syntax, and the patterns in the problem domain aren't that great, given that they
sometimes literally let you spot bugs from six feet away.

With all this literary criticism, what of religious wars? Some of it is done for entertainment of course,
and we don't want to impede the pleasure of ridiculing the peculiarities of our friends' favourite tools and
techniques! But sometimes intelligent programmers get caught in distressing and counter-productive
squabbles that just go around in circles. We forget that we can use structured arguments rigorously
between ourselves. When an unwanted religious war breaks out, ask the following questions:

1. What is the global position that includes both special cases?


2. Is there a variation in intentionality between the positions?
3. What is the overall objective?

For example, you value the facilities of a powerful integrated environment. You use Emacs at work and
have extended it to control your coffee machine. I use many machines, and bizarre though it may be, I
know vi will always be there. We install Emacs on new sites, and teach vi to novices. Your LISP
technique of course, sucks.

This evaluation of options against objectives often produces a genuine convergence of opinion amongst
experienced people. Agreeing on the best idioms to get a job done in a well-understood environment
does not mean that everyone is coerced to conform - they just agree. Contrary to popular opinion there
often is a right answer. Talking to an experienced person about a new environment's idioms can teach
you an awful lot very quickly, while using the idioms from your old environment in a new one can lead
to fighting all the way.

Cognitive Atoms
In any task that requires understanding, we will always find at least one `cognitive atom'. A cognitive
atom is a part of the problem that can only be adequately treated by loading up its elements, features,
signs, whatever into the mind of a single mapper and getting the best result possible. The word
`adequate' is important here - there are a whole bunch of problems that given unlimited resources could
be tackled slapdash, but need thinking about in the real world. For example, any bunch of idiots could
pull off the set changes needed on the stage of a big musical show, given several weeks to do it. Doing
the same thing in the time it takes the leading lady to sing a melancholy song under a single spotlight
takes a logistical genius.

Experienced project planners discover that recognising and managing the cognitive atoms within a
project is a crucial early step in gaining control. First we must recognise the cognitive atoms. There is a
relationship between the system architecture and the cognitive atoms it contains - the architect will have
to use intuition and experience to identify solvable, but as yet unsolved problems. The problems the
architect believes can be solved during development will influence the design, because nobody wants to
design an architecture that is not implementable!

The architect can therefore move the boundaries of the cognitive atoms around somewhat. For example,
in a data mining system, practical combinatorical problems might be concentrated in the database
design, or in the higher level application logic. The correct identification of the cognitive atoms will
control both the architecture and the work packages devolved to the team members. Each atom must be
given to one person or subgroup to kick around, but they may find themselves working on more than one
part of the system to solve their problem. The parts must therefore be well layered, so that shared
modules do not turn into time-wasting battlegrounds. Identifying atoms usually requires balancing time, space, comms, risk,
team skills, portability, development time, and all of this must be done with proposed atoms whose
solvability is not certain. The architect must therefore be able to see the heart of the problem, and
express, at least in his or her own head, the nature of the tradeoff options. It is quite possible
to recognise a set of clearly seen tradeoffs that is very hard to express to another person without the same
mapper ability to see the structure. Serialising a mental model is always difficult, because we do not
think in technical papers that we download like ftp transfers.

When identifying cognitive atoms it is important to avoid being confused by a specific fallacy that one
sees over and over again. It is often possible to keep breaking atoms into smaller ones without much
thought, and thus reach code level without much effort. When such reductions reach implementation
however, all turns to chaos. The real problems haven't gone away, they've just been squeezed down into
ugly subsystem APIs, performance problems, fragility and the like. The boundaries of the cognitive
atoms have been squeezed smaller and smaller, until... Pop! They re-appear around the whole system
itself! The doctrine of simplistic stepwise refinement without regular reality checks and rigorous
attempts to falsify the design has been responsible for a great many tragedies, involving wasting most
of the time budget on attempting to do the actual design work by open-house informal acrimony,
followed by desperate kludging attempts.

The reduction of cognitive atom boundaries can be cyclic, and the skilled architect will pick the right
place for them, whether that is high level, low level or in between. Some initial studies can be a huge,
single cognitive atom, that one just has to hand to a trusted worker and say `Try to sort this mess out
please!'

By definition we don't know how to best approach a cognitive atom. If we did, it wouldn't be an atom.
So it follows that it cannot be planned on a project planning Gantt chart in terms of subgoals. It must be
entered as a single task, and the duration must be guessed at. Experienced mappers get pretty good at
guessing, but they cannot explain why the problem smells like a two day one, a week one, or a six month
one. Therefore there is little point arguing with someone who has given their best guess. The fear of the
subsequent argument is an important factor that often prevents mappers engaging their intuitive skills,
and giving the numbers that are needed for project planning.

The upside of this is that once a cognitive atom fissions, the worker can usually lay out a very detailed
set of task descriptions based on a solid understanding of what has to be done. Therefore many projects
should plan to update their Gantt charts as the cognitive atoms fission. We suggest that the high
proportion of projects that attempt to Gantt everything on day one indicates the pervasiveness of the
production line model. The programmers working under such Gantt charts cannot be benefiting from
intelligent management of cognitive atoms. Instead of opening their minds to the problems to be solved,
they will be arguing about whether or not they are any good at their jobs and being `put under pressure',
as if it is possible to make someone think more clearly by belittling them. This is stressful and counter-
productive.
The Quality Plateau
When one adopts the strategy of forming one's own mental map of a problem domain and attempting to
simplify it, one faces the problem of when to stop working on the map. This applies at every level of
design. The extraordinary thing is that there almost always is a deep solution, that is significantly
simpler than anything else, and manifestly minimal. (There may be more than one way of expressing it,
but then the relationship will be manifest.) Although homilies like `You'll know it when you see it!' are
undoubtedly true, they don't tell one where to look.

The only honest argument we can offer here is the promise that this really happens. Although a single
example proves nothing, all we can do is show you a worked example reduced to a minimal state. But it
does work - ask anyone who's tried.

The example we will present is from Jeffrey Richter's excellent Advanced Windows. This book is
essential reading for anyone intending to program against Microsoft's Win32 Application Programming
Interface (API) (because otherwise you won't have a mental map of the system semantics).

Richter sets out to provide as clear an exposition of how to use Win32 as he can, but even in his
examples (and partially as a result of the conventions he is following), complexity that we can lose
appears. On page 319, there is a function SecondThread(). We'll just look at the function, and leave
aside the remainder of the program and some global definitions:

DWORD WINAPI SecondThread (LPVOID lpwThreadParm) {
  BOOL fDone = FALSE;
  DWORD dw;

  while (!fDone) {
    // Wait forever for the mutex to become signaled.
    dw = WaitForSingleObject(g_hMutex, INFINITE);

    if (dw == WAIT_OBJECT_0) {
      // Mutex became signalled.
      if (g_nIndex >= MAX_TIMES) {
        fDone = TRUE;
      } else {
        g_nIndex++;
        g_dwTimes[g_nIndex - 1] = GetTickCount();
      }

      // Release the mutex.
      ReleaseMutex(g_hMutex);
    } else {
      // The mutex was abandoned.
      break;      // Exit the while loop.
    }
  }
  return(0);
}

First let's just simplify the brace style, lose the extra space between keyword and open bracket, and the
redundant ReleaseMutex comment. We are aware that there is a religious war between the followers
of K&R and Wirth on brace style, but getting the blocking symmetric really does make things easier to
see. The extra line it takes will be won back later - bear with us!

DWORD WINAPI SecondThread(LPVOID lpwThreadParm)
{
    BOOL fDone = FALSE;
    DWORD dw;

    while(!fDone)
    {
        // Wait forever for the mutex to become signaled.
        dw = WaitForSingleObject(g_hMutex, INFINITE);

        if(dw == WAIT_OBJECT_0)
        {
            // Mutex became signalled.
            if(g_nIndex >= MAX_TIMES)
            {
                fDone = TRUE;
            }
            else
            {
                g_nIndex++;
                g_dwTimes[g_nIndex - 1] = GetTickCount();
            }

            ReleaseMutex(g_hMutex);
        }
        else
        {
            // The mutex was abandoned.
            break;      // Exit the while loop.
        }
    }
    return(0);
}

It's easy to lose one local variable: dw is assigned then tested in the next statement. Inverting the sense
of the test helps locality of reference (testing then changing g_nIndex). And while we are about it
there is no point incrementing g_nIndex just to subtract 1 from its current value in the next operation!
We are already using the C post-increment operator, which was provided for just this sort of job.

DWORD WINAPI SecondThread(LPVOID lpwThreadParm)
{
    BOOL fDone = FALSE;

    while(!fDone)
    {
        // Wait forever for the mutex to become signaled.
        if(WaitForSingleObject(g_hMutex, INFINITE) == WAIT_OBJECT_0)
        {
            // Mutex became signalled.
            if(g_nIndex < MAX_TIMES)
            {
                g_dwTimes[g_nIndex++] = GetTickCount();
            }
            else
            {
                fDone = TRUE;
            }
            ReleaseMutex(g_hMutex);
        }
        else
        {
            // The mutex was abandoned.
            break;      // Exit the while loop.
        }
    }
    return(0);
}

The break depends only on the result of WaitForSingleObject, so it is a simple matter to move the test
up into the controlling expression, eliminating both the break and a level of indentation:

DWORD WINAPI SecondThread(LPVOID lpwThreadParm)
{
    BOOL fDone = FALSE;

    while(!fDone &&
          WaitForSingleObject(g_hMutex, INFINITE) == WAIT_OBJECT_0)
    {
        // Mutex became signalled.
        if(g_nIndex < MAX_TIMES)
        {
            g_dwTimes[g_nIndex++] = GetTickCount();
        }
        else
        {
            fDone = TRUE;
        }
        ReleaseMutex(g_hMutex);
    }
    return(0);
}

Now just squeeze... We know that lots of coding standards say that we must always put the curly
brackets in because sometimes silly people made unreadable messes, but look what happens when we
dump the rule, and concentrate on the intent of making the code readable.

DWORD WINAPI SecondThread(LPVOID lpwThreadParm)
{
    BOOL fDone = FALSE;

    while(!fDone &&
          WaitForSingleObject(g_hMutex, INFINITE) == WAIT_OBJECT_0)
    {
        if(g_nIndex < MAX_TIMES)
            g_dwTimes[g_nIndex++] = GetTickCount();
        else
            fDone = TRUE;
        ReleaseMutex(g_hMutex);
    }
    return(0);
}

Now for some real heresy. Gosh, by the time we've finished this total irresponsibility the result will be
totally illegible. (Or of course, common sense can do more good than rules.)

Heresies are, if we know what our variables are, we'll know their type. If we don't know what a variable
is for, knowing its type won't help much. Anyway, the compilers type-check every which way these
days. So drop the Hungarian, and gratuitous fake type extensions that are just #defined to nothing
somewhere. Hiding dereferences in typedefs is another pointless exercise because although it
accomplishes a kind of encapsulation of currency, it is never sufficiently continent that we never have to
worry about it, and then a careful programmer has to keep the real types in mind. Maintaining a concept
of long pointers in variable names in what is a flat 32 bit API is pretty silly too.

DWORD SecondThread(void *ThreadParm)
{
    BOOL done = FALSE;

    while(!done &&
          WaitForSingleObject(Mutex, INFINITE) == WAIT_OBJECT_0)
    {
        if(Index < MAX_TIMES)
            Times[Index++] = GetTickCount();
        else
            done = TRUE;

        ReleaseMutex(Mutex);
    }
    return(0);
}

Now watch. We will hit the Quality Plateau...

DWORD SecondThread(void *ThreadParm)
{
    while(Index < MAX_TIMES &&
          WaitForSingleObject(Mutex, INFINITE) == WAIT_OBJECT_0)
    {
        if(Index < MAX_TIMES)
            Times[Index++] = GetTickCount();
        ReleaseMutex(Mutex);
    }
    return(0);
}

Eleven lines vs 26. One less level of indentation, but the structure completely transparent. Two local
variables eliminated. No else clauses. Absolutely no nested elses. Fewer places for bugs to hide.

Finally, the text has made it clear that different threads execute functions in different contexts. It is not
necessary to define one function called FirstThread(), with exactly the same cut and paste
definition as SecondThread(), and call them,

hThreads[0] = CreateThread(..., FirstThread, ...);
hThreads[1] = CreateThread(..., SecondThread, ...);

When it could just say,

hThreads[0] = CreateThread(..., TheThread, ...);
hThreads[1] = CreateThread(..., TheThread, ...);

About a third of this example is actually clone code! If we did see a bug in one instance, we'd have to
remember to correct the other one too. Why bother, when we can just junk it? It's this kind of thing that
stresses deadlines.

Knowledge, Not KLOCS


Programmers are expensive. The results of their work must be captured and used to the benefit of their
organisation. The trouble is, the traditional packer way to count results is to count what they can be seen
producing. The results of a programming team studying a problem, coming to an understanding, and
testing that understanding down to the ultimate rigour of executable code are not the KLOCS of code
they typed in when they were learning. They are the final understanding that they came to when they had
finished.

The reason why it is important to identify value that way around is that sometimes, the understanding
shows a much easier way of doing things than the design the team started with. A classic mapper/packer
battleground in programming consists of the mappers seeing that with what they know now, a
reimplementation could be done in a fraction of the time, and would not suffer from maintenance issues
that they see looming in the existing code. The packers see the mappers insanely trying to destroy all
their work (as if there weren't backups), and repeat the last few months, which have been terrible
because they obviously didn't know what they were doing anyway (they kept changing things). The
packers set out on one of their crusades to stop the mappers, and the organisation has to abandon its
understanding, which cannot be used in the context of the existing source code.

The intelligent organisation wants the most understanding and the least source code it can achieve. The
organisation stuck in inappropriate physical mass production models doesn't count understanding, and
counts its worth by its wealth of source code, piled higher and deeper.

Good Composition and Exponential Benefits


A definition of a good composition that is often used on Arts foundation courses is that `if any element
were to be missing or changed, the whole would be changed'. Perhaps it is a seascape, with
a lighthouse making a strong vertical up one side, guiding the eye and placing itself in relation to the
waves beneath. The situation of the lighthouse (and the waves) is one we recognise, and this is where the
painting gets its power. If the noble lighthouse was a squat concrete pillbox, the picture would say
something else. If the waves were an oilslick or a crowd of frisbee players, there would be still other
messages in the painting.

The point is, there shouldn't be anything around that does not have a carefully arranged purpose with
respect to the other elements of the composition. The artist needs to keep control of the message, and if
the picture contains random bits, they will trigger unpredictable associations in the viewers' minds, and
obscure the relationships between the important elements that the picture needs to work at all.

Logicians examining axiom sets face exactly the same issue. They have a much more precise term for
what they mean, but this comes simply from the tighter formal structures that they make their
observations and propositions within. They say that an axiom set should be `necessary and sufficient'. A
necessary and sufficient set allows one to see clearly the `nature' of the `universe' being considered. It
allows one to be confident that the consequences one finds are truly consequences of the area of interest,
and not some arbitrary assumption.

In neither of these disciplines would it be necessary to remind people of the importance of keeping
things as small as possible, as an ongoing area of concern. Unfortunately, the practical usefulness of our
art means that people are often keen to see new functionality, which we try to construct as quickly as
possible. When established, functionality becomes part of the background, and all of us, from corporate
to individual entities, start to become ensnarled in our own legacy systems.

Although this may seem like an eternal, unavoidable part of the Programmer's Condition, one does see people
breaking out of this cyclic degeneration, and from this perspective of programming as a creative art, we
can describe how they do it.

The fundamental difficulty in keeping control of legacy structures, be they artifacts of the customer's
transport strategy that have made it into the specification for the fixed costs amortisation logic, or an
ancient CODASYL indexing system that one is being asked to recreate in an object database, is time.
This is sometimes expressed as `cost', but the issue is rarely cost. It is deadlines. Apart from
circumstances where the misguided cry `Wolf!', there is no getting away from deadlines. They are a
commercial reality over which we have no control. That's OK - we just think about them realistically
and manage their problems rather than use them to justify poor products.

The first point of leverage against deadlines is recognising that work proceeds faster in a clean
environment, without odd flags on functions, inconsistent calling conventions, multiple naming
conventions and the like, than with the junk in place. Days after cleanup count more than days before cleanup. So do the
cleanup first, when everyone can see a long project ahead of them, and get the time back later. You will
nearly always have to do a cleanup - the code that most organisations put in their repository is usually
the first that passes all test cases. This does not matter. Do your own cleanup for this phase, regression
test and don't even discuss your own deltas until you can see straight.

The warning that comes with this observation, is to be realistic about how long your cleanup will take.
The nastier the tangle, the bigger the multiplier a cleanup will give, but the greater the risk that you
won't have time to sort it out and do the work. A useful question often is, `How complex is the black box
functionality of this thing?' If the answer is `Not very!', then you know that as you incrementally comb
the complexity out, it will collapse to something simple, even if you can't see the route at all.

The second point of leverage comes from the exponential collapse of complexity in software. If you
have a cleaner algorithm, the minimal implementation will be simpler. The less code you have, the
easier it is to see the structure in the code, and the chance of off-concept bugs is reduced. At the same
time, less code means fewer opportunities for syntax errors, mistyping of variables and so on. Fewer
bugs means fewer deltas, fewer deltas mean fewer tests. It doesn't take long in any team of more than
half a dozen people for most of their activity to descend into a mayhem of mutual over-patching, with
repository access being the bandwidth bottleneck. Letting loose stuff through the process into later
stages can plant a time-bomb that will blossom when it is too late to do anything about it. On the other
hand, a frenzy of throwing away in the midst of such a situation can regain calm in a matter of days.

The third point of leverage is the `skunkworks', so called because the original Skunk Works was located by
Lockheed, at a remove from its corporate centre, `because it stunk.' This fearsome technique can
be used by excessively keen teams in secret on winter evenings, or can be mandated by enlightened
managements. As with everything on this course, we will offer an insight into why skunkworks work.

In industrial age activities like housebuilding, we have physical objects (bricks) which are awkward to
manage. Instead of piling up bricks against reference piles to see how many we will need to build a
house, we count them. The abstraction from physical to informational gives us enormous leverage in
managing bricks. Eventually we have so many numbers telling us about supply, transport and demand
that we have to organise our numbers into patterns to manage them. We use spreadsheets, and the
abstraction from informational to conceptual again gives us enormous leverage.

In information activities such as programming, we don't start with the physical and get immediate
leverage by moving to the informational. We start with informational requirements, listings etc., and we
have to manage these with informational tools. We have to do this for good reasons, such as
informational contracts with suppliers, and informational agreements on meetings with colleagues
contained in our process. We also sometimes do this for bad reasons, such as a too literal translation of
informational techniques for managing housebricks into the informational arena, such as counting
productivity by KLOCS.

The trouble is, in normal working we have no leverage. The information content of a meeting's minutes
can be bigger than the requirement they discuss! As an activity performed by humans, the meeting has
negative leverage! We only win either because we can sell our new bit many times, or because in
collaboration with other bits it gives greatly added value to the process.

This leaves the opportunity to use understanding to gain leverage over information. The skunkworks is
sometimes seen as an abandonment of the process in the interests of creativity. Nothing could be further
from the truth. One needs a high proportion of experienced people to pull the trick off, because they
must lay down highly informed dynamic personal processes to get anything done at all. What one trades
off is the understanding contained in an exhaustive process, for the understanding contained in
experienced people. From this comes the precondition for the skunkworks. By abandoning the detailed
process, one accepts that risk is inevitable, and loses the personal protection given by simple, well-
defined objectives. Everybody must accept that a skunkworks may fail, that what it delivers might not be
what was expected, and that there may be issues reinserting the results into traditional management
streams. But when they work, they work magnificently!

All successful startups are skunkworks. So are unsuccessful startups. A skunkworks effort can turn a
major maintainability bloat risk into a small upfront time risk. In these situations, it can be an effective
risk management tool.

The Programmer at Work

Approaches, Methodologies, Languages


When we looked at a one bit program being written, we saw the need to find a mapping between the
problem domain and the system semantics that fulfills the desire. Obviously, the less rich the possible
set of mappings is, the easier it will be to find a useful one, assuming it exists. Any given problem
domain will have its own inherent complexity, and every instance of a problem within it will have its
own unique complexities. When we have a problem however, it is what it is. We can rarely change its
definition to control its complexity (although sometimes it is both possible and desirable, and thus A
Good Thing). So in search of leverage, the most effective way to get the job done, all we can play with
are the system semantics.

At one end of this spectrum is the COTS product. Load it, run it, job done. At the other is the processor's
instruction set, which allows us to organise any behaviour the hardware is physically capable of.
Between these extremes are a variety of layered semantics that simplify the mapping by restricting the
semantics.

In these terms, a language is any kitbag of semantics. C is a language, but so is Excel and so are GUI
builders. The kitbag sits there but doesn't give you any clue how to use its contents. Languages are
specialised by problem domain to offer greater chances of achieving simpler mappings to any given
problem within the chosen domain. To decide if one wishes to make a choice of one semantics
(language) over another, the criterion is usually to ask which requires the simpler mapping (the simpler
program) to get the job done. Beyond the most trivial cases, this requires familiarity with both kitbags in
use.

Although we can get a clear understanding of what a language is, a methodology is harder to pin down
in these terms. We suggest that the reason for this is that the idea of a `methodology', as it is commonly
encountered, includes the default assumption that it is a procedural approach to solving programming
problems, and we know there is no such thing. What we can describe instead for now, is something
slightly different - an approach.

An approach consists of advice, given by one experienced mapper to another, about how best to tackle a
kind of problem. It is an invitation to see the world in a certain way, even if it is phrased as procedural
guidance. The injunction `Draw a Data Flow Diagram showing weekly inputs' in a book called How To
Build A Payroll System, is actually saying, `Constrain your world-view to a weekly batch input system,
and list the batches the world throws at you.'
This is sound advice for the builder of a payroll system, provided the work patterns it is to reward can fit
into weekly batches. Like a language, the approach gets simpler the more it is specialised to a given
domain. Also like a language, the approach is hard to select appropriately without an understanding of
the `currencies' of the available approaches, and the problem. With more clever developers writing
COTS products every year, which automate an approach to form a highly domain specific language that
any idiot can work, the likely future leverage for good programmers is going to be in familiarity with
deep, profound approaches, and deriving new approaches in the face of new problems. There will likely
always be hordes of people using the same approach to ritualise the production of the same billing
system for another client. They may well be `always retraining to new methodologies'. But they are and
will remain, clerical workers, and the gap in performance and rewards between clerical workers and
programmers is going to widen. This is what it means to be a player in the information age.

There are some languages that are specialised to particular approaches. Smalltalk requires the user to see
the world as objects. Lisp requires an unhealthy relationship with the lambda calculus that leads to
proposing the dog of food instead of feeding the dog.

The point must be emphasised that languages are real, and approaches are real, but methodologies are a figment of our
collective imaginations and do not exist. Confusion on this point and an unfortunate
choice of approach can lead to situations where critical parts of the problem are not addressed because
the approach happens not to speak of them, while those who attempt to deal with the issues are
hampered by their colleagues who feel that they are acting `unprofessionally' by not `applying the
methodology'. This is an example of the mapper/packer communication barrier.

Interesting methodologies consist of part approach, and part language. Jackson Structured Design (JSD)
constrains its domain of applicability to problems with clearly identifiable features, and is then able to
offer quite detailed guidance on how to address instances of those kinds of problems. Keep your eyes
open and JSD will serve you well in its domain. Outside its domain however, it can cause problems
because if the problem doesn't have Jackson's features, no end of kludging will make a good system out
of a bad understanding. This is not Jackson's fault, as he never said that JSD was a ritualised panacea for
solving all computer problems.

At the output end of JSD we see something quite unusual, an artifact of its time. Jackson describes how
to transliterate from his diagrams into code, by hand! He is clear that this is what he is doing, and
explains that the automatism of this task allows us to break the rules of structured programming and use
gotos. Today he would not do this - he'd just hand the diagrams over to a code generator as many others
do. The point is, the diagrams of the JSD notation are best considered a programming language! Jackson
has created a language that is specialised for an approach to a problem domain.

The same is true of the Booch, Rumbaugh and Unified Modelling Language approaches and languages.
In fact, every interesting methodology. In Booch and Rumbaugh's earlier publications, they did not hand
the diagrams over to code generators, but showed that the translation of most of the diagrams was
largely mechanical. Don't worry too much for now about the methods one fills in by hand - the whole
point about these is they are not complicated!

The creation of a language and approach, more or less specialised for a domain, is a great achievement.
In doing so, the authors must have dwelt long on how best to navigate about problems, chunk them,
explore them, see them in different ways, and designed their approach and language accordingly. But
many seem to get confused by the mapper/packer language barrier, and feel the need to omit the
emphasis on creative thinking needed to find the mapping between the problem and their language.
Instead of presenting their approach as a structure, and suggesting some heuristics for seeing a problem
in terms of it, they feel the need to use a procedural language, and describe actions to be taken, in the
imperative voice. If someone hasn't been encouraged to think creatively, ie, to construct a mental map of
their problem through daydreaming and then explore it, what choice do they have but to follow this
procedural misdirection? Their results will inevitably depend on luck. Jackson is good here. He
specifically limits his domain and tells the reader what features to look for. The reader starts by
searching the problem and looking for clues. Booch includes an interesting section on finding the
objects, which if only it had gone deep and wide enough would have rendered this course unnecessary,
because it addresses exactly the right mapper issues. Finally, Stroustrup's book describing the C++ object
approach and language is a celebration of style, insight, structure, depth and creativity. It is a hard book
describing a complex programming language, but it is written by a great mapper at play, who seems to
have no internal confusion about these issues.

How to Write Documents


In many software engineers' view, much of their lives consist of writing documents. From the
perspective of this course, we would prefer to say that their lives consist of performing work to gain
understanding, which will be delivered to their colleagues according to the protocol specified in their
process. Hence we are aware that the work is always understanding, and the process tells us what
understanding we need to convey to them. It therefore indicates the suitable language for each
document. These considerations can inform a description of the actual job to be done with each of the
documents; User Requirement, Software Requirement, Architectural Design, Detailed Design and Test
Specification, that an engineer produces.

There are two more general points that should be made. Firstly, the job does not consist of producing
reams of unintelligible gobbledegook that no-one will ever read, that look like `engineering documents'.
The first person to quote a reference, full of slashes and decimal points, in the main text when it should
have been in an appendix if anywhere, wasn't just being rude to his or her readers, they were setting a
trend that has devalued the whole of our art. Use simple, normal language (including specialist
terminology where necessary, but not made up for the sake of it), to tell the reader what they need to
know.

The second point regards format. In any stage of the software engineering process, people use
understanding to find and propose patterns. If they knew what they were going to find, they wouldn't
have the job, because somebody would be setting up a COTS product instead. We don't necessarily
know what the worker will need to present, so how can we tell them how to present it? Standard formats
in processes should not be taken as exclusive. All decent ISO 9001 processes have provisions to tailor
the required sections of a document where appropriate. Make proper use of these, and if the structure of
the document emerges during the writing, you can still put an insertion in the Project Management Plan
to describe your chosen format. This is what ISO 9001 is all about.

User Requirements Document

There has been much interest recently in `Business Process Re-Engineering', (BPR). This is the practice
of examining one's business processes to determine if they can be improved, and it often has to be done
simply because the passage of time has altered the nature of the organisation's businesses. It is
sometimes overlooked that software engineering has always included a significant component of BPR,
because otherwise a customer will find that a computer system that automates an outmoded business
process will not include the workarounds that staff will have implemented to handle change, and the
system will fail. The first duty of the software engineer is therefore to help the customer understand the
nature of their own requirement. In the example of the one bit program, it is the crystallisation of the
desire from general discomfort to a specific need for more light. The software engineer is aided in this
task by the discipline of having to write a computer program. It isn't possible to hide ambiguities in
flowery code, as one can in a text report. A useful URD therefore captures as clear an understanding of the
user's needs as can be had at the beginning of a project, as understood by user and engineer, in the user's
language. The URD will almost certainly need clarification later as the programming discipline
identifies ambiguities, whether the amendments are tracked as part of the document or not.

An issue which causes a great deal of confusion here is a joint purpose that the URD has developed.
From an engineering perspective, the URD must be a living document, but from the commercial and
legal perspective it takes the place of a reference document for the duration of the project. The two
objectives are quite distinct. When they are confused, we get the spectacle of engineers, unfamiliar with
legal knowledge (such as it is), trying to write clauses out of `A Day at the Races', while crucial issues of
the business process go unexamined.

Sometimes the only way out of this is to have two documents. One specifies the contractual minimum,
and may well be written solely by the customer, as some methodologies suggest. The other is a living,
internal document that tells us what would `delight the customer'. It is what we are trying to aim for on
his behalf. How can we delight the customer if the only clue we have to how to do this is something
that will serve our commercial colleagues well in a court of law? The extent to which the customer
should have visibility of the `real URD' depends on commercial circumstances.

Be very careful of `integrated feature tracking environments' that purport to capture your URD and track
its clauses through design, into code, and through to test case. Such environments often forget that
requirements can be met by not doing something, that several requirements may be implemented across
several code segments, without any direct mapping between requirement and segment, and that it is hard
to test for perfectly reasonable general requirements with specific test cases. This is not to say such tools
have no use - for tasks like configuration and datafill they work perfectly. One could even track the
features of a specified group of classes for GUI manipulation. But for general `user level' black-box
requirements they either distort what can be expressed in the URD, or impose a style of development
that encourages long hand coding of individual features instead of performing abstractions wherever
possible.

Software Requirements Document

Where the URD describes the needed system in the user's language, the SRD describes it in the
engineer's. It is in this document that system sizing calculations can first appear. Particularly with
modern object methodologies, the need for an SRD has been reduced, because the architecture will
consist of pragmatic classes that have a clear relationship with the language of the URD. In this
situation, the SRD and ADD can be combined.

Sculptors are told to think of the completed work as residing within the block of stone or wood they are
carving. It helps. In the same way, we can imagine ourselves looking over our user's shoulder, one day
in the future, when our design has been delivered. As we watch them using the features of the system,
we can ask ourselves, `How must that have been implemented?' A software engineer's description of the
user's needs is then easy to capture.

Architectural Design Document

It is in doing the work captured in the ADD that the hard work of a design has to be done. It is also in
the ADD that the greatest opportunity to fudge it exists. While we deliberately omit design detail from
the ADD, sometimes so as to retain portability, sometimes just to avoid clouding the big picture, we
must still be convinced that our design is in fact implementable. The engineer should know of at least
one acceptable way to implement each feature before calling for it, and should have thought about the
conceptual integrity of the collection of all the code required to implement the features.

The proposition that architectural design should not consider detailed design is, we suggest, misguided. If
we cannot consider implementation, we can't be very good engineers, because any fool can design the
unbuildable. It is by considering implementation that we discover the limitations of our designs and
learn the difference between good and bad. We are able to see alternatives, compare them and select the
best. If we cannot consider implementational reality, one design is as good as another, and this critical
stage of cognition becomes a typing exercise to see how fast one can `write the document', and never
mind what is written!

The ADD is a didactic document. It teaches the reader how to see the problem and solution in the way
that the author sees them.

Detailed Design Document


The DDD is a message in a bottle. It tells the reader how the original author planned the
implementation, so that the code is intelligible. The detail of exposition must take over where the ADD
leaves off, and take the reader to the point where the code can stand for itself. Sometimes, this
explanation can be assisted by pseudo-code, but this need not be the case. The DDD should be regarded
as amendable. During implementation, design details like the organisation of code into modules will
emerge. If these details are not captured for our colleagues in the DDD, where will they be captured?
This simple omission causes far too much unnecessary trouble, as engineers pick up parts of systems
that they can see are well documented, if only they knew where to start! Your final DDD should tell your
successor whatever they need to know in order to pick up the system and change it.

Test Plan

Test is the most context sensitive of the document types, but the following observations are useful
guides within the imperatives of the situation. The test strategy aims to stress the system. It will not get
much leverage out of doing this entirely at random, so the issue is to find one or more models of the
system that can give us an indication of likely typical and stress conditions. A useful structure is
therefore to describe the model, derive the stress conditions, and then list them.
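
For instance, if the model says `the input is a percentage, 0 to 100', the stress conditions fall
straight out of the boundaries. A sketch only - valid_percent() is an invented example function:

    #include <limits.h>
    #include <stdio.h>

    /* valid_percent() is an invented function under test. */
    static int valid_percent(int p)
    {
        return p >= 0 && p <= 100;
    }

    int main(void)
    {
        /* The model: `input is a percentage, 0..100'. The listed
           cases are derived directly from the model's boundaries. */
        struct { int input; int expected; } cases[] = {
            { -1,      0 },  /* just below the lower bound */
            { 0,       1 },  /* the lower bound itself     */
            { 100,     1 },  /* the upper bound itself     */
            { 101,     0 },  /* just above the upper bound */
            { INT_MAX, 0 }   /* an extreme                 */
        };
        int i, failures = 0;

        for (i = 0; i < 5; i++)
            if (valid_percent(cases[i].input) != cases[i].expected)
                failures++;
        printf("%d failure(s)\n", failures);
        return failures != 0;
    }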

The Knight's Fork


Over and over again in this course we see the echoes of a deep pattern that we exposed in the writing of a
one bit program. We have the problem domain, the system semantics, and a mapping between the two
created by the programmer in the light of the desire. This pattern is the central act of computer
programming. It may not be understanding in itself, but the ability to do this is the only evidence that
one can have that one has actually understood a problem in the terms of a given semantics. If the
semantics are rigorous and testable like those of a digital computer, one might claim a `deep' or `true'
understanding, but this is suspect, because someone can always pop over the horizon and say, `See it this
way!'

This pattern is so important we want to focus attention on it. Although we have avoided fatuous jargon
without any real meaning behind it, we want to introduce a term, `The Knight's Fork', to tag this pattern.
We've borrowed the term from chess. In it, a Knight sits on the board and can make a number of L-
shaped moves. The other pieces are all constrained to move on diagonals or orthogonals, but the
Knight's L shapes allow it to threaten two pieces, each themselves constrained to their own worlds, and
thus accomplish something useful in any case.

This kind of pattern occurs over and over again, but everywhere we can track it down to the writing of
the one bit program. A computer system can be in many states and evolve according to its own internal
logic. The reality the computer is following can also be in many states, and itself evolve. Because of the
designer's insight, a critical aspect of the problem can be abstracted and captured in the computer, the
same pattern in both cases, such that in any case, computer and reality will conform. The test cases,
informed by a model of the problem and of the system, will cover the permissible (and possibly
impermissible) state space of the inputs according to the insight of the writer, such that in any case, the
system's state evolution will be verified. The designer, looking at a need to perform data manipulation,
will exploit features of the data that genuinely indicate structure in the data, and map this to features of
the language, as in the canonical:

while ((c = getchar()) != EOF)
    putchar(f(c));
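
For concreteness, here is the idiom embedded in a complete program. This is only a sketch: f()
stands in for whatever per-character transform the desire calls for, with toupper() as an arbitrary
placeholder.

    #include <ctype.h>
    #include <stdio.h>

    /* f() is a placeholder for the per-character mapping the
       problem needs; toupper() is chosen purely for illustration. */
    static int f(int c)
    {
        return toupper(c);
    }

    int main(void)
    {
        int c;    /* int, not char, so that EOF can be represented */

        while ((c = getchar()) != EOF)
            putchar(f(c));
        return 0;
    }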

All architectural design involves teasing apart a problem by looking at the needs from as many
directions as possible, until it reveals the structure within itself that the system designer can use to
defeat it.

The Knight's Fork always uses an inherent deep structure of the problem domain. Checking that a
proposed deep structure is real and not just a coincidence is very important. If a designer exploits a
coincidence, the result will be `clever' rather than `elegant', and it will be fragile, liable to explode into
special case provisions all over the resulting system code, with all design integrity lost. Weinberg gives
the example of a programmer writing an assembler. He discovered that he could do table lookups based
on the opcode number and so designed his program. But the hardware people did not hold the opcode
numbering scheme sacrosanct, and when they made a valid change, the program design broke.
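
A minimal sketch of the trap, with invented opcode numbers. The fragile version indexes an array
directly on the opcode value, which works only while the numbering happens to be dense and stable;
the robust version states the association explicitly and survives renumbering:

    #include <stdio.h>

    /* Invented opcode numbers, for illustration only. */
    #define OP_LOAD  0
    #define OP_STORE 1
    #define OP_ADD   2

    /* Fragile: relies on the coincidence that opcodes run 0, 1, 2...
       A valid renumbering by the hardware people breaks it silently. */
    static const char *mnemonic_fragile[] = { "LOAD", "STORE", "ADD" };

    /* Robust: the association is stated explicitly, so renumbering
       changes only the table contents, never the design. */
    static const struct { int opcode; const char *name; } optab[] = {
        { OP_LOAD,  "LOAD"  },
        { OP_STORE, "STORE" },
        { OP_ADD,   "ADD"   }
    };

    static const char *mnemonic_robust(int opcode)
    {
        int i;
        for (i = 0; i < (int)(sizeof optab / sizeof optab[0]); i++)
            if (optab[i].opcode == opcode)
                return optab[i].name;
        return "???";
    }

    int main(void)
    {
        printf("%s %s\n", mnemonic_fragile[OP_ADD], mnemonic_robust(OP_ADD));
        return 0;
    }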

The Personal Layered Process


A Zen koan tells of a wise monk who visited a great teacher. He entered the teacher's room and sat
before him. `When you came in' asked the teacher, `which side of the door did you leave your stick?'
The monk did not know. `In that case, you have lost your Zen'.

After you have seen the structure of your program and are ready to implement it, there is still a great
deal to keep control of. Even if you can see the critical lines of code there are still a great many others to
type in. The discipline required is far greater than any formal process could control, and must be applied
intelligently in each new situation.

Your process will break a task down so far, and then you must take over. Like a track-laying vehicle,
you must structure your work as it develops. After a while you get to the point where you can do this in
your head, very quickly indeed, because you can get leverage out of two techniques.

You can only expand the part of your plan that you are working on. At one point, an activity to add a
change to some source might be held in your mind as:

1. Identify all files that include the functions:
   ModelOpen(), ModelRead(), ModelWrite(), ModelClose().

2. Book all files out of version control.

3. Hack.
   3.1. Change modread.c
        3.1.1. Hack ModelOpen()
        3.1.2. Hack ModelRead()
        3.1.3. Hack ModelWrite()
        3.1.4. Hack ModelClose()
   3.2. Change appfile1.c
   3.3. Change appfile2.c

4. Book files back in.

5. Update conman system.

The fact that the process definition can't spell out every little step and so doesn't insult your intelligence
in a futile attempt to do so, doesn't absolve you from the duty to do the job for yourself. And it's quite
proper to leave how this is done up to you - it allows you to do the necessary organisation in your head,
or any other way that pleases you. Some people like to write down little lists of files to modify on scraps
of paper and cross them off as they do them, but leave the rest of the process in their heads. They can
remember where they are in the big picture, but if they're interrupted in the middle of a big list, they
might get confused.

The second important technique is that you can change your plans. The core concept of TQM is that we
must understand what we are setting out to achieve, if we are even going to know when we have got
there. This means that we need to be able to say honestly what we think we are doing at any time, but
does not stop us changing our minds! For example, we might add to the example above,

3.1.5. Sort out all the headers :-(

at any time as we are changing the function definitions and our bored little minds are roving backwards
and forwards and realise that the prototypes will be wrong too.

We do not need to remember which bin we threw our morning coffee beaker in to have total
understanding of where we are in our organisable work. Instead we can take control of the spirit of TQM
and organise ourselves with full consciousness of what we are doing. As we do this, all the usual
benefits of thinking about what we are doing come about. We can see opportunities to automate the
boring typing with scripts and macros, and within the PLP we can always ask the question `How would I
undo this action?', which is what separates the people who don't accidentally delete all their source from
those who do, and then have to wait two hours for the administrator to retrieve last night's tape backup.

As a final comment on this topic, we often need to use a PLP to control the complexity of even the
simplest job in a professional engineering environment. The ritualisation of PLP can become hypnotic.
To keep proportion, always ask yourself if there is a 30 second hack that would accomplish the task, and
if you can just do it, don't waste time on elaborate self-created rituals. Always keep a backup!

To See the World in a Line of Code


We've described the central problem of software design as finding the optimal mapping between the
problem and system semantics. We've also discussed the activity usually referred to as `writing
documents' as doing the necessary work and capturing the results in a document. So what is involved in
doing the work that does not show up in the document? It will have a lot to do with finding the optimal
mapping.

The fact is, no-one ever picks up a job, sits down and rolls out the best solution as if they were doing
some sort of exam question. The designer of an effective solution will always look at the problem from
several different directions, and will usually see several variations of possible solutions. The solutions
must be challenged to ensure that they meet all the requirements, and that they are going to be practical
to implement. Only the winner will be recorded in the document. Sadly, the usual convention is to omit
from the document the details of why the documented solution was chosen over the alternatives.

This point is particularly important when our dominant approach, usually the one that provides the basic
structure of our process, involves top-down design. The idea of top-down is that it enables us to see the
wood for the trees. In the early stages, we can see the overall intent of the system. We can then
concentrate on getting the details within each subsystem right, knowing that its general direction is
correct. This is distinct from the approach of doing top down design to remain independent of the design
details of the lower levels, although the two motivations are often found together.

In both cases, the design will actually have to be implemented, so the designer will have to convince
themselves that the design is actually implementable. If the objective is seeing the wood for the trees,
there will probably be an idea around of what the target language, operating system, or, in management
problems, the team, actually is. A criterion for a successful design is then usually optimising the use of
system resources. If the objective is independence, the criterion is to produce a design that is
implementable in all of the possible targets. Ideally this is done by using a model explicitly common to
all the targets.

This means that the designer must have considered implementation during design, even though usual
practice is to lose the implementation considerations that caused the designer to prefer one design over
another.

While thinking about design, it is quite common for designers to see in their minds a high level
description of the outer parts of their system, perhaps the I/O, a more detailed description of the inner
parts, perhaps a group of database table definitions, and right in the middle, at the point where the key
processing of the system is done, they often know just what the critical line of code, which may be quite
complex, actually says. From this line they can convince themselves that the details of the outer parts of
the system will be OK without having to think them all through. It's not always at the core of a design
that the ticklish bits exist - the designer might notice a critical part of a low level error recovery
protocol, and feel the need to know that it can be implemented. There is no better way to feel secure
with what your design calls for than to be able to state at least one practical way to do it.

We are not saying that it is imperative to see lines of code popping into your head during design. We are
saying that it can be a very useful way to clarify your thinking about an area, and if your thoughts do
turn to code, follow them. Don't cut off these considerations because your deliverable is a higher level
document. That way, you get a design document that is effective in use, and people will call you a
demon wizard of the design process. Remember holding your toothbrush with chopsticks? People that
are into the habit will rather believe you have a really good chopstick technique than that you just
grasped the toothbrush with your fist.

Another area where little code fragments are really useful during high-level design is in getting a real
sense of the system semantics that you are going to be using. We always have to learn new APIs, to our
OSs, GUIs, libraries and so on. It takes years to become really fluent in all the ways we can properly use
an API. So look in the books that discuss an API, and write little demo apps that demonstrate the
features you think you'll be needing. This really helps concentrate your mind on what you need to keep
track of from the bottom up, as your design progresses from the top down, and ensures that you don't
attempt to use semantics that actually aren't there. It can be very embarrassing to produce a design that
requires a different operating system design, but if you've spent a few minutes writing a little program
that exercises a feature, you'll use it as it is, and never mind what the documentation claims. You win the
minutes back during implementation, because you can copy bits of your doodles into your source, and
hack them.
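
Here is the sort of doodle we mean, with the question chosen purely for illustration. Before a design
leans on the return value of snprintf() when the buffer is too small, two minutes with a probe settles
what your library actually does:

    /* doodle.c - a throwaway probe. When the buffer is too small,
       does snprintf() return the length it would have written (as
       the C99 standard says), or -1, or the truncated length (as
       some older libraries did)? Run it and see - then keep it to
       paste from later. */
    #include <stdio.h>

    int main(void)
    {
        char buf[8];
        int n = snprintf(buf, sizeof buf, "%s", "hello, world");

        printf("buf = \"%s\", return = %d\n", buf, n);
        return 0;
    }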

Spend a while looking at the design of the APIs you use. Look at their currencies - the values passed in
and out of the API. How do the bits of the interface fit together? Are they well designed? What are the
idioms that their designer was intending you to use? APIs are usually done by experienced designers,
and they are like little messages from very bright people about how they see the world. The style of Ken
Thompson's UNIX API has survived very well for nearly 30 years. He himself said of it that the only
change he would make is `I'd spell creat() with an e!'. There is something very close to the way
computers work in the structure of the UNIX API.

This section is all about the importance of being able to see one level below where one is working. This
is true even though hiding the details of implementation is a permanent goal of our discipline. The better
we get at this, the more we win, but we just aren't good enough at it yet to forget about the lower levels.
Understanding where a compiler allocates heap and stack space enables you to handle scribble bugs,
where we break the model of the language. Having a sense of how much physical memory (and swap)
we have enables us to write programs that will work in real world situations. Even true virtual machines,
such as the Java virtual machine, give services so low level that we can trust the implementer to do them
sensibly, so we can predict the efficiency of our operations.
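
As a minimal illustration of a scribble bug (the data is invented, and the behaviour is undefined -
this is something to recognise, never a technique), overflowing buf tramples whatever the compiler
laid out next to it:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct {
            char buf[8];
            int  balance;
        } account = { "", 1000 };

        /* Twelve characters plus the terminating NUL into an eight
           byte field: the excess scribbles on the neighbouring
           member, breaking the model of the language. */
        strcpy(account.buf, "hello, world");

        printf("balance is now %d\n", account.balance);  /* probably not 1000 */
        return 0;
    }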

Conceptual Integrity
In The Mythical Man-Month, Fred Brooks emphasises the importance of conceptual integrity in design.
Our deep view of programming suggests some practical ways to achieve conceptual integrity.

First, we know the importance of mental maps. If every member of the team shares a mutually-agreed
mental map of the system being constructed, then it is possible for everyone's contribution to be in the
spirit of the overall design. If they don't, then it isn't, because a style guide detailed enough to allow
someone to get everything right without knowing what they are doing would be much harder for the
architect to write than the system itself would be.

Secondly, we have a picture of the programmer optimising a series of design choices to produce a
minimal solution and control complexity. So we need to look at the kinds of constructs the programmers
cook with, and ensure that they are shared. Such a project style guide indicates a coherent collection of
variable naming conventions, error-handling strategies, example idioms for using the subsystems' APIs,
even the comment style. One might say that by controlling the shape of the bricks, the architect can
constrain the shape of the house, while leaving flexibility in the hands of the designer. The structure of
the code then ensures that the code between the canonical examples is predictable and elegant. So code
examples in style guides control structure, and structure controls code. Here we see another echo of the
Knight's Fork - if we use the right structure, we can bring necessary and sufficient syntax into play and
write minimal text. Conversely, the more twisted the stuff gets, the more of it you get, to be twisted up.

A final benefit of conceptual integrity that is very valuable to the professional programmer is very
practical. Imagine you are on a roll. You've seen the way to divide up your functionality, you've got a
really elegant way of catching all the odd ways the OS can signal failure, you're half way through coding
up all the cases, and you need a new variable name. Your head locks up, overloaded by a triviality! The
exponential benefits of getting focussed and staying that way are as great as the exponential benefits of
minimising the code size, so every silly distraction you can get rid of is worth it. On sites where
everyone stops work every ten minutes to argue about administration, the benefits of real focus can
never emerge anyway, but where external conditions have been sorted out, having a style guide to let
you pretty much derive this kind of stuff on the fly can dramatically improve effective productivity.

Mood Control
Packers have rules of debate that involve taking turns to score points off each other while demonstrating
complete indifference to the outcome by demeanour and language. Mappers have rules of debate too, but
they are different.

Mappers are allowed to jump up and down and shout a lot. This does not mean that they are planning to
murder each other, it means they are involved. They will likely go skipping off to lunch together, only to
resume yelling on their return.

They will each have their own way of talking about the features of the problem, and will need to agree a
common project jargon. Just doing this acknowledges the shared mental model, and focuses the group
on creating and challenging a piece of group property, rather than throwing rocks at private sandcastles.
Hate the sin and not the sinner!

If a colleague is saying something that you don't understand, or seems paradoxical or nonsensical, ask
yourself if the person is trying to tell you about a part of the map that you are seeing in a very different
way. Check what they mean by words that concern you. Start with the assumption that they have
something interesting in their heads, and try to figure out what it is. This style of discussion has been
thought about a lot by the Zetetics fans, who broke away from the Committee for the Scientific
Investigation of Claims of the Paranormal (CSICOP) to investigate what rules of evidence might be able
to test for genuine paranormal phenomena.

Just being a group of mappers with a shared mental model isn't enough to start modifying it together.
Like everything else, we must become explicitly aware of what we are trying to do. At different times in
the project, the team will need to do different things. Sometimes you will want to gather difficulties, and
complicate the model. At other times organising and simplifying it will be strategic. Sometimes you will
want to describe what is needed, at other times you will have to decide how to explain it to the customer.

If different members of the team have different objectives in a discussion, little will be accomplished.
One member cannot construct a reasonable description of the technical issues if they are being
interrupted by people who think that the goal is maximising customer acceptability.

This is not to say that all meetings without explicitly declared purposes must explode into name calling -
that only happens when the randomly selected goals are mutually exclusive. But even discussions with
multiple objectives can be clarified by first openly stating what the objectives are. Nor does the group
focus have to be maintained with packer ritual obsessionalism, because the idea is to clarify discussion,
not prevent it. As ever, we must serve the objective, not micro-police the procedure. If a team member
sees something off-topic, that trashes the whole plan, they must speak up. Alternatively, if they see
issues that need to be addressed but are not so critical, they can scribble them on a bit of scrap paper and
raise them at an issue parade.

Mood control also extends to the overall phase of the project. By identifying particular moods and their
changes, the team leader can provide structure to the team's activities, and avoid situations where
everyone comes into work each day and wanders around sort of coding, without any clear understanding
of what a good day would look like.

Beyond the project, the mood of the overall organisation can also have an effect on the project. A major
threat can come from the way the organisation sees communication within itself. Some organisations
have highly ritualised boundaries between groups, leading to a considerable amount of time being spent
in self-administration. While there are plenty of forces that can grow the complexity, and hence reduce
the effectiveness, of baroque administrative procedures, there are few that can simplify them. This is
because only the people that connect with reality and actually do the deliverable work suffer the
consequences, while others get progressively more convoluted hoops to jump through while telling
themselves they are doing work.

The mapper/packer communication barrier often leads people to say that the effects of an intrusive
administrative overhead are limited. There are three kinds of effect that ineffective admin can produce,
at increasing levels of abstraction and hence, as mappers know, power.

It takes actual work hours. Some organisations require people to fill in travel expenses forms so complex
that people actually reserve a half-day a month just to fill in the forms. That's over 2% of the salary and
elapsed time budgets sacrificed to unchallengeable, ritual, administrative proceduralism! The data on the
forms could be collected very simply, and the remainder of the clerical processing, if really necessary,
could be done by clerical staff who cost less and are more abundant.

It breaks flow. It often takes several hours to actually get a problem into one's mind, and if one is
constantly being interrupted by someone from Human Resources confused about their own filing
system, one can work for days without ever getting to the few seconds it would take to sort things out.
Pretty soon this develops into a kind of water torture, where the wretched programmer's mind veers
away from thinking about the problem because every time he or she invests the emotional energy
necessary to load up the hard, unstructured question to be considered, it gets blown away. This is a very
unpleasant experience. People used to attach electrodes to alcoholics and give them a shock when they
touched a whisky bottle. It's the same thing.

It does your head in. Being a mapper involves seeking clarity and considering multiple issues. If
intrusive and incompetent admin has turned the workplace into a surrealistic nightmare, keeping a focus
on the high standards of clarity necessary to do programming becomes much harder, and if one can
never predict how long it will take for Purchasing to acquire a software package, no planning based on it
can be done.

Teams can do a lot to isolate themselves from admin chaos within their organisation, by allowing people
that know the game to shield others. In the same way that a good manager shields the development team
from external pressures and harassment so that they can concentrate, a good administrator shields the
team from lousy admin.

Remember that the packers in the organisation will not understand the effects described above, because
they do not acknowledge the existence of the approach and state of mind with which we do
programming. This is the open plan office problem!

Situation Rehearsals
An effective way to maintain the shared mental map of the problem, the design and the group's activities
is to hold regular situation rehearsals. These are short meetings where one person takes ten minutes to
explain their current understanding of the group's situation. As with everything else, this is not a ritual
that must be performed listlessly as part of the misery of work, it has a purpose. This means that it is
worth doing a rehearsal even if not all of the team are available, or calling impromptu ones just because
some interesting people are around.

The Sloane Ranger's Handbook included a Sloane Ranger's map of the world. About 50% of the total
surface area was covered by Sloane Square itself, Scotland was connected to London by the thin
causeway of the M1, and the major continents were squeezed into the sidelines. The point of the joke
was that we all have our own distorted map of the world, but the Sloane Ranger's was particularly
distorted with respect to geography. To a Sloane Ranger, it was not a joke - it was a fair representation
of their world, and they argued that theirs was no more unrealistic than anyone else's. (Some of them
bought the book so they could check it for accuracy. It passed muster.)

In the same way as we all have our own map of the world, we all have our own view of the problem and
the group's activities. Hearing the differences in emphasis between different people's views of the
problem brings more benefits to the team than just allowing the members to check that their vision at
least maps to the speaker's and identifying qualitative or factual differences (which the rehearsal also
does). If looking at the problem from different directions brings understanding, hearing how the comms
team describe the application can tell the application programmers things they never realised about their
own task.

To understand why situation rehearsals are worth the disruption involved in getting some of the team
together for a few minutes each day, it is useful to think about two different physical types of image
storage systems. Traditional photographic plates store a different part of the total image in each part of
the area of the plate. The mapping between image area and plate area is direct. Chip off a corner, and
that corner of the image is lost. Holographic plates, however, store a transform of the whole image in each
part of the surface of the plate. Chip off a corner and the image is still available, but at a lower resolution
because the corner contained information about the distribution of a particular frequency component of
the image.

The team need not concentrate knowledge about topics in individuals to the exclusion of all other
knowledge. If it tries to do this, the results will be tragic because the team won't be able to communicate
internally. The distribution of knowledge throughout the team must be more like a hologram than a
photograph. I need to know a lot about my job, and a little about yours. The little I know must be true
and fair, no matter how bizarrely I choose to express it from your point of view. Then you and I can talk
to each other.

In situation rehearsals it is important to observe a strict time limit, or you will inevitably get bogged
down. That means the speaker must have a few minutes to summarise What Really Matters, with the
consequences being followed up off-line. These might be comparisons between views where team
members actually disagree with the speaker, recognitions of opportunities for simplification where I
learn that I'm doing something in my layer that you undo in yours, or offers of specialist knowledge.

Also remember that the model that everyone has a view of is the group model. If in the light of someone
else's view of the model you can see a flaw in the shared model, attacking the model is not attacking the
person whose novel approach has revealed the flaw you couldn't see on your own.

If the group can become comfortable with this Zetetic approach then an additional benefit is available
from situation rehearsals. You can pick a speaker at random. This means that everyone will be motivated
to run the whole project through their mind regularly, so they can be really elegant and insightful if they
are picked. The effects of this can be astonishing.

When was the last time you had a job where you were required to think about your work, as you are
required to make progress reports, fill in timesheets and sign off code review forms before passing them
to the quality rep for initialing and filing in the project history cabinet unless it's one of Susan's projects
in which case you file it under correspondence and record its existence in the annex to the project
management plan to be found on Eric's hard disk?

Chapter 4 - Customs and Practices

The Codeface Leads


TQM is all about awareness. Awareness of what we are doing when we perform repetitive procedures or
do similar kinds of jobs allows us to capture those rare moments of insight, where we see a way to do
things more effectively, and communicate them to our colleagues by modifying our process. This means
that the process definition might contain boring stuff necessary for establishing communication between
groups, but the stylistic or approach issues discussed should all be little treasures, worthy of transmission
in such a high visibility document. The idea of this aspect of the process is not to specify every last little
action, but to retain knowledge. This gives us a test, albeit still a subjective, matter of opinion type test,
for whether a clause is worthy of the document. For example, microspecification of header ordering is
not appropriate for inclusion in the coding standard because apart from anything else, it will almost
certainly be violated on every real platform if the code is to compile. However, the technique of using
conditional compilation macros around the whole module contents to prevent multiple inclusion is a
little treasure that belongs somewhere that everyone can see it and follow the protocol.
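
For the record, here is that little treasure. The whole of the header's contents sits inside the
conditional, so a second #include of the same file compiles to nothing; the Model prototypes are
purely illustrative:

    /* model.h */
    #ifndef MODEL_H
    #define MODEL_H

    /* Everything the header declares sits between the guard lines,
       so multiple inclusion is harmless. */
    int ModelOpen(const char *name);
    int ModelRead(void *buf, int len);
    int ModelWrite(const void *buf, int len);
    int ModelClose(void);

    #endif /* MODEL_H */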

In car factories, the shop floor leads continuous improvement, because the people that do inefficient jobs
recognise that they are inefficient and correct them. The genuine parallel with software engineering
should be that the codeface leads improvements to the process. One of the most costly consequences of
the mapper/packer communication barrier is that packers, in panic because of the `software crisis', are
obliged to assert that `the process' is an inexplicable, mysterious source of all good bits, and as such, is
correct. In some organisations, the process becomes the mechanism of a coercive attempt to enforce a
rigid packer roboticism on the whole workforce, as this is seen to be the correct mindset to magic
success out of nothing.

To get an idea of the scale of this problem, consider the evolution of programming languages and
models, development environments, CASE tools and so on over the last thirty years. There is really no
comparison between the two ends of the interval, in anything except coding standards. Some aspects of
the discussion begun by Dijkstra on structured programming were captured as ritualistic dogma, and
then the dogma has been copied from standard to standard ever since. Indeed, a major feature of most
coding standards eagerly proclaimed by their promulgators is that they have copied the thing from
someone else. This is how one sells rubbish to the frightened and ignorant. Saying that one has nicked
something off someone else can add false provenance to something with no inherent value just as easily
as it can mean learning from something with real value. Coding standards are handed down from
management to programmers, rather than discovered at the codeface and passed upwards. The kind of
lively and informed (if primitive) discussion that led to the original coding standards, which were
wonderful in their time, has not been repeated since. As soon as the first standards were defined, and
improvements were noted, the packer business world seized them, set them in stone, and announced that
they constituted `proper procedure'. Stylistic debate has been sidelined to the religious wars, where it
does not face the responsibility of running the industry and so gets silly. Meanwhile the existence of
these `proper procedures' implicitly denies the existence of new stylistic issues coming along with new
languages and models, that need informed debates as intense as those surrounding structured
programming if we are to learn how to use these languages well.

A programmer using something like the ParcWorks Smalltalk development environment gets as much
benefit out of a sanctimonious mouthing about not using gotos and putting data no-one ever looks at in
standard comment lines at the top, as a modern air traffic controller gets out of the bits in Deuteronomy
about the penalties for having sex with camels.

Who Stole My Vole?


This section is about complexity. We'll start with a thought experiment, involving an imaginary Martian
ecology.

On Mars (as everyone knows) there are rocks. There are also two kinds of lifeforms. There are Martians,
who eat voles, and luckily for the Martians, there are voles. Voles hide behind rocks and eat them.

Not much happens on Mars. Martians mainly spend their time sitting in the desert and watching for
voles darting between rocks. Because there are rocks as far as the eye can see on Mars, in all directions,
a Martian needs to be able to see well in all directions at once. That is why Martians evolved their
characteristic four large eyes, each mounted on stalks and pointing in different directions.

Not much happens on Mars, so Martian evolution has progressed entirely in the direction of vole
spotting. Each huge eye has an enormous visual cortex behind it, that can spot a vole miles away in all
manner of light conditions. Most of a Martian's brain consists of visual cortex, and these four sub-brains
are richly cross-connected to allow for weird light conditions to be compensated for. Martians do a great
deal of processing up front, in the semi-autonomous sub-brains, so they don't really have `attention' like
humans - they focus on the play of input between their `attentions' instead.

When they spot a vole, the Martians have to sneak up on it. This means keeping the vole's rock between
themselves and the vole. It requires Intelligence. Soon after the Martians evolved Intelligence, they
invented Great Literature. This is scratched on big rocks using little rocks. It obeys the rules of Martian
grammar, and uses the North voice for emotion, the South voice for action, the East voice for speech,
and the West voice for circumstance. Not much happens on Mars, so the closest Martian equivalent to
our own Crime and Punishment is called Who Stole My Vole?:

Emotion      Action      Speech           Circumstance
Grumpy       Sneak       Horrid Martian   Cold, Gloomy
Determined   Bash        Die! Die! Die!   Old Martian's Cave
Ashamed      Steal Vole                   Dead Martian

Breathtaking, isn't it? That sudden void in the East voice as the South voice swoops - a view into the
very mouth of an undeniable damnation, real albeit of and by the self! I'm told it helps to have the right
brain structure...

What is the point of this Weird Tale? Well, imagine what a Martian programmer might make of C.A.R.
Hoare's Communicating Sequential Processes (CSP) theory. Its brain (easy to avoid gender pronoun here
- Martians have seventeen sexes, and a one night stand can take ten years to arrange) is already
hardwired to enable it to apprehend complex relationships between independent activities, so the
processes Hoare renders linear by a series of symbolic transforms, and thus intelligible to a human, are
already obvious to a Martian's inspection. On the other tentacle, by squeezing all the work into a huge
tangle that only one sub-brain can see, the human readable version is made unintelligible to the Martian.

A joint human-Martian spaceship management system design effort, with lots of communicating
sequential processes controlling warp drives, blasters, ansibles and so on would face problems bleaker
than the mapper/packer communication barrier, even though theorem provers using CSP could perform
automated translations of many of the ideas.

The point is, complexity is in the eye of the beholder. We don't need an alien physiology to notice the
difference between how individuals rate complexity - it's the whole point of mental maps. When we
discover a structure that lets us understand what is happening, we can apprehend the antics of many
more entities in a single glance. Think of any complex situation that you understand, perhaps the deck of
a yacht or the stage of an amateur dramatics company. When you first saw it, it would have looked like a
chaos of ropes, pulleys, vast sheets of canvas, boxes, walkways, and odd metal fitments of totally
undiscernible purpose. When you had acquired a pattern by finding out what all that stuff was for, the
deck, stage or whatever seemed emptier, tidier than when you first saw it. But it hasn't changed, you
have.

No sane skipper or director would attempt to operate in such a way that the greenest novice could
understand what is going on at first glance. Just sailing around the harbour or raising the curtain takes
some training.

In the software industry the leverage issue, mapper/packer communication barrier and language
specialisation have all acted to mask this point. The leverage issue says that the benefit we get from
moving numbers around instead of bricks means that we can afford the investment to ensure that the
simple numbers describing bricks are presented in a form accessible to all. The industrial and
commercial context of many programming operations has the notion that the competent just have their
information organised, that complexity is handled by not having any, and that in that way, the progress
of the bricks is assured. This is all true, and is the correct attitude to information about bricks. But when
we substitute information for bricks, we get a problem. We can't losslessly abstract from a big ugly brick
to an informational `1'. We can abstract of course, but every time we do, we lose important information which
may bite us later. We can't just say that the competent just have their data organised, because the job is
now organising the huge pile of data that has just been dumped on us. We no longer need the fork lift
driver's skills, but we need new ones. And we can't handle the representation of complexity just by
requiring that the representation be simple. The mapper/packer communication barrier makes the
situation with this inappropriate analogy between information and bricks harder to discuss, because just
about every step of the brick logic is there in the information argument, but instead of 1% control and
99% payload, the management problem is more like 90% control and 10% payload. This relationship is
what makes the difference in all the brick management heuristics, and it's relationships that packers
score badly on. They think they recognise the situation, trot out the knowledge packet about being neat
and documenting everything, and off they go. The idea that they may be creating a stage rocket that will
never get off the ground because the engines are too inefficient, and that exponentially greater
management of the management will be needed to compensate for the lack of subtlety, is hard for someone
to grasp if they aren't trained to draw mental models and see patterns in them. Finally, the existence of specialist
languages increases the appearance of the possible. If only everything could be as easy as SQL. One
must remember:

1. It's taken 30 years.
2. The kinds of things it does are very restricted.
3. It's very processor intensive.

SQL isn't easy. It's exactly what we've described - control through familiarity with idioms - everybody
understands horrors like outer joins.

The Knight's Fork appears again here. Is it better to make a really sophisticated code, send it, and then a
short message, or to send a simple code and a longer message? What is the expected calibre and
experience of the maintainer? How much can we extend their understanding in the documentation, so we
can use more `complex' idioms? A long switch() statement is usually a pretty grim way to do control
flow, unless you're writing a GUI event loop, where we all expect it and go looking for it.
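
A sketch of the one place the long switch() earns its keep; the event codes and handlers are invented
for illustration:

    #include <stdio.h>

    /* Invented event codes, for illustration only. */
    typedef enum { EV_KEY, EV_MOUSE, EV_REDRAW, EV_QUIT } EventType;

    /* The familiar shape of an event loop's dispatch point: one
       case per event. Readers expect it and go looking for it. */
    static void dispatch(EventType ev)
    {
        switch (ev) {
        case EV_KEY:    printf("key pressed\n");   break;
        case EV_MOUSE:  printf("mouse moved\n");   break;
        case EV_REDRAW: printf("repainting\n");    break;
        case EV_QUIT:   printf("shutting down\n"); break;
        }
    }

    int main(void)
    {
        EventType script[] = { EV_KEY, EV_REDRAW, EV_QUIT };
        int i;

        for (i = 0; i < 3; i++)
            dispatch(script[i]);
        return 0;
    }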

There is no absolute measure of `complexity'. This must be borne in mind when talking of the complexity
of algorithms and code style, and of whatever it is that complexity analysis tools produce after the pictorial
representations of the system, which can be very valuable. Complexity in these pictures (after the system
has been reduced to necessary sufficiency) is not A Bad Thing - it is the essence of your problem laid
bare. We should not be trying to drive out the inherent complexity of problems. It is a futile strategy that
leads us away from the development of abstract understanding that will let us organise it.

Reviews and Previews


A fundamental element of the packer view of work is control by threat. Perform action so-and-so, or
else. To make the threat effective, the rule must be policed, so we must look, after the fact, to ensure that
the rule has been followed. Then the idle and untrustworthy workers will know they will get caught out
and will perform the action as specified, because of their fear of retribution. In fact, an important aspect
of denying the nature of mapper work is supporting the falsehood that mapper jobs can be
microspecified, and the only reason why that must be done is so that rules can be specified. Only with
rules written on pieces of paper can the programmers be caught breaking them, and it's the catching out
that is central to the whole model!

Of course, there is an additional purpose, in that a review can also spot small oversights that must be
corrected before the work is allowed to go to the customer. This is like inspecting a Rolls Royce and
polishing off a tiny smear on the bonnet with a soft cloth - it can't turn a soap-box cart into a Rolls
Royce.

In the software industry, we have processes that put the conventional emphasis on review, which is
either trivial or a police action, but do nothing to pull the team's focus and group experience into finding
the best way to do the job. This leads to reviews where individuals or subgroups produce their best
efforts, which frankly are often not very good, especially if the programmers concerned are
inexperienced. The time for doing the work has elapsed, so if a baroque collection of application layer
processes with complex instantiation rules has been used to kludge up something the OS does for free,
it is too late to redesign. The review group members have to collude to not notice the ropey logic, and
concentrate their efforts onto ritualised objections to trivial matters of style. None of this does anything
for quality.

The solution of course, is a mapper one. We have to accept that most programmers would love to do a
really good job, given half a chance, and invest in helping them rather than threatening them. Instead of
spending our half-day or whatever in a review when it is too late, we should spend it in a preview, where
we can evaluate the options and agree the general direction, before the work is done. Then the situation
where the reviewers pass the work with regret can be avoided, because with the workpiece already Rolls
Royce shaped, the final review just needs to check for trivial smears on the bonnet.

Code Inspections and Step Checks


Code inspections form an important part of many organisations' processes. The primary reason they are
done is to fulfill the common sense, mapper, TQM dictum: `When you have done the job, look at what
you have done, and check that bit is OK.' But there is something funny about code inspections that often
deflects their purpose. This is because code inspections are like Christmas. They are an old festival,
older than the structure they currently have a place in. And like Christmas, there's still a lot of holly and
mistletoe hanging around from the Old Religion.

Once upon a time, programs had to be written down on coding sheets, which were given to typists to key
onto punch cards. These were rekeyed for check, and then the cards were fed into the computer, which
would be used for an incredibly expensive compiler run. It wasn't just the labour and capitalisation costs
- the cycle time for a single attempt could be a week. So the wise developed the habit of sitting down
together and examining each others' coding sheets in minute detail before they were sent off for
punching.

Today we have spiffy editors, and processors in our toasters, so the original motives no longer apply, but
we need to continue to inspect our code so that logical errors can be identified before they cause a fault
in service. This is where the confusion sets in. Few organisations are as confused as the IT manager who
recently said that the staff should perform code inspections on their listings before compiling them. For
pity's sake, we have our design phase to get the big picture right, and the compiler to check syntax
errors. Manually poring through the code looking for syntax errors is not going to impose any useful
discipline, and is a very high cost and unreliable method of finding syntax errors, even though it is the
way it has always been done! The balance has changed, and we need to consult our mental maps, as with
everything we do in this information intensive game.

Although most organisations let people compile and test the code before inspecting it, the holly is still
up over the fireplace. Why do half a dozen people suddenly get up, abandon their whizz-bang class
browsing, abstract modelled, GUI development environments, and huddle away for an afternoon with a
paper listing, in organisation after organisation, day after day? At least get a workstation in there, so that
people can ask searching questions like `will that value always be positive?' and get hard answers by
searching the source.

Code inspections are very costly, and we should aim to get the best out of them. A very good way to do
this, where a symbolic graphical debugger is available, is to break the job into two parts. First the
individual author, who is intimately familiar with the structure and intentionality of the code, uses the
debugger to single step every single line and test in the program. This may sound labour intensive and
low benefit, but the effects are wonderful. The program itself naturally keeps track of exactly what the
designer is checking at any time, and the designer's eye is in turn focussed on each step of the logic. One
doesn't have to traverse the buggy side of a test to see it, because the mind runs more quickly than the
finger on the mouse button, and picks things up so long as it is pointed in the right direction. It can be
useful to print a listing, chunk it with horizontal lines between functions and code blocks, and draw
verticals down the side as the sections are verified. This uses the designer's knowledge, machine assist,
and takes one person, while picking up a lot of problems. The full group code inspection can then focus
on things that a fresh viewpoint can bring, like identifying implicit assumptions that may not be valid, as
the designer explains the logic.

Code inspections structured in this way, in conjunction with individual step checks, have a purpose, and
are less likely to degenerate into the holier-than-thou religious wars over comment style on which too
many expensive programmers currently spend their time.

Coding Standards and Style Guides


Coding standards and style guides have come up several times in this course, and what has been said
may well be at variance with what is commonly believed. It will be useful to explore these differences,
and understand exactly what we are talking about.

We have argued that the software industry often finds itself trying to sit between two very different
views of the world. The packers have been trained to structure reasoning and discourse around
identifying the situation in terms of learned responses, and then applying the action that is indicated.
One's conduct in the world is basically about doing the right thing. At work, one is told what to do. The
mappers seek to obtain a general understanding of the situation, reusing known patterns where they are
appropriate, and making up new ones where they are not. Mappers expect to act guided by their map,
given an objective. At work, one understands the problem and finds an optimal solution. We have also
seen that the mapping approach is what making new software is all about, and packing can't do it.

So coding standards are part mapper motivated, part packer motivated, and have their purposes
confused. The mapper/packer communication barrier applies, so mappers tear their hair out as packers
smear the complexity around until it is invisible on any one page of listing, citing Knowledge Packet
47684 - Clarity Is Good, while removing all clarity from the work.

If we accept that mapping and TQM are at the heart of good programming and that mapping and TQM
are all about understanding and control, we can look at the goals, and see what we can do to make better
coding standards and style guides.

The first point is about clarity. There is an idea around that using the syntactic richness of one's language
is A Bad Thing, because it is `complex'. Not when the compound syntax is an idiom. Not even when the
idiom has been introduced and discussed in documentation. And not necessarily in any circumstances.
When Newton wrote Principia, he wrote it all out in words, even though he could have used algebraic
symbols, being the co-discoverer of the fluctions or calculus. Today we have decided that it is better to
do maths with algebraic notation, even in exposition. Now it takes several pages of text waffle to say
what can be said succinctly in algebraic notation in one page, and although the reading time per page
goes down, the total reading time goes up, because it's much harder to keep track of the text waffle. So
should our programs be more like prose in their density of complexity per page or expression, or should
they be more like maths? We suggest that if a person is new to a language, it is better that they can see
the overall structure succinctly and struggle with each page for a few minutes than that they can read
each idiot line perfectly and not have the faintest idea what its intentionality is! How often have we seen
keen people, sitting in front of reams of code, not having the faintest idea where to start, and feeling it
must be their fault?

The second point is about conventions. Before adopting a convention, make sure it is going to gain you
more than it will cost. We had an example earlier where if one didn't know what a variable was for, there
didn't seem much point in it announcing its type, but it's not just adding to the convention overhead that
`good' programmers are supposed to store as knowledge packets that is the problem. The fact is, too
many conventions are just plain ugly. If one is aiming for one's minimal code to be beautiful, it is harder
with great blots of ugly gzw_upSaDaisies littering the code. Never do anything to inhibit your
team's aspirations to make a great product. There was a site where they thought that conventions were A
Good Thing. Several people were instructed to Make Rules, which they duly did. One of them
announced that variable names were to be limited to 31 characters. Very sensible - many compilers
could only differentiate that many characters. Another announced that the sub-system the variable was
declared in should be indicated by a three character alpha code at the start. Every variable, just in case
anyone ever used a global. (Another announced that global variables were banned.) Another produced a
baroque typing scheme that paralleled the language's own compound types, and announced that this
must be included in each name. Quite why, we didn't know. Another published a list of abbreviations of
the names of code modules known to the configuration manager and said that this must be included in
each variable name. By now they were getting `ducks in a row' crazy. The fun really started when we
realised that the total length of the obligatory stuff plus the typing of `pointer to function returning
pointer to record' exceeded 31 characters. The Law Givers simply informed us that the construct was too
complex and we weren't allowed to do that, although it was integral to the architecture, and messing
around declaring intermediate variables and type casting while assigning to them wasn't exactly going to
help clarity or efficiency. So finally the bubble burst and we adopted some pragmatic standards that
looked good and told us how to derive names quickly by establishing a project vocabulary and some
acceptable abbreviations. From the mapper/packer viewpoint, we can see that the situation developed
because the Law Givers were Announcing Worthy Rules, which is A Good Thing, but the cost, for
pity's sake, the cost...

The third point is about the nature of building bricks. A style guide that contains a multi-levelled hodge
podge of naming conventions, return capture, idiom and an example module for designers to emulate
will serve you better in the task of keeping good order than a collection of imperative instructions that
restrict the programmer's ability to seek elegance off their own back. If you can't trust the team to know
when to explicitly delimit a block and when not to, how can you trust them to write your MIS?

The fourth point is about those imperatives. There are some facilities that should never have been
invented in the first place, such as UNIX scanf() and gets(). The imperatives to never use them in
deliverable code are reasonable. But there are some things that you can't safely get any other, better
way. And there is always the balance of clarity issue. We'll look at two concrete examples where we
argue that there is a good case for using C goto - something you may have been told doesn't exist.

In the first, there is no other way. Imagine a recursive walk of a binary tree:

void Walk(NODE *Node)
{
    // Do whatever we came here to do...

    // Shall we recurse left?
    if(Node->Left) Walk(Node->Left);

    // Shall we recurse right?
    if(Node->Right) Walk(Node->Right);
}

So as we walk the tree, we start by making calls that lead us left, left, left, left... until the bottom. We
then wander about in the tree, visiting every combination of left, left, right, right, left... and so on, until
we finish our walk by doing a whole bunch of returns from our final visit that was right, right, right,
right...

Every step left or right entails opening a new stack frame, copying the argument into the stack frame,
performing the call and returning. On some jobs with a lot of navigation but not a lot of per node
processing, this overhead can mount up. But look at this powerful idiom, known as tail recursion
elimination:

void Walk(NODE *Node)
{
Label:
    // Do whatever we came here to do...

    // Shall we recurse left?
    if(Node->Left) Walk(Node->Left);

    // Shall we recurse right?
    if(Node->Right)
    {
        // Tail recursion elimination used for efficiency
        Node = Node->Right;
        goto Label;
    }
}

We use the stack to keep track of where we are to the left, but after we have explored left, the right walk
doesn't need to keep position. So we eliminate a full 50% of the call and return overhead. This can make
the difference between staying in C and having to add an assembly language module to the source. For
an awesome example of this kind of thing, see Duff's Device in Stroustrup's The C++ Programming
Language.
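
For readers who have not met it, this is roughly how Duff's Device is usually presented - a sketch, not a
quotation from Stroustrup. It unrolls a copy loop eight times, and uses the switch to jump into the
middle of the first pass to dispose of the remainder. The classic version feeds a memory-mapped output
register, which is why to is never incremented:

void send(short *to, short *from, int count)
{
    // count > 0 assumed. n is the number of passes round the
    // unrolled loop; the switch disposes of count % 8 on entry.
    int n = (count + 7) / 8;

    switch(count % 8)
    {
    case 0: do { *to = *from++;
    case 7:      *to = *from++;
    case 6:      *to = *from++;
    case 5:      *to = *from++;
    case 4:      *to = *from++;
    case 3:      *to = *from++;
    case 2:      *to = *from++;
    case 1:      *to = *from++;
            } while(--n > 0);
    }
}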

The second example is a pure style issue - no need to invoke assembly language at all. Remember that
when Dijkstra considered goto harmful, he was referring to the habitual use of goto for flow control
in unstructured 1960s code. The idea was that by using goto less, we would improve clarity. The idea
was not to sacrifice clarity to avoid goto at all costs. Imagine a program that needs to open a port,
initialise it, initialise a modem, set up a connection, logon and perform a download. If anything goes wrong,
anywhere in the logic, we have to go right back to the beginning. A self-styled structuralist might say:

BOOL Done = FALSE;

while(!Done)
{
if(OpenPort())
{
if(InitPort())
{
if(InitModem())
{
if(SetupConnection())
{
if(Logon())
{
if(Fetch())
{
Done = TRUE; // Ouch! Hit the right hand side!
}
}
}
}
}
}
}

Which we think is just silly. There is a cleaner alternative that makes use of the way the && operator
stops as soon as the statement it is in is rendered FALSE - `misuse' of the language normally deprecated
in most coding standards:

while(!(OpenPort() &&
        InitPort() &&
        InitModem() &&
        SetupConnection() &&
        Logon() &&
        Fetch()));

This is clear and neat, provided we can encapsulate each step into a function. The trouble with this kind
of job though, is that there are a bunch of horrid things that have to be gotten right, like initialisation
strings and so forth, and to work with the code one needs it laid out like a script. In this case, we can do
this:

Start: if(!OpenPort()) goto Start;
       if(!InitPort()) goto Start;
       if(!InitModem()) goto Start;
       if(!SetupConnection()) goto Start;
       if(!Logon()) goto Start;
       if(!Fetch()) goto Start;

Which is exactly what specialist scripting languages designed for this kind of job allow us to do!

Don't forget, if you want your SO to understand your love letter, you won't allow pedantries of spelling
and grammar to distort the letter, and if you want your colleague to understand your program, don't twist
the structure out of all recognition in the name of `clarity'.

Meaningful Metrics

We can turn the lens of practical understanding of purpose on the collection and interpretation of
metrics, which many sites spend a lot of money on, so it behoves us to get them right.

There are three kinds of motives for going out and collecting numbers. All are valuable, but it is always
important to understand what our motive is. The three motives are:

Descriptive Science

This involves going out and collecting data about an area to see if one can find any interesting features
in the data. One needn't know what one is expecting to find; that is the point. Uncritically collected raw
source data are the roots of everything. Modern entomology owes a great debt to the Victorian and
Edwardian ladies who spent their time producing perfectly detailed watercolours of every butterfly and
stick insect they could find. It has become something of a tradition that really interesting comets are
simultaneously discovered by a professional and an amateur astronomer. Our discipline has suffered
from a crude transfer of `metrics' from mass production to an intellectual, labour intensive activity. If we
want to draw analogies with factories, we need to ask about what optimises the complicated human
elements of our production facility. We need to spend more time on our descriptive roots. For example,
do test case fails go up with code that was written in summer, when there are so many other things one
could be doing instead of taking the time to check one's logic? One could brick up the windows, or
factor seasonality into the site's work plan to maximise the chance of quality. What are useful indicators
of quality? Internal and external fault reports per function point? KLOCS per function point?

Experimental Science

This involves making a change to an otherwise controlled environment, and seeing if the result is what
we expected. It enables us to validate and improve our mental map of the workplace. It's fairly easy to
do in a mass production environment, very hard in software engineering, where cycle times can be
months, team members change, and no two jobs are exactly alike. One can either hire a good statistician
with a real understanding of the art of programming, or look for really big wins that drown out the noise.
We know there are big wins out there, because the hackers exist. This course is designed to point
professionals towards under-exploited areas where big wins lurk.

Cybernetic Technology

This is where we really know what we are doing. Before we take a measurement, we know how we will
interpret it, and what variable we will adjust given the value recorded. If we really had software
engineering down pat, then this is what we would do. Unfortunately we don't. The field is so complex
that we probably never will, but we can develop some very good heuristics. We must take care that a
packer culture's need to pretend that we already have total control does not, by mystifying our actions
and the interpretation of such statistics as we can get, prevent us from achieving better partial control.

The pattern that emerges is, don't put the cart before the horse. If we run around collecting statistics
without a clear understanding of what we are doing, an important tool is distorted into a bean counting
exercise. Without wise interpretation, people become more concerned with creating rewardable artifacts
in the statistics than getting the job done well and letting the statistics reflect this. This is not a vice of
machine tools. Without a clear cybernetic model, `bad' statistics become a stick to beat people with: they
are bad people, and should consider their sins. This will surely produce improvement. People start
having meetings where they try to count the bugs in different ways, to `improve the situation', but the
customer's program still doesn't work.

With metrics, like everything else, we can do nothing by abdicating our responsibility to a procedure.

Attitude to Tools

Whether a designer is using the socially normal packer strategy, or has got good at mapping, will have a
strong influence on that person's attitude to tools. A packer sees a tool as a machine that does a job, like
the photocopier in the corner of the office. In fact, this is the way most of us use compilers - chuck the
source in one end, and an executable pops out the other. This is usually OK, although an afternoon spent
reading the compiler and linker manuals will pay for itself many times over.

Packers like big, expensive tools with complicated GUIs and incredibly complicated internal state. They
promise to do everything, and take weeks to set up. There is lots of complicated terminology involved.
All the razzamatazz has an unfortunate consequence. Amidst the noise, it's easy to lose track of the fact
that the premise of the glossy marketing brochures - that programming is a data processing operation
which this product will automate for you, so you can wear a tie, smile a lot and be `professional' - is a
load of baloney.

Mappers don't think of tools as photocopiers, they think of them as mind prosthetics. They are the
mental equivalent of Ripley in the film Aliens, getting into the cargo handling exoskeleton to beat up the
chief alien. Mappers retain responsibility for everything, and use their tools to extend their reach and
awareness. Mappers don't like putting all their stuff into one tool where another can't get at it. They like
their tools' repositories and I/O to be plaintext and parseable, so they can stick tools together.

Mappers think it is reasonable to write little programs on the fly to manipulate their source. They are
aware of what they do well and what computers do well, using their own judgement when they examine
each call to, say, a function whose definition they are changing, and the computer to ensure that they
have examined every single instance.
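
In that spirit, here is the sort of throwaway filter a mapper might knock up - entirely illustrative,
written in a few minutes and discarded. It prints every line of a source file that mentions a given
identifier, so the human judges each call site while the machine guarantees that none are missed:

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char Line[1024];
    long N = 0;

    if(argc != 2)
    {
        fprintf(stderr, "usage: %s identifier < source.c\n", argv[0]);
        return 1;
    }

    // Print each line mentioning the identifier, with its line number
    while(fgets(Line, sizeof Line, stdin))
    {
        N++;
        if(strstr(Line, argv[1]))
            printf("%ld: %s", N, Line);
    }

    return 0;
}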

There are some excellent mapper tools available - browsers, reverse engineering tools, even some
`integrated development environments' that are accessible from the outside. It is always worth bearing in
mind however what can be achieved on most systems with just some scripts and the system editor. There
was a team that became very excited when they were shown a tool that gave them all sorts of valuable
browsing and cross-indexing facilities. The only differences between the tool and a bunch of scripts
they already had, which had taken a morning to type in, were:

1. The tool was not modifiable.
2. The tool cost UKP 20,000 plus UKP 5,000 per seat.
3. The tool took several weeks to set up.
4. The tool had a GUI.

And when someone enthuses about what a new product can tell you, always pause to check if the right
response should be, `So what?'

Software Structures are Problem Structures


Can you imagine what Margot Fonteyn would have looked like with her arms and legs in plaster casts,
and a pair of chain-gang style leg irons on her ankles? The result would not be very graceful.

One of the saddest things that able young designers do is start out with a step by step approach that
enables them to start building software, and while they are doing it they get familiar with the idioms that
enable them to map problem to language and approach. The whole point is that the languages and
approaches they use are supposed to map to their problem domains. It is hardly surprising when they
start to see and describe their problems in terms of software structures. The dividing line between a good
way to describe a problem and a good way to solve it is blurred, and with object approaches and
languages the intent is to maximise this blurring.

But just at the point where they could become skilled and insightful, these designers start to feel guilty,
because they feel they should `see the problem, not the solution'. So they start a peculiar performance
where they pretend they can't see the very thing they are talking about. If they have a specific goal to
fulfill, like maintaining implementational independence, this kind of manoeuvre can be accomplished
adroitly, because they know exactly what they don't know and it becomes an exercise in rigour, but if
they are just pretending to be dumber than they are, where are they supposed to stop?

If you are good, you are an asset to your organisation, because you can succinctly state that solution Y is
good for problem X and why, next question please!

It is not a crime to be skilled and experienced.

Root Cause Analysis


Root cause analysis is formalised as a part of many organisations' processes. In it, the organisation
recognises situations where things have got screwed up, and looks at what happened in order to
understand it and ensure that it does not happen again. Mappers do not have a problem with this - it's
business as usual. But for packers it's a pretty unusual thing to do at work.

To understand what is important about root cause analysis, we can look at how a mapper does the same
thing, by a different name, in requirements elicitation.

Imagine a transport firm that, because of its marshalling policy, has traditionally categorised all jobs as
rural or urban. The rural and urban split might easily work its way into every aspect of the business
process. When the software engineer turns up to identify the needs for the new system, it is important
that the engineer looks at the patterns of data flow, and does not get confused by the customer's
continual talk of rural and urban, which has no bearing on most of the requirements, and will need big
pairs of complexity piles to import into the design.

The lesson is to see what is there, not what you are told to see. This means that you must step outside the
system seen by the customer.

When performing root cause analysis in the workplace, it is important to see what actually happened,
rather than expressing the events in the language of the process. In a car factory, adding a rubber stop to
a conveyor belt might stop damage to workpieces, but we rarely have all the elements of the situation in
front of us like the parts of an assembly line. If the events are always expressed in terms of the process,
the most likely conclusion is that a scapegoat has failed to follow the process, which serves the twin
packer goals of allocating blame and celebrating the perfection of the process.

In fact, causes of problems can be classified in terms of the involvement of the process as:

Unconnected

The whole team went down with chicken pox for a month. We can't do anything about this by changing
the process, but maybe we can gather some interesting observations to pass to senior management about
the tradeoffs in risk management.

Operational

The order should have been entered into the system but wasn't. This is often taken as the only kind of
problem, but actually is the most rare. Even when it occurs, the packer tendency to then assert that the
sales clerk is morally wanting and regard the problem as solved is not good enough, because we have
not in fact established a root cause. Perhaps the sales clerk has been poorly trained. Perhaps the process
is ambiguous. The question is why the order didn't get entered. This kind of problem can sometimes be
solved by messing with the process definition, but it usually comes down to issues of morale, or
identification with the job. That is, sociological effects that are powerful in the workplace but outside the
domain of the process.

Ergonomic

The process is OK in principle, but the implementation is not viable. The sales clerk has to do cash sales
too, and the continual interruptions interfere with the accuracy of the data input. Leave the process
definition as it is but apply a little common sense to the implementation.

Procedural

The process is badly defined. Customers are invoiced for parts according to the delivery notes we get
from our supplier, so customers are charged for parts that haven't even been delivered to us yet when our
suppliers make an error. Change the process.

Complexity Matching and Incremental Boildown


All interesting systems, from a game of backgammon to the calculus, have operations that make the state
more complex, and others that simplify it. Most of the formal process and study of software engineering
is focussed on growing the corpus. Opportunities to shrink it, with all the benefits that brings, we must
take on our own initiative. We can be tasked to write a function, but we can't be tasked to realise that it
can be generalised and wipe out three others.

Great lumps of obfuscation always come in pairs, one to convolute things and one to unconvolute them.
This extends to requirements elicitation, where it is common for users to request the functionality of
their existing system be reproduced, including procedural consequences of the existing system's
limitations, and their workarounds! These are sometimes called `degrading practices'.

A problem that is emerging with object libraries is that objects really do a very good job of
encapsulating their implementation. This allows us to use class hierarchies that we really know nothing
about the insides of. So we can end up calling through several layers of classes, each of which (to take
an example from Geographical Information Systems) messes with the co-ordinate system on the way, just
to place a marker on a map. The difficulty doesn't emerge until we attempt to set up several hundred
markers for the first time, and we discover that the `simple assignment' we are so pleased with is so
computationally intensive it takes half an hour to draw one screen.

A project that does not regularly examine its class hierarchies to ensure that the internal currencies are
natural and standard, and that the design is appropriate for the use cases found in practice, can suddenly
find itself in very deep water with no warning because of this hidden cost of reuse.

As ever, the best weapon against bloat is conceptual integrity, achieved by comparing mental models of
the project.

The Infinite Regress of `Software Architectures'


There was a team whose customer had asked them to build a runtime environment which could support
little processes being glued together by their I/O, like a kind of graphical shell which could specify
complicated pipelines. If we must classify it, it was a systems programming job. This team decided to be
Professional, and Use Proper Methods. So they went and produced an object oriented model of the stuff
in the URD, using a tool that automates the Shlaer-Mellor approach. There was a bit of difficulty with
this, because the problem was wrong in places (sic), but eventually they kludged it into the Proper
Methodology. Then they could use the code generator! They pressed the button, and what came out was
a whole bunch of glue classes. Each one just called down to an API routine from the `Software
Architecture', a layer that the approach requires, that allows the application to be mounted on a particular
type of computer system. In this case the Software Architecture would be a system facility providing a
graphical interface to a shell supporting complicated pipelines!

As mappers, we can see what happened with this rigorous piece of packer Professionalism and
Following The Procedure. The team had fired the wrong knowledge packet, and used a well-thought out
and excellently automated approach and language in quite the wrong context. Shlaer-Mellor is intended
to capture real world application level behaviour, not for doing systems programming. In old fashioned
language, they couldn't see that the problem categorisation was wrong because they had no common
sense, which is either a result of moral deficiency or congenital idiocy. We would prefer to say that they
were conditioned to believe they weren't allowed to think about the job directly, but were obliged to rush
off and find a crutch as the first order of business. Then they could only see the problem through the
approach, even though it didn't fit well. Then an application layer approach represented their problem to
them in application language. So they never tried to think about what sort of program they were writing,
and notice that it was a system shell, and not a real world application. To packers, Professionalism
means always picking up your toothbrush with chopsticks, and never mentioning that your colleague's
toothbrush has ended up rammed up his nose, albeit with full ceremony!

As things became more absurd, the tension grew, and the team became very defensive about their
Professionalism. At one point, someone tried to suggest a little pragmatism, and in passing referred to
himself as a `computer programmer'. The result was remarkable. The whole team started looking at the
floor, shuffling its feet, and muttering `Software Engineer... Computer Scientist...', as if someone had
just committed a major social gaffe. Two of the members later explained that they did not find `mere
code' particularly interesting, and that their Professional Interest was in The Application Of The Process.

It's hard doing systems programming if crufting up some instructions to the computer is beneath you. It
leads to very high stress, because it's impossible. So it's very important that each individual volubly
demonstrates their Professionalism to management, so that none of the crap flying around in the eternal
chaos that is the human condition sticks to them.

You can't write computer programs if your strategy is playing an elaborate game of pass the parcel. Not
even if you are following the sublime packer strategy of passing the document to the person on your left,
`for review'. Project managers that write task descriptions showing the inputs and outputs of a task can
often get this kind of thing under control, but it only works provided they have respected the cognitive
atoms in the project and done the chunking well, and ensured that the outputs are actually nearer the
processor, either directly or by informing another part of the project's activities, than the inputs.

There is an interesting lesson for mappers in tales of this kind, too. One has to be careful of getting into
games of functional pass the parcel. Complexity doesn't just go away, and functionality doesn't appear
out of nowhere. Complexity can be expressed more simply, with a deeper viewpoint. You know when
you've done that. It can also be cancelled out by removing another load of it at the same time. You know
when you've done that too. If in your musings a lump of horrible awkwardness just seems to go away,
you've probably just moved it somewhere else in your structure. Now this can be really good when it
happens, because when you find it again, you've got two views on the same problem, and that can lead
to great insights. But don't develop an attachment to any solution that seems to have just lost complexity
by magic - it will reappear eventually and spoil your whole day. This applies at every level from
requirements elicitation to bug fixing. Bugs don't just go away. The ones that pop up and then go back
into hiding are the most worrying ones of all. Get the old release out, find them, and make sure they are
no longer in the latest one.

For a practical example of functionality not appearing by magic, consider the problem of atomicity. On
multitasking systems, many processes run without being able to control the times when they are
suspended to allow another to run. Sometimes, two processes need to co-ordinate, perhaps to control a
peripheral. The problem is that it always comes down to one of them being able to discover that the
peripheral is free, and then mark it as owned. In the space between the two events, it is possible for the
second process to make its own check, find that the peripheral is free, and start to mark it as owned. One
process then just overwrites the other's marker, and both processes attempt to access the peripheral at the
same time. No matter how one jiggles things around within the user processes, you can't cruft up the test
and set as a single, `atomic' operation out of any amount of user process cleverness. You can encapsulate
it in a GetLock() function and tidy it away, but GetLock() must get the atomicity from the
operating system. If the system's CPU is designed to support multiprocessing in hardware, and most do
today, we ultimately need an atomic operation implemented in the instruction set, such as the TAS
instruction on Motorola 68000 series processors.
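
To sketch what this means in code, here is a GetLock() built on the atomic test-and-set that C11
exposes through stdatomic.h - the names and the printer example are ours, and on a 68000 the same
indivisibility would come from TAS:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag PrinterLock = ATOMIC_FLAG_INIT;

// The test and the set happen as one indivisible operation, so two
// contenders cannot both see `free' and both mark `owned'. This form
// covers threads sharing an address space; co-ordinating separate
// processes needs the flag placed in shared memory, or an OS lock.
bool GetLock(void)
{
    return !atomic_flag_test_and_set(&PrinterLock);
}

void ReleaseLock(void)
{
    atomic_flag_clear(&PrinterLock);
}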

None of this should be seen as suggesting that one shouldn't use layering, or specialised languages.
Indeed, if one is in fact relying on a specific opcode just to grab the printer, we need some major
layering! It simply means that we use the optimal level of specialisation to achieve maximal leverage -
we don't just delegate the impossible to miracle boxes and proceed with a design predicated on a
falsehood.

The Quality Audit


To mappers, the process is a protocol for communicating with their colleagues through time and space,
not a prescriptive set of rules that must be policed. It offers facilities rather than demanding dumb
compliance. In order to keep this distinction straight, we must approach the quality audit in the correct
spirit.

In the default packer model, the audit is a trial by ordeal. Managers work up apprehension in the time
leading up to the audit, and staff respond by booking leave and customer visits, or planning to call in
sick, in order to avoid being audited. When the auditors swoop, they approach team members
adversarially, and coerce them to tacitly acknowledge that the process is perfect, and that any flaws they
find must be offences committed by the individual. The individual becomes the patsy that takes the flak
for systemic flaws of the organisation, over which they have no control. This is the canonical situation
that starts staff chewing tranquilizers before they go to work. There is no doubt that this is the model,
even though it is rarely expressed this way, because the standard manager's Bogeyman Briefing includes
advice such as, `Do not volunteer information. Keep answers short. If you are asked where the project
management plan is, say it is in the Registry.' A barrister would sound very similar briefing a client who
was facing cross-examination by the prosecution!

There is absolutely nothing in ISO 9001 that requires this sorry ritual to get in the way of improvement.
We have no issue with ISO 9001 at all. In fact, we are concerned that talking about `going beyond ISO
9001' while we are incapable of even applying ISO 9001 itself with a positive mental attitude will just
shift the same old stupidity into yet another `radical new breakthrough in management science'. So how
would a gang of happy mappers go about a quality audit?

Firstly, the thing under scrutiny must be clearly identified as the process itself. We can do our staff the
courtesy of assuming that they are doing their very best, because it is nearly always true even when they
are treated like idiots. We therefore assume that any problems are by default, systemic faults of the
process. It's no good blaming repeated airline crashes on `pilot error' - obviously the cockpit ergonomics
need redesign.

Secondly, the comparison made by the auditor must be between the facilities of the process, and the
business need of the users. One of the worst things about an adversarial audit is that non-compliances
can easily emerge as consequences of a general procedure in a specific circumstance. For example, most
people's process contains a clause saying that personnel records should be updated with training records.
This is entirely appropriate in a semi-skilled job where workers can pick up `tickets' allowing them to
perform specific tasks. In a section like a transport department, there is often an added need to retain
actual certificates for legal reasons, and the problem is slightly different. In a programming team, formal
training courses comprise such a low proportion of the training done in the team that they are almost
irrelevant. Most training will be either on the job, or in the worker's own time. Many programmers will
spend several hundred hours per year in self-training at home. Authors of `or else' based processes
should remember that the employer cannot even require the employee to divulge what they have been
studying at home, so they'd better not get carried away with rudeness, because the project managers
really need these data, and have to ask politely. So testing dumb compliance to a global mechanism is
futile and will only lead to arguments about interpretation. Instead the auditor must evaluate the local
business need, and examine the suitability of the process in the light of the business need.

Thirdly, the auditor must be recognised as a specialist business colleague with his or her own positive
contribution to the work to make. These people see hundreds of business needs and filing systems, both
automated and manual. If the silly war between auditor and auditee can turn into a joint criticism of the
process, the auditee is free to be open about their problems instead of keeping silent as advised by most
managers today. Only then, when they know what the actual problems are, can the auditors search that
vast experience and suggest solutions that they have seen work for others.

Quality auditors should not be bean counters whose most positive contribution is proposing elaborate
rituals for balancing inappropriate complexity in the process. Their role goes much further than that.
Robert Heinlein said that civilisation is built on library science, and quality auditors are now the library
scientists of engineering industry. They can tell us how to file data cost effectively so that we can find
them later, given the sort of thing we'll be wanting to do.

Design Principles

Simple and Robust Environments


The development environment consists of all the tools (including word processing packages) used by the
programmers, and the machine and network infrastructure they run on. A beneficial environment will be
as simple as possible. Just like the software you are developing, the more complexity you let creep in,
the more room for problems there will be, and the higher the maintenance costs will be.

A good general rule is to keep all your own work, including configuration stuff (in script files if need be)
as plaintext. Be able to clean everything but your raw source out of the way whenever necessary, and be
able to rebuild automatically.

Your repository and configuration management system's most important job is to give you security.
Every bit of complexity you add increases the danger that the system will fail. The team that abdicates
control to a whizz-bang client-server architecture configuration management system risks corruption due
to loss of referential integrity within the system. The resultant chaos, as all work stops, efforts to find
some way to have confidence in backups commence, people start to identify what they must rework, and
morale disappears, shouldn't happen to a dog. It's not hard to build a good configuration management
system out of system scripts using shopping lists, with something as simple as SCCS or RCS providing
underlying version control.

The same logic - that the objective is security, so simplicity increases reliability and confidence when
system and users are under stress - applies to backup as well. When something goes wrong, the most
important thing is to know that you have rolled the system back to its exact state at a given time in the
past. Incremental backups with convoluted strategies for handling changes to the directory tree can
reduce confidence that they've worked OK even when they do. Or rather, they should. The team that
automatically cleans down to plaintext, streams everything off to tape, and rebuilds every night knows
where it is, and on this solid foundation can spend its time making real progress. It is actually smarter
than the sophisticated team that spends a month trying to baseline itself.

Keep it simple. Never fix anything - you never know if you found all the problems. Clean down and
reload instead. Always be able to reformat your disk, reinstall your tools, retrieve your plaintext
repository, reconfigure and rebuild. This brings total security and saves all that time you have to spend
worrying about viruses. Who cares?

System Types

One of the most important things to ask about a new system, or even an old one you have to do some
work on, is what sort of system is it? One would not attempt to lay out a hippy colony around a parade
ground, or a military training camp as a loosely coupled collection of teepees bound together by
communal walkways! Any system may have more than one of the attributes listed below, although some
are mutually exclusive. The attributes are possibly useful crude categories encountered in practical
experience. They are not derived from any underlying theory. Examples of kinds of system yours might
be are:

Monolithic

Centralised processing and either offline connections to users or pretty dumb terminals with all the work
done in the monolith. Users are either in the same building or have a lot of leased line capacity. This is
a great way to do enormous amounts of commercial processing with industrial strength paper handling
facilities adjacent to huge disk stores and laser printers chucking their output several feet into the air.
Monoliths have their own problems - users are often loath to go unavailable for backups, for instance!
These are mitigated by the high degree of control one can exert over what happens on the site. Fault
tolerant hardware (including RAID and power management) and sophisticated database engine
technology are areas where competition has produced real benefits as well as hype.

Client-Server

Distributes processing upfront near the user, with work that needs centralising in one place. Provides
good layering, by separating storage, processing and HCI. Allows local specialisation of functionality
and multiple vendors, and hence future-proof to an extent. Requires (and therefore exploits) smarter
networking than monoliths, but is better suited to interactive operation by allowing as much as possible to
be done on processing local to the user, using processing appropriate to the user's needs. All
client-server architectures actually have a monolith in the background.

Interactive

Allows the user to engage in dialogue with the system. Essential when working, say, in the near term or
with the general public. These days usually means graphical. The system state evolves continuously so
journalling or periods of exposure to data loss are an issue. Sizing is an issue with interactive systems
because as soon as they get them, the users' use patterns change, and the use patterns can vary a long
way between peak and normal.

Batch

Often thought of as an old fashioned strategy using punch cards, batch systems are simple, reliable,
wonderful if communication links are unreliable, and more readily scalable than interactive systems.

Event Driven

Respond to events in the outside world. Applications with GUIs spend most of their time waiting for the
user to click somewhere on the desktop, and respond to the event. One armed bandits are event driven,
as are burglar alarms. Event driven systems have complex state spaces and significant danger of feature
interaction. They often have an obligation to respond within a certain time limit to an event. If a problem
can be represented as a non-event driven system, it is probably better to do so.

Data Driven

Similar to event driven and batch systems, but with a clear data flow through each sub-system, where the
availability of the input data is the triggering event that causes each sub-system to perform its duty
cycle. Data driven systems are more flexible than batch systems, because batch size can be varied
dynamically, but they have the reliability of batch systems because we can always know how far we
have got in processing each batch. Each subsystem can arrange an atomic operation that makes its output
available, and removes its input data, so that even a powerfail at any time is immediately recoverable
because system state is never ambiguous. Email systems are data driven.

Opportunistic

These kinds of systems never suffer from communications failures because they only ever use channels
when they can. In fact, most business offices are opportunistic because this is how the underlying
Ethernet works. Data are buffered until the local transceiver can transmit them without a collision.

Dead Reckoning

Attempt to track each step of the analyst's reality as it evolves. Often seen as desirable because they
allow strong validation of user data, they can be brittle in use, leading to the famous joke, `It's no use
pointing to it - the computer says it isn't there!'

Convergent

These relax the interest in tracking each step of a perceived real world process, and focus on gathering
changes of state at key monitoring points and subsequently integrating the data to form an accurate
picture of the real world at some point in the past, and progressively poorer approximations as the point
of interest nears the present. Mobile users who move data from their laptops to the corporate net are an
example. We know exactly how many sales we made last week, nearly how many we made yesterday,
but we haven't got Jack or Jill's figures yet, and so far we know of three sales today.

Wavefront

Systems that deal with things as they happen. Lighting rig control or telecommunications switching are
examples. In cases of failure we are usually more interested in fast recovery than data loss.

Retrospective

Concerned with maintaining an accurate record of the past. Avoiding data loss is usually very important.
Examples are accounting systems.

Error Handling - a Program's Lymphatic System


It has often been noted that one should trap one's error returns, but there is not much point in this if one
does not know what one is going to do with them. Error handling is as much a part of your program's structure
as the successful logic flows, but not as celebrated. The relationship is rather like that between the
circulatory and lymphatic systems in the body. You need to consider error handling at each stage of
design. For example, there is no point in capturing an error return from failing to write to the error log in
an error handling routine!

Conceptual integrity requires that you define an overall approach to error handling to use throughout
your design and stick to it. How will you indicate failures? What idioms will programmers use to
elegantly test for errors without breaking the main flow? It should not be necessary to make a second
call to find out either whether an error occurred or what it was - otherwise code will bloat. Errors should
ideally be testable for on the function return to allow for terse idioms, which make for apprehendable
code and beyond that, the quality plateau:

if((fp = fopen(...)) == NULL)
{
    // Error
}

or

if(!DoTheBusiness())
{
    // Error
}

One hears many excuses for bloating up the error handling logic to the point where a rigorously coded
function call can take so many lines of code it isn't possible to see the big picture any more. As mappers,
we know that accreting so many Worthy coding standards you can't write a beautiful program is just
missing the point.

In procedural (and some object) code, there is a debate about error handling strategy that is worth
addressing. There are two approaches in the debate. The first says that the called subroutine
should not return until it has done what it was asked, and we might call it `total delegation'. The other says
that the called subroutine should proceed until it encounters a problem, and then tidy up any mess it has
made and bale out, telling the caller what has gone wrong - we'll call it `ready failure'.

The attractions of total delegation are that it makes for very clean code in the caller, it can be very
efficient, and it devolves responsibility for maintaining the state of lower levels to the levels themselves.
The disadvantage is that it only works if the caller doesn't need to actually handle the consequences of
error using its own context. This limits it to systems programming type situations, where the application
really can have no knowledge of the goings on below, and if the lower layer really can't sort the problem
out, any kind of exit niceties above would be inappropriate because the OS is going to sling the process
out or panic trap.

Ready failure always allows the caller to make a response to problems, and nested calls can pop their
stack until they reach a layer that can deal with the problem. The error handling logic is threaded
through every layer, but can be minimised with careful coding, and both author and maintainer know
how to operate within the scheme. In addition, one can ensure that a call trace is always produced,
showing how the process got into trouble so that the situation can be reproduced.
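
Here is a minimal sketch of the ready failure style in C - all the names are illustrative, not from any
particular project. Each layer tidies its own mess, reports what went wrong, and pops the problem up the
stack, so the messages together form the call trace:

#include <stdio.h>
#include <stdlib.h>

typedef enum { OK, FAIL } STATUS;

STATUS ReadConfig(const char *Path)
{
    FILE *fp;

    if((fp = fopen(Path, "r")) == NULL)
    {
        fprintf(stderr, "ReadConfig: could not open %s\n", Path);
        return FAIL;            // Nothing made yet, nothing to tidy
    }

    // ... parse the configuration ...

    fclose(fp);                 // Tidy up before returning
    return OK;
}

STATUS Startup(void)
{
    if(ReadConfig("app.cfg") != OK)
    {
        fprintf(stderr, "Startup: could not read configuration\n");
        return FAIL;            // Pop the problem up a layer
    }
    return OK;
}

int main(void)
{
    // The top layer has the context to decide what failure means
    return Startup() == OK ? EXIT_SUCCESS : EXIT_FAILURE;
}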

We don't think there should be a debate because when total delegation is appropriate it is at the lowest
level. Mixing the two is a nightmare in code because it breaks conceptual integrity.

Some object languages provide exceptions, which allow the automatic collapse of the call stack to a
layer qualified to handle the error. These are a great way to decouple the main flow from error handling.
Important points to remember are that an exception can sometimes be thrown a lot higher if you just
want to handle it than if you want to know how it got into trouble. An error message from a low level
saying

Could not write() datafile ftell() = 246810

followed by another saying

Could not Save World


just doesn't help with debug. You can throw exceptions up a layer at a time without compromising the
main flow, and should think about doing this.

Do not abuse exceptions to create weird control flow in company time. In particular do not hide
longjmp()s in macros and call them from handlers. If you wish to experiment with the Powers of
Darkness, do it at home. We all have to do it, but it is madness after all, and your colleagues might get
the wrong idea and start rationalising it. Isn't it strange that we are producing languages today that are
becoming anally retentive to the point where it takes ages just to get the const declarations set up right
in function prototypes, but allow us to pull stunts with control flow that we'd never have tried to pull off
in assembler after we got just a few K to play with?

Try to avoid leaving assert()s and conditional compilations of debug macros littering your code.
You cannot achieve the necessary sufficiency of the quality plateau with all that junk lying around.

Modalism and Combinatorical Explosion


For some reason, there is an assumption going around that in order to be robust, systems need normal
modes, failure modes that they enter when they fail, and recovery modes that they get into between
being in failure mode and going back to normal mode. Part of this is certainly encouraged by misguided
users who are attempting to describe objectives in cases of failure, but do so by talking about system
`modes'. It is a ticklish area, because when discussing failure users must think about the bits of a real
system that can fail, and they must discuss failure early on if they have to submit a URD that can later be
used as a stick to beat them. This means they must attempt to learn more about the final implementation
than the designers themselves know, so that they can specify what to do when the components fail.

As well as emphasising the importance of dialogue, this points out an often overlooked point. Does the
user really want you to implement the failure mode described in such detail in the URD? Might a system
that just works be acceptable? Of course it would be, but many teams just go ahead and implement the
failure just like the URD said.

A modern legend at ICL tells that when they bought their first load of boards from Fujitsu, they
specified that there would be a 1% rejection rate. So just before the first batch of 100 shipped, a senior
Fujitsu executive picked the top board out of the crate and smashed it with a hammer before repacking
it.

Apart from the need to manage state transitions and execute rarely exercised code, often across
distributed platforms during conditions of failure - which is just asking for trouble - there is always a
deeper problem with systems of this kind.

First we are in normal running. Then we enter failure mode. Then recovery mode. What happens if we
now fail? Do we have failure during recovery from failure mode? Recovery from failure during recovery
from failure mode? It is so easy to introduce a need for this kind of infinite regress into modal systems,
and not even recognise it. Of course, if in your design every level of failure and recovery in the regress
is identical then you are OK - all you have to do is prove this is the case.

If you can collapse the infinite regress then you can probably take the next step - eliminate the normal
and recovery modes altogether and stay in failure mode! (Or eliminate the normal and failure modes and
stay in recovery mode if you'd rather see it that way.) Then there are no co-ordinated state transitions to
manage across multiple platforms while the gremlins are shoving the power leads in and out. The system
need not ever know that it is under sustained real world attack and this is the fourth time it has tried to
process a bunch of transactions. It need not know its context to do The Right Thing, if you have defined
The Right Thing carefully enough.

Having multiple modes for handling failure really is much less necessary than most people think, and
avoiding them makes huge gains in controlling complexity. If we wish to keep control and
understanding of our designs, we must minimise complexity everywhere we can. On the win side of this
equation is the quality plateau. On the lose side is the interaction of complexity with other complexity
to produce vast growth in the system's state space called `combinatorical explosion'.

Avoid Representative Redundancy


Every database designer knows about normal forms. It ends up as a complicated area if one wants to do
a thorough treatment of the subject in real world conditions, but the basic idea is very simple. Avoid
redundancy in representation. If you need an order record and an invoice record, both of which need the
customer's name, store the customer record in one table, and use a unique index into the customer table
in the order record. Then index into the order table in the invoice record. Then things never get into a
confused tangle where you end up having to remember to check loads of little things every time you
want to change a datum.
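
In C terms the idea looks like this - the record layouts are invented purely for illustration. Each fact
lives in exactly one place, and other records refer to it by key:

// The customer's name is stored once; orders and invoices refer by key
typedef struct
{
    long Id;
    char Name[64];
} CUSTOMER;

typedef struct
{
    long Id;
    long CustomerId;    // Unique index into the customer table
} ORDER;

typedef struct
{
    long Id;
    long OrderId;       // Index into the order table - the customer is
                        // found through the order, never duplicated
} INVOICE;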

The thing is, database normalisation concepts apply everywhere, for the same reasons. Never store a
thing, and another unconnected thing somewhere else that asserts that the thing exists. Let data control
its own structure, and it won't get tangled. Ritualistic use of data structures often includes an aspect of
pretending to be in control by picking up one's toothbrush with chopsticks. If we create the financial
accounts structure in Box A, and a complicated description of Box A in Box B, we can spend a long
time thrashing about in Box B and never have to address the fact that we really don't understand what is
happening in Box A at all!

Don't fall into this trap. Let data represent themselves, or as Laurie Anderson said in Big Science,

Let X = X
Look at the State of That!

In the same way that it is important to avoid representational redundancy of data in the context of your
system, it is important to avoid representational redundancy of your system in the context of the
platform. This is true because global resources can be left in ambiguous states due to failure. The design
should always consider cleanup of all system resources, particularly partially written files that can eat
space even if they don't confuse processing.

Be aware of which system resources clean themselves up (such as semaphores) when the owning
process dies, and prefer them.

Avoid `cleanup processes' that run around non-deterministically on the system clock with slash and burn
rights against all your system's resources. Try to use initialisation protocols that start by determining a
known state and moving forward instead. An example might be,

1. Find an input file.
2. If the output file already exists, delete input file and exit.
3. Open the input file.
4. Open a temporary output file with a standard name in truncate mode.
5. Process from input to output file.
6. When output file is complete, change its name atomically to the output name.
7. Delete the input file.

Or, grasp branch firmly with left paw before releasing right paw!
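
Here is the protocol sketched in C. It leans on rename() being atomic when source and destination are
on the same filesystem, as POSIX promises; the file names are purely illustrative:

#include <stdio.h>

int ProcessFile(void)
{
    FILE *in, *out, *chk;

    // 1. Find an input file
    if((in = fopen("input.dat", "r")) == NULL)
        return 0;

    // 2. Output already exists: a previous run completed the rename
    //    but died before deleting its input. Finish the job and exit.
    if((chk = fopen("output.dat", "r")) != NULL)
    {
        fclose(chk);
        fclose(in);
        remove("input.dat");
        return 1;
    }

    // 3 and 4. Input is open; open the temporary output in truncate mode
    if((out = fopen("output.tmp", "w")) == NULL)
    {
        fclose(in);
        return 0;
    }

    // 5. Process from input to output...

    fclose(out);

    // 6. Atomically give the finished output its real name
    if(rename("output.tmp", "output.dat") != 0)
    {
        fclose(in);
        return 0;
    }

    // 7. Only now delete the input
    fclose(in);
    remove("input.dat");
    return 1;
}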

The Reality of the System as an Object


This section is primarily intended for designers of object systems, because the problem it addresses
primarily appears in the object approach. This is because of the rigorous encapsulation the object model
affords. We have already discussed the two approaches to designing object systems that mappers and
packers prefer. The mapper approach entails understanding the nature of the desire, and then as an
iterated activity, identifying appropriate system dynamics and producing an optimal mapping between
problem dynamics and system semantics.

Object designs are intended to create a formalised Knight's Fork, by providing an approach which
explicitly relates real world objects to viable system semantics via object programming languages (be
they Eiffel or UML code generators). When producing these designs their creators tend to represent
everything that is in the real world today, rather than tomorrow, when the system will be in use. The
major difference is that tomorrow the system will exist in the user's world - today it does not. So analysts
regularly produce little pictures of the user's world in the future that contain everything but the very
computer system that is central to the whole scenario.

Meanwhile, the internal design of the system is also hampered by the lack of representation of the
system itself. One might say that the real world system and the internal system are the same thing in
both real and abstract worlds, and hence this identity forms the appearance of the Knight's Fork in its
most basic form in object designs.

The two deep questions when finding objects and how they link together are:

1. Who instantiates whom?
2. Who exerts whose methods?

With a clear System class in the design it's a lot easier to draw out the instantiation hierarchy, as well as
see where things like GUI and tape I/O come from, let alone use cases that are triggered by wall-clock
time! This doesn't mean that functionality can't be moved out into specialised classes later in the design,
but it does give the user world reality an equal footing with the system reality in the design, so the result
will satisfy both criteria.

Getting to grips with an abstract set of classes floating around in the air with no rigorous way to get a
reality check can be as painful as anything else when you don't know what you are doing.

Of course, the need for a System class disappears if one is interested in simply modelling, rather than
automating, business flows where the control systems are not represented. What would be the point in
that? Here we stress the point that engineering informed by the mapping cognitive strategy involves
more than a set of procedural actions. It means bounding your own problem, clarifying your own
desires, and finding the optimal point of leverage between problem dynamics and system semantics. If
your design won't benefit from having a System class, don't use one!

Memory Leak Detectors


There are a number of products on the market that by a variety of strategies, detect memory leaks in
your application. A memory leak is what happens when a program requests some memory from the heap
(using, say, malloc() in C under UNIX or DOS, or the new operator in C++), and then forgets to
give it back when it is finished with it. This can sometimes damage other processes on the same
platform, because some OSes will allow one process to gobble up all available system memory and
swap!

Even if the OS is sophisticated enough to limit the amount of real memory it will allocate to a single
process on a multiprocessor, the application can soon gobble up its own quota, which usually ends up
failing in a user-visible fashion, at the very least by having the application lobbed out of the system by
the OS!

So memory leaks are A Bad Thing.

This is why people sell, and buy, memory leak detectors. The trouble is, memory leaks are a symptom of
problems, not a cause. It isn't hard to call free() or delete on objects that are no longer needed.
Deleting collections of pointers while the objects they point to are still active is just plain sloppy, as is
overwriting their addresses. What if those objects have, say, callbacks registered with the GUI? How will
you get rid of them? Destructors out of control mean a programmer out of control. If a programmer can't
show control over objects enough to avoid memory leaks, how can we know that anything else is right?

Conceptual integrity is one of the strongest supports in keeping control of objects. A useful general rule
(although like all rules it is not always appropriate) is to say that the layer that constructs a module
should also be responsible for destroying it. This at least focuses attention on the object's lifecycle, and
not just the few aspects of its behaviour that might be indicated in use case diagrams.
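
A minimal sketch of the rule, with invented names: the layer that says new is the layer that says
delete, so the whole lifecycle is visible at one point of control:

    // Hypothetical sketch: construction and destruction of the Report
    // kept in the same layer.
    class Report { };

    class ReportLayer {
    public:
        ReportLayer() : report(0) { }
        ~ReportLayer() { close(); }        // this layer made it; this layer kills it

        void open()  { if (!report) report = new Report; }
        void close()
        {
            // Deregister GUI callbacks and the like before destruction,
            // so nothing is left pointing at a dead object.
            delete report;
            report = 0;
        }

    private:
        Report* report;
    };

    int main() { ReportLayer layer; layer.open(); return 0; }

Layers below ReportLayer may use the Report, but they never delete it.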

Timeouts
One of the most effective ways of getting a seed for a random number generator is to look at the system
clock. Similarly, if two processes are running on the same multiprocessor, we can never predict just how
much wall-clock time will have elapsed between their starting execution, and a given point in the
program being reached. We can't even predict exactly how much processor time each will have been
allocated.

Therefore timeouts are A Bad Thing. In deliverables they make the system's state space vastly bigger, so
making its behaviour much harder for the designer to predict. In debugging, they can make the
conditions under which the fault occurred impossible to reproduce. Don't use them unless you absolutely
must.

Communications layers are often obliged to use timeouts, because when it comes down to it, the only
way to find out if a remote box wants to play is to send it a message, and wait to see if it sends one back.
How long should one wait? The `Byzantine Generals' Problem' illustrates this. So most modern systems
have timeouts in the comms layer, but this is not an excuse to use them all over the place, and where
they must be used, they should be hidden within an encapsulated object that can be replaced by a
deterministic event generator (such as a key press) for debug.
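
One way to do the hiding, sketched with invented names (a real comms layer will be richer than this):
the layer sees only an abstract event source, and the debug build substitutes a deterministic one:

    // Hypothetical sketch: the timeout behind an interface, so a
    // deterministic generator can replace it for debugging.
    #include <ctime>
    #include <cstdio>

    class EventSource {
    public:
        virtual ~EventSource() { }
        virtual bool expired() = 0;             // has the reply deadline passed?
    };

    class TimeoutSource : public EventSource {  // deliverable version
    public:
        TimeoutSource(int seconds) : deadline(std::time(0) + seconds) { }
        bool expired() { return std::time(0) >= deadline; }
    private:
        std::time_t deadline;
    };

    class KeyPressSource : public EventSource { // debug version
    public:
        bool expired()
        {
            std::printf("time out now? (y/n) ");
            return std::getchar() == 'y';       // the operator decides, reproducibly
        }
    };

    int main()
    {
        TimeoutSource t(5);                     // debug builds construct KeyPressSource
        return t.expired() ? 1 : 0;
    }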

Design for Test


It is rarely enough that our systems are correct. Usually we need to know that they are correct as well.
This point may sound trivial, but it has consequences for how we set about our work.

At a requirements elicitation level, we can wander around the particular part of the problem domain we
are supposed to be tackling, without ever knowing if we have yet captured all the issues that are
relevant. To gain confidence that we have not missed anything, we need to widen our gaze, so that we
can see where our outputs go to, and where our inputs come from. We need to find a way to present
these flows so that we can see the big picture at a glance. Reams of prose or fat folders full of Data Flow
Diagrams are of no help at all here (although they may well be needed elsewhere on the project),
because they will not allow us to see at a glance that there are no loose ends. If we can see that there are
no loose ends, we can be reasonably confident that there are no hidden horrors that we will discover
during implementation. This is an example of the mapper technique of problem bounding.

At an architectural and detailed design level, the same idea applies. During our contemplation of our
design we represent our ideas to ourselves in as many ways as we can, and challenge them to see if we
can break them. It is important that we use some feature of the design, such as the number of possible
input states, to show that the system we design will be robust in all cases, by showing that we have
considered all cases. This does not mean that we attempt to enumerate all cases - instead we find a
means to group them, and show that we have considered all groups.

When single-stepping code with a graphical symbolic debugger, at each decision we should consider all
the circumstances under which the path we are taking would be followed, and all cases where the other
path would be followed.

In all these situations, design for test begins by laying our work out so that its correctness is visible to
inspection. In this light it is interesting to consider what we mean by a mathematical proof. The purpose
of proof is usually described as being to show that a proposition is the case. That is a very packer,
activity-centred way of seeing things. The mapper description of the purpose of proof is that it shows us
the proposition in a new light, in which the truth of the proposition is obvious to inspection. For
mappers, a proof doesn't just establish a fact, it increases our understanding as well. We have recently
seen computer-assisted proofs that fulfil the packer purpose, but do nothing for the mapper purpose.
Because they do not exploit the leverage which comes from understanding, these proofs are also weaker.
Is it necessarily the case that the correctness of the code (let alone the architecture of the computer) that
is going to perform the search is obvious to inspection?

Wise architects usually layer their designs so that there are discrete stages visible in the transition from
end-user facing code and OS facing code. Every one of these layers provides an opportunity to write a
little test application. These opportunities should usually be taken, because although it may seem like a
high up-front cost, tracking down bugs that do not have well-defined test points in the layering can
explode the final test phase and add enormous time costs just before delivery. To fully exploit these test
opportunities, we should consider test when defining the APIs for our layers. Is it possible to simplify
the API definition so that we can reduce the proportion of all possible calls that are meaningless? Each
layer must either validate its input, or if time is really critical, require prevalidated inputs. Test must
ensure that this logic works properly as well as the job the layer is supposed to be doing. If the API can
be simplified, the test requirement is automatically simplified at the same time.
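
As a sketch of the idea (the layer, its API and the numbers here are all invented): a narrow API that
rejects meaningless calls at the boundary, plus the little test application that exercises exactly that
surface:

    // Hypothetical sketch: a one-function layer API with validation,
    // and a trivial test application for it.
    #include <cstdio>

    int storeReading(int channel, int value)        // the layer's whole API
    {
        if (channel < 0 || channel > 15) return -1; // validate: 16 channels only
        if (value < 0 || value > 4095)   return -1; // validate: 12-bit readings
        /* ... do the layer's real job ... */
        return 0;
    }

    int main()                                      // the little test application
    {
        int failures = 0;
        if (storeReading(3, 1024) != 0) ++failures; // legal call must succeed
        if (storeReading(-1, 0)   == 0) ++failures; // illegal channel must fail
        if (storeReading(0, 9999) == 0) ++failures; // illegal value must fail
        std::printf("%d failures\n", failures);
        return failures;
    }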

Considerations that apply between layers also apply between runtime processes. Most non-trivial
systems require several processes to co-operate either on a single platform or across a network. The
functionality of these processes should be divided up so that it is possible to test them, ideally in
isolation and from a command-line or script.

Sometimes we cannot avoid introducing discontinuities into the solution where none exists in the
problem. For example, if our database is so big we must spread it across several machines (and our
COTS RDBMS isn't managing this for us) we need to recognise the points where the logic of our
programs must change to look on another system, and test that this change is negotiated correctly.

Designers of object systems have a particularly easy strategy available for automating test. Every class
(or key classes at the discretion of the architect) can have a matching class defined that exercises the
methods of the system class. This works so well because the class declaration forces the surface area of
the class to be tested into a standardised and well defined format (this is what objects are all about). So
each class can carry with it its own test code, which just needs to be called from a little application
wrapper to automate the test. These test classes are sometimes called `yang' classes (the deliverable
classes are the `yin' classes).
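
A minimal sketch of the pattern (the Counter example and all the names are invented for illustration):
the yin class, its yang companion, and the little application wrapper:

    // Hypothetical sketch: a deliverable `yin' class and its `yang' test class.
    #include <cstdio>

    class Counter {                      // yin: the deliverable class
    public:
        Counter() : n(0) { }
        void bump()        { ++n; }
        int  value() const { return n; }
    private:
        int n;
    };

    class CounterTest {                  // yang: exercises the yin class's methods
    public:
        bool run()
        {
            Counter c;
            if (c.value() != 0) return false;
            c.bump(); c.bump();
            return c.value() == 2;
        }
    };

    int main()                           // application wrapper to automate the test
    {
        CounterTest t;
        bool ok = t.run();
        std::printf(ok ? "Counter OK\n" : "Counter BROKEN\n");
        return ok ? 0 : 1;
    }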

There are two benefits that can be attained when automated test is in place. The first is that the tests can
be run every night, as part of the build process. It does programmers no end of good to come in and find
an email from the development environment saying that everything that the entire team has developed to
date is still working properly. When the email says that something is broken, they don't waste days
trying to find out what is wrong with their new layer when in fact the problem has appeared two levels
down. The second benefit is that automated test code cannot slide out of date as documentation can. If
the automated test compiles, links, and passes, then we know that the description of the behaviour of the
tested code that it contains is true.

These ideas of the definition and execution of automated tests are especially important on very
sophisticated projects where dynamic configuration management and incremental compilation tools out
of science fiction books allow hundreds of developers to hack away like demented monkeys on cocaine
without even stopping for sleep let alone thought. (Said the authorial voice rhetorically.) Checkpointing
and running full tests from the ground up should not be considered an interruption of work - it is a very
cheap way of buying confidence in the solidity of the ground. As an added benefit, such events can
become team festivals as module after module, layer after layer, announces its own successful build and
test on the configuration manager's workstation. It is at these festivals that the team can naturally reflect
on all that they have achieved to date, because the first festival should be simply to compile and run a
`Hello world!' program and prove that the compiler is working properly, while the last produces a
working product that is deliverable to the customer with all objectives achieved.

Dates, Money, Units and the Year 2000


An area where test (and failure) can be massively reduced by reducing system complexity is in
recognising discontinuities in the problem domain and avoiding their deep representation. What does
this mean in practice? One example is time and daylight saving. Time actually proceeds at the rate of
one second per second, and the planet does not do a little shimmy in its orbit every Spring and Autumn.
So even though the requirements document may talk about switching in and out of daylight saving, there
is no need to represent this within the system any lower than the user interface level, which just needs a
function, method or whatever called LocalAdjustTime() or some such. UNIX has wonderful
support for doing this stuff right, and sadly few sites ever use it properly.
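
The shape of it, sketched (localAdjustTime() here is our stand-in for whatever the function ends up
being called): every internal layer traffics in UTC, and only the user interface converts:

    // Hypothetical sketch: UTC everywhere inside the system; daylight
    // saving exists only at the user interface.
    #include <ctime>
    #include <cstdio>

    std::time_t systemNow()                 // what every internal layer uses
    {
        return std::time(0);                // seconds since the epoch, in UTC
    }

    void localAdjustTime(std::time_t utc, char* buf, std::size_t len)
    {
        // The C library applies the local timezone and daylight saving
        // rules here - the one place they are represented.
        std::strftime(buf, len, "%Y-%m-%d %H:%M:%S", std::localtime(&utc));
    }

    int main()
    {
        char buf[32];
        localAdjustTime(systemNow(), buf, sizeof buf);
        std::printf("local display: %s\n", buf);   // UI layer only
        return 0;
    }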

The same thinking applies to time zones. Your users may well work all over the planet and want to talk
in terms of their local times, but your network should use GMT (or UTC if you really mean UTC)
throughout, and files should be so timestamped. Sequencing issues with files created on computers with
different clock settings still absorb far too many programmer hours. One manager of an international
network went to her local domestic furnishings store and bought forty hideous, 1950s pointy-style,
identical clocks and a boxful of spare batteries. Over the following year she took a clock with her
whenever she visited a remote office, set it to the correct GMT time and hung it on the wall. By the end
of that year, the persistent undercurrent of difficulties that were ultimately caused by sequencing issues
magically went away, because every operator had a very big reminder of what time to set the system
clocks to when they rebooted them.

Another example of an avoidable discontinuity of problem domain is the two kinds of money most
countries like to maintain. There are always 100 pence in the pound or cents in the dollar, so just store
the pence or the cents, and if the user really wants a decimal point printed out after the second digit, put
a 2 in the database somewhere and use an output routine that does the database lookup. That way
sensible currencies like pesetas and lira don't end up causing problems because one must remember not
to print out the redundant decimal point...
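
Sketched with invented names (and positive amounts only, for brevity): integer minor units everywhere,
a per-currency decimals figure from the database, and one output routine that knows about decimal
points:

    // Hypothetical sketch: amounts held as integer minor units; the
    // decimal point exists only in one output routine.
    #include <cstdio>

    struct Currency {
        const char* code;
        int decimals;          // the `2' stored in the database for GBP, USD...
    };

    void printAmount(long minorUnits, const Currency& c, char* buf)
    {
        if (c.decimals == 0) {                   // pesetas, lira: no point at all
            std::sprintf(buf, "%ld %s", minorUnits, c.code);
        } else {
            long scale = 1;
            for (int i = 0; i < c.decimals; ++i) scale *= 10;
            std::sprintf(buf, "%ld.%0*ld %s", minorUnits / scale,
                         c.decimals, minorUnits % scale, c.code);
        }
    }

    int main()
    {
        char buf[64];
        Currency gbp = { "GBP", 2 }, esp = { "ESP", 0 };
        printAmount(12345, gbp, buf); std::printf("%s\n", buf);  // 123.45 GBP
        printAmount(12345, esp, buf); std::printf("%s\n", buf);  // 12345 ESP
        return 0;
    }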

The difficulties associated with the Year 2000 problem repeatedly reveal an issue that programmers
seem to have to discover over and over again. Programming languages provide data types because
within the type there is a permissible set of operations that one can either perform or not, and if one is
reading code and the operations are being performed, one can see them. OO languages extend this
facility to any kind of data we wish to so control by giving us Abstract Data Types (ADTs). The real
difficulty with Year 2000 is not the way so many programmers coded the year into two digits - in days
gone by that was a necessary storage saving and some Year 2000 prone software is quite old. The
problem is the way some programmers chopped up their two digit dates and tucked them away all over
the place, without using a consistent subroutine, macro or even code fragment to do it. That means that
to dig out the Year 2000 problems one must read and understand every single line of these horrible
blathering old programs.
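
The point, sketched (the class and the windowing rule here are ours, for illustration): even a two-digit
year is survivable if every read and write goes through one ADT, because then the fix lives in one place:

    // Hypothetical sketch: the two-digit year confined to a single ADT.
    // When the century rule must change, it changes here and nowhere else.
    #include <cstdio>

    class ShortDate {
    public:
        ShortDate(int yy, int mm, int dd) : yy(yy), mm(mm), dd(dd) { }

        int fullYear() const
        {
            // The windowing rule, written once: 00-69 is 20xx, 70-99 is 19xx.
            return yy < 70 ? 2000 + yy : 1900 + yy;
        }
        int month() const { return mm; }
        int day()   const { return dd; }

    private:
        int yy, mm, dd;       // the stored two-digit form, never touched directly
    };

    int main()
    {
        ShortDate d(97, 11, 9);
        std::printf("%04d-%02d-%02d\n", d.fullYear(), d.month(), d.day());
        return 0;
    }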

Security
Sites differ widely in their attitude to security. Some of this is an inevitable consequence of the nature of
the business. Many military and commercial operations have a genuine need to prevent the competition
from discovering what is going on. But many of the differences come from confusion as to the intent of
security, and this is the topic of this section. As has often been the case in this course, situations and
techniques for increasing security are given elsewhere. We shall here concentrate on appropriate
relaxations of usual security.

First, one should distinguish between the security requirements of one's products, which come from the
users' requirements, and the security needs of one's own development environment. These may be
linked, where, for example, the security of the product depends on the confidentiality of the source code,
but linkage is not equivalence. In products, don't just add `security' features by force of habit. Is it really
necessary to associate a password with every user ID in your product? Do you need user IDs at all? Can
non-contentious functionality be provided behind a `guest' ID that requires no password? Every
password in your product must be remembered and maintained, reducing ergonomic viability and
increasing cost of ownership, for the little darlings are sure to forget their passwords.

Next, there are two kinds of security threat, malicious and inadvertent. Your product may need to guard
against malicious threats, but if you need to guard your own development environment against malicious
threats from within (we assume you are a grown-up and have a firewall), you have bigger problems than
tweaking a few file permissions will sort out for you. So give up on malicious threats at work. As for
inadvertent threats, such as accidentally deleting the entire source tree, you have your backups, don't
you? Placing a high-cost security overhead on every operation in a development environment to guard
against `disasters' that are in fact low-cost when, if ever, they happen is misguided. As programmers
become more familiar with the doctrine of the personal layered process, even these low cost errors
reduce in number, and the development of shared mental models and mapper jargon within teams means
that informal `etiquettes' develop readily, such as the cry `Reinitialising the test database - all OK?'
before cleaning up trashed test data. These outcry etiquette elements are the only acceptable shouting in
the hated open plan office, and just about the only valid reason for them. It is not a good enough reason
however.

So don't lock up your development environment to the point where changing anything at all requires
every team member present to type in their passwords. Don't create or adopt a configuration
management system that stops a developer dead at eight o'clock at night when he or she is on a roll but
can't even book out a hackable file for read to try something out. Not only does this directly impede your
project: it is also an emotionally painful experience that you dump on the most highly motivated animal
in the commercial world - a programmer in Deep Hack Mode. What has this person done to hurt you?

And finally, don't allow any element of the packer article of faith that we must know exactly who did
exactly what at exactly what time with respect to absolutely everything to cloud your thinking. If your
project is a team co-ordinated by etiquette and formalised as necessary you have a chance. If it is a
bazaar regulated by detailed records you are doomed anyway.

Prudence and Safety

Brain Overload
As we said right at the beginning, mapping and packing are different. The difference can be seen in the
organisation of the workplace, and the behaviour of the people within it. A packer organisation sees all
work as consisting of a mechanical series of actions that are performed at a particular place. It's not that
cynical managers believe that putting everyone in open plan offices, thus stopping them concentrating
and hence impacting the product, doesn't matter because their goals are short term - it's that they don't
believe that there is any such thing as concentration (as mappers know it) going on in the first place!

Some work environments can be so packer-orientated that they render mapping impossible. There will
be constant interruptions, preventing a mapper attaining flow. Meetings will be structured as a series of
`posture statements' from individuals that are scored to identify winners and losers without regard to the
comparative relevance of any particular sound bite. In this situation, a mapper whose considered thought
may require two, or even three sentences to enunciate, will simply be seen as a pathetic loser.

Worse, the very ineffectiveness of the packer cognitive strategy leads its users to become very defensive
when the etiquette that conceals all the packers' lack of understanding is breached. If a small proportion
of the staff are mappers (or even a large proportion if they don't know what is going on), acrimonious
situations can emerge.

The key point in this picture is that there is no point in the mappers trying to persuade their packer
colleagues of the value of a rigorous and complete approach with ever more careful arguments - the
problem is that the packers aren't ready to accept any sort of detailed reasoning in the first place! So a
mapper can work themselves sick trying to reason with people who just aren't listening.

You really can work yourself sick when doing mapping, and it is very important to watch this and avoid
it. The first thing to do is recognise situations where an intense mapper approach is appropriate.

All mapping can be seen as a search, and the thing about searching is that one does not know where the
sought thing is. Therefore one must usually undertake to continue the search until the object is found.
This is much easier to do if one can have confidence that the object of the search actually exists!
Otherwise, one must impose some sort of artificial termination such as a time limit. This is where the
`mapper faith' we mentioned at the beginning comes in - mappers discover over and over again that the
natural world is always simpler than it looks, providing that one looks at it in the right way. Sometimes
great hidden complexity must be uncovered and explored on the way, but the simplicity, the necessary
sufficiency of the `quality plateau' will be revealed in the end. In all situations found in the natural
world, a mapper investment will be worthwhile, because the deeper the hidden view, the more
worthwhile (powerful) it will be. Situations that do not involve the `natural world' in this sense (after all,
everything in the universe is `natural') are those where consciousness has acted to create a local area of
irrationality. In other words, where you are playing against another mind, which by accident or design is
setting out to confuse you, by setting out to show you only parts of the whole system (which is rational)
so that what you see appears as complete but irrational. So packers, by using the same language as
mappers when talking about thinking, but meaning something different, appear to behave irrationally.
When one adds the mapper/packer communication barrier into the picture, we're back in the natural
world, which includes the mind that is the opponent, and rationality is restored.

There is a radio panel game called Mornington Crescent, whose rules are for some reason never actually
published. If one listens to the play and assumes that there is a rational system of rules that is being
adhered to, one will go mad. There are no such rules. The real game consists of the adroitness with
which the players make it appear that the rules do exist, and that the game has a particular character.

So if there is another mind in the situation under consideration, its potential perversity must always be
considered to ensure that the situation remains natural. As computer programmers, this might seem to
leave us in a paralysis of paranoia, because many of the problems we deal with involve users. But things
are not this bad because if a business activity has existed for some time, it is going to be a consistent
natural phenomenon that is amenable to mapper analysis, even if none of the human players actually
understands what is really going on. Remember however, that short lived business activities, such as
types of transactions offered by merchant banks for short periods only in a perpetually changing market,
may not actually be sustainable, and thus must be treated as the products of perverse minds. This doesn't
mean that the transactions aren't automatable - just that the only thing to do is code up the madness in
your 4GL or whatever RAD tool you are using just like the risk managers ask you to do it, and let them
worry about renormalising their own behaviour with respect to their equally perverse peers. Sometimes,
organisations that do this sort of thing ask mappers to look at the whole situation and see if they can find
any logic if the boundary is wide enough. These jobs can be extremely interesting and rewarding.

Having identified situations where we should either abandon mapping or redefine the problem, we are
left with the problem of how long understanding will take to come. Experience with mapping does build
up a bizarre kind of intuition about intuition, which can often give a good sense of this. Don't pronounce
estimates until you have convinced yourself privately that you have built up enough experience to do
this. Write down your own private estimate before you embark on a mapping job, and see if you are
right when you're done. Even with decades of experience, you can still be wrong. If the job was
understood, there would be a COTS product, wouldn't there?

Mappers often contemplate problems for many years before they crack them - the investigation
culminating in this course took either thirty years or six years, depending on where the boundary is set!
The important point is that while there is a limit to the intensity one can bring to bear on a problem,
there is no limit to the duration of a state of alertness with respect to a problem, save the lifespan of the
mapper.
There is no disgrace in recognising a long haul problem and reducing the intensity of one's
contemplation. This attitude produces one of the most hilarious examples of the mapper/packer
communication barrier. To a packer, a project consists of a series of actions to be performed. The speed
with which the actions are performed indicates the efficiency of the worker. Working on a problem, then
appearing to leave it alone, is evidence of shambolic disorganisation on the part of the worker. Hence
packers are able to be extremely patronising about mappers' `curious projects', while recognising and
simultaneously dismissing the remarkable creative results that mappers regularly deliver, because they
clearly weren't attained `correctly', although the packers have no suggestions as to what the `correct'
approach might be. Poor things. Education has a lot to answer for.

With the rules of engagement laid down, we must next address taking care of one's self while doing
intense mapping. Basic physical health can be compromised in two ways. During intense engagement
some mappers find that nutritional and exercise needs are left unattended. Be proud of what you are and
what you can do, and make sure there is plenty of fresh fruit and a full freezer before entering the state.
Then eating is easy. Every mapper seems able to work well while taking a solitary walk, so take walks.

The second way to compromise your health is by overfilling your brain. The following advice is not to
be taken literally, as we have no neurological basis for it, but this is what it feels like to some mappers
who have experienced the difficulty. If the problem is a big one, the complexity of it, that has to be held
in the mind in one go before collapse can occur, seems to fill more and more of the mapper's brain, and
it starts to occupy the brain parts that hold the mapper's body image. When this happens, physical fitness
can collapse in days, and a limber, trim person can turn into a stiff couch potato very quickly. We aren't
saying don't do this - it really is up to you. But if you stop within a week of getting suddenly slobby, and
regain your body image by working physically and getting some feedback, you can retrieve your fitness
as quickly as you lost it. Swap your body image out for too long though, and it gets much harder.

Remember what we said about not wasting energy by repeating unproductive cycles of thought.

Remember that things are also happening in the outside world. Your personal relationships need
maintaining, and while some of those close to you will recognise the state and wait for you to reappear,
others will need some contact. Practise working in the background, by using the `plate spinning'
technique we described earlier. With time, you will find that you can vary the amount of your mind you
give to contemplation, and the amount you leave free for holding witty conversations. If you're at a stage
where you need to commit a lot of your mind, and don't want to stop because it would take a week to
retrieve the partial picture you have, but there's a function you're expected to attend, you can always just
go in happy idiot mode. You know you're paying no attention to the prattle around you, but incredibly, the
prattlers rarely do!

Pay no attention to the opinions of packers as to your unhealthy ways. Such comments are nothing to do
with the genuine health considerations we have discussed in this section. To packers, `thinking too
much' is a disorder in itself!

Brain Overrun
Mapping is an intense, absorbing emotional experience. Every problem collapse is exhilarating. The
trouble is, we hit the peak of intensity and exhilaration, and then the damn thing is cracked, and there's
nothing to engage with any more. This can lead to a danger of depression at the end of a project, because
the fun has stopped. It can also lead to a mind racing around and around, skipping around what is now a
very simple structure in an extreme example of going around and around an unproductive cycle of
thoughts.

All this stuff is bad for you. If you have been working within a team, do talk-down sessions. Give
yourself something useful to talk about by analysing how you approached the problem, and what you
have learned about the class of problems from the specific one you just tackled. What have you learned
about your platform? Is it cool or bletcherous? During talk-down sessions, remember mood control. The
idea is to wind down, not up. Replace the pleasure of cracking a problem with a celebration of your
success.

If you are not within a team, try to get into a totally separate activity involving others as soon as
possible. There is always a problem in that when you are engaged, you don't think about your social
calendar, and as soon as you're done, you can get into a funk too quickly to sort yourself out. So either
have the wit to invite some friends around in advance, or go blind visiting.

If the worst comes to the worst, and you are stuck on your own with a solved problem, get it over with
as soon as possible. Get your trophies, your listings, your diagrams, your product, get utterly loaded on
the poison of your choice, and spend an evening gloating. It might seem highly self-indulgent to do this,
but it recognises and deals with a genuine emotional cutoff that is related to bereavement. Just don't
overdo it - one evening, then go get a life!

Overwork
Don't confuse genuine mapper intensity with packer work binge displays. Remember that mapping is all
about leverage, and get things worked out in your head so that what you actually do in the physical
world is limited to necessary and sufficient right action.

Bear in mind your personal layered process, and evaluate your response to the situation by asking if the
plan it currently represents is appropriate. That way, being at work or at home is a practical and
objective issue, not a moral one.

Cultural Interface Management


Be aware of the need to manage the interface between the mapper values necessary to produce software,
and the packer values that usually surround your project in its commercial environment.

Do not get involved in discussions without establishing the ground rules for rational thought. Claim the
space to make a structured case.

When there are choices to be made, don't try to lay out all the logic as you might want it yourself.
Remember that your need to be informed of all the facts so that you can make your own decision is not
shared by packers. Don't try to explain why the optimal solution is correct, either. This just incites the
packer need to score political points (the only way to survive in the infinite chaos) by arguing with you.
Instead, teflon yourself by working through several options with costs and benefits, and restrict yourself
to making sure that everyone understands the options. Packers can do this - it is how they buy washing
machines and Double Whammo Choco Burgers.

Declare ambiguities and manage them. A useful buzzword is to call this exercise a `Risk Parade'.
Identify the unknowns and post them publicly with an estimate of the likelihood of their becoming a
problem. Update the Risk Parade when things change. Present these data either formally or informally,
but make sure everyone knows where they are.

Be willing to use the phrase `I don't know'. This act of simple honesty can deflate no end of packer
pomposity and coercion, while leaving you with a clearer understanding of where you are.

All these techniques work by addressing a basic problem. Packers want to remove complexity by
shifting blame onto others. Unless you act to prevent it, you can find yourself `responsible' for the
difference between the real world and the packers' wish-fulfillment fantasies. By acting to place the
realities in a publicly visible place, but without foisting them onto any one individual, you actually help
restore your whole environment to sanity while saving your own butt.

Individual Responsibility and Leadership


Mappers are used to sharing and aligning their mental models. They can then easily refer to aspects of
those models in casual language to increase mutual knowledge. They also put emphasis on doing things
optimally, and seem to be more comfortable with the win/win model of co-operation rather than
win/lose.

All these factors lead to a general tendency that emerges when mappers get together to share problems
and solutions and educate each other. This co-operative tendency is an important part of the hacker
culture.

The simple fact is that the techniques of mapping, particularly mapping in a given domain, are a craft art.
Whatever we do to quickly upload new languages and notations to programmers' brains in formal
courses is only ever the icing on the cake. The real training happens on the job, as experienced people
show newcomers techniques they may find useful. The newcomers themselves then evaluate what to use
and how to use it, in the light of the state of the discipline as they enter it. This is one way that our field
evolves quickly.
We can either work with this fact and foster it, to take control of our own development, or we can ignore
it and play with `skills summaries' that list programming languages. We propose that a sensible way to
take control has already evolved as a natural result of the problem types. Our industry abounds with
formal but arbitrary categorisations, but the one we are about to offer is informal but real, and already
exists, needing only to be openly considered in the workplace.

Traditionally, those starting to learn artisan skills are referred to as apprentices. They are entrusted with
real work from day one, but always under the close supervision of a more experienced worker. When it
is evident that the close supervision is no longer necessary, the worker is recognised as a journeyman,
who can be trusted to do a good job and guide the apprentices he may need to help him.

Many competent workers enjoy the activities of a journeyman, practicing their craft, and remain as
journeymen for the remainder of their careers. They prefer that another person, perhaps of a different
temperament, takes responsibility for the success of projects. Such a person cannot be created by
nomination. Either the large amount of journeyman skill, drive and insight into the nature of the craft are
there, or they are not. While the development of a master craftsman can proceed with the guidance of
others, it is the new master who must find his own voice. The subsequent regard is fully, and properly,
accorded to the student, not the teachers. To become recognised as a master craftsman, a journeyman
must produce a masterpiece. In this, the craftsman demonstrates his (or her - this is olde language)
ability to create a workpiece of exemplary quality. In olden times, when the work was with material goods,
masterpieces were somewhat overdone, because the new master would wish to demonstrate a range of
skills, and would probably never produce anything as baroque again. The later stuff would be more
directed towards an actual purpose, and so better fitted to its task. Thus the masterpiece was actually the
lowest level of masterly work, not the highest as common usage would suggest! Today, a masterpiece is
a whole system delivered at the quality plateau, and the only difference is that we abhor unnecessary
bells and whistles. The masterpiece is still the first system, and all subsequent ones should be better, as it
is the experience of all good programmers that we are always learning. One of the reasons that it is
easier for programmers to learn from each other is that we are all aware that whether we are at any
moment teachers or students, we will both be in the other position pretty soon.

There are a couple of consequences of recognising the craft model. Firstly, it produces maximal
development and maximal productivity at the same time. The master craftsman controlling a project
must ensure that each team member is challenged within their capabilities, but at a stretch. There is no
shortage of jobs for competent programmers, so finding the challenges is not a problem. This requires
the worker to make an effort, which not only pays direct dividends in the care that is taken, but also
ensures that resources are actually used at the margin of efficiency, which is what the accountants want
to achieve, but cannot within a procedural model that copies a packer repetitive industry. No two
cathedrals or systems are alike.

Another consideration that all programmers already know but which is worth repeating in a win/lose
packer society is that the fear of teaching one's self out of a job that worries so many professionals just
does not apply to us. We are at the beginning of a new cultural Age. Look about you and see if you can
see any way society could be made more intelligent. Have you ever tried to buy a house? There will be
no shortage of work for programmers for a very long time, and if there ever is, well the robots will be
doing everything and we'll just program cute graphics for running at our saturnalias.

The False Goal of Deskilling


This section will explicitly make a point that has been mentioned several times in this course, because it
is an essential distinction between the mapper and packer views of the workplace.

The packer worldview is not natural - it has to be trained into a child instead of the development of the
child's natural exploratory faculties. It was probably a low-cost form of minimal education and maximal
organisation, from the beginning of the Agrarian Age to the end of the Industrial Age. In it, humans
perform repetitive tasks in the material world.

The mapper worldview utilises and develops the natural human mental faculties for exploration of ideas,
and is the unique preserve of humans in the Information Age.

Programming is a mapper activity. If we really had to repeat the writing of the same program over and
over again, some bright programmer would produce a COTS product, and that would be the end of the
repetition.

The traditional packer view towards any job is to assume that it is therefore repeated, demeaning labour,
and figure out how to do it as simply as possible, optimise this, and if not actually automate it, deskill it
to reduce labour costs.

To assert that the packer strategy that works for bales of hay and cogs works for all jobs, just so long as
many people are engaged in them, denies the movement from a brute material economy, where a human
is a poor substitute for a horse or a steam engine, or even a numerically controlled machine, to an
information economy where menial labour is less necessary than understanding or creativity.

The film Saturday Night and Sunday Morning begins with a mechanical engineering worker having a
boring time mass producing components on a machine tool. `Nine hundred and ninety-eight... nine
hundred and ninety bloody nine...' he complains. Grim though its appearance was, this was actually an
age of remarkable enlightenment compared to modern times. In those days, the managers paid workers
by the workpiece produced, devolving real power and incentive to optimise to the worker. In a software
context, we should not attempt to control every aspect of the typing in of 1,000 lines of identical code
every day, we should be asking why the worker hasn't written a macro.

The deskilling concept runs right through our society, and this makes it pernicious. Muddled thinking can
slide it by you, but any argument that uses it is bogus. Never forget this.

Bearing in mind the impossibility of deskilling programming, we can examine a couple of myths. Even in
the realm of material mass production it is hard to compare like with like. Sure, we can compare this
month's production of motor cars with last month's, and see if we are doing better. But last year? We
were using three different kinds of light cluster units then, and only offered five colours. Ten years ago?
Every aspect of the technology, from anti-lock brakes to engine management systems to air bags to
traffic announcements to fuel composition has changed. Real insight is needed to even tell if we are
richer than our ancestors! Academic attempts to do wage/price comparisons over long ranges fall back
on the time investment required to buy a housebrick and a loaf, because those are just about the only
things you can just go and buy across many centuries! So what on earth can we make of the wonderful
exponential `productivity improvement' curves associated with each new `breakthrough technology' that
will enable you to staff your project with orangutans and get ten MLOCS out of them per second? What
on earth can these curves be comparing in our ever-changing field? One must conclude that there is an
awful lot of rubbish being talked, or very statistically shabby work going on.

And what of the `user friendly metaphors' that mean that the orangutans can now do anything they like,
no skills required? We suggest that the true situation is that some sections of the market have been
exploiting the myth that deskilling of complexity management is possible, and have been offering
products that on superficial examination over a short time do in fact seem `easy to use'. The trouble is,
users actually have to do things like configure their IP addresses, firewalls, disks, scanners, printers,
share drives, accounts and so on. At this point we discover that instead of a computer that requires no
skills because it pretends to be another piece of furniture such as a desktop, we have a computer that
relevant computer skills don't work on, because after all, a desktop doesn't need to have its user accounts
configured, so there are no such things as desktop user account configuration skills out there to be made
use of. We eventually discover that even in domestic situations where all one might wish to do is pick a
new IP without reloading the whole machine, shareware systems that admit that they are computers are
more user friendly than the so-called `user friendly' stuff.

Escape Roads
As professional programmers working in real commercial environments we often work under deadlines
within which we cannot guarantee to complete a real quality plateau solution. An important part of the
personal layered process, and of the informal project management plan in sufficiently mature
organisations, is therefore the definition and continual re-definition of our contingency plans.

The most common kind of contingency plan is sadly based in dropping functionality. This is rarely an
efficient way of recovering time, because most of the lower layer functionality usually still needs to be
present to support the reduced application layer stuff, which should be a low-cost activity anyway if the
lower levels are providing the right kind of application layer specialisation.

We suggest that the following approach is much more effective:

● First, get your basic layering right. Get the essence of each layer's API defined.

● Second, invoke Ken Thompson's dictum, `When in doubt, use brute force.' Define a bloated,
high-cost, inefficient, hard-coded and ugly way of providing the functionality within each layer. It
doesn't matter that the whole system might well simply not work if it were actually implemented
that way, because it won't be.

● Third, get on with providing each layer at the quality plateau. Revisit your crude technique from
time to time to add the good bits that you do possess, and fill in the rest using possibly different
crude techniques.

● Fourth, when the difficulties start, make an optimal decision in the light of your customer's short
and medium term needs, your own risk parade, and the time available, as to which bits you will
deliver crudely, and which bits will still be well done.

This approach has the enormous benefit that it enables you to do whatever is the best thing at the time.
You cannot do any better for your customer than that.

When the layers can be implemented crudely, and if you have the code fragments you've written to test
out your OS or specialist library API to hand, you can often actually implement the crude version very
early on. This gives every programmer a common set of test stubs, significantly derisking the
simultaneous construction of all the layers.
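
A sketch of what such a crude version might look like (the name-lookup layer and its contents are
invented): the API is settled, the body is Ken Thompson brute force, and everyone above this layer can
link and run today:

    /* Hypothetical sketch: the layer's API is fixed; the implementation
       is deliberately crude, to be replaced at the quality plateau later. */
    #include <cstring>

    int lookupId(const char* name);    // the agreed API for the lookup layer

    int lookupId(const char* name)     // bloated, hard-coded, and working
    {
        static const char* names[] = { "pump", "valve", "sensor" };
        for (int i = 0; i < 3; ++i)
            if (std::strcmp(name, names[i]) == 0)
                return i;
        return -1;   // the real version will consult the configuration database
    }

    int main() { return lookupId("valve") == 1 ? 0 : 1; }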

New Member Integration


Be kind to new members who are joining a new team. Like everything else in this course, we are not
referring to a wishy-washy sanctimonious `welcome wagon' ritual: we mean something very practical.

The team has a mental model of the job at hand. Share it with your newcomer. Make sure they
understand what Situation Rehearsals are, and attend them. Explain the goal of the project, and then
explain all of the external (customer facing) and internal (mental model) language in use on the project.
Take them through the development environment including tools, configuration management, compilers
and so on. Don't make them ask about each stage.

Don't ever make the mistake of carefully making sure that they have a desk, a chair, a workstation, but
no account or anything to actually do. The worst thing when arriving on a new project is to find one's
self sitting there like a lemon, with each minute stretching into a longer subjective interval than the last.

A very sensible practical idea used very effectively at BT is to introduce a newcomer to an official
`nominated friend'. The nominated friend is a peer who has been on the team for a while, and is
explicitly introduced as the source of information, whom it is `OK To Bother', about the kind of
stuff a new team member needs to know. One of the best things about this approach is that being peers,
the nominated friend will actually know the real answer to questions the newcomer will ask. Paper is
usually in the brown cupboard, but the A3 stuff for the big diagrams is in the green cupboard downstairs.

Some Weird Stuff

Richard Feynman
For any person wishing to carry a mapper's strengths into the workplace, the life and work of the
physicist Richard Feynman is worth studying. He told stories. The Spencer's Warbler was a bird
identified for him by his father. The name was made up. His father then made up names for the bird in
many other languages, and pointed out that young Feynman knew no more than when he started. Rote-
learning names of things means nothing. Only looking at what the bird itself is doing tells one anything
about it.

He was utterly honest and saw through artificial complexity by always insisting on simplicity and facts.
See his personal version of The Challenger Report, contained in his book What Do You Care What
Other People Think?.

He used simple, humorous, curious language, filled with little pictures and enthusiasm. His techniques
for puncturing pomposity were unrestrained.

His Lectures on Computation have recently been published, and are worth reading, as is everything he
ever published, from Six Easy Pieces, to the Red Book Lectures. James Gleick's Genius and the
Gribbins' Richard Feynman are rewarding biographies.

Get hold of his stuff and read it.

George Spencer-Brown
The Laws of Form, by George Spencer-Brown, is a little book of mathematics and commentary that is
described by modern logicians as containing a form of `modal logic', characterised by having the rules of
the logical system applying differently in different places, in a manner defined by the rules of the logic
itself.

From the point of view of a programmer, there are two aspects to this book that will certainly stimulate
thought. In the main text, the author shows how to do predicate logic with just one symbol, offering a
deeper view of `fundamental' logical and computational operations such as NOT, OR, AND and XOR than
one might have guessed existed.
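
Spencer-Brown's own notation doesn't survive ASCII, but programmers already know a cousin of the
idea: the whole of Boolean logic can be rebuilt from the single NAND operation. A sketch by analogy
(this is NAND, not Spencer-Brown's mark):

    // Illustration by analogy: one operation suffices to construct the rest.
    #include <cstdio>

    bool nand(bool a, bool b) { return !(a && b); }

    bool not_(bool a)         { return nand(a, a); }
    bool and_(bool a, bool b) { return nand(nand(a, b), nand(a, b)); }
    bool or_ (bool a, bool b) { return nand(nand(a, a), nand(b, b)); }
    bool xor_(bool a, bool b)
    {
        bool m = nand(a, b);
        return nand(nand(a, m), nand(b, m));
    }

    int main()
    {
        std::printf("%d %d %d %d\n", not_(false), and_(true, true),
                    or_(false, true), xor_(true, false));   // prints 1 1 1 1
        return 0;
    }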

Then there are the notes, simple and profound thoughts that one returns to again and again, often
informed by the technique of doing predicate logic with one symbol, that can be thought of as simply
cutting a single plane into two pieces, so that there are two distinguished things, and thus something to
talk about. For example, the author says,

In all mathematics it becomes apparent, at some stage, that we have for some time been
following a rule without being consciously aware of the fact. This might be described as
the use of a covert convention. A recognisable aspect of the advancement of mathematics
consists of the advancement of the consciousness of what we are doing, whereby the
covert becomes overt. Mathematics is in this respect psychedelic.

Or try,

In discovering a proof, we must do something more subtle than search. We must come to
see the relevance, in respect of whatever statement it is we wish to justify, of some fact in
full view, and of which, therefore, we are already constantly aware. Whereas we may
know how to undertake a search for something we can not see, the subtlety of the
technique of trying to `find' something which we already can see may more easily escape
our efforts.

Or,

Discoveries of any great moment in mathematics and other disciplines, once they are
discovered, are seen to be extremely simple and obvious, and make everybody, including
their discoverer, appear foolish for not having discovered them before. It is all too often
forgotten that the ancient symbol for the prenascence of the world is a fool, and that
foolishness, being a divine state, is not a condition to be either proud or ashamed of.

Unfortunately we find systems of education today which have departed so far from the
plain truth, that they now teach us to be proud of what we know and ashamed of
ignorance. This is doubly corrupt. It is corrupt not only because pride is in itself a mortal
sin, but also to teach pride in knowledge is to put up an effective barrier against any
advance upon what is already known, since it makes one ashamed to look beyond the
bonds imposed by one's ignorance.

To any person prepared to enter with respect into the realm of his great and universal
ignorance, the secrets of being will eventually unfold, and they will do so in a measure
according to his freedom from natural and indoctrinated shame in his respect of their
revelation.

In the face of the strong, and indeed violent, social pressures against it, few people have
been prepared to take this simple and satisfying course towards sanity. And in a society
where a prominent psychiatrist can advertise that, given the chance, he would have treated
Newton to electric shock therapy, who can blame any person for being afraid to do so?

To arrive at the simplest truth, as Newton knew and practiced, requires years of
contemplation. Not activity. Not reasoning. Not calculating. Not busy behaviour of any
kind. Not reading. Not talking. Not making an effort. Not thinking. Simply bearing in
mind what it is one needs to know. And yet those with the courage to tread this path to
real discovery are not only offered practically no guidance on how to do so, they are
actively discouraged and have to set about it in secret, pretending meanwhile to be
diligently engaged in the frantic diversions and to conform with the deadening personal
opinions which are being continually thrust upon them.

As a beautiful summary of the mapper/packer communication barrier that we have discussed at such
length, one can hardly do better than that! Finally, there is a vision of the power of the mapping
cognitive strategy, as it continues to seek for ever deeper structure behind the phenomena it regards,
offered by way of what we get by making a single distinction in the void,

We are, and have been all along, deliberating the form of a single construction ... notably
the first distinction. The whole account of our deliberations is an account of how it may
appear, in the light of the various states of mind which we put upon ourselves.

Elsewhere he says,

Thus we cannot escape the fact that the world we know is constructed in order (and thus in
such a way as to be able) to see itself.

Richness from ultimate simplicity. The limit of complexity cancellation, and the art of using the triangle
of creativity to place the Knight's Fork of our perception at the correct level of abstraction for our
purposes. As programmers, we work in, and by our every deed prove the unification of, exactly the same
creative space as the most abstracted of mathematicians and lyrical of poets. Remembering George
Spencer-Brown, look at this poem by Laurie Lee, and ask if your code has ever drawn structure from
domain, done all that has to be done, and outroed so perfectly?

Fish and Water

A golden fish like a pint of wine
Rolls the sea undergreen,
Glassily balanced on the tide
Only the skin between.

Fish and water lean together,
Separate and one,
Till a fatal flash of the instant sun
Lazily corkscrews down.

Did fish and water drink each other?
The reed leans there alone;
As we, who once drank each other's breath,
Have emptied the air, and gone.

Physics Textbook as Cultural Construct


We are regularly invited to see the world in a certain way, by users who believe they understand their
world, by style and approach gurus, by our own preconceptions. We are continually challenged to see
the world as it is, such that we make its representations in our systems as simple as possible. Just as one
has to see the quality plateau (albeit only once) before one can recognise it, so one has to `Walk around
the side of the Gone With the Windbreak and see how many times they lit the fire'; one has to see a
supposed solid reality questioned, before one can know what this is about.

There can't be much more solid than A Level Physics: anyone who says that that is a cultural construct, a
social agreement between cynical physicists to make the world obscure to civilised people with media
degrees would clearly have to be off their rockers. The strange thing is, some people genuinely do argue
that the laws of physics are made up by physicists rather than discovered, and they should be constrained
to make them up differently!

The real tragedy for these prattling fools is that if only they were to study some physics, they might have
discovered that although the laws of physics were in place long before the physicists that study them and
are quite independent of the opinions of the physicists, the perception of the universe that we draw from
these laws may well be a cultural construct.

To explain this amazing claim, we need to refer to three physicists. Isaac Newton discovered modern
mechanics, and actually recorded his discoveries mainly in Latin prose, not in the symbolic style we
use today. That was invented by the Victorian Oliver Heaviside, and what we usually refer to as
`Newtonian' physics is nearly always in fact the Heaviside rendition of Newton's physics. Richard
Feynman was a physicist of modern times, who attempted to summarise what was known as elegantly as
he could for undergraduates in the Red Books. Where things get interesting is when we compare the
ordering of the tables of contents in the Principia of the genius Newton, the parts of the genius
Feynman's Red Books that were known to Newton, and the parts of Advanced Level Physics by Nelkon
and Parker (the standard British textbook), again that were known to Newton.

Principia

● Newton's Three Laws of Motion
● Orbits in gravitation (with raising and lowering things)
● Motion in resistive media
● Hydrostatics
● Pendulums
● Motions through fluids.

Red Books

● Energy
● Time and distance
● Gravitation
● Motion
● Newton's Three Laws
● Raising and lowering things
● Pendulums
● Hydrostatics and flow.

Advanced Level Physics

● Newton's Three Laws
● Pendulums
● Hydrostatics
● Gravitation
● Energy

What seems to be distinctive about Advanced Level Physics is that its mechanics builds up the
complexity of the equations of Heaviside's system, whereas the two other works are motivated by
different intents.

Newton starts with his Three Laws, while Feynman gets energy into the picture really early and leaves
the Three Laws until later. But once they have defined some terms to work with, both geniuses start by
telling us of a universe where everything is always in motion about everything else, and then fill in that
picture. They do this long before they discuss pendulums, which are arithmetically much easier, but are
a special case compared to the unfettered planets in their orbits.

Advanced Level Physics puts pendulums before gravitation, indeed deals with the hydrostatic stuff both
geniuses leave until very late, before it even mentions gravitation, by which time, we suggest, the
student has learned to perform calculations in exams as efficiently as possible, but has possibly built a
mental model of a universe of largely static reference frames with oddities moving relative to them.

Algebraically inconvenient though it may be (and while Newton's prose might not be influenced by
algebraic rendition, Feynman obviously had to consider it), both geniuses want to get the idea that
everything moves, in right at the start.

Might it be possible to learn even physics the wrong way, and end up able to do sums concerning the
goings on within the universe, but still with a warped and confused view of it?

Are Electrons Conscious?


In The Quantum Self, Danah Zohar considers some questions relating to the nature of consciousness.
One idea from consciousness studies suggests that the phenomenon of consciousness emerges from
complex relationships between things that are not, in themselves, conscious. This raises the question of
how little consciousness one can have. Can an electron, jigging about and doing its mysterious, wavicle
thing, be a little bit conscious?

We have raised Zohar's question not to attempt to answer it directly, but to try to approach it from
another direction. And as with all this `Weird Stuff', the intent is not to provide information, but to
demonstrate just how close the day to day work of a programmer really is to the highest arts and the
deepest mysteries.

We will start by doing you the courtesy of assuming you are conscious. Imagine you make a study of
synchronised processes sharing resources. As a good mapper, you research the literature, and
contemplate what others have said. You also try some experiments yourself. Pretty soon you start to see
the deep invariant patterns, both successful ones and unsuccessful ones. You come to realise that a
potential deadlock situation is a potential deadlock no matter how it is decorated with complexity. You
also come to recognise a potential livelock when you see one.

For those readers that have not made this study, please note that you should, as too many programmer
hours are wasted on this stuff; but here's a summary of deadlock and livelock. A deadlock arises when
two (or more) processes end up halted, mutually waiting on each other. For example, one process might
acquire exclusive access to the customer database, while another acquires exclusive access to the stock
database. Then each process attempts to get exclusive access to the database it hasn't got. Neither
process's request can be fulfilled, because the other process already has the exclusive access requested.
So the database manager just leaves both calls pending, both processes asleep, until the requests can be
fulfilled. Of course, this will never happen, because neither sleeping process can relinquish the database
it already has, so both sleep forever. The easiest way to avoid this situation on a real project, incidentally,
is not particularly clever. The word `customer' sorts before the word `stock', so make it a mass drinks-
buying offence ever to acquire the stock database before the customer database, even if this means that
situations emerge where one already has access only to the stock database, and so one has to relinquish
stock, acquire customer, acquire stock. It's worth it, and let's face it, either access will be granted
instantly or some other necessary process will get in there and the cycles will be used well.
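
To make the ordering rule concrete, here is a minimal sketch in Python, using threading locks as
stand-ins for exclusive database access (the customer_db and stock_db names are purely illustrative):

    import threading

    # Stand-ins for exclusive access to the two databases.
    customer_db = threading.Lock()
    stock_db = threading.Lock()

    def deadlock_prone():
        # If another process takes customer first while we take stock
        # first, each of us can end up asleep forever on the lock the
        # other already holds.
        with stock_db:
            with customer_db:
                pass  # work on both databases

    def well_behaved():
        # The ordering rule: `customer' sorts before `stock', so it is
        # always acquired first. If we already hold stock and discover
        # we need customer, we relinquish stock, acquire customer, then
        # re-acquire stock.
        with customer_db:
            with stock_db:
                pass  # work on both databases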

A livelock is a kind of variation of a deadlock where (for example) each process returns with a failure
code instead of sleeping, and tries to help by relinquishing the resources it has got and then carrying on
with its shopping list. So both processes chase each other's tails until one or the other manages to get
enough cycles in one go to acquire both resources at once, and break the cycle.
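
Again a minimal sketch in Python, with threading locks standing in for the shared resources (the
lock names and the random pause are illustrative):

    import random
    import threading
    import time

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    def livelock_prone(first, second):
        # Instead of sleeping on the second resource we fail, give back
        # what we hold, and go around again. Two processes doing this
        # can chase each other's tails indefinitely.
        while True:
            first.acquire()
            if second.acquire(blocking=False):
                # ... got both resources, do the work ...
                second.release()
                first.release()
                return
            first.release()
            # A short random pause breaks the symmetry, so one process
            # eventually gets enough cycles to grab both locks in one go.
            time.sleep(random.random() / 100)

One thread would call livelock_prone(lock_a, lock_b) while another calls livelock_prone(lock_b,
lock_a); without the pause they can circle for a long time.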
So now you know livelocks. From bitter experience you know livelocks, and you recognise a potential
livelock when you see one. Now imagine you are planning to meet a friend. You aren't sure which of
two bars you will want to meet in, because one or the other is always lively when the other is like a
morgue, and you can never tell which way around it will be. You don't know which of you will arrive
first. The two bars are on opposite sides of the same city block. Of course, you know livelocks. As a
furry animal running around planet Earth you aren't going to have your, say, opportunities to mate,
reduced by a stupid livelock wherein you both chase around in circles between the two bars looking for
each other. When you make the date, you are the person who says, `And if you want to check the other
bar, walk around the river side of the block so I'll see you if I pull the same trick!'

That's you. It's the kind of person you are. The person you are going to meet has already been attracted
by this simultaneously imaginative and sensible aspect of your character, and approves the plan.

So what we understand and what we are are intertwined. When you understand livelock, understanding
of livelock becomes a part of your consciousness - the awareness that this universe does that kind of
stuff, so you deal with it.

Now imagine that you are asked to look at the information flows around a major corporation, and
propose a network management algorithm that optimises corporate bandwidth. You perform a mapper
study, as you did with livelock, and eventually you experience insights (problem quakes) that allow you
to see an elegant, robust and extensible network management strategy.

Now this strategy, just like livelock, is a part of you. When you see bits of the problem repeated
elsewhere, bits of your strategy will be obviously applicable, although at the time, you may swear blind
that `It's just so!', and be unable to say why. So when you subsequently capture your elegant, succinct
understanding in a programming language and set it running, to what extent is there a copy of a little bit
of you running the corporate comms, 24 hours a day?

This is a deep question, and not at all easy to understand. To see it explored somewhat, look at Marvin
Minsky and Harry Harrison's science fiction novel, The Turing Option.

For the traditionally philosophically minded, we might make an additional observation in this regard.
Usually the essential, such as the Platonic abstraction of `two-ness', is never seen directly, but only
through the phenomenal, such as two dogs, two legs or eyes. It is usually considered that the essential in
some way precedes the phenomenal, because the abstraction of two-ness remains even when there isn't a
pair of anything in view. The phenomenal is usually, if covertly (in Spencer-Brown's use of the word),
seen as proceeding from the essential.

Now consider what happens in the writing of a one-bit program. The triangle of creativity, comprising
problem dynamics, system semantics and desire, is certainly phenomenal, because it takes place in the
head of the programmer, who has to be actually and physically in existence. However, the triangle of
creativity leaves as its product the Knight's Fork, which is an essential mapping of problem dynamics to
system semantics. The Knight's Fork, which is essential, in this case is in the image of, and proceeds
from, the triangle of creativity, which is phenomenal. Could this reversal of the usually accepted
direction of ontological priority be connected with the strange way that a ROM chip gets a peculiar kind
of negative entropy added to it as it passes through our hands?

Teilhard de Chardin and Vernor Vinge


Pierre Teilhard de Chardin was a palaeontologist and Jesuit who wrote The Phenomenon of Man in the
mid-1950s. By deducing a pattern from fossil evidence and filling in the black-box properties of the
parts of his model that he didn't understand with semi-allegorically, semi-religiously worded
speculations, he arrived at an unusual view of evolution that proposed a predictable direction of its
future course. Although Teilhard de Chardin's thought was very peculiar at its time, his ideas have been
sliding towards the centre of some people's view of what is happening with technology at the moment
and the universe in general. The work hasn't changed, it's just that we are picking up evidence suggesting
that the mental model of evolution that it proposes happens to be close to the truth.

Teilhard de Chardin identifies a raising in complexity of forms, first with the aggregation of atomic
matter in the formation of planets (geosphere), then upon the geosphere the appearance of life
(biosphere), then the development by life of consciousness. He suggests that the next stage is the
interaction of conscious units to create a `noosphere', which will be a whole new ballgame using the
underlying minds as a platform, as the minds use the brains and the brains use the molecules. The
behaviours and relevant environmental influences of minds, brains and molecules are totally different,
and we can expect the next stage to be no different.

He suggests that there does not have to be any coercion involved in the necessary adoption of co-
ordinated states by enough individual minds for an aggregate identity to form - perhaps this is what we
see in a `gelled team', which shares a mental model about what the hell is going on. He proposes that the
ultimate confluence will be what he calls the Omega Point, where co-ordinated interaction of the
constituent minds of the noosphere overwhelms non-coordinated action and a new state emerges.

He was not without his critics - Sir Peter Medawar wrote a scathing attack that focussed on the language
changes at the interfaces between the solid evidential parts of the argument and the processes of
unknown mechanism fitted in between them. In particular Medawar became very excited about Teilhard
de Chardin's use of the word `vibration' where it was clear that the words `coupling' or `constraint' could
have been used, and might not have excited Medawar quite so much. The trouble is, mappers have to
work with things they don't understand, so the language inevitably gets a little fluffy in places. That's
where new theories come from (and one might say that a program is the programmer's theory of the
problem domain). Unfortunately this kind of language drives some people crazy, even though most of
the good stuff has some of it kicking around, if only in the form of saying that things `want' to do this or
that, and filling in the unknown mechanism with an anthropomorphism that is just as silly, applied to an
electron, let alone an ant, as proposing an `ineffable spirit', but is for some reason more acceptable.
For an extreme example, recall Newton's own complaints about the bits he could see were missing from
his physical picture, but could not explain the mechanism of (which was the whole point, of course).

Of course, it's a well known fact that Newton spent much of his life `messing around with theology'!

Vernor Vinge is an Associate Professor of Mathematical Sciences at San Diego State University, and
one of the best science fiction writers around. In his famous `Singularity Paper' (use the WWW) and the
SF books Across Realtime and A Fire Upon the Deep, he proposes that the intelligence of beings on this
planet will increase, either by improving human brains genetically, or by giving them hardware
enhancements, or by building new trans-human computer architectures. After this, networking and a
new agenda that comes from seeing more will create a world that we are inherently incapable of
imagining in our current state.

There is a striking similarity between the ideas of Teilhard de Chardin and Vinge, only by moving
evolution into the fast-burn of software, we shrink the millions of years of organic evolution required by
Teilhard de Chardin for the construction of the noosphere to the thirty proposed by Vinge.

But don't take our word for any of this stuff - check it out, see if it gives you a new perspective on what
the universe is doing when you are programming, and above all, think about it if only for practice!

Society of Mind
Marvin Minsky proposed in The Society of Mind that the phenomenon of human consciousness emerges
from the interaction of numbers of unconscious processing agents that run like co-processes in the
brain, each with its own triggers and agendas. The agents are then connected up and arbitrated via a
`netiquette' that allows them to determine the course of action the organism as a whole will take. When
we feel ourselves exercising free will in pursuing our whims, we are in fact simply enacting a decision
that has already been arrived at by the collective of agents. The model certainly has its attractions, and
gives a basis for the drives that we use our creativity and intelligence to satisfy, but doesn't seem to
give a useful description of the creativity and intelligence themselves. With these generalised cognitive
faculties, the brain seems to be used as a directable general purpose pattern recognition device whose
internal representations are coupled to the sensory components indirectly, at least such that the abstract
and the concrete can be considered in the same terms.

The relationship between the society of mind model of cognition and motivation and the general purpose
faculties mirrors the relationship between what we have called the packing and mapping strategies, and
there is a further parallel with two simple approaches to managing data in computer system design.

Hash buckets operate by abstracting some sort of a key out of the data - perhaps by taking a 20 character
name field and adding up the numeric value of all the characters. That number can then be used to index
into a table and find the full record. Real hashing algorithms are designed to maximise the spread in the
resulting number from typical input data, and must cope with situations where the hash bucket is already
full by putting references to several records in it, such that retrieval involves then checking the full
key on each record in the bucket. Hash buckets are often very effective in simple situations, and are
reminiscent of packing, where some abstraction of the situations encountered is used to trigger
`appropriate action'. In packing, hash collisions seem to be poorly handled. They will not even be
noticed unless one or more participants suffer short term loss due to `appropriate action'. Then an
`argument' will ensue, where one packer points to one way of abstracting the hash key from the situation
and argues that it is `the case', while another will point to another hashing algorithm and argue that no,
their way is `the case'. This is not productive and shows a breakdown of the strategy above a certain
level of problem complexity, where we are just trying to cram too much variation into too few hash
buckets and have not developed the skills to do the significant amounts of full key examination that are
then necessary.
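
As a minimal sketch of the mechanism just described (the 16-bucket table, the record shape and the
character-summing hash are purely illustrative):

    TABLE_SIZE = 16
    table = [[] for _ in range(TABLE_SIZE)]

    def hash_key(name):
        # Abstract a key out of the data: add up the numeric values
        # of the characters and fold the sum into the table.
        return sum(ord(c) for c in name) % TABLE_SIZE

    def store(record):
        # A full bucket simply holds references to several records...
        table[hash_key(record["name"])].append(record)

    def retrieve(name):
        # ...so retrieval must check the full key on each record in
        # the bucket.
        for record in table[hash_key(name)]:
            if record["name"] == name:
                return record
        return None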

Object models allow the data structures held in the computer to grow in complex and dynamic ways,
constrained to the semantics of the modelled objects. The shape of the whole data structure can change
completely during processing, and retrieval always stays `natural' in that the data are where they `ought'
to be - they are all directly associated with an appropriate other datum. Hence there is no complexity
introduced by a foreign algorithm such as hashing to be cancelled by something else such as exhaustive
key comparison. Above a certain level of complexity, object models are more suitable than hash
buckets, but there is no doubt that they are actually harder to implement. The reason why we can use
them at low cost today is that we get a lot of specialist support from our languages for describing
objects, and our operating systems for free memory management. Object models seem so similar to the
mapper strategy that we have described mapping as the attempt to construct a viable object model of the
problem domain.
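
For contrast, here is a minimal sketch of an object model, with illustrative Customer and Order classes:

    class Order:
        def __init__(self, item, quantity):
            self.item = item
            self.quantity = quantity

    class Customer:
        def __init__(self, name):
            self.name = name
            self.orders = []  # each order lives with its customer

        def place_order(self, item, quantity):
            self.orders.append(Order(item, quantity))

    # Retrieval stays `natural': no foreign algorithm to apply and then
    # cancel - we just follow references to where the data ought to be.
    alice = Customer("Alice")
    alice.place_order("widgets", 3)
    print([order.item for order in alice.orders])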

These parallels between functional (society of mind and pattern recognition), subjective descriptive
(packing and mapping) and computational (hashing and object modelling) models of consciousness
suggest that there may even be a neurological correlate to the mapping and packing strategies we have
described. We certainly know that early stimulation of infants causes increased neuron growth and
interconnection in infant brains, and this correlates with higher `intelligence' (whatever that may be) in
adult life. Whatever `intelligence' is, the kind of cognitive and problem solving skills that are tested have
little place in the Taylorist, packer workplace, where the whole idea is to deskill and constrain
behaviour.

Perhaps the question at the start of the Information Age is: `What part of your brain is it appropriate to
use at work?'

Mapping and Mysticism


Right at the beginning, we looked at two different ways of going about solving problems. Packing was
characterised as a socially conditioned habit of accreting `knowledge packets' that specify `appropriate
action', and not examining or reconfiguring the relationships between the knowledge packets. The
strategy degenerates into kludging reality to fit the known packets and blaming luck when things go
wrong. Mapping on the other hand, involves putting investment into building an internal object model of
the world as it is perceived and getting leverage by identifying deep structure. Mapping can be
developed by learning techniques that help the exploration of conceptual spaces and help one recognise
what it is that one is actually seeing happening in front of one's own eyes, by recognising the deep
structure patterns in the goings on. Mappers can respond flexibly and are the only people in a position to
propose new approaches. They can learn vastly more quickly than packers, and unless they are seeing
deep structure, they are seeing as yet unsolved mysteries. The experience of mappers and packers may
be quite different, in exactly the same circumstances.

Mapping is the natural state of people, and everyone is a mapper at heart. Unfortunately, societies the
world over developed an alternative, which we have called packing, possibly at around the same time as
we discovered agriculture approximately 6,000 years ago. It could not have been before that - a pre-
agrarian packer confronting a wild animal on the hunt could hardly have fared well by sticking his nose
in the air and claiming that the animal was failing to follow procedure!

The alternative involves convincing people that good living consists of following prescribed procedures,
and suppressing any alternatives. It must have brought benefits to new societies based around the raising
of crops, where significant tedious work must be done in the fields, and if things are tight, the only thing
one can do is plod, plod, plod, until harvest when more crops will be available. Packing thus involves
socialising the young into the packer mindset, and constructing a society where reality consists of the
packer approach and a set of knowledge packets, and nothing else. Any person who suspects that there
may be other ways of looking at things is then at odds with every member of the society he or she finds
themselves in, at odds with the inefficiencies of packer society and the ritualised manner that even social
occasions are brought down to. The dissenter may be ascribed weird properties such as magical powers
if they are lucky enough to implement some common-sense ideas, or madness if the surrounding packers
manage to sabotage their deviation in time. Most people would not even believe that any way of
approaching the world other than packing could even exist.

Today, there is little call for stoop labour in the developed world, but a significant need for fully aware
people to create the new programs that will run our automation. Only natural mappers have the pattern
recognition skills essential for writing computer programs.

For much of its existence, the packer strategy has probably served its users pretty well, keeping order in
the fields and early factories, and ensuring that the simple manual labour that was essential to survival
was performed. In the subsistence conditions that prevailed, perhaps the literary arts could have been
better served, but since the invention of the printing press, there have actually been more poets than
printing presses, so perhaps there has not even been a great cost there. But by the start of the 20th
century, the age of industrialisation had made packing a dangerously inefficient strategy. We were just
too wealthy. Our engines could allow us to do things undreamed of by previous generations, and we
needed understanding to guide our use of them. Trapped in a packer mindset, and in possession of
knowledge packets inappropriate to industrialised societies, Europe was torn apart as millions went to
war with internal combustion engines, tracked vehicles, barbed wire, machine guns, mustard gas,
aeroplanes and other equipment that distorted the pre-industrial knowledge packet that `War is an
extension of diplomacy by other means' out of any reasonable diplomatic definition of objectives, on
grounds of cost alone.

Now the cracks are really starting to show. We have achieved the dream of ages, and have abandoned
the need to work for millions, freeing up their time to do what they would wish. Yet we see this as
unemployment, and furthermore we keep millions working away in what used to be low-overhead jobs,
just manipulating the tokens of an agrarian economic system. There are so many of these non-productive
jobs that it is actually hard to see it, but every supermarket checkout operator, bank cashier, ticket
inspector, financial advisor, tax collector, accountant and on and on is in fact engaged in non-productive
labour. Only a tiny fraction of the population are doing any work necessary for the maintenance of our
material lifestyles, yet still we believe ourselves to be in scarcity!

Even the currently visible stresses, far worse than any in history (packing always leads to a kind of
stupidity when decisions about unusual circumstances have to be made), cannot point the way back to
mapping in packer language. One might say that it is a function of packer language, evolved over
millennia, to prevent the discussion of mapping! So in times gone by, returning to mapping must have
been very rare indeed. If the effectiveness or acceptability of a mapping approach to a problem is a
matter of opinion, the opinion of the majority of packers will always be that it doesn't matter that the
result was obtained, because it wasn't done `properly'.

Only today is there a real opportunity for an individual to practice mapping and get the realistic feedback
that is essential for learning. That is because only mappers can program computers. If a person, still in
the packer mindset, follows the procedure and translates a requirement, the result is likely to be a mess.
At this point, he or she could blame the compiler, the operating system or the user, but possibly might
just recognise that the computer really is, with utter faithfulness, reflecting what it has been told. So the
individual can accept that it is they, and they alone, that must understand the problem dynamics and
system semantics. Here begin many late nights, and the opening of the road to really thinking, rather
than performing the aberration that is packing, and which the packer majority calls normal.

From this perspective, it's interesting to look at several strands of previous thought that attempt to
describe the experience of mapping in cultures where one cannot just say `The program works!', and
have a very strong argument on one's side. These strands speak in language that mappers can understand
in terms of shared subjective experience of playing with representations of reality in one's head until
they are correct enough to be useful, and which packers cannot understand at all. Perhaps it is not
surprising that many great programmers have interests in these strands of previous thought.

We have already discussed the nature of alchemy as an internal journey that changes the operator's view
of the world - the basic technique of mapping. Alchemical traditions likely spread into Europe from
Moorish Spain through people like Roger Bacon.

In In Search of the Miraculous, PD Ouspensky records some conversations with GI Gurdjieff, which
took place in Russia in 1915. A strange figure who made a remarkable impact, Gurdjieff said that he had
spent many years studying mystical traditions. Ouspensky records,

In all there are four states of consciousness possible for man... but ordinary man... lives in
the two lowest states of consciousness only. The two higher states of consciousness are
inaccessible to him, and although he may have flashes of these states, he is unable to
understand them, and he judges them from the point of view of those states in which it is
usual for him to be.

The two usual, that is, the lowest, states of consciousness are first, sleep, in other words a
passive state in which man spends a third and very often a half of his life. And second, the
state in which men spend the other part of their lives, in which they walk the streets, write
books, talk on lofty subjects, take part in politics, kill one another, which they regard as
active and call `clear consciousness' or `the waking state of consciousness'. The term
`clear consciousness' or `the waking state of consciousness' seems to have been given in
jest, especially when you realise what clear consciousness ought in reality to be and what
the state in which man lives and acts really is.

The third state of consciousness is self-remembering or self-consciousness or consciousness
of one's being. It is usual to consider that we have this state of consciousness or that we
can have it if we want it. Our science and philosophy have overlooked the fact that we do
not possess this state of consciousness and that we cannot create it in ourselves by desire
or decision alone.

The fourth state of consciousness is called the objective state of consciousness. In this
state a man can see things as they are. Flashes of this state of consciousness also occur in
man. In the religions of all nations there are indications of the possibility of a state of
consciousness of this kind which is called `enlightenment' and various other names but
which cannot be described in words. But the only right way to objective consciousness is
through the development of self-consciousness. If an ordinary man is artificially brought
into a state of objective consciousness and afterwards brought back to his usual state he
will remember nothing and he will think that for a time he had lost consciousness. But in
the state of self-consciousness a man can have flashes of objective consciousness and
remember them.

The fourth state of consciousness in man means an altogether different state of being; it is
the result of long and difficult work on oneself.

But the third state of consciousness constitutes the natural right of man as he is, and if man
does not possess it, it is only because of the wrong conditions of his life. It can be said
without any exaggeration that at the present time the third state of consciousness occurs in
man only in the form of very rare flashes and that it can be made more or less permanent
in him only by means of special training.

This certainly sounds like packing corresponds to the second state, mapping to the third state, and
whatever happens in a problem quake to the fourth state. Happily the difficulties Gurdjieff described are
greatly mitigated today by kind employers who are willing to pay us high salaries to sit in front of
training machines all day. If we can adopt third level language at work instead of second level, we will
be able to repay these kindly people by writing lots of nifty computer programs for them.

We should say in fairness, that while much of In Search of the Miraculous is directly accessible in terms
of the mapper/packer model, much is not. There is also a system of `Hydrogens' that seems to be utterly
unconnected to particle physics, which supposedly describes the structure of the universe. It does
however bring fractal structure and attractors to mind, and purports to be a world-view that enables an
individual to enjoy vastly increased options by `freeing himself from general laws' in a fashion not
amenable to reductionist description. We can't make head nor tail of this stuff, but having seen mappers
and packers in the world only after finding them amongst the computers, we suspect it may be worth...
contemplating.

In Islam, there is the concept of two Korans. There is the written Koran, recorded by the Prophet at the
command of God, and the manifest Koran, which is the world about us created by God. It is the duty of
every person who enjoys the luxury of improving himself by spending his time studying these works of
God, to pass on his findings in a manner accessible to all. Perhaps this beautiful idea, which allows the
student to acknowledge his ignorance by setting up a hopeless direct competition with God that
everyone is bound to lose anyway, and then teaching that it is the student's spiritual duty to reduce this
ignorance, might have something to do with Islam's staggering contributions to our field. We all know
where algebra and algorithms came from!

In China there is the ancient Taoist tradition, which also suffers from a communication problem - the
Tao Te Ching begins,

The Tao that can be told is not the eternal Tao.

Taoists concentrate on finding the deep structure of the deep structure, and obtaining maximal leverage
by `right action'. A Taoist does not limply `go with the flow', he has a clear (and hence non-
contradictory or perverse) understanding of what he wishes to accomplish, and looks for the right point
to apply influence, by looking at the structure of the interconnected phenomena that he is interested in.
The right action might then be a swift kick at just the right place! In common with all mystical
traditions, Taoists have no time for pomposity whatsoever.

When Taoism met Buddhism, Zen appeared. From the mapper/packer perspective, Zen might be
described as a specialist set of mapper techniques and building blocks, that allow exploration of deep
structure that is often counter-intuitive to someone afflicted with the packer mindset. When Zen asks,
`What is the sound of one hand clapping?', it is saying that the clap is to be found in neither the left
hand, nor the right, but in the interaction between them. Many great programmers, especially Artificial
Intelligence workers, love tickling themselves with Zen koans.

Alchemy, Taoism and Zen are all mystical teachings that have no supernatural component to them at all.
They discuss the state of mind of the practitioner, and thus increase the available options by removing
the rigid mass of preconceptions that packing produces. As Kate Bush (another favourite amongst
programmers) put it,

Don't fall for a magic world
We humans got it all
Every one of us
Has a heaven inside.

But despite their practical emphasis, they all have to use allegorical language to discuss the subjective
mapper experience. Ancient allegorical language that makes no sense at all to packers is easily mistaken
for religion, and in the 19th century other workers attempted to erect strictly secular descriptions of what
is going on.

The philosopher Friedrich Nietzsche ran up against the mapper/packer communication barrier in a big
way, and caused great excitement amongst his local packer community by declaring that the Superman
was not bound by mere laws. He died in a mental asylum, but not before making a significant impact
upon philosophy.

Nietzsche was concerned with the difference between a person who has reached his own potential and
someone who lives in a socialised packer reality. He really didn't like the snivelling, envious, spiteful,
small minded common man that he held up against his Superman at all. He has been getting renewed
interest from people involved in TQM recently.

Sigmund Freud interviewed large numbers of Viennese middle class women and came up with an
original psychoanalysis that included an idiosyncratic view of human motivations and preoccupations.
Not all of his successors have retained the preoccupations, but his concept of `alienation' has stood. This
is a situation where a person plays a role instead of behaving `authentically', and is therefore divorced,
alienated from, his comrades, who are also playing roles. Eventually the exterior, bogus reality becomes
the world-view of the person, such that he becomes alienated from himself and can no longer identify
and address his own desires and concerns.

Soren Kierkegaard was worried about how we can know anything at all in the madness that surrounds
us, and created the philosophical position of existentialism, where the value and meaning of an act can
only be evaluated by the actor, based on the information to hand. This kind of social relativity certainly
decouples the individual from the group, in which condition self-censorship from mapping may be
avoidable, and one can wear black and brood a lot. It does however insidiously suggest that there is no
such thing as objective, external reality (or if there is it doesn't matter because no-one knows what it is).
This is liable to abuse, because it means that standing on the corner and making faces is as worthwhile
an occupation as relieving terrible suffering or building houses, if the idiot says it is. This aspect of
existentialism is contradicted by the mapper experience, which leads mappers to believe that there is an
external reality, of great subtlety, and although none of us has yet appreciated it in all its wonder, if one
of us discovers a phenomenon, it will eventually prove compatible with any other phenomena we have
discovered. In this sense, the external reality is important even if it is not perceivable.

Kierkegaard was followed by Jean-Paul Sartre, who wrote of the condition of the members of a society
that denies itself, and RD Laing, who took existentialist ideas into psychiatry, where he saw whole
families colluding in maintaining one individual, who had been identified as `schizophrenic', in a
condition of complete confusion, as they spent a significant amount of their resources, both financial and
lifetime, in protecting the packer reality of their less than happy families against the threat of the mapper
that has appeared in their midst. From the mapper point of view, the `patient' is in the middle of a
complex web of mystification and coercion, distributed amongst the whole family which must be
deconstructed if all are to find happiness. The packer view is that Laing is `blaming' the parents for
`causing the illness'. This means that Laing's work has fallen out of favour in a clinical situation, where
the effective power is in the hands of the patient's relatives (or they wouldn't be a patient). However,
Laing's colleague Melanie Klein, who inspired many of his own ideas, had worked in the industrial
sector, and existentialist ideas succeeding Klein are still of interest in industrial psychology.

Recently Peter Senge of the Sloan Business School at MIT has been writing about Systems Thinking,
which is an approach to problem solving based on forming mental models and taking account of things
like feedback.

When we set out to understand why some people are so good at programming, we knew that the answer
would be interesting, but we never expected to come up with a simple model that could also draw a
unifying theme between so many mystical and philosophical schools. It is probably valuable to have
done so, because with so many apparently different ways of saying the same thing kicking around, the
situation for anyone trying to break out of packer thinking but not realising that stopping stopping
yourself and learning some disciplines is the way to go, is very confusing. One's friends might even
think one had turned into a weirdo!

Mapping and ADHD


There is said to be a disease called Attention Deficit Hyperactivity Disorder (ADHD), which afflicts 3%
of the population. Its sufferers can expect to have a difficult life, handicapped as they are, but with
appropriate drugs and support, they can hope for some integration into society.

In terms of the mapper/packer model, we suspect that ADHD may just be the result of natural mapper
children, effectively being much smarter than their peers, getting into worse and worse standoffs with
the packers surrounding them, as they think harder and harder, trying to understand what the packer
teachers, peers and relatives around them want of them, while the adults see the children as disobedient
or diseased because they do not evidence the necessary dysfunction required to sit repetitively
performing the same simple, pointless, rote activities while not behaving like a herd animal.

How The Approach Developed


The development of this work has in itself been an exercise in mapping, so it will be illustrative to describe
how the picture came together.

The work was motivated by watching what happened as ISO 9001 was rolled out within the computing
industry. It seemed that at best, it ensured that we could be confident that an ISO 9001-certified
organisation was at least off the `Laurel and Hardy' level, where one is capable of losing the source code
of the programs one's customers are running, but did nothing positive to improve the programming skills
of the people doing programming within the industry. There was an incident some years ago where the
employees of an organisation that provided software to drive giant flour mills had to visit a customer's
site on a pretext, pull a ROM and copy it before disassembling the contents and maintaining the
program. No-one that has ever been in such a situation will ever forget it. So ISO 9001 was good, but the
real work needed to go into the `engineering judgement' and `common sense' referred to all over the best
process documents - the bits we couldn't get by aping car factories.

But then we saw that in some organisations, there was an unexamined but almost religious faith that by
reducing everything to simplistic proceduralism, perfection would be attained, and that the metres of
shelfware comprised the necessary simple procedures. With the process around, the limited thinking that
had been going on could be abandoned or better, stamped out, and everyone could run around being
`professional' without actually achieving anything at all. In the old days, at least poor organisations
actually had the source long enough to sell it to the customer and pay the rent!

We needed to find out what real programming is all about, to counter the negative effects of badly
applied ISO 9001 as well as to provide an important ingredient supplementing well applied ISO 9001. On the basis
that there was something missing from the ISO 9001 description of the workplace, and in honour of the
surrealistic London Underground announcement, the working title at that point was `Mind the Gap'.

We started with the observation that there are some programmers who are much better than most, and
that they agree amongst themselves on who they are. They can talk amongst each other about
programming, and although they often disagreed about value judgements, they agreed on a great deal.

Of course, right from the beginning, we had trouble describing what we saw talking to great
programmers, in `management speak'. We spent a long time arguing around in circles, trying to get a
two-dimensional creature into the third dimension, by showing it a series of steps each smaller than the
last. The whole notion was of course flawed, because no matter how thin one slices the step, it is still a
three dimensional object, inaccessible to a two dimensional creature. But we didn't know that then.
At the same time, we were looking at the great programmers' mind-set from within, deconstructing our
own mentation while working, and watching others. This made much better progress, and we identified
the `Artisan Programmer' as a figure more like a craftsman of old than a modern production line worker
very quickly.

We were also interested in the underlying cognitive neuropsychology of the programming act, but hardly
got anywhere at all. We could not find much work tying subjective experience to its platform, and one of
the areas we would have particularly liked to have looked at, gender differences in programming,
seemed particularly sparsely covered. Neuropsychologists commented to us privately that cognitive
gender differences are sometimes hard to research because of political `correctness' considerations in
grant applications. However, in the absence of useful psychological research, we did attempt to
construct an operational definition of a subjective experience. This is what eventually produced the one-
bit program thought experiment, which demolished the external process view utterly, and left us to
concentrate on subjective experience.

Between Spring 1992 and Autumn 1995 we spent our time talking to programmers, and discussing and
contemplating what we had learned. We must have tried hundreds of ways of `telling the story', and
every one of them died on the language barrier. However, we had discovered that the same few issues
kept coming up over and over again, on site after site, and these issues had right answers. These have
been included as Design Principles. We had also discovered that there were some ideas and stories that
we had gathered from great programmers that had a very positive effect on the novices we told them to.
This material has also been included in the Stone.

Then in autumn 1995, Frederick W Kantor's extraordinary work of physics, Information Mechanics,
provided a major inspiration. In it, Kantor throws away all crutches and attempts to build a consistent
picture of physics purely out of information concepts. Perhaps the solution to our problem would be to
throw out all the language we knew didn't work, and use the language we knew did work. Perhaps
through this kind of ontological rigour, we could construct a self-consistent picture, even if it was
divorced from `mainstream' reality, that we could at least see clearly.

Very quickly we focussed on the movement of consciousness, and saw the link to alchemy. Links to
other mystical traditions followed quickly, and we tried using mystically inspired language to novices,
and explained about the circularity of hermetic journeys. We found we could improve the performance of
programmers better than ever before, but we still couldn't explain why in mainstream language. By now
we were calling the project `Deployed Consciousness'.

In summer 1997 we were pointed to ADHD, and immediately recognised in the character profiles of
ADHD children, the great programmers we had been talking to. We could see what the kids were doing,
but it seemed pretty obvious that the psychologists and other professionals dealing with them could not,
or they would be teaching them real stuff like number theory instead of burning them out by prescribing
amphetamines so that they sat down and performed mindless, packer school `work'. This was a great
shock, but it gave us an important clue: there really must be some kind of cognitive blindness that meant
that the psychologists were simply unable to understand the kids, and couldn't even realise that there was
something going on that they couldn't understand.

This showed us why the language problem existed - amazing though it seemed, we had to conclude that
our colleagues really were all in one of two (and only two) possible states, and we could describe the
differences between them. We quickly wrote this up, assuming some kind of underlying black-box
neurology, possibly involving a shift in processing strategy once some resource or other reached a
critical level and made a strategy switch optimal, and distributed it to friends that had been talking with
us about this subject from the beginning.

We received a lot of feedback, most positive, but one comment proved critical. We were asked if there
might be any tie-in between this work and ME (aka CFIDS), the debilitating post-viral disorder that
smashes the lives of so many active, creative people. Many mappers seem to know several people who
had suffered from ME, and we made a list. Yes, they were all energetic thinking people, and not the
brash, anti-intellectual yuppies that were characterised as getting `yuppie flu'. But further, they were all
thinking people whose essentially gentle personalities led them to respond with sadness, rather than, say,
anger or contempt, to acts of gross stupidity thrown at them with all the contempt a packer can muster. Poke
a monkey with a stick for long enough and its hair will fall out. This is a physiological effect of
sustained psychological cruelty. ME had appeared during a period when packer fundamentalism had
broken out all over the developed world, leading to enormous amounts of stupidity and cruelty. ME
might well be an effect. But why just the gentle ones? They were all highly active, being the sort that
would retile the barn because it was sunny, or cycle across Canada to celebrate their recovery, although
they were all daydreamers. Daydreaming couldn't be it anyway, because we are all daydreamers... and
the penny dropped.

The difference between packers and mappers is that packers have been socially conditioned to suppress
their natural faculty for building mental models by daydreaming, and fall back on rote learning
procedural, action oriented responses instead. We could throw away the neurological black boxes, and
just say `daydreaming' to make the bridge to mainstream language. Then the empirical work, as well as
the understanding of the nature of the language problem, all fitted into place.

And that was the journey of exploration we ended up taking. When we started we didn't know what we
would find, but we felt sure it would be worth it. For the first three and a half years we managed to help
a few novices develop but didn't seem to achieve much else. We were gathering material and looking for
patterns.

The overall work took nearly six years, but that is good going for a deep result. If we had never started
we would not have reached the end of our journey, which we can now offer to you to read much more
quickly than that!

Complexity Cosmology
It is the repeated experience of mappers that their high investment in a cognitive strategy that they do
not know will pay off is usually worth it. Why is this? Do we have any pointers to a deep answer to this
question?

One possibility lies in the way our universe seems to have a thing about complexity. We know that from
the earliest moment, structure has been emerging in the universe. We know that the physical constants in
nature are just right for making atoms, stars, planets, complex chemicals. We haven't proven that the
emergence of life was inevitable in the universe, but as we all know by now, put just about anything in a
bucket and kick it the right way, and you'll get self-organisation.

We might take the approach of extending the ideas of Teilhard de Chardin and Vernor Vinge discussed
earlier, and wonder if our behaviour in adding complexity to the universe by writing software is just
evolution, operating at a cosmic scale, upping the rate of change again. Then we might say that we get to
win from two directions - first because the complexity we see has been built up out of simpler layers, so
by drawing our arbitrary system boundaries we will often find opportunities for complexity cancellation
within those boundaries, and second, because by adding more complexity we are just doing what comes
naturally.

Building complexity might be a natural arrow of time quite as much as entropy, and thus inherently
achievable in this universe, for reasons that we do not yet understand. Quite where this is heading we
don't know, but perhaps we have the chance to find out before we get there.

The Prisoners' Dilemma, Freeware and Trust


The Prisoners' Dilemma was extensively studied as a model of first strike nuclear ballistic missile
strategy. In it, two prisoners are held separately, and both are offered the following deal, `If neither of
you confess, you shall both go free. If both of you confess, you will both receive long sentences. If only
one of you confesses, that one will receive a short sentence, but the other will receive a doubly long
one.'

The thing is, unless I can be certain that you won't confess, the best thing I can do is to confess, and
settle for a short or long sentence, avoiding the doubly long one. You feel the same way. So unless
we are both certain (remember the old packer `certainty'), which we cannot be, we both end up with long
sentences where we could have got off with none at all.
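
A minimal sketch of this worst-case reasoning, with illustrative sentence lengths (the deal above gives
no actual numbers):

    # Sentences in years for (my choice, your choice), as described above.
    SENTENCE = {
        ("silent", "silent"): 0,     # neither confesses: both go free
        ("silent", "confess"): 20,   # only you confess: I get the doubly long one
        ("confess", "silent"): 2,    # only I confess: the short sentence
        ("confess", "confess"): 10,  # both confess: long sentences
    }

    for mine in ("silent", "confess"):
        worst = max(SENTENCE[(mine, yours)] for yours in ("silent", "confess"))
        print(mine, "-> worst case:", worst, "years")

    # Staying silent risks 20 years; confessing risks only 10. Without
    # certainty about the other player, minimising the worst case says
    # `confess' - and two players reasoning this way both get long
    # sentences where they could both have walked free.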

This result was depressing during the Cold War, when considerable strategic advantage could be gained
from a first strike. While the game theorists insisted that a double launch was inevitable, the human race,
faced with utter destruction, was able to behave rationally and avoid any kind of nuclear exchange at all,
let alone the Spasm predicted by game theory.

What went wrong with game theory? It comes back to the problem of establishing the certainty, in
packer terms, of the other's certainty that you are certain... it's just too difficult. In order to do it, both
players must be what the theorists called `super-rational' - able to be both rational in themselves, and
rational about the rationality of the other. There didn't seem to be any obvious way to achieve this except
quoting Gandhi to people, which didn't seem too certain to people who were themselves armed with
nukes.

Within the mapper/packer model, things are much easier. If we are both packers, we both get long
sentences. If you are a packer, I must confess, because you will. But if we know each other, it is easy for
us to recognise each other's ability to construct mental maps and reach direct, intimate understandings
of them. If I know you are a mapper, I know you'll be able to figure the trick of the Dilemma, because
it's not a big map. You know I'm a mapper so you will also be able to predict my full understanding of
this simple trick. So we walk away. In other words, we win because the difference between idiots and
sensible people is a discontinuous thing, but only visible to sensible people. To packers it's a gradation
between the insane (mappers who can't think properly and so... er... escape), through people who can
only memorise a few knowledge packets and so are stupid, to Responsible Persons who apply
knowledge packets with robotic precision. Remarkably, although we will kill millions and waste vast
amounts of wealth playing face-saving games to keep our noses properly in the air, as a species we held
off from blowing up the planet. Perhaps it was something to do with there not being anyone left to
impress...

Packer preoccupations with certainty without the vision to get it, coupled with the zero-sum game of
material economics and scarcity, lead to a very constrained set of possible transactions. To do
software engineering we must be mappers, and the Prisoners' Dilemma shows us that we have
opportunities for seeing effective strategies denied to packers. Producing software is a non-zero-sum
game - if I copy your program we both have it. And we are out of scarcity, if only because programming
is well paid. So there are more kinds of transactions open to us than to any other group in history. We
are already starting to see examples in the shared production of standards, commercial organisations
placing key source in the public domain, and in the growth of the freeware market. The people doing
these things are not doing poorly out of it - the benefits of leadership often outweigh the costs of giving
something away that you've still got anyway. Sound business judgement consists of correctly evaluating
this new business environment, and unless we do this, we will incur opportunity costs while competitor
organisations are doing it for themselves.

The software market is liable to remain interesting for quite some time!

Predeterminism
In The Structure of Scientific Revolutions, Thomas Kuhn introduced the concept of a paradigm - an
underlying theory of the world that one doesn't even recognise as a theory but instead calls `reality'.
Whole societies share paradigms, and they can have an extraordinary effect on the behaviour of a
society's members. There was once a philosophical paradigm called `predeterminism'. It said that God had
everyone's life planned out at birth, and the trials that one was subjected to could not be avoided because
they were the will of God and so must be borne with good grace. Then there was a religious debate, in
which the point that this contradicted free will won out, and predeterminism bit the dust.
This was good news, because the thing about predeterminism is that people don't do much. If everything
is down to God's will, our puny efforts won't count for much.

With predeterminism out of the way, we were free to believe we could have some control over our fate,
and so we did.

Ever since then, we've been waiting for the other shoe to drop. Although we believe that results are
possible, and so we make efforts to better our lives, most of us still don't believe that understanding is
possible, so we don't make efforts to understand. Now that our automation has both made understanding
necessary and proved it possible, we have an opportunity to enter a new age of human experience - the
true Information Age.

This file last updated 10 November 1997


Copyright (c) Alan G Carter and Colston Sanger 1997

The Programmers' Stone

Stoned! Sites, User Reports, Additional Materials, Links and References

Additional Materials
Knowledge Autoformalisation One contributor's experience of an approach that is very compatible with
the Programmers' Stone.

Extreme Programming Another contributor's summary of the new book.

Unsolicited Testimonial To the power of the Programmers' Stone.

Links
TRIZ A remarkable Russian website describing an approach that complements the Stone. One point -
while the objective laws governing the development of technical systems are quite real, the resultant
algorithm is not proceduralisable in a programmatic sense. The use of the word "imaginative" at each
point in the description of the algorithm recognises this. TRIZ could be used to semi-proceduralise the
ideas in the Stone for introduction to the workplace, and the Stone explores the necessary psychological
development of the TRIZ user.
The Cathedral and the Bazaar Eric S. Raymond's paper on the incredible flexibility and efficiency of co-
operative software development compared to central planning - let alone the commercial dumb
compliance policing model.

Mining Usefulness As opposed to compliance. For example.

The Jargon File The classic celebration of hacker culture, maintained by Eric S. Raymond.

Design Patterns in MFC An interesting study of the design patterns that can be seen in the MFC and
other graphical toolkits.

References

Adams, Scott

The Dilbert Future

Boxtree
ISBN 0-7522-1118-8

Very funny and perceptive. A lot of nonsense is talked about Adams. Some say that he has failed to
champion the cause of cubicle dwellers. As far as I know, he has never claimed to be the cubicle
dwellers' champion - just a very funny cartoonist. Others say that he is a terrible, cynical person. This is
because he documents workplace stupidity with staggering accuracy. All of the pomposity, dishonesty,
bullying and ritualism is there. The end section of this book, about affirmations etc. should make your
hair stand on end.

Brooks, Frederick P.

The Mythical Man-Month

Addison Wesley
ISBN 0-201-00650-2

Generally recognised as the most sensible guide to running practical, effective software projects,
Brooks' every observation seems to have been thrown out by the ISO9001 ritual fixing zombies. This is
why commercial software production is stagnant.

DeMarco, Tom & Lister, Timothy

Peopleware: Productive Projects and Teams

Dorset House
ISBN 0-932633-05-6

Common sense observations regarding making effective software projects. The best bits are the railing
against open-plan offices. In Reciprocality, open-plan can be seen as desirable because ritual fixers love
to regard one another's ritualised movements all day, and the endlessly ringing phones don't cause a
problem, because no-one thinks anyway. Also look out for the comments on "jelled teams" and
"professionalism" which is exposed as a euphemism for smirking pomposity.

DeGrace, Peter & Stahl, Leslie Hulet

The Olduvai Imperative

Prentice Hall
ISBN 0-13-220104-6

The authors set out to write a book about CASE tools, and discovered the vast spaces waiting to be
explored when we ask what we are really doing when we make software. I don't think the "Greeks vs.
Romans" split they propose works too well, but they do introduce the idea that there are two distinct
approaches.

Feynman, Richard P.

Feynman Lectures on Computation

Addison Wesley
ISBN 0-201-48991-0

All good, but particularly the sections on Charles Bennett and the energy value of information. This
book was stuck in legal wrangles for 10 years, but now we can get Feynman's words on this remarkable
result, so essential in Reciprocality.

Gamma, Erich et al.

Design Patterns: Elements of Reusable Object-Oriented Software

Addison Wesley
ISBN 0-201-63361-2

The book on design patterns. Emphasises the compositional aspects of software design - the bit M0
victims can't do. Very handy on sites where the M0 reductionist misinterpretation of ISO9001 has got
entirely out of hand. You just reference the pattern (by name) in the Architectural Design Document,
and talk about details in the Detailed Design Document. This produces a useful document that doesn't
prevent good composition by requiring the design to fit into an imbecilic, mandatory document structure
created by people who can't understand what composition is, but are determined to stop it!

Goldratt, Eliyahu M. & Cox, Jeff

The Goal

Gower
ISBN 0-566-07418-4

Fairy stories about how our heroes manage to think around M0 and solve problems, instead of being driven off site with their stuff in bin-liners, which is what would really happen.

Goldratt, Eliyahu M.

It's Not Luck

Gower
ISBN 0-566-07637-3

More fairy stories.

Hohmann, Luke

Journey of the Software Professional

Prentice Hall
ISBN 0-13-236613-4

As far as anyone could go towards the Programmers' Stone while retaining the M0 paradigm and language. The closest thing to the Programmers' Stone in print. The Journey of the title is, of course, Hermetic.

Levy, Steven

Hackers

Penguin
ISBN 0-14-023269-9

How the "clearly very stupid" people changed the world. Starring Anukin Gates as the young Darth
Vader. (Fact: In 1978 I bought a Microsoft product called EDAS for TRS-80 Model I. It was such
rubbish I used it to write it's replacement and threw it away. The musicassette tape it came on was too
small to hold anything useful. It's lineal descendent is called MASM.)

Naur, Peter

Computing: A Human Activity

ACM Press
ISBN 0-201-58069-1

Wise words from the dawn of time. How could it possibly be anything other than a human activity? Yet people have forgotten this.

Schwartz, Howard S.

Narcissistic Process and Corporate Decay

New York University Press
ISBN 0-8147-7938-7

Describes M0 in commercial settings using a Freudian model. The model is largely correct of course - M0, rather than infantile memories, is where the motivational and delusional structure comes from.

Senge, Peter M.

The Fifth Discipline

Random House
ISBN 0-7126-5687-1

M0-free business thinking. Introduces "Sengian Patterns", which I reckon M0 victims will not be able to spot in real-world situations.

Spencer-Brown, George

Laws of Form

E. P. Dutton
ISBN 0-525-47544-3

A cult classic amongst hackers nearly 30 years ago, also referenced in Robert Anton Wilson's "Universe
Next Door" books.

Weinberg, Gerald M.

The Psychology of Computer Programming

Van Nostrand Reinhold
ISBN 0-442-20764-6

This ancient text still hasn't been bettered. No-one dares look, for some reason.

White, Michael

Isaac Newton - The Last Sorcerer

Fourth Estate
ISBN 1-85702-416-8

White doesn't seem to understand that alchemy is a transformation of the operator - mapping - but his journalism is excellent, so you can draw your own conclusions from his data.

Yourdon, Edward

Decline and Fall of the American Programmer

Prentice Hall
ISBN 0-13-203670-3

I've not yet seen the second edition. The offshore problem didn't happen, because programming isn't the kind of context-free proceduralism people think can be done well in open-plan offices. Sets out the dreary predictability of the standard management stupidity rituals in M0 shops.
Knowledge Autoformalisation
Here is a short summary of my experience in the wonderful world of 'Knowledge Auto-formalization'. Unfortunately, neither the book nor the author's other papers were ever published in English.

Aleksey

1. Short Summary of the Book


1.1 Problem Area

● The really hard part of software development is formalizing the problem you are going to solve.

● In Progstone lingo, the mappers are the ones who do the formalization.

● Some human activities are extremely hard to formalize. A favourite example is an autopilot for an off-road car: even a not-so-smart human being can generally pick a path off-road that won't get the car stuck, while the smartest software engineers on earth are light years away from making software that does the same thing.

● In many cases there is no hope that a software engineer will ever understand the customer's problems well enough to do the formalization on his or her own.

● In all cases, the customer does not fully understand what he or she wants.

● In all cases, the customer is unable to express that understanding adequately, in the form of a specification sufficient for successful development.

In other words, it is very hard to do mapping outside your immediate technical area.

So, the problem is: how do you crack a hard, unmappable application?

1.2. Knowledge Auto-formalization Process

The general idea is that the only way to solve it is to create an environment in which an advanced user can express him/herself through programming. The devil is in the details.
1.2.1 Pilot User

It is generally possible to find a 'pilot-user': a user with advanced knowledge in the field who either has some programming experience (e.g. a college course) or can be taught the basics of programming.

1.2.2 Support Engineer

The support engineer creates and maintains the environment for the pilot-user. Evidently, we cannot expect much from the pilot-user beyond the ability to arrange ready-to-use function calls into a program flow. The key element is that he does not have to do anything beyond this very limited area. All the hard software parts are done by the support engineer.

1.2.3 Process

The support engineer evaluates the initial needs of the pilot-user: e.g. what interface to the hardware he needs to start with, what functions he needs to be able to call, etc.

For example, in my own experience I started from a simple menu program allowing the pilot-user to perform elementary equipment-control functions through a set of menus, and the pilot-user modified it step by step into a fully functional prototype. (*)
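
To make the shape of such an environment concrete, here is a minimal sketch in Python. This is not the original code (which was a simple menu program over hand-written control sub-routines); every device, function and value below is invented for illustration.

    # Support engineer's side: the "hard parts" wrapped as ready-to-use calls.
    # These stubs stand in for real instrument drivers.

    def set_furnace_power(watts):
        print(f"[hw] furnace power set to {watts} W")  # would talk to the controller

    def measure_temperature():
        return 25.0  # would read the thermocouple

    # Pilot-user's side: a trivial menu loop to grow, step by step,
    # into the real control program.

    MENU = {
        "1": ("Set furnace power", lambda: set_furnace_power(float(input("Watts: ")))),
        "2": ("Show temperature", lambda: print(measure_temperature(), "C")),
    }

    while True:
        for key, (label, _) in sorted(MENU.items()):
            print(f"{key}) {label}")
        choice = input("q to quit > ")
        if choice == "q":
            break
        if choice in MENU:
            MENU[choice][1]()

The point of the split is visible even in a toy like this: everything below the function boundary belongs to the support engineer, everything above it is simple enough for the pilot-user to rearrange freely.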

The pilot-user goes on playing with the stuff. Every time he/she encounters a software or hardware problem, it is the responsibility of the support engineer to resolve it: add new stuff, improve performance, discuss results, etc. Then comes another cycle of pilot-user development, and so on.

In other words, the pilot-user splits the problem into pieces. Some of them he resolves him/herself; some are passed to the support engineer. The key is that the support engineer does not have to understand the whole problem, and the pilot-user does not have to program well.

At the end there will be a working (crude and slow, but working) prototype of the thing being developed, which can be used as a basis for a formal specification - and then a bunch of packers can hack away at it.

2. Personal Experience Using This Stuff


In my own experience the results were surprisingly good. I had a very telling experiment in this area. There were guys who had developed an advanced glass viscometer, and they needed to control it by computer (it was a loooong time ago; at the time some chemists were not as comfortable with computers as they are now). The problem was not trivial, because glass is a very strange liquid with strong temperature dependencies and big variations from sample to sample (in addition, all the processes are painfully slow, so it may take half an hour for the temperature to be distributed evenly over a sample, for displacement to become linear with time, etc.).

I found a pilot-user who had done a lot of FORTRAN programming for scientific calculations; however, he had never done any computer-control work. As I said at (*), I had assembled and calibrated all the hardware control units and written sub-routines to do user-level operations with them: change furnace power, measure temperature, apply load, measure a set of displacements. And I made it into a simple, basic menu-driven program. I was called in to help him a few times, to improve one or another part of the control/measurement system, and that was it; he did the rest himself, working a few hours a day for a month. He got a device which worked without any human intervention beyond entering the geometry of the sample, and which was adaptive enough to handle a wide range of temperatures/viscosities.
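
As a rough illustration of the kind of thing the pilot-user could then compose out of those primitives, here is another hypothetical Python sketch. The stubs stand in for the support engineer's sub-routines, and every threshold and timing below is invented rather than real glass physics.

    import time

    # Stubs standing in for the support engineer's primitives; the real
    # versions drove the furnace, the load unit and the displacement gauge.
    def measure_temperature():
        return 600.0

    def apply_load(newtons):
        print(f"[hw] load {newtons} N applied")

    def measure_displacement():
        return 0.0

    def wait_for_even_temperature(target_c, tolerance_c=0.5, settle_s=1800):
        # Glass is slow: hold until the reading has stayed near the target
        # long enough for the sample to be evenly heated through.
        stable_since = None
        while True:
            if abs(measure_temperature() - target_c) <= tolerance_c:
                stable_since = stable_since or time.time()
                if time.time() - stable_since >= settle_s:
                    return
            else:
                stable_since = None
            time.sleep(10)

    def run_point(target_c, load_n, samples=60, interval_s=30):
        # One measurement point: equilibrate, apply the load, then log
        # displacement against time until the run is complete.
        wait_for_even_temperature(target_c)
        apply_load(load_n)
        readings = []
        for _ in range(samples):
            readings.append((time.time(), measure_displacement()))
            time.sleep(interval_s)
        return readings

Note that nothing in this loop needs deep programming skill; the domain knowledge (when the sample is "ready", what counts as a sensible load) is exactly what the pilot-user has and the support engineer lacks.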

At the same time, I did not know, and still do not know, anything about glass viscosity beyond a very basic understanding. He did not know, and still does not know, anything about computer control. And we were able to do the thing with truly minimal effort on both sides.

Here comes the interesting part. By the time I started working on the project, they had been developing specs for some other guys for a few months. After our project was over, I asked him to compare the latest spec he had written with the actual program he had developed. He found 13 differences:

● He decided that he could live without a few things which did not provide much bang for the buck. That accounted for 2 of the differences.

● Seven more were really trivial to figure out, and he was sure that in the normal spec-test-spec-test process these items would have been found fairly easily.

● Two more were less trivial; however, these items could still have been discovered by a spec-test-spec-test-spec-test process.

● Two were non-trivial: his feeling was that he would not have been able to figure them out fast enough without playing with the stuff himself.

This project was really small; however, in my view it is quite a telling example.

3. Other Examples

Unix is a system designed by software developers for software developers; that is why it is such a comfortable development environment.
Extreme Programming
From: Colston Sanger

Kent Beck's new 'Extreme Programming Explained: Embrace Change' (Addison-Wesley) arrived from Amazon this morning.

From the Preface:

To some folks, XP seems like just good common sense. So why the "extreme" in the
name? XP takes commonsense principles and practices to extreme levels.

● If code reviews are good, we'll review code all the time (pair programming)

● If testing is good, everybody will test all the time (unit testing), even the customers
(functional testing)

● If design is good, we'll make it part of everybody's daily business (refactoring)

● If simplicity is good, we'll always leave the system with the simplest design that
supports its current functionality (the simplest thing that could possibly work)

● If architecture is important, everybody will work defining and refining the architecture all the time (metaphor)

● If integration testing is important, then we'll integrate and test several times a day (continuous integration)

● If short iterations are good, we'll make the iterations really, really short - seconds and minutes and hours, not weeks and months and years (the Planning Game).

I've had a lot of contact with XP people over the last few months and the ideas make good sense to me. They push on from where we left the Programmers' Stone in late 1997.
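
As a token illustration of the "testing" and "simplest design" bullets above, here is a minimal sketch using Python's standard unittest module. The function under test is invented for the example; it is not from Beck's book.

    import unittest

    def member_price(price, is_member):
        # "The simplest thing that could possibly work": one rule,
        # no speculative rules engine.
        return price * 0.9 if is_member else price

    class MemberPriceTest(unittest.TestCase):
        # In XP, tests like these are written alongside (or before) the
        # code and run all the time, by everybody.
        def test_member_gets_ten_percent_off(self):
            self.assertAlmostEqual(member_price(100.0, True), 90.0)

        def test_non_member_pays_full_price(self):
            self.assertAlmostEqual(member_price(100.0, False), 100.0)

    if __name__ == "__main__":
        unittest.main()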
Unsolicited Testimonial
Well... :-)

From: "Philip W. Darnowsky"

On Tue, 2 Nov 1999, Alan Carter wrote:

Slogan: The Programmers' Stone improves your sex life. This is not an abstract ethical
argument I know, but it's real.

I will attest to this. Until a few weeks ago, I worked in a highly packerly environment. I was doing
software development, but it was for a large government contractor, so the packers were running the
place. When I quit, and took the job I now have in a place run by mappers, I started to notice a feeling
that I hadn't had for quite a while. Lust. My sex drive has since recovered to its natural level.
