Tuesday, September 23, 2008

The Case of Emergent Design.

I have owed this one for a long time.
In 2000, David Cavallo published his article "Emergent Design and learning environments: Building on indigenous knowledge". In it, the term "Emergent Design" was coined as an alternative to blueprints. Cavallo notes that analysts researching why educational change projects fail found the cause was actually the committee-generated blueprint created to drive the change. The result was always a re-formed implementation of the blueprint that did not solve the problem.

So, Cavallo proposes another way of doing that process. He bases his discussion on two innovations: digital technology and administrative decentralization. The second does not change management practices; it only pushes them down to other levels of the organization so they become distributed.

Now, Cavallo proposes some principles to guide this: Constructionism (Papert: we build knowledge when we construct things), which in turn is based on Constructivism (Piaget: we build upon what we already know). The others are technological fluency, applied epistemological anthropology, immersive environments, critical inquiry, long-term projects, and Emergent Design.

The idea was to immerse children in long-term projects so they become fluent in the technological language. With that, the students start looking critically at the newly understood world, and using applied epistemological anthropology the teacher can detect the learning forces in the students' freedom and design the next learning framework "on the fly" to exploit the possibilities.

All of this is handled using the emergent design approach. The learner focuses on building new knowledge upon existing knowledge, and tools and guidance are provided to facilitate that. Thus there is no preexisting path, only one that is built as the work goes on. It is pure discovery by construction.

Now, that is a learning experience, and Cavallo says it is not the only one possible. The approach was mapped to software development by Kent Beck, and it is presented as an alternative to blueprints (that is, the dreaded BDUF, Big Design Up Front).
Emergent design is built on top of a Make-Verify-Improve life cycle. Since the new world has not been created yet, developers take baby steps, one at a time. Decisions are not made at early stages; a just-in-time approach is followed, and the developers are the ones making those decisions.

This brings some good points:
  • Avoids prediction. Decisions are made upon knowledge acquired by constructing, when the developers know that it works, and at the time it is needed.
  • Verifiable products. The created products are real and working, so they can be validated against the requirements. New development is done on top of this real thing and not on top of paper ideas.
  • Decisions are backed by information obtained from feedback on real products, not by assumptions.
Still, there are several bad points to mention:
  • Refactoring vs. Redesign. Note these are two different things. Refactoring is about improving the code after it works; redesign means changing what was done to use another approach or structure, simply because the first one does not work. So emergent design may not be about creating and then improving, but about creating and then re-creating (see the sketch after this list).
  • Late decisions at the macro level. The problem with decisions made too late is that those affecting a level higher than code may require a much larger amount of rework, re-creating functionality in another way. Remember: refactoring is work to improve; rework is work to make it work as it should.
  • There is certainly more information (an immersed developer knows more about his local environment than a designer does), but there is less of the higher view needed to make the correct decision. That is, the grass-level view Cavallo describes works for individuals each creating their own knowledge, since that knowledge is allowed to be incomplete (there is plenty of data the learner will not know). When you have individuals working together, you need a second-level view to understand how everything the individuals are doing fits together.
  • Validation is locally centered. When individuals validate decisions against the current local value, each isolated solution may work. When integrated, the correct parts may not add up to a sound integral structure.
  • It is assumed the individual will make the correct decisions based on information, which takes a long time to acquire (Cavallo's long process). Thus, initial development is a challenge, since no individual yet has enough information to make certain decisions.
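
To make the refactoring/redesign difference concrete, here is a minimal Java sketch (all names are invented for illustration). The first change is refactoring: same behavior, better form. The second is redesign: the approach itself changes, and every caller must change with it.

    import java.util.List;

    class TotalsExample {
        // Original working code: correct, but cryptic.
        static double t(List<Double> xs) {
            double r = 0;
            for (double x : xs) r += x;
            return r;
        }

        // Refactoring: same observable behavior, better names and form.
        static double totalOf(List<Double> amounts) {
            double total = 0;
            for (double amount : amounts) total += amount;
            return total;
        }
    }

    // Redesign: say the amounts no longer fit in memory, so they must be
    // accumulated as they arrive. This does not improve the old structure;
    // it replaces it, and the functionality is re-created another way.
    class RunningTotal {
        private double total = 0;
        void add(double amount) { total += amount; }
        double current() { return total; }
    }
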
So, these and other pitfalls can be avoided by taking care with tactical and strategic decisions:
  • Use guidelines, such as principles, to drive the decisions in case of doubt.
  • Create from the bones. That is, create on top of a secure and sound base that explains by itself the rationale of the solution. That may require bringing some critical decisions up front, always knowing those decisions may be changed in the future.
  • Every decision has to have a rationale that supports it. The idea is that later on, if the decision proves to be wrong, the rationale may help to choose another one.
  • Multilevel coding. I know this is something not many people understand, but defining the global structure, then defining each component using a tactical structure to bind the bones, and finally writing the code to add meat to them, is what I call coding at levels, since every description of the solution is code. This helps a lot to visualize when and how a decision at any level can be made or improved (a sketch follows this list).
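
Here is a minimal Java sketch of what I mean (all names are hypothetical). Each level is code: the strategic sentences define the bones, the tactical sentences bind them, and the operational sentences add the meat.

    // Strategic level ("bones"): the global structure, expressed as
    // interfaces. These sentences say what parts exist and how they relate.
    interface OrderIntake { void receive(Order o); }
    interface OrderLedger { void record(Order o); }

    // Tactical level: a component structure that binds the bones together.
    class OrderService implements OrderIntake {
        private final OrderLedger ledger;
        OrderService(OrderLedger ledger) { this.ledger = ledger; }
        public void receive(Order o) {
            validate(o);
            ledger.record(o);
        }
        // Operational level ("meat"): the executable detail, where the
        // micro-design decisions live.
        private void validate(Order o) {
            if (o.getAmount() <= 0) throw new IllegalArgumentException("empty order");
        }
    }

    class Order {
        private final double amount;
        Order(double amount) { this.amount = amount; }
        double getAmount() { return amount; }
    }
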
I hope this helps to clarify a myth about Emergent Design: it is not an easy two-step methodology of blindly doing and then stepping back to see what emerged from the disconnected decisions. It may not be good for all projects, nor for all people. Lastly, it is not an alternative to architecture, but just another technique to construct the architecture itself.

William Martinez

Saturday, April 26, 2008

About the NOOB definition - Part 2: Globalization.

Before the Internet days (some people say before television), we had different villages, local ones, with no communication between them. Now we live in a global village. We need to communicate and share knowledge, even across different languages and cultures. Furthermore, we have learned it is not just us and "my limited world": anything we do affects the whole.

Now, when working on a real-world problem (working to find a solution, of course), we need to communicate, and all of our work complements the work of others. There is no single person who fights all the villains while the rest of the team observes and cheers.

Solving a problem with a software-intensive system involves working with all parts of the system: hardware, software, people, and processes. No one lives alone with their computer; everybody interacts with all the other components of the system, and the developers are part of that system too!

Making sense of that, the strategy of working only on my local line, forgetting there is something beyond it, goes back to the local world and denies globality. Does that mean you need to work on everything at the same time? No. It means you work on your local line knowing there is something bigger of which that line is an important part.
What does this have to do with Steve Yegge's post about NOOBs? Well, going to the extreme, the post indicates there is much waste, fat we need to remove to be healthy. The comments, and even the modeling and describing language structures, are needless for the seasoned, non-noob programmer. I agree, as long as the programmer is working in a system that has only two components: the programmer and his lines of code.

Unfortunately, that is not the reality. A system not only requires the work of many other components, some non-software and some non-programmers, but also requires all those components to communicate at design time and at execution time. Robert Martin once said that one of software's functions is actually to communicate with people, readers who may not have participated in the development. Does that mean the comments should stay? Not exactly; rather, the code itself should be readable and understandable.

One way of doing that, for the development group, is to use sentences that make sense and also do the job (you could scramble the sentences and have them do the same thing, but not readably).
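
As a minimal sketch (names invented), compare two Java methods that do exactly the same thing; only one of them reads as a sentence:

    class Readability {
        // Scrambled: executes fine, communicates nothing.
        static boolean f(int a, int b, int c) {
            return a >= b && a <= c;
        }

        // Readable: the code itself states the intent; no comment needed.
        static boolean isWithinRange(int value, int min, int max) {
            return value >= min && value <= max;
        }
    }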

Globalization in the system means all components will need to communicate and work with other components. Creating the product with those characteristics, because they are needed, is not being a NOOB; it is being aware of the fact that the software is not an island.

William Martinez.

Sunday, March 30, 2008

About the NOOB definition - Part 1

Paul pointed me to a TSS post I missed. Steve Yegge posted some rants about complexity and reached a conclusion about how good and clean scripting languages are (Portrait of a NOOB).
I couldn't post a comment on it, since it was already closed to comments. So I guess I will answer here.
In this first part of a response to that blog, I will reproduce the post I made to TSS. Probably nobody read it, since I posted it one month later.
First, let’s summarize Steve’s post (or what I understood of it):
1. Comments are metadata. Metadata is of no use to the compiler, so it can be gotten rid of.
2. Noobs are comment-obsessed creatures, and thus metadata enthusiasts.
3. Modeling is a way to comment on the intention of code, in the code language itself. So modeling is a metadata creation process.
4. Teenagers are obsessive modelers.
5. Classes, and OO in general, are metadata in the code itself.
6. Static typing is also metadata, since the real thing is memory in the end.
7. The seasoned programmer is an enlightened creature who does not use metadata, and thus can see the real thing when coding, not ethereal domain models.
What do I agree with:
Metadata is useless for the compiler. True.
Modeling is a metadata creation process. True.
What do I disagree with:
Metadata (at least not all of it) is not useless. Metadata is important, although not to the compiler.
Probably one of the things Steve Yegge misses in the post is that a language is for much more than just telling the compiler what to do. In the DSL post, we talked about languages oriented to one specific domain, using the vocabulary and grammar of that domain only. The idea is that languages are there to help you solve a real-life problem, not to help the compiler. Languages are for you (and your peers), not for the machine. And maybe not all languages are meant for a computer to read.
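
As a small illustration, here is a minimal sketch of an internal DSL in Java (the Invoice vocabulary is invented): the sentence it produces is aimed at a human reader of the domain, not at the compiler.

    // A tiny internal DSL: the vocabulary belongs to the billing domain.
    class Invoice {
        private double amount;
        private String customer;

        static Invoice invoice() { return new Invoice(); }
        Invoice billedTo(String customer) { this.customer = customer; return this; }
        Invoice forAmount(double amount) { this.amount = amount; return this; }

        public static void main(String[] args) {
            // Reads almost like the domain sentence it encodes:
            // "invoice billed to ACME for amount 100".
            Invoice i = Invoice.invoice().billedTo("ACME").forAmount(100.0);
            System.out.println(i.customer + ": " + i.amount);
        }
    }
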
Furthermore: Lack of metadata does not mean better code.
Here we are talking about two very different issues: code quality and the actual coding process. One may affect the other. A good coding process should help produce healthy, good-quality code.
Any code, with or without comments, will execute the same, good or bad. So comments may not add quality per se.
But adding comments may affect (and improve) the coding process.

Now, here are my definitions:
Coding: The process of describing the solution using a specific language or set of languages.
Language: A set of words (verbs, nouns) and a grammar, used to create describing sentences.
Coding Level: The abstraction level at which the solution is described.
Compiler: An entity that converts a solution description down into another description in another language (usually a more detailed one).
So, for all of you who think your Java lines of code actually execute, sorry to tell you that is not true. Those lines will be converted into bytecode. And that is another language that will not execute either. It will be converted again into calls to the operating system at hand (probably through C), and those into assembler codes, then binary codes, and then electric impulses. Lots of compilers, huh?
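
You can watch one of those conversions yourself. Take this tiny class:

    class Adder {
        int add(int a, int b) {
            return a + b;
        }
    }

Compile it with javac and disassemble it with javap -c Adder, and the add method comes back described in another language, roughly like this (output trimmed):

    int add(int, int);
      Code:
         0: iload_1
         1: iload_2
         2: iadd
         3: ireturn

Same solution, one level down.
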
Well, for each compiler at level X(n), the language of compiler X(n-1) is one level higher, and contains sentences that entities of that level can understand. For X(n), those sentences are just a less detailed solution. Since the sentences describe the same solution, I may say the X(n-1) language is just a modeled solution for X(n-1) entities, and thus metadata for X(n).
So, metadata is not a useless waste of characters; it is simply of use to entities other than the compiler at hand (JUnit 4 and TestNG are proof of that). If you fail to see that, it is because you are looking from your compiler's level, with your actual language rules.
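
JUnit 4 makes the point nicely. In the sketch below (reusing the hypothetical Adder above), the @Test annotation is pure metadata: the compiler does nothing special with it, but the JUnit runner, a different entity at a different level, reads it to decide what to execute.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class AdderTest {
        @Test  // metadata: useless to the compiler, essential to the runner
        public void addsTwoNumbers() {
            assertEquals(5, new Adder().add(2, 3));
        }
    }
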
Thus, for me, being a NOOB is when you go out to the street and start walking, getting to know your block (you talk too much about everything that is on your block). Teenage is when you have a car and start driving to know your city (you talk about love and other things, not much about your block). A seasoned programmer is when, in an SUV, you start climbing mountains and going out to wild lands and foreign countries (you start talking other languages). You become an analyst when you drop your car and start studying the world through TV and news channels, and fly around in helicopters and planes (you now talk politics and business). You end up being a senior ancient when you take your ticket to the space station, see the whole world at a glance, and start talking about the future. B)
Will

Sunday, March 23, 2008

TDD or not TDD?

In a presentation by Cedric Beust and Alexandru Popescu at QCon San Francisco about "Designing for Testability", called "Next Generation Testing", and in his post, Cedric talks about Test-Driven Development (TDD), and he is not talking to promote it.

Actually, he does not talk against it either; he simply questions its use, asking whether it is a good idea or applicable to all cases. Cedric points out several problems with the test-up-front technique, and one of his lines is: "Promotes micro-design over macro-design". Follow me on this...

Let's change the last D from Development to Design. A test-driven design is one that is accomplished by basing the decisions on the tests (the principle may read: follow the option that is most testable). No one is saying when to test, nor whether you create the test before or after the actual coding. We are actually thinking about design, and taking into account one -ility: testability.
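
A minimal Java sketch of that principle (all names invented): between a greeter that reads the system clock directly and one that receives its clock, the testable option wins, because a test can control it.

    // The injected clock is a design decision driven by testability.
    interface Clock { int currentHour(); }

    class Greeter {
        private final Clock clock;
        Greeter(Clock clock) { this.clock = clock; }

        String greet() {
            // A hard-coded Calendar.getInstance() call would also work,
            // but a test could not control the time it returns.
            return clock.currentHour() < 12 ? "Good morning" : "Hello";
        }
    }

    class GreeterDemo {
        public static void main(String[] args) {
            Clock nineAm = new Clock() { public int currentHour() { return 9; } };
            System.out.println(new Greeter(nineAm).greet()); // "Good morning"
        }
    }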

Now, from my post about design levels (and here is what drew my attention to that line of Cedric's), the term "micro-design" is defined as the design decisions the developer makes during the implementation of the "tactical design". Those design decisions should be in accordance with the tactical design, and those should support the strategic design.

Here, the question is whether test-driven design is driving a design level that is not the micro design! Tests, if we are talking about unit and functional tests as I think TDD means them, are a developer-level thing. In that case, they should drive micro-design only, and should never drive other design levels.

Let me explain it in other words: TDD shouldn't tell you what classes to use, which artifacts (queues, files, etc.) to employ, or which relations exist between functional units. But it may help you decide which call sequence to use, which functions a class should have, which structures to rely on. Those are micro-design decisions that may later be refactored.
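
Here is a minimal JUnit sketch of that scope (names invented). Written first, the test forces a Cart to exist with addItem and total, and fixes their call sequence; it says nothing about queues, files, or how Cart relates to the rest of the system.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CartTest {
        @Test
        public void totalsTheItemsAdded() {
            Cart cart = new Cart();
            cart.addItem(10.0);
            cart.addItem(5.0);
            assertEquals(15.0, cart.total(), 0.001);
        }
    }

    // The minimal implementation the test drove into existence.
    class Cart {
        private double total = 0;
        void addItem(double price) { total += price; }
        double total() { return total; }
    }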

TDD, used at the micro-design level, will help define interfaces, improve cohesion, and even enforce encapsulation and information hiding (if used correctly, of course, since we can also create tests that violate all those best practices). But that is when the developer is creating the operational-level descriptions (code). There should be a tactical design in place before doing this, and there should be a strategic design in place before that. We can use testability to drive some decisions at those levels, but it cannot be the main -ility there at all.

Sunday, March 16, 2008

Design Levels

Working with developers, I often find them thinking of design as a generic word meaning "sketch the classes you are going to create".

Of course, I don't share that definition. I think of design as:

"composing the sentences, in a particular design language, that describe the solution of the problem"

That said, I would add that there are several design levels and types. That is, you may find out you are designing when you think you are doing something else!

Amnon H. Eden and Raymond Turner have some publications where you can read about a proposed ontology (using the Intension/Locality hypothesis). Part of their definition is that software design should be divided into three categories: Strategic, Tactical, and Implementation.

Of course, strategic design sentences are those of architectural design, where decisions impact the whole. Tactical sentences are what we have always called "the design", where the focus is local to one part of the problem and abstract enough to be variable. The Implementation sentences are local too, but they are also the actual execution.

That last one I call Operational design. I also call it Micro design. It is the design the developer does when creating the sentences that define the executable solution. Note that the decisions here are also important. Note also that there are design patterns at this level too, and they differ depending on the selected language (they are called idioms).
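
For instance, here is a minimal sketch of a Java idiom of the day: acquire a resource in try, release it in finally. The same operational decision is expressed differently elsewhere (RAII in C++, "with" in Python), which is exactly why idioms are design patterns of this level and no other.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    class FirstLine {
        // The try/finally idiom: the reader is closed no matter what.
        static String firstLine(String path) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(path));
            try {
                return reader.readLine();
            } finally {
                reader.close();
            }
        }
    }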

There is another point to this definition of design: we can describe a solution at different levels, and it will be the same solution. The good thing is that each level is an abstraction of the lower level, where details have been taken out of the way to see the broader requirements clearly. Not all designs can be executed, and not all levels have all the details required to execute as they should.

Funny, huh?