Back on Track (and the Software Factories Architect Forum 2006)

Despite the fact that you probably haven’t noticed or cared, I want to apologize for my lack of posting in the last few months.
Sure, I’ve been extremely busy, but in many ways I feel like I’ve been a horrible member of our software community, unfairly learning without sharing my findings with everyone else.
So here I am again, promising that I will try to contribute more in the future, in the hope of encouraging others to do the same.

There are several things I feel I should write about (including the amazing software architecture workshop that took place last month in Cortina d’Ampezzo, Italy) but today I will just mention the more recent Microsoft Architect Forum 2006 which was superbly organized by Bill O’Brien here in Dublin.
I really couldn’t miss the full-day event since Beat Schwegler and Ingo Rammer were set to dig deep into a topic that I simply can’t afford to ignore these days: Software Factories.

Beat and Ingo were excellent in articulating Microsoft’s vision of how to combine model-driven development, guidance, frameworks and tools together with the objective of creating one or more product lines able to systematically exploit commonalities among the members of software product families.
Bill has kindly made available all the slides in this post so I won’t go into the details of each session.

During his analysis, Beat emphasized that we should consider that, in many cases, up to 70% of the cost of a software project goes into operations rather than pure development; as a consequence, he recommended that we should start adopting model-driven development not only for code generation, but also for requirements, deployment, configuration, and, more generally, for all other activities that are involved during the full lifecycle of a project.

While in principle I don’t disagree with this thought, today I find it quite unlikely that different stakeholders (business analysts, network engineers, enterprise architects, solutions architects, developers, security specialists, QA testers, etc.) would agree to use the current incarnation of tools and designers and be confined to a single hosting environment, namely Visual Studio Team System.
But hey, in fairness we are talking about a medium-to-long-term goal here, so I will surely change my mind on this point when Microsoft (or I :-)) gets there.

Ingo was really impeccable throughout the day and used several examples to illustrate the capabilities of the DSL tools. In one specific instance, however, I could not help but notice that the version of the domain-specific language that he used did not provide a particular option he needed for his demonstration; he then resorted to manually modifying the generated code to accomplish his objectives. Tut-tut Ingo, you are not supposed to do that ;-).
I know, I know: it was just a demo, and I really sound way too fussy.

It would be good however if somebody out there explained that, in the real world:

  • We obviously cannot modify code once generated since the DSL models will diligently overwrite everything at the next transformation; as a consequence:
    • Put a comment header in each template to explain that “This code has been generated by a tool…do not modify…etc.”
    • Do not put the generated code under source control. That code is a dispensable artifact. Put the designer file and the templates under source control instead.
  • Modifying a template is clearly a better option; generally, however:
    • You need to put it under source control
    • You need to deploy it in a centralized location so that it can be shared across different applications in the same family
    • A change in a template must not be made lightly, as it could easily break all the other existing solutions that use the same template.
    • You need to unambiguously identify the version in use (you may put a version number in the header, or even resort to changing the file name if the changes are substantial and break existing applications that use it)
    • A change in the template that breaks existing applications may trigger the beginning of a new product line, particularly if you can’t accomplish 100% code generation of a solution and you can’t afford to retrofit all the existing solutions.
  • If we are unable to build systems that achieve 100% code generation (which happens if the degree of variation in a product family is not completely known):
    • We need careful guidance (patterns) to understand how to happily write manual code beside generated code. How do we create the extension points? What do we put in the underlying frameworks and what do we keep in a template? Who should make the changes in the first place? Is it up to us to rediscover these tricks over and over? Sure, we can use base classes, template methods, etc. But no, the one-size-fits-all solution of using partial classes does not work all the time (wake up, people: we often deal with XML files with little or no control over the reader… has anyone heard of the app.config file, for example?)
    • There is an evident abstraction leak, as developers need to understand the generated code: they are going to write beside it and even debug it if necessary (tip: keep templates “thin” by leveraging rich frameworks instead). By the way, the idea that the “smartest” people will write DSLs and templates while the others will just use them is flawed in my opinion, but I will save that explanation for another post.
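One pattern that addresses several of the points above is to split generated and handwritten code into separate types, with a template method providing the extension point. Here is a minimal sketch (in Ruby for brevity; the class and method names are invented for illustration, and the same split works with base classes, or partial classes where they apply, in C#):

```ruby
# customer_base.rb -- GENERATED FILE, regenerated at every transformation.
# "This code has been generated by a tool... do not modify."
class CustomerBase
  # Template method: the generated plumbing calls into the handwritten subclass.
  def save
    validate                      # extension point, defined by hand below
    "persisted: #{self.class}"    # stands in for the generated persistence code
  end
end

# customer.rb -- handwritten, under source control, survives regeneration.
class Customer < CustomerBase
  def initialize(name)
    @name = name
  end

  def validate
    raise "name required" if @name.to_s.empty?
  end
end

puts Customer.new("Ada").save   # => persisted: Customer
```

The generated file can be deleted and regenerated at will; the handwritten subclass, which holds the real business rules, is never touched by the transformation.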

I’m exhausted already, and I’m aware that this list is far from complete; and I haven’t even started talking about how we could version a domain-specific language or an entire product line using the current tools. Or maybe I should attempt to figure out how I would assemble effective teams within this new paradigm. Has anybody discussed yet how to deal with developers’ natural resistance to model-driven development? How should we address their concerns about domain-restricted job specialization and their rightful dislike for anything that contains the word factory in it? Or has everybody agreed that this is not an issue?

As often happens, the real problem is that the tools are getting there, with their capabilities and limitations, and we really need to go beyond the simple APIs to be good at using them.
But perhaps it is unfair to ask a toolmaker to tell us how to excel at using those tools.
After all Mozart wasn’t a piano maker, right ;-)?

Claudio Perrone

My passion is to develop the critical thinking skills of people and help them bring the best of their work to the world. Lean & Agile management consultant, startup strategist, award-winning speaker and entrepreneur. Creative force behind A3 Thinker and PopcornFlow.

This Post Has 8 Comments

  1. Claudio blogs (finally :D) about some real-world obstructions I have experienced myself applying Software Factories and Model-Driven Development (MDD). He also mentions the Software Factories Architect Forum 2006 organized by Bill O’Brien that took place in Dublin, Ireland. Boy, do I regret not being an Irish architect after missing out on this gig. Check out the slides and comments on Bill’s blog.

  2. João Pedro Martins

    Does anyone have concrete experience with using these kinds of concepts in practice?

    I know of at least two cases of companies in my country that do large-scale code generation, but both are for very specific domains (namely ERPs); when the domains are broader it seems more complicated to me…

    I found a post on this topic that seemed interesting, describing some of the difficulties of MDD in general…

  3. Emil Marceta

    Excellent points. I’d add a few from my experience with an MDD tool in Java (different language, similar issues).

    – Context (mental) switches from one tool to another are difficult. Using an MDD tool, then generating code, and then using a different tool (an IDE for the concrete implementation) makes for a disjointed workflow.

    – Refactoring. The generated code is more difficult to refactor. Sure, the templates can change, but then the concrete implementations may go out of sync, and all the side effects may not be immediately visible.

    One example of a DSL that I really enjoy is Ruby on Rails. Sure, it is a different and dynamically typed language, but there are very important points there:
    The framework, the DSL and the scaffolding are in Ruby. There is no difference between editing/using the DSL and editing regular business logic. It’s all in the same language, in the same classes, in the same editor. The DSL instructions result in code generation on the fly, and programmers barely notice it. People use it without really understanding that this is a DSL. For a developer it is simply Ruby.
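    A toy sketch of what I mean (the names are invented, and this is nothing like the real Rails internals, just the mechanism):

```ruby
# A miniature ActiveRecord-style DSL. `column` is plain Ruby executing at
# class-definition time, generating accessor methods on the fly.
class MiniRecord
  def self.column(name)
    define_method(name)       { (@attributes ||= {})[name] }
    define_method("#{name}=") { |value| (@attributes ||= {})[name] = value }
  end
end

class Person < MiniRecord
  column :name    # reads like a declarative DSL...
  column :email   # ...but each line just generates two methods, just in time
end

person = Person.new
person.name = "Emil"
puts person.name   # => Emil -- a generated accessor, indistinguishable from handwritten code
```

    The ‘generator’ and the ‘generated’ never leave the language, so there is nothing to keep in sync.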


  4. Claudio Perrone

    Hi Emil,
    There is certainly a lot to learn from the experience gathered using other platforms as there are lots of similarities despite the obvious differences.
    I’m really glad that you shared some of it in your comments!

    I’ve been looking at various forms of MDD lately, including MDA and DSM (there is a LOT to learn from the experience of the guys at MetaCase, by the way, since their MetaEdit+ looks like a really mature tool).

    Ruby on Rails is extremely interesting for sure and I can see the advantages of keeping the same language at all levels…
    However, the key to its success is perhaps in the conventions (names, patterns, etc.) it uses.
    Although InnerWorkings is a pure .NET shop, Ruby is often a subject of interesting conversations with my colleagues, and our experience with Rails has been really positive so far.
    In fact, we heavily depend on a custom workflow application that a colleague (and friend) of mine, Michael O’Brien, developed in Rails a while ago.
    It coordinates and tracks all the steps across teams in our development process!

  5. Emil Marceta

    Hi Claudio,

    I agree that conventions and patterns are key in Ruby on Rails, but the fundamental mechanism that allows them is the RoR metasystem written in Ruby. Sort of like Lisp written in Lisp.

    I too think that ‘Code that writes Code’ is a Good Thing, but I’m thinking that codegen (and the other artifacts that are generated) universally works only when done just-in-time. By that I mean:

    – ‘universally’ means exactly what you have so nicely explained: “the codegen does not know the degree of variation in a target product or product family”.

    – no external model. As much as possible, all the necessary descriptions are maintained in the code. The editor, IDE or script is automating the step that a developer wishes to perform, on the fly instead of in advance.
    Everything can be refactored when needed; there are no ‘special’ non-editable and editable classes. Everything is an open field.

    I looked at the product you mentioned, and with similar tools I always wonder why they think that visual diagrams are better. There is no evidence or research in the IT industry or computer science showing that visual diagrams are better than text. 🙂
    It is actually the opposite that can be argued 🙂 There is often an issue with their precision, subtly different interpretations, etc.

    A few links that you may find interesting:

    Alan Kay on the metasystem topic:

    Intellij Meta Programming System


  6. Claudio Perrone

    Thanks for the links.
    You touched on so many concepts that now I don’t know where to begin!
    Let’s see:

    RoR does a lot for you behind the scenes and you are right, it is reasonable to give a lot of credit to its reflective metasystem based on Ruby.
    While it’s true that static languages don’t have analogous capabilities, I fail to see why similar productivity gains couldn’t be achieved with adequate frameworks, IDEs and tools… time will tell, I guess.

    I must confess that I hadn’t really thought about just-in-time code generation until now (duh!); if I understand correctly, you refer to Ruby’s capability to dynamically extend existing types, which in this context allows RoR to generate code and let users consume it on the fly without additional steps. Is it a better experience than requiring manual steps from the user? You bet :-)!

    Are visual languages better than text based languages? I think this question deserves a whole series of posts (well…one at least)!
    Let me clarify that, although I keep my eyes open, I don’t buy into the idea that 100% visual modeling and code generation is feasible or desirable in anything but a few ad-hoc scenarios. There are big differences between OMG’s MDA (which is totally model-centric, uses UML as input, and prescribes PIM and PSM transformations), MetaCase’s DSM (totally model-centric, visual DSLs as input, claims – and I’m sure obtains – 100% code generation for rather specific scenarios) and Microsoft’s Software Factories/DSL Tools (visual modeling can’t do everything, so graphical DSLs must coexist with code; IMHO applicable in wider scenarios). Effectively, I’m skeptical about MDA, I understand MetaCase’s DSM (but my domain doesn’t fit in the “applicable” scenarios), and I choose to buy into the latter approach with all its promises and consequences.

    In short, I believe that different domains need different languages/notations; some of them should be graphical and some should be text-based. Why and when one should be used is an interesting question that I have asked myself many times. I must confess I don’t have many answers, although I promise I will write something about it in the future.
    Hopefully one day the context-switch problem that you mention will disappear if either IntelliJ or Intentional Software succeed in their…intent. As it happens, last February I had the privilege to meet Charles Simonyi and I’ve seen what he is up to…so stay tuned!

  7. Emil Marceta


    Just a few clarifications, and I’ll stop taking space on your blog, I promise 🙂

    Charles Simonyi is my “neighbour” (Vancouver, BC are my coordinates) so that must be the influence 🙂

    “While it’s true that static languages don’t have analogous capabilities, I fail to see why similar productivity gains couldn’t be achieved with adequate frameworks, IDEs and tools… time will tell, I guess.”

    I too agree with that, sorry for not being clearer. Statically typed languages such as C# and Java are perhaps not as flexible as dynamically typed and scripted languages, but I’m not sure that this is a reason to take a different direction than Lisp, Smalltalk or Ruby. See, to put MDA tools, visual DSLs and DSM in the same sentence with any of those is a joke, in a sense 🙂
    A DSL in Ruby is written in Ruby (say, Rails), Lisp macros can redefine Lisp, and Smalltalk is similar.

    This is why I’m examining/suggesting just-in-time code generation as the alternative. Think of it as IDE refactoring on very strong steroids.

    Then, to define the DSL for a C# application one uses C#, and similarly for Java. And this is my basic point. I believe that this solution is sound, elegant, concise, minimal and universal, and that the external tools (MDA, visual DSM, etc.) are suggestive and actually very messy. They create this divide and disconnect between the model and the code. I’d rather see the language grow with metaprogramming capabilities instead of extracting that into some external model.

    Then the language becomes a universal DSL tool that can tackle your beautiful statement:

    “which happens if the degree of variation in a product family is not completely known”.


  8. Claudio Perrone

    Emil, you said “taking space on your blog”… eheh… far from it. I really enjoyed these “parallel monologues” and I will definitely examine your point of view with great care; at the moment I just don’t see why, for example, I would use a text-based language to describe user interfaces when a specialized visual language would clearly do the job more efficiently. As I explained before, my point is to use the most efficient language for the job; however, I’m struggling to understand when and why I can objectively argue that a visual language is better than a text-based language. I’ll “give” you one thing anyway: you already managed to get me to pick up and play with Ruby on Rails again this weekend 🙂

Comments are closed.