Group Ritual Design

What makes a good climbing group? In a way, it is the process of working together that actually creates the group and drives group learning.

But the same is true for any good work group. People work together on a challenge and through the challenge they learn how to work together.

Designing these kinds of experiences is usually difficult. It becomes easier in situations that are arranged like a play. Take a climbing garden, for example: there is a predetermined course with constructed obstacles, clearly designed to be challenging but not too difficult or dangerous. There is a guide, and there are safety lines installed that are checked and solid.

The experience of going through that obstacle course is similar to a challenging group experience, but because it is prepared and limited in time, it is safe for the participants.

Developing an open-ended challenge is more difficult, because first it needs to be decided whether it will take place within the participants' professional time or within their leisure time. If it takes place in professional time, some argument of economic viability needs to be constructed. For a leisure-time project, the activity needs to be attractive and stimulating enough to be considered of value to the participants.

What makes the process attractive could be rewards in the form of respect and validation within the social group.

What is needed to get at least some group coherence is a form of interaction. Those interactions could happen during regular meetings, while practicing together, or during a play or performance. They could also happen over phone or email when the topic is more abstract.

There is also the question of how much time should be spent in interaction. For some kinds of sports at an amateur level, one or two hours each day can be enough. In the workplace, eight hours is considered adequate; however, not all of this time is usually spent in interaction.

What is also important is to have the right intensity and variation of interaction. When the goal of the group interaction is to learn or improve a skill, intense practice is needed, but also occasional alone time for the participants, where they can reflect on their progress; collective gatherings for reflection can be helpful as well.

How the meetings and gatherings are conducted is a matter of ritual, and ritual can be either accidental and informal or structured and preplanned. Both are valid approaches, and both can fail or work in different ways.

Usually, a physical meeting happens in a room, and there is a set date and time for the gathering.

The room already provides a setting. There might be chairs, suggesting the meeting will be conducted sitting; the chairs might be arranged in a circle, or facing a conductor in the front. This already sets a mood. Or the chairs may be stacked in a corner, suggesting that the participants themselves should choose an arrangement. There might be a carpet and sitting cushions, inviting the participants to take a seat on the floor.

There are very different modes of acquiring status within a group. In some groups, publishing a written paper is an adequate form of expression. The paper has to conform to a certain writing style, and it has to fit into a template that is set up for the occasion.

In other groups, the mode of expression means taking part in a competition, pitting one member of a sports team against a member of another team in a duel.

And all those group interactions and modes of operation are codified into some set of rituals, and as the popularity of groups or rituals spreads, rituals get copied and adopted. Like the gene pool, they become part of the meme pool.

Popular memes multiply and mutate, while unsuccessful ones disappear.

How the Fossil DVCS solves the privacy problem of cloud based applications

If you do not know what a DVCS (Distributed Version Control System) is, just think of Git, the popular source code management system used by the Linux kernel.

But Fossil is quite different from Git, and seems to solve a couple of problems that always left me a bit unsatisfied when dealing with Git. In fact, it gives me the same nice and warm feeling I got from using the Darcs DVCS back in the day.

I will start out with a couple of issues I experienced recently when using Git.

1) For many developers today, when they say they are using Git, they are in fact using github.com, a service that has Git at its core but provides a number of additional services: a nice web interface, a ticket tracker, a wiki, and probably more that I am not yet aware of. However, while the product and concept are well executed, it is still a hosted solution, and while you are using it, you are dependent on an external company to provide this service.

2) I’ve been trying to replicate a service similar to what github.com provides on my own machines, and experimented with software like InDefero and Phabricator, but those solutions leave quite a lot to be desired. InDefero, for example, provides both a wiki and a ticketing system, but then I have the following problem: I have to provide a MySQL database for the wiki content and the ticket system, which needs to be maintained, updated, and backed up. That is kind of a headache, especially considering the nice solution that Fossil provides.

3) There are various Git-based wikis that I’ve tried, but installing them and keeping them running is almost always a nontrivial issue.

4) Overall, using Git without relying on third parties to manage various aspects of the deployment is almost impossible, or creates quite a lot of busy-work. I only really realised that when I came across the ingenious solution that Fossil came up with.

So let's see what Fossil does that makes it so nice. It is quite simple:

Fossil stores tickets and wiki data within the repository. And not only that: it comes with an integrated web interface to conveniently use the ticket system and wiki. And that web interface can easily be made available on a public website.

These choices enable a couple of convenient side effects. Whenever you have a checkout, you can easily use the wiki and ticket system offline, and once you push to the shared repository, everything is updated again. Great for using it on the road without internet access. It also makes keeping a backup of the complete project much easier.
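To make that concrete, here is a rough sketch of the basic Fossil workflow on the command line. The repository and directory names are just examples, and the web-serving commands are shown in comments so the sketch runs through without hanging:

```shell
#!/bin/sh
# Basic Fossil workflow, sketched; skips gracefully if fossil is missing.
command -v fossil >/dev/null 2>&1 || { echo "fossil not installed, skipping"; exit 0; }

fossil init project.fossil       # the whole repository is a single SQLite file
mkdir checkout && cd checkout
fossil open ../project.fossil    # check out a working copy

# "fossil ui" now serves the integrated web interface (wiki, tickets,
# timeline) locally in your browser; on a public host the same binary runs:
#   fossil server project.fossil --port 8080
# Day to day you "fossil commit" locally and "fossil sync" with the shared
# repository -- wiki and tickets travel with every clone.
fossil status                    # show the state of the checkout
```

That single-file repository is also what makes the backup story so simple: copying one file copies the code, the wiki, and the tickets.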

This makes for a much more resilient and independent tool, far better suited for the lone hacker and his small crew. Hosting is trivial: you can set up a private server, or use any kind of virtual server hosting.

In fact, I like that concept so much that I am already thinking about what other kinds of webapps could be “unclouded” with such an approach: using a distributed, versioned database that keeps a complete local backup and only needs to resync periodically with the master host.

Maybe it would be possible to extract some of the Fossil infrastructure and bake it into a general web framework. Or build something similar for different kinds of web applications.

Have fun,
Cheers
-Richard

Engineering Managers Should Code

Hi everyone, in this post I just want to direct your attention to the following article at Dr. Dobb's:

Engineering Managers Should Code 30% of Their Time

Of course I really like the article, and I agree with that statement. And I want to keep this as a reminder for myself on this blog. 😉 I also think it fits really well with some of the articles I have written recently.

Have fun,
-Richard

Building a virtual Software Company

Roles and Infrastructure
So let’s build a virtual software company. That is, a company that creates and sells software. While this kind of effort might be possible to do alone, to make it a bit more interesting it will be designed as a collaborative effort.

One idea of collaboration is to pool resources and eliminate redundant effort. That will be one motivation; the other will be the learning/teaching opportunity. Knowledge transfer, if you want to call it that.

So, for a highly structured effort like software development, let's define a few roles that might be worth considering.

Plumbing
Sets up servers, installs operating systems, configures the routers. Possibly also DNS, firewalls, etc.
However, as cloud services, vservers, etc. are somewhat of a commodity, I’d prefer to keep these tasks to a minimum or outsource them entirely by using some service.

Mail Master
That role will be necessary even if it is decided that the company will use an external service for email. Someone has to add new team members, so this role will be the admin for email. The role could include setting up an email server on some virtual machine. Eventually the company will be sending out some kind of newsletter to customers, so somebody needs to make sure that mail gets out. If a newsletter service is used, that role will include configuring that service.

Additionally, that role will likely include setting up and maintaining test email accounts at various freemail providers, to make sure that mails reach them unharmed if custom email systems are deployed.

Web Master
That role will essentially do the same thing for the web that the mail master does for email. It will include setting up webspace, providing FTP accounts, possibly installing and setting up Apache, etc. If an external hosting provider is used, and it should be, this role will take care of manning the web hosting control panel.

Eventually the web master will also make sure that webspace is available for some kind of testing stage when web application changes are rolled out.

And if the kind of software that the company is providing is web based, then there will be close collaboration with the next role.

Devops
Development operations is the glue between the web master and the programmers. Devops basically provides and maintains tools for programmers. This role will run nightly builds, make sure test suites are run, and make sure Git repositories are available. They will package applications, upload them to the file area, etc. They will also install the applications that the programmers create, and maintain the tools to continuously collect code metrics.

They will install development tools on virtual machines, and set up images for build servers.

Web-Programming
This role will be doing the actual programming, specifically for web applications. This means both public-facing widgets on the web site, signup forms, databases, CMS tools, and CRM tools; the application that is being sold; and internal tools that support other parts of the business, be it a company address book, a wiki, or whatever.

The job of that role is to commit stuff to a Git repository. Whatever is used as a development environment is up to the individuals, as long as coding conventions are followed, unit tests are maintained, etc.

App-Programming
While related, the advent and widespread adoption of smartphone app markets has probably made it necessary to have some of those as well, so this group will be working on that. Devops will create packages and submit them to app markets. App programmers will submit code and tests to Git.

Design
This role will make sure that programmers do not lose focus. Here application workflows are created, and layouts are sketched out and later refined. This role will probably work closely with programming. Much of this role will also involve making sure that the programmers go all the way through and implement clean validations for forms, and that everything is pixel-perfect and not just thrown together.

Last words
And finally, don't get me wrong: every role will be programming. Mail masters, web masters, and devops won't be doing their tasks by hand; they will be creating scripts and control panels to streamline whatever they find themselves doing, be it shell scripting or automating a service via an API.
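As a tiny illustration of what that scripting can look like: a mail master might keep the team roster in a plain text file and script the mailbox setup around it. The `provision_mailbox` function below is only a stand-in for whatever call your actual provider's API or CLI offers:

```shell
#!/bin/sh
# Sketch: bulk-provision mailboxes from a member list instead of clicking
# through a control panel for every new team member.

# Sample roster; in practice this file is maintained alongside the team.
printf 'alice\nbob\n' > members.txt

# Stand-in for the real provider API/CLI call (assumed, not a real tool).
provision_mailbox() {
    echo "created mailbox for $1"
}

while read -r member; do
    [ -n "$member" ] && provision_mailbox "$member"
done < members.txt
```

Swap the function body for a `curl` call or a provider CLI, and adding ten new members becomes a one-line edit plus one script run.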

The idea here is to have some "easier" tasks where youngsters can get their hands dirty, while creating automation that allows them to move up from their roles and graduate into more interesting fields.

Using magical practices to work with large code bases

A large, legacy code base can sometimes be like a collection of mystical documents: riddled with number magic, cryptic references, mysterious rituals and descriptions. Some are guarded and governed by old masters who are grumpy and quick to dismiss any newbie asking for access to their code. They answer in muffled mumbling and are easy to anger.

Also, the habits and customs with which the guardians of the old code work with it, interpreting it continuously, rewriting it, trying to understand it, very much match how I would expect a magical order to work through old manuscripts, trying to recreate ancient rituals.

However, many maintainers of old code do not necessarily indulge in this kind of self-reflection about their work. They tend to be governed by fear: of failures in their code, of changes to certain “mythical” parts, of confrontation by their superiors or by the choleric masters of certain rooms. The strategy for dealing with those challenges is often a form of rationalizing, which includes denying how emotions and relationships might influence the code. Communication is often channeled into ineffective rituals, including meetings with no results and jargon-throwing that aims to hide what needs to be said, for lack of a common vocabulary. The value of deep and completely open communication in such endeavors is often neglected, and no mechanisms exist to facilitate it.

Part of this phenomenon is a very deep focus within the programming community on “rationality”. The programming community prides itself on its reliance on “facts”, is composed largely of atheists, and in general of people who turn up their noses at religious, mystical, and esoteric practices. Everything that carries even a scent of “softness” is immediately thrown into the corner with hocus-pocus and fairy-tale magicians.

Those prejudices are unfortunate, because some of the techniques for working with mystical old manuscripts and ancient rituals might be quite well suited to dealing with old and difficult-to-understand code.
There might be complicated code bases, dealing with legacy functionality and complex algorithms, that are quite well understood by their teams, and those teams might dismiss this rather “esoteric” approach. And for those well-oiled and effective teams, it might indeed not make any sense.

However, there are teams where I believe much healing needs to be done, misunderstandings need to be cleared up, and honesty as well as understanding needs to be fostered. Often this also goes along with a feeling of exhaustion, mistrust, and aggression. Team members tend to withdraw and build walls around themselves. In such cases, I believe magical practices and experiences might come in useful to help resolve those situations.

I also believe that industry practices, which are often described in books and believed to be easy to apply by practicing the theory, often fail to work. That is because the required self-transformation does not take place. Those industry practices often fail even to mention the necessity of transformation; rather, they indulge in detailed descriptions of their “rituals” and artifacts. And therefore they fail to create a consciousness of spiritual development in the industry.

This is actually a common pitfall of many industrialized and developed nations, in which the dominant mainstream consciousness is based on the notion of consumerism. Consumer culture often promises that happiness and success are tied to objects and artifacts that can be bought: the better computer, the faster bicycle, or any kind of more effective product. However, the fact that growth results from discipline and from transformation of the inner self, and not from buying arbitrary artifacts, is seldom communicated and highlighted.

Even advice on building teams and software companies always starts at the point of selecting people based on their psychological qualities and their ability to work and communicate together. It starts with the notion that some people simply are mature enough to engage in the art, and some are not, without elaborating on how people who might not yet have the required maturity might acquire it. It also deals more often with suggestions for how to remove “poisonous” team members than with how to inspire them to grow. Sure, some people might effectively work against such efforts and sabotage them, but those who can be reached should be.

And here comes in what I believe magic (or magick) could do for the development of programming as a craft and art form: especially because it is seen as “wishy-washy” and arbitrary by members of the programming community, and especially because it would evoke quite an emotion and foster discussion. In a sense it is a provocation. It would rattle things up.

And therefore it would require quite some questioning and reflection about one's art.

What I have not mentioned in this article are concrete techniques. This is intentional, and the entire point: what I suggest should not, and even could not, be learned by reading an article on the Internet. It should be developed and learned in communities, by reflection, coaching, mentoring, and self-discovery. We are just at the starting point of this journey. How could I know where it ends?

Take care,
Cheers
-Richard

Conquering rising youth unemployment: applying lessons learned in hacker communities to the “real” world

A gap is widening: on the one hand, unemployment among young people is on the rise; on the other hand, it is getting more difficult for companies to find qualified workers. A paradox, it seems? I don’t think so, nor is it a coincidence.

Two factors are coming into play here.

1) The “kind” of jobs that companies are offering are not very compelling. The values, benefits, and problems that are prevalent in the job market are no longer in line with what young people expect. Growing up with open source software, social networks, Wikipedia, and the Internet makes young people expect a much higher level of transparency than is currently lived in “old world” businesses. Old world businesses are a lot about projecting an image, about hiding information, about political bickering, etc.: a system that fundamentally relies on an asymmetry of knowledge. If, however, all of the world's knowledge is available in seconds, relying on an asymmetry of knowledge is simply not a sustainable business model. People will find out, sooner rather than later.

So a young person starts at a job, and some higher-up tells her this and that. She looks it up on the Internet, and it's obviously bullshit. Well, she will find that situation unacceptable, and if she's a high performer, she will take her business elsewhere, and “old world” businesses will have to make do with the worst of the crop.

2) The second factor is the education system. While people who are driven and have learned to educate themselves have thrived, those who got tangled up in the education system have unfortunately lost out. The world of business is evolving faster than the educational system. That is, the knowledge, habits, and attitudes that people leave the educational system with are nowhere near what is required in the business world of tomorrow. They won't learn it on their first job either, because, being already behind, they will only be able to get work in an “old world” business of yesterday. And habits learned in those jobs will only further their unemployability.

What to do about it? Well, unfortunately, many of the businesses of the future have not even been created yet. Because it will be the top of the crop of our generation who build those businesses.

What will the future business look like?

It will probably be about diversity and automation. Many of the basic necessities of our life are already automated. What are the most fundamental pillars of wealth, luxury, and comfort? Shelter, energy, water. Look around your own home. What couldn't you do without? Most of the stuff you pile up is probably expendable, but electricity and tap water are probably the most defining achievements of the first-world countries. All other kinds of wealth are mere afterthoughts. And those amenities are available to almost everyone already.

And large parts of this infrastructure are already automated, so there is little workforce needed in the field of utilities. The mass market is already saturated. This brings us to the long tail, and the long tail means diversity. Now that we as a civilization no longer need to focus on the fundamentals, we can spread our interests.

And this brings me to another observation I've made during my life: if you take a specific and narrow field of interest and try to find all of the top performers and researchers in that field, you will find that there are no more than about 50 people in this group worldwide. That's it.

And to be quite honest, if you cannot be in the top 50 of a field, it is probably not worth pursuing at all. That may sound harsh, but the good news is that there is an almost unlimited number of fields available at any given time. That is, there are enough fields worldwide that everybody can find and work in his or her specific top-50 field.

And that, I believe, will be the future of the workplace: to provide an environment where everybody can strive to enter the top 50 of his or her personally preferred field and do the best work of his or her life!

Think about it.

Software Engineering: Workflow Pipeline

Building software is difficult; it helps to have the best tools available. One such tool is an integrated development pipeline for your workflow, especially if you are planning to work as a team.
And since there are not really many “pre-made” tools for this, you are probably going to need to create your own. Of course, depending on what kind of software you are working on, you will also need to choose the components your pipeline is based on.
Also, creating such a pipeline can be a bit of a chicken-and-egg problem, because you might not want to start working until you have the setup ready, but you are not going to know what setup you need unless you start working on your problem. It is easier if you’ve already done a few similar projects and you already know what you need.
Choosing the tools for a pipeline might also depend on your budget; there are commercial tools available that integrate into certain development environments. But as I like to go “raw” in most of my development efforts, I am going to present you mostly with open source software that is free to use, even for commercial projects.
I might not be a big fan of “over-engineered” tools, but I am a big fan of meticulously planned workflows and processes. And by process I mean everything that happens from the line of code entering the editor to the click of a satisfied customer in the GUI, or whatever it is that has been delivered. There needs to be a way for my line of code to get under the customer's mouse cursor that works as smoothly and as reliably as Swiss clockwork.
The average software engineer or programmer is probably already familiar with many of those tools, but I believe it is also of value to be aware that these tools and workflows don't just appear magically. They are conceived and built, and it takes great care and determination to get them set up just right.
Also, I believe that especially “non-technical” managers, if there is such a thing, need to have some idea about what is going on in software development. Startup founders who are building software-based businesses also come to mind here.
But let's get to the beef. The three basic building blocks of the setup are the source code repository, the build server, and finally the test environment. The source code repository is where all the work of the team comes together. Here the integration happens; here the source code is backed up, should one of the developer workstations crash and lose all its data; here all the changes are recorded; and here older states of the software can be restored.
The source code repository is also the keyhole through which the build server connects to the developers. Depending on your development philosophy, everything that you need to build the software should be in the repository, so that the build server can use it to build the software. When a developer wants to have something in the build, he needs to put it into the repo so that the build server can get it from there. There should be no shortcuts and no tampering with the build server: no pressing buttons on the build server, no adjusting files, etc., once the process is in place.
The build server itself should have a standard installation of the development environment you are using, set up so that a build can happen fully automated. If that means a few scripts here and there are needed, then so be it. Whatever your final product is, the build server should be able to create it autonomously. The product could be a finished Windows installer, for example; then the build server creates that installer from the information it gets out of the repository.
The last role of the build server is to push the finished product into some kind of test environment, whatever that is for your product. It could be a virtual machine if you have a rich client application, a webserver if your product is web based, or some device, like a tablet or smartphone, if that is what you do. It could be an emulator of some kind, too. Or an app store, if you are publishing to your users directly from your workflow.
The point of all this is that whoever is testing your product, especially if the testers are not the same people as your developers, and they should not be, needs to “sit” as close to development as possible, without actually needing to disturb the developers constantly, of course.
Source Code Repository
As for the source code repository, I prefer Git these days, although probably any distributed revision control system will do. Distributed means that working with the repository does not require a permanent connection to the repository server. That is generally convenient: local operations are faster, developers can work offline, on the train, on the plane, etc., and there is no single point of failure, such as the server being offline for maintenance.
However, a word of warning: Git has a ton of features, and you ain’t gonna need all of them. So don’t get too comfortable applying clever tricks. In certain developer cultures there is a tendency to advocate the use of branching, for example; others tend to shy away from it. I am firmly in the latter camp. Of course, it may all boil down to a matter of judgement and situation, but in general, don’t use it. Branching allows you to keep several streams of development in parallel, which I believe fragments the development effort unnecessarily; I use other techniques to introduce “larger” changes, mostly on the source code level rather than the repository level.
In my process, the role of the source code repository is simple and limited:
  1. Bring developers code together
  2. Record history
  3. Feed to build server
There are some features that are convenient, like using a rebase pull to keep the history clean and avoid superfluous merge markers. But again, I like to not get too clever about that.
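For reference, the rebase pull is just a one-line Git setting; the sketch below demonstrates it in a throwaway repository so it can run anywhere:

```shell
#!/bin/sh
# Make "git pull" rebase local commits on top of the fetched history
# instead of creating a merge commit. Demonstrated in a throwaway repo.
set -e
command -v git >/dev/null 2>&1 || { echo "git not installed, skipping"; exit 0; }

tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo

git config pull.rebase true              # per repository
# git config --global pull.rebase true   # or once, for all repositories

git config pull.rebase                   # prints: true
```

With that set, every `git pull` behaves like `git pull --rebase`, and the history stays a clean, linear record of what actually got integrated.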
As for branching in the source code repository: I mentioned using other techniques, one of them being modularising the code and having it communicate via fixed interfaces. Applied reasonably, this should make it possible to implement even larger enhancements in small increments, without confusing the other developers, as you can work mostly in your own newly created module.
That means, however, that the core of the application has to be relatively stable already. So only a few devs can work on that, but that is probably the nature of software development, and having branching in your version control system ain’t gonna change that anyway.
Build Server
As for the build server, I’d recommend the open source build server Jenkins these days, but in a way it is just the scaffold for your actual build tools. It is Java based, but can be used for any kind of environment; I use it for C++, for example. Eventually it will just trigger some scripts to do whatever it is you have to do for a build of your software. If your process is already well developed and you can do all your build and delivery steps via a script, you can hook it into Jenkins right away, add a trigger based on commits to your repository, and have a go.
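To give an idea of the shape of such a script, here is a minimal, runnable sketch of a build script that a Jenkins job could trigger on every commit. The directory names are arbitrary, and the "build" step just copies a stand-in source file; a real project would call make, an installer builder, or whatever the toolchain requires:

```shell
#!/bin/sh
# Minimal build-script sketch for a Jenkins job. All steps are placeholders.
set -e                               # any failing step fails the build

mkdir -p src && echo 'hello' > src/main.txt   # stand-in project source

rm -rf build artifacts               # always build from a clean slate
mkdir -p build artifacts

# 1. Build: here we only copy the sources; substitute your real compile step.
cp -R src/. build/

# 2. Test: placeholder for running your actual test suite.
test -n "$(ls build)" || { echo "build produced nothing"; exit 1; }

# 3. Package the finished product for the test environment or release.
tar -czf artifacts/product.tar.gz -C build .
echo "artifact ready: artifacts/product.tar.gz"
```

The important property is that the script is the whole build: Jenkins only checks out the repository and runs it, so a nonzero exit anywhere marks the build as failed.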
There are different philosophies about how to go about your build. Some people like to have everything from compilers to libraries in their repository, basically keeping all the build tools in version control too. I for one prefer a rather “vanilla” environment, based on a standard Linux distribution, with all dependencies from the package system and as little customization as possible. That is more convenient for what I do. If, however, you have specific requirements on your GCC version, a cross compiler, etc., because you do work for obscure embedded platforms, then the other philosophy might suit you better.
Depending on your process and the infrastructure you have, the build server might be a good place to run your unit tests, code metrics, static analyzers, etc. Jenkins has all kinds of capabilities to record and archive your results. You might also be able to do a setup that pushes releases based on tags in your repository. That includes possible uploads to your homepage or an app store, posting updates to shareware software news sites, beta areas of your user community, etc. If you are in any way like me, you want to get updates out to your users as fast as possible, and this is your secret weapon for spreading as quickly as you can.
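A sketch of what such a tag-triggered release step can look like. The actual upload is left as a placeholder, and the throwaway repository only exists so the snippet runs end to end:

```shell
#!/bin/sh
# Sketch: publish only when the current commit carries a release tag.
set -e
command -v git >/dev/null 2>&1 || { echo "git not installed, skipping"; exit 0; }

tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "release candidate"
git tag v1.0                            # pretend the team tagged a release

# --exact-match only succeeds when HEAD itself is tagged
TAG=$(git describe --exact-match --tags 2>/dev/null || true)
if [ -n "$TAG" ]; then
    echo "publishing release $TAG"      # placeholder: scp, curl, app-store API
else
    echo "no release tag on this commit, nothing to publish"
fi
```

Run from a Jenkins job after a successful build, this makes "tag it and push" the whole release procedure: untagged commits build and test as usual, and only tagged ones go out the door.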
That is, I am mostly talking about apps here; if you are doing web development, well, then you probably need to adjust these recommendations to your specific needs.
If, however, you are in a more corporate context, you might not just push software into the wild, but rather into a test sandbox first. So this is what we do next.
Test Environment
This shouldn’t be too difficult: whatever kind of device you run on, set up an installation that is as close to vanilla as reasonable, and make sure either that the build server pushes to it regularly or that the device can fetch updates in a reasonable time frame. Again, installation of your test environment should preferably be automated and not require much tinkering.
The takeaway here is that, as I said, you want a tight feedback loop between testers/users and developers. And that feedback loop should be automated, because if it is not, and setting up test devices requires developer intervention, it is doomed to distract your development crew and doomed to deteriorate. That is, bit rot will set in, people will complain, developers will need to fix it, so they cannot do their actual job, and things will deteriorate further: a deadly spiral.
However, if you have a large user community, you can also push updates to selected beta testers and have them report back, for example. The details of that are beyond this article, though.