Software branching is wonderful and, at the same time, a nightmare. It depends on how it is carried out.
Letting each branch live its own life and only occasionally merging leads to a lot of problems, from software design issues to the actual time the merge takes. On the other hand, constant small merges take less time each, but if a change later needs to be reworked, the rework has to be merged as well, creating more unnecessary work.
But, but, I hear you saying, this is a testing blog, so why talk about branching here? Consider the following:
Software release candidate Y is done. This is a major release, and testers are working hard on it. Meanwhile, development continues on another branch to include new features.
Some errors are found, and the fixes are made on the release candidate branch to create a new release candidate, Y2. Release testing is resumed, taking the new changes into account.
Software Y2 is released. All is well.
4 weeks later, a new release candidate Z is done, including the new features (and all the error fixes that did not make it into the previous release). The new features and other changes are tested thoroughly. Everything else gets less attention to save time (it worked before and was not changed, so it should still work).
Software Z is released.
5 weeks later, an error is reported from the market: the same error that was found and fixed during the release testing of Y. Sales are stopped until it is fixed.
After investigation it turns out that one error correction from Y2 was not merged to the new branch of software Z. Why? The branches had grown so distant from each other that the fix was missed among all the other changes. A human error.
It seems that ‘constant small merges’ would have made this almost impossible to happen.
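The post does not say which version-control system was used, but assuming Git (with hypothetical branch names Y2 and Z), a merge like the one that was missed can actually be detected mechanically: list the commits reachable from the release branch but not from the development branch. A minimal sketch, reconstructing the scenario in a throwaway repository:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "tester@example.com"
git config user.name "Tester"

# Initial version, then the branches diverge (names are hypothetical).
echo base > main.c
git add main.c
git commit -qm "initial version"
git branch Y2                       # release candidate branch
git checkout -qb Z                  # new development branch
echo feature >> main.c
git commit -qam "add new feature"

# A fix lands on the release branch but is never merged to Z.
git checkout -q Y2
echo fix > fix.c
git add fix.c
git commit -qm "fix: crash after hours of uptime"

# Commits reachable from Y2 but not from Z: the fixes that were never merged.
git log --oneline Z..Y2
```

Running such a range query (or `git cherry`) before cutting release Z would have flagged the forgotten correction, instead of relying on someone remembering it.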
Another case: we have two similar but different hardware products. One is an upgrade of the other, and some large parts of the software are identical. The software G for the newer hardware is in active development; the older software K is in maintenance. Error corrections for the shared parts are always merged to the older hardware as well, and after the thorough testing and release of G, these fixes in K are released with a small amount of testing.
Everything is going fine.
2 months after the latest release of software K (older hardware), an error is reported from the market: the product sometimes ‘just shuts down‘ all by itself. Sales are stopped, etc.
After investigation it turns out that there was a change in software G to improve the stability of I2C bus communication with a hardware component. As this component is exactly the same on the older hardware, the change was also merged to K. But for some reason, it actually decreased stability there.
It seems that constant small merges can fail as well.
Why didn’t the testers find these problems?
In both examples, the errors occurred rarely and, more importantly, only after the product had been in a certain state for some hours. That is normal for a typical user, but not during testing. Testers usually expect problems to occur while they are actively using the system, which is true most of the time, but not always.
Another problem was that the release notes gave no indication of these merged changes. Testers were unaware of them and thus did not know to take them into account.
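If the history lives in Git, the list of changes between two releases does not have to depend on hand-written release notes at all; it can be generated from the repository itself. A minimal sketch in a throwaway repository, with hypothetical tag names for the two releases:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "tester@example.com"
git config user.name "Tester"

# Two releases, marked with tags (names are hypothetical).
echo a > a.txt; git add a.txt; git commit -qm "initial"
git tag releaseY2
echo b > b.txt; git add b.txt; git commit -qm "merge I2C stability fix from G"
git tag releaseZ

# Everything that changed between the two releases, straight from history:
git log --oneline releaseY2..releaseZ
```

A log like this, handed to the testers along with the release notes, would have at least made the merged I2C change visible to them.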
No matter how the branching and merging is arranged, testing is still needed. And the more information about the software changes is available to the testers, the better.