Wednesday, October 12, 2016

Product Owners don't cut Wood


Does this sound familiar to you? There is this backlog grooming session every week. The product owner presents his user stories. They have titles no one in the room understands. They follow the Connextra template of as-a, I-want-to, so-that. Reading them you sometimes feel like you get a notion of what the title may have been meant to mean, but it doesn't take hold. It just keeps slipping away. And there are these acceptance criteria. In full detail. Describing a solution to a problem that you do not fully understand. Somehow you come to think of it as yet another more or less far-fetched special case that has to be added. Everything the product owner tells the team about this story is so rich in detail and so deep into the "How" of it that it resembles a specification. Did I mention that this is the first time you hear about this particular story?

You feel like you would like to know what problem should be solved, what the customer would like to be able to do, what context this case relates to. You would like to know that big picture thing, but all you get is tons of technical detail. You feel lost and drop out of the conversation.

Ever had this experience? No? Lucky you. The whole thing wouldn't be too bad if it were just wasted time spent in a boring meeting. But it's worse than that. There are two options.

First, the team ignores the very detailed, spec-like acceptance criteria when implementing, and the product owner barely checks what he really gets. The consequence is that the product owner thinks he got a certain product while the team built a slightly different one. His understanding of the product runs out of date, and so do his user stories.

The second option: the team actually builds what was requested but does not take responsibility for the product. The product starts to resemble a bunch of special cases. Similar things are not done in similar ways. Features start to become mutually exclusive. (Yes, this can happen. I have seen it ...) And the code base reflects this. Adding new features takes longer every time. Even minor changes take ages. Bug rates increase. You are in a mess. You know you should clean this up. But you don't know the direction, or any direction at all, because all you know about the problem space is detail.

In both options the team will have a hard time identifying with the product they build.

Cutting Wood

Do you smell the smell? What's wrong here? To put it in an analogy: the product owner takes the team on a tour into the woods. They walk around and pick a tree to cut, and another, and another. Here and there. Randomly, it seems. Then he picks them all up and drops them in another forest, where they pick another set of randomly chosen trees to cut.

No one has any idea what made a tree so special that it had to be cut and why the tree next to it did not qualify. No one has any idea where they are or how they got there in the first place. They know the trees they've cut, and since there is no real memorable connection between them, they might have forgotten about a number of them already.

What the team fails to see is the landscape around them: the patches of trees, the forests, the meadows and creeks, lakes, hills and pathways that form the world they have to move in. They have no way of navigating their surroundings on their own. They depend on their guide. They depend on their product owner to tell them what to go for next.

This puts the team in an uncomfortable position. They cannot fulfill the role they are supposed to play in Scrum. They are not in a position to act at eye level with the product owner.

Shape the Landscape

To stay with the picture: the product owner's job is to shape the landscape, to develop a big picture of where to place a forest, a lake, a creek; where to lay a path or spread a meadow. The detail work will be done by the team on its own account. If there shall be a forest, trees have to be planted. If there has to be a simplification, some trees have to be cut.

The team always sees the tasks in the context of what else is required. They can bring their detail knowledge to the big picture and help refine and adjust it if needed. They have the chance to understand connections between tasks and might offer shortcuts or optimizations.

What a Product Owner should do

The job of the product owner is to take care of the big picture of the application to be built. She has to balance the sometimes conflicting requirements. She has to balance the stakeholders to keep them happy and satisfied with the product and the way it moves on.

The product owner has to serve as the on-site customer for the development team. She has to convey the requirements and the context they come from to the team. She has to understand the problems that should be solved and what the customer really wants to be able to do. She is the person that connects the team to the problem domain and has to make sure the domain and its terminology are known by the team.

Especially the last point. User stories should reflect the problems in the language of the domain. Acceptance criteria have to use the domain language to describe what should be possible when the implementation is done. The product owner is a major player in building the ubiquitous language both the customer and the development team understand, just as described by Domain Driven Design. Using this ubiquitous language opens up opportunities for the team. It offers possible abstractions they could use in their implementation, something a very detailed and technical description of the problem never could provide. It helps the team to see connections between features and to further drive abstraction in the code. And with that it helps to improve extensibility and maintainability of the code and to make sure the speed of feature development can be sustained over a long period of time.
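To make this a bit more tangible, here is a tiny, completely made-up sketch (the invoicing domain, the term "grace period" and all names are my invention, not taken from any real product): once the acceptance criteria speak the domain language, the very same term can become an abstraction in the code instead of a scattering of technical detail.

```python
from datetime import date, timedelta


class GracePeriod:
    """A term taken straight from the ubiquitous language: the number of days
    a customer may exceed the due date before an invoice counts as overdue."""

    def __init__(self, days):
        self.days = days

    def is_overdue(self, due_date, today):
        return today > due_date + timedelta(days=self.days)


# An acceptance criterion phrased in domain language ("an invoice is overdue
# 14 days after its due date") maps directly onto the abstraction:
standard_terms = GracePeriod(days=14)
assert standard_terms.is_overdue(date(2016, 1, 1), today=date(2016, 1, 20))
assert not standard_terms.is_overdue(date(2016, 1, 1), today=date(2016, 1, 10))
```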

In a way one could conclude that by the way she writes her stories a product owner can drive a healthy code base or a horrible one. It depends.




Read also:
On Coaching Agile - What I've learned from being an agile coach
On Sporadics - How to deal with intermittent tests in continuous delivery
On Agile Testing - Do we need testers?



The opinions expressed in this blog are my own views and not those of SAP

Wednesday, September 28, 2016

There Are No Agile Testers - Balancing the power

This is a follow-up on my posts regarding developer testing versus tester testing, which came to the conclusion that, rather than having a QA department or a micro QA embedded in an agile team in the form of a number of testers, I'd prefer to have developers taking on a QA and QC perspective in their day-to-day work. All of the posts negated the need for a QA department.

I intend to change my mind. A bit. Sort of. At least.

Feature vs. Quality

I have been able to observe the situation that induced this partial change of mind for quite some time now. In an ideal world development teams would take on their responsibility for testing and a reasonable QA. There are places where this actually happens. And there are many more places where this turns out to be a challenge for the teams even if they are willing to do so.
In many shops development teams are faced with high feature pressure imposed on them by product owners and/or management. Ever more features are requested and sold before they are built. Development teams get forced to deliver even if the quality of the features is not at a level that would suit their own standards. Nor does it fulfill the standard the customer expects.
The quality often is so poor that merely the happy path works. Any step aside leads to trouble, meaning bugs being reported, adding to the ever-increasing demands the development team has to keep up with. In the end, the people that force the team to deliver buggy software are the same people that complain about poor quality. I have often witnessed conversations like:


PO: I want that feature end of week.
Dev: We are not able to deliver that soon. We're not done. There are some issues to tackle yet.
PO: Doesn't matter. I take the risk.


To be sure: the PO would say anything to get the pressure off his chest by delivering anything he can lay his hands on. He knows very well that he is not taking the risk at all. The development team will have to handle the outcome of this undue risk taking.


The White Knight: QA

Here is where a QA department or QA staff comes into play. Given the QA lead is equipped with the same prestige and rank as the management or product owners that force the development teams into bad behavior, QA could serve as support for development teams in their effort to ship quality software instead of premature features. QA is there to balance the power, so to speak, putting weight on the side of quality.
It is not about huge organizations of testers swarming out to test all the bugs out of the features. It is about a department that is able to define minimum quality standards and KPIs that have to be met before anything leaves development, and that is able to enforce them.
Sure, this adds pressure on the development teams from one side, but it relieves them on the other. QA will demand that the minimum standards are met by development. And QA will prevent stuff from being shipped if the minimum standard is not fulfilled, thus achieving what development teams are not able to achieve for lack of power.
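Just to sketch what "minimum standards that are actually enforced" could look like in a delivery pipeline (the metric names and thresholds below are made up for illustration, not a recommendation):

```python
import sys

# Hypothetical minimum standards a QA department could define and enforce.
MINIMUM_STANDARDS = {
    "unit_test_coverage": 0.80,    # at least 80% line coverage
    "open_blocker_bugs": 0,        # no known blockers
    "failed_acceptance_tests": 0,  # all acceptance tests green
}


def gate(metrics):
    """Return True only if every minimum standard is met."""
    return (
        metrics["unit_test_coverage"] >= MINIMUM_STANDARDS["unit_test_coverage"]
        and metrics["open_blocker_bugs"] <= MINIMUM_STANDARDS["open_blocker_bugs"]
        and metrics["failed_acceptance_tests"] <= MINIMUM_STANDARDS["failed_acceptance_tests"]
    )


if __name__ == "__main__":
    # In a real pipeline these numbers would come from the build;
    # here they are hard-coded to show the mechanism.
    current = {"unit_test_coverage": 0.72, "open_blocker_bugs": 1, "failed_acceptance_tests": 0}
    if not gate(current):
        sys.exit("Quality gate failed: the release candidate does not ship.")
```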


Balance of Power
To sum up all of my posts: development teams have to do their testing themselves. It makes no sense to build micro silos within teams by adding testers that do testing and nothing else. There should be a convergence of tester and developer to be better able to load-balance within the teams according to the needs of the features to be built. However, there are testers that turn into test enablers by providing test automation tools to the development teams or by educating developers with respect to testing. There may even be some testers that do some very specialized testing like user acceptance testing or usability testing. And finally some of the testing staff will turn into QA - or remain there - to serve as a counterweight to over-ambitious product owners and managers.


So, finally I admit: There is a need for a QA team. A team to set standards and to enforce them. The obligation to fulfill these standards lies with the development teams. The tools and knowledge they need will be provided by former testers.





Thursday, June 23, 2016

Do experienced developers have to learn?

Just recently I've had the not so nice experience of ignorance presented publicly. There was a session on software engineering that a colleague and I had compiled for our department. The narrative was around extensible architecture, a topic that needs some promotion around here as we are faced with a bunch of legacy code.

Virtually no one showed up. Afterwards I figured that they were either too busy or just not convinced that a topic like this would apply to them, for they are experienced developers with some 10, 15 or even 20 years of experience in corporate software development. They just know how to code.

Who actually wrote all this legacy code, then? I wonder.

What happens here is a nice little mistake: experience == applicable knowledge.

This is no new situation to me. I'm faced with claims like that all the time. Just like a UI developer once told me that she would not provide any mock-ups for the team to discuss and commit to, for she has been an experienced UI developer who has known how to build UIs for years. Again a manifestation of "experience outweighs everything else".

What’s wrong with a developer so convinced and self-assured that he would give a statement like this?

Basically nothing when it comes to self-esteem. Basically everything when it comes to the ability and will to learn and to progress in one's skills and abilities.

The guys giving these statements were the ones that showed little impulse to question themselves, to reflect on their skills and how they fit the needs at hand. It definitely has consequences when a skilled procedural programmer tries to apply her skills to an object-oriented language and environment. The result is a legacy code base that is not extensible, supportable or testable, as can be observed in our code base.

Why does some otherwise intelligent person fail to see this difference? Do they really not see the difference? Or is there something else?

I haven't come to a conclusion about this yet. My hypothesis circles around uncertainty, fear of loss of control, self-betrayal and plain ignorance.

Thinking about being in the situation of having 20-odd years of experience in software development (which I have), considering myself an expert (which I do), and someone not even working in my problem domain coming around trying to tell me how to do things differently (which I basically did), I guess I would have, and show, my objections to that person.

In the end this person would question me. The expert. He would basically say that my work was less optimal and perfect than I consider it to be. I could imagine myself not giving a damn about what he says. I just would not feel comfortable admitting that my work over the years has been less than excellent.


Switch back. While these feelings are justified and cannot be ignored, something has to happen to improve the code quality and the design. In my opinion the admission of less than optimal work, the admission of failure, is too closely related to failing as a person, too closely related to not getting the bonus granted by the boss. A culture of failure needs to be there: a culture where making and admitting a failure is understood as a learning opportunity; a culture where there is no need for a never-failing expert who tells everybody how things are handled here and who has to approve any idea. We need a culture where true expertise shows itself in the experience that everyone makes mistakes and in the simple insight that no single person can know everything. Even an experienced expert-level developer can learn something from a complete newbie every now and then.




Thursday, April 7, 2016

Approaches to Distributed Development of a Software Product

Note: This article will be published as a series of installments. See the installment history at the end of the article to track changes.

Introduction

Just recently I came to think about a proper setup for a product developed by distributed teams. As it happens, the use of git was a prerequisite. When working with Distributed Version Control Systems (DVCS) like git, teams see themselves faced with the task of figuring out how to organize the source code they contribute to a larger system. As git is a powerful tool with loads of features and means to do things one way or the other, it offers both simple and rather complex solutions. In this post I want to explore some major approaches and compare them with each other. I admit that I am biased by concepts like Continuous Integration (CI) and Continuous Delivery (CD), which may influence my conclusions.

I consider this an experiment and will define an example product to set some constraints for the exploration. Some findings may be restricted to this setup, others may be more commonly valid. However, none of the conclusions claim to be universal. The approaches investigated are taken from daily life. Every single one of them crossed my way and I consider all of them worth looking at. Any idea, as far-fetched or remote as it may seem, is worth at least providing reasons why it wouldn't be a good one.

Sample Product

Let's assume a product of considerable size, say 1M LoC. Let's further assume the product consists of a number of large components, say 5-10, which themselves may be made of smaller components. Team setup follows the top level component structure by and large, although an individual may see the need for changes in several components. The development team consists of 50-500 people actually touching code. Finally the product ships as one. There are no independent releases nor patches of parts of it.
None of the components is intended to be reused by other products. Components are a reflection of the current architecture of the product.
The current dependency structure of our product looks like:


Components A, B and E are top level components forming the collection of services the product consists of. Component F forms a UI framework that components A and B plug into when present. F does not care for any components plugging into it. Components BA and BB are backends to A and B respectively. Component E is an extension to B.
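Read as a plain dependency map, one possible rendering of this structure (my reading of the description above, not the original diagram) would be:

```python
# Each component maps to the components it depends on (uses or plugs into).
DEPENDENCIES = {
    "A": ["F", "BA"],  # service A plugs into UI framework F and uses backend BA
    "B": ["F", "BB"],  # service B plugs into UI framework F and uses backend BB
    "E": ["B"],        # E is an extension to B
    "F": [],           # UI framework, unaware of what plugs into it
    "BA": [],
    "BB": [],
}
```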
Components are not necessarily identical with libraries, archives or any other sort of distributable artifact. Basically they are a logical structuring of the source code.
The product has a maintenance obligation with respect to already released versions.
What approaches could be applied to organize the source code of the given product? These are the ones that pop up in my mind:
  • Component Repositories
  • Topic Branch
  • Feature Branch
  • Trunk Based Development

Approaches

When exploring the different approaches I will try to shed some light on a bunch of questions that far too often do not get considered. These questions touch several aspects of a software development life cycle. Think of questions like:
  • How do we get access to the component we depend on to make use of it in our component?
  • How do we make sure we get information about public API changes soon enough for us to incorporate them?
  • Should there be orchestrated schedules for component releases?
  • How do we handle splits/merges of components?
  • How do new top level components come to life?
  • How do top level components cease to exist?
  • How often do components ship new versions?
  • How do we make sure there will only be one version of each component used inside the product? Or how do we make sure components A and B are developed against the same version of component F?
  • When will integration testing be done?
  • How will it be done?
  • How does a component test itself in the context of the product? Or in the subcontext of its dependency tree?
  • How is the product being assembled?
  • Which component versions should be used?
  • How do component versions get managed?
The list may not be complete. It already holds some tough issues, though. What would be the answers when we work with separate Component Repositories?

Component Repositories

Our development team values highly decoupled components which interact via public APIs only. Any use of non-public APIs is prohibited. The developers understand the temptation of using non-public APIs for the sake of re-use and want to avoid that by hiding the sources of their components from other components as much as possible.
Our development team learned that a git repository for a large product worked on by many developers tends to become large which would increase time to clone and fetch. I've seen such repositories exceeding 4GB.
Idea: A small repository only containing one top level component worked on by 5-10 people seems to be a fair trade. The team could work in isolation. Their repository would not be littered with source code they do not own or are likely never to touch. Things are simple when it comes to developing their component.

Development

For a developer working on component F, which has no dependencies or only dependencies on 3rd party code, life would be rather easy in this world. Only things one has to deal with directly are present. One could build the whole component including tests in a quite focused way. As developers are free to add or remove sub-components of their component rather freely, they would not feel much of a downside.
Not all of the teams are happy with that setup, though. While the team providing the top level component F is quite happy with this approach, the teams depending on them are not. Why is that?
In order to plug into component F, components A and B need to know about and have access to the currently valid public API, to at least be able to mock the dependency away in their tests and to use the right calls in their production code. However this may be done in the particular language the product is being built with, there has to be some sort of communication: either interfaces have to be provided as files or as API documentation. Depending on the language these files have to be present during the build. At the latest when running integration tests of component A or B with component F, a real component F is required.
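A minimal sketch of what that looks like on the consuming side, with a hypothetical public API of F invented for illustration: component A's unit tests can mock F away behind its published interface, but an integration test still needs the real thing.

```python
from unittest import mock


# Hypothetical public API of component F, as it would be published to
# consuming teams (as interface files or API documentation).
class UiFrameworkApi:
    def register_plugin(self, name, entry_point):
        raise NotImplementedError("a real component F is required")


# Hypothetical code inside component A that only talks to F's public API.
def activate(framework):
    framework.register_plugin("component-a", entry_point=lambda: "A main view")


# Component A's unit test mocks F away entirely ...
def test_activation_registers_component_a():
    framework = mock.Mock(spec=UiFrameworkApi)
    activate(framework)
    framework.register_plugin.assert_called_once()

# ... but an integration test of A with F still needs a real, released F.
```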
This adds an obligation to every top level component development team's responsibilities: there has to be some way of releasing the component that other teams can rely on for their development. They have to maintain a release schedule, and they have to actually release the component and make it available for the other teams to consume. Usually there will be some stable and some development version available. These versions could be used by components A and B for their development.

Integration

As long as component F publishes new versions on a regular basis, there will be some sort of "continuous" integration available. Components A and B could make use of the latest component F version and report bugs found in F or fix their own usage of F accordingly. Depending on the release cycle of F, the feedback loop stretches from rather short to pretty long. During the development phase this may not be a problem, but when the release date closes in it rather certainly will turn into an issue.
Real continuous integration would be hard to achieve. Even if component F publishes release candidates with every pipeline run, they would have to be verified by the depending components before they could turn into released versions. Thus component F depends on the pipelines of each component up the dependency tree to verify successful usage of F, and of any component that uses F, and so forth. The verification pipeline for F becomes pretty long, and in case of bugs found it would have to start all over again.
What's more, if component A uses the development version of F to stay close to F's newest features, it relies on these development versions actually being released before the release date of the product, as no development version of F will be shipped with the released product.
Another complication is the possible divergence of the F version used by components A and B. Just to make sure they are actually using the very same version of F, there needs to be some governance enforcing this constraint.
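Such governance could be as simple as a check like the following sketch (how the declared versions are collected is left open; all names and versions are made up):

```python
# Hypothetical record of which version of F each consuming component
# was built and integration-tested against.
BUILT_AGAINST_F = {
    "A": "3.2.1",
    "B": "3.1.0",
}


def enforce_single_version_of(component, declared):
    """Refuse to assemble the product if the consuming components disagree."""
    versions = set(declared.values())
    if len(versions) != 1:
        raise RuntimeError(
            f"Components disagree on the version of {component}: {declared}"
        )
    return versions.pop()


# enforce_single_version_of("F", BUILT_AGAINST_F) would fail here,
# flagging the divergence before the product is assembled.
```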

Integration Testing

When teams are focused on developing their components, they tend to consider any usage of their component the business of someone else. The integration of component A or B with component F will probably be tested, but the product as a whole will not. Who would be responsible for performing this assembly task with all its required testing?
The product in question is an assembly of just the right versions of all its components. Thus the product would be represented by a bill of material (BOM) only. The product assembly would pull in all the named versions of the components and perform the required packaging. What about the testing then? There would have to be a team that takes care of this assembly and of the integration testing to make sure the BOM holds a valid and working combination of component versions. The assembly pipeline would have to run the integration tests of all components and would eventually have to provide additional integration tests on product level. This team would not develop anything in terms of production code, which bears the risk of them not knowing about the features implemented. Dedicated communication would be required to make sure the assembly (or testing) team knows what to test for.
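As a sketch, and nothing more (a real BOM might be a Maven POM, a JSON file or whatever the build uses; names and versions are invented): the product as a bill of material plus an assembly step that pulls the named versions.

```python
# Hypothetical bill of material: the product is nothing but a mapping of
# component names to released versions.
PRODUCT_BOM = {
    "A": "1.4.2", "B": "2.1.0", "E": "1.0.3",
    "F": "3.2.1", "BA": "1.4.0", "BB": "2.1.5",
}


def fetch_artifact(name, version):
    # Placeholder: would download the released artifact from an artifact repository.
    return f"{name}-{version}.zip"


def assemble(bom):
    """Pull in every named component version and package the product."""
    artifacts = [fetch_artifact(name, version) for name, version in bom.items()]
    # Placeholder for the real packaging step: here the "product" is simply
    # the list of collected artifacts.
    return artifacts
```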
Another risk is product level tests breaking due to changes in components. As the product level tests run in the product assembly pipeline, no component pipeline will run them and thus will not get feedback from them. It is the same as component A running integration tests with component F, which could find bugs in F outside the pipeline of F. At any of these points there will be feedback someone would have to communicate to the component depended upon. This feedback would have to trickle down the dependency tree with all the communication that comes along with that.
It would be best if a component could test itself in the context of the product within its own pipeline. To do that it would have to get access to the current BOM describing the product and to the product level tests. In order to run the product based tests it would have to build the product based on this BOM and replace its own entry with the component version under test. Component A's build and test process suddenly needs to know about the product and its assembly, thus duplicating knowledge.
Another way would be to trigger the product assembly pipeline by replacing the version of component A in the BOM with the latest release candidate of A. If the product assembly pipeline succeeds, the release candidate can be considered verified, it can be released and the BOM of the product can be changed accordingly. In this case the knowledge would not be duplicated, but we would need a feedback loop from the product assembly pipeline back to the pipeline of component A. In order to get close to continuous integration, any pipeline run of component A would include and wait for a pipeline run of the product assembly.
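That feedback loop could look roughly like this (again only a sketch with placeholder functions, assuming a BOM mapping like the one above):

```python
def run_product_assembly_pipeline(bom):
    # Placeholder: would assemble the product from `bom` and run the
    # product level tests, returning True on success.
    return True


def publish_release(component, version):
    # Placeholder: would promote the release candidate in the artifact repository.
    print(f"released {component} {version}")


def verify_release_candidate(component, candidate_version, bom):
    """Substitute the release candidate into the BOM, run the product
    assembly pipeline and promote the candidate only on success."""
    trial_bom = dict(bom, **{component: candidate_version})
    if run_product_assembly_pipeline(trial_bom):
        publish_release(component, candidate_version)
        return trial_bom   # becomes the new official BOM
    return bom             # candidate rejected, BOM unchanged
```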

Release

As said before, the product is represented as a bill of material (BOM) containing the proper components and their versions. At this point uniqueness of components could be enforced, i.e. the version of component F to be used. Following the one-repository-per-top-level-component approach, the product as the topmost component will reside in its own repository along with the product level tests.
Releasing would include collecting all released versions of the components making up the product and running the product level tests, as there would be no other tests available at this level. If the versions of, say, component B and component F do not fit together because component B was using a different version of F in its integration testing, there is the risk that product level tests would not discover this mismatch, as they will not redo the level of testing done at component B's level. To avoid this, means would have to be provided to mitigate it, such as: a component's integration tests are made available to the components using it, a component gets hold of the BOM of the product to make sure it uses the proper versions of all components it depends on, and so on.
This would introduce yet another set of communications required to mitigate issues induced by the general approach of Component Repositories.

Refactoring

As long as refactoring takes place within the boundaries of a top-level component, things are fine. When it comes to structural refactorings of the product, e.g. the introduction of new top-level components or the removal of top-level components, it becomes cumbersome.
If there is to be a new service C, we could just create a new component repository and start working on C, adding it to the integration like the other components.
If an existing component ceases to exist in a newer product version, we just cannot get rid of it as long as the maintenance obligation exists. There will be a legacy component around in a repository no one works on full-time any more. This usually causes the component to rot, for no one will take on responsibility for it. It is just too far out of sight.
Component Repositories make it hard to factor out new components. Consider a part of component A that would be useful for component B as well. How would B get hold of it? The cost of introducing a new component repository for the new reusable component would be quite high. So copying the code and adding it to component B's repository seems reasonable, especially as long as one is still wondering whether this part really is reusable by B. If it were reusable, and if someone really wanted to avoid the code duplication and open up a new component repository, who would be responsible for that? The newfound component would not be a top-level component, so there would be no dedicated team apart from the one for component A. Would this team be responsible for two repositories now?

Summary

The Component Repositories approach has its advantages when it comes to the development of a leaf component. As soon as interaction with other components due to dependencies is involved, things get messy. Components suddenly need release management and version governance to make sure every component is on the same page. Especially the product assembly part will become a matter of discussion, for no component development team will take responsibility for this integration level. A product assembly team would have to deal with it and would have to take care of product level testing itself.
Communication would be key in this approach. Whether it is done by introducing additional automatisms to connect component repository pipelines with each other or by human interaction, it adds complexity and the "one has to think of it" sort of things, which tend not to be thought of.
As long as the components are not real deliverables in their own right, i.e. used outside of the product or patched individually, I would consider this approach not practicable.


Approach: Component Repositories, summarized along the dimensions Development (leaf component), Development (non-leaf component), Integration, Release, Refactoring, and Organizational Complexity.
Conclusion

As I've only considered one approach so far, the only conclusion I can offer now is that I would not like to go for Component Repositories, even without knowing a better alternative yet.

Change History

This is the first installment.

Wednesday, November 11, 2015

Agile Testing Days 2015: There Are No Agile Testers - There Are Testing Facilitators

I had the opportunity to attend Agile Testing Days 2015 in Potsdam. It's been the second time. And again, there have been great sessions, inspiring talks, eye-opening chats. But still there is something that bothers me. If you follow my blog you might have noticed the "There Are No Agile Testers" blog posts.


To say the least, they have been received a bit controversially. Many testers felt personally offended. I could understand this to some extent. A year has passed since then and the idea circulated in my thoughts. I was trying to understand what really bothers me about the agile tester thing. I'm not sure I'm done with it yet. But new aspects revealed themselves.

The thing that bothers me the most is the fact that just bringing testers into agile teams does not solve the issues we've had with development and QA departments. Testing still comes last in the development cycle and tends to be skipped or blamed for being a bottleneck. This is nothing I made up, but something that has been said by testers at the conference. In a way the testers in agile teams establish a silo just as the developers do. Developers rely on testers to get quality into the product and complain if testers do not manage to handle the amount of user stories done, for coding stories tends to be faster than thoroughly testing them.

Over the years I became more and more opposed to silo thinking in teams. This discomfort still grows. I try to find ways that could help to overcome this dangerous tendency. My experience from many years shows that whenever a team starts to separate into silos, team performance, quality, and outcome drop dramatically. The team starts to dissolve, and I've even seen teams fall apart completely.

A second aspect I grow ever more uncomfortable with is the way developers are pictured as guys not willing to test, not willing to care for customer needs, not willing to care for quality. A great many developers may fit this description. But another great many developers care for concepts like Continuous Delivery, Lean Startup and DevOps. All of these rely heavily on being responsible and accountable for one's quality. Developers show that they are willing to produce stable, high-quality code that covers actual customer needs. That they are willing to measure customer acceptance and to act accordingly. That they are willing to ship to production as often as possible. I reckon (a new generation of) developers understand(s) pretty well that they are no longer sitting in the basement coding all day long without ever bothering themselves with any consequences their work might have for the world around them.

For quite some time developers proved to be no good at testing. Whether they are just testing agnostics, arrogant my-code-will-not-break guys or anything else you might think they are doesn't count. Testing did not take place in a way and amount that would have been desirable. Because of that, QA people had to be hired to clean up the mess developers left behind. But no one bothered to tackle the root cause: improving the quality of the code from line one. This would have meant dealing with these strange guys in the basement. So an opportunity has been missed. The mere existence of a QA department that made up for the mistakes developers made encouraged them to code even more carelessly. There simply has been no need to do otherwise.

It is time to reverse this development. Now, as developers develop a sense of responsibility, testers are urgently needed to share the knowledge and experience gained over so many years of testing. This knowledge has to be shared with developers. Testers are urgently needed to challenge developers to take testing beyond unit testing seriously. There is far more to testing than that, as one can learn from "Agile Testing" by Janet Gregory and Lisa Crispin.

Me, being a developer for most of my professional life, I would wish, if not expect, from testers that they make testing an integral part of a developer's daily business. Testers in agile teams have to become test facilitators. There is no way around that. If an agile team were staffed with as many testers as you would need to make sure all user stories are covered with acceptance tests, all new code is covered by unit and component tests, all security, usability, performance and you-name-it tests are done, up to thorough exploratory testing, one could easily end up with maybe 3 (or more?) testers per developer. Would this be the way you would like to go?

In my humble and honest opinion we would need to tackle the problem from two sides.

1. Testers Coach Developers


Testers gained insights into lots of different aspects of testing, including experience of typical hot spots, especially when it comes to integration testing. Testers would need to pair with developers to support them when writing these kinds of tests, figuring out what test cases are needed and how to best test them. It may be that while doing so testers become familiar with development itself and eventually cease calling themselves testers. I would consider it a great achievement for our industry if testers did their coaching job so well that developers are able to do the necessary testing and former testers turn to writing code themselves. In a way we would have to call them all Agile Engineers. Then Continuous Delivery and DevOps would fully unfold their potential.

2. Testers Provide Automation Frameworks


Not all testers would like to turn to bare development. I would propose another direction for them. Many developers do not care for security or performance testing because there are no frameworks and no infrastructure available that would perform these tests reliably and make them easy to write and evaluate. Any of these frameworks needs to be made available in build and test pipelines. If they are not available that way, these tests will hardly be written. Developers need to be urged into writing these kinds of tests by making it unavoidably easy. Whether these frameworks have to be implemented from scratch or can be bought, someone has to set them up for use in pipelines. Someone has to educate developers on how to utilize them.
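To give one small, made-up example of "unavoidably easy": if the test enablers ship a helper like the one below with every pipeline, a developer can add a basic performance check in two lines instead of building timing infrastructure first (the helper and its name are hypothetical).

```python
import time
from contextlib import contextmanager


@contextmanager
def assert_completes_within(seconds):
    """Hypothetical helper a test-enabling team could provide in all pipelines:
    fail the test if the wrapped block takes longer than `seconds`."""
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    assert elapsed <= seconds, f"took {elapsed:.3f}s, allowed {seconds:.3f}s"


def test_search_is_fast_enough():
    with assert_completes_within(0.5):
        sorted(range(100000))  # stand-in for the real call under test
```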

3. Testers as Integrators


Developers tend to be off by one, and so am I. There is a third category of testers: the ones that never ever wrote a line of code and are not fond of doing so at all. There are businesses that have to fulfill legal requirements with respect to quality assurance. There are businesses that build huge products with millions of lines of code contributed by teams not at all co-located and often enough not well connected. Products like that tend to have integration issues and no one feeling responsible for them. These are areas where testers in a more classical sense would still be needed, without the urge to turn into developers.

Conclusion


Testers in agile teams should try to see themselves as coaches and facilitators to spread the art of testing. Developers need to be educated and enabled to do lots of testing on their own while and before writing any code. Developers need to learn to look at what they do from a user's side in order to be able to decide in favor of a user's needs.

Testers could provide frameworks for automated security, performance, product life-cycle testing and the like. These frameworks have to be made available to developers in their daily work to make these kinds of testing an integral part of coding, well before a user story is labeled DONE.


The tasks testers will face in the future might change. For some this change may even be dramatic. But I think we cannot afford to move on as we did before. It is time for Developers 2.0.





