Quality skills vs roles

Resources

Join Murray Robinson and Shane Gibson as they discuss why quality is the whole team’s responsibility. We discuss why it’s critical to integrate QA processes from the beginning, the importance of clear acceptance criteria, how to handle technical debt, and the continuous improvement process. Tune in for an engaging and informative discussion on quality in your agile projects.

Recommended Books

Podcast Transcript

Read along you will

Shane: Welcome, I’m Shane Gibson.

Murray: And I’m Murray Robinson. 

Shane: And today we’re going to talk about quality. Quality assurance and how that role fits within an Agile team. Should we actually do it, or should we just wait for our customers to find the things they don’t like about the product as a quick way of testing?

Murray: I have mixed feelings about what you just said, Shane. On the one hand, if you know what you’re trying to do, then an Agile team should aim for zero defects in production. But if you don’t really know for sure what your customers want, then it actually is a really good idea to get something out there to show them as soon as possible and get their feedback. But anything you put out to people should work.

Shane: Yeah, if you have a pull-down widget on your screen, a little filter thing, it should probably have a list of values when you pull it down. When you click on one of those values, it should probably do the thing that it’s on the screen to do. So how do we assure quality? 

Murray: So for me, quality is something that is acceptable to the customer, the user, your internal client, your product owner. You’re doing things for other people, so it has to meet their needs. And you need to find out what they need and have some agreement on what it is that you’re going to do. And then you should make sure that what you’re doing does actually meet what you agreed to do. 

Shane: So when I’m working with a team, I get them to define a definition of ready, a definition of done, and a definition of done done. The definition of ready means the work we’re about to do is ready to be worked on. So there’s a bunch of checklists or criteria to say the acceptance criteria have been written and backlog grooming’s been done. There’s a bunch of things that we want to check have been done so that we’re ready to go.

The definition of done for me is the team setting the standards of what they expect to do before they think the stuff’s done and ready for acceptance by a product owner, so that would be things like peer review and performance testing. There’s a bunch of things the team just expect to have happen before they give it to somebody else. And the definition of done done is typically around the acceptance criteria that the product owner or the stakeholders have given the team. So making sure that, while we know it runs and it works, does it actually meet the objective we were given? So for me, those are the things that have to be put in place to make sure we have a quality product coming out the other end.

Murray: Yeah, and done means it’s deployed into production for people to use and it’s working. 

Shane: Yeah, actually, one step further: it’s that the value that was expected from that piece of work has been delivered. So if we’re making a change to a sign-up screen and the product manager has defined the value of this change as increasing sign-ups by 5%, then we sign off as done when that value is achieved. For me, pushing to production is really just a mechanism. It should be done by default. It shouldn’t be something we celebrate, it should just happen. 

Murray: But there’s a difference between having something that’s working in production as the product owner said it should. And having something that meets the business objective. 

Shane: Yeah, it’s all hard. Asking a product owner to define good acceptance criteria that we can use to make sure we meet their goals, that’s hard. And then defining a business metric that we would expect to move as a result of this investment, that’s hard as well. So I’m not saying it’s easy. 

Murray: I would say that last one, that the feature you deployed has increased sign-ups by more than 5 percent, isn’t really a measure of quality. It’s a measure of whether you’ve achieved a business outcome. I’d say this feature is done, but it didn’t achieve the goal as much as we wanted, so now we’re going to do another version of it, another iteration of that feature.

Shane: Yeah. So for me, that definition of done done shines a light on the product ownership and product management within the organization. The example that we will often see is teams that spend a large amount of time getting some data ready, creating some metrics and putting them on dashboards. And when we monitor that dashboard, nobody uses it. So there needs to be some rigor around the quality of how we make decisions on what we invest in. And that’s not typically up to the delivery team; it’s the rest of the organization that’s trying to define it. So for me, that’s why done done is important. The team spent time building it. Did we actually achieve the outcome the organization expected?

Murray: Okay, so I suppose that’s a measure of the quality of the requirements you’re getting from the product owner, that they actually achieve the business goal.

Shane: Yeah. Cause what’s the point of investing all that time and money without achieving the business goal? 

Murray: So I reckon that you can have defects in requirements and design and architecture, you can have defects in the code, defects in the integration, defects in the implementation. So there’s defects everywhere. Normally when people talk about defects they mean defects in the code and downstream from there. Generally people ignore defects in requirements and design and architecture, or they just say that’s a change request. 

Shane: Yeah, I have seen teams that are starting out where, when we talk about their QA process, they talk about another developer peer reviewing. A bad example is developers who are peer reviewing go in and go: it’s using tabs, not spaces, to format it. I ran the code and it executed and did not fail. Peer review done. 

Murray: That seems low value for a peer review. 

Shane: The quality of that code is questionable. We didn’t look at how fast it ran. We didn’t look at whether it was actually producing a result that met the acceptance criteria. We didn’t look at how maintainable that code is in the future. We didn’t look at how well documented it was in case we had to come back and touch it. There’s a whole lot of QA tasks that we should do over and above it’s nicely formatted and it runs without an error. 

Murray: Yeah, so my experience doing waterfall projects, Shane, is that you get large numbers of defects at the end of a project. I’ll give you an example. I was running a large waterfall project for a telco about 12 years ago with a large, well-known Indian service provider. And when we got into testing, we found thousands of defects. And we were getting defects on defects. They’d say, this is fixed, and we’d test and find that actually you’ve only fixed half of the problem and the other half is still there. There were so many defects that we did this thing that everybody does in Waterfall: we triaged them. Is it a Sev 1, 2, 3 or 4 defect? Sev 4 is a trivial user experience issue, because that doesn’t matter, right? Sev 3 is some sort of minor functional thing you can work around. Sev 2 is a major functional thing you can work around, which would be a really awful experience. And Sev 1 is: it doesn’t work at all.

So we would triage everything and say, in order to deploy, we have to fix all the Sev 1s and Sev 2s and some of the Sev 3s, but we can probably go into production with a couple of hundred known defects, that’d be okay, because it’s more important to hit the date so managers can say that they hit the date. And then when it’s in production, Ops will fix it, so that’s fine. That still happens on fake Agile projects too. But on real Agile projects I’ve done, we’ve gone into production with very, very few defects. The only defects we had were ones where the production environment was set up differently than we had thought it was in our integration test environment. So I think Agile teams can aim for very low defects. And the way you achieve that is by fixing the defects as soon as you find them and by having a QA in the team, Shane. 

Shane: I’d go one step further. The way you fix it is you make QA a team problem, and you do that on day one. I’m going to guess that that outsourcing approach for that waterfall project had a separate QA testing team. 

Murray: They did. Yes. 

Shane: The developers wrote some code and they threw it over the QA wall. The QA people were really paid to find bugs. So they’d find two types of bugs: a bug where the code just didn’t run, and a bug where the code didn’t meet the requirements. But to do that, they had to reinterpret what the requirements were and what the developer’s code was doing and then compare those two to see whether it passes QA. So now what we’ve got is three teams writing different things and each of those teams interpreting those artifacts in their own way to say that we’ve met the acceptance criteria. And that’s part of the problem: conversation via documentation. That handoff behavior is the first thing we have to fix when we talk about QA within an agile team. 

Murray: The handoff behavior is really bad, and you’re right. They had a dev team handing off to the test team, and the test team were under pressure because they’re at the end, so their managers thought that they were delaying everything. So they put a lot of pressure on the test team to just pass things, and they passed all sorts of rubbish on the basis that we’d find it in UAT and then they’d have more time to fix it. That would buy them time. So by the time we were testing it in UAT, that’s where we found the thousands of defects. 

Shane: Let’s say we did two weeks of testing by the QA team and we find a couple of Sev 1 bugs, bad bugs. We need to fix those before we can go live. So we say to the developers, fix those bugs. And so the developers change the code, and then we’ve got new code now. And we push that and we say to the testers, could you just test that the bug’s fixed? And so they do. And nobody says, I wonder what that change has done to the other bits of the code. We may do regression testing, but we tend not to. 

Murray: I’ve found that test teams in Waterfall always will do some regression testing. It’s more a matter of how much time pressure they’re under. 

Shane: Yes, and if the tests are automated, then ideally they push the button, the regression suite runs. 
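
To make the “push the button” idea concrete, here is a minimal, hypothetical sketch of a tagged regression test, assuming JUnit 5 and a build tool such as Maven; the FilterWidget class and the scenario are invented for illustration and are not from the episode.

```java
// A sketch of a tagged regression test, assuming JUnit 5.
// FilterWidget is a hypothetical stand-in for the pull-down filter mentioned earlier.
import java.util.List;
import java.util.stream.Collectors;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

@Tag("regression")
class FilterWidgetRegressionTest {

    // Minimal stand-in for the screen component under test.
    static class FilterWidget {
        private final List<String> rows;
        FilterWidget(List<String> rows) { this.rows = rows; }
        List<String> filter(String value) {
            return rows.stream().filter(value::equals).collect(Collectors.toList());
        }
    }

    // A previously fixed behaviour, re-run on every build so a later change
    // cannot silently break it (for example with `mvn test -Dgroups=regression`).
    @Test
    void selectingAValueShowsOnlyMatchingRows() {
        FilterWidget widget = new FilterWidget(List.of("Open", "Closed", "Open"));
        assertEquals(List.of("Open", "Open"), widget.filter("Open"));
    }
}
```

Run as a suite, tests like this are the “push the button” Shane describes: the regression checks live with the code and execute on every build rather than being re-done by hand.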

Murray: They won’t be automated in a Waterfall team, so that’s gonna make it hard for them. 

Shane: Actually, there’s no reason why a waterfall testing team couldn’t automate their tests.

Murray: I tried to get this Indian service provider to automate their tests. They said they would, and they put on an extra automation test team. So there was the dev team, the test team and the automation test team, and they automated a few things after the testers had passed them. And when the testers were retesting, they didn’t run the automated tests, because it wasn’t for them, it was for me. It was supposed to be for them, but they didn’t care, so that didn’t work. 

But I’ll tell you what does work really well, as you said, is having a definition of ready to develop. So we have a feature or a story and it’s got our objectives and what we’re trying to do, what the function is, but it also has testable acceptance criteria. So I get somebody in the team, usually somebody quite experienced with testing, and I get them to work with the other people in the team who are defining the story, and somebody else who maybe we’d call a tech lead. And the three of them, the three amigos, with the product owner, make sure that the story is ready. Normally you write the requirements and then somebody else comes along and writes acceptance criteria. But the best way to do it is to write testable requirements. 

Shane: If we come back to my maturity framework we have people who are novices, we have people who are practitioners, we have people who are experts, we have people who are coaching in a certain part of what we need to do. 

Often you’ll find within a team that the testing skills have always been outsourced to another person or another group. So I’m a great fan of bringing in somebody with expertise or coaching-level skills in testing: how to write good tests and how to automate those tests. But what we’re doing is trying to increase the maturity level of the team members and their testing and QA skills. I agree with you: well-defined acceptance criteria as part of your definition of ready actually increase the quality of your product, because you’re planning early. Your developers are looking at those acceptance criteria as they’re writing their code. They have them at the top of their mind and they are naturally testing their code against those criteria because they know what they are. That’s what most good developers will do if that information is available at the beginning.

Murray: Of course, because it defines what they’re supposed to be doing, what they’re supposed to be achieving, what the outcome is supposed to be. And if you do this as part of getting a story ready, then everybody has the same understanding of what it is and what it would look like when it was done. So it produces a lot of clarity, which means you’re more likely to achieve what you said you were going to achieve within the sprint for that story. 

Shane: Yeah, and it’s the same with the definition of done. If the team are defining the things that, as professionals, they expect to actually happen when they develop, then each of the developers is baking that in as they work. A lot of the architecture failures or the technical failures that you would otherwise see happen disappear, because that definition of done means the developers know what they’re coding towards. And so we will see better quality because those things have been set out up front. 

Murray: I agree. So for the definition of done in a team, I ask teams to define their own, but I recommend things like: the developer thinks that they’ve achieved the acceptance criteria; they’ve got some automated unit tests and, if the team has decided to do BDD, the automated Gherkin tests and so on; and then there’s a code review by somebody else in the team that they’ve met their criteria for maintainability. And then I like to have the developer call the tester, because the tester’s sitting in the team. I also like to have automated testing. Automated testing is very good for DevOps and CI/CD, but sometimes people can’t get fully automated tests. 

I think it’s very useful to have a tester in the team who is helping the team with their quality assurance by defining the definition of ready and so on, but who is also providing an independent test of what the developer has done, because I find that developers get tunnel vision. They make assumptions, even if you’ve got well-defined acceptance criteria. They will test what they think of testing. But it needs independent checking by somebody else. Now, it could be done by another developer, but other developers tend to be pretty bad at testing. So you need somebody; it could be the product owner, it could be a BA, it could be somebody else. But I find that testers are actually really good at thinking about: what is this supposed to do? What is the behavior, and can I break it? And that different point of view is actually really helpful. 
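
As an illustration of the “automated unit tests” in Murray’s suggested definition of done, here is a minimal sketch assuming JUnit 5; the SignUpForm class and its validation rule are hypothetical examples invented for this sketch, not something discussed in the episode.

```java
// A sketch of a developer-written unit test attached to a story before it is called done.
// SignUpForm is a hypothetical stand-in for the sign-up screen change discussed earlier.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class SignUpFormTest {

    // Minimal stand-in for the component under test.
    static class SignUpForm {
        boolean isValid(String email) {
            // Illustrative rule only: non-null, non-blank, contains an "@".
            return email != null && !email.isBlank() && email.contains("@");
        }
    }

    // Acceptance criterion: a well-formed email address is accepted.
    @Test
    void acceptsAWellFormedEmail() {
        assertTrue(new SignUpForm().isValid("shane@example.com"));
    }

    // Acceptance criterion: an empty or malformed email address is rejected.
    @Test
    void rejectsAMalformedEmail() {
        assertFalse(new SignUpForm().isValid("not-an-email"));
        assertFalse(new SignUpForm().isValid(""));
    }
}
```

The point is that each test names an acceptance criterion directly, so the criteria agreed at “ready” become the checks that gate “done”.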

Shane: So I agree that we often need expert or coaching-level testing skills in the team, because the team don’t have those skills at the moment, and that having a person who’s sitting next to you to look at it and say, what about this? Those are all good behaviors. But they should never be called a tester, and they should never touch the keyboard. The developer should be testing; they just need some help with their skills at testing and with being able to look broadly at what tests should be applied.

Murray: I think a developer should be testing. It’s just that developers have a narrow technical point of view of what they’re doing, and it does need somebody else. And I like to have a QA as part of the development team. So Scrum has three roles: the development team, the product owner, and the scrum master. The development team does not mean software developers. It’s any combination of skills required to achieve the outcome. So your development team could include a business analyst, a quality assurance specialist, a user experience designer, some software developers, a front end developer, a back end developer, somebody who’s really good with infrastructure; it doesn’t really matter. The combination of skills gives you the ability to do everything end to end. 

Shane: Yeah, so each of those skills is really important, but each of those skills is not a role. So I use a T-skills framework with teams to figure out where we have strong skills, where we have weak skills, where we have good doubling up of skills, and where we have gaps we need to fill. So we have people who have good business analysis and facilitation skills, people who have good coding skills, people who have good testing skills, people who have good architecture skills, and people whose expertise is quite broad, so they tend to take on that tech lead type of behavior. But our goal should be that everybody has as strong skills in each of those areas as possible, and we get that breadth across the team. They’re not roles. We don’t have the role of a BA in a development team. We just have one or two people in that team who are developers with really strong BA skills. 

Murray: It takes a lot of time and effort to learn how to be a good developer. And a good developer is likely to be a crappy BA, in my experience. And for them to try and become a really good BA means that they’re going to be investing a lot in a skill set that they’re weak in, and not doing the thing that they’re really good at. I think that’s a waste of talent. I agree with you about T-shaped people, but I think that a team of specialists with secondary skills is much better than a team of generalists. Much more powerful. 

Shane: Yep, definitely agree. So I’m not saying that you can find a team of unicorns where everybody’s at coaching level for every skill. People have a strong T, and then augment some of the other skills. But what we’re looking for is the skills to do the task that needs to be done, not roles. 

What I’ve observed is that when people have specific roles, we end up with handoff behavior. Where we have a person who’s identified as the tester, everybody expects to hand off testing to that person. And that’s the behavior that we want to discourage in a team.

Murray: There’s always handoffs in a team and it’s impossible to get rid of them. You just want to reduce them. 

Shane: Yeah. You want to minimize them and you want to make them as consistent as possible.

Murray: It’s bad for a scrum team to hand code to a separate test team who do their testing in the following sprint and then come back the sprint afterwards with defects. That’s not agile. But I’m talking about a situation where you have a team of, let’s say, four developers, somebody who is called a BA, a designer, a QA and a product owner. And they’re called that because that’s what they’re specialized in; they’ve got five, ten years experience in it. The BA is really good at being a BA and they’re a poor coder, but they’re fantastic at being a BA and they’re adding a tremendous amount of value to the developers by doing that. I would have a problem if you have a team that says we take, or the developers take, no responsibility for the requirements because there’s a BA there. That would be bad behavior. But I don’t see that in Agile teams that have roles.

Shane: So maybe it’s about that continuum, because that behavior where you’re handing off to another member of your team as if they’re an external team, that’s the behavior we want to discourage.

Murray: I agree with that, but I just don’t see it. And the thing is that people have careers in these different specialties. People specialize in front end development, back end development, infrastructure, business analysis, design, testing. If you want really good design, you should employ a specialized user experience designer, because they’re going to be able to do it. If you ask your developer to do your user experience design, you’re going to get something pretty bad. 

Shane: We’re arguing semantics, but for me those semantics are important. You are hiring people with strong skills in those disciplines. You’re not hiring them for that role, so you’re not hiring them into the team as a designer role. You are hiring them because they have strong UX skills the team needs. And for me, changing the way we talk about it actually ends up changing the team’s behavior a lot.

Murray: You’re assuming that if you have a role then the whole team immediately engages in waterfall behavior. And I just don’t think that’s true at all. So therefore I don’t see a problem in having a role. 

Shane: I have seen that behavior happen sometimes. So in Scrum, the role is member of the development team. So if we talk about people’s skills, not their roles, then we are safe. We don’t need to talk about roles because it’s not a role. It’s just a set of skills.

Murray: A product owner is a role. 

Shane: That’s different, isn’t it? Because we’ve got scrum master role, product owner role, member of the development team. 

 

Murray: It’s only different because Scrum says it is. But we don’t have to do Scrum to be agile. 

Shane: But we do hand over to a product owner, don’t we? On a regular basis.

Murray: What do you mean by handover? 

Shane: We ask them for the acceptance criteria and they give them to us. And then we finish development and we say, does this meet your acceptance criteria? By defining those roles, we put up a natural air barrier, where natural handover behavior comes in. Same with the Scrum Master: stand behind me at stand up, help me run these ceremonies, provide this guidance for us. By defining it as a role, we naturally put an air gap in there. We start encouraging handover behavior, and that’s why I’m so against talking about roles within a team, and I prefer to talk about skills.

Murray: If you were going to be logically consistent, Shane, then you should object to the roles of product owner and scrum master and only have developers. 

Shane: I haven’t got there yet. I haven’t figured out how to have a development team actually provide the product ownership behavior. But I can’t see why we couldn’t. 

Murray: This fine distinction just doesn’t make sense to me. There’s a need for specialized skills, a need for careers, and there’s no problem with it. What I object to is the behavior. And actually this reminds me of our discussion about project managers. I don’t have a problem with the project manager role; I have a problem with some project management behavior. So I don’t have a problem with a QA specialist. I have a problem with a team that doesn’t take responsibility for their quality because they give it to the QA person in the team. So it’s the behavior, not the name of the role. 

Shane: Where we got to with the project manager was we agreed that it’s the behavior that’s important. We disagreed in that you think using the term project manager is okay. And I still don’t agree with you. By using that term for that role, the behavior comes with it. And that’s a thing I see happening time and time again.

Murray: You’re putting a really heavy load of assumptions onto the idea of a role, which I don’t think is really there. So let me explain the way I’ve set up teams, because I was running the delivery function for a digital agency with a whole lot of people. I would find that generally about half the team should be people who are highly skilled at development. A team that has, let’s say, four people who have specialized in development, somebody who specializes in quality assurance, somebody who specializes in design, and somebody who specializes in business analysis, plus your product owner and other people like that, I find that a very effective mix. Now within that team, I expect that if, for instance, the quality person leaves, then somebody else in the team will step up and take on that function, somebody who’s good at it. So I would expect that the team would talk about it and decide amongst themselves. And probably they would say the BA can do it, because dev is the bottleneck; if we get the BA to do it for a while, that might be the best way to get flow through the team. But if the BA is the bottleneck, then somebody else could volunteer. I’ve actually had that happen. That makes a lot of sense. 

Shane: Definitely agree. So again, what you’re saying is that when the person who had the strongest quality assurance and testing skills left, the team members with the next strongest skills jumped in and did that task, and sometimes a person who’s not even the strongest in that skill set will pick it up because they’re the freest person available given where all the things are in the flow. Those are all good behaviors as far as I’m concerned when I work with a team: the next best person who’s available actually does the work to unblock it. 

Murray: Yeah. But if I was going to assemble a team, I would advertise for a business analyst, and I would advertise for a QA, and I would advertise for a designer, and I would advertise for developers, because I want people who have a lot of experience. Out of the people who come forward, I’d be looking for people who can be flexible, but I would still be looking for specialists.

Shane: Specialist skills yeah, I agree. 

Murray: If you don’t call them a role name, Shane, you’re going to have a lot of trouble communicating the special skills you’re looking for. I want a developer who has 10 years experience in quality assurance and testing. No technical development skills in software required. That doesn’t even make any sense. 

Shane: But you wouldn’t say that. You’d say, I want a person who has 10 years software development skills and strong testing skills. 

Murray: Then you’re going to get a software developer who’s got some experience in testing. 

Shane: Yeah, which is ideal. 

Murray: If I say I’m hiring a QA to work in an Agile team, and they need to be flexible and T-shaped and be able to help out with business analysis and automated testing and with helping the whole team improve the quality of their work, and they’re going to be working with Cucumber in a Java environment, then I’ll be able to find somebody that will fit in well. I’m going to get the right person then, and people will know what we’re looking for. 
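
For readers unfamiliar with the stack Murray mentions, here is a minimal sketch of what working with Cucumber in a Java environment can look like, assuming the cucumber-java library; the Gherkin wording and the SignUpPage class are hypothetical and invented for illustration, not taken from the episode.

```java
// Gherkin scenario this step-definition class would back (shown as a comment for context):
//   Scenario: Visitor signs up with a valid email
//     Given a visitor is on the sign-up screen
//     When they submit the email "shane@example.com"
//     Then their account is created
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class SignUpSteps {

    // Hypothetical stand-in for whatever page object or API client the team actually drives.
    static class SignUpPage {
        private boolean accountCreated;
        void open() { /* navigate to the sign-up screen */ }
        void submitEmail(String email) { accountCreated = email.contains("@"); }
        boolean accountCreated() { return accountCreated; }
    }

    private final SignUpPage page = new SignUpPage();

    @Given("a visitor is on the sign-up screen")
    public void aVisitorIsOnTheSignUpScreen() {
        page.open();
    }

    @When("they submit the email {string}")
    public void theySubmitTheEmail(String email) {
        page.submitEmail(email);
    }

    @Then("their account is created")
    public void theirAccountIsCreated() {
        assertTrue(page.accountCreated());
    }
}
```

Because the steps are written in the product owner’s language, the same Gherkin text serves as the testable acceptance criteria discussed earlier and as an automated check the whole team can run.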

Shane: Yeah, because you’ve described a set of skills you need perfectly. You haven’t described a role. 

Murray: No difference in my mind.

Shane: To me there is, but let’s agree to disagree on this one, because I think we’re both saying the same thing; we’re just calling a point of order on my hatred of the word role and how I prefer to use the word skills. But what we’re saying is that those QA skills are often light in a team and we need to make sure we beef them up. 

And we need to bring those QA skills in as early in the process as possible, not at the end, because having those skills help somebody define the acceptance criteria early, so they’re clear and testable, means we get a better quality product out the other end. 

Murray: Yes, we agree on that. 

Shane: Those skills should be in the team. They should not be in another team that we hand off to. 

Murray: Absolutely, they should be sitting next to the person who is skilled in development. So when the person who’s skilled in development thinks that they’re finished, they could ask the person who’s skilled in testing to double check it and give them another point of view.

Shane: Or they could have the person who’s skilled in testing sit next to them as they’re doing the development and use it as a peer development process where, as they’re writing code, the testing-skilled person is saying: oh, we’re calculating that KPI, or we’re wanting to filter all the records on that screen; what tests should we use to make sure that it’s meeting that requirement? 

Murray: That could help if the team thinks that’s efficient. I often suggest to the developers that they do a walkthrough with the person skilled in testing before they say they’re finished development. You don’t need to go through a formal testing process to just have a look at something and say, oh, I don’t think that works. 

Shane: Yeah, and that comes back to whether you do pair programming or not. I’ve found that having two people with a mix of skills developing the piece of code has a much better outcome than one person working on it and then engaging a second person in a walkthrough or a peer review. It improves the quality. So I think, again, it loops back to what patterns and practices you are using to increase the quality of every step in what you do, and not thinking quality is just testing at the end of the process. 

Murray: Yeah, so I think that there’s something seriously wrong with the idea of a hardening sprint or sprints at the end before a release cycle. SAFe has this and I think it’s a sign of an acceptance of poor quality. You’re just assuming that the quality is going to be bad. So that you need a sprint or two at the end to fix all the things that you didn’t fix as you were going. But if you fix a problem immediately, it’s much easier than trying to fix it later. 

Shane: Yeah, and I’m not a big fan of hardening sprints either. I think we often need a technical debt sprint because we haven’t done the right thing: there is stuff that needs to be fixed, and it should have been done in the initial piece of work, but it wasn’t. So now we need to slice off some of the time from the development team to make changes. And those changes have nothing to do with a new part of the product or a new piece of value. It’s just that we didn’t do as good a job as we should have, and we have to tidy that up, so that technical debt is a cost. 

Murray: You shouldn’t accept this technical debt as it’s going through. If a developer is in their code and they spot a defect, they can fix it quickly. If they’ve finished their code and gone on to something else and somebody finds it even in the team, even in the same day, they’ve effectively got to load that code back into their mind so that they can understand what might be the problem. And that’s not easy. That takes some time to get that code back into your head again. So there’s an extra cost. Now imagine that somebody comes back two weeks later and they don’t come back to the person who wrote the code. They come back to somebody else in the team. Then it’s really hard, like you start getting 100 times the amount of time and effort. So it’s actually much more efficient to fix all these problems, even the small ones, straight away, in my experience. It’s not worth delaying them. 

Shane: Yeah, and I think they’re good indicators. If we’re seeing a team having hardening sprints before they can push their code to production, or we’re seeing lots of technical debt during the work, then we know we’ve got a problem with the quality of the work that’s been done in sprint, and we need to change something to make it better. I’m undecided whether having windows where the developers can just sit uninterrupted increases quality or not. 

Murray: I think that the team should be able to decide on this themselves, because they know what’s working and not working, and they can discuss it in their retros and try things. I always ask teams to do code reviews. So a developer says, I’ve finished, can somebody review this for me? And you’ve got three other people, and one of them says, yeah, I can do it, and the others say, I can’t because I’m busy, because they’ve got their head stuck in a problem. Or somebody says, I can probably do that in an hour, just let me finish this. The team decides themselves and it seems to work out.

Shane: I agree, and there are some techniques they can use to make themselves safe when that comes up in the retro, which is where the coach comes in to say: you’ve called this out as a problem, here are some things I’ve seen other teams do; which of those would you like to try, or have you got something else in mind? 

Murray: So there are two things I’m thinking of before we wrap this up. One is that I think the benefit of the quality practice we’re talking about is much greater than the cost of it. It’s actually a tremendous saving in terms of time and money and effort to do things properly rather than to produce crap and try to fix it up later. 

This is one reason why Waterfall is not good: you find an enormous number of quality defects at the end, because nothing’s tested until the testing phase, and the cost of that is just enormous. If a project is going to blow up, it’s going to blow up during testing, and that’s when you discover that it’s going to take twice as long and cost twice as much. The benefit of building quality in from the beginning far outweighs the cost, so there’s no reason not to do it. 

The second thing is that there is a role and a career path for quality assurance specialists. And it’s not just about testing other people’s stuff. It’s about helping the whole team improve their quality, finding systematic problems and helping the team fix them.

I think it’s very helpful for organizations to have a structure where the people with specialized quality assurance skills get together regularly to share ideas and information and that they also have mentoring and apprenticeship with each other. So when a quality assurance specialist in one team learns something good, they can spread it to the other teams through a community of practice. 

Shane: Or a guild. That’s why I liked the term guild when it came out, because it has that sense of a carpenters’ guild or a plumbers’ guild. It has that sense of specialty skills and of needing to increase your maturity. And to do that, you need support. You need to find people that have the same skills as you who can help you. I’m a great fan of that. I think baking quality assurance into the team is the important part. I remember working in a team many years ago, when we were still doing Waterfall, and the developer was always proud to say that 70 percent of his code always worked. He never quite knew which 70 percent worked and which 30 percent didn’t, but the testers always found it for him. And that’s not the behaviour we want when we have an agile mindset. We want people being responsible for the quality of their work, with the support to help them increase their skills where they don’t have them, and other people that can help them with the effort when they need it.

Murray: Yeah. And quality is the whole team’s responsibility. Even if somebody is a specialist in that area, it’s still the whole team’s responsibility. And it’s wrong for people in the team to say, oh no, that’s that other person’s responsibility, I’m not responsible. The whole team has to be responsible for quality. 

Shane: Yeah, quality is everybody’s responsibility. Other disciplines like manufacturing have learnt that fixing it at the point that it’s broken makes everything more efficient and produces a better quality product. 

Murray: The review and retrospective process in Scrum builds quality into the process. That’s a continuous quality improvement process, which is very helpful and very important.

Shane: Yet we still see people adopting an agile way of working where the development team hand over their work to the testing team. 

Murray: I’ve seen that recently with a big four consulting company that was working on a major project for a huge client, and they thought Agile was four months of design sprints, then four months of development sprints and then testing sprints. I still get a lot of people surprised when I say that testing has to be done in the sprint with development and that it’s not done until it’s tested. People are astonished: what are you talking about? We’ve been doing Agile for years. We’ve never done that.

Shane: Yeah, but we’ve got to recognize that it’s hard. 

Murray: It’s only hard because you’ve got some empire building development manager and another empire building test manager fighting over resources.

Shane: And they like to hire testing roles. 

Murray: Nothing wrong with testing roles, Shane. 

Shane: All right, I’m gonna close out on that one, because I want the last word. 

Murray: All right. Thanks Shane. 

Shane: All right, I’ll catch you next time.

 

Murray: That was the No Nonsense Agile Podcast from Murray Robinson and Shane Gibson. If you’d like help to create high value digital products and services, contact murray at evolve.co. That’s evolve with a zero. Thanks for listening.