Lisa Crispin – Holistic testing

Join Murray Robinson and Shane Gibson in a conversation with Lisa Crispin on Holistic Testing. Build quality in. Continuous testing for continuous delivery. Building shared understanding of requirements. Shorten feedback loops. Moving from traditional testing to agile testing. Test automation. The test pyramid and agile testing quadrants. Business-facing tests with behavior-driven and acceptance test-driven techniques. High performing teams.

Recommended Books

Podcast Transcript

Read along you will

Welcome to the no-nonsense agile podcast. I’m Shane Gibson.

And I’m Murray Robinson.

And I’m Lisa Crispin.

Hi, Lisa, welcome to the podcast.

It’s great to be here. Quite an honor to be added to your list of illustrious guests.

Thank you. So the topic for today is agile testing. Before we get into it, can you tell us a bit more about who you are and what your experience is?

I’m a tester by trade. I have very long experience in the software business, about 40 years, and 30 of those as a tester. The last 20-plus years I’ve been working as a hands-on tester on cross-functional agile teams, and I’ve been really lucky to work on a few really high-performing teams. Over the years I’ve collaborated a lot with Janet Gregory. We both started in the agile world, or the extreme programming world, about the same time, and we were looking for other testers to tell us what they did, because nobody knew. So she and I have collaborated on three books, and we have a training company called the Agile Testing Fellowship where we offer our courses around the world through training providers. I just recently went independent, so I’m not working as a full-time employee on an agile team anymore, which I miss a little bit. I live in Vermont on a little farm close to Canada with four donkeys, five cats, two dogs, and a husband.

Okay. Can you tell us, what is the state of software testing today, out there in the real world?

PractiTest just did the State of Testing survey. There’s a lot more interest in testers working on cross-functional teams, and a surprising amount of interest in testers getting more involved in continuous delivery and in monitoring, observability, and testing in production. It’s not that many people doing it, but it was more than I thought I would see. But also, sadly, it’s still very siloed and very waterfall, and most people are stuck in very command-and-control environments. So that’s sad. We’ve changed the name of our main course from agile testing to holistic testing strategies for agile teams, because everybody takes for granted that they’re doing agile, whether they’re really doing it or not, and it just doesn’t mean a lot now to say agile or agile testing. But for us, holistic testing means something very specific. So that’s why we’re changing our terminology.

Yeah. So can you tell us more about what holistic testing is?

Oh, I should pull up the holistic testing model on the Agile Testing Fellowship blog. The second blog post down is holistic testing: what it means for agile teams. It’s just a way to think about testing through the whole development life cycle. It really bothered us when people started talking about shifting left and shifting right. Software development is not a linear process, so what could that possibly mean? So let’s go with holistic testing. The whole team is responsible for quality. The whole team should be responsible for making sure testing activities get planned and done at the right time. And the whole team is going to participate in this all the way around the cycle.

One thing you’ve got in there is this idea of testing as part of the discovery phase. I really liked that because I don’t see that a lot. I see a lot of people talk about testing as testing the software, testing the release, or regression testing, very technical testing, but not testing our hypothesis.

One of the biggest differences between agile testing and traditional waterfall-type testing is that in waterfall you’re focused on defect detection, and in agile we’re focused on bug prevention. We want to build quality in, and we want to prevent bugs as we’re writing the code.

It’s interesting because we often don’t see people with testing capabilities come into that UX design process. Testers tend to be negative. They tend to look for the things that are going to go wrong because that’s what their skills are.

In my experience, we testers tend to keep the big picture in mind. When you’re writing code, you have to be very focused on what you’re doing. It’s a very narrow focus, and you have to be positive about it or you would just get discouraged and give up. We’re more big-picture people. When some product person says, oh, this is my idea for a feature, and we start talking about it, I might first say, why are we doing this feature? What’s the purpose? But one of my early questions will be, what’s the worst thing that could happen when we implement this feature? Elisabeth Hendrickson has a great game in her book Explore It! called nightmare headlines: what don’t we want to see on social media the day after we release this feature? That is a negative way of looking at it, but it’s also a way to help with lateral thinking. We’re all such slaves to our unconscious bias, so if we can ask some questions to get ourselves thinking out of the box, that helps. Years ago Pete Whalen had a business card that said QA: question asker. And I really think that’s one of the things we’re good at. That’s why we’re such a help on the right side of the DevOps loop: we’re looking at log data, we’re looking at dashboards, we’re spotting anomalies, responding to patterns. We’re identifying risks because we’re asking questions of that production data. And we can dig into it now that we have all this big data and all these tools to explore it and analyze it. I think that’s one reason I really want to get more testers working on that side of things as well. Don’t just throw it out the door and start working on the next thing.

So you just said quality assurance. And there are places where quality management and testing are considered to be totally different things done in different silos. Quality management is all about processes and compliance, and testing is about bugs and defects. What is the difference? Should they be the same or separate? Are testers QAs or not?

QA has always stood for quality assurance. We cannot assure quality. You can’t test quality into a product; by the time you’re testing, it’s already there or it isn’t. I hear people call testers QA. It’s like the initials don’t mean anything anymore. I think a lot of people have moved to quality engineering as a better way to describe it. They’re welcome to that term. Some people in the testing profession feel like tester is not a prestigious enough title.

Yeah, but should QAs get involved in defining quality control processes and quality management processes, as well as testing?

I think they could, but I think it’s a different field. I am involved in quality processes, but just from a software development perspective.

So let’s say that you’re starting with a more traditional testing team. They’re working in their functional division. They start writing their test cases while the developers are coding, and they don’t test anything until there’s been this big release into test.

Don’t forget, Murray, we gave them three months to test, but then we were a little bit late in our development process, so they’ve got one day to do all the testing before we go live.

And then if it all falls over in testing, it’s obviously the testers’ fault.

Oh, and by the way as well as being late, the developers forgot to document anything.

And the requirements have all changed as well. So all those test cases you wrote, you have to rewrite them all now.

And we didn’t update the requirements documents that the testers have had for the last six months, and we haven’t deigned to tell them that things might’ve changed.

Fortunately, though, we didn’t automate anything, so we don’t have to rewrite any automated test cases. So thank God for that.

Well, that’s not even a good waterfall team. I had the privilege of working in a really excellent organization using waterfall as a process back in the early nineties. But guess what? Representatives from test and from development and from operations participated in all the analysis and requirements. They didn’t do test-driven development, but they did get ninety percent coverage with unit-level tests. We tested at the UI level. We automated all those regression tests. We had continuous integration. We had automated deployment to 20 different Unix environments. We only released every six months to a year, because we were a database product and that’s all you needed to do back in the early nineties, but good practices are good practices. I don’t care if you’re waterfall or you’re agile or you’re whatever.

I worked as a project manager in a waterfall environment with quite well-developed processes. And the idea was that you would recruit your test manager after you’d done your architecture to start writing the test strategy and the test plan, because obviously they needed the whole of the requirements and the architecture to do that.

And testers wouldn’t come on until development began, and you’d have some of your testers come on to write your test cases. And then the rest would come on, in India or somewhere, to actually do the testing later. That was considered to be standard because it kept the costs down and all the documents were produced. How do you transition from that functional test team that does things afterwards to an agile team?

A lot of the time people hear the words agile and sprint and think we’re going to go faster now. And the truth is, you need time for the team to learn how to self-organize. You need time for the team to decide what technical practices they’re going to use and then learn those technical practices. I think one of the reasons test-driven development has not been adopted widely, even though it’s been shown to prevent 80% of the bugs that are otherwise found later, is that it takes about eight months to get any traction. Brian Marick called it the hump of pain. You’re learning to write unit tests and do test-driven development. It’s more work. You are going to go more slowly. Eventually you get more experience, you build up a library of test components, and you get over that hump of pain. Now you’re saving effort, now you’re going faster, and you can use that saved effort to build new features. I’ve been lucky to be on teams for enough years to see the magic happen.

So let’s say I’m doing this because the team hasn’t been producing much and we need to get more effective and efficient and faster and better quality.

I’d have a retrospective and I would say, okay, let’s put all our problems up on our virtual whiteboard on virtual sticky notes, and now let’s dot vote and figure out what’s our biggest problem. And now let’s design an experiment with a hypothesis: we believe that if we do X, it will result in Y, and we’ll know by this measurement within some short period of time. Small frugal experiments, as Linda Rising says. So within two weeks or four weeks, we can know if it’s working toward the goal we wanted to achieve. And if it’s not, we learned something from that experiment and we can tweak it or try a new experiment. We can’t fix everything at once. We can only fix our biggest problem at a time. We’re just humans. We can only do so much.

I would encourage people to build relationships within the team and to other teams, especially the infrastructure-type teams. And I would advocate working together in pairs, or even better in ensembles, cross-functionally, because that saves so much time. When you have a tester in there, you’re going to find the bugs during coding and not later. If you have a designer in there, you’re going to get the design right, right away.

So you’re talking about coding and testing in parallel: the developer is starting on the code and the tester is working with them on the same code at the same time.

Exactly. One of the mistakes I see newbie agile teams make over and over is they want to make the business people happy, so they agree to take on all of these stories in one iteration and they can’t finish any of them. So one of the things that I try to help people do is commit to less. I know that we don’t commit anymore in Scrum, but people still think you should. Finish one thing at a time, so at the end of the sprint or iteration we have something we can deploy, maybe even something we can release. So one thing at a time, baby.

Okay. Should we even have people with a test role in an agile team?

It depends on the team. I think most teams need somebody who’s a professional tester and has that expertise. But depending on the rest of the team and how open they are to taking responsibility for testing and quality, that tester is more of a test consultant, helping the non-testers learn the skills that they need.

For example, in my last full-time job I would either sit in an ensemble with other developers, or maybe there are two developers pairing and I’m in with them. I’m not going to take a turn driving. I’m not going to write code. But as they’re working I’ll say, I think it’s supposed to behave this way. Yeah? Oh, okay. Or I’ll say, I noticed this is kind of a duplicate of what you had over there; maybe that’s something that should be abstracted out into a helper or something. Oh yeah, you’re right about that.

Or just raising an issue and it’s you know what, we better go ask the product owner what they want, or we’d better go ask the designer. We better get the designer in this zoom call and find out. They get so heads down coding and they don’t necessarily stop and sit back and say, are we actually building the thing that was desired?

And I would ask for feedback: am I slowing you down by asking all these questions? And they’d say no, because it’s going to save us a lot of time. If we thought we were done and then they threw it back to us, that would be a real waste of time.

Do you see mature teams still having testing as a separate thing in estimation that they focus on rather than it just being embedded in what they do?

I’m not a fan of estimation. I’ve found it much more productive to have the conversations and slice all the stories down to where they’re the same size, so they’re all one-point stories. Then we just count stories. It’s getting the stories the right size that really matters. If you’re doing scrum, you can write test cards for that story. And if you’re not doing scrum, it’s going to be a working agreement. We agree that we’re going to write unit tests, because maybe we’re doing test-driven development, but even if we’re not going to write unit-level tests, we’re going to automate tests at the API and UI level for this story, if that’s appropriate. We’re going to do the exploratory testing we can do at the story level, and that’s all part of the story. And so we size that stuff accordingly. And if testing is really hard, testing might take longer than coding. We have to be aware of that. But it’s also important to do other types of testing that can’t be done at a story level.

So what’s worked really well for my teams is writing exploratory testing stories in the backlog along with the feature stories. And when enough stories are done that we can test a workflow end to end, like a real user uses it, then the team starts pulling in the exploratory testing stories and doing those. And again, it’s not up to the tester necessarily to do those. Usually the tester is going to be involved in writing them, although I have worked on teams where we held workshops and paired with the developers to where they all got pretty good at exploratory testing.

You might also have stories for testing infrastructure. If you’re doing infrastructure changes, you might have stories for testing the monitoring and the logging: are we capturing the right information? You might have stories for doing accessibility testing, because you need to do more at the big UI level. Security testing, we might have stories for that. I like to have them all visible in the backlog. That’s all work we have to do. We’re going to have risk in releasing that product until those stories are done.

I’ve been having a conversation with people recently about how to do system testing when you haven’t built the system yet. Let’s say there’s some part of a system and you’re expanding it. How do you set up this automated continuous delivery environment if you haven’t built all the interfaces and all the different components yet?

I feel like that is where it’s useful to have a virtual quality team, where you take people who are testing professionals on these various feature teams and maybe infrastructure teams, and part of their job is not only helping their team but getting together and saying, how can we judge quality at the system level? What data can we use to know, and what kind of extra testing might we need to do that? I’m a fan of building things end to end, doing something like story mapping, where you lay out your scenarios and then you choose something that you can get working that’s valuable, the most important thing for your customers and the most important thing for you or your business, and get it working and into production. Then you’re going to have all the underlying infrastructure to support that. So that all has to be done together, but it doesn’t have to do very much. Maybe it’s not even useful to your customers at first, but it’s useful to you, because you can automate tests for it and make sure it’s working. So doing things slice by slice.

I’ve asked people to set up end-to-end system testing for the current state so that as we change the system, we can see what happens. Quite often people will argue we don’t need to do system testing because we’re just making a local change that won’t impact anything. I say fine, then run your automated system tests over it every night, just to make sure, because you don’t know what all the consequences are.

Now that we’re dealing more with microservices and with applications that are supposed to have stable interfaces, it feels like we should be doing more API integration testing than we used to, instead of focusing so much on unit testing. What do you think about that?

That’s a big trend in testing. There’s a lot more adoption of testing at the API level, and also of contract testing, which is testing the contract between those services. That’s important to do, and having that as an isolated thing I think is really productive. Mark Winteringham is about to publish a book on API testing which I’m meant to be reading. I love testing APIs. People have realized you can test a whole lot of the business functionality at the API level. There’s a lot of interest in APIs. There are a lot of great tools like Postman and Jest that let you do a whole lot of functional testing without going through the UI. So we have a lot more technology at our disposal, and the more we can isolate that, the better, I think. Test Automation University has some good courses on various types of API testing. So it’s really exciting to see people really embracing that.
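To make the contract-testing idea concrete, here is a minimal sketch, not from the episode: a consumer-side check that an API response body still carries the fields a consumer depends on, with the types it expects. The field names and contract shape are invented for illustration.

```python
# Minimal contract-style check: does a response payload still carry the
# fields a consumer depends on, with the types it expects?
# The field names here are hypothetical.

EXPECTED_CONTRACT = {"id": int, "name": str, "active": bool}

def satisfies_contract(payload, contract=EXPECTED_CONTRACT):
    """Return True if payload has every contracted field with the right type."""
    return all(
        field in payload and isinstance(payload[field], expected_type)
        for field, expected_type in contract.items()
    )

# Providers may add extra fields without breaking consumers, so extras pass;
# a missing or retyped field fails the check.
print(satisfies_contract({"id": 7, "name": "Lisa", "active": True}))   # True
print(satisfies_contract({"id": "7", "name": "Lisa", "active": True})) # False
```

Real contract-testing tools (Pact is one well-known example) go further by sharing these expectations between consumer and provider builds, but the core check is this shape comparison.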

So can you explain the test pyramid and how this change relates to that?

The idea of the test automation pyramid is to push all the tests down to the lowest level that you can. At the bottom there are unit tests that are fast to write, give us fast feedback, and are easy to maintain. Those are the tests we love. Then that middle layer is the API level, the service level. Those are a little harder to write and give slightly slower feedback, but we don’t hate them; they’re okay. And then that little triangle at the top is the UI-level tests, which are really workflow tests, not UI unit-level tests. Those are the ones we hate, because they get flaky and they fail. That’s the way to look at it: push them down as low as you can, toward the tests you love. And these days, for a number of reasons, your shape might be different. We have a lot of really cool UI test tools, and we have a lot more we can do at the API level. The test automation pyramid is just a thinking tool. You don’t have to end up with a pyramid shape.
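A toy sketch of where each layer of the pyramid sits. The checkout functions are invented for illustration; the point is only which kind of test lives at which level.

```python
# Invented checkout functions, used only to place tests on the pyramid.

def line_total(price, qty):                 # code under test
    return round(price * qty, 2)

def cart_total(lines):                      # composed, service-level behaviour
    return round(sum(line_total(p, q) for p, q in lines), 2)

# Base of the pyramid: many fast, focused unit tests (the ones we love).
assert line_total(2.50, 3) == 7.50

# Middle: fewer service/API-level tests exercising composed behaviour.
assert cart_total([(2.50, 3), (1.00, 2)]) == 9.50

# Top: a handful of end-to-end workflow tests would drive the real UI
# (browser driver, full deployment); kept few because they are slow and flaky.
print("pyramid sketch passes")
```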

Could you do me a favor and describe TDD, ATDD, and BDD, and the difference between the three of them? Because I constantly get confused when I’m talking about them.

TDD is test-driven development, or test-first development. That is actually a code design practice, not really a testing practice, but you end up with a nice suite of regression tests that provide a nice safety net for you. You write a little test for some tiny chunk of functionality, write the code that makes that test pass, and then move on to the next little chunk, and you’re refactoring in there. So it’s red, green, refactor.
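One red-green-refactor cycle might look like this sketch; the FizzBuzz-style slice is ours, not from the episode.

```python
# RED: first, write a small failing test for the next bit of behaviour.
def test_multiples_of_three_say_fizz():
    assert say(3) == "Fizz"
    assert say(4) == "4"

# GREEN: the simplest code that makes the test pass.
def say(n):
    return "Fizz" if n % 3 == 0 else str(n)

# REFACTOR: with the test green, clean up names and duplication, then
# loop back to RED with the next tiny test (e.g. "Buzz" for fives).
test_multiples_of_three_say_fizz()
print("green")
```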

Acceptance test-driven development is: okay, we’ve got this feature, we’re going to try to build some shared understanding about it. We’re going to size the stories, talk about each story, make sure we understand it, and do something like example mapping to get the purpose, the business rules, and concrete examples. We’re going to turn those concrete examples into executable tests that guide development. So the developers are going to start with the happy path test and get that passing, then move on to different tests for boundary conditions or edge cases or whatever, and get all those passing. We’re going to have conversations all along the way with the product owner. We’re going to find bugs, and we’re going to fix them right away.

Behavior-driven development is very similar to acceptance test-driven development, but you use the given this, when I do that, then this happens syntax, so it is English and business people can follow it. BDD is making sure that those tests are readable by the business people or product people, so that they can participate in creating them in pre-iteration planning meetings where we talk about the stories.
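As an illustration of how the Given/When/Then phrasing maps onto test structure, here is a sketch with an invented discount rule; real BDD tooling (Cucumber and friends) would keep the English in a feature file and bind it to code like this.

```python
# A hypothetical business rule, purely for illustration.
def apply_discount(order_total, is_member):
    """Members get 10% off orders over 100 (invented rule)."""
    if is_member and order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total

def test_member_gets_discount_over_100():
    # Given a member with an order over 100
    total, member = 150.00, True
    # When the discount is applied
    result = apply_discount(total, member)
    # Then they pay 10% less
    assert result == 135.00

test_member_gets_discount_over_100()
```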

I like example mapping from Matt Wynne, but there are other techniques for having a structured conversation to make sure that we’re doing lateral thinking about this story: is it the right size? So we have those conversations before the iteration planning meeting so that we have that foundation of shared understanding. But these days BDD is used more at the acceptance test level or story test level. And then there’s also specification by example, from Gojko Adzic, which is a very similar technique. They all have the same goal: building a shared understanding, writing the executable tests, and using those executable tests to guide development. So we know what code to write, and we know when we’re done writing it.

So I’ve found that natural language, that given blah, when blah, then blah, aligns with the way I think more than code does. And so it feels like a more natural way of writing tests or describing how something should work. What we found in the data world is we can actually push that earlier in the cycle and use it to define requirements. So we can say, okay, we need to go and create a flag, which is active marketing customer, and the rule will be something along the lines of: given a customer, and they have an account, and the account has had a balance in the last three months, and the person’s not dead, then they’re an active marketing customer. So do you find that natural language behavior-driven design pattern is getting pushed into the specification side of things, or is it still being treated as a way of describing a test only?
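That spoken rule can be made executable almost word for word. This sketch assumes invented field names and reads "three months" as 90 days; a real system would pull these facts from customer records.

```python
from datetime import date, timedelta

def is_active_marketing_customer(has_account, last_balance_date, deceased, today):
    """Given a living customer with an account that has had a balance in
    the last three months, then they are an active marketing customer."""
    if deceased or not has_account or last_balance_date is None:
        return False
    return (today - last_balance_date) <= timedelta(days=90)

# The sentence the business person spoke, now runnable:
print(is_active_marketing_customer(True, date(2024, 3, 1), False, date(2024, 4, 1)))
```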

I think what you’re describing are examples, and they’re awesome. And they’re something that we need to do together. We need to have a product person, a coding person, and a testing person sitting down together to write those. The unit-level tests are all about code correctness and internal code quality, so those are owned by the team. The tests you’re describing are business-facing. Those are really the quality as defined by the business, and hopefully that matches the quality as defined by the customer, which is really the only quality definition that counts.

If you never automated those tests, I wouldn’t care so much, because the value is in writing them together and building that shared understanding. And it’s funny, because when I’ve worked on teams doing that, I couldn’t tell you how many times it happened that I wrote an ATDD type of test and the developer comes over and says, oh, this test is failing. And I say, that’s because it should be doing this, and you wrote it to do that. And he says, no, I think it really should do that. Now we’ve got to go to the product person and get the answer, and it makes us talk to each other. That’s the value. And we did automate a lot. We didn’t keep all the ones we used to guide development, because it was overkill. You can’t just have these gigantic suites of automated regression tests running, even at the API level. So we kept the ones that made sense, but doing all the detailed ones made sure we all had that shared understanding. We built the right thing: less rework, less waste, shorter cycle times. That’s what we love.

Yeah, less waste, shorter cycle times, all that sort of stuff is great. Should we be aiming for zero defects?

Personally, absolutely. I’ve known teams who have succeeded at that. I’ve been on teams who pretty much succeeded at it, to the point where we didn’t really need a defect tracking system anymore. So it is an achievable goal, and if you’re doing ATDD or BDD you’re probably going to catch defects in development. It’s doable. When teams are really competent at writing really solid, robust code that’s not going to fall over, the bugs that remain are misunderstood requirements. And if we focus our energy on getting those right to begin with, on the left side of that loop, we’re not going to have those problems.

What do you think about the idea that quality is free? That all of this time and effort we invest into testing has such a great return on investment, that it more than pays for itself.

Testing is part of software development, just like coding is, just like design is, just like architecture is. It’s not a separate activity. It’s not something we should split out. Do we put a cost on coding and look for ways to save money on coding? Maybe people do; I haven’t ever heard of that conversation. So it’s an integral part of software development, and it needs to be done right along with coding.

Quality again, that’s up to the customer. Do we want our business to do well? We need to agree on what level of quality we want to deliver so that our business can do well because our customers are happy. Teams really need to have that conversation. What level of quality do we want? What can we commit to? And it needs to be a commitment because you’re going to run into obstacles. And if you’re not really committed to delivering that level of quality, you’re just going to give up. Then our customers are going to suffer because our code’s not going to be very good quality.

I’ve seen people talk about the testing quadrants, but I don’t really understand them. What are they?

The agile testing quadrants are another thinking tool that helps us think about quality or testing from the technology-facing side, what we as a technology team care about to make sure we’re happy with our internal code and framework correctness, and from the business-facing side: what does our business want, what do our customers want? And then, what helps us guide development, and what helps us evaluate what we built after we’ve built it? The stuff on the right side feeds back to the left, just like the DevOps loop. They’re just thinking tools. It’s really handy, when you’re planning a new product, a new feature, even a story, to think about all the kinds of testing we’re going to need to do for this.

Performance testing is a great example. A lot of web applications and web products live and die by their performance, but people leave it to the end. We’re going to do that right away; we’re not going to leave it to the end. The quadrants imply no order. That’s in quadrant four, but we’re going to do it first. Same with security testing. Let’s make sure our framework is secure, whatever it is. So it’s just to help us think laterally, getting over our cognitive biases so that we don’t get bitten later by the unknown unknowns we didn’t think about.

I have one last question for you before we go to summaries. What does a really good agile team look like from a test perspective? What’s the result? What does it feel like? Can you describe it?

It’s interesting that you ask that, because Janet and I talk about this sometimes. If you have not experienced the unicorn magic of being on a high-performing cross-functional team, it’s not something you can explain to somebody else. You have to just feel it. For me it’s a team that has had that conversation about quality and really cares and has really committed.

They’ve decided, I want to write code that I would be proud to take home and show my mom and put on my refrigerator, and I’m committed to making it happen no matter what crazy obstacles we run into, whether it’s our management or our organizational structure or the architecture team. We’re going to find a way, and we’re going to be brave enough to learn the good practices that might be difficult but that we know will give us good results. Now, of course, that assumes we have management that lets us do it the best way we know how. It’s teams where every competency is equally valued and every voice can be heard. Psychological safety is obviously a prerequisite; as Joshua Kerievsky’s Modern Agile principles say, everybody can raise issues, everybody can ask questions, just that safe environment. And we love our customers, and we are so happy we can make them happy.

Great. All right, let’s go to the summaries then. Shane, what have you got?

Alrighty. So we talked about waterfall, where the focus is on defect detection, and agile, where the focus is on bug prevention. But then, quite rightly, you took us to task when we described a bunch of very bad practices and called them waterfall. So I’m going to say that a bad process is where we look for defects at the end of the process, and a good process is where we prevent bugs by doing things early to stop them appearing.

I love the idea of having people with testing skills in the UX process, because they’re the question askers. They tend to be good at listening, understanding, and then asking the questions; they have that mindset. I’m assuming that’s because they have a set of either conscious or unconscious patterns in their heads where they’re looking for the things that are going to bite them in the bum. So I like that. One of the things you do is ask the team, what’s the worst thing that could happen? And that will bring out some of the highest risks.

I think the idea is that there are a bunch of good practices, processes, and patterns that teams can adopt, things we can use to help our thinking: the testing quadrants, the test pyramid, example mapping. A bunch of things that can actually help us improve our skills and capabilities in that space.

But we should always start off with a test coach if we haven’t done it before. We often don’t think about the amount of change we’re bringing to teams on day one when we help them adopt this new way of working. I hadn’t thought about testing as yet another thing we’re encouraging them to relearn, and that’s going to take time.

I liked the idea as well that describing a test helps build a shared understanding. By actually describing what a test is, the different people working on that piece of work will have a better conversation, and they should end up with a shared understanding.

And the last one for me, which is definitely my takeaway: do work that you’d be proud to take home and put on your mom’s fridge. That’s going to be my new mantra. Agile ways of working are successful when teams are having more fun and they’re taking work home that they’re proud of and putting it on their mom’s fridge. So that was me. Murray?

I’m glad you said what you’ve said, because this is what I tell the teams I’m working with. I don’t know as much as you, but I do tell them to do all of those sorts of things. This is something we’ve got to work towards; it can take quite a bit of time. I think putting test specialists and dev specialists together in one team is a really good start because that brings testing to the front. And then having the test specialist work with the product owner or the requirements people to flesh out the requirements, I very often suggest people do that.

I hadn’t thought of pairing developers and testers before. I’m going to start suggesting that to some people. I think it’s a very good way for them to understand each other.

I think an agile coach should be talking about this stuff you’re talking about, Lisa. I don’t like the idea of agile coaches who don’t know much about testing or software development, and there are some of them. I think agile coaches should really learn as much as they can and read your books, Agile Testing and More Agile Testing, so they know what they’re talking about. I think it’s really important.

I have worked on a couple of great teams where dev and test have worked very well together. And we actually developed a lot of our test practices through retrospectives. For example, before the developer passes a ticket to the tester, why don’t they have a conversation? The developer says to the test specialist, I’ve done this and I’ve had a code review and I’d like somebody to test it for me. What do you think? And the tester can say, sure, show me. And then they can say, look, I can see you missed this entire requirement, so it’s not ready. And then you don’t have to go through the whole test phase. That’s something that came out of one of the retrospectives I had with a really good team.

It’s a beautiful thing when developers and testers and everybody else are working well together and the quality is high. It’s amazing how efficient you become when you’re not spending all your time dealing with defects and trying to fix them. People find it hard to believe.

I remember arguing with a client about a contract once, because he was insisting that they wouldn’t do any user acceptance testing until the release had been completed, and then they wanted to raise defects. And we argued about whether the contract should allow them to raise defects on defects. And I said, it’s a terrible idea to leave your user acceptance testing until we have a release. I want you in there right at the beginning with our test team and our dev team, helping us to make sure that we’re finding problems and fixing them straight away. And I’m worried that if we have this contract, then you’re not going to get involved until the end, and then we’ll have all of these issues propagate because we misunderstood things. So I love the idea of bringing the customer or the client or the stakeholders in early as well.

I’m glad you mentioned that “show me” thing when you’re talking about how you get these people learning to work together who have never worked together before on a new agile team. That practice of asking the developer, oh, can you show me what you’ve just coded before you check it in? I’d just like to see how it works, and let’s look at it a little bit together. It’s just gonna take a few minutes, but it’s going to start building that relationship, and people will start seeing the value of working together.

Yeah. All right. Thank you very much for coming on. Now, how can people find you?

I’m very unimaginative. I’m just Lisa Crispin on Twitter and on LinkedIn. And my website is lisacrispin.com.

And you’re available to help people? What sort of consulting are you doing at the moment?

Janet Gregory and I have our Agile Testing Fellowship, where we offer our holistic testing course, Holistic Testing Strategies for Agile Teams. We’re also working on some new courses, including Holistic Testing for Continuous Delivery, which is coming up. So I can facilitate the training, which has a lot of hands-on, really fun stuff. And then I can also help teams with things like quality practices and assessments: do process retrospectives and interview people and find out, how are you doing quality now? How are you doing your testing now? Where could you improve? And then I can maybe offer some workshops or one-on-one coaching or something like that to help improve. Agile transformations, teams that are trying to fit testing into continuous delivery. I still see teams with embedded testers that are calling themselves agile, and now they’re going to do continuous delivery and deploy every day, and why aren’t the testers keeping up? So yeah, those kinds of things. That’s one of my favorite areas to help people with.

And then our book website is agiletester.ca. You mentioned our books Agile Testing and More Agile Testing, but if you haven’t read those books, the place to start is Agile Testing Condensed, which is our latest book, available on Leanpub in many languages and on Amazon as a print copy. It’s only a hundred pages long, and it condenses the information in our other books, plus some new information, and it has links. So if you find a topic where you think, oh, I want to learn more about that, then you can go get one of the other books that delves into it more deeply. And in all of those books, we gathered stories and experiences from leading practitioners all around the world, because I think we all learn really well from hearing other people’s stories. What problem did they have? How did they solve it? And so I think that’s the most powerful thing in our books, and the thing we get the most feedback on as being most helpful.

Thank you very much. Thanks for coming on.

Thank you. It was a great honor and wonderful to talk to you.

Yeah, it was nice to finally meet you. And again, just a shout out for all the testing stuff that you share on Twitter and LinkedIn from around the world. I’ve learned so much just by reading it.

Great.

That was the no-nonsense agile podcast with Murray Robinson and Shane Gibson. If you’d like help with agile, contact murray@evolve.co. That’s evolve with a zero. Thanks for listening.