Amazon’s product development process with Ryan Lysne
Join hosts Murray Robinson and Shane Gibson as they sit down with special guest Ryan Lysne, a Product Director at Amazon, responsible for the Amazon app, the content creator economy, and worldwide events.
In this episode:
We examine the inner workings of Amazon’s product teams. Is Amazon truly agile?
Ryan unveils the 10-step product management cycle that Amazon teams employ, providing a comprehensive peek into their discovery, funding pitch, technical development process, and experimentation phase.
We delve into how Amazon puts a keen focus on analysing customer behavior and the crucial role data plays in their strategy.
Ryan shares insights on what it takes to successfully utilise this pattern.
If you’re curious about how one of the world’s biggest tech giants manages its products, this conversation is a must-listen!
Read along as you listen:
Shane: welcome to the No Nonsense Agile Podcast. I’m Shane Gibson.
Murray: And I’m Murray Robinson.
Ryan: And I’m Ryan Lysne.
Murray: Hi, Ryan. Thanks for coming on. So we wanna talk about the 10-step product management cycle that you’ve written up describing what you do at Amazon. Why don’t you kick us off by telling us a little bit about who you are, what you do, and how you got to this point in your career.
Ryan: Yeah, first, thanks for having me. I appreciate it. So I have been with Amazon for nine years now. I originally joined to help grow the Amazon app. It was only about 10 months old when I first joined, and it was growing like crazy, but there was no product vision, no product strategy, a small team, and really no marketing strategy or anything. So I came on to help provide some direction and grow the team. Since then I’ve built another product, which is the content creator economy at Amazon. This is ingesting expert product review content from everybody from large content providers, like the Wirecutters and Consumer Reports of the world, all the way down to individual influencers that you would find on TikTok or Instagram.
We’ll ingest and then surface their content in different parts of the customer journey on Amazon. And then the third team is the worldwide events team. This is the team that runs the big events at Amazon, like Prime Day, Black Friday, Cyber Monday, and Diwali. So we build product to host those events, take the big traffic spike, and have resilient systems to handle it.
Murray: Okay, so did you start your career as a developer or in marketing, or how did you begin?
Ryan: I actually started product management prior to Amazon. I was at a gaming company, Zynga, before coming to Amazon and worked on a number of games there, in addition to taking over the distribution arm to bring customers into the games.
So we built product onsite and offsite to attract customers to our games. That’s where I started product management, and then I carried that over to Amazon.
Murray: Okay, so what’s your title at Amazon?
Ryan: Director of product management. And at Amazon we have product management, and then we have product management-tech. So I’m product management-tech.
Murray: Do you do Agile at Amazon?
Ryan: I think we do. So we’ll organize around single-threaded teams. These are teams that are designed to solve a particular customer objective, and the team will get whatever it needs in order to solve that objective. That might be software development engineers, data scientists, business intelligence engineers, technical program managers, marketers, program managers, business development. Whatever’s needed to solve that problem, the team will get. And that team is fully functional, meaning there is no handoff that happens with different teams. So when we develop software products, the engineering team will be part of the product development vision, the product strategy, the business requirements, the customer stories. They will provide input into all of that. Then they will own the technical design and technical implementation. They will also own all of the testing and then all of the operational metrics. So we don’t hand off anything to other teams.
Now, there are some exceptions to this. Some teams that have highly complex customer-facing experiences might have some QA dedicated to them. But generally speaking, all of the unit testing and integration testing that we do is within the development team.
Murray: Okay, so fully cross-functional teams focused on a goal. Are they empowered to make decisions themselves?
Ryan: Yes. The way that we think about it is we have a paradigm around one-way doors versus two-way doors. A two-way door is an easily reversible decision, whereas a one-way door is something where, once you go through that door, it’s either not possible or very difficult to reverse back out.
Maybe an example of that would be something that will take 12 months to develop. I would consider that close to a one-way door. That’s a huge investment to make. Whereas if I can reverse it out in the next sprint, I would consider that a two-way door. Anything that’s a two-way door, we empower our team members to make those decisions.
And certainly when it comes to technical decisions where it’s about how do we accomplish a goal, not about what is this delivering to the customer. If it changes what it’s delivering to the customer, they’ll engage with the product manager at that point. And if it’s a two-way door, they can make those decisions. If it’s a one-way door, these are types of decisions that bubble up to some of our more senior people including myself. But we try to, as much as possible, allow our team members to make those decisions.
Shane: So for those one-way door decisions, is the people making the decisions still within the team, or is there a hierarchy above the team that those decisions get pushed up to?
Ryan: So the single-threaded team will be led by a software development manager. Let’s say an individual engineer has a decision to make: if it’s a two-way door, they’ll typically be able to make it themselves. A one-way door will bubble up to the software development manager, who will make a judgment call on whether to bubble it up further. And it just depends. We don’t have any hard rules on this. It’s really about enabling our team members to use judgment on whether they should bubble that up or not.
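The one-way/two-way door routing Ryan describes is deliberately judgment-based rather than rule-based, but as a rough illustrative sketch (the thresholds, names, and escalation levels here are invented for illustration, not Amazon policy), it might look like:

```python
from enum import Enum

class Door(Enum):
    TWO_WAY = "easily reversible"
    ONE_WAY = "hard or impossible to reverse"

def decision_owner(door: Door, months_to_reverse: float = 0.0) -> str:
    """Route a decision per the one-way/two-way door paradigm.

    Two-way doors are decided by the team member on the spot; one-way doors
    bubble up, with the escalation depth being a judgment call (the 12-month
    threshold here is a made-up stand-in for that judgment).
    """
    if door is Door.TWO_WAY:
        return "team member"
    return "senior leadership" if months_to_reverse >= 12 else "software development manager"

decision_owner(Door.TWO_WAY)                        # decided within the team
decision_owner(Door.ONE_WAY, months_to_reverse=12)  # escalated further up
```

The point of the sketch is only that reversibility, not seniority, is the routing key.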
Murray: And do you have product owners or scrum masters in your teams?
Ryan: No. Those roles are typically played by the software development manager, or the technical program manager if the team has one.
Murray: Okay. And how big are the teams? Are they like a two-pizza team or are they a lot bigger?
Ryan: Yeah. The smallest unit that we’ll do is a two-pizza team, which is eight to 10 people.
Murray: And do those teams do customer discovery continuously or just at the beginning?
Ryan: Both. We have mechanisms that are designed both for discovery on existing products and how we’re gonna cycle through and improve those products, as well as mechanisms to help pitch new ideas, new products, and things that don’t exist yet.
Murray: And what about deployment? Are they deploying once every six months or, once a week or, every day?
Ryan: Depends on the platform. The desktop and mobile browsers get continuous deployment. The mobile app we try to release every two weeks.
Murray: All right. You recently wrote an article on Medium about the 10-step product development cycle at Amazon. Can you tell us about that?
Ryan: Yeah, I think a lot of people have heard about the working backwards process. This really drills one or two levels deeper into what the working backwards process is, and then how we think about the full software development cycle.
Murray: So in your article the first step is product and feature ideas. So how do you get those ideas? Where do they come from?
Ryan: Everything really starts with the customer and having multiple different mechanisms to understand what the customer is looking for: from new product development, to modifying an existing program to improve that experience, all the way down to minor bug fixes.
Twice a year our team does an in-depth customer interview process. We’ll go into customers’ homes, watch them use the product, and ask them questions about new potential products. We also do biweekly usability tests. So for a product that might be in the prototype phase, we’ll have a customer come in, use it, and give us feedback on it.
During those, we might also ask customers questions about new products that we’re thinking about to get their feedback. We also look at sources like app reviews and customer support issues. We’ll use all of these sources to triangulate and say, this is the important customer issue. And we’ll go through a prioritization process where we’ll think about what is gonna add the most to the customer experience.
Murray: So the second step in your product development process is the working backwards document, which Amazon calls the PR FAQ. Could you describe that for us?
Ryan: Yeah. If it’s a new program, like the content creator economy, we would create a business plan to describe that experience. We call that a PR FAQ: a press release and frequently asked questions.
The PR will be one page; the FAQs will be anywhere from four to six pages. That’s a big investment to write a document like that, so we will do this if we have approximately one or two two-pizza teams working for over six months on something.
This will typically be for a brand new product or a major expansion of a product. Let’s say, in the content creator space, if we wanted to enable somebody to add music to their videos, that’s a big enough investment that we will write a PR FAQ.
And the FAQs will answer questions like: what’s the customer problem we’re solving? What’s the value to the customer? What’s the business model? What does the experience look like? It’ll also typically include mocks for the experience: what are you building for the customer? And it’ll include anything controversial in nature so that we can have those debates.
Before you write a line of code, you need to understand what you are building. And if we have to make major pivots down the road, we consider it a big failure on our part that we didn’t describe that upfront. Now, of course, things change and new information comes in, and we change things along the way, but big controversial decisions need to be thought through well in advance. That’s the purpose of us doing it.
If we have an existing product and we’re adding a feature that will take a couple of engineers, say, two months to do, we’re not gonna write a PR FAQ on that. We’ll go through the customer stories and then add it to the epic prioritization process.
The third type is when we’re pitching new products. Our teams will do a Shark Tank-like process where you’ll have a one-pager that describes the product at a high level and what customer problem you’re trying to solve. If we deem that to be really interesting, we’ll then go and invest in a full PR FAQ. Our intention was to make it as lightweight as possible for engineers to participate in that process. Cuz you know, a full six-page document is pretty intimidating for a lot of our engineers to develop, and we wanted to create something much more lightweight so that they can be involved.
Murray: Tell me about the third step, which is prioritization. How do you decide what is worth investing in?
Ryan: Yeah, we’ll pitch that PR FAQ document like you would at a VC to get funding to go build that new product. We will use a combination of estimates of impact. Those estimates will be based on a variety of different sources: some of the mechanisms that we use to get customer feedback, and a number of metrics that we connect to long-term free cash flow. In addition to that, there’ll be other metrics that are specific to a team, like customer adoption or percentage reduction in fraud. And we’ll use metrics like that to say: if we build this product, here’s what we expect to change.
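Ryan doesn’t give a specific formula, but as a generic illustration of weighing estimated impact against investment in this kind of funding pitch, a RICE-style score (a common industry heuristic, not Amazon’s model; all names and numbers below are invented) can be sketched as:

```python
def rice_score(reach: float, impact: float, confidence: float, effort_weeks: float) -> float:
    """RICE heuristic: (reach x impact x confidence) / effort.

    reach: customers affected per period; impact: per-customer effect estimate;
    confidence: 0..1 belief in the estimates; effort_weeks: team-weeks to build.
    """
    return reach * impact * confidence / effort_weeks

# Hypothetical pitches with made-up estimates
pitches = {
    "creator-music": rice_score(reach=2_000_000, impact=1.5, confidence=0.5, effort_weeks=26),
    "deep-link-expansion": rice_score(reach=500_000, impact=0.5, confidence=0.9, effort_weeks=4),
}
ranked = sorted(pitches, key=pitches.get, reverse=True)
```

The interesting property is that confidence discounts big, risky bets: a large but uncertain pitch can rank barely above a small, well-understood one.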
Murray: All right, so then the next step you wrote about was breakdown of customer stories. So you’ve started with your research, you’ve done your working backwards documents, you’ve got a, project prioritized, and then you’re breaking down customer stories. How do you do that? Do you use user story mapping or what’s your approach?
Ryan: Yeah, we get into a lot of detail in terms of “as a customer”, or “as a vendor”, or “as an internal customer”. We’ll break down all of the different user stories, which describe the functionality of the product. We will then discuss and debate a minimum lovable product: a product that we think customers will love.
This is different than an MVP. We have a higher bar just because of what our customers have come to expect. And there are some things that are debatable in terms of which features we need to prioritize, but there are some things that are non-negotiable. So, for example, when we add an experience to an existing page, like the product detail page, it needs to add zero latency, literally zero milliseconds of latency. Security: non-negotiable.
There are specific requirements that we have to meet in order for it to be secure. We also have a number of operational excellence and engineering excellence standards for when a service launches. It needs to meet a certain score that our foundations team has set in place in order for the service to be resilient.
That includes things like availability and the number of customer errors that are happening. Those are all part of the process. When we get into prioritization, there are a number of things that have to be done, and then we debate the things that are optional.
Murray: What’s the format of a customer story?
Ryan: Very simple. It’s basically a short paragraph: “As a blank”, which describes who is wanting this particular feature, “I want to be able to do X”, which describes the functionality that they’re trying to achieve.
Murray: And do you also do the “so that I can get a benefit” part?
Ryan: I don’t think we mandate that, but I think most of our product managers will include something like that.
Shane: And who’s creating those user stories?
Ryan: Our product managers.
Shane: Okay, so do they create them with the team, or do they create them before the team sees them?
Ryan: The product manager creates it, but it’s reviewed with the entire team. So the team will see all of those user stories and they’ll provide feedback, they’ll ask questions, they’ll try to clarify things, and a lot changes in that process. But we need to start with a point of view, and the product manager owns that.
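As a purely illustrative sketch (the field names and example are mine, not Amazon’s tooling), the story shape Ryan and Murray describe could be captured like this:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """The simple shape Ryan describes: 'As a <who>, I want to be able to <what>'."""
    persona: str      # who wants the feature: customer, vendor, internal customer...
    capability: str   # the functionality they're trying to achieve
    benefit: str = "" # the 'so that...' clause; not mandated, but usually included

    def render(self) -> str:
        text = f"As a {self.persona}, I want to be able to {self.capability}"
        if self.benefit:
            text += f", so that {self.benefit}"
        return text + "."

story = UserStory("content creator", "add music to my videos",
                  "my reviews are more engaging")
```

Making `benefit` optional mirrors Ryan’s point that the “so that” clause isn’t mandated but most product managers include it.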
Murray: Okay, so the next step in your process is to write a business requirements document, which is a more traditional type of document. Can you describe why you do that and what’s in there?
Ryan: Yeah. This encapsulates, at a detailed level, what we’re building and why we’re building it. So this will include the benefits to the customer and all the detailed functionality. And we break it down starting with major buckets and then go more and more detailed.
This is the key input that will go into the technical design to allow the team to be able to break down and understand what services are needed or what services we need to connect into, what do we need to modify and then how do we break that down.
That’s probably the top purpose. But it’s also super important for our team, and for any other stakeholder to be able to have a document to be able to understand what we’re doing and why we’re doing it. And so it’s easy for us to share with all of our stakeholders, including our team so that they can ask questions and they can provide input if needed.
Murray: Why can’t they get this from the PR FAQ and the mocks? Why does it have to be written up in a big document?
Ryan: The PR FAQ is too high level. And the mocks, sometimes they’re part of the PR FAQ, sometimes we do them as part of the BRD. If a PR FAQ is going to a senior person, we’ll definitely include them so that they can see them. If it’s not, we’ll probably include them as part of the BRD. So it depends on where that goes. But certainly the mocks are super helpful to see.
Murray: Okay. So then the next step is what you’ve called technical design and tech breakdown. So take us through that. Is that a document or is it a series of models or what is it?
Ryan: Yeah. To be able to deliver the functionality as laid out in the BRD, the team will develop a technical design: what services are required, what data storage, and what security do we need to deliver that. The team will typically review that with senior engineers, principal engineers or higher, depending on what it is. And once the technical design is aligned on, the team will start to break it down into epics to get ready for technical implementation.
Murray: And do you have technical epics and business epics?
Ryan: Just technical. Yeah.
Murray: Just technical. So you have customer stories and user stories and technical epics.
Murray: Because typically an epic is meant to be a collection of user stories.
Shane: Yeah, but what you’re saying is you are documenting the customer-facing user stories early, and then at the technical design stage you’re doing the developer tasks. So you’re just delaying when you do that effort, and then they should be bound back together.
Ryan: There are both front-end customer-facing tasks to be done and back-end tasks to be done, and the BRD and the breakdown process do both of those things at the same time.
Murray: There’s been a bit of a debate in the Agile community about whether you should have technical stories and epics. I think they have their place as supporting components enabling customer and business functionality. But there’s also a view that you shouldn’t be doing anything technical unless it’s contributing to a customer outcome, and that therefore they should just be tasks within a customer story.
Ryan: Maybe this is just semantics, but I think the epics that we have all revolve around functionality: things that enable the customer. And by the way, the customer might not be an end customer; it might be an internal customer, it might be just an API. The epics revolve around functionality, but the stories that we write will be all the detailed components to make that happen. So that’s why I think it’s maybe just semantics. I think we’re saying the same thing.
Murray: Okay, These two big documents, business requirement document and technical design document, these seem quite wordy. I’m just wondering about the balance between explaining things visually, between explaining things in words. What’s Amazon’s philosophy on that?
Ryan: So when it’s a customer-facing experience, we always have mocks to walk through the functionality, which then leads to discussions around the happy path and then exceptions: what is the experience when it’s not the happy path? It’s an iterative process with our design team. The design team will be equally part of our team, just like a product manager or developer would. So we’re reviewing the mocks, providing feedback and input, and updating and iterating on those.
The reason why we translate those is just to make it easier for our team members in terms of ownership. So here are the things that we need to build, which can cross mocks, and then we’ll assign those blocks of work to team members to deliver.
Shane: So one of the ones you’ve got in there is data as a product, which is a really hot term in the data world at the moment. Do you wanna explain what that means to you?
Ryan: Yeah. I’m writing another article on data as a product right now because it’s a massive topic, especially in a world with AI being more accessible and easier to use. But in order to do that properly, you need to have the right data, and you need to have it organized properly, aggregated properly, and maintained properly.
And I think a lot of companies now work backwards to a degree on the customer experience, but don’t necessarily do that with data. And data is everything from customer-facing behavior metrics to operational metrics to financial metrics. Working backwards with data as the product is, in my mind, absolutely critical to doing this properly and doing it right.
We definitely think of data, and working backwards from data, early on in the process. It’s included in our working backwards documents, it’s included in the BRDs, et cetera. And we call it out separately, especially if we’re talking about anything where we’re gonna include machine learning as part of the delivery of the product: there’s gonna be a whole separate data section in there. And then we have team members dedicated to managing that.
Shane: So what’s the organizational design around data as a product?
Ryan: Yeah. There are some products that are built with the purpose in mind of being a platform. When that is the case, that team is an API that serves other teams. For anything else, where it’s meant to be a product that this team and this team only uses, the servicing of data is much more siloed. That’s the downside of the decentralized, single-threaded nature of Amazon: in a lot of cases, data is specific to a particular team and not available to other teams. There are also a lot of situations where that data is not appropriate for team members to have, and there are very specific protocols to provide access to data and so on. But at a high level, I would differentiate between products that are meant to be platforms and those that are not.
Murray: All right, so the next step in your process is technical implementation and testing. So tell us about that.
Ryan: Yeah, this is basically, once we have the design and we have the epics and stories broken down, the team executing against that. Once we have a product built, it’ll go into integration and unit testing, and if it’s customer facing, we’ll do UATs. Once we understand the performance of the product, and whether it meets the requirements that we set forth in the BRD and our expectations around performance metrics, we’ll go through an iteration process.
Murray: Do you have all of the user stories and epics written before you start any technical development? How many months’ worth of work would you write up before you start dev?
Ryan: It is highly dependent on the size of the product, I think. It could be anywhere from a few days to a month or longer, depending on how complicated the product is.
Murray: A month’s not bad. Traditionally you would write six to 12 months’ worth of detailed requirements before the dev team starts working. A couple of weeks to a month is much more agile. You did mention in your write-up that this is owned by the technical team. So do you see there as being a difference between the product team and the technical team here?
Ryan: Yeah so we’ll have a product manager who owns certain parts of the software development process. We’ll have engineers and the software development manager and or technical program manager own certain parts of the process.
Now, that said, while there is a specific owner of each one of these steps, pretty much the entire team is involved in each step. Even in technical implementation, the product manager is involved in understanding: are we on schedule? Has anything come up that we need to go talk to another team about, or that we need to discuss, triage, or change? And the engineering team is even involved in providing feedback on what the end product will look like. Everybody’s involved throughout that process.
Shane: So we would typically see a product manager, some form of technical lead, and then a third person, which is normally a subject matter expert or designer. And they form this trio where they do some early work. They’re probably the more senior people on the team, helping facilitate those trade-off decisions. So is that what you run?
Ryan: Certainly in the early part of the process, for sure, those are the people taking the lead on what I’ll call the first iteration, and they own the facilitation of getting feedback from other team members. That’s what I mean by owner: they own the output and the facilitation of it. But the whole team is involved in providing feedback. So yes, those are the people on point in the early part of the process.
Murray: Okay, I’m curious about testing. Do your teams do automated testing every day? Do they do test-driven development? Do they automate integration tests? Tell me more about that.
Ryan: Yeah, one of the key metrics that we look at every two weeks is what percentage of our code is covered by automated testing. So in the app, we’ll run those tests as part of the app release cycle: every two weeks we’ll do testing on that. And for the browsers, we’ll do that as part of continuous deployment.
Murray: Okay, so then you do customer flow and user acceptance testing. And it sounds like, the way you’ve described it, you wait until you’ve got a customer flow that’s complete, which could be two to four weeks’ worth of work, and then you test that. Is that right?
Ryan: Yeah, it could be anywhere from two weeks to months, depending on the complexity of it. And typically our product managers will take the lead on UAT, and our engineers will take the lead on unit and integration testing. For anything that we find in that process, the product managers will be involved in the prioritization: what do we fix versus what do we let go?
Murray: And do you show real customers any of this as you’re working on it?
Ryan: There are real customers, but they’re employees. We have a beta app for employees. And we can put pretty much whatever we want in there to get feedback from them.
We view that as really helpful for bug finding and performance testing. It is not good for behavioral metrics: will the customers actually use this, the retention rates, purchase rates, all that kind of stuff. It’s not representative of the broader customer set, but it is helpful for performance testing.
Murray: So are real customers having input into the earlier stages? The mocks?
Ryan: Yeah. Some of the mechanisms that I spoke about earlier, like usability tests, we’ll inject as part of the working backwards process. Everything from “should we build this product at all? Is there a customer need or a customer demand for this?” will be included in some of those mechanisms, down to questions about specific mechanics or flows that we might use to solve a particular customer problem. We’ll get feedback through those usability tests.
Shane: So there are companies that go out and research with a customer problems that might exist. And once they find that problem, then they go into the whole process of how they might solve them. And so I put that in the UX research bucket on the left. And then on the right hand side we think we know what the problem is. The way we’re gonna validate is we’re gonna build something, go out there, and we’re testing both the problem exists and that we’re gonna solve it. And I’m struggling to figure out where you sit. Do you sit on the, UX research where you’re talking to the market first about what problems need to be solved? Or are you more in that as a team of experts, you think you know what the problem is and you’re going around and testing that?
Ryan: Yeah, I think it’s in the middle somewhere. It depends on what we’re building. If it’s an extension of something that we already have, and we have a lot of customer data on it and we know what it is, it’ll be more like an MVP: just launch it. We’ll extend things without even testing them. Like, if we have a deep linking product that we didn’t launch on certain browsers or in certain countries, we’ll just launch that stuff. We know it works, we know customers love it.
If it’s a brand new program where we got some feedback from customers in some of our deeper customer interviews, but we ultimately don’t know whether, if we spend the next six months building this, anybody will use it, then we’ll do some more research to validate: is there any demand here? So yeah, it depends on the risk of the situation and how much data we have.
Murray: All right, so next step is final fixes and experimentation set up. I understand final fixes. What’s experimentation set up?
Ryan: Yeah, we spend a lot of time on this. This will be identifying how we’re going to trigger the experiment, and reviewing the customer flows with an experiment bar raiser: somebody that is well-versed in how the experimentation systems work at Amazon, and also in statistical analysis, who will review our success criteria and what would constitute a positive result, something that we would wanna launch for customers versus something that we would not. Our native app is fickle when it comes to experimentation, so we’re very careful about where and how we trigger things, cuz it can cause all sorts of issues with our experimentation system. We spend a lot of time making sure that we have the setup right. And then there are some very simple parts to this: are we testing just the performance of the system, cuz we know we wanna launch it? Or are we doing a customer behavioral A/B experiment to understand whether there is incremental value in launching this?
Shane: So you’re saying that you’re definitely using the experimentation at the end to tune it, but it could also be a go no go for release. Where you go, actually there is no uplift, so we are not gonna push that at all.
Ryan: That’s right. Yeah. That happens all the time. And we also use it to decide more controversial things. An example of this would be an overlay: if there’s an interstitial or something like that, we’ll test it to see whether it drives incremental value or whether the overlay is annoying to customers. And the experiment will tell us whether we should move forward with that or not.
Shane: So that’s quite a large investment to get to that stage though, if you’re talking about a couple of months’ worth of effort to get there and then go, actually, there’s no uplift. So there must be a culture around this idea of a large upfront cost to test whether the bet is true or not.
Ryan: That happens. Now, when we go through this 10-step process, we have quite a rigorous debate about what is required in order for us to test this. By the way, we might not even go through this from a coding perspective. We might test something that doesn’t require any code to understand whether there is customer value there. That’s definitely step one: if we can test something without writing any code, we’ll definitely do that. Step two is, if we have to build something, we’ll have a rigorous debate about what we need to build in order to get a signal on our hypothesis that this will create value.
And also we’ll have a debate around what things we build now versus later. An example of that: some things are non-negotiable; security is one. And then there are things like, do we build this to 10,000 TPS or do we build this to a hundred thousand TPS? Those types of scaling things.
We’ll have all sorts of debates around what we build now versus, okay, now this works, now let’s build it out. We try to get as close as we can to the lean startup approach, but like I mentioned earlier, we definitely have a higher bar than average in terms of providing a reliable, highly available customer experience.
Murray: How do you write up your experiments? Do you have a standard format for them? Because there is an approach that other people have talked about, which is here’s my hypothesis. Here’s what I would expect to see. If my hypothesis is true, here’s what I would expect to see if my hypothesis is false. And then here’s the tests I’m going to do.
Ryan: Yeah, we definitely do that. When I was talking earlier about the experimentation setup, that’s when we talk about, here’s the hypothesis we’re testing, here are the success metrics that will determine whether this is a positive experience or not. And then we report out on what we included as part of the setup.
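The hypothesis / success-metrics / report-out structure Ryan describes can be sketched as a small data structure. This is an illustrative sketch only: the field names, thresholds, and launch rule are hypothetical, not Amazon's actual experiment format.

```python
# A minimal sketch of an experiment write-up: a hypothesis, the success
# metrics agreed up front, and a decision derived from observed results.
# All names and thresholds here are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class ExperimentPlan:
    hypothesis: str                       # what we believe will happen
    success_metrics: dict                 # metric name -> minimum uplift to call it a win
    results: dict = field(default_factory=dict)  # metric name -> observed uplift

    def decision(self) -> str:
        """Launch only if every success metric meets its agreed threshold."""
        if not self.results:
            return "pending"
        if all(self.results.get(m, float("-inf")) >= threshold
               for m, threshold in self.success_metrics.items()):
            return "launch"
        return "do-not-launch"


plan = ExperimentPlan(
    hypothesis="The new overlay lifts add-to-cart rate without raising exits",
    success_metrics={"add_to_cart_uplift_pct": 0.5, "exit_rate_delta_pct": -0.1},
)
plan.results = {"add_to_cart_uplift_pct": 0.8, "exit_rate_delta_pct": 0.0}
print(plan.decision())  # → launch
```

Writing the decision rule down before the test runs is what makes the later report-out clean: the launch call falls out of criteria agreed in the setup, not a debate after the fact.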
Murray: All right, so then you’re coming to the end of your 10 step process, you’ve done your experiments, you’ve got some customer feedback. How do you write this up or talk about it to decide what to do next?
Ryan: Yeah, it depends on the outcome. If it’s a positive experimentation result, based on the success criteria that we laid out in the experimentation setup, we’ll write that up and launch it. In some cases there may be some controversy. Some metrics are up and some metrics are down, or metrics are up in some countries and down in others.
Some particular Amazon businesses might be up or down. And so depending on that situation, there may be some additional escalation or discussions to determine what we want to do here.
And so we’ll write up a document with the pros and cons of the different options. But let’s just say, for the sake of argument, it’s a clean case one way or the other, to launch or not launch. If it is to launch, then we’ll document all of the customer-facing issues or other things that need to be fixed, and we’ll include that as part of the product development process. Or if there are additional features to add, or other dimensions like countries or browser types or platforms, like adding TikTok on top of Facebook or whatever it is, we’ll take all those learnings and put them back into the product development process.
Murray: So let’s say somebody read your article and they want to implement it at their company, what are the common pitfalls that you need to avoid?
Ryan: Yeah, there are a lot. What I’ve described today has evolved over a long period of time. I would take the simplest, fastest path to get through this, and then you can add sophistication along the way as you learn things. I would err on the side of simplicity. I would err on the side of speed, and just make sure you learn along the way the things that were missed that you need to add to the process.
Murray: What about communication? How do you keep your stakeholders informed during this approach? What do you do?
Ryan: I’d say 80% of the time we will develop a simplified version of our technical implementation roadmap. So for a particular product, we’ll say, okay, this is broken down into x number of steps. We typically try to fit this on one piece of paper. We’ll have timelines along this, and we’ll send out a newsletter, anywhere from every two weeks if it’s a high profile project to once a month, with that one-pager basically saying, are we red, yellow, or green on each one of these milestones?
Murray: What is the difference between product managers who are more successful and those that are less successful? Is it the process or is it something else?
Ryan: Certainly, to do this well requires a lot of ownership. It requires a lot of dive deep. It requires a lot of collaboration, and those people need to do it. Laying out the process does not guarantee success, that’s for sure. The people need to embrace it and participate in it and own it.
A lot of things that we talked about today will facilitate debate. It’ll facilitate collaboration. It’ll facilitate identifying problems early upfront and having the discussion and debate as early as possible of what do we bake in versus not.
It will facilitate accountability because there’s specific ownership along the way. And it will facilitate rigor. There is risk if you somewhat militantly follow this, that there could be delays. It could slow you down. That’s true. There’s a risk there for sure. But I think this is where ownership, this is where escalation, this is where some of our principles around disagree and commit, come in. Those are also required in order for this process to work.
Murray: Okay, Shane, I think maybe we should go to summaries. What do you think?
Shane: Excellent. I’ve been a great fan of the press release process. I think it’s a really simple pattern that has massive value. So I recommend everybody does it if they can. And I like the idea of your pitch deck. So the idea that you pitch like a VC to get funding for large programs, for new products.
You talked about single-threaded teams: cross-functional, self-contained, and their goal is to solve a customer objective. And the benefit of that is there are no handoffs between teams. That makes our teams much better and faster and more efficient and more engaged. You talked about testing within the team. So again, no handoff to a QA team to do the work. You build it, you make sure it works, and the team’s accountable for the how. Once the objective’s been set, how it’s built is up to the team. They’re the experts.
I love that concept of one-way doors and two-way doors. If you go through the one-way door, it’s hard to reverse. So those are big decisions to be made, because the blast radius of changing that decision is large. Whereas if it’s a two-way door, if you can reverse it really quickly, then it’s a team decision. Just get it done. Doesn’t work? Reverse it next time. Love that language.
The idea that you don’t have product owners and scrum masters, but you have people with those skills doing those types of roles. You have your software managers and your tech product managers doing that work.
And then when I heard about the press release and the FAQ, I naturally thought the FAQ was for the customer. But it’s not. It’s a scope statement. It’s what I call the wills and won’ts. It’s looking at the big moving parts and going, are we gonna do that? Are we not? It’s front-loading that effort to understand what’s in and what’s not, and having those arguments, those debates, early.
I like the idea that prioritization’s about defining metrics and ensuring they will move, but those metrics are team or product specific. And minimum lovable product: I like that idea that it has to be viable and it has to be usable, but people also have to like it to get the value.
And then you talked about what I would term a core set of immutable principles. So you talked about things like security and resilience, and so there’s a bunch of immutable things set by other teams that you just have to engineer towards, because that’s a definition of what your company does. Those are the rules. If you wanna break one of those, you have to go have a conversation with that team.
I like the idea that the PM curates the user stories and the team reviews them. So there is a person leading and providing a point of view, and then the team are reviewing and giving feedback.
Love the idea that you’ve got data as a product. We forget that data is as valuable as the app half the time, so we need to bake that into our process.
Love the idea that you have different types of product managers, some who are more technical, and some who are less technical. And you get a product manager that fits the type of product you’re gonna build.
And then the whole business requirements doc and tech doc: the BRD is quite a lot of documentation up front, which surprised me. Not what I expected.
Which came back to this idea that you’ve written a process down that’s 10 steps. But they’re really guidelines because every time you gave us an example it was a dynamic way of working.
So you’ve got a set of steps that everybody’s following, but they’re empowered to change it based on the context. To release a single feature is very different to building a new product.
And we’ve seen bad behavior before where as an industry, we took the Spotify model, and we said, oh, the way they described it works, that’s what we should implement versus the context of how they implemented it was the important thing. So I think with your stuff, it’s the same, we need the context of here’s the set of guidelines and then teams tailor it to get the job done.
And then, the question we asked you: are you agile? You talked about how you use the process to inform debate. You use it to facilitate collaboration in a better way. You identify your issues early, you facilitate ownership to get the job done, and you facilitate rigor to make sure the job’s done properly. If that’s not the definition of growth mindset and agile, I dunno what is. So Murray, what have you got?
Murray: Yeah. I could imagine management consulting firms and organizations reading this article and turning it into a very bureaucratic process with stages and gates and big documents. And it’ll take three months or six months to go through a cycle. Or maybe even longer.
Because organizations love bureaucracy and standardized processes and documents, and every time we do something in the agile product space, everybody’s turning it into a standard process that everybody has to follow. And then there’s an auditor and checklists. Particularly when I see a BRD, which is a more traditional document, and then a technical design document, which is again, more traditional, it does worry me that it could turn into a bureaucracy.
But then, as Shane says, when you were talking about it, you weren’t talking about it that way, you were saying, we have empowered cross-functional teams. They’re doing continuous discovery and continuous delivery. And there’s this constant feedback loop and experimentation. You could probably do more.
Like it does seem weighted towards the end. And I think best practice from people like Teresa Torres would be to schedule customer interviews every week and show them whatever you’ve got to show them and get their feedback. It feels to me you’ve taken what people do, which is much more amorphous than this, and you’ve turned it into a standard way of working to describe it, but then you are still doing it in quite a flexible way. Is that right, Ryan?
Ryan: I think so. The phrase I would use, when you were describing that, is tools to help facilitate discussion and debate. Because I think what we’re all trying to do is build the right thing for the customer, and build it in the right way, so that it’s flexible and scalable. And I think that’s the intention of most of these steps: not to document for documentation’s sake, but to have a tool to facilitate that.
Murray: We’ve talked to some Spotify people, and they said the way they think of it is that these processes are guidelines, what they call the golden path. This is a path that works for people and it’s supported by tools and templates, but you don’t have to do this. In fact, if you have a different context, we encourage you to experiment. And then different teams experiment with different approaches. Some of the teams do the well-known Spotify model and others don’t. They do different versions of it. And in fact, it’s the different versions of the process that then feed back into improving the process.
So I think if we approach it that way as this is a set of guidelines and good practice that supports teams and helps ’em be successful, but they can still choose whether to do it this way or not. That’s fine. In fact, we encourage experimentation by teams because we need to consider whether people like us actually do know the answers or not. I think we have to have a certain humility to say, yeah, we think we’re experts. We think we do know how to do this, and this is what we recommend. But I think we have to be prepared to be surprised by other people who are doing things differently and actually getting some better results in some situations.
That humility and willingness to be open to experimentation in the process, in the same way as you are experimenting with the product, helps us to improve and to be really good. So I think that’s how I would approach what you’ve written. I like it. I just don’t want people to turn it into a new bureaucracy.
Ryan: Yeah. Big plus one to that. A new product manager coming into Amazon can’t go to a wiki and find this. Each team does things a little bit differently. I think this was my attempt to show our younger product managers, here’s a way, here’s some tools and facilitation mechanisms to be able to think about product development. And to your point, we should be constantly inspecting those mechanisms to see, is it needed, yes or no, and can we improve it?
Murray: All right, good. Ryan, how can people find out more about what you think? I see you have a medium blog. Is that the best way for people to find you?
Ryan: Sure. Feel free to reach out to me on LinkedIn if you want to.
Murray: That’s been great. Thank you very much for coming on.
Ryan: Thank you for having me on today.
Murray: That was the No Nonsense Agile Podcast from Murray Robinson and Shane Gibson. If you’d like help to create high value digital products and services, contact Murray at evolve.co. That’s evolve with a zero. Thanks for listening.