Prototype to Production – Gerhard DeBeer

Jul 17, 2019 | AgileData Podcast, Podcast

Join Shane and Blair as they chat with Gerhard DeBeer on moving prototypes to production in an agile way.

Guests

Gerhard DeBeer
Blair Tempero
Shane Gibson

Resources

Recommended Books

Podcast Transcript

Read along you will

PODCAST INTRO: Welcome to the “AgileBI” podcast where we chat with guests or sometimes just to ourselves about being Agile with teams who are delivering data, analytics and visualization.

Shane Gibson: Welcome to the AgileBI podcast. I'm Shane Gibson.

Blair Tempero: I’m Blair Tempero.

Gerhard DeBeer: I’m Gerhard DeBeer.

Shane Gibson: Hey Gerhard, welcome. Thank you for joining Blair and me for a chat today. One of the things we always start off with is asking you to tell us a little story about your background. So how did you get into this world of Agile, where have you come from, where have you been?

Gerhard DeBeer: So I've been working in data warehousing and BI for about seven, eight years now. I used to work in banking, and when I came to New Zealand I made a bit of a career change, started working in government, and naturally progressed into this space of BI. Where I started was reporting. Then I got involved in a project at one of the government agencies, building a new data warehouse. I saw how the project was being managed as a waterfall approach, and it was actually just a total disaster, which I've since found out is the case in 90% of projects run in that form. When I changed positions to another agency, a similar approach was used to gather requirements and implement change, with mixed results, mostly negative. And then we went through a huge change in how we were going to approach this, moving to a more Agile-ish sort of way of working, which I found to be a lot more positive, with much better outcomes than the traditional approach.

Shane Gibson: I have to admit that I was part of that waterfall project, a long, long time ago, before I started my Agile journey. And the three of us worked on that new way of working. So what we want to talk about today is one of the things we experimented with really, really early: this idea of, could we prototype and then productionize?

Gerhard DeBeer: So in my current organization we went through a process to acquire a business intelligence tool. Previously, the organization had a mature data warehouse, but the only interfaces into it were Excel cubes and SSRS reports. So visualizing data, discovering insights, all of that was very minimal or very hard to do. We realized that we needed a proper business intelligence tool, particularly one of the new generation of discovery tools. As part of the RFP process, as we went through trying to understand our requirements, one of the things we discovered was that there was a need for us to be able to take something from a prototype and productionize it. The way we envisioned doing that was that an analyst, or somebody with more of a business focus, needs a tool where they can quickly come up with an insight: loading a bunch of data, very quickly blending different data sources together, creating some visualizations, and discovering something of importance that we want to be able to repeat on a regular basis. So taking that prototype that the analyst or business person has created, and then productionalizing it in a way that is repeatable, maintainable and easy to use. We've been through a bit of a process of understanding how that would work. The capability of the tool was important for us to be able to do that, so that you're able to lift what an analyst has done and hand it over to a developer, who can then go and implement a solution in the data warehouse, for instance by building a fact table or a view or something of that sort, and then quickly re-pipe the prototype that was created so it comes from a proper source that is properly implemented in your production system. We've done that a few times now, and it actually works relatively well. I would say that the type of tool you're using is important, but so is the discipline you apply to how you productionize. A lot of the new tools out there in the market allow people to bypass any kind of formal data warehousing. There seems to be quite a drive these days to just get things out really quickly without thinking much about the long-term repercussions: how is that going to be maintained? How do you get maximum reusability of the artifacts you're creating? Because it's easy to just employ more people and churn out more stuff, but in the long run I think it is really detrimental to an organization's data governance.

Shane Gibson: So I agree in terms of where the market's gone, and I blame data this time. Last time, I blamed [inaudible 00:06:02]. I'm not sure who we're going to blame in a couple more years. I think data pipelines is the one that is starting to rear its ugly head as the new way of having chaos.

Gerhard DeBeer: Well, there's always chaos. There's always a new product that does things in a different way, that allows you to do things you haven't been able to do before. But for us, it's always the principle. We have a very clear principle in our organization that any business rules we apply need to be in a managed system, which is our data warehouse, and that for any kind of transformation or business rules we want to apply on the business intelligence side, there has to be a very good reason for doing that. And if not, then it has to be pushed back into your managed system.

Shane Gibson: For me, it's around the principles. I like that word, because that's kind of where I've got to with the teams I'm working with at the moment. What I say to them is, we can manage chaos. To give an example: when you say we need to have visibility of the rules, we can put a pattern in place that says when somebody writes those rules in code, those rules and that code need to be exposed in a data catalog, and that's the minimum they need to do to manage those rules. So there's the management of it versus the reuse. If somebody is going to reuse that rule somewhere else, or leverage it, then there's actually a whole lot more management, or more conformity in the way we work, that's important. So what I say to the teams at the moment is, it's okay, because we talk about this middle area, this modeling environment: can we land all the data in the lake, create these data pipelines that are unique bits of code that flow data out to a dashboard, and can we just consistently do that? And I say we can. There are some things we've got to watch out for, we'll have a scaling problem at some stage, but most of the technologies now can handle that. What we have to be very clear on is that we're never going to reuse components within that pipeline. Because if we want to do that, then we have to bring in some different patterns and principles about reusability of the data concepts (customer, product), reusability of the business rules, reusability of the way we transform that data, reusability of the way we consume it. That's the right-hand-side version, where we actually want a lot of reuse, which means we have to work differently all the way through. So we want to make a decision about which one we think we are. What we don't want is to be left in the middle, that chaos theory where everything's unique, which makes us fast, but we're expecting reuse.

Gerhard DeBeer: We tried that as well. We had a semantic layer in our warehouse that was used for our cubes, and we thought we could just reuse that for the business intelligence tool we'd now purchased. And that didn't work, because that semantic layer was designed and built with a very particular presentation layer in mind, and reusability was minimal. So we had to recreate a new semantic layer for the BI tool. Although the principle was always, again, that we want to have something that is layered. At some level in your warehouse, you need to have the artifacts and the objects created so that from there you can farm them out to your different reporting systems, whether it's a BI tool or a canned reporting system, whatever. So there is reusability if you design it well, but it's probably not at the presentation layer.

Shane Gibson: So we take the layering pattern, where we have some constraints in place based on those layers about where things can happen. Typically you'd hear the words persistent staging, maybe data lake, but somewhere you land the raw data in the form it arrives from the source, and you keep history over time, and then each layer has some principles or patterns for how you can transform the way the data looks. We put those things in place to help us reuse. So when you talk about prototype to production, is the analyst allowed to create all the layers in the prototype, from source all the way through? Do they have to create the prototype in a semi-layered way? Are they allowed to write one big code blob, or do they have to actually break it down into a series of code objects?

Gerhard DeBeer: So the person who's doing the prototype is not a disciplined developer. We're talking about an analyst who will go ahead and write a piece of SQL, for instance, joining a whole bunch of tables. They don't care where the data came from, whether it's in your extract layer or your staging area, or whether it's a spreadsheet from a website, or whatever the case might be. They just want to get the data, put it together and do stuff with it. So it is atrocious code, it is not properly developed, it's just hacked together. It is not their role, or in their interest, to do it in a way that it can just be lifted and shifted into production.
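By way of illustration only, a hacked-together prototype query of the kind being described here might look something like the sketch below. Every schema, table and column name is invented for the example, not taken from the organization in the conversation.

```sql
-- Prototype sketch: the analyst blends extract-layer tables with a one-off
-- spreadsheet upload, all in a single query blob with no layering or reuse.
SELECT c.customer_id,
       c.customer_name,
       r.region_name,
       SUM(o.order_amount) AS total_sales,
       x.segment           AS spreadsheet_segment   -- from an ad hoc spreadsheet load
FROM   extract_layer.crm_customers      c
JOIN   extract_layer.sales_orders       o ON o.customer_id = c.customer_id
LEFT JOIN extract_layer.region_lookup   r ON r.region_code = c.region_code
LEFT JOIN analyst_sandpit.segments_xlsx x ON x.customer_id = c.customer_id
WHERE  o.order_date >= '2019-01-01'
GROUP  BY c.customer_id, c.customer_name, r.region_name, x.segment;
```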

Blair Tempero: So you’re saying they’re not creating any objects at all at any stage?

Gerhard DeBeer: Some may do that, because it depends on their level of SQL skill, but I would not expect them to do that at all; it's not really their sphere. Some will do it with tables, maybe, in a database, or just by writing very complicated inner joins or left joins, whatever the case might be. But it's about the output the analyst has created: if we decide we want that output as a productionized, repeatable information product, then it is up to a data analyst to go and analyze what has been done in the code, and then structure it and model it appropriately to implement it in the data warehouse.
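For contrast, a hedged sketch of what "structuring and modeling it appropriately" might look like once the team decides to productionize that output: the logic is pushed into layered, managed warehouse objects, and the BI prototype is re-piped onto the presentation view. Again, all object and column names are hypothetical, continuing the earlier example.

```sql
-- Managed warehouse layer: a conformed dimension and fact built from governed
-- sources, replacing the extract-layer joins and the ad hoc spreadsheet.
CREATE TABLE warehouse.dim_customer (
    customer_key   INT PRIMARY KEY,
    customer_id    VARCHAR(20),
    customer_name  VARCHAR(200),
    region_name    VARCHAR(100),
    segment        VARCHAR(50)   -- now sourced from managed master data, not a spreadsheet
);

CREATE TABLE warehouse.fact_sales (
    customer_key   INT REFERENCES warehouse.dim_customer (customer_key),
    order_date     DATE,
    order_amount   DECIMAL(18, 2)
);

-- Presentation layer: the object the prototype dashboard is re-pointed at.
CREATE VIEW presentation.v_customer_sales AS
SELECT d.customer_name,
       d.region_name,
       d.segment,
       SUM(f.order_amount) AS total_sales
FROM   warehouse.fact_sales   f
JOIN   warehouse.dim_customer d ON d.customer_key = f.customer_key
GROUP  BY d.customer_name, d.region_name, d.segment;
```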

Shane Gibson: So that's always going to be a challenge. Or there are going to be many challenges, but the first one I want to talk about is this: the analyst that hacks this prototype does it very, very quickly. They deliver the initial prototype back to the business owner, the product owner or the key user very, very quickly. And then the second round, to automate it, to harden it, to make it managed, reusable, and fit your layered architecture, takes a lot longer. And I've always struggled with the business owner where they go, but I've got what I want, why do I need to spend all this, this iteration, or those points, or however you describe it?

Gerhard DeBeer: So there are two aspects to that. One would be, if it's a one-off and the product owner has got what they wanted, there's not going to be any energy spent to productionize it. Once it's been used, we can archive it and keep it somewhere. If they want to have it updated on a frequent basis with the newest data, that's a different story; then it has to become part of our data governance. And our data governance is about managing how we do this and how we implement it, so we keep our people safe. So the analyst has gone and created something, but we are not putting a stamp of approval on it. We are not saying this is official data that can be used to make decisions, or to provide information to external parties, or anything like that. It should not be used for that. Business people can understand that, because we are very harsh on it.

Shane Gibson: So the behavior I've often seen is what I call the manual green button. The analyst goes, "I can't schedule it, so I'm just going to run it myself. Every day, I'm going to push the manual green button that I've written that refreshes the data."

Blair Tempero: They are breaching our policy.

Shane Gibson: So strong data governance, or data principles, that say you can't do that. Actually, quite honestly, I did a project many, many years ago, back in the waterfall days, and what we did is we actually color coded the reports.

Gerhard DeBeer: I remember at some stage people put the [inaudible 00:14:35].

Shane Gibson: And it actually depended on the logical environment you were working in. So the analysts had access to what these days we'd call a sandpit, and any content you created in the sandpit had a color scheme that was different from the content that was in the managed part of the environment.

Gerhard DeBeer: Actually, that's a very good point, something that we should do. We haven't actually been making it visually different, have we?

Shane Gibson: It was actually because we were experimenting at the time. And what we found was, when a senior executive got given a number, they'd have a quick look at where you got it from, and all they were doing was looking at the color. It was almost like a spectrum of trust. When I talked to some of them, they said they made a conscious decision: they looked at where the answer came from, and if it was in the trusted color, they're like, I'm good, I'm going to take it to the board, and I know the rigor behind that number is trustworthy. Then I said, that's good. So if it was the other color, if it was like bronze, do you not use it?

Gerhard DeBeer: Hmm, trying to decide whether to trust it.

Shane Gibson: Well, they said no. They said what they do then is they bring in the trust of the person that gave it to them. Do I trust this person? Have I worked with them for a long time? Have they consistently given me numbers that have been correct and validated?

Blair Tempero: You're essentially using a prototype to make business decisions.

Gerhard DeBeer: Trusting the person who created the prototype.

Shane Gibson: And what they said was, look, often the value is in using that information quickly from the prototype. To get that answer and make that decision, we don't actually have time to make it a trusted piece of information first.

Gerhard DeBeer: That's fair, because it's a cost-benefit analysis that you do in your head. It means exactly that: whether you trust the individual who put it together, but also how important it is to actually make a decision right now.

Blair Tempero: So you'd have to do it on a case by case basis. If you're looking at trends, less trust is needed than if you want an exact number, maybe.

Shane Gibson: That was exactly what they were saying: some information to support the decision I have to make is better than no information, as long as I have the context of how trustworthy the information is.

Gerhard DeBeer: Exactly. We found the same as well. And it also depends on exactly the type of problem you're trying to solve, and what the consequence is of getting it wrong.

Shane Gibson: Versus not making a decision. So if we think about that process again, we go back to that investment by the business owner for the second round of work to make it more trusted. How do you do that? Normally, we would say that the squad that's going to do the development would engage with the product owner, and the business owners or subject matter experts; we'd wireframe what they needed, we'd understand the core business events, we'd go through a repeatable process as a data squad, that's the way of working to do this. With a prototype, does it shortcut that in some way?

Gerhard DeBeer: It does, because you already have a wireframe in terms of the prototype. The analyst has already created a dashboard and has the logic built already, so you understand what data elements are required to support it. So you already know how it needs to be put together to meet that; the only thing that needs some time to work out is understanding how the data was put together in the prototype. And that's why you need the data modeler to go in and spend time with the person who created it, to understand the logical process they followed in acquiring the data and blending it with other data sources and putting them together. Once that's done, the data modeler can decide how to create the structures, or maybe you already have the appropriate structures in your data warehouse; if not, then you just fill in the blanks with what needs to be developed.

Shane Gibson: So that prototyping, is it typically done by the core central Agile data and analytics team, or have you actually devolved it out to…?

Gerhard DeBeer: No, it happens with anybody. A lot of it comes from our innovation team, but there are other pockets in the organization who do the same thing. They are very capable people who are very technically minded, who don't find it very difficult to do this themselves. But then we would just work closely with them and get the right skilled people involved at the right time, when they come to us and say, we want this as a production information product. And we just work with them to understand what they have done, and then how that would be lifted into our managed systems.

Shane Gibson: And just following on from that, given the teams have a potentially decentralized operating model, they're out doing self-service within the business groups, which is awesome. Would you ever go back and validate that what they built was actually what the business owner and product owner wanted?

Gerhard DeBeer: Absolutely, that's critical. So as part of that we need to understand what the intent was, and also make sure that what they've done actually matches that, because it happens that they do it the wrong way. They did a join that was wrong, or they didn't use the appropriate data source that they should have used, or something like that; it always happens. Because there are multiple things that they are not experts in: they understand the business side of things, but not so much the data, not as much as we do, because we have people who are experts in understanding the data sources. So it's working with them to understand, and then validating what they have actually done. It has happened a few times that it's, well, actually, you should have done it like this, we'll do it properly, and this is why we do it this way. When we productionize something, it's making sure that we're using the right data for the right purpose.

Shane Gibson: And bringing in all those things like peer review, data quality, writing tests to make sure the data and the rules match what we thought they were, all that kind of good management of data. That's awesome. Do you find, though, that there are a bunch of individuals who prefer to prototype rather than doing the hard plumbing?

Gerhard DeBeer: Absolutely. But using a proper business intelligence tool, you can find them quite easily, just by looking at what people are doing and what stuff they are building. And then you can go and have a conversation and say, I see you've built these things, and these people are using them. Tell us more about what's going on; we're keen to help you out.

Shane Gibson: This is one of the things we've been particularly bad at: BI on BI, using data and analytics and BI techniques to actually see how much of our data is being used, and what content has been created by our customers, to inform where we might want to suggest we invest next. We've typically never had that visibility.

Gerhard DeBeer: And there are still things that you don't know about; we don't have full visibility of everything. This new generation of tools is a blessing, and it is also a bit of a curse. A lot of people talk about democratization of data, but there is danger with that as well. People need to be responsible with the usage, understanding and sharing of data, and how it will be utilized to make a serious decision. So not just the monetary aspect of it; there are also societal aspects that people have to really think hard about. It's not just, I gave you a number and there's no consequence. That number could influence a whole raft of other decisions, and people need to realize that there is a huge responsibility in doing that.

Shane Gibson: And I think that comes back to the idea of data literacy, which is, I think, what we missed when we moved from centralized development and deployment to more of a self-service model. It kind of started off with self-service BI or visualizations, then we moved to self-service data to a degree, and self-service analytics is the new wave coming. We enabled that from a technology point of view, but we didn't really focus on the data literacy.

Gerhard DeBeer: We didn't build the capability of people to do that.

Shane Gibson: And not just the people that create the content; it's also the people they're serving.

Gerhard DeBeer: Both of those. I mean, it's the people who create the content, because it's very easy to assume that this data can be used for X, Y and Z, and a lot of the time it can't, because that was never the intended purpose for it. And it's also the people who are then given the information and have to make a decision about it. For both of those, it's important to make sure both parties know and understand, and have the capability and the literacy with the data, to make the right decisions.

Shane Gibson: So if I look at it from another lens, the lens of friction. The thing I've learned over the last few years, working with different teams and trying to figure out how we take this Agile mindset and create some patterns for data and analytics teams, versus app development teams or digital transformation teams, is that I kind of came to this concept of friction. The way I explain it is: if we look at BEAM as a way of modeling, what that gave us was a way of removing the friction between the business analyst getting requirements for data and the modeler and developer that's actually building the models. And if I think about the information product template and that concept, I'm just working on a canvas for now, turning that document into an A3, so hopefully that's going to work. That removes the friction of how we very quickly get out of a business owner, or in this case out of a prototype somebody's built, something that the rest of the development squad can understand: what business questions we're answering, what's in and what's out of scope for that information product, how we size and prioritize. I've also done some work around defining the rules in a repeatable way: how do we take ETL and turn it into business language for rules, to remove the friction of when you say we're defining an active wholesale customer, here's the definition of it, is this what we have to do to the data to get that for you or not? So removing that friction, I've got two things to talk about. One is that form of language to remove the friction between prototyping and production, and then data catalogs: are they the new cool way of doing this? So let's go back to the first one. In terms of information products, and being able to take a prototype that somebody has done, have you been able to use those templates and populate them faster, therefore reducing the friction from prototype to production?
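As a sketch of that "business language for rules" idea only: the rule is stated in plain words and implemented once in the managed layer, so downstream tools reuse one definition rather than re-deriving it. The rule wording, the 90-day threshold and all object names here are invented for illustration, and date arithmetic syntax varies by database.

```sql
-- Business rule (plain language): "An active wholesale customer is a wholesale
-- customer who has placed at least one order in the last 90 days."
-- Implemented once in the managed warehouse so every downstream tool reuses it.
CREATE VIEW warehouse.v_active_wholesale_customer AS
SELECT d.customer_key,
       d.customer_name
FROM   warehouse.dim_customer d
WHERE  d.segment = 'Wholesale'
  AND  EXISTS (SELECT 1
               FROM   warehouse.fact_sales f
               WHERE  f.customer_key = d.customer_key
                 AND  f.order_date  >= CURRENT_DATE - INTERVAL '90' DAY);
```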

Gerhard DeBeer: Yes, absolutely. I think that makes it a lot easier to understand how to model the data in an appropriate way, and how it fits in with the rest of the data that you have.

Shane Gibson: Is it still the production team filling those out, or are you at the stage where the people doing the prototype understand how they can populate them?

Gerhard DeBeer: Again, the people who do the prototype don't care about that. It's not of interest to them, and I don't think it should be, because it is our responsibility to make sure we do that in a way that manages our own data systems.

Shane Gibson: Look, I'm going to challenge you on that one. I agree that the overhead of filling out those templates for a prototype that's not going to move to production, I get it, is a wasted bunch of effort. You don't care about removing that friction, because it's never going to turn up. But there must be a way we can find that middle ground, where somebody knows we'll productionize it, or they have a feeling we will, because what I've found is, once you educate somebody around those templates, they can naturally fill them out. And as long as the effort is low, and this is why I'm doing that IP canvas at the moment, because I have this theory that we can actually fill it out in 10 minutes if we know most of the answers. And even if we don't, and we've got gaps in it, that's fine.

Gerhard DeBeer: That's a fair assessment. I haven't actually really thought about doing that, or pushing for people to do that, but it's maybe something that we can definitely try.

Shane Gibson: Let's take the BEAM one. What I'd look to experiment with is not enforcing it. Not saying you have to fill out this BEAM table, but saying, when you write your code for your data, can you think about who does what? Because we all know those words are magic words for some reason. Then you're educating with some literacy around modeling based on business events, even if they just gave you a document with five stickies, where "customer orders product" is the thing I'm modeling.

Gerhard DeBeer: I'm just now thinking, because we do training, training people on how to build their own apps and information products. It's actually something we can incorporate in the training as well: when you develop your app, and you are guided through the process of acquiring the data and stitching it together, as part of that you can actually incorporate the who-does-what and so on, because that will also help you crystallize the output needs for this particular exercise you're embarking on.

Shane Gibson: Actually, Lawrence was over here last year, and what he's done is he's taken the BEAM canvas, which is the quad A3, have you seen it? He's now actually starting off the workshops with his customers with the BEAM canvas first, and then going down to the BEAM tables. So what I'm wondering is, again, it's about friction. You don't want to have them do the BEAM tables, that's probably a lot of detail for them to fill out. But if they had the canvas in a PowerPoint, and all they had to do was drop on some colored stickies so they've got the whos, the whats, the wheres, that's another artifact that they always produce.

Gerhard DeBeer: That's a great idea. I think we should try that, and I'll let you know.

Shane Gibson: So the next one is data catalogs, the new hot thing.

Gerhard DeBeer: Yes, they are. They're all over the place.

Shane Gibson: And they're really useful. Again, if I talk about our domain, it's one where we should have always documented and described what we do with data, and we typically never do. But from a prototype to production perspective, if people are prototyping in your tool, and your tool is integrated with a catalog, shouldn't that give us some insight into what they're doing, in a way that reduces the friction?

Gerhard DeBeer: I think there's a lot of value in that. However, at the moment I can't justify the cost involved in doing it right now. I can certainly see that in a few years' time our organization, at least, will have to do that, and there would certainly be value in it, but we're still on our path to maturity, so we're not quite there yet; we don't quite have the need yet. But this data cataloging, and these catalog tools, are really going to be very important in a few years' time. I think for a lot of organizations they're already important, because there's such a wealth of data out there, and getting your head around what can be used for what is getting harder and harder. People talk about big data or whatever; it's not the depth of the data, it's the breadth of the data we have to work with that we have to really try to get our heads around. In New Zealand particularly, because there's so much information available from all kinds of sources, from different government organizations and departments, and New Zealand's Integrated Data Infrastructure, which is also a wealth of information. But it is so wide and so vast that it's really hard to get your head around it.

Shane Gibson: Great, I think we'll see. I think they're almost the new black. Right now, you kind of have to buy a dedicated one, and they're not cheap. We'll start seeing their capability become more and more embedded in products again, as we go through that cycle of consolidation. So I'm still working with a bunch of squads where we're going for that nirvana, which is that in a three-week iteration we can go from an idea, to acquiring the data, transforming the data, loading the data and visualizing it, all in a three-week cycle. One of the teams I'm working with at the moment, we're starting the journey, and it was interesting. The data they're acquiring is API-based, via API calls to the system, and for one of the earlier information products that worked well. Then we came to another information product that was going to leverage some new data from that system. We had an acquisition way of working that we knew worked, but unfortunately, for the data they went to get, the API worked completely differently, so they had to write another bunch of code to make that API work. Effectively, the squad is sitting there going, okay, that's going to be two to three days for the engineers to write that new API call, but the rest of the squad still needs to understand the data model and all that kind of good stuff. So what the engineer said is, well, that's fine: all I'll do is run the API manually, download the data, dump it into an area for you to play with, and then I'll harden up the acquisition code while you carry on. So, same question for you: in your core Agile team, not the prototypers, are you applying a prototyping technique?
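A minimal sketch of the interim step the engineer describes here, assuming a Postgres-style database; the schema, table, column and file names are hypothetical. The manually downloaded API extract is dumped into a landing table so the rest of the squad can keep working while the proper acquisition code is written.

```sql
-- Interim landing table for the manually downloaded API extract.
CREATE TABLE landing.manual_api_extract (
    record_id   VARCHAR(50),
    event_date  DATE,
    payload     TEXT,
    loaded_at   TIMESTAMP    DEFAULT CURRENT_TIMESTAMP,
    load_note   VARCHAR(200) DEFAULT 'manual one-off load, pending automated acquisition'
);

-- Postgres-style bulk load of the file the engineer downloaded by hand.
-- (Exact load syntax varies by database; this is illustrative only.)
COPY landing.manual_api_extract (record_id, event_date, payload)
FROM '/tmp/manual_api_extract.csv'
WITH (FORMAT csv, HEADER true);
```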

Gerhard DeBeer: Yes, we need to. We create some of what we call dummy data, or whatever the case might be, just to have it in a structure that you can then use to develop further, with the intention to harden it up exactly like that. We've done that a few times where there was a new thing that we really hadn't created yet, so we had to come up with new things to create, or it was something we were going to change quite substantially and we knew the attributes that we didn't have yet. So we just made up some attributes, just so we didn't have people sitting around and wasting time, when actually all they need is something so they can go ahead and do the rest of it. So it's just making sure of your human resource management…

Shane Gibson: Optimization of the team, the squad, so that it's not half of them sitting around waiting when they could be doing more work.

Gerhard DeBeer: Sometimes you can't do that, unfortunately, but I think we've had a few times where we could.
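Picking up the dummy-data point from a moment ago, a sketch with invented names: attributes that don't exist yet are stubbed with placeholder values so downstream modeling and dashboard work isn't blocked, and the stub is swapped out once the real source arrives.

```sql
-- Stub version of a table whose real attributes don't exist yet.
-- Placeholder values keep downstream development moving; they are replaced
-- once the real acquisition is in place.
CREATE TABLE staging.customer_stub AS
SELECT c.customer_id,
       c.customer_name,
       'UNKNOWN'          AS loyalty_tier,      -- attribute not yet available from source
       CAST(NULL AS DATE) AS first_order_date   -- attribute not yet available from source
FROM   extract_layer.crm_customers c;
```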

Blair Tempero: Because people find BI work to do, don't they, and they fill that void?

Shane Gibson: BI still has value, but it’s not focused on the main thing.

Gerhard DeBeer: That will always happen.

Shane Gibson: So just closing out then, let's talk about anti-patterns. We've got this concept of self-service prototyping outside the kind of managed data team, and then moving it back in, reinvesting and hardening and reuse and all those good things. What are some of the anti-patterns? What are some of the things you would suggest people watch out for as not-so-good ways of working, if they try and adopt this practice or pattern?

Gerhard DeBeer: Well, one of them would be not just copying and pasting the code back into your warehouse, but actually taking the time to understand what has been done and properly structure it. Because it's easy to just take a bunch of code and create a view for it, which is highly inefficient. It does a whole bunch of things that should not be in a view; for instance, you should actually have created additional fact tables for that particular purpose, and properly layered it. So you need a well-disciplined data modeler to go through it and understand what the intent is.
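To make the anti-pattern concrete, a hedged illustration of what not to do, reusing the hypothetical prototype sketch from earlier: the analyst's query pasted wholesale into a single view.

```sql
-- Anti-pattern: wrap the analyst's prototype query in one view, untouched.
-- It "works", but every report re-runs the heavy extract-layer joins, the ad hoc
-- spreadsheet dependency survives into production, and nothing is layered or reusable.
CREATE VIEW presentation.v_customer_sales_quick AS
SELECT c.customer_id,
       c.customer_name,
       SUM(o.order_amount) AS total_sales,
       x.segment           AS spreadsheet_segment   -- still reading the spreadsheet upload
FROM   extract_layer.crm_customers      c
JOIN   extract_layer.sales_orders       o ON o.customer_id = c.customer_id
LEFT JOIN analyst_sandpit.segments_xlsx x ON x.customer_id = c.customer_id
GROUP  BY c.customer_id, c.customer_name, x.segment;
```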

Shane Gibson: So on that, do we create some form of definition of done for when you're grabbing a prototype and reclaiming it?

Gerhard DeBeer: It has to be reviewed by, say, your data architect as well, your data warehouse architects; there needs to be peer review of these things. That's really important. It's easy to do something in a quick and dirty way and not worry too much about it, until next year, when for some reason something breaks and you have to spend three weeks trying to fix it. That's the main thing for me: making sure that we do things the right way. There's a right time, and there's a right way as well.

Shane Gibson: So a definition of done, to make sure we haven't cheated when we've spent the time trying to harden it. What I'd also suggest, though, is a definition of ready, where we say, for a prototype to be accepted for that rebuild, here are the things we need. Always back to that friction.

Gerhard DeBeer: Things like that, as well as the maturity of the data sources that have been used. If it's just something that somebody put together in a spreadsheet, then there are a lot of questions about whether that's appropriate to use in something that should be a production information product. If somebody just came up with a list of something, then we have to have proper governance practices around that as well: if it's master data, newly created master data or something like that, these are more data governance issues that need to be addressed, and you need to make sure you do that. Again, you can take a lot of shortcuts, you can do things in a quick and dirty way, but make sure to do it the right way.

Shane Gibson: Excellent. Well, thank you very much for your time.

Blair Tempero: Thank you.

Shane Gibson: That was pretty awesome. And we might get you back on to pick another subject in a few months' time, if you'd be happy to go around again. All right, we'll catch you later.

Gerhard DeBeer: Alright, see you later.

PODCAST OUTRO: You've been listening to another podcast from Blair and Shane, where we discuss all things AgileBI. For more podcasts and resources, please go to www.agiledata.io.