Agile Security

Apr 21, 2022 | AgileData Podcast, Podcast

Join Shane and guest Laura Bell as they discuss how you can eat the security elephant in an agile way.

Recommended Books

Podcast Transcript

Read along you will

PODCAST INTRO: Welcome to the “AgileData” podcast where we talk about the merging of Agile and data ways of working in a simply magical way.

Shane Gibson: Welcome to the AgileData podcast. I’m Shane Gibson.

Laura Bell: Hi, I’m Laura Bell.

Shane Gibson: Hey, Laura, thank you for coming on the show today. So today, I want us to talk about Agile and security. What we find when we adopt an Agile way of working is that there are some large pieces of work that tend to happen upfront, things such as the architecture, the platforms and security. And so we always struggle with how do we eat that elephant in small chunks? How do we implement the right level of security at the beginning and iterate as we go, but don't spend an enormous amount of time upfront worrying about the security architecture and getting it 100% perfect? We get it right, and we get ourselves safe. So before we get into that, why don't you give a bit of a background about yourself for the audience?

Laura Bell: Awesome. Well, it's lovely to be here today. So my background is quite interesting. Like most security people, I sort of accidentally ended up there. I started out as a software developer; my first language was COBOL, you'll forgive me for being old school. I moved on to Java and later was a PHP developer, and in fact contributed to a number of the frameworks. I accidentally became a penetration tester when I realized that I was quite good at finding bugs. And somehow now I've made a career of helping teams go really very, very fast, and weave security into building software. As part of that, I co-authored a book with some awesome people called "Agile Application Security" that's published by O'Reilly. And my second book is due out soon, "Security for Everyone". So this is an area that's really exciting to me. And in fact, I have a whole company dedicated to it. So I run a company called SafeStack, and we have a SafeStack Academy, and the whole aim of that is to try and help as many development teams around the world as we can go really fast, and weave security through the whole lifecycle. So I think what I'm saying is we're trying to put our money where our mouth is, and really try and get this at a global scale. So not just one or two unicorn projects. It's a very exciting space.

Shane Gibson: So we often talk about shifting left: how do we take things that normally sit outside the team, increase the skills, the literacy, the capability, and bring them into the team to move earlier in the process? So is that what you're seeing, that actually one of the gaps we've got is around the literacy of developers and engineers to understand security? What should be implemented? What shouldn't? What are those patterns? Is it one of the things we need to deal with first?

Laura Bell: I think so. But I think also, the phrase shift left has been used probably for about five years now, but really, it's being misused and misunderstood a little bit. So we're saying, "Yes, let's shift left", and we all agree with the principles: we move it to other people, we democratize it, we give the power to all of the people in our lifecycle. But in application security, we also still rely on using phrases like, well, we shifted secure coding left. Now, secure coding is a phrase that really tells us something about how we're thinking about the shifting left, and how we're thinking about the position of security inside our development lifecycle. Because security isn't just about the code; it's the entire process that we need to shift left. So having a great idea and thinking about security in it, planning and architecting the choices you're going to make, implementing them and writing the code is part of it. But then there's testing, deployment, build, and then supporting that monster for the rest of its existence. So we understand the principle, but we're still slightly naive, myself included (I find myself doing it), in how we understand what needs moving, because it's really not just about the code that we write, but every process that's involved in building that software.

Shane Gibson: We definitely see that in the data world. So we see that first the platform team will tend to pen test, do some form of testing, or C&A or whatever process we want to call it. And then we land the data, because we want to make sure the environment's safe before that data is available, and then we never touch it again. We start writing code, we start doing things with data, we introduce new components. We've done the security testing once; it's not a repeatable process. So do you see that a lot? Do you see that as still a standard way of working?

Laura Bell: Now, don't get me wrong, in any part of technology there are outliers. There are definitely companies that are on the edge of this, and they are doing very iterative, combined testing all the way through. But for the most part, I'd say probably 90% of companies are still stuck in that same loop, where they're tying their security checks or assurance activities to very big milestones. So you might be happily deploying your code 10 times a week, who knows, but the penetration testers are only coming in once a year, and that's the extent of security. Or, we did a design workshop about a year ago when we started thinking about this feature, but that's not how our dev process is really working. Between those two events, there have been hundreds of thousands of little tiny decisions that have all impacted the security of it. And so we're still, in a way, waiting till the very, very end, even though we know, when we're moving at development speed faster, that we should be iterating quicker. It can be quite daunting to figure out how to turn these giant security activities into little things you can do.

Shane Gibson: So why is that? Because in terms of DevOps, a lot of organizations have introduced the idea of automated testing, test driven design, behavior driven design and testing, and all that kind of stuff. So we've got relatively good at writing little tests, and then when we make a change, running them all and making sure we haven't broken something. But when we talk about security, I don't see that pattern a lot?

Laura Bell: No, and there are frameworks out there, they're just not as well known. So for example, the BDD testing you're talking about, behavior driven development: we have BDD-Security as a framework you can use for security automation testing. So the same way you would write a suite of automation tests for any of your user stories, you can do for security misuse stories, if you will. The trouble with them is that many of our automation testing frameworks, underneath the hood, hook into security testing tools. Now, security testing tools are not written with DevOps in mind. They're written from a time 10 years ago, where it was okay for testing to run for six hours, and that wasn't a problem. And so you can write these minute tests and hook into them and run them quickly, but you spend quite a lot of time trying to go, well, how do I run just this tiny bit of the tool, so that my build pipeline and my testing pipeline don't explode? And there's an element there that, in a way, we need to improve that security tooling, and make sure that it's respecting the conventions and the expectations of how quickly tests need to run. It's not good enough that by putting security in you bloat your timelines by a number of hours or a number of days.
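The security misuse stories Laura mentions can be sketched in miniature. This is an illustrative stand-in, not the BDD-Security framework itself: the handler, paths and token values are all made up, but the shape, a misuse story expressed as an always-run automated test, is the point.

```python
# A minimal sketch of a security "misuse story" written as an
# automated test. The app under test is a stand-in: a tiny
# in-memory request handler with a hypothetical auth rule.

def handle_request(path, session_token=None):
    """Pretend application: protected paths require a valid session."""
    PROTECTED = {"/reports", "/admin"}
    VALID_TOKENS = {"tok-123"}
    if path in PROTECTED and session_token not in VALID_TOKENS:
        return 401  # unauthenticated -> rejected
    return 200

# Misuse story: "As an anonymous user, I should NOT be able to
# view reports." Expressed as a test that runs on every build.
def test_anonymous_cannot_view_reports():
    assert handle_request("/reports") == 401

def test_valid_user_can_view_reports():
    assert handle_request("/reports", "tok-123") == 200

test_anonymous_cannot_view_reports()
test_valid_user_can_view_reports()
```

Because the check is just another fast test, it runs on every change rather than at a yearly milestone, which is the contrast the conversation is drawing.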

Shane Gibson: We definitely see that in the data world. So in terms of tools to test data and not code, we're way behind our app dev brethren, and it sounds like in the security space it's the same thing. Is part of the problem that, when I work with large organizations, we tend to use external third party organizations to do the testing? Now, there's that element of having the cop come in who hasn't been involved, to give us that second set of eyes, and I can understand that. But it seems to me that actually one of the core reasons we do it is around that expertise, that literacy: in the organization there are never enough skills to understand how to test security, what's important, what's not, what we should repeat and what we shouldn't, and so we bring those skills in. And because they're external, they're done based on a contract and for money, they're not part of the team, they become more one-off behaviors. Is that one of the things that you're seeing?

Laura Bell: I think there's an element of that. So there are a few reasons we get these external specialists, and one is that skill set. Undeniably, if you're sat in a security assurance job, whether as a code reviewer or auditor or a penetration tester, or even in many cases now a security architect sitting outside the organization, you're sat in a team where that's what you do every single day, and so your skill set is being honed all the time. And it's expensive to keep that person inside your organization, especially if you're not a huge organization. There are other bits to it, though, internally in your culture. So it can be quite hard to feed and water a full-time security person that is doing these things all the time, because every time they do what they do, they find things. That's what they do. They're good at that. It's their job. And the more they find, the more work comes from security. And so we have this clash, because for most leadership teams security doesn't get seen as a profit-making area; it's seen as part of quality, but it's also part of compliance and risk. And so what they don't want is this continual stream of work that's not going to make more money in the end. Which sounds very cynical, and it's not intended to be, it's just kind of where we're at. Now, if you can just do it as a controlled event every so often, you're controlling that flow of work hitting the teams, and it's a naive way of limiting the budget and time that you're having to take on security. The other thing that is sadly true, because we have such a skills crisis now: when we see people train up and go into an organization as a dedicated security person, particularly in the application space, unless the team is really mature and they're able to go from "I'm here now, I found some things" to making change in that first six months, it's highly likely that they're not going to feel effective. And there are dozens of other jobs out there.
So we see very high turnover of these roles. So we have to be very mindful, when we bring the skills into our team, that we don't just give people the skills, but we also give them the accountability, autonomy and authority to make the changes that they need. Because if they don't have those three, then they're going to feel pretty redundant, they're not going to feel like they can actually get their job done, and so they'll go and do something else.

Shane Gibson: So I see that same problem with testing. So if we think about teams that work in iterations, in two or three week batches, that are following more of a Scrum pattern, there's more of a cross-functional set of skills within that team, and ideally they're responsible for the end to end process. If they haven't done it before, they tend to think about a tester, not testing skills. And if the primary makeup of the team is developers who don't have strong testing skills, we will often see a strong testing capability come in. And the goal then is that that person helps with the literacy of the team: they help with frameworks, the ways of working, understanding what should be tested when, and then they focus more on the edge cases, because those are the hard ones to find. And so the idea there is testing becomes what the team does, but they've got help in understanding that. I don't see that happening with security. We don't see that security person come in early and be part of the team. And maybe that's because we're still treating it as something that happens after the fact. They come in once there's something built to test is secure, but they're not involved in the design process. They're not saying, well, actually, if we design it this way, we're going to have less risk, therefore there are fewer security holes. So is what you're seeing that, again, we're still treating it as an outside-the-team thing, a compliance thing we do at the end to get to go-live?

Laura Bell: Yeah, there's definitely a bit of that. And there's a weird bit of psychology about it as well. Security is one of those areas where, if it was your job to find security bugs and you don't find them, and then a bad thing happens, that makes you feel very, very vulnerable as a security person. Now, anyone who's been in testing will know it's impossible to find all of the bugs, so it's an unwinnable situation. So in a way, we feel a little bit more comfortable in our teams if we think a bit more like Batman. If you go back to the old comic books, there's a superhero, somewhat aloof, over to the side; you just call them when you need them, they take care of the bad thing for you, and then they go away and you don't have to deal with that again. Nobody else in Gotham is really learning to be a superhero; they just call when they need one. Now, that makes us feel safe, because we don't have to care about that. It's not our fault if the security thing fails, it's Batman's. It's that person on the team. That doesn't help the team long term, though, because it doesn't scale. And as that person realizes that there are too many of these things, that they can't possibly save the day over and over and over again, then they're less likely to stick around and stay, and then they're not cross-pollinating into the teams. They're not coaching and skilling people up. Now, some of the solutions we see to this are things like security champions programs, where instead of hiring just somebody who is technically very good at application security, we're hiring people who are also, for want of a better phrase, good humans. They're there as coaches and as mentors and as in-team evangelists. So their job is not to say, hey, I'll take that off you; it's to say, hey, I see you need to do that, let me help you with that. And it becomes a coaching, pairing activity, where you're giving knowledge as you help others.
But that skill set is even rarer than our technical folk, because, and I love my community, many of us come from backgrounds that are not focused heavily on communication skills, on coaching and mentoring. So it's really hard to find the right person for that, who can do the technical side but also can connect, reinforce and coach, so the team are empowered to do it themselves.

Shane Gibson: I have a framework that I kind of think about in terms of skills: people start off as novices, and then they become practitioners, and then experts, and then some people want to move on to coaching. And for me, to go from an expert to a coach is a completely different set of skills. You need to be an expert, ideally, but now you actually need to be able to mentor, train, facilitate, and help people do it the way you did it, and that is hard. And so we see the role of an Agile coach becoming kind of familiar now, but I hardly ever see a QA coach or a security coach. So I resonate with the idea that we need people to become coaches in that area, who help other people do it the way they did. And maybe that's actually a really good technique: if you're an organization that's large enough, and you're worried about your security person not being busy enough, well, if they move into a coaching role, there are always enough teams to help. There's always enough work to be done when you're helping other people get better at what they do, versus doing the work yourself. So maybe that idea of a security coach is something that should start coming out in our market, in our domain.

Laura Bell: I think, if I was honest, we want to get those people coming from application development itself and cross-skilling into security. It's quite hard if your security background is in risk and compliance, or you started out in networking or in physical infrastructure, because the application development world, for want of a better phrase, moves really quick. And I don't just mean in the Agile sense: in terms of the technologies being used, the architectural frameworks, the way we build software is changing rapidly. It looks nothing today like it did 10 years ago, and it will continue to evolve. So when you're saying you need to be a coach, you don't just need to know the security stuff, you also need to be able to adapt that to, say, 10 different languages, or the three different types of frameworks that your team is working on. Plus there's a legacy system over there that we still have to keep on, because it represents 30% of our revenue. And so you need somebody who is confident navigating the application space before you worry about the security space. Because if they don't have that foundation, whatever they give will not be able to connect or resonate with the underlying technology.

Shane Gibson: Yeah, and I'll extend that out in terms of the domain you specialize in. So if you come from an application development background, then applying your security practices and principles and patterns in the data world is actually quite hard. Because we go, well, we don't control the app. Typically it's off the shelf: we've got Qlik, Tableau, Power BI, whatever, that's a commercial product. We're going to do some testing of it, but not a lot, because it's outside our sphere of control. But the way the data is surfaced in those dashboards, what we show there, that's what we care about. And that's a paradigm shift if you haven't worked in the data domain. So again, it's the idea of: does a coach need to have been on the field to be good at helping other people? In the security space, and in the data space, that helps. If you've been a player, then you're a more effective coach.

Laura Bell: Absolutely. And the area of security in data science is even less mature than application security. There are a number of people now in really exciting data science or machine learning roles where it's not software development like we know it. It's really algorithmic, and it's based on the quality of the data going in, and how much you can trust it, and the bias that's in it, and how that data has been prepared and put into the systems we're working with, as well as how we choose to tell a story with it at the end. All of that impacts its security. In terms of the confidentiality: are we exposing too much or too little? The integrity of it: does it actually mean what we're saying it means anymore, can we trust it? And the availability: making sure that we've got these big areas of data available when they need to be, because sometimes we go too far with our security and we lock the doors, when actually that's getting in the way of getting the job done. And anytime security fights getting the job done, getting the job done is going to win. So we have to make that intersection of data science and security something that's much more front of mind than it is now.

Shane Gibson: I think one of the things security teams have going in their favor is that they are often seen as a risk and compliance team, and therefore they are the people with the pen to say go or no-go. It's our testing and our documentation brethren who typically get the four weeks of work shortened down to one day when we're running a more waterfall-based process. So at least security people are seen as more important.

Laura Bell: Hope so, but sometimes it doesn’t feel like that.

Shane Gibson: Well, we don't want the bad news. We just want the green tick, because then we did everything right.

Laura Bell: Absolutely, we can dream.

Shane Gibson: But if we think about that idea of algorithms, machine learning, and, I'm using air quotes here, AI. Again, a lot of the time when we look at security, we look at rule-based paradigms. So we're saying, we're going to go test this: can this person access this piece of the system? Can they get to a place that we don't expect them to? We see a bunch of rules. And then potentially, if we're automating some testing, we may run those rules a little bit later as we make changes. But I don't see a lot of observability. The example I use is: we need to test that people can't access the system if we don't want them to, but we should actually set up some automation that observes what's happening, and then alerts us when things look a bit weird. The idea of, say, well, why don't we log all the logins? And then why don't we run a simple K-Means algorithm across it to say, here's a cluster of logins that just looks strange? We know that Google does it. We know Microsoft does it. We know that when I take my laptop and log in from another cafe, it asks me a slightly different security question, because it knows I'm not in a place that I normally am. So that rigor, that ability to apply those techniques at scale, some of the big companies have. But when I work with companies that are less data driven or less digitally native, those types of techniques, those patterns, don't seem to be top of mind. So why is that?

Laura Bell: I think, in a way, it's a scale problem, unless there is an off the shelf solution that you can find that helps you with it. Gathering the data isn't hard. That's probably the biggest oversimplification I will say today, but gathering the data, at its root, is not the complex part of that puzzle. You can gather data from all sorts of locations and pop it into one nice pot. Great. It's then using the appropriate technology and thought process to really know, well, what is this data telling me? And not just what the data has historically told me, but what in the metrics world we'd call the leading indicators: what are the clues we have that a bad thing is about to happen? Now, security monitoring tools historically, again to simplify something, are generally a regex over a large set of data. Our old Snort and all of our old security alerting and intrusion detection systems were just regexes. They were big complex rules looking for common patterns in packet captures. Now, to do analysis based on, let's call it heuristic stuff, so patterns of behavior, the fingerprint of a person, that's much more than just a regex. That's the correlation of dozens of sets of data at once; it's understanding how to rank that data in terms of priority, how much you can trust each piece of data. And each part of that then has to come together in quite a complex scoring system. Now, this is an area that I'm really excited to see emerging in security, because my thesis, my studies, were in artificial intelligence, back before it was cool and you could get a job in it, and I wrote natural language systems. And the interesting thing about it, which remains the same now, is that to get to the quality of analysis and interaction you need in these monitoring systems, the technology you're building is not as finitely predictable as standard software.
So there are so many different logic pathways through it that interact with each other, that it becomes very, very difficult to track whether you've got a complete picture, or whether you are really taking into account all of the different combinations. So I think, to be honest, it's a hard problem, and the only people we're seeing solve it really very well right now are the people who have so much data to begin with, and have made business models of aggregating it together. So they're applying something they already had. For most of us, we can't achieve that yet.
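The login-clustering idea Shane sketches above can be shown in toy form. Everything here is illustrative: a real system would use far richer features and a proper library, but a tiny pure-stdlib k-means over (hour, location) pairs is enough to see how an odd cluster of logins falls out.

```python
# Toy sketch: cluster login events and flag the ones that
# "look strange". Pure-stdlib k-means over (hour-of-day,
# location-id) pairs; feature choice and data are made up.
import math

def kmeans(points, k=2, iters=20):
    centroids = [list(p) for p in points[:k]]  # seed with first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute centroid as the mean of its cluster
                centroids[i] = [sum(x) / len(cl) for x in zip(*cl)]
    return clusters

# Logins as (hour, location_id): mostly office hours from location 1,
# plus two 3-4 a.m. logins from an unusual location.
logins = [(9, 1), (10, 1), (9, 1), (11, 1), (10, 1), (3, 7), (4, 7)]
clusters = kmeans(logins)
suspicious = min(clusters, key=len)  # the small outlying cluster
print(suspicious)  # → [(3, 7), (4, 7)]
```

The "alert when things look weird" step is then just a rule over cluster size or distance, which is the kind of observability the big providers run at scale.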

Shane Gibson: And so we've seen the advent of the cloud, the AWS and Azure and Google Cloud worlds, and that's changed the way we have to think about security as well. So, for example, I often work with teams that are still new to the cloud. And they'll still think about firewalls, they'll still think about security as controlling the network pipe. And when they then realize that actually, with those cloud providers, everything is an API, that you can come in and get access to everything through an API without going through a closed network, and therefore we have to think about things slightly differently, that's quite a shock to them. Because the techniques and patterns they used on premise, well, the challenge is the cloud providers make it look like you can still apply those techniques: you can set up VPCs, you can install third-party firewall software. And it gives you that sense that those patterns are working, but the world has changed. And same with penetration testing: the number of times I have to have the conversation that, before we go and do any form of penetration testing with one of those cloud providers, we need to talk to them first. Because if we try to do penetration testing, their quite good detection software will shut us down, because that's what it's designed to do. So have you found that this move from on-premise practices to cloud has required a re-education of security people in what's important?

Laura Bell: It's a re-education for all of us. The way we learn, and the way we've always learned, is by taking metaphors and paradigms we are familiar with, and applying them to a new context. So we learned about email because we'd done letters. Now there's a generation that doesn't really know about a physical letter, and they'll apply email to whatever comes next. In the cloud space, we're applying what we learned over, in some cases, 15 years of career in building on-premise solutions, and the only metaphors we have that we can apply to it are those things. So we take them with us, and we look for that familiarity. Now, marketers know this. So when they're putting together "here's why you should join the cloud", they go, well, it looks just like the thing you've done before, so it's familiar, and it's easy. And I'm a security person, so for me it feels a bit like social engineering. But I love marketing people. Please, marketing people, don't hate me for that comment. But that's hard when you learn by metaphor, and those metaphors aren't really appropriate, but they're all you have. So that education layer isn't just about saying, hey, click button number three, and button number two, and turn on this thing. That's the implementation layer of education. Before we get to that, we also need to actually teach the fundamental concepts. So what are the paradigms? What is our new metaphor here? What is it that we're talking to? And I think we spend a lot of time abstracting away from that now and getting straight into implementation, without teaching, well, actually, that button is on the back of a giant space console, and this is what it looks like, and by doing this, it opens this floodgate over here.
When I teach, one of the most fundamental things that we do with our apprentices and with our learners is making sure that we understand, for whatever button we've pressed, what's physically happening, what is happening underneath, what is it causing. Because without that, there's no way to concretely secure it. It's like trying to lock a door when you don't know what shape the door is, where the keyhole is, or what kind of mechanism it has: you've got no hope.

Shane Gibson: And so one of the problems I see is this idea of a shared language. In IT, we all have our own little three-letter acronyms. You go to talk to somebody that's outside of IT, and they ask you to fix their PC, and you're going, well, I work in IT, but I don't fix PCs. So to them, the words we use are alien. And then when we look at data versus app dev versus security, same thing. TLS 1.2? Well, I kind of think I know what that is. But even within the security world there's its own language. So where are we at in getting that shared language, so that people that are experts or coaches in security can use words and paradigms where we can understand what they're talking about? We go, now I know what you're looking for, and then we can have that shared language. Is that starting to happen?

Laura Bell: Maybe in places, but probably not. I mean, in security we are terrible for having many acronyms that mean the same thing, or many different acronyms for the same words. And I think authentication versus authorization is my favorite example, for those who've ever listened to me talk before; it's the one that drives me nuts, as we abbreviate them both to the same thing and then expect everyone to understand implicitly which one we're talking about. For me, and I think this is where I differ in my approach from others in my field, I don't think the technical language we use in application security in particular is helpful. I think actually, sometimes we use it in a way that obfuscates or confuses the matter at hand. Security at its most fundamental is the same set of rules it has always been, whether it was physically protecting a building or a castle or a cave or a treasure chest. It's the same kind of thing. We have preventative controls, things that stop bad things from happening. We have detective controls, things that spot that a bad thing is happening. And we have responsive controls, things that allow you to do something when a bad thing has happened. And there's no shame in, instead of going, well, the preventative controls in this audit showed blah, blah, blah, saying: how are we planning to stop bad things happening? I'm a big fan of bringing the language back to its plainest form and increasing our engagement that way, because there's so much jargon, and it's a nightmare to navigate even if you spend all day wading through it. So I think the time has come for us to actually call it out when we see it and go, can we just speak plainly about this piece?

Shane Gibson: And the other thing that I see when I'm working with people is this idea of unconscious behavior versus conscious behavior. The way I articulate it is: as we increase our level of expertise, we get good at unconsciously identifying the things we know we need to care about. But when we move to more of that coaching type role, we have to become more conscious about why we look for those things. So as we're working on a new deployment of a data platform, and we have a security team that's outside the team, we'll tend to say to them, okay, what are you going to look for? If you can explain to me what you're going to look for, to make sure we're doing the right things, then that will help us design upfront to make sure we're doing those things. And ideally, we can automate some of the proof: we can actually go, well, here are the things we're running, and here are the green ticks to say that those things are in place. But a lot of people struggle with that, because what they want is a design document, or they want to be able to explore the system, because they know unconsciously they'll find what they care about. But if they had to describe it, they'd struggle. And again, is it just the maturity of the security market at the moment, that there's never really been a focus on embedding those skills in a team, so therefore being able to explain them is not as important as being able to execute them?
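One way to picture the automated proof Shane asks for: a handful of named checks run against a platform configuration, each producing a pass or fail. The config keys and rules below are invented for illustration; the idea is just that "what will you look for?" becomes executable and repeatable.

```python
# Sketch of turning a security team's "what we look for" answers
# into automated, repeatable checks — the "green ticks".
# The platform_config keys and the rules are hypothetical.

platform_config = {
    "mfa_enforced": True,
    "tls_min_version": "1.2",
    "public_buckets": [],
    "admin_accounts": ["shane", "laura"],
}

CHECKS = [
    ("MFA enforced for all users",
     lambda c: c["mfa_enforced"] is True),
    ("TLS 1.2 or higher only",
     lambda c: c["tls_min_version"] in ("1.2", "1.3")),
    ("No publicly readable storage",
     lambda c: len(c["public_buckets"]) == 0),
    ("Admin accounts kept to a short list",
     lambda c: len(c["admin_accounts"]) <= 5),
]

def run_checks(config):
    """Run every check and return {check name: passed?}."""
    return {name: check(config) for name, check in CHECKS}

results = run_checks(platform_config)
for name, passed in results.items():
    print(("PASS" if passed else "FAIL"), name)
```

Run on every deployment, a list like this is the design-upfront conversation captured as code, rather than knowledge that only surfaces when an expert explores the system.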

Laura Bell: I think potentially, when it comes to that level of assurance we're talking about here, people feel more comfortable when you begin to document things for them. And I think part of that, again, is that sense of vulnerability you get in that security space. It's really hard in security to sit in a room with nothing documented and try to navigate and go, I need to find all of the bad things, and I have no map. So documentation, which is undeniably almost always out of date and irrelevant and doesn't really represent the end product, and we get that, there's no judgment, it just is what it is, at least is a starting point. Now, that said, I don't think it's the best starting point. I've seen some incredible threat assessments with no paperwork in the room at all. But what they've had is they've pulled in people from the end users of the system. Say we're talking about banking: you've got some customers, you've got some people who are bank tellers, you've got people who work in the call center, you've got people who are developers, and the technical roles are there too. And then somewhere in there, you insert a Chaos Monkey. By Chaos Monkey, I mean somebody who's in the arts, somebody who's a creative person, somebody who writes scripts or fantasy novels. Now, throwing a Chaos Monkey in gives you this voice in the room that is just there to go: but what if I press that button? Or, hey, wouldn't it be cool? And that natural skill set, that energy, comes from somebody who is completely divorced from both technology and the original product, but who is also a creative and artistic type. So they've got that ability to go, yeah, but let's ignore physics for a second, and let's pretend that laws don't exist. What if? That is incredibly freeing, and I actually think we need more of it. We need more conversations where we can sit in a room and ignore the fact we've all been taught to follow the rules and to be law-abiding citizens and not get in trouble.
And just for a second go, wouldn't it be cool if I went through Farmers department store and put my arm across the top of all of the tabletops? How long do you think it would take them to catch me if I destroyed all of their glassware in five minutes? Would we do it? No, we're not evil people. But we need to talk about the chaos, we need to talk about the people and the thought process of, what if there were no rules? Because that's where the vulnerabilities come from: from people looking outside of the frameworks we naturally sit inside.

Shane Gibson: Yeah, didn’t talk about that way. Because the people that want to get in aren’t going to follow the rules. In fact, they’re going to look for the exceptions outside the rules. They’re going to do weird shit, because that’s how they find a little gap you forgot about or that you didn’t find?

Laura Bell: It’s the funny thing about crime. We spend a lot of time in security building rule sets and patterns of behavior. The literal point of somebody who is criminally minded, is to work around any rule set that exists and find a way around it. So our natural tendency to follow and create rules is one of the reasons we’re not that effective all the time at stopping bad things from happening. It’s a catch 22 really.

Shane Gibson: So, for me, I spend a lot of my time working with data teams around patterns. And the idea there is, there are lots of things we do in the data world that we do every time. So we will typically collect some data, we will typically combine some data, we will typically consume some data, and there are a bunch of patterns for the way we store it and the way we model it. There are lots of horrible ones, like SCD type 2s and things we use to confuse ourselves. But there are a bunch of patterns, and you can say, nine times out of ten, we're going to use a pattern that looks like this. And if we can document those and put them in our toolkit, we save time, we save rework, we save argument, and that's good. But we're not particularly good at it; we're nowhere near as good as the app dev domain yet. So what's the equivalent in the security world? I'm doing a startup at the moment, and we're bootstrapping, so we're really conscious of cost. And for me, I just want the basic security patterns. An example: multi-factor authentication. For me that's a basic security pattern, anybody working on our product has to have MFA turned on. It's a no brainer, it's one I know about. But there are probably 10 or 20 other patterns that a security expert or coach knows about. So why can't I find those in an easy and consumable way, so I can just adopt them? I don't like the word, but I'm going to use it: minimum viable security. Give me the 10 dumb things that I should do on day one, and then you'll work with me to find the other hard ones. So does it exist?
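(An aside for readers: the SCD type 2 pattern Shane mentions is one of those documentable data patterns. Here's a minimal sketch in plain Python, with hypothetical field names and a hypothetical `apply_scd2` helper, purely illustrative, not how any particular data platform implements it.)

```python
from datetime import date

def apply_scd2(dimension, incoming, today):
    """Apply a slowly changing dimension type 2 update.

    `dimension` is a list of dicts with keys: key, value,
    valid_from, valid_to (valid_to == None marks the current row).
    A changed value closes the current row and opens a new one,
    so history is preserved rather than overwritten.
    """
    current_by_key = {r["key"]: r for r in dimension if r["valid_to"] is None}
    for key, value in incoming.items():
        current = current_by_key.get(key)
        if current is None:
            # brand new key: open a fresh current row
            dimension.append({"key": key, "value": value,
                              "valid_from": today, "valid_to": None})
        elif current["value"] != value:
            # changed value: close the old row, open a new current one
            current["valid_to"] = today
            dimension.append({"key": key, "value": value,
                              "valid_from": today, "valid_to": None})
        # unchanged value: do nothing, existing rows stand
    return dimension

dim = [{"key": "cust-1", "value": "Auckland",
        "valid_from": date(2020, 1, 1), "valid_to": None}]
apply_scd2(dim, {"cust-1": "Wellington"}, date(2022, 4, 21))
# dim now holds two rows: the closed Auckland row and a current Wellington row
```

The point of writing it down like this is exactly Shane's: once the pattern is captured, the team stops re-arguing it on every build.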

Laura Bell: Well, have I got news for you? It does. So let me walk you through it, because actually, it's existed for a while. But one of the things we're not great at in the security community is sharing the amazing resources people have built in a way that people can actually find them. So here's a few things you can go track down, and I'm going to split them into three groups: securing your company, securing your software development lifecycle, and then securing the product you're building. Starting with securing your company, there is an amazing open source checklist available on GitHub. If you literally Google "CTO security checklist" or "startup security checklist", you will find it, and it's organized by the stage of your company and how much funding you've had. It's the basic controls you need to have in your organization, everything from having a basic policy to turning on MFA, and it goes from the real basics onwards. It's a great way of going, have I done this or not, no auditor required. Next is your SDLC. Now, OWASP, the Open Web Application Security Project, has two projects that are going to be really, really useful to you and your audience, who share similar challenges. The first is OWASP SAMM, that's the Software Assurance Maturity Model, apologies, my brain just hiccuped. It's a way of modeling and measuring the maturity of our software development process. It splits the software development process into chunks, and for each bit it says, what does a mature process look like versus an immature process? What would we expect to be happening? Now, the nice thing is they give away the spreadsheets and things, so you can go and do a little assessment of what you're doing right now, and it doesn't cost you anything. And then there's a guide as to how to get further and more mature. So that's your SDLC. And then there's your product itself.
Also in the OWASP space, we have the ASVS, which stands for the Application Security Verification Standard, but don't worry, if you just call it ASVS, that's okay. Now, what that is: firstly, you get to prioritize your application in terms of its importance or sensitivity. There are three tiers of application, from the least sensitive to the most sensitive. And then there's a set of about 16, give or take, of what they call domains or areas. Those are things like authentication, data, connections, networking; all of the areas and patterns, or architectural areas, of your application are there. And for each one, it defines what the minimum expected implementation should look like at each level. So if you're level one, say, the least sensitive application, it might say, well, you need to have decent authentication, so a good username and password, but it might not go so far as requiring multi-factor authentication. By the time you get to level three, it's saying, actually, you need multi-factor authentication, and you need this level of control in place around your authentication systems. It goes from a couple of things for the less sensitive apps to a much more comprehensive list for the more sensitive ones. Now, the thing I like about that is you can start where you are. If your application is just a "Hello World" type app that you're building for fun, start with the basic controls. If you then start building a banking system, you move up that sensitivity scale, and you can build up your security as you go. Now, the final thing, also from OWASP, doesn't have a great project name, so I will get back to you after this recording and give you the actual name for you to share a pointer to. They have a set of architectural design patterns that are now open source in their repos too. So these are documented design patterns and diagrams for common application architecture processes.
Now, the one thing I would say on that is, just like there are a lot of languages out there, there are a lot of design patterns, especially when it comes to the nuances of security in each one. So that's still one of the more fledgling spaces, but one I'd definitely watch and be involved with if you want, because the security world doesn't belong to security people; we're there to support. It's the development folks that really need to be there and be vocal, because it's your software we're trying to protect.
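(An aside for readers: the tiered idea Laura describes can be made concrete with a small sketch. The control names and groupings below are invented for illustration, they are not the real ASVS requirement lists, which you should read directly; only the shape of "higher level = stricter cumulative requirements" is taken from the conversation.)

```python
# Illustrative only: these control names are made up for the sketch,
# not copied from the actual ASVS documents.
LEVEL_AUTH_CONTROLS = {
    1: {"password_policy"},                        # least sensitive apps
    2: {"password_policy", "mfa"},                 # sensitive apps
    3: {"password_policy", "mfa", "hardware_key"}  # most sensitive apps
}

def missing_controls(level, implemented):
    """Return the controls an app targeting `level` still needs.

    Requirements are cumulative: a level 3 app must satisfy
    everything expected of levels 1 and 2 as well.
    """
    required = set()
    for lvl in range(1, level + 1):
        required |= LEVEL_AUTH_CONTROLS[lvl]
    return required - set(implemented)

gaps = missing_controls(3, ["password_policy"])
```

The cumulative union is the design point: moving an app from a hobby project to a banking system doesn't swap the checklist, it extends it, which is what makes the standard usable as a maturity ladder.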

Shane Gibson: So again, interesting, the number of three-letter acronyms.

Laura Bell: I told you, it's a whole industry, it's acronyms all the way down.

Shane Gibson: So in terms of the content, was that created by security people who wanted to share, or was it created by application development people who then understood security and wanted to share? What's the genesis of that documentation, where did it come from?

Laura Bell: I think it originally started with members of the security community who were more in the assurance space. Most of the original players were penetration testers and security auditors, and the idea was they were seeing common patterns. So when you look at things like the OWASP Top 10, which is one of the things you'll hear talked about a lot, the top 10 common vulnerabilities in web applications, that was basically them crowdsourcing anonymized findings from penetration testing companies and going, what's commonly coming up time and time again? I think as time has passed, and OWASP has celebrated a big birthday recently, we're measured in decades now, you really are starting to get to the point where it's a more mixed community. Now, that's good and bad. There are still a lot of security people in there, and you will hear more talk about those things at security conferences than at development conferences, and that's a problem, because we actually need to reverse that; it needs to be more in the developer space. We've also got vendors starting to be part of that community, and that comes with its own challenges. But there is an increasing number of application security engineers, or application development specialists, hybrids, who are emerging and becoming vocal champions. So it's an exciting time, and I think it will get better and better as we become more aware of it as a development space.

Shane Gibson: I mean, it'd be great if actually we could put a data lens on it and build out those patterns and toolkits to include more around data security, or security when working with data platforms rather than application platforms. So you mentioned the software market a couple of times. What we find is that, in terms of application development, there is typically a good choice of tools or software you can use for the whole lifecycle, for the whole agile way of working, to make your life a little bit easier. In the data world, not so much. I think, again, about testing of data: it's dire. It's finally getting better, there's some new stuff coming out, but it wouldn't be uncommon to articulate it as, well, we put the data in front of the user, and if they don't notice it's wrong, then it's fine. We're getting better at that; we are bringing in some of the practices and patterns we should have as professionals. But is it the same for security software, that the software market is behind the changes in the way we work now and trying to catch up?

Laura Bell: There’s pockets are very good, and pockets that are really so behind. A lot of the early applications, security tools in particular, they were written by security people who kind of when you write a tool from outside the community, it’s kind of like saying, Hey, I see your baby’s ugly, here’s my solution to your ugly baby. And that never clicks well with the community, because you kind of coming in, you don’t really know anyone in the room and you’re saying you screwed up, and I know better than you. And it’s not a great way to start a relationship really. There are however, a new wave of things coming through that are much closer to the development phase and are doing a lot more product focused development. So if you look at what’s happening, for example, with snick, so they do checks to see if you’re using vulnerable third party code in your stack, or you look at the stuff that GitHub is doing with its security monitoring, so it’s doing checks for secrets embedded in code bases. These excite me because they are very close if not already in embedded in the development tools. And they’re being written by people who’ve come from application development and are applying security on top rather than kind of shoehorning it in. There will always be, for a long time yet, there’s the old guard, the big brand names, we expect to be there. And they sell because people know the company name. And sometimes the buyers aren’t developers themselves, perhaps the security team trying to help the development team. And that help is well intentioned, but not really aware of the challenges that the team is facing. So I think, hopefully, the smaller players that are more app SEC focused and coming from dev are gonna get louder, and not just get acquired by big players. And these bigger ones over time will start to fade away in terms of relevance, but only time will tell on that. 
And it’s all going to depend on a few things that are happening in the world with mergers and acquisitions. And also things like in the US, the software bill of materials, there’s a lot of big global change in this space, it’s going to make big difference in the next 12 minutes.

Shane Gibson: We’ve seen that in the data space as well. So it’s kind of renting at the moment in the Twitter sphere around vendor washing of the team called Data Mesh. So a new paradigm has come out. It is a little bit of a restatement of things we’ve been trying to do for years, but it’s articulated in a different way. And it’s, it’s got a lot of momentum behind it yet. First thing that happens is all the vendors take their old lamps, call them new and say, here’s our old legacy software, that’s now data mesh enabled and it’s frustrating, because people who aren’t in the domain don’t know that actually, it’s no different to what they were sold last year. So it takes a while and same thing, we’re getting consolidation, we’re seeing a lot of startups in the observability space. So the idea of data dog for data, because we didn’t tend to monitor what was happening to our data, little ideas that so we see a lot of those come up and have value and then get swallowed up by the big companies and that innovation or that focus on that category gets lost. So it sounds like the security worlds in the same place.

Laura Bell: Absolutely. I think your data mesh is our zero trust, and in the app dev space it's microservices, everything's microservice enabled now. I think even Sam Newman would agree we should stop doing that; it's a terrible idea. We need our words back, and they need to mean something.

Shane Gibson: And which version of auth did you mean, was it authorization or authentication? Because they are slightly different, even I know that.

Laura Bell: Don't even get me started on auth and what we're doing with the definitions of that these days, because that's a feral mess, but that's a rant for another day.

Shane Gibson: Yes, it's one we're actually struggling with at the moment: how do we use multiple authentication systems to allow our customers to come in the way they want? The amount of technical learning you need to understand those many moving parts, what they do and don't do, and the new terms they introduce, is a big challenge for people who don't typically work in that space. So I found that really interesting. What I didn't realize is that there is a bunch of patterns that have been published and are freely available for anybody who wants to start their journey. And I really like that idea of a maturity model: you can start off small, adopt the things that make sense where you are, and then grow. The other thing is that idea of the security coach, having a look at what you're doing. That's kind of where you're going, isn't it, in terms of what you do: security coaching, that idea of increasing the literacy of people who aren't in the security domain so they can do good things? So if people wanted to talk to you about that more, where would they connect with you?

Laura Bell: Yeah, absolutely. And just to give them a hint of whether it's the right thing for you: whether your team is two people and a big dream, or 3,000 people but you've only got three security people in the mix for some reason, there is a foundation of skills you can build for every stage of the lifecycle. And when you build that foundation, you make everyone stronger and more secure. You can come along and find us at academy.safestack.io. You can find us on Twitter @SafeStack, and you can find me @lady_nerd. Just come and chat with us. What I would say is, we are on a mission to bring security to as many development teams around the world as we can, and that means we're not there to aggressively sell to you. We're there to listen to how your world works. And if nothing else, I would love to hear how application security works in your team, because every day's a school day at SafeStack, even for people like me. So I'd love to hear your stories, please do reach out.

Shane Gibson: Yeah, and let's hope the security domain carries on improving and growing. I mean, I'd hate to see it go the way of the COBOL world, where very few people do it, and there are still systems running on it.

Laura Bell: They are paid very, very well for those jobs.

Shane Gibson: I do believe it's a great retirement scheme to go back into the COBOL world in your later years too.

Laura Bell: I tell you, I think my whole plan in life, in building SafeStack Academy, is to make people like me redundant. I want to get to a point where we don't need application security specialists, because it's part of what we do as developers, as testers, as architects, just like we do scalability and observability and testability. Let's make securability and agility part of everyone's role. So help me become redundant, really, and start doing this.

Shane Gibson: Yep, I agree. Adopting those patterns and approaches, and helping upskill people to do the work themselves, where they can do it safely and do it well, that's what we should all be striving for. That makes my heart sing. So thank you for your time, that's been great. I've learned a lot more about security than I knew at the beginning. And it's interesting to me how closely the security domain tracks with the data domain, where we're still what I see as second class citizens in the way we work, compared to some people in the application space who really are forging ahead of where we're at. As professionals, we should pedal faster and catch up, because it's just the right thing to do.

Laura Bell: I think, to be a bit fairer to our communities, sometimes we're seen as the unsexy problems. It can be really tempting to say, well, I'm building a self-driving car, what are you doing? Well, I'm doing data quality, or I'm securing a system. But without these systems, without data, without security, the self-driving car kills people, the rocket crashes and burns, the financial transaction systems don't work or lose money, and we give away chimp pictures to people on the internet. So I think we need to remember we're an important part, albeit the unsung hero, the supporting character rather than the main event.

Shane Gibson: I think we're lucky in the data and analytics world that we're at the top of the hype curve at the moment, there's a scarcity of resource, and all the vendors pile in. Data is definitely the cool place to be right now. So hopefully security follows straight behind us.

Laura Bell: That's what we're trying to do. It's been lovely to talk, Shane.

Shane Gibson: All right, thank you for your time, and we’ll catch you all later.

PODCAST OUTRO: Data magicians, that was another AgileData podcast. If you'd like to learn more about applying an Agile way of working to your data and analytics, head over to agiledata.io.