Overcoming The Unintended Consequence of DevOps
Speaker 1: Hi, I’m Tanya Janca, and I’m your host today for the Bright webinar, and I’m here with Eric. Eric, could you just tell everyone about yourself and your company and everything? Just introduce yourself.
Speaker 2: Yeah, absolutely. Hey, everyone. My name is Eric Sheridan. I’m a managing member over at Infrared Security. We’re a company that specializes in educating and helping organizations build the skill sets needed to write and produce more secure software. So, very education focused. My background personally, I’ve been in application security for over 17 years. I come from a more technical background, product development. I’m grateful to say I have something like 14 patents in developing technologies in this space, so if you want to geek out any time, I would love to. With that said, I’m incredibly grateful for the opportunity to speak with you and be a part of this webinar. It’s a great community, a lot of fun, and honestly, this should be a very fun conversation today.
Speaker 1: Also. So for those of you that don’t know me, I’m Tanya Janca and I work at Bright, and I’m also the founder of We Hack Purple, a community about application security. And so when I met Eric, I was like, oh my gosh, obviously we’re both really obsessed with helping teach others and making the world a more secure place, especially its software. And so I was just like, how can we find an excuse to work together as soon as possible?
Speaker 2: Exactly.
Speaker 1: Yeah, exactly. Oh, and Bright makes a dynamic web app scanner. So I joke that we do pew pew, but it’s more than that. We try to find lots of vulnerabilities and then show you how to fix them. And it’s really software-developer focused. And so, yeah, we have a lot of claims to fame, and at some point in the chat we’re going to invite you to Bright’s online community, which is a Discord server, and it’s named, as you might imagine, Bright Community. Anyway, I will leave that till later, but oh, there we go. Thank you, Amanda reads my mind. Okay, so this webinar is called The Unintended Consequences of DevOps. Eric, what are we talking about? Like, why are there unintended consequences?
Speaker 2: Yeah, absolutely. And so this is a topic that hits near and dear to my heart, having seen customers experience it for so long. In 2011, a gentleman named Marc Andreessen wrote an article called Software Is Eating the World. And the general premise was, hey, software is everywhere, it’s going to drive our lives, do all these really cool things. And so in my mind, that’s the great promise of software, which is cool. With that said, I’m also a slightly paranoid, slightly glass-half-empty, however you want to put it, type of person. And so I looked at that and said, look, great people write software, they come with great intent. But we people, we make mistakes. We’re human, right? And so if software is going to be running our lives and people are making software, we run the risk of having insecure software eating the world. So that’s my great concern. Now, unfortunately, there’s been a little bit of validation of that concern of mine over the years. If you look at a lot of the annual stats reports that various friends in this space put out, you’ll see stats like it takes over 180 days to fix a critical vulnerability. So that’s time to fix: from the moment the vulnerability is discovered to when it’s actually fixed and pushed to production is over 180 days. And from my experience, my perspective, everything I’ve seen, and Tanya, in conversations with you prior to this, it sounds like you’re experiencing similar situations. For me, that is the unintended consequence of DevOps. The adoption of DevOps has unfortunately led to organizations pushing vulnerabilities to production faster than ever before.
Speaker 1: I agree. I agree so much. So, I am a nerd, and I was a developer a lot longer than I did security. And so when I discovered DevOps, and basically I love automating things in my life, at work, et cetera, I was like, this is the best ever. But from a security perspective I was also like, oh, we’ve got a lot less time to work with now if they’re going to release ten times a day. We’re used to like two releases a year. We’ve got to figure out a new way to do this. I have a friend named Imran Mohammed, and he runs a company called Practical DevSecOps, and I really like his use of the word practical, because he’s like, I don’t say.
Speaker 2: Perfect.
Speaker 1: Right? And he was saying, so us AppSec folks, we still want what we’ve always wanted, right? We still want secure software. It’s just we have to adjust ourselves if we’re working in a DevOps environment. And so that can mean new tools, or it can mean scheduling things in different ways than we used to, right? Like if they’re doing sprints, we have to learn to do part of our work in sprints, as much as we can. And so would you say that, with a lot of companies adopting DevOps, there’s sort of pressure on the security team, like there is on the programmers?
Speaker 2: Yeah, and it’s really a result of the security teams indirectly, and now directly, feeling the pressure that’s been on the development teams for a number of years now. A really good illustration of that for me is that Google has this team called DevOps Research and Assessment, DORA. And this team, every year they put out this annual State of DevOps report, and in there they have these four metrics designed to measure software delivery performance. I’ll just read them out loud: you’ve got lead time for changes, deployment frequency, change failure rate, and time to restore service. So you’ve got these four metrics that are being placed on DevOps teams to measure their performance, and the underlying theme of all of them is really speed. Like, how fast can you do all of this stuff? And so when you introduce security testing into that, now the teams who are responsible for security and security testing and working with development teams are feeling that pressure, and have been for a while. And that’s really hard. And if you don’t have the right skill sets, like understanding vulnerabilities and how to fix them, all the little details... I play chess. I’m terrible at it, but I play chess. And when you play chess, you can fall into traps when you make moves. Same thing with fixing vulnerabilities, right? You could think you fixed it, you make a mistake, it reverts the fix, and you’re basically back to square one. So the software teams are under all this pressure to move fast, the security teams are under pressure to better support the software teams and are now feeling that pressure, and without the skills and the right tools, that’s when this unintended consequence of DevOps really surfaces: the pushing of vulnerabilities faster to production. So it’s hard. It’s a hard dynamic to experience.
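(To make those four DORA metrics concrete, here is a minimal Python sketch of how a team might compute them from its own deployment and incident records. The record shape, field names, and sample values are assumptions for illustration only, not anything DORA publishes.)

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records; field names are assumptions for illustration.
# Each record: when the change was committed, when it reached production,
# whether it caused a failure, and (if so) when service was restored.
deployments = [
    {"commit": datetime(2022, 9, 1, 9),  "deployed": datetime(2022, 9, 1, 15),
     "failed": False, "restored": None},
    {"commit": datetime(2022, 9, 2, 10), "deployed": datetime(2022, 9, 3, 11),
     "failed": True,  "restored": datetime(2022, 9, 3, 14)},
    {"commit": datetime(2022, 9, 5, 8),  "deployed": datetime(2022, 9, 5, 9),
     "failed": False, "restored": None},
]
window_days = 30  # reporting window used for deployment frequency

# 1. Lead time for changes: how long a commit takes to reach production.
lead_time = median(d["deployed"] - d["commit"] for d in deployments)

# 2. Deployment frequency: deployments per day over the reporting window.
deploy_frequency = len(deployments) / window_days

# 3. Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# 4. Time to restore service: recovery time for the deployments that failed.
restore_times = [d["restored"] - d["deployed"] for d in deployments if d["failed"]]
time_to_restore = median(restore_times) if restore_times else timedelta(0)

print(f"Lead time for changes: {lead_time}")
print(f"Deployment frequency : {deploy_frequency:.2f} per day")
print(f"Change failure rate  : {change_failure_rate:.0%}")
print(f"Time to restore      : {time_to_restore}")
```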
Speaker 1: I agree. Eric, I’m wondering if the audience agrees. So you and I are both biased, because both of us founded startups that try to educate everyone about how to create more secure software, because we’re both really passionate about that. But I’d like to ask everyone in the chat: do you agree that basically, if you’re going to start doing DevOps, there needs to be more education at work, so they can teach you, this is how you do this securely, or this is how this tool works? Like, do you think that it’s the employer’s responsibility to provide education, or should everyone just learn it on their own? I’d love to hear what the audience thinks, because as much as I believe it to be true, I’m biased. I’m really biased, Eric. Like, when I was a software developer, they would give me software development training every year, at least once a year, or a whole bunch. And then I moved into security and they’re like, oh, security training’s too expensive, you can’t have any. I’m like, what? I’m the person in charge of making sure things are safe to use, and you don’t want to teach me how?
Speaker 2: Right? Yeah. You and I were speaking about this topic before, and you had a way that you phrased it. So I’ll try not to say it exactly, because I can’t recall exactly how you said it, but I loved it. It was along the lines of: you want people in your organization that want to learn more, and figuring out how to support that is a good problem to have, because the harder problem is having people in your organization that don’t want to learn, don’t want to grow. Like, how is that any better? So yeah, I’m completely biased as well, for that very reason.
Speaker 1: There are some comments in the chat. So: I agree, without the right skills and tools it can be hard. I agree with more education, from the employer as well as on their own. Yes, I think it speaks to a larger issue of software developers needing to understand their supply chains. Yeah, that has definitely come front and center since last December with Log4j. It’s mainly project managers who want to deploy the feature fast to satisfy the customer and respect deadlines. Yeah, I agree with that too. I read out the chat comments a lot because we have a certain number of people show up live, but we actually usually have five or ten times as many people watch or listen to the recording later. So when I read out the stuff in the chat, that’s why, it’s for them. I know all of you can see it and you’re all quite literate; just so you know, I’m reading it for the people watching later. So someone was saying: most definitely, I think more education is needed and it needs to start at the top of the companies. I’ve seen first hand that bottom up doesn’t tend to work as well and it can fail. I feel like it’s not just the dev team that needs education on this, but the wider company. So marketing, sales, management, product managers, project managers. I can make an argument for training for everyone. Yeah, I agree with that too. Like, security is changing all the time. Eric, you and I have to keep learning, right? Like, we have to keep learning.
Speaker 2: Yeah. Yeah. I mean, with the technology evolving as much as it has, you went from these classic monolithic applications, and now you have distributed microservices, infrastructure as code, hybrid cloud-native environments, Web 3.0, and insert acronym here. You kind of have to stay up to date with this stuff. And I appreciate the comments, too, because they’ve validated a lot of my experiences. You know, the bottom-up, more passive approach doesn’t work well, because you’d like to believe and want to believe that everybody in the organization will choose to take a course or do something as a means of education. But if there are competing pressures tied directly to dollars and cents, like their paycheck, I mean, it’s a no-brainer. So I don’t blame that person for choosing something else over the education piece. But if it’s top down, if it’s a cultural shift, that makes the biggest difference. And Tanya, what’s nice is, I know you called out, correctly so, that you and I are incredibly biased on this. But the most recent State of DevOps report that was released by DORA actually said the biggest predictor of an organization’s application development security practices was cultural, not technical. So we’re in pretty good standing with that statement.
Speaker 1: I agree. And someone in the chat agrees. So: DevOps is more about culture than technology. Yes, of course we want to implement all the super cool tools, but education on the culture and becoming a true student of DevOps will help every team. I 100% agree. So you and I were talking, so Eric and I met a few months ago and we’ve been nerding out ever since, and you were talking about up-to-the-minute knowledge education. Can you explain what that means?
Speaker 2: Yeah, yeah. So it’s funny, a while back we interviewed a number of developers trying to understand, hey, look, what are your pressures, what are you going through, and so forth. And probably in response to the question of what are you trying to do most, or what is your greatest concern, to nobody’s surprise, the number one answer we got back was: I want to close the ticket. I want to close the ticket so I can go home, have dinner with my family, and go watch some cool show. Right. So in that environment, when you have that, hey, I want to close out the ticket quickly, and you have a DevOps environment that puts so much pressure on you, it is getting harder and harder to ask anybody, let alone developers, to step away from their day job to go spend dedicated time in a course, right? And so based on that fact, that’s one reason why we’re putting serious effort into condensing our courses to be shorter and more direct. Right, no fluff, let’s get to the point. So that’s one part. Then the other is, you know, if I’m getting vulnerabilities presented to me as a part of my daily workflow, right? So let’s say I’m a Bright customer, I’m getting vulnerabilities fed to me as part of my build pipeline. A ticket was created, I’ve got to close that thing out. I don’t have time to step away to go practice fixing vulnerabilities in Juice Shop or some other fictitious lab environment, right? That doesn’t move the needle on how my team is being measured. What actually moves the needle is me fixing the vulnerability in my code as quickly as possible. And so education really needs to accommodate that. We need the ability to make education a daily part of the developer’s routine. And a great example of that is, if I’m presented a vulnerability in my code, teach me how to fix that vulnerability in my code as quickly as possible, so I can close out that ticket and move forward.
Speaker 1: Yeah.
Speaker 2: If you’re successful at that, you as a security team get to support that software delivery performance, that set of metrics that DORA pushes out, while also supporting your goals of reducing risk within the organization.
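(A minimal sketch of the kind of pipeline step being described here: scan findings become tickets that carry short remediation guidance, so the fix lands in the developer’s normal queue. The fetch_findings and create_ticket helpers, and the finding fields, are hypothetical placeholders, not Bright’s actual API.)

```python
"""Hypothetical CI step: turn new scan findings into tickets that carry
remediation guidance, so fixing them fits the developer's daily workflow.
The helpers and finding fields below are placeholders, not a real API."""
from dataclasses import dataclass


@dataclass
class Finding:
    name: str          # e.g. "Reflected XSS"
    severity: str      # "low" | "medium" | "high" | "critical"
    location: str      # endpoint or file where it was found
    remediation: str   # short, actionable guidance for the developer


def fetch_findings() -> list[Finding]:
    # Placeholder: a real pipeline would call the scanner's API after the DAST run.
    return [
        Finding("Reflected XSS", "high", "GET /search?q=",
                "Apply contextual output encoding to the 'q' parameter"),
    ]


def create_ticket(finding: Finding) -> None:
    # Placeholder: a real pipeline would call the issue tracker's API here.
    print(f"[{finding.severity.upper()}] {finding.name} at {finding.location}")
    print(f"  How to fix: {finding.remediation}")


if __name__ == "__main__":
    for finding in fetch_findings():
        create_ticket(finding)
```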
Speaker 1: Absolutely. So, I wasn’t supposed to just talk about Bright the whole time, but when you find a vulnerability with our tool, it gives you some instructions on how to mitigate it. It’s not a formal lesson like what you provide, but it’s like, here’s this, here are some references, here are places where you can go and learn and read more. But I agree with you. Let’s say they read the instructions: you’re supposed to do output encoding, that would solve this problem. But if they don’t know how to do output encoding, knowing where to go and find that answer matters, because sometimes the reason the mitigation isn’t there is because they didn’t know it exists, or they don’t know how to do it. There are lots of programming languages I’ve programmed in; I don’t know every single function that exists. I’m not magic. And so being able to just go to a system, look it up, and get an actual lesson, that sounds kind of awesome.
Speaker 2: Yeah, because traditionally speaking, they’re very siloed technologies and capabilities, and they need to merge. They need to come together to maximize the benefit to the developer as a part of their daily workflow. And what’s nice about that is, once again, I’m biased, but DORA supports me in this. DORA actually noted that when people adopt these types of security practices and adopt that cultural mindset, they see an increase in the adoption of security practices in general and reduced developer burnout. You know, if you’re trying to keep key talent within an organization, particularly developers, burnout is an absolute killer. And the unintended consequence that we talked about earlier leads to burnout all the time, and DORA supports that. So having this integrated daily experience, where they can get what they need when they need it most to fix the vulnerability, helps with their delivery, which indirectly helps reduce things like burnout.
Speaker 1: So we have some comments in the chat, and someone disagrees. And I really like it when people disagree, because then I learn something new. So: I disagree, things like OWASP Juice Shop are just priming the mental pipes to look at these vulnerabilities and these patterns. A break-to-fix mentality, but it doesn’t last long. Okay, yeah, yeah, I see that. And then we have more comments. So: a developer’s work life today is all about closing tickets. I suggested my team add a security aspect to the acceptance criteria in each story, but it ended up frustrating the team that had to handle new things they weren’t used to doing. I agree, that is hard, especially if there’s not some kind of job shadowing or some sort of support. So sometimes you can take a course and learn how to do the thing, but if you’re not doing that, it’s like, what can you do to help make sure they understand what’s needed of them? And there’s another one: I think pre-commit hooks and IDE plug-ins help provide that immediate feedback and education as the developer develops, as the developer programs the code. That may help, but scaling and managing IDE plug-ins at a workstation can be really challenging.
Speaker 2: Yeah. Yeah. So, to pull this off, integration is the biggest technical challenge. I think you called that out directly. So what is the primary toolset being used by your team? What are the integration points, and how do you pull it off? And the answer to that will be different based on the toolset, the technologies, and the vendors that you use. But at the end of the day, if you can come up with a plan to pull that off, you’re going to find yourself in a much better position.
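(As a toy illustration of the pre-commit-hook idea from the chat, here is a small Python hook that blocks a commit when staged files contain an obviously risky pattern and explains why. The single eval() rule is purely illustrative; a real hook would delegate to your actual SAST or linting tooling.)

```python
#!/usr/bin/env python3
"""Toy pre-commit hook: block a commit that adds an obviously risky pattern
and explain why, so the feedback arrives while the developer is still in context.
Illustrative only; a real hook would delegate to your SAST or linting tooling."""
import re
import subprocess
import sys

RISKY = re.compile(r"\beval\s*\(")  # example rule: eval() on untrusted input


def staged_python_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]


def main() -> int:
    flagged = []
    for path in staged_python_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        if RISKY.search(text):
            flagged.append(path)
    if flagged:
        print("Commit blocked: eval() on untrusted input can lead to code injection.")
        for path in flagged:
            print(f"  - {path}")
        print("Consider ast.literal_eval or an explicit parser, then commit again.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```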
Speaker 1: Yeah, so there are more comments. So: a break-fix mentality will only give you just enough knowledge to fix the problem of that day. Sometimes that’s the thing you need to do, right? Like, you just need to get past this ticket because you have a deadline on Friday. But sometimes you want to have a deeper knowledge that lasts longer, and then it can be repetition that gives that to you, or applying it in different contexts. What do you think? Like, how can we make sure they don’t just know the answer today, but that over time they learn?
Speaker 2: That is a great observation, and I’m going to use that as a segue to another topic I wanted to speak about. So this concept of having education integrated into the daily workflow is not necessarily something I’m arguing should be the only approach or the norm. Or actually, I am arguing it’s the norm, but rather that alone is not enough, and I think this question calls it out directly. If we integrate into the daily workflow, the education, more often than not, will be very focused on issues that are either true or false. Like, this is how you do it, this is how you don’t do it. But there are bigger topics, bigger issues in security, that don’t necessarily fit into those yes-or-no, true-or-false type of situations. Things like authentication, access control, design patterns, security design patterns, which is in the most recent Top Ten. Software supply chain, you mentioned that earlier; we actually released a course on that earlier this year, incredibly popular, and an incredibly non-zero-or-one, true-or-false type of topic. These are the types of things that people need to know. And so for me, I like to try to get the benefits of both. You have a formal education curriculum speaking to these topics across many technologies and languages and so forth, and you also have the integrated daily workflow education piece. And what’s cool is they can kind of feed on each other. So if you have the bite-sized education fed in, you’re teaching people how to close out that cross-site scripting vulnerability as quickly as possible. But if you find that the team, or the people working on that project, are frequently introducing vulnerabilities, however you track that, whatever platform you use, that could be the data you need to say: oh, I want that team to go through maybe a more formal training curriculum, because there’s a lot of fundamental understanding that we need to build. So yeah, a more complete program is not just about the daily workflow. There is this wider industry and set of topics that we need to be aware of.
Speaker 1: So I have another question. And I also forgot to tell the audience: you are allowed to ask questions too. I’m sorry, I totally forgot, I’m just asking questions of them. So, what do we do to keep on top of trends? I feel like a big trend in the past 12 months is software supply chain, because of Log4j. And although I’m sad that the Log4j thing happened, and it was quite stressful and involved a lot of overtime, it really brought supply chain security issues and vulnerable dependencies to light, in a way where I could talk to anyone about it, because suddenly people are paying attention. And so that was like a big, obvious way. But rather than having giant disasters, are there other ways that we could stay informed of industry trends, I guess is what I want to say?
Speaker 2: Yeah. So I’ll answer this, and, you know, I know you have experience in this space, so I’m passing the question back; I would love to hear your perspective. I’ll try to answer it from the perspective of some of the developers that we work with and in supporting them. So our customers, our buyers, tend to be the dedicated security teams, and they’re the ones that have the pulse on the industry, and they do that through engagements with webinars like this, or various communities that are out there, and are on top of things like software supply chain security. What I’ve seen help them disperse that knowledge within the organization is frequent internal webinars, or other sorts of campaigns, if you will, to raise awareness on these topics, often in the form of a Security Champions program. So having representatives on the security team inviting development teams once a month, carrying out internal webinars, or on Slack channels, sending out links and resources once a week, once a day, or whatever cadence makes sense for that organization, sharing this information. And to that end, one of the things that we’ve done with the content we create, talking about this concept of integrating education into the developer’s daily workflow, is we actually did that in our product by breaking it up into these educational, bite-sized videos. Some of our folks actually send those videos out within Slack channels and things like that to keep this content fresh. And what’s cool about that is, if you’re in the business of creating educational content, internally within your organization or as a vendor, there is value in delivering bite-sized, quick pieces of content and delivering it fast, as opposed to trying to build this comprehensive stuff. Because if you have that capability, which is almost reactive in some sense, it allows you to speak to industry trends like software supply chain within your organization much faster. So I’ve seen that work for the folks I’ve worked with. Tanya, I’d be curious, given your experience in this space, what you see work.
Speaker 1: So, I’ve been working with a lot of clients over the past couple of years to start and build their Security Champions programs, and I’m really passionate about that. So when you said it, I was like, oh! When I started my first AppSec program, Eric, I didn’t know what a Security Champions program was, and I accidentally made one. I wanted everyone to scan their web apps with a DAST. And so I got a DAST, I learned how to use it, I showed all of them how to use it. I told them, you know, I’m really concerned because I keep finding vulnerabilities that, you know, OWASP says are bad news. And so I’m like, we need to fix them. And very quickly, there was just one person on each team who sort of self-identified as the person who was my point of contact. So very quickly it’s like, oh, it’s Liam, or Stefan, or whoever the person was who was the most interested. And so then I would teach that person the most, and eventually it’s like, I just have meetings with this one person, they handle all the security things for their team, and they’re awesome and help me get my job done. And so that really helped. But if I want to stay on top of industry trends, I read a lot of blogs, I follow a lot of people, I’m on a lot of newsletters, like tl;dr sec (too long, didn’t read security), and he just summarizes a ton of blogs every week and it’s awesome. So I’ll read his summaries and then I’m like, okay, these two, I want to read the whole thing. And I found that very helpful. I also listen to a lot of podcasts, and those things help me figure out what it is I want to dive deeper into. I also create a lot of content, and if you’re going to do content, I like to read up on all the things, and I like to quote others’ work a lot, which is probably good. And so then I’m reading this and that, and all of a sudden it comes together, I’ve finished the article, and I know way more about it than when I started writing. Not everyone learns by teaching others, but that’s a way that really works for me. For a company, though: so for instance, when I wanted to teach everyone about the DAST scanner, I started with, okay, there are 300 of you and one of me, and I don’t have enough time each day to do everything, so I need your help. And I tried to get across to them: I can’t do this without you. You’re our first line of defense, not me. They see your app and your code long before it gets to the security team. If it’s out there on the internet, they’re interacting with your work, not mine. And so I need to build all of you up, build up your knowledge and your defenses, by giving you tools that you can use. And so I know we talked a lot about the unintended consequences of DevOps, of people not having a lot of knowledge that could help them do better. But we also briefly talked about, and I kind of want to talk a little more about, tools, and how they are changing over time. So you and I both know the tools that were available in 2001 were basically nonexistent, and in 2011 there were a bunch of tools, and now it’s almost 2023, and this year, I think, what, 50 new startups that talk about API security? Like, how do we even choose? So I want you to put questions in the chat about new security tools, and what interests you or what you want us to talk about. But Eric, how are you seeing the tools changing over time? Like, do you see a change in the tools, or is it the same as it’s always been?
Speaker 2: Oh, it’s funny. Both. You may have hit Pandora’s box with me on this topic, which is great. Let’s see. With the adoption of DevOps and this unintended consequence, which stems from all this pressure and speed placed on developers, which then also pushes security teams to deliver results faster. What we had for many years were, I’ll just call them legacy tools, that I would best describe as: you point it somewhere, you click a button, and in a couple of weeks, maybe even a month, you get some report back, maybe a PDF, and you just kind of toss it, generally speaking, over the wall for the other team to do something with. Right? That flat out fails in a DevOps world. Don’t even bother trying. If you do, you’re in a world of pain. I’m sure Tanya will be there to answer your call when you’re ready. But, you know, those legacy tools kind of recognize that, and from a marketing perspective they’ll adjust how they speak to DevOps and so forth. But what I’ve seen is that the more successful solutions out there are the ones that actually started from the ground up: hey, how do we rethink this technology in this world? We can’t continue to Frankenstein something from the nineties and early 2000s thinking it’s going to somehow solve today’s problems. And so, yes, there are a lot of startups out there and it is really hard to differentiate. But the startups that are truly beginning from the ground up have the opportunity to help, from a tool perspective, in this situation. So, like I said, Pandora’s box. I’ll pause there, Tanya, because I’d love to hear your perspective.
Speaker 1: Well, so whenever I talk about DevOps... So I work for a vendor now, but I’ve never worked for a vendor before, so this is new to me. And I came up with this list a few years ago, because I got asked to keynote a conference that was about DevOps and I was like, well, I’d better learn DevOps really fast. And so DevOps, if you read The Phoenix Project, The DevOps Handbook, Accelerate, and now, just recently, that big group of people that write all those awesome books released The Unicorn Project, they sort of have three rules: it has to be fast (sorry, I have a bit of a cough), it has to be fast, it has to be accurate, and it has to be, I don’t want to say easy, but it has to be realistic and doable. So sometimes I see tools where they’re like, we do these really advanced attack simulations in production, and I’m like, whoa, what? I am not there yet. That does not sound easy. That sounds like I’m going to find some of my security team crying. We do not want to do this right now. Or I’ll see tools where it is fast: it runs in only 6 hours, our competitors run in 8 hours. I’m like, nah, nah, that’s not fast for a software developer. Like, come on, it’s...
Speaker 2: Not the right unit of measure. Hours is not the right unit.
Speaker 1: Yes, yes. And then the accuracy thing. So I think you mentioned this, and I’ve had a lot of people mention this: the idea that false positives breaking the build means everyone freaks out, they go to try to fix the thing, they investigate, and then it’s not real. Yeah, right. And this is probably going to sound bad, but I did it in my first DevOps project. I had this open source project, and I was working with this really awesome person named Abel, and I put in a dynamic scanner that gives false positives somewhat regularly. Unfortunately, I didn’t know that at the time; I have a lot more experience now. And so basically it raised a false positive, and it said, you know, you have SQL injection. And I was like, oh my gosh, I’m freaking out. And Abel’s like, I did not know. And then we looked at it, it was a false positive, and we ended up spending the whole evening on it for nothing, you know what I mean? And I was presenting at a conference the next day, so I was really freaking out, like, my pipeline has to work. And then another time, with the same open source project, I was giving a presentation, and Abel had pushed some code the night before, and I don’t know how he got past the pipeline, he might have disabled my tool. But anyway, I go and run it in front of everyone and it breaks. And I was like, oh no, my demo is failing. And someone’s like, your demo is not failing; it found a high vulnerability and it broke the build, that’s what it should do. And so then together we all look into it, and it was a false positive, and I was like, oh my gosh, I’m going to die of embarrassment now. And when you do that in real production, people are not okay with that. It would be different if it had happened one time, but this was a small, part-time, working-on-the-weekends-occasionally project, and I had two major false positives, and the team’s like, hey, so do we really need to have this piece of crap in our pipeline? And I was like, no, it’s important. And so, yeah, there are a couple of comments in the chat I want to read before I ask you more questions. So Christine is saying: DevOps for the Modern Enterprise by Mirco Hering is another good book. And I’m going to take note of that, because I actually haven’t heard of that book. And then Don says: there are several tools that verify the findings using an exploit, though; the same tools’ app and API scanners promise low or no false positives. Yeah, I agree. Bright makes that promise as well, and I haven’t found a false positive with it before, which is really awesome, because then people don’t get angry at me, like devs going, why did you break my build, Tanya? Yeah, I agree with you. So I think that’s one of the trends that has changed. When I gave my first talk about AppSec at a big conference, Eric, I had said I always start with DAST instead of SAST, because SAST has so many false positives that everyone just gets really angry at me all the time. So I need to use DAST because it’s way, way, way more accurate. And so I skip SAST and do manual code reviews, or I do SAST out of band, so to speak. So I’ll do it, but I’m not going to just send the results to dev. I’m like, I have to sift through this and find the things that really concern me, validate those results, and then make a mini report to send to them.
Speaker 2: Yeah.
Speaker 1: Have you seen a change over time, though? Is it just in my mind, or is it true that it’s becoming more accurate and improving? What do you think?
Speaker 2: So, I’m very biased with SAST. I love SAST technologies; I actually wrote a few. So it depends. It’s a question of: are you using the right tool for the job? In my mind, when SAST and DAST first really came out, it was very much one or the other. And so in order to compete at that time, what you saw were these technologies that were trying to test for everything under the sun. So, you know, on their marketing page: hey, I find 600 different types of vulnerabilities. I find 601, so I must be better, right? And as a result, you have a lot of noise. So there are areas where one is better than the other, and you kind of need to figure out what that balance is for you and your team and your organization, based on your technology stack. With that said, when it comes to DAST technologies, what I really like, and I think this was just highlighted in the chat, is that you get the steps to reproduce, the exploitability of the finding. You don’t get that with SAST, right? There are a lot of cool things you can get with SAST compared to DAST, but in the context of the unintended consequence of DevOps, if you’re going to put something in front of a development team, it needs to be actionable. And, you know, I have this saying when I’m doing sales calls, sales presentations: if you can have a technology that’s fast, easy, and accurate, you get a fourth benefit for free, which is scalable, right? And so then you get a scalable technology in place. So, yeah. When it comes to SAST, you just need to use it for the right pieces. But DAST will give you exploitable, actionable results, which in the context of DevOps means that when people act on it, there’s a higher probability they’re going to act on something real.
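(One way to act on that in a pipeline is a simple severity gate: fail the build only on findings at or above a chosen severity that the scanner has marked as confirmed, for example with steps to reproduce. The sketch below assumes a hypothetical JSON findings format and is not tied to any particular scanner.)

```python
"""Toy build gate: fail the pipeline only on confirmed findings at or above a
severity threshold, so unconfirmed or low-severity noise does not break the build.
The findings JSON shape used here is an assumption for illustration."""
import json
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
THRESHOLD = "high"  # block on high and critical; anything lower goes to the backlog


def should_block(finding: dict) -> bool:
    severe = SEVERITY_RANK.get(finding.get("severity", "low"), 0) >= SEVERITY_RANK[THRESHOLD]
    confirmed = finding.get("confirmed", False)  # e.g. scanner supplied steps to reproduce
    return severe and confirmed


def main(path: str) -> int:
    with open(path, encoding="utf-8") as fh:
        findings = json.load(fh)  # expected: a list of finding objects
    blockers = [f for f in findings if should_block(f)]
    for f in blockers:
        print(f"BLOCKING: {f.get('name')} ({f.get('severity')}) at {f.get('location')}")
    if blockers:
        print(f"{len(blockers)} confirmed finding(s) at or above '{THRESHOLD}'; failing the build.")
        return 1
    print("No confirmed high or critical findings; build may proceed.")
    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "findings.json"))
```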
Speaker 1: A big agree from me. So, this will make sense in a minute. I worked in the Canadian government for 13 and a half years, and I’m in Ottawa right now, and I went to an Ottawa conference and saw lots of my friends and people that I know from my many years working in the government. And so we have some questions about government security, and I’m going to read a few of them in a row so that we can talk about them together. So first off: what about IT shops that work in government, so civic, state, federal, and this could be America, this could be any country. How can some of these infosec shops leverage tools that align with their very strict IT security policies? Because they’re very heavily regulated, and because they’re so heavily regulated, there are a lot of processes. And then there’s a follow-up on that question, and I kind of want to just read all of them and then we can pick out parts and answer them over time, if that makes sense, because there’s elaboration. So: how do government DevSecOps shops break through the centralized IT governance and red tape and start using the right tools and processes? And there’s an answer from Don in the chat. But, so, this might sound bad, but when I was in the government, the number one thing that upset me was the really big processes. Like, I remember I wanted to run a DAST scanner, and it was a fully automated one, like Bright, without a fuzzer. So Bright also doesn’t have a fuzzer, and what that means is there’s no fuzzing, no input-validation sort of testing where you could create false records, damage things in the database, put things into the database that might be bad, maybe crash the app, et cetera. And so without a fuzzer, a DAST is actually really safe to use. And so I was so annoyed when they told me, well, you can’t do a scan without, wait for it, Eric, a 21-step approval process, which had to be approved by someone five levels above me. So the deputy director of an entire federal government department had to waste her time reading this stupid nine-page thing that I filled out, that she clearly does not care about. She does not have time for my crap, right? And then I’d have to wait till she had time to sign it. So an emergency scan would take a minimum of 21 days. And I was just like, this was me, all the time. And I was like, we’re not adding value with her signature. We’re not adding value when people who don’t have anything to do with this application, who aren’t monitoring the app, are signing off; of the 21 steps, most of them aren’t adding value. So what if instead we look at what adds value and go from there? And also, why don’t we pick a tool that’s not dangerous? Because I have run tools, Eric, when I had less experience and was junior, and they had a fuzzer, and I destroyed the production web server by accident. I used the automated scanning thing and it took down the database and the web server, and I was like, I am so sorry. And it was because I had had zero training; they put me out there and were like, good luck with that. And so one thing would be to buy safer tools, and more modern tools tend to be a lot safer; that would be the first suggestion. The other suggestion would be maybe questioning what value is being added by each part of the process, because there are processes that can add value.
It’s like: where do we see the value? How do we measure the value we’re adding in this process? Does it make sense to have this process at all? And that means having real, genuine conversations with management. And I have to say, I left government because I found it so frustrating, but I really hope you don’t leave, because we need people like you. Seriously, Eric, I don’t want the good ones to leave. I am a citizen; I still need you to do a good job. Do you have some suggestions?
Speaker 2: So I guess a couple of comments. The first one is, if you are able to go up five levels in the organizational chart to get a signature and still keep everybody happy, you probably deserve a raise. So that’s pretty cool. And then the other thing I’ll highlight is that I have seen some changes from a government perspective over the past couple of years, top down, that are hopefully helpful in accelerating the efficiency of some of these initiatives. You know, you had the executive order last year around supply chain security, for example, and then, I guess it was a couple of months ago, there was another memo put out around forcing the adoption of things like producing SBOMs and so forth. And what’s interesting is the turnaround time that organizations, agencies, vendors and so forth are expected to deliver on these sorts of requests is pretty fast. I mean, I think it was within 45 days or something along those lines that they had to have this stuff implemented. And so my hope is that, for those in the audience that can influence policy, if you can put pressure to deliver the outcomes, the value, within a certain period of time, I would think that the process that’s there to try and deliver that needs to adjust. I can’t wait 21 days to get a sign-off to do a test if I have to turn the whole thing around within 45 days, right? So, you know, I can’t say I have a ton of experience in the federal space, but my hope is that if you can influence policy and the deadlines for the outcomes, that would then change the process.
Speaker 1: So I’m going to be super biased again. What if we trained the people who are making the policies and approving the policies? What if we taught them what DevOps was? And what if we explained, like, listen: I’ve worked with a lot of teams, not just government, and by now I would say significantly more private sector. And sometimes they’ll be talking about trying to make the process better or whatnot, and they’re like, well, we just can’t possibly, you know... If you run this scan and you’re only blocking on critical and high, a medium could get out. And I’m like, you’ve seen the reports. We have thousands of highs. We have hundreds of criticals in prod right now. What you’re doing is making it extremely difficult to fix them. That’s the risk to our security: how long you make it take, because of your processes. You’re the cause of this risk. And I’ve had some companies go, oh, crap. Like, if you make it take 21 days just to run a scan, that means that’s three more weeks everyone’s waiting for those results. But also, it’s not the first scan I’ve run; there’s a whole bunch of other bugs that are in there already that we know about. If it’s taking me three weeks to run a scan, it’s probably taking three months to release new code, and that’s 90 more days that we’re not secure. And it’s frustrating to see people where, from my viewpoint, they’re shooting us in the foot, if that makes sense. Like, you adding all these processes to make sure that every single thing is perfect has resulted in it taking so long that we’ve been vulnerable for an extended period of time.
Speaker 2: Yeah, it’s interesting. So if the audience that you’re engaged with, who has this process and so forth, is striving to achieve DevOps, or has DevOps, their software delivery performance is being measured by the same four metrics I listed earlier, and all of this process, these gates, if you will, are negatively impacting that. And so from a security perspective, naturally, just trying to run a security test is going to negatively impact that, because by the time you get permission, by the time you get to run the scan, by the time you triage and disseminate the results, a whole quarter has passed, right? We’ve got three months that are just gone. And so if the goal is to truly get high-quality software out faster, streamlining this process is essential, and making it more efficient is essential. And actually, when we talk about Bright and doing the security testing from an efficiency perspective, one of the things I think is pretty cool is that (and I can’t speak for you, but I’m going to pretend for a moment and just be honest about it) you’re running these security tests, you’re generating a bunch of data for your customers, and that data can be used to make more informed decisions around application security in general. And so when it comes to having the right tools, like Bright, for example, you’re getting results fast, and that data can be used to help figure out what skills you need to instill in the team as quickly as possible. And so if we’re going to talk about efficiency, you’ve got to talk about how to use data to help figure out what skills you need, and to help drive that. So going back to one of my earlier statements: overcoming this unintended consequence of DevOps requires both the right skills and the right tools. You can’t have one or the other. You have to have both, and they need to be able to play well together.
Speaker 1: I completely agree. There are two more comments in the chat I wanted to read out. So one is from Dawn, and she was suggesting something that I’ve done before. The way she worded it was: maybe do a printout of a few select gaping holes and show them what it is, and explain the context. And so what I did at one place is I wrote up, I called it a risk acceptance sign-off document, which I made up, and I put it on company letterhead. It looked really good. And there was a whole bunch of things that were wrong that everyone kept refusing to fix, and they were literally keeping me up at night. I was like, this is going to be bad. I’m in charge of security; I’m going to look so bad. And I begged and begged, and no one would do it. And so I just wrote it up to the deputy of security and the chief information officer, and I was like, listen, these are the four things, and here’s how they could affect the citizens of our country. And I don’t have the authority to sign off on this, because I don’t have the authority to force people to fix these things, but you do. And so I need you to sign off that you accept these risks. And the head of security called me. He’s like, this is a giant piece of crap. I’m not signing this. This is awful. Why would we not fix these things? And I’m like, it’s been over 90 days of me literally meeting with the teams, and them refusing, refusing, refusing. He’s like, you tell them. You tell them I said so. And so then I was like, oh, this person sent me, and they would just jump up and run and do the thing. And so we got all the things fixed in like a week and a half.
Speaker 2: It’s funny what happens when there’s some level of accountability on paper, right?
Speaker 1: Right. And, well, I did that in another department, and they’re like, well, I’m not going to sign this, so then I haven’t accepted the risk. And I’m like, well, I’ve officially informed you in writing, and by not signing, but also not taking action, you are accepting the risk, right? Like, I’ve informed you. I’ve informed you that I have been working my buns off, trying really, really, really hard to get people to fix these things, and people are outright refusing. So you’re aware of the issue, and if you do nothing, that is you accepting the risk. And they’re like, you are a trickster and should have been a lawyer. And then some of them got fixed. So there’s one more comment, from Sarah, and then we’re going to have to try to wrap up, which is hard. So: the most frustrating thing about tool providers is making free features become pricey. It holds up a lot of company initiatives that want to scale security practices. We have to keep changing tools and learn how to use them each time. I don’t know how to solve that, Sarah. We all have bills to pay, and so sometimes companies make things free for a while and then change them, or sometimes we make them cost money and then make them free later, because we realize we actually make way more money from other parts of our product and can offer this thing for free. And so, I don’t know if you have an answer for that, Eric, but it’s hard.
Speaker 2: Yeah, any time pricing, costs, money is involved, it stirs up a whole lot of emotions, across the whole spectrum of emotions. You know, in an economy where inflation is growing as much as it is, at least within the States, yeah, price changes are fairly common right now. And in fact, one of our vendors just came at us with a price change as well. So we got that email, and I will admit I threw a little bit of a temper tantrum. It’s probably why I have guitars; I play them to chill myself out. It’s kind of just a reality of the world we live in.
Speaker 1: I agree. Also, it’s really hard to figure out how much to charge for something. Like, when I started, we had people trying to figure out: what is a reasonable price? What is a price people will pay? What is a price that people will not pay? Like, am I being ridiculous by charging this? And I remember when I first started, I had people write me and say, you’re not charging enough. I don’t like it. I want to pay more. Because it was $7 a month to join We Hack Purple when we started, and people were like... So what some of them would do is they would buy one for someone else. And I’m like, oh, thank you.
Speaker 2: That’s very kind of yeah.
Speaker 1: But it’s weird to figure out what a service or a product is worth. And it’s really hard because you have competitors charging various prices, and they are all over the map, right? And it’s like...
Speaker 2: I had someone once tell me that, you know, you’ve found the right price point when 50% of your prospects say no thank you over price. And that just kind of blew my mind, but there’s some MBA logic behind it. That’s my way of saying it kind of feels like a guessing game sometimes.
Speaker 1: I know, exactly. So if you need to know how much to ask for your product, don’t ask us, because we don’t know.
Speaker 2: Yeah. Yeah.
Speaker 1: Like I’m good at security. I’m just not good at that.
Speaker 2: There’s another team I depend on for that.
Speaker 1: I know, right? But going back to what this whole thing was about, Eric, what would be a key takeaway that you wish people would have from this webinar, like a thing that they could take forward and think about?
Speaker 2: Yeah, absolutely. So for me, fixing the unintended consequence of DevOps, which is pushing vulnerabilities faster to production, requires two key elements. The first one is around people: change the culture to make security a daily part of your workflow, and we talked about a number of ways you could do that. The second thing you need is around your technology: have technology that is built for the world we live in today and attempts to solve the real challenges we face today, producing results fast, and we talked about the various criteria for that. So if you have the right skills and you have the right technologies producing the data, they should feed each other, and then you should be able to measure progress. If you’re looking for ways to measure, here are three great ones. First, increased software delivery performance. Second, increased remediation rates. Third, a decrease in the number of vulnerabilities being introduced into your software over time. If you can present that to somebody, you’re definitely saving the organization money and reducing risk.
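(A small sketch of how the two security-specific measures mentioned here could be tracked from finding records; the field names and sample data are made up for illustration.)

```python
from datetime import date

# Hypothetical finding records: when each vulnerability was discovered and when,
# if ever, it was fixed. Field names and values are made up for illustration.
findings = [
    {"discovered": date(2022, 7, 3),  "fixed": date(2022, 7, 20)},
    {"discovered": date(2022, 8, 11), "fixed": None},
    {"discovered": date(2022, 9, 2),  "fixed": date(2022, 9, 9)},
    {"discovered": date(2022, 9, 28), "fixed": None},
]

# Remediation rate: share of discovered vulnerabilities that have been fixed.
fixed = [f for f in findings if f["fixed"] is not None]
remediation_rate = len(fixed) / len(findings)

# Mean time to remediate, for the findings that were fixed.
mean_days_to_fix = (
    sum((f["fixed"] - f["discovered"]).days for f in fixed) / len(fixed) if fixed else 0.0
)

# Vulnerabilities introduced over time: new findings counted per month.
introduced_per_month: dict[str, int] = {}
for f in findings:
    month = f["discovered"].strftime("%Y-%m")
    introduced_per_month[month] = introduced_per_month.get(month, 0) + 1

print(f"Remediation rate      : {remediation_rate:.0%}")
print(f"Mean days to remediate: {mean_days_to_fix:.1f}")
print(f"Introduced per month  : {introduced_per_month}")
```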
Speaker 1: Oh my God, that’s so good. Now I have to think of a follow-up, so I’m going to ask you another question while I think of something that’s half as good. So if people want to learn more about you and your company, where could they learn more, or how could they follow your activities?
Speaker 2: Sure. Yeah. So I’ll be brutally honest: we’re a small company, we don’t have a whole ton of activities. But I am speaking; I speak a lot for my customers, and I often speak for folks that are not my customers, internally within their organizations. So if you want to find out more, you can go to our website, Infrared Security, or you can email me directly, Eric, at infraredsecurity.com. Just drop me a note. Like I said, happy to do these sorts of presentations and webinars internally with you all if you find the topics interesting.
Speaker 1: Awesome. So my key takeaway: we talked about people, process, and tools, and I would say that we need to invest in our people. And I feel that means training and knowledge transfer. So this could be job shadowing; it doesn’t have to be something you pay for. But investing in your people so that they learn all the new things, they feel comfortable with all the stuff they’re doing, they have the knowledge to solve the problems, because you’ve done job shadowing or mentoring or training, or, like, there are so many things. And then also, be nice to your people by giving them tools they like. A lot of times when I do AppSec consulting, the devs are just like, why are you being so mean to me, making me use this 20-year-old tool that’s garbage? So be nice to your people by helping them choose the tool, and invest in them by training them. Those would be my takeaways, Eric. So I’m Tanya Janca from Bright Security, and I’m here with Eric from Infrared, and I guess it’s time to say goodbye. I’m so bad at this. I’m bad at saying goodbye, Eric.
Speaker 2: I can say, for my part, it has been an absolute pleasure to be here. Thank you all for your time.
Speaker 1: Thank you so much, Eric. I really appreciate you taking the time to do this with us.
Speaker 2: Absolutely.
Speaker 1: Okay. Until next time, bye everyone.