
Building an Impactful DevSecOps Function: Practical Tips


Speaker 1: Welcome everyone. Should we say hi to everyone while everyone’s coming in?

Speaker 2: And I think they’re starting to trickle in very quickly.

Speaker 1: Excellent.

Speaker 2: North of 20 and counting.

Speaker 1: Excellent. Thank you, everyone, for coming to the webinar. We are happy to have you, whether it's good morning, afternoon, or evening for you. We are pleased you're here. Also, everyone apparently color coordinated this morning except me, so please forgive me and the red. Everyone's in purples and blues; I did not get the memo.

Speaker 2: No, Tanya, it's you and I. I'm in blue, you're in red. Together, it's purple. It's all good. There we go.

Speaker 1: There we go.

Speaker 2: Luckily my mom is an artist; I learned those things at a very early age. I see more and more people joining, so we'll give it maybe another minute and get started.

Speaker 1: Sounds good. Our participants are allowed to put things into the chat, or can they only use the Q&A?

Speaker 2: No, I’m already seeing things in in chat, so I’m guessing they can.

Speaker 1: Excellent. While we're waiting for just one more minute, if you're cool with it, put where you're calling in from in the chat. I'm in Canada, just outside of Victoria, BC, on a little island above Seattle, and our panelists are from all over the world. We have someone from Israel, someone from California. And then where are you from, Mark?

Speaker 2: I'm from Georgia, in the southeastern United States, almost as far away from Vancouver as you can get, right?

Speaker 1: Right now, I think the leader is Scotland. I've seen three from Scotland.

Speaker 2: Well that’s nice.

Speaker 1: And I've seen four different Canadian provinces.

Speaker 2: That's awesome. Represent!

Speaker 1: Iran. Three from Africa. Wow, this is impressive.

Speaker 2: It really is. Oh my gosh, totally amazing. So how many people do we have? Or do we feel like everyone’s had a chance to get in? What would you like to do?

Speaker 1: I think we can; we're somewhere north of 50. So let's get started.

Speaker 2: All right. Welcome, everyone.

Speaker 1: No, sorry.

Speaker 2: Welcome, everyone, to the Bright webinar, Building an Impactful DevSecOps Function: Practical Tips. There are a whole bunch of us here from Snyk, CircleCI, Bright, and We Hack Purple. I'm Tanya, I'm kicking it off, and I want to introduce you to all of the speakers. Oh, actually, let's do the agenda first. So first I'm going to introduce you to all the speakers. Then we're going to talk about what an impactful DevSecOps function looks like, so there's going to be a lot of discussion on that. Then we're going to get into the building blocks of DevSecOps, basically all the things we want to have if we're going to have a good program. Then we're going to get into a discussion of practical tips. At the end it says audience Q&A, but you're actually allowed to ask questions whenever you want to. If a question has nothing to do with the slide, I'll probably save it till the end, but if you ask a question while they're talking about the topic, it makes sense that I give it to the speaker. So if you have questions, you can say them now, you can say them later, you can say them twice; it's up to you. The chat is open and the Q&A function is open, so take your pick of whichever works best for you. Okay, so these are the panelists, but I'm going to let them tell you about themselves, because they'll do a better job. And I believe we're starting off with Nico.

Speaker 1: Thanks for the introduction, Tanya, and the warm welcome. I'm Nico Virat. I work at CircleCI. I am from Amsterdam, the Netherlands, so I represent the European contingent here on the call. I'm very glad we already have three Scots, and I see a lot more, so that's great; keep it coming. I've been running engineering for about 20 years in my career, as a VP of engineering in companies like Quest, Liberty Global, and Vodafone, so I bring to the table a little bit of experience in how not to do a cloud migration; I'll put in my two cents on that. I've been running products for about ten years, and I ran my own company in release management called bio, which was acquired by Serco about two years ago. So I have a little bit of scar tissue and some interesting considerations on how to do things and how not to do things. Thanks for having me.

Speaker 2: Awesome. And Nico, all of us make mistakes, otherwise we would not be humans. So we learn from them.

Speaker 1: We’ll talk about it. We’ll talk about it.

Speaker 2: Okay, Mark?

Speaker 1: Hello, everyone. Mark Nichols; I'm a senior partner solutions architect with Snyk. In my role, and really through a lot of my career, I've been working on integrations between enterprise platforms that allow us to do some of the things we're going to be talking about today, in order to improve the DevOps lifecycle, or the DevSecOps lifecycle. And that's what Snyk is all about: making the world a safer place. So I'm very excited to be here with everyone today.

Speaker 2: Awesome, Mark. Next is me. I'm the CEO and founder of We Hack Purple, and I wrote a book called Alice and Bob Learn Application Security. I should say my name: I'm Tanya Janca. Basically, I teach secure coding and application security and help people build better programs, and I quite like it. I do a lot of things like this. And up next we have Gadi.

Speaker 1: Hey, everyone. My name is Gadi Bashvitz. I'm the CEO at Bright, and I'm very proud to be presenting with our partners here. We actually have a partnership with all three participating companies: you can see the CircleCI orb that we have and the integration that we have with Snyk, and we'll also post those links later in the chat so everybody can take a look at the combined value we can provide. I've been doing this for quite some time: I started many moons ago doing cyber in the 8200 unit of the IDF, and I have grown through many different companies and roles across the world. I'm very excited to be here and talk a bit more about how DevSecOps can be done right, mainly based on learning from many years of mistakes. So I'm in the same boat with Tanya and Nico here, but hopefully we can give you some helpful insights.

Speaker 2: Excellent. Thanks, everyone, for introducing yourselves, so everyone knows who you are and how to reach you. At the end there'll be more contact information, so if you're thinking, "Oh, I really want to catch up with Mark or Nico or Gadi," there will be contact information for all of us. So, now for building an impactful DevSecOps function. Let's go on to the next slide. What does an impactful DevSecOps function mean? Generally, it means that you can measure that you are finding vulnerabilities, and ideally as early as possible. If the developer can find it themselves as they're writing the code, obviously that would be the best ever, or even in a threat modeling session before you even start coding. But the sooner you can find it, even if you find it just before it goes to production, that's way better than a malicious actor finding it before you. We want to measure whether we can remediate, and that means fix, the stuff faster, and see a decrease in the cost of fixing security issues. I know that I'm harping on doing things earlier, but when you fix something earlier in the SDLC, the software development lifecycle, it costs a lot less money and time, and often you can do a better job. So we're probably going to talk a lot about trying to find vulnerabilities earlier. And that's why with DevSecOps we need collaboration from the teams. It's not dev separate from sec separate from ops; the reason it's squished into one word is because we want to all collaborate together. We want a reduced time to market, and that's everyone in DevOps, whether they're doing it securely or not: we all want to get to market, ideally faster than our competitors. We want to improve our software quality and scalability, so we can serve a wide audience of people and companies. And up next, security has to be built in. Nico, is this a slide that you might want to speak to?

Speaker 1: Yeah, I think a good example, at least from my history, is when I started the first services on EC2 in my time at Liberty Global. By the way, for the US-based people, Liberty Global is the Comcast of the rest of the world. Building those services, we actually had a security officer, but security was not really built in, and he actually stopped everything during the migration to production. By the way, he had a good reason, because we screwed up on production accounts versus test accounts, so he was right. But that's what this is all about: security is not an afterthought, and it's not a gate in any shape or form. Historically, you would have a paradigm between speed and security: you could have either one of them. At least that was the industry thinking, and that's an old paradigm. We at CircleCI don't believe in that; we think you can get both. Of course, there are a number of things you have to do in terms of integrating your security people and your security function with the validations, which are mainly driven in your software development pipeline; that could actually fulfill this integrated role. And Tanya already started talking about developers themselves starting to look at security. That's always been a challenge historically; there's probably more of a hate relationship between developers and security than a love relationship, unfortunately. I cannot have you all raise hands, but I see even the speakers nodding, so I guess that's true. The way we see this, we call it radically shifting left on security. With the things we do in the code extension, before a developer even hits the commit button, we want to do the first validation of security. That's as early as you can find issues, that's as cheap as you can find issues, and the blast radius is as small as it gets, compared with issues that actually run all the way through to production. So that's one thing: doing this at the left-hand side. The second thing is the risk management on this: the further left you put it, the more complex it becomes, and the more you rely on automation of everything you're doing. And I think that's underdeveloped. As an industry, I think we're at version 0.9; I don't think we're even at MVP level. The good news is that with these partners I think we're making a dent, making a big push. But in terms of automation and machine learning applied to security threats and policies, there's still a way to go, and we're going to be part of that. I think that's my view on integrated, built-in security.
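The pre-commit validation described above can be sketched as a simple scan that runs before a commit is made. This is an illustrative toy, not CircleCI's or any vendor's actual implementation; real secret scanners ship far larger, battle-tested rule sets.

```python
import re

# Illustrative patterns for hard-coded secrets (assumed examples only).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(password|api_key)\s*=\s*['\"][^'\"]+['\"]"),
]

def find_secrets(source: str):
    """Return the lines of `source` that look like hard-coded secrets."""
    return [
        line
        for line in source.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

code = 'db_password = "hunter2"\nprint("hello")\n'
hits = find_secrets(code)
if hits:
    # In a real pre-commit hook, this would exit non-zero (a hard fail).
    print("commit blocked, possible secrets:", hits)
```

Running a check like this before the commit button is ever hit is the "as early and as cheap as possible" point made above: the finding never leaves the developer's machine.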

Speaker 2: Excellent. Thank you. So we actually have a question in the chat, and I was thinking, should I save it, because I want to get to the next slide? But I feel like Gadi or Mark, who haven't had a chance to speak on this yet, might want to answer it. So I'm going to read it, and then whoever wants to answer can try to get in there first; I feel like all of us have an answer. The question: a lot of tools on the market can detect vulnerabilities at the earlier stages of the software development lifecycle, but how can you quickly remediate them and eliminate false positives? Is there any approach or tips or processes or tools? Thank you in advance. I realize it's a very wide-open question. Would anyone like to tackle it?

Speaker 1: I'll go ahead and jump in on this, because it's something that Snyk has given a lot of thought to, and I know that Bright has, too. As Nico was just pointing out, there may be a level of complexity added onto the shoulders of developers as we surface and identify these vulnerabilities that need to be fixed. Some of the options are a well-defined tool for assisting with this process, or the ability to automatically create a pull request that has the fix in it as part of the process. And, you know, this is not really the format for me to go deep into those capabilities here, but there are tools out there like Snyk that are able to do this, and of course I think we do it the best. But I'll allow Gadi also to add Bright's capabilities in this area, because I think they're very valuable to the toolset here.

Speaker 2: Yeah. Thanks, Mark. I think you're hitting the nail on the head with that question, and Nico touched on it as well when he mentioned that no-love-lost relationship. I won't call it hate at this point, but I think the no love lost is not because these are mean people; these are all nice people who are trying to do their job. They just haven't had the right processes and the right tools to do their job correctly. If they do get the right tools, and they do have the right integrations that enable them to do it right, it's much easier. Going back to Mark's point: if you're able to run a scan with Snyk Code and find those vulnerabilities, and then validate those vulnerabilities with a Bright test, because guess what, DAST can give you proof of vulnerability and show you that yes, you really have that vulnerability and where it is, and then you go back and use the Snyk pull request to fix it, that gives you the tools to do your work correctly. So it's all about adopting these modern solutions that enable the developers, with AppSec governance and guidance, to do their job more effectively.
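The static-plus-dynamic validation idea above can be sketched very simply: treat a finding as "confirmed" only when both the SAST and the DAST scans report it. The (issue, location) keys below are illustrative, not any vendor's real finding schema.

```python
# Findings reported by a static scan and a dynamic scan (assumed example data).
sast_findings = {("sql-injection", "/search"), ("xss", "/profile"), ("weak-hash", "auth.py")}
dast_findings = {("sql-injection", "/search"), ("xss", "/profile")}

# Both tools agree: strong signal, fix these first.
confirmed = sast_findings & dast_findings
# Only the static tool flagged it: triage for false positives or unreachable code.
needs_triage = sast_findings - dast_findings

print(sorted(confirmed))     # [('sql-injection', '/search'), ('xss', '/profile')]
print(sorted(needs_triage))  # [('weak-hash', 'auth.py')]
```

The set intersection is the "proof of vulnerability" shortcut in miniature: anything in `confirmed` is worth a developer's hour today, while `needs_triage` can wait for AppSec review.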

Speaker 1: I want to add in one tiny extra thing. If you're working in sprints, you could add a sprint to your schedule to fix these bugs, because sometimes you have great tools and you have nice processes, but there's no time in the schedule. So if, from the beginning, you have one or two sprints throughout your project or throughout the year where all the developers are dedicated to addressing everything that the tools find, then you actually have the time to remediate it, and it's not just squishing the developers between a rock and a hard place. You could also take that time to validate false positives, like using two tools to find the same thing, as everyone else on the call mentioned. We have another question in the Q&A, but I'm going to go to the next slide and then do that one after. Okay, this is a slide that's kind of for everyone: the building blocks of DevSecOps. I know some people will probably say, "But what about containers?" or name some other actual thing that we build, but we want to talk about it from a higher level: visibility, controls, education, and culture. So I want to put it to the panel: who wants to do each one? Because I feel like all of you have lots to say on every one of them, but if all of us speak to all four points, we will be here till tomorrow.

Speaker 2: Somebody have anything else to do today? I freed up the rest of my day. It’s all good.

Speaker 1: Gadi, how about you go first, then?

Speaker 2: Yeah, maybe I'll get started and then hand it over. I look at these points under different names and call them people, process, and technology, the age-old consulting moniker that helps there. Regarding visibility: it's very, very important, as Nico mentioned and Tanya touched on, to scan throughout the whole software development lifecycle and be able to find vulnerabilities and provide visibility into those vulnerabilities with no false positives, with proof of vulnerability, with remediation guidelines and pull requests, the whole plethora of tools, as early as possible to developers. Because we have this problem in this industry where, depending on what you read, there's a ratio of 1 to 500 or 1 to 200 security people to developers, and if you wait for the AppSec team to come in and validate and provide more guidance, you've already lost the battle. You know the 80/20 rule: 80% of the stuff should be able to be done by the engineers and developers without AppSec being involved, and for the remaining 20%, AppSec can provide the value and the guidance. So give that visibility early on to developers and guide them correctly, and then give the broader visibility to the AppSec team so they can take action and educate better. And I'm jumping into education and culture for a second, and then I'll open up to the rest of the panel. If the team actually has time, not just constantly trying to figure out what is a true vulnerability and what is not, but getting the reports and the visibility they need, they can run a real AppSec program. Then they can work to educate and enable the developers, make the developers much more effective, and build a much closer culture between the AppSec team and the developers, to make sure that security is not an afterthought. It is part of the process. It is thought about ahead of time. It is implemented as part of the development lifecycle, and you make sure that the developers are constantly learning and understanding how to release secure code, and not just high-quality code. And that's very, very important.

Speaker 2: Excellent. Nico, may I pick on you a little bit? Will you go next and choose one of the topics?

Speaker 1: Yeah. Obviously, to me, visibility and controls are the two things that are very close to, let's say, the heart. Let me go back a little bit to the love-and-hate relationship, just to shine another light on that. From an organizational perspective, developers are typically measured on functional code points, on the great products they build. They're typically not so much measured on this whole security thing. And I think that's one of the things about visibility: the moment you start making it visible when someone is using hard-coded secrets in their code, even if you decide to make these soft passes or hard fails, at least making it visible is going to make a huge difference for individual developers in developing that kind of security awareness and security mentality. We'll come to talk about it later in the presentation, but I think it's important how visibility ultimately links to culture. Then the other part is how you go about the security controls, because that is basically the major thing, and I'm putting this in the context of platform engineering teams. For larger groups of engineers, you'd have to set fairly strict rules with hard fails. You'd have to implement (I don't think you have a real choice) a method for secrets management; whether you use Parameter Store or HashiCorp Vault or Azure Key Vault doesn't really matter, but pick one of them. Things like OIDC: I don't think they're optional anymore in any given enterprise these days. So that would be my biggest thing on controls. And then there's what we call, within CircleCI, paved paths with guardrails. So guardrails are your controls, but you also make it easy for developers to take the template that automatically kicks off your Snyk or your Bright, your SAST and your DAST, and all the things that you need to do, without having to go through the whole hassle of configuring this kind of stuff. I think that's extremely important. And also start measuring how much time it takes for a developer to actually fully configure and run a first pipeline. If that's too long, you've made it too hard, and then it's going to impact your time to market and your competitiveness. We should take that to heart as platform engineering teams and basically make it more simple. I think that's my view.
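The secrets-management point above, pick one store and never hard-code, can be illustrated with a minimal pattern. The variable name is an assumption for the example, and in practice the value would be injected from Parameter Store, HashiCorp Vault, or Azure Key Vault rather than set in a plain environment variable.

```python
import os

def get_db_password() -> str:
    """Fetch the secret at runtime; fail loudly if it is missing.

    A hard-coded fallback here is exactly what the visibility checks
    discussed above are meant to catch.
    """
    secret = os.environ.get("DB_PASSWORD")
    if secret is None:
        raise RuntimeError("DB_PASSWORD not set; configure your secrets store")
    return secret

# Stand-in for a real secrets backend injecting the value at deploy time.
os.environ["DB_PASSWORD"] = "example-only"
print(get_db_password())
```

Failing loudly when the secret is absent is the deliberate design choice here: a silent default would be a soft fail where the guardrail calls for a hard one.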

Speaker 2: Thanks, Nico. Mark, I’ve seen you being very patient because I know you have something to say on this. Would you like to go next?

Speaker 1: Yes, I do. I think one of the wonderful parts of this pillar here is the education pillar, and I know that you and I, Tanya, are both interested in this. It's the idea that the learning has to be built in and part of the process. I mean, I'm so thankful today for all the folks who have decided to expand their education by joining this webinar. But even in the process, if there's a link that you can click on that says, this is why this is a vulnerability, and here's a suggestion for how to fix it, that helps. And I've even seen one of the questions pop up here about the use of AI to do this. So having a platform that can integrate AI for answering your security questions as you're doing the development work, my goodness, that not only makes your education helpful at the point of development, it also gives you something to keep that from happening in the future. So having that education as part of your process makes things easier for everybody.

Speaker 2: I definitely agree, Mark. There were a bunch of questions in the Q&A and chat about making a more secure culture, or a security-conscious or security-positive culture. And I agree, education is one of the ways. Another is building trust with the developers. A lot of them are like nice, wonderful little puppies that got hit on the nose 400 times by security's rolled-up newspaper. I remember when I went to join security, I was like, oh, but they're the Department of No, and they're always so mean to me. And then eventually I met a different type of security person, and then I met an application security person who understood how software was built. So sometimes it's just building trust between the two teams to start. That means helping; that means admitting when we make a mistake, or saying, I don't know how to do this, can we brainstorm a solution together? Every time you show a tiny bit of vulnerability, or you show empathy, or you legitimately help them with a problem, we build more trust. That's part of educating them and teaching them; that's you building a culture and advocating not only for security, but also for giving the devs enough time to do the security. So all of you made excellent points, thank you. We have some interesting questions in the Q&A, and a lot of them are from the same person; I thank you for all of your questions, but I can't just let you ask questions all the time. So one of the questions was: do you have a strategy or a tool that is able, it says, to detect "wait boxes" after a pull request? I suspect what they mean is to detect vulnerabilities in a white-box fashion after you've done a pull request. And I suspect that since Bright builds a DAST, and DASTs are more black-box style unless they work with a SAST... Actually, maybe I should just let all of you answer for me. Who wants to take this one?

Speaker 1: I think, Mark, maybe this one is for you, around white box, and then I can talk about the validation.

Speaker 1: Yeah, to tell you the truth, I'm like Tanya; I'm not quite sure what they mean by white box on this. But I think the idea here is that getting clear feedback in the pipeline of what passed and what failed, and having those results well enumerated so you understand how you're going to go fix them, is maybe what they're referring to there.

Speaker 2: Yes, or using static analysis. So generally, black box is like you just have access to the app but you can't see the code, whereas white box, often called clear box or translucent box, means you can see through it: you can see the underlying code. For a lot of web apps and APIs, you can't actually do that unless you have access to the code repository or the CI/CD, like CircleCI. Awesome.

Speaker 1: That's exactly what we do. Yeah, we provide you the internals of the code right away from your Snyk report in your pipeline. That's also why this integration between these three partners works so nicely.

Speaker 2: Okay, Gadi, did you want to add on to Nico and Mark?

Speaker 1: Not on the white box, no. I think the only thing to add there is that with an integration of static and dynamic, you can get proof of vulnerability. If you can get proof of vulnerability, that makes developers much more amenable to actually fixing the issues very quickly, because it makes it easy for them. There's a stat that says that if you fix something early in the development lifecycle, as part of the IDE or as part of unit testing, versus fixing it in production, it takes a sixtieth of the time. Just to understand what that means: if it takes you an hour in the development lifecycle, it takes you a week and a half, which is a whole sprint, in production. And that also addresses the question of how you convince developers to do this work; there's a question in the Q&A about that as well. You explain to them: look, you have two options. Either you do this work now, because it's not a false positive, we've given you proof of vulnerability, we're showing you that it's a high or critical or medium vulnerability, and it has to be fixed because it went to a production branch, so you spend an hour on it now; or you're going to spend a sprint on it two sprints, three sprints, four sprints from now. And that will just create antagonism in the system, because you'll have to fix it at some point. So giving that visibility and working closely is very, very important. And showing the developers the real ROI from doing this work: it will save you time.
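Working through the arithmetic behind the "sixtieth of the time" figure quoted above. The 60x multiplier is the speaker's stat, not a universal constant, and the 40-hour week is an assumption for the conversion.

```python
# Convert the quoted 60x cost-of-fix multiplier into calendar terms.
HOURS_PER_WEEK = 40                 # assumed working week

fix_in_dev_hours = 1                # fix it now, in the IDE / unit tests
fix_in_prod_hours = fix_in_dev_hours * 60       # the quoted 60x multiplier
fix_in_prod_weeks = fix_in_prod_hours / HOURS_PER_WEEK

print(fix_in_prod_hours, fix_in_prod_weeks)     # 60 1.5, i.e. a week and a half
```

So one developer-hour in the IDE corresponds to 60 hours, roughly one full sprint, once the same issue reaches production, which is the trade-off the speaker puts to developers.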

Speaker 1: Excellent. We have two more questions in the Q&A, but I was hoping we could go to the next slide, then answer one or two of them, and kind of slowly make our way through the slides. This slide is pipeline visibility and reduced dwell time. Mark, were you the person who really wanted to do this slide?

Speaker 2: Yes.

Speaker 1: Yeah. I want to start with a quick story. How many of us have been to a social function, had an appetizer, and gotten food stuck in our teeth? We're walking around the room and everybody sees that we have food stuck in our teeth, but we're not aware of it. And then finally a good friend comes up and says, hey, you've got something right here in your teeth, and you get it cleaned out. That's kind of the embarrassment you live with if you've released something to production and all of a sudden your company's resources have been leaked onto the internet by a hacker. And that's what we're talking about here: dwell time, from the time that a vulnerability exists and there's exposure out in the public world, to the time that we can get it fixed. Reducing that means our companies, and the applications we produce, are safer, and that's not just externally but internally too. So this is one of those concepts where visibility means that you find out about the food in your teeth way before everybody else finds out about it. Visibility, I think, is critical, and there are so many tools we could talk about in terms of being able to see what's going on in your system. One thing that's already been brought up that I wanted to mention in this category is the issue of prioritization: developers may feel a little bit overwhelmed, even with good tools like Snyk and Bright and the tools that we can integrate with CircleCI. So really, visibility is the first step and the foundation for being able to do that prioritization. Let's go to the next slide, because we've got some practical tips here, and I think everybody's here for the benefit of the practical tips. The first one is something we've already mentioned: having a comprehensive monitoring system. This is foundational, not just to understanding the visibility of what you've got, but to creating the processes around getting things fixed and prioritized, whether that's a SIEM tool or an application security posture management system, and then feeding into that tool the right automated security testing tools, like the DAST and the SAST and the SCA. And I'll say that there's also an expansion of this, which is part of the next bullet item: the ability to test infrastructure as code in that process. It's critical, because infrastructure as code has the potential to expose vulnerabilities just as much as any software you're writing, and you need to understand that, along with the policies you can set up in order to protect your environment. And then, of course, having very well documented and defined CI/CD practices, with tools like CircleCI, that allow you to have a repeatable process you go through every time. If you want a good security program in place that protects your organization, these are the things that will make life easier for you in achieving that goal.
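Tracking the dwell time discussed above can be sketched as a small metric over your findings: the time from first detection of a vulnerability to the fix shipping. The field names and dates below are illustrative, not any tool's real schema.

```python
from datetime import datetime
from statistics import mean

# Assumed example findings, each with a detection and a fix timestamp.
findings = [
    {"id": "VULN-1", "detected": datetime(2024, 1, 2), "fixed": datetime(2024, 1, 5)},
    {"id": "VULN-2", "detected": datetime(2024, 1, 3), "fixed": datetime(2024, 1, 17)},
]

def mean_dwell_days(findings) -> float:
    """Average number of days each vulnerability stayed open."""
    return mean((f["fixed"] - f["detected"]).days for f in findings)

print(mean_dwell_days(findings))  # (3 + 14) / 2 = 8.5
```

A falling average over time is one concrete way to measure that the "food in your teeth" is being pointed out sooner, which is the outcome the monitoring system described above exists to produce.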

Speaker 2: Nice. Nico, I think you also wanted to say a few things on this slide.

Speaker 1: Yeah, but Mark did such a nice job.

Speaker 2: We can just go on to the next one if you want.

Speaker 1: Well, I'll make one little note. To me, a practical tip is how you glue it all together, and I think that's where the industry has made quite a bit of progress in the last few years with OPA and Rego. If you think about platform engineering teams, we need to make it much simpler. So we built a feature called config policies, in which, in a very easy, UI-driven way, you can start building and integrating all these great tools like the Brights and the Snyks. And I think it's really important to get started with this kind of policy building: deciding what are soft fails and what are hard fails, and what you want to see in terms of reporting. That's the visibility part, and, from a cultural development perspective, what your KPIs should be for developers individually, at team level, at project level, or at organizational level; you pull them out of the policies you've been building in either Rego or OPA. That's what I would add. Thanks.
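The soft-fail versus hard-fail split described above can be sketched in a few lines. In practice, these rules would be written in Rego and evaluated by OPA; the rule names and severities here are illustrative assumptions only.

```python
# Severity levels for policy violations.
HARD = "hard_fail"   # block the pipeline
SOFT = "soft_fail"   # report, but let the build continue

# Assumed example policy set; real rules would live in Rego / OPA.
POLICIES = {
    "no-hardcoded-secrets": HARD,
    "low-severity-dependency-cve": SOFT,
}

def evaluate(violated_rules):
    """Return (block_pipeline, report) for a list of violated rule names."""
    report = {rule: POLICIES.get(rule, SOFT) for rule in violated_rules}
    return HARD in report.values(), report

block, report = evaluate(["no-hardcoded-secrets", "low-severity-dependency-cve"])
print(block)   # True: at least one hard fail, so the pipeline stops
```

The report side of the return value is the visibility and KPI feed mentioned above: even soft fails get recorded and can be rolled up per developer, team, or organization.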

Speaker 2: Excellent. So there are many questions in the chat, but this one happens to be right on top: any resources about the configuration policies? We Hack Purple, in our community, has a free course about securing infrastructure as code, doing just that. So if you want, go to We Hack Purple, sign up for the community, and click the Infrastructure as Code Security course; you can learn about it there. Or read the blogs of any of these companies; I suspect they each have at least one, if not many, articles about that. So, excellent question. Can I send the link in the chat? I can't, because I'm running the webinar, but I bet Amanda could find it for us in the community somewhere. But I digress. We have another slide: reduce risk of unauthorized access. And I suspect that Gadi had a few comments he wanted to make about this slide.

Speaker 1: Yeah, a lot of research, as you can see here, is showing that people are still a problem. If we look at the areas most prone to vulnerability in our organizations, it’s twofold: one is application and API security, and two is people. And you want to make sure you’re implementing the processes that keep those people from putting you at risk. Again, it goes back to developers: people are not malicious, they’re not doing this on purpose, but people make mistakes, and you want to design the processes that help them avoid those mistakes. If we go to the next slide, there are a few points on things you can do, and I look at it from two sides. There’s a whole list of things here, and everybody can read it; I don’t need to read it out. But in addition to putting these things in place, going back to our original slide regarding visibility and monitoring, make sure you have the right controls in place. You can put these measures in, but there are always ways around them. Prevention is one factor, but there are always ways mistakes can be made, or ways these things get implemented incorrectly in the organization, and you need to make sure you are validating all of them. And some of these vulnerabilities are very sophisticated. If you have MFA in place, that’s great, but if people can still achieve privilege escalation and all sorts of multi-step attacks, and you haven’t validated and eliminated those vulnerabilities early in the development lifecycle, then even with all of these remedies in place you might still be vulnerable. So you have to make sure you’re testing for them, and a lot of them are business logic attacks.
One other point I would add: in most organizations, we see a big difference between how applications behave and how their APIs behave. Many people don’t think about putting the same compensating controls or remedies into their APIs because they’re not as visible, and it is amazing how often something that works correctly and has been validated correctly in the application will not work correctly in the API, where there will be a loophole, workaround, or vulnerability. So you have to make sure you’re doing both.
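The authorization gap described above (what the OWASP API Security Top Ten calls broken object level authorization) can be shown with a tiny sketch. The handler, data shapes, and names here are hypothetical, purely for illustration:

```python
# An in-memory stand-in for a data store, just for the example.
ORDERS = {
    "order-1": {"owner": "alice", "total": 42},
    "order-2": {"owner": "bob", "total": 99},
}

def get_order(caller: str, order_id: str) -> dict:
    """Return the order only if the caller owns it (object-level check)."""
    order = ORDERS.get(order_id)
    if order is None:
        raise KeyError("order not found")
    # The step many APIs skip: being authenticated does not mean
    # being authorized for this particular object.
    if order["owner"] != caller:
        raise PermissionError("caller does not own this object")
    return order
```

The point is the ownership check on every object fetch: the web UI may only ever render your own orders, but the API endpoint must enforce the same rule, or a caller can simply iterate over IDs.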

Speaker 2: I want to second the comment that Gadi made about APIs. The new OWASP API Security Top Ten just came out in the past two weeks, and three of the top ten are about accidentally giving access to things you shouldn’t: broken authorization, not authentication. It happens at the object level, at the level of calling the whole API, and down to the field level. It’s very, very common and needs to be tested thoroughly. I believe it’s number three, number four, and number seven (I could have the numbers wrong), but three of the top ten are the same kind of thing. So definitely pay close attention to the tips on this slide. Mark, Nico, would you like to add anything before we go to the next slide?

Speaker 1: Good. Okay. We can.

Speaker 2: Do it. Okay. So this next slide I was going to start off, and then anyone else who wants to comment can, but basically it’s decreasing the impact of human error. According to the Verizon breach report last year, error, as in human error, is responsible for 13% of breaches, and 13% might not sound like very much, but that’s way too many. We want more safeguards to help with this, and we want to empower every single employee, not just the developers, to know how to do their job securely. We want to give them the tools, show them how to use them, and show them how to do their job securely. Quite often I think, oh, I only work with developers, but then I remember: oh yeah, those sysadmins. And even lawyers or marketing people have a lot of power within their orgs, and if they make a security mistake, it could be just as harmful. Does anyone want to add anything on this one before we go to the next slide? I think we all know human error is not the best. Okay, so now we have number three. I wasn’t sure if this is my slide or someone else’s, but I think it’s mine. Basically, if we want to achieve goal three, you want to provide comprehensive and ongoing security awareness training for all relevant employees. It’s really important that some of it is general for everyone, but each type of job has different risks. For instance, you want marketing people to know what GDPR is, what the California privacy laws are, the Canadian laws, et cetera, to make sure they don’t accidentally land your entire organization in hot water by copying all your customer information into Facebook Pixel and then wondering what went wrong. It’s super important that each of them understands how to do their specific job safely.
If we can utilize automation and infrastructure as code, it can reduce the likelihood of human error. Make code review mandatory as part of the development process, whether that’s automated code review like static analysis, or manual review if you have enough people with the skill and the time. And lastly, conduct regular post-incident reviews, often called postmortems, to find the root cause, and whenever possible take mitigation or prevention steps so that you never have that same type of security incident again. It’s okay to err; it’s not okay to make the same error over and over and over. We want to learn from our mistakes. And up next, I believe we have...
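As a rough illustration of the mandatory-review safeguard above, a pre-merge gate might require both a human approval and a passing static-analysis run before anything lands. The `PullRequest` shape here is hypothetical, not any real platform’s API:

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    approvals: int = 0                  # human code reviews received
    static_analysis_passed: bool = False  # automated review result

def may_merge(pr: PullRequest, required_approvals: int = 1) -> bool:
    """Mandatory code review: both checks must hold before merging."""
    return pr.approvals >= required_approvals and pr.static_analysis_passed
```

Encoding the rule in one place like this means nobody, however senior or however rushed, can skip the review step, which is exactly the kind of process safeguard that absorbs human error.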

Speaker 1: Can I add one more point to that? One of the things we’re working on internally at Snyk is creating what we call a blame-free culture. There’s an excellent video out there about the Three Mile Island incident, where they talk about how removing blame allows people to be more open and honest about the processes that contributed to mistakes. We may sometimes say those are human error, but really it’s the processes we’ve built that explain why the mistake was made. And that’s very critical to this whole thing.

Speaker 2: Yes, psychological safety. I don’t know if any of the viewers or speakers have read anything by Gene Kim, but he talks about that in many of his books, and I own a giant pile of Gene Kim books. So yeah, I totally agree, Mark. This next one I wanted to give to Nico: fostering a security-first culture. Nico, did you want to speak on this slide a bit?

Speaker 1: Yeah.

Speaker 2: Um, actually, just to pick up on what you just said, Tanya and Mark, this is all about a blame-free culture. I have been in the luxury position of having many, many learning events; I’ve made the press several times, by the way. I still remember, about 12 years ago at Christmas, being in the Dutch press for all our systems being down, people not being able to watch TV, that kind of stuff. It’s not nice, but I have to say there’s a difference between learning and taking measures. We all have pages full of Confluence retros where we point out everything that’s wrong and how other people should probably fix it, but very little of that is truly followed up, and I think that’s a bit off the chart here. I would also recommend very basic organizational-process and cultural implementation principles. I love the Rockefeller Habits, and like you with your pile of Gene Kim books, I own a ton of books on scaling up and the like. Those are truly good books for learning how to be rigorous about implementing the things you’ve learned, rather than just learning them. That’s a big difference, and in an organization it ultimately makes the difference between the winners and the rest. A security-first culture belongs to the learning organizations that have also been implementing, that apply discipline and rigor to all the things they’ve learned. That’s the stuff I’ve been learning. Can we go to the next slide, please? So, some practical tips. Of course you can read these; I guess we’re distributing the slides anyway. But I think it’s important that security is an agenda item, and I would even recommend security sprints, where you say you’re going to put dedicated time to it. There’s nothing wrong with that.
The interesting thing, from a managerial perspective, is that if you go to your senior manager and say, "I’m going to do a security sprint," let him try to say no. A manager can never say no to a security sprint, because he knows he’s on point: if something goes wrong, he’s going to be blamed, and a manager never wants to be blamed. So that’s how you work through these kinds of organizational dynamics. Gamification, I love that; that’s the stuff we’ve got to do. That’s the reporting, for example, that we run in our business. It’s very important not to make it blameful but to make it playful, which is at the total other end of the spectrum. I take pride in talking about my mistakes, and you see that trickle down to basically all layers of your organization. Then I welcome people to come with their learning points and their implementation plan for those learning points. That scar tissue is really, really important. Security champions: there’s a whole lot to say about security championing. On the one hand it’s something we try to fight; on the other hand, in certain organizational setups you really need them to start championing things. Some organizations are already further along the maturity levels, where it’s really as integrated as we described in one of the first slides. And of course it’s a challenge. From an industry perspective, 20 years ago we came up with Scrum, and it was pretty much a free-for-all: we could all dance around the campfire, Kumbaya, we all had the mandate and could do whatever, and if our managers came, we said, well, we don’t know when we’re going to deliver this feature. Those days are over. Developers also need to step up and take responsibility and accountability for this. And I think that’s one thing.
What I see happening more and more is that developers are starting to understand that this is fundamental, and security is in their top one, two, three priorities to fix. And if it’s not theirs, it’s a KPI of their manager. That’s the good news for us. I think I’ll hand it over to whoever wants to go next. Mark?

Speaker 1: Maybe just one point to add, something I’ve had multiple discussions with CTOs about over the last couple of years. For some reason there’s this perception in the industry, and I have no idea how it was created, that a bug is the responsibility of a developer while a security vulnerability is the responsibility of the security team. And that’s just ridiculous. A security vulnerability is a bug, and it needs to be treated as a bug, just like anything else. You also need to give developers the right tools to be able to address them and work with them correctly, et cetera. But once you do that, they’re bugs, and they need to be fixed just like any other bug.

Speaker 2: Yes, I agree. We have a couple of questions that keep coming up in the chat. DevSecOps is a way of adding security and automation to software development teams following the system development lifecycle called DevOps. It’s not anti-malware; it won’t help protect your personal digital devices at home. So thank you for asking that question four times. We also have some other questions in the chat, because now it’s Q&A time, and there are a lot of interesting ones. Did anyone want to cover Web3? I am not that experienced in Web3 at this point, but we have a question about it. Does anyone want to take the crypto and blockchain questions, or shall we leave that for now?

Speaker 1: Can give.

Speaker 2: I can give a very quick answer, and I think it touches on two different things. One is Web3, crypto, and blockchain, and the other is new AI technologies, like the generative AIs that are coming out. We look at both of those in two ways. Number one is: what new vulnerabilities are introduced into the world because of these new technologies? The technologies themselves become underlying technologies, and they will bring a host of new vulnerabilities we haven’t seen before. How do we find those? So one of the things we are focused on, and we’re very early in that process, but we’ll get there, is what vulnerabilities will be introduced by these new technologies, so we can anticipate them, find them, and help you prevent them if you’re using them as part of your underlying black-box technology. That’s one. The other is: how do we use these technologies to improve our own solutions? We’re starting to use generative AI. We’re not using blockchain or Web3 yet, even though there have been multiple discussions about whether you can actually create and manage new types of attacks using blockchain and get contributions from external sources; that is more problematic.

Speaker 1: I feel a lot of the excitement about Web3 is all marketing. There are a lot of people who have a ton of money stuck in those cryptocurrencies, and they can’t get it out unless the rest of us fall for the Ponzi scheme. But those are my opinions, not the opinions of this panel. Mark, did you... sorry, Nico?

Speaker 2: Yeah, I have something to say about that. I agree with you, by the way, Tanya, on the Web3 thing and Ponzi schemes. But the point I wanted to make is about the generative things happening now. What I expect in the next two to three years is AI really making its way into coding, backend and frontend, which actually means we’re going to get a lot more code to review. The speed of development is going to increase for this reason. That also means, from a human perspective, it’s going to be very, very hard if you have not automated your security policies and you’re not enforcing them in the right way. The blast radius of what can happen is going to be probably a multiple of what it is today in your current business. So I see it as an increased risk that will be humanly very, very hard to do any manual checking on. The necessity of putting all your security elements and tooling platforms together in a connected way, with policies on top, to make delivery of your functionality safe, taking care of non-functional requirements as well as functional requirements, is only going to grow, and ignoring it will pose more risk in the future.

Speaker 1: Yes. With this, though, I want to make sure everyone has the ability to contact all the speakers. So if you are a viewer and you’re going to take a screenshot of a slide, this is the one I would take. And yes, they’ll give you a recording later, and if you ask really nicely, they might even give you the slides, but just taking a screenshot might be really helpful for this one. And I believe Gadi had an announcement about a champions program he wanted to share.

Speaker 2: Yeah. One of the things we love highlighting, and it encompasses a lot of the items we discussed above, is how you develop security champions. You want to make sure you’re developing AppSec champions across the development organization, across the security organization, across the DevOps organization, and having them all collaborate. To help with that, we are giving out all sorts of awards to help you drive that process. Sometimes logic prevails, sometimes bribery prevails; in this case, we’ll go with the latter. So if you’re interested in having your friends, your developers, your AppSec professionals join this AppSec champions program, you can see the link here. Have them join, and you can win prizes. It’s a fun thing we’re trying to do to drive the behaviors we’ve all been talking about throughout this presentation.

Speaker 1: Excellent. With that, is that the last slide? Excellent. I want to thank everyone on the panel, everyone behind the camera who helped make this happen and makes all of us sound and look amazing, and everyone who attended. Thank you all very much. We are supposed to wrap up now, and I actually did it on time, which impresses me, because I kind of just wanted to ask these gentlemen questions all day. But I digress, and we do have to end the webinar. Thank you everyone very much for coming, and thank you, gentlemen, for being on the panel with me.

Speaker 2: Thanks, everybody. Thanks, guys. Yeah, thank you. As discussed, we will share the recording. Feel free to follow up with any of us on additional questions, and we look forward to continuing the discussion.

Speaker 1: Thank you all.

Speaker 2: Everyone thank you for coming.
