Five Leading Trends in Modern Enterprise DevSecOps
Speaker 1: Hi, everyone. Thank you for attending this webinar in collaboration with Fusion Fund, Incubate Fund and J Ventures: Five Leading Trends in Modern Enterprise DevSecOps. I’m Yoshimi, Director of Partnerships at Dana Ventures. Today our speakers are Omkar Arasaratnam, former Executive Director, Head of Data Protection Technology at JPMorgan Chase and Company, and Gadi Bashvitz, President and CEO at NeuraLegion. They’ll go over the shift that has been happening in software development, the challenges that developers and enterprises have been facing from that shift, and how to manage application security. Then we’ll have Rio Midha, Managing Partner at Diginex Ventures, as a moderator, and we’ll jump into a discussion. We’ll try to include your questions as much as possible during this time, so feel free to post your questions through the Q&A box during the presentation. So without further ado, I’ll hand it over to Omkar.
Speaker 2: Thank you so much, Yoshimi. Maybe we can move on to the next slide. Awesome. So I’ve been doing this for quite a while, longer than I’d like to admit, as my wife likes to remind me. I got into the industry back in 1998. And if you remember, 1998 was the days of Windows 98, not even Windows 98 SE, just Windows 98. That was before we had any of the modern application development practices that we enjoy today. For those that weren’t around back then, you’ll notice that software moved a lot slower. And what I mean by software moving a lot slower is that we employed a method called Waterfall. For those of you that remember, Waterfall started off with a very in-depth analysis of what all of our requirements were going to be. You would have a fleet of business analysts that would sit down with your business users or your prospective customers to understand those requirements in extremely specific detail. Those requirements would frequently then be run by security or compliance people so they could understand what the implications of those requirements might be from a business context as it pertains to the app. An example of this might be a particular set of requirements around encryption if you were taking credit card information to process a transaction. The requirements themselves fall into two buckets. You have your functional requirements — I want to be able to move money from account A to account B — as well as non-functional requirements: I’d like the system to be able to take this many transactions, I’d like to manage capacity in this way, I’d like the system to be this available. And you’ll notice that in these broad categories there may be security requirements on both the functional and the non-functional side.
So you might want a particular kind of experience when a user authenticates, or you may want a certain availability envelope, constrained by the number of transactions per second or a certain kind of resilience to distributed denial of service. After the great meditation over all of these requirements was done, you would then flow into design. Design would be where very smart people would come up with the best design documents and diagrams and architectures that fully embraced all the requirements you figured out during the requirements gathering phase, and really start to build out what that looks like in terms of the overall system design and architecture. Here again you’d have a security inflection point, where a very smart security architect may inquire as to whether you’re using the appropriate patterns for authentication and authorization, or whether you’re using a well-known service in order to do auditing, or whether a particular network flow of sensitive information was being appropriately protected. The exit gate to this would be basically a handoff to the engineers that were going to go write code, with the approval, both implicit and explicit, of the appropriate personnel — security, and the design authorities that may be responsible for certain architectural standards within the organization, and so on. Moving in our waterfall from design into implementation: the engineer receives the design, broken down in some sense into a logical set of components or functions. The engineer begins writing code. They perform their own rudimentary unit testing. Maybe they’re doing a little bit of static analysis, maybe a bit of fuzzing on the edges of the APIs that they’re exposing. But generally it’s something that’s pretty self-contained down to the individual engineer. Optionally, if it’s a junior engineer, perhaps there’s a senior engineer doing some kind of code review.
And the level of introspection is quite subjective, based on how that discussion goes. Last but not least, at the exit, once you’re code complete from that individual or unit-testing perspective, you move on into integration testing. This is where you orchestrate all the bits together and actually start seeing how those end-to-end flows are performing. Often, in legacy Waterfall, this was the first time that you got to see how performant the system was end to end. You start to be able to see hotspots within the system. You start to be able to see, from a security perspective, if particular units are making bad assumptions about how the security will work or what context is being passed between units. It also ends up being where you would incorporate third-party testers, which often get you very valuable, but also very expensive and hard-to-replicate, results. What I mean by that is when you have a system that’s almost code complete and you expose it to a pen testing firm, they may find all kinds of security edge cases that you did not think of in the lower environments, and they may do a very good job at weeding out stuff that could be false positives. But again, this isn’t sustainable. And then, when you move past testing and actually into production, you may begin the cycle again. There may be things that you pushed off from your initial release just due to timelines. There could be features that you chose to defer because budget dictated it, or what have you. But this life cycle usually leads to about 1 to 4 releases per year. Nothing major, something quite predictable. You’ll also note that there is a very large gap between getting the customer’s input and allowing them to see how it manifests in terms of actual code or system operation, in addition to point-in-time and very long-duration gaps between effective security testing along the way.
For these and a number of other reasons, while this was a very popular method of software development in the past, to say the least, things have changed. Yoshimi, can we move on to the next slide? Thank you. So how do we write software today? Well, for those of us that are software engineers or involved in any kind of agile or scrum-based process, this should look pretty familiar. What this seeks to solve is a much more iterative approach: by quickly writing code, testing it, deploying it, and then checking in with our customer, we can be much closer to our customers’ expectations. Sometimes customers may express things that they don’t even realize are imperfect until they see them in production, and then they’ll want to come back and work with you to tweak. I know there are some very strong feelings about Agile, and some very strong feelings about Scrum as well. But the entire idea behind what we view as optimal software development now is that there is a much shorter cycle between hearing something from your customer or gathering requirements, writing the code, testing the code, and deploying the code. In keeping this loop much tighter, we’re able to react more appropriately to market demands and customer expectations, leading to much higher customer satisfaction. So how does security show up in this cycle? Well, we start off with the backlog, which is a grand list of all kinds of things that we’ve been asked to do as engineers. Our first work, in backlog grooming, is to prioritize the stuff within the backlog, add new work as appropriate, and make sure that each line item of work meets an agreed-to definition of ready. What I mean by that is whether it’s your product manager or your customer or whomever is giving you new requirements, you should have some kind of request and acknowledgment process that allows you to express whether a request is complete unto itself.
Do I have enough detail here within the backlog to begin the next part, which is sprint planning? If yes, including any relevant security requirements, then we move into sprint planning. Sprint planning is really quite simple. A sprint represents a parcel or a unit of work, whether that be over the course of two weeks, four weeks, six weeks, whatever the norm within your organization is. The engineers, looking at the backlog, pick off items of work that they feel can be accomplished within that sprint and begin the work of assigning them to teams as well. Throughout the actual execution of the sprint, you will frequently have somebody such as a team manager or a scrum master that meets with the engineers in order to understand if there’s anything blocking their work — like “I need more details from the infrastructure team” or “I need further clarity from the product manager about the requirement.” That serves as a method to ensure that the engineers are always moving at full velocity towards code that will meet the definition of done when complete. Finally, the sprint is complete, all of your security checks are done, and the code is checked in. In some organizations, you may have already deployed the code multiple times to production, depending on how frequently you choose to deploy and what your CI/CD pipeline looks like. And there’s a demo. The demo serves as a ceremony that allows the engineer to demonstrate back to the person who requested a particular feature that the feature is complete — here’s what it looks like — and to get input and feedback as to whether that’s something that satisfies their demands. At this point there may also be some kind of security interrogation: we may have some kind of validated process in order to make sure that all the security requirements have also been met. Assuming everything was met, the work is acknowledged as complete, in agreement that the definition of done was met.
Last but not least, the most important part of this cycle is the retrospective. What went well? What could have gone better? How could we have improved, be it security or just the base engineering? So again, this kind of iteration is something that we see much more frequently in organizations today. You’ll see in the center — I have been in organizations where we’ve released software bi-weekly; I’ve been in organizations where we released software multiple times a day. And some of the traditional methods of interrogating security fitness that were appropriate during the waterfall cycle start to fall apart here. Can we really conduct a multi-week third-party penetration test against the software if we need to do one of those every single time software is released? Can we do a very rigorous, multi-month-long compliance assessment if we’re releasing software multiple times a day? As we start thinking about these new paradigms in software, we also need to start thinking about how we engineer software differently. So, Yoshimi, if we could move on to the next slide. So the velocity of deployment has increased tremendously. One of the methods that organizations I’ve worked in, in the past as well as currently, employ is canary deployments. We release our software, we gradually roll the new version out over the old version and take more and more customer traffic, and we measure our success based on metrics associated with the newly deployed code and how the system is reacting. There are some very interesting software frameworks out there, such as Spinnaker, which allow us to achieve this. That brings me to the second major bullet: one of the ideas behind rapid iteration and moving at a much faster velocity in software is that, because we don’t have quarterly releases, we can fix things quicker too. But in order to make sure that we’re being good engineers, we need to ensure that we’re making a lot of changes that are extremely predictable.
Small changes are safe; big changes are extremely risky. So if we can use Spinnaker to deploy containers that completely contain your app code, all the dependent libraries, all the system patches, and release those very frequently, we can quickly hone in on where and whether we have any systematic problems, and roll back or correct. If, however, we make a ton of changes, wait three months, and deploy, it’s a lot of hard work to determine where the defect was actually introduced. Code is also much more complex than before. If any of you look at your code bases today, there are potentially millions of API interactions in just a moderate-size code base, and code comes from everywhere. If you’re using Node.js or Python or Java, any of the popular languages, chances are you’re downloading a third-party framework from somewhere, with usually unknown provenance, even if it is a widely used open source library. Our ability to curate and understand the security posture of these as we rapidly iterate is key. The other thing that I’ve noticed in my tenure in engineering and security is that the accountability for application security has moved. No longer is it, “Oh, that’s the CISO’s problem.” It becomes a core component that engineering leaders are accountable for. Engineering leaders don’t get to defer compliance or security accountabilities to the central security department. They are accountable for the security outcomes. So if I’m developing a retail bank app and the app gets hacked, I can’t put the blame on the CISO. As a head of engineering, I have to own the accountability, and therefore I have to be the one advocating for proper security practices within my organization. So, Yoshimi, if we could turn to my last slide. Thank you. So how does security need to change? Security management and software development need to occur in parallel. In that really tight loop that you saw in terms of how we develop software today, there is no separate security swim lane.
Security needs to be bound as part of this practice. Integrating security early really saves time down the road. If you save all of your security testing for some kind of quarterly release while your engineers are releasing software daily, it is going to be near impossible to get the kind of security outcomes that you want. You have to integrate and embed within the process. Developers like security facts, not hyperbole. I am a software engineer that adopted security very early in my career, and I have a significant amount of empathy for engineers that have to weed through tons of false positives to extract something actionable to do with their code. Automation and consistency yield secure results. If you have people doing things ad hoc, you’re never systematically sure if you’ve eliminated all the risks that you sought to. If you have a repeatable, automated, policy-driven process, you have a much better chance of achieving the security outcomes that you want, by reducing complexity and inconsistency. In summary, you can’t bolt on security. It has to be part of the core fabric of what you do. And with that said, Yoshimi, I’m going to hand it back over to you.
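The canary approach Omkar described earlier — shift traffic to the new version gradually and judge it by its metrics — boils down to a promote-or-rollback decision. Here is a minimal sketch in Python; the error-rate metric and the 1% tolerance are illustrative assumptions, not the defaults of Spinnaker or any other deployment tool:

```python
def canary_verdict(baseline_errors, baseline_total,
                   canary_errors, canary_total, tolerance=0.01):
    """Compare the canary's error rate to the baseline's and decide."""
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    # Promote only while the canary's error rate stays within the
    # tolerance of the currently deployed (baseline) version's rate.
    return "promote" if canary_rate <= baseline_rate + tolerance else "rollback"

if __name__ == "__main__":
    # Canary at 1.2% errors vs. baseline at 1.0%: within tolerance.
    print(canary_verdict(10, 1000, 12, 1000))   # → promote
    # Canary at 5% errors: degraded beyond tolerance, roll it back.
    print(canary_verdict(10, 1000, 50, 1000))   # → rollback
```

In a real pipeline this comparison runs repeatedly as the canary takes more traffic, and frameworks like Spinnaker automate the metric collection and the rollback itself.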
Speaker 1: Thank you, Omkar. And we’re just going to keep going with Gadi.
Speaker 2: Excellent. Thanks, Omkar. Thanks, Yoshimi. Really great setup for where I’m going next. Omkar really described the problem of shifting from legacy security solutions and the legacy way in which AppSec was done to what we really have to do today in a modern world, with modern applications and modern release cycles. NeuraLegion is focused on application security from build to compliance, because we really believe in having security — application security — integrated into every step of the process. If we move to the next slide, we can talk about who we are. Our focus since we started releasing solutions has been on providing a DAST solution that is fully integrated into the development process and enables developers to actually use the solution. When we talk about DAST, we’re looking at web applications, whether those are standard or legacy apps, single-page applications, WebSocket or other protocol-based apps; at APIs, whether those are REST, SOAP, GraphQL or any other modern type of API; and obviously at mobile applications as they are released. So we want to make sure that organizations have full coverage of their web assets and APIs as part of the development process, and that they can test for vulnerabilities early and often. That starts with the ability to build the scan surface from the unit test level. Historically we could only run DAST solutions as part of the pre-production or production environment — we’ll see that in more detail in the next slide. In order to enable what Omkar described in detail, integrating security very early on and having an iterative and automated process, we have to be able to identify the attack surface from the unit testing level and much earlier in the software development lifecycle. We also have to have seamless integration into continuous integration and continuous delivery, and make sure that people don’t have to run these processes manually. They don’t need to trigger anything manually.
Everything needs to be automated and easy. And very importantly, if you are going to expand the solution beyond the application security team — which actually understands what real positives and what false positives are — you have to make sure that you’ve developed the mechanisms to eliminate false positives, and the results have to be actionable. The other very important focus, in order to take out the human factor, which is one of the longest poles in the tent and takes a long time when scanning applications, is that you have to be able to scan for business logic vulnerabilities, and not just technical vulnerabilities, as part of an automated solution. That’s really what we have focused on since the beginning of our journey three years ago. Yoshimi, if we move on, we’ll talk about how this actually looks. So this is a different depiction of what Omkar showed in terms of the actual development cycle, and it talks about how application security has shifted left and can be integrated much, much earlier in the development process. You can see on the left-hand side that historically we could only run DAST solutions in pre-production or production. That’s where those solutions ran, because we had the long time frames that Omkar mentioned: you had a couple of weeks to run these tests as part of your waterfall cycle, and that wasn’t a problem. But now that we have increased the velocity of releases very, very significantly — we have some customers that are doing 2,000 releases a day today — you have to be able to test early and often. And that means you need to be able to run these tests even though they are dynamic analysis tests, so they still require compiled applications.
They don’t integrate directly into the code, but you make sure that you’re testing as part of unit testing, as part of CI, as part of QA automation, so that you’re identifying vulnerabilities much, much earlier in the process. The customers that have implemented this in the best way can identify 50 to 60% of their vulnerabilities very early in the development process, literally on every build, and remediate them very early. So when they do get to pre-production or production, the number of vulnerabilities identified there is very, very small, and they can still remediate those. Now, the setup is relatively straightforward, and it’s intended to be automated, so the security team can provide the governance and developers can actually use the application on a day-to-day basis. In this example we’re showing Azure DevOps, but as you can see at the bottom, it can be any CI/CD solution; we are using GitHub as the source repository, but it can be any source repository; and Jira as the ticketing application — again, it can be any ticketing application that you use in your organization. Essentially, when code is committed and the CI process starts, we are called for a scan as part of that process. Then you can define the rules that say whether the build will succeed or fail based on vulnerabilities that are found. Do we want every build to succeed, or do we want to say that whenever a high-severity vulnerability, or multiple medium-severity vulnerabilities, are found, that build will fail? In any case, tickets are opened automatically for developers, so developers never have to leave their development environment. Those of you who are developers — that brings a smile to your face, right? Because you now know that.
Yes, I don’t need to leave my standard workflow, my standard process. And much more importantly, somebody from security is not going to come to me three months after I’ve released an application and tell me that I now need to dive back into that application and try to figure out what those vulnerabilities were that were introduced months ago. The relative time that it takes to resolve an issue if it is brought up immediately as part of the build, versus going back into production and trying to resolve it there, is about 60x. So the time savings and the headache savings are very, very significant for an organization if all the tickets are opened automatically in the bug tracking system. Obviously, and very importantly, all the results and all the information are shared with the security organization, so they can track vulnerabilities over time and continue providing the governance in the best way possible. That’s how you create a solution that’s iterative and enables you to identify and solve vulnerabilities early. Obviously, those customers that are using best practices are still scanning in pre-production or even in production, to make sure that they’re identifying vulnerabilities that manifest themselves at those stages and wouldn’t manifest earlier. But this makes remediation much, much easier at that point. Now that we understand that, let’s jump to the next slide and look at what actually makes a DAST solution amazing for developers. There are a few components that we’ve built in from the beginning to make sure that developers are willing to adopt these solutions and are not saying, “No, this is security’s responsibility.” The first one is no false positives. That is a problem that has been a long-term issue in our industry, and you want to make sure that you’re eliminating them. We’ve built the mechanisms and capabilities into the product to eliminate false positives.
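The build-gate rule described above — fail the build on any high-severity finding, or on too many mediums — can be sketched in a few lines. This is a hypothetical policy function for illustration, not any specific product’s rule engine; the finding format and the thresholds are assumptions:

```python
def build_passes(findings, max_medium=2):
    """findings: list of dicts like {"name": "sqli-1", "severity": "high"}."""
    highs = sum(1 for f in findings if f["severity"] == "high")
    mediums = sum(1 for f in findings if f["severity"] == "medium")
    # Any high-severity finding fails the build outright; more than
    # max_medium medium-severity findings also fails it.
    return highs == 0 and mediums <= max_medium

if __name__ == "__main__":
    scan = [
        {"name": "xss-1", "severity": "medium"},
        {"name": "sqli-1", "severity": "high"},
    ]
    print("PASS" if build_passes(scan) else "FAIL")   # a high finding: FAIL
```

A CI step would call something like this with the scanner’s report and use the boolean as the step’s exit status, which is what makes the “build succeeds or fails on vulnerabilities” policy enforceable automatically.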
The next one is really enabling focused, fast scans. That means you don’t need to rely on a crawler like legacy solutions have used; you can actually rely on HAR files for web applications, or on Swagger/Postman collections for APIs, to really narrow the scope of the scans. And if you’re able to narrow that scope, scans can complete much faster. If you can run a scan in 10 seconds, 15 seconds, or even two or three minutes, it can really be integrated into your CI process. If a scan takes five hours, it’s not something that you can integrate — definitely not if it takes five days. So in order to run these scans quickly, you need to be able to shorten them and define a much clearer scope for them. The next one is, once vulnerabilities are actually found, how do you give guidelines that are in developer speak? You’re not just telling them, “Oh, this is CVE-12345, go figure out what that means, because you’re a security professional so you know what to do with it.” Developers don’t know what those things mean, and you need to make sure that the remediation guidelines you provide are in developer speak. The next one is really that full automation with CI, so developers don’t have to do anything manually. And the last thing is very, very important: make sure that developers can get started for free. If you actually go to our site, neuralegion.com, as a developer, you can download the application and start running your tests for free today. So it’s very empowering for developers to know: I can actually release secure applications; I don’t need to rely on my security team in order to do that. Now that we understand that, let’s move forward. I think we might skip a couple of slides here — I’m just conscious of time. But before we do that: when our customers are really using the product, you want to leverage those developers.
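The HAR-based scoping mentioned above can be illustrated with a short sketch: instead of crawling the whole application, derive the scan scope from the (method, path) pairs recorded in a HAR capture of real traffic. The field layout follows the standard HAR 1.2 structure (`log.entries[].request`); deduplicating by method and path, and the sample URLs, are illustrative choices:

```python
import json
from urllib.parse import urlparse

def scope_from_har(har_text):
    """Return sorted (method, path) pairs found in a HAR capture."""
    har = json.loads(har_text)
    scope = set()
    for entry in har["log"]["entries"]:
        req = entry["request"]
        # Deduplicate by method and path; query strings vary per request.
        scope.add((req["method"], urlparse(req["url"]).path))
    return sorted(scope)

# A tiny hypothetical capture: two hits on one endpoint, one on another.
SAMPLE_HAR = json.dumps({"log": {"entries": [
    {"request": {"method": "GET", "url": "https://app.example.com/api/users?id=1"}},
    {"request": {"method": "GET", "url": "https://app.example.com/api/users?id=2"}},
    {"request": {"method": "POST", "url": "https://app.example.com/api/login"}},
]}})

if __name__ == "__main__":
    print(scope_from_har(SAMPLE_HAR))
    # → [('GET', '/api/users'), ('POST', '/api/login')]
```

A scope like this is tiny compared to a full crawl, which is what makes the 10-second CI scan described above plausible: the scanner only exercises endpoints that real traffic actually touched.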
We all know the diagram in the top right-hand corner: in most organizations, for every one AppSec person, you have 100 developers, and you don’t want to rely on that one AppSec person to be the boy with his finger in the dike, right? You want to utilize the 100 developers that can actually use the solution. As I mentioned in the beginning, AppSec can still provide all the governance and say what we should scan, when we should scan, what we should scan for, etc. But then you want to leverage the power of that much larger team of developers to run the actual tests in an automated way and make sure that you are identifying the issues early. Excuse me — if we can skip the demo slide and go to the one after that. If we have time at the end, I can go into a quick demo. Everything that I just presented is not just theory. There’s a lot of research by Gartner and Forrester and other companies that shows there is real proof of the value that these solutions offer. Today, a data breach in the US or North America will cost an organization an average of almost $8 million — $7.9 million — to deal with. Obviously, there are some that are much, much more significant for very large organizations. But if you do integrate an AppSec solution, fully automated, early in your development process, you are able to cut that cost roughly in half: from almost $8 million down to less than $4 million. Moreover — and this is very exciting, and this is specifically for DAST that is integrated into your process — it enables you to shorten the amount of time that you have vulnerabilities in production. Today, and this is staggering, most organizations will have vulnerabilities in production for more than nine months — 280 days. If you’re able to integrate automated DAST into the process, you will reduce that very, very significantly. So this isn’t just theory; there is a lot of value to it, and the next slide shows the proof.
So while we’re a startup, we have quite a few customers and organizations using this solution, from large banks to financial institutions to telcos, etc. And we’re very proud of the fact that there are quite a few cybersecurity companies that are actually using our product and getting benefit from the solution itself. So this is tried and tested by many organizations, and we’re very proud of that. With that, I’ll wrap up and we can move on to the Q&A. As I mentioned, please go to our site and you can get started for free.
Speaker 1: All right. Great. Thanks, Gadi and Omkar. Those were great points that you two laid out. So let’s get into some of the discussions that we can have on this topic. In the meantime, audience, please submit your questions or anything that you want to ask — these two gentlemen will definitely welcome that. While waiting for that, I could start asking questions. One of the biggest questions that I have as a venture capitalist: DevSecOps is something that we’ve been talking about for quite a few years, and the biggest question that I have is about ownership and budget. Security practice, security implementation, security operations have been in the hands of the ops side for many years. But with the way the software build process has evolved and changed over the years — as you explained how it was done and how it’s done today — things are definitely moving, things are changing. So this ownership and budget: how should we understand it? Is the budget and ownership really shifting from the ops team to the actual product team or the DevOps team, so that they are actually making the decisions and own the budget to pursue and implement these kinds of security tools in their dev cycle? Gadi, Omkar — either of you?
Speaker 2: Sure, sure. I’d love to take a shot at that, and Gadi, I’d love to hear your thoughts too. So in terms of my career, I began in technology, I spent about a decade on Wall Street, and I’m back in technology now. During my time on Wall Street, what I saw was this evolution. Initially, everybody went to the CISO, right? I need a new tool — let’s put it on the CISO’s budget. Eventually that started changing — and this could be unique to the financial sector, so I’d love to hear what others have seen within their own sectors — but it started to move more and more towards downloading the accountability for risk, the management of the P&L, and ultimately the velocity at which any feature was released, be it a security fix or a core feature of the application. All of that went down to the head of engineering. So you’ll notice on the Agile chart that I had, there wasn’t a backlog for security and a backlog for features. There was one backlog, and it was up to the individual head of engineering how that was prioritized. One of the things that we consider within my current day job is something we call an error budget. You can move super duper fast, you can add lots of new features — until things start failing in production. And then we’re going to turn that knob and allow you to release things maybe a little slower; you’re going to have to be a lot more thoughtful, and you’re going to have to start paying down the tech debt that led you to this place. And it’s a balancing act, right? You never quite reach homeostasis, but the idea is that you, as the accountable head of engineering, need to make the call as to how much of this you’re going to invest in, and ultimately how you’re going to be held accountable if you have a security control failure in production. So that’s what I’m seeing. Gadi, how about you? What are you seeing from the perspective of your potential customers?
Speaker 1: Yeah, I'd definitely say that when we talk to organizations, like you started saying: it depends. Two years ago we saw all the budgets sitting in the security organization, and in most large organizations we're still seeing the ultimate budget come from security, unless there is a transformation project. In companies going through a transformation project, the budgets are suddenly starting to come from either development, so the CTO office, or from DevOps, which is another area where they're saying: okay, we're going through this massive shift in the organization, we're all moving toward a DevOps practice, and that means we're creating a new group that's in charge of it, and that group will own the budgets for the transformation, which will include security budgets. For smaller organizations, we're definitely seeing the budget come directly from the CTO office. A lot of them don't even have a CISO, or in those that do, the CISO actually reports to the CTO in some cases. So those budgets are coming directly from technology.
Speaker 2: I see. So much of application building — the application itself comes with its own P&L. On the product side, the engineering team has to be really risk aware, and they need to implement security practices in the dev cycle so that, even if it means slowing down a little, overall they maintain the momentum and velocity of the faster cycle. Even so, there are many different layers of tools out there that a dev team could implement. There's SAST, there's DAST, which you're building, there's IAST, there's software composition analysis. So many different kinds, so many different layers of tooling available — it's a little confusing. How should people look at this? Is there a priority? Are there pros and cons to each layer of defense?
Speaker 1: Yeah. So obviously DAST is the best solution and all the others are terrible. No, kidding aside — it's always an interesting question when people ask us: well, are you really DAST if you're integrated that early and you cover both web apps and APIs? And I say: yes, that is the category we fall under. I would say that most organizations deploying even our modern version of DAST are still doing software composition analysis and static analysis as part of the overall stack, to make sure they're covering different parts and different aspects of application security and have full coverage. Ultimately, the cost of deploying these solutions is so much smaller than the cost of a breach that it's better to have more tools and find different vulnerabilities, because each one finds different types of vulnerabilities in your organization. We are definitely seeing — and this goes back to those smaller organizations that don't really have a dedicated AppSec team — companies saying: look, we can't utilize all these solutions; static analysis has a lot of noise, so we're going to deploy DAST and software composition analysis together and forgo static analysis. That's not something we're seeing with large organizations, but definitely with smaller ones. Another trend we're seeing is companies reducing the amount of manual pen testing they do and saying: okay, we'll do the manual testing once a year, because we still have to for compliance. Our long-term vision is that, hopefully, you won't even have to do that for compliance, though that will require regulatory change. But they will implement automated solutions, which are much faster, much more predictable, and much cheaper than doing manual pen tests on a regular basis.
Speaker 2: I think that—
Speaker 1: Is that consistent with how you see it as a practitioner?
Speaker 2: Yeah, I think that makes sense. I mean, Gadi, you did a wonderful job of saying what I always say, which is: it depends, right? I think all of these work together in concert when done properly. The advantage of both SCA and DAST over some of the static tooling, to Gadi's point, is not only that we can tune them — there's a lower barrier to entry and there are better ways of keeping false positives very low — but also that in organizations where you've inherited a bunch of code, or a bunch of binaries that are statically linked into your code, static analysis isn't going to help you, right? Static analysis helps you when you have the source. In scenarios like that, you can get a very good outside, black-box look at how your system reacts without having access to the source code. In other scenarios, you may have access to the source, but you may be constrained in how much you can change it — whether for licensing reasons or because it's owned by a different department — yet you can defensively use those libraries within your own code if you know where some of the edge cases are. The last one, which is a favorite of any software engineer who's done multithreading or distributed systems at scale: race conditions only show up at full system load, right? And you only get that in a live test environment. You can statically test to your heart's content, and even the best static tester might say: hey, this doesn't look thread safe, there could be a problem here. But until you actually see how it runs, you're not going to have much of a sense of where, how, and if it breaks, and how you should code defensively to avoid that.
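The lost-update race described above can be sketched in a few lines. This is a hypothetical illustration, not code from the webinar: a counter whose increment is a non-atomic read-modify-write only loses updates when threads actually contend at runtime — exactly the kind of defect a static tool can flag as suspicious but only execution exhibits.

```python
import threading
import time

class Counter:
    """Shared counter with a deliberately non-atomic increment (illustrative)."""
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def unsafe_increment(self):
        current = self.value       # read
        time.sleep(0.0001)         # widen the race window (stands in for real work)
        self.value = current + 1   # write back: updates can be lost under contention

    def safe_increment(self):
        with self.lock:            # the fix: serialize the read-modify-write
            current = self.value
            time.sleep(0.0001)
            self.value = current + 1

def run(increment, n_threads=8, per_thread=5):
    """Hammer one counter from several threads and return the final value."""
    counter = Counter()
    def worker():
        for _ in range(per_thread):
            increment(counter)
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value

expected = 8 * 5
print(f"expected {expected}, unsafe got {run(Counter.unsafe_increment)}, "
      f"locked got {run(Counter.safe_increment)}")
```

Under contention the unsafe run typically lands short of 40 while the locked run always reaches it; the point is that only running the system, not inspecting it, reveals the gap.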
Speaker 1: Yeah, so it's very true that many applications have started to pull in lots of third-party libraries. And another big trend we're seeing is API implementation all over the place, which again is something whose risk is hard to gauge, and lots of API companies have popped up in the last few years. Sometimes it's astonishing to realize how little visibility enterprises actually have into API management itself. So what's your view on API security, Omkar? And Gadi, what role can DAST play from an API security point of view as well?
Speaker 2: Absolutely. Going back to what we were discussing earlier: when you don't own the code base, one of the things you can measure quite effectively is how your system reacts when provided unexpected input, or under high load, or under some other security edge case. And that could be an orchestration of multiple internal and external APIs. As I mentioned in my slides, even a moderately sized internal application can have thousands, hundreds of thousands, even millions of API calls, depending on how faithfully the system has been designed to the microservices architecture. There are so many benefits to doing that: if you decompose that monolithic code base, you can operate on it with much more precision and change very specific code paths. But then the major effort comes back to having a holistic test that considers all of this end to end. And that, again, is where DAST comes in. Static analysis can tell you: oh, neat, you have a dependency here where you're going to call something else, you should check that. An SCA might even say: hey, based on the Swagger file, that looks like API version one-two-three, watch out for these things. But you don't truly understand the edges until you exercise it in a test, and you don't have full visibility into which code paths may need, for example, additional input sanitization, or where you may want to react to API unavailability in a more graceful way. All of these components, I think, really rely on a DAST system to validate. What do you think about it?
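The "unexpected input" probing described above can be sketched without a real scanner. In this hypothetical example, `handle_transfer` stands in for an API endpoint's request handler, and the loop plays the role of a DAST tool feeding it malformed bodies and checking that every failure is a controlled 400 rather than an unhandled crash — the names and validation rules here are invented for illustration.

```python
import json

def handle_transfer(raw_body: str) -> tuple:
    """Hypothetical endpoint handler: parse and validate a transfer request,
    returning an HTTP-style (status, message) pair instead of raising."""
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400, "malformed JSON"
    amount = payload.get("amount")
    if isinstance(amount, bool) or not isinstance(amount, (int, float)) or amount <= 0:
        return 400, "invalid amount"
    return 200, "ok"

# DAST-style probes: one valid request plus a handful of hostile or broken bodies.
probes = [
    '{"amount": 100}',       # well-formed request
    'not json at all',       # broken framing
    '{"amount": -5}',        # semantically invalid value
    '{"amount": "1e9999"}',  # wrong type smuggled as a string
    '{}',                    # required field missing
]
results = [handle_transfer(p) for p in probes]
for probe, (status, _) in zip(probes, results):
    # the property under test: every input gets a controlled response
    assert status in (200, 400), f"unhandled failure for {probe!r}"
print(results)
```

A real dynamic scanner automates exactly this loop over a live system, generating far more probes and classifying the responses.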
Speaker 1: Very well explained. Gadi, what would you add on top of that? Maybe you could also answer one of the questions that came up: do you guys provide an audit of APIs?
Speaker 2: Yeah. When we look at API scanning, we divide it into three holistic parts — well, maybe even four. The first one is: do you even know what APIs you have in the organization? That's a big question for many organizations. If you already know what they are, that's great, and we can scan all of them. One of the bigger challenges we're seeing from customers now is that they don't even know what APIs they have. So that's a new requirement we didn't address in the past, but we're starting to look at addressing it, either through a partnership or through our own solutions, to give you that discovery of what's out there. Once you do know what the APIs are, it divides into two other categories. One is: are you looking just at the production APIs, or at development and production? We believe that, like anything else, you should be looking at development and production, securing the APIs as part of your development process and not just in the production environment. And that's where the assurance the question was about comes in: make sure you're validating that they're not vulnerable early in your cycle, rather than finding out in production, which is very risky. The last component is really the API formats you're using. More and more organizations are shifting from legacy SOAP or REST APIs to GraphQL APIs and other new technologies that legacy solutions just don't support, because their backends don't even enable them to support that. So make sure you're able to test all the different types of APIs you have, whether they're GraphQL or the newer API technologies popping up right now. There are all sorts of things around WebSockets and WebSocket communication that historically we didn't need to support, and now we have to add support for in order to identify vulnerabilities in them.
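For the discovery point above, GraphQL at least offers a built-in starting place: introspection. The sketch below builds a minimal version of the standard introspection request body; the endpoint URL in the comment is hypothetical, since where you POST it depends entirely on your environment (and many production servers rightly disable introspection).

```python
import json

# Minimal GraphQL introspection query: asks the server to describe its own
# schema, a common first step in discovering what an API actually exposes.
INTROSPECTION_QUERY = """
query IntrospectionQuery {
  __schema {
    queryType { name }
    mutationType { name }
    types { name kind }
  }
}
"""

def introspection_payload() -> str:
    """Serialize the query as the JSON body a GraphQL endpoint expects."""
    return json.dumps({"query": INTROSPECTION_QUERY})

body = introspection_payload()
# In practice you would POST this body to the (hypothetical) endpoint, e.g.:
#   requests.post("https://api.example.com/graphql", data=body,
#                 headers={"Content-Type": "application/json"})
print(body[:60], "...")
```

The response enumerates every type the server knows about, which is why scanners lean on it for coverage — and why disabled introspection pushes discovery toward the partnership and crawling approaches mentioned above.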
Speaker 1: Thanks, Gadi. We just got another question in the chat — thanks, Mahendra. This one looks like a question for Omkar: what challenges does the CISO office face internally with power shifting to the dev side in cloud-first companies?
Speaker 2: Oh, my God, developers are getting power? Stop the presses. In all seriousness, I'll answer the question, but I think a lot of this comes down to your leadership philosophy to some extent, right? I refrain from micromanaging people through the steps of what they need to do, and I prefer to measure them on outcomes. I think you then get not only better engagement but also novel ideas you may never have thought of as a leader. And that's key when we think about innovation: innovation is overcoming constraint, without prescription, in a novel way. If you set up an environment for that, great things will happen. Now, directly getting back to Mahendra's question. In my opinion, there's a certain mentality that says security has to be micromanaged out of the CISO's office. I've seen that, and I haven't seen it scale very well when you're talking about very large organizations measured in hundreds of thousands of people. Because eventually — to put my engineer hat back on — you run into your own scalability problem, where your department has to grow at the same rate as the rest of your organization, and you're just never going to be able to hire that many people. So you need to take a step back and ask: how do I scale this so that I get the security outcomes I want while empowering people to be secure, perhaps approaching security in ways that I — as a governance, risk, and compliance CISO, or even a CISO who used to be technical but maybe isn't as familiar with modern development practices — wouldn't have prescribed? How do I get to a point where they still reach my outcomes in the most optimal way? I think the right way of doing it is to hold those engineering leaders who are accountable for the P&L accountable for risk as well. So they're accountable for managing the risk within their organization.
They're the ones who get to modify their engineering practices in the appropriate way to address the concerns you care most about as a security leader, and who do so in partnership. I think the way the CISO should approach this is by measuring the engineering leader — or in some organizations the CIO, or in others the product management leader — against the security outcomes you want, against the risk outcomes you want, and ultimately allowing that person to be the senior leader and professional they are and manage their budget to the outcomes they've been set. Does that make sense? And perhaps, by extension, I wonder how you're seeing this show up with some of the customers you've been working with.
Speaker 1: One of the ideas we had early on — we haven't done it yet, but hopefully we'll get to it as we grow — was to implement a security team and development team happiness index, showing how happy they were before they integrated solutions that really made their lives easier and how happy they are after implementing them. I think there's a critical issue right there: there's always that conflict between security and development. "Hey, these guys are forcing me to do things I don't want to do" — that's how development thinks. And then for security, it's mind-boggling that development doesn't know this from the beginning. One of the things we've been doing — and I'm happy to extend this to anybody who's listening — is running workshops for developers on what the OWASP Top Ten is, what the OWASP API Top Ten is, and educating people on why this is so critical and how they can implement it in an easy way, to reduce that friction and make everybody happy. So maybe one day we'll get to the point where we have the happiness index and start publishing it regularly.
Speaker 2: Oh my God, an engineer happiness index — that's new to me. Very exciting. So, Gadi, it sounds like adopting these kinds of tools is a big change for a team. What are the typical frictions, the actual examples of friction you've been seeing, where DevOps teams push back with "this is not my thing" or "this is too much change"? What is the actual friction you see when you talk to those people?
Speaker 1: Yeah, I think there is. I'll start with a higher-level issue and then dive into the developers themselves. A lot of times there's a real desire within the organization to say: yes, we want to integrate this early and often, we want to make sure we're integrating application security into the process — however, we don't really have a partner on the development side who's willing to do it. The organizations that are really able to adopt these solutions and use them will find one to three development teams that are keen, find that development leader who is interested, prove it out, and get those quick wins within the organization: look, this team is actually happier with this solution, because they're able to remediate issues early and we don't have to come back to them six months later and have them delve back into the code. So I think the first challenge is really finding one or two partners who are excited. Then the next ones are those components I listed around what makes a developer want to adopt a solution. One: get rid of false positives — they just don't know how to deal with false positives. Two: make sure the scans run very quickly and don't delay their process, because ultimately they're measured on throughput. Three: make sure that when you give them a vulnerability and remediation guidelines, those guidelines are in developer speak — something they understand, so they don't have to go searching to find what the solution is. If you target the solution toward them, that's very, very important for making sure they'll actually adopt it and try to use it.
Speaker 2: Got it, got it. Before we wrap up, one more question just popped up in the chat: on primary code versus third-party library vulnerabilities, there's lots of talk from vendors about which gives you the biggest bang for the buck. Do you think primary code vulnerabilities are being overlooked or downplayed?
Speaker 1: It's really interesting. I've been doing a lot of research as we ramp up for some more investor discussions, and what we're seeing is a ton of noise in the industry around software composition analysis, as you know — Snyk and other companies have raised ridiculous amounts of money. But in the end, only 30% — 31%, to be precise — of code is actually third-party code. 69% of code is first-party code, and if you're not paying attention to that, you're falling short in providing coverage. So there's a big gap that needs to be addressed for first-party applications, and I believe we can offer a solution there.
Speaker 2: Got it. Thank you. I think we only have a few seconds left before we hit 11:00, so we need to wrap up. But before everyone goes, please look at the chat: we have a SurveyMonkey survey, and we always want to host sessions like this so we can give back to everyone in the community. We'd appreciate it if you could give us feedback on today's webinar so we can keep improving. So thanks, Gadi and Omkar. Lots of discussion, and lots of changes happening in the DevOps software-building cycle — it's a change teams have to adopt in order to manage their risk exposure, and DAST is one of the tools. We've seen the goods and bads of every tool, but hopefully this will be one of the ways people can adopt security and increase the engineer happiness index. That would be a good goal for everyone. So thanks, everyone, for joining. Gadi, Omkar, thanks a lot.
Speaker 1: Excellent. Thanks, everybody. It's a pleasure.
Speaker 1: Great to have you both. Thank you.
Speaker 2: Bye bye. Bye.