Avoiding Security Incidents with a Dev-First AppSec Program
00:00:00
Speaker 1: Okay, so good morning, good afternoon, or indeed good evening, depending on where you are. We really do seem to have a very international presence today, which is great, but more importantly, a very informative webinar on avoiding security incidents with a dev-first AppSec program, presented today by Ofer Maor of Mitiga and NeuraLegion's very own Gadi Bashvitz. I will let the two speakers make their own introductions, but a little housekeeping perhaps, and a look at our agenda for today. Ofer will be taking us through real-world examples where major security incidents could have been prevented. Ofer and Gadi will also discuss together the challenges that you're probably facing from a security standpoint and how you can shift security testing left successfully to prevent issues further downstream. We will have a Q&A at the end, but please do add your questions or remarks throughout the webinar, either in the chat or indeed via the Q&A functionality, and where relevant I can pose these questions throughout the webinar to keep things conversational and interactive. But we will certainly be having a Q&A at the end. This webinar is being recorded. And without further ado, I will pass the mantle over to the speakers to introduce themselves. Gadi, over to you.
00:01:29
Speaker 2: Excellent. Thanks, Oliver. Thanks, everybody, for joining. Very excited to be here with my esteemed colleague Ofer, who is signed in with my name for some reason. Ofer, I don't know why you did that, but I appreciate it.
00:01:44
Speaker 1: That’s the link I got.
00:01:46
Speaker 2: Okay, you can change your name. I just noticed that. If you make me look much more handsome, come on, let's be honest. So just two words about myself before I hand it over to Ofer. I'm the CEO at NeuraLegion. We've been around for about three years, really focused on driving DAST for developer-focused organizations. A bit about my history: I started my way in the cyber world in the 8200 unit in the military, plus a few years after that that I can't really talk about, so we'll skip that part of my history. Then I joined the commercial world: I joined a company called Variant at a very early stage, focused on cyber, all sorts of fun recording stuff, voice over IP when it had just started in the product world, and had a great run from an early-stage company until after we went IPO. Since then I've moved into a bunch of commercial roles, culminating with being at NeuraLegion for the last couple of years. I also hold an MBA from New York University and an undergraduate degree from Tel Aviv University, and I'm based just north of San Francisco in California. I'll be speaking in the latter part of this presentation, but for now, let me turn it over to Ofer to introduce himself and dive into the presentation.
00:03:24
Speaker 1: Thank you, Gadi. So, hello, everybody. It's great to be here. For those of you who don't know me, my name is Ofer. I'm currently the CTO and co-founder at Mitiga, a cloud incident response company; I'll talk about that in a minute. But actually, most of my career has been in the AppSec space, so I've been in this industry for over 25 years. I started in AppSec in 2000, when I joined as the first employee of a small company that later became Imperva, and so I had the chance to do pen testing work for a while. Then I founded my own pen testing company called Hacktics, which was acquired by EY, and I then started the first IAST company, Seeker, which was later acquired by Synopsys, where I spent a big chunk of my time. So most of my career has been around AppSec. In the last two and a half years I've moved a little bit to cloud security, but AppSec is a big part of cloud security as well, so I keep coming across it. I'm cloud security and AppSec passionate during the day, and incident response during nights and weekends, because incidents always come at the end of the day or on a Friday afternoon. It's a thing. It's a cosmic thing. I don't really get it, but that's how it is. And I'm a huge DevSecOps fan; it's most of what I've been evangelizing while at Synopsys, and it's why I'm so happy to be here with NeuraLegion, the first company that brings DAST to developers. I think it's great. So that's a little about me. Mitiga is a company focused on cloud incident response, and I don't want to talk too much about Mitiga, but the work we do in incident response has led me to this presentation, so I'll talk today about incidents we came across. We do incident response a little differently: we built our incident response platform for the cloud. You can read about it if you care. But let's move into the stories. So what am I here to tell you about today? A few incidents. I chose three; there are many more, but I don't want to bore you too much.
In all of them, the main problem has been app vulnerabilities that could have been prevented by having a good, or even just an existing, AppSec program and testing in earlier phases. Because those weren't in place, an entire incident or breach happened, and some of these have been big and catastrophic. So what's at stake is not just a checkbox; it's a real-world attack vector. Let's start with the first one. I'd like to talk about this incident because it's very cloud-native, but also AppSec-related. Gadi, can you switch the slide? This is not the incident itself, but a similar incident that was published online, so I can use the material that was published, because clearly the incidents I'm talking about today are all under confidentiality. But the story is very similar. And the story here is not a pure AppSec problem; it's somewhere in between. It's the story of people having their code stored in GitHub and GitHub being compromised. Except it's not GitHub itself that was compromised; it's the GitHub marketplace. As we expand into the cloud and the cloud-native world, there are so many platforms and SaaS platforms and marketplaces, with so many third parties. In this case, a company called Waydev was compromised. They had credentials to GitHub, and through those the attackers had access to Dave's source code. The source code was stolen through that access. And in Dave.com's source code, there were passwords in clear text. Why? Because there was no security program in place, and people stored passwords in cleartext. The attackers used those, accessed the data, stole 7.5 million records and leaked them on the darknet. Guess who was blamed? Dave, because they didn't do the right thing. Next slide. And this is a recurring thing. DeepSource is another company; amazingly enough, it's a company that does static analysis for security. I don't know if they're still around, but they were breached.
All the repositories of their customers were stolen. Next slide. One of those customers came to us, and this is a partial timeline, clearly obfuscated, of what happened. Basically, sometime around the end of 2019, the customer did a PoC of DeepSource. Again, DeepSource is static analysis, and static analysis needs full permission to your code. They did the PoC, they didn't like it, and they stopped using it, but they forgot to remove the permissions. Then, around April 2020, a phishing attack took down DeepSource. The attackers got access to DeepSource's main server, and through there they got access to all of DeepSource's customers. They cloned the code and downloaded it; we saw an IP, I think it was in Turkey, that was used to actually access the code. That was on May 31st. DeepSource was informed by GitHub that they'd seen suspicious activity and that their account was breached on July 11th, so a month and a half later. And that's actually fast in the incident world, right? Only ten days later, GitHub notified our customer that they'd been breached. So it's now almost 50 days since their code was downloaded before they had the first indication that the code had been stolen. And 50 days is a long time for attackers. Next slide.
00:10:17
Speaker 2: I think you're bringing up a really interesting story with both of these, and I'm jumping the gun a bit. I know we're supposed to have a discussion later, but I was wondering, for our audience, if you can post in the chat: do you actually know what incident response does? Because Ofer's team is brought in to clean up. If everybody remembers Pulp Fiction, when they bring in the Wolf as the cleaner to help clean up afterwards: that's what Ofer dresses up as for Halloween every year, because that's what he does throughout the year. So the stuff has already hit the fan and you're brought in to clean up. Obviously we'll talk a lot more about how these things can be prevented; the whole idea here is not to be heroes, but to show how these can be prevented. But Ofer, I also want you to talk for just two minutes about incident response and when you're actually brought in. I think that's very important, so people get the visceral pain that comes with it.
00:11:20
Speaker 1: Yeah. So whenever we are brought in, we come at the worst time for an organization, when they've had a substantial breach. Usually it means the CEO of the company is involved, everybody's stressed out, some people are worried about losing their jobs, and there's a lot of legal havoc, because, as you know, most of the businesses we work with are regulated at some level. The minute there's customer data or health data or PII or anything like that, it needs to be reported, and there are liability issues and legal ramifications. So what do we do when we come in? Again, we can't prevent the breach; it already happened. We are there, as Gadi said, to help clean things up. The first thing we do is a really thorough investigation, because the investigation helps us understand what actually happened: was that data really leaked? Was it abused? Was it used in the wrong way? Then we help remediate the systems, we help manage the risk, we sometimes bring in a negotiator if it's ransomware (I'll talk about that a little later), and we help manage the crisis for the customer so that they can go back to business as usual. Actually, one of the challenges of IR is that you usually don't have enough data. That's part of what we do at Mitiga with our platform: we collect all your forensics data up front, so when you have an incident, and you will have an incident, we can do all this work really, really fast. But this was an ad hoc customer, so they didn't have our platform and we had to make do with what we had. It took us about a week just to get the more detailed logs from GitHub, because they were hard to get. SaaS makes logging very difficult, but that's another discussion we can have another time. Anyway, let's go to the next slide. So what happened during this time? The attackers downloaded the code, right?
And they started looking into the code, because they had it. They found some old credentials, some of which were still active, around GCP, but they also found some vulnerabilities in the code, because other than trying DeepSource, this company never did a proper pen test or DAST or any serious AppSec program. Luckily for them, by the way, most of what we saw was still reconnaissance: the attackers tried things, they found some persistent cross-site scripting issues, they identified some internal links that leaked customer names, but we haven't seen any indication that they actually exploited any of it in this case. Still, they were looking at the code and hunting for vulnerabilities in it. From the response perspective, the minute we got in, we started looking at what was going on. We looked at the code, we saw some suspicious activity against these apps using vulnerabilities, and that triggered an emergency AppSec program. We brought in an external code review team, a pen testing team, a DAST and static analysis, all at crazy top priority, and their poor developers had to work day and night to patch everything, because the code had already leaked. We can't undo that; we had to fix the vulnerabilities, which is what they should have done earlier on. We also looked at everything else: we did a full incident response, very expensive for the customer, because we had to rule out so many things, reviewing all their old code for secrets and what potential access could have been gained with those secrets. So it was a very big and unpleasant event. They have some healthcare customers, so that was a big deal, and they had to notify their big customers that they had a breach, because that's in their contracts, as it is in everybody's contracts now. One of their customers took down the service for a week, so they lost money there as well.
Really lots of bad things, all because they had no real AppSec program in place: nobody taking ownership of GitHub, of testing, of any of these things.
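To make the "secrets in code" point concrete: even a minimal scan would have flagged the cleartext credentials before the code ever leaked. The sketch below is illustrative only; the pattern names and sample strings are ours, not from the incident, and real scanners (gitleaks, truffleHog and the like) use far larger rule sets plus entropy checks across the full git history.

```python
import re

# Hypothetical patterns for illustration; production scanners carry hundreds.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_password": re.compile(r"""(?i)password\s*[:=]\s*["']([^"']{6,})["']"""),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list:
    """Return (line_number, pattern_name) for every suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = 'db_password = "hunter2-prod"\nkey = "AKIAABCDEFGHIJKLMNOP"\n'
for lineno, name in scan_text(sample):
    print(f"line {lineno}: {name}")
```

Run in CI on every commit, a check like this turns "passwords in clear text" from a breach multiplier into a failed build.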
00:16:23
Speaker 2: I think what I'm hearing you say, and maybe this should have been the title of our presentation, is the saying "an ounce of prevention is worth a pound of cure." Here it's probably worth a ton of cure, not a pound. And that's the whole idea of what we're talking about: you need to make sure that you're taking the small steps ahead of time so you don't have to deal with this stuff going forward.
00:16:51
Speaker 1: So the next incident is really interesting. It was a very high-profile incident; again, I can't mention the customer name because I'm talking about the details. It started from the incident perspective, not from the investigation perspective: it started with us getting a phone call from a customer that had a ransomware demand, double ransomware (I'll explain that in a minute), all their systems shut down, and evidence that very sensitive data had been leaked outside of the organization. That's what started the case. As we investigated (I'll dive into that in a minute), we discovered that this whole thing, including Active Directory leaks and full compromise of servers and everything that happened there, started with no less than a SQL injection. And, you know, I've been doing AppSec for 25 years, and I'm amazed by how prevalent SQL injection still is. It's the most basic thing, but it's still out there a lot, and if we don't test for it and work at it, we'll have it. So in this case: hundreds of thousands of leaked records and documents, extremely sensitive data, complete remote takeover of servers, full wiping, and a double ransomware demand. For those of you who are not familiar with double ransomware: ransomware started as "I'm going to delete/encrypt all your data; if you pay me money, I'll give you your data back." The problem with ransomware, which otherwise works really well for the attacker, is backups: if you have good backups, most organizations will not pay the ransom. So attackers started doing double ransom. Before they encrypt, they download the data, and then they tell you: you need to pay. If you pay, we give you your data back; if you don't pay, we publish it on the darknet, or we sell it, or we do something bad with it. That puts you in a very unpleasant position. So that was the case here.
Next slide. I'm not going to go through all of it, but like I said, it started with a SQL injection in a QA server that was, for some odd reason, open to the Internet. Let's skip to the next slide. The attackers started by scanning; they actually ran a commercial scanner, one of your competitors. But look, they found the flaw with a DAST tool, so if the customer had run a DAST tool themselves, this would have been caught: over 20 million tested URLs and directories. Once they found the SQL injection, they used sqlmap to exploit it. All automated tools, all simple stuff. Next slide. From there, they started getting data. I mentioned the data exfiltration from the database server: they downloaded records, usernames, passwords, credit cards, personal information, but they also used the access to discover internal IP addresses and server names. Then they moved on to the next level, because the database was configured with high privileges, so they were able to start running commands, and they got greedy. This process took place over a few weeks, and nobody noticed anything. They tried this and that, and eventually they uploaded a file using something called Unicorn, an open source tool that lets you upload base64-encoded binaries that get past the Microsoft protections and then install themselves. Through that, they uploaded a bunch of tools for dumping processes, Active Directory structures, everything: the whole shebang. It was real fun. Next slide. Then they established persistency. To cover the case where somebody might patch the server at some point, they put in a bunch of web shells. A web shell is basically an ASP or, in this case, ASP.NET page that they give a name that looks legitimate, like a feature in the app, but that actually runs commands on the server. So they put in two web shells to keep persistency, which they later used to get access more quickly.
And then they worked there for a month. They downloaded data and collected more and more, and then one day they decided to activate. So, next slide: they triggered the attack, they wiped the servers, wiped the backup server, which they had also taken over, published proof that they had sensitive data, and demanded a double ransom. And then we came in. Luckily, the customer had backups on top of those that were wiped. We spent about a week just helping them clean the backups, because the attackers had been there for a month, so all the backups were compromised as well: all the images of the servers had been compromised for months. The customer initially just wanted to restore everything from the previous day's backup, but clearly that's not a great idea, so we had them clean everything up first and then come back from backups. We brought in a negotiator to negotiate the ransom, and the customer eventually decided not to pay. That's a business decision; in this case the customer decided they did not want to cooperate and would accept the legal and regulatory consequences. There's still a lot going on there with the authorities in the US around the data. And the attackers, we know, sold the data and the documents; we have Bitcoin evidence of that. Again, a huge incident, millions of dollars, maybe tens of millions by the time they're done with the regulators, and it could have been prevented with a simple SQL injection detection. Last one.
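Since the whole chain above started with a SQL injection, it's worth showing how small the fix is. A minimal sketch of the vulnerable pattern versus the parameterized one, using Python and SQLite purely for illustration (the incident itself was an ASP.NET stack):

```python
import sqlite3

def find_user_unsafe(conn, username: str):
    # VULNERABLE: attacker-controlled input concatenated into the SQL string.
    # username = "' OR '1'='1" turns the WHERE clause into a tautology.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username: str):
    # SAFE: placeholder binding; input is passed as data, never parsed as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- every row leaks
print(len(find_user_safe(conn, payload)))    # 0 -- no user has that literal name
```

This is exactly the class of flaw a DAST tool or sqlmap finds automatically, and the parameterized version is no harder to write than the vulnerable one.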
00:23:41
Speaker 2: I know.
00:23:42
Speaker 1: Because I'm running out of time.
00:23:43
Speaker 2: I know that the average incident in North America right now, the average incident like this, costs about $8 million per company. You can assume there are a lot of small ones, so that example is one of those that skews the average significantly up.
00:23:58
Speaker 1: We at Mitiga have been part of incidents where the ransomware demand alone was already over $50 million, not including all the rest of the cost. Yeah, incidents are expensive. You don't want to go there. Okay, last one; I'll make it quick. Really stupid. So again, it started with a ransomware note: the entire MongoDB erased, a sample record of stolen data, and a ransomware note that was actually the only record left in the database. The attackers created a new table with that one record in it: pay in Bitcoin, and so on. The customer had a microservice API that was used to load-balance connections to the MongoDB. It would effectively authenticate the rest of the microservices and give them connections straight to the MongoDB. Somebody accidentally opened it to the Internet. It didn't have proper authentication, because it was supposed to be internal only. And so the attackers got a connection to the MongoDB, just a direct connection to the database, erased it, and so on. Luckily, in this case (next slide), this was a good one for us. The customer had backups, so again it was double ransomware, but they didn't need to pay, because they had backups, and we were able to prove that the attackers did not actually download the whole database before erasing it, only a small subset, to show that they had data. We were able to do that by looking at the traffic logs: they didn't show enough traffic going out. So this time the ransom was prevented. Still, the whole thing probably cost them over $100,000 just to deal with the incident, legal, IR and so on. So over to you, Gadi.
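This last incident boils down to an internal-only service that was never given authentication and then got exposed. One cheap safeguard is to make the service itself refuse that combination at startup. A hypothetical sketch (the function and parameter names are ours, not from the incident):

```python
import ipaddress

def check_bind_config(bind_addr: str, auth_enabled: bool) -> None:
    """Refuse to start an internal-only service on a public interface without auth."""
    host = bind_addr.rsplit(":", 1)[0]
    loopback = host == "localhost" or ipaddress.ip_address(host).is_loopback
    if not loopback and not auth_enabled:
        raise RuntimeError(
            f"refusing to bind {bind_addr}: authentication is disabled, "
            "which is only acceptable on the loopback interface"
        )

# Fine: unauthenticated, but reachable only from the host itself.
check_bind_config("127.0.0.1:27017", auth_enabled=False)

# The incident's misconfiguration now fails fast instead of silently
# exposing the database broker to the whole Internet.
try:
    check_bind_config("0.0.0.0:27017", auth_enabled=False)
except RuntimeError as err:
    print(err)
```

A startup guard like this encodes the assumption "internal only, so no auth" in code, so the assumption breaks loudly the day someone changes the bind address.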
00:26:17
Speaker 2: Good. Thanks for all of those horror stories. I think this is where I want to shift now. And again, we encourage you to post questions in the Q&A; I know there's a lively discussion going on in the chat, so feel free to post any questions that you have. I'm tying back into the statement from earlier regarding the ounce of prevention. Realistically, when you look at the research and at what the industry is saying, the reason this is so important is that applications are still the weakest link in your organization. We talk about cloud, we talk about infrastructure, we talk about all of those components, but realistically, when we say applications, we mean the application layer: applications and APIs. With the very significant rise of APIs and the significant increase in the velocity of development, unfortunately security is forgotten and left by the wayside, because developers are under a lot of pressure to release quickly. That means security doesn't take top priority: "if we don't do it now, we can do it later." Unfortunately, as you saw from the incidents Ofer brought up, that is absolutely not the case. You have to find a way to do this earlier, because, realistically, look at these numbers: more than 40% of vulnerabilities actually come from the application layer. There are more stats, but to me the most shocking one is that 89.6% of organizations, so almost 90%, are knowingly releasing vulnerable applications into production, because they don't have time to test them, or because by the time they find vulnerabilities it's too late to remediate them: they found them too late in the life cycle, and the hassle of remediating them would delay deployment. That just tells me that the other 10% are unknowingly releasing these vulnerabilities. So many organizations have these vulnerabilities.
00:28:37
Speaker 1: And I want to add here that, again, I've been around this space for so long, and these numbers are not getting better, despite a huge increase in adoption of some of these technologies. The problem is that development is also becoming faster. So people are finally adopting pen testing, or running a DAST tool once a quarter, which they should have done ten years ago and didn't. They're adopting it now, but now that's not enough anymore. It's like a sliding window: we get better at AppSec testing, but not enough to deal with the pace that development is moving at.
00:29:24
Speaker 2: Yeah, I think it's really the fact that a new approach is needed, because doing the same old thing just doesn't work anymore. To me, there are two staggering things here, and all the points on this slide tie into those two. One: we have increased deployment velocity very significantly. Whereas in the past it was fine to run a DAST tool three or four times a year, because we did three or four releases a year and that covered us, now we have customers that are doing 100 deployments a day. Just do the math quickly: let's assume that you're running your scans three or four times a year, so once every 90 days, and between those scans you have 100 releases a day. That means you have 9,000 releases that you have not tested, which could have a million vulnerabilities in them. The same applies to manual pen testing. Most organizations will do a manual pen test as part of their SOC compliance once a year, or maybe twice a year, and that means that between those two tests you have tens of thousands of releases that you have not tested, and that's where you become vulnerable. The other thing tied to that: for some reason, about 15 years ago, there was a divergence in perception where people said, okay, a bug goes to developers and developers fix it, but a security vulnerability goes to security, not to developers. That's just a ridiculous approach. A security vulnerability is a bug. It should be treated just like any other bug, and it should go to the developers. If you need AppSec support and AppSec help, that's great, but you have to make sure that vulnerabilities are treated in the exact same way that bugs are treated, and when they are severe, you have to make sure they are treated very urgently. So when we look at what is most important, there is still a divergence between developers and application security.
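The back-of-the-envelope math above is worth writing out (the numbers are the ones quoted in the talk, not a general statistic):

```python
releases_per_day = 100     # deployment velocity of a fast-moving team
days_between_scans = 90    # quarterly DAST scans, i.e. 3-4 per year

# Every release in that window ships without a dynamic security test.
untested_releases = releases_per_day * days_between_scans
print(untested_releases)  # 9000 releases between two consecutive scans
```

The same arithmetic against an annual pen test (365 days between tests) yields 36,500 untested releases, which is the "tens of thousands" figure mentioned above.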
That goes back to the whole notion that there is misalignment between the two, and that misalignment needs to be fixed. In order for both developers and security, red team and blue team, to align, you need to make the solutions really, really easy. It's very important to automate the process, and it's very important to have accurate scans, because if the scans are not accurate, developers will look at you and say goodbye. We know that developers are prima donnas; we all come from that background. They want solutions that talk to them in a way that they can understand.
00:32:25
Speaker 1: I want to add one thing on that. All of these things, ease of integration, ease of use, accuracy, they all translate into velocity. For DevOps people, everything is velocity. We come to them and say, oh, you need to be secure, but that's not their agenda; their agenda is velocity. So if we want them to be secure, we need to offer them the tools to do security while maintaining velocity. That's why all of these things are so important. It has to be seamless; it cannot slow down the cadence of the CI/CD chain.
00:33:13
Speaker 2: Absolutely. I'm skipping a couple of these slides to also point out what, to me, is the exciting thing about this: if you actually do these things right, and you deploy a solution that integrates early into your software development lifecycle and enables developers to find vulnerabilities and remediate them early, the cost to the organization is significantly lower. If you look at all the steps in the development lifecycle, and you integrate the DAST tool as early as possible, say not even at the pull request but at the unit-testing level, the minute you have compiled the code, it will be 60 times faster to remediate an issue than if you wait for a production or pre-production scan. That means: I'm a developer, I'm doing a release or running a unit test, I run a quick scan, I find a vulnerability. I know exactly where it is, because I'm checking a very small part of my application. I remediate it, and I run a retest to make sure it's fixed. That can be a five-to-ten-minute process. If I wait until production, then, going back to that earlier example, we found the vulnerability 90 days after it was released, which means I've already done 9,000 releases since then. Now I need to go back, find old releases, rescan, try to find it, and that can take a few days and throw my entire plan out of whack. That's why running these tests early, finding vulnerabilities early and remediating them early is so important. How do you actually do that? That's all great in theory, but what does it translate to? This is a diagram showing how customers typically run our solutions, and you can see that there are different stages in the software development lifecycle.
Stages four and five, pre-production and production, are where people historically ran DAST tools. Why did they run them there? Because the solutions were focused on the AppSec team. The results came back in AppSec speak, which meant only AppSec professionals could use them. They took a long time to run, so you couldn't run them as part of your development cycle; there are many components that go into the need to crawl these applications, and so on, and that's why they took so long. So you didn't have a choice: you had to run them late, because if you tried to run them early, your entire development cycle would be thrown out of whack. We've changed that, and we've built the solution from the ground up to be focused on developers. We still enable the AppSec team to provide the governance, to say what needs to be scanned, how often it needs to be scanned, and so on, but we make sure it's automatic and integrated into the development lifecycle. That really comes down to a discussion between AppSec and development, or, hopefully, a unified team working on this, about how exactly the tool should be deployed. You can run it at various stages of the development lifecycle. We have customers running it as part of their CI; this example shows Azure DevOps, but it can be any CI you can think of; we're integrated with all of them. It can be part of unit testing, directly integrated into the IDE and run very early in the development lifecycle. It can be part of QA automation: if you're already using Selenium, we can take the outputs of Selenium and make sure we scan those, so you have a secure application as part of testing. Or, the best practice, do it in all of them: start as early as you can and run very quick tests early.
We learn about your application as you build it and as you expand it, and we help you automate and make the tests at later stages in the lifecycle much, much more efficient based on what we've already learned. What does the actual process look like? Let's say I commit code to Git. That triggers the CI process, the CI process calls our application, and we trigger a scan based on the definitions that were made: whether it's an API scan or a web app scan, whether you're using a crawler or an HAR file to target it. All of those are enabled, and you can define the rules for what happens when the scan runs and finds vulnerabilities. Do you want the build to fail (assuming you're integrating into the build) when you find the first high-severity vulnerability, or when you find multiple medium-severity vulnerabilities? Or do you want the build not to fail, but simply to open tickets automatically for the developers when a vulnerability is found, giving the developers all the information they need in order to take action? Because just finding the vulnerability doesn't help anybody. We need to make sure that we tell the developer: this is the entry point it's in, these are the parameters it's under, here is the potential impact, here is how you remediate it, here are code samples, here's proof of the vulnerability. We make sure they understand these are not false positives: we've eliminated the false positives with an automated tool, so developers can actually take action. And that's how you automate this process. Obviously, if there is an AppSec team and they want visibility into all the information, they get access to all the data through our web portal, but the developers never have to leave their development environment.
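The build-gating policy described above ("fail on the first high, or just open tickets") typically lives in a small script between the scanner's report and the CI exit code. A hypothetical sketch, assuming the scanner emits a JSON list of findings with `severity`, `name`, and `entry_point` fields (these field names are our illustration, not NeuraLegion's actual output format):

```python
import json

# Policy: only these severities block the pipeline; everything else gets a ticket.
FAIL_ON = {"high", "critical"}

def gate(findings: list) -> int:
    """Return a process exit code for the CI step: nonzero fails the build."""
    blocking = [f for f in findings if f.get("severity", "").lower() in FAIL_ON]
    for f in blocking:
        print(f"BLOCKING: {f.get('name')} at {f.get('entry_point')}")
    # Lower-severity findings would be routed to the ticketing system here
    # instead of failing the build.
    return 1 if blocking else 0

report = json.loads(
    '[{"severity": "medium", "name": "verbose server header", "entry_point": "/"},'
    ' {"severity": "high", "name": "SQL injection", "entry_point": "/login"}]'
)
print(gate(report))  # 1 -- the high-severity finding fails the build
```

The point is that the threshold is a one-line policy decision the development and security teams agree on together, not something baked into the tool.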
It goes back to what Ofer said earlier about automating the process and making it easy and very usable for both the developers and the AppSec team, so they can both take advantage of it. So that’s how you automate the actual process. Yep. Go ahead.
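The build-gating rules described above can be sketched in a few lines. This is purely an illustrative sketch, not NeuraLegion’s actual CLI or report format: the JSON shape, field names, and severity threshold here are all assumptions, standing in for whatever your scanner and issue tracker actually expose.

```python
import json

# Severities that should break the build; everything else just gets a ticket.
# This threshold is a policy choice, agreed between AppSec and development.
FAIL_ON = {"critical", "high"}

def gate_build(report_json: str) -> int:
    """Return 1 (fail the build) if any blocking finding exists, else 0."""
    findings = json.loads(report_json)
    blocking = [f for f in findings if f["severity"] in FAIL_ON]
    ticketed = [f for f in findings if f["severity"] not in FAIL_ON]

    for f in ticketed:
        # In a real pipeline this would call your issue tracker's API with
        # the entry point, parameters, impact, and remediation guidance.
        print(f"ticket: [{f['severity']}] {f['name']} at {f['entry_point']}")

    for f in blocking:
        print(f"BLOCKING: [{f['severity']}] {f['name']} at {f['entry_point']}")

    return 1 if blocking else 0

# Demo with made-up findings:
demo = [
    {"severity": "high", "name": "SQL injection", "entry_point": "/login"},
    {"severity": "medium", "name": "Verbose error page", "entry_point": "/search"},
]
print("exit code:", gate_build(json.dumps(demo)))  # a high finding fails the build
```

In a CI step, the non-zero return value would be passed through as the process exit code so the pipeline stage fails only on the agreed severities.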
00:39:57
Speaker 1: I just want to say here, it’s great to offer this variety. But I will say, this is part of befriending DevOps and making things work: try to keep build failures to a minimum and use the rest of the ways to work. Because if you keep failing the build, they don’t like you.
00:40:23
Speaker 2: Yeah. What we’ve definitely seen is that most of our customers will only fail the build on high or critical issues. So going back to your SQL injection example or XSS examples, or vulnerabilities like that where it’s a really significant issue, they will fail the build on those. Most customers will not actually fail the build on anything else; they will just open tickets and make sure that the issues are remediated. And it is absolutely a discussion between the development organization and the security organization, because one of the things that we coined about a year ago was the happiness factor: how do we increase the happiness between developers and the AppSec teams, and reduce the antagonism that exists between the two? Because if you’re giving both sides the tools that they need to do their work, you’re able to reduce that antagonism. Obviously, you’re not going to completely eliminate it, but you are helping them better align and make sure that they at least have a common language to speak. And that’s actually what this whole slide is about. I’m not going to get into all the details, but you can see how, from the ground up, we’ve thought about every aspect of the solution, taken both sides of the coin, both the developers and the needs of AppSec, and tried to marry the two. They don’t always chime together, and sometimes there are conflicts and you need to make decisions. But we give organizations the leverage to decide how far they want to go in terms of the time that scans take, in terms of usability, in terms of coverage, in terms of the payloads that they’re scanning for, so they can make those decisions and get to a more harmonious place within their AppSec program.
00:42:32
Speaker 1: And yeah, I think for me, bullet number two is super critical: easy to use for developers. Because a lot of these tools were built for security people, and the persona the product manager optimized for was the security engineer or the security analyst. And what’s easy for them is not what’s easy for developers, right? So this is a big deal. Optimizing the UI, the experience, the workflow for developers is what really changes the ability of developers to use it, even if security people like it a little bit less for that.
00:43:20
Speaker 2: And in the end, if it’s not getting adopted and issues are not getting remediated, it really doesn’t matter what you’re doing. You have to make sure that issues are being remediated, and remediated early, so you are more secure. Sorry, I don’t mean to take business away from you, but you don’t want to get into a situation where you need to go to the Wolf.
00:43:48
Speaker 1: Don’t worry about my business. There are enough people that don’t do what they should, and we get business.
00:43:53
Speaker 2: Well, plus, your model is to deploy early with them and make sure that you’re helping them in case something happens. So you’ve moved over to the dark side of prevention instead of the cure. So that’s good. Perfect. A couple of other points, and Ofer, I saw that we had a few questions come in, so if you want to read those out in a second. But one: don’t take my word for it. Just go to our site, neuralegion.com, sign up for free and start using the product. Start doing the prevention for free and get the value from the solution immediately. You can start small and increase from there as you want, but make sure that you are utilizing that prevention early.
00:44:48
Speaker 1: Absolutely. Well, that’s answered one of the questions that was on the Q&A, so that’s great. We’ve got a question for Ofer: does switching to cloud native reduce AppSec risk?
00:45:02
Speaker 2: So I get this a lot, and I get it, by the way, both on infrastructure and on apps. And unfortunately, the answer is no, even though we would love it to be yes. Specifically, when we look at cloud-native apps, the cloud-native part is a lot of the infrastructure, right? The database, the servers, the platform as a service, we get all that. But at the end of the day, the vulnerabilities are in the code that you write, or the developers write, and that code still runs, even if it runs in a serverless or any other type of platform. It’s still running the code, and the code can still be vulnerable. We get a lot of infrastructure that we can use for building cloud-native applications more securely, but we need to use it. And if we don’t use it, we still get the same problem. And we also get new problems, because new infrastructure introduces new vulnerabilities. So I’m a big fan of moving to the cloud, but it doesn’t solve the problem. You still need to work at building secure code and testing for secure code.
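The point that application code stays vulnerable no matter where it runs can be shown concretely. Below is a hedged, made-up example (not from the webinar; the table, handler names, and payload are illustrative): a handler like this could run on any serverless platform, and the platform does nothing to stop the SQL injection in it.

```python
import sqlite3

def vulnerable_handler(conn, username: str):
    # String concatenation lets "username" rewrite the query itself.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def fixed_handler(conn, username: str):
    # Parameterized query: the driver keeps data and SQL separate.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Tiny in-memory database standing in for whatever managed store you use.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"                       # classic injection payload
print(len(vulnerable_handler(conn, payload)))  # leaks every row: 2
print(len(fixed_handler(conn, payload)))       # matches nothing: 0
```

The infrastructure is identical in both cases; only the code differs, which is why the testing still has to target the code.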
00:46:16
Speaker 1: Yeah. Okay. Thank you. Gadi, a question for you. I’m assuming developer adoption is key to achieving security automation, but we failed on more than one occasion with DAST: FPs, which I’m assuming means false positives, hard to integrate and slow. How is NeuraLegion dev-first?
00:46:38
Speaker 2: Yeah, I think when we started the company, or actually when we figured out what we wanted to do a couple of years ago as a company, that was our entire approach. We understood that DAST was really falling out of favor. If you look at the adoption of DAST over the last decade, there was a big ramp and then a decline in adoption and usage, because as organizations shifted into DevOps practices, they understood that it doesn’t work anymore: it was built for AppSec, it was not targeted towards developers, it had false positives, it didn’t run quickly enough. And those are exactly the considerations that we built into the product. To be very honest, right, it’s not perfect yet. It doesn’t run in 100% of cases for developers, but we are constantly in that mindset of how we make sure that we live up to our dream of application security from build to compliance. You can run it on every build, and you can run it very early in the development process by developers, while enabling the organization to get to compliance. All of the points that you raised are addressed as part of that solution to make that happen.
00:48:03
Speaker 1: Yeah. Thank you, Gadi. And I think you’ve actually touched upon a few points that Bryan asks as well. For developers, ease of use is very important, but they’re looking for the ability to test this as a team in order to pitch an automated solution to security and indeed to their management. So it’s hard to sell, he says. Why hasn’t there been a transition to making these tools freemium to lower the barrier? So, Gadi, I don’t know if you want to touch upon that again.
00:48:35
Speaker 2: I want to.
00:48:36
Speaker 1: I want.
00:48:37
Speaker 2: To answer that, actually. Yeah, I have two answers. And actually, you can try NeuraLegion for free, as Gadi says, just for this purpose. But I think at the end of the day, you know, we all like open source, but building AppSec technology is very complicated and it’s very expensive. As somebody who’s built multiple products in the space, I can tell you it’s not something where you can easily take two or three developers, build something that works pretty great, and open source it. So there is a cost for you to build secure software. But I think the challenge here is different. It’s not about why we have to pay for it. It’s about understanding why we have to pay for developers, right? Why do we have to pay for cloud services? It’s just a cost that is part of doing business. And if you don’t pay that, if you don’t secure your software, you end up paying a much higher cost, and trust me, I see those cases. So that should be the sell, not “oh, we need to do something, let’s get it for free”, because you’re not going to AWS and asking for servers for free. It’s about understanding that there is business value in doing secure software, secure infrastructure, security across what we build.
00:50:06
Speaker 1: Yeah. And one last question that’s come through. It seems that the DevSecOps concept is often misunderstood, being confused with security operations automation. It’s also seen as a thing for security or AppSec, but not developers. Do we have a similar experience? Gadi, I’m sure you can touch upon that and how the tides are changing.
00:50:29
Speaker 2: Yeah, I definitely think we’re seeing a big shift in the market where many organizations are understanding that having security as a bolt-on, and that ties both to what Ofer said and to the question, just doesn’t work. You have to change your mindset to say security is exactly like any other bug. There’s absolutely no difference; maybe actually it’s more risky than other bugs. And you have to address security vulnerabilities just like you would address bugs of the same severity, which means that the developers have to be part of the solution and have to be very engaged and very involved. The tools that you deploy in your organization have to apply to those developers: you can’t try to force their hand or make them use a tool that’s unnatural. And that’s the shift that is coming. We’re already seeing it in other parallel industries, right? Look at what companies like Snyk have done there: they’ve built a whole multi-billion-dollar organization in software composition analysis around this, and this has to happen in other parts of the AppSec space.
00:51:47
Speaker 1: Yeah. Okay. Well, thank you very much to Ofer and Gadi for joining us. A really great, informative webinar, and I hope everyone enjoyed it. As you can see in front of you, you can try it out for free at app.neuralegion.com/signup, with all the capabilities of the solution, to really try it out whether you’re a developer or indeed an AppSec professional. If you do have any further questions, please do raise these directly with NeuraLegion via any of our social handles, or indeed to support or info at neuralegion.com. Of course, if you’re looking for incident response, then please do reach out to Mitiga at mitiga.io. But it’s been a pleasure having you all. Thank you very much for your time. And yeah, stay safe and stay secure.
00:52:47
Speaker 2: Thanks, Ofer.
00:52:48
Speaker 1: Thanks, Ofer. Thank you.
00:52:49
Speaker 2: Thanks, everybody.