This article is adapted from the source: https://www.stickyminds.com/article/integrating-security-and-testing-practices
Summary: QA and information security use different methods to reach the same goals. When the two teams work together, they can have a greater impact on the security of products. Here's how QA can collaborate with infosec to enforce strict security standards, prioritize what needs to be tested, and get faster feedback on processes, ultimately seeing fewer security issues in the product.
QA is often expected to catch a certain percentage of defects, which means it is QA's duty to find as many different kinds of errors as possible. Most of us are comfortable pursuing a wide range of test cases, from expected behavior to unexpected edge cases, and perhaps even malicious inputs.
But how many of us feel comfortable sitting across from an auditor and talking about security testing? QA runs into this situation many times in the industry. The discomfort is especially acute if the company does not have strong security requirements, or does not apply them to every project. QA needs to talk to information security to ensure the right requirements are in place.
The security team has a problem of its own: how can they ensure that security requirements are met? Penetration testing, red teams, and dynamic scanning are all rigorous ways to find vulnerabilities in your software, but they are usually applied after release, once the software is available to the public. Static analysis tools offer an easier, repeatable option, and give QA a way to certify that products meet security standards.
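To make the static-analysis idea concrete, here is a minimal sketch of the kind of rule such tools apply, written with Python's standard `ast` module. The rule set (flagging calls to `eval` and `exec`) is a tiny illustrative assumption; real tools such as Bandit or SonarQube ship hundreds of rules.

```python
import ast

# Hypothetical minimal rule set: function names considered dangerous.
DANGEROUS_CALLS = {"eval", "exec"}

def flag_dangerous_calls(source: str) -> list:
    """Return (line number, function name) for each call to a flagged function."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)"
print(flag_dangerous_calls(sample))  # → [(1, 'eval')]
```

Because the check runs on source code alone, it can be wired into CI long before release, which is exactly the repeatability advantage the article describes.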
All of these are good practices, but they may not be enough. Many security bugs surface after release, angering customers. That gap is a great opportunity for QA to add value to the security team.
Perhaps many testers found themselves in this position a few years ago: a new security bug appears, the business asks why the team did not take the time to find it, and the team has very little knowledge of how to test for such flaws effectively. The team practices risk-based testing, but does not know how to isolate security risks. It is time to build skills, so taking classes, doing research, and talking to infosec are essential. That is where the team learns the basics of threat modeling.
Testers already know that if you only test where you think there may be problems, you will miss subtle flaws that can become important in the software. Typical risk-based testing (RBT) practices do not include security concerns and often take a narrower view of the project's architecture. The whole point of RBT is to test the highest priorities first, so if an RBT practice is not capturing the highest-priority risks, that practice is seriously flawed. Adding a threat model gives a much more complete picture and can completely reorder test priorities.
Start by getting a data flow diagram for the system under test. For the purposes of this article, let's use a sample web application that includes a login, a database query, and the ability to print generated reports. If we prioritized based on that alone, we might approach the login first, driven by experience with buggy authentication systems, then the database query.
Apply a threat modeling method; whichever one you use, it only matters that it works for your system and your company. It may show that while we chose the login function as the highest risk based on intuition and practical experience, the real highest priority is protecting sensitive customer data. Moreover, all of the highest-priority items are found by the threat model, not the original RBT matrix.
We still need a way to prioritize even further, to make the best use of limited resources. This is another kind of shift left: not moving QA up the development chain, but introducing security testing earlier than it would otherwise start.
The answer is strict security standards. Parts of the software industry have regulations that spell out exactly what must be done, but even without them, lists like the OWASP Top Ten and the CWE Top 25 provide a set of vulnerabilities to assess and remove from the software. Each point of a standard and each item on those lists yields at least one test case. Developing common test cases from those standards allows other testers to easily access the requirements and start implementing them.
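One way to make "each item yields at least one test case" routine is to generate test-case stubs from the checklist itself. This is a minimal sketch: the entries paraphrase OWASP Top Ten category names, and the stub fields (`id`, `title`, `status`) are assumptions for illustration, not a real tracker schema.

```python
# Abbreviated OWASP Top Ten-style checklist (category IDs are real,
# the short titles are paraphrased).
CHECKLIST = [
    "A01: Broken Access Control",
    "A02: Cryptographic Failures",
    "A03: Injection",
]

def build_test_cases(checklist, feature):
    """Produce one test-case stub per checklist item for the given feature."""
    cases = []
    for item in checklist:
        category_id = item.split(":")[0]
        cases.append({
            "id": f"{feature}-{category_id}",
            "title": f"Verify '{feature}' against {item}",
            "status": "not_run",
        })
    return cases

for case in build_test_cases(CHECKLIST, "login"):
    print(case["id"], "-", case["title"])
```

Generating the stubs mechanically guarantees that no checklist item is silently skipped, and gives less security-experienced testers a concrete starting list.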
Of course, we cannot skip our functional tests to focus strictly on security. Security testing adds many test cases, and they often demand both expertise and time. One lesson emerged quickly: one person cannot support every project, and the other testers on the team were not trained to do even the basics without guidance.
Our solution was, once again, collaboration. The information security team started a security champions program, and the team's effective size doubled. Champions took on roles in other parts of the organization, opening a new path for testers to get important information. Not only could we ask security questions directly, but a dedicated QA security role also took on much of the work.
We closed the feedback loop by feeding information back to infosec. QA tracked how many bugs had security implications; the security team shared their resolutions, and we received reports from the infosec scans.
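The metric QA feeds back can be as simple as a tally of security-tagged bugs by severity. This sketch assumes a hypothetical bug-tracker export where security-relevant bugs carry a `"security"` tag; field names are illustrative only.

```python
from collections import Counter

# Hypothetical records exported from a bug tracker.
bugs = [
    {"id": 101, "severity": "high",   "tags": ["security", "auth"]},
    {"id": 102, "severity": "low",    "tags": ["ui"]},
    {"id": 103, "severity": "medium", "tags": ["security"]},
]

# Filter to security-relevant bugs, then count by severity.
security_bugs = [b for b in bugs if "security" in b["tags"]]
by_severity = Counter(b["severity"] for b in security_bugs)

print(len(security_bugs), dict(by_severity))  # → 2 {'high': 1, 'medium': 1}
```

A report like this, shared each cycle, is enough for infosec to see whether QA's security testing is moving the numbers.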
As a result, after a few months of this work, teams that carried out security testing had fewer bugs, and lower-severity bugs, than teams that did not. Not only that, they had fewer security-related production incidents than teams without intentional security testing.
These difficulties are why security testing during QA is an uncommon practice. Having seen the effects, I can confidently say that it is a difficult path to follow, but it is the reason for the good results in our software product.