Practical Application Security Testing in CI Pipelines [Part Two]

In this second and final article of the Practical Application Security Testing in CI Pipelines series, Azunna explains the implementation of security checks within stages of ...

Azunna Ikonne

Security Engineer


When integrating security tests, you can perform static analysis, dynamic analysis, or both. Static analysis typically covers software composition analysis, secrets detection, and the identification of coding flaws and security misconfigurations, while dynamic analysis involves vulnerability scanning of the running web application. Static analysis can occur before or after a project has been built with its respective build commands, while dynamic analysis occurs after the deploy stage.

In some cases, especially with containerized applications, you can perform dynamic analysis directly in the pipeline, or, where there are multiple service dependencies, in a dedicated environment for dynamic analysis before deploying the application to production.

Static and Dynamic Analysis breakdown

Choosing a security testing workflow

In Part 1, I introduced some considerations when selecting security tools in CI pipelines; this will be an important factor when designing your security testing workflow.

 A security testing workflow should cover both the software and infrastructure aspects of security. The diagram below distinguishes different types of security testing activities according to what aspect they cover.

To design a security testing workflow for a CI pipeline, I will use an example organisation to walk through the key considerations and arrive at suitable decisions for the workflow to be implemented.

XYZ Org Example

XYZ org wants to implement security testing for their software applications, which include a mix of backend API services and frontend apps, all written in Node.js. Their projects are hosted on GitLab, and the company has provided the following requirements:

  1. Prevent developers from committing secrets and .env files to the source code repository.
  2. Review the software packages for security vulnerabilities.
  3. Automate source code review for security flaws.
  4. Identify vulnerable Docker images before pushing them to the private registry.
  5. Prevent IaC security misconfigurations.
  6. Perform automated vulnerability assessment on key APIs for common vulnerabilities (XSS, Injection, CORS, security header misconfigurations).

In addition to the security testing requirements, XYZ org would like to keep the pipeline runtime as fast as possible to avoid affecting the speed of pushing frequent changes to production. 

Finally, to comply with internal and external security policies, XYZ org would like to fix all Critical and High vulnerabilities before deploying changes to production while adding other issues to their backlog. 

Let us look at two considerations that will be useful in meeting XYZ org’s requirements:

  • Commit Strategy – It will be beneficial to understand what security checks will be put in place during source code commit. The first requirement is to prevent developers from committing secrets to the repository; an ideal implementation is to use pre-commit hooks to achieve this. Other implementations exist, such as using secrets scanning tools with SCM webhooks to monitor commits to the repository, but these tend to be more complex alternatives.
  • Pipeline Strategy – Triggering security scans in the right stages will help maintain the speed of development. Let’s look at two different types of Gitlab pipelines and what benefits they could offer in security testing.
    • Basic Pipeline: The basic pipeline is the simplest GitLab pipeline, with jobs that run concurrently in each stage. When implementing security checks in a basic pipeline, you combine the software build jobs with the security testing jobs. An example is shown below:
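As a minimal sketch of such a basic pipeline for a Node.js project, the `.gitlab-ci.yml` below interleaves security jobs with build jobs. The job names, the choice of Gitleaks and npm audit, and the `deploy.sh` script are illustrative assumptions, not XYZ org's actual configuration:

```yaml
# .gitlab-ci.yml — basic pipeline: jobs in a stage run concurrently,
# and each stage waits for the whole previous stage to finish
stages: [test, build, deploy]

secrets_scan:
  stage: test
  script: gitleaks detect --source . --no-git   # scan the checkout for hard-coded secrets

dependency_audit:
  stage: test
  script: npm audit --audit-level=high          # software composition analysis

build_app:
  stage: build
  script:
    - npm ci
    - npm run build

deploy_app:
  stage: deploy
  script: ./deploy.sh                           # placeholder deploy step
```

Because `deploy_app` waits for every job in the earlier stages, any slow or failing security job delays the deployment, which leads to the drawbacks below.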

Some drawbacks of using this approach are:

  • Increased pipeline execution time, which means deployments will take longer.
  • High risk of pipeline failure, which will affect deployments.

    • DAG Pipeline: DAG (directed acyclic graph) pipelines work by using the ‘needs’ declaration. This means that certain jobs in later stages can run without waiting for all the jobs in the current stage to finish. This can speed up the development process if the job relationships are defined properly.

In the workflow above, the jobs needed to build and deploy the application are defined with the ‘needs’ declaration. This means that the time it takes to deploy the application stays the same while the security jobs run in parallel. This method is suitable for deployments to development or staging environments, where you don’t want security checks to slow down developers implementing and testing their changes. The main drawback of this workflow is that quality gating for security vulnerabilities will not be as effective as in the basic pipeline model.
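A hedged sketch of a DAG pipeline of this shape is shown below; the job names and the Semgrep/deploy commands are illustrative assumptions. The key point is the `needs` declarations, which let the deploy path run independently of the security jobs:

```yaml
# DAG pipeline sketch: deploy_app depends only on build_app, so deployment
# proceeds as soon as the build finishes, while security jobs run in parallel
stages: [test, build, deploy]

sast_scan:
  stage: test
  script: semgrep scan --config auto   # illustrative SAST job

build_app:
  stage: build
  needs: []                            # start immediately; don't wait for the test stage
  script:
    - npm ci
    - npm run build

deploy_app:
  stage: deploy
  needs: [build_app]                   # only the build is required before deploying
  script: ./deploy.sh                  # placeholder deploy step
```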

Other types of GitLab pipelines exist, such as Parent-Child pipelines, Merge Request pipelines and Merged Results pipelines, which can be useful when deciding how you want to implement security testing. You can also use tag-triggered and scheduled pipelines to decide when you want to run security tests.

When deploying to multiple environments, using the right deployment strategy is very important. For example, if XYZ org maintains three deployment environments (dev, staging and production), they could use tag pipelines for software releases and implement the DAST stage in tag pipelines. 
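As a sketch of that approach, a DAST job can be restricted to tag pipelines with a `rules` clause. The ZAP baseline scan and the `STAGING_URL` variable are illustrative assumptions:

```yaml
# Run the DAST job only in tag pipelines (i.e. software releases)
dast_scan:
  stage: dast
  script: zap-baseline.py -t "$STAGING_URL"   # illustrative ZAP baseline scan against staging
  rules:
    - if: $CI_COMMIT_TAG                      # job exists only when the pipeline is tag-triggered
```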

Job controls (rules, only and except) can be used to indicate when certain jobs should run. For example, XYZ org can decide to run dependency analysis jobs only when changes are made to package manifests during commits or merge requests.
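For a Node.js project, that rule might look like the following sketch (the audit command is an illustrative choice):

```yaml
# Run dependency analysis only when a merge request changes the package manifests
dependency_audit:
  stage: test
  script: npm audit --audit-level=high
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - package.json
        - package-lock.json
```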

A scheduled pipeline can also be used to run dependency analysis and image scanning jobs regularly to identify newly disclosed CVEs.
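One way to sketch this is a job that only runs in scheduled pipelines; the Trivy scanner and the `IMAGE_NAME` variable are placeholder assumptions:

```yaml
# A nightly scheduled pipeline re-checks the built image for newly disclosed CVEs
container_scan:
  stage: test
  script: trivy image "$IMAGE_NAME"            # IMAGE_NAME is a placeholder variable
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"    # run only in scheduled pipelines
```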

Some things to note when implementing security testing:

  • Quality Gating – Vulnerabilities must be addressed before changes reach the production environment. Quality gating provides a form of security assurance, especially when compliance requirements need to be fulfilled.
  • Feedback, Notifications and Alerting – Feedback is essential in CI pipelines because it allows pipeline improvement and monitoring. Getting feedback on security scan results can prompt developers to address security concerns while they make changes to code.
  • Vulnerability Management – Vulnerability management is essential for keeping track of different projects’ security statuses. Vulnerability management platforms like DefectDojo offer several integrations with security testing tools; this allows you to consolidate all security vulnerability information relating to a particular project into one platform as the source of truth.
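The quality gating point above can be sketched with a scanner that fails the job when findings cross a severity threshold, matching XYZ org's "fix all Critical and High" policy. Trivy's `--exit-code` and `--severity` flags support this; the job name and `IMAGE_NAME` variable are placeholder assumptions:

```yaml
# Quality gate: fail the pipeline if the image has Critical or High vulnerabilities
image_gate:
  stage: test
  script:
    # exit code 1 (job failure) only when CRITICAL or HIGH findings exist;
    # lower-severity issues are reported but do not block the pipeline
    - trivy image --exit-code 1 --severity CRITICAL,HIGH "$IMAGE_NAME"
```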

Addressing Security Issues

There are two major ways to address security issues discovered in a CI pipeline:

  • Fix before deploy
  • Deploy, then fix

Depending on your pipeline strategy, you can use any of the above methods to address security issues. 

I will use the following contexts to explain further:

Context A: 

  • If XYZ org decides that they want developers to bear the responsibility of addressing security issues, a suitable approach would be to implement quality gating at each stage of the pipeline. This means that a pipeline will fail if the project does not meet the security requirements established by XYZ org.

Context B:

  • If XYZ org decides to take a more agile approach to security testing, they will go with the second method. Quality gating will typically be implemented per environment; i.e. issues will be fixed in the development environment before deployment to staging is authorized, and the same strategy will be applied for the production environment. In this context, pipeline jobs will not fail due to failed security benchmarks; rather, feedback will be provided to developers through available notification platforms like Slack, and scan results will be consolidated into a vulnerability management platform.
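A hedged sketch of a Context B job is shown below: `allow_failure` keeps the pipeline green, results are kept as an artefact, and an upload step pushes them to a vulnerability management platform. The Semgrep command is an illustrative choice, and `upload-to-defectdojo.sh` is a hypothetical helper script:

```yaml
# Context B sketch: the scan never blocks the pipeline; findings are exported
# and consolidated into a vulnerability management platform instead
sast_scan:
  stage: test
  allow_failure: true    # pipeline continues even if the scan finds issues
  script:
    - semgrep scan --config auto --json > semgrep-report.json
    - ./upload-to-defectdojo.sh semgrep-report.json   # hypothetical upload helper
  artifacts:
    paths: [semgrep-report.json]                      # keep the raw report as a pipeline artefact
```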

XYZ org’s decision will also influence their tool selection. In Context A, platform-less or dockerized tools that can run seamlessly without needing a subscription to an API integration platform will be prioritized. Open-source and internally developed tools will be the go-to selection for security testing implementation. The ability to view scan results either as pipeline artefacts or in job outputs will also be essential for addressing security vulnerabilities.

Some examples of tools to consider in this context are:

In Context B, a criterion for tool selection is the ability to visualize scan results on a dashboard or platform. XYZ org would opt for more complex open-source tools or commercial products.

Some examples of tools to be considered will include:

Some other factors to consider in implementing a security testing strategy are:

  • The ability to integrate with other security tools, such as vulnerability management, notification and alerting platforms
  • The scan result/report formats, and scan artefact creation and storage
  • REST API-based platforms for developing custom tools to be used in CI pipelines.
