Managing and implementing DevSecOps practices can be challenging. In this course, Implementing and Managing GitHub for DevSecOps, you’ll learn about the core functionality the GitHub platform offers for implementing DevSecOps practices. First, you’ll explore the core features GitHub provides to make adopting DevSecOps practices easier. Next, you’ll discover how to configure features such as secret scanning and detection in source code, automatic detection of vulnerable components, and code scanning in the DevOps pipeline. Finally, you’ll learn how to integrate GitHub with other tools to provide a holistic view of source code and GitHub platform security. When you’re finished with this course, you’ll have the skills and knowledge of GitHub needed to implement DevSecOps practices and improve the security posture of your code at an early stage.
In this course, Automating Threat Response with Microsoft Sentinel, you’ll learn what Microsoft Sentinel is and how it can help enable end-to-end security operations. First, you’ll explore core Microsoft Sentinel features. Next, you’ll discover how to configure Microsoft Sentinel. Finally, you’ll learn how to detect threats and automate threat response. When you’re finished with this course, you’ll have the skills and knowledge of Microsoft Sentinel needed to collect security insights, detect and investigate threats, and automate responses to mitigate them.
In this series we are going to talk about building modern identity solutions with Azure identity services like Azure Active Directory, Azure Active Directory B2C (CIAM), or Microsoft Entra Verified ID.
All parts of the series are published on the Tech Mind Factory YouTube channel:
As new types of cybersecurity attacks emerge, DevOps practices alone are not enough. Hackers now use DevOps environments to gain access to the enterprise, including cloud environments.
In this course, Introduction to DevSecOps on Azure, you will learn how to implement DevSecOps practices to harden software supply chain and application solution security by integrating security early in the development cycle using Azure DevOps, GitHub, and Microsoft Azure cloud.
First, you’ll explore what DevSecOps means and how it extends to DevOps. Next, you’ll discover how to use tools for software supply chain security with Azure DevOps, and GitHub. Finally, you’ll learn how to improve the security of an application’s environment on the Azure cloud.
When you’re finished with this course, you’ll have the skills and knowledge of DevSecOps practices needed to implement secure DevOps solutions on Azure.
This is the last article from the series called DevSecOps practices for Azure cloud workloads. Keeping application solutions on Azure secure requires constant security monitoring and evaluation of threats from source code up to running workloads. In this article, I would like to present what Microsoft Sentinel is and how it can help with security incidents in the Azure environment and DevOps platforms.
Microsoft Sentinel is a scalable, cloud-native security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. It provides intelligent security analytics and threat intelligence, delivering a single solution for alert detection, threat visibility, proactive hunting, and threat response.
Microsoft Sentinel requires a Log Analytics workspace to collect and analyze data. It can be a new or existing workspace. Here is the main dashboard of Microsoft Sentinel:
Once the Log Analytics workspace is connected, Data Connectors can be configured to start ingesting data into Microsoft Sentinel. There are many out-of-the-box connectors, both for Microsoft services (which can be integrated in real time) and for non-Microsoft products. We can also ingest data through REST APIs or in Common Event Format (CEF):
Data Connectors Examples:
Microsoft Sentinel provides built-in templates to help create threat detection rules based on data ingested by specific Data Connectors. For instance, once we enable the Microsoft Defender for Cloud connector, we can create incidents (I explain them below) automatically in the Microsoft Sentinel dashboard:
We can also create custom Analytics Rules in Microsoft Sentinel based on our specific needs.
Incidents are groups of related alerts that together represent an actionable, possible threat that you can investigate and resolve. Microsoft Sentinel uses analytics to correlate alerts into incidents. It is important to remember that alerts triggered in Microsoft security solutions connected to Microsoft Sentinel, such as Microsoft Defender for Cloud, don’t automatically create incidents in Microsoft Sentinel. We can decide which built-in rules should create Microsoft Sentinel incidents automatically in real time. By default, when we connect a Microsoft solution to Microsoft Sentinel, any alert generated in that service is stored as raw data in Microsoft Sentinel. Rules can be edited to define more specific filters for which of the alerts generated by the Microsoft security solution should create incidents in Microsoft Sentinel.
Automation rules help investigate incidents in Microsoft Sentinel. They can be used to automatically assign incidents to the right person, close noisy incidents or known false positives, change their severity, and add tags. They are also the mechanism by which playbooks (I am explaining them below) can be run in response to incidents.
Playbooks are collections of procedures that can be run from Microsoft Sentinel in response to an alert or incident. A playbook can help automate and orchestrate the response and can be set to run automatically when specific alerts or incidents are generated. It is worth mentioning that playbooks are simply Logic Apps triggered from Sentinel when an incident or alert is created.
We can create our own playbooks or utilize templates:
Microsoft Sentinel can be used to keep an eye on the security posture of DevOps platforms like Azure DevOps or GitHub. In this article, I would like to show how to monitor Azure DevOps; however, before that, let me write a few things about GitHub as well.
Microsoft Sentinel has a dedicated connector for monitoring GitHub. It enables easy ingestion of events and logs from GitHub into Microsoft Sentinel using the GitHub audit log API and webhooks.
Together with the connector, analytic rule templates are installed, including:
Once we install the above, we have to configure it to properly connect with our GitHub organization.
*IMPORTANT - this connector works only with GitHub organizations with a GitHub Enterprise license.
To connect with GitHub we have to provide the name of our organization together with Personal Access Token (PAT) with admin:org permission:
Once the connection is verified we should see confirmation in the Azure Portal:
Logs should be visible in the custom table called GitHubAuditLogPolling_CL after a few minutes:
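To quickly confirm that data is flowing, we can also run a simple KQL query against this table in the Logs blade - a minimal sketch (the table name comes from the connector; TimeGenerated is the standard Log Analytics timestamp column):

GitHubAuditLogPolling_CL
| where TimeGenerated > ago(1h)
| take 10

If the query returns rows, the connector is ingesting GitHub audit events correctly.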
Once we have data ingested, we can utilize the Workbook template provided by Microsoft:
You may also wonder about the second Workbook template shown above, called GitHub Security - it is used to display information about security events from organizations with GitHub Advanced Security enabled.
Azure DevOps can also be monitored with Microsoft Sentinel. First of all, we have to connect our Azure DevOps organization with Azure Active Directory. Why? Because without it we cannot enable audit logs, which I explain below.
Once Azure AD is integrated, we can enable Audit Logs:
At the moment of writing this article, the Azure DevOps Audit Logs functionality is in preview. You can read more here. With audit logs we can collect information like:
Logs can be accessed under the Auditing tab:
We have to ingest the data into the Log Analytics workspace connected to Microsoft Sentinel. To do it, we can easily configure a connection with our Log Analytics workspace:
We need the Workspace ID and Primary key values, which can be found in the Azure Portal:
Once we connect, the log stream is visible in Azure DevOps:
After a few minutes, the logs should be available for us to investigate:
To aggregate queries together, I created a dedicated Log Analytics Query Pack:
Then I created a dedicated query for each log type. For tests I created three queries:
Here is the code for each query:
This query gets all the logs for removed branch policies for the Azure DevOps GIT repository:
AzureDevOpsAuditing
| where Category contains "Remove"
    and OperationName contains "Policy.PolicyConfigRemoved"
This query gets all modified variable groups in Azure DevOps:
AzureDevOpsAuditing
| where Category contains "Modify"
    and OperationName contains "Library.VariableGroupModified"
This query gets information about installed extensions in Azure DevOps:
AzureDevOpsAuditing
| where Category contains "Create"
    and OperationName contains "Extension.Installed"
Next, based on the above queries, I was able to create scheduled Analytics Rules:
For the Detect removed branch policies in Azure DevOps Repos Analytics Rule I set the below logic:
AzureDevOpsAuditing
| where Category contains "Remove"
    and OperationName contains "Policy.PolicyConfigRemoved"
| extend RepositoryName = tostring(parse_json(tostring(Data)).RepoName)
I also selected Group all events into a single alert to make sure that, based on the above events, one alert will be generated in Microsoft Sentinel.
As the last step, I configured incident creation based on the alerts and the grouping of related alerts triggered by this analytics rule into incidents:
After a short time, incidents were created based on the events in Azure DevOps:
On the main dashboard we can see all aggregated events and alerts, together with incidents:
If we also connect GitHub, we get a comprehensive view of our DevOps platforms’ security:
In this article, we discovered Microsoft Sentinel capabilities and how we can react to security incidents in our Azure cloud environment. We also learned how to monitor DevOps platforms like GitHub and Azure DevOps. I hope you had a chance to read the previous articles from my series called DevSecOps practices for Azure cloud workloads, and that you enjoyed them.
If you like my articles, and find them helpful, please buy me a coffee:
This is the next article from the series called DevSecOps practices for Azure cloud workloads: security monitoring with Microsoft Defender for Cloud and Defender for DevOps. Keeping application solutions on Azure secure requires constant security monitoring and evaluation of threats from source code up to running workloads. In this article, I would like to focus on the Azure cloud environment and the security of DevOps platforms - GitHub and Azure DevOps.
Defender for Cloud is a solution for cloud security posture management (CSPM) and cloud workload protection (CWP). It is used to secure native Azure resources, on-premises resources, and resources in other cloud providers (Amazon AWS and Google GCP). Defender for Cloud finds weak spots across Azure service configurations, helps strengthen the overall security posture of the cloud environment, and can protect application workloads from evolving threats.
Defender for Cloud addresses the most urgent security challenges:
First of all, it is worth knowing that not all Defender for Cloud features are paid.
There are two core, free features of Microsoft Defender for Cloud:
Recommendations are tailored to the particular security concerns found in the Azure workloads. Defender for Cloud not only provides information about security posture but also specific instructions on how to improve it. Recommendations help reduce the attack surface across each Azure resource. The free pricing tier is enabled on all current Azure subscriptions when Defender for Cloud is opened in the Azure portal for the first time or enabled through the API.
The Azure Security Benchmark Azure Policy initiative is automatically assigned to all subscriptions registered with Defender for Cloud. The built-in initiative contains only Audit policies, so it does not break current solutions and configurations - only recommendations are displayed. Defender for Cloud policies are visible in the Azure portal Policies section. Security compliance can be verified on specific Azure resources.
As mentioned above, Azure Security Benchmark policies are automatically assigned to subscriptions connected to Defender for Cloud. Many different policies help improve security posture. Some of them provide information on how to manually fix the security issues:
Some of those policies provide a direct “quick fix” option to fix the issue with security posture of the Azure resource:
As mentioned before, the built-in Azure Policy Initiative contains only Audit policies to not break current solutions and configurations.
Defender for Cloud continually assesses Azure resources and subscriptions for security issues. It then aggregates all the findings into a single secure score. It shows the general, current security situation. The higher the score, the lower the identified risk level.
To take advantage of advanced security management and threat detection capabilities, the standard, paid tier has to be enabled. Below are some features of the paid plan:
Here is an example of security protection for containers - images in the Azure Container Registry are constantly scanned and, if a vulnerability is found, we are informed about it together with recommendations on how to mitigate it:
Defender for Cloud and its Defender plans generate alerts when threats in our cloud, hybrid, or on-premises environment are detected. Each alert provides details of the affected resources, issues, and remediation recommendations. Defender for Cloud leverages the MITRE ATT&CK matrix to associate alerts with their perceived intent, helping formalize security domain knowledge.
Defender for Cloud assigns a severity to alerts to help us prioritize how we attend to each alert. Severity is based on how confident Defender for Cloud is in the:
A security incident is a collection of related alerts. Incidents provide you with a single view of an attack and its related alerts so that you can quickly understand actions an attacker took, and the resources affected.
Here is an example alert that will be sent to our mailbox when a new threat is detected:
Under the Security alerts tab in the Azure Portal we can access all alerts and read the details:
Next, we can take action to mitigate the security risk and solve the problem:
At the moment of writing this article, Defender for DevOps is in preview; it was announced during Microsoft Ignite 2022. I would like to write about it and some really interesting features it brings for secure DevOps and the security of DevOps platforms - GitHub and Azure DevOps.
Until now it was not so obvious how to monitor source code repositories, CI/CD pipelines, and general security and access in DevOps platforms like Azure DevOps or GitHub. This is where Defender for DevOps can help. We can connect Azure DevOps or GitHub (and more platforms in the future) to Defender for DevOps and monitor their security posture holistically.
Key capabilities in Defender for DevOps include:
We can add GitHub and Azure DevOps environments, customize DevOps workbooks to show desired metrics, view our guides and give feedback, and configure pull request annotations. We can review findings like:
There is well-written documentation on how to connect DevOps platforms with Defender for DevOps:
During Microsoft Ignite, it was announced that GitHub Advanced Security will be available for Azure DevOps. It will provide secret scanning, dependency scanning, and code scanning. At the moment of writing this article, this feature is in private preview.
However, while searching the Defender for Cloud documentation, I found a new extension that can be used with Azure DevOps and GitHub called Microsoft Security DevOps. This extension provides great capabilities to increase the security posture of our source code repositories, together with code quality and security. Microsoft Security DevOps is data-driven with portable configurations that enable deterministic execution across multiple environments.
With Microsoft Security DevOps we can:
With this extension, all security incidents are reported back to Microsoft Defender for DevOps (from both Azure DevOps and GitHub).
Here is an example of secret scanning in Azure DevOps:
Here is the security report in the Defender for DevOps dashboard:
It is really helpful because Security Teams receive detailed information about where the secret was detected and can take action to inform developers about it:
Now there is a question - what about GitHub Advanced Security for Azure DevOps and GitHub?
I am not sure yet, but in my personal opinion it will be possible to utilize both options:
The key point is that both of the above can report security issues to Defender for DevOps. Now let me provide some more details.
The extension is available for free from the Azure DevOps Marketplace. Once it is installed in our Azure DevOps organization, we can start using it.
Here is an example job from my sample pipeline to detect secrets:
jobs:
- job: 'Build'
  displayName: "Build Web API"
  pool:
    vmImage: 'windows-latest'
  steps:
  - task: UseDotNet@2
    displayName: 'Install .NET 6 SDK'
    inputs:
      packageType: 'sdk'
      version: 6.0.x
  # Here we scan the code with Microsoft Security DevOps:
  - task: MicrosoftSecurityDevOps@1
    displayName: 'Microsoft Security DevOps scan'
    inputs:
      categories: 'secrets'
  - task: DotNetCoreCLI@2
    displayName: Restore NuGet packages
    inputs:
      command: 'restore'
      projects: '**/*.csproj'
  - task: DotNetCoreCLI@2
    displayName: Build project
    inputs:
      command: 'build'
      projects: '**/*.csproj'
  - task: DotNetCoreCLI@2
    displayName: Publish project
    inputs:
      command: publish
      arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'
      projects: '**/*.csproj'
      zipAfterPublish: true
  - task: PublishBuildArtifacts@1
    displayName: Publish package ready for deployment
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'drop'
Currently, it works only with vmImage: 'windows-latest' agents - this is important because if you use Linux agents, no scanning output will be available.
Microsoft Security DevOps uses the following Open Source tools:
Name | Language | License
---|---|---
Bandit | Python | Apache License 2.0
BinSkim | Binary - Windows, ELF | MIT License
ESLint | JavaScript | MIT License
Credscan | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks, such as those in source code and configuration files; common types include default passwords, SQL connection strings, and certificates with private keys | Not Open Source
Template Analyzer | ARM template, Bicep file | MIT License
Terrascan | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, CloudFormation | Apache License 2.0
Trivy | container images, file systems, git repositories | Apache License 2.0
Please note that the extension utilizes Credential Scanner, which was originally part of the Microsoft Security Code Analysis (MSCA) extension that will be retired on December 31, 2022.
With Credential Scanner we can detect secrets in our source code during the build process, and we do not have to utilize GitHub Advanced Security features. Here is the link to the official documentation to read more.
The action is available for free. We have to add it to our workflow as in the example below:
name: MSDO windows-latest
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:
jobs:
  sample:
    # MSDO runs on windows-latest and ubuntu-latest.
    # macos-latest support coming soon
    runs-on: windows-latest
    steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-dotnet@v1
      with:
        dotnet-version: |
          5.0.x
          6.0.x
    # Run analyzers
    - name: Run Microsoft Security DevOps Analysis
      uses: microsoft/security-devops-action@preview
      id: msdo
    # Upload alerts to the Security tab
    - name: Upload alerts to Security tab
      uses: github/codeql-action/upload-sarif@v1
      with:
        sarif_file: $
Once the scanning is completed, we can access the results under the Security > Code scanning alerts tab. Here is the link to the official documentation to read more.
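If you prefer the command line, the same alerts can be listed through the GitHub REST API - a minimal sketch assuming the GitHub CLI (gh) is installed and authenticated, with owner and repository as placeholders:

gh api /repos/<owner>/<repo>/code-scanning/alerts --paginate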
It is worth mentioning that this action does not utilize the Credential Scanner mentioned above for Azure DevOps. The reason is probably that GitHub natively supports secret scanning and detection for public repositories and offers GitHub Advanced Security for private repositories.
As mentioned before, we can also scan Azure infrastructure source code. Once we enable it, the report is generated in a separate tab:
This is a comprehensive extension for DevSecOps! I am also waiting for a dedicated Microsoft Sentinel connector for Defender for DevOps.
In this article, we discussed how to improve the security posture of the application environment in the Azure cloud with Defender for Cloud and how to monitor DevOps platforms (GitHub and Azure DevOps) with Defender for DevOps. In the next and final article we will see how to detect and respond to security events in Azure with Microsoft Sentinel.
If you like my articles, and find them helpful, please buy me a coffee:
This is the next article from the series called DevSecOps practices for Azure cloud workloads. In this article, I would like to focus on Azure Policies and how we can utilize DevOps automation to automate Azure cloud governance. As always, it is good to understand some fundamentals first.
Probably most of us are familiar with creating resources in the Azure cloud. We can use the Azure Portal, or we can create resources using ARM/Bicep or other tools like the Azure CLI. The typical approach is to create an Azure Resource Group and then create Azure resources inside it. Eventually, we have one container with all Azure resources in one place. However, the real-world situation looks quite different. Organizations that utilize the Azure cloud create many Azure resources inside many Resource Groups and need a way to efficiently manage access, policies, compliance, and to track costs. This is why it is important to understand the core Azure architectural components.
Azure provides four levels of management:
The following diagram shows the relationship between these levels:
Azure arranges Management Groups in a single hierarchy. This hierarchy can be defined in our Azure Active Directory (Azure AD) tenant to align with our organization’s structure and needs. The top level is called the Root Management Group. We can define up to six levels of management groups in our hierarchy, and a Subscription can belong to only one Management Group. Let me collect some important details about each level:
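Before diving into those details, note that the hierarchy itself can also be managed from the command line; a minimal Azure CLI sketch (the management group names and the subscription ID are placeholders):

# Create a management group (it is placed under the Root Management Group by default):
az account management-group create --name "corp" --display-name "Corp"

# Create a child management group under it:
az account management-group create --name "corp-dev" --display-name "Corp Dev" --parent "corp"

# Move a subscription under the child management group:
az account management-group subscription add --name "corp-dev" --subscription "<subscription-id>"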
Now that we understand the management levels and hierarchy of Azure, it is time to talk about how to organize Azure resources and enforce organizational standards like naming conventions or the Azure regions where resources can be created. If we have only a few subscriptions, it is quite easy to manage them independently. However, managing compliance at scale requires a more efficient approach. Azure Policy helps to enforce organizational standards and to assess compliance at scale. It provides governance and resource consistency for regulatory compliance, security, cost, and management. We can define multiple policies to enforce different rules over Azure resource configurations so the configurations stay compliant with corporate standards. We can apply the policies to Azure resources by using Management Groups.
Here are a few use cases for Azure Policies:
Azure Policy is used to create policies that define conventions for Azure Resources. A policy definition describes the compliance conditions for a resource, and the actions to complete when the conditions are met. One or more policy definitions are grouped into an initiative definition, to control the scope of policies and evaluate the compliance of Azure Resources.
There are four steps when creating and using Azure Policies:
It is important to say that many different definitions are already available to us, so we do not need to create all of them from scratch. If we open Azure Policy built-in definitions for Azure Resource Manager, we can access many different definitions. All of them are available on GitHub. In the Azure Portal, we can access definitions and filter them based on our needs:
We can for instance limit Azure regions in which we can create Azure Resources:
All policy definitions have a specific JSON format. Here is an example of a policy definition that restricts the locations our organization can specify when deploying Azure Cosmos DB resources:
{
"properties": {
"displayName": "Azure Cosmos DB allowed locations",
"policyType": "BuiltIn",
"mode": "Indexed",
"description": "This policy enables you to restrict the locations your organization can specify when deploying Azure Cosmos DB resources. Use to enforce your geo-compliance requirements.",
"metadata": {
"version": "1.1.0",
"category": "Cosmos DB"
},
"parameters": {
"listOfAllowedLocations": {
"type": "Array",
"metadata": {
"displayName": "Allowed locations",
"description": "The list of locations that can be specified when deploying Azure Cosmos DB resources.",
"strongType": "location"
}
},
"policyEffect": {
"type": "String",
"metadata": {
"displayName": "Policy Effect",
"description": "The desired effect of the policy."
},
"allowedValues": [
"audit",
"Audit",
"deny",
"Deny",
"disabled",
"Disabled"
],
"defaultValue": "Deny"
}
},
"policyRule": {
"if": {
"allOf": [
{
"field": "type",
"equals": "Microsoft.DocumentDB/databaseAccounts"
},
{
"count": {
"field": "Microsoft.DocumentDB/databaseAccounts/Locations[*]",
"where": {
"value": "[replace(toLower(first(field('Microsoft.DocumentDB/databaseAccounts/Locations[*].locationName'))), ' ', '')]",
"in": "[parameters('listOfAllowedLocations')]"
}
},
"notEquals": "[length(field('Microsoft.DocumentDB/databaseAccounts/Locations[*]'))]"
}
]
},
"then": {
"effect": "[parameters('policyEffect')]"
}
}
},
"id": "/providers/Microsoft.Authorization/policyDefinitions/0473574d-2d43-4217-aefe-941fcdf7e684",
"type": "Microsoft.Authorization/policyDefinitions",
"name": "0473574d-2d43-4217-aefe-941fcdf7e684"
}
We can also create custom policy definitions and even export them to GitHub:
As mentioned before, an Initiative groups policy definitions and includes one or more policies. Initiatives also have a specific JSON format. Here is an example of how we can specify multiple policies for an initiative in the Azure Portal:
This is a sample policy I created. It restricts the Azure regions in which resources can be created to the North and West Europe regions only:
For this specific policy I set the scope to a specific Azure Resource Group - rg-tmf-devsecops-dev:
Here is an example of how the policy enforces compliance. When I try to create an Azure Storage Account in a region other than North or West Europe, the following information is displayed in the Azure Portal:
I mentioned before that Azure Policies and Initiatives are JSON files underneath, so they can be stored in the source code repository. Before we move forward, I would like to underline some important facts about the definition structure and data format.
We can keep Policy and Initiative definitions as pure JSON files, or we can define them using other approaches like ARM templates, Bicep files, or Terraform. Azure Policy as Code is the approach of keeping policy definitions in source control and, whenever a change is made, testing and validating that change. The recommended general workflow of Azure Policy as Code looks like this diagram:
I encourage you to read more about the above approach in the official documentation.
Here we can see examples of pure Policies and Initiatives definitions:
Policy definition in JSON:
{
"properties": {
"displayName": "Allowed locations",
"policyType": "BuiltIn",
"mode": "Indexed",
"description": "This policy enables you to restrict the locations your organization can specify when deploying resources. Use to enforce your geo-compliance requirements. Excludes resource groups, Microsoft.AzureActiveDirectory/b2cDirectories, and resources that use the 'global' region.",
"metadata": {
"version": "1.0.0",
"category": "General"
},
"parameters": {
"listOfAllowedLocations": {
"type": "Array",
"metadata": {
"description": "The list of locations that can be specified when deploying resources.",
"strongType": "location",
"displayName": "Allowed locations"
}
}
},
"policyRule": {
"if": {
"allOf": [
{
"field": "location",
"notIn": "[parameters('listOfAllowedLocations')]"
},
{
"field": "location",
"notEquals": "global"
},
{
"field": "type",
"notEquals": "Microsoft.AzureActiveDirectory/b2cDirectories"
}
]
},
"then": {
"effect": "deny"
}
}
},
"id": "/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c",
"type": "Microsoft.Authorization/policyDefinitions",
"name": "e56962a6-4747-49cd-b67b-bf8b01975c4c"
}
Initiative definition in JSON:
{
"name": "general-allowed-locations-policy-set",
"properties": {
"displayName": "Allowed locations Initiative",
"description": "This initiative contains the policies necessary to limit Azure region deployments for all resources and resource groups.",
"metadata": {
"version": "1.0.0",
"category": "Org Governance"
},
"parameters": {
"AllowedLocations": {
"type": "Array",
"defaultValue": [
"centralus",
"eastus",
"eastus2",
"southcentralus"
]
}
},
"PolicyDefinitions": [
{
"policyDefinitionReferenceId": "allowed-locations-resources",
"policyDefinitionName": "e56962a6-4747-49cd-b67b-bf8b01975c4c",
"parameters": {
"listOfAllowedLocations": {
"value": "[parameters('AllowedLocations')]"
}
}
},
{
"policyDefinitionReferenceId": "allowed-locations-resource-groups",
"policyDefinitionName": "e765b5de-1225-4ba3-bd56-1ac6695af988",
"parameters": {
"listOfAllowedLocations": {
"value": "[parameters('AllowedLocations')]"
}
}
}
]
}
}
To manage and update the above, we can utilize the Azure Policy extension for Visual Studio Code.
We can also define Azure Policies and Initiatives using Azure Bicep. There are basically three main resource types we use:
In this article, I utilize Azure Bicep to create, test, and assign Azure Policies. Let me show you how easily you can start using Azure Policies and Initiatives with Azure Bicep.
Below we can see an example of how to assign a built-in Azure Policy to a specific scope, in this case a specific Resource Group:
@description('An array of the allowed locations, all other locations will be denied by the created policy.')
param listOfAllowedLocations array
param policyDefinitionID string
resource policyAssignment 'Microsoft.Authorization/policyAssignments@2021-06-01' = {
name: 'resources-location-lock'
scope: resourceGroup()
properties: {
displayName: 'Allow only north and west Europe regions'
description: 'Resources can be created only in west and north Europe regions.'
policyDefinitionId: policyDefinitionID
nonComplianceMessages: [
{
message: 'Resources can be created only in west and north Europe regions.'
}
]
parameters: {
listOfAllowedLocations: {
value: listOfAllowedLocations
}
}
}
}
Here is the parameters file content:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"listOfAllowedLocations" : {
"value": [
"westeurope",
"northeurope"
]
},
"policyDefinitionID":{
"value": "/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c"
}
}
}
When we deploy the above policy assignment, only resources in the North and West Europe regions will be allowed to be deployed. We define the listOfAllowedLocations parameter to indicate the allowed regions, and we reference the existing (built-in) policy definition through the policyDefinitionID parameter. If we open the Azure Portal, we can find this policy definition using its ID:
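The same lookup can be done from the command line; a quick Azure CLI sketch (built-in definitions can be referenced by this GUID name):

az policy definition show --name "e56962a6-4747-49cd-b67b-bf8b01975c4c" --query displayName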
We can also define custom policies and then assign them to specific scopes. In this case, we create a custom Policy which will be assigned to a specific Management Group:
targetScope = 'managementGroup'
@description('Target Management Group')
param targetMG string
@description('An array of the allowed locations, all other locations will be denied by the created policy.')
param allowedLocations array = [
'westeurope'
'northeurope'
]
var mgScope = tenantResourceId('Microsoft.Management/managementGroups', targetMG)
var policyDefinitionName = 'LocationRestriction'
resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2020-03-01' = {
name: policyDefinitionName
properties: {
policyType: 'Custom'
mode: 'All'
parameters: {}
policyRule: {
if: {
not: {
field: 'location'
in: allowedLocations
}
}
then: {
effect: 'deny'
}
}
}
}
resource policyAssignment 'Microsoft.Authorization/policyAssignments@2020-03-01' = {
name: 'location-lock'
properties: {
scope: mgScope
policyDefinitionId: extensionResourceId(mgScope, 'Microsoft.Authorization/policyDefinitions', policyDefinition.name)
}
}
We can also create Initiatives with multiple Azure Policies. These policies can be either built-in or custom. Then we can assign the initiative to a specific scope, like an Azure Subscription:
targetScope = 'subscription'
param listOfAllowedLocations array = [
'westeurope'
'northeurope'
]
param listOfAllowedSKUs array = [
'Standard_B1ls'
'Standard_B1ms'
'Standard_D2s_v3'
'Standard_D4s_v3'
]
var initiativeDefinitionName = 'BICEP Example Initiative'
resource initiativeDefinition 'Microsoft.Authorization/policySetDefinitions@2019-09-01' = {
name: initiativeDefinitionName
properties: {
policyType: 'Custom'
displayName: initiativeDefinitionName
description: 'Initiative Definition for Resource Location and VM SKUs'
metadata: {
category: 'BICEP Example Initiative'
}
parameters: {
listOfAllowedLocations: {
type: 'Array'
metadata: ({
description: 'The List of Allowed Locations for Resource Groups and Resources.'
strongtype: 'location'
displayName: 'Allowed Locations'
})
}
listOfAllowedSKUs: {
type: 'Array'
metadata: any({
description: 'The List of Allowed SKUs for Virtual Machines.'
strongtype: 'vmSKUs'
displayName: 'Allowed Virtual Machine Size SKUs'
})
}
}
policyDefinitions: [
{
policyDefinitionId: '/providers/Microsoft.Authorization/policyDefinitions/e765b5de-1225-4ba3-bd56-1ac6695af988'
parameters: {
listOfAllowedLocations: {
value: '[parameters(\'listOfAllowedLocations\')]'
}
}
}
{
policyDefinitionId: '/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c'
parameters: {
listOfAllowedLocations: {
value: '[parameters(\'listOfAllowedLocations\')]'
}
}
}
{
policyDefinitionId: '/providers/Microsoft.Authorization/policyDefinitions/cccc23c7-8427-4f53-ad12-b6a63eb452b3'
parameters: {
listOfAllowedSKUs: {
value: '[parameters(\'listOfAllowedSKUs\')]'
}
}
}
{
policyDefinitionId: '/providers/Microsoft.Authorization/policyDefinitions/0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56'
parameters: {}
}
]
}
}
resource initiativeDefinitionPolicyAssignment 'Microsoft.Authorization/policyAssignments@2019-09-01' = {
name: initiativeDefinitionName
properties: {
scope: subscription().id
enforcementMode: 'Default'
policyDefinitionId: initiativeDefinition.id
parameters: {
listOfAllowedLocations: {
value: listOfAllowedLocations
}
listOfAllowedSKUs: {
value: listOfAllowedSKUs
}
}
}
}
We can deploy Azure Policies defined with Azure Bicep in exactly the same way as we deploy other Azure resources. On the local machine we can utilize the Azure CLI:
az deployment group create --name $deploymentName --resource-group rg-tmf-devsecops-dev --template-file bicep/main.bicep --parameters bicep/parameters/main-deploy.parameters.json
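For the management-group- and subscription-scoped examples shown earlier, the commands differ slightly because the deployment target is not a resource group. A sketch, assuming hypothetical template file names and a management group called corp:

# Custom policy defined and assigned at management group scope:
az deployment mg create --management-group-id corp --location westeurope --template-file bicep/policy-mg.bicep --parameters targetMG=corp

# Initiative defined and assigned at subscription scope:
az deployment sub create --location westeurope --template-file bicep/initiative-sub.bicep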
We can also utilize Azure DevOps or GitHub to deploy Azure Policies and Initiatives. Here is a nice tutorial on how to Implement Azure Policy as Code with GitHub.
In the Azure Portal we can verify compliance after applying specific policies:
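Compliance can also be checked from the command line; a minimal sketch with the Azure CLI (the resource group name comes from the earlier example):

# Summarize policy compliance for the resource group:
az policy state summarize --resource-group rg-tmf-devsecops-dev

# List the non-compliant resources:
az policy state list --resource-group rg-tmf-devsecops-dev --filter "complianceState eq 'NonCompliant'"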
Azure Policies can be stored in the source code repository, so they can be tested before they get deployed. To test Azure infrastructure (not only Azure Policies), there is also a great tool called Pester - a testing and mocking framework for PowerShell. Pester is most commonly used for writing unit and integration tests, but it is not limited to just that. It is also a base for tools that validate whole environments, computer deployments, and database configurations. In this article, I will not explain in detail how to start with Pester, but I recommend checking the link above, as Pester has great documentation and a quick start section.
Here I would like to present a simple test written with Pester to validate Azure Policy. This is the structure of the files on my local machine:
Under the Bicep directory I have the main.bicep file together with a parameters file:
@description('An array of the allowed locations, all other locations will be denied by the created policy.')
param listOfAllowedLocations array
param policyDefinitionID string
resource policyAssignment 'Microsoft.Authorization/policyAssignments@2021-06-01' = {
name: 'resources-location-lock'
scope: resourceGroup()
properties: {
displayName: 'Allow only north and west Europe regions'
description: 'Resources can be created only in west and north Europe regions.'
policyDefinitionId: policyDefinitionID
nonComplianceMessages: [
{
message: 'Resources can be created only in west and north Europe regions.'
}
]
parameters: {
listOfAllowedLocations: {
value: listOfAllowedLocations
}
}
}
}
Here is the parameters file content:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"listOfAllowedLocations" : {
"value": [
"westeurope",
"northeurope"
]
},
"policyDefinitionID":{
"value": "/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c"
}
}
}
Now I use Pester to validate the Azure Policy definition. To start the Pester tests, I created a Main.Tests file:
$Data = @{
AllowedLocations = @(
'westeurope'
'northeurope'
)
TemplatePath = ".\bicep"
}
$container = New-PesterContainer -Path '.\Azure.Policy.Tests.ps1' -Data $Data
Invoke-Pester -Container $container -Output Detailed
Above, I declared the AllowedLocations collection with two Azure regions. I want to validate whether the Azure Policy has those regions defined.
In the Azure.Policy.Tests file I have three tests defined. Let me explain the content below. We have to pass two parameters:
In the BeforeAll block we reference two files:
function Get-DeploymentResources ([string] $TemplatePath) {
$bicepTemplatePath = "$TemplatePath\main.bicep"
$armTemplatePath = "$TemplatePath\main.json"
az bicep build --file $bicepTemplatePath
$maintemplate = Get-Content $armTemplatePath -Raw
$templateJson = ConvertFrom-Json -InputObject $maintemplate
$resources = $templateJson
return $resources
}
function Get-DeploymentParameters ([string] $ParametersFilePath) {
$parametersFilePath = "$ParametersFilePath\parameters\main-deploy.parameters.json"
$maintemplate = Get-Content $parametersFilePath -Raw
$templateJson = ConvertFrom-Json -InputObject $maintemplate
$deploymentParameters = $templateJson
return $deploymentParameters
}
Then, in the Describe block, we define our tests. In this case, I want to validate whether:
The AfterAll block is executed once all tests have run. In this case, we want to delete the main.json ARM template file generated by the Get-DeploymentResources function, which transforms the Bicep file into an ARM (JSON) template.
param (
[parameter(Mandatory = $true)]
[string]$TemplatePath,
[parameter(Mandatory = $true)]
[array]$AllowedLocations
)
BeforeAll {
. $PSScriptRoot/Get-DeploymentParameters.ps1
. $PSScriptRoot/Get-DeploymentResources.ps1
}
Describe 'Azure Policy Tests' {
It 'Allowed locations should contain 2 regions' {
$allParameters = Get-DeploymentParameters -ParametersFilePath $TemplatePath
$count = $allParameters.parameters.listOfAllowedLocations.value.Count
$count | Should -Be 2
}
It 'North and west Europe regions are allowed only' {
$allLocationsCompliant = $true
$allParameters = Get-DeploymentParameters -ParametersFilePath $TemplatePath
$declaredLocations = $allParameters.parameters.listOfAllowedLocations.value
foreach ($location in $AllowedLocations)
{
Write-Host $location
$containsAllowedLocations = $declaredLocations -contains $location
if($containsAllowedLocations -eq $false)
{
$allLocationsCompliant = $false
}
}
$allLocationsCompliant | Should -Be $true
}
It 'NonComplianceMessage should be: Resources can be created only in west and north Europe regions.' {
$allResources = Get-DeploymentResources -TemplatePath $TemplatePath
$message = $allResources.resources[0].properties.nonComplianceMessages[0].message
$message | Should -Contain 'Resources can be created only in west and north Europe regions.'
}
}
AfterAll {
Remove-Item "$TemplatePath/main.json"
}
This is the output after running tests:
Now, when I change the regions in the parameters file, the tests should break:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"listOfAllowedLocations" : {
"value": [
"westeurope",
"northeurope"
]
},
"policyDefinitionID":{
"value": "/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c"
}
}
}
Now that we understand the Policy structure and know that Policies can be stored in the source code repository, we can easily set up deployments using Azure DevOps or GitHub. This is the Azure DevOps pipeline responsible for deploying resources to a specific resource group. I changed the location for the Key Vault to westus and then triggered the pipeline. This is the end result:
Here is the error message displayed in the logs:
Resource 'kv-tmf-devsecops-dev-pt' was disallowed by policy. Reasons: 'Resources can be created only in west and north Europe regions.'
There is also one more approach, which was recently described in detail on the Tech Community Blog here. This approach utilizes pure JSON Policies and Initiatives together with PowerShell scripts. It is also worth checking out!
In this article, I explained Azure cloud governance with Azure Policy and how we can automate policy management using the Policy as Code approach. In this case, we can utilize DevOps automation to update policy definitions and assign them to different scopes in the Azure cloud.
If you like my articles, and find them helpful, please buy me a coffee:
This is the next article from the series called DevSecOps practices for Azure cloud workloads. In this article, I would like to talk about Azure Active Directory together with Azure role-based access control (RBAC), and what their role is in the DevSecOps world. When we talk about DevSecOps practices, we focus on security scanning, secret detection, and, in general, a secure approach to CI/CD. However, DevSecOps is not only about CI/CD pipelines and the security incorporated into them. It is important to remember identity and access management - for instance, who can sign in to our organization’s Azure DevOps/GitHub, or which Azure resource can access another one and with what permission level.
Probably most of us have noticed that in the corporate world, access to Azure DevOps or GitHub is provided using the corporate account we receive when we join the company. Maybe some of you do not realize it, but in most such cases Azure Active Directory is used. When we want to access Azure DevOps projects or GitHub organizations, we have to be assigned to them first, and then we can use our corporate Azure AD account to authenticate and access them. Here is an example of an Azure DevOps organization connected with Azure Active Directory:
We can also enable login with Azure Active Directory for GitHub:
By default, we can use our own (private) accounts to authenticate to Azure DevOps or GitHub. However, in a world with so many different cyber-security threats, it is not a good idea to use personal accounts with corporate assets. Utilizing Azure Active Directory provides security enhancements like Identity Protection. Microsoft analyzes trillions of signals per day to identify and protect customers from threats. Identity Protection allows organizations to accomplish three key tasks:
To make access to Azure DevOps and GitHub (and, in general, to all other resources in our organization) secure, it is crucial to use Azure Active Directory. Azure AD Identity Protection protects users and responds to suspicious activities like:
Azure AD Identity Protection protects users and responds to suspicious activities using the following three policies:
For each policy, we can set a threshold for risk level - low and above, medium and above, or high. Eventually, when we want to access Azure Portal, Azure DevOps, or GitHub, it is not enough to provide a username and password. We have to also approve the request in the Microsoft Authenticator App:
This covers access to the platforms, but there are some other important aspects, like:
All these different aspects are related to DevSecOps practices on Azure. Let’s dive into the topic a little bit more.
When we look into Security Control v3: Identity management from the Azure Security Benchmark, we will see many recommendations for identity and access management. For instance:
Use a centralized identity and authentication system to govern your organization’s identities and authentications for cloud and non-cloud resources.
Use managed application identities instead of creating human accounts for applications to access resources and execute code. Managed application identities provide benefits such as reducing the exposure of credentials. Automate the rotation of credentials to ensure the security of the identities.
The above points (and more) refer to using Azure Active Directory to establish secure identity and access controls. Then, we can utilize Azure role-based access control (Azure RBAC), which helps manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. Now it is time to quickly explain some concepts behind Azure Active Directory and RBAC.
The Microsoft Zero Trust model assumes breach and verifies each request as though it originated from an open network. Core to Zero Trust are the principles of verifying explicitly, applying least-privileged access, and always assuming breach:
One of the key points of Zero Trust architecture is the Zero Trust user access approach. It states that every access request is fully authenticated, authorized, and encrypted before access is granted. Microsegmentation and least-privileged access principles are applied to minimize lateral movement.
When it comes to Zero Trust architecture and approach it also refers to access to specific Azure resources and authorization mechanisms. There are two key elements here:
Let’s discuss them to better understand how they can help implement DevSecOps practices.
To grant access to specific Azure resources and applications secured by Azure Active Directory, we first have to add a user to the Azure AD tenant and then assign a specific role to this user. We can also create a group in Azure AD, add a user to this group, and assign a specific role or roles to this group. Eventually, all users within this group will have the roles assigned to the group. When we want a specific Azure Web App to access secrets in Azure Key Vault, we assign a Managed Identity to the Azure Web App, grant it the Key Vault Secrets User role, and then it can access the secrets.
This is all great but now there is also the question: what is the difference between Azure AD roles, and RBAC in the context of roles?
To make it simple: Azure RBAC roles control permissions to manage Azure resources (like access to Key Vault secrets from different Azure resources), while Azure AD administrator roles control permissions to manage Azure Active Directory resources (users, groups, and applications). The following table compares some of the differences:
Now each role (RBAC or Azure AD) can be assigned to a specific user, group, or Service Principal. Let’s explain Service Principals, and Managed Identities first.
To access resources that are secured by an Azure Active Directory tenant, the entity that requires access must be represented by a security principal. An Azure Service Principal is an identity created in the Azure Active Directory for use with applications, hosted services, and automated tools to access Azure resources. This access is restricted by the roles assigned to the Service principal, giving us control over which resources can be accessed and at which level. For security reasons, it’s always recommended to use service principals with automated tools rather than allowing them to log in with a user identity. This is why we have users (user principal) and applications (service principal).
There are three types of service principal:
Now, when it comes to security best practices, we should eliminate the usage of credentials wherever possible. This is why Managed Identities were introduced - we do not have to manage any credentials like secrets. Managed Identities do not cost anything, and we can use them to securely access different kinds of Azure resources. Here is the official list of services that support Managed Identities. Using a Managed Identity, we can authenticate to any service that supports Azure AD authentication without managing any credentials. Here is an example of Managed Identity enabled for an Azure Function App:
In the picture above you probably noticed two sections: System assigned and User assigned. Let’s talk about the differences here.
There are some significant differences between user-assigned and system-assigned identities:
Once Managed Identity is created, we can assign Azure RBAC roles to it.
Managed identities are an important part of DevSecOps practices to limit credentials usage in the source code.
Once we create a Managed Identity, we can assign RBAC roles to it. There are multiple ways to do it, for example from the Azure Portal or using the Azure CLI. Let’s see how it is done in the Azure Portal. Let’s say that we want to grant access to read secrets in Azure Key Vault from an Azure Container App. To do it, we first have to enable a Managed Identity on the Azure Container App (it can be user-assigned or system-assigned) and then go to the Azure Key Vault resource. There, from the Access control (IAM) tab, we can assign different roles to users and Managed Identities:
The most important aspect of using Managed Identities with Azure RBAC is the fact that we can grant access without using any credentials in the source code or Azure resource configuration.
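As a rough illustration of the same flow from the command line, here is a minimal Azure CLI sketch - shown with an Azure Function App for brevity (the flow for Container Apps is analogous), with placeholder resource names and assuming the Key Vault uses the RBAC permission model:

# Enable a system-assigned managed identity and capture its principal ID (names are placeholders):
principalId=$(az functionapp identity assign --name func-example-app --resource-group rg-example --query principalId -o tsv)

# Grant the identity read access to secrets in the Key Vault:
az role assignment create \
  --assignee-object-id "$principalId" \
  --assignee-principal-type ServicePrincipal \
  --role "Key Vault Secrets User" \
  --scope $(az keyvault show --name kv-example --query id -o tsv)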
Now that we understand the concepts around Azure AD roles, RBAC, Service Principals, and Managed Identities, we can get back to an important topic: how do we deploy from Azure DevOps or GitHub to the Azure cloud securely?
Probably most of us are familiar with this view in Azure DevOps:
With this approach, we can automatically create a connection between Azure DevOps and our Azure subscription. However, it is worth understanding what is happening underneath and how service connections are configured in real-world projects, especially when we want to deploy to anything in a client’s environment. You have probably faced a situation where, while working on a project, you did not get full access to your client’s Azure subscription. The Cloud Team created a Service Principal and provided the details to you to set up deployments only to specific resource groups in Azure. In this section, I would like to explain the details.
To securely configure the connection between Azure DevOps and the Azure cloud, we have to register a Service Principal (application) in the Azure Active Directory tenant connected with the Azure subscription. Here is an example of an application that I registered in my Azure Active Directory tenant:
In the application’s overview tab we can find two important values which will be required to set up a connection between Azure DevOps and the Azure cloud:
We also need a client secret, which can be created under the Certificates & secrets tab:
Now we have to assign RBAC roles to the Service Principal we created above. We want to follow the least-privilege principle, so the Service Principal has the roles below assigned (a CLI sketch for this setup follows the list):
Reader on our Azure subscription:
Contributor on our Azure resource group:
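If you prefer to script this instead of clicking through the portal, an Azure CLI sketch could look like the following (the service principal name is a placeholder; the first command prints the appId, the client secret, and the tenant ID needed later):

# Create the service principal with the Reader role at subscription scope:
az ad sp create-for-rbac --name "sp-azdo-deployments" --role Reader --scopes "/subscriptions/<subscription-id>"

# Additionally grant Contributor on the target resource group only:
az role assignment create --assignee "<appId-from-previous-output>" --role Contributor --scope "/subscriptions/<subscription-id>/resourceGroups/rg-tmf-devsecops-dev"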
In this case, we assume that the resource group is already created. If not, please create one and then follow the steps below.
Important note here!
The description says that Contributor grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC... This is an important fact! If you plan, for instance, to create user-assigned Managed Identities (using Bicep, ARM, or PowerShell within your DevOps pipeline) and assign roles to them, you will have to either use the Owner role or create a custom one.
Once we have the above configured, we can go to Azure DevOps with the client ID, client secret, and tenant ID values and set up the connection.
Now, to successfully connect with the Azure subscription and be able to deploy to the resource group we created earlier, we have to set up the connection parameters for the Azure DevOps project:
Instead of using the Automatic approach, we select Manual:
This is an example with parameters for the connection I created:
Please note that I do not grant access to this connection to every Azure DevOps pipeline. I want to be sure that only selected pipelines have access to it:
Now in the Security settings, I can set access to this specific service connection from specific pipelines:
Now I can trigger the pipeline and, for instance, create all Azure resources using Bicep in my pipeline:
Important
There is one downside to the solution above: we have to rotate the secrets of the Service Principal we created for the Azure DevOps service connection. I can confirm that the product group at Microsoft is working hard to improve this and switch to workload identity federation. Let’s discuss it a little bit.
When it comes to GitHub, we can follow similar steps to configure the connection to the Azure cloud. However, we can utilize the workload identity federation approach, which allows accessing Azure Active Directory protected resources without needing to manage secrets. This is a huge benefit, as we do not have to remember to rotate secrets as mentioned above.
The main difference here is that instead of generating a client secret, we use federated credentials:
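For reference, a federated credential for a GitHub branch can also be created with the Azure CLI - a sketch assuming a recent CLI version and placeholder identifiers (the issuer, subject, and audience values follow the GitHub OIDC convention):

az ad app federated-credential create --id <application-object-id> --parameters '{
  "name": "github-main-branch",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:<org>/<repo>:ref:refs/heads/main",
  "audiences": ["api://AzureADTokenExchange"]
}'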
Then, on the GitHub side, we have to add the three secrets below:
Now in the GitHub Action workflow, we can successfully authenticate and then deploy resources to the Azure cloud:
name: Build and deploy securely to Azure cloud
on:
  push:
    branches: [ main ]
  workflow_dispatch:
permissions:
  # This is required to obtain ID Token from Azure AD during the authentication process:
  id-token: write
  contents: read
  packages: read
env:
  AZURE_FUNCAPP_NAME: func-tmf-func-app
  AZURE_FUNCTIONAPP_PACKAGE_PATH: '.'
  AZURE_RG_NAME: rg-tmf-devsecops-dev
jobs:
  build-func-app:
    needs: [clean-working-directory]
    runs-on: self-hosted
    steps:
    - uses: actions/checkout@v2
    # Here we use the workload identity federation authentication approach:
    - name: Az CLI login
      uses: azure/login@v1
      with:
        client-id: $
        tenant-id: $
        subscription-id: $
    - name: Publish Function App to Azure
      run: |
        az functionapp deployment source config-zip -g $ -n $ --src '$/func-app-package.zip'
I created a dedicated video on YouTube about this topic - it is not long, and I recommend watching it to fully understand the concept:
Secure Cloud Deployments with GitHub
We can also read more in the official documentation here.
In this article, I explained the core concepts of Azure AD roles and Azure role-based access control (RBAC). I hope that it is now clearer what happens underneath when we want to connect GitHub or Azure DevOps to our Azure subscription securely. As I mentioned, Microsoft is working hard to enable the workload identity federation approach for Azure DevOps so we can avoid using secrets. It is also important to remember that utilizing Managed Identities together with RBAC is the recommended approach when setting up access from one Azure resource to another. With this approach, we can avoid using any secrets in our DevSecOps process. In the next article, we will talk about how to achieve continuous compliance with Azure Policy.
If you like my articles, and find them helpful, please buy me a coffee:
This is the next article from the series called DevSecOps practices for Azure cloud workloads. In this article, I would like to talk about an important topic that is not always obvious, especially for beginning Azure Cloud Engineers: how to deploy code to Azure resources (like Azure Function Apps or Azure Container Apps) that are integrated with and isolated by an Azure Virtual Network.
I also strongly recommend reading my other article which is strongly related to this topic: Azure Hints Series - Containers for Azure DevOps Automation
Before we continue talking about secure deployments, let’s stop for a minute and understand an important fact about Azure cloud resources. Resources like Azure Functions, Azure Web Apps, or Azure Container Apps can be created without Azure Virtual Network integration. It does not mean that there is no network infrastructure underneath, because of course there is. The point is that we can create all these resources without additional network-level isolation, integration, and security.
Let me start with an example. Below we can see two solutions. The first one is without Azure Virtual Network integration, the second utilizes Azure Virtual Network to isolate public access to Azure resources:
The most important difference is that in the first solution we do not have any additional layer of security around network access to Azure resources. Of course, Azure has mechanisms to detect potential attacks, like Basic DDoS Protection at no additional charge, but this alone does not make our solution fully secure.
When we look into Azure Security Benchmark - Security Control v3: Network security, we will find the following recommendations:
Create a virtual network (VNet) as a fundamental segmentation approach in your Azure network, so resources such as VMs can be deployed into the VNet within a network boundary. To further segment the network, you can create subnets inside VNet for smaller sub-networks. Use network security groups (NSG) as a network layer control to restrict or monitor traffic by port, protocol, source IP address, or destination IP address.
Deploy private endpoints for all Azure resources that support the Private Link feature, to establish a private access point for the resources. You should also disable or restrict public network access to services where feasible. For certain services, you also have the option to deploy VNet integration for the service where you can restrict the VNET to establish a private access point for the service.
Use web application firewall (WAF) capabilities in Azure Application Gateway, Azure Front Door, and Azure Content Delivery Network (CDN) to protect your applications, services and APIs against application layer attacks at the edge of your network. Set your WAF in “detection” or “prevention mode”, depending on your needs and threat landscape.
As we can read above, it is highly recommended to utilize Azure Virtual Network to enhance the security of solutions built on the Azure cloud. This is why in the second architecture diagram I included components like:
Once we isolate all resources and start utilizing integration with Azure Virtual Network, we may be surprised to find that deployments from Azure DevOps, GitHub, or any other automation tool stop working. Here is a good example of what happens when we isolate Azure Container Registry using Private Link and try to push Docker images from Azure DevOps:
This is because we isolated our Azure resources. The same will happen for every deployment to an Azure resource isolated with Azure Virtual Network, because Azure DevOps agents or GitHub runners are not able to connect to these resources. In this case, we have to verify whether we can update the firewall rules for specific services, or utilize a self-hosted CI/CD agent. Let me give a specific example here.
Azure Container Registry (ACR) supports Private Link, so we can disable public access to it:
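For completeness, here is a sketch of how public access can be disabled and a private endpoint created for ACR with the Azure CLI. The private endpoint, virtual network, and subnet names are placeholders:
az acr update --name acrtmfdevsecopsdev --public-network-enabled false
az network private-endpoint create \
  --name pe-acr-tmf \
  --resource-group rg-tmf-devsecops-dev \
  --vnet-name <vnet-name> \
  --subnet <subnet-name> \
  --private-connection-resource-id $(az acr show --name acrtmfdevsecopsdev --query id -o tsv) \
  --group-id registry \
  --connection-name acr-private-link-connection
Keep in mind that Private Link for ACR requires the Premium service tier.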
It means that we can access ACR only from inside the Azure Virtual Network. What about the situation when we want to build and push Docker images to ACR using GitHub-hosted runners or Azure DevOps agents? In such a scenario, we can update the ACR firewall dynamically to allow temporary access only from a specific IP address, in this case the IP address of the CI/CD agent. To make it clearer, here is the template I created for the Azure DevOps pipeline: it gets the IP address of the agent dynamically and updates the ACR firewall to allow access from it; once the Docker images are successfully pushed, the IP address is removed and public access is disabled again:
parameters:
- name: azureSubscriptionConnectionName
type: string
- name: containerRegistryName
type: string
jobs:
- job: Build
displayName: 'Build Project'
pool:
vmImage: 'ubuntu-latest'
steps:
- task: AzureCLI@2
displayName: 'Add network rule to ACR'
inputs:
azureSubscription: ${{ parameters.azureSubscriptionConnectionName }}
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
IP=$(curl -s https://ifconfig.me/ip)
az acr update --name acrtmfdevsecopsdev --public-network-enabled true
az acr network-rule add \
--name acrtmfdevsecopsdev \
--ip-address $IP
- template: ../tasks/build.docker.images.task.yml
parameters:
azureSubscriptionConnectionName: ${{ parameters.azureSubscriptionConnectionName }}
containerRegistryName: ${{ parameters.containerRegistryName }}
- template: ../tasks/push.docker.images.task.yml
parameters:
azureSubscriptionConnectionName: ${{ parameters.azureSubscriptionConnectionName }}
containerRegistryName: ${{ parameters.containerRegistryName }}
- task: AzureCLI@2
displayName: 'Remove network rule from ACR'
inputs:
azureSubscription: ${{ parameters.azureSubscriptionConnectionName }}
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
IP=$(curl -s https://ifconfig.me/ip)
az acr network-rule remove \
--name acrtmfdevsecopsdev \
--ip-address $IP
az acr update --name acrtmfdevsecopsdev --public-network-enabled false
With such a solution, we can still utilize the agents provided by Azure DevOps and GitHub. However, there are situations when we do not want to allow access to Azure resources from any IP address outside of our Azure Virtual Network, or when it is simply not possible: for Azure Web Apps, for example, enabling Private Endpoints (Private Link) disables all public access. In these cases, the best solution is to utilize Azure DevOps self-hosted agents or GitHub self-hosted runners.
In the scenario where we have our Azure resources isolated in an Azure Virtual Network, we can create self-hosted agents and runners utilizing one of the Azure services connected to our Azure Virtual Network, like Azure Virtual Machines or Azure Container Apps.
It is always good to look at the cost. It is cheaper to utilize Docker containers to host self-hosted runners than to use Virtual Machines, but of course it is important to assess each situation and environment individually, because Virtual Machines can be helpful in some scenarios. I encourage you to read my article called Azure Hints Series - Containers for Azure DevOps Automation, where I explained the different hosting options, including the cost aspect. In this article, we will see how to utilize Docker for Azure DevOps self-hosted agents and GitHub self-hosted runners.
With GitHub self-hosted runners we can host our runners and customize the environment used to run jobs in GitHub Actions workflows. Self-hosted runners can be physical, virtual, in a container, on-premises, or in the cloud.
I encourage you to read more about GitHub self-hosted runners in the official documentation.
To run a GitHub self-hosted runner we need two files: a Dockerfile and a start.sh script.
Here is the Dockerfile content; it will install the Azure CLI together with PowerShell:
FROM ubuntu:20.04
#input GitHub runner version argument
ARG RUNNER_VERSION
ENV DEBIAN_FRONTEND=noninteractive
LABEL Author="Daniel Krzyczkowski"
LABEL GitHub="https://github.com/Daniel-Krzyczkowski"
LABEL BaseImage="ubuntu:20.04"
LABEL RunnerVersion=${RUNNER_VERSION}
# update the base packages + add a non-sudo user
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
# install Azure CLI and other required packages
RUN apt-get install -y --no-install-recommends \
curl nodejs wget unzip vim git azure-cli jq build-essential libssl-dev libffi-dev python3 python3-venv python3-dev python3-pip
ARG PS_VERSION=7.1.4
ARG PS_PACKAGE=powershell_${PS_VERSION}-1.ubuntu.20.04_amd64.deb
ARG PS_PACKAGE_URL=https://github.com/PowerShell/PowerShell/releases/download/v${PS_VERSION}/${PS_PACKAGE}
# Define ENVs for Localization/Globalization
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false \
LC_ALL=en_US.UTF-8 \
LANG=en_US.UTF-8 \
# set a fixed location for the Module analysis cache
PSModuleAnalysisCachePath=/var/cache/microsoft/powershell/PSModuleAnalysisCache/ModuleAnalysisCache \
POWERSHELL_DISTRIBUTION_CHANNEL=PSDocker-Ubuntu-20.04
# Install dependencies and clean up
RUN apt-get clean
RUN apt-get update \
&& apt-get install --no-install-recommends -y \
# curl is required to grab the Linux package
curl \
# less is required for help in powershell
less \
# required to set up the locale
locales \
# required for SSL
ca-certificates \
gss-ntlmssp \
# PowerShell remoting over SSH dependencies
openssh-client \
# Download the Linux package and save it
&& echo ${PS_PACKAGE_URL} \
&& curl -sSL ${PS_PACKAGE_URL} -o /tmp/powershell.deb \
&& apt-get install --no-install-recommends -y /tmp/powershell.deb \
&& apt-get dist-upgrade -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& locale-gen $LANG && update-locale \
# remove powershell package
&& rm /tmp/powershell.deb \
# initialize powershell module cache
# and disable telemetry
&& export POWERSHELL_TELEMETRY_OPTOUT=1 \
&& pwsh \
-NoLogo \
-NoProfile \
-Command " \
\$ErrorActionPreference = 'Stop' ; \
\$ProgressPreference = 'SilentlyContinue' ; \
while(!(Test-Path -Path \$env:PSModuleAnalysisCachePath)) { \
Write-Host "'Waiting for $env:PSModuleAnalysisCachePath'" ; \
Start-Sleep -Seconds 6 ; \
}"
RUN pwsh -Command Install-Module AZ -Force
# cd into the user directory, download and unzip the github actions runner
RUN cd /home/docker && mkdir actions-runner && cd actions-runner \
&& curl -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
# setup permissions for docker user
RUN chown -R docker ~docker && /home/docker/actions-runner/bin/installdependencies.sh
# add over the start.sh script
ADD script/start.sh start.sh
# make the script executable
RUN chmod +x start.sh
# set the user to "docker" so all subsequent commands are run as the docker user
USER docker
# set the entrypoint to the start.sh script
ENTRYPOINT ["./start.sh"]
Here is the start.sh script file:
#!/bin/bash
GH_OWNER=$GH_OWNER
GH_REPOSITORY=$GH_REPOSITORY
GH_TOKEN=$GH_TOKEN
RUNNER_SUFFIX=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 5 | head -n 1)
RUNNER_NAME="dockerRunner-${RUNNER_SUFFIX}"
REG_TOKEN=$(curl -sX POST -H "Accept: application/vnd.github.v3+json" -H "Authorization: token ${GH_TOKEN}" https://api.github.com/repos/${GH_OWNER}/${GH_REPOSITORY}/actions/runners/registration-token | jq .token --raw-output)
cd /home/docker/actions-runner
./config.sh --unattended --url https://github.com/${GH_OWNER}/${GH_REPOSITORY} --token ${REG_TOKEN} --name ${RUNNER_NAME}
cleanup() {
echo "Removing runner..."
./config.sh remove --unattended --token ${REG_TOKEN}
}
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
./run.sh & wait $!
Please note that we need three parameters to start the self-hosted runner: GH_OWNER, GH_REPOSITORY, and GH_TOKEN.
These are the Personal Access Token scopes required:
Once we have the files above ready, we can decide which Azure cloud service we want to utilize to run the agent in a Docker container.
We can run the runner on our local machine to test the configuration using the Docker commands below:
docker build --build-arg RUNNER_VERSION=2.294.0 --tag gh-sf-docker-runner .
docker run -e GH_TOKEN='g...' -e GH_OWNER='Daniel-Krzyczkowski' -e GH_REPOSITORY='test-sh-repo' -d gh-sf-docker-runner
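Once the image works locally, it can be pushed to ACR and run, for example, as an Azure Container App connected to our Virtual Network. The sketch below is simplified: the Container Apps environment name is a placeholder, and registry authentication and VNet integration details are omitted:
az acr build --registry acrtmfdevsecopsdev --image gh-sf-docker-runner:1.0 --build-arg RUNNER_VERSION=2.294.0 .
az containerapp create \
  --name ca-gh-sf-runner \
  --resource-group rg-tmf-devsecops-dev \
  --environment <container-apps-environment-name> \
  --image acrtmfdevsecopsdev.azurecr.io/gh-sf-docker-runner:1.0 \
  --registry-server acrtmfdevsecopsdev.azurecr.io \
  --secrets gh-token=<personal-access-token> \
  --env-vars GH_OWNER=Daniel-Krzyczkowski GH_REPOSITORY=test-sh-repo GH_TOKEN=secretref:gh-token \
  --min-replicas 1 --max-replicas 1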
This is an example of a GitHub Actions workflow with a self-hosted runner selected to run the jobs. As we can see, we can still utilize the same actions as we do on GitHub-hosted runners:
name: Build and deploy Live Notifications Azure Function App
on:
push:
branches: [ main ]
paths:
- src/live-notifications-handler/**
workflow_dispatch:
permissions:
id-token: write
contents: read
packages: read
env:
AZURE_FUNCAPP_NAME: func-tmf-identity-live-ntfs
AZURE_FUNCTIONAPP_PACKAGE_PATH: '.'
AZURE_RG_NAME: rg-tmf-identity
jobs:
build-live-ntfs-func-app:
# Here we indicate that we want to utilize self-hosted runner:
runs-on: self-hosted
steps:
- uses: actions/checkout@v2
- name: Setup .NET version
uses: actions/setup-dotnet@v1
with:
dotnet-version: '6.0.x'
- name: Install dependencies
run: dotnet restore ./src/live-notifications-handler/TMF.LiveNotifications.FuncApp
- name: Build
run: |
dotnet publish ./src/live-notifications-handler/TMF.LiveNotifications.FuncApp --configuration Release --no-restore --output '${{ github.workspace }}/func-app-package'
Compress-Archive -Path '${{ github.workspace }}/func-app-package/*' -DestinationPath '${{ github.workspace }}/func-app-package.zip'
- name: Test
run: dotnet test ./src/live-notifications-handler/TMF.LiveNotifications.FuncApp --no-restore --verbosity normal
- uses: actions/upload-artifact@v2
with:
name: func-app-package
path: '${{ github.workspace }}/func-app-package.zip'
deploy-live-ntfs-func-app:
needs: [build-live-ntfs-func-app]
runs-on: self-hosted
steps:
- uses: actions/download-artifact@v2
with:
name: func-app-package
path: '${{ github.workspace }}/func-app-package'
- name: Az CLI login
uses: azure/login@v1
with:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
- name: Publish Func App to Azure
run: |
az functionapp deployment source config-zip -g ${{ env.AZURE_RG_NAME }} -n ${{ env.AZURE_FUNCAPP_NAME }} --src '${{ github.workspace }}/func-app-package.zip'
With Azure DevOps self-hosted agents we can host our agents and customize the environment used to run jobs in Azure DevOps Pipelines. Self-hosted agents can be physical, virtual, in a container, on-premises, or in the cloud, exactly like GitHub self-hosted runners.
To run the Azure DevOps self-hosted agent we need the same two files as for GitHub self-hosted runners: a Dockerfile and a start.sh script.
Here is the Dockerfile content; it will install the Azure CLI together with PowerShell:
FROM ubuntu:18.04
LABEL Author="Daniel Krzyczkowski"
LABEL GitHub="https://github.com/Daniel-Krzyczkowski"
LABEL BaseImage="ubuntu:18.04"
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat \
libssl1.0 \
&& rm -rf /var/lib/apt/lists/*
RUN curl -LsS https://aka.ms/InstallAzureCLIDeb | bash \
&& rm -rf /var/lib/apt/lists/*
ARG PS_VERSION=7.1.4
ARG PS_PACKAGE=powershell_${PS_VERSION}-1.ubuntu.18.04_amd64.deb
ARG PS_PACKAGE_URL=https://github.com/PowerShell/PowerShell/releases/download/v${PS_VERSION}/${PS_PACKAGE}
#https://github.com/PowerShell/PowerShell/releases/download/v7.1.4/powershell_7.1.4-1.ubuntu.20.04_amd64.deb
#https://github.com/PowerShell/PowerShell/releases/download/v7.1.4/powershell-lts_7.1.4-1.ubuntu.20.04_amd64.deb
# Define ENVs for Localization/Globalization
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false \
LC_ALL=en_US.UTF-8 \
LANG=en_US.UTF-8 \
# set a fixed location for the Module analysis cache
PSModuleAnalysisCachePath=/var/cache/microsoft/powershell/PSModuleAnalysisCache/ModuleAnalysisCache \
POWERSHELL_DISTRIBUTION_CHANNEL=PSDocker-Ubuntu-18.04
# Install dependencies and clean up
RUN apt-get clean
RUN apt-get update \
&& apt-get install --no-install-recommends -y \
# curl is required to grab the Linux package
curl \
# less is required for help in powershell
less \
# required to set up the locale
locales \
# required for SSL
ca-certificates \
gss-ntlmssp \
# PowerShell remoting over SSH dependencies
openssh-client \
# Download the Linux package and save it
&& echo ${PS_PACKAGE_URL} \
&& curl -sSL ${PS_PACKAGE_URL} -o /tmp/powershell.deb \
&& apt-get install --no-install-recommends -y /tmp/powershell.deb \
&& apt-get dist-upgrade -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& locale-gen $LANG && update-locale \
# remove powershell package
&& rm /tmp/powershell.deb \
# initialize powershell module cache
# and disable telemetry
&& export POWERSHELL_TELEMETRY_OPTOUT=1 \
&& pwsh \
-NoLogo \
-NoProfile \
-Command " \
\$ErrorActionPreference = 'Stop' ; \
\$ProgressPreference = 'SilentlyContinue' ; \
while(!(Test-Path -Path \$env:PSModuleAnalysisCachePath)) { \
Write-Host "'Waiting for $env:PSModuleAnalysisCachePath'" ; \
Start-Sleep -Seconds 6 ; \
}"
RUN pwsh -Command Install-Module AZ -Force
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT ["./start.sh"]
Here is the start.sh script file:
#!/bin/bash
set -e
if [ -z "$AZP_URL" ]; then
echo 1>&2 "error: missing AZP_URL environment variable"
exit 1
fi
if [ -z "$AZP_TOKEN_FILE" ]; then
if [ -z "$AZP_TOKEN" ]; then
echo 1>&2 "error: missing AZP_TOKEN environment variable"
exit 1
fi
AZP_TOKEN_FILE=/azp/.token
echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi
unset AZP_TOKEN
if [ -n "$AZP_WORK" ]; then
mkdir -p "$AZP_WORK"
fi
export AGENT_ALLOW_RUNASROOT="1"
cleanup() {
if [ -e config.sh ]; then
print_header "Cleanup. Removing Azure Pipelines agent..."
# If the agent has some running jobs, the configuration removal process will fail.
# So, give it some time to finish the job.
while true; do
./config.sh remove --unattended --auth PAT --token $(cat "$AZP_TOKEN_FILE") && break
echo "Retrying in 30 seconds..."
sleep 30
done
fi
}
print_header() {
lightcyan='\033[1;36m'
nocolor='\033[0m'
echo -e "${lightcyan}$1${nocolor}"
}
# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE
print_header "1. Determining matching Azure Pipelines agent..."
AZP_AGENT_PACKAGES=$(curl -LsS \
-u user:$(cat "$AZP_TOKEN_FILE") \
-H 'Accept:application/json;' \
"$AZP_URL/_apis/distributedtask/packages/agent?platform=$TARGETARCH&top=1")
AZP_AGENT_PACKAGE_LATEST_URL=$(echo "$AZP_AGENT_PACKAGES" | jq -r '.value[0].downloadUrl')
if [ -z "$AZP_AGENT_PACKAGE_LATEST_URL" -o "$AZP_AGENT_PACKAGE_LATEST_URL" == "null" ]; then
echo 1>&2 "error: could not determine a matching Azure Pipelines agent"
echo 1>&2 "check that account '$AZP_URL' is correct and the token is valid for that account"
exit 1
fi
print_header "2. Downloading and extracting Azure Pipelines agent..."
curl -LsS $AZP_AGENT_PACKAGE_LATEST_URL | tar -xz & wait $!
source ./env.sh
print_header "3. Configuring Azure Pipelines agent..."
./config.sh --unattended \
--agent "${AZP_AGENT_NAME:-$(hostname)}" \
--url "$AZP_URL" \
--auth PAT \
--token $(cat "$AZP_TOKEN_FILE") \
--pool "${AZP_POOL:-Default}" \
--work "${AZP_WORK:-_work}" \
--replace \
--acceptTeeEula & wait $!
print_header "4. Running Azure Pipelines agent..."
trap 'cleanup; exit 0' EXIT
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
chmod +x ./run-docker.sh
# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
# https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-linux?view=azure-devops#run-once
./run-docker.sh "$@" & wait $!
Please note that we need three parameters to start the self-hosted agent:
These are the Personal Access Token scopes required:
Once we have the files above ready, we can decide which Azure cloud service we want to utilize to run the agent in a Docker container.
We can run the agent on our local machine to test the configuration using the Docker commands below, building the image first and then running the container:
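Assuming the Dockerfile and start.sh above are in the current directory, the image can be built with the following command (the tag matches the one used in the run command that follows):
docker build --tag azdevops-sf-docker-agent:latest .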
docker run -e AZP_URL=https://dev.azure.com/xxxx -e AZP_TOKEN=e... -e AZP_AGENT_NAME=selfhostedlinuxagent -e AZP_POOL=Self-Hosted-Docker azdevops-sf-docker-agent:latest
This is an example of an Azure DevOps pipeline with a self-hosted agent selected to run the jobs. As we can see, we can still utilize the same tasks as we do on the Microsoft-hosted agents:
trigger:
- master
# Here we indicate that we want to utilize the self-hosted agent pool:
pool: Self-Hosted-Docker
steps:
- task: PowerShell@2
inputs:
targetType: 'inline'
script: 'npm install'
- task: PowerShell@2
inputs:
targetType: 'inline'
script: 'npm run build.azure'
- task: CopyFiles@2
inputs:
SourceFolder: '$(System.DefaultWorkingDirectory)/build'
Contents: '**'
TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: '$(Build.ArtifactStagingDirectory)'
ArtifactName: 'azure-app'
publishLocation: 'Container'
Here is the sample solution architecture of DevSecOps on Azure I created:
I utilize Azure Container Apps to run Azure DevOps self-hosted agents.
In this article, I explained why Azure Virtual Network is important when it comes to Azure solution security, and what the possible ways are to deploy from Azure DevOps and GitHub to Azure resources isolated with Azure Virtual Network. It is also important to remember that there are multiple hosting options in the Azure cloud for self-hosted runners and agents, like Azure Virtual Machines or Azure Container Apps. In the next article, we will talk about how to control access to Azure resources with Azure AD and Azure RBAC.
If you like my articles, and find them helpful, please buy me a coffee:
]]>This is the next article from the series called DevSecOps practices for Azure cloud workloads. I would like to talk about Azure infrastructure security: how to keep infrastructure code secure, and how to implement a process of constant security posture verification for Azure infrastructure templates using Azure DevOps and Checkmarx’s KICS, an open-source solution for static code analysis of Infrastructure as Code.
Let me start with some theory around security for the Azure cloud before we jump into implementation tasks. Have you ever wondered how security standards and practices are defined for Azure cloud workloads? Of course, there is well-written documentation, but it is important to understand what the real source is.
The Azure Security Benchmark (ASB) provides prescriptive best practices and recommendations to help improve the security of workloads, data, and services running on the Azure cloud. This benchmark is part of a set of holistic security guidance that also includes:
The Azure Security Benchmark focuses on cloud-centric control areas. These controls are consistent with well-known security benchmarks, such as those described by the Center for Internet Security (CIS) Controls, National Institute of Standards and Technology (NIST), and Payment Card Industry Data Security Standard (PCI-DSS).
Here is a sample page from CIS Azure Benchmark:
As we can see above, there is a rationale provided, along with the impact and the audit steps to verify the security status of a specific security control. There is also a remediation step provided.
Following the recommendations from Microsoft, we can implement the Azure Security Benchmark with the following approach and steps:
Using the above recommendations we can improve the security posture of our Azure cloud workloads… and DevSecOps practices too! In the section called Security Control v3: DevOps security we can find helpful details on how to implement secure DevOps. Under DS-6: Enforce security of workload throughout DevOps lifecycle we can read that we should automate the deployment by using Azure or third-party tooling in the CI/CD workflow, infrastructure management (infrastructure as code), and testing to reduce human error and attack surface.
This is why in this article we are going to talk about Azure infrastructure code and its security.
There are different tools we can utilize to verify the security posture of Azure infrastructure configuration. I decided to use Checkmarx’s KICS, an open-source solution for static code analysis of Infrastructure as Code. There are a few reasons behind this choice:
Let’s see how to use KICS on a local machine first to scan Azure infrastructure code.
With the shift-left approach, we want to solve all bugs and security issues at an earlier stage, before they are discovered in the production environment. Infrastructure scanning can also be executed on the local machine using Docker. Here you can read more about how to set it up. I used the approach with the checkmarx/kics Docker container locally to scan Azure infrastructure code written with Bicep. The important fact is that KICS does not support Bicep file scanning directly, so we have to first convert Bicep to ARM (JSON) files.
To scan Azure infrastructure code locally, we first have to convert the Bicep file to an ARM (JSON) file. We can do it using the command below:
az bicep build --file main.bicep
Once we have the ARM file, we can apply KICS scanning to it using the command below:
docker run -v "C:\tmf\tmf-devsecops-azure-infrastructure\src\bicep":/path checkmarx/kics scan -p "/path" -o "/path"
After a while, the report is available:
Here is a fragment of it:
"queries": [
{
"query_name": "Key Vault Not Recoverable",
"query_id": "7c25f361-7c66-44bf-9b69-022acd5eb4bd",
"query_url": "https://docs.microsoft.com/en-us/azure/templates/microsoft.keyvault/2019-09-01/vaults?tabs=json#vaultproperties-object",
"severity": "HIGH",
"platform": "AzureResourceManager",
"category": "Backup",
"description": "Key Vault should have 'enableSoftDelete' and 'enablePurgeProtection' set to true",
"description_id": "8e3ca202",
"cis_description_id": "CIS Security - CIS Microsoft Azure Foundations Benchmark v1.3.1 - Rule 8.4",
"cis_description_title": "Ensure the key vault is recoverable",
.... REMOVED FOR BREVITY.......
"files": [
{
"file_name": "../../path/main.json",
.... REMOVED FOR BREVITY.......
},
As we can see above, the Key Vault we declared does not have soft delete and purge protection enabled. We can also see the source of the recommendation: CIS Security - CIS Microsoft Azure Foundations Benchmark v1.3.1 - Rule 8.4.
Once we check the report, we can apply improvements, so for the above recommendation we have to update the Bicep module for Azure Key Vault:
resource keyVault 'Microsoft.KeyVault/vaults@2021-10-01' = {
name: keyVaultName
location: location
tags:{
environment:environmentType
}
properties:{
sku: {
name: 'standard'
family: 'A'
}
// Here we enable purge protection and soft delete:
enablePurgeProtection: true
enableSoftDelete: true
tenantId: subscription().tenantId
networkAcls: {
bypass: 'AzureServices'
defaultAction: 'Deny'
virtualNetworkRules: []
}
accessPolicies:[]
publicNetworkAccess: 'Disabled'
}
}
The diagram below explains the process implemented using Azure DevOps pipelines:
Let me explain the steps:
Below I provided some screenshots to show the result.
Now it is time to talk a little bit about implementation.
As I mentioned before, first we have to convert Bicep to ARM (JSON) files. Here is the task responsible for this operation:
- task: AzureCLI@2
displayName: 'Convert Bicep template to ARM'
inputs:
azureSubscription: $(azureSubscriptionConnectionName)
scriptType: bash
scriptLocation: inlineScript
inlineScript: |
az bicep build --file $(bicepFilePathForSecurityScan) --outdir $(Build.ArtifactStagingDirectory)
We have to provide the azureSubscriptionConnectionName parameter, together with the path to the Bicep file that should be converted (the main one), using the bicepFilePathForSecurityScan parameter.
The next step is to scan the infrastructure (ARM) code with the KICS tool. Here is the code to achieve it:
- script: |
/app/bin/kics scan --ci -p $(System.ArtifactsDirectory)/arm -o ${PWD} --report-formats json,sarif --ignore-on-exit results
cat results.json
TOTAL_SEVERITY_COUNTER=`grep '"total_counter"':' ' results.json | awk {'print $2'}`
export SEVERITY_COUNTER_HIGH=`grep '"HIGH"':' ' results.json | awk {'print $2'} | sed 's/.$//'`
SEVERITY_COUNTER_MEDIUM=`grep '"MEDIUM"':' ' results.json | awk {'print $2'} | sed 's/.$//'`
SEVERITY_COUNTER_LOW=`grep '"LOW"':' ' results.json | awk {'print $2'} | sed 's/.$//'`
SEVERITY_COUNTER_INFO=`grep '"INFO"':' ' results.json | awk {'print $2'} | sed 's/.$//'`
echo "TOTAL SEVERITY COUNTER: $TOTAL_SEVERITY_COUNTER"
echo "##vso[task.setvariable variable=highSeverityIssuesCounter;isOutput=true]$SEVERITY_COUNTER_HIGH"
displayName: 'Scan infrastructure code'
name: infraCodeSecurityScan
# scan results should be visible in the SARIF viewer tab of the build - SCANS tab
- task: PublishBuildArtifacts@1
displayName: 'Generate infrastructure scanning report in SARIF'
inputs:
pathToPublish: $(System.DefaultWorkingDirectory)/results.sarif
artifactName: CodeAnalysisLogs
- task: PublishBuildArtifacts@1
displayName: 'Generate infrastructure scanning report in JSON'
inputs:
pathToPublish: $(System.DefaultWorkingDirectory)/results.json
artifactName: CodeAnalysisJson
First, the code is analyzed with KICS, then a SARIF report is generated, together with a report in JSON format. We will use the latter to send logs to Azure Log Analytics.
One more important note - KICS requires using a container job in the Azure DevOps pipeline:
- job: Scan_With_Kics
dependsOn: Conver_Bicep_To_ARM
condition: succeeded()
displayName: 'Scan infrastructure code'
pool:
vmImage: "ubuntu-latest"
container: checkmarx/kics:debian
steps:
- task: DownloadPipelineArtifact@2
displayName: 'Download ARM files'
inputs:
artifactName: 'armFiles'
downloadPath: '$(System.ArtifactsDirectory)/arm'
- template: ../tasks/scan.infrastructure.with.kics.task.yml
Here is the structure of the generated JSON report:
{
"kics_version": "v1.5.8",
"files_scanned": 1,
"lines_scanned": 1246,
"files_parsed": 1,
"lines_parsed": 1246,
"files_failed_to_scan": 0,
"queries_total": 42,
"queries_failed_to_execute": 0,
"queries_failed_to_compute_similarity_id": 0,
"scan_id": "console",
"severity_counters": {
"HIGH": 4,
"INFO": 0,
"LOW": 1,
"MEDIUM": 0,
"TRACE": 0
},
"total_counter": 5,
"total_bom_resources": 0,
"start": "2022-05-25T03:51:29.661322371Z",
"end": "2022-05-25T03:51:32.356792905Z",
"paths": [
"/__w/1/a/arm"
],
"queries": [
{
"query_name": "Key Vault Not Recoverable",
"query_id": "7c25f361-7c66-44bf-9b69-022acd5eb4bd",
"query_url": "https://docs.microsoft.com/en-us/azure/templates/microsoft.keyvault/2019-09-01/vaults?tabs=json#vaultproperties-object",
"severity": "HIGH",
"platform": "AzureResourceManager",
"category": "Backup",
"description": "Key Vault should have 'enableSoftDelete' and 'enablePurgeProtection' set to true",
"description_id": "8e3ca202",
"cis_description_id": "CIS Security - CIS Microsoft Azure Foundations Benchmark v1.3.1 - Rule 8.4",
"cis_description_title": "Ensure the key vault is recoverable",
"cis_description_text": "The key vault contains object keys, secrets, and certificates. Accidental unavailability of a key vault can cause immediate data loss or loss of security functions (authentication, validation, verification, non-repudiation, etc.) supported by the key vault objects. It is recommended the key vault be made recoverable by enabling the \"Do Not Purge\" and \"Soft Delete\" functions. This is in order to prevent the loss of encrypted data including storage accounts, SQL databases, and/or dependent services provided by key vault objects (Keys, Secrets, Certificates) etc., as may happen in the case of accidental deletion by a user or from disruptive activity by a malicious user.\nThere could be scenarios where users accidentally run delete/purge commands on key vault or attacker/malicious user does it deliberately to cause disruption. Deleting or purging a key vault leads to immediate data loss as keys encrypting data and secrets/certificates allowing access/services will become non-accessible.There are 2 key vault properties that plays role in permanent unavailability of a key vault. enableSoftDelete : Setting this parameter to true for a key vault ensures that even if key vault is deleted, Key vault itself or its objects remain recoverable for next 90days. In this span of 90 days either key vault/objects can be recovered or purged (permanent deletion). If no action is taken, after 90 days key vault and its objects will be purged. enablePurgeProtection : enableSoftDelete only ensures that key vault is not deleted permanently and will be recoverable for 90 days from date of deletion. However, there are chances that the key vault and/or its objects are accidentally purged and hence will not be recoverable. Setting enablePurgeProtection to \"true\" ensures that the key vault and its objects cannot be purged. Enabling both the parameters on key vaults ensures that key vaults and their objects cannot be deleted/purged permanently.",
"files": [
{
"file_name": "../a/arm/main.json",
...REMOVED FOR BREVITY...
]
},
...
]
}
Once the logs are sent, I want to break the pipeline execution if there are HIGH severity issues in the infrastructure code. To do it, I created the task below:
- script: |
echo "SEVERITY COUNTER: $(highSeverityIssuesCounter)"
SEVERITY_COUNTER_HIGH=$(highSeverityIssuesCounter)
if [ "$SEVERITY_COUNTER_HIGH" -ge "1" ]; then
echo "Please review all $SEVERITY_COUNTER issues with infrastructure code" && exit 1;
fi
displayName: 'Validate scanning result'
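One detail worth noting: the highSeverityIssuesCounter variable is published as an output variable of the infraCodeSecurityScan step, so if the validation script above runs in a separate job, the value has to be mapped into that job first. Here is a sketch of how this mapping can look (the job name and template path below are assumptions; only the mapping expression matters):
- job: Validate_Scan_Results
  dependsOn: Scan_With_Kics
  condition: succeeded()
  pool:
    vmImage: "ubuntu-latest"
  variables:
    # Map the output variable published by the infraCodeSecurityScan step in the Scan_With_Kics job:
    highSeverityIssuesCounter: $[ dependencies.Scan_With_Kics.outputs['infraCodeSecurityScan.highSeverityIssuesCounter'] ]
  steps:
  - template: ../tasks/validate.scanning.result.task.yml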
In this article, I explained the Microsoft Azure Security Benchmark and where to find information about security recommendations. I also presented how to set up Azure infrastructure code scanning and auditing using Azure DevOps and Checkmarx’s KICS. It is worth mentioning that KICS can also be used to scan other platforms, like CDK or Terraform. In the next article, we will discover how to deploy securely to Azure resources in the Azure Virtual Network.
If you like my articles, and find them helpful, please buy me a coffee:
]]>