Visual Studio Geeks | Creating KEDA Scaler in Azure Container Apps using Azure Portal


In this post, I would like to show you how we added custom scaling with KEDA using the Azure Portal. Thanks to the KEDA scaler, we have a dynamically scaling agent pool, which automatically scales out when there are more jobs in the queue and scales back in when demand reduces.

We run our Azure DevOps build agents inside a container. This has allowed us to package tools like kubectl, helm, and terraform inside the agent image, which gives us control over the versions of the tools we use and lets us execute our continuous integration runs with a consistent configuration. Also, adding a new tool is just a matter of adding installation instructions to the Dockerfile and publishing a new image.
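As a sketch, such an agent image might look like the following. This is not our actual Dockerfile; the tool versions, base image, and the start.sh bootstrap script name are placeholders for illustration (Microsoft's containerized-agent documentation provides a start script like this).

```dockerfile
# Hypothetical sketch of an agent image; versions and file names are placeholders.
FROM ubuntu:22.04

# Pin the tool versions the pipelines depend on.
ARG KUBECTL_VERSION=1.28.4
ARG HELM_VERSION=3.13.2
ARG TERRAFORM_VERSION=1.6.5

RUN apt-get update && apt-get install -y curl unzip git \
    && curl -fsSLo /usr/local/bin/kubectl "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl" \
    && chmod +x /usr/local/bin/kubectl \
    && curl -fsSL "https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz" | tar -xz -C /tmp \
    && mv /tmp/linux-amd64/helm /usr/local/bin/helm \
    && curl -fsSLo /tmp/terraform.zip "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip" \
    && unzip /tmp/terraform.zip -d /usr/local/bin

# start.sh is the agent bootstrap script from Microsoft's containerized-agent docs.
COPY start.sh /azp/start.sh
WORKDIR /azp
ENTRYPOINT ["./start.sh"]
```

Bumping a tool is then just a one-line version change followed by an image build and push.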

Microsoft has detailed documentation on running an Azure DevOps agent inside a container here.

Further, we are running our agents as an Azure Container App, which has freed us from maintaining a dedicated AKS cluster and lets us dynamically scale the agents — a new agent per pipeline job with the help of custom scaling rules and KEDA.

Creating an Azure Container App

Creating an Azure Container App can be done in a variety of ways: Terraform (or any other IaC tool), the Azure CLI, or the Azure Portal. We use Terraform internally, but for the sake of this post I am showing the portal experience.
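For reference, the CLI equivalent is roughly the following. All the names (app, resource group, environment, image) are placeholders, not our actual setup.

```shell
# Hypothetical sketch: app name, resource group, environment, and image are placeholders.
az containerapp create \
  --name ado-build-agent \
  --resource-group rg-build-agents \
  --environment my-containerapps-env \
  --image myregistry.azurecr.io/ado-agent:latest \
  --cpu 1.0 --memory 2.0Gi \
  --min-replicas 0 --max-replicas 10
```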

So, search for the service and select Container App.

The first step is to provide a name and select a region and a Container Apps environment. I have decided to use an existing environment, as shown below.

configuring the container app

The next tab in the wizard is about the container. Our team-specific image is in our internal Azure Container Registry, and the wizard pre-fills the drop-downs for easy selection.

The only change I have made here is adjusting the CPU and memory values as needed for the image. Notice I am using the Consumption plan, as we know this team does not need high-performance agents.

As mentioned previously, this ability to allocate individual CPU and memory configurations per app has great benefits over AKS. For example, we have a separate container app with a dedicated workload profile for running CPU- and memory-intensive jobs.

specify the container

The rest of the wizard is left at defaults as I do not have any bindings or Ingress to configure.

This should get your container app running.

Create a secret to store Azure DevOps PAT

The first step in the container app is to add a secret and store our Azure DevOps PAT (Personal Access Token). The PAT is needed for the Azure DevOps build agent to connect to our Azure DevOps organization. Later in the post, we use this PAT to let KEDA authenticate to Azure DevOps and monitor the agent pool for new jobs.
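If you prefer the CLI, storing the PAT as a Container Apps secret looks roughly like this; the app name, resource group, and the secret name azp-token are placeholders.

```shell
# Hypothetical sketch: app name, resource group, and secret name are placeholders.
az containerapp secret set \
  --name ado-build-agent \
  --resource-group rg-build-agents \
  --secrets azp-token=<your-ado-pat>
```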

Add PAT as a secret

Create a new revision

The next step is to edit the container and define the few environment variables that are required for the container (for more on these specific environment variables refer to this documentation).

Notice that for AZP_TOKEN the source is set to Reference a secret, as I want its value to come from the secret defined in the previous step.
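The same can be sketched with the CLI. The variable names follow Microsoft's containerized-agent documentation; the organization URL, pool name, and the secret name azp-token are placeholders for your own values.

```shell
# Hypothetical sketch: AZP_TOKEN references a Container Apps secret,
# the other variables are plain values.
az containerapp update \
  --name ado-build-agent \
  --resource-group rg-build-agents \
  --set-env-vars \
    AZP_URL=https://dev.azure.com/<your-org> \
    AZP_POOL=<your-agent-pool> \
    AZP_TOKEN=secretref:azp-token
```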

Edit container environment variables

Create a custom Scale rule

The next and final step is to define the custom scale rule — which in our case is KEDA.

So, in the Scale tab, click + Add and then enter the details below:

  1. Rule name: This is the name for the custom scale rule; it can be anything.
  2. Type: Custom
  3. Custom rule type: This comes from the scaler definition. I am using the Azure Pipelines scaler, so this should match the type field of the scaler definition, which is azure-pipelines.

Next, we need to add a few metadata values so that KEDA knows which agent pool it should monitor for new jobs. These come from the metadata keys of the scaler definition.

  1. personalAccessTokenFromEnv: This lets KEDA authenticate with our Azure DevOps organization to monitor the agent pool. The value is the name of the environment variable holding the PAT we defined in the secret previously, which has been passed to the container as an environment variable.
  2. organizationURLFromEnv: This is the name of the environment variable holding our Azure DevOps organization URL, which we also set in the previous section.
  3. poolID: This is the Azure DevOps pool ID which KEDA should monitor. Refer to the scaler docs on how to get this.
Add metadata for the scaler
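The portal steps above map to a single CLI call. The rule name, pool ID, and app names below are placeholders, and the environment variable names assume the AZP_URL/AZP_TOKEN variables from the agent setup.

```shell
# Hypothetical sketch: rule name, pool ID, and resource names are placeholders.
az containerapp update \
  --name ado-build-agent \
  --resource-group rg-build-agents \
  --min-replicas 0 --max-replicas 10 \
  --scale-rule-name azure-pipelines \
  --scale-rule-type azure-pipelines \
  --scale-rule-metadata \
    "poolID=<your-pool-id>" \
    "organizationURLFromEnv=AZP_URL" \
    "personalAccessTokenFromEnv=AZP_TOKEN"
```

With min-replicas set to 0, the app scales to zero when the pool is idle, which is what keeps the agent pool cheap between pipeline runs.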


Conclusion

In this post, I showed how we added custom scaling with KEDA using the Azure Portal. Thanks to the KEDA scaler, we have a dynamic scaling pool, which automatically scales when there are more jobs in the queue and scales back down when demand reduces.


Visual Studio Geeks | Exploring GitHub Advanced Security for Azure DevOps


It has been a few months since GitHub Advanced Security (GHAS) has been made generally available for Azure DevOps. During this time, I’ve engaged with numerous customers eager to implement GHAS within their Azure subscriptions. In this post, I wanted to show you a quick way to set up GHAS within Azure DevOps and explore the features available.

Enabling Advanced Security in Azure DevOps

This is easy; however, you need to be a member of the Project Collection Administrators group. You can verify that from Organization Settings -> Permissions and then the Members tab, where you should see your name.

Once you have verified you are a Project Collection Administrator, you are ready to enable GHAS.

You can enable GHAS either individually per repository or across the organization for all repositories.

If you want to enable it for all the repositories in your organization, you will need to go to organization settings and then the Repositories section and enable it there.

For this post, I am enabling it only for a single repository. Once you click the Advanced Security flag (1), you will be shown the number of active committers you will be billed for (more on billing below). This repository has only me committing to it, so it has correctly identified 1 active committer (2).

Once you click Begin billing, GHAS should be enabled.

If your ADO organization does not have a linked, active Azure subscription, you might get the error below.

You will need to select an active Azure subscription on the Billing tab in organization settings in ADO.

Once you select a valid subscription, you will be able to enable GHAS for the repositories.

Exploring GHAS features

Block secrets on push

Once you enable GHAS, the Block secrets on push feature is enabled by default too. With this setting enabled, ADO will automatically check any incoming pushes for embedded secrets and reject them. This works not only for pushes from the CLI but on the web interface too.

For a simple test, below I try to commit a file containing a GitHub API key, and it is rejected.

Note that at the time of writing, GHAS supports secret push protection only for certain service providers. Although secrets from the majority of providers are supported, I was surprised to see that the GitLab personal access token is not supported yet.

Although not recommended, there is a way to push a secret that has been blocked: include skip-secret-scanning:true in your commit message.
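For illustration, here is what that looks like, demonstrated in a throwaway local repository (the file name and contents are placeholders); the marker simply needs to appear somewhere in the commit message of the push.

```shell
# Demo in a scratch repo: the commit-message marker that bypasses push protection.
# In practice you add the marker to the real commit you are about to push.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "api_key: placeholder" > config.yml
git add config.yml
git commit -q -m "Add config skip-secret-scanning:true"
git log -1 --format=%B   # prints: Add config skip-secret-scanning:true
```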

This will allow the secret to be committed; however, it will still show up as a secret scanning alert (more on that below).

The great thing about GHAS is that clicking on the alert will give you remediation steps too.

Dependency Scanning and Code Scanning

Dependency and code scanning are additional features of GHAS. Dependency scanning will scan your repo's open-source dependencies for known vulnerabilities. Code scanning will scan your source code (for supported languages) for vulnerabilities.

Both these features are enabled through pipeline tasks. On any GHAS-enabled repo, you can run a pipeline with these tasks and get the status of the repo.

You can create a pipeline and add the tasks for dependency and code scanning.
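A minimal pipeline with those tasks might look like the following sketch. The task names are the Advanced Security tasks documented by Microsoft; the trigger branch, image, and language list are assumptions for illustration.

```yaml
# Hypothetical minimal pipeline enabling GHAS dependency and code scanning.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  # Initialize CodeQL; 'languages' should list the languages in your repo.
  - task: AdvancedSecurity-Codeql-Init@1
    inputs:
      languages: csharp

  # Build steps for compiled languages would go here.

  # Scan open-source dependencies for known vulnerabilities.
  - task: AdvancedSecurity-Dependency-Scanning@1

  # Run the CodeQL analysis and upload results to the Advanced Security tab.
  - task: AdvancedSecurity-Codeql-Analyze@1
```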

However, this post has already gotten much longer than I intended, so I will probably do another post exploring both of those features in detail.

Pricing

GHAS for Azure DevOps is a paid product and is available only for Azure DevOps Services. For an Azure DevOps organization linked to an Azure subscription, the charge will automatically appear in your subscription billing. At the time of writing, it costs $49 per active committer per month.

That is it for this post. Thank you for reading. 🎉