Sunday, February 16, 2025

Azure Key Vault secret change notification using Event Grid subscription and Logic App or Azure Function

To create triggers for changes in Azure Key Vault secrets, you can use Azure Event Grid. Set up an event subscription on your Key Vault that fires an event whenever a secret is created, updated, or deleted, then configure a consumer such as a Logic App or Azure Function to respond to those changes.

Key steps:
  • Configure Event Grid subscription:
    • Go to your Azure Key Vault in the portal. 
    • Navigate to the "Events" tab. 
    • Select "Create event grid subscription". 
    • Choose a suitable Event Grid topic or create a new one. 
    • Select the event types you want to monitor, such as "Microsoft.KeyVault.SecretNewVersionCreated" or "Microsoft.KeyVault.SecretNearExpiry". 
  • Create a consuming application:
    • Logic App: Set up a Logic App with an Event Grid trigger that will be activated when an event is published by your Key Vault. 
    • Azure Function: Develop an Azure Function that is triggered by the Event Grid event and performs the desired actions based on the secret change. 
Important considerations:
  • Access control: Ensure your consuming application (Logic App or Function) has the necessary permissions to access your Key Vault to read the updated secret values.
  • Filtering events: You can filter the events received by your consuming application based on specific secret names or other criteria using Event Grid filters.

Example use cases for secret change triggers:
  • Automatic application reconfiguration: When a secret is updated in Key Vault, trigger a deployment to update your application configuration with the new secret value.
  • Notification alerts: Send notifications to administrators when critical secrets are changed or near expiry.
  • Data synchronization: Update data in another system based on changes to a secret in Key Vault.
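The portal steps above can also be scripted. Here is a sketch using the Azure CLI, where the vault, function app, and subscription ID are placeholders you would replace with your own; `--included-event-types` restricts delivery to just the secret events you care about:

```shell
# Look up the resource ID of the Key Vault (names are placeholders)
KV_ID=$(az keyvault show --name mykeyvault --resource-group myResourceGroup --query id --output tsv)

# Subscribe to secret events and deliver them to an Azure Function endpoint
az eventgrid event-subscription create \
  --name kv-secret-events \
  --source-resource-id $KV_ID \
  --endpoint-type azurefunction \
  --endpoint /subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Web/sites/myfunctionapp/functions/mysecrethandler \
  --included-event-types Microsoft.KeyVault.SecretNewVersionCreated Microsoft.KeyVault.SecretNearExpiry
```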

Friday, February 14, 2025

AKS Docker Persistent Volume to decouple .NET code deployment without rebuilding the image and container

Here’s how you can implement the following methods to decouple your .NET web app’s source code deployment from the AKS container image:


1. Use Persistent Volume (PV) and Persistent Volume Claim (PVC)

This method mounts an external Azure Storage account into the AKS pod, allowing your app to read the latest source code without rebuilding the image.

Steps to Implement:

  1. Create an Azure File Share:

    az storage account create --name mystorageaccount --resource-group myResourceGroup --location eastus --sku Standard_LRS
    az storage share create --name myfileshare --account-name mystorageaccount
    
  2. Create a Kubernetes Secret for Storage Credentials:

    kubectl create secret generic azure-secret \
      --from-literal=azurestorageaccountname=mystorageaccount \
      --from-literal=azurestorageaccountkey=$(az storage account keys list --resource-group myResourceGroup --account-name mystorageaccount --query '[0].value' --output tsv)
    
  3. Define a Persistent Volume (PV) and Persistent Volume Claim (PVC):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: azurefile-pv
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      azureFile:
        secretName: azure-secret
        shareName: myfileshare
        readOnly: false
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: azurefile-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
    
  4. Mount the Storage in Your Deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-dotnet-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-dotnet-app
      template:
        metadata:
          labels:
            app: my-dotnet-app
        spec:
          containers:
          - name: my-dotnet-app
            image: myacr.azurecr.io/mydotnetapp:latest
            volumeMounts:
            - name: azurefile
              mountPath: /app
          volumes:
          - name: azurefile
            persistentVolumeClaim:
              claimName: azurefile-pvc
    
  5. Deploy the Updated App to AKS:

    kubectl apply -f deployment.yaml
    

Now your .NET app reads its application files from the Azure File Share, so you can update them without rebuilding the container image.
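Note that the file share starts out empty, so the application files must be copied into it at least once. One way to do this is to publish locally and upload the output with `az storage file upload-batch` (a sketch, using the storage account and share names from step 1):

```shell
# Publish the app locally, then copy the output to the Azure File Share
dotnet publish -c Release -o ./publish
az storage file upload-batch \
  --destination myfileshare \
  --source ./publish \
  --account-name mystorageaccount
```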


2. Use Sidecar Pattern with a Shared Volume

This method runs a second container inside the same pod to fetch and update the source code.

Steps to Implement:

  1. Modify the Deployment YAML to Add a Sidecar:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-dotnet-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-dotnet-app
      template:
        metadata:
          labels:
            app: my-dotnet-app
        spec:
          volumes:
          - name: shared-volume
            emptyDir: {}
          containers:
          - name: my-dotnet-app
            image: myacr.azurecr.io/mydotnetapp:latest
            volumeMounts:
            - name: shared-volume
              mountPath: /app
          - name: sidecar-git-sync
            image: alpine/git
            # Clone the repository on first start, then pull every 60 seconds.
            # Replace the URL placeholder with your own repository.
            command: ["sh", "-c", "git clone https://github.com/<your-org>/<your-repo>.git /app || true; cd /app && while true; do git pull origin main; sleep 60; done"]
            volumeMounts:
            - name: shared-volume
              mountPath: /app
    
  2. Ensure the Sidecar Has Access to the Repo:

    • Store SSH keys or tokens in Kubernetes secrets.
    • Modify the Git sync command to fetch from your repository.
  3. Deploy to AKS:

    kubectl apply -f deployment.yaml
    

Now, the sidecar container will fetch code updates every 60 seconds, and your app container will read from the shared volume.


3. Use .NET Hot Reload with Volume Mounting

This method allows live updates to your .NET web app inside an AKS pod.

Steps to Implement:

  1. Modify Your .NET Dockerfile to Enable Hot Reload:

    # dotnet watch needs the SDK at runtime, so the final image is based on the SDK image
    # rather than the runtime-only aspnet image
    FROM mcr.microsoft.com/dotnet/sdk:7.0
    WORKDIR /app
    EXPOSE 80
    
    # The project source is expected to be mounted at /app at runtime;
    # copying it here only provides a fallback when no volume is mounted
    COPY . .
    CMD ["dotnet", "watch", "run", "--urls", "http://+:80"]
    
  2. Mount the Source Code Volume in Deployment YAML:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-dotnet-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-dotnet-app
      template:
        metadata:
          labels:
            app: my-dotnet-app
        spec:
          containers:
          - name: my-dotnet-app
            image: myacr.azurecr.io/mydotnetapp:latest
            volumeMounts:
            - name: source-code
              mountPath: /app
          volumes:
          - name: source-code
            persistentVolumeClaim:
              claimName: azurefile-pvc
    
  3. Deploy to AKS and Start Hot Reload:

    kubectl apply -f deployment.yaml
    

Now, when you update your source code, the changes will be reflected in your .NET app without restarting the container.


4. Use Azure DevOps Pipelines to Deploy Just Source Code

Instead of rebuilding the entire container, update only the source code.

Steps to Implement:

  1. Set Up an Azure DevOps Pipeline:

    • Use Azure DevOps Pipelines or GitHub Actions to deploy only source code updates.
    • Configure a build step to sync source code to a persistent volume.
    • Restart only the application process, not the container.
  2. Use Helm to Update the Deployment Without Rebuilding the Image:

    helm upgrade myapp ./helm-chart --set app.sourceVersion=$(git rev-parse HEAD)
    

This will ensure the latest source code is available without triggering a new Docker image build.


Conclusion

The best approach depends on your use case:

✅ For persistent source code storage: Use Azure File Share with PV/PVC.
✅ For continuous sync from Git: Use a sidecar pattern.
✅ For live updates during development: Use .NET Hot Reload.
✅ For automated updates in production: Use Azure DevOps with Helm.

Hope you enjoyed this post. Please share and comment. 🚀

Deploy a web app in AKS without rebuilding the image and container

 To decouple source code deployment from the container image in an Azure Kubernetes Service (AKS) environment for a .NET web application, you can follow these approaches:

1. Use Persistent Volumes (PV) and Persistent Volume Claims (PVC)

  • Store your source code on an Azure File Share or Azure Blob Storage.
  • Mount the storage as a Persistent Volume (PV) in AKS.
  • Your application pod reads the updated code from the mounted volume without rebuilding the container image.

2. Leverage Azure App Configuration and Feature Flags

  • Store configuration files or dynamic code parts in Azure App Configuration.
  • Use feature flags or environment variables to control runtime behavior without rebuilding the image.

3. Use Sidecar Pattern with a Shared Volume

  • Deploy a sidecar container that continuously fetches updated code (e.g., from Git or a shared storage).
  • The main application container reads from the shared volume.

4. Implement an External Code Server

  • Host the application’s code on an external location (e.g., an Azure Storage Account, NFS, or a remote Git repository).
  • The container only acts as a runtime, pulling the latest code dynamically.

5. Use Kustomize or Helm for Dynamic Config Updates

  • Helm can help manage application deployments, enabling dynamic updates without modifying container images.

6. Use .NET Hot Reload and Volume Mounting

  • If using .NET Core, leverage Hot Reload to apply code changes without restarting the container.
  • Mount the application source code from a storage volume so updates are reflected instantly.

Azure Naming Convention Tools and Best Practices

When working with Microsoft Azure, a well-defined naming convention is crucial for maintaining clarity, consistency, and efficiency across resources. In this guide, we'll explore best practices for naming Azure resources.

Why is a Naming Convention Important?

Following a structured naming convention helps in:

  • Easy resource identification and management.
  • Improved automation and governance.
  • Enhanced collaboration among teams.
  • Reduced ambiguity and errors.

Key Components of an Azure Naming Convention

Each Azure resource name should contain specific elements to provide clarity. A recommended format is:

[Company/Project]-[Workload]-[Environment]-[Region]-[ResourceType]-[Instance]

Example:

contoso-web-prod-eastus-vm01
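As a quick sanity check, a name can be validated against this convention with a simple regular expression. This is a hypothetical pattern; adjust the segments (environments, region codes, resource types) to match your own standard:

```shell
# Validate a name against [company]-[workload]-[environment]-[region]-[type+instance]
name="contoso-web-prod-eastus-vm01"
if echo "$name" | grep -Eq '^[a-z0-9]+-[a-z0-9]+-(dev|qa|prod)-[a-z0-9]+-[a-z]+[0-9]{2}$'; then
  echo "valid"     # prints: valid
else
  echo "invalid"
fi
```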

Best Practices for Azure Naming

  • Use standardized abbreviations: Example: rg for Resource Group, vm for Virtual Machine.
  • Follow a consistent case style: Lowercase for DNS-related resources, PascalCase or camelCase for others.
  • Include environment indicators: Use dev, qa, prod for different environments.
  • Avoid special characters and spaces: Stick to alphanumeric characters and hyphens.
  • Be concise but descriptive: Keep names readable while following Azure’s length limits.

Common Azure Resource Naming Abbreviations

  • Resource Group: rg
  • Virtual Machine: vm
  • Storage Account: st
  • App Service: app

Tools to Implement Azure Naming Conventions

  • Azure Resource Graph Explorer: Helps in querying and managing resources efficiently.
  • Azure Policy: Enables enforcement of naming conventions automatically.
  • Microsoft Cloud Adoption Framework: Provides best practices and guidance for cloud governance.
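As an illustration of the Azure Policy option above, the following policy rule denies resource groups whose names don't start with the `rg-` prefix. This is a minimal sketch of the rule portion of a policy definition; extend the name pattern and target types to cover your full convention:

```json
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Resources/subscriptions/resourceGroups" },
      { "field": "name", "notLike": "rg-*" }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
```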

Conclusion

A well-structured Azure naming convention helps keep your cloud resources organized and manageable. By following these best practices, you can ensure clarity, scalability, and consistency in your Azure environment.


Friday, February 7, 2025

Running Parallel Tasks in Azure DevOps YAML Pipeline

1. Running Parallel Jobs

If you want multiple jobs to run in parallel, define them separately under the jobs section.

jobs:
- job: Job1
  displayName: 'Job 1'
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: echo "Running Job 1"
- job: Job2
  displayName: 'Job 2'
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: echo "Running Job 2"

2. Running Parallel Steps within a Job

Option 1: Using dependsOn for Parallel Execution in Jobs

jobs:
- job: Build
  displayName: 'Build Job'
  steps:
  - script: echo "Building the application"
- job: Test
  displayName: 'Test Job'
  dependsOn: []  # Runs in parallel with Build
  steps:
  - script: echo "Running tests"
- job: Deploy
  displayName: 'Deploy Job'
  dependsOn: [Build, Test]  # Runs only after both are completed
  steps:
  - script: echo "Deploying the application"

Option 2: Using template for Parallel Execution

Create a separate YAML template file:

# parallel-template.yml
steps:
- script: echo "Task 1"
- script: echo "Task 2"

Then, reference it in the main pipeline:

jobs:
- job: ParallelJob
  steps:
  - template: parallel-template.yml

Option 3: Using background: true for Background Tasks

steps:
- script: echo "Starting Task 1"
  displayName: 'Task 1'
  background: true  # Runs in the background
- script: echo "Starting Task 2"
  displayName: 'Task 2'
  background: true  # Runs in the background

Best Approach?

  • Use separate jobs if tasks need different agents or environments.
  • Use parallel steps in a job if they share the same environment.
  • Use background tasks for lightweight independent tasks.

Let me know in the comments if you need a specific example for your pipeline! 🚀

Wednesday, February 5, 2025

Read ConfigMaps of a Namespace in AKS using .NET Core

To fetch ConfigMaps from an AKS namespace in .NET, you can use the Kubernetes client library (KubernetesClient) and its API for retrieving ConfigMaps.

🔹 Steps to Fetch ConfigMaps in a Namespace

  1. Modify the code to call ListNamespacedConfigMapAsync.
  2. Iterate through the retrieved ConfigMaps.
  3. Extract and display the required details.

Updated C# Code to Fetch ConfigMaps in the openlens Namespace

using System;
using System.Threading.Tasks;
using k8s;
using k8s.Models;

class Program
{
    static async Task Main(string[] args)
    {
        // Load Kubernetes config
        var config = KubernetesClientConfiguration.BuildDefaultConfig();

        // Create Kubernetes client
        IKubernetes client = new Kubernetes(config);

        // Specify the namespace
        string namespaceName = "openlens"; // Change as needed

        try
        {
            // Get the list of ConfigMaps in the namespace
            var configMapList = await client.CoreV1.ListNamespacedConfigMapAsync(namespaceName);

            Console.WriteLine($"ConfigMaps in namespace '{namespaceName}':");

            foreach (var configMap in configMapList.Items)
            {
                Console.WriteLine($"- Name: {configMap.Metadata.Name}");
                Console.WriteLine("  Data:");

                // Display the key-value pairs inside the ConfigMap
                if (configMap.Data != null)
                {
                    foreach (var kvp in configMap.Data)
                    {
                        Console.WriteLine($"    {kvp.Key}: {kvp.Value}");
                    }
                }
                else
                {
                    Console.WriteLine("    (No data)");
                }

                Console.WriteLine(new string('-', 40));
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error fetching ConfigMaps: {ex.Message}");
        }
    }
}

How This Works

  1. Uses ListNamespacedConfigMapAsync(namespaceName) to get all ConfigMaps in the given namespace.
  2. Iterates through each ConfigMap and prints:
    • Name
    • Key-value pairs (if any)
  3. Handles errors gracefully.
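When this code runs inside the cluster (using in-cluster configuration rather than your local kubeconfig), the pod's service account needs RBAC permission to list ConfigMaps. A minimal Role and RoleBinding sketch for the `openlens` namespace, where the service account name is a placeholder:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: openlens
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-configmaps
  namespace: openlens
subjects:
- kind: ServiceAccount
  name: my-app-sa   # placeholder: the service account your pod runs as
  namespace: openlens
roleRef:
  kind: Role
  name: configmap-reader
  apiGroup: rbac.authorization.k8s.io
```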

🔹 Steps to Run

  1. Ensure kubectl is configured correctly.
  2. Install the KubernetesClient NuGet package:
    dotnet add package KubernetesClient
    
  3. Run the program:
    dotnet run