
Automate Argo Workflow Monitoring with Teams Real-Time Alerts

Written by SDG Group | Jul 28, 2025 8:18:20 AM

Boost your operational excellence with automated, API-driven monitoring for Argo Workflows.

Welcome to TechStation, SDG Group’s hub for uncovering the latest innovations in data and analytics! In this article, we dive into a powerful, API-driven solution that automates the monitoring of Argo Workflows, a key orchestrator for modern CI/CD pipelines. From fetching logs in an AWS environment to sending structured reports via API, discover how this integration provides real-time visibility, reduces manual debugging efforts, and delivers actionable alerts directly to your development team's workspace in Microsoft Teams.


In today’s fast-paced software development scene, continuous integration and continuous deployment (CI/CD) pipelines are the backbone of delivering reliable software at scale.

Tools like Argo Workflows have revolutionized how teams manage these pipelines, enabling efficient orchestration of complex workflows.

However, as complexity grows, so does the challenge of effective CI/CD pipeline monitoring.

Monitoring these pipelines isn’t just a technical requirement—it’s a critical component of operational excellence.

Teams must quickly identify failures, bottlenecks, and unexpected behaviour to ensure software quality.

Traditional, manual monitoring approaches are often inefficient.

What’s needed is a solution that not only captures workflow events automatically but also delivers clear, concise updates to the right people at the right time.

This article provides a complete guide on how to successfully automate the monitoring of Argo Workflows pipelines, integrating it with Microsoft Teams to provide real-time updates in an AWS environment.

By focusing on delivering actionable information directly into a Teams channel, you can enhance visibility, reduce response times, and improve the reliability of your software delivery process.

 

Automating CI/CD Reporting: Use Case Description

This guide describes how to automate the monitoring of Argo CI/CD workflows and streamline failure reporting to a development team via Microsoft Teams.

By creating a solution that periodically checks pipeline statuses, fetches logs, and summarizes issues, the manual effort can be entirely replaced with an efficient and consistent process.

The automation is implemented as an Argo Workflow that runs on a daily schedule (via a cron workflow) and executes a custom Python script.

This script fetches pipeline metadata, parses logs, and generates clear, actionable reports delivered to Teams as reporting tables.

These reports summarize failures, provide direct links to logs, and extract key error messages.

Furthermore, custom reporting can be implemented by adding assignees to failures or categorizing issues by type, such as connection problems or runtime-related errors.

 

Solution Architecture for Argo Workflow Monitoring

To implement this automation, a robust architecture must be designed using key components of the AWS ecosystem and Argo’s API capabilities:

AWS Environment
  • S3 Buckets: Logs generated by Argo workflows are stored in S3 buckets for scalable and easy access.
  • IAM Roles: Fine-grained access control ensures secure handling of logs and other AWS resources.

Argo Workflows
  • Cron Workflow: A scheduled Argo Workflow runs daily, executing the monitoring script via a workflow template in a user-defined environment.
  • Argo API: The API is used to retrieve pipeline statuses, workflow metadata, and other details required for reporting or to automatically retry failed pipelines.

Custom Python Script
  • Log Retrieval: The script fetches logs of interest directly from AWS S3.
  • Parsing and Analysis: It scans logs for errors, warnings, and performance issues, grouping findings into categories.
  • Actions to take: The script can automatically retry pipelines that failed with common issues like connection errors, OOMKilled, or pod deleted statuses.
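For example, such a retry could be issued through the Argo server's retry endpoint. The snippet below is a minimal sketch, not the article's exact implementation: the server URL and token handling are assumptions, and the endpoint path follows the Argo Workflows REST API.

import requests

def retry_workflow(argo_server: str, namespace: str, name: str, token: str) -> None:
    """Ask the Argo server to retry a failed workflow."""
    response = requests.put(
        f"{argo_server}/api/v1/workflows/{namespace}/{name}/retry",
        headers={"Authorization": f"Bearer {token}"},
        json={"name": name, "namespace": namespace},
    )
    response.raise_for_status()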

Microsoft Teams Integration
  • API for Reporting: The processed data is formatted into structured tables and sent to Teams. The team receives a notification directly in their channel with all reported failures.
  • Report Details: You can choose to split reports into different tables based on the issue category or prepare one aggregate table. This table will include: a) pipeline information, b) links to logs and the Argo pipeline, c) extracted error messages, and d) other required metadata.

Required Technical Packages

To implement the automation script in Python, you will need the following libraries:

 

 
import sys
from typing import Dict, List
import os
import requests
import json
import re
import logging
import boto3

 

 

Step-by-Step Guide: How the Automation Works

Below is a summary of the process flow.

 

Typically, companies run jobs of interest daily in various environments (e.g., development) to test, update, load, or refresh data.

To keep track of these jobs, a monitoring process is mandatory.

However, having a fast, precise, and automated way to do this is key to success.

Once all jobs are complete, a pre-defined cron workflow for monitoring is triggered.

To implement this solution, you start by defining this cron workflow, which launches a workflow template.

The template runs a Python script that will (a high-level skeleton is sketched after this list):

  1. Query the Argo Workflows API to extract the status of all pipelines within a specified namespace, using an access token for authentication.
  2. Filter the results to isolate workflows based on their statuses and datetime, focusing only on failed pipelines for analysis.
  3. For each failed workflow, retrieve its log file from the S3 bucket where it is stored, using its unique identifier to locate the correct file.
  4. Parse the log contents to extract relevant error messages and categorize the issues (e.g., configuration errors, resource limitations). This structured error information is then used to generate actionable reports.
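Putting these four steps together, the script's main flow could look like the sketch below. The helper functions (list_workflows, fetch_log_from_s3, parse_log, send_teams_report) are illustrative names whose logic is fleshed out in the sections that follow, and ARGO_TOKEN is assumed to be injected as an environment variable.

import os

def main() -> None:
    """High-level flow of the daily monitoring script (illustrative)."""
    token = os.environ["ARGO_TOKEN"]  # bearer token for the Argo server (assumed env variable)

    # 1. Query the Argo Workflows API for all workflows in the monitored namespace
    workflows = list_workflows(namespace="argo", token=token)

    # 2. Keep only workflows that ended in a failed state (a datetime filter can be added here)
    failed = [wf for wf in workflows if wf.get("status", {}).get("phase") in ("Failed", "Error")]

    # 3. Fetch each failed workflow's log from S3 and 4. parse it into report rows
    report_rows = []
    for wf in failed:
        name = wf["metadata"]["name"]
        for finding in parse_log(fetch_log_from_s3(name)):
            report_rows.append({
                "pipeline": name,
                "status": wf["status"]["phase"],
                "details": finding["message"],
            })

    # Build the Teams report and post it to the channel's incoming webhook
    send_teams_report(report_rows)

if __name__ == "__main__":
    main()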


1. Querying the Argo Workflows API


To interact with the Argo Workflows API, you can use its RESTful endpoints with a bearer access token. The prerequisites are:

  • An Argo Workflows server deployed and accessible via an API endpoint

  • A Bearer Access token

  • The endpoint URL

Here is an example of how to obtain a token from a Kubernetes service account:

 

 
# example to obtain a token from Kubernetes service account
kubectl -n argo create token <serviceaccount>
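Once the token is available, the workflows in the target namespace can be listed and filtered by status. The snippet below is a minimal sketch: the server URL is a placeholder, and the listing route used is the Argo server's /api/v1/workflows/{namespace} endpoint.

import requests

ARGO_SERVER = "https://argo-server.example.com:2746"  # placeholder: your Argo server endpoint URL

def list_workflows(namespace: str, token: str) -> list[dict]:
    """Return the workflows of a namespace from the Argo Workflows API."""
    response = requests.get(
        f"{ARGO_SERVER}/api/v1/workflows/{namespace}",
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    # The response is a WorkflowList; "items" can be null when the namespace has no workflows
    return response.json().get("items") or []

The returned items can then be filtered on their status phase (e.g., Failed or Error) and on their finish time, as described in step 2 above.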
 
 
 
2. Retrieving Logs from AWS S3


Once failed pipelines are identified, their logs can be fetched. Since the logs are stored in an S3 bucket in an AWS environment, the boto3 library is required.

The following Python code demonstrates how to access logs for failed pipelines:


 
import boto3

# AWS credentials and S3 bucket configuration
# (in a cluster, prefer IAM roles over hard-coded credentials)
AWS_ACCESS_KEY = "your_aws_access_key"
AWS_SECRET_KEY = "your_aws_secret_key"
BUCKET_NAME = "your-s3-bucket-name"
LOG_FILE_KEY = "logs/pipeline-logs/pipelineA.log"

# Initialize the S3 client
s3_client = boto3.client(
    's3',
    aws_access_key_id=AWS_ACCESS_KEY,
    aws_secret_access_key=AWS_SECRET_KEY,
)

# Fetch the log file and decode its content
response = s3_client.get_object(Bucket=BUCKET_NAME, Key=LOG_FILE_KEY)
log_content = response['Body'].read().decode('utf-8')
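To process every failed pipeline rather than a single hard-coded file, the object key can be derived from the workflow name. The key convention below is an assumption modeled on the example path above; adapt it to how your workflows actually archive logs to S3.

def fetch_log_from_s3(workflow_name: str) -> str:
    """Download and decode the log of one workflow (reuses s3_client and BUCKET_NAME from above)."""
    key = f"logs/pipeline-logs/{workflow_name}.log"  # assumed key naming convention
    response = s3_client.get_object(Bucket=BUCKET_NAME, Key=key)
    return response['Body'].read().decode('utf-8')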

 

3. Parsing Logs for Error Messages


To extract error messages and other metadata (like timestamps or URLs) from the logs, a JSON parser is needed.

One could also process and transform the extracted information by analyzing failure patterns and types.
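A minimal parsing sketch is shown below. It assumes each log line is a JSON object exposing level, message, and time fields; both the field names and the category patterns are assumptions to adapt to your actual log format and failure types.

import json
import re

# Illustrative category patterns; tune them to the failure types seen in your pipelines
CATEGORIES = {
    "connection": re.compile(r"connection (refused|reset|timed out)", re.IGNORECASE),
    "resources": re.compile(r"OOMKilled|out of memory", re.IGNORECASE),
    "configuration": re.compile(r"missing (parameter|variable)|invalid config", re.IGNORECASE),
}

def parse_log(log_content: str) -> list[dict]:
    """Extract error entries from a JSON-lines log and tag each one with an issue category."""
    findings = []
    for line in log_content.splitlines():
        try:
            entry = json.loads(line)  # assumes one JSON object per log line
        except json.JSONDecodeError:
            continue  # skip lines that are not valid JSON
        if str(entry.get("level", "")).upper() != "ERROR":
            continue
        message = entry.get("message", "")
        category = next(
            (name for name, pattern in CATEGORIES.items() if pattern.search(message)),
            "other",
        )
        findings.append({"timestamp": entry.get("time"), "message": message, "category": category})
    return findings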

 

4. Sending Notifications to Microsoft Teams


To create a message with reporting tables for Microsoft Teams, you should use a Webhook or the Microsoft Graph API to send structured messages in a clear format.

For this purpose, Microsoft Teams supports Adaptive Cards, a JSON-based format for rendering rich messages, including tables.

 

 
{
   "type": "message",
   "attachments": [
     {
       "contentType": "application/vnd.microsoft.card.adaptive",
       "content": {
         "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
         "version": "1.4",
         "type": "AdaptiveCard",
         "body": [
           {
             "type": "TextBlock",
             "size": "Large",
             "weight": "Bolder",
             "text": "Argo Workflows Monitoring Report"
           },
           {
             "type": "TextBlock",
             "text": "Summary of the latest pipeline execution results and detected issues.",
             "wrap": true,
             "spacing": "Small"
           },
           {
             "type": "Table",
             "columns": [
               {
                 "width": "stretch"
               },
               {
                 "width": "auto"
               },
               {
                 "width": "auto"
               }
            ],
            "rows": [
              {
                "type": "TableRow",
                "cells": [
                  { "type": "TableCell", "items": [{ "type": "TextBlock", "text": "**Pipeline Name**" }] },
                  { "type": "TableCell", "items": [{ "type": "TextBlock", "text": "**Status**" }] },
                  { "type": "TableCell", "items": [{ "type": "TextBlock", "text": "**Details**" }] }
                ],
                "style": "emphasis"
              },
              {
                "type": "TableRow",
                "cells": [
                  { "type": "TableCell", "items": [{ "type": "TextBlock", "text": "Pipeline A" }] },
                  { "type": "TableCell", "items": [{ "type": "TextBlock", "text": "Failed", "color": "Attention" }] },
                  { "type": "TableCell", "items": [{ "type": "TextBlock", "text": "Error in step 3" }] }
                ]
              },
              {
                "type": "TableRow",
                "cells": [
                  { "type": "TableCell", "items": [{ "type": "TextBlock", "text": "Pipeline B" }] },
                  { "type": "TableCell", "items": [{ "type": "TextBlock", "text": "Failed", "color": "Attention" }] },
                  { "type": "TableCell", "items": [{ "type": "TextBlock", "text": "OOMKilled failure. Please retry." }] }
                ]
              }
            ]
          }
        ]
      }
    }
  ]
}
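Rather than writing this payload by hand, it can be assembled from the findings produced by the parsing step. The sketch below builds the same card structure for an arbitrary list of failures; the pipeline, status, and details keys are illustrative field names rather than a fixed schema.

def build_report_payload(failures: list[dict]) -> dict:
    """Assemble an Adaptive Card message with one table row per reported failure."""
    header = {
        "type": "TableRow",
        "style": "emphasis",
        "cells": [
            {"type": "TableCell", "items": [{"type": "TextBlock", "text": f"**{title}**"}]}
            for title in ("Pipeline Name", "Status", "Details")
        ],
    }
    rows = [header] + [
        {
            "type": "TableRow",
            "cells": [
                {"type": "TableCell", "items": [{"type": "TextBlock", "text": failure["pipeline"]}]},
                {"type": "TableCell", "items": [{"type": "TextBlock", "text": failure["status"], "color": "Attention"}]},
                {"type": "TableCell", "items": [{"type": "TextBlock", "text": failure["details"]}]},
            ],
        }
        for failure in failures
    ]
    card = {
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "version": "1.4",
        "type": "AdaptiveCard",
        "body": [
            {"type": "TextBlock", "size": "Large", "weight": "Bolder", "text": "Argo Workflows Monitoring Report"},
            {
                "type": "Table",
                "columns": [{"width": "stretch"}, {"width": "auto"}, {"width": "auto"}],
                "rows": rows,
            },
        ],
    }
    return {
        "type": "message",
        "attachments": [{"contentType": "application/vnd.microsoft.card.adaptive", "content": card}],
    }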
 

 

To send the JSON payload to Teams, you can use an Incoming Webhook.

First, create the webhook in your target Microsoft Teams channel via Connectors > Incoming Webhook.

Give it a name (e.g., "Argo Monitoring Bot") to obtain the webhook URL.

Then, send the JSON payload using a POST request with the webhook URL, as shown in this Python snippet:

 

 
# payload is the JSON message (Adaptive Card) created above
# TEAMS_WEBHOOK is the Incoming Webhook URL obtained from the Teams channel
response = requests.post(TEAMS_WEBHOOK, json=payload)
response.raise_for_status()  # raises an error if Teams rejects the message
 


According to the schedule of your cron workflow, the notification will then be sent to your Teams channel in the defined format.

 

Key Benefits of Automated Argo Monitoring

This use case highlights the power of combining modern DevOps tools like Argo Workflows with cloud infrastructure and communication platforms. The key benefits include:

  • Proactive Monitoring: This automated approach ensures that issues are tackled promptly, potentially saving over 40% of the time typically spent on debugging efforts.

  • Reduced Context-Switching: Teams receive alerts directly in their workspace, removing the need to switch between different tools and minimizing the risk of human error.

  • Scalability, Reliability, and Consistency: No pipeline will be overlooked or missed. The automated process ensures every workflow is monitored consistently, enhancing the reliability of your entire CI/CD process.

Ready to take your CI/CD process efficiency to the next level? Contact us for a personalized consultation and discover how tailor-made automation solutions can become the engine of your DevOps strategy, ensuring proactive monitoring, a drastic reduction in debug times, and maximum reliability of your software.