Unleashing Potential through Cloud Innovation


Why Choose AWS Over Azure?

In the realm of cloud computing, two giants dominate the landscape: Amazon Web Services (AWS) and Microsoft Azure. Both platforms offer a vast array of services, from computing power to storage options to networking capabilities. But when it comes to choosing between AWS and Azure, many businesses and developers lean towards AWS. Let’s delve into the reasons why AWS often gets the nod over Azure.

1. Market Leadership and Maturity

AWS:

Launched in 2006, AWS is considered the pioneer in cloud computing. Its early entry into the market has given it a competitive edge in terms of experience, innovation, and infrastructure maturity.

As of our last update in 2022, AWS held the largest share of the cloud market, hovering around 32%. This dominance indicates a high level of trust and adoption by businesses worldwide.

Azure:

Azure, launched in 2010, has made significant strides, but it still lags behind AWS in market share and service diversity. While growing rapidly, it held a market share of approximately 20% over the same period.

For the most recent and detailed market share data, you might want to refer to reports from market research firms like Gartner, Synergy Research Group, or Canalys. They often provide detailed breakdowns of cloud market shares and growth rates.

2. Service Breadth and Depth

AWS:

  • Service Count: AWS offers over 200 distinct services.
  • Core Services: AWS’s core services include Amazon EC2 (compute), Amazon S3 (storage), and Amazon RDS (relational database service). These services have been around for a long time and have matured with extensive features and configurations.
  • Innovative Services: AWS often introduces new services in response to emerging tech trends. For instance, they have Amazon SageMaker for machine learning, AWS Lambda for serverless computing, and AWS IoT Core for IoT solutions.
  • Specialized Services: AWS provides specialized services like AWS Ground Station (for satellite communication) and AWS Snowmobile (for exabyte-scale data transfer).

Azure:

  • Service Count: Azure offers over 100 services, an extensive catalog that is nonetheless generally considered smaller than AWS’s when comparing similar service categories.
  • Core Services: Azure’s core services include Azure Virtual Machines (compute), Azure Blob Storage (storage), and Azure SQL Database (relational database service). These are direct competitors to AWS’s core services and offer a wide range of features.
  • Innovative Services: Azure also has services tailored to emerging tech trends, such as Azure Machine Learning for AI and machine learning, Azure Functions for serverless computing, and Azure IoT Hub for IoT solutions.
  • Integration with Microsoft Products: One of Azure’s strengths is its seamless integration with other Microsoft products, like Windows Server, Active Directory, and SQL Server. This makes it a go-to choice for enterprises heavily invested in Microsoft technologies.

While both AWS and Azure offer a comprehensive range of services catering to various technological needs, AWS, due to its earlier entry into the cloud market, has a slightly broader and deeper service catalog. However, Azure’s tight integration with other Microsoft products can make it a preferred choice for businesses already using Microsoft’s software ecosystem. The decision between AWS and Azure in terms of service breadth and depth would depend on the specific services a business requires and any existing technological investments.

3. Open Source Friendliness

AWS:

  • Commitment to Open Source: AWS has consistently shown its dedication to the open-source community. They’ve actively contributed to and even initiated several open-source projects.
  • Broad Language Support: AWS services, especially AWS Lambda, support a plethora of open-source languages like Python, Node.js, and Ruby, to name a few.
  • Collaborations: AWS has collaborated with popular open-source projects, ensuring that their services are optimized for these platforms. For instance, AWS offers managed versions of open-source databases like MariaDB, PostgreSQL, and others.

Azure:

  • Growing Engagement: Azure has been steadily increasing its engagement with the open-source community, especially under Satya Nadella’s leadership.
  • Azure Sphere: An example of Azure’s commitment to open source is Azure Sphere, which is built on a custom version of Linux.
  • Support for Open Source Tools: Azure supports a range of open-source tools and technologies, but its integration and optimization for these tools are sometimes seen as trailing AWS.

4. Global Reach

Azure’s strongest point against AWS is its global reach.

While AWS initially had a head start in terms of infrastructure and global reach, Azure has aggressively expanded its global footprint in recent years. AWS emphasizes its Availability Zones for resilience, while Azure focuses on a broader regional presence and introduces concepts like Edge Zones for specific use cases. The choice between AWS and Azure in terms of infrastructure would largely depend on the specific needs of the business, such as data residency requirements, service availability in a particular region, and latency needs.

AWS:

  • Availability Zones: AWS has 77 Availability Zones.
  • Geographic Regions: AWS operates in 24 geographic regions globally.
  • Announced Plans: AWS has announced plans for 18 more Availability Zones and six more AWS Regions.
  • Data Centers: AWS has data centers in North America, South America, Europe, Asia, Africa, and Australia.

Azure:

  • Data Center Regions: Azure has more data center regions than any other cloud provider, with 60+ regions worldwide.
  • Availability Zones: Azure’s approach to Availability Zones is slightly different, but they also offer this feature in many of their regions to ensure resiliency and high availability.
  • Geographies: Azure divides its service availability into geographies, ensuring data residency, sovereignty, compliance, and resiliency. They have defined geographies in North America, Europe, Asia Pacific, and more.
  • Edge Zones: Azure has also introduced Edge Zones, which are extensions of Azure, placed in densely populated areas, providing Azure services and enabling the development of latency-sensitive applications.

5. Learning Curve and Documentation

AWS:

  • AWS Documentation: AWS provides an extensive online documentation library that covers every service in detail. This includes user guides, developer guides, API references, and tutorials.
  • AWS Training and Certification: AWS offers a wide range of digital and classroom training. Their courses are designed to help individuals understand the architecture, security, and infrastructure of AWS.
  • AWS Whitepapers: AWS has a vast collection of whitepapers written by AWS team members, partners, and customers. These whitepapers provide a deep dive into various topics, from architecture best practices to advanced networking configurations.
  • AWS re:Invent: This is an annual conference hosted by AWS, where they introduce new services, features, and best practices. Many of the sessions are available online for free, providing valuable learning resources.
  • AWS Well-Architected Framework: This is a set of best practices and guidelines that help users build secure, high-performing, resilient, and efficient applications.

Azure:

  • Azure Documentation: Azure’s documentation is comprehensive and covers all their services. It includes quickstarts, tutorials, API references, and more.
  • Microsoft Learn: This is Microsoft’s primary platform for providing free online training on all its services, including Azure. It offers learning paths, modules, and certifications tailored to various roles, from beginner to expert.
  • Azure Architecture Center: This provides best practices, templates, and guidelines for building on Azure. It’s a valuable resource for architects and developers looking to design and implement solutions on Azure.
  • Azure Dev Days and Webinars: Microsoft frequently hosts events and webinars where they introduce new features, services, and best practices for Azure.
  • Azure Forums and Q&A: Azure has an active community where users can ask questions and get answers from both Microsoft employees and the community at large.

Both AWS and Azure offer extensive resources to help users understand and make the most of their services. While AWS’s longer tenure means it has a more extensive list of long-term resources and a well-established training program, Azure’s integration with the broader suite of Microsoft learning resources and its active community engagement ensures users have ample support. The perception of one being more “beginner-friendly” than the other can be subjective and may vary based on individual preferences and prior experiences.

When weighing AWS over Azure, AWS’s market leadership, service diversity, open-source commitment, and global reach make it a compelling choice for many. While Azure remains a formidable competitor, those looking for a mature, comprehensive, and globally recognized cloud platform often find AWS aligning more closely with their needs.

Real Case – How We Optimized AWS EC2 & EBS Costs

In this article, we’ll walk you through our journey of optimizing AWS EC2 and EBS pricing and costs by identifying instances that haven’t been used in the last 30 days and subsequently deleting them, along with their associated EBS volumes.

The Problem: EC2 instances & EBS volumes not being used.

As our infrastructure grew, so did the number of EC2 instances. Over time, we realized that not all of them were actively being used. Some were remnants of old projects, while others were test instances that had served their purpose. These instances were silently adding to our monthly AWS bill.

The Solution: Lambda function & CloudWatch alarm setup

Our goal was clear: Identify EC2 instances that haven’t been used in the last 30 days and delete them to cut costs. Here’s how we approached the problem:

  1. Create a Custom Metric:
    • We use an AWS Lambda function (or a standalone script) to monitor the desired metrics, such as CPU utilization, of our EC2 instances.
    • If an instance has been underutilized for an extended period, we push a custom metric to CloudWatch.
  2. Create a CloudWatch Alarm:
    • Based on the custom metric, we created an alarm in CloudWatch.
  3. Delete the EC2 instances & EBS volumes:
    • ⚠️ Important Note: This method focuses on detecting instances with low CPU usage. However, there could be instances with minimal CPU activity but significant network usage. It’s crucial to consider all aspects of an instance’s activity before deeming it “unused” (a sketch of such a network check follows this list).
    • If you don’t want to delete the EBS volumes outright, the section further below shows how we also trimmed 35% of our AWS bill by moving unused EBS volumes to colder storage.
    • Don’t know what an EBS volume is or how AWS manages it? Check this article.
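
As the note above points out, low CPU by itself is not proof that an instance is unused. One way to close that gap is to also check network traffic over the same 30-day window before flagging an instance. The sketch below is our own illustrative helper (the function name and the 1 MB/day threshold are arbitrary choices, not part of the original setup); an instance would only be treated as unused if both the CPU check and this network check pass.

import boto3
import datetime

def network_is_quiet(cloudwatch, instance_id, days=30, daily_bytes_threshold=1_000_000):
    # Returns True only if the instance sent less than the threshold on every day of the window
    metrics = cloudwatch.get_metric_data(
        MetricDataQueries=[{
            'Id': 'net1',
            'MetricStat': {
                'Metric': {
                    'Namespace': 'AWS/EC2',
                    'MetricName': 'NetworkOut',
                    'Dimensions': [{'Name': 'InstanceId', 'Value': instance_id}],
                },
                'Period': 86400,   # one day in seconds
                'Stat': 'Sum',     # total bytes sent per day
            },
            'ReturnData': True,
        }],
        StartTime=datetime.datetime.now() - datetime.timedelta(days=days),
        EndTime=datetime.datetime.now(),
    )
    values = metrics['MetricDataResults'][0]['Values']
    return bool(values) and all(v < daily_bytes_threshold for v in values)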

Here’s a step-by-step guide:

Create a Custom Metric to identify the unused EC2 instances:

The Lambda Function

  • Create a Lambda function with permissions to describe EC2 instances and put CloudWatch metrics.
  • Use the AWS SDK (e.g., Boto3 for Python) to describe your EC2 instances and their metrics.
  • If an instance has low utilization for over 30 days, push a custom metric to CloudWatch.

Here’s our Python example code using boto3:

import boto3
import datetime

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    cloudwatch = boto3.client('cloudwatch')

    # Get all instances
    instances = ec2.describe_instances()

    unused_instances = []

    for reservation in instances['Reservations']:
        for instance in reservation['Instances']:
            instance_id = instance['InstanceId']
            instance_type = instance['InstanceType']

            # Get CPU utilization for the last 30 days
            metrics = cloudwatch.get_metric_data(
                MetricDataQueries=[
                    {
                        'Id': 'm1',
                        'MetricStat': {
                            'Metric': {
                                'Namespace': 'AWS/EC2',
                                'MetricName': 'CPUUtilization',
                                'Dimensions': [
                                    {
                                        'Name': 'InstanceId',
                                        'Value': instance_id
                                    },
                                ]
                            },
                            'Period': 86400,  # One day in seconds
                            'Stat': 'Average',
                        },
                        'ReturnData': True,
                    },
                ],
                StartTime=datetime.datetime.now() - datetime.timedelta(days=30),
                EndTime=datetime.datetime.now(),
            )

            # Check if average CPU utilization is below a threshold (e.g., 5%) for the entire 30-day period
            if metrics['MetricDataResults'][0]['Values'] and all(value < 5 for value in metrics['MetricDataResults'][0]['Values']):
                ebs_volumes = []
                for block_device in instance['BlockDeviceMappings']:
                    if 'Ebs' not in block_device:
                        continue  # skip instance-store mappings
                    volume_id = block_device['Ebs']['VolumeId']
                    volume = ec2.describe_volumes(VolumeIds=[volume_id])
                    ebs_volumes.append({
                        'VolumeId': volume_id,
                        'Size': volume['Volumes'][0]['Size']
                    })

                unused_instances.append({
                    'InstanceId': instance_id,
                    'InstanceType': instance_type,
                    'EBSVolumes': ebs_volumes
                })

    # Push the custom metric that the CloudWatch alarm below watches
    cloudwatch.put_metric_data(
        Namespace='UnusedEC2',
        MetricData=[{
            'MetricName': 'UnusedInstanceCount',
            'Value': len(unused_instances),
            'Unit': 'Count'
        }]
    )

    return unused_instances

Here’s a high-level breakdown of our code:

  1. Data Collection: Using Boto3, we pulled data on all EC2 instances and their 30-day CPU utilization from CloudWatch.
  2. Idle Detection & EBS Association: Identified EC2 instances with low CPU activity (below 5%) and located their associated EBS volumes, pinpointing potential cost drains.
  3. Optimization & Savings: By regularly assessing and acting on this data, we streamlined our infrastructure, reducing costs tied to unused EC2 instances and EBS volumes.

This Lambda function, when executed, returns a list of unused instances and publishes the UnusedInstanceCount custom metric, providing a clear picture of potential cost savings.
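
The clean-up itself (step 3 in the solution outline) is not part of this identification function. As a rough sketch only, and assuming backups and stakeholder sign-off are already in place (see the note further below), the returned list could be acted on along these lines; the delete_unused helper is our own illustrative wrapper, not code from our production setup:

import boto3
from botocore.exceptions import ClientError

def delete_unused(unused_instances):
    ec2 = boto3.client('ec2')

    for item in unused_instances:
        # Terminate the idle instance and wait until it is gone
        ec2.terminate_instances(InstanceIds=[item['InstanceId']])
        ec2.get_waiter('instance_terminated').wait(InstanceIds=[item['InstanceId']])

        # Delete any EBS volumes that survived termination; volumes created with
        # DeleteOnTermination=True are removed automatically with the instance
        for volume in item['EBSVolumes']:
            try:
                ec2.delete_volume(VolumeId=volume['VolumeId'])
            except ClientError:
                pass  # volume was already deleted alongside the instance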

Create a CloudWatch Alarm

  1. Navigate to the CloudWatch console.
  2. In the left navigation pane, click on Metrics.
  3. Click on the Custom namespace and then the UnusedEC2 namespace.
  4. Select the UnusedInstanceCount metric.
  5. Click on the Create Alarm button.
  6. Configure the alarm:
    • Name and description.
    • Define the condition. For instance, if you want to be alerted when there’s at least one unused instance, set the threshold to >= 1.
    • Configure actions like sending a notification.
  7. Click on the Create Alarm button.

Now, whenever the Lambda function identifies unused instances and pushes the metric, the CloudWatch Alarm will trigger if the condition is met.
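
If you prefer to create the alarm programmatically instead of through the console, a minimal boto3 equivalent of the steps above could look like the sketch below; the alarm name and the SNS topic ARN are placeholders to replace with your own:

import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_alarm(
    AlarmName='unused-ec2-instances',                 # placeholder name
    AlarmDescription='At least one EC2 instance has been idle for 30 days',
    Namespace='UnusedEC2',
    MetricName='UnusedInstanceCount',
    Statistic='Maximum',
    Period=86400,                                     # evaluate one data point per day
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:cost-alerts'],  # placeholder SNS topic
)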

Remember to schedule the Lambda function to run periodically (e.g., daily) using CloudWatch Events or EventBridge to regularly check for unused instances and push the metric.
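
As a sketch of that scheduling step, the rule and permissions can also be set up with boto3; the rule name, function name, and account details below are placeholders:

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

function_name = 'find-unused-ec2'  # placeholder
function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:find-unused-ec2'  # placeholder

# Run the check once a day
rule = events.put_rule(Name='daily-unused-ec2-check', ScheduleExpression='rate(1 day)')
events.put_targets(Rule='daily-unused-ec2-check',
                   Targets=[{'Id': 'unused-ec2-lambda', 'Arn': function_arn}])

# Allow EventBridge to invoke the Lambda function
lambda_client.add_permission(
    FunctionName=function_name,
    StatementId='allow-daily-unused-ec2-check',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)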

By implementing this proactive approach, we saw a substantial reduction in our monthly AWS bill. Not only did we save on the costs of the EC2 instances, but also on the storage costs associated with the EBS volumes.

Note: Before deleting any EC2 instances or EBS volumes, always ensure you have backups and have communicated with relevant stakeholders. It’s essential to ensure no critical data or applications are lost in the process.

How a Smart EBS Policy Trimmed 35% of AWS Expenses

The Challenge of Cost Management in the Cloud

For businesses operating on AWS, managing costs can be a complex task, especially when it comes to EBS policies. With resources dynamically scaling up and down based on demand, it’s easy for unused or underutilized resources to accumulate, leading to unnecessary expenses. EC2 instances and EBS volumes are common culprits in this regard, as they can be inadvertently left running or attached.

The Birth of a Cost-Saving Strategy

Our organization recognized the need for a proactive approach to cost management: a way to effectively identify and address idle EBS volumes associated with EC2 instances. The challenge was to trim costs without compromising data integrity or accessibility.

The EBS Policy Solution

To tackle this challenge, the organization devised a two-fold strategy:

  1. Identifying Inactive EBS Volumes: They implemented a custom monitoring system that tracked the activity of EC2 instances and their associated EBS volumes. If an EBS volume remained inactive for more than 15 days, it was flagged as a candidate for cost reduction.
  2. Archiving to S3 with Restoration Capability: When an EBS volume met the inactivity criteria, it was automatically archived to an Amazon S3 bucket using lifecycle policies. Importantly, this process was designed to be reversible. While the volume was archived, it could still be restored swiftly when needed.

The Benefits of the EBS Policy

The implementation of this EBS policy yielded significant benefits:

  1. Cost Reduction: The organization achieved a remarkable 35% reduction in their AWS costs. By identifying and archiving inactive EBS volumes, they were no longer paying for resources that weren’t actively serving any purpose.
  2. Data Preservation: Despite archiving, the organization ensured data preservation. Archived EBS volumes in S3 could be restored promptly if required, allowing for flexibility without compromising data integrity.
  3. Automated Efficiency: The custom monitoring system and lifecycle policies automated the entire process. This reduced the manual effort required for cost management and made it a seamless part of their AWS operations.

1: Create a Lambda Function

  1. Access the AWS Lambda Console: Log in to your AWS account and navigate to the Lambda console.
  2. Create a New Function: Click on the “Create function” button and choose “Author from scratch.” Give your Lambda function a meaningful name, select the appropriate runtime (e.g., Python, Node.js, etc.), and create a new execution role with the necessary permissions to access EC2 and S3 resources.
  3. Function Code: Write the Lambda function code to identify and archive inactive EBS volumes. Here’s a simplified example in Python:
import boto3
import datetime

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    days_threshold = 15

    # List all EBS volumes
    volumes = ec2.describe_volumes()

    for volume in volumes['Volumes']:
        # Check when the volume was last attached
        attachments = volume.get('Attachments', [])
        last_attachment = attachments[-1].get('AttachTime') if attachments else None
        if last_attachment:
            # AttachTime is timezone-aware, so compare it with an aware "now"
            now = datetime.datetime.now(datetime.timezone.utc)
            days_inactive = (now - last_attachment).days

            # Archive volumes inactive for over 15 days
            if days_inactive > days_threshold:
                volume_id = volume['VolumeId']

                # Create a snapshot of the EBS volume and wait for it to complete
                # (waiting for large volumes may exceed Lambda's 15-minute timeout)
                snapshot = ec2.create_snapshot(VolumeId=volume_id,
                                               Description=f'Archive of {volume_id}')
                ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snapshot['SnapshotId']])

                # Move the snapshot to the low-cost archive storage tier.
                # (EBS snapshots cannot be copied directly into a customer-owned S3
                # bucket; the snapshot archive tier is the managed equivalent of that
                # "colder" storage step.)
                ec2.modify_snapshot_tier(SnapshotId=snapshot['SnapshotId'],
                                         StorageTier='archive')

                # Delete the original EBS volume (it must be detached first)
                ec2.delete_volume(VolumeId=volume_id)
  4. Testing: Test the Lambda function to ensure it correctly identifies and archives EBS volumes.

2: Set up a CloudWatch Events Rule

  1. Access the CloudWatch Events Console: Navigate to the CloudWatch console.
  2. Create a New Rule: Click on “Rules” in the left-hand menu and then click the “Create rule” button.
  3. Event Source: Choose what triggers the Lambda function. In this case, you may want to use a scheduled event so the function runs on a fixed cadence, for example daily:
    • Rule type: Schedule (rather than an event pattern)
    • Target: the Lambda function you created
  4. Configure the Schedule: Specify the schedule for running the Lambda function, such as daily at a specific time.
  5. Create Rule: Review your settings and click the “Create rule” button.

3: Testing and Monitoring

Test the setup to ensure that the Lambda function is correctly identifying and archiving EBS volumes with over 15 days of inactivity. Monitor the CloudWatch Logs and the resulting archived snapshots to confirm the process is working as expected.

Please note that this is a simplified example, and in a production environment, you should consider error handling, logging, and more robust error recovery mechanisms. Additionally, you may want to add more sophisticated logic to handle tagging, notifying users, and ensuring data integrity during the archiving process.
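
Because the whole strategy hinges on the archive step being reversible, it is worth having the restore path ready as well. The sketch below assumes the snapshot was moved to the archive tier as in the function above; the two helper names are our own:

import boto3

ec2 = boto3.client('ec2')

def start_restore(snapshot_id):
    # Temporarily bring the snapshot back from the archive tier (kept restored for 7 days).
    # Restores from the archive tier can take up to 72 hours to complete.
    ec2.restore_snapshot_tier(SnapshotId=snapshot_id, TemporaryRestoreDays=7)

def recreate_volume(snapshot_id, availability_zone):
    # Run this once the restore has completed; it recreates a volume from the snapshot.
    volume = ec2.create_volume(SnapshotId=snapshot_id, AvailabilityZone=availability_zone)
    return volume['VolumeId']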
