GitHub Actions vs. AWS CodePipeline

GitHub Actions vs. AWS CodePipeline – Overview

As software development moves to the cloud, there’s an increasing number of tools that can be used to build, test and deploy applications. While they’re not all directly competing with each other, it can be hard to decide which tool is right for your team.

In this article, we’ll take a look at how GitHub Actions and AWS CodePipeline/CodeBuild compare when it comes to workflow management and isolated job execution.

We’ll also cover pricing so you can make an informed decision about which tool is right for your team.

AWS CodePipeline and GitHub Actions are both ways to build and deploy applications. They differ in the way they are used but are similar in many other ways. This article compares and contrasts the two services to help you decide which one is right for your project.

AWS CodePipeline is Amazon’s offering in the continuous integration and delivery space. It provides a managed workflow service that allows developers to build, test, and deploy applications with ease. It integrates with various third-party tools such as GitHub, Slack, JIRA, and many others.

The pipeline can be configured to run on an hourly basis or whenever there is a change on a particular branch of code in your repository.

GitHub Actions is a new service from GitHub that lets you automate all aspects of your workflow from building code to deploying it in production environments.

The actions can be triggered by contributors submitting pull requests or pushing to branches in repositories hosted on GitHub; however, unlike CodePipeline, these workflows run per repository rather than spanning multiple projects at once, which could be an advantage or a drawback depending on the kind of workflows you're looking for.

Two important areas to compare between these services are workflow management and isolated job execution.

Workflow Management

Both AWS CodePipeline and GitHub Actions offer easy-to-use tools for creating a workflow to run your code through different steps in the CI/CD process. CodePipeline offers more features for visualizing your workflow, while GitHub Actions’ visualizations are less detailed but easier to use.
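
For teams that prefer to check pipeline status programmatically rather than in the console, here is a minimal boto3 sketch that prints each stage of a pipeline and its latest status; the pipeline name is a placeholder.

```python
# Minimal sketch: print each stage of a CodePipeline pipeline and its latest
# status. "my-app-pipeline" is a placeholder name.
import boto3

codepipeline = boto3.client("codepipeline")

state = codepipeline.get_pipeline_state(name="my-app-pipeline")
for stage in state["stageStates"]:
    status = stage.get("latestExecution", {}).get("status", "No executions yet")
    print(f"{stage['stageName']}: {status}")
```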

Isolated Job Execution

GitHub Actions starts jobs as soon as code is pushed to a branch, and jobs within a workflow can run independently and in parallel. CodePipeline, by contrast, runs each pipeline execution through its stages in order, so if one action fails, the remaining stages are stopped as well.

This can be particularly problematic if your build requires access to external resources that may change over time (such as an Amazon S3 bucket).

See Also

Using AWS CodeBuild for Continuous Integration

Posted in AWS

Using AWS CodeBuild for Continuous Integration and Delivery

How to Use AWS CodeBuild for Continuous Integration and Delivery

AWS CodeBuild is a service that helps you build and test your code on AWS. You can use it to compile code, run unit tests, and produce artifacts that can be deployed to other services such as Amazon Elastic Container Service or Amazon API Gateway.

You can also use it to generate reports based on your automated builds. It's an easy way to automate many of the tasks performed by Jenkins or CircleCI without having to install any software or worry about managing servers yourself.

What is AWS CodeBuild?

AWS CodeBuild is a fully managed service that lets you build software projects hosted on GitHub or Bitbucket in a few clicks without having to install, configure, or operate any servers. Each build runs in a fresh, isolated Docker container, so you never manage build infrastructure yourself.

It also provides an API that allows you to integrate AWS CodeBuild into your CI/CD process so that you can automatically build and test when commits are pushed to the repository.
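
Kicking off a build through that API is a single call. Here is a minimal boto3 sketch; the project name and branch are placeholders.

```python
# Minimal sketch: start a CodeBuild build from your own CI/CD glue code.
# "my-build-project" and "main" are placeholders.
import boto3

codebuild = boto3.client("codebuild")

build = codebuild.start_build(
    projectName="my-build-project",  # an existing CodeBuild project
    sourceVersion="main",            # branch, tag, or commit to build
)
print("Started build:", build["build"]["id"])
```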

AWS CodeBuild is fast, simple, and easy to use, and it scales automatically to run many builds concurrently, so it can build and test your entire application at scale without you having to provision anything.

You can also trigger builds based on events from other AWS services like AWS Lambda or Amazon EC2.

Examples of how to get started with AWS CodeBuild

Here are three examples of how to get started with AWS CodeBuild.

  • Set up an AWS CodeBuild project, then link it to your GitHub repository. After that, you can set up a build job and trigger builds based on events from other AWS services, such as Amazon EC2 or AWS Lambda. Triggering builds from an event is useful when you want to run tests, for example to make sure your application is always in a deployable state before deploying it.
  • Link a CodeBuild project to a different repo.
  • Create a GitHub webhook for automatic builds

1: Set up an AWS CodeBuild project, then link it to GitHub.

  • Create the CodeBuild project and link a GitHub repo, then push some code.
  • In the AWS CodeBuild console, choose Create project and enter a project name.
  • Under Repositories, enter the URL of your source repository and select whether to use SSH or HTTPS for Git. You can also create an IAM user with access permissions for this specific repository if you don’t want to make all of your repositories accessible from within AWS CodeBuild (more on that later).
  • Click Save to finish creating your new build project and open it in edit mode. A minimal API equivalent of these steps is sketched below.
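
The same project can also be created with the AWS SDK. Here is a hedged boto3 sketch: the project name, repository URL, and role ARN are placeholders, and the service role must already exist with permission to pull the source and write build logs.

```python
# Minimal sketch of the console steps above using boto3.
# Project name, repo URL, and role ARN are placeholders.
import boto3

codebuild = boto3.client("codebuild")

codebuild.create_project(
    name="my-build-project",
    source={
        "type": "GITHUB",
        "location": "https://github.com/my-org/my-repo.git",
    },
    artifacts={"type": "NO_ARTIFACTS"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:5.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::123456789012:role/my-codebuild-service-role",
)
```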

2: Link a CodeBuild project to a different repo.

You can link a CodeBuild project to a different repo. This is useful if you want to build and test against the latest version of your source code when it lives on a branch or in a repository other than the one the project currently points to.

To link a different commit or commit hash:

  • Select Link Source from the Actions menu for an existing CodeBuild project in AWS CodePipeline, then choose your preferred source from the drop-down list.
  • Choose Repository Branch, Commit Hash, or Tag Name (depending on your choice) and enter values as needed, then choose Save Changes when finished. The API equivalent is sketched below.
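
If you prefer to do this programmatically, a hedged boto3 sketch follows; the project name, repository URL, and branch are placeholders.

```python
# Minimal sketch: re-point an existing CodeBuild project at a different
# repository and branch via the API. All names and URLs are placeholders.
import boto3

codebuild = boto3.client("codebuild")

codebuild.update_project(
    name="my-build-project",
    source={
        "type": "GITHUB",
        "location": "https://github.com/my-org/another-repo.git",
    },
    sourceVersion="feature/new-build",  # branch, tag, or commit hash
)
```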

3: Create a GitHub webhook for automatic builds.

Next, set up a webhook in GitHub that automatically triggers a build when you push code changes to your repository (you can also do this with Bitbucket or GitLab).

  • Select Settings and then Webhooks & Services under the Integrations section.
  • Click Add Webhook at the bottom of the page, which will open up a form where you can configure some options:
  • Select “GitHub hook” as your trigger type.
  • Set “Frequency” to “Always.” This tells AWS CodeBuild that it should always watch for new commits and build them whenever they happen, no matter what time of day or how often you push updates. The other option here is “Polling,” which means checking for webhooks about once per hour on average; this is useful if your team is actively working on projects but not committing too often (for example, just after lunch). A CodeBuild API equivalent of this webhook setup is sketched below.
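
The same push-triggered behavior can also be configured through CodeBuild's own webhook API rather than the GitHub settings page. A hedged boto3 sketch: the project name and branch pattern are placeholders, and the project's source must already point at the GitHub repository.

```python
# Minimal sketch: have CodeBuild create and manage the GitHub webhook so
# that pushes to main trigger builds. Project name and branch are placeholders.
import boto3

codebuild = boto3.client("codebuild")

codebuild.create_webhook(
    projectName="my-build-project",
    filterGroups=[
        [
            {"type": "EVENT", "pattern": "PUSH"},                  # build on pushes...
            {"type": "HEAD_REF", "pattern": "^refs/heads/main$"},  # ...to the main branch
        ]
    ],
)
```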

AWS CodeBuild can become a powerful tool in your CI/CD pipeline.

AWS CodeBuild is a fully managed service that can be used to build and test your code. It can also deploy your code, so you don’t have to worry about setting up any servers or managing them once they’re up.

Once you set up AWS CodeBuild for continuous integration, it will automatically run tests on new commits and surface any errors. This saves developers time and ensures their code works before it is merged into the master branch.

AWS CodeBuild gives you the power to create custom pipelines that integrate with other services like AWS CodePipeline or Jenkins, and even to run tests against Amazon EC2 instances. The possibilities are wide open, but ultimately it comes down to knowing where your company stands regarding DevOps practices.

See Also

How to Use Amazon Chime

AWS Cost Saving Tips

Azure Pricing Calculator

AWS Status Check

AWS Free Tier

AWS Instance Types

Posted in AWS

12 Tips for AWS EC2 Cost Saving

AWS EC2 Cost Saving Guide

1. Selecting across the same instance type

Fine-tune your EC2 instance selections based on CPU and Memory usage. Even if you had chosen the instance type carefully, note that the application and its usage pattern might change over time.

Always select the newest instance generation; you will get enhanced performance at a lower cost. Look at how instances differ across generations: prices tend to decline as new generations arrive. Upgrading from the M3 generation to M5, for example, saves you 27.8% per hour.

Upgrading to a new generation also brings a performance gain: moving from C3 to c5.large, the cost per ECU decreases by 43%, and from M3 to m5d.large it decreases by 44%.

Of course, upgrading to a new generation instance might require some additional work; however, it will save money in the long run.
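
As a quick sanity check on the M3 to M5 figure above, here is the arithmetic in a small Python snippet. The hourly prices are approximate us-east-1 Linux on-demand rates and will vary by Region and over time.

```python
# Back-of-the-envelope check of the M3 -> M5 saving quoted above.
# Prices are approximate us-east-1 Linux on-demand rates ($/hour).
m3_large = 0.133
m5_large = 0.096

saving = (m3_large - m5_large) / m3_large
print(f"m3.large -> m5.large hourly saving: {saving:.1%}")  # ~27.8%
```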


2. Instance Selection over Different Types:


Amazon EC2 offers a wide range of instance types, and its pricing options can yield significant savings if you select instances carefully.

To compare instances across types, you need to calculate the cost per compute unit, such as per vCPU, per ECU, and per GiB of memory. The table below lists the most common low- to mid-range instance types sorted by cost per vCPU.

The next table is sorted by cost per GiB of memory.

Considering both CPU and memory, interesting cost-saving opportunities arise. For example, t3.large instances are cheaper than m5.large instances by 13.3%, and both have 2 vCPUs and 8 GiB of memory. T3 may be the better choice if its burstable CPU model suits your workload.
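
Here is a small Python sketch of that comparison, computing cost per vCPU and per GiB from approximate us-east-1 Linux on-demand prices (prices vary by Region and change over time).

```python
# Compare instances by cost per vCPU and per GiB of memory.
# Prices are approximate us-east-1 Linux on-demand rates ($/hour).
instances = {
    "t3.large": {"price": 0.0832, "vcpu": 2, "mem_gib": 8},
    "m5.large": {"price": 0.0960, "vcpu": 2, "mem_gib": 8},
}

for name, spec in instances.items():
    per_vcpu = spec["price"] / spec["vcpu"]
    per_gib = spec["price"] / spec["mem_gib"]
    print(f"{name}: ${per_vcpu:.4f} per vCPU-hour, ${per_gib:.4f} per GiB-hour")

# t3.large comes out roughly 13% cheaper for the same vCPU/memory footprint.
```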

An m4.large is priced about the same as a c4.large. Although the M4 has slightly less CPU performance than the C4 (6.5 vs. 8 ECU), it has a significant memory advantage (8 vs. 3.75 GiB), so if your application is not very CPU-heavy, choosing m4.large over c4.large more than doubles the memory you get for the same price.

If your application currently runs on compute-optimized C types and you notice you might need extra memory, consider the M types before moving up a size within the same family.

For example, if your application runs on c4.large, consider m5.large before moving to c4.xlarge or c5.xlarge; you will get 8 GiB of memory at a lower price.

The same applies to c5.large: if you don’t require the extra ECUs, you can move to m5.large first and only later, if needed, to c5.xlarge.

3. Comparing Processors:

Amazon EC2 offers instances with Intel, AMD EPYC, and AWS Graviton processors, and the choice of processor directly affects the cost.

The comparison below covers the different processor types. In general, AMD-based instances are around 10% cheaper than the equivalent Intel instances.

4. Generate Scheduling Plans:

Review your workload to find instances that can be scheduled. Development and test environments, in particular, are good candidates.

You won’t need those resources up and running all the time. You can use the AWS Instance Scheduler, which costs about $5 per month in AWS Lambda charges per schedule, plus roughly $0.90 per schedule for CloudWatch schedule tracking.

The Instance Scheduler is billed per schedule, not per number of instances, so create as few schedules as possible, each covering as many of your instances as you can.

For ad-hoc needs, you can also create scripts to bring your environment up and down on demand, as in the sketch below.
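
This is a hedged sketch of such a script using boto3: it stops every running instance carrying a Schedule=office-hours tag (the tag key and value are placeholders); the matching start script is the same logic with start_instances.

```python
# Minimal sketch: stop all running instances tagged Schedule=office-hours.
# Tag key/value are placeholders; use start_instances in a companion script
# to bring the environment back up.
import boto3

ec2 = boto3.client("ec2")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopping:", instance_ids)
```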

5. Monitor data transfer costs:

One of the common pitfalls is underestimating the cost of data transfer between VPCs. VPC peering is a cost-effective method for transferring data across VPCs; if you don’t use peering, the transfer happens via public IPs, resulting in higher costs.

6. Regularly check CPU and Memory utilization:

Instances running with CPU and memory utilization under 50% are candidates for resizing.

You can deploy smaller, less expensive instances, or terminate the instance entirely by moving the workload to containers.
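
A hedged sketch of such a utilization check with boto3 and CloudWatch follows; the instance ID is a placeholder, and memory metrics would additionally require the CloudWatch agent, which is not shown here.

```python
# Minimal sketch: flag an instance whose average daily CPU stayed under 50%
# over the last two weeks. The instance ID is a placeholder.
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=86400,            # one datapoint per day
    Statistics=["Average"],
)

averages = [point["Average"] for point in stats["Datapoints"]]
if averages and max(averages) < 50:
    print("Candidate for downsizing")
```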

7. Check unattached EBS volumes or unassociated EIPs:

Terminating an instance does not always delete its EBS volumes or release its Elastic IPs. Ensure that leftover EBS volumes are deleted, and select “Delete on Termination” when creating the instance.

In any case, a script that regularly detects unattached resources and alerts the team will be very helpful.
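
Here is a minimal boto3 sketch of such a detection script; it only prints findings, and wiring it up to alerts (for example via SNS) is left out.

```python
# Minimal sketch: list unattached EBS volumes and unassociated Elastic IPs
# so the team can review (and eventually delete or release) them.
import boto3

ec2 = boto3.client("ec2")

volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for volume in volumes:
    print("Unattached EBS volume:", volume["VolumeId"], f'{volume["Size"]} GiB')

for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:
        print("Unassociated Elastic IP:", address["PublicIp"])
```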

8. Use EC2 Autoscaling and Fleet:

If your workload has variable loads, you can use EC2 Autoscaling & Fleet to save costs.

You can set the baseline capacity and scaling resources in terms of instances, vCPUs, or application-oriented units and also indicate how much of the capacity should be fulfilled by Spot Instances.

You should also select instance types for which you hold reservations, so that your commitment discounts apply.

9. When you delete an EC2 Fleet, you may need to stop or terminate the instances manually.

Be aware that deleting an EC2 Fleet may only delete the fleet definition, not the actual EC2 instances; in that case you need to stop or terminate them manually.

10. Set auto-scaling groups based on minimum resource capacity.

Be careful with setting up Auto Scaling groups. You should validate that the min resource capacity and desired capacity are not over-provisioned.

You should also define a scale-down policy for each scale-up policy; these can be based on CPU or memory utilization.

11. Check whether an instance belongs to an Auto Scaling group before deleting it.

If a deleted instance belongs to an Auto Scaling group, the group may spin up replacement instances to match its desired capacity. Change or delete the group definition before deleting the instance.
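
A minimal boto3 sketch of that check; the instance ID is a placeholder.

```python
# Minimal sketch: check whether an instance is managed by an Auto Scaling
# group before terminating it. The instance ID is a placeholder.
import boto3

autoscaling = boto3.client("autoscaling")

matches = autoscaling.describe_auto_scaling_instances(
    InstanceIds=["i-0123456789abcdef0"]
)["AutoScalingInstances"]

if matches:
    print("Instance belongs to group:", matches[0]["AutoScalingGroupName"])
else:
    print("Instance is not in an Auto Scaling group.")
```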

12. Be careful about third-party software licenses with Auto Scaling or EC2 Fleet.

If you use third-party software that is licensed per CPU or per instance, check the licensing impact of your workload before adding those instances to an Auto Scaling group or EC2 Fleet.

See Also

EC2 Compute Instances

Azure Pricing Calculator

AWS Fargate Price Reduction

EC2 Instance Types

EC2 Pricing Model

EC2 Pricing Calculator

S3 Cost Calculator

Data Transfer Calculator

AWS Lambda Cost Calculator

Posted in AWS
Data Center Relocation Cost Estimate

How to get a Data Center Relocation Cost Estimate

Data Center Relocation Cost Estimate

Cloud technology has taken the business world by storm and is lauded as one of the best inventions since the launch of the internet. Cloud servers have redefined how data is stored and processed, offering incredible scalability, ease of access, and security to the enterprise applications and databases of small and large companies alike. Conventionally, on-site servers were the standard way to create, operate, and maintain an organization’s IT infrastructure; however, cloud technology has swiftly overtaken that norm and become the preferred method of storing and processing enterprise databases and applications. The tips below will help you figure out an accurate estimate of the cost of moving your business apps and data to cloud servers.

81% of Companies Have Multi-Cloud Strategies

The public cloud industry is truly massive. According to several reports, the cloud computing industry was worth just US$ 24.65 billion in 2010, and ten years later it is set to cross the US$ 150 billion mark with ease. Globally, around 81% of companies already have a multi-cloud strategy in place, and by the end of 2020 around 67% of enterprise-level IT infrastructure will be on cloud platforms. This will result in cloud platforms handling over 82% of the total enterprise workload. It is estimated that an astonishing 40 zettabytes of data will be handled by cloud servers and networks by the end of 2020.

Business owners are always looking for solutions, like cloud technology, that can make their operations easier and more efficient. However, despite its increasing popularity, there are still thousands of enterprises yet to adopt it. One of the main reasons keeping them away is the suspected high cost of moving existing enterprise applications and databases to cloud servers. In reality, the overall cost of moving to the cloud and using it as your database and processing platform is significantly lower than running on-site IT infrastructure.

Points to Consider when Finding Data Center Relocation Estimate

 

If you are hesitant to make the move to a cloud platform because of the suspected high cost, here are simple ways to figure out a precise data center relocation estimate for your business.

It is important to note that there are numerous small and large IT services and products that any business enterprise uses at a given time. The various products and services rack up a significant expense when accumulated. Hence, you need to carefully consider every aspect of expense that can be related to moving your enterprise database and applications to cloud platform.

We have classified the different expenses into several categories for your convenience:

Data Center Relocation Cost Estimate – expense categories

Pre-Migration Costs

You need as much detail as possible about the existing cost of maintaining and running your business applications on in-house IT systems, as well as system performance data. This helps you decide the right-sized IT architecture you will need on the cloud platform and also shows how cost-effective your operations will be on the cloud compared to the existing on-site infrastructure. Some of the factors that determine pre-migration costs are:

  • On Site Data Center costs – Includes cost of maintaining and upgrading servers, power / utility bills, storage, IT labor, network, etc.
  • Hardware – Includes hardware specification, maintenance, upgrading, etc.
  • Software – Includes cost of licensing, support, purchase dates, contracts, warranties, etc.
  • Operations – Labor costs, network connections, system performance data, etc.

Post Migration Costs

Post-migration costs are expenses that are likely to be incurred when running business applications on cloud platform, such as:

  • Monitoring
  • Alerting
  • Monthly / annual licensing and support service
  • Administration
  • Operations
  • System maintenance and operation
  • Maintenance
  • System updates
  • Software version patches and updates
  • Training

Migration Costs

The actual process of migration of database and applications requires several different expenses, such as:

  • Application migration cost which is the actual cost of moving applications from existing environment to cloud servers.
  • Configuration and infrastructural changes that are required to make applications compatible to run on cloud servers.
  • Integrating and testing migrated apps to ensure that they run smoothly on the new cloud environment.
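
To turn the three categories above into a first estimate, you can simply total them. The sketch below is a back-of-the-envelope example; every figure is a placeholder to replace with your own numbers.

```python
# Back-of-the-envelope relocation estimate from the three cost categories.
# All amounts are placeholders (annual unless noted otherwise).
pre_migration = {          # current on-site costs
    "data_center": 120_000,
    "hardware": 40_000,
    "software_licenses": 25_000,
    "operations": 90_000,
}
migration = {              # one-off costs of the move itself
    "application_migration": 30_000,
    "reconfiguration": 15_000,
    "integration_and_testing": 10_000,
}
post_migration = {         # expected annual costs on the cloud
    "cloud_services": 80_000,
    "licensing_and_support": 20_000,
    "training_and_operations": 35_000,
}

print("Current on-site annual cost:", sum(pre_migration.values()))
print("First-year cloud cost (incl. migration):",
      sum(migration.values()) + sum(post_migration.values()))
```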

Bottom Line

Once you have the details of the overall cost of maintaining and operating your business applications and databases both in your current on-site data center and on cloud servers, you can get an even more accurate data center relocation cost estimate using the cost calculators provided on the websites of leading cloud service providers, such as Amazon Web Services (AWS), Google, Microsoft, and IBM.


AWS Cost Optimization

AWS cost gone wild? Well, there is a solution for that!

 

A report that enables you to optimize your AWS costs instantly

After producing thousands of knowledge-base articles, training videos, tools, and methodologies for AWS cost optimization, we decided to put all of our experience into building a monstrous custom report engine that does the job. How does it work? Keep on reading!

 

 

 

 

#1 Workload Analysis

Get a full picture of the cost of your workloads. Each cost item is mapped to its configuration, giving you unmatched actionable intelligence.

 

#2 AWS Well-Architected Framework

Our methodology follows the AWS Well-Architected Framework, giving you clear visibility into its five pillars.

 

#3 Usage and Cost Forecast


 

#4 Historical Metric Usage


 

#5 Performance Data


 

#6 Session with Our Award-Winning PhD Cloud Economist


Posted in AWS

AWS S3 Inventory Pricing

Amazon S3 Inventory Pricing

This article provides a general overview of AWS S3 Inventory pricing and highlights a few related details.


What is S3 Inventory?

  • S3 Inventory is one of the tools provided by S3 to help users manage storage. It can be used for reporting and auditing the status of objects for regulatory, compliance, and business requirements.
  • S3 Inventory produces output files in CSV, ORC, or Parquet format. These files list objects and their associated metadata on a daily or weekly basis for an S3 bucket or a specific shared prefix.
  • Customers can also report the Intelligent-Tiering access tier as a metadata field in their inventory reports, free of charge.

Customers can now use S3 Inventory to report on every object stored in the S3 Intelligent-Tiering storage class. This storage class has the following characteristics:

  • Simplifies the reduction of costs on storage for both IT infrastructure managers & developers.
  • It does so without sacrificing availability or performance.
  • It works by storing objects in one of the following two access tiers:
    > A tier optimized for frequent access.
    > A less expensive tier optimized for infrequent access.

Users can now rely on S3 Inventory to view Intelligent-Tiering objects in any Region. To set up and configure S3 Inventory, you can use any of the following:

  • S3 Management Console.
  • API
  • CLI
  • SDK
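
Using the SDK option, for example, here is a hedged boto3 sketch that enables a daily CSV inventory report; the bucket names and prefix are placeholders, and the destination bucket needs a policy allowing S3 to write the report files.

```python
# Minimal sketch: enable a daily CSV S3 Inventory report for a bucket.
# Bucket names and prefix are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_inventory_configuration(
    Bucket="my-source-bucket",
    Id="daily-inventory",
    InventoryConfiguration={
        "Id": "daily-inventory",
        "IsEnabled": True,
        "IncludedObjectVersions": "Current",
        "Schedule": {"Frequency": "Daily"},
        "OptionalFields": ["Size", "StorageClass", "IntelligentTieringAccessTier"],
        "Destination": {
            "S3BucketDestination": {
                "Bucket": "arn:aws:s3:::my-inventory-reports",
                "Format": "CSV",
                "Prefix": "inventory",
            }
        },
    },
)
```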

Charges for storing objects in S3 buckets depend on the following:

  • Size of the objects.
  • How long the objects are stored during the month.
  • The chosen storage class (Standard, Standard – Infrequent Access, Intelligent-Tiering, One Zone – Infrequent Access, Glacier, Glacier Deep Archive, and RRS).

For every object stored in the Intelligent-Tiering class, a monthly monitoring and automation charge is incurred; this covers monitoring access patterns and moving objects between access tiers.

  • Lifecycle rules that move your data into another storage class, as well as COPY and PUT requests, incur per-request ingest fees.
  • Take these transition and ingest costs into consideration before moving objects into another storage class.

All the costs referred to below are for the US East (Ohio) us-east-2 Region.

  • Charges are incurred for storage management features (analytics, object tagging, and S3 Inventory) enabled on your account’s buckets.
  • S3 storage management is priced per feature:
    > S3 Inventory: $0.0025 per million objects listed
    > S3 Analytics Storage Class Analysis: $0.10 per million objects monitored per month
    > S3 Object Tagging: $0.01 per 10,000 tags per month
  • Files generated by S3 Inventory and S3 Storage Class Analysis exports are stored in an S3 bucket you select, and those files also incur S3 Standard storage charges.
Storage types and storage pricing:

S3 Standard - general purpose storage for any type of data, typically used for frequently accessed data
  • First 50 TB / month: $0.023 per GB
  • Next 450 TB / month: $0.022 per GB
  • Over 500 TB / month: $0.021 per GB

S3 Intelligent-Tiering * - automatic cost savings for data with unknown or changing access patterns
  • Frequent Access Tier, first 50 TB / month: $0.023 per GB
  • Frequent Access Tier, next 450 TB / month: $0.022 per GB
  • Frequent Access Tier, over 500 TB / month: $0.021 per GB
  • Infrequent Access Tier, all storage / month: $0.0125 per GB
  • Monitoring and Automation, all storage / month: $0.0025 per 1,000 objects

S3 Standard - Infrequent Access * - for long-lived but infrequently accessed data that needs millisecond access
  • All storage / month: $0.0125 per GB

S3 One Zone - Infrequent Access * - for re-createable, infrequently accessed data that needs millisecond access
  • All storage / month: $0.01 per GB

S3 Glacier ** - for long-term backups and archives with retrieval options from 1 minute to 12 hours
  • All storage / month: $0.004 per GB

S3 Glacier Deep Archive ** - for long-term data archiving that is accessed once or twice a year and can be restored within 12 hours
  • All storage / month: $0.00099 per GB
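
As a worked example with the prices above: suppose a bucket holds 20 million objects, listed daily by S3 Inventory, plus 200 GB of data in S3 Standard (all figures are made up for illustration).

```python
# Worked example using the prices listed above (US East (Ohio)).
objects_listed = 20_000_000
inventory_runs_per_month = 30            # daily inventory

inventory_cost = (objects_listed / 1_000_000) * 0.0025 * inventory_runs_per_month
storage_cost = 200 * 0.023               # 200 GB of S3 Standard, first-50-TB tier

print(f"S3 Inventory listing cost: ${inventory_cost:.2f} per month")  # $1.50
print(f"S3 Standard storage cost:  ${storage_cost:.2f} per month")    # $4.60
```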

 


S3 Replication pricing

With S3 Same-Region Replication and Cross-Region Replication, you pay S3 charges for the following:

  • Storage retrieval fees for Infrequent access.
  • Storage in the selected destination S3 storage class.
  • Replication PUT requests.
  • Primary copy storage charges

The following are a few additional details about S3 replication charges:

  • S3 Replication Time Control: charges are incurred for S3 Replication Metrics along with the Replication Time Control data transfer fee. The metrics are billed at the same rates as Amazon CloudWatch custom metrics.
  • CRR: The charges incurred for inter-region Data Transfers OUT from S3 to the destination region.
  • Storage + PUT request for replicated copy Pricing: According to destination Region
  • Inter-region data transfer Pricing: According to source Region.
    > S3 Replication Time Control data transfer: $0.015 per GB
  • S3 Replication Time Control Data Transfer pricing will be the same for all AWS Regions.
  • S3 Replication Time Control can be used in all commercial Regions excluding the ones below as of now.
    > China (Beijing) Region
    > China (Ningxia) Region
    > GovCloud (US) Regions

Here are a few awesome resources on AWS Services:

AWS S3 Bucket Details

AWS S3 LifeCycle Management

AWS S3-EC2 Transfer Costs

Setup Cloudfront for S3

AWS S3 Bucket Costs

AWS S3 Custom Key Store

  • CloudySave’s goal is to provide your engineers and ops teams with clear visibility into spending and usage patterns.
  • Have a quick look at CloudySave’s Cost calculator to estimate real-time AWS costs.
Posted in S3