Protecting data using AWS Backup
Overview :
Before the launch of AWS Backup, customers had to separately schedule backups from native service consoles. This overhead was also present when there was a need to change backup schedules or initiate a restore across multiple AWS services. AWS Backup solves this problem by providing customers with a single pane of glass to create/maintain backup schedules, perform restores, and monitor backup/restore jobs.
Customers want the ability to have a standardized way to manage their backups at scale with AWS Backup and their AWS Organizations. AWS Backup offers a centralized, managed service to back up data across AWS services in the cloud and on premises using AWS Storage Gateway. AWS Backup serves as a single dashboard for backup, restore, and policy-based retention of different AWS resources, which include:
- Amazon EBS volumes
- Amazon EC2 instances
- Amazon RDS databases
- Amazon Aurora clusters
- Amazon DynamoDB tables
- Amazon EFS file systems
- AWS Storage Gateway volumes
With customers scaling their AWS workloads across hundreds, if not thousands, of AWS accounts, they have expressed the need to centrally manage and monitor their backups.
AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services. Using AWS Backup, you can centrally configure backup policies and monitor backup activity for AWS resources, such as Amazon EBS volumes, Amazon EC2 instances, Amazon RDS databases, Amazon DynamoDB tables, Amazon EFS file systems, and AWS Storage Gateway volumes. AWS Backup automates and consolidates backup tasks previously performed service-by-service, removing the need to create custom scripts and manual processes. With just a few clicks in the AWS Backup console, you can create backup policies that automate backup schedules and retention management. AWS Backup provides a fully managed, policy-based backup solution, simplifying your backup management, enabling you to meet your business and regulatory backup compliance requirements.
Benefits :
Centrally manage backups :
Configure backup policies from a central backup console, simplifying backup management and making it easy to ensure that your application data across AWS services is backed up and protected. Use AWS Backup’s central console, APIs, or command line interface to back up, restore, and set backup retention policies across AWS services.
Automate backup processes :
Save time and money with AWS Backup’s fully managed, policy-based solution. AWS Backup provides automated backup schedules, retention management, and lifecycle management, removing the need for custom scripts and manual processes. With AWS Backup, you can apply backup policies to your AWS resources by simply tagging them, making it easy to implement your backup strategy across all your AWS resources and ensuring that all your application data is appropriately backed up.
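As a rough illustration of what such a policy looks like outside the console, here is a minimal boto3 sketch that creates a backup plan with a daily schedule and a 35-day retention rule. The plan name, vault name, schedule, and retention values are illustrative assumptions, not values prescribed by AWS Backup.

```python
# Minimal sketch: create a backup plan with a daily schedule and lifecycle rule.
# Plan name, vault name, schedule, and retention below are illustrative assumptions.
import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-35day-retention",        # example name
        "Rules": [
            {
                "RuleName": "DailyBackups",
                "TargetBackupVaultName": "Default",        # assumes the default vault exists
                "ScheduleExpression": "cron(0 5 ? * * *)", # daily at 05:00 UTC
                "StartWindowMinutes": 60,
                "CompletionWindowMinutes": 180,
                "Lifecycle": {"DeleteAfterDays": 35},      # retention management
            }
        ],
    }
)
print("BackupPlanId:", plan["BackupPlanId"])
```

A backup selection (by tag or resource ID) would then be attached to the returned BackupPlanId, as discussed later in this post.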
Improve backup compliance :
Enforce your backup policies, encrypt your backups, and audit backup activity from a centralized console to help meet your backup compliance requirements. Backup policies make it simple to align your backup strategy with your internal or regulatory requirements. AWS Backup secures your backups by encrypting your data in transit and at rest. Consolidated backup activity logs across AWS services make it easier to perform compliance audits. AWS Backup is PCI and ISO compliant as well as HIPAA eligible.
How it Works :

Use Cases :
Cloud-native backup :
AWS Backup provides a centralized console to automate and manage backups across AWS services. AWS Backup supports Amazon EBS, Amazon RDS, Amazon DynamoDB, Amazon EFS, Amazon EC2, and AWS Storage Gateway, enabling you to back up key data stores, such as your storage volumes, databases, and file systems.

Hybrid backup :
AWS Backup integrates with AWS Storage Gateway, a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage. You can use AWS Backup to back up your application data stored in AWS Storage Gateway volumes. Backups of AWS Storage Gateway volumes are securely stored in the AWS Cloud and are compatible with Amazon EBS, allowing you to restore your volumes to the AWS Cloud or to your on-premises environment. This integration also allows you to apply the same backup policies to both your AWS Cloud resources and your on-premises data stored on AWS Storage Gateway volumes.

Working with Resource Assignments :
AWS Backup supports two ways to assign resources to a backup plan: by tag or by resource ID. Using tags is the recommended approach for several reasons:
- It’s an easy way to ensure that any new resources are automatically added to a backup plan, just by adding a tag.
- Because assignments by resource ID are static, managing a backup plan can become burdensome, as resource IDs must be manually added or removed over time.
Here are some recommendations to help make the best use of tags with AWS Backup:
Multiple Tags :
AWS Backup allows customers to assign resources via multiple tags to a backup plan. The plan backs up resources matching any of the tag keys that are specified in a backup plan’s resource assignment. In the example shown in the screenshot below, the backup plan backs up all supported resources that match either the Application: Ecommerce or the Application: Datawarehouse tag:
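For readers who script their backup configuration, the same assignment could be expressed with boto3 roughly as follows; the plan ID, IAM role ARN, and selection name are placeholders.

```python
# Hypothetical sketch: a backup selection matching resources tagged
# Application=Ecommerce OR Application=Datawarehouse. IDs and ARNs are placeholders.
import boto3

backup = boto3.client("backup")

backup.create_backup_selection(
    BackupPlanId="11111111-2222-3333-4444-555555555555",   # placeholder plan ID
    BackupSelection={
        "SelectionName": "tag-based-assignment",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [  # conditions are OR'd: either tag selects the resource
            {"ConditionType": "STRINGEQUALS", "ConditionKey": "Application", "ConditionValue": "Ecommerce"},
            {"ConditionType": "STRINGEQUALS", "ConditionKey": "Application", "ConditionValue": "Datawarehouse"},
        ],
    },
)
```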

Resource Relationships :
There may be cases where, for audit or compliance purposes, you must identify the relationship of a deleted resource with a recovery point in AWS Backup. For example, you may have EBS volume snapshots going back a number of years after an underlying Amazon EC2 instance was terminated. You must be able to provide evidence to an auditor that the EBS volume for your snapshot was associated with the terminated EC2 instance.
For situations like these, I recommend enabling AWS Config configuration recording of your AWS resources. This helps identify and track AWS resource relationships (including deleted resources) for up to seven years.
Backup Overlaps :
If you use any scripts or AWS Lambda functions to take snapshots of AWS resources that are also being protected by AWS Backup, I recommend ensuring that there is no overlap between AWS Backup and your scripts/Lambda functions, as this can lead to backup job failures.
Regional Resource Assignments :
AWS Backup supports resource assignments within the same Region only, so a separate backup plan must be created in each Region where you want to back up resources. For an up-to-date list of Regions currently supported by AWS Backup, refer to the AWS documentation.
Restore Validation :
Generally, the most comprehensive data-protection strategies include regular testing and validation of your restore procedures before you need them. Testing your restores also helps in preparing and maintaining recovery runbooks. That, in turn, ensures operational readiness during a disaster recovery exercise, or an actual data loss scenario.
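One way to make such restore testing repeatable is to script it. The following is a hedged boto3 sketch that restores the most recent EBS recovery point from a vault and waits for the job to finish; the vault name and role ARN are placeholder assumptions, and restore metadata requirements vary by resource type.

```python
# Hedged sketch of a periodic restore test: restore the newest EBS recovery point
# from a vault and poll the restore job until it finishes.
import time
import boto3

backup = boto3.client("backup")

points = backup.list_recovery_points_by_backup_vault(
    BackupVaultName="Default", ByResourceType="EBS"
)["RecoveryPoints"]
latest = max(points, key=lambda p: p["CreationDate"])

# Reuse the recovery point's own restore metadata rather than guessing the keys.
meta = backup.get_recovery_point_restore_metadata(
    BackupVaultName="Default", RecoveryPointArn=latest["RecoveryPointArn"]
)["RestoreMetadata"]

job = backup.start_restore_job(
    RecoveryPointArn=latest["RecoveryPointArn"],
    IamRoleArn="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",  # placeholder
    Metadata=meta,
)

while True:
    status = backup.describe_restore_job(RestoreJobId=job["RestoreJobId"])["Status"]
    if status in ("COMPLETED", "ABORTED", "FAILED"):
        print("Restore test finished with status:", status)
        break
    time.sleep(30)
```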
Encryption Permissions :
When using AWS Backup with encrypted resources, such as EBS volumes, the AWS Backup IAM service role must be granted permissions on the AWS KMS keys used to encrypt your resources. For more information about adding the default service role, AWS Backup Default Service Role, as a new KMS key user, see the AWS Backup documentation.
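As an alternative to editing the key policy in the console, a KMS grant can give the service role the permissions it needs. The sketch below is illustrative only; the key ID and role ARN are placeholders.

```python
# Illustrative sketch: grant the AWS Backup service role use of a customer managed KMS key.
import boto3

kms = boto3.client("kms")

kms.create_grant(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",   # placeholder CMK ID
    GranteePrincipal="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
    Operations=[
        "Decrypt",
        "DescribeKey",
        "GenerateDataKeyWithoutPlaintext",
        "ReEncryptFrom",
        "ReEncryptTo",
        "CreateGrant",
    ],
)
```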
Snapshot Limits :
As your AWS footprint grows over time, the number of snapshots in your account will also grow. Review your service limits on a regular basis to ensure you aren’t getting close to snapshot-related service limits, which can cause your backups to fail. An easy way to keep track of these and other service limits is to use AWS Trusted Advisor, as it reports on major service limits regardless of which support plan you subscribe to. If you notice any service limits reaching the Yellow or Red criteria, you can open a Service Limit Increase case for the protected service (e.g., RDS, EBS) through the AWS Support Center. You can find more information on managing service limits and requesting service limit increases in our AWS Support Knowledge Center.
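If you prefer to script this check, the following sketch compares your current EBS snapshot count against the published Service Quotas values. It matches quota names by substring rather than assuming a specific quota code, and it is a complement to, not a replacement for, Trusted Advisor.

```python
# Rough sketch: compare the account's EBS snapshot count against snapshot-related quotas.
# Both API calls are paginated; a production script should paginate through all results.
import boto3

ec2 = boto3.client("ec2")
quotas = boto3.client("service-quotas")

snapshot_count = len(ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"])

for quota in quotas.list_service_quotas(ServiceCode="ebs")["Quotas"]:
    if "snapshot" in quota["QuotaName"].lower():
        used_pct = 100 * snapshot_count / quota["Value"]
        print(f'{quota["QuotaName"]}: {snapshot_count}/{int(quota["Value"])} ({used_pct:.0f}% used)')
```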
AWS Backup and APN Storage Partners :
I’m often asked, “How does AWS Backup relate to our APN Storage Partner Solutions?” I think it’s important to understand that AWS Backup complements our Partners’ ability to deliver value to their customers.
For our Partners, AWS Backup makes it easier and faster to work with AWS services, providing them the ability to integrate with all of the AWS services that AWS Backup supports through a single API set. By providing a single point of interaction with all AWS services, AWS Backup can lower the development timeline and help accelerate time-to-customer value in our Partner solutions.
Through a single purpose-built API, AWS Backup enables partners to quickly support new AWS services that have native backup capabilities. It also supports services that don’t have a native backup API, such as Amazon EFS.
Conclusion :
In this post, I’ve provided an overview of AWS Backup and offered suggestions for scheduling backups and assigning resources. I’ve also given hints and tips to help you get started using AWS Backup, and discussed how AWS Backup adds value to our APN Storage Partner Solutions.
Well-Architected approach to CloudEndure Disaster Recovery
If there is an IT disaster, you must get your workloads back up and running quickly to ensure business continuity.
For business-critical applications, the priority is keeping your recovery time and data loss to a minimum, while lowering your overall capital expenses.
AWS CloudEndure Disaster Recovery helps you maintain business as usual during a disaster by enabling rapid failover from any source infrastructure to AWS. By replicating your workloads into a low-cost staging area on AWS, CloudEndure Disaster Recovery reduces compute costs by eliminating the need to pay for duplicate OS and third-party application licenses.
In this post, I walk you through the principal steps of setting up CloudEndure Disaster Recovery for on-premises machines.
CloudEndure overview :
CloudEndure Disaster Recovery is a Software-as-a-Service (SaaS) solution that replicates any workload from any source infrastructure to a low-cost “staging area” in any target infrastructure, where an up-to-date copy of the workloads can be spun up on demand and be fully functioning in minutes.
CloudEndure Disaster Recovery enables organizations to quickly and easily shift their disaster recovery strategy to public clouds, private clouds, or existing VMware-based data centers. The CloudEndure Disaster Recovery solution utilizes block-level, continuous data replication, which ensures that target machines are spun up in their most up-to-date state during a disaster or drill.
CloudEndure Disaster Recovery supports recovery from all physical, virtual, and hybrid cloud infrastructure into AWS.
Benefits of CloudEndure Disaster Recovery :
- Average savings of 80% on total cost of ownership (TCO) compared to traditional disaster recovery solutions.
- Sub-second Recovery Point Objectives (RPOs).
- Recovery Time Objectives (RTOs) of minutes.
- Multiple IT resilience options, ensuring a cost-effective strategy.
- Support of all application types, including databases and other write-intensive workloads.
- Automated failover to target site during a disaster.
- Point-in-time recovery, enabling failover to earlier versions of replicated servers.
- One-click failback, restoring operations to source servers automatically.
Continuous Data Replication :
At the core of the technology is a proprietary Continuous Data Replication engine, which provides real-time, asynchronous, block-level replication.
CloudEndure replication is done at the OS level, enabling support of any type of source infrastructure:
- Physical machines, including both on-premises and colocation data centers
- Virtual machines, including VMware, Microsoft Hyper-V, and others
Low-Cost “Staging Area” in Target Infrastructure :
Once the agent is installed and activated, it begins initial replication, reading all of the data on the machines at the block level and replicating it to a low-cost “staging area” in the customer’s account in their preferred target infrastructure. Customers can specify their preferred target infrastructure as well as other replication settings such as subnets, VLANs, security groups, and replication tags. The “staging area” contains cost-effective resources automatically created and managed by CloudEndure to receive the replicated data without incurring significant costs. These resources include a small number of VMs (each supporting multiple source machines), disks (one target disk for each replicating source disk), and snapshots.
The initial replication can take from several minutes to several days, depending on the amount of data to be replicated and the bandwidth available between the source and target infrastructure. No reboot is required nor is there any system disruption throughout the initial replication.
After the initial replication is complete, the source machines are continuously monitored to ensure constant synchronization, up to the last second. Any changes to source machines are asynchronously replicated in real time into the “staging area” in the target infrastructure. Continuous data replication enables you to continue normal IT operations during the entire replication process without performance disruption or data loss.
Automated Failback :
Once a disaster is over, CloudEndure provides automated failback to the source infrastructure. Because the failback technology also utilizes continuous data replication, failback to the source machine is rapid and no data is lost during the process. Automated failback supports both incremental and bare-metal restores.
How CloudEndure Disaster Recovery Works :
CloudEndure Disaster Recovery is an agent-based solution that continually replicates your source machines into a staging area in your AWS account without impacting performance. It uses Continuous Data Replication technology, which provides continuous, asynchronous, block-level replication of all of your workloads running on supported operating systems. This allows you to achieve sub-second recovery point objectives (RPOs). Replication is performed at the OS level (rather than at the hypervisor or SAN level), enabling support of physical machines, virtual machines, and cloud-based machines. If there is a disaster, you can initiate failover by instructing CloudEndure Disaster Recovery to perform automated machine conversion and orchestration. Your machines are launched on AWS within minutes, complying with aggressive recovery time objectives (RTOs).
Replication has two major stages :
- Initial Sync: Once installed, the CloudEndure agent begins initial, block-level replication of all of the data on your machines to the staging area in your target AWS Region. The amount of time this requires depends on the volume of data to be replicated and the bandwidth available between your source infrastructure and target AWS Region.
- Continuous Data Protection: Once the initial sync is complete, CloudEndure Disaster Recovery continuously monitors your source machines to ensure constant synchronization, up to the last second. Any changes you make to your source machines are replicated into the staging area in your target AWS Region.
Architecture of CloudEndure Technology :


Disaster recovery is a critical element of a company’s business continuity plan, enabling the quick resumption of IT operations and minimizing data loss if a disaster were to strike. Organizations often carry a high operational and cost burden to ensure continued operation of their applications, and databases in case of disaster. This includes operating a second physical site with duplicate hardware and software, management of multiple hardware or application specific replication tools, and ensuring readiness via periodic drills.
AWS’s CloudEndure Disaster Recovery makes it easy to shift your disaster recovery (DR) strategy to the AWS Cloud from existing physical or virtual data centers, private clouds, or other public clouds. CloudEndure Disaster Recovery is available on the AWS Marketplace as a software as a service (SaaS) contract and as a SaaS subscription. In this SaaS delivery model, AWS hosts and operates the CloudEndure Disaster Recovery application. As an additional component, CloudEndure Disaster Recovery uses an operating system-level agent on each server that must be replicated and protected to AWS. The agent performs the block-level replication to AWS.
The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. It is based on five pillars: operational excellence, security, reliability, performance efficiency, and cost optimization. The Framework provides a consistent approach for customers and partners to evaluate architectures and implement designs that will scale over time.
This blog provides guidance on applying the AWS Well-Architected Framework’s five pillars and best practices for aligning your CloudEndure Disaster Recovery deployment with them to ensure success.
Operational Excellence pillar :
Ensuring operational excellence when implementing CloudEndure Disaster Recovery begins with readiness.
Once you are familiar with CloudEndure Disaster Recovery, you should:
- Map your source environment into logical groups for DR. These groups may be based on location, application criticality, service level agreements (SLAs), recovery point objectives (RPO), recovery time objectives (RTO), etc. CloudEndure uses these groupings to prioritize and sequence recovery.
- Ensure that the requisite staging and target VPCs to be used by CloudEndure Disaster Recovery are aligned to the Well-Architected Framework.
- Automate deployment, monitoring, and management of CloudEndure Disaster Recovery using the available CloudEndure Disaster Recovery API. Use the monitoring and logging data to monitor the operational status of CloudEndure Disaster Recovery and inject the data into existing centralized monitoring tools (see the sketch below).
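As a starting point for that automation, here is a hedged Python sketch against the CloudEndure API. The endpoint paths, token-based login flow, and response fields are assumptions drawn from the public CloudEndure API documentation; verify them against the current API reference before use.

```python
# Hedged sketch of querying the CloudEndure API for replication inventory.
# Endpoint paths and response fields are assumptions; verify against the API reference.
import requests

API = "https://console.cloudendure.com/api/latest"

session = requests.Session()
resp = session.post(f"{API}/login", json={"userApiToken": "YOUR-API-TOKEN"})  # placeholder token
resp.raise_for_status()
# Subsequent calls must echo the XSRF token returned in the login cookies.
session.headers["X-XSRF-TOKEN"] = session.cookies.get("XSRF-TOKEN", "").strip('"')

projects = session.get(f"{API}/projects").json()["items"]
for project in projects:
    machines = session.get(f"{API}/projects/{project['id']}/machines").json()["items"]
    # Counts (or per-machine replication status) can feed an existing monitoring tool.
    print(f'{project["name"]}: {len(machines)} machines under replication')
```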
Security pillar :
- The Well-Architected Framework’s security principles should be applied at multiple layers when deploying CloudEndure Disaster Recovery. Design principles, including strong identity protection, traceability, all-layer network security, and data protection, should be extended to CloudEndure Disaster Recovery.
- To automate orchestration and recovery, CloudEndure Disaster Recovery uses the AWS API via IAM user credentials with programmatic access.
- The CloudEndure Disaster Recovery agent uses an HTTPS connection to the User Console, which is used for management and monitoring. The User Console stores metadata about the source server in an encrypted database. The source data is replicated directly from the source infrastructure to the target AWS account. While the connection can be public or private, it is recommended to use a private connection rather than the public internet. Enabling a private connection and disabling the allocation of public IPs for CloudEndure Disaster Recovery replication servers should be set under replication settings.
- To ensure data security, CloudEndure Disaster Recovery encrypts all data replication traffic from the source to the staging area in your AWS account using the AES-256 encryption standard. Data at rest should be encrypted using AWS KMS. CloudEndure Disaster Recovery should be set to use the appropriate KMS key for EBS encryption under replication settings.
Reliability pillar :
- The best mechanism to ensure reliability in case of a disaster is to regularly validate and test recovery procedures. CloudEndure Disaster Recovery enables unlimited test launches, allowing both spot testing and full user acceptance and application testing. It is critical to test launch instances after initial synchronization to confirm availability and operation.
- Automate recovery by monitoring key performance indicators (KPIs), which vary by organization and workload. Once the KPIs are identified, you can integrate CloudEndure Disaster Recovery launch triggers into existing monitoring tools. An example of monitoring and automating DR using CloudEndure Disaster Recovery is detailed in this AWS blog.
- The CloudEndure Disaster Recovery User Console is managed by AWS and uses Well-Architected principles to ensure reliability and scalability. DR and redundancy plans are in place to ensure availability of the CloudEndure Disaster Recovery User Console. CloudEndure Disaster Recovery is used to replicate its own User Console using a separate and isolated stack, which ensures recoverability of the User Console if the public SaaS version becomes unavailable.
Performance Efficiency pillar :
- As a SaaS solution, CloudEndure Disaster Recovery applies the performance efficiency pillar to maintain performance as demand changes. The replication architecture uses t3.small replication servers that can support the replication of most source servers. At times, source servers handling intensive write operations may require larger or dedicated replication servers. Dedicated or larger replication servers may be selected with minimal interruption to replication and used for limited periods of time, for example to reduce initial sync times.
- To meet RTOs, typically measured in minutes, CloudEndure Disaster Recovery defaults target EBS volumes to Provisioned IOPS SSD (io1) in the blueprint. During the target launch process, an intensive I/O re-scan of all hardware and drivers, due to the changed hardware configuration, may occur. io1 volumes reduce the impact of this on RTO. If io1 volumes are not required for normal workload performance, we recommend that the volume type be programmatically changed after instance initialization (see the sketch after this list). Alternatively, standard or SSD volume types may be selected in the blueprint before launch. To ensure they meet your RTO requirements, be sure to test launch the various volume types.
- While CloudEndure Disaster Recovery encrypts all traffic in transit, it is recommended to use a secure connection from the source infrastructure to AWS via a VPN or AWS Direct Connect. The connection must have enough bandwidth to support the rate of data change for ongoing replication, including spikes and peaks. Network efficiency and saturation may be impacted during the initial synchronization of data. CloudEndure Disaster Recovery agents utilize available bandwidth when replicating data. Throttling can be used to reduce the impact on shared connections, and can be accomplished using bandwidth-shaping tools to limit traffic on TCP port 1500, or using the throttling option within CloudEndure Disaster Recovery. Considerations for throttling should include programmatically scheduling limits to avoid peak times, and understanding the impact of throttling on RPOs. It is recommended that all throttling be disabled once the initial sync of data is complete.
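The volume-type change mentioned above could be scripted along the following lines. This is a minimal sketch assuming the recovered instance ID is known; the io1-to-gp2 conversion shown is only an example policy.

```python
# Minimal sketch: after a recovery launch, convert io1 volumes attached to the
# recovered instance back to gp2 so provisioned IOPS are not billed longer than needed.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"   # placeholder: the instance launched during recovery

volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
)["Volumes"]

for vol in volumes:
    if vol["VolumeType"] == "io1":
        print("Converting", vol["VolumeId"], "from io1 to gp2")
        ec2.modify_volume(VolumeId=vol["VolumeId"], VolumeType="gp2")
```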
Cost Optimization pillar :
- CloudEndure Disaster Recovery takes advantage of the most cost-effective services and resources to achieve an RPO of seconds and an RTO measured in minutes, at minimal cost. The type of resources used for replication can be configured to balance cost, RTO, and RPO requirements. For replication, CloudEndure Disaster Recovery uses shared t3.small instances in the staging area. Using EC2 Reserved Instances for replication servers is one method to reduce costs. Each shared replication server can mount up to 15 EBS volumes. In the staging area, CloudEndure Disaster Recovery uses either magnetic (<500 GB) or gp2 (>500 GB) EBS volumes to keep storage costs low. CloudEndure Disaster Recovery provides the ability to decrease storage costs further by using st1 (>500 GB) EBS volumes. The use of st1 EBS volumes can be configured in the replication settings; however, this may impact RPOs and RTOs.
- In the event of a disaster, CloudEndure Disaster Recovery triggers an orchestration engine that launches production instances in the target AWS Region within minutes. The production instances’ configuration is based on the defined blueprint. Using the appropriate configuration of resources is key to cost savings. Right-sizing the instance type selected in the blueprint ensures the lowest-cost resource that meets the needs of the workload. Selecting the appropriate EBS volume type for root and data volumes has a significant impact on consumption costs. TSO Logic, an AWS company, is a service that provides right-sizing recommendations, which can be imported into the CloudEndure Disaster Recovery blueprint.
Conclusion :
In this post, I reviewed AWS best practices and considerations for operating CloudEndure Disaster Recovery in AWS. Reviewing and applying the AWS Well-Architected Framework is a key step when deploying CloudEndure Disaster Recovery.
It lays the foundation to ensure a consistent and successful disaster recovery strategy. If disaster were to strike, implementing the concepts presented in the Operational Excellence, Security, and Reliability pillars supports a successful recovery. CloudEndure Disaster Recovery provides recoverability for your most critical workloads, while decreasing the total cost of ownership of your DR strategy.
AWS Landing Zone
What is the AWS Landing Zone solution all about?
AWS Landing Zone is a solution that helps customers to quickly set up a secure, multi-account AWS environment based on AWS best practices. With a large number of design choices, setting up a multi-account environment can take a significant amount of time, involve the configuration of multiple accounts and services, and require a deep understanding of AWS services.
This solution can help save time by automating the set-up of an environment for running secure and scalable workloads while implementing an initial security baseline through the creation of core accounts and resources. It also provides a baseline environment to get started with a multi-account architecture, identity and access management, governance, data security, network design, and logging.
Version Information:
Version 2.3.1 of the solution uses the most up-to-date Node.js runtime. Version 2.3 uses the Node.js 8.10 runtime, which reaches end-of-life on December 31, 2019. To upgrade to version 2.3.1, you can update the stack.
AWS Solution Overview :
The AWS Landing Zone solution deploys an AWS Account Vending Machine (AVM) product for provisioning and automatically configuring new accounts. The AVM leverages AWS Single Sign-On (SSO) for managing user account access. This environment is customizable to allow customers to implement their account baselines through a Landing Zone configuration and update pipeline.
1) Multi-Account Structure :
The AWS Landing Zone solution includes four accounts and add-on products that can be deployed using AWS Service Catalog, such as the Centralized Logging solution and AWS Managed AD and Directory Connector for AWS SSO.

- AWS Organization Account :
The AWS Landing Zone is deployed into an AWS Organizations Account. This account is used to manage configuration and access to AWS Landing Zone managed accounts. The AWS Organizations account provides the ability to create and financially manage member accounts. It contains the AWS Landing Zone configuration Amazon Simple Storage Service (Amazon S3) bucket and pipeline, account configuration StackSets, AWS Organizations Service Control Policies (SCPs), and AWS Single Sign-On (SSO) configuration.
- Shared Services Account :
The Shared Services Account is a reference for creating infrastructure shared services such as directory services. By default, the account hosts AWS Managed Active Directory for AWS SSO Integration in a shared Amazon Virtual Private Cloud (Amazon VPC) that can be automatically peered with new AWS accounts created with the Account Vending Machine (AVM).
- Log Archive Account
The Log Archive Account contains a central Amazon S3 bucket for storing copies of all AWS CloudTrail and AWS Config log files from the managed accounts.
- Security Account :
The Security Account creates Auditor (read-only) and Administrator (full-access) cross-account roles from a Security account to all AWS Landing Zone managed accounts. The intent of these roles is to be used by a company’s security and compliance team to audit or perform emergency security operations in case of an incident.
This account is also designated as the master Amazon GuardDuty Account. Users from the master account can configure GuardDuty as well as view and manage GuardDuty findings for their account and all of their member accounts.
2) Account Vending Machine :
The Account Vending Machine (AVM) is an AWS Landing Zone key component. The AVM is provided as an AWS Service Catalog product, which allows customers to create new AWS accounts in Organizational Units (OUs) preconfigured with an account security baseline, and a predefined network.
AWS Landing Zone leverages AWS Service Catalog to grant administrators permissions to create and manage AWS Landing Zone products and end user’s permissions to launch and manage AVM products.
The AVM uses launch constraints to allow end-users to create new accounts without requiring account administrator permissions.
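Because the AVM is exposed as a Service Catalog product, account creation can also be driven programmatically. The following boto3 sketch is hypothetical: the product ID, artifact ID, and parameter names are placeholders that will differ in your Landing Zone deployment (check the AVM product's parameter list first).

```python
# Hypothetical sketch: launch the AVM Service Catalog product to vend a new account.
# Product/artifact IDs and parameter names are placeholders for illustration only.
import boto3

sc = boto3.client("servicecatalog")

sc.provision_product(
    ProductId="prod-xxxxxxxxxxxxx",                # AVM product ID (placeholder)
    ProvisioningArtifactId="pa-xxxxxxxxxxxxx",     # AVM product version (placeholder)
    ProvisionedProductName="workload-account-001",
    ProvisioningParameters=[
        {"Key": "AccountName", "Value": "workload-account-001"},
        {"Key": "AccountEmail", "Value": "aws+workload001@example.com"},
        {"Key": "OrgUnitName", "Value": "workloads"},
    ],
)
```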

3) User Access :
Providing least-privilege, individual user access to your AWS accounts is an essential, foundational component to AWS account management. The AWS Landing Zone solution provides customers with two options to store their users and groups.

SSO with AWS SSO Directory :
The default configuration deploys AWS Single Sign-On (SSO) with AWS SSO directory where users and groups can be managed in SSO.
A single-sign-on endpoint is created to federate user access to AWS accounts.
4) Notifications :
The AWS Landing Zone solution configures Amazon CloudWatch alarms and events to send a notification on root account login, console sign-in failures, API authentication failures, and the following changes within an account: security groups, network ACLs, Amazon VPC gateways, peering connections, ClassicLink, Amazon Elastic Compute Cloud (Amazon EC2) instance state, large Amazon EC2 instance state, AWS CloudTrail, AWS Identity and Access Management (IAM) policies, and AWS Config rule compliance status.

- The solution configures each account to send notifications to a local Amazon Simple Notification Service (Amazon SNS) topic.
- The All Configuration Events topic aggregates AWS CloudTrail and AWS Config notifications from all managed accounts.
- The Aggregate Security Notifications topic aggregates security notifications from specific Amazon CloudWatch events, AWS Config Rules compliance status change events, and AWS GuardDuty findings.
- An AWS Lambda function is automatically subscribed to the security notifications topic to forward all notifications to an aggregation Amazon SNS topic in the AWS Organizations account.
- This architecture is designed to allow local administrators to subscribe to and receive specific account notifications.
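For example, a local administrator could subscribe to one of the account-level topics with a call like the one below; the topic ARN and email address are placeholders.

```python
# Simple sketch: subscribe an email address to an account-level notification topic.
import boto3

sns = boto3.client("sns")

sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:AWS-Landing-Zone-Security-Notification",  # placeholder ARN
    Protocol="email",
    Endpoint="security-team@example.com",   # the confirmation email must be accepted
)
```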
Security baseline :
The AWS Landing Zone solution includes an initial security baseline that can be used as a starting point for establishing and implementing a customized account security baseline for your organization. By default, the initial security baseline includes the following settings:
- AWS CloudTrail :
One CloudTrail trail is created in each account and configured to send logs to a centrally managed Amazon Simple Storage Service (Amazon S3) bucket in the log archive account, and to AWS CloudWatch Logs in the local account for local operations (with a 14-day log group retention policy).
- AWS Config :
AWS Config enables account configuration log files to be stored in a centrally managed Amazon S3 bucket in the log archive account.
- AWS Config Rules :
AWS Config rules are enabled for monitoring storage encryption (Amazon Elastic Block Store, Amazon S3, and Amazon Relational Database Service), AWS Identity and Access Management (IAM) password policy, root account multi-factor authentication (MFA), Amazon S3 public read and write, and insecure security group rules.
- AWS Identity and Access Management :
AWS Identity and Access Management is used to configure an IAM password policy.
- Cross-Account Access :
Cross-account access is used to configure audit and emergency security administrative access to AWS Landing Zone accounts from the Security account.
- Amazon Virtual Private Cloud (VPC) :
An Amazon VPC configures the initial network for an account. This includes deleting the default VPC in all regions, deploying the AVM requested network type, and network peering with the Shared Services VPC when applicable.
- AWS Landing Zone Notifications :
Amazon CloudWatch alarms and events are configured to send a notification on root account login, console sign-in failures, and API authentication failures within an account.
- Amazon GuardDuty :
Amazon GuardDuty is configured to view and manage GuardDuty findings in the member account.
Cloudxchange.io looks forward to working with security and compliance-focused customers, looking to implement cloud solutions that will help reduce the time and cost by using pre-defined solution architecture components. Contact us to learn more.
Written by : Sagar Autade
Microsoft on AWS
Migrating workloads to the cloud is just the first step towards Digital Transformation (DT or DX). This step sets off a cascade of questions, such as: Which cloud is right for me? What pitfalls do I need to watch for? Do my people have the skills we need to manage our cloud environment?
Most organisations are migrating to the cloud as part of their DX journey. But cloud services have developed and changed, periodically and radically, over the past decade, and many IT firms discover far too late that migration is a far more complex process than they originally anticipated.
Still, the benefits of migrating applications and other data to the cloud are worth the effort, thanks to the cloud’s ability to scale, effective use of resources, cost control, and increased security and compliance.
Microsoft workloads have long been the foundation of many enterprise businesses. With the advent of cloud technology, many enterprises are starting to move away from traditional IT operations and are moving these workloads onto the AWS Cloud for its efficient, agile, and cost-effective nature.
These services help organisations move faster at lower IT costs and scale their applications. Even the largest enterprises and the hottest start-ups trust these services to power a wide variety of workloads, including web and mobile applications, data processing, data warehousing, and more.
While the cloud is the optimal solution for these enterprises, it can feel complex and intimidating when deciding to migrate off legacy infrastructure. Thankfully, with over 700 Windows Server-based offerings in the AWS Marketplace, it’s clear that AWS is a strong option for running Microsoft workloads.
8 Steps for Migrating to the Cloud :
1. Develop a Cloud Strategy :
Before jumping on the cloud bandwagon, you must establish a strategy for migrating to the cloud, including goals and objectives. No organization should migrate to the cloud ‘just because’. Identify why the organization is migrating and get clear on what success will look like. Use this to set concrete goals for the future.
2. Choose the Best Provider :
Choose the cloud provider that will help you best meet your goals. Where you build and host your environment should depend on the goals you are trying to achieve, as identified in the step above. Each public cloud platform is different, with its own strengths and drawbacks.
3. Choose the Right Cloud Environment :
Assess (legacy) application readiness. Applications typically perform best on a public cloud when they have been designed to take advantage of the specific architecture of the platform, relying on its strengths.
a. Lift and shift :
This method involves migrating an application or workload ‘as is’ from one environment to another. Think of it as moving an old house from the country to the city: nothing about the house changes, just the environment around it. For legacy applications, this is often the easiest method, but for applications built in the pre-cloud era, it can introduce risks.
b. Rearchitecting / Recoding :
On the other end of the spectrum, refactoring involves rearchitecting or recoding part or all of the application to create a cloud-native equivalent. The most expensive and time-consuming option, this method is most often used for applications for which a commercial cloud-native equivalent is not available.
c. Middle Ground Modifications :
This middle-ground option involves modifying enough components of your application to allow it to run on a cloud-native platform. The advantage of this approach is that it allows you to realize more of the cloud’s benefits faster than the ‘lift and shift’ method. However, this method can also increase your security and compliance risks if your team doesn’t have the requisite knowledge and skills.
4. Infrastructure :
Assess your current infrastructure. Your applications have all sorts of requirements when it comes to infrastructure. You will need to ensure that the cloud platform can accommodate them, or that you are ready to re-platform your applications for the new environment.
The main factors to assess include:
- Amount of Storage Space needed
- Amount of Computational Power
- Networking Requirements
- Operating System Compatibility
5. Back-Ups :
The first part of any migration should be a full backup of your existing servers. Hopefully, nothing goes wrong as you migrate to the cloud, but one can never be too careful!
6. Deployment :
Deployment includes provisioning and testing each component as it is shifted. It is important to do this at a time when disruption to the business will be minimal.
7. Data Migration :
Once the deployment is done, you will want to migrate your existing data as well. This helps ensure business continuity and increases longevity.
8. Testing :
Be prepared to test all components together once they have been deployed and data has been migrated. This should include load testing and vulnerability assessments. The user experience should be seamless, and security testing should reveal no issues.
Why AWS for Microsoft Workloads ?
In many ways, AWS has worked to make Microsoft workloads as native to the AWS Cloud as possible. AWS notes that customers have ‘successfully deployed every Microsoft application available on the AWS Cloud’. What AWS has succeeded in doing is creating an avenue for deploying all aspects of Microsoft for business to AWS.
With AWS, you pay only for the resources you use and scale with flexibility. This allows for greater growth without risk or guesswork when calculating computing needs. Also, ‘Bring Your Own License (BYOL)’ policies allow businesses to save money on existing investments when moving to AWS.
- For many corporate IT applications, AWS provides a cloud platform that helps run Microsoft applications like SharePoint, Dynamics, and Exchange in a more secure, easily managed, high-performance approach.
- For developers, AWS offers a flexible development platform with EC2 for Windows Server and easy deployment and scaling with AWS Elastic Beanstalk, which helps scale and deploy applications built on .NET, Java, PHP, Node.js, Python, Ruby, and Docker.
- For business owners, AWS provides a fully managed database service to run Microsoft SQL Server, which helps you build web, mobile, and custom business applications.
Benefit from High Reliability and Strong Security :
AWS offers high availability across the world. Each AWS Region has multiple Availability Zones and data centres, allowing fault tolerance and low latency. This, along with 99.95% availability, allows for the maintenance of mission-critical data, keeping your systems online and protecting you from severe failure.
Similar to its infrastructure, AWS takes a multi-layer approach to security. From dedicated connectivity to security groups and access control lists, you can ensure your information is protected. End-to-end encryption, AWS Direct Connect, and Amazon Virtual Private Cloud (Amazon VPC) all offer security for your applications.
Whether you choose to take a hybrid cloud approach or dedicate your full enterprise operations to AWS, it offers a simplified approach to running Microsoft workloads in a cost-efficient manner.
Explore the World of Microsoft Windows on AWS :
Today, we believe that AWS is the best place to run Windows and its applications in the cloud. You can run the full Windows stack on AWS, including Active Directory, SQL Server, and System Center, while taking advantage of 61 Availability Zones across 20 AWS Regions. You can run existing .NET applications and use Visual Studio or VS Code to build new cloud-native Windows applications using the AWS SDK for .NET.
You can also run Windows Server 2016 on Amazon Elastic Compute Cloud (EC2). This version of Windows Server is packed with new features, including support for Docker and Windows containers.
1. SQL Server Upgrades :
AWS provides first-class support for SQL Server, encompassing all four editions (Express, Web, Standard, and Enterprise), with multiple versions of each edition. This wide-ranging support has helped SQL Server become one of the most popular Windows workloads on AWS.
The SQL Server Upgrade Tool makes it easy for you to upgrade an EC2 instance that is running SQL Server 2008 R2 SP3 to SQL Server 2016, allowing you to upgrade and launch the new AMI.
Amazon RDS makes it easy for you to upgrade your DB instances to a new major or minor version of SQL Server. The upgrade is performed in place and can be initiated with a couple of clicks. For example, if you are currently running SQL Server 2014, you have several newer versions available to upgrade to.
You can also opt in to automatic upgrades to new minor versions that take place within your preferred maintenance window.
Before you upgrade a production DB instance, you can create a snapshot backup, use it to create a test DB instance, upgrade that test instance to the desired new version, and perform acceptance testing.
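The snapshot-then-upgrade flow described above could be scripted roughly as follows with boto3; the instance identifiers and target engine version are placeholder assumptions, and a major version upgrade must be explicitly allowed.

```python
# Hedged sketch: snapshot a SQL Server DB instance, restore a test copy, then upgrade it.
import boto3

rds = boto3.client("rds")

# 1. Take a snapshot before touching the production instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="sqlserver-prod",                 # placeholder instance name
    DBSnapshotIdentifier="sqlserver-prod-pre-upgrade",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="sqlserver-prod-pre-upgrade")

# 2. Restore the snapshot as a test instance and run acceptance tests there first.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="sqlserver-upgrade-test",
    DBSnapshotIdentifier="sqlserver-prod-pre-upgrade",
)

# 3. Perform the in-place upgrade on the chosen instance.
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-upgrade-test",
    EngineVersion="14.00.3281.6.v1",       # placeholder target version string
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,
)
```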
2. SQL Server on Linux :
If your organization prefers Linux, you can run SQL Server on Ubuntu, Amazon Linux 2, or Red Hat Enterprise Linux using our License Included (LI) Amazon Machine Images. Check out the most recent launch announcement or search for the AMIs in AWS Marketplace using the EC2 Launch Instance Wizard.
This is a very cost-effective option since you do not need to pay for Windows licenses.
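If you would rather locate these AMIs programmatically, a hedged sketch is shown below; the name filter pattern is an assumption and may need adjusting to match current AMI naming.

```python
# Hedged sketch: search for Amazon-published SQL Server on Linux AMIs.
# The name filter is an assumed pattern, not a documented naming convention.
import boto3

ec2 = boto3.client("ec2")

images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[{"Name": "name", "Values": ["*SQL*2017*Ubuntu*"]}],   # assumed pattern
)["Images"]

for image in sorted(images, key=lambda i: i["CreationDate"], reverse=True)[:5]:
    print(image["ImageId"], image["Name"])
```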
You can use the new re-platforming tool (an AWS Systems Manager script) to move your existing SQL Server databases (2008 and above, either in the cloud or on premises) from Windows to Linux.
3. Lambda Support :
Launched in 2014, and the subject of continuous innovation ever since, AWS Lambda lets you run code in the cloud without having to own, manage, or even think about servers. You can choose from several .NET Core runtimes for your Lambda functions, and then write your code in either C# or PowerShell.
Read ‘Working with C#’ and ‘Working with PowerShell’ in the AWS Lambda Developer Guide for more insights. Your code has access to the full set of AWS services and can make use of the AWS SDK for .NET.
4. .NET Dev Center :
The AWS .NET Dev Center contains materials that will help you learn how to design, build, and run .NET applications on AWS.
Also, check out AWS’s advantages for Windows over the next largest cloud provider.
Listed below are some of the powerful AWS services used for SQL Server workloads.
Summary :
AWS offers the best cloud for SQL Server, and it is the right cloud platform for running Windows-based applications today and in the future. SQL Server on Windows or Linux on Amazon EC2 enables you to increase or decrease capacity within minutes. You can commission one, hundreds, or even thousands of server instances simultaneously, and benefit from:
- Greater Reliability
- Faster Performance
- More Secure Capabilities
- More Migration Experiences and
- Lower Customer Cost
Written by: Nitin Ghodke
source link : https://aws.amazon.com/windows/
On-Prem vs Cloud Database
With data now at the heart of business operations, how you choose to store, process, and manage this crucial asset has become one of the most important factors in a firm’s success. Companies that struggle to take control of this will quickly find themselves falling behind, as they don’t have the insight they need to meet the expectations of today’s customers.
So, you’re moving your workloads to the public cloud, and you’re considering your options. Do you switch gears and move forward with one of the new Database as a Service (DBaaS) offerings, or do you stick with a traditional database approach?
DBaaS Cloud Database vs. Traditional Databases
Traditional database management requires companies to provision their own infrastructure and resources to manage their databases in data centers. This is costly and time consuming, and you still need to plan, raise purchase orders for equipment and software, and hire people with skills from multiple technical domains, including OS and database software.
This makes them familiar territory for customers with on-premises operations. However, companies that are moving to the cloud may want to look into other options, such as Database as a Service (DBaaS).
DBaaS resolves many of these issues, especially those related to provisioning and cost. You can quickly set up a database of enterprise class with a few clicks. DBaaS provides a cost-effective solution for large businesses, as well as SMEs and startups with smaller budgets.
Small startups can spawn servers that are self-service but can grow in capacity in line with their requirements or as the need grows. Even enterprises that have critical data and require peak time high resources for their reporting and OLTP systems are moving to DBaaS because of its ability to scale on demand. You don’t need to worry about availability and security because the cloud has enabled replication of databases across multiple geographical locations.
What is Database as a Service (DBaaS)?
A DBaaS is a cloud database service that takes over the management of the underlying infrastructure and resources that cloud databases require, allowing companies to take advantage of services in the cloud. This can free up personnel to focus on other tasks, or allow smaller organizations to get started quickly without the need for several specialists. In many cases, with a DBaaS you can set up a database with a few clicks.
Running a cloud-based database makes it easy to grow your databases as your needs grow, in addition to scaling up or down on-demand to accommodate those peak-workload periods. You can also have peace of mind for any security and availability concerns as the cloud enables database replication across multiple geographical locations, in addition to several backup and recovery options.
And while there are many cloud providers that offer DBaaS, the market leaders are currently Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. Each offers DBaaS in a variety of flavors (MySQL, Microsoft SQL Server, PostgreSQL, Oracle, and NoSQL databases such as Hadoop or MongoDB), such as the AWS offerings Amazon RDS and Amazon Aurora; the Azure offerings Azure Database for MySQL, Azure Database for PostgreSQL, and Azure SQL Database; and GCP’s Cloud SQL. The cloud service providers also have database migration services to help you migrate your data to the cloud.
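To make the "few clicks" point concrete, here is a minimal boto3 sketch of provisioning a managed MySQL instance on Amazon RDS; the identifiers, instance class, and credentials are placeholders, and in practice you would source the password from a secrets store.

```python
# Minimal sketch: provision a managed MySQL database on Amazon RDS.
# Identifiers, sizing, and credentials below are placeholders for illustration.
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="orders-mysql",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,                           # GiB
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",     # placeholder: use Secrets Manager in practice
    MultiAZ=True,                                   # synchronous standby in a second Availability Zone
    BackupRetentionPeriod=7,                        # automated backups, in days
)
```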
Moving your database to the cloud doesn’t have to require that your current IT team drop what they’re doing and focus on a cloud migration. There are services available to help you make the move. There are many benefits to using a service provider; here are some of them:
1. Migration speed – using a team of experienced professionals eliminates the “uh oh” situations created by not understanding the challenges associated with developing and executing cloud database strategies.
2. Fully benefit from AWS, Azure and Google Cloud – using a managed services provider, you get the advantages of these cloud products with the monitoring, backup and disaster recovery to best fit your needs.
3. Ensure optimum database performance, access, protection, compliance, availability, and scalability plus – continuously manage the environment to ensure database optimization and maximum performance.
4. Ensure predictable performance – Analyzing and tuning each data set to match the workflow characteristics helps ensure the most predictable cloud database and application performance.
5. Compliance – solutions are implemented to satisfy organizations’ compliance requirements such as HIPAA and PCI
6. Reduce database TCO – when moving to the cloud, there is opportunity to consolidate your databases and realize up to a 50 percent database consolidation ratio, reducing IT management efforts and database license costs.
7. Mitigate risk – by following industry best practices, you have peace of mind knowing that your cloud databases are protected, secure, and private.
Summary :
Looking at the current pace of adoption of cloud technologies, there is no doubt that DBaaS is here to stay. With a compound annual growth rate of more than 67%, the industry is expected to exceed $14 billion in 2019. This is mostly due to 90% of organizations preferring a hybrid deployment. Market leaders are working hard on introducing security and compliance features that were once only possible with on-premises data centers. This will enable companies to adhere to all necessary compliance and security measures and bring enterprise-grade databases to the cloud, allowing companies to benefit from DBaaS features.
Written by: Suyog Kalambate
Benefits of using Public cloud with multi cloud technology
Hey, if you are new to the cloud business and are planning to move your data to the cloud, the first thing to keep in mind is the benefits of using the public cloud over a private cloud, and why multi-cloud technology is being adopted by most companies nowadays.
There are many top cloud service provider companies offering these services to users. But before that, let’s get to know what exactly the public cloud is.
Cloud: The cloud is a term that refers to accessing compute, Information Technology (IT), and software applications through a network connection, often by reaching data centers over a wide area network (WAN) or the Internet. Typical basic uses include Infrastructure-as-a-Service for expansion or burst workloads, data backup to a single cloud, or a Software-as-a-Service application. All IT assets can live in the cloud. Because of its growing adoption, the number of top cloud service provider companies is also increasing.
PUBLIC CLOUD: The public cloud is one type of cloud deployment, alongside private cloud and hybrid cloud. A public cloud follows the standard cloud computing model, in which a top cloud service provider makes resources such as virtual machines (VMs), applications, or storage available to the general public over the web. Public cloud services may be free or offered on a pay-per-use model by the provider. The public cloud is what most people mean when they say “cloud computing,” and the term is almost redundant because, by definition, something in the cloud is generally publicly accessible. Public cloud combined with multi-cloud has its own advantages.
Multi cloud: Multi-cloud is one of today’s biggest buzzwords. Multi-cloud means using more than a single public cloud. This usage pattern arose when users tried to avoid dependence on a single public cloud provider, when they picked specific services from each public cloud to get the best of each, or when they wanted both advantages. While a multi-cloud deployment can refer to any use of multiple Software as a Service (SaaS) or Platform as a Service (PaaS) cloud offerings, today it generally refers to a mix of public Infrastructure as a Service (IaaS) environments. It also covers the distribution of cloud resources, software, applications, and so on across several cloud-hosting environments. A few of these cloud infrastructures are Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), OpenStack, VMware, IBM SoftLayer, and more.
Now to the main question: why should one use the public cloud combined with multi-cloud for the benefit of the organization?
Few areas in which we can get the benefits of using these services are:
1) RELIABILITY
2) COST EFFECTIVE
3) FLEXIBILITY
4) SCALABILITY
5) STYLE COSTING
6) LOCATION INDEPENDENCE
A multi-cloud strategy offers the ability to choose different cloud services or features from different providers. This is useful, since some cloud environments are better suited than others for a particular task. Multi-cloud was, and still is, viewed as a way to prevent data loss or downtime caused by a localized component failure in the cloud. The ability to avoid vendor lock-in was also an early driver of multi-cloud adoption.
In addition, some organizations pursue multi-cloud strategies for data sovereignty reasons. Certain laws, regulations, and corporate policies require enterprise data to physically reside in specific locations. Multi-cloud computing can help organizations meet those requirements, since they can choose from multiple IaaS providers’ data center regions or Availability Zones. This flexibility in where public cloud data resides also lets organizations locate compute resources as close as possible to end users to achieve optimal performance and minimal latency. Multi-cloud offers most of the advantages of the public cloud, such as agility, adaptability, scalability, and so on.
Now some important stats on public cloud:
1) Multi-cloud strategies will jump from 10% in 2015 to over 70% by 2018 (Gartner). More enterprise organizations than ever before are analyzing their current technology portfolio and defining a cloud strategy that includes multiple cloud platforms.
2) About 80 percent of organizations surveyed plan to have more than 10 percent of their workloads on public cloud platforms within three years, according to McKinsey’s 2017 global cloud cybersecurity research.
3) By the end of 2018, more than half of global enterprises will rely on at least one public cloud platform for digital transformation
4) Public cloud platforms represent the fastest growing segment: They will generate $44 billion in 2018,
5) Cloud rates third on CIOs’ investment lists
Current and planned usage of public cloud platform services running applications worldwide in 2018
SD-WAN Architecture: Advantages and Options
To understand the advantages and options of SD-WAN technology, let us first understand what SD-WAN technology is. SD-WAN, or Software-Defined Wide Area Networking, is a transformative approach to simplifying branch office networking and assuring optimal application performance by simplifying the networks between remote locations and branch offices.
For example, a WAN might be used to connect branch offices to a central corporate network separated by distance. In the past, these WAN connections often used technology that required special proprietary hardware. The SD-WAN movement seeks to move more of the network control layer into the “cloud,” using a software approach and, in the process, centralizing and simplifying network management.
What Do Enterprises Need in an SD-WAN Architecture?
Enterprises have been increasingly investing in open, flexible cloud solutions. SD-WAN, being one of them, is particularly beneficial to environments separated by distance, for example between main offices and branch offices. Whereas traditional WAN can be expensive and complex, SD-WAN architecture reduces recurring network costs, offers network-wide control and visibility, and simplifies the technology with zero-touch deployment and centralized management.
Apart from the general SD-WAN benefits, a primary advantage of an SD-WAN architecture is security. In the SD-WAN architecture, a company benefits from end-to-end encryption across the entire network. All communication between the main office and branch offices is secure, as is communication to and from the cloud.
Types of SD-WAN Architecture
- Premises-based SD-WAN solutions – This solution involves an appliance that is placed onsite to achieve SD-WAN functionality. Premises-based SD-WANs can be cost-effective solutions for smaller and localized businesses.
- MPLS-based SD-WAN solutions – This solution involves multiple appliances placed at network endpoints. These solutions create a virtual IP network between the vendor-proprietary appliances, giving them control of network packets from end to end.
- Internet-based SD-WAN solutions – These solutions also use multiple appliances at each customer location, using public Internet connections from customer-chosen providers. The customer pays for a portion of its Internet connections to be SD-WAN.
Written by: Simpi Nath
DevOps is not a product or tool. It’s an entirely new approach to automating operations
DevOps is not a product or tool. It’s an entirely new approach to automating operations, integrating the operations and development teams behind standardized, automated processes for infrastructure deployment.
With DevOps you can achieve faster deployment of infrastructure and applications, accelerated time to market, higher operational efficiency, better quality, and more focus on your core business goals.
You can leverage our DevOps managed service for any application or development team that wants to run its application in the cloud. We work closely with your IT and development teams to help and advise them on leveraging the benefits of the cloud, and to ensure your environment runs efficiently and effectively.
By combining our technical experience in public cloud management, development, and integration technologies with our application automation service, you can be assured of an efficient, better-performing system with less downtime and faster deployment of changes.
HOW TO SET UP DEVOPS:
- Choose your cloud infrastructure: select the cloud infrastructure best suited to your business; it could be AWS, Azure, etc. (see the provisioning sketch after this list)
- Set up Chef or Windows DSC for configuration management
- Application automation
- Support
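As a rough illustration of the first step on AWS, the sketch below uses Python and boto3 to provision a single EC2 instance for a development environment. It is a minimal example rather than production tooling; the AMI ID, key pair name, and tag values are placeholders you would replace with your own.

```python
# Minimal sketch: provision one EC2 instance for a dev environment using boto3.
# The AMI ID, key pair name, and tag values below are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    KeyName="my-dev-keypair",          # placeholder key pair
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "Environment", "Value": "dev"},
            {"Key": "ManagedBy", "Value": "devops-team"},
        ],
    }],
)

instance = instances[0]
instance.wait_until_running()   # block until the instance reaches the running state
instance.reload()               # refresh attributes such as the public DNS name
print(f"Launched {instance.id} at {instance.public_dns_name}")
```

From here, a configuration management tool such as Chef or Windows DSC (step 2) would take over and bring the instance to its desired state.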
DevOps technique-Continuous integration and continuous delivery
Every industry is being disrupted by the demands of the digital marketplace. Traditional approaches to business no longer work, because they do not meet the current expectations of the market and are ripe for disruption by more flexible competitors. Leaders in digital transformation accept that digital (the app) is the new customer interface and are focused on adopting and providing experiences on that interface.
Continuous Development and Continuous Integration are the new ideal for creating and improving web applications.
Continuous Integration is a key part of agile and flexible development practices. Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository. Each check-in is then verified by an automated build, allowing teams to detect problems early. By integrating regularly, errors can be detected quickly and located more easily.
The value that Continuous Integration brings is that it compels developers, and teams of developers, to integrate their individual work with each other’s as early as possible. This exposes integration issues and conflicts on a regular basis. To make continuous integration work, developers have to communicate more and ensure that their work incorporates changes from other developers that may affect the code they are working on.
Continuous Deployment follows the testing that happens during Continuous Integration and pushes changes out to a production system.
Continuous Delivery takes the concept of Continuous Integration one step further. Once the application is built, it is delivered to the next stages in the application delivery life cycle. The objective of Continuous Delivery is to get the new features and improvements that developers are creating out to customers and users as early as possible.
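To make the flow concrete, here is a minimal, illustrative sketch in Python of what an automated integration step might look like: on each check-in it pulls the latest code from the shared repository, runs the test suite, and only produces a build artifact when the tests pass. The repository path and the git/pytest/build commands are assumptions for the example, not the API of any particular CI product.

```python
# Minimal CI-style sketch: pull the latest code, run tests, build only if tests pass.
# The repo directory and commands below are illustrative assumptions.
import subprocess
import sys


def run(cmd, cwd="."):
    """Run a shell command and report whether it exited successfully."""
    print(f"$ {cmd}")
    return subprocess.run(cmd, shell=True, cwd=cwd).returncode == 0


def ci_pipeline(repo_dir="./my-app"):
    # 1. Integrate: fetch the latest changes from the shared repository.
    if not run("git pull", cwd=repo_dir):
        sys.exit("Could not update the working copy")

    # 2. Verify: run the automated test suite on every check-in.
    if not run("python -m pytest -q", cwd=repo_dir):
        sys.exit("Tests failed - integration problem detected early")

    # 3. Deliver: package a build artifact only when the tests are green.
    if not run("python -m build", cwd=repo_dir):
        sys.exit("Build failed")

    print("Artifact ready for the next stage of the delivery pipeline")


if __name__ == "__main__":
    ci_pipeline()
```

In practice this logic would live inside a CI server and be triggered automatically by every commit rather than run by hand.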
Leaders are fully integrating development, operations, security and business teams into closed loops for maximum impact, and offering services which enable and encourage innovation and development.
Written by: SIMPI NATH
DevOps: The development-operations lifecycle
Definition
DevOps combines development and operations so that the two work better together, with better communication, collaboration, and integration between software developers and information technology (IT) professionals. The goal is to build trust and reduce friction.
Process
The lifecycle begins at the ideation/requirements stage, with ideas raised by customers, by marketing, or as enhancement requests. These are converted into a product that end customers value; whatever you deploy, whether an idea, a new feature, or an enhancement, you get feedback based on how customers use it and improve the product accordingly. A number of stakeholders sit in between, involved in creating, testing, and delivering the software. The job of DevOps is to work through that feedback and improve the software delivery process itself, which can be done in two main ways: by reducing rework and by reducing overhead.
The focus on developer/operations collaboration has produced a new approach to managing the complexity of operations: infrastructure automation, configuration management, deployment automation, performance management, and monitoring.
TO ADOPT DEVOPS, the following parameters need to come together –
- degree of agility
- level of collaboration
- process maturity
Configuration/infrastructure/IT management solves the problem of having to manually install and configure packages once the hardware is in place. The point of configuration automation is that servers are deployed exactly the same way every time; if you want to make a change across a thousand servers, you only need to make the change in one place. Popular tools include:
- Chef
- Puppet
- Ansible
- Salt Stack
- Pallet
- Bcfg2
These tools handle everything from installing an operating system, to installing and configuring software on instances, to configuring how instances and software communicate with each other, and much more. By scripting environments, you can apply the same configuration to a single node or to thousands. The tools can be used in the cloud as well as in virtual and physical environments.
Although these tools tend to be used most often for Linux system automation, they support Windows as well. Puppet has a larger user base than Chef, and it offers more support for older, outdated operating systems. With Puppet, you can also set dependencies on other tasks. With both tools you get the same result from the same configuration no matter how many times you run it; in other words, the configuration is idempotent.
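As a minimal illustration of that idempotency idea, the Python sketch below (not Chef or Puppet syntax) ensures that a configuration line is present in a file; running it once or a hundred times leaves the file in exactly the same state. The file name and setting are made-up examples.

```python
# Minimal sketch of idempotent configuration management in plain Python.
# ensure_line() only changes the file if it is not already in the desired state,
# so repeated runs converge on the same result. File name and setting are examples.
from pathlib import Path


def ensure_line(path: str, line: str) -> bool:
    """Add `line` to the file at `path` if it is missing. Return True if a change was made."""
    f = Path(path)
    existing = f.read_text().splitlines() if f.exists() else []
    if line in existing:
        return False                      # already in the desired state, do nothing
    existing.append(line)
    f.write_text("\n".join(existing) + "\n")
    return True


if __name__ == "__main__":
    for attempt in (1, 2):
        changed = ensure_line("app.conf", "max_connections = 100")
        print(f"run {attempt}: {'changed' if changed else 'no change needed'}")
```

Real configuration management tools apply the same principle to packages, services, users, and files across an entire fleet, which is why a single change defined in one place can be rolled out safely to a thousand servers.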
Why it is important for enterprises:
- Business continuity: improve business results by restructuring around customer value streams, and ensure the quality needed to win, serve, and retain customers effectively
- Scalability
- Continuous integration and delivery
- Fault tolerance
- Deliver innovative digital solutions that delight customers and win markets
- Time to market: reduce time to market through streamlined software delivery.
- Throughput: increase team productivity and deliver new functionality faster.
- Risk: early identification of quality concerns, reduction of defects across the life cycle
- Resiliency: the operational state is more secure and stable, and changes are systematically auditable.
DevOps shortens an application’s time to market by uniting development and operations teams and creating standardized processes that reduce the time it takes to get an application up and running. These standardized processes are developed by orchestrating several different automated tasks into a repeatable, reusable workflow.
WRITTEN BY: TINNI SAHA