
Thursday, March 30, 2017

UKFast opens trapdoor under prices, thumbs nose at AWS

UK hosting provider UKFast has cut the prices of its OpenStack-based hosting, taking a swipe at AWS in the process.

The price cuts apply to UKFast's eCloud Flex platform, where the company says prices have dropped by up to 44 percent. Further discounts are available for bulk purchases, including 30 percent off for customers on a three-year contract.

"It was very good for our industry when AWS entered the market, as it fosters innovation and forces everyone around the world to up their game," said CEO Lawrence Jones.

"But in this business it is easy to become a victim of your own success, and I believe that AWS, because of its size, is unable to match the level of support provided by some of the smaller British hosting and cloud players."

Jones added: "There is a lot of confusion about how certain elastic cloud products are priced, and clients end up paying a lot more than they expected once hidden costs emerge."

He is not wrong. While UKFast puts its prices in a single one-page calculator, Amazon has a multitude of calculators and a bewildering range of options to choose from.

eCloud Flex, UKFast said in a statement, allows developers to "programmatically build and manage individual virtual machines" on its hosting platform through the provided OpenStack APIs.

"The popularity of Flex is no surprise," the company said, "but it has allowed us to expand the product, and now we are passing the benefits on to our customers. The investments we have made in our data centres mean that we can now offer the cloud at a more competitive price."

The big elephant in the room when choosing AWS is the cost of outbound network bandwidth. On-demand Amazon EC2 (London region) data transfer out to the public internet costs $0.09/GB for the first 10 TB per month, which works out at roughly $900 per month if you use the full 10 TB, whereas UKFast charges £32.00 per month (about $39.71) for 10 TB of public internet bandwidth.
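The arithmetic behind that comparison can be sketched in a few lines. This is a simplified model that assumes the quoted 2017 rates and decimal units (1 TB = 1,000 GB), and it ignores AWS's cheaper tiers above 10 TB:

```python
# Sketch of the egress-cost comparison above.
# Assumes 1 TB = 1,000 GB and the quoted 2017 on-demand rate of
# $0.09/GB for EC2 (London) data transfer out to the public internet.

AWS_RATE_PER_GB = 0.09   # USD per GB, first 10 TB/month tier
UKFAST_FLAT_USD = 39.71  # UKFast's £32.00/month flat fee, converted to USD

def aws_egress_cost(tb_per_month: float) -> float:
    """Monthly cost of public-internet egress on EC2 at the quoted rate."""
    return tb_per_month * 1000 * AWS_RATE_PER_GB

full_tier = aws_egress_cost(10)  # using the whole 10 TB
print(f"AWS:    ${full_tier:,.2f}/month")      # $900.00/month
print(f"UKFast: ${UKFAST_FLAT_USD:,.2f}/month")
print(f"Ratio:  {full_tier / UKFAST_FLAT_USD:.0f}x")
```

At these rates, fully using the 10 TB allowance on AWS costs around 23 times UKFast's flat fee.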

On the other hand, an AWS "hard disk" (an EBS volume) can be moved from one server to another, and AWS scales automatically, which makes it harder to predict exactly how much money will end up leaving your wallet.

Thursday, March 2, 2017

AWS says a typo caused the massive S3 failure this week

Everybody makes mistakes. But working at the scale of Amazon Web Services means that a single mistyped command can lead to a massive failure that paralyzes popular sites and services. That is apparently what happened earlier this week, when the Amazon Simple Storage Service (S3) in the Northern Virginia region experienced an 11-hour outage.




Other Amazon services in the US-EAST-1 region that rely on S3, such as Elastic Block Store, Lambda, and new instance launches on the Elastic Compute Cloud, were also impacted by the outage.

AWS apologized for the incident in a post-mortem published Thursday. The outage affected the likes of Netflix, Reddit, Imgur and Adobe, and more than half of the top 100 online retail sites experienced slower load times during the failure, according to site-monitoring service Apica.

Here is what caused the failure, and what Amazon plans to do about it:

According to Amazon, an authorized S3 team member executed a command that was intended to "remove a small number of servers from one of the S3 subsystems that is used by the S3 billing process," in response to that billing process running more slowly than expected. One of the command's parameters was entered incorrectly, and the command removed a much larger set of servers, ones supporting two critical S3 subsystems.

One of them, the index subsystem, manages the metadata and location information for all S3 objects in the region; the other, the placement subsystem, manages the allocation of storage for new objects and needs the index subsystem to function properly. Both are fault-tolerant, but the number of servers taken offline required a full restart of each.

As it turns out, Amazon had not fully restarted these subsystems in its larger regions for several years, and S3 had experienced massive growth in the interim. Restarting the subsystems therefore took longer than expected, which added to the duration of the outage.

In response to the incident, AWS is making several changes to its internal tools and processes. The tool responsible for the failure has been modified to remove capacity more slowly and to block any removal that would take capacity below a safety threshold. AWS is also auditing its other tools to ensure they have similar safeguards.
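As a rough illustration of the kind of safeguard described, not AWS's actual tooling, a capacity-removal tool might refuse any request that would drop a subsystem below its minimum required capacity, and drain the rest in small batches. All names here are hypothetical:

```python
# Hypothetical sketch of a capacity-removal safeguard, inspired by the
# change AWS describes: block removals that would take a subsystem below
# its minimum required capacity, and remove servers slowly in batches.

class RemovalBlocked(Exception):
    """Raised when a removal would violate the capacity floor."""

def plan_removal(active_servers: int, to_remove: int,
                 min_required: int, batch_size: int = 2) -> list:
    """Return a list of batch sizes, or raise if the floor would be broken."""
    if active_servers - to_remove < min_required:
        raise RemovalBlocked(
            f"removing {to_remove} of {active_servers} servers would leave "
            f"{active_servers - to_remove}, below the floor of {min_required}")
    # Drain slowly: split the removal into small batches instead of all at once.
    batches = []
    remaining = to_remove
    while remaining > 0:
        step = min(batch_size, remaining)
        batches.append(step)
        remaining -= step
    return batches

# A small, safe removal proceeds in batches...
print(plan_removal(active_servers=100, to_remove=5, min_required=90))
# ...while a fat-fingered large removal is rejected outright.
try:
    plan_removal(active_servers=100, to_remove=80, min_required=90)
except RemovalBlocked as e:
    print("blocked:", e)
```

The key design point is that the check runs before any server is touched, so a mistyped parameter fails loudly instead of silently taking down critical capacity.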

AWS engineers will also begin refactoring the S3 index subsystem to speed up restarts and reduce the blast radius of problems in the future.

The cloud provider has also modified its Service Health Dashboard to run across multiple regions. AWS staff were unable to update the dashboard during the outage because its console depended on S3 in the affected region.