Sunday, April 24, 2016

AWS Summit: New Cloud Services, Expanded EBS Choices

Amazon Web Services announced new cloud services at its AWS Summit in Chicago this week, adding expanded storage options for Elastic Block Store (EBS), the Amazon Inspector security service, and a data transfer accelerator for Amazon S3 to its list of offerings.

Amazon Inspector is a security assessment service that can be applied to an Amazon customer's future workload while it is still under development on the customer's premises. It had been available in preview for several months and became generally available on April 19.

As agile development methods and other practices speed up application production, the effort required to determine exposure and vulnerability risk can quickly fall behind the code being produced, said Stephen Schmidt, AWS director of information security. When applications are designed to run on AWS, customers "have asked us to do the same rigorous security assessments on their applications that we do for our AWS services," he said in Amazon's announcement.

Inspector provides APIs that let customers connect the service to their application development and deployment processes.

The service can thus be invoked programmatically when needed and can conduct assessments at the scale at which the application will run in the cloud, something that can be difficult to do on premises, where inspection and testing resources may be scarce. With the service available, code can continue toward deployment without waiting for developers or central staff to evaluate its security manually, Schmidt said.
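
For illustration only, here is a minimal sketch of that kind of programmatic invocation using the boto3 SDK's Inspector (Classic) client. The template ARN, run name, and polling interval are placeholders, and an assessment template is assumed to already exist for the tagged workload.

    import time

    import boto3  # AWS SDK for Python

    inspector = boto3.client("inspector", region_name="us-east-1")

    # Placeholder ARN -- in practice this comes from a previously created
    # assessment template that targets the tagged workload.
    TEMPLATE_ARN = "arn:aws:inspector:us-east-1:123456789012:target/0-xxxx/template/0-yyyy"

    # Kick off an assessment run as one step of a build/deploy pipeline.
    run = inspector.start_assessment_run(
        assessmentTemplateArn=TEMPLATE_ARN,
        assessmentRunName="pre-deploy-check",
    )
    run_arn = run["assessmentRunArn"]

    # Poll until the run reaches a terminal state (duration is set on the template).
    while True:
        state = inspector.describe_assessment_runs(
            assessmentRunArns=[run_arn]
        )["assessmentRuns"][0]["state"]
        if state in ("COMPLETED", "COMPLETED_WITH_ERRORS", "FAILED", "CANCELED"):
            break
        time.sleep(60)

    # Pull the findings produced by the run.
    findings = inspector.list_findings(assessmentRunArns=[run_arn])
    print(f"{len(findings['findingArns'])} findings for run {run_arn}")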

Applications can be given a check when they undergo changes or increased use. An operator can run the service against an application through the AWS Management Console, using AWS tags that identify the workload. The operator can select specific tests from a list and set how long the assessment runs.
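
A rough sketch of that same tag-based setup through the Inspector API (boto3) rather than the console; the tag key and value, resource names, rules-package selection, and one-hour duration are invented for the example.

    import boto3

    inspector = boto3.client("inspector", region_name="us-east-1")

    # Group the EC2 instances that carry the tag identifying the workload.
    group = inspector.create_resource_group(
        resourceGroupTags=[{"key": "Workload", "value": "checkout-service"}]
    )

    # An assessment target covers everything in that resource group.
    target = inspector.create_assessment_target(
        assessmentTargetName="checkout-service-target",
        resourceGroupArn=group["resourceGroupArn"],
    )

    # "Tests off the list": each rules package bundles related checks from the
    # Inspector knowledge base; list_rules_packages returns the ARNs available
    # in the region (picking the first two here purely for illustration).
    packages = inspector.list_rules_packages()["rulesPackageArns"][:2]

    # The template also fixes how long the assessment is allowed to run.
    template = inspector.create_assessment_template(
        assessmentTargetArn=target["assessmentTargetArn"],
        assessmentTemplateName="checkout-service-hourly",
        durationInSeconds=3600,  # one-hour assessment window
        rulesPackageArns=packages,
    )
    print(template["assessmentTemplateArn"])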

Inspector can scan for a number of known vulnerabilities and collect information on how an application communicates with other Amazon services. It checks, for example, whether the application uses secure channels and how much network traffic passes between EC2 instances. Inspector draws on a knowledge base of secure operations, with "packages" of related rules that can be applied to different situations. Amazon updates the knowledge base with the latest threat information.

The assessment results, along with recommendations for remediating the vulnerabilities found, are presented to the application's owner. Inspector delivers key learnings from AWS's world-class security team, Schmidt said, so customers can correct problems before deployment rather than after an incident occurs.

The second service announced at the April 19 AWS Summit is Amazon S3 Transfer Acceleration. It uses Amazon's edge network, which efficiently distributes content to end users from 50 different locations. That same edge network supports the AWS CloudFront CDN and serves fast DNS responses to Route 53 queries.

Those edge network locations now also serve as data transfer stations, using optimized protocols and advanced infrastructure to move data objects from one part of the network to another. Transfer rates improve by 50% to 500% for large objects moved between countries, Jeff Barr, AWS chief evangelist, wrote in a blog post on the new service.
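
For readers who want to try it, a small sketch of enabling and using the feature from the boto3 SDK; the bucket and file names are placeholders.

    import boto3
    from botocore.config import Config

    s3 = boto3.client("s3")
    BUCKET = "example-bucket"  # placeholder name

    # Transfer Acceleration is switched on per bucket.
    s3.put_bucket_accelerate_configuration(
        Bucket=BUCKET,
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Subsequent transfers are routed through the nearest edge location when the
    # client is pointed at the accelerate endpoint.
    s3_accel = boto3.client(
        "s3",
        config=Config(s3={"use_accelerate_endpoint": True}),
    )
    s3_accel.upload_file("big-dataset.tar.gz", BUCKET, "big-dataset.tar.gz")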

Barr said AWS has also increased the capacity of its Snowball data transfer appliance from 50 TB to 80 TB. Snowball users load their encrypted data onto the device, which is then physically shipped to an AWS data center. Data on a Snowball cannot be decrypted until it reaches its destination, and 10 Snowballs can be loaded in parallel on a single user account.

Besides the two new services, AWS launched two low-cost storage options for running EC2 instances or for use with big data projects on Elastic MapReduce clusters, Amazon's version of Hadoop.
The two EBS options seek to combine the speed of solid-state drives with the greater storage capacity and lower price per gigabyte of hard disk drives. AWS offers them as two volume types: Throughput Optimized HDD and Cold HDD.
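
As a quick sketch, the new volume types can be requested like any other EBS volume through the EC2 API; the example below assumes the volume-type identifiers st1 (Throughput Optimized HDD) and sc1 (Cold HDD), a 1,000 GiB size, and a placeholder Availability Zone.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Throughput Optimized HDD volume (type identifier assumed to be "st1").
    st1 = ec2.create_volume(
        AvailabilityZone="us-east-1a",  # placeholder AZ
        Size=1000,                      # size in GiB
        VolumeType="st1",
    )

    # Cold HDD volume (type identifier assumed to be "sc1").
    sc1 = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=1000,
        VolumeType="sc1",
    )
    print(st1["VolumeId"], sc1["VolumeId"])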

Throughput Optimized HDD is designed for big data workloads that could include, in addition to Elastic MapReduce, such things as server log processing; extract, transform, and load (ETL) jobs; and Kafka, the Apache Software Foundation's high-throughput publish-and-subscribe system. It could also be used with data warehouse workloads. It is priced at 4.5 cents per GB per month in AWS's Northern Virginia data centers, compared with 10 cents per GB per month for General Purpose SSD-based EBS storage.
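
A back-of-the-envelope comparison using the prices quoted above, for a hypothetical 1 TB volume:

    # Prices quoted above, per GB per month (Northern Virginia).
    PRICE_THROUGHPUT_OPTIMIZED_HDD = 0.045
    PRICE_GENERAL_PURPOSE_SSD = 0.10

    SIZE_GB = 1000  # hypothetical 1 TB volume

    print(f"Throughput Optimized HDD: ${SIZE_GB * PRICE_THROUGHPUT_OPTIMIZED_HDD:.2f}/month")  # $45.00
    print(f"General Purpose SSD:      ${SIZE_GB * PRICE_GENERAL_PURPOSE_SSD:.2f}/month")       # $100.00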

Cold HDD is cheaper still, at 2.5 cents per GB per month. It addresses the same use cases as Throughput Optimized HDD but is intended for data that is accessed less frequently, Barr wrote in another April 19 blog post.
Both options are defined by their throughput in MB per second. Throughput Optimized HDD delivers 250 MB per second per provisioned TB of data, scaling up to 500 MB per second at 2 TB. Similarly, Cold HDD delivers 80 MB per second per provisioned TB of data, building up to 250 MB per second at 3.1 TB of provisioned data.
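
A small sketch of that scaling, using the figures quoted above (the function and variable names are made up):

    def max_throughput_mb_s(size_tb, per_tb_mb_s, cap_mb_s):
        """Throughput grows linearly with provisioned size until it hits the cap."""
        return min(size_tb * per_tb_mb_s, cap_mb_s)

    # Throughput Optimized HDD: 250 MB/s per provisioned TB, capped at 500 MB/s.
    print(max_throughput_mb_s(1.0, 250, 500))   # 250.0
    print(max_throughput_mb_s(2.0, 250, 500))   # 500.0 -- cap reached at 2 TB

    # Cold HDD: 80 MB/s per provisioned TB, capped at 250 MB/s.
    print(max_throughput_mb_s(1.0, 80, 250))    # 80.0
    print(max_throughput_mb_s(3.2, 80, 250))    # 250.0 -- cap reached just past 3.1 TB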

Both offer a baseline performance level and the ability to call on a higher burst performance level for short periods. During periods of light use, a built-in "burst bucket" of credits accumulates that the system can later draw on, just as with the standard EBS volume types, Barr said in the blog post.

"We listened to these volumes offer an excellent price / performance when used for large workloads data. To achieve performance levels that are possible with the volumes, the application must perform and large sequential I / O operations, which is typical of big data workloads, "Barr explained in the blog.

That is due to the nature of the underlying disks, which can transfer contiguous data very quickly but fare less well when asked to carry out a large number of small, random-access I/O operations, such as those required by a relational database engine. Used for that purpose, Barr acknowledged, these storage options will be less efficient and will deliver lower throughput. General Purpose SSD-based EBS would be a much better fit, he said.
