Monday, April 9, 2018

Amazon Brings Machine Learning Smarts To Edge Computing Through AWS Greengrass

AWS Greengrass, the edge computing platform from AWS, got a facelift in the form of machine learning inference support. The latest version (v1.5.0) can run Apache MXNet and TensorFlow Lite models locally on edge devices based on NVIDIA Jetson TX2 and Intel Atom architectures.

Machine learning inference is a top use case for edge computing. Because edge gateways are expected to operate with only intermittent connectivity to the cloud, they need to serve machine learning models locally, even when fully offline. Combined with industrial IoT, ML inference makes deployments more valuable through predictive maintenance and analytics.

Amazon has been investing in all three key areas - IoT, edge computing, and machine learning. AWS IoT is a mature connected-devices platform that delivers scalable M2M, bulk device onboarding, digital twins, and analytics, along with tight integration with AWS Lambda for dynamic rules. AWS Greengrass extends AWS IoT to the edge by delivering local M2M, rules engine, and routing capabilities. The most recent addition, Amazon SageMaker, brought a scalable machine learning service to AWS. Customers can use it to build and train models based on popular algorithms.
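The SageMaker side of this workflow can be sketched as a `CreateTrainingJob` request built with plain Python. This is a minimal sketch, not a definitive implementation: the job name, bucket URIs, IAM role ARN, and algorithm image URI below are hypothetical placeholders you would replace with your own values.

```python
# Sketch of launching a SageMaker training job with a built-in
# algorithm. All names, ARNs, and URIs are hypothetical placeholders;
# the request shape follows SageMaker's CreateTrainingJob API.

def build_training_job_request(job_name, role_arn, image_uri,
                               input_s3_uri, output_s3_uri):
    """Assemble a CreateTrainingJob request for a built-in algorithm."""
    return {
        "TrainingJobName": job_name,
        "RoleArn": role_arn,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,    # built-in algorithm container
            "TrainingInputMode": "File",
        },
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": input_s3_uri,  # training data uploaded to S3
                }
            },
        }],
        # The trained model artifact lands here as a compressed archive.
        "OutputDataConfig": {"S3OutputPath": output_s3_uri},
        "ResourceConfig": {
            "InstanceType": "ml.m4.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

request = build_training_job_request(
    "demo-training-job",
    "arn:aws:iam::123456789012:role/SageMakerRole",
    "123456789012.dkr.ecr.us-west-2.amazonaws.com/kmeans:1",
    "s3://my-training-bucket/data/",
    "s3://my-model-bucket/output/",
)
# A real run would then submit it via boto3:
#   boto3.client("sagemaker").create_training_job(**request)
```

The request is deliberately built as a plain dict so it can be inspected or templated before any AWS call is made.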

Amazon has done a great job of integrating AWS IoT, AWS Greengrass and Amazon SageMaker to deliver end-to-end machine learning support at the edge.

Customers upload training data to Amazon S3 and point Amazon SageMaker at it. They can choose one of SageMaker's built-in algorithms to train a model, whose artifact is written to another Amazon S3 bucket as a compressed archive. That archive is deployed to the device, where an AWS Lambda Python function loads the model and invokes it at runtime. It is also possible to point Greengrass directly at a pre-trained SageMaker model.
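The device-side half of that flow can be sketched as a Lambda function that unpacks the model archive once and reuses it across invocations. This is a hedged sketch: the local archive path, the `run_inference` helper, and the MXNet loading call are hypothetical stand-ins for whatever the deployed model requires.

```python
# Minimal sketch of the device-side Lambda function. The archive path
# and the commented framework-loading code are hypothetical; in a real
# Greengrass deployment the model is attached to the Lambda as a local
# machine-learning resource.
import os
import tarfile

def extract_model(archive_path, dest_dir):
    """Unpack the compressed model artifact produced by SageMaker."""
    os.makedirs(dest_dir, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest_dir)
    return sorted(os.listdir(dest_dir))

# model = None  # loaded once, reused across warm invocations
#
# def lambda_handler(event, context):
#     global model
#     if model is None:
#         extract_model("/greengrass-machine-learning/model.tar.gz",
#                       "/tmp/model")
#         # e.g. with Apache MXNet (hypothetical):
#         # model = mx.mod.Module.load("/tmp/model/model", 0)
#     return run_inference(model, event)  # hypothetical helper
```

Loading the model lazily and caching it in a module-level variable avoids re-reading the archive on every invocation of a long-lived Greengrass Lambda.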

Developers can use a Raspberry Pi for local development and testing. For production scenarios, NVIDIA Jetson TX2 and Intel Atom are the recommended platforms. Amazon is also providing pre-built machine learning libraries based on Apache MXNet and TensorFlow models that can be deployed on Greengrass.
