
Sunday, September 29, 2019

Do Oracle's Claims About AWS Pass Scrutiny?

It was a big week for enterprise IT events. In addition to Pure Storage's Accelerate event, Oracle held its annual customer and partner conference, OpenWorld 2019. I attended Pure's event in my hometown of Austin and had analyst Mark Vena attend the Oracle event in San Francisco.

I was able to watch the OpenWorld 2019 keynotes and followed along on Twitter, and wow, was it fiery! Oracle referenced AWS more than I have ever seen a large company talk about a competitor. I had some press, and even other analysts, ask me about several of Oracle's claims related to Amazon's AWS. I wanted to dig beneath them here and weigh them against my own read. Net-net, I don't believe Oracle made its case against AWS.

Background

Oracle and AWS have very different business models. AWS is a pure cloud vendor, primarily in IaaS and PaaS with some hybrid offerings like Snowball and Outposts, while Oracle is fundamentally an on-prem database and applications vendor with some SaaS and IaaS offerings. That doesn't keep the two from colliding at many, many customers. So let's dive in.

For each claim, I summarize it, directly quote Oracle Chairman and CTO Larry Ellison from the keynote, outline what I think Ellison is saying, and then give my take.

1/ Oracle Claim: Autonomous systems eliminate human labor, and when you eliminate human labor you eliminate human error.

Ellison's quote: "Autonomous systems eliminate human labor, and when you eliminate human labor you eliminate pilot error. If you eliminate human error with autonomous systems, you eliminate data theft. Clouds are complicated. People make mistakes. The Amazon data breach, where Capital One had 100 million of their customers lose their personal information, happened because somebody made a mistake. Somebody made a configuration error. Now, Amazon takes what I think is a pretty reasonable position, saying, hey, you misconfigured the system. That's your mistake. We at Amazon can't be responsible. In the Oracle Autonomous Cloud, when you use the Oracle Autonomous Database, it configures itself. It's not possible for customers to make configuration errors because there are no pilots to make errors. The system configures itself. So in the AWS cloud, if you make an error and it leads to catastrophic data loss, it's on you. In the Oracle Cloud, when you use the autonomous database, the database automatically provisions itself. The system automatically configures itself. It automatically encrypts itself. It automatically backs itself up. All the security systems are automatic. Humans aren't involved. There can be no human error." You can find this at 4:20 in the session here.

Pat's Summary: The premise here was that if you get rid of human error with an autonomous system, you eliminate data theft. Capital One was used as the example, where 100M people were impacted by a hacker exploiting a misconfigured third-party Web Application Firewall, a human error. Somebody made a mistake, and Amazon doesn't accept responsibility for customer configuration errors. According to Oracle, the solution to this is Oracle's Autonomous Cloud services, which configure themselves automatically, so Oracle customers are not able to make configuration errors.

Pat's Take: I believe it's impossible for any cloud provider, including Oracle, Microsoft Azure, Google Cloud, or IBM Cloud, to magically avoid "configuration errors," because the very same action by one customer can be completely intentional and necessary and, by another customer, a configuration error. One person's open bucket is another's closed bucket. Everybody's situation is different; you really can't say for certain that an open security group or even an open proxy is an error. I am looking forward to learning more about Oracle's autonomous database and Linux, as the claim is intriguing. Most interesting for me would be an enterprise customer saying it has had no issues whatsoever with the Autonomous Database after a year of use.
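
To make the point concrete, here is a minimal sketch of my own (not Oracle's or AWS's tooling; the bucket names are hypothetical) of the kind of scan a cloud provider can run: it can tell you whether an S3 bucket allows public access, but it cannot tell you whether that state is a deliberate choice or a mistake.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_allows_public_access(bucket_name):
    """Return True if the bucket has no public access block, or an incomplete one."""
    try:
        cfg = s3.get_public_access_block(Bucket=bucket_name)["PublicAccessBlockConfiguration"]
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return True  # nothing blocks public ACLs or policies on this bucket
        raise
    return not all(cfg.values())  # any flag left False leaves a public path open

# The scan can flag state, but not intent: "my-public-website-assets" may be
# deliberately public, while "my-customer-exports" being public would be a breach.
for name in ["my-public-website-assets", "my-customer-exports"]:  # hypothetical buckets
    print(name, "public access possible:", bucket_allows_public_access(name))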

2/ Oracle Claim: A single multi-purpose database is better than several single-purpose specialized databases.

Ellison's statement: "This is only the start of a different design system. The one at Oracle where we state we are going to continue including highlights and information types and application types to the Oracle database, a solitary database, a solitary joined database that handles every one of your information types and every one of your applications versus Amazon saying that when something new comes up like the web of things, we'll give you a genuine quick IoT database. We have every one of the abilities in a single database. Amazon has a different database for the majority of one or the other makes a lot of issues. Every database has a section of your information. You must have specialists to keep up these databases." You can discover this at 30:40 in the session here.

Pat's Summary: The premise here is that many unique and specialized databases create problems, and that each database has different APIs, security models, recovery procedures, and scalability approaches. Each single-purpose database has different operational characteristics that require a different team with unique skills. Each database holds a fraction of the customer's data. Oracle offers a single converged database that supports multiple data types, like relational, document, spatial, and graph, and application types such as transactions, analytics, ML, and IoT.

Pat's Take: I believe an approach of using one relational database as the single home for all your applications is an outdated way of thinking. Hasn't this been the idea since the '90s? When has a one-size-fits-all approach ever worked well in tech in the last 10 years? A lot has changed since then. With a fully managed cloud database service, developers work with APIs and really don't care what is running in the background as long as it offers performance, security, and reliability at the right price point. Purpose-built, managed databases let developers break complex applications into smaller pieces and use the best tool for each problem, whether that is a hammer, screwdriver, or saw. AWS can roll out many customer examples like Airbnb, which uses DynamoDB for quick lookups and personalized search, ElastiCache for faster (sub-ms) site rendering, and Amazon Aurora as its primary transactional database. I will be closely watching Oracle's Swiss Army knife database and, if it can deliver on the promise, will give it credit.
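
As one illustration of the purpose-built pattern described above, here is a minimal sketch (the table and key names are hypothetical) of the kind of single-digit-millisecond key-value lookup a developer gets from DynamoDB without caring what servers sit behind the API:

import boto3

dynamodb = boto3.resource("dynamodb")
listings = dynamodb.Table("listings")  # hypothetical table with "listing_id" as its partition key

def get_listing(listing_id):
    """Point read by primary key; typically single-digit milliseconds."""
    response = listings.get_item(Key={"listing_id": listing_id})
    return response.get("Item")  # None if the key does not exist

print(get_listing("listing-1234"))  # hypothetical key value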

3/ Oracle Claim: Oracle can cut your AWS bill in half.

Ellison's statement: "It costs route less to run Oracle Autonomous Database than to run Redshift, Aurora, or any Amazon database. All things considered, the Oracle Autonomous Database kills human blunders, however it's arranged so that the system can fizzle, and the framework continues running, that a server can come up short, and the framework continues running. It's a deficiency tolerant framework. That is the reason we're in any event multiple times more solid than Amazon. I figure I may transform it one year from now to multiple times. Prophet Autonomous database is a whole lot quicker than Redshift. Presently, we demonstrated really, the Oracle Autonomous Database being seven, eight times quicker than Redshift when you are doing investigation. Aurora is their best value-based database. We were, again around eight or multiple times quicker. They're 7x more slow. That implies they're 7x increasingly costly. That is the reason it's so natural for us to ensure. You take any application off an Amazon database, move it to Oracle we'll ensure bringing your Amazon charge, we'll ensure that bill will go into equal parts." You can discover this at 19:30 in the session here.

Pat's Summary: The claim says it costs significantly less to run Oracle Autonomous Database than to run Redshift, Aurora, or any Amazon database, and that Oracle is 25X more reliable than Amazon, possibly 100X next year. Amazon is 7X slower, equating to 7X more cost. Oracle doubles down and says customers can bring Oracle their Amazon contract, and Oracle will guarantee the bill is cut in half if the customer moves to Oracle.

Pat's Take: Ellison is notorious in the industry for making audacious claims. So it was important to look at the fine print, which says the claim applies to database and data warehouse only. The cost claims don't cover other services, including compute, storage, or any of the hundreds of other AWS services. The head-scratcher for me is that AWS databases like Amazon Aurora can be 10% of the cost of Oracle databases, and AWS says it has cut prices many times since it launched in 2006. The other thing I only came to appreciate in the past year about AWS is that it tries hard to build trust with customers through "downshifting," or recommending to the customer how to lower its costs. AWS Trusted Advisor looks at how a customer is using services and makes recommendations on how to spend differently. I think Oracle would be best served by having large enterprises give testimonials about cutting their cloud bills in half by moving from AWS to Oracle.
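
For readers who haven't used it, here is a minimal sketch of pulling Trusted Advisor's cost-optimization recommendations programmatically, assuming an account on a Business or Enterprise support plan (which the Support API requires):

import boto3

# The AWS Support API, which fronts Trusted Advisor, is served from the us-east-1 endpoint.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
cost_checks = [c for c in checks if c["category"] == "cost_optimizing"]

for check in cost_checks:
    result = support.describe_trusted_advisor_check_result(checkId=check["id"])["result"]
    print(f'{check["name"]}: {result["status"]}')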

4/ Oracle Claim: Oracle is the only cloud that offers secure data isolation.

Ellison's statement: "The various mists have a common a mutual Intel PC. Who offers it? Indeed, Amazon has code in that PC and you have code in that PC. You may be the main occupant in that PC however you share that PC with Amazon. Amazon additionally has code in there. That is not how our own work. For our situation, you're the main inhabitant and our system control code is in a different PC with independent memory and that structures these safe confinement zones. Dangers can't get into (our) cloud. Gen 1 cloud, one shared Intel PC. Amazon can see your information and you can see Amazon's code. Both downright terrible thoughts. Try not to have the option to access cloud control code." You can discover this at 27:30 in the session here.

Pat's Summary: The premise here is that the other clouds share an Intel-based server, and that both the customer and Amazon have code on that server, even when it's single-tenant. According to Oracle, on its cloud the customer is the only one with access, and the cloud control code sits on a separate server with separate memory, which creates "secure isolation zones." With this, threats can't get into the cloud. It goes on to say that AWS can see customer data and the customer can see AWS's code, both of which are bad ideas. Customers shouldn't have access to cloud control code, and AWS shouldn't have access to data.

Pat's Take: Ellison is likely alluding to the fact that in AWS's older architecture, when it used the Xen hypervisor, AWS had system code running on the main system. Oracle's first-gen cloud worked like this too. This is theoretically more vulnerable than when virtualization code runs off the main system, as AWS does in its newer Nitro architecture and Oracle does in its second-generation cloud. I don't believe there's anything here.

5/ Oracle Claim: AWS cloud databases are not serverless or elastic.

Ellison's statement: "A great many people don't utilize DynamoDB. A great many people use Aurora, Redshift, RDS and a lot of the others. None of those are serverless and none of those are flexible. You need to scale up? Bring the framework down. The framework isn't running? Despite everything you need to pay for it. No servers are running? Really awful. You need to pick a shape. 10 centers and what happens when the application quits running? You pay for it. AWS Redshift not serverless. Amazon you need to scale up or down? That is vacation. Amazon, you fix? More personal time. With respect to of Oracle: We're discussing fundamental register. Fundamental stockpiling. Serverless when not running. Progressively scale itself up in various centers and measure of memory while it is running. That is what we're doing. No personal time. Scales up while it's running. Shouldn't something be said about capacity? Pick your beginning measure of capacity. As you need more stockpiling it will naturally scale up while it is running. No vacation." You can discover this at 43:40 in the session here.

Pat's Summary: AWS DynamoDB is a serverless but functionally limited database that few customers use, with most choosing instead to use Aurora or Redshift, which are neither serverless nor elastic. Therefore, with Aurora or Redshift, you have to shut down the database to manually scale up or down, and you pay for larger configurations than you need.

Pat's Take: Three AWS databases (Amazon Aurora, Amazon DynamoDB, Amazon Neptune) are serverless and elastic. AWS says that more than a hundred thousand customers use DynamoDB, including Lyft, Airbnb, Samsung, Toyota, and Capital One, to support mission-critical workloads. Amazon Aurora is serverless and elastic, offering features such as read replicas, a serverless mode, and global databases for single-region and cross-region failover. AWS says instance failover typically takes under 30 seconds. I'd like to see some kind of Oracle and AWS cloud performance and reliability "shoot-out."
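
As a concrete counterpoint to the "pick a shape and pay for it" characterization, here is a minimal sketch (the identifiers and credentials are hypothetical placeholders) of creating an Aurora Serverless v1 cluster, which scales capacity up and down within the limits you set and can pause compute entirely when idle:

import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless-cluster",  # hypothetical name
    Engine="aurora",                                # MySQL-compatible Aurora
    EngineMode="serverless",
    MasterUsername="admin",                         # hypothetical credentials
    MasterUserPassword="change-me-please",
    ScalingConfiguration={
        "MinCapacity": 1,              # Aurora Capacity Units, adjusted automatically
        "MaxCapacity": 8,
        "AutoPause": True,             # pause compute entirely after an idle period
        "SecondsUntilAutoPause": 300,
    },
)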

6/ Oracle Claim: Oracle has a larger footprint than AWS.

Ellison's statement: "We have 16 hyperscale locales around the globe today. All the Oracle areas run all the Oracle administrations. Every one of (our) administrations are accessible in the majority of the mists and that is our strategy. Amazon doesn't do that. Amazon has a few administrations some spot, a few administrations somewhere else... When we meet one year from now, we'll have a larger number of districts than AWS." You can discover this at 57:25 in the session here.

Pat's Summary: The argument is that Oracle's policy is to make services available in all regions, which Amazon doesn't do, as it offers some services only in certain regions. By next year, Oracle said, it will have more regions (36) than AWS (25). It goes on to say that enterprise customers worldwide require geographically distributed regions for true business continuity, disaster protection, and regional compliance requirements, and that multiple availability domains inside a region won't address this need.

Pat's Take: I believe Oracle is comparing apples to oranges here because of the differing definitions of a "Region." AWS has 69 Availability Zones (AZs) in 22 Regions, and unlike Oracle, every AZ has a datacenter. I believe AWS's AZ architecture is unique in that it provides resiliency (for scaling and disasters) at a scale that Oracle doesn't have. Also of note is that Oracle included Azure and Azure datacenters under construction in its comparison, kind of like if AWS were to partner with GCP and add GCP's capacity to its own figures. Oracle isn't even in the top 10 of IaaS players, where AWS is ranked #1.
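
Anyone can check the Region/AZ distinction for themselves; here is a minimal sketch that enumerates the Regions visible to an account and counts the Availability Zones inside each one:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

total_azs = 0
for region in regions:
    azs = boto3.client("ec2", region_name=region).describe_availability_zones()["AvailabilityZones"]
    total_azs += len(azs)
    print(f"{region}: {len(azs)} AZs")

print(f"{len(regions)} Regions, {total_azs} Availability Zones visible to this account")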

Wrapping up

Oracle spent an incredible amount of time talking about AWS in its OpenWorld 2019 keynotes. I don't believe Oracle made the case for its audacious claims against AWS, but I encourage you to watch the Oracle keynote here and the AWS re:Invent 2018 keynote here.
