DevOps Engineer - Professional
2022 Questions With Well Explained
Answers
Your organization has a few million text documents in S3 that are stored in a
somewhat random manner, and the number of files is always growing. The
developer that initially wrote the system in use stored everything with a random file
name with some attempt at security through obscurity. Now your CEO and CFO both
need to be able to search the contents of these documents, and they want to be able
to do so quickly at a reasonable cost. What managed AWS services can assist with
implementing a solution for your CEO and CFO, and what would the setup process
involve? - Definition- Create a search domain
Implement Amazon CloudSearch
Set up access policies
Configure your index
CloudSearch by itself is enough to fulfill the requirements put forward here.
CloudSearch is managed, scalable, and very quick to configure and get online. By
comparison, it would take some time to set up EC2 and install Elasticsearch or any
other search tool, and that approach would be much more difficult to scale. The setup
involves creating a search domain, configuring the index as required, and then setting
up the access policies.
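Those three setup steps can be sketched with boto3; this is a minimal, hedged illustration, and the domain name "doc-search" and the index fields are assumptions for the example, not taken from the question:

```python
def build_index_fields():
    """Illustrative index fields for free-text document search."""
    return [
        {"IndexFieldName": "title", "IndexFieldType": "text"},
        {"IndexFieldName": "content", "IndexFieldType": "text"},
    ]

def create_search_domain(domain_name="doc-search"):
    import boto3  # deferred so the helper above runs without AWS libraries installed
    cs = boto3.client("cloudsearch")
    cs.create_domain(DomainName=domain_name)           # 1. create the search domain
    for field in build_index_fields():                 # 2. configure the index
        cs.define_index_field(DomainName=domain_name, IndexField=field)
    # 3. set up access policies via cs.update_service_access_policies(...)
```

Once the domain is active, documents are uploaded in batches and the CEO/CFO can query the search endpoint directly.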
Your organization is building millions of IoT devices that will track temperature and
humidity readings in offices around the world. The data is then used to make
automated decisions about ideal air conditioning settings based on that data, and
then trigger some API calls to control the units. Currently, the software to accept the
IoT data and make the decisions and API calls runs across a fleet of autoscaled EC2
instances. After just a few thousand IoT devices in production, you're noticing the
EC2 instances are beginning to struggle, and there are just too many being spun up by
your autoscaler. If this continues you're going to hit your account service limits and
costs will blow out well beyond what you budgeted for. How can you redesign this
service to be more cost effective, more efficient and most importantly, scalable? -
Definition- Switch to Kinesis Data Analytics. Stream the IoT data with Kinesis Data
Streams and perform your decision making and API triggering with Lambda. Shut
down your EC2 fleet.
*In this instance, Kinesis Data Analytics, with the data streamed via Kinesis Data
Streams, is the best choice. It's also completely serverless, so you are able to save
costs by shutting down your EC2 servers.
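The Lambda side of that design can be sketched as follows; this assumes each Kinesis record carries a JSON payload with hypothetical `device_id` and `temperature_c` fields, and the 24 °C threshold is an invented example value:

```python
import base64
import json

# Threshold above which we would tell the air-conditioning API to cool (assumed value).
COOL_ABOVE_C = 24.0

def lambda_handler(event, context):
    """Consume IoT readings from a Kinesis event and decide on actions."""
    decisions = []
    for record in event["Records"]:
        # Kinesis delivers record data to Lambda base64-encoded
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload["temperature_c"] > COOL_ABOVE_C:
            decisions.append({"device": payload["device_id"], "action": "cool"})
    # here each decision would trigger the air-conditioner control API call
    return decisions
```

Because Lambda scales per shard, there is no fleet to autoscale and no instance limits to hit.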
Your company has been producing Internet-enabled Microwave Ovens for two years.
These ovens are constantly sending streaming data back to an on-premises
endpoint behind which sit multiple Kafka servers ingesting this data. The latest
Microwave has sold more than expected and your Manager wants to move to
Kinesis Data Streams in AWS, in order to make use of its elastic capabilities. A small
team has deployed a proof-of-concept system but is finding throughput lower than
expected, and they have asked for your advice on how they could put data onto the
streams more quickly. What information can you give to the team to improve write
performance? - Definition- Develop code using the Kinesis Producer Library to put
data onto the streams.
Check the GetShardIterator, CreateStream and DescribeStream Service Limits.
Use a Small Producer with the Kinesis Producer Library, but using the PutRecords
operation.
*You should always use the Kinesis Producer Library (KPL) when writing code for
Kinesis where possible, due to its performance benefits, built-in monitoring and
asynchronous operation. You should choose any answers which include the KPL as
a solution. You should also check all relevant service limits to ensure no throttling is
occurring. Only three of these options are shown in the question, but there are many
more possibilities. The remaining options will work, but will generally give slower
performance.
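The KPL itself is a Java library, but the PutRecords path mentioned above can be sketched in Python with boto3. The 500-record cap is the documented PutRecords per-call maximum; the stream name and the (partition_key, data) record shape are assumptions for the example:

```python
def batch_records(records, max_batch=500):
    """Split records into PutRecords-sized batches (max 500 records per call)."""
    return [records[i:i + max_batch] for i in range(0, len(records), max_batch)]

def put_batches(stream_name, records):
    """records: a list of (partition_key, data_bytes) pairs."""
    import boto3  # deferred so batch_records works without AWS libraries installed
    kinesis = boto3.client("kinesis")
    for batch in batch_records(records):
        kinesis.put_records(
            StreamName=stream_name,
            Records=[{"Data": data, "PartitionKey": key} for key, data in batch],
        )
```

Batching like this amortizes the per-request overhead that makes single PutRecord calls slow.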
Your team has been planning to move functionality from an on-premises solution into
AWS. New microservices will be defined using CloudFormation templates, but much
of the legacy infrastructure is already defined in Puppet Manifests and they do not
want this effort to go to waste. They have decided to deploy the Puppet-based
elements using AWS OpsWorks but have received the following errors during
configuration: "Not authorized to perform sts:AssumeRole" and also "The following
resource(s) failed to create [EC2Instance]". The team have been unable to resolve
these issues and have asked for your help. Identify the reasons why these errors
occur from the options below. - Definition- Ensure that the
'AWSOpsWorksCMServiceRole' policy is attached to the instance profile role.
Ensure that the AWS OpsWorks agent is running on the EC2 instance, and that the
instance has outbound Internet access and DNS resolution enabled.
*There are two answers which would resolve the errors in the question. Any time a
"not authorized" message is displayed, it is nearly always a permissions problem,
and in this case it can be resolved by attaching the AWSOpsWorksCMServiceRole policy
to the instance profile role for EC2. opsworks-cm.amazonaws.com should also be
listed in the Trust Relationships. For the second error, the message normally indicates
that the EC2 instance doesn't have sufficient network access, so we need to ensure
that the instance has outbound Internet access, and that the VPC has a single
subnet with DNS resolution and Auto-assign Public IP enabled. All other
options will not resolve the errors.
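The trust relationship referenced above would typically follow the standard service trust policy format, with OpsWorks CM as the trusted principal:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "opsworks-cm.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```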
Your company runs a popular website for selling cars and its userbase is growing
quickly. It's currently sitting on on-premises hardware (IIS web servers and a SQL
Server backend). Your managers would like to make the final push into the cloud.
AWS has been chosen, and you need to make use of services that will scale well
into the future. Your site is tracking all ad clicks that your customers purchase to sell
their cars. The ad impressions must then be consumed by the internal billing system
and then be pushed to an Amazon Redshift data warehouse for analysis. Which
AWS services will help you get your website up and running in the cloud, and will
assist with the consumption and aggregation of data once you go live? - Definition-
Build the website to run in stateless EC2 instances which autoscale with traffic, and
migrate your databases into Amazon RDS. Push ad/referrer data using Amazon
Kinesis Data Firehose to S3 where it can be consumed by the internal billing system
to determine referral fees. Additionally create another Kinesis delivery stream to
push the data to Amazon Redshift warehouse for analysis.
* Amazon Kinesis Data Firehose is used to reliably load streaming data into data
lakes, data stores and analytics tools like Amazon Redshift. Process the incoming
data from Firehose with Kinesis Data Analytics in order to provide real-time
dashboarding of website activity.
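Pushing a single ad-click event into the delivery stream can be sketched with boto3; the event fields and the stream name are illustrative assumptions, and newline-delimited JSON is the usual convention for Firehose deliveries bound for S3 and Redshift:

```python
import json

def format_click_event(ad_id, referrer):
    """Serialize an ad-click event as newline-delimited JSON for Firehose."""
    return (json.dumps({"ad_id": ad_id, "referrer": referrer}) + "\n").encode()

def send_click(stream_name, ad_id, referrer):
    import boto3  # deferred so the formatter above runs without AWS libraries installed
    boto3.client("firehose").put_record(
        DeliveryStreamName=stream_name,
        Record={"Data": format_click_event(ad_id, referrer)},
    )
```

One delivery stream would target S3 for the billing system, and a second would target Redshift via its COPY integration.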
You have decided to install the AWS Systems Manager agent on both your on-premises servers and your EC2 servers. This means you will be able to conveniently
centralize your auditing, access control and provide a consistent and secure way to
remotely manage your hybrid workloads. This also results in all your servers
appearing in your EC2 console, not just the servers hosted on EC2. How are you
able to tell them apart in the console? - Definition- The IDs of hybrid instances are
prefixed with 'mi-'. The IDs of EC2 instances are prefixed with 'i-'.
*Hybrid instances with the Systems Manager agent installed and registered to your
AWS account will appear with the 'mi-' prefix in EC2.
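That prefix convention is simple enough to express as a small helper, shown here just to make the distinction concrete:

```python
def instance_source(instance_id):
    """Classify a Systems Manager managed instance by its ID prefix."""
    if instance_id.startswith("mi-"):
        return "hybrid (on-premises)"
    if instance_id.startswith("i-"):
        return "ec2"
    return "unknown"
```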
You currently host a website from an EC2 instance. Your website is entirely static
content. It contains images, static html files and some video files. You would like to
make the site as fast as possible for everyone around the world in the most cost
effective way. Which solution meets these requirements? - Definition- Move the
website into an S3 bucket and serve it through Amazon CloudFront.
*S3 and CloudFront is the most cost-effective solution that ensures the fastest content
delivery around the world, and it also removes all ongoing EC2 costs.
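Enabling static website hosting on the bucket can be sketched with boto3; the index and error document names below are the common defaults, assumed for the example:

```python
def website_configuration(index="index.html", error="error.html"):
    """Build the S3 static-website configuration document."""
    return {
        "IndexDocument": {"Suffix": index},
        "ErrorDocument": {"Key": error},
    }

def enable_static_website(bucket):
    import boto3  # deferred so the helper above runs without AWS libraries installed
    boto3.client("s3").put_bucket_website(
        Bucket=bucket, WebsiteConfiguration=website_configuration()
    )
    # CloudFront is then pointed at the bucket as its origin so edge locations
    # cache the images, HTML and video files close to viewers worldwide.
```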
Your organisation is 75% through moving its core services from a Data centre and
into AWS. The AWS stacks have been working well in their new environment but you
have been told that the Data centre contract will expire in 3 months and therefore
there is not enough time to re-implement the remaining 25% of services before this
date. As they are already managed by Chef, you decide to move them into AWS and
manage them using OpsWorks for Chef. However, when configuring OpsWorks you
have noticed that the following errors have appeared: "Not Authorized to perform
sts:AssumeRole" and "FATAL Could not find pivotal in users or clients!". Choose the
correct options to resolve the errors. - Definition- Create a new service role and
attach the AWSOpsWorksCMServiceRole policy to the role. Verify that the service
role is associated with the Chef server and it has that policy attached.
Install the knife-opc plugin and then run the command: knife opc org user add
default pivotal
*With the "Not Authorized to perform sts:AssumeRole" error, you can assume it's
policy- or role-related, and therefore creating a role and attaching the
AWSOpsWorksCMServiceRole policy should resolve the issue. Finally, any
message which states that a 'pivotal' user cannot be found requires you to add one
to the default location. All other answers will not resolve the problems listed.
Your organisation has dozens of AWS accounts owned and run by different teams
and paying for their own usage directly in their account. In a recent cost review it was
noticed that your teams are all using on-demand instances. Your CTO wants to take
advantage of any pricing benefits available to the business from AWS. Another issue
that keeps arising involves authentication. It's difficult for your developers to use and
maintain their logins across all of their accounts, and it's also difficult for you to control what
they have access to. What's the simplest solution which will solve both issues? -
Definition- Use AWS Organizations to keep your accounts linked and billing
consolidated. Create a Billing account for the Organization and invite all other team
accounts into the Organization in order to use Consolidated Billing. You can then
obtain volume discounts for your aggregated EC2 and RDS usage. Use AWS Single
Sign-On to allow developers to sign in to AWS accounts with their existing corporate
credentials and access all of their assigned AWS accounts and applications from
one place.
* AWS Single Sign-On allows you to centrally manage all of your AWS accounts
managed through AWS Organizations, and it will also allow you to control access
permissions based on common job functions and security requirements.
You currently work for a local government department which has cameras installed
at all intersections with traffic lights around the city. The aim is to monitor traffic,
reduce congestion if possible and detect any traffic accidents. There will be some
effort required to meet these requirements, as lots of video feeds will have to be
monitored. You're thinking about implementing an application that will use Amazon
Rekognition Video, which is a Deep learning video analysis service, to meet this
monitoring requirement. However before you begin looking into Rekognition, which
other AWS service is a key component of this application? - Definition- Amazon
Kinesis Video Streams
*Amazon Kinesis Video Streams makes it easy to capture, process and store video
streams which can then be used with Amazon Rekognition Video.
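Wiring the two services together can be sketched with boto3: Rekognition Video reads frames from a Kinesis video stream and writes analysis results to a Kinesis data stream. The ARNs, processor name and settings are placeholders for illustration:

```python
def stream_processor_spec(video_stream_arn, data_stream_arn, role_arn):
    """Input/output wiring for a Rekognition Video stream processor."""
    return {
        "Name": "traffic-monitor",  # hypothetical processor name
        "Input": {"KinesisVideoStream": {"Arn": video_stream_arn}},
        "Output": {"KinesisDataStream": {"Arn": data_stream_arn}},
        "RoleArn": role_arn,
    }

def create_processor(spec, settings):
    import boto3  # deferred so the helper above runs without AWS libraries installed
    boto3.client("rekognition").create_stream_processor(Settings=settings, **spec)
```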
You're developing a Node.js application that uses the AWS Node.js API. You interact
with lots of different AWS services, and would like to determine how long your API
requests are taking in an effort to make your application as efficient as possible. It
would also be useful to detect any issues that may be arising and give you an idea
about how to fix them. Which AWS service can assist in this task and how would you
go about achieving it? - Definition- Use AWS X-Ray, inspect the service map, trace
the request path, determine the bottlenecks.
*AWS X-Ray will produce an end-to-end view of requests made from your application,
where you can analyze the requests as they pass through your application.
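The question targets the Node.js SDK (where instrumenting the AWS clients is done with AWSXRay.captureAWS); the same pattern in the Python X-Ray SDK looks like this sketch, with the service name as a placeholder, plus a small helper illustrating the "determine the bottlenecks" step on timing data pulled from a trace:

```python
def enable_tracing(service_name="my-app"):
    """Patch AWS clients so each SDK call is recorded as a timed subsegment."""
    # aws_xray_sdk is the official Python X-Ray SDK; imported lazily so this
    # module still loads where the SDK is not installed.
    from aws_xray_sdk.core import xray_recorder, patch_all
    xray_recorder.configure(service=service_name)
    patch_all()  # instruments boto3, requests, etc.

def find_bottleneck(subsegments):
    """Given (service_name, duration_seconds) pairs from a trace, return the slowest call."""
    return max(subsegments, key=lambda s: s[1])
```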
Your chief security officer would like some assistance in producing a graph of failed
logins to your Linux servers, located in your own data centers and EC2. The graph
would then be used to trigger alert emails to investigate the failed attempts once it
crosses a threshold. What would be your suggested method of producing this graph
in the easiest way? - Definition- Install the CloudWatch Logs agent on all Linux
servers, stream your logs to CloudWatch Logs and create a CloudWatch Logs Metric
Filter. Create the graph on your Dashboard using the metric.
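The metric filter step can be sketched with boto3; the filter pattern matching sshd "Failed password" lines, the log group name and the metric namespace are assumptions for the example:

```python
def metric_filter_args(log_group):
    """Arguments for a metric filter counting failed SSH logins."""
    return {
        "logGroupName": log_group,
        "filterName": "FailedLogins",
        "filterPattern": '"Failed password"',  # matches typical sshd failure lines
        "metricTransformations": [{
            "metricName": "FailedLoginCount",
            "metricNamespace": "Security",  # hypothetical namespace
            "metricValue": "1",             # each matching line counts as 1
        }],
    }

def create_filter(log_group):
    import boto3  # deferred so the helper above runs without AWS libraries installed
    boto3.client("logs").put_metric_filter(**metric_filter_args(log_group))
```

A CloudWatch alarm on FailedLoginCount then sends the alert emails (via SNS) once the threshold is crossed.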