AWS Key Points

  • AWS does not copy launch permissions, user-defined tags, or Amazon S3 bucket permissions from the source AMI to the new AMI.
  • You should generate a password for each user and give these passwords to your system administrators. Each user should then set up multi-factor authentication once they have been able to log in to the console. You cannot use the secret access key and access key ID to log in to the AWS console; rather, these credentials are used to call the Amazon APIs.
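
A minimal boto3 sketch of that distinction, with an illustrative user name and password; the login profile (password) is what permits console sign-in, while the access key pair only signs API calls:

```python
import boto3

iam = boto3.client("iam")

# Console access: a login profile (password), not an access key, is what
# allows a user to sign in to the AWS Management Console.
# "jdoe" and the password are illustrative values.
iam.create_user(UserName="jdoe")
iam.create_login_profile(
    UserName="jdoe",
    Password="Initial-Passw0rd!",
    PasswordResetRequired=True,  # force a change at first sign-in
)

# Programmatic access: the access key ID / secret access key pair is used
# to sign API calls and cannot be used to log in to the console.
keys = iam.create_access_key(UserName="jdoe")
print(keys["AccessKey"]["AccessKeyId"])
```

Once the user has signed in with the password, they can register their own MFA device from the console.
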
  • Network throughput is the obvious bottleneck. You are not told in this question whether the proxy server is in a public or private subnet. If it is in a public subnet, the proxy server instance size itself may not be large enough to cope with the current network throughput. If the proxy server is in a private subnet, then it must be using a NAT instance or NAT gateway to communicate out to the internet. If it is a NAT instance, this may also be inadequately provisioned in terms of size. You should therefore increase the size of the proxy server and/or the NAT solution.
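
If the proxy or NAT instance does turn out to be undersized, resizing it is a stop-and-modify operation. A rough boto3 sketch, with a placeholder instance ID and an assumed target instance type:

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder proxy/NAT instance ID

# An instance must be stopped before its type can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Move to a larger, more network-capable size (illustrative choice).
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "c5n.xlarge"},
)

ec2.start_instances(InstanceIds=[instance_id])
```
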
  • For all new AWS accounts, there is a soft limit of 20 EC2 instances per region. You should submit the limit increase form and retry the template after your limit has been increased.
  • Currently the S3 storage classes are: Standard, Standard-Infrequent Access, One Zone-Infrequent Access, Reduced Redundancy Storage and, for archive, Glacier and Glacier Deep Archive. Reduced Redundancy Storage is the only S3 class that does not offer 99.999999999% durability, and therefore any of the answers that contain Reduced Redundancy Storage cannot be correct.
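
The storage class is chosen per object at upload time. A small boto3 sketch, with the bucket name, key and class purely illustrative:

```python
import boto3

s3 = boto3.client("s3")

# "my-archive-bucket" is a placeholder. Only Reduced Redundancy Storage
# offers less than eleven nines of durability; the other classes differ
# mainly in price, availability and retrieval characteristics.
s3.put_object(
    Bucket="my-archive-bucket",
    Key="reports/2019/q1.csv",
    Body=b"col1,col2\n1,2\n",
    StorageClass="STANDARD_IA",  # or ONEZONE_IA, GLACIER, DEEP_ARCHIVE, ...
)
```
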
  • The valid ways of encrypting data on S3 are (a brief upload sketch follows the list):
    • Server-Side Encryption (SSE)-S3,
    • SSE-C,
    • SSE-KMS, or
    • client-side encryption with a library such as the Amazon S3 Encryption Client.
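
A brief boto3 sketch of the three server-side options on upload; the bucket, key and KMS alias are placeholders (client-side encryption with the Amazon S3 Encryption Client is not shown):

```python
import os

import boto3

s3 = boto3.client("s3")
bucket, key = "my-encrypted-bucket", "data/secret.txt"  # placeholders

# SSE-S3: S3 manages the encryption keys.
s3.put_object(Bucket=bucket, Key=key, Body=b"hello",
              ServerSideEncryption="AES256")

# SSE-KMS: encryption with a KMS key (the alias is illustrative).
s3.put_object(Bucket=bucket, Key=key, Body=b"hello",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-app-key")

# SSE-C: the caller supplies (and must retain) the encryption key.
customer_key = os.urandom(32)
s3.put_object(Bucket=bucket, Key=key, Body=b"hello",
              SSECustomerAlgorithm="AES256",
              SSECustomerKey=customer_key)
```
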
  • Both the Oracle and SQL Server database engines have limits on how many databases can run per instance. Primarily, this is due to the underlying technology being proprietary and requiring specific licensing to operate.
    • The database engines based on open-source technology such as Aurora, MySQL, MariaDB or PostgreSQL have no such limits. Further information: https://aws.amazon.com/rds/faqs/
  • Security Groups are stateful, and updates are applied immediately.
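
For example, adding an ingress rule with boto3 takes effect on every instance using the group as soon as the call returns (the group ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# The new rule applies immediately to all instances that use this group;
# no restart or redeployment is needed.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```
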
  • See the AWS documentation for the process by which federated users are granted access to the AWS console.
  • The question describes a situation where low-cost OneZone-IA would be perfect. However, it also says that there is a high licence cost with each meme generation. The storage saving between IA and OneZone-IA is about $0.0025, which is small compared to the $10 for licensing. Therefore you may well be better off paying for full S3-IA.
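
A rough back-of-the-envelope comparison, assuming the quoted $0.0025 saving is per GB per month and the $10 licence fee applies to every generated meme; the volumes are made up purely for illustration:

```python
# Rough cost comparison. Assumes the $0.0025 saving between Standard-IA
# and One Zone-IA is per GB per month, and each meme incurs a $10 licence
# fee. Object size and count are illustrative.
objects_per_month = 1000
avg_size_gb = 0.005  # ~5 MB per meme

licence_cost = objects_per_month * 10.00
storage_saving = objects_per_month * avg_size_gb * 0.0025

print(f"Licence cost per month:   ${licence_cost:,.2f}")
print(f"One Zone-IA saving/month: ${storage_saving:,.4f}")
# The saving is a tiny fraction of the licence cost, so the extra
# resilience of Standard-IA is usually worth keeping.
```
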
  • You cannot tag individual folders within an S3 bucket. If you create an individual user for each staff member, there will be no way to keep their Active Directory credentials in sync when they change their password. You should either create a federation proxy or identity provider and then use the AWS Security Token Service to create temporary tokens. You will then need to create the appropriate IAM role that the users will assume when writing to the S3 bucket. https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
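
A minimal sketch of the token-vending step, assuming a hypothetical role ARN and bucket name; the federation proxy would call STS after validating the user against Active Directory:

```python
import boto3

# The proxy requests short-lived credentials scoped to an IAM role.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/StaffS3WriterRole",  # placeholder
    RoleSessionName="jdoe",
    DurationSeconds=3600,
)["Credentials"]

# The temporary credentials are handed to the user/application, which can
# then write to the bucket without any long-lived IAM user keys.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.put_object(Bucket="staff-documents", Key="jdoe/report.docx", Body=b"...")
```
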
  • The aim is to direct sessions to the host that will provide the correct language. Geolocation is the best option because it is based on national borders.
    • Geoproximity routing is another option where the decision can be based on distance. While latency-based routing will usually direct the client to the correct host, connectivity issues with the US Regions might direct traffic to AP. In this case, the word “ensure” is operative: users MUST connect to the English-language site.
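
A sketch of a geolocation record in Route 53 via boto3; the hosted zone ID, record name and IP are placeholders, and a default ("*") record would normally accompany it so unmatched locations still resolve:

```python
import boto3

route53 = boto3.client("route53")

# Route US visitors to the English-language endpoint. All identifiers
# below are illustrative.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "us-english",
                "GeoLocation": {"CountryCode": "US"},
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```
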
  • Additional clones of your production environment, ElastiCache, and CloudFront can all help improve your site performance. Changing your autoscaling policies will not help improve performance times as it is much more likely that the performance issue is with the database back end rather than the front end. The Provisioned IOPS would also not help, as the bottleneck is with the memory, not the storage.
  • There are many features which are native to the KMS service. However, of the options in the question, only importing your own keys, disabling and re-enabling keys, and defining key management roles in IAM are valid. Importing keys into a custom key store and migrating keys from the default key store to a custom key store are not possible. Lastly, operating as a private, native HSM is a function of CloudHSM and is not possible directly within KMS. https://aws.amazon.com/kms/faqs/
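
A short boto3 sketch of those valid operations; the key ID and description are placeholders, and the import flow is only outlined:

```python
import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

# Disable and later re-enable an existing customer master key.
kms.disable_key(KeyId=key_id)
kms.enable_key(KeyId=key_id)

# "Bring your own key": create a CMK with no key material, then import
# your own material using the wrapping key and token returned here.
external_key = kms.create_key(Origin="EXTERNAL", Description="BYOK example")
params = kms.get_parameters_for_import(
    KeyId=external_key["KeyMetadata"]["KeyId"],
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)
# ...encrypt your key material with params["PublicKey"], then call
# kms.import_key_material(...) with the encrypted material and ImportToken.
```
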
  • The essence of a stateless installation is that the scalable components are disposable, and configuration is stored away from the disposable components. The best way to solve this type of problem is by elimination. Storage Gateway offers no advantage in this situation. CloudWatch is a reporting tool and will not help. An ELB will distribute load but is not really specific to stateless design. ElastiCache is well suited to very short, fast-cycle data and is very suitable for replacing in-memory or on-disk state data previously held on the web servers. RDS is well suited to structured, long-cycle data, and DynamoDB is well suited to unstructured, medium-cycle data. Both can be used for certain types of stateful data, either in partnership with or instead of ElastiCache.
  • An Elastic Load Balancer can help you deliver stateful services, but not stateless ones. Elastic MapReduce is a data-crunching service and is not related to servicing web traffic.
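
A minimal sketch of moving session state off the web servers and into ElastiCache for Redis, using redis-py; the endpoint and session data are illustrative:

```python
import json

import redis  # redis-py, talking to an ElastiCache Redis endpoint

# With session state held in ElastiCache, any web server behind the load
# balancer can serve any request, so individual servers become disposable.
cache = redis.Redis(host="my-sessions.abc123.0001.use1.cache.amazonaws.com",
                    port=6379)

def save_session(session_id: str, data: dict, ttl_seconds: int = 1800) -> None:
    # Store the session with an expiry so abandoned sessions clean themselves up.
    cache.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id: str) -> dict:
    raw = cache.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}

save_session("abc123", {"user": "jdoe", "cart": ["sku-42"]})
print(load_session("abc123"))
```
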
  • Consolidated Billing is a feature of AWS Organisations. Once enabled and configured, you will receive a bill containing the costs and charges for all of the AWS accounts within the Organisation. Although the individual AWS accounts are combined into a single bill, they can still be tracked individually and the cost data can be downloaded in a separate file. Using Consolidated Billing may ultimately reduce the amount you pay, as you may qualify for Volume Discounts. There is no charge for using Consolidated Billing.
  • DynamoDB makes use of parallel processing to achieve predictable performance. You can visualise each partition as an independent DB server of fixed size, each responsible for a defined block of data; in SQL terminology this is called sharding. The documentation is specific about the SSDs, but makes no mention of read replicas or EBS-optimised instances. Caching in front of DynamoDB is an option (DAX), but it is not inherent to DynamoDB.
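
A small boto3 sketch of how the partition key drives that sharding; table and attribute names are illustrative:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Items are hashed across partitions by the partition key ("UserId"),
# which is what gives DynamoDB its shard-like parallelism and
# predictable performance.
dynamodb.create_table(
    TableName="Orders",
    KeySchema=[
        {"AttributeName": "UserId", "KeyType": "HASH"},    # partition key
        {"AttributeName": "OrderId", "KeyType": "RANGE"},  # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "OrderId", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```
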
  • Spread placement groups have a specific limitation that you can only have a maximum of 7 running instances per Availability Zone and therefore this is the only correct option. Deploying instances in a single Availability Zone is unique to Cluster Placement Groups only and therefore is not correct. The last two remaining options are common to all placement group types and so are not specific to Spread Placement Groups. Spread Placement Groups are recommended for applications that have a small number of critical instances which need to be kept separate from each other. Launching instances in a Spread Placement Group reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware. Spread Placement Groups provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time. In this case, deploying the EC2 instances in a Spread Placement Group is the only correct option. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
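
A brief boto3 sketch of creating a spread placement group and launching into it; the group name, AMI and instance type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# A spread placement group supports at most 7 running instances per
# Availability Zone; instances land on distinct underlying hardware.
ec2.create_placement_group(GroupName="critical-spread", Strategy="spread")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.large",
    MinCount=3,
    MaxCount=3,
    Placement={"GroupName": "critical-spread"},
)
```
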
  • If you configure the auto-scaling group to maintain 50% of capacity per AZ, then if you lose any one AZ the remaining two will carry the full load between them. This does mean that you carry an extra cost, but if the Board has decided that this level of resiliency is needed, that will be the cost.
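
A quick worked example of that sizing rule; the instance counts are illustrative:

```python
import math

# Size each AZ so that losing any one AZ leaves enough capacity for the
# full load. With 3 AZs, that works out to 50% of the load per AZ.
required_instances = 8   # instances needed to carry peak load
azs = 3

per_az = math.ceil(required_instances / (azs - 1))
total = per_az * azs

print(f"Provision {per_az} instances per AZ, {total} in total "
      f"({total / required_instances:.0%} of the minimum requirement).")
```
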
