Originally posted here on 10/22/22
As of today, 10/22/2022, I have officially completed Forrest Brazeal's Cloud Resume Challenge. For those who aren't familiar with the requirements, check here.
For a quick peek at what I ended up building:
To view the finished site, click here
If you're more interested in the code, the GitHub repo is available here
Overall, I thought it was a great challenge that asks you to get down and dirty with every aspect of modern cloud infrastructure and web development, with the understanding that you don't have to go deep into the portions that align less with your current skillset or interests - just deep enough to create a fully-realized project. It challenges you to build a project beginning-to-end against requirements specific enough to give you a little guidance and keep you from getting lost, while remaining broad enough to force you to find creative solutions to problems.
Overall, I think this challenge broke down neatly into 5 main sections:
The Show Before the Show (AWS Organizations/Orgformation)
The Front End (HTML/CSS style sheet, S3 website, Cloudfront Distribution, Route53 domain routing)
The Back End (DynamoDB database/Lambda function, API gateway stack, Javascript integration with the front end)
The Fun Part (IaC via Terraform, CI/CD pipeline in Github actions, Cypress end-to-end testing)
The Victory Lap (AWS CCP exam)
The Show Before the Show (Relevant Technologies: AWS Organizations, org-formation)
Before you can start the Cloud Resume Challenge, you need to build a solid foundation. While you can create a new AWS account and be ready to start creating new resources in minutes, it's very strongly advised to configure SSO on your account and create users that will manage the resources in question.
To get my SSO solution up and running, I chose to use org-formation. Org-formation allows you to programmatically create new AWS accounts via a YAML manifest stored in AWS CodeCommit; committing a change fires off a deployment pipeline that creates or rearranges your accounts. This means you can stand up a fresh AWS account in your AWS Organization in seconds, ready to begin a new project.
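As a rough illustration, an org-formation manifest entry for a new member account looks something like the sketch below. The account names and email here are placeholders, not my actual configuration:

```yaml
AWSTemplateFormatVersion: '2010-09-09-OC'
Organization:
  MasterAccount:
    Type: OC::ORG::MasterAccount
    Properties:
      AccountName: management

  # Adding a block like this and committing it is what kicks off the
  # pipeline that provisions the new account in the Organization.
  CloudResumeAccount:
    Type: OC::ORG::Account
    Properties:
      AccountName: cloud-resume-challenge
      RootEmail: cloud-resume@example.com
```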
With that out of the way (and our user account for the challenge created), we were ready to get to work.
The Front End (Relevant Technologies: HTML/CSS, S3, Cloudfront, Amazon Certificate Manager, Route53)
In comparison to the sections that follow, this section is, at the very least, very linear, which makes progress easier to come by. If your website isn't coming up at the S3 bucket URL, the problem lies within the S3 bucket. If you can't see anything at your Cloudfront URL, but the S3 bucket is still displaying correctly, the problem is with your Cloudfront distribution. If your content is up and displaying in Cloudfront, but your registered domain isn't redirecting as it should, check Route53.
The biggest problem I faced was with DNS delegation in Route53. I registered the domain under the management account, but I created a separate SSO-managed account (using org-formation from the first phase of the project) to own all of the resources for the challenge (future projects will also get their own accounts, which helps ensure that resources from one project are unable to interfere with other projects). While I could have created all my needed DNS records in the management account's hosted zone, it is cleanest to create a hosted zone in the account that owns the resources and delegate DNS authority from the management account. This involves creating an NS record in the management account that points to the name servers used by the account being delegated to (for more info, check out this article). I set up the NS record, and it wasn't working, even though I was able to route to the hosted zone in the management account. Eventually, I noticed that the name servers assigned to the registered domain did not match the ones in the management account's hosted zone. Once that was fixed, everything routed correctly.
Make sure the name servers assigned to your registered domain match the NS records in your management account's hosted zone.
While I don't understand the exact intricacies of how Route53 authorizes DNS redirects, I presume that Route53 "authorizes" redirects only one hop away (I suspect this is designed to head off certain types of spoofing attacks). When the name servers didn't match, Route53 was willing to authorize a hop to the management account's hosted zone, but since that authorization was used up on the first hop, the hosted zone wasn't able to delegate DNS authority along the path for the additional hop I was hoping for.
The Back End (Relevant Technologies: DynamoDB, AWS Lambda, AWS API Gateway, Python/Javascript)
While the first part of the challenge was pretty linear, I thought this part was a lot more circular and provided a lot more opportunities to get stuck. It's also the first part of the challenge that asks you to write imperative code. While I found the DynamoDB database and the Lambda function relatively simple to configure (I'm pretty comfortable working in Python - especially at the level the challenge asked for), and even getting the API working through Postman wasn't TOO tricky, integrating this with the front end website was another story entirely.
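My Lambda was written in Python, but to keep the code samples in this post in one language, here's a rough Node.js sketch of the same idea: an atomic counter increment against DynamoDB. The table name, key, and response shape are assumptions, and the DynamoDB client is passed in as a parameter so the logic can be exercised without a real AWS connection:

```javascript
// Hypothetical sketch of a visitor-counter Lambda handler (the original
// was Python). "ddb" stands in for a DynamoDB document client.
async function handler(event, ddb) {
  // ADD performs an atomic in-place increment on the views attribute,
  // so simultaneous visitors can't clobber each other's counts.
  const result = await ddb.update({
    TableName: "visitor-counts",
    Key: { site: "resume" },
    UpdateExpression: "ADD views :one",
    ExpressionAttributeValues: { ":one": 1 },
    ReturnValues: "UPDATED_NEW",
  });
  return {
    statusCode: 200,
    // CORS header so the S3/Cloudfront-hosted page can read the response
    headers: { "Access-Control-Allow-Origin": "*" },
    body: JSON.stringify({ count: result.Attributes.views }),
  };
}
```

API Gateway then sits in front of this handler, turning an HTTP request into the event object and the returned object back into an HTTP response.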
Prior to this challenge, I had never coded in Javascript before, and even though I've now written a couple of scripts and feel a lot better about using JSON as a data structure, the syntax still doesn't feel natural to me. It compels me, though, and I'd love to find opportunities to get better at it. I spent a lot of time getting an "undefined" value from my Javascript function. The good news with this error is that your API is probably still working - you just aren't correctly populating the variable you're trying to pass to the front end. I eventually figured it out in 16 lines of code (and more time than I care to admit).
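For what it's worth, "undefined" usually comes down to not awaiting a Promise somewhere in the chain. Here's a hedged sketch of the general shape of the integration (not my exact code; the fetch function is a parameter so it can be stubbed, and the field name is an assumption):

```javascript
// Both fetch() and response.json() return Promises; forget to await
// either one and you'll happily pass "undefined" along to the page.
async function getVisitorCount(apiUrl, fetchFn = fetch) {
  const response = await fetchFn(apiUrl); // await the HTTP round trip
  const data = await response.json();     // await the body parsing too
  return data.count;                      // now a number, not undefined
}
```

The caller then drops the resolved number into the page, e.g. by setting the textContent of the counter element once the Promise resolves.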
The Fun Part (Relevant Technologies: Terraform, Github Actions, Cypress)
When I describe this part of the challenge as "The Fun Part", I mean it, even though it was by far the part I spent the most time on. I knew going into the challenge that I wanted more experience building CI/CD pipelines and real hands-on experience building AWS infrastructure with Terraform - they were the things I most hoped to get out of the challenge. While I'm proud of all the work I did, these are the sections I loved the most and am proudest of. If there are any tasks from this challenge I hope to carry into my future career, it's building CI/CD pipelines and working with Terraform.
Overall, I was surprised at how quick and easy it is to build infrastructure in Terraform. Once I figured out how to use tmux to run terraform commands in one terminal session (add alias t='terraform' for bonus points) while editing my terraform files in vim in a second, I was cruising. However, when the time came to create my DynamoDB table, I ran into by far the biggest and strangest roadblock of the entire challenge.
I was able to create the table without any problems and everything looked great. However, the next time I tried to make changes to my Terraform configuration, I received this error:
No matter what change I made to my Terraform configuration for the DynamoDB table, I continued to receive this error message. I tried destroying my infrastructure with terraform destroy. No change. I tried moving the corresponding .tf file to a different directory (so that Terraform would no longer see the table as provisioned infrastructure and try to modify it on the next apply) - no luck. It was just... stuck.
Once I sat down and did my research, I found the answer: this is a known bug in the then-current version of the AWS Terraform provider (4.36). If your table's keys use non-alphanumeric characters, Terraform can create the table, but it will error out when you try to make any further changes (I used the key 'view-count'; the dash was the issue). The fix was believed to land in version 4.90, which was available but still in preview.
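For context, the resource in question was shaped roughly like this. This is a reconstruction rather than my exact configuration, but the key name is the real culprit:

```hcl
resource "aws_dynamodb_table" "visitor_counts" {
  name         = "visitor-counts"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "view-count" # the dash in this key name triggered the bug

  attribute {
    name = "view-count"
    type = "S"
  }
}
```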
At this point, I had two major options. First, I could upgrade to the provider version that fixes the issue; while that would be a fix, I was very concerned about the upgrade creating even more problems. The alternative was to delete the database, remove all references to it from the Terraform state (terraform state rm), then create a brand new table with a key name that avoids the problematic character. I opted for the latter, and once I scrubbed all references to the old database from AWS and the Terraform state, we were back in business.
Once I recreated all my resources via Terraform, it was time for the most nerve-wracking element of the challenge. While I worked on recreating all the elements in Terraform (using a different domain address), I left my hand-created elements active. Once I had two fully functioning infrastructures, it was time to terraform destroy my automated infrastructure, manually delete all of my hand-created infrastructure in Parts 2 and 3, wait for everything to clean itself up, then set up the final infrastructure for good. While I had a couple of configuration mismatches, it took me about 15 minutes from terraform init to final terraform apply, and then I had done it - I had finished the Cloud Resume Challenge!
The Victory Lap
While the challenge specs put the AWS Certified Cloud Practitioner at the beginning of the challenge, I decided to leave it for last. I thought getting exposure to all of these AWS services would make studying for the exam a breeze, and this was by far the most stress-free part of the challenge for me. I used this training resource directly from Amazon, and I thought it was very good. If you have some basic competency in the cloud and are eager to start working in AWS, leaving the exam to the end is a relaxing denouement of the challenge.
When I've taken Cisco exams in the past, the printout usually details whether you passed or failed; here, you'll just have to take my word for it.
Next Steps
On Arrakis, they teach the attitude of the knife, chopping off what's incomplete and saying "Now, it's complete because it ended there." However, until giant sandworms take their place at the top of the food chain, we still have the opportunity to evaluate our work and specify future improvements.
Currently, the "Download PDF" button on the website points to a static PDF stored in the same S3 bucket as the website itself. Fine, but a little primitive: any update to the layout of the website means regenerating the PDF and uploading it to the bucket alongside the updated source code.
With our pipelines in place, it's quick and relatively painless, but still a manual process. Eventually (maybe once I have a little more Javascript experience under my belt), I would like to write a script that programmatically generates the PDF using html2canvas and jsPDF.
Conclusion and Acknowledgements
The Cloud Resume Challenge was a lot of work. I spent about 5 weeks on it (6 if you include preparing for the CCP exam) during a very intensive stretch (working multiple weekends), but I persevered, and I'm very happy with the results.
Overall, were I to do it again (or if you're looking to take it on yourself), I would set up the pipelines and Terraform earlier in the process. Recreating a set of already-created resources felt like a lot of wasted effort.
I'd like to thank Cameron Chorba for initially turning me on to the challenge, as well as Stuart Marsh for being a good sounding board and motivator (I find your 365+ day commit streak on Github very impressive and very inspiring). And finally... you! If you really have read to this point, you've experienced in some small part my struggles and joys with the challenge. If you're a fellow challenge champion, I'd love to hear what you did differently from me, and if you're not, I can't recommend giving it a shot enough. Regardless, I'd love to connect to swap war stories and handy tips. Thanks for reading, and I look forward to seeing you all in the cloud.