• Contributing to Open Source

    For those of you who have not yet peeked under the hood of this blog, it’s running Jekyll. In short, I was no longer interested in maintaining a three-tier infrastructure simply to host a blog. After comparing the field of static site generators, I chose Jekyll over Hugo because I have a bit more experience with Ruby than with Go, and because a cursory review suggested its maturity gave it a deeper breadth of plugins. That said, customizing it had a learning curve: understanding the nuances of theming and the use of plugins through RubyGems.

    A little more than a week ago, I decided to migrate the site to Jekyll 4. In the process, I discovered that one of the plugins did not declare support for any Jekyll version above 3. Rather than forcing the version on my own instance, this sent me down the path of contributing the fix back to the community.

  • Sending CodeBuild project status to SQS - Defining and sending the messages

    In the first post of this series, I described my thought process (or insanity) behind this entire project, and made decent progress on my list of requirements by creating the SQS queue and ensuring least privilege using CloudFormation. The intent of the project is to receive notifications indicating the final status of a CodeBuild build without having to continually poll the CodeBuild API or watch the console. I decided to use SQS alone, instead of a combination of SQS and SNS, to keep the complexity of the project down and the runtime costs low (effectively negligible given the number of builds per month). In this post, I’ll discuss how I built the message delivery system within CodeBuild, and the reasoning behind the message templates.
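
    The shape of such a message can be sketched as follows. This is a minimal illustration, not the template the post settles on — the field names and queue URL are my own placeholders:

    ```python
    import json

    # Hypothetical message template -- these field names are illustrative,
    # not necessarily the ones defined in the post.
    def build_status_message(project_name, build_id, status):
        """Assemble the JSON body a build could push to SQS on completion."""
        return json.dumps({
            "project": project_name,
            "buildId": build_id,
            "status": status,  # e.g. SUCCEEDED, FAILED, STOPPED
        })

    # From a CodeBuild post_build phase, the body could be sent with boto3,
    # assuming the build role allows sqs:SendMessage on the queue:
    #
    #   import boto3
    #   boto3.client("sqs").send_message(
    #       QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/builds",
    #       MessageBody=build_status_message("blog", "blog:1234", "SUCCEEDED"),
    #   )
    ```

    Keeping the body as plain JSON means any consumer can parse it without needing the SNS envelope an SQS+SNS fan-out would add.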

  • Sending CodeBuild project status to SQS - Building the queue

    Before joining AWS, I was using a few services in the suite, particularly to replace the hardware hosting my sites elsewhere. My exposure to the AWS Command Line Interface (CLI) and Boto (the AWS SDK for Python) was limited at best. I quickly recognized that while the console (the AWS web UI) helps new users understand the relationships between components, it is slow for operations at scale. That realization led me to the power of the CLI and, eventually, to CloudFormation.

    For someone who prefers the command line (and manipulating interfaces via the keyboard rather than a mouse), this was fantastic! I could manage infrastructure through the CLI, minimizing the context switch of moving my hands between keyboard and mouse. This post describes how I solved the inefficiency of using the AWS Console to check the status of my CodeBuild project builds with SQS.

  • Using JUnit to help mentor

    Recently, a friend of mine needed help with Java code they were writing for a class project. As a newcomer to software development, they were having trouble understanding the nuances of object-oriented design, and I was happy to help. My first ask was for the code they had written to date, so that I had an understanding of its current state. I absolutely love reading through the code of others, because it helps me learn new skills and techniques, even if the developer I’m working with is new to the field.

  • Magic of CloudWatch Events and CodeBuild

    Fantastic! You’re writing code, using CodeBuild from a repository in CodeCommit, and pushing the result into S3. The problem is that every single time you want to build, you have to make an AWS CLI call. What do you need to make that call successfully? Credentials! Unfortunately, they expired hours ago while you were cycles deep into writing a very robust blog post. Your 2FA device is upstairs, and you’re so comfortable in your chair that you really don’t want to get up. Don’t let anyone tell you that laziness doesn’t breed innovation!
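
    The manual start-build call can be replaced by a CloudWatch Events rule that matches pushes to the CodeCommit repository and targets the CodeBuild project. A sketch of that event pattern, with placeholder names (the repository ARN, rule name, and role are assumptions, not values from the post):

    ```python
    # Build the event pattern CloudWatch Events uses to match a push to a
    # CodeCommit branch. Defaulting to "master" is an assumption.
    def codecommit_push_pattern(repository_arn, branch="master"):
        """Event pattern matching a push (reference update) to the branch."""
        return {
            "source": ["aws.codecommit"],
            "detail-type": ["CodeCommit Repository State Change"],
            "resources": [repository_arn],
            "detail": {
                "referenceType": ["branch"],
                "referenceName": [branch],
            },
        }

    # With boto3, the rule and its CodeBuild target could be wired up
    # roughly like this (role must allow codebuild:StartBuild):
    #
    #   import boto3, json
    #   events = boto3.client("events")
    #   events.put_rule(
    #       Name="build-on-push",
    #       EventPattern=json.dumps(codecommit_push_pattern(repo_arn)))
    #   events.put_targets(Rule="build-on-push", Targets=[{
    #       "Id": "codebuild", "Arn": project_arn, "RoleArn": invoke_role_arn}])
    ```

    Once the rule is in place, pushing a commit is the only action needed — no credentials, no CLI call, no leaving the chair.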

  • Hosting in S3 with CloudFront

    In the last post, I outlined my process to block public S3 buckets at the service level, ensuring that none of the buckets across my accounts would be exposed unintentionally. Once I was comfortable with the solution, I decided that it would be nice to finally set up a blog hosted in S3. My requirements were the following:

    1. Maintain the security posture of the hosting bucket
    2. Maintain access logs, and be able to report on them
    3. Deliver content only via TLS
    4. Keep management of the certificate lifecycle straightforward

  • S3 Block Public Access

    In November 2018, AWS released S3 Block Public Access, a method to apply an overarching policy that prevents public access to S3 buckets. The policy contains four options that can be applied individually or as a set, which provides the flexibility expected of an AWS feature (and the excess rope to cause trouble).
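
    The four options map directly onto the configuration boto3 accepts. A minimal sketch — the bucket name is a placeholder, and enabling all four is the "as a set" case:

    ```python
    # The four S3 Block Public Access options, in the shape boto3's S3
    # client expects. Setting all four True blocks public access entirely.
    PUBLIC_ACCESS_BLOCK = {
        "BlockPublicAcls": True,        # reject requests that add public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject new public bucket policies
        "RestrictPublicBuckets": True,  # restrict access under public policies
    }

    def apply_block(bucket_name):
        """Apply the full set to one bucket (bucket name is a placeholder)."""
        import boto3
        boto3.client("s3").put_public_access_block(
            Bucket=bucket_name,
            PublicAccessBlockConfiguration=PUBLIC_ACCESS_BLOCK,
        )
    ```

    The account-wide variant the post works toward uses the same configuration shape via the S3 Control API’s put_public_access_block, which takes an AccountId instead of a bucket.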

  • AWS CLI via SAML - Assuming roles across accounts

    In the second post of this series, I described how I set up my development environment using aws-google-auth. Without that tool, I would not be able to use the federation I set up between my GSuite and AWS accounts through the AWS CLI.

  • AWS CLI via SAML - Setting up your development environment

    The previous post in this series laid out how to configure federation between a GSuite and an AWS account, with the intent of creating a single point of entry into your AWS infrastructure. This ensures that users of the infrastructure, regardless of account, authenticate into a single account, and then use role assumption, based on their federation, for authorization into the target account they will be working in.
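
    That flow can be expressed in the AWS CLI’s own config format: one profile holds the federated credentials, and per-account profiles assume a role from it via `role_arn` and `source_profile`. A sketch that renders such a profile — the profile names, account ID, and role name below are placeholders, not values from the series:

    ```python
    # Render an ~/.aws/config profile that assumes a cross-account role.
    # "federated" stands in for whatever profile holds the SAML-federated
    # credentials; account IDs and role names are illustrative.
    def role_profile(name, account_id, role_name, source_profile="federated"):
        """Return a config-file profile block for cross-account role assumption."""
        return (
            f"[profile {name}]\n"
            f"role_arn = arn:aws:iam::{account_id}:role/{role_name}\n"
            f"source_profile = {source_profile}\n"
        )

    # Example: appending this to ~/.aws/config lets `aws --profile prod ...`
    # resolve the federated credentials first, then call sts:AssumeRole
    # against the target account transparently.
    print(role_profile("prod", "111122223333", "Admin"), end="")
    ```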

  • AWS CLI via SAML - Setting up your federation

    Earlier this year, I decided that I would finally implement in my own personal AWS accounts the same set of best practices I had shared with my customers over the past two years. The intent: to run my own production workloads across accounts that had sat effectively idle since their creation.