If you’re anything like me - you often prepare for the worst any time it’s critically important to execute.
Build - Practice - Repeat until it’s muscle memory.
Once that is settled - we start thinking about contingencies. Will I be on hotel Wi-Fi? What will happen to my bandwidth if I try pulling container images to a local cluster? Will it wipe out my audio/video?
Whether it’s an important mission or a conference demonstration - how do you reduce the potential room for error?
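For the image-pull worry in particular, one mitigation is to cache everything before leaving reliable bandwidth. A minimal sketch, assuming a local kind cluster and Docker - the image names and cluster name here are placeholders:

```bash
# Pull the demo images ahead of time, over reliable bandwidth
docker pull nginx:1.25
docker pull ghcr.io/example/demo-app:v1.0.0   # placeholder image

# Side-load them into the local kind cluster so the live demo
# never has to reach out to a registry over hotel Wi-Fi
kind load docker-image nginx:1.25 --name demo
kind load docker-image ghcr.io/example/demo-app:v1.0.0 --name demo
```

With an imagePullPolicy of IfNotPresent, the cluster will use the cached images instead of pulling mid-demo.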
Events occur, priorities shift, adaptation is required.
The last year has been a consistent roller coaster. I’ve always been on the edge of what defines security controls and how we architect systems to meet and exceed the requirements. My knowledge of compliance was sufficient to collaborate with others on answering the required controls and moving on with development efforts.
The thing is, we knew that better existed - or that it should. The processes were frustrating, time was expensive, and the fidelity of the data generally didn’t match the fidelity of the other artifacts we were producing.
Warning - this is a large amount of text highlighting my general strategy for this conference. What I deem valuable may not work for others with separate strategies. See the TL;DR of each day if you’re interested in a general overview.
KubeCon has wrapped up, and with it I can definitively say that it was the most productive and impactful KubeCon I have attended. I went into this event with a strategy - one that included observation and exploration as well as targeted tasks.
It’s no secret - and I’m likely a broken record at this point - that homelabbing is not only a hobby of mine, but also a great activity for learning.
The experiments I have run on my own hardware have informed me in ways that let me contribute to discussions and architectural decisions with value to myself and others. It is still an activity that I work to establish as a habit in some way, shape, or form throughout my schedule.
In a week of news around the xz-utils backdoor vulnerability, we get a reminder that there are systems we need to remain vigilant in monitoring.
I’m a believer that some of our most vulnerable assets are our developer environments. We conduct tons of experimentation in them and use them to drive upgrades to downstream systems.
How are we keeping track of what is installed, at which versions, and so on? It seems like a solved problem - but I can guarantee that even big enterprises remain vulnerable - more so in the age of containers.
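To make that concrete with this week’s example: the xz backdoor shipped in releases 5.6.0 and 5.6.1, and an inventory habit turns “am I affected?” into a quick lookup. A minimal sketch using syft as the SBOM generator - one tool among many, and the paths and targets are illustrative:

```bash
# Spot check: 5.6.0 and 5.6.1 were the compromised xz releases
xz --version

# The broader habit: generate an SBOM of the workstation toolchain so
# "what is installed, and at which version?" always has an answer
syft dir:/usr/local -o spdx-json > workstation-sbom.json

# The same applies to the container images our environments build on
syft ubuntu:22.04 -o table
```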
If there is one value that I believe has contributed to my most meaningful time spent - in work and out - it’s the guiding principle that how you do anything is how you do everything. I don’t know where I originally heard it - but it immediately resonated with me as a truth.
Break it down
Hear me out - from the tasks that are very important and require intense focus, to the menial, routine tasks that simply need your attention to get done - how you approach them is often consistent over time. Your approach to any problem at hand is a byproduct of your systems. As James Clear puts it, “You do not rise to the level of your goals. You fall to the level of your systems.”
As a follow-on from my previous post on OSCAL, I wanted to take a step back and discuss the role of open source GRC engineering. If we look at traditional Governance, Risk, and Compliance (GRC) tooling - and I say “traditional” because I do imagine there are GRC platforms today that are not open source but are innovating - we’ve seen interfaces that primarily operate on consuming data and visualizing it. Not a bad business model by any means, but where it can fundamentally fail is in the lack of defined paths for data to flow.
I’m sure I am not alone here - but at one point in my engineering career I helped build a platform, and only after building it did we come to the threshold question: “Has this been evaluated by security?” That is, traditional silos and the need to evaluate for compliance across controls that may or may not apply.
In the era of DevSecOps, the Sec was never integrated. Embarrassing from some perspectives, but I had no background to even know what standard we would be evaluating the platform against. Ignorance does not get a free pass though - and it instilled a need to understand why there wasn’t any data to pull for reuse. This activity is done time and time again with many of the same tools and architectures, and yet each one is done in a silo? That didn’t sit well with me.
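This is where OSCAL comes back in for me: it gives that reusable data a shape. As a rough, abbreviated illustration - not a complete or validated document, and the UUIDs, catalog URL, and descriptions are placeholders - an OSCAL component-definition lets the platform itself carry its control narrative:

```json
{
  "component-definition": {
    "uuid": "11111111-1111-4111-8111-111111111111",
    "metadata": {
      "title": "Example Platform Component",
      "last-modified": "2024-04-01T00:00:00Z",
      "version": "0.0.1",
      "oscal-version": "1.1.2"
    },
    "components": [
      {
        "uuid": "22222222-2222-4222-8222-222222222222",
        "type": "software",
        "title": "example-platform",
        "description": "Placeholder for the platform being evaluated",
        "control-implementations": [
          {
            "uuid": "33333333-3333-4333-8333-333333333333",
            "source": "https://example.com/catalogs/nist-800-53/catalog.json",
            "description": "How this component addresses its applicable controls",
            "implemented-requirements": [
              {
                "uuid": "44444444-4444-4444-8444-444444444444",
                "control-id": "ac-2",
                "description": "Placeholder: how account management is implemented"
              }
            ]
          }
        ]
      }
    ]
  }
}
```

Capture the implementation narrative once, next to the component, and every downstream assessment can consume it instead of starting from a blank questionnaire.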
The article above lays out a few supply chain security attacks that are applicable to software development. The TL;DR is that - without signing your commits - there are ways to impersonate your GitHub account through fairly trivial means. This is a scary thought - impersonation is a simple social engineering attack that could result in someone letting down their defenses when they otherwise shouldn’t - or worse, attributing known-bad code to someone else in an attempt to degrade their reputation.
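The mitigation is cheap relative to the risk. A minimal sketch using git’s SSH signing support (available since git 2.34; the key paths are illustrative):

```bash
# Sign with an SSH key instead of GPG
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub

# Sign all commits and tags by default
git config --global commit.gpgsign true
git config --global tag.gpgsign true

# Local verification needs an allowed-signers file mapping emails to keys
git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
git log --show-signature -1
```

Upload the same public key to GitHub as a signing key and your commits will show as Verified.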
I am a firm believer that a continuous growth mindset is essential for any developer (and, to be honest, any other person). We execute day in and day out, and more often than not we will find ourselves playing to our strengths and focusing on the mission need instead of poking at every new skill and programming language under the sun.
Devoting yourself - your time, energy, focus, and grit - to the skills you know will be required for making the next big decision is a great way to continue to grow. You’ll often find yourself holding subject-matter expertise and providing that knowledge to help inform greater entities (team, company, etc.) on how to execute well.