On Security: Part I
As a cybersecurity professional, I spend a good part of every day thinking about security problems. For my group, the main problem is how to get public organizations in the US to adopt, at the very least, a minimally effective set of cybersecurity services that we develop the requirements for and, in some cases, provide directly. America's infrastructure, public and private, is under immense threat, and the security baseline is, generally speaking, too low. I tend to be a big-picture thinker and take a holistic view of things, and an example of this is a line of thought I had earlier in the week that I couldn't shake: what is the one, master security problem - the security problem to rule them all, as it were? Not just for us at CISA, or even just for Americans - but for any humans. It was one of those things that arises when you're just about to fall asleep, or when you take a long, steamy shower and allow your mind the freedom to wander where it will. Currently, we face the promise of rapid technological advancement and transformation amid unprecedented uncertainty, tension, and risk. It seems to me, to put it succinctly, that ensuring a certain quality of human survivability may be the greatest security problem we face. Basically: how do we survive without also losing everything? It's been a fun rabbit hole to go down, because now I get to exercise a muscle I don't often use - writing an essay - something I did all the time back when I was a history student (dusting off my gigantic orange hard copy of the Chicago Manual of Style).
When I talk about transformation through crisis, think of big, well-known pivotal events, which may be punctuated and sudden or part of a longer process: the collapse of the Roman Empire and the rise of the feudal system in Europe; and in turn, the Black Death, the collapse of the feudal system, and the rise of the Renaissance and, with it, early modern Europe. In the American context, we might also include the Civil War and its answer to the foundational questions of slavery and sectionalism; or the Great Depression and the New Deal that followed, which dramatically reordered the economy and society. Today, the world faces multiple converging threats - climate change, nuclear weapons (the Doomsday Clock currently stands at 89 seconds to midnight, the closest it has ever been, and just this past week we had Donald Trump positioning ballistic missile submarines closer to Russian territory, and Dmitry Medvedev bragging about the Russian Dead Hand strategic weapons system¹), pandemics, systemic economic fragility, rapidly advancing AI technology, and the increasingly viable prospect that humans may soon live not just on Earth, but out there in the cosmos. Combine all this with the fact that most humans are overconsuming media (which may be accurate and in good faith, or twisted to inflict maximum fear and stress) on their ubiquitous, ever-accessible connected devices, and it's no wonder that people are feeling a sense of looming doom. The potential for disruption, destruction, transformation, and lightning-fast change appears to be present in equal and unprecedented measure. So unprecedented, it seems, that we stand upon the highest-stakes precipice we have yet faced in our history - a dramatic inflection point that may spell disaster or triumph.
I'm not trying to give any of those converging threats exhaustive treatment here; instead, I'm interested in how we should view the two-edged sword of the risk of destruction and the promise of transformation as it relates to our present, rapidly changing situation and what comes next. Great disruption, whether by catastrophe or innovation (or both), can catalyze societal evolution if approached with readiness and ethical caution, but paramount above all is to ensure a sufficient quality of survivability - which is to say, functional survival regardless of what happens. It is therefore one of the greatest, if not the greatest, security concerns we have, upon which all other security concerns depend. Should we face catastrophe, I could see the question playing out like a scene in Foundation, in which Hari Seldon tells the leaders of the Empire on Trantor (or the personified "Empire," if you happened to watch the TV series) that how they respond to inevitable catastrophe and decline will determine just how long the dark ages will last - a thousand years? Or thirty times longer? Our deliberate choices and how we leverage our technological prowess may dictate what the aftermath looks like - assuming that our technological capacity and all knowledge of it are not destroyed in the process.
In the next post (or an update to this post), I will explore how scholars and tech people have thought about this question.
“How the US is Preparing for the Next Pandemic,” BBC News, August 14, 2025, https://www.bbc.com/news/articles/cly4kgv9238o. ↩︎