Shostack & Friends BlogSecurity, privacy, economics and unrelated topics since 20052024-03-09T00:00:00Zhttps://shostack.org/blog/Adam Shostackadam@shostack.orgModeling Attackers and Their Motives2014-11-11T00:00:00Zhttps://shostack.org/blog/modeling-attackers-and-their-motives/AdamThere are a number of reports out recently, breathlessly presenting their analysis of one threatening group of baddies or another. Most readers should, at most, skim their analysis of the perpetrators. Read on for why.<span class="single-post-excerpt"><!-- excerpt -->There are a number of reports out recently, breathlessly presenting their analysis of one threatening group of baddies or another. Most readers should, at most, skim their analysis of the perpetrators. Read on for why.<!-- /excerpt --></span>
<p>There are a number of reports out recently, breathlessly presenting their analysis of one threatening group of baddies or another. You should look at the reports for facts you can use to assess your systems, such as filenames, hashes and IP addresses. Most readers should, at most, skim their analysis of the perpetrators. Read on for why.</p>
<p>
There are a number of surface reasons that you might reject or ignore these reports. For example, these reports are funded by marketing. Even if they are, that's not a reason to reject them. The baker does not bake bread for fun, and the business goal of marketing can give us useful information. You might reject them for their abuse of adjectives like “persistent”, “stealthy”, or “sophisticated.” (I'm tempted to just compile a wordcloud and drop it in place of writing.) No, the reason to only skim these is what the analysis does to the chance of your success. There are two self-inflicted wounds that often happen when people focus on attackers:</p>
<ul>
<li>You miss attackers</li>
<li>You misunderstand what the attackers will do</li>
</ul>
<p>
You may get a vicarious thrill from knowing who might be attacking you, but that very vicarious thrill is likely to make those details available to your conscious mind, or anchor your attention on them, causing you to miss other attackers. Similarly, you might get attached to the details of how they attacked last year, and not notice how those details change.</p>
<p>
Now, you might think that your analysis won't fall into those traps, but let me be clear: the largest, best-funded analysis shops in the world routinely make serious and consequential mistakes about their key areas of responsibility. The CIA didn't predict the collapse of the Soviet Union, and it <a href="http://www.washingtontimes.com/news/2014/jul/1/cia-blamed-iraq-intel-failure-isis-rise/?page=all" target="_blank" rel="noopener">didn't predict the rise of ISIS</a>.</p>
<p>
If your organization believes that it's better at intelligence analysis than the thousands of people who work in US intelligence, then please pay attention to my raised eyebrow. Maybe you should be applying that analytic awesomesauce to your core business, maybe it is your core business, or maybe you should be carefully combing through the reports and analysis to update your assessments of where these rapscallions shall strike next. Or maybe you're over-estimating your analytic capabilities.</p>
<p>Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can't reliably prevent those things, detect them when you've missed, and respond when you discover you've missed, then digging into the motivations of your attackers may not be the best use of your time.</p>
<p>
The indicators that can help you find the successful attacks are an important value from these reports, and that's what you should use them for. Don't get distracted by the motivations.</p>
<p>
</p>The Dope Cycle and the Two Minutes Hate2017-01-04T00:00:00Zhttps://shostack.org/blog/dope-cycle-and-two-minutes-hate/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/The-vicious-cycle-of-pain-cravings-more-drugs-guilt.jpg" alt="A Vicious cycle of pain, cravings, more drugs, and guilt" class="single-post-main-img" />
<p>[Updated with extra links at the bottom.]</p>
<p>There's a cycle that happens as you engage on the internet. You post something, and wait, hoping, for the likes, the favorites, the shares, the kind comments to come in. You hit reload incessantly <em>even though the site doesn't need it</em>, hoping to get that hit that jolt even a little sooner. That dopamine release.</p>
<a href="http://www.narconon.org/blog/drug-addiction/vicious-cycle-addiction/" target="_blank"><img src="https://shostack.org/blog/img/2017/The-vicious-cycle-of-pain-cravings-more-drugs-guilt.jpg" alt="A Vicious cycle of pain, cravings, more drugs, and guilt" width="480" height="375" style="display:block; float:right;" /></a>
<p>Site designers refer to this by benign names, like <em>engagement</em> or <em>gamification</em> and it doesn't just happen on "social media" sites like Twitter or Instagram. It is fundamental to the structure of LinkedIn, of Medium, StackExchange, of Flickr. We are told how popular are the things we observe, and we are told to want that popularity. Excuse me, I mean that influence. That reach. And that brings me to the point of today's post: seven tips to increase your social media impactfulness. Just kidding.</p>
<p>Not kidding: even when you know you're being manipulated into wanting it, you want it. And you are being manipulated, make no mistake. Site designers are working to make your use of their site as pleasurable as possible, as emotionally engaging as possible. They're caught up in a <a href="https://en.wikipedia.org/wiki/Red_Queen_hypothesis" target="_blank">Red Queen Race</a>, where they must engage faster and faster just to stay in place. And when you're in such a race, it helps to steal as much as you can from millions of years of evolution. [Edit: I should add that this is not a moral judgement on the companies or the people, but rather an observation on what they must do to survive.] That's dopamine, that's adrenaline, that's every hormone that's been covered in Popular Psychology. It's a dope cycle, and you can read that in every sense of the word dope.</p>
<p>This wanting is not innocent or harmless. Outrage, generating a stronger response, wins. Sexy, generating a stronger response, wins. Cuteness, in the forms of awwws, wins. We are awash in messages crafted to generate strong emotion. More, we are awash in messages crafter to generate stronger emotion than the preceding or following message. This is not new. What is new is that the analytic tools available to its creators are so strong that the Red Queen Race is accelerating (by the way, that's bait for outraged readers to insist I misunderstand the Red Queen Race, generating views for this post). The tools of 20th century outrage are crude and ineffective. Today's outrage cycle over the House cancelling its cancellation of its ethics office is over, replaced by outrage over ... well, it's not year clear what will replace it, but expect it to be replaced.</p>
<p>When Orwell wrote of the Two Minutes Hate, he wrote:</p>
<blockquote>The horrible thing about the Two Minutes Hate was not that one was obliged to act a part, but that it was impossible to avoid joining in. Within thirty seconds any pretense was always unnecessary. A hideous ecstasy of fear and vindictiveness, a desire to kill, to torture, to smash faces in with a sledge hammer, seemed to flow through the whole group of people like an electric current, turning one even against one's will into a grimacing, screaming lunatic. And yet the rage that one felt was an abstract, undirected emotion which could be switched from one object to another like the flame of a blowlamp.</blockquote>
<p>I am reminded of Hoder's article, "<a href="https://medium.com/matter/the-web-we-have-to-save-2eb1fe15a426#.fa0bl3uyh" target="_blank">The Web We Have to Save</a>" (4.4K hearts, 165 balloons, and no easy way to see on Medium how many sites link to it). Also of related interest is <a href="http://nymag.com/daily/intelligencer/2016/01/good-bye-to-all-that-twitter.html" target="_blank">Good-bye to All That Twitter</a> and "<a href="http://www.geekwire.com/2017/seattle-author-lindy-west-leaves-twitter-calls-unusable-anyone-trolls-robots-dictators/" target="_blank">Seattle author Lindy West leaves Twitter, calls it unusable for ‘anyone but trolls, robots and dictators’</a>" but I don't think Twitter, per se, is the problem. Twitter has a number of aspects which make trolling (especially around gender and race issues, but not limited to them) especially emotionally challenging. Those are likely closely tied to the anticipation of positivity in "mentions", fulfilled by hate. But the issues are made worse by site design that successfully increases engagement.</p>
<p>I don't know what to do with this observation. I have tried to reduce use of sites that use the structures of engagement: removing them from my reading in the morning, taking their apps off my phone. But I find myself typing their URLs when I'm task switching. I am reluctant to orient around addiction, as it drags with it a great deal of baggage around free will and ineffective regulation.</p>
<p>But removing myself from Twitter doesn't really address the problem of the two minutes hate, nor of the red queen race of dope cycles. I'd love to hear your thoughts on what to do about them.</p>
<p>[Update: Related, "<a href="http://www.zephoria.org/thoughts/archives/2017/01/06/hacking-the-attention-economy.html" target="_blank">Hacking the Attention Economy</a>," by danah boyd.]</p>
<p>[Update (8 Feb): Hunter Walk writes "<a href="https://hunterwalk.com/2017/02/07/why-many-companies-mistakingly-think-trolls-harassment-is-good-for-business/" target="_blank">Why Many Companies Mistakingly Think Trolls & Harassment Are Good for Business</a>," and I'd missed Tim Wu writing on "<a href="https://www.amazon.com/s/ref=as_li_ss_tl?url=search-alias=stripbooks&field-keywords=the+attention+merchants&sprefix=the+attention+,stripbooks,210&crid=VBKR02ESY30Q&linkCode=ll2&tag=adamshostack-20&linkId=9770d99a47d8d27fae65b9de1caee3b5" target="_blank">The Attention Merchants</a>."]</p>Kyber Crystal and the Death Star2017-01-06T00:00:00Zhttps://shostack.org/blog/kyber-crystal-and-the-death-star/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/Death-star-construction.jpg" alt="Death Star construction" class="single-post-main-img" width="598" height="337" />
<p>This post has spoilers for Rogue One, and also Return of the Jedi.</p>
<p>We learn in Rogue One that the Death Star's main gun is powered by Kyber crystal. We know from various <a href="http://starwars.wikia.com/wiki/Kyber_crystal" target="_blank">sources</a> that it's <a href="http://screenrant.com/star-wars-kyber-lightsaber-crystals-trivia-facts/?view=all" target="_blank">rare</a>.</p>
<p>Then the Death Star is tested, destroying Jedah, where they're mining the crystals. Note that both times its fired, they give the order "single reactor ignition." Are they testing the reactors and power systems, or conserving kyber crystal?</p>
<p>Really, how much "ammo" did the original Death Star have on board? How many times could they fire the main gun?</p>
<p>Was ten or fifteen shots considered sufficient, because after a demonstration, fear will keep the local systems in line? Where did they find enought kyber crystal for the second Death Star?</p>2017 and Tidal Forces2017-01-13T00:00:00Zhttps://shostack.org/blog/2017-and-tidal-forces/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>There are two great blog posts at Securosis to kick off the new year:</p>
<ul>
<li><a href="https://securosis.com/blog/tidal-forces-the-trends-tearing-apart-security-as-we-know-it" target="_blank">Tidal Forces: The Trends Tearing Apart Security As We Know It</a> (Rich Mogull)</li>
<li><a href="https://securosis.com/blog/network-security-in-the-cloud-age-everything-changes" target="_blank">Network Security in the Cloud Age: Everything Changes</a> (Mike Rothman)</li>
</ul>
<p>Both are deep and important and worth pondering. I want to riff on something that Rich said:</p>
<blockquote>On the security professional side I have trained hundreds of practitioners on cloud security, while working with dozens of organizations to secure cloud deployments. It can take years to fully update skills, and even longer to re-engineer enterprise operations, even without battling internal friction from large chunks of the workforce...</blockquote>
<p>It's worse than that. <del>Yesterday</del> Recently on Emergent Chaos, I talked about <a href="https://adam.shostack.org/blog/2017/01/the-dope-cycle-and-the-two-minutes-hate/">Red Queen Races</a>, where you have to work harder and harder just to keep up.</p>
<p>In the pre-cloud world, you could fully update your skills. You could be an expert on Active Directory 2003, or Checkpoint's Firewall-1. You could generate friction over moving to AD2012. You no longer have that luxury. Just this morning, Amazon launched a new rev of something. Google is pushing a new rev of its G-Suite to 5% of customers. Your skillset with the prior release is now out of date. (I have no idea if either really did this, but they could have.) Your skillset can no longer be a locked-in set of skills and knowledge. You need the meta-skills of modeling and learning. You need to understand what your model of AWS is, and you need to allocate time and energy to consciously learning about it.</p>
<p>That's not just a change for individuals. It's a change for how organizations plan for training, and it's a change for how we should design training, as people will need lots more "what's new in AWS in Q1 2017" training to augment "intro to AWS."</p>
<p>Tidal forces, indeed.</p>On Immigration and Refugees2017-01-29T00:00:00Zhttps://shostack.org/blog/on-immigration-and-refugees/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/john-von-neumann.png" alt="John von Neumann" border="0" width="125" height="133" style="float:right;" /> <img src="https://shostack.org/blog/img/2017/SergeyBrin.png" alt="Sergey Brin and baby" title="SergeyBrin.png" width="150" height="110" style="float:right;" />
<p>The ban on refugees is illegal, immoral and un-American, and as an American, I want to add my voice.</p>
<p>The ban is illegal. ("<a href="https://www.nytimes.com/2017/01/27/opinion/trumps-immigration-ban-is-illegal.html" target="_blank">Trump’s Immigration Ban Is Illegal</a>.") I suspect that the United States also has legal obligations under treaties to accept refugees, but Google isn't my lawyer, and I am no expert.</p>
<p>The ban is immoral. Those who have gone through our immigration process and gotten green cards are being <a href="http://www.reuters.com/article/us-usa-trump-immigration-greencard-idUSKBN15C0KX" target="_blank">restricted</a> from returning to the US. Those people have followed the legal path to immigration and built lives here. We made a deal with them and we're breaking it, suddenly and without warning. Those people might have jobs, school, or family to return to, and their lives are upended and uncertain. These are not illegal aliens, they are people who have gone through a complex, and sometimes kafka-esque immigration process.</p>
<p>I have worked with engineers from Syria. (I'm not going to name them in today's climate.) They did good work, and were good people. They were dealing with the horror of hearing family back home was missing, and they did good work anyway.</p>
<p>The President is hurting America with this ban. By telling those here legally that their status can be upended at a whim, he makes a strong argument against coming here by following the rules as they exist on a given day. Some people will continue to come here in violation of the law; others will go elsewhere, and another country will get both the risk and the reward from that set of refugees.</p>
<p>It's worth noting that the protests and court orders yesterday, while welcome, "<a href="https://www.washingtonpost.com/national/health-science/trump-gives-no-sign-of-backing-down-from-travel-ban/2017/01/29/4ffe900a-e620-11e6-b82f-687d6e6a3e7c_story.html" target="_blank">Despite growing dissent, Trump gives no sign of backing down from travel ban</a>." I guess we need to keep calling this what it is: un-American.</p>
<p>Pictured is John von Neumann, <a href="http://www.atomicheritage.org/article/scientist-refugees-and-manhattan-project" target="_blank">refugee</a>, and inventor of the <a href="https://en.wikipedia.org/wiki/Von_Neumann_architecture" target="_blank">von Neumann</a> architecture that's at the heart of the computer on which you're reading this, and Sergey Brin, co-founder of Google, <a href="https://twitter.com/RMac18/status/825546620276088834/photo/1" target="_blank">on his way</a> to protest in San Francisco.</p>
<p>[Update: The hawks at Lawfare blog have an analysis, <a href="https://lawfareblog.com/malevolence-tempered-incompetence-trumps-horrifying-executive-order-refugees-and-visas" target="_blank">Malevolence Tempered by Incompetence:</a>.]</p>Groundrules on Complaining About Security2017-02-18T00:00:00Zhttps://shostack.org/blog/groundrules-on-complaining-about-security/mordaxusEveryone complains about security, but no one ever .. sets boundaries<span class="single-post-excerpt"><!-- excerpt -->Everyone complains about security, but no one ever .. sets boundaries<!-- /excerpt --></span>
<p>In this article, I want to lead into some other articles I’m working on. In those, I’m going to complain about security. But I want those complaints to be thoughtful and within a proper context.</p>
<p>You will hear many of us in security talk about threat models. Adam literally <a href="https://www.amazon.com/Threat-Modeling-Designing-Adam-Shostack/dp/1118809998/ref=as_li_ss_tl?ie=UTF8&qid=1522626183&sr=8-1&keywords=threat+modeling&linkCode=ll1&tag=adamshostack-20&linkId=a52c01dee834cc2f97650dede4e09988" target="_blank">wrote the book</a> on threat models and if you don’t have a copy, you should get one.</p>
<p>Threat models are a way of thinking about security in a somewhat rigorous way. Without some sort of threat model, you’re not really doing security.</p>
<p>Threat models sound complex, but they’re really not. We all do them intuitively all the time, and here’s the basic outline of how to make one. You want answers to these questions:</p>
<ol>
<li>What are you doing?</li>
<li>What could go wrong?</li>
<li>What are you doing about it?</li>
</ol>
<p>Among the valuable things in Adam’s book, he talks about these and more, but these three simple questions frame how to talk about security no matter who you are. If you don’t have a threat model, you might be doing something useful, but it’s not really security.</p>
<p>If you are a maker of security, without a threat model you might have a solution in search of a problem. You might also have a <a href="https://en.wikipedia.org/wiki/Stone_Soup" target="_blank">stone soup</a> security system, in which you throw a bunch of things in a pot, and while tasty (or secure), isn’t organized. There are many, many stone soup security systems out there.</p>
<p>If you’re going to use a security system, without a threat model you have no way to know if what you’re getting meets your needs.</p>
<p>If you’re challenging a security system, without a threat model, your criticisms may be true but irrelevant.</p>
<p>It is these latter two cases – deciding what security system to you and providing a critique of a security system – that I’m going to focus on, particularly since I’m going to be engaging in challenges, and people selecting a system also need to think about what their own threat model is when selecting a system. If you’re going to use a secuity system, a little bit of thought about what you expect it to do and what you expect it to protect you from is in required.</p>
<p>Let me move a bit away from computer security for a moment; analogies often help.</p>
<p>Let’s look at this statement:
</p><ul>
<li>Aspirin doesn’t cure cancer.</li>
</ul>
<p>It’s true. Aspirin <em>doesn’t</em> cure cancer. It doesn’t do half-bad on headaches (with of course, a number of other qualifiers), but it doesn’t cure cancer.</p>
<p>However, if Alice says, “I’m going to go take an aspirin” and Bob says, “Aspirin doesn’t cure cancer,” he has implicitly assumed that her threat model is not:</p>
<ul>
<li>I have a headache</li>
<li>I’m going to take an aspirin to cure it</li>
</ul>
<p>but</p>
<ul>
<li>I have cancer</li>
<li>I’m going to take an aspirin to cure it.</li>
</ul>
<p>Even if Alice actually does have cancer, she might also have a headache. Especially if she has to deal with someone with simplisitic thinking like Bob. This is the sort of headache that got me to write this essay.</p>
<p>Getting back to security, while I was typing the first part of this, a friend and I started on a discussion. We started with wondering if since most front door locks are easily picked, does that mean that they’re just security theatre. The discussion then went into social value of locks (most people are honest, after all), the technological merits of <a href="http://www.abloyusa.com/" target="_blank">Abloy locks</a>, the expense of getting a good lock for all your doors, the human factors aspects of wanting one key for all your doors, the security problem of weak points from the porch to the windows, and then on to reinforcing hinges and even the front door itself. It was a fun discussion, but it wasn’t a good security discussion, it was security stone soup. The initial question of whether most door locks do anything was the pot of water with a stone in it and we kept adding in garnishes until we ended up with a tasty conversation. However, at no point did we discuss a threat model. We don’t know what we were trying to protect, what threats we were protecting it from, or anything that turns it into a real security discussion.</p>
<p>I <em>think</em> we were talking about a stereotypical threat of a burglar backing up a van to the house and carting off a lot of valuables, but I am just presuming that.</p>
<p>I know of what I speak in this issue of threat models because I’m guilty of it, too. It’s so easy to get caught up in security stone soup that it happened to me while I was writing an essay on threat models and security stone soup.</p>
<p>Now that I have a couple of ground rules in place as a preface, I will complain about security in my next essay.</p>
Calls for an NTSB?2017-02-20T00:00:00Zhttps://shostack.org/blog/calls-for-an-ntsb/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>In September, Steve Bellovin and I asked "<a href="https://shostack.org/archive/2016/09/why-dont-we-have-an-incident-repository/">Why Don’t We Have an Incident Repository?</a>."</p>
<p>I'm continuing to do research on the topic, and I'm interested in putting together a list of such things. I'd like to ask you for two favors.</p>
<p>First, if you remember such things, can you tell me about it? I recall "Computers at Risk," the National Cyber Leap Year report, and the Bellovin & Neumann editorial in IEEE S&P. Oh, and "The New School of Information Security." But I'm sure there have been others.</p>
<p>In particular, what I'm looking for are calls like this one in <cite><a href="https://www.nap.edu/catalog/1581/computers-at-risk-safe-computing-in-the-information-age" target="_blank">Computers at Risk</a></cite> (National Academies Press, 1991):</p>
<blockquote>3a. <b>Build a repository of incident data.</b> The committee recommends that a repository of incident information be established for use in research, to increase public awareness of successful penetrations and existing vulnerabilities, and to assist security practitioners, who often have difficulty persuading managers to invest in security. This database should categorize, report, and track pertinent instances of system security-related threats, risks, and failures. [...] One possible model for data collection is the incident reporting system administered by the National Transportation Safety Board... (<a href="https://www.nap.edu/read/1581/chapter/3#36" target="_blank">chapter 3</a>)</blockquote>
<p>Second, I am trying to do searches such as "cites "Computers at Risk" and contains 'NTSB'." I have tried without luck to do this on <a href="https://scholar.google.com/scholar?hl=en&q=computers+at+risk&btnG=&as_sdt=1%2C48&as_sdtp=" target="_blank">Google Scholar</a>, <a href="https://academic.microsoft.com/#/detail/1592526964" target="_blank">Microsoft Academic</a> and <a href="https://www.semanticscholar.org/search?q=computers%20at%20risk&sort=relevance&ae=false" target="_blank">Semantic Scholar</a>. Only Google seems to be reliably identifying that report. Is there a good way to perform such a search?</p>Introducing Cyber Portfolio Management2017-02-21T00:00:00Zhttps://shostack.org/blog/introducing-cyber-portfolio-management/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>At RSA'17, I spoke on "<a href="https://www.rsaconference.com/industry-topics/presentation/security-leadership-lessons-from-the-dark-side" target="_blank">Security Leadership Lessons from the Dark Side</a>."</p>
<blockquote>Leading a security program is hard. Fortunately, we can learn a great deal from Sith lords, including Darth Vader and how he managed security strategy for the Empire. Managing a distributed portfolio is hard when rebel scum and Jedi knights interfere with your every move. But that doesn’t mean that you have to throw the CEO into a reactor core. “Better ways you will learn, mmmm?”</blockquote>
<img title="Security Leadership Lessons From The Dark Side.png" src="https://shostack.org/blog/img/2017/What-the-chart-means.png" alt="" width="300" height="242" />
<p>In the talk, I discussed how "security people are from Mars and business people are from Wheaton," and how to overcome the communication challenges associated with that.</p>
<p>RSA has posted audio with slides, and you can take a listen at the link above. If you prefer the written word, I have a small ebook on Cyber Portfolio Management, a new paradigm for driving effective security programs. But I designed the talk to be the most entertaining intro to the subject.</p>
<p>Later this week, I'll be sharing the first draft of that book with people who subscribe to my "<a href="https://shostack.org/contact#adams-new-thing">Adam's New Thing</a>" mailing list. <em>Adam's New Thing</em> is my announcement list for people who hate such things. I guarantee that you'll get fewer than 13 messages a year.</p>
<p>Lastly, I want to acknowledge that at <a href="http://www.securitybsides.com/w/page/47572893/BSidesSanFrancisco2012" target="_blank">BSides San Francisco 2012</a>, Kellman Meghu made the point that "they're having a pretty good risk management discussion," and that inspired the way I kicked off this talk.</p>Learning Lessons from Incidents2017-03-03T00:00:00Zhttps://shostack.org/blog/learning-lessons-from-incidents/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>After the February, 2017 S3 incident, Amazon posted this:</p>
<blockquote>We are making several changes as a result of this operational event. While removal of capacity is a key operational practice, in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level. This will prevent an incorrect input from triggering a similar event in the future. We are also auditing our other operational tools to ensure we have similar safety checks. We will also make changes to improve the recovery time of key S3 subsystems. ("<a href="https://aws.amazon.com/message/41926/" target="_blank">Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region</a>")</blockquote>
<p>How often do you see public lessons like this in security?</p>
<p>"We have modified our email clients to not display URLs which have friendly text that differs meaningfully from the underlying anchor. Additionally, we re-write URLs, and route them through our gateway unless they meet certain criteria..."</p>
<p>Relatedly, <a href="https://codeascraft.com/2016/11/17/debriefing-facilitation-guide/" target="_blank">Etsy's Debriefing Facilitation guide</a>. Also, many people are describing this as "human error," which reminds me of Don Norman's "<a href="http://catless.ncl.ac.uk/Risks/23/07#subj10" target="_blank">Proper Understanding of 'The Human Factor'</a>:"</p>
<blockquote>...if a valve failed 75% of the time, would you get angry with the valve and simply continual to replace it? No, you might reconsider the design specs. You would try to figure out why the valve failed and solve the root cause of the problem. Maybe it is underspecified, maybe there shouldn't be a valve there, maybe some change needs to be made in the systems that feed into the valve. Whatever the cause, you would find it and fix it. The same philosophy must
apply to people.</blockquote>
<p>(Thanks to Steve Bellovin for reminding me of the Norman essay recently.)</p>More Satellites Than You Can Shake a Stick At2017-03-09T00:00:00Zhttps://shostack.org/blog/more-satellites-than-you-can-shake-a-stick-at/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>This video is really amazingly inspiring:</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/c0BpjPUT5FE?rel=0" frameborder="0" allowfullscreen=""></iframe>
<p>Not only does it show more satellites than I've ever seen in a single frame of video, but the rocket that took them up was launched by the <u>Indian Space Research Organisation</u>, [link to http://isro.gov.in/ no longer works] who managed to launch not only the largest satellite constellation ever, but had room for a few more birds in the launch. It's an impressive achievement, and it (visually) crystalizes a shift in how we approach space. Also, congratulations to the team at Planet, <a href="https://www.planet.com/pulse/planet-launches-satellite-constellation-to-image-the-whole-planet-daily/" target="_blank">the ability to image all of Earth’s landmass every day</a>.</p>
<p>Launching a micro satellite into low Earth orbit is now accessible to hobbyists. Many readers of this blog could do it. That's astounding. Stop and think about that for a moment. Our failure to have exciting follow-on missions after Apollo can obscure the fascinating things which are happening in space, as it gets cheap and almost boring to get to low Earth orbit. The Economist has a <a href="http://www.economist.com/technology-quarterly/2016-25-08/space-2016" target="_blank">good summary</a>. That's not to say that there aren't things happening further out. This is the year that contestants in the <a href="https://en.wikipedia.org/wiki/Google_Lunar_X_Prize" target="_blank">Google Lunar XPrize</a> competition must launch. Two tourists have paid a deposit to fly around the moon.</p>
<p>But what's happening close to the planet is where the economic changes will be most visible soon. That's not to say it's the only thing to watch, but the same engines will enable more complex and daring missions.</p>
<p>For more on what's happening in India around space exploration and commercialization, this is a fascinating interview with <a href="http://www.hindustantimes.com/india-news/satellite-launch-a-proof-of-isro-s-capabilities-says-founder-of-space-startups/story-GWd7n37YMXNNiNj9b9Y1PP.html" target="_blank">Susmita Mohanty</a>.</p>
<p class="credit-text">Video link: <a href="https://www.youtube.com/watch?v=c0BpjPUT5FE" target="_blank">ISRO PSLV-C37 onboard camera view of 104 satellites deployment</a></p>Syria2017-04-06T00:00:00Zhttps://shostack.org/blog/syria-2017/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>So I'm curious: on what basis is the President of the United States able to issue orders to attack the armed forces of Syria?</p>
<p>It is not on the basis of the 2001 "Authorization for Use of Military Force," cited in many instances, because there has been no claim that Syria was involved in the 9/11 attacks. (Bush and then Obama both stretched this basis incredibly, and worryingly, far. But both took care to trace back to an authorization.)</p>
<p>It is not on the basis of an emergency use of force because the United States was directly threatened.</p>
<p>Which leaves us with, as the NY Times reports:</p>
<blockquote>Mr. Trump authorized the strike with no congressional approval for the use of force, an assertion of presidential authority that contrasts sharply with the protracted deliberations over the use of force by his predecessor, Barack Obama. ("<a href="https://www.nytimes.com/2017/04/06/world/middleeast/us-said-to-weigh-military-responses-to-syrian-chemical-attack.html" target="_blank">Dozens of U.S. Missiles Hit Air Base in Syria</a>.")</blockquote>
<p>Or, as Donald Trump <a href="https://twitter.com/realDonaldTrump/status/373581528405905408" target="_blank">once said</a>:
</p><div style="text-align:center;"><img src="https://shostack.org/blog/img/2017/Trump-Syria.png" alt="Trump-Syria.png" border="0" width="450" height="210" /></div>
<p>Seriously, what is the legal basis of this order?</p>
<p>Have we really arrived at a point where the President of the United States can simply order the military to strike anywhere, anytime, at his personal discretion?</p>A New Blog2017-04-16T00:00:00Zhttps://shostack.org/blog/a-new-blog-2017/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>When I started blogging a dozen years ago, the world was different. Over time, I ended up with at least two main blogs (Emergent Chaos and New School), and guest posting at Dark Reading, IANS, various Microsoft blogs, and other places.</p>
<p>I decided it's time to bring all that under a single masthead, and hey, get TLS finally. I've imported the EmergentChaos and New School archives, but not the others. For those others, I'll post a link here as I post there.</p>
<p>If you subscribe to either or both, I suggest subscribing here; I'll post reminders to those other blogs to move as well. If you maintain a link to either of the old blogs, please update it to point here.</p>
<p>I'm sure I've broken things in the imports, please let me know what they are.</p>
<p>In the near future, I'll set up redirects from the old blogs to here.</p>People are The Weakest Link In Security?2017-04-17T00:00:00Zhttps://shostack.org/blog/people-are-the-weakest-link-in-security/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Despite the title, end users are rarely the weak link in security. We often make impossible demands of them. For example, we want them to magically know things which we do not tell them.</p>
<p>Today's example: in many browsers, <a href="https://www.xn--80ak6aa92e.com/" target="_blank">this site</a> will display as “Apple.com”. Go ahead. Explore that for a minute, and see if you can find evidence that it's not. What I see when I visit is:</p><p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2017/Unicode-domain.png" alt="URL bar showing “Apple.com”" title="Unicode domain.png" border="0" width="400" height="150" />
</p><p>When I visit the site, I see it's a secure site. I click on the word secure, I see this:</p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2017/Dropdown.png" alt="Dropdown" title="Dropdown.png" border="0" width="172" height="335" />
<p>But it's really www.xn--80ak6aa92e.com, which is a <a href="https://en.wikipedia.org/wiki/Punycode" target="_blank">Puncycode</a> URL. Punycode is way to encode other languages so they display properly. That's good. What's not good is that there's no way to know that those are not the letters you think they are. Xudong Zheng <a href="https://www.xudongz.com/blog/2017/idn-phishing/" target="_blank">explains the problem</a>, in more depth, and writes about how to address it in the short term:</p>
<blockquote>A simple way to limit the damage from bugs such as this is to always use a password manager. In general, users must be very careful and pay attention to the URL when entering personal information. I hope Firefox will consider implementing a fix to this problem since this can cause serious confusion even for those who are extremely mindful of phishing.</blockquote>
<p>I appreciate Xudong taking the time to suggest a fix. And I don't think the right fix is that we can expect everyone to use a password manager.</p>
<p>When threat modeling, I talk about this as the interplay between threats and mitigations: threats should be mitigated and there's a threat that any given mitigation can be bypassed. When dealing with people, there's a simple test product security engineering can use. If you cannot write down the steps that a person must take to be secure, you have a serious problem. If you cannot write that list on a whiteboard, you have a serious problem. I'm not suggesting that there's an easy or obvious fix to this. But I am suggesting that as long as browser makers are telling their users that looking at the URL bar is a security measure, they have to make that security measure resist attacks. </p>Cyber Balance Sheet2017-04-18T00:00:00Zhttps://shostack.org/blog/cyber-balance-sheet/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/Cyber-Balance-Sheet.png" alt="Cyber Balance Sheet" class="single-post-main-img" width="75" height="100" style="float:right;" />
<p>There's an interesting report out from the Cyentia Institute, which is run by Wade Baker and Jay Jacobs. (Wade and Jay were amongst the principals behind the Verizon DBIR.) It's "<a href="https://go.focal-point.com/cyber-balance-sheet-report" target="_blank">The Cyber Balance Sheet</a>." It's interesting research and if you spend time with executives, worth your time.</p>Account Recovery2017-04-19T00:00:00Zhttps://shostack.org/blog/account-recovery/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Access to an account is access to an account. A lot of systems talk about "backup" authentication, but make that backup authentication available at all times. This has led to all sorts of problems, because the idea that the street you grew up on is a secret didn't make sense even before Yahoo! "<a href="https://www.nytimes.com/2016/09/23/technology/yahoo-hackers.html" target="_blank">invalidated</a>"it. Not to mention that even when answers to these questions are freeform, they tend to have only a few bits of entropy. Colors? First names? All have distributions. Then there's the ones who insist they know your answers:</p>
<img style="display:block;" src="https://shostack.org/blog/img/2017/ua-acct-recovery-qs.png" alt="United Airlines Account Recovery Questions" width="289" height="211" />
<p>One of the people who's focused on really improving account recovery is Brad Hill, and at F8, Facebook announced some new tech which I think is a very useful new point in the design space.</p>
<blockquote>As developers, we talk a lot about building experiences that people love. But there’s one experience that never fails to elicit a groan from people everywhere: recovering an account after forgetting your password.
<br />[...]<br />
Delegated Account Recovery helps people and businesses recover their accounts using the services that they trust. It is an open protocol that gives companies the ability to provide better and more secure options to their customers for regaining access to their accounts. Facebook — and other providers in the future — can help people verify who they are when they forget their password, lose their two-factor codes, or don't want to answer security questions based on personal information. ("<cite><a href="https://www.facebook.com/notes/protect-the-graph/delegated-account-recovery-now-available-in-beta/1875376806035795/" target="_blank">Delegated Account Recovery Now Available in Beta</a></cite>.")</blockquote>
<p>It's worth checking out.</p>
<p>And not that I'm trying to make trouble for anyone, but at what point does relying on use of a "secret" question like "street you grew up on" become the sort of unfair trade practice that garners regulatory attention? My guess is that the availability of credible alternatives brings that day closer.</p>How Not to Design an Error Message2017-04-20T00:00:00Zhttps://shostack.org/blog/how-not-to-design-an-error-message/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/SC07FireAlarm.png" alt="fire alarm" class="single-post-main-img" width="112" height="112" style="float:left;" />
<p>The voice shouts out: “Detector error, please see manual.” Just once, then a few hours later. And when I did see the manual, I discovered that it means "<b>Alarm has reached its End of Life</b>".</p>
<p>No, really. That's how my fire alarm told me that it's at its end of life. By telling me to read the manual. Why it doesn't say "device has reached end of life?" That would be direct and to the point. But no. When you press the button, it says "please see manual." Now, this was a 2009 device, so maybe, just maybe, there was a COGS issue in how much storage was needed.</p>
<p>But sheesh. Warning messages <a href="https://cups.cs.cmu.edu/soups/2011/posters/soups_posters-Reeder.pdf" target="_blank">should</a> be actionable, explanatory and tested. At least it was loud and annoying.</p>Cyber Grand Shellphish2017-04-24T00:00:00Zhttps://shostack.org/blog/cyber-grand-shellphish/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>There's a very interesting <u>paper</u> [link to http://phrack.org/papers/cyber_grand_shellphish.html no longer works] on the Cyber Grand Challenge by team Shellphish. Lots of details about the grand challenge itself, how they designed their software, how they approached the scoring algorithm, and what happened in the room.</p>
<p>There's lots of good details, but perhaps my favorite is:</p>
<blockquote>How would a team that did *nothing* do? That is, if a team connected and then ceased to play, would they fare better or worse than the other players? We ran a similar analysis to the "Never patch" strategy previously (i.e., we counted a CS as exploited for all rounds after its first exploitation against any teams), but this time removed any POV-provided points. In the CFE, this "Team NOP" would have scored 255,678 points, barely *beating* Shellphish and placing 3rd in the CGC.</blockquote>
<p>The reason I like this is that scoring systems are <strong>hard</strong>. Really, really hard. I know that DARPA spent substantial time and energy on the scoring system, and this outcome happened anyway. We should not judge either DARPA or the contest on that basis, because it was hard to see that that would happen ahead of time: it's a coincidence of the scores teams actually achieved.</p>“...the Elusive Goal of Security as a Scientific Pursuit”2017-04-25T00:00:00Zhttps://shostack.org/blog/the-elusive-goal-of-security-as-a-scientific-pursuit/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>That's the subtitle of a new paper by Cormac Herley and Paul van Oorschot, "<cite><a href="http://people.scs.carleton.ca/~paulv/papers/oakland2017science.pdf">SoK: Science, Security, and the Elusive Goal of Security as a Scientific Pursuit</a></cite>," forthcoming in IEEE Security & Privacy.</p>
<blockquote>The past ten years has seen increasing calls to make security research more “scientific”. On the surface, most agree that this is desirable, given universal recognition of “science” as a positive force. However, we find that there is little clarity on what “scientific” means in the context of computer security research, or consensus on what a “Science of Security” should look like. We selectively review work in the history and philosophy of science and more recent work under the label “Science of Security”. We explore what has been done under the theme of relating science and security, put this in context with historical science, and offer observations and insights we hope may motivate further exploration and guidance. Among our findings are that practices on which the rest of science has reached consensus appear little used or recognized in security, and a pattern of methodological errors continues unaddressed.</blockquote>Cassini2017-04-28T00:00:00Zhttps://shostack.org/blog/cassini/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>This image isn't Saturn's Rings, but an image of Saturn from its pole to equator.</p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2017/Saturn.jpg" alt="image of Saturn from Cassini" border="0" width="256" height="256" />
<p>Sadly, many of the sites reporting on Cassini's dive through Saturn's rings — I'm going to say that again — Cassini's first dive through Saturn's rings — don't explain the photos. I'll admit it, I thought I was looking at the rings. <a href="http://www.space.com/36645-cassini-spacecraft-ring-dive-saturn-surprises.html" target="_blank">Space.com</a> has the explanations.</p>Threat Modeling & IoT2017-05-01T00:00:00Zhttps://shostack.org/blog/threat-modeling-and-iot/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Threat modeling internet-enabled things is similar to threat modeling other computers, with a few special tensions that come up over and over again. You can start threat modeling IoT with the four question framework:</p>
<ol>
<li>What are you building?</li>
<li>What can go wrong?</li>
<li>What are you going to do about it?</li>
<li>Did we do a good job?</li>
</ol>
<p>But there are specifics to IoT, and those specifics influence how you think about each of those questions. I'm helping a number of companies who are shipping devices, and I would love to fully agree that "<a href="https://twitter.com/DonAndrewBailey/status/858740722454990849" target="_blank">consumers shouldn't have to care about the device's security model. It should just be secure. End of story.</a>" I agree with Don Bailey on the sentiment, but frequently the tensions between requirements mean that what's secure is not obvious, or that "security" conflicts with "security." (I model requirements as part of 'what are you building.')</p>
<p>When I train people to threat model, I use this diagram to talk about the interaction between threats, mitigations, and requirements:</p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2017/threats-mitigations-requirements.png" alt="threats mitigations requirements" border="0" width="550" height="250" />
<p>The interaction has a flavor in working with internet-enabled things, and that interaction changes from device to device. There are some important commonalities.</p>
<p>When looking at <strong>what you're building</strong>, IoT devices typically lack sophisticated input devices like keyboards or even buttons, and sometimes their local output is a single LED. One solution is to put a web server on the device listening, and to pay for a sticker with a unique admin password, which then drives customer support costs. Another solution is to have the device not listen but to reach out to your cloud service, and let customers register their devices to their cloud account. This has security, privacy, and COGS <del>downsides</del> tradeoffs. [Update: I said downsides, but it's more a different set of attack vectors become relevant in security. COGS is an ongoing commitment to operations; privacy is dependent on what's sent or stored.]</p>
<p>When asking <strong>what can go wrong</strong>, your answers might include "a dependency has a vulnerability," or "an attacker installs their own software." This is an example of security being in tension with itself is the ability to patch your device yourself. If I want to be able to recompile the code for my device, or put a safe version of zlib on there, I ought to be able to do so. Except if I can update the device, so can attackers building a botnet, and 99.n% of typical consumers for a smart lightbulb are not going to patch themselves. So we get companies requiring signed updates. Then we get to the reality that most consumer devices last longer than most silicon valley companies. So we want to see a plan to release the key if the company is unable to deliver updates. And that plan runs into the realities of bankruptcy law, which is that that signing key is an asset, and its hard to value, and bankruptcy trustees are unlikely to give away your assets. There's a decent pattern (allegedly from the world of GPU overclocking), which is that you can intentionally make your device patchable by downloading special software and moving a jumper. This requires a case that can be opened and reclosed, and a jumper or other DFU hardware input, and can be tricky on inexpensive or margin-strained devices.</p>
<p>That COGS (cost of goods sold) downside is not restricted to security, but has real security implications, which brings us to the question of <strong>what are you going to do about it</strong>. Consumers are not used to subscribing to their stoves, nor are farmers used to <a href="https://securityledger.com/2017/03/hobbled-by-dmca-famers-are-jailbreaking-tractors/" target="_blank">subscribing to their tractors</a>. Generally, both have better things to do with their time than to understand the new models. But without subscription revenue, it's very hard to make a case for ongoing security maintenance. And so security comes into conflict with consumer mental models of purchase.</p>
<p>In the IoT world, the question of <strong>did we do a good job</strong> becomes have we done a <em>good enough</em> job? Companies believe that there is a first mover advantage, and this ties to points that Ross Anderson made long ago about the tension between security and economics. Good threat modeling helps companies get to those answers faster. Sharing the tensions help us understand what the tradeoffs look like, and with those tensions, organizations can better define their requirements and get to a consistent level of security faster.</p>
<p>I would love to hear your experiences about other issues unique to threat modeling IoT, or where issues weigh differently because of the IoT nature of a system!</p>
<p>(Incidentally, this came from a <a href="https://twitter.com/DinisCruz/status/858750057750941696" target="_blank">question</a> on Twitter; threading on Twitter is now <a href="https://9to5mac.com/2015/01/25/tweetbot-mac-twitters-war-on-developers/" target="_blank">worse</a> than it was in 2010 or 2013, and since I have largely <a href="https://adam.shostack.org/blog/2017/01/the-dope-cycle-and-the-two-minutes-hate/">abandoned the platform</a>, I can't figure out who's responding to what. A few good points I see include:</p>
<ul>
<li><a href="https://twitter.com/cybergibbons/status/858699596461199360" target="_blank">...We have workable solutions to most of the problems. But people ridicule users instead.</a> (The CyberGibbons)</li>
<li><a href="https://twitter.com/DonAndrewBailey/status/858742863118688256" target="_blank">One of the problems with IoT security is that hackers are commonly not engineers, yet they want to advise engineers on security decisions.</a> (Don Bailey)</li>
</ul>Threat Modeling and Star Wars2017-05-04T00:00:00Zhttps://shostack.org/blog/threat-modeling-and-star-wars/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>IANS members should have access today to a new faculty report I wrote, entitled "<a href="https://portal.iansresearch.com/content/2909/threat-modeling-in-an-agile-world" target="_blank">Threat Modeling in An Agile World</a>." Because it's May the Fourth, I thought I'd share the opening:</p>
<blockquote>
As Star Wars reaches its climax, an aide approaches Grand Moff Tarkin to say, “We've analyzed their attack pattern, and there is a danger.” In one of the final decisions he makes, Tarkin brushes aside those concerns. Likewise, in Rogue One, we hear Galen Urso proclaim that the flaw is too subtle to be found. But that's bunk. There's clearly no blow-out sections or baffles around the reactor: if there's a problem, the station is toast. A first year engineering student could catch it.
<br /><br />
You don't have to be building a Death Star to think about what might go wrong before you complete the project. The way we do that is by “threat modeling,” an umbrella term for anticipating and changing the problems that a system might experience. Unfortunately, a lot of the advice you'll hear about threat modeling makes it seem a little bit like the multi-year process of building a Death Star.
</blockquote>A Privacy Threat Model for The People of Seattle2017-05-08T00:00:00Zhttps://shostack.org/blog/privacy-threat-model-for-the-people-of-seattle/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Some of us in the <u>Seattle Privacy Coalition</u> [link to https://www.seattleprivacy.org/ no longer works] have been talking about creating a model of a day in the life of a citizen or resident in Seattle, and the way data is collected and used; that is the potential threats to their privacy. In a typical approach, we focus on a system that we're building, analyzing or testing. In this model, I think we need to focus on the people, the 'data subjects.'</p>
<p>I also want to get away from the one by one issues, and help us look at the problems we face more holistically.</p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2017/FBI-sues-seattle.png" alt="Feds Sue Seattle over FBI Surveillance" class="single-post-main-img" width="599" height="337" />
<p>The general approach I use to threat model is based on 4 questions:</p>
<ol>
<li>What are you working on? (building, deploying, breaking, etc)</li>
<li>What can go wrong?</li>
<li>What are you going to do about it?</li>
<li>Did you do a good job?</li>
</ol>
<p>I think that we can address the first by building a model of a day, and driving into specifics in each area. For example, get up, check the internet, go to work (by bus, by car, by bike, walking), have a meal out...</p>
<p>One question that we'll probably have to work on is how to address what can go wrong in a model this general? Usually I threat model specific systems or technologies where the answers are more crisp. Perhaps a way to break it out would be:</p>
<ol>
<li>What is a Seattlite's day?</li>
<li>What data is collected, how, and by whom? What models can we create to help us understand? Is there a good balance between specificity and generality?</li>
<li>What can go wrong? (There are interesting variations in the answer based on who the data is about)</li>
<li>What could we do about it? (The answers here vary based on who's collecting the data.)</li>
<li>Did we do a good job?</li>
</ol>
<p>My main goal is to come away from the exercise with a useful model of the privacy threats to Seattleites. If we can, I'd also like to understand how well this "flipped" approach works.</p>
<p>[As I've discussed this, there's a lot of interest in what comes out and what it means, but I don't expect that to be the main focus of discussion on Saturday. For example,] There are <del>also</del> policy questions like, "as the city takes action to collect data, how does that interact with its <a href="http://www.seattle.gov/iandraffairs/programs/welcoming-city" target="_blank">official goal to be a welcoming city</a>?" I suspect that the answer is 'not very well,' and that there's an opportunity for collaboration here across the political spectrum. Those who want to run a 'welcoming city' and those who distrust government data collection can all ask how Seattle's new privacy program will help us.</p>
<p>In any event, a bunch of us will be getting together at the Delridge Library this Saturday, May 13, at 1PM to discuss for about 2 hours, and anyone interested is welcome to join us. We'll just need two forms of ID and your consent to our outrageous terms of service. (Just kidding. We do not check ID, and I simply ask that you show up with a goal of respectful collaboration, and a belief that everyone else is there with the same good intent.)</p>
Warrants for Cleaning Malware in Kelihos2017-05-10T00:00:00Zhttps://shostack.org/blog/warrants-for-cleaning-malware-in-kelihos/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>This is an thought-provoking story:</p>
<blockquote>And perhaps most unusual, the FBI recently obtained a single warrant in Alaska to hack the computers of thousands of victims in a bid to free them from the global botnet, Kelihos.
<br /><br />
On April 5, Deborah M. Smith, chief magistrate judge of the US District Court in Alaska, greenlighted this first use of a controversial court order. Critics have since likened it to a license for mass hacking. ("<a href="https://arstechnica.com/tech-policy/2017/04/fbi-allays-some-critics-with-first-use-of-new-mass-hacking-warrant/" target="_blank">FBI allays some critics with first use of new mass-hacking warrant</a>," Aliya Sternstein, Ars Technica)</blockquote>
<p>One of the issues in handling malware at scale is that the law prohibits unauthorized access to computers. And that's not a bad principle, but it certainly makes it challenging to go and clean up infections at scale.</p>
<p>So the FBI getting a warrant to authorize that may be an interesting approach, with many cautions about the very real history of politicized spying, of COINTEL, of keeping the use of 0days secret. But I don't want to focus on those here. What I want to focus on is what did a judge actually authorize <em>in this case</em>. Is this a warrant for mass hacking? It doesn't appear to be, but it does raise issues of what's authorized, how those things were presented to the judge, and points to future questions about what authorization might include.</p>
<p>So what's authorized?</p>
<p>The <a href="https://www.justice.gov/opa/press-release/file/956521/download" target="_blank">application for a warrant</a> is fairly long, at 34 pages, much of it explaining why the particular defendant is worthy of a search, and the "time and manner of execution of search" starts on page 30.</p>
<p>What the FBI apparently gets to do is to operate a set of supernodes for the Kelihos botnet, and "The FBI's communications, however, will not contain any commands, nor will they contain IP addresses of any of the infected computers. Instead, the FBI replies will contain the IP and routing information for the FBI's 'sinkhole' server."</p>
<p>What questions does that raise?</p>
<p>A first technical point is that for the FBI's replies to reach those infected computers, there must be packets sent over the internet. For those packets to reach the infected computer's, they need addressing, in the form of an IP address. Now you can argue that those IP addresses of infected computers are in the headers, not the content of the packets. The nuance of content versus headers is important in some laws. In fact, the warrant para 66 explicitly states that the FBI will gather that data, and then provide it to ISPs, who they hope will notify end users. (I've written about that experience in "<a href="https://adam.shostack.org/blog/2014/01/the-worst-user-experience-in-computer-security/">The Worst User Experience In Computer Security?</a>.")</p>
<p>Another technical point is that the warrant says "The FBI with the assistance of private parties..." It's not clear to me what constraints might apply to those parties. Can they record netflow or packet captures? (That might be helpful in demonstrating exactly what they did later, and also create a privacy conundrum which the FBI takes apparent pains to avoid.) What happens if an unrelated party captures that data? For example, let's say one of the parties chooses to operate their sinkhole in an AWS node. I suspect AWS tracks netflows. A warrant to obtain that data might seem pretty innocent to a judge.</p>
<p>The idea that the FBI will not send any commands is, on the surface, a fine restriction, but it eliminates many possibilities for cleaning up. What could we imagine in the future?</p>
<p>For example, a command that could be useful would be to remove Kelihos from startup items? How about remove C:\Program Files\Kelihos.exe? Removing files from my computer without permission is pretty clearly a form of unauthorized access. It's a form that's probably in the interests of the vast majority of the infected. We might want to allow a well-intentioned party to do so.</p>
<p>But what if the commands fail? When Sony built a tool to remove the rootkit their DRM installed, the cleanup tool <a href="https://freedom-to-tinker.com/2005/11/15/sonys-web-based-uninstaller-opens-big-security-hole-sony-recall-discs/" target="_blank">opened a big security hole</a>. It's hard to write good code. It's very hard to write code that's free of side effects or security issues.</p>
<p>What if there's disagreement over what fits within the definition of well-intentioned? What if someone wants to <a href="http://www.huffingtonpost.com/james-pinkstone/apple-stole-my-music-no-s_b_9873638.html" target="_blank">remove duplicate files</a> to help me save disk space? What if they want to remove blasphemous or otherwise illegal files?</p>
Ross Anderson on Edge2017-05-11T00:00:00Zhttps://shostack.org/blog/ross-anderson-on-edge/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>The Edge is an interesting site with in depth interviews with smart folks. There's a long interview with Ross Anderson <a href="https://www.edge.org/conversation/ross_anderson-the-threat">published recently</a>.</p>
<p>It's a big retrospective on the changes over thirty years, and there's enough interesting bits that I'll only quote one:</p>
<blockquote>The next thing that’s happened is that over the past ten years or so, we’ve begun to realize that as systems became tougher and more difficult to penetrate technically, the bad guys have been turning to the users. The people who use systems tend to have relatively little say in them because they are a dispersed interest. And in the case of modern systems funded by advertising, they’re not even the customer, they’re the product.</blockquote>
<p>Take the time to listen. Ross's emphasis is a bit lost in the text.</p>Hospital Ransomware2017-05-13T00:00:00Zhttps://shostack.org/blog/hospital-ransomware/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>[Update, May 22, added link to "Observing".]</p>
<p>Good posts by <a href="https://www.lightbluetouchpaper.org/2017/05/13/bad-malware-worse-reporting/" target="_blank">Ross Anderson</a>, <a href="https://conspicuouschatter.wordpress.com/2017/05/13/the-politics-of-the-wannacrypt-ransomware-outbreak/" target="_blank">George Danezis</a> and <a href="https://www.cs.columbia.edu/~smb/blog//2017-05/2017-05-12.html" target="_blank">Steve Bellovin</a> say much of what I'd wanted to say, and more. So go take a read. [Also worth reading “<u>Observing the WannaCry fallout: confusing advice and playing the blame game</u>”.] [link to https://www.riscs.org.uk/2017/05/19/observing-the-wannacry-fallout-confusing-advice-and-playing-the-blame-game/ no longer works]</p>
<p>To what Bellovin says, I would add that 15 years ago, Steve Beattie, Crispin Cowan and I did some math for <a href="https://adam.shostack.org/time-to-patch-usenix-lisa02.pdf" target="_blank">Timing the Application of Security Patches for Optimal Uptime</a>, and estimated that likelihood of attack starts to exceed likelihood of damage from the patch at around 10 days. To my knowledge, no one has updated the dataset or re-run the numbers, but I would expect that improvements in test automation and improvement in attack frameworks make that closer to patch release, not further from it. My experience is that many organizations with dependencies on older technology also have not invested in test automation that enables even fast 'smoke testing' of their systems. Such test rigs allow you to quickly start the clock that Steve hypothesizes.</p>
<p>Also, see "<a href="http://www.antipope.org/charlie/blog-static/2017/05/rejection-letter.html" target="_blank">Rejection Letter</a>" by Charlie Stross, and "<a href="https://www.malwaretech.com/2017/05/how-to-accidentally-stop-a-global-cyber-attacks.html" target="_blank">How to Accidentally Stop a Global Cyber Attacks</a>."</p>Security Rarely Flows Downhill2017-05-16T00:00:00Zhttps://shostack.org/blog/security-rarely-flows-downhill/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>I was recently in a meeting at a client site where there was a lot of eye rolling going on. It's about not understanding <a href="https://adam.shostack.org/blog/2017/02/groundrules-on-complaining-about-security/">the ground rules</a>, and one of those groundrules is that security rarely flows downhill.</p>
<img src="https://shostack.org/blog/img/2017/architecture-stack.png" alt="architecture stack" class="single-post-main-img" width="213" height="168" style="float:right;" />
<p>That is, if you're in a stack like the one to the right, the application is vulnerable to the components underneath it. The components underneath should isolate themselves from (protect themselves against) things higher in the stack.</p>
<p>I can't talk about the meeting in detail (client confidentiality), so I'm going to pick an article I saw where some of the same thinking. I'm using that article as a bit of a straw man. This article was <em>convenient</em>, not unique. These points are more strongly made if they are grounded in real quotes, rather than ones I make up. </p>
<blockquote>“The lack of key validation (i.e. the verification that public keys are not invalid) is therefore not a major security risk. But I believe that validating keys would make Signal even more secure and robust against maliciously or accidentally invalid keys,” the researchers explained.
<br /><br />
In this farfetched example, researchers explain, communications would be intentionally compromised by the sender. The goal, could be to give the message recipient the appearance of secure communications in hopes they may be comfortable sharing something they might not otherwise.
<br />...<br />
“People could also intentionally install malware on their own device, intentionally backdoor their own random number generator, intentionally publish their own private keys, or intentionally broadcast their own communication over a public loudspeaker. If someone intentionally wants to compromise their own communication, that’s not a vulnerability,” Marlinspike said. (I'm choosing to not link to the article, because, I don't mean to call out the people making that argument.)</blockquote>
<p>So here's the rule: Security doesn't flow downhill without extreme effort. If you are an app, it is hard to protect the device as a whole. It is hard to protect yourself if the user decides to compromise their device or mess up their RNG. And Moxie is right to not try to improve the security of Android or IoS against these attacks: it's very difficult to do from where he sits. Security rarely flows downhill.</p>
<p>There are exceptions. Companies like Good Technologies built complex crypto to protect corporate data on devices that might be compromised. And best I understand it, it worked by having a server send a set of keys to the device, and the outer layer of Good decrypted the real app and data with those keys, then got more keys. And they had some anti-debugging lawyers in there (oops, is that a typo?) so that the OS couldn't easily steal the keys. And it was about the best you could do with the technology that phone providers were shipping. It is, netted out, a whole lot more fair than employers demanding the ability to wipe your personal device and your personal data.</p>
<p>So back to that meeting. A security advisor from a central security group was trying to convince a product team of something very much like "the app should protect itself from the OS." He wasn't winning, and he was trotting out arguments like "it’s not going to cost [you] anything." But that was obviously not the case. The cost of anything is the foregone alternative, and these "stone soup improvements" to security (to borrow from Mordaxus) were going to come at the expense of other features. Even if there was agreement on what direction to go, it was going to another few meetings to get these changes designed, and then it was going to cost a non-negligible number of programmer days to implement, and more to test and document.</p>
<p>That's not the most important cost. Far more important than the cost of implementing the feature was the effort to get to agreement on these new features versus others. </p>
<p>But even that is not the most important cost. The real price was respect for the central security organization. Hearing those arguments made it that much less likely that those engineers were going to see the "security advisor," or their team, as helpful.</p>
<p>As security engineers, we have to pick battles and priorities just like other engineers. We have to find the improvements that make the most sense for their cost, and we have to work those through the various challenges.</p>
<p>One of the complexities of consulting is that it can be trickier to interrupt in a large meeting, and you have less ability to speak for an organization. I'd love your advice on what a consultant should do when they watch someone at a client site demonstrating skepticism. Should I have stepped in at the time? How? (I did talk with a more senior person on the team who is working the issue.)</p>Well-deserved accolades2017-05-17T00:00:00Zhttps://shostack.org/blog/well-deserved-accolades/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/ParisaTabriz.png" alt="line drawing of Parisa Tabriz" class="single-post-main-img" width="300" height="216" style="float:right;" />
<p>When I saw that Wired had created a list, "<a href="https://www.wired.com/2017/04/20-people-creating-future-next-list-2017/" target="_blank">20 People Who Are Creating the Future</a>," I didn't expect to see anyone in security on it.</p>
<p>I was proven wrong in a wonderful way — #1 on their list is Parisa Tabriz, under the headline "Put Humans First, Code Second." A great choice, a well-deserved honor for Parisa, and a bit of a rebuke to those who want to focus on code vulnerabilities, and say "you can't patch human stupidity."</p>Certificate pinning is great in stone soup2017-05-23T00:00:00Zhttps://shostack.org/blog/certificate-pinning-is-great-in-stone-soup/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>In his "<a href="https://adam.shostack.org/blog/2017/02/groundrules-on-complaining-about-security/">ground rules</a>" article, Mordaxus gives us the phrase "stone soup security," where everyone brings a little bit and throws it into the pot. I always try to follow Warren Buffet's advice, to praise specifically and criticize in general.</p>
<p>So I'm not going to point to a specific talk I saw recently, in which someone talked about pen testing IoT devices, and stated, repeatedly, that the devices, and device manufacturers, should implement certificate pinning. They repeatedly discussed how easy it was to add a self-signed certificate and intercept communication, and suggested that the right way to mitigate this was certificate pinning.</p>
<p>They were wrong.</p>
<p>If I own the device and can write files to it, I can not only replace the certificate, but I can change a binary to replace a 'Jump if Equal' to a 'Jump if Not Equal,' and bypass your pinning. If you want to prevent certificate replacement by the device owner, you need a trusted platform which only loads signed binaries. (The interplay of mitigations and bypasses that gets you
there is a fine exercise if you've never worked through it.)</p>
<p>When I train people to threat model, I use this diagram to talk about the interaction between threats, mitigations, and requirements:</p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2017/threats-mitigations-requirements.png" alt="Threats Mitigations requirements" class="single-post-main-img" width="550" height="250" />
<p>Is it a requirement that the device protect itself from the owner? If you're threat modeling well, you can answer this question. You work through these interplaying factors. You might start from a threat of certificate replacement and work through a set of difficult to mitigate threats, and change your requirements. You might start from a requirements question of "can we afford a trusted bootloader?" and discover that the cost is too high for the expected sales price, leading to a set of threats that you choose not to address. This goes to the core of "what's your threat model?" Does it include the device owner?</p>
<p>Is it a requirement that the device protect itself from the owner? This question frustrates techies: we believe that we bought it, we should have the right to tinker with it. But we should also look at the difference between the iPhone and a PC. The iPhone is more secure. I can restore it to a reasonable state easily. That is a function of the device protecting itself from its owner. And it frustrates me that there's a Control Center button to lock orientation, but not one to turn location on or off. But I no longer jailbreak to address that. In contrast, a PC that's been infected with malware is <em>hard</em> to clean to a demonstrably good state.</p>
<p>Is it a requirement that the device protect itself from the owner? It's a yes or no question. Saying yes has impact on the physical cost of goods. You need a more <del>expensive</del> sophisticated boot loader. You have to do a bunch of engineering work which is both straightforward and exacting. If you don't have a requirement to protect the device from its owner, then you don't need to pin the certificate. You can take the money you'd spend on protecting it from its owner, and spend that money on other features.</p>
<p>Is it a requirement that the device protect itself from the owner? Engineering teams deserve a crisp answer to this question. Without a crisp answer, security risks running them around in circles. (That crisp answer might be, "we're building towards it in version 3.")</p>
<p>Is it a requirement that the device protect itself from the owner? Sometimes when I deliver training, I'm asked if we can fudge, or otherwise avoid answering. My answer is that if security folks want to own security decisions, they must own the hard ones. Kicking them back, not making tradeoffs, not balancing with other engineering needs, all of these reduce leadership, influence, and eventually, responsibility.</p>
<p>Is it a requirement that the device protect itself from the owner?</p>
<p>Well, is it?</p>Adam & Chris Wysopal webcast2017-05-24T00:00:00Zhttps://shostack.org/blog/adam-chris-wysopal-webcast/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>(Today) Wednesday, May 24th, 2017 at 1:00 PM EDT (17:00:00 UTC), Chris Wysopal and I are doing a SANS webcast, "<a href="https://www.sans.org/webcasts/104267" target="_blank">Choosing the Right Path to Application Security</a>." I'm looking forward to it, and hope you can join us!</p>
<p>Update: the webcast is now archived, and the white paper associated with it, "<u>Using Cloud Deployment to Jump-Start Application Security</u>," [link to https://www.sans.org/reading-room/whitepapers/analyst/cloud-deployment-jump-start-application-security-37805 no longer works] is in the <a href="https://www.sans.org/reading-room/analysts-program" target="_blank">SANS reading room</a>.</p>The Ultimate Stopping Machine?2017-05-30T00:00:00Zhttps://shostack.org/blog/ultimate-stopping-machine/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/immobilizer.jpg" alt=" " class="single-post-main-img" width="338" height="190" style="float:right;" />
<p>Security is hard in the real world. There's an interesting story on Geekwire, "<a href="https://www.geekwire.com/2017/bmws-reachnow-investigating-cases-cars-getting-stuck-washington-state-ferries/" target="_blank">BMW’s ReachNow investigating cases of cars getting stuck on Washington State Ferries</a>." The story:</p>
<blockquote>a ReachNow customer was forced to spend four hours on the Whidbey Island ferry this weekend because his vehicle’s wheels were locked, making the vehicle immovable unless dragged. The state ferry system won’t let passengers abandon a car on the ferry because of <em>security</em> concerns.</blockquote>
<p>BMW's response:</p>
<blockquote>We believe that the issue is related to a <em>security</em> feature built into the vehicles that kicks in when the car is moving but the engine is turned off and the doors are closed. </blockquote>
<p>I first encountered these immobilizing devices on a friend's expensive car in 1999 or so. The threat is thieves equipped with a towtruck. It's not super-surprising to discover that a service like Reachnow, where "random" people can get into a car and drive it away will have tracking devices in those cars. It's a little more surprising that there are immobilizers in them.</p>
<p>Note the competing definitions of security (emphasis added in both quotes above):</p>
<ul>
<li>BMW is worried about theft.</li>
<li>The state ferry system is worried about car bombs.</li>
<li>Passengers might worry about being detained next to a broken car, or about bugs in the immobilization technology. What if that kicks in on the highway because "a wire gets loose"?</li>
</ul>
<p>In "<a href="https://adam.shostack.org/blog/2015/11/the-evolution-of-secure-things/">The Evolution of Secure Things</a>," I wrote:</p>
<blockquote>It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.</blockquote>
<p>Surprise! There's a way to move a vehicle a long distance with the engine off, and it's not a tow truck!</p>
<p>Real products, introduced into the real world, will often involve surprises like this. One characteristic of a good security architecture is that there's the right degree of adjustability in the product, and judging that is still a matter of engineering experience.</p>
<p>Similarly, one of the lessons of entrepreneurship is that the problems you experience are often surprising. Investors look for flexibility in the leaders they back because they know that they'll be surprised along the way.</p>Maintaining & Updating Software2017-06-01T00:00:00Zhttps://shostack.org/blog/maintaining-updating-software/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>In the aftermath of Wannacry, there's a lot of discussion of organizations not updating their systems. There are two main reasons organizations don't update the operating systems they run: compatibility and training. Training is simpler — you have to train people about the changes to the Start Menu to move them to Windows 8, and that's expensive. (I sometimes worked with sales people when I was at Microsoft, and they could have managed this <em>much</em> better than they have.)</p>
<p>Compatability is harder. In his excellent blog post on "<a href="https://www.cs.columbia.edu/~smb/blog/2017-05/2017-05-16.html" target="_blank">Who Pays?</a>," Steve Bellovin discusses how "achieving a significant improvement in a product's security generally requires a new architecture and a lot of changed code. It's not a patch, it's a new release." There are substantial changes to the ways memory is managed and laid out between the versions, including ASLR, DEP, CFG, etc. There are many changes and seeing how they impact real programs is hard. That's part of the reason Microsoft released the Enhanced Mitigation <del>Experiment</del> Experience Toolkit.</p>
<img src="https://shostack.org/blog/img/" alt="" class="single-post-main-img" style="display:block; margin:10px auto;" />
<p>This doesn't just apply to platforms, it also applies to libraries. (For example, see Jon Callas, "<a href="http://www.metzdowd.com/pipermail/cryptography/2014-April/020977.html" target="_blank">Apple and OpenSSL</a>."</p>
<p>Even when compatibility is generally very high, someone needs to test the code to see if it works, and that costs money. It costs a lot more money if you don't have test code, test documentation (YAGNI!) or if, umm, your test code has dependencies on libraries that don't work on the new platform...It is unlikely that re-certifying on a new platform is less than weeks of work, and for larger products, it could easily extend to person years of work, to maintain software that's already been sold. The costs are non-trivial, which brings me back to Steve Bellovin's post:</p>
<blockquote>There are, then, four basic choices. We can demand that vendors pay, even many years after the software has shipped. We can set up some sort of insurance system, whether run by the government or by the private sector. We can pay out of general revenues. If none of those work, we'll pay, as a society, for security failures.</blockquote>
<p>This is a fair summary, and I want to add two points.</p>
<p>First, it remains fashionable to bash Microsoft for all the world's security woes, there is a far bigger problem, that of open source, which is usually released without any maintenance plan. (My friends at the <a href="https://www.coreinfrastructure.org/" target="_blank">Core Infrastructure Initiative</a> are working on this problem.)</p>
<ul>
<li>Code is speech. The United States rarely imposes liabilities on people for speaking, and it seems downright perverse to do so more if they let others use their words.</li>
<li>There may not be an organization, or the author of the code may have explicitly disclaimed that they're responsible. If there is, and we as a society suddenly impose unexpected costs on them, that might inhibit future funding of open source. (As an example, the author of Postfix was paid by IBM for a while. Does IBM have responsibility for Postfix, now that he's left and gone to Google?) How does the "releasing code" calculus change if you're required to maintain it for ever?</li>
<li>The <a href="https://opensource.org/osd-annotated" target="_blank">Open Source Definition</a> prohibits discrimination against fields of endeavor, and requires licenses be technology neutral. So it seems hard to release an open source library and forbid the use of code in long-lived consumer goods.</li>
<li>What if Bob makes a change to Alice's code, and introduces a security bug in a subtle way? What if Alice didn't document that the code was managing a security issue? Does she need to fix it? </li>
</ul>
<p>Second, the costs to society will not be evenly distributed: they're going to fall on sectors with less software acumen, and places where products are repaired more than they're replaced, which tend to be the poorer places and countries.</p>
<p>[Update: Ross Anderson <a href="https://www.lightbluetouchpaper.org/2017/06/01/when-safety-and-security-become-one/" target="_blank">blogs</a> about a new paper that he wrote with Éireann Leverett and Richard Clayton. The paper is more focused on the regulatory challenge that maintaining and updating software provokes than the economics.]</p>
<p class="credit-text"><a href="https://unsplash.com/collections/206106/ruins?photo=irm6EmAwmLk" target="_blank">Photo</a> by Pawel Kadysz.</p>Bicycling and Risk2017-06-05T00:00:00Zhttps://shostack.org/blog/bicycling-and-risk/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/bike-commuter-1.jpg" alt="bike commuter" class="single-post-main-img" />
<p>A study found that those who cycle have a net 41% lower risk of premature death. Now, when I read that headline my first thought was that it was 100 people over 6 months and a statistical fluke. But no, they followed a quarter million Britons for 5 years.</p>
<p>Now, it's not obvious that it's causal. Perhaps those who are healthier choose to ride to work? But it seems reasonable to assume that getting a bunch of exercise, fresh air, and adrenaline rushes as distracted drivers read their timeslines as they drive could lead to better health.</p>
<p>The paper is "<a href="http://eprints.gla.ac.uk/138376/" target="_blank">Association between active commuting and incident cardiovascular disease, cancer, and mortality: prospective cohort study</a>," and a press discussion is at "<a href="http://www.gla.ac.uk/news/headline_522765_en.html" target="_blank">Cycling to work may cut your risk of premature death by 40%</a>."</p>
<p class="credit-text">Photo by <a href="https://unsplash.com/photos/v0TfIt-S6-o" target="_blank">Jack Alexander</a>.</p>Bicycling and Threat Modeling2017-06-06T00:00:00Zhttps://shostack.org/blog/bicycling-and-threat-modeling/adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/bikeshare-1.jpg" alt="bikeshare " class="single-post-main-img" />
<p>The Economist <a href="http://www.economist.com/news/china/21721231-some-officials-wish-they-werent-china-bikes-are-back" target="_blank">reports</a> on the rise of dockless bike sharing systems in China, along with the low tech ways that the system is getting hacked:</p>
<blockquote>The dockless system is prone to abuse. Some riders hide the bikes in or near their homes to prevent others from using them. Another trick involves photographing a bike’s QR code and then scratching it off to stop others from scanning it. With the stored image, the rider can then monopolise the machine. But customers caught misbehaving can have points deducted from their accounts, making it more expensive for them to rent the bikes.</blockquote>
<p>Gosh, you mean you give people access to expensive stuff and they ride off into the sunset?</p>
<p>Threat modeling is an umbrella for a set of practices that let an organization find these sorts of attacks early, while you have the greatest flexibility in choosing your response. There are lots of characteristics we could look for: practicality, cost-effectiveness, consistency, thoroughness, speed, et cetera, and different approaches will favor one or the other. One of those characteristics is useful integration into business.</p>
<p>You can look at thoroughness by comparing bikes to the BMW carshare program I discussed in "<a href="https://adam.shostack.org/blog/2017/05/the-ultimate-stopping-machine/">The Ultimate Stopping Machine</a>," I think that the surprise that ferries trigger an anti-theft mechanism is somewhat surprising, and I wouldn't dismiss a threat modeling technique, or criticize a team too fiercely for missing it. That is, there's nuance. I'd be more critical of a team in Seattle missing the ferry issue than I would be of a team in Boulder.) </p>
<p>In the case of the dockless bikes, however, I would be skeptical of a technique that missed "reserving" a bike for your ongoing use. That threat seems like an obvious one from several perspectives, including that the system is labelled "dockless," so you have an obvious contrast with a docked system. </p>
<p>When you find these things early, and iterate around threats, requirements and mitigations, you find opportunities to balance and integrate security in better ways than when you have to bolt it on later. (I discuss that iteration <a href="https://adam.shostack.org/blog/2017/05/certificate-pinning-is-great-in-stone-soup/">here</a> and <a href="https://adam.shostack.org/blog/2017/05/threat-modeling-iot/">here</a>.)</p>
<p>For these bikes, perhaps the most useful answer is not to focus on misbehavior, but to reward good behavior. The system wants bikes to be used, so reward people for leaving the bikes in a place where they're picked up soon? (Alternately, perhaps make it expensive to check out the same bike more than N times in a row, where N is reasonably large, like 10 or 15.)</p>
<p class="credit-text">Photo by <a href="https://unsplash.com/search/bikes?photo=UdGEXZtlx-E" target="_blank">Viktor Kern</a>.</p>Threat Modeling: What, Why and How2017-06-22T00:00:00Zhttps://shostack.org/blog/threat-modeling-what-why-how/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>I have a new article up at MISTI, "<a href="http://misti.com/infosec-insider/threat-modeling-what-why-and-how?" target="_blank">Threat Modeling: What, Why, and How?</a>." If you've read my other works on threat modeling, the part at the start about threat modeling versus threat intelligence is the part that will probably be most useful.</p>WTF? Why rebrand?2017-06-24T00:00:00Zhttps://shostack.org/blog/wtf-why-rebrand/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/WTF-logo.jpg" alt="World Taekwondo Federation logo" class="single-post-main-img" />
<p>Apparently, the <a href="http://www.bbc.com/sport/taekwondo/40391326" target="_blank">World Taekwondo Federation is changing their name</a>, because reasons.</p>IoT Security Workshop (Seattle, August)2017-06-26T00:00:00Zhttps://shostack.org/blog/iot-security-workshop-seattle-august-2017/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Jean Camp and Yoshi Kohno are organizing an interesting workshop upcoming at the University of Washington on "<a href="http://iot.soic.indiana.edu/iot-nsf-workshop/" target="_blank">Best Practices In The IoT</a>:"</p>
<blockquote>Our agenda begins with a presentation on the Federal Government initiatives in the IoT. When collecting the reading materials for emerging standards, we found nearly a thousand pages once all governmental materials are brought together...The product of the workshop will be a summary document identifying (i) a consensus set of graduated best practices for security and privacy for IoT in the home, and (ii) any gaps where best practices cannot yet be identified.</blockquote>
<p>(I believe that the workshop organizers might agree with me regards the term "<a href="https://adam.shostack.org/blog/2010/02/best-practices-for-defeating-the-term/">best practices</a>," but are driven by funders to use it.)</p>
<p>Also, they are searching for a few more sponsors if you can help in that department.</p>Links of Interest2017-06-29T00:00:00Zhttps://shostack.org/blog/links-of-interest-20170629/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<ul>
<li>It's a good thing that the Supreme Court's conservative wing is opposed to judges making law, because if they added a new term like "bona fide relationship" to immigration law, it would be <a href="https://www.nytimes.com/2017/06/27/us/trump-travel-ban-refugees-supreme-court.html" target="_blank">hugely confusing</a>. A bona fide crisis for opponents of "judicial activism."</li>
<li>If you have an AT&T email account, Verizon is going to <a href="https://techcrunch.com/2017/06/25/take-the-oath/" target="_blank">break your Flickr account</a>.</li>
<li><a href="https://www.nytimes.com/2017/06/23/technology/gmail-ads.html" target="_blank">Google Will No Longer Scan Gmail for Ad Targeting</a> Does that mean that the incremental ad revenue from learning more about people is not worth the effort to discuss privacy? </li>
</ul>DNA Replicates, Filmed at 11.2017-06-30T00:00:00Zhttps://shostack.org/blog/dna-replicates-filmed-at-11/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<blockquote>Scientists have long assumed that the DNA polymerases on the leading and lagging strands somehow coordinate with each other throughout the replication process, so that one does not get ahead of the other during the unravelling process and cause mutations.
<br /><br />
But this new footage reveals that there's no coordination at play here at all - somehow, each strand acts independently of the other, and still results in a perfect match each time.
(<a href="https://www.sciencealert.com/dna-replication-has-been-filmed-for-the-first-time-and-it-s-stranger-than-we-thought" target="_blank">DNA Replication Has Been Filmed For The First Time, And It's Not What We Expected</a>," Science Alert</blockquote>
<p>Paper: <a href="https://www.cell.com/cell/fulltext/S0092-8674(17)30634-7" target="_blank">Independent and Stochastic Action of DNA Polymerases in the Replisome</a>.</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/Sne1uO6RxLE" frameborder="0" allowfullscreen=""></iframe>Goldsworthy's Nature2017-07-03T00:00:00Zhttps://shostack.org/blog/goldsworthys-nature/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/Goldsworthy.jpg" alt="Goldsworthy" class="single-post-main-img" />
<p>Andy Goldsworthy creates effects by arranging nature. In this case, it's a set of leaves around the base of a tree. There's an <a href="http://www.goldsworthy.cc.gla.ac.uk/" target="_blank">online archive</a> of his early work, and plenty of other sites, such as <u>Melt</u> [link to http://visualmelt.com/Andy-Goldsworthy no longer works], but it seems that the artist, wisely, does not bother with a website.</p>Voter Records, SSN and Commercial Authentication2017-07-05T00:00:00Zhttps://shostack.org/blog/voter-records-ssn-and-commercial-authentication/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/verifiedbyvisa.png" alt="VerifiedbyVisa" class="single-post-main-img" style="float:right;" />
<blockquote>A Wednesday letter from the Presidential Advisory Commission on Election Integrity gives secretaries of state about two weeks to provide about a dozen points of voter data. That also would include dates of birth, the last four digits of voters' Social Security numbers... (<u>NYTimes story</u> [link to https://www.nytimes.com/aponline/2017/06/29/us/ap-us-voting-commission.html no longer works] ) Of this writing, <a href="http://www.cnn.com/2017/07/03/politics/kris-kobach-letter-voter-fraud-commission-information/index.html" target="_blank">44 states have refused</a>.</blockquote>
<p>I want to consider only the information security aspects of the <u>letter</u> [link to http://i2.cdn.turner.com/cnn/2017/images/06/30/peic.letter.to.maine[2].pdf no longer works], which also states that "Please be aware that any documents that are submitted to the full Commission will also be made available to the public."</p>
<p>Publishing a list of SSNs is prohibited by <a href="https://www.law.cornell.edu/uscode/text/42/405" target="_blank">42 USC 405(c)(2)(C)(Viii)</a>, but that only applies to "SSNs or related record[s]." Related record means "any record, list, or compilation that indicates, directly or indirectly, the identity of any individual with respect to whom a social security account number or a request for a social security account number is maintained pursuant to this clause." So its unclear to me if that law prohibits publishing the last 4 digits of the SSN in this way.</p>
<p>So, if a list of names, addresses, datas of birth and last four digits of the SSN of every voter are made available, what does that to to they myth that those selfsame four digits can be used as an authenticator?</p>
<p>I'd like to thank the administration for generating so much winning in authentication, and wish the very best of luck to everyone who now needs to scramble to find an alternate authentication technique.</p>
<p class="credit-text">Image credit: Jeff Hunsaker, "<a href="http://www.jeffreyhunsaker.com/2008/11/verified-by-visa-everything-we-tell.html" target="_blank">Verified by Visa: Everything We Tell Folks to Avoid</a>."</p>Threat Modeling Encrypted Databases2017-07-06T00:00:00Zhttps://shostack.org/blog/threat-modeling-encrypted-databases/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Adrian Colyer has an interesting summary of a recent paper, "<a href="https://blog.acolyer.org/2017/06/16/why-your-encrypted-database-is-not-secure/" target="_blank">Why your encrypted database is not secure</a>" in his excellent "morning paper" blog.</p>
<blockquote>If we can’t offer protection against active attackers, nor against persistent passive attackers who are able to simply observe enough queries and their responses, the fallback is to focus on weaker guarantees around snapshot attackers, who can only obtain a single static observation of the compromised system (e.g., an attacker that does a one-off exfiltration). Today’s paper pokes holes in the security guarantees offered in the face of snapshots attacks too.<br />...<br />Many recent encrypted databases make strong claims of “provable security” against snapshot attacks. The theoretical models used to support these claims are abstractions. They are not based on analyzing the actual information revealed by a compromised database system and how it can be used to infer the plaintext data.</blockquote>
<p>I take away two things: first, there's a coalescence towards a standard academic model for database security, and it turns out to be a grounded model. (In contrast to models like the random oracle in crypto.) Second, all models are wrong, and it turns out that the model of a snapshot attacker seems...not all that useful.</p>Worthwhile Books: Q2 20172017-07-07T00:00:00Zhttps://shostack.org/blog/worthwhile-books-q2-2017/AdamI'm always looking for interesting books to read. These are the books that I enjoyed enough to recommend in Q2.<span class="single-post-excerpt"><!-- excerpt -->I'm always looking for interesting books to read. These are the books that I enjoyed enough to recommend in Q2.<!-- /excerpt --></span>
<p>I'm always looking for interesting books to read. These are the books that I enjoyed enough to recommend in Q2.</p>
<h3>Cyber</h3>
<ul>
<li><a href="https://www.rand.org/pubs/research_reports/RR1751.html" target="_blank">Zero Days, Thousands of Nights</a>, by Lillian Ablion and Andy Bogart. (Also, thanks to RAND for releasing their books with an epub version.)</li>
</ul>
<h3>Nonfiction, not security</h3>
<ul>
<li><a href="https://www.amazon.com/gp/product/0231180489/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=0231180489&linkId=f0ef234efcbaba3cad36265f7cebb3c8" target="_blank">Narrative and Numbers</a>, Aswath Damodaran. Presents a compelling approach for using narrative <em>and</em> numbers to discuss business valuation, but the lessons can be extended and used in many places. Also worthwhile is his focus on improving stories by testing them and seeking out contrary views.</li>
<li><a href="https://www.amazon.com/gp/product/0062358367/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=0062358367&linkId=fe34498331ef18c42c27362e11a36989" target="_blank">The End of Average</a>, by Todd Rose. Rose uses narrative to make the case that the mean is not the distribution, and that focusing in on averages leads to all sorts of problems.</li>
<li><a href="https://www.amazon.com/gp/product/0143127799/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=0143127799&linkId=1af9ab3247a7c896386b04ded0748eeb" target="_blank">A Sense of Style</a>, Steven Pinker. I learned a number of things about how to write clearly and how the brain processes words. Some of those things will be in the next edition of <a href="https://www.amazon.com/gp/product/1118809998/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=1118809998&linkId=e5b9ae786c65458999a2ca5392a87cbf" target="_blank">Threat Modeling</a>.</li>
<li><a href="https://www.amazon.com/gp/product/B017V88T4M/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=B017V88T4M&linkId=0db6e7831225bd07e3a75cdc3152a727" target="_blank">Starman</a>, Jamie Doran. A biography of Yuri Gagarin, the first person in space.</li>
<li><a href="https://www.amazon.com/gp/product/026201520X/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=026201520X&linkId=1760d735d72abb8831c7c5130051c304" target="_blank">Spacesuit: Fashioning Apollo</a>, Nicholas de Monchaux. A really fascinating socio-technical history of the Apollo Spacesuit and the interactions between NASA and their systems approaches and the International Latex Company, who at the time, mainly made women's undergarments under the Playtex Brand. NASA was focused on manufacturing from plans, ILC fashioned from patterns. The engineered suits didn't function as clothing. ILC once sent NASA a silent filmstrip of an space-suited employee playing football as part of their argument for their approach. (As an aside, I re-wrote the first sentence here to put the long dependent clause at the end, because of advice in Pinker, and the sentence is better for it.)</li>
</ul>
<h3>Fiction</h3>
<ul>
<li><a href="https://www.amazon.com/gp/product/0316261254/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=0316261254&linkId=2d6e5c29fcbeda9e0b790ce8edb225ef" target="_blank">Underground Airlines</a> by Ben Winters. What if Lincoln had been shot, the civil war averted, and slavery was still legal in a "hard four" southern states? Not a breezy read, but fascinating alternate history.</li>
<li><a href="https://www.amazon.com/gp/product/0765378027/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=0765378027&linkId=f1b6c57d4306213eaf23ff49e05f5790" target="_blank">Seven Surrenders</a> by Ada Palmer. The second book in a quartet chronicling in the 23rd century. An interestingly non-standard future with deep layers of complexity. Challenging reading because of the language, the nicknames and Palmer's fascinating lens on gender, but easier than her first book, <a href="https://www.amazon.com/gp/product/0765378019/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=0765378019&linkId=b0edade70216753356929d57e36358ca" target="_blank">Too Like the Lightning</a>. Searching this blog, I am surprised that I never linked to her excellent blog, <a href="http://www.exurbe.com/" target="_blank">Ex Urbe</a>. Also, there's a <a href="http://adapalmer.com/publication/seven-surrenders/terra-ignota-crooked-timber-seminar/" target="_blank">Crooked Timber</a> seminar on the series.</li>
<li><a href="https://www.amazon.com/gp/product/B00MFWOOMG/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=B00MFWOOMG&linkId=7eb82597fd37dad9d7ce5f4d20056ad8" target="_blank">Yesterday's Kin</a>, Nancy Kress. Nancy Kress, need I say more? Apparently, I do, there's a trilogy coming out, and the first book, <a href="https://www.amazon.com/gp/product/0765390299/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=0765390299&linkId=d14d7f1537bc63799363bf58f6b928aa" target="_blank">Tomorrow's Kin</a>, is out shortly.</li>
</ul>
<p></p>Secure updates: A threat model2017-07-10T00:00:00Zhttps://shostack.org/blog/secure-updates-threat-model/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/software-updates.gif" alt="Software updates" class="single-post-main-img" />
<p>Post-Petya there have been a number of alarming articles on insecure update practices. The essence of these stories is that tax software, mandated by the government of Ukraine, was used to distribute the first Petya, and that this can happen elsewhere. Some of these stories are a little alarmist, with claims that unnamed "other" software has also been used in this way. Sometimes the attack is easy because updates are unsigned, other times its because they're also sent over a channel with no security.</p>
<p>The right answer to these stories is to fix the damned update software before people get more scared of updating. That fear will survive long after the threat is addressed. So let me tell you, [as a software publisher] how to do secure upadtes, in a nutshell.</p>
<p>The goals of an update system are to:</p>
<ol>
<li>Know what updates are available</li>
<li>Install authentic updates that haven't been tampered with</li>
<li>Strongly tie updates to the organization whose software is being updated. (Done right, this can also enable whitelisting software.)</li>
</ol>
<p>Let me elaborate on those requirements. First, know what updates are available — the threat here is that an attacker stores your message "Version 3.1 is the latest revision, get it here" and sends it to a target after you've shipped version 3.2. Second, the attacker may try to replace your update package with a new one, possibly using your keys to sign it. If you're using TLS for channel security, your TLS keys are only as secure as your web server, which is to say, not very. You want to have a signing key that you protect.</p>
<p>So that's a basic threat model, which leads to a system like this:</p>
<ol>
<li>Update messages are signed, dated, and sequenced. The code which parses them carefully verifies the signatures on both messages, checks that the date is later than the previous message and the sequence number is higher. If and only if all are true does it...</li>
<li>Get the software package. I like doing this over torrents. Not only does that save you money and improve availability, but it protects you against the "Oh hello there Mr. Snowden" attack. Of course, sometimes a belief that torrents have the "evil bit" set leads to blockages, and so you need a fallback. [Note this originally called the belief "foolish," but Francois politely pointed out that that was me being foolish.]</li>
<li>Once you have the software package, you need to check that it's signed with the same key as before.<br />
Better to sign the update and the update message with a key you keep offline on a machine that has no internet connectivity.</li>
<li>Since all of the verification can be done by software, and the signing can be done with a checklist, PGP/GPG are a fine choice. It's standard, which means people can run additional checks outside your software, it's been analyzed heavily by cryptographers.</li>
</ol>
<p>What's above follows the four-question framework for threat modeling: what are we working on? (Delivering updates securely); what can go wrong? (spoofing, tampering, denial of service); what are we going to do about it? (signatures and torrents). The remaining question is "did we do a good job?" Please help us assess that! (I wrote this quickly on a Sunday morning. Are there attacks that this design misses? Defenses that should be in place?)</p>Rolling out a Threat Modeling Program2017-07-11T00:00:00Zhttps://shostack.org/blog/rolling-out-threat-modeling-program/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>"<a href="http://misti.com/infosec-insider/rolling-out-a-threat-modeling-program">Rolling out a Threat Modeling Program</a>" is an article I wrote for MISTI. Check it out!</p>Umbrella Sharing and Threat Modeling2017-07-13T00:00:00Zhttps://shostack.org/blog/umbrella-sharing-and-threat-modeling/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/shared_umbrellas2_framed.jpg" alt="Shared umbrellas2 framed" class="single-post-main-img" />
<p>A month or so ago, I wrote "<a href="https://adam.shostack.org/blog/2017/06/bicycling-and-threat-modeling/">Bicycling and Threat Modeling</a>," about new approaches to bike sharing in China. Now I want to share with you "<a href="http://shanghaiist.com/2017/07/10/umbrella-sharing-fail.php" target="_blank">Umbrella-sharing startup loses nearly all of its 300,000 umbrellas in a matter of weeks</a>." </p>
<blockquote>The Shenzhen-based company was launched earlier this year with a 10 million yuan investment. The concept was similar to those that bike-sharing startups have used to (mostly) great success. Customers use an app on their smartphone to pay a 19 yuan deposit fee for an umbrella, which costs just 50 jiao for every half hour of use.<br /><br />
According to the South China Morning Post, company CEO Zhao Shuping said that the idea came to him after watching bike-sharing schemes take off across China, making him realize that "everything on the street can now be shared." </blockquote>
<p>I don't know anything about the Shanghaiist, but it's quoting a <a href="http://www.scmp.com/news/china/society/article/2101722/chinese-umbrella-sharing-firm-remains-upbeat-despite-losing-most" target="_blank">story</a> in the South China Morning Post, which closes:</p>
<blockquote>Last month, a bicycle loan company had to close after 90 per cent of its bikes were stolen.</blockquote>Threat Modeling Password Managers2017-07-17T00:00:00Zhttps://shostack.org/blog/threat-modeling-password-managers/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>There was a bit of a complex debate last week over 1Password. I think the best article may be Glenn Fleishman's "<a href="http://tidbits.com/article/17341" target="_blank">AgileBits Isn’t Forcing 1Password Data to Live in the Cloud</a>," but also worth reading are Ken White's "<a href="https://medium.com/@kennwhite/who-moved-my-cheese-1password-6a98a0fc6c56" target="_blank">Who moved my cheese, 1Password?</a>," and "<a href="https://blog.agilebits.com/2017/07/13/why-we-love-1password-memberships/" target="_blank">Why We Love 1Password Memberships</a>," by 1Password maker AgileBits. I've recommended 1Password in the past, and I'm not sure if I agree with Agilebits that "1Password memberships are... the best way to use 1Password." This post isn't intended to attack anyone, but to try to sort out what's at play.</p>
<p>This is a complex situation, and you'll be shocked, shocked to discover that I think a bit of threat modeling can help. Here's my model of</p>
<h3>what we're working on:</h3>
<img style="display: block; margin-left: auto; margin-right: auto;" title="Password manager.png" src="https://shostack.org/blog/img/2017/Password-manager.png" alt="Password manager" border="0" />
<p>Let me walk you through this: There's a password manager, which talks to a website. Those are in different trust boundaries, but for simplicity, I'm not drawing those boundaries. The two boundaries displayed are where the data and the "password manager.exe" live. Of course, this might not be an exe; it might be a .app, it might be Javascript. Regardless, that code lives somewhere, and where it lives is important. Similarly, the passwords are stored somewhere, and there's a boundary around that.</p>
<h3>What can go wrong?</h3>
<p>If password storage is local, there is not a fat target at Agilebits. Even assuming they're stored well (say, 10K iterations of PBKDF2), they're more vulnerable if they're stolen, and they're easier to steal en masse [than] if they're on your computer. (Someone might argue that you, as a home user, are less likely to detect an intruder than Agilebits. That might be true, but that's a way to detect; the first question is how likely is an attacker to break in? They'll succeed against you and they'll succeed against Agilebits, and they'll get a boatload more from breaking into Agilebits. This is not intended as a slam of Agilebits, it's an outgrowth of 'assume breach.') I believe Agilebits has a simpler operation than Dropbox, and fewer skilled staff in security operations than Dropbox. The simpler operation probably means there are fewer usecases, plugins, partners, etc, and means Agilebits is more likely to notice some attacks. To me, this nets out as neutral. Fleishman promises to explain "how AgileBits’s approach to zero-knowledge encryption... may be less risky and less exposed in some ways than using Dropbox to sync vaults." I literally don't see his argument, perhaps it was lost in the complexity of writing a long article? [Update: see also Jeffrey Goldberg's comment about how they encrypt the passwords. I think of what they've done as a very strong mitigation; with a probably reasonable assumption they haven't bolluxed their key generation. See this <a href="https://1password.com/files/1Password%20for%20Teams%20White%20Paper.pdf" target="_blank">1Password Security Design</a> white paper.]</p>
<p>To net it out: local storage is more secure. If your computer is compromised, your passwords are compromised with any architecture. If your computer is not compromised, and your passwords are nowhere else, then you're safe. Not so if your passwords are somewhere else and that somewhere else is compromised.</p>
<p>The next issue is where's the code? If the password manager executable is stored on your device, then to replace it, the attacker either needs to compromise your device, or to install new code on it. An attacker who can install new code on your computer wins, which is why <a href="https://adam.shostack.org/blog/2017/07/secure-updates-a-threat-model/">secure updates</a> matter so much. An attacker who can't get new code onto your computer must compromise the password store, discussed above. When the code is not on your computer but on a website, then the ease of replacing it goes way up. There's two modes of attack. Either you can break into one of the web server(s) and replace the .js files with new ones, or you can MITM a connection to the site and tamper with the data in transit. As an added bonus, either of those attacks scales. (I'll assume that 1Password uses certificate pinning, but did not chase down where their JS is served.)</p>
<p>Netted out, getting code from a website each time you run is a substantial drop in security.</p>
<h3>What should we do about it?</h3>
<p>So this is where it gets tricky. There are usability advantages to having passwords everywhere. (Typing a 20 character random password from your phone into something else is painful.) In their blog post, Agilebits lists more usability and reliability wins, and those are not to be scoffed at. There are also important business advantages to subscription revenue, and not losing your passwords to a password manager going out of business is important.</p>
<p>Each 1Password user needs to make a decision about what the right tradeoff is for them. This is made complicated by family and team features. Can little Bobby move your retirement account tables to the cloud for you? Can a manager control where you store a team vault?</p>
<p>This decision is complicated by walls of text descriptions. I wish is that Agilebits would do a better job of crisply and cleanly laying out the choice that their customers can make, and the advantages and disadvantages of each. (I suggest a feature chart like <a href="https://1password.com/sign-up/" target="_blank">this one</a> as a good form, and the data should also be in each app as you set things up.) That's not to say that Agilebits can't continue to choose and recommend a default.</p>
<h3>Does this help?</h3>
<p>After years of working in these forms, I think it's helpful as a way to break out these issues. I'm curious: does it help you? If not, where could it be better?</p>Humble Bundle2017-07-19T00:00:00Zhttps://shostack.org/blog/humble-bundle-20170719/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/wiley_bundle-twitter-post.png" alt="Humble Bundle: Cybersecurity by Wiley" class="single-post-main-img" />
<p>There's a <a href="https://www.humblebundle.com/books/cybersecurity-wiley" target="_blank">Humble Bundle on Cybersecurity</a>, full of Wiley books. It includes my threat modeling book, Ross Anderson's Security Engineering, Ferguson, Schneier and Kohno's Crypto Engineering and more.</p>
<p>I hope that this is the best price you'll ever see on these books. Get 'em while they're hot.</p>
<p>The bundle goes to support EFF &/or Water Aid America.</p>“Comparing the Usability of Cryptographic APIs”2017-07-20T00:00:00Zhttps://shostack.org/blog/comparing-the-usability-of-cryptographic-apis/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/Obstacles_Frame.png" alt=" " class="single-post-main-img" width="151" height="200" style="float:right;" />
<blockquote>(The abstract:) Potentially dangerous cryptography errors are well documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable; however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs.<br />
We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits—reducing the decision space, as expected, prevents choice of insecure parameters—simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage, caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions; however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.</blockquote>
<p>It's interesting that even when developers took care to consider usability of their APIs, usability testing revealed serious issues. But it's not surprising. The one constant of usability testing is that people surprise you.</p>
<p>The paper is: "<cite><a href="https://obj.umiacs.umd.edu/papers_for_stories/CryptoAPIs_Python_camera_ready%5B1%5D.pdf" target="_blank">Comparing the Usability of Cryptographic APIs</a></cite>," Yasemin Acar (CISPA, Saarland University), Michael Backes (CISPA, Saarland University & MPI-SWS), Sascha Fahl (CISPA, Saarland University), Simson Garfinkel (National Institute of Standards and Technology), Doowon Kim (University of Maryland), Michelle Mazurek (University of Maryland), Christian Stransky (CISPA, Saarland University), The Increasingly-misnamed Oakland Conference, 2017.</p>The Dope Cycle and a Deep Breath2017-07-21T00:00:00Zhttps://shostack.org/blog/dope-cycle-and-deep-breath/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Back in January, I wrote about "<a href="https://shostack.org/blog/dope-cycle-and-two-minutes-hate/">The Dope Cycle and the Two Minutes Hate</a>." In that post, I talked about:</p>
<blockquote>Not kidding: even when you know you’re being manipulated into wanting it, you want it. And you are being manipulated, make no mistake. Site designers are working to make your use of their site as pleasurable as possible, as emotionally engaging as possible. They’re caught up in a Red Queen Race, where they must engage faster and faster just to stay in place. And when you’re in such a race, it helps to steal as much as you can from millions of years of evolution. [Edit: I should add that this is not a moral judgement on the companies or the people, but rather an observation on what they must do to survive.] That’s dopamine, that’s adrenaline, that’s every hormone that’s been covered in Popular Psychology. It’s a dope cycle, and you can read that in every sense of the word dope.</blockquote>
<p>I just discovered a fascinating tool from a company called Dopamine Labs. Dopamine Labs is a company that helps their corporate customers drive engagement: "Apps use advanced software tools that shape and control user behavior. We know because [we sell] it to them." They've released a tool called Space: "Space uses neuroscience and AI to help you kick app addiction. No shame. No sponsors. Just a little breathing room to help you take back control." As they <u>say</u> [link to http://youjustneedspace.com/ no longer works]: "It's the same math that we use to get people addicted to apps, just run backwards."</p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2017/space-app.png" alt="Space app" title="space-app.png" border="0" />
<p>There are some fascinating ethical questions involved in selling both windows and bricks. I'm going to say that you participants in a red queen race might as well learn what countermeasures to their techniques are by building them. Space works as a Chrome plugin and as an iOS and Android App. I've installed it, and I like it more than I like another tool I've been using (<a href="https://dayboard.co/" target="_blank">Dayboard</a>). I really like Dayboard's todo list, but feel that it cuts me off in the midst of time wasting, rather than walking me away.)</p>
<p>The app is at <u>http://youjustneedspace.com/</u> [link no longer works].</p>
<p>As we go into big conferences, it might be worth installing. (Also as we head into conferences, be excellent to each other. Know and respect your limits and those of others. Assume good intent. Avoid getting pulled into a "<a href="https://shostack.org/archive/2015/07/the-drama-triangle/" target="_blank">Drama Triangle</a>.")</p>Magical Approaches to Threat Modeling2017-08-08T00:00:00Zhttps://shostack.org/blog/magical-approaches-to-threat-modeling/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/ThaQeLa-Magician-Lego-1.jpg" alt="magician lego man" class="single-post-main-img" />
<p>I was watching a talk recently where the speaker said "STRIDE produces waaaay to many threats! What we really want is a way to quickly get the right threats!"*</p>
<p>He's right and he's wrong. There are exactly three ways to get to a short list of the most meaningful threats to a new product, system or service that you're building. They are:</p>
<ul>
<li>Magically produce the right list</li>
<li>Have experts who are so good they never even think about the wrong threat</li>
<li>Produce a list that's too long and prune it</li>
</ul>
<p>That's it. (If you see a fourth, please tell me!)</p>
<p>Predictions are hard, especially about the future. It's hard to know what's going to go wrong in a system under construction, and its harder when that system changes because of your prediction.</p>
<p>So if we don't want to rely on Harry Potter waving a wand, getting frustrated, and asking Hermione to create the right list, then we're left with either trusting experts or over-listing and pruning.</p>
<p>Don't get me wrong. It would be great to be able to wave a magic wand or otherwise rapidly produce the right list without feeling like you'd done too much work. And if you always produce a short list, then your short list is likely to appear to be right.</p>
<p>Now, you may work in an organization with enough security expertise to execute perfect threat models, but I never have, and none of my clients seem to have that abundance either. (Which may also be a Heisenproblem: no organization with that many experts needs to hire a consultant to help them, except to get all their experts aligned.)</p>
<p>Also I find that when I don't use a structure, I miss threats. I've noticed that I have a recency bias, towards attacks I've seen recently, and bias towards "fun" attacks, including spoofing these days because I enjoy solving those. And so I use techniques like STRIDE per element to help structure my analysis.</p>
<p>It may also be that approaches other than STRIDE produce lists that have a higher concentration of interesting threats, for some definition of "interesting." Fundamentally, there's a set of tradeoffs you can make. Those tradeoffs include:</p>
<ul>
<li>Time taken</li>
<li>Coverage</li>
<li>Skill required</li>
<li>Consistency</li>
<li>Magic pixie dust required</li>
</ul>
<p>I'm curious, what other tradeoffs have you seen?</p>
<p>Whatever tradeoffs you may make, given a choice between overproduction and underproduction, you probably want to find too many threats, rather than too few. (How do you know what you're missing?) Some of getting the right number is the skill that comes from experience, and some of it is simply the grindwork of engineering.</p>
<p>(* The quote is not exact, because I aim to follow Warren Buffett's excellent advice of praise specifically, criticize generally.)</p>
<p class="credit-text">Photo: <a href="https://www.flickr.com/photos/thaqela/6774231998/" target="_blank">Magician</a>, by ThaQeLa.</p>The Evolution of Ctenophore Brains2017-08-11T00:00:00Zhttps://shostack.org/blog/evolution-of-ctenophore-brains/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<blockquote>From his very first experiments, he could see that these animals were unrelated to jellyfish. In fact, they were profoundly different from any other animal on Earth.<br /><br />Moroz reached this conclusion by testing the nerve cells of ctenophores for the neurotransmitters serotonin, dopamine and nitric oxide, chemical messengers considered the universal neural language of all animals. But try as he might, he could not find these molecules. The implications were profound.</blockquote>
<p>Read "<a href="https://aeon.co/essays/what-the-ctenophore-says-about-the-evolution-of-intelligence" target="_blank">Aliens in our midst</a>" at Aeon.</p>Interesting Monday Reads2017-08-14T00:00:00Zhttps://shostack.org/blog/interesting-monday-reads-20170814/AdamEach of these is long and thought-provoking and worth savoring.<span class="single-post-excerpt"><!-- excerpt -->Each of these is long and thought-provoking and worth savoring.<!-- /excerpt --></span>
<p>Each of these is long and thought-provoking and worth savoring.</p>
<ul>
<li><u>Angela Sasse: Can we make people value IT security?</u> (By Wendy M. Grossman) [link to https://www.riscs.org.uk/2017/08/14/angela-sasse-can-we-make-people-value-it-security/ no longer works]</li>
<li><u>Risk Management and the Cybersecurity of the U.S. Government</u>, Steven B. Lipner and Butler W. Lampson [link to https://www.nist.gov/sites/default/files/documents/2016/09/16/s.lipner-b.lampson_rfi_response.pdf no longer works] </li>
<li><u>How to create a company culture that can weather failure</u>, Mary Branscombe [link to http://www.cio.com/article/3214427/leadership-management/how-to-create-a-company-culture-that-can-weather-failure.html no longer works] </li>
</ul>Learning From npm's Rough Few Months2017-08-15T00:00:00Zhttps://shostack.org/blog/learning-from-npms-rough-few-months/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>The node package manager (npm) is having a bad few months. Let’s look at what we can do, what other package managers should do and what we can learn at a policy level, particularly in the U.S. framing of “critical infrastructure.”</p>
<p>People in security who remain focused on the IT side of the house, rather than the development side, may not be familiar with npm. As its website says, "npm is the package manager for JavaScript and the world’s largest software registry. Discover packages of reusable code — and assemble them in powerful new ways." Odds are excellent that one or more of your websites rely on npm.</p>
<p><a href="https://portal.iansresearch.com/content/3163/blg/shostack-learning-from-npms-rough-few-months/ref/giBs6FYy" target="_blank">I wrote a long post on the subject at the IANS blog</a>.</p>Amicus brief in “Carpenter” Supreme Court Case2017-08-16T00:00:00Zhttps://shostack.org/blog/amicus-brief-in-carpenter-supreme-court-case/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>"In an amicus brief filed in the U.S. Supreme Court, leading technology experts represented by the Knight First Amendment Institute at Columbia University argue that the Fourth Amendment should be understood to prohibit the government from accessing location data tracked by cell phone providers — “cell site location information” — without a warrant." </p>
<p>For more, please see "<a href="https://knightcolumbia.org/news/supreme-court-brief-technologists-warn-against-warrantless-access-cell-phone-location-data" target="_blank">In Supreme Court Brief, Technologists Warn Against Warrantless Access to Cell Phone Location Data</a>." [Update: Susan Landau has a great blog post "<a href="https://lawfareblog.com/phones-move-%E2%80%93%C2%A0and-so-should-law" target="_blank">Phones Move – and So Should the Law</a>" in which she frames the issues at hand.]</p>
<p>I'm pleased to be one of the experts involved.</p>Celebrating Alt-Left Lawlessness2017-08-17T00:00:00Zhttps://shostack.org/blog/celebrating-alt-left-lawlessness/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Lately, I've tried to stay away from the tire fire that American politics has become. I'm reasonably certain that I have more to contribute in other areas. But when the President tries to equivocate between those waving the Nazi flag and those protesting against them, we need to speak about what's acceptable.</p>
<p>It ought to go without saying that when literal Nazis are on one side of a debate, the other side is in the right.</p>
<p>But apparently, that's not obvious, so I felt I could share a plan for a march by the alt-left, under the ominous name of "<a href="https://en.wikipedia.org/wiki/Operation_Overlord" target="_blank">Operation Overlord</a>." They were planning to overthrow the legitimate government all along the coast, and, through force, replace it with their own puppets. </p>
<p>More seriously, we can have disagreements about what's best for the country, and it's bad when we demonize those who disagree with us. Civilized society requires us to accept civil disagreement. It accepts that no one is privileged or disadvantaged by an accident of birth: "race, creed or color," as the expression goes. But civil disagreement, by definition, precludes violence, advocacy of violence or threats of violence. </p>
<p>The Nazi flag is one such threat. Waving it has no purpose except declaring oneself outside society and at odds with the ideals and principles of good people everywhere.</p>
<p>If you're in a crowd of Nazis, you should be asking why, and walking away.</p>
<p>If you have doubts about what a President should say, here's a sample:</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/dD4Nzt9thc8" frameborder="0" allowfullscreen=""></iframe>Babylonian Triginometry2017-08-25T00:00:00Zhttps://shostack.org/blog/babylonian-triginometry/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<blockquote>a fresh look at a 3700-year-old clay tablet suggests that Babylonian mathematicians not only developed the first trig table, beating the Greeks to the punch by more than 1000 years, but that they also figured out an entirely new way to look at the subject. However, other experts on the clay tablet, known as Plimpton 322 (P322), say the new work is speculative at best. ("<a href="http://www.sciencemag.org/news/2017/08/ancient-babylonian-tablet-may-contain-first-evidence-trigonometry" target="_blank">This ancient Babylonian tablet may contain the first evidence of trigonometry</a>.")</blockquote>
<p>The paper, "<a href="http://www.sciencedirect.com/science/article/pii/S0315086017300691" target="_blank">Plimpton 322 is Babylonian exact sexagesimal trigonometry</a>" is short and open access, and also contains this gem:</p>
<blockquote>If this interpretation is correct, then P322 replaces Hipparchus' ‘table of chords’ as the world's <em>oldest trigonometric table</em> — but it is additionally unique because of its exact nature, which would make it the world's only <em>completely accurate</em> trigonometric table. These insights expose an entirely new level of sophistication for OB mathematics.</blockquote>Organizing Threat Modeling Magic2017-08-29T00:00:00Zhttps://shostack.org/blog/organizing-threat-modeling-magic/Jonathan Marcil[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>I was inspired to develop and share my thoughts after Adam's previous post (<a href="https://adam.shostack.org/blog/2017/08/magical-approaches-to-threat-modeling/">magical approaches to threat modeling</a>) regarding selection of the threats and predictions. Since a 140 characters limit quickly annoys me, Adam gave me an opportunity to contribute on his blog, thanks to him I can now explain how I believe in magic during threat modeling.</p>
<p>I have noticed that most of what I do, because it is timeboxed due to carbon based lifeforms constraints, needs to be a finite choice selection from what appears to me as an infinite array of possibilities. I also enjoy pulling computer related magic tricks, or guesses, because it’s amusing and more engaging than reading a checklist. Magic, in this case, is either pure luck or based on some skills the spectators can’t see. I like when I think I’m having both.</p>
<p>During the selection phase of what to do, there’s a few tradeoffs that have been proposed such as coverage, time and skills required. Those are attack based and come from the knowledge of what an attacker can do. While I think that those effectively describe the selection of granular technical efforts, I prefer to look at what are his motivation rather than the constrains he’ll face. And for all that, I have a way or organizing it and showing it.</p>
<h3>Attack Tree</h3>
<p>When I think about the actual threats of a system, I don’t see a list, but rather a tree. That tree has the ultimate goals on top, and then descend into sub-goals that breaks down how you get there. It finally ends up in leaves that are the vulnerabilities to be exploited.</p>
<p>Here’s an unfinished example for an unnamed application:</p>
<img src="https://shostack.org/blog/img/2017/pruneme.png" alt=" " width="700" height="255" />
<p>A fun thing to do with a tree is to apply a weight on a branch. In this case the number represent attacker made tradeoffs and is totally arbitrary.</p>
<img src="https://shostack.org/blog/img/2017/pruneme.1.png" alt=" " width="700" height="270" />
<p>If you keep it relatively consistent to itself, you end up with an appropriate weighting system. For this example, let’s say it’s the amount of efforts you estimate it takes. You can sum the branches in the tree and get sub-goals weight without having to think about them.</p>
<img src="https://shostack.org/blog/img/2017/pruneme.2.png" alt=" " width="700" height="270" />
<p>And from that we can get a sum for the root goals:</p>
<img src="https://shostack.org/blog/img/2017/pruneme.3.png" alt=" " width="700" height="270" />
<p>But then how do I choose to prioritize or just work on something?</p>
<p>I could just say, well I’m going to do the easiest things to do, maybe because finding an SQLi in the application is easier than finding a slow API request, so better start looking at that first.</p>
<p>But regarding to decision, I often decide to do the most common human behavior: just don’t do it myself.</p>
<p>With the help of the tree, I just let the actual business reality do the selection on which root goals to pick. By that I mean the literal definition of reality, although nowadays people seems to forget what it really means:</p>
<blockquote>“reality ·noun · 1. the world or the state of things as they actually exist, as opposed to an idealistic or notional idea of them.”
– Google Almighty</blockquote>
<p>I never ask the business line if they think they’ll have SQLi, but rather, if they worry more about denial of service or information stealing.</p>
<p>One advantage of that, is that those decisions are at the root goals. The tree is a hierarchy; the higher level you are, the bigger impact you'll have. Like spinning a big cog wheel versus a smaller one:</p>
<img title="If you are keen with your eyes you can see that on this model, spinning the smaller wheels won't make the bigger ones move.. this happen too much in life and shouldn't because wheels needs to be locked together to function properly in a balanced system." src="https://shostack.org/blog/img/2017/cogwheelsimage.png" alt="3 gears" width="482" height="370" />
<p>If you were to pick on each vulnerability at the time, you’ll spin your working wheel a lot, while just really advancing the root goal a bit. Work on doing the selection on the root goals, then you'll see that it's impact is far greater for about the same amount of time. That's efficiency to me.</p>
<p>And that’s how I turn magic into engineering :-D</p>
<p>Of course, in order for it to be proper engineering, the next step would be to QA it. And at that point, you can fetch all the checklists or threats repository you can find, and verify that you covered everything in your tree. Simply add what you have missed, and then bask in the glory of perceived completeness.</p>
<p><em>For the curious practitioners, I've used <a href="http://plantuml.com/" target="_blank">PlantUML</a> in order to generate the tree examples as seen above. The tool let you textually define the tree using simple markup and auto balance it for you when you are updating it. A more detailed example can be found on my <a href="https://speakerdeck.com/owaspmontreal/threat-modeling-toolkit?slide=43" target="_blank">Threat Modeling Toolkit presentation</a>.</em></p>Star Wars, Star Trek and Getting Root on a Star Ship2017-09-01T00:00:00Zhttps://shostack.org/blog/star-wars-star-trek-and-getting-root-on-a-star-ship/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/spacecraft_discovery_by_trekkie5000_framed.jpg" alt=" " class="single-post-main-img" />
<p>It's time for some Friday Star Wars blogging!</p>
<p>Reverend Robert Ballecer, SJ <a href="https://twitter.com/padresj/status/902961194457153538" target="_blank">tweeted</a>: "as a child I learned a few switches & 4 numbers gives you remote code ex on a 23rd century starship." I responded, asking "When attackers are on the bridge and can flip switches, how long a password do you think is appropriate?"</p>
<p>It went from there, but I'd like to take this opportunity to propose a partial threat model for 23rd century starships.</p>
<p>First, a few assumptions:</p>
<ul>
<li>Sometimes, officers and crewmembers of starships die, are taken prisoner, or are otherwise unable to complete their duties.</li>
<li>It is important that the crew can control the spaceship, including software and computer hardware.</li>
<li>Unrestricted physical access to the bridge means you control the ship (with possible special cases, and of course, the Holodeck because lord forgive me, they need to shoot a show every week. Scalzi managed to get a surprisingly large amount from this line of inquiry in Red Shirts. But I digress.)</li>
</ul>
<p>I'll also go so far as to say that as a derivative of the assumptions, the crew may need a rapid way to assign "Captain" privileges to someone else, and starship designers should be careful to design for that use case.</p>
<p>So the competing threats here are denial of service (and possibly denial of future service) and elevation of privilege. There's a tension between designing for availability (anyone on the bridge can assume command relatively easily) and proper authorization. My take was that the attackers on the bridge are already close to winning, and so defenses which impede replacing command authority are a mistake. </p>
<p>Now, in responding, I thought that "flipping switches" meant physically being there, because I don't recall the episode that he's discussing. But further in further conversation, what became clear is that the switches can be flipped remotely, which dramatically alters the need for a defense.</p>
<p>It's not clear what non-dramatic requirement such remote switch flipping serves, and so on balance, it's easy to declare that the added risk is high and we should not have remote switch flipping. It is always easy to declare that the risk is high, but here I have the advantage that there's no real product designer in the room arguing for the feature. If there was, we would clarify the requirement, and then probably engineer some appropriate defenses, such as exponential backoff for remote connections. Of course, in the future with layers of virtualization, what a remote connection is may be tricky to determine in software.</p>
<p>Which brings me to another tweet, by Hongyi Hu, who <a href="https://twitter.com/hongyihu/status/903059214192349184" target="_blank">said</a> he was "disappointed that they still use passwords for authentication in the 23rd century. I hope the long tail isn't that long! 😛" What can I say but, "<a href="https://en.wikipedia.org/wiki/We%27ll_Always_Have_Paris_(Star_Trek:_The_Next_Generation)" target="_blank">we'll always have passwords</a>." We'll just use them for less.</p>
<p>As I've <a href="https://adam.shostack.org/blog/2016/12/rogue-one-the-best-star-wars-yet/">discussed</a>, the reason I use Star Wars over Star Trek in my teaching and examples is that no one is confused about the story in the core movies. I made precisely this mistake.</p>
<p>Image: The Spaceship Discovery, <a href="https://trekkie5000.deviantart.com/art/Spacecraft-Discovery-91055157" target="_blank">rendered</a> by Trekkie5000. Alert readers will recall issues that could have been discovered with better threat modeling.</p>Breach Vouchers & Equifax 2017 Breach Links2017-09-07T00:00:00Zhttps://shostack.org/blog/breach-vouchers-equifax-2017-breach-links/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/anderson-ss-card.jpg" alt=" " class="single-post-main-img" />
<p>[<strong>Thursday, September 21th</strong> is the latest of 5 updates.]</p>
<p>When I wrote "<a href="https://adam.shostack.org/blog/2016/10/the-breach-response-market-is-broken-and-what-could-be-done/">The Breach Response Market Is Broken</a>," I didn't expect one of the players to validate everything I had to say. What I said was that the very act of firms contracting with breach response services inhibit the creation of a market for breach response, and the FTC should require them to give vouchers to consumers.</p>
<p>Vice Motherboard is reporting that "<a href="https://motherboard.vice.com/en_us/article/a33xgk/firm-hired-to-monitor-data-breaches-is-hacked-143-million-social-security-numbers-stolen" target="_blank">Firm Hired to Monitor Data Breaches Is Hacked, 143 Million Social Security Numbers Stolen</a>."</p>
<p>It's not clear what database was accessed. On their <a href="https://www.equifaxsecurity2017.com/" target="_blank">website</a>, Equifax says "No Evidence of Unauthorized Access to Core Consumer or Commercial Credit Reporting Databases" and "Company to Offer Free Identity Theft Protection and Credit File Monitoring to All U.S. Consumers."</p>
<p>But here's the thing; I don't trust Equifax to protect data that ... they just failed to protect. I want protection from an independent firm.</p>
<p>Equifax's self-dealing in providing breach response services is unfair. No rational, well-informed consumer would select Equifax's service in this situation. Equifax's offering of credit file monitoring to all US consumers is also an unfair trade practice, which undercuts innovation, and limits the ability of new entrants to deliver effective services.</p>
<p>The FTC should require Equifax to send a voucher to each impacted individual which can be used to purchase any identity theft protection service on the market as of August, 2017.</p>
<hr />
<p>Usually I don't try to blog fast moving stories, but I may make an exception.</p>
<p>Update 1, later that day:</p>
<ul>
<li>Jeremiah Grossman points out: "<a href="https://twitter.com/jeremiahg/status/905925838176182278" target="_blank">As we’ve seen, breaches often negatively impact stocks (1-10%). We also know prices quickly bounce back. If we're really smart, we’d buy.</a>" I wonder — will this impact their business substantially? Probably revenue will be unaffected; costs may go up, not in the sense of notification (they're not bothering to mail you a letter) or breach response costs, but in expenses around computer security: software, staff, subscriptions, which may depress profitability over time if most of the new expenses are a new normal after an FTC consent decree.</li>
<li>Brian Krebs <a href="https://krebsonsecurity.com/2017/09/breach-at-equifax-may-impact-143m-americans/" target="_blank">has</a> context and history.</li>
<li>Bloomberg reports that "<a href="https://www.bloomberg.com/news/articles/2017-09-07/three-equifax-executives-sold-stock-before-revealing-cyber-hack" target="_blank">Three Equifax Managers Sold Stock Before Cyber Hack Was Revealed</a>." "None of the filings lists the transactions as being part of 10b5-1 pre-scheduled trading plans."</li>
</ul>
<p>Update 2, Sept 9:</p>
<ul>
<li>The International Business Times reports "<a href="http://www.ibtimes.com/political-capital/equifax-lobbied-kill-rule-protecting-victims-data-breaches-2587929" target="_blank">Equifax Lobbied To Kill Rule Protecting Victims Of Data Breaches</a>." They report Equifax wrote "a rule blocking companies from forcing their customers to waive class action rights would expose credit agencies 'to unmanageable class action liability that could result in full disgorgement of revenues' if companies are found to have illegally harmed their customers." It's a nice life, having the government block your victims from suing you, especially if you're worried that the harm is great enough to result in 'full disgorgement of revenues.' Now, you might argue that's hyperbole, but maybe it's a real fear.</li>
<li>The Onion reports "<a href="http://www.theonion.com/article/equifax-impressed-hackers-ability-ruin-peoples-fin-56882" target="_blank">Equifax Impressed By Hackers’ Ability To Ruin People’s Finances More Efficiently Than Company Can</a>."</li>
<li>Equifax once brought me to a Nine Inch Nails concert, and under the payola rules, I ought to have disclosed that when writing about them. It was over a decade ago, and had slipped my mind.</li>
</ul>
<p>Update 3, Sept 12:</p>
<ul>
<li>TechCrunch reports "<a href="https://techcrunch.com/2017/09/08/psa-no-matter-what-you-write-equifax-may-tell-you-youve-been-impacted-by-the-hack/" target="_blank">no matter what, Equifax may tell you you’ve been impacted by the hack</a>," even for random last name and SSN combos. Maybe that's not trying to drive anxiety, but their data really is that bad? (Thanks to Joey Gray for the pointer.)</li>
<li>Patrick McKenzie has a long article on what to do when a bank issues credit to someone they think is you, "<a href="http://www.kalzumeus.com/2017/09/09/identity-theft-credit-reports/" target="_blank">Identity Theft, Credit Reports, and You</a>."</li>
</ul>
<p>Update 4, September 16:</p>
<ul>
<li>Lawmakers: The Democrats of the House Committee on Energy and Commerce <a href="https://assets.documentcloud.org/documents/3991357/Equifax-2017-09-12-Letter-to-Equifax-CEO-Re.pdf" target="_blank">sent Equifax a letter</a>, as did Senators Hatch and Wyden, (<a href="https://drive.google.com/file/d/0BwsvF1X5umK4c056N2FsOVdETm8/view" target="_blank">Hatch/Wyden Letter)</a>, and 30 state attorneys general, "<a href="https://consumerist.com/2017/09/15/states-call-on-equifax-to-halt-marketing-of-its-paid-credit-monitoring-service/" target="_blank">States Call On Equifax To Halt Marketing Of Its Paid Credit Monitoring Service</a>."</li>
<li>In other legislative news, "<a href="http://www.latimes.com/business/la-fi-equifax-elizabeth-warren-20170915-story.html" target="_blank">Sen. Elizabeth Warren slams Equifax and introduces bill to ban fees for freezing credit.</a>"</li>
<li>Someone <a href="https://twitter.com/kennethn/status/907400233956421632" target="_blank">tried</a> to report the PIN generation issue over a year ago. </li>
<li>Equifax's CEO makes <u>statements</u> [link to http://amp.usatoday.com/story/659738001/ no longer works] intended to reassure in USA Today, including "We are devoting extraordinary resources to make sure this kind of incident doesn’t happen again," but they don't seem to have reported that claim to their shareholders via an 8K. (The one I see at the <a href="https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany&CIK=0000033185&owner=include&count=40&hidefilings=0" target="_blank">SEC</a>, filed September 7, says "it's too early to tell.")</li>
<li>Reporters: Bob Sullivan has a "<a href="https://bobsullivan.net/cybercrime/equifax-what-now-my-credit-freeze-primer-also-dont-over-react/" target="_blank">what now</a>." I respect Bob as a consumer-centric reporter, and he's covered these issues for a while. I'm quoted in Howstuffworks's "<a href="http://money.howstuffworks.com/personal-finance/debt-management/equifax-credit-fraud-monitoring-really-matter.htm" target="_blank">After the Equifax Breach, Does Credit Fraud Monitoring Really Help?</a>"</li>
<li>The public has still not been told what database was accessed.</li>
</ul>
<p>Update 5, September 21:</p>
<ul>
<li>"<a href="https://gizmodo.com/equifax-has-been-sending-consumers-to-a-fake-phishing-s-1818588764" target="_blank">Equifax Has Been Sending Consumers to a Fake Phishing Site for Almost Two Weeks</a>." I believe that Gizmodo means either "a phishing site" or "a fake, phishing site," because the site to which Equifax was directing people was not theirs.</li>
<li>Equifax's former Chief Privacy Officer posted "<a href="https://www.annaslomovic.com/single-post/2017/09/17/Understanding-the-Equifax-Data-Breach" target="_blank">Understanding the Equifax Data Breach</a>," including an explanation of why the breach was likely consumer dispute information.</li>
</ul>Open for Business2017-09-07T00:00:00Zhttps://shostack.org/blog/open-for-business/AdamRecently, I was talking to a friend who wasn't aware that I'm consulting, and so I wanted to share a bit about my new life, consulting!<span class="single-post-excerpt"><!-- excerpt -->Recently, I was talking to a friend who wasn't aware that I'm consulting, and so I wanted to share a bit about my new life, consulting!<!-- /excerpt --></span>
<p>Recently, I was talking to a friend who wasn't aware that I'm consulting, and so I wanted to share a bit about my new life, consulting!</p>
<p>I’m consulting for companies of all sizes and in many sectors. The services I’m providing include threat modeling training, engineering and strategy work, often around risk analysis or product management.</p>
<p>Some of the projects I’ve completed recently include:</p>
<ul>
<li>Threat modeling training - Engineers learn how to threat model, and how to make threat modeling part of their delivery. Classes range from 1 to 5 days, and are customized to your needs.</li>
<li>Process re-engineering for a bank - Rebuilt their approach to a class of risks, increasing security, consistently and productively across the org.</li>
<li>Feature analysis for a security company - Identifying market need, what features fit those needs, and created a compelling and grounded story to bring the team together.</li>
</ul>
<p>If you have needs like these, or other issues where you think my skills and experience could help, I'd love to hear from you. And if you know someone who might, I’m happy to talk to them.</p>
<p>I have a to-the-point website at <a href="https://associates.shostack.org/">associates.shostack.org</a> and some details of my threat modeling services are at associates.shostack.org/threatmodeling.</p>Pie Charts, or this could be hell2017-09-13T00:00:00Zhttps://shostack.org/blog/pie-charts-or-this-could-be-hell/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/dark-desert-highway-pie-chart.jpg" alt=" " class="single-post-main-img" />
<p class="credit-text">(Via Davide Mancino)</p>Threat Modeling Training (London!)2017-09-14T00:00:00Zhttps://shostack.org/blog/threat-modeling-training-london/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/london-training-2017.png" alt=" " class="single-post-main-img" />
<p>Allow me to mention that there are still seats available at my <a href="https://www.devseccon.com/london-2017/" target="_blank">training session</a> in London 16-17 October, in cooperation with DevSecCon!</p>
<p>Usually, I do training for an organization, so this is your only chance this year to book a single seat.</p>Threat Modeling and Architecture2017-09-18T00:00:00Zhttps://shostack.org/blog/threat-modeling-and-architecture/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>"<a href="http://misti.com/infosec-insider/threat-modeling-and-architecture" target="_blank">Threat Modeling and Architecture</a>" is the latest in a series at Infosec Insider.</p>
<blockquote>After I wrote my last article on Rolling out a Threat Modeling Program, Shawn Chowdhury asked (on Linkedin) for more informatioin on involving threat modeling in the architecture process. It’s a great question, except it involves the words “threat, “modeling,” and “architecture.” And each of those words, by itself, is enough to get some people twisted around an axle.</blockquote>
<p>Architecture, in particular, is controversial, but without a definition it’s hard to say if or how threat modeling might help. A colleague wrote, “my basic rule of thumb has always been that architecture is 'that thing above the level at which you work’,” and another said, “An architecture is ‘the fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution.’” I’ve used the definition that an architecture is “a consistent system model shared by the engineers on a project that enables faster delivery with fewer problems.”</p>
<p>One element that each of these definitions shares is that they implicitly invoke models: abstractions which help us reason. And there is the fundamental tie; both threat modeling and architecture involve models of systems and analyzing properties of those models. Billy Vaughn Koen, now a professor emeritus at the University of Texas, wrote about the engineering method, which he defined as “the strategy for causing the best change in a poorly understood or uncertain situation within the available resources.” Dr. Koen wasn’t writing about software, but his definition is helpful in understanding the goals of software architecture work, which can be seen as strategies for constraining how developers invest effort. That is, an architectural approach defines the choices that a developer should make: use these APIs, these languages, these components.</p>
<p>Before we get to “this is the architecture we’ll use for this service,” though, we talk about tradeoffs we can make. In threat modeling, I’ve defined this as a four step process:</p>
<ol>
<li>What are we working on?</li>
<li>What can go wrong?</li>
<li>What are we going to do about it?</li>
<li>Have we done a good job?</li>
</ol>
<p>Let’s look at the steps with a broader lens. Asking, “What are we working on,” and ensuring that the answers are aligned, is an important part of both architectural and threat modeling processes. (One important and difficult aspect of this is to explicitly state what’s out of scope, or non-requirements. This is challenging because someone wants to build that feature or improve that property, and we have to tell them no, this feature is more important. It’s easier to let it slide, but it results in less focused products with features that don’t make sense.)</p>
<p>The question of, “What can go wrong” is far more focused in threat modeling. Architecture discussions can and should incorporate threat models along with other types of analysis of other failures.</p>
<p>In a really fascinating blog post entitled “<a href="https://www.kitchensoap.com/2017/08/12/multiple-perspectives-on-technical-problems-and-solutions/" target="_blank">Multiple Perspectives on Technical Problems and Solutions</a>,” former Etsy CTO John Allspaw wrote about his architectural review process. He talks about how architecture review is not about making a decision, but about informing the dialog around the choices to be made, and describes the process:</p>
<ul>
<li>Describing the problem they believe needs solving. This may or may not be straightforward to explain or describe, so a back-and-forth is usually needed for a group to get a full grasp of it.</li>
<li>Generating hypotheses about whether or not the problem(s) being described need to be solved in more or less complete or exhaustive ways. Some problem descriptions might be really narrow, or really broad, some problems don’t have to be fully “solved” all at once, etc. Will the problem exist in perpetuity?</li>
<li>Evaluating options for solutions. What are the pros and cons? Can a group that needs to support a solution sustain or maintain said solution? Will some solutions have an “expiration date”? What possible unintended consequences could result in upstream/downstream systems that depend on this solution?</li>
</ul>
<p>Does that look familiar? It should. It is remarkably parallel to threat modeling. Etsy, in general, has written powerfully on the topic of <a href="https://codeascraft.com/2016/11/17/debriefing-facilitation-guide/" target="_blank">understanding the organization’s successes or failures</a> during a project, and John, specifically, has written about <a href="https://www.kitchensoap.com/2014/11/14/the-infinite-hows-or-the-dangers-of-the-five-whys/" target="_blank">learning as you go</a>.</p>
<p>So let me make it explicit:</p>
<ul>
<li>Threat modeling is a contributor to architectural analysis and decision making.</li>
<li>Threat modeling contributes by helping you define threats and the requirements they violate.</li>
<li>Threat modeling contributes by helping you define possible solutions to security problems.</li>
<li>It breaks security into smaller parts to help those who are not security experts grapple with security questions.</li>
</ul>
<p>There are two ways this commonly happens. Architectural investigations can drive threat modeling or threat modeling can drive architectural investigations. </p>
<p>In the first, there are one or more possible architectural directions that might be sensible, and one of the ways to distinguish them is to answer, “which ones create or solve security problems?” Threat modeling structures the way you start to answer those questions for each possible architecture. </p>
<p>In the second, threat modeling leads to understanding that architectural choices have security implications, and those choices may need to be revisited. Those choices might have been made when security wasn’t important or wasn’t considered. Those choices might have been made when security was defined differently, or before clever new attacks were discovered. (It’s easy to get sucked into a blame game in these cases, and usually counter productive.) In these cases, threat modeling gives structure to a set of security issues, and enables you to start asking, “Can we change the architecture in a way that reduces these problems at a managable cost?” A managable cost involves both financial cost and the tradeoffs that a new architectural choice requires.</p>
<p>In my experience, when an architecture review brings attention to a problem and proposed solutions from multiple perspectives, <strong>decisions become less controversial</strong>. When a decision appears to be obvious to a broad group (“Question: should we (or should we not) take backups of critical databases? Decision: Yes.”) how a decision gets made almost disappears. — John Allspaw</p>
<p>It is tempting for security people to argue that security should take the place of pride in a discussion. (After all, we describe ourselves as security people; it’s important to us.) It is also tempting to accept the frame that we have to “fight for security.” Few people like conflict, and fewer handle conflict as constructively as they believe. In the “fight for security” frame, we can be heros, if just for one day. But security isn’t fixed in a day, and more useful than being a hero is being an engineer. (<a href="https://blogs.microsoft.com/microsoftsecure/2009/10/23/ninjas-are-cool-but-engineers-build-bridges/" target="_blank">That’s a perspective that I’ve discussed before.</a>)</p>
<p>In each case, the goal is to transform security from a vague worry to a crisply-enough modeled threat, and then ensure that addressing that threat gets the right degree of attention. That’s threat modeling, that’s architecture, that’s security engineering.</p>
<p><em>First <a href="http://misti.com/infosec-insider/threat-modeling-and-architecture" target="_blank">appeared</a> on MISTI.com's Infosec Insider.</em></p>“The Readability Of Scientific Texts Is Decreasing Over Time”2017-09-20T00:00:00Zhttps://shostack.org/blog/the-readability-of-scientific-texts-is-decreasing-over-time/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>There's an interesting new paper at bioRXiv, "<a href="http://www.biorxiv.org/content/early/2017/03/28/119370" target="_blank">The Readability Of Scientific Texts Is Decreasing Over Time</a>."</p>
<blockquote>Lower readability is also a problem for specialists (22, 23, 24). This was explicitly shown by Hartley (22) who demonstrated that rewriting scientific abstracts, to improve their readability, increased academics’ ability to comprehend them. While science is complex, and some jargon is unavoidable (25), this does not justify the continuing trend that we have shown.</blockquote>
<p>Ironically, the paper is released as a PDF, which is hard to read on a mobile phone. There's a tool, <a href="https://pandoc.org/" target="_blank">pandoc</a>, which can easily create HTML versions from their LaTeX source. I encourage everyone who cares about their work being read to create HTML and ebook versions.</p>Parroting Bad Security Advice2017-09-22T00:00:00Zhttps://shostack.org/blog/parroting-bad-security-advice/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<blockquote>A PARROT has become the latest voice to fool Amazon's Alexa voice assistant after ordering gift boxes using an Amazon Echo. Buddy the African Grey Parrot, mimicked his owner's voice so convincingly that her Amazon Echo accepted the order for six gift boxes. ("<u>
Parrot mimics owner to make purchases using Amazon Echo</u>.") [link to https://www.theinquirer.net/inquirer/news/3017840/parrot-mimics-owner-to-make-purchases-using-amazon-echo no longer works]<br />...<br />
As Alexa has a facility to require a PIN code before placing an order, it was really down to the family that their bird was able to make the request.</blockquote>
<p>Of course, Buddy would have been unable to learn the PIN.</p>
<p>Via Michael Froomkin.</p>It's Not The Crime, It's The Coverup or the Chaos2017-09-26T00:00:00Zhttps://shostack.org/blog/its-not-the-crime-its-the-coverup-or-the-chaos/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Well, Richard Smith has "<a href="https://investor.equifax.com/news-and-events/news/2017/09-26-2017-140531280" target="_blank">resigned</a>" from Equifax.</p>
<p>The CEO being fired is a rare outcome of a breach, and so I want to discuss what's going on and put it into context, which includes the <a href="https://www.nytimes.com/2017/09/22/us/politics/us-tells-21-states-that-hackers-targeted-their-voting-systems.html?nytmobile=0" target="_blank">failures at DHS</a>, and <a href="https://www.theguardian.com/business/2017/sep/25/deloitte-hit-by-cyber-attack-revealing-clients-secret-emails" target="_blank">Deloitte breach</a>. Also, I aim to follow the advice to praise specifically and criticize in general, and break that pattern here because we can learn so much from the specifics of the cases, and in so learning, do better.</p>
<p>Smith was not fired because of the breach. Breaches happen. Executives know this. Boards know this. The breach is outside of their control. Smith was fired because of the post-breach chaos. Systems that didn't work. <a href="https://adam.shostack.org/blog/2017/09/breach-vouchers/">Tweeting links to a scam site</a> for <em>two weeks</em>. PINS that were recoverable. Weeks of systems saying "you may have been a victim." Headlines like "<a href="https://www.nytimes.com/2017/09/22/your-money/equifax-breach.html" target="_blank">Why the Equifax Breach Stings So Bad</a>" in the NYTimes. Smith was fired in part because of the post-breach chaos, which was something he was supposed to control.</p>
<p>But it wasn't just the chaos. It was that Equifax displayed so much self-centeredness after the breach. They had the chutzpah to offer up their own product as a remedy. And that self-dealing comes from seeing itself as a victim. From failing to understand how the breach will be seen in the rest of the world. And that's a very similar motive to the one that leads to coverups.</p>
<p>In <cite>The New School</cite> Andrew and I discussed how fear of firing was one reason that companies don't disclose breaches. We also discussed how, once you agree that "security issues" are things which should remain secret or shared with a small group, you can spend all your energy on <em>rules</em> for information sharing, and have no energy left for <em>actual</em> information sharing.</p>
<p>And I think that's the root cause of "<a href="https://www.nytimes.com/2017/09/22/us/politics/us-tells-21-states-that-hackers-targeted-their-voting-systems.html?nytmobile=0" target="_blank">U.S. Tells 21 States That Hackers Targeted Their Voting Systems</a>" a full year after finding out:</p>
<blockquote>The notification came roughly a year after officials with the United States Department of Homeland Security first said states were targeted by hacking efforts possibly connected to Russia.</blockquote>
<p>A year.</p>
<p>A year.</p>
<p>A year after states were first targeted. A year in which "<a href="https://www.theverge.com/2017/9/25/16360482/obama-mark-zuckerberg-fake-news-facebook" target="_blank">Obama personally warned Mark Zuckerberg to take the threats of fake news ‘seriously.’</a>" (Of course, the two issues may not have been provably linkable at the time.) But. A year.</p>
<p>I do not know what the people responsible for getting that message to the states were doing during that time, but we have every reason to believe that it probably had to do with (and here, I am using not my sarcastic font, but my scornful one) "rules of engagement," "traffic light protocols," "sources and methods" and other things which are at odds with addressing the issue. (End scornful font.) I understand the need for these things. I understand protecting sources is a key role of an intelligence service which wants to recruit more sources. And I also believe that there's a time to risk those things. Or we might end up with a President who has more harsh words for Australia than the Philippines. More time for Russia than Germany.</p>
<p>In part, we have such a President because we value secrecy over disclosure. We accept these delays and view them as reasonable. Of course, the election didn't turn entirely on these issues, but on our electoral college system, which I <a href="https://adam.shostack.org/blog/2016/11/electoral-chaos/">discussed</a> at some length, including <a href="https://adam.shostack.org/blog/2016/11/election-2016/">ways to fix it</a>.</p>
<p>All of which brings me to the Deloitte breach, "<a href="https://www.theguardian.com/business/2017/sep/25/deloitte-hit-by-cyber-attack-revealing-clients-secret-emails" target="_blank">Deloitte hit by cyber-attack revealing clients’ secret emails</a>." Deloitte, along with the others who make up the big four audit firms, gets access to its clients deepest secrets, and so you might expect that the response to the breach would be similar levels of outrage. And I suspect a lot of partners are making a lot of hat-in-hand visits to boardrooms, and contritely trying to answer questions like "what the flock were you people doing?" and "why the flock weren't we told?" I expect that there's going to be some very small bonuses this year. But, unlike our relationship with Equifax, boards do not feel powerless in relation to their auditors. They can pick and swap. Boards do not feel that the system is opaque and unfair. (They sometimes feel that the rules are unfair, but that's a different failing.) The extended reporting time will likely be attributed to the deep analysis that Deloitte did so it could bring facts to its customers, and that might even be reasonable. After all, a breach is tolerable; chaos afterwards may not be.</p>
<p>The two biggest predictors of public outrage are chaos and coverups. No, that's not quite right. The biggest causes are chaos and coverups. (Those intersect poorly with data brokerages, but are not limited to them.) And both are avoidable.</p>
<p>So what should you do to avoid them? There's important work in preparing for a breach, and in preventing one.</p>
<ul>
<li>First, run tabletop response exercises to understand what you'd do in various breach scenarios. Then re-run those scenarios with the principals (CEO, General Counsel) so they can practice, too.</li>
<li>To reduce the odds of a breach, realize that you need continuous and integrated security as part of your operational cycles. Move from focusing on pen tests, red teams and bug bounties to a focus on threat modeling, so you can find problems systematically and early.</li>
</ul>
<p>I'd love to hear what other steps you think organizations often miss out on.</p>Emergent Musical Chaos2017-10-07T00:00:00Zhttps://shostack.org/blog/emergent-musical-chaos/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/global-jukebox-framed.png" alt="Global Jukebox" class="single-post-main-img" />
<p>The New York Times reports on how many of Alan Lomax's recordings are now online, "<a href="https://www.nytimes.com/2017/07/11/arts/music/alan-lomax-global-jukebox-digital-archive.html" target="_blank">The Unfinished Work of Alan Lomax’s Global Jukebox</a>." This is a very interesting and important archive of musical and cultural heritage. <a href="https://theglobaljukebox.org/" target="_blank">The Global Jukebox</a>. I was going to say that Lomax and <a href="https://en.wikipedia.org/wiki/Harry_Everett_Smith" target="_blank">Harry Smith</a> were parallel, and that the Anthology of American Folk Music is a similar project, but I was wrong, Smith drew heavily on Lomax's work.</p>
<p>Simultaneously, the Internet Archive has been working to put <a href="https://archive.org/details/georgeblood?&sort=-publicdate" target="_blank">78 RPM records online</a>, and has <a href="https://archive.org/details/georgeblood?&sort=-publicdate" target="_blank">a feed</a>.</p>Worthwhile Books, Q32017-10-11T00:00:00Zhttps://shostack.org/blog/worthwhile-books-q3-2017/AdamSome of what I've read over the past quarter and want to recommend as worthy of your time.<span class="single-post-excerpt"><!-- excerpt -->Some of what I've read over the past quarter and want to recommend as worthy of your time.<!-- /excerpt --></span>
<p>Some of what I've read over the past quarter, and want to recommend each of the books below as worthy of your time.</p>
<h3>Cyber</h3>
<ul>
<li><a href="https://www.amazon.com/Internet-Risky-Things-Trusting-Surround/dp/149196362X/ref=as_li_ss_tl?ie=UTF8&linkCode=ll1&tag=adamshostack-20&linkId=d5a7c40e3488fdfc830a5dd24a925bfb" target="_blank">The Internet of Risky Things</a>, Sean Smith. This was a surprisingly good short read. What I gained was an organized way of thinking and a nice reference for thinking through the issues of IOT. Also, the lovely phrase "cyber Love Canal." </li>
<li><a href="https://www.amazon.com/American-Spies-Modern-Surveillance-Should/dp/1107501857/ref=as_li_ss_tl?s=books&ie=UTF8&qid=1505863145&sr=1-1&keywords=American+Spies&linkCode=ll1&tag=adamshostack-20&linkId=19e754e7af89e2c849fe0cf650d6dbcc" target="_blank">American Spies</a>, Jennifer Stisa Granick. Again, surprisingly good, laying out with the logical force that really good lawyers bring, explaining both sides of an issue and then explaining the frame in which you should understand it.</li>
<li><a href="https://unbound.com/books/saving-bletchley-park" target="_blank">Saving Bletchley Park</a>, Sue Black. (Title links to publisher, who sells ebook & print, or you can go to <a href="http://amzn.to/2fVoNUw" target="_blank">Amazon</a>, who only sells the hardback.) The really interesting story of the activism campaign to save <a href="https://bletchleypark.org.uk/" target="_blank">Bletchley Park</a>, which was falling apart 20 years ago. Dr. Black is explicit that she wrote the book to carry the feel of an internet campaign, with some stylistic bits that I found surprising. I was expecting a drier style. Don't make my mistake, and do read the book. Also, visit Bletchley Park: it's a great museum. </li>
</ul>
<h3>Nonfiction, not security</h3>
<ul>
<li><a href="https://www.amazon.com/Black-Box-Thinking-People-Mistakes-But/dp/1591848229/ref=as_li_ss_tl?ie=UTF8&qid=1507498512&sr=8-1&keywords=Black+Box+Thinking&linkCode=ll1&tag=adamshostack-20&linkId=25efd6978484e1a098750241885da487" target="_blank">Black Box Thinking: Why Most People Never Learn from Their Mistakes--But Some Do</a>, Matthew Syed. Mr. Syed is a journalist, and has written a fantastic and highly readable account of how talking about mistakes helps prevent future problems.</li>
<li><a href="https://www.amazon.com/We-Have-No-Idea-Universe/dp/0735211515/ref=as_li_ss_tl?s=books&ie=UTF8&qid=1507498610&sr=1-1&keywords=We+have+No+Idea,+Jorge+Cham&linkCode=ll1&tag=adamshostack-20&linkId=a61d9ae56723e10b19c3c613621a2517" target="_blank">We have No Idea, Jorge Cham</a>, Daniel Whiteson take on what modern physics doesn't know about the world. Great education with enough fun to entertain, not distract. Except maybe the gravity puns. Those could have been spaced out more, so as to not crush the intervening ideas. Because I've updated my models of the universe while reading <em>We Have No Idea</em>, this news was easier to understand "<a href="https://www.newscientist.com/article/2149742-half-the-universes-missing-matter-has-just-been-finally-found/" target="_blank">Half the universe’s missing matter has just been finally found</a>."</li>
</ul>
<h3>Fiction</h3>
<ul>
<li>N. K. Jemisin's Broken Earth Series. Outstanding writing, interesting worldbuilding, and the first two books have both won Hugos. First book is "<a href="https://www.amazon.com/Fifth-Season-Broken-Earth/dp/0316229296/ref=as_li_ss_tl?s=books&ie=UTF8&qid=1507498640&sr=1-1&keywords=the+fifth+season&linkCode=ll1&tag=adamshostack-20&linkId=61cce74b0d870dcc842ee269f8533f73" target="_blank">The Fifth Season</a>." Bump it up in your queue.</li>
<li><a href="http://amzn.to/2y9zpsQ" target="_blank">The Rise and Fall of D.O.D.O</a>, Neal Stephenson and Nicole Galland. I'm not (yet) familiar with <a href="https://www.amazon.com/Nicole-Galland/e/B001JS0XHM/ref=sr_ntt_srch_lnk_1?qid=1507498672&sr=1-1" target="_blank">Galland's wor</a>k, much of which seems to be historical fiction. This fairly breezy and fun time travel read, much less dense than most of Stephenson's recent books.</li>
</ul>
<p>Previously: <a href="https://adam.shostack.org/blog/2017/07/worthwhile-books-q2-2017/">Q2</a>.</p>Building an Application Security Team2017-10-15T00:00:00Zhttps://shostack.org/blog/application-security-team/Jonathan Marcil[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/multi-roles-01-300x178.png" alt="juggling multiple user roles" class="single-post-main-img" />
<p>The Application Security Engineer role is in demand nowadays. Many job offers are available, but actual candidates are scarce. Why is that? It's not an easy task as the level of skills needed is both to be broad and specialized at the same time. Most of the offers are about one person, one unicorn that does all those wonderful things to ensure that the organization is making secure software.</p>
<h3>Looking at where we are coming from</h3>
<p>For the sake of simplicity, let's say that the application security industry is traditionally split between two main types of offering: those who are selling products and those who are selling pen test services. In many occasions both are from the same vendor, but from different teams internally. Let’s break down how they are judged on their success to have an idea how it managed to evolve.</p>
<p>For the products approach, we must concede that the actual work is executed by a computer program. It requires a certain level of intelligence from software, but at least we can have a certain amount of codified knowledge that can scale and work 24/7 to ensure larger coverage. Ironically enough, one of the focus of that machine seems to be somewhat the same as many talented individuals want for themselves: to be right. Success is defined when the tool finds the correct issues, while not annoying people too much with false positive.</p>
<p>For pen test services, work can be over simplified into running tools as well, but the value is also when the amount of false positives is quite low. Areal human that does a proof of concept can ensure that, plus it makes a great thing to show off. Also, a lot of creativity and adapted deep technical knowledge need to be applied dynamically, a thing that most tools don’t achieve at all. The findings will then be amplified and the actual coverage will not matter as much. A pen test can be deemed successful if it finds a few critical issues without having a 100% coverage. To be fair, many pen test services will try to achieve full coverage. They will also work in a black box model, making it a difficult endeavor. The evolution of pen testing seems to be heading toward acting as a Red Team; a team that focuses on mimicking what attackers does. In this case, all we need for success is one or a few open doors in the right place at the right time.</p>
<p>Both tools and pen testing service are useful and should be the first thing you try when you want to address the "security problem". They also both always provides recommendations on how to fix. Those recommendations tend to be simple and tied only to issues found. They might not be on point with identifying the root cause of the problem and often come too late in the development process. Since security tools and pen testers know best about security, and that knowledge is humongous to grasp, not much is left for the knowledge of building secure software. That is part of defining what is success in the software industry and, while they are here to help, the security industry has it as a byproduct of its own success.</p>
<p>It makes sense that an industry who wants solve a problem, tries to solve it like it has always done before: by using exceptional talents doing over-specialization. Unfortunately, the efforts required to fix are not from the realm of security engineering, but from the broader realm of software engineering. This is yet another large body of knowledge where only a fraction overlap with security.</p>
<h3>A partial view of the industry</h3>
<p>So before going on with what I personally think, I want to explore how the industry defines the role and tasks of an application security engineer.</p>
<p>Fair warning, what I've done here is a totally biased research™. The bias I can see is the following: I’ve collected all the job offers that one individual has received in the past two years while identifying as Sr. Application Security Engineer on a popular social media platform that is the hunting ground for recruiters. It is not representative of the industry at wide at all, but rather a view of what someone would attract in a specific geographical region. What I like about that bias, is that it expresses a consensus of people looking for the same thing, so really pin pointed at what I want to explicit in this post.</p>
<p>Let’s look at what trend we can extract from a total number of 57 job offers.</p>
<ul>
<li>35% (20) have the title “Application Security Engineer”</li>
<li>11% (6) have the title “Application Security Architect”</li>
<li>4% (2) includes “Product Security” in the title instead of "Application Security"</li>
<li>7% (4) are about management or the title “Application Security Manager”</li>
<li>21% (12) have senior in the title</li>
<li>11% (6) have the title “Security Engineer”</li>
<li>2% (1) have the title “Software Engineer”</li>
<li>19% (11) talk about an AppSec program, SDL or SDLC</li>
<li>5% (3) talk about pen testing</li>
<li>18% (10) talk about code review</li>
<li>9% (5) talk about threat modeling</li>
<li>12% (7) perform development/building tasks</li>
<li>26% (15) are far from the regional position</li>
<li>35% (20) didn't provide details about the role</li>
<li>3% (3) mentioned an application security team</li>
</ul>
<p>A few years ago, I was barely seeing the “Application Security Engineer” title, but now it has become more predominant. That change had probably the effect to put the role in a high demand situation as few people has it in their background history.</p>
<p>For more complete offerings that include many activities such as threat modeling and code review, the offer is also low, as it is hard to find a single individual that can and want to tackle everything on the table. The fact that we rely on one role also make it hard to institutionalize, and that doesn’t help at all at the resource management problem we now face with that offer-to-demand ratio.</p>
<p>The core of this current blog post is about building a team, and that’s for me a solution for easier recruiting and better security impact overall.</p>
<h3>Enough with contextualization, let’s go on the subject!</h3>
<p>So here we go, here's the list of 11 roles I can think of, with sample responsibilities and skills requirements. A role in this case is not a job title but rather a label that is easy to use and refer to.</p>
<p>In order for the list to make sense, you must have some prior knowledge in application security programs that define diverse activities during the software development life cycle (SDLC). <a href="http://www.opensamm.org/" target="_blank">OpenSAMM</a> and <a href="https://www.bsimm.com/about.html" target="_blank">BSIMM</a> are good ressources to help you enumerate and understand those practices.</p>
<table style="text-align: center">
<thead>
<tr>
<th>Roles</th>
<th>Responsibilities</th>
<th>Requirements</th>
</tr>
</thead>
<tbody>
<tr>
<td>The Hacker<br /><br /><img class="aligncenter wp-image-7648" title="Icon made by https://twitter.com/ from www.flaticon.com License: CC 3.0 BY" src="https://shostack.org/blog/img/2017/011-spy-7-150x150.png" alt="" width="100" height="100" /></td>
<td>Break the applications<br /><br />Do manual and dynamic security testing<br /><br />Provide insight on threats</td>
<td>Participated in bug bounties in the past<br /><br />Knows the threats related to your target applications</td>
</tr>
<tr>
<td>The Secure Developer<br /><br /><img class="alignnone wp-image-7649" title="Icon made by https://twitter.com/ from www.flaticon.com License: CC 3.0 BY" src="https://shostack.org/blog/img/2017/111-geek-8-150x150.png" alt="" width="100" height="100" /></td>
<td>Code Review<br /><br />Guideline for fixing<br /><br />Contribute to code changes if needed</td>
<td>Deep knowledge of tech stack you are using<br /><br />Knows the pitfalls in programming languages and frameworks</td>
</tr>
<tr>
<td>The Organizer<br /><br /><img class="aligncenter wp-image-7659" title="Icon made by https://twitter.com/ from www.flaticon.com License: CC 3.0 BY" src="https://shostack.org/blog/img/2017/158-telemarketer-5-150x150.png" alt="" width="100" height="100" /></td>
<td>Keep in touch with development & other security teams<br /><br />Manage schedule and planning</td>
<td>Strong communication skills<br /><br />Project management experience</td>
</tr>
<tr>
<td>The Architect<br /><br /><img class="aligncenter wp-image-7660" title="Icon made by https://twitter.com/ from www.flaticon.com License: CC 3.0 BY" src="https://shostack.org/blog/img/2017/037-worker-5-150x150.png" alt="" width="100" height="100" /></td>
<td>Lead Threat Modeling<br /><br />Steer secure architecture changes in the entreprise</td>
<td>Experiences with systems related to the target applications<br /><br />Good with creation of diagrams</td>
</tr>
<tr>
<td>The Automation Engineer<br /><br /><img class="wp-image-7661 aligncenter" title="Icon made by https://twitter.com/ from www.flaticon.com License: CC 3.0 BY" src="https://shostack.org/blog/img/2017/077-mechanic-9-150x150.png" alt="" width="100" height="100" /></td>
<td>Integrate tools in CI/CD<br /><br />Work on automation of the team's tasks</td>
<td>Proficiency in scripting language<br /><br />Experience in DevOps</td>
</tr>
<tr>
<td>The Vulnerability Researcher<br /><br /><img class="wp-image-7662 aligncenter" title="Icon made by https://twitter.com/ from www.flaticon.com License: CC 3.0 BY" src="https://shostack.org/blog/img/2017/236-scientist-150x150.png" alt="" width="100" height="100" /></td>
<td>Digg in the details of some systems<br /><br />Elaborate proof of concepts when needed<br /><br />Find pin pointed vulnerabilities</td>
<td>Deep knowledge of operating system internals<br /><br />Knowledge of exploitation techniques</td>
</tr>
<tr>
<td>The Apprentice<br /><br /><img class="aligncenter wp-image-7664" title="Icon made by https://twitter.com/ from www.flaticon.com License: CC 3.0 BY" src="https://shostack.org/blog/img/2017/228-graduated-150x150.png" alt="" width="100" height="100" /></td>
<td>Anything to assist other roles<br /><br />Help with team motivation and staying current</td>
<td>Is highly motivated by the objectives of the team<br /><br />Quick learner</td>
</tr>
<tr>
<td>The Cheerleader<br /><br /><img class="aligncenter wp-image-7663" title="Icon made by https://twitter.com/ from www.flaticon.com License: CC 3.0 BY" src="https://shostack.org/blog/img/2017/195-singer-3-150x150.png" alt="" width="100" height="100" />
</td>
<td>Provide positive focus for the team<br /><br />Ensure team motivation and well-being<br /><br />Detail the service offering of the team to customers</td>
<td>High charisma<br /><br />Marketing background<br /><br />Good at karaoke</td>
</tr>
<tr>
<td>The Teacher<br /><br /><img class="aligncenter wp-image-7665" title="Icon made by https://twitter.com/ from www.flaticon.com License: CC 3.0 BY" src="https://shostack.org/blog/img/2017/241-teacher-150x150.png" alt="" width="100" height="100" /></td>
<td>Prepare and execute training<br /><br />Identify what needs attention for education</td>
<td>Great communication skills<br /><br />Good charisma<br /><br />Knowledgeable in security</td>
</tr>
<tr>
<td>The Documentation Artist<br /><br /><img class="aligncenter wp-image-7666" title="Icon made by https://twitter.com/ from www.flaticon.com License: CC 3.0 BY" src="https://shostack.org/blog/img/2017/146-artist-6-150x150.png" alt="" width="100" height="100" /></td>
<td>Helper for training and documentation<br /><br />Design and polish documents</td>
<td>Some Photoshop skills<br /><br />Some Office suite skills<br /><br />Reads a lot about security and/or programming</td>
</tr>
<tr>
<td>The AppSec Lead<br /><br /><img class="wp-image-7667 aligncenter" title="Icon made by https://twitter.com/ from www.flaticon.com License: CC 3.0 BY" src="https://shostack.org/blog/img/2017/135-pilot-6-150x150.png" alt="" width="100" height="100" /></td>
<td>Builds the Application Security Program<br /><br />Can do most of the other tasks by the team</td>
<td>Proven vision and leadership skills<br /><br />Knowledge of SDLC<br /><br />Knowledge of Security</td>
</tr>
</tbody>
</table>
<p>Of course, those are stereotypical as a real person can take on many roles and responsibilities can be swapped between team members. Too many roles played at the same time (over a quarter or sprint, depending on your velocity) will dilute the competence of the person. Having one person do everything might make sense when the target size of the customer pool is small (<10) but doesn’t cut it on large teams.</p>
<p>To be successful, I'm arguing that only some combinations are to be considered optimal, that is called the Most Effective Tactic Available (META). That META will surely change with the evolution of the industry and the actual application security practice in the target enterprise.</p>
<p>For example, you probably need way more junior level technical people first if the company has never taking care of software security before. They will be great at finding low hanging fruits. They can even pin point emergent security issues, that will seem simple to them, in a complex system by having an external point of view. The good thing about this, is that over time, those people will evolve into more senior roles. The team will then have less junior roles as the problems left will be harder to solve.</p>
<p>I’m proposing that those META should be relative to the maturity level of the security program in the organization. The exact number of individuals needed can be calculated from the number of developers†. Every entreprise will be different, but here's some examples that are realistic for a target application security team of about 4 to 6 people:</p>
<ul>
<li>The organization is new to Security; 3 Hackers, 2 Secure Developers, 1 Organizer (0 need for advanced roles such as Architect or Vulnerability Researcher)</li>
<li>The organization already is composed of network security people; 2 Hackers, 2 Secure Developer (the Organizer will be part of what is already in place)</li>
<li>The organization already have at least 2 well established Hackers (known as "pen testers") but none of the following; 2 Secure Developers, 2 Organizers</li>
<li>The organization wants to focus on automation; 1 Hackers, 1 Secure Developer, 2 Automation Engineers</li>
</ul>
<p>The reason you don't see an AppSec lead in the small proposed teams above is because that role can be seen as your wild card that can be used to replace single or even multiple roles. For example, you can probably exchange 1 Hacker and 1 Secure Developer for 1 AppSec Engineer for a good trade-off. We often see teams composed uniquely of those wild cards, but in this case the problem will be to have it grow to more than a couple members. If you want growth and have the enterprise establish a strong team over time, then the diverse META based approach will get you there.</p>
<p>Getting the right META for a particular group of customers is a difficult task. It can require a lot of observation of the current workplace reality at the enterprise and target group level. It is wrong to say that one META fits all, but you can sure find yours based on objectives and current security practices already in place.</p>
<p>There’s also a few good team tactics I can think of that can push the outcome of certain tasks to the next level. The synergy that you can get from team work will make the effective results better than the sum of its parts. Here's a few examples:</p>
<ul>
<li>The Hacker and the Vulnerability Researcher provide the Architect with threat intelligence
<ul class="sublist">
<li style="list-style-type: '➢ '">Then the Architect has access to more knowledge regarding threat actors and their capabilities</li>
</ul>
</li>
<li>The Secure Developer works with the Hacker and Vulnerability Researcher
<ul class="sublist">
<li style="list-style-type: '➢ '">Then the Secure Developer can implement more solid security controls by having them based on vulnerability knowledge and tested</li>
</ul>
</li>
<li>The Hacker works on finding vulnerability in a horizontal fashion (multiple applications) while the Vulnerability Researcher works vertically (only one application)
<ul class="sublist">
<li style="list-style-type: '➢ '">The combinaison of the two will ensure better coverage as one dimension is often not enough</li>
</ul>
</li>
<li>The Organizer and the Cheerleader explains to less technically savvy people the impact of vulnerabilities and gain of fixing
<ul class="sublist">
<li style="list-style-type: '➢ '">The fixing rate ratio will be higher for findings by the Hacker and proposed fixes by the Secure Developer</li>
</ul>
</li>
<li>The Architect validate its security decisions with the Vulnerability Researcher, the Hacker and the Secure Developer
<ul class="sublist">
<li style="list-style-type: '➢ '">This makes the decisions stronger and introduce peer review within the team for more quality</li>
</ul>
</li>
<li>The Automation Engineer participate in finding issues with the Hacker, the Secure Developer or the Vulnerability Researcher
<ul class="sublist">
<li style="list-style-type: '➢ '">This will make him better at creating or adapting rules for the tools</li>
</ul>
</li>
<li>The Teacher and the Documentation Artist work with everyone to gather needs based on security findings and best practices
<ul class="sublist">
<li style="list-style-type: '➢ '">This ensure a consensus between the team in the vision and message communicated to software engineering teams</li>
</ul>
</li>
<li>The Cheerleader ensure that everyone else treats the customers with respect and good energy
<ul class="sublist">
<li style="list-style-type: '➢ '">This minimize the impact of the "fix your sh*t" that we see so often by Hackers that are lacking empathy</li>
</ul>
</li>
<li>The AppSec lead utilize any other team member to empower its strengths and complement its weaknesses, while also covering the various responsibilities and tasks for an unfilled role
<ul class="sublist">
<li style="list-style-type: '➢ '">This will ensure that no AppSec activities are left untouched as their usage can be beneficial even if done to a lesser degree than someone dedicated</li>
</ul>
</li>
</ul>
<h3>Conclusion</h3>
<p>It is hard if not realistic to gather a full team that fits exactly what you need. Most of the successful combinations will evolve around roles resembling those presented here. Asking all that knowledge, workload and versatility from one person will almost end up in making concession or burning out the candidate. It will inevitably make the growth of the application security team past a certain size very difficult if not impossible.</p>
<p>Bottom line, to achieve a great implementation of an application security strategy, diversity and team work is a must.
</p><blockquote>† BSIMM 8, a quantitative study of the software security industry, state that the average percentage ratio of software security people to developers is 1.60% with a median of 0.88%. Note that I think that this should be a minimum as proper staffing is a hard requirement for great success. The average size of a Software Security Group (or AppSec team) observed in BSIMM is 11.6, a pure coincidence related to my proposal of 11 roles.</blockquote>Threat Modeling 'App Democracy'2017-10-20T00:00:00Zhttps://shostack.org/blog/threat-modeling-app-democracy/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>"<a href="http://prawfsblawg.blogs.com/prawfsblawg/2017/10/direct-republican-democracy-.html" target="_blank">Direct Republican Democracy?"</a> is a fascinating post at Prawfsblog, a collective of law professors. In it, Michael T. Morley describes a candidate for Boulder City Council with a plan to vote "the way voters tell him," and discusses how that might not be really representative of what people want, and how it differs from (small-r) republican government. Worth a few moments of your time.</p>20 Year Software: Engineering and Updates2017-10-23T00:00:00Zhttps://shostack.org/blog/20-year-software-engineering-and-updates/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/Office2011expires_framed.png" alt="Office 2011 expires" class="single-post-main-img" />
<p>Twenty years ago, Windows 95 was the most common operating system. Yahoo and Altavista were our gateways to the internet. Steve Jobs just returned to Apple. Google didn't exist yet. America Online had just launched their Instant Messenger. IPv6 was coming soon. That's part of the state of software in 1997, twenty years ago. We need to figure out what engineering software looks like for a twenty year lifespan, and part of that will be really doing such engineering, because theory will only take us to the limits of our imaginations.</p>
<p>Today, companies are selling devices that will last twenty years in the home, such as refrigerators and speakers, and making them with network connectivity. That network connectivity is both full internet connectivity, that is, Internet Protocol stacks, and also local network connectivity, such as Bluetooth and Zigbee.</p>
<p>We have very little idea how to make software that can survive as long as AOL IM did. (It's going away in December, if you missed the <a href="https://techcrunch.com/2017/10/06/aol-instant-messenger-shut-down/" target="_blank">story</a>.) </p>
<p>Recently, there was a news cycle around Sonos updating their privacy policy. I don't want to pick on Sonos, because I think what they're experiencing is just a bit of the future, unevenly distributed. Responses such as this were common: "<u>Hardware maker: Give up your privacy and let us record what you say in your home, or we’ll destroy your property</u>:" [link to https://www.privateinternetaccess.com/blog/2017/08/hardware-maker-give-up-your-privacy-and-let-us-record-what-you-say-in-your-home-or-well-destroy-your-property/ no longer works]</p>
<blockquote>“The customer can choose to acknowledge the policy, or can accept that over time their product may cease to function,” the Sonos spokesperson said, specifically.</blockquote>
<p>Or, as the Consumerist, part of Consumer Reports, puts it in "<a href="https://consumerist.com/2017/08/23/sonos-holds-software-updates-hostage-if-you-dont-sign-new-privacy-agreement/" target="_blank">Sonos Holds Software Updates Hostage If You Don’t Sign New Privacy Agreement:</a>"</p>
<blockquote>Sonos hasn’t specified what functionality might no longer work in the future if customers don’t accept the new privacy policy.</blockquote>
<p>There are some real challenges here, both technical and economic. Twenty years ago, we didn't understand double-free or format string vulnerabilities. Twenty years of software updates aren't going to be cheap. (I wrote about the economics in "<a href="https://adam.shostack.org/blog/2017/06/maintaining-updating-software/">Maintaining & Updating Software</a>.")</p>
<p>My read is that Sonos tried to write a privacy policy that balanced their anticipated changes, including Alexa support, with a bit of future-proofing. I think that the reason they haven't specified what might not work is because they really don't know, because nobody knows. </p>
<p>The image at the top is the sole notification that I've gotten that Office 2011 is no longer getting security updates. (Sadly, it's only shown once per computer, or perhaps once per user of the computer.) Microsoft, like all tech companies, will cut functionality that it can't support, like <a href="https://www.macworld.com/article/1154785/business/welcomebackvisualbasic.html" target="_blank">Visual Basic for Mac</a> and also "end of lifes" its products. They do so on a published timeline, but it seems wrong to apply that to a refrigerator, end of lifeing your food supply. </p>
<p>There's probably a clash coming between what's allowable and what's economically feasible. If your contract says you can update your device at any time, it still may be beyond "the corners of the contract" to shut it off entirely. Beyond economically challenging, it may not even be technically feasible to update the system. Perhaps the chip is too small, or its power budget too meager, to always connect over TLS4.2, needed addresses the SILLYLOGO attack. </p>
<p>What we need might include:</p>
<ul>
<li>A Dead Software Foundation, dedicated to maintaining the software which underlies IoT devices for twenty years. This is not only the Linux kernel, but things like tinybox and openssl. Such a foundation could be funded by organizations shipping IoT devices, or even by governments, concerned about the externalities, what Sean Smith called "the Cyber Love Canal" in <a href="https://www.amazon.com/Internet-Risky-Things-Trusting-Surround/dp/149196362X/ref=as_li_ss_tl?ie=UTF8&linkCode=ll1&tag=adamshostack-20&linkId=d5a7c40e3488fdfc830a5dd24a925bfb" target="_blank">The Internet of Risky Things</a>. The Love Canal analogy is apt; in theory, the government cleans up after the firms that polluted are gone. (The practice is far more complex.)</li>
<li>Model architectures that show how to engineer devices, such as an internet speaker, so that it can effectively be taken offline when the time comes. (There's work in the mobile app space on making apps work offline, which is related, but carries the expectation that the app will eventually reconnect.)</li>
<li>Conceptualization of the legal limits of what you can sign away in the fine print. (This may be everything; between severability and arbitration clauses, the courts have let contract law tilt very far towards the contract authors, but Congress did step in to write the <a href="https://www.ftc.gov/tips-advice/business-center/guidance/consumer-review-fairness-act-what-businesses-need-know" target="_blank">Consumer Review Fairness Act</a>.) The FTC has <a href="https://www.ftc.gov/news-events/press-releases/2017/06/ftc-offers-comment-process-aimed-improving-security-internet" target="_blank">commented</a> on issues of device longevity, but not (afaik) on limits of contracts.</li>
</ul>
<p>What else do we need to build software that survives for twenty years?</p>Emergent Design Issues2017-10-29T00:00:00Zhttps://shostack.org/blog/emergent-design-issues/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/Powerpoint-Mouseover_Fotor-1.jpg" alt=" " class="single-post-main-img" />
<p>It seems like these days, we want to talk about everything in security as if it's a vulnerability. For example:</p>
<blockquote>German researchers have discovered security flaws that could let hackers, spies and criminals listen to private phone calls and intercept text messages on a potentially massive scale – even when cellular networks are using the most advanced encryption now available.
<br />...<br />
Experts say it’s increasingly clear that SS7, first designed in the 1980s, is riddled with serious vulnerabilities that undermine the privacy of the world’s billions of cellular customers. The flaws discovered by the German researchers are actually functions built into SS7 for other purposes – such as keeping calls connected as users speed down highways, switching from cell tower to cell tower – that hackers can repurpose for surveillance because of the lax security on the network. ("<a href="https://www.washingtonpost.com/news/the-switch/wp/2014/12/18/german-researchers-discover-a-flaw-that-could-let-anyone-listen-to-your-cell-calls-and-read-your-texts/" target="_blank">German researchers discover a flaw that could let anyone listen to your cell calls.</a>" Washington Post, 2014).</blockquote>
<p>But these are not vulnerabilities, because we can have endless debate about it they should be fixed. (Chrome exposing passwords is another example.) If they're not vulnerabilities, what are they? Perhaps they're flaws? One definition of flaws reads:</p>
<blockquote>"Flaws are often much more subtle than simply an off-by-one error in an array reference or use of an incorrect system call," the report notes. "A flaw might be instantiated in software code, but it is the result of a mistake or oversight at the design level."
<br /><br />
An example of such a flaw noted in the report is the failure to separate data and control instructions and the co-mingling of them in a string - a situation that can lead to injection vulnerabilities. (<a href="http://www.securityweek.com/ieee-report-reveals-top-10-software-security-design-flaws" target="_blank">IEEE Report Reveals Top 10 Software Security Design Flaws</a>)</blockquote>
<p>In this sense, the SS7 issues are probably not "flaws" in the sense that the system behavior is unanticipated. But we don't know. We don't know what properties we should expect SS7 to have. For most software, the design requirements, the threat model, is not clear or explicit. Even when it's explicit, it's often not public. (<a href="https://securityintelligence.com/ss7-vulnerability-isnt-a-flaw-it-was-designed-that-way/" target="_blank">Larry Loeb makes the same point here</a>.)</p>
<p>For example, someone decided to write code to run a program on mouse over in Powerpoint, that code was tested, dialog text was written and internationalized, and so on. Someone <a href="https://support.office.com/en-us/article/Start-a-program-during-your-presentation-3854cb18-fcf0-4a18-a990-71c5149667df" target="_blank">documented</a> it, and it's worth pointing out that the documentation doesn't apply to Powerpoint 2016. Was there a debate over the security of that feature when it shipped? I don't know. When it was removed? Probably.</p>
<p>There's a set of these, and I'm going to focus on how they manifest in Windows for reasons that I'll get to. Examples include:</p>
<ul>
<li>DLL hijacking: Eric Lawrence <a href="https://twitter.com/adamshostack/status/703307463831871488" target="_blank">asked</a>, "How long until Microsoft stops pretending DLL Hijacking isn't a problem and introduces a new EXECUTABLE CHARACTERISTICS flag SYS_DLLS_ONLY?" </li>
<li>Insecure ActiveX Controls: Michael Howard says he <a href="https://blogs.msdn.microsoft.com/michael_howard/2016/06/03/never-thought-id-still-be-dealing-with-this-insecure-activex-controls/" target="_blank">Never Thought I’d Still be Dealing with This: Insecure ActiveX Controls!</a></li>
<li>Whitelist bypass:
<u>Bypass Application Whitelisting Script Protections - Regsvr32.exe & COM Scriptlets (.sct files)</u> [link to http://subt0x10.blogspot.com/2017/04/bypass-application-whitelisting-script.html no longer works] </li>
<li>BreenMachine's set of <a href="https://foxglovesecurity.com/2016/01/16/hot-potato/" target="_blank">Hot Potato – Windows Privilege Escalation</a></li>
<li>DDE messages, as exploited in "<a href="https://sensepost.com/blog/2017/macro-less-code-exec-in-msword/" target="_blank">Macro-less Code Exec in MSWord</a>."</li>
<li>"<a href="http://georgemauer.net/2017/10/07/csv-injection.html" target="_blank">The Absurdly Underestimated Dangers of CSV Injection</a>," by George Mauer</li>
<li>Confused deputy attacks such as Zack Whittacker's "<a href="http://www.zdnet.com/article/windows-attack-can-steal-your-username-password-and-other-logins/" target="_blank">Microsoft won't fix Windows flaw that lets hackers steal your username and password</a> (NTLM creds to internet share); but they took the effort to put the creds into a separate virtual machine. There's also a confused deputy as a building block in Mauer's CSV injection.</li>
<li>File extension hiding and doubled extensions. (It's not even worth a link.</li>
<li><del><a href="https://blogs.technet.microsoft.com/msrc/2011/02/08/deeper-insight-into-the-security-advisory-967940-update/" target="_blank">Autorun</a></del></li>
</ul>
<p>The reason I'm looking at these is because design questions like these emerge when a system is successful. Whatever else you want to say about it, Windows was successful and very widely deployed. As a system becomes more successful, the easily exploitable bugs are fixed, and the hard to fix design tradeoffs become relatively more important. As I wrote in "<a href="https://shostack.org/archive/2015/11/the-evolution-of-secure-things/">The Evolution of Secure Things</a>:"</p>
<blockquote>It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.</blockquote>
<p>That chaotic real world exposes a set of issues that may or may not have been visible during product design. In threat modeling, identification of issues is the most crucial step. If you fail to identify issues, you will not manage those issues well. Another way to say that is: identifying issues is a necessary but not sufficient step. </p>
<p>The design choices listed above almost all predate threat modeling as a structured practice at Microsoft. But there have been other choices, like <a href="https://arstechnica.com/gadgets/2015/07/wi-fi-sense-in-windows-10-yes-it-shares-your-passkeys-no-you-shouldnt-be-scared/" target="_blank">Windows Wifi sense</a> or <a href="https://www.extremetech.com/computing/243079-upcoming-windows-update-reduces-spying-microsoft-still-mum-data-collects" target="_blank">new</a> <a href="http://www.zdnet.com/article/no-microsoft-is-not-spying-on-you-with-windows-10/" target="_blank">telemetry</a> in Windows 10. We can disagree with those design choices, but it's clear that there were internal discussion of the right business tradeoffs. So we go back to the definition of a flaw, "a mistake or oversight at the design level." These were not oversights. Were they design mistakes? That's harder. The designers knew exactly what they were designing, and the software worked as planned. It was not received as planned, and it is certainly being used in unexpected ways.</p>
<p>There are interesting issues of composition, especially in backup authentication. That problem is being exploited in <a href="https://www.nytimes.com/2017/08/21/business/dealbook/phone-hack-bitcoin-virtual-currency.html" target="_blank">crypto currency thefts</a>:</p>
<blockquote>Mr. Perklin and other people who have investigated recent hacks said the assailants generally succeeded by delivering sob stories about an emergency that required the phone number to be moved to a new device — and by trying multiple times until a gullible agent was found.
<br /><br />
“These guys will sit and call 600 times before they get through and get an agent on the line that’s an idiot,” Mr. Weeks said.
<br /><br />
Coinbase, one of the most widely used Bitcoin wallets, has encouraged customers to disconnect their mobile phones from their Coinbase accounts.</blockquote>
<p>One can imagine a lot of defenses, but "encouraging" customers to not use a feature may not be enough. As online wallet companies grow, they need to have threat modeled better, and perhaps that entails turning off the feature. (I don't know their businesses well enough to simply assert an answer.)</p>
<p>In summary, we're doing a great job at finding and squishing bugs, and that's opening up new and exciting opportunities to think more deeply about design issues.</p>
<p class="credit-text">PowerPoint <a href="https://twitter.com/subTee/status/899283365647458305/photo/1" target="_blank">Screen capture</a> via Casey Smith.</p>
<p>[Update Dec 13: After a conversation with Gary McGraw, I think I may have read the CSD definition of flaw too narrowly.]</p>Why is 'Reply' Not the Strongest Signal?2017-11-02T00:00:00Zhttps://shostack.org/blog/why-is-reply-not-the-strongest-signal/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>So apparently my "friends" at outlook.com are marking my email as junk today, with no explanation. They're doing this to people who have sent me dozens of emails over the course of months or years.</p>
<p>Why does no spam filter seem to take repeated conversational turns into account? Is there a stronger signal that I want to engage with someone than...repeatedly engaging?</p>Data Flow Diagrams 3.02017-11-05T00:00:00Zhttps://shostack.org/blog/data-flow-diagrams-3-0/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/DFD-framed-1.png" alt="data flow diagram sample" class="single-post-main-img" />
<p>In the <a href="http://brakeingsecurity.com/2017-036-adam-shostack-talks-about-threat-modeling-and-how-to-do-it-properly" target="_blank">Brakesec podcast</a>, I used a new analogy for why we need to name our work. When we talk about cooking, we have very specific recipes that we talk about: Julia Child's beef bourguignon. Paul Prudhomme's blackened fish. We hope that new cooks will follow the recipes until they get a feel for them, and that they can then start adapting and modifying them, as they generate mental models of what they're doing.</p>
<p>But we talk about threat modeling we don't label our recipes. We say this is how to threat model, as if that's not as broad as "this is how to cook."</p>
<p>And in that podcast, I realized that I've been guilty of definition drift in how I talk about data flow diagrams. Data flow diagrams, DFDs are also called 'threat model diagrams' because they're so closely associated with threat modeling. And as I've used them over the course of a decade, there have been many questions:</p>
<ul>
<li>Do you start with a context diagram?</li>
<li>What's a multi-process, and when should I use one?</li>
<li>Do I really need to draw single-headed arrows? They make my diagram hard to read!</li>
<li>Is this process inside this arc? Is an arc the best way to show a trust boundary?</li>
<li>Should I color things?</li>
</ul>
<p>Those questions I've initiated changes, such as showing a process as a rounded rectangle (versus a circle), eliminating rules such as all arrows are uni-directional, and advocating for trust boundaries as labeled boxes.</p>
<p>What I have not done is been crisp about what these changes are in a way that lets a team say "we use v3 DFDs" the way they might say "we use Python 3." (ok, no one says either, I know!) </p>
<p>I'm going to retroactively label all of these changes as DFD3.0. DFD v1 was a 1970s construct. DFD2 was the critical addition of trust boundaries. And a version 3 DFD is defined as follows:</p>
<ol>
<li>It uses 5 symbols. A rectangle represents an external entity, a person or code outside your control. A rounded rectangle represents a process. They're connected by arrows, which can be single or double headed. Data stores are represented by parallel lines. A trust boundary is a closed shape, usually a box. All lines are solid, except those used for trust boundaries, which are dashed or dotted. (There is no "multi-process" symbol in DFD3.)</li>
<li>It must not* depend on the use of color, but can use color for additional information.</li>
<li>All elements should have a label.</li>
<li>You may have a context diagram if the system is complex. One is not required.</li>
</ol>
<p>* Must, must not, should, should not are used per IETF norms.</p>
<p>This also allows us to talk about what might be in a DFD3.1. I know that I usually draw disks with the "drum" symbol, and I see a lot of people using that. It seems like a reasonable addition.</p>
<p>Using specific naming also allows us to fork. If you want to define a different type of DFD, have at it. If we have a bunch, we can figure out how keep things clear. Oh, and speaking of forking, I put this on github: <a href="https://github.com/adamshostack/DFD3/" target="_blank">DFD3</a>.</p>
<p>Using specific naming allows us to talk about testing and maturity in the sense of "this is in alpha test." "This has been used for several years, we took feedback, adjusted, and now it's release quality." I think that DFD3 is release quality, but it probably needs some beta testing for the definitions.</p>
<p>Similarly, DREAD has a bunch of problems, including a lack of definition. I use mention of DREAD as a way to see if people are threat modeling well. And one challenge there is that people silently redefine DREAD to mean something other than what it meant when Michael Howard and David LeBlanc talked about it in Writing Secure Code (2nd ed, 2003). If you want to build something new, your customers and users need to understand that it's new, so they don't get confused by it. Therefore, you need to give your new thing a new name. You could call it DREAD2, a DRE4D, DRECK, I don't really care. What I care about is that it's easily distinguished, and the first step towards that is a new name. </p>
<p>[Update: <strong>What's most important is not the choices that I've made for what's in DFD3, but the grouping of those choices into DFD3</strong>, so that you can make your own choices and our tools can compete in the market.</p>The Fights We Have to Fight: Fixing Bugs2017-11-08T00:00:00Zhttps://shostack.org/blog/the-fights-we-have-to-fight-fixing-bugs/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>One of the recurring lessons from Petroski is how great engineers overcome not only the challenges of physical engineering: calculating loads, determining build orders, but they also overcome the real world challenges to their ideas, including financial and political ones. For example:</p>
<blockquote>Many a wonderful concept, beautifully drawn by an inspired structural artist, has never risen off the paper because its cost could not be justified. Most of the great bridges of the nineteenth century, which served to define bridge building and other technological achievements for the twentieth century, were financed by private enterprise, often led by the expanding railroads. Engineers acting as entrepreneurs frequently put together the prospectuses, and in some cases almost single-handedly promoted their dreams to the realists. [...] Debates over how to pay for them were common. (<em><a href="http://amzn.to/2zhxpgZ" target="_blank">Engineers of Dreams: Great Bridge Builders and the Spanning of America</a></em>, Henry Petroski)</blockquote>
<p>Many security professionals have a hobby of griping that products get rushed to market, maybe to be secured later. We have learned to be more effective at building security in, and in doing so, reduce product costs and increase on-time delivery. But some products were built before we knew how to do that, and others are going to get built by companies who choose not to do that. And in that sense, Collin Greene's retrospective, "<a href="https://medium.com/@collingreene/fixing-security-bugs-5b5172adadf0" target="_blank">Fixing Security Bugs</a>" is very much worth your time. It's a retrospective on the Vista security program from a pen-test perspective.</p>
<blockquote>Hacking: Exciting. <br />
Finding bugs: Exciting. <br />
Fixing those bugs: Not exciting. <br />
The thing is, the finish line for our job in security is getting bugs fixed¹, not just found and filed. Doing this effectively is not a technology problem. It is a communications, organizational² and psychology problem.</blockquote>
<p>I joined Microsoft while the Vista pen test was finishing up, and so my perspective is complimentary. I'd like to add a few additional perspectives to his points.</p>
<p>First, he asks "is prioritization correct?" After Vista, the SDL team created security bug bars, and then later refined them to align with the MSRC update priorities. That alignment with the MSRC priorities was golden. It made it super-clear that if you didn't fix this before ship, you were going to have to do an update later. As a security engineer, you need to align your prioritization to the all up delivery priorities. Having everything be "extremely critical," "very critical," or "moderately critical" means you don't know what matters, and so nothing does.</p>
<p>Second, "why security matters" was still a fight to be fought in Vista. By Windows 7, security had completed its "move left." The spec form contained sections for security and privacy. Threat model review was a gate for start of coding. These process changes happened while developers were "rebelling" against Vista's "overweight" engineering process. They telegraphed that security mattered to management and executives. As a security engineer, you need to get management to spend time talking about how security is balanced with other priorities.</p>
<p>Third, he points out that escalating to a manager can feel bad, but he's right: "Often the manager has the most context on priorities." Management saying "get this fixed" is an expression of prioritization. If you've succeeded in your work on "why security matters," then management will know that they need to reinforce that message. Bringing the issues to them, responsibly, helps them get their job done. If it feels bad to escalate, then it's worth asking if you have full buy in on security.</p>
<p>Now, I'm talking about security as if it matters to management. More and more, that's the case. Something in the news causes leadership to say "we have to do better," and they believe that there are things that they can do. In part that belief is because very large companies have been talking about how to make it work. But when that belief isn't there, it's your job as an engineer to, as Petroski says, single-handedly promote your dreams to the realists. Again, Greene's post is full of good ideas.</p>
<p>Lastly, not everything is a bug. I discussed vulnerabilities versus design recently in "<a href="https://adam.shostack.org/blog/2017/10/emergent-design-issues/">Emergent Design Issues</a>." </p>
<p class="credit-text">(Photo: <a href="https://www.pexels.com/photo/black-and-brown-insect-37733/" target="_blank">https://www.pexels.com/photo/black-and-brown-insect-37733/</a>)</p>Microsoft's PCI Blueprint2017-11-14T00:00:00Zhttps://shostack.org/blog/microsofts-pci-bluepprint/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p><a href="http://aka.ms/pciblueprint" target="_blank">http://aka.ms/pciblueprint</a> is a fascinating collection of security documents for PCI compliance. They're designed to cut the cost of building a secure infrastructure by providing a design pattern and details.</p>Vulnerabilities Equities Process and Threat Modeling2017-11-20T00:00:00Zhttps://shostack.org/blog/vulnerabilities-equities-process-and-threat-modeling/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/balanced_rocks_david_clode_framed.jpg" alt="balanced rocks" class="single-post-main-img" />
<p>[Update: More at DarkReading, "<a href="https://www.darkreading.com/perimeter/the-critical-difference-between-vulnerabilities-equities-and-threat-equities/a/d-id/1330521" target="_blank"> The Critical Difference Between Vulnerabilities Equities & Threat Equities</a>."]</p>
<p>The Vulnerabilities Equities Process (VEP) is how the US Government decides if they'll disclose a vulnerability to the manufacturer for fixing. The process has come under a great deal of criticism, because it's never been clear what's being disclosed, what fraction of vulnerabilities are disclosed, if the process is working, or how anyone without a clearance is supposed to evaluate that beyond "we're from the government, we're here to help," or perhaps "I know people who managed this process, they're good folks." Neither of those is satisfactory.</p>
<p>So it's a very positive step that on Wednesday, White House Cybersecurity Coordinator Rob Joyce published "<u>Improving and Making the Vulnerability Equities Process Transparent is the Right Thing to Do</u>," [link to https://www.whitehouse.gov/blog/2017/11/15/improving-and-making-vulnerability-equities-process-transparent-right-thing-do no longer works] along with the process. Schneier <a href="https://www.schneier.com/blog/archives/2017/11/new_white_house_1.html" target="_blank">says</a> "I am less [pleased]; it looks to me like the same old policy with some new transparency measures -- which I'm not sure I trust. The devil is in the details, and we don't know the details -- and it has giant loopholes."</p>
<p>I have two overall questions, and an observation.</p>
<p>The first question is, was the published policy written when we had commitments to international leadership and being a fair dealer, or was it created or revised with an "America First" agenda?</p>
<p>The second question relates to there being four equities to be considered. These are the "major factors" that senior government officials are supposed to consider in exercising their judgement. But, surprisingly, there's an "additional" consideration. ("At a high level we consider four major groups of equities: defensive equities; intelligence / law enforcement / operational equities; commercial equities; and international partnership equities. Additionally, ordinary people want to know the systems they use are resilient, safe, and sound.") Does that imply that those officials are not required to weigh public desire for resilient and safe systems? What does it mean that the "additionally" sentence is not an equity being considered? </p>
<p>Lastly, the observation is that the VEP is all about vulnerabilities, not about flaws or design tradeoffs. From the <u>charter</u>, [link to https://www.whitehouse.gov/sites/whitehouse.gov/files/images/External%20-%20Unclassified%20VEP%20Charter%20FINAL.PDF no longer works] page 9-10:</p>
<blockquote>The following will not be considered to be part of the vulnerability evaluation process:
<ul>
<li>Misconfiguration or poor configuration of a device that sacrifices security in lieu of availability, ease of use or operational resiliency.</li>
<li>Misuse of available device features that enables non-standard operation.</li>
<li>Misuse of engineering and configuration tools, techniques and scripts that increase/decrease functionality of the device for possible nefarious operations.</li>
<li>Stating/discovering that a device/system has no inherent security features by design.</li>
</ul>
</blockquote>
<p>Threat Modeling is the umbrella term for security engineering to discover and deal with these issues. It's what I spend my days on, because I see the tremendous effort in dealing with vulnerabilities is paying off, and we see fewer of them in well-engineered systems.</p>
<p>In October, I wrote about the fact we're getting better at dealing with vulnerabilities, and need to think about design issues. I closed:</p>
<blockquote>In summary, we’re doing a great job at finding and squishing bugs, and that’s opening up new and exciting opportunities to think more deeply about design issues. (<a href="https://shostack.org/blog/emergent-design-issues/">Emergent Design Issues</a>)</blockquote>
<p>Here, I'm going to disagree with Bruce, because I think that this disclosure shows us an important detail that we didn't previously know. Publication exposes it, and lets us talk about it. </p>
<p>So, I'm going to double-down on what I wrote in October, and say that we need the VEP to expand to cover those issues. I'm not going to claim that will be easy, that the current approach will translate, or that they should have waited to handle those before publishing. One obvious place it gets harder is the sources and methods tradeoff. But we need the internet to be a resilient and trustworthy infrastructure. As Bill Gates <a href="https://news.microsoft.com/2012/01/11/memo-from-bill-gates/" target="_blank">wrote</a> 15 years ago, we need systems that people "will always be able to rely on, [] to be available and to secure their information. Trustworthy Computing is computing that is as available, reliable and secure as electricity, water services and telephony."</p>
<p>We cannot achieve that goal with the VEP being narrowly scoped. It must evolve to deal with the sorts of flaws and design tradeoffs that threat modeling helps us find.</p>
<p class="credit-text">Photo by <a href="https://unsplash.com/photos/wdDFuDf9EoQ" target="_blank">David Clode</a> on Unsplash.</p>Averting the Drift into Failure2017-11-22T00:00:00Zhttps://shostack.org/blog/averting-the-drift-into-failure/adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>This is a fascinating video from the Devops Enterprise Summit:</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/pmZ6wtOmTZU?rel=0" frameborder="0" allowfullscreen=""></iframe>
<p>"the airline that reports more incidents has a lower passenger mortality rate. Now what's fascinating about this ... we see this replicated this data across various domains, construction, retail, and we see that there is this inverse correlation between the number of incidents reported, the honesty, the willingness to take on that conversation about what might go wrong and things actually going wrong."</p>
<p>The speaker's website is <a href="http://sidneydekker.com/" target="_blank">sidneydekker.com/</a>, there's some really interesting material.</p>45 Years2017-11-25T00:00:00Zhttps://shostack.org/blog/45-years/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/moonshorty_apollo17_1080.jpg" alt="moonshorty_apollo17" class="single-post-main-img" />
<p>I had not seen this amazing <a href="https://apod.nasa.gov/apod/ap171124.html" target="_blank">picture</a> of Harrison Schmitt near Shorty Crater.</p>
<p>Via <a href="https://apod.nasa.gov/apod/" target="_blank">Astronomy Picture of the Day</a>. If you enjoy these, <a href="http://amzn.to/2A5iVDf" target="_blank">Full Moon</a> is a gorgeous collection of meticulously scanned Apollo images. There are various editions; I encourage you to get the 11"x11" one, not the 8x8.</p>The Carpenter Case2017-11-28T00:00:00Zhttps://shostack.org/blog/the-carpenter-case/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<blockquote>On Wednesday, the supreme court will consider whether the government must obtain a warrant before accessing the rich trove of data that cellphone providers collect about cellphone users’ movements. Among scholars and campaigners, there is broad agreement that the case could yield the most consequential privacy ruling in a generation. ("<a href="https://www.theguardian.com/commentisfree/2017/nov/27/supreme-court-cellphone-data-carpenter-first-amendment" target="_blank">Supreme court cellphone case puts free speech – not just privacy – at risk</a>.")</blockquote>
<p>Bruce Schneier has an article in the Washington Post, "<a href="https://www.washingtonpost.com/news/posteverything/wp/2017/11/27/how-the-supreme-court-could-keep-police-from-using-your-cellphone-to-spy-on-you/" target="_blank">How the Supreme Court could keep police from using your cellphone to spy on you</a>," as does Stephen Sachs:</p>
<blockquote>The Supreme Court will hear arguments this Wednesday in Carpenter v. United States, a criminal case testing the scope of the Fourth Amendment’s right to privacy in the digital age. The government seeks to uphold Timothy Carpenter’s conviction and will rely, as did the lower court, on the court’s 1979 decision in Smith v. Maryland, a case I know well.
<br /><br />
I argued and won Smith v. Maryland when I was Maryland’s attorney general. I believe it was correctly decided. But I also believe it has long since outlived its suitability as precedent. ("<a href="https://www.washingtonpost.com/opinions/the-supreme-courts-privacy-precedent-is-outdated/2017/11/26/fe9d1dd0-cfb2-11e7-81bc-c55a220c8cbe_story.html" target="_blank">The Supreme Court’s privacy precedent is outdated</a>.")</blockquote>
<p>I am pleased to have been able to help with an <a href="https://knightcolumbia.org/news/supreme-court-brief-technologists-warn-against-warrantless-access-cell-phone-location-data" target="_blank">amicus brief</a> in the case, and hope that the Supreme Court uses this opportunity to protect all of our privacy. Good luck to the litigants!</p>Learning from Near Misses2017-12-04T00:00:00Zhttps://shostack.org/blog/learning-from-near-misses/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>[Update: Steve Bellovin has a <a href="https://www.cs.columbia.edu/~smb/blog/2017-12/2017-12-04.html" target="_blank">blog post</a>]</p>
<p>One of the major pillars of science is the collection of data to disprove arguments. That data gathering can include experiments, observations, and, in engineering, investigations into failures. One of the issues that makes security hard is that we have little data about large scale systems. (I believe that this is <em>more</em> important than our clever adversaries.) The work I want to share with you today has two main antecedents.</p>
<p>First, in the nearly ten years since Andrew Stewart and I wrote The New School of Information Security, and called for more learning from breaches, we've seen a dramatic shift in how people talk about breaches. Unfortunately, we're still not learning as much as we could. There are structural reasons for that, primarily fear of lawsuits. </p>
<p>Second, last year marked 25 years of calls for an "NTSB for infosec." Steve Bellovin and I wrote a <a href="https://www.cs.columbia.edu/~smb/papers/Current_and_Future_States_of_Cybersecurity-Bellovin-Shostack.pdf" target="_blank">short note</a> asking why that was. We've spent the last year asking what else we might do. We've learned a lot about other Aviation Safety Programs, and think there are other models that may be better fits for our needs and constraints in the security realm.</p>
<p>Much that investigation has been a collaboration with Blake Reid, Jonathan Bair, and Andrew Manley of the University of Colorado Law School, and together we have a new draft paper on SSRN, "<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3081216" target="_blank">Voluntary Reporting of Cybersecurity Incidents</a>."</p>
<p>A good deal of my own motivation in this work is to engineer a way to learn more. The focus of this work, on incidents rather than breaches, and on voluntary reporting and incentives, reflects lessons learned as we try to find ways to measure real world security. The writing and abstract reflect the goal of influencing those outside security to help us learn better:</p>
<blockquote>The proliferation of connected devices and technology provides consumers immeasurable amounts of convenience, but also creates great vulnerability. In recent years, we have seen explosive growth in the number of damaging cyber-attacks. 2017 alone has seen the Wanna Cry, Petya, Not Petya, Bad Rabbit, and of course the historic Equifax breach, among many others. Currently, there is no mechanism in place to facilitate understanding of these threats, or their commonalities. While information regarding the causes of major breaches may become public after the fact, what is lacking is an aggregated data set, which could be analyzed for research purposes. This research could then provide clues as to trends in both attacks and avoidable mistakes made on the part of operators, among other valuable data.
<br /><br />One possible regime for gathering such information would be to require disclosure of events, as well as investigations into these events. Mandatory reporting and investigations would result better data collection. This regime would also cause firms to internalize, at least to some extent, the externalities of security. However, mandatory reporting faces challenges that would make this regime difficult to implement, and possibly more costly than beneficial. An alternative is a voluntary reporting scheme, modeled on the Aviation Safety Reporting System housed within NASA, and possibly combined with an incentive scheme. Under it, organizations that were the victims of hacks or “near misses” would report the incident, providing important details, to some neutral party. This database could then be used both by researchers and by industry as a whole. People could learn what does work, what does not work, and where the weak spots are.</blockquote>
<p>Please, take a look at the <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3081216" target="_blank">paper</a>. I'm eager to hear your feedback.</p>Gavle Goat Gallantly Guarded2017-12-08T00:00:00Zhttps://shostack.org/blog/gavle-goat-gallantly-guarded/adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/DRzFlG_XcAApWrN.jpg" alt="Santa guards the Gavle Goat" class="single-post-main-img" />
<p>[Update: Merry Christmas, Gavelbocken! You <a href="https://twitter.com/Gavlebocken/status/944847273783119872" target="_blank">made it</a> this year!]</p>
<p><a href="https://www.thelocal.se/20171204/secret-plan-to-protect-gvle-goat-from-arsonists/" target="_blank">‘Secret’ plan to protect Gävle Christmas goat from arsonists</a>.</p>
<p>Previously: <a href="https://shostack.org/archive/2016/09/gavle-goat-now-56-more-secure/">Gavle Goat, now 56% more secure!</a>, <a href="https://shostack.org/archive/2005/12/13-meter-straw-goat-met-his-match/">13 Meter Straw Goat Met His Match</a>, <a href="https://shostack.org/archive/2008/12/gavle-goat-gone/">Gavle Goat Gone</a>, <a href="https://shostack.org/archive/2009/12/burning-news-gavle-goat/">Burning News: Gavle Goat</a>, <a href="https://shostack.org/archive/2011/12/gvle-goat-gambit-goes-astray/">Gävle Goat Gambit Goes Astray</a>, <a href="https://shostack.org/archive/2012/11/the-gavle-goat-is-getting-ready-to-burn/">The Gavle Goat is Getting Ready to Burn!</a>.</p>Portfolio Thinking: AppSec Radar2017-12-27T00:00:00Zhttps://shostack.org/blog/portfolio-thinking-appsec-radar/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/appsec-radar.png" alt="AppSec Radar" class="single-post-main-img" />
<p>At DevSecCon London, I met Michelle Embleton, who is doing some really interesting work around what she calls an <a href="https://github.com/MichelleEmbleton/appSecRadar" target="_blank">AppSec Radar</a>. The idea is to visually show what technologies, platforms, et cetera are being evaluated, adopted and in use, along with what's headed out of use.</p>
<p>Surprise technology deployments always make for painful conversations.</p>
<p>This strikes me as a potentially quite powerful way to improve communication between security and other teams, and worth some experimentation in 2018.</p>Threat Modeling Tooling from 20172017-12-28T00:00:00Zhttps://shostack.org/blog/threat-modeling-tooling-from-2017/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/A-magic-hammer.jpg" alt="a magic hammer" class="single-post-main-img" />
<p>As I reflect back on 2017, I think it was a tremendously exciting year for threat modeling tooling. Some of the highlights for me include:</p>
<ul>
<li>OWASP Threat Dragon is a web-based tool, much like the MS threat modeling tool, and explained in <a href="https://www.coreinfrastructure.org/news/blogs/2017/11/open-source-threat-modeling" target="_blank">Open Source Threat Modeling</a>, and the code is at <a href="https://github.com/mike-goodwin/owasp-threat-dragon" target="_blank">https://github.com/mike-goodwin/owasp-threat-dragon</a>. What's exciting is not that it's open source, but that it's web-driven, and that enables modern communication and collaboration in the way that's rapidly replacing emailing documents around.</li>
<li><a href="http://www.tutamantic.com/" target="_blank">Tutamen</a> is an exciting tool because it's simplicity forced me to re-think what threat modeling tooling could be. Right now, you upload a Visio diagram, and you get back a threat list in Excel, covering OWASP, STRIDE, CWE and CAPEC. If Threat Dragon is an IDE, Tutamen is a compiler.</li>
<li>We're seeing real action in security languages. Fraser Scott is driving an <a href="https://github.com/owasp-cloud-security/owasp-cloud-security" target="_blank">OWASP Cloud Security</a> project to create structured stories about threats and controls. If Tutamen is a compiler, this project lets us think about different include files. (The two are not yet, and may never be, integrated.) And closely related, Continuum Security has a <u>BDD-Security</u> project [link to https://www.continuumsecurity.net/bdd-security/ no longer works]</li>
<li>Continuum's also doing interesting work with <u>IriusRisk</u> [link to https://www.continuumsecurity.net/threat-modeling-tool/ no longer works], which they describe as "a single integrated console to manage application security risk throughout the software development process." If the tools above are about depth, IriusRisk is about helping large organizations with breadth.</li>
</ul>
<p>Did you see anything that was exciting that I missed? Please let me know in the comments!</p>Pen Testing The Empire2018-01-01T00:00:00Zhttps://shostack.org/blog/pen-testing-the-empire/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>[Updated with a leaked copy of the response from Imperial Security.]</p>
<iframe src="https://player.vimeo.com/video/247043527" width="640" height="360" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<p>To: Grand Moff Tarkin<br />
Re: "The Pentesters Strike Back" memo<br />
Classification: Imperial Secret/Attorney Directed Work Product</p>
<p>Sir,</p>
<p>We have received and analyzed the "Pentesters Strike Back" video, created by Kessel Cyber Security Consulting, in support of their report 05.25.1977. This memo analyzes the video, presents internal analysis, and offers strategies for response to the Trade Federation.</p>
<p>In short, this is typical pen test slagging of our operational security investments, which meet or exceed all best practices. It is likely just a negotiating tactic, albeit one with catchy music.</p>
<p>Finding 1.3: "Endpoints unprotected against spoofing." This is true, depending on a certain point of view. Following the execution of <a href="http://starwars.wikia.com/wiki/Order_66/Legends" target="_blank">Order 66</a>, standing policy has been "The Jedi are extinct. Their fire has gone out of the universe." As such, Stormtrooper training has been optimized to improve small arms accuracy, which has been a perennial issue identified in after-action reports.</p>
<p>Finding 2.1: "Network Segmentation inadequate." This has been raised repeatedly by internal audit, perhaps this would be a good "area for improvement" in response to this memo.</p>
<p>Finding 4.2: "Data at rest not encrypted." This is inaccurate. The GalactiCAD server in question was accessed from an authorized endpoint. As such, it decrypted the data, and sent it over an encrypted tunnel to the endpoint. The pen testers misunderstand our network architecture, again.</p>
<p>Finding 5.1: "Physical access not controlled." Frankly, sir, this battle station is the ultimate power in the universe. It has multiple layers of physical access control, including the screening units of Star Destroyers and Super SDs, Tie Fighters, Storm Trooper squadrons in each landing bay, [Top Secret-1], and [Top Secret-2]. Again, the pen testers ignore facts to present "findings" to their clients.</p>
<p>Finding 5.2: "Unauthorized mobile devices allows network access." This is flat-out wrong. In the clip presented, TK-427 is clearly heard authorizing the droids in question. An audit of our records indicate that both driods presented authorization certificates signed by Lord Vader's certificate authority. As you know, this CA has been the source of some dispute over time, but the finding presented is, again, simply wrong.</p>
<p>Finding 7.4: "Security incident response times need improvement." The incident response times are excellent. Commander Sheard, who was watch officer in Control Station West, had Storm Troopers at Detention Block AA23 before the attackers were able to exit the short corridor of the block, forcing them to blast holes in the wall. (The walls were built to resistance standard 4-15, able to resist a wookie pounding on them for 15 minutes, which would have allowed TD-787 and his squad to stop for caf en route, had not unrelated failures led to the prisoners having charged blasters.)</p>
<p>Finding 8.3: "Legacy intruder-tracking system inadequately concealed." Again, this claim simply has no basis in fact. The intruder-tracking system worked <em>perfectly</em>, allowing the Imperial Fleet to track the freighter to Yavin. In analyzing the video, we expect that General Orgena's intuition was "Force"-aided.</p>
<p>In summary, there are a few minor issues identified which require attention. However, the bulk of the report presents mis-understandings, unreasonable expectations, and focuses heavily on a set of assumptions that just don't bear up to scrutiny. We are in effective compliance with PCI-DSS, this test did not reveal a single credit card number, and the deal with the Trade Federation should not be impeded.</p>
<p class="credit-text">Via <a href="https://www.schneier.com/blog/archives/2018/01/security_vulner_12.html" target="_blank">Bruce Schneier</a>.</p>Not Bugs, but Features2018-01-05T00:00:00Zhttps://shostack.org/blog/not-bugs-but-features/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<blockquote>“[Mukhande Singh] said “real water” should expire after a few months. His does. “It stays most fresh within one lunar cycle of delivery,” he said. “If it sits around too long, it’ll turn green. People don’t even realize that because all their water’s dead, so they never see it turn green.”</blockquote>
<p class="credit-text">(<cite><a href="https://www.nytimes.com/2017/12/29/dining/raw-water-unfiltered.html" target="_blank">Unfiltered Fervor: The Rush to Get Off the Water Grid</a></cite>, Nellie Bowles, NYTimes, Dec 29, 2017.)</p>
<p>So those things turning the water green? Apparently, not bugs, but features. In unrelated "not understanding food science" news, don't buy <a href="https://www.wired.com/review/mellow-sous-vide-review/" target="_blank">the Mellow sous vide machine</a>. Features.</p>The Resistance Has Infiltrated This Base!2018-01-11T00:00:00Zhttps://shostack.org/blog/the-resistance-has-infiltrated-this-base/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<blockquote>In a memo issued Jan. 4 and rescinded about an hour later, Deputy Defense Secretary Pat Shanahan announced a new “Central Cloud Computing Program Office” -- or “C3PO” -- to “acquire the Joint Enterprise Defense Infrastructure (JEDI) Cloud.”
<br /><br />
“C3PO is authorized to obligate funds as necessary in support of the JEDI Cloud,” Shanahan, a former Boeing Co. executive, wrote, managing to get a beloved droid from the space-themed movies and an equally popular fictional order of warriors into what otherwise would be a routine message in the Pentagon bureaucracy.
<br /><br />
The memo was recalled because “it was issued in error,” according to Shanahan’s spokesman, Navy Captain Jeff Davis.</blockquote>
<p>Thanks to MC for the <a href="https://www.bloomberg.com/news/articles/2018-01-10/-star-wars-dazzle-stripped-from-a-pentagon-memo-on-the-cloud" target="_blank">story</a>.</p>Fire and building codes2018-01-19T00:00:00Zhttps://shostack.org/blog/fire-and-building-codes/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>What's more primordial than fire? It's easy to think that fire is a static threat, and defenses against it can be static. So it was surprising to see that changes in home design and contents are leading to fires spread much faster, and that the Canadian Commission on Building and Fire Codes is considering mandates for home sprinklers.</p>
<p>The CBC's "<a href="http://www.cbc.ca/news/canada/nova-scotia/house-fires-firefighters-national-building-code-1.4482786" target="_blank">Rise in fast-burning house fires heats up calls for sprinklers in homes</a>" has a good discussion of the changing threat, the costs of mitigation, and the tradeoffs entailed.</p>AppSec California TM Panel2018-01-23T00:00:00Zhttps://shostack.org/blog/appsec-california-tm-panel/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>I'm participating in the <a href="https://appseccalifornia2018.sched.com/event/CuRn/threat-modeling-panel" target="_blank">threat modeling panel</a> at <u>AppSec California</u> [link to https://2018.appseccalifornia.org/ no longer works].</p>
<p>Before talking about what we want to talk about, we decided to ask the audience what we should talk about. Please take a minute to fill out our <a href="https://www.surveymonkey.com/r/FZ2WBWK" target="_blank">three question survey</a> if you'll be there.</p>Star Trek's Astromycologist2018-01-26T00:00:00Zhttps://shostack.org/blog/star-treks-astromycologist/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>This is very cool: "<a href="http://www.cbc.ca/news/canada/british-columbia/paul-stamets-star-trek-mushroom-expert-mycology-1.4454204">Star Trek's secret weapon: a scientist with a mushroom fetish bent on saving the planet</a>."</p>
<blockquote>On Star Trek: Discovery, the character Lieutenant Paul Stamets is an "astromycologist" — a mushroom expert in outer space who is passionate about the power of fungi.<br /><br />
Stamets is actually named after a real U.S. scientist who spends his downtime tramping through the forests of B.C.'s Cortes Island.</blockquote>
<p>The real Stamets has a few books. "<a href="http://amzn.to/2FjilkE">Mycelium Running</a>" is a fascinating read.</p>AppSec Cali 2018: Izar Tarandach2018-01-30T00:00:00Zhttps://shostack.org/blog/appsec-cali-2018-izar-tarandach/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>I'm at the OWASP AppSec Cali event, and while there's now <del>there'll be</del> <a href="https://www.youtube.com/watch?v=mypgp-D-jW8" target="_blank">video</a>, I'm taking notes:</p>
<h3>Context for the talk</h3>
<ul>
<li>What fails during the development process? Incomplete requirements, non-secure design, lack of security mindset, leaky development</li>
<li>These failures are threats which can be mitigated. (eg, compliance and risk requirements address incomplete requirements)</li>
<li>We keep failing in the same way. How often are developers required to pass a security interview to get a job?</li>
<li>Story of Alice the manager, and Bob the developer who learns about a SQL injection in their legacy code. Bob is overwhelmed by security requirements.</li>
<li>"The problem with programmers is that you can never tell what a programmer is going until it is too late." — Seymour Cray</li>
<li>Security team objective: be informed about product flow; help developers not write and not deploy security issues; stop being a bottleneck; so focus secure development on the developer, not the security expert.</li>
</ul>
<h3>Notable Security Events</h3>
<ul>
<li>How to integrate security expertise into development in a more fluid way. Does this tie to "the spec"?</li>
<li>Developers don't know that their changes are security relevant</li>
<li>Funny example of a training quiz that doesn't have a learning goal</li>
<li>Noel Burch's hierarchy of competence. From unconscious incompetence through unconscious competence.</li>
<li>Learning: step-by-step, instructions, theory; training: repetition, muscle memory; applying: real life doing.</li>
<li>Tie domains to notable events, use checklists for those notable events.</li>
<li>Specifically formed "if you did X, do Y." Each "Y" must be in the language of the developer, concise, testable, and supported by training.</li>
<li>Ran an experiment, got solid feedback.</li>
<li>Short training gets used more.</li>
<li>Crisply defined responsibilities by role.</li>
</ul>Jonathan Marcil’s Threat Modeling Toolkit talk2018-01-30T00:00:00Zhttps://shostack.org/blog/jonathan-marcils-threat-modeling-toolkit-talk/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>There's a lot of threat modeling content here at AppSec Cali, and sadly, I'm only here today. Jonathan Marcil has been a guest here on Adam & friends, and today is talking about his toolkit: data flow diagrams and attack trees.</p>
<p>His world is very time constrained, and it's standing room only.</p>
<ul>
<li>Threat modeling is an appsec activity, understand attackers and systems</li>
<li>For security practitioners and software engineers. A tool to help clarify what the system is for reviewers. Highlight ameliorations or requirements.</li>
<li>Help catch important things despite chaos.</li>
<li><b>Must be collaborative</b>: communication is a key</li>
<li>Being wrong is great: people get engaged to correct you!</li>
<li>Data flow diagrams vs connection flow diagrams: visual overload. This is <em>not</em> an architectural doc, but an aid to security discussion. He suggests extending the system modeling approach to fit your needs, which is great, and is why I put my definition of a <a href="https://github.com/adamshostack/DFD3" target="_blank">DFD3</a> on github; let's treat our tools as artifacts like developers do.</li>
<li>An extended example of modeling Electrum.</li>
<li>The system model helps organize your own thoughts. Build a visual model of the things that matter to you, leave out the bits that don't matter.</li>
<li>Found a real JSONRPC <a href="https://github.com/spesmilo/electrum-docs/blob/master/cve.rst" target="_blank">vuln in the wallet</a> because of investigations driven by system model.</li>
<li>His models also have a "controls checklist;" "these are the controls I think we have." Controls tied by numbers to parts of diagram. Green checklists are a great motivator.</li>
<li>Discussion of one line vs two; would another threat modeling expert be able to read this diagram? What would be a better approach for a SAML-based system? Do you need trust boundaries between the browser and the IDP? What's going through your head as you build this?</li>
<li>Use attack trees to organize threat intelligence: roots are goals, leafs are routes to goals. If the goal is to steal cryptocurrency, one route is to gain wallet access, via stealing the physical wallet or software access. (Sorry, I'm bad at taking photos as I blog.) He shows the attack tree growing in a nice succession of slides.</li>
<li>Attack trees are useful because they're re-usable.</li>
<li>Uses PlantUML to draw trees with code, has a bunch of advantages of version control, automatically balancing trees.</li>
<li>Questions: How to collaborate with and around threat models? How to roll out to a group of developers? How to sell them on doing something beyond a napkin. </li>
<li>Diagrams for architects versus diagrams for developers.</li>
<li>If we had an uber-tree, it wouldn't be useful because you need to scope it and cut it. (Adam adds: perhaps scoping and cutting are easier than creating, if the tree isn't overwhelming?)</li>
<li>Link attack tree to flow diagram; add the same numbered controls to the attack tree.</li>
<li>If you can be in a meeting and say nothing in the TM meeting, you've won!</li>
</ul>
<p>Lastly, <a href="https://twitter.com/jonathanmarcil" target="_blank">Jonathan</a> did a great job of live-tweeting his own talk.</p>Doing Science with Near Misses2018-02-06T00:00:00Zhttps://shostack.org/blog/doing-science-with-near-misses/AdamNear misses are an important source of information for avoiding accidents, and it's a shame we don't use them in cybersecurity.<span class="single-post-excerpt"><!-- excerpt -->Near misses are an important source of information for avoiding accidents, and it's a shame we don't use them in cybersecurity.<!-- /excerpt --></span>
<p>[Update: The final article is available at <cite><a href="http://ctlj.colorado.edu/wp-content/uploads/2021/02/16.2_4-Shostack-8.7.18-FINAL.pdf" target="_blank" rel="noopener">That Was Close! Reward Reporting of Cybersecurity ‘Near Misses'</a></cite>, at the Colorado Technology Law Journal.]</p>
<img src="https://shostack.org/blog/img/2018/Near-miss-accident-tomorrow.jpg" alt="Report all near misses" class="single-post-main-img" style="float:left;" />
<p>Last week at <a href="https://artintoscience.com/" target="_blank" rel="noopener">Art into Science</a>, I presented “That was Close! Doing Science with Near Misses” (Slides as <a href="https://drive.google.com/file/d/1VetLhpp1n1p3uzawKPcfD24TSh2scjR_/view" target="_blank" rel="noopener">web page</a>, or download the <a href="https://adam.shostack.org/nearmiss/near-miss-shostack-art-into-science.pptx" target="_blank" rel="noopener">pptx</a>.)</p>
<p>The core idea is that we should borrow from aviation to learn from near misses, and learn to protect ourselves and our systems better. The longer form is in the draft “<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3081216" target="_blank" rel="noopener">That Was Close! Reward Reporting of Cybersecurity ‘Near Misses'</a>” <s>Voluntary Reporting of Cybersecurity “Near Misses”</s></p>
<p>The talk was super-well received and I'm grateful to Sounil Yu and the participants in the philosphy track, who juggled the schedule so we could collaborate and brainstorm. If you'd like to help, by far the most helpful way would be to tell us about a near miss you've experienced using our <a href="https://docs.google.com/forms/d/e/1FAIpQLScirDQVFE2bhX6En482Sjp-zO9HMxyyd-Rfc4m8VqFNi7Yb0Q/viewform" target="_blank" rel="noopener">form</a>, and give us feedback on the form. Since Thursday, I've added a space for that feedback, and made a few other suggested adjustments which were easy to implement.</p>
<p>If you've had a chance to think about definitions for either near misses or accidents, I'd love to hear about those, in comments, in your blog (trackbacks should work), or whatever works for you. If you were at Art Into Science, there's a #near-miss channel on the conference Slack, and I'll be cleaning up the notes.</p>
<p>Image from the EHS Database, who have a set of <a href="http://www.ehsdb.com/safety-poster---near-misses.php" target="_blank" rel="noopener">near miss safety posters</a>.</p>
Keep the Bombe on the Bletchley Park Estate2018-02-15T00:00:00Zhttps://shostack.org/blog/keep-the-bombe-on-the-bletchley-park-estate/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>There's a fundraising campaign to "<a href="http://www.crowdfunder.co.uk/new-display?tk=cd050dac9d25f4ef9c766e204e798f32a0353402" target="_blank">Keep the Bombe on the Bletchley Park Estate</a>."</p>
<p>The <a href="http://www.cryptomuseum.com/crypto/bombe/" target="_blank">Bombe</a> was a massive intellectual and engineering achievement at the British codebreaking center at Bletchley Park during the second world war. The Bombes were all disassembled after the war, and the plans destroyed, making the reconstruction of the Bombe at Bletchley a second impressive achievement.</p>
<p>My photo is from the exhibit on the reconstruction.</p>BlackHat and Human Factors2018-02-19T00:00:00Zhttps://shostack.org/blog/blackhat-and-human-factors-2018/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>As a member of the BlackHat Review Board, I would love to see more work on Human Factors presented there. The <a href="https://www.blackhat.com/us-18/call-for-papers.html" target="_blank">2018 call for papers</a> is open and closes April 9th. Over the past few years, I think we've developed an interesting track with good material <a href="https://www.blackhat.com/us-16/briefings/schedule/index.html#track/human-factors" target="_blank">year</a> over <a href="https://www.blackhat.com/us-17/briefings/schedule/index.html#track/human-factors" target="_blank">year</a>.</p>
<p>I wrote a <a href="https://adam.shostack.org/blog/2016/04/humans-in-security-blackhat-talks/">short blog post</a> on what we look for.</p>
<p>The BlackHat CFP calls for work which has not been published elsewhere. We prefer fully original work, but will consider a new talk that explains work you've done for the BlackHat audience. Oftentimes, Blackhat does not count as "Publication" in the view of academic program committees, and so you can present something at BlackHat that you plan to publish later. (You should of course check with the other venue, and disclose that you're doing so to BlackHat.)</p>
<p>If you're considering submitting, I encourage you to read all three recommendations posts at <a href="https://usa-briefings-cfp.blackhat.com/" target="_blank">https://usa-briefings-cfp.blackhat.com/</a></p><p></p>Threat Modeling Privacy of Seattle Residents2018-02-22T00:00:00Zhttps://shostack.org/blog/threat-modeling-privacy-of-seattle-residents/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>On Tuesday, I spoke at the Seattle Privacy/TechnoActivism 3rd Monday meeting, and shared some initial results from the Seattle Privacy Threat Model project.</p>
<p>Overall, I’m happy to say that the effort has been a success, and opens up a set of possibilities.</p>
<ul>
<li>Every participant learned about threats they hadn’t previously considered. This is surprising in and of itself: there are few better-educated sets of people than those willing to commit hours of their weekends to threat modeling privacy.</li>
<li>We have a new way to contextualize the decisions we might make, evidence that we can generate these in a reasonable amount of time, and an example of that form.</li>
<li>We learned about how long it would take (a few hours to generate a good list of threats, a few hours per category to understand defenses and tradeoffs), and how to accelerate that. (We spent a while getting really deep into threat scenarios in a way that didn’t help with the all-up models.)</li>
<li>We saw how deeply and complexly mobile phones and apps play into privacy.</li>
<li>We got to some surprising results about privacy in your commute.</li>
</ul>
<p>More at the Seattle Privacy Coalition blog, "<a href="https://seattleprivacy.org/threat-modeling-the-privacy-of-seattle-residents/" target="_blank">Threat Modeling the Privacy of Seattle Residents</a>," including slides, whitepaper and spreadsheets full of data.</p>Threat Modeling: Citizens Versus Systems2018-03-08T00:00:00Zhttps://shostack.org/blog/threat-modeling-citizens-versus-systems/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/Privacy-TM-Visualization.jpg" alt=" " class="single-post-main-img" />
<p>Recently, we <u>shared</u> [link to https://seattleprivacy.org/threat-modeling-the-privacy-of-seattle-residents/ no longer works] a privacy threat model which was centered on the people of Seattle, rather than on the technologies they use.</p>
<p>Because of that, we had different scoping decisions than I've made previously. I'm working through what those scoping decisions mean.</p>
<p>First, we cataloged how data is being gathered. We didn't get to "what can go wrong?" We didn't ask about secondary uses or transfers — yet. I think that was a right call for the first project, because the secondary data flows are a can of worms, and drawing them would, frankly, look like a can of worms. We know that most of the data gathered by most of these systems is weakly protected from government agencies. Understanding what secondary data flows can happen will be quite challenging. Many organizations don't disclose them beyond saying "we share your data to deliver and improve the service," those that do go farther disclose little about the specifics of what data is transferred to who. So I'd like advice: how would you tackle secondary data flows?</p>
<p>Second, we didn't systematically look at the question of what could go wrong. Each of those examinations could be roughly the size and effort of a product threat model. Each requires an understanding of a person's risk profile: victims of intimate partner violence are at risk differently than immigrants. We suspect there's models there, and working on them is a collaborative task. I'd like advice here. Are there good models of different groups and their concerns on which we could draw?</p>Citizen Threat Modeling and more data2018-03-14T00:00:00Zhttps://shostack.org/blog/citizen-threat-modeling/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Last week, in "<a href="https://shostack.org/blog/threat-modeling-citizens-versus-systems/">Threat Modeling: Citizens Versus Systems</a>," I wrote:</p>
<blockquote>I think that was a right call for the first project, because the secondary data flows are a can of worms, and drawing them would, frankly, look like a can of worms.<br />
(<em>and</em>)
<br />
Many organizations don’t disclose them beyond saying “we share your data to deliver and improve the service,” those that do go farther disclose little about the specifics of what data is transferred to who.</blockquote>
<p><img style="display:float; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2018/paypal.png" alt="Paypal Partnerships" title="paypal.png" border="0" width="400" height="448" /> Today, via <a href="https://www.schneier.com/blog/archives/2018/03/the_600_compani.html" target="_blank">Bruce Schneier</a>, we see that Paypal has disclosed <a href="https://shostack.org/blog/citizen-threat-modeling/">the list</a> of over 600 companies they might share your data with. He rightly asks if that's unusual. We don't know. My instinct is that it's not unusual for a financial multi-national.</p>
<p>I'm standing by the questions I asked; the first level of categories in the Paypal list may act as a good third level for our analysis. It will be interesting to see if others use the same categories. If they don't, the analysis process is magnified.</p>
<p>Their categories are:</p>
<ol>
<li>Payment Processors</li>
<li>Audit</li>
<li>Customer Service outsourcing</li>
<li>Credit reference and fraud agencies</li>
<li>Financial products</li>
<li>Commercial partnerships</li>
<li>Marketing and public relations</li>
<li>Operational services</li>
<li>Group companies</li>
<li>Commercial partners</li>
<li>Legal</li>
<li>Agencies</li>
</ol>
<p>It's unclear to me how 6 ("Commercial partnerships") differs from 10 ("Commercial partners"). I say this because I'm curious, not to point and laugh. We should cut Paypal some slack and appreciate that this is a new process to handle a new legal requirement. I'm also curious if 12 ("agencies") means "law enforcement agencies" or something else.</p>
<p class="credit-text">Visualization from <a href="https://rebecca-ricks.com/paypal-data/" target="_blank">How PayPal Shares Your Data</a>.</p>Speculative Execution Threat Model2018-03-15T00:00:00Zhttps://shostack.org/blog/speculative-execution-threat-model/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/mitigations_Framed.png" alt=" " class="single-post-main-img" />
<p>There's a long and important blog post from Matt Miller, "<a href="https://blogs.technet.microsoft.com/srd/2018/03/15/mitigating-speculative-execution-side-channel-hardware-vulnerabilities/" target="_blank">Mitigating speculative execution side channel hardware vulnerabilities</a>."</p>
<p>What makes it important is that it's a model of these flaws, and helps us understand their context and how else they might appear. It's also nicely organized along threat modeling lines.</p>
<p>What can go wrong? There's a set of primitives (conditional branch misprediction, indirect branch misprediction, and exception delivery or deferral). These are built into gadgets for windowing and disclosure gadgets.</p>
<p>There's also models for mitigations including classes of ways to prevent speculative execution, removing sensitive content from memory and removing observation channels.</p>Threat Modeling Panel at APPSEC Cali 20182018-03-20T00:00:00Zhttps://shostack.org/blog/threat-modeling-panel-at-appsec-cali-2018/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/X6j1nIO1F2o" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen=""></iframe>
<p>I really enjoyed being part of this panel. I felt we had a good mix of experience and some really interesting conversations.</p>Threat Model Thursday: Synopsys2018-03-22T00:00:00Zhttps://shostack.org/blog/tmt-synopsys/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>There's an increasing — and valuable — trend to publish sample threat models. These might be level sets for customers: "we care about <em>these</em> things." They might be reassurance for customers: "we <em>care</em> about these things." They might be marketing, they might serve some other purpose. All are fine motives, and whatever the motive, publishing them gives us an opportunity to look and compare the myriad ways models are created, recorded and used. And so I'm kicking off a series I'm calling "threat modeling Thursdays" to do exactly that.</p>
<p>Up front, let me be clear that I'm looking at these to learn and to illustrate. It's a dangerous trap to think that "the way to threat model is." There's more than one way to do it, as the Perl mavens say. Nothing here is intended to say "this is better or worse." Rather, I want to say things like "if you're a consultant, starting with scope is more important than when you're a developer." </p>
<p>So today's model, kicking off the series, comes to us from Synopsys, in a blog post titled "<a href="https://www.synopsys.com/blogs/software-security/5-pillars-successful-threat-model/" target="">The 5 pillars of a successful threat model</a>." And again, what's there is great, and what's there is very grounded in their consulting practice.</p>
<p>Thus, step 1 includes "define the scope and depth. Once a reasonable scope is determined with stakeholders, it needs to be broken down in terms of individual development teams..." Well, sure! That's one way to do it. If your threat models are going to be executed by consultants, then it's essential. And if your threat models are going to be done as an integral part of development, scoping is often implicit. But it's a fine way to start answering the question of "what are we working on?"</p>
<p>Step 2 is "Gain an understanding of what is being threat modeled." This is also aligned with my question 1, "what are we working on."</p>
<img src="https://www.synopsys.com/blogs/software-security/wp-content/uploads/2016/07/tmblog1.jpg" alt="A diagram of a system" />
<p>The diagram is great, and I initially wanted the internet trust boundary to be more pronounced, but leaving it the same as the other boundaries is a nice way to express "threats come from everywhere."</p>
<p>The other thing I want to say about the diagram is that it looks like a nice consulting deliverable. "We analyzed the system, discovered these components, and if there's stuff we missed, you should flag it." And again, that's a reasonable choice. In fact, any other choice would be unreasonable from consultants. And there are other models. For example, a much less formal whiteboard model might be a reasonable way for a team starting to threat model to document and align around an understanding of "what we're building." The diagrams Synopsys present take more time than the less formal ones. They also act as better, more formal records. There are scenarios where those more formal records are important. For example, if you expect to have to justify your choices to a regulator, a photo of a whiteboard does not "convey seriousness."</p>
<p>Their step 3 is to model the attack possibilities. Their approach here is a crisp version of the "asset/entry point" that Frank Swiderski and Window Snyder present in their book. "Is there any path where a threat agent can reach an asset without going through a control?"</p>
<p>They draw in assets, threat agents and controls here, and while I'm not a advocate of including them in diagrams (it makes for a lot of complexity), using two diagrams lets you study the system, then look at a more in depth version, which works nicely. Also, their definitions of threat agents is pretty interesting, for example, "unauthorized internal user." It says nothing of their motivation or capabilities, just their starting position and privileges. Compare and contrast that with a threat persona like "Sean “Keech” Purcell – Defacer." (Keech is one of the personas created by Aucsmith, Dixon, and Martin-Emerson.) </p>
<p>Synopsys's step 3, along with their step 4, "interpret the threat model," answer the question "what can go wrong?" Here I do want to mildly critique their use of the word "the." There are at least four models in play in the threat modeling activity (System, assets, agents, and controls are all modeled.) There's strength in thinking of threat modeling as a collection of activities. Calling a particular something 'the threat model' is both very common and needlessly restrictive.</p>
<p>Their step 5 is to "create a traceability matrix to record missing or weak controls." This is a fine input to the question that the readers of that matrix will ask, "what are we going to do about it?" Which happens to be my question 3. They have a somewhat complex analytic frame of a threat agent targets an asset via an attack over a surface... Also interesting in the <a href="https://www.synopsys.com/blogs/software-security/wp-content/uploads/2016/07/trace.jpg" target="">traceability matrix</a> is the presentation of "user credentials" as an attack goal. I treat those as 'stepping stones,' rather than goals. Also, in their discussion of the traceability matrix, we see handoffs: "it takes time and repetition to become proficient at [threat modeling]," and "With experience, you’ll be able to develop a simplified traceability matrix." These are very important points — how we threat model is not simply a function of our job function, it's also a function of experience, and the ways in which we work through issues changes as we gain experience. There's another trap in thinking the ways that work for an experienced expert will work for a novice, and the support tools that someone new to threat modeling may use will hinder the expert.</p>
<p>Lastly, they have no explicit analog to my step 4, "did we do a good job?" I believe that has nothing to do with different interests in quality, but rather that the threat model deliverable with their logo on it will go through stages of document preparation, review and critique, and so that quality check is an implicit one in their worlds.</p>
<P>To close, threat modeling shares the property, common in security, that secrecy makes it harder to learn. I have a small list of threat models to look at, and if you know of some that we can look at together, I would love to hear that, or other feedback you might have on what we can learn from this model.</P>
Friday Star Wars: Trek and CISSP2018-03-23T00:00:00Zhttps://shostack.org/blog/star-wars-friday-trek-and-cissp/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Larry Greenblat is releasing a series of videos titled "<a href="https://www.linkedin.com/feed/update/urn:li:activity:6381247412681342976" target="_blank">Passing the CISSP Exam with the help of Spock & Kirk</a>." I, of course, love this, because using stories to help people learn and remember is awesome, and it reminds me of my own "<a href="https://shostack.org/blog/the-security-principles-of-saltzer-and-schroeder/">The Security Principles of Saltzer and Schroeder, illustrated with Star Wars</a>." Also, my thoughts on <a href="https://adam.shostack.org/blog/2016/12/rogue-one-the-best-star-wars-yet/">Star Wars vs Star Trek for these sorts of things</a>.</p>Ries on Gatekeepers2018-03-27T00:00:00Zhttps://shostack.org/blog/ries-on-gatekeepers/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/woman_at_gate.jpg" alt="womain looking at city scape through metal gate" class="single-post-main-img" />
<p>Eric Ries wrote the excellent book <cite>Lean Startup</cite>. In a <a href="http://firstround.com/review/lean-startups-eric-ries-on-how-to-make-gatekeepers-a-source-of-power-and-speed/" target="_blank">recent interview</a> with Firstround, he talks about how to integrate gatekeeping functions into a lean business.</p>
<p>There is a tremendous amount of wisdom in there, and almost all of it applies to security. The core is that the gatekeeper has compassion for the work and ambiguity of engineering, and that compassion comes from being embedded into the work.</p>
<p>Engineering involves starting with problem statements that are incomplete or inaccurate, and dialog about those problems leading to refinement of the understanding of both the problem and the solution. It's hard to do that from a remote place in the organization.</p>
<p>This is an argument for what Ries calls embedding, which is appropriate for some gatekeeping functions. What's more important for security is "a seat at the table." They're importantly different. Embedding is a matter of availability when a problem comes up where we need the voice of legal or finance. A seat at the table is that the person is invited to the meetings where the problems and solutions are being refined. That happens naturally when the person invited is a productive contributor. Many functions, from program management to test to usability have won a seat at the table, and sometimes lost it as well.</p>
<p>The first hurdle to a seat at the table, and the only one which is non-negotiable, is productive engagement. "We get more done because we invite Alice to our meetings." That more might be shipping faster, it might be less rework, it might be higher quality. It is always things which matter to the organization.</p>
<p>The more productive the engagement, the more willing people will be to overlook soft skills issues. The famed BOFH doesn't get a seat at the table, because as much as IT might want one, he's abusive. Similarly, security people will often show up and say things like "one breach could sink the company," or "your design is crap." Hyperbole, insults, anger, all of the crassly negative emotions will cost not just Angry Bob but the whole security team their seat. These are behaviors that get drawn to the attention of management or even HR. They limit careers, and they also make it hard to give feedback. Who wants to get insulted when you're trying to help someone? They limit teams. Who wants to work with people like that?</p>
<p>There are other, less crass behaviors with similar effect: not listening, not delivering on time, not taking on work that needs taking on. These soft skills will not get you to the table, but they'll ease the journey, and most importantly, get you the feedback you may need to get there. But if you are in a gatekeeper role today, or if your security team aspires to rise to the point where you have a rope you can pull to stop the production line, the new article on gatekeepers by Mr. Ries is well worth your time.</p>
<p>One of the aspects of the post that's worthwhile is providing crisp guidance, which reminds me of what Izar Tarandach talked about at Appsec 2018. (<a href="https://adam.shostack.org/blog/2018/01/appsec-cali-2018-izar-tarandach/">My notes</a>, the <a href="https://www.youtube.com/watch?v=mypgp-D-jW8" target="_blank">video</a>.)</p>
<p class="credit-text"><a href="https://www.flickr.com/photos/aryokmateus/33274030862/in/dateposted/" target="_blank">Photo</a> by <a href="https://www.flickr.com/photos/aryokmateus/" target="_blank">Aryok Mateus</a>.</p>Threat Model Thursday: ARM Yourselves!2018-03-29T00:00:00Zhttps://shostack.org/blog/tmt-arm-yourselves/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p><img src="https://shostack.org/blog/img/2018/arm.png" alt="Arm" title="arm.png" border="0" width="114" height="160" style="float:right;" /> The response to my first <a href="https://adam.shostack.org/blog/2018/03/threat-model-thursday-synopsys/">Threat Model Thursday</a> was almost uniformly positive. Thank you!</p>
<p>I'm going to continue with the series, and have a second one ready. But as I think about how to maximize the value of the series, I want to try something. I want you to read the threat model without me, and analyze it.</p>
<p>This week's model is the <a href="https://pages.arm.com/psa-resources.html" target="_blank">ARM Network Camera TMSA</a>. (It's behind a regwall, but you can opt-out of marketing.)</p>
<p>As you read it, I want you to ask yourself two sets of questions. First, how does it align with the 4-question frame ("what are we working on," "What can go wrong," "what are we going to do about it," and "did we do a good job?") Second, ask yourself who, what, why, and how. (You can ask yourself when if you want to be complete about it.)</p>
<p>I'll be back next week with my answers.</p>Blaming the User2018-03-30T00:00:00Zhttps://shostack.org/blog/blaming-the-user/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/Blaming-the-User.jpg" alt="Fake ORLY book titled 'Blaming the user'" class="single-post-main-img" />
<p>Via <a href="https://www.linkedin.com/in/chadloder/" target="_blank">Chad Loder</a>.</p>Reasonable Software Security Engineering Podcast2018-04-02T00:00:00Zhttps://shostack.org/blog/reasonable-software-security-engineering-podcast/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>ISACA has released a podcast that we did to talk about the "<a href="https://web.archive.org/web/20180425071026/http://www.isaca.org/Knowledge-Center/Research/ResearchDeliverables/Pages/Reasonable-Software-Security-Engineering.aspx" target="_blank">Reasonable Software Security Engineering</a>" perspectives article. You can <u>download the podcast</u> at ISACA [link to http://www.isaca.org/Knowledge-Center/Documents/podcast/
Reasonable-Software-Security-Engineering_podcast_0318.MP3 no longer works], or you can use:</p>
<ul>
<li><a href="https://itunes.apple.com/us/podcast/reasonable-software-security-engineering/id1209164381?i=1000407853735&mt=2" target="_blank">iTunes</a> </li>
<li><a href="https://play.google.com/music/m/Dljyasumhdqa5ilgiuffshdck64?t=Reasonable_Software_Security_Engineering-ISACA_Podcast" target="_blank">Google Play</a> </li>
<li><a href="https://soundcloud.com/isacapodcast/reasonable-software-security-engineering" target="_blank">Soundcloud</a></li>
</ul>John Harrison's Struggle Continues2018-04-03T00:00:00Zhttps://shostack.org/blog/john-harrisons-struggle-continues/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Today is John Harrison's 352nd birthday, and Google has a doodle to celebrate. Harrison was rescued from historical obscurity by Dava Sobel's excellent book <em><a href="https://www.amazon.com/Longitude-Genius-Greatest-Scientific-Problem/dp/0802713122/ref=as_li_ss_tl?_encoding=UTF8&qid=1522787168&sr=8-1&linkCode=ll1&tag=adamshostack-20&linkId=d84e94ebb251844af6bd812badfde42f" target="_blank">Longitude</a></em>, which documented Harrison's struggle to first build and then demonstrate the superiority of his clocks to the mathematical and astronomical solutions heralded by leading scientists of the day. Their methods were complex, tedious and hard to execute from the deck of a ship.</p>
<p>To celebrate, I'd like to <a href="https://www.flickr.com/photos/adamshostack/41171880912" target="_blank">share</a> this photo I took at the <a href="https://www.rmg.co.uk/" target="_blank">Royal Museums Greenwich</a> in 2017:<br />
<a href="https://www.flickr.com/photos/adamshostack/41171880912" title="View 'Calculation for Finding the Longitude' on Flickr.com" target="_blank">
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2018/Harrison-Worksheet-framed.jpg" alt="Harrison Worksheet framed" title="Harrison Worksheet framed.jpg" border="0" width="360" height="480" /></a></p>
<p>(A <a href="https://www.flickr.com/photos/adamshostack/41171880912" target="_blank">Full size</a> version is on Flickr.)</p>
<p>As the <a href="https://www.flickr.com/photos/adamshostack/41171989212/in/dateposted-public/" target="_blank">placard</a> says, "First produced in 1768, this worksheet gave navigators <em><strong>an easy process</strong></em> for calculating their longitude using new instruments and the Nautical Almanac. Each naval ship's master was required to train with qualified teachers in London or Portsmouth in order to gain a certificate of navigational competence." (Emphasis added.)</p>Threat Model Thursday: ARM's Network Camera TMSA2018-04-05T00:00:00Zhttps://shostack.org/blog/tmt-arms-network-camera-tmsa/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/network-camera_framed.jpg" alt="" class="single-post-main-img" />
<p>Last week, I encouraged you to take a look at the ARM Network Camera Threat Model and Security Analysis, and consider:</p>
<blockquote>First, how does it align with the 4-question frame ("what are we working on," "What can go wrong," "what are we going to do about it," and "did we do a good job?") Second, ask yourself who, what, why, and how. </blockquote>
<p>Before I get into my answers, I want to re-iterate what I said in the first: I'm doing this to learn and to illustrate, not to criticize. It’s a dangerous trap to think that “the way to threat model is.” And this model is a well-considered and structured analysis of networked camera security, and it goes a bit beyond</p>
<p>So let me start with the who, what, why and how of this model.</p>
<p><strong>Who</strong> did this? The models were created, analyzed and documented by <a href="http://www.provenrun.com/about/" target="_blank">Prove & Run</a>, a French software firm, on contract to Arm.</p>
<p><strong>What</strong> is this? It's a Common Criteria Protection Profile. If you're not familiar with the Common Criteria, it's an attempt to use the buying power of major governments to improve the security of the things they buy, and to reduce costs for manufacturers by aligning their security requirements. That fundamental nature, of being a Protection Profile, controls the form of the document, and the models within it. The models of 'what we're working on' vary by purpose. We construct models to help us analyze and to help us communicate. We might want to communicate to persuade, to discuss, or to document. Our documents might be hyper-transient on a whiteboard or napkin, or designed for archival use. Both are choices. It takes work to write the more formal version.</p>
<p><strong>Why</strong> "With the inherent diversity of IoT there will be a greater need for device manufacturers to have a reference TMSA for their product. Arm has created a series of reference English language Protection Profiles for IoT products to show how this might be done in a way that is understandable by non-security experts. These security analyses are accompanied by at a glance summary documents and useful appendices that show how Arm TrustZone and CryptoIsland technology can be used to meet some of the SFRs. We hope that you find these documents useful as a starting point for creating a TMSA for your IoT device."</p>
<p>So to restate that, Arm wants to help their customers threat model, and understand how to use Arm's feature sets to mitigate threats to a common class of device. Cool! It's an important goal, and I'm glad Arm is investing in it. (And we will return to this goal.)</p>
<p><strong>How</strong> they do this is they:</p>
<ol>
<li>Give an overview of the <del>camera</del> TOE, and its use and major security features. By TOE, they mean Target of Evaluation, which is a subset of the camera. So how they do this is very, very, strongly grounded in the Common Criteria, to an extent that it's hard for anyone not grounded in that world to read.</li>
<li>Provide a diagram of what's in scope, and a set of assets to be protected.</li>
<li>Offer a set of threats. I'll analyze these below.</li>
<li>List a set of expected security policies that the end user will have. Some of these, frankly, are optimistic, such as "the admin shall change the default passwords," and "are assumed to follow and apply administrative guidance." However, optimistic or not, they are explicit, which allows us to evaluate them, and decide if they work for us. (Alternate approaches might be to not have a password to the device, and to remotely administer it. There are associated security issues, which we could also evaluate.)</li>
<li>Tie the security objectives to a set of threats.</li>
<li>Derive security requirements to meet the objectives.</li>
<li>Compare the requirements to Arm's CryptoIsland, Trustzone, and Root of Trust "products." </li>
</ol>
<p>With that, let me turn to the 4-question frame.</p>
<p>1. <strong>What are we working on?</strong><br /> This is addressed by the TOE, the diagram, and the set of security policies.</p>
<p>2. <strong>What can go wrong?</strong><br /> The structured approach to the interplay between threats, objectives and requirements is interesting. It may be one step more than is needed, but perhaps not, especially when we consider the goal of experts in areas other than security using them "as a starting point for creating a TMSA for your IoT device."</p>
<p>More interesting is where do the threats come from? The threats which they list are:</p>
<ul>
<li>T.impersonation</li>
<li>T.MITM</li>
<li>T.firmware_abuse</li>
<li>T.tamper</li>
<li>P.Credential_Management (The admin will change the password)</li>
<li>A.Trusted_Admin</li>
</ul>
<p>It is unclear where it comes from. Why these threats? What else was considered and rejected? (It may be that one more versed in Common Criteria than me finds the answers obvious; I suspect the overlap of that set with their target audience is less than 100%.)</p>
<p>Still taking the list as it is, I think that the TOE does not mean to me what it means to them. The TOE excludes the network, and so excludes impersonation and MITM. And so I think there's an alternate TOE, which includes my yellow line, although not the blue box. With that added, the impersonation and MITM threats make more sense. Also, having added it, I am now concerned about denial of service threats to and through the network interface, and also, potentially from the miria-d devices out there. (Although the protections of secure storage and authenticated firmware would protect most cameras with these features from permanently joining such a botnet.)</p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2018/network-camera_framed-copy.jpg" alt="Network camera framed copy" title="network-camera_framed copy.jpg" border="0" width="318" height="226" />
<p>We are now in the realm of the list of threats list as it is not, which is the issue of <a href="http://www.insecam.org/" target="_blank">video</a> <a href="https://www.shodan.io/explore/tag/webcam" target="_blank">privacy</a>, or information disclosure by cameras. To be fair, that could be out of scope. I might argue that it should not be, but I will argue that it should not silently be out of scope. (It could also be subsumed under impersonation and the P.Credential_Management.)</p>
<p>There are also no threats listed relative to the 'general purpose operating system' which may be present. And from a Common Criteria perspective, that's sensible — it has a PP. We'll return to this in asking if we did a good job.</p>
<p>3. <strong>What are we going to do about it?</strong><br />The answers to this are a set of security functional requirements in section 6. I want to touch on one, and here, I will take the liberty of disagreeing. The line reads: MITM "assumes that the TOE can be attacked by intercepting or spying communications with remote servers. This threat is countered by the security objective OT.COMMUNICATION that ensures authentication of remote servers and protection in confidentiality and integrity of exchanged data." Authenticating remote servers is not sufficient to meet this goal. At a base, the difference between TLS 1.2 and 1.3 is about this problem. Solving it fully is hard -- does the camera need to reach out to a Certificate Authority for a cert? I want more here.</p>
<p>4. <strong>Did we do a good job?</strong><br /> I'm going to re-formulate that question, and ask instead, <em>Is the protection profile <del>the right</del> a good form to meet Arm's goals</em>? As a reminder, the goal is to help non-security experts use them "as a starting point for creating a TMSA for your IoT device."</p>
<p>To me, much of the form is sensible for that goal, especially section 6, and that sensible form may be doubly hidden. It's hidden on page 18 and onwards, and it's hidden behind a complex and intimidating form. The meat of the document starts 9 pages in. The language is heavily formal. A useful side effect of formality is the language is quite clear, especially compared to a lot of documents I see.</p>
<p>There is not enough discussion of vulnerabilities of the operating system and other software components. This does a dis-service to the non-expert customers for this document. They should get some guidance, perhaps couched in an acknowledgement that it's outside the norms of a protection profile. There may be Common-Criteria specific reasons that doing that is bad, including drawing the attention of the evaluation labs. If that's the case, then adding a third document to the zip file seems appropriate.</p><p>
</p><p>In closing, I found this to be a really interesting model to examine. What do you think of it? What else should we look at?</p>Gartner on DevSecOps Toolchain2018-04-10T00:00:00Zhttps://shostack.org/blog/gartner-on-devsecops-toolchain/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>I hadn't seen "<a href="https://www.gartner.com/doc/reprints?id=1-4PMXPUK&ct=180125&st=sb&elq_mid=432&elq_cid=199413" target="_blank">Integrating Security Into the DevSecOps Toolchain</a>," which is a Gartner piece that's fairly comprehensive, grounded and well-thought through.</p>
<p>If you enjoyed my "<a href="https://shostack.org/blog/reasonable-software-security-engineering/">Reasonable Software Security Engineering</a>," then this Gartner blog does a nice job of laying out important aspects which didn't fit into that ISACA piece.</p>
<p>Thanks to Stephen de Vries of <u>Continuum</u> [link to https://www.continuumsecurity.net/ no longer works] for drawing my attention to it.</p>Security Engineering: Computers versus Bridges2018-04-11T00:00:00Zhttps://shostack.org/blog/security-engineering-computers-versus-bridges/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/Brooklyn-Bridge_Framed.jpg" alt="Brooklyn Bridge" class="single-post-main-img" />
<p>Joseph Lorenzo Hall has a post at the Center for Democracy and Technology, "<a href="https://cdt.org/blog/taking-the-pulse-of-security-research/" target="_blank">Taking the Pulse of Security Research</a>." One part of the post is <a href="https://cdt.org/files/2018/04/2018-04-09-security-research-expert-statement-final.pdf" target="_blank">an expert statement on security research</a>, and I'm one of the experts who has signed on.</p>
<p>I fully support what CDT chose to include in the statement, and I want to go deeper. The back and forth of design and critique is not only a critical part of how an individual design gets better, but fields in which such criticism is the norm advance faster.</p>
<p>A quick search in Petroski's <cite><a href="https://amzn.to/2qjCGAu" target="_blank">Engineers of Dreams: Great Bridge Builders and the Spanning of America</a></cite> brings us the following. (The Roeblings built the Brooklyn Bridge, Lindenthal had proposed a concept for the crossing, which lost to Roebling's, and he built many others.)</p>
<blockquote>In Lindenthal’s case, he was so committed to the suspension concept for bridging the Hudson River that he turned the argument naturally and not unfairly to his use. Lindenthal admitted, for example, that it was “a popular assumption that suspension bridges cannot be well used for railroad purposes,” and further conceded that throughout the world there was only one suspension bridge then carrying railroad tracks, Roebling’s Niagara Gorge Bridge, completed in 1854, over which trains had to move slowly. However, rather than seeing this as scant evidence for his case, Lindenthal held up as a model the “greater moral courage and more abiding faith in the truth of constructive principles” that Roebling needed to build his bridge in the face of contemporary criticism by the “most eminent bridge engineers then living.” In Lindenthal’s time, three decades later, it was not merely a question of moral courage; “nowadays bridges are not built on faith,” and there was “not another field of applied mechanics where results can be predicted with so much precision as in bridges of iron and steel.” ("<a href="https://amzn.to/2qjCGAu" target="_blank">Engineers of Dreams: Great Bridge Builders and the Spanning of America</a>," Henry Petroski)</blockquote>
<p>Importantly for the case which CDT is making, over the span of thirty years, we went from a single suspension bridge to "much precision" in their construction. That progress happened because criticisms and questions are standard while a bridge is proposed, and if it fails, there are inquests and inquiries as to why.</p>
<p>In his <cite><a href="https://amzn.to/2JDqraW" target="_blank">The Great Bridge: The Epic Story of the Building of the Brooklyn Bridge</a></cite>, David McCullough describes the prolonged public discussion of the engineering merits:</p>
<blockquote>It had been said repeatedly by critics of the plan that a single span of such length was impossible, that the bridge trains would shake the structure to pieces and, more frequently, that no amount of calculations on paper could guarantee how it might hold up in heavy winds, but the odds were that the great river span would thrash and twist until it snapped in two and fell, the way the Wheeling Bridge had done (a spectacle some of his critics hoped to be on hand for, to judge by the tone of their attacks).</blockquote>
<p>The process of debating plans for a bridge strengthen, not weaken, the resulting structure. Both books are worth reading as you think about how to advance the field of cybersecurity.</p>
<p class="credit-text">Image credit: <a href="https://clevelandelectriclabs.com/structural-health-monitoring/brooklyn-bridge/" target="_blank">Cleveland Electric</a>, on their page about a fiber optic structural monitoring system which they retro-fitted onto the bridge in question.</p>Threat Model Thursday: Talking, Dialogue and Review2018-04-12T00:00:00Zhttps://shostack.org/blog/tmt-talking-dialogue-and-review/AdamAs we head into RSA, I want to hold the technical TM Thursday post, and talk about how we talk to others in our organizations about particular threat models, and how we frame those conversations.<img src="https://shostack.org/blog/img/2018/collaboration-framed.jpg" alt="multiple individuals collaborating over a laptop and paperwork" class="single-post-main-img" />
<p>As we head into RSA, I want to hold the technical TM Thursday post, and talk about how we talk to others in our organizations about particular threat models, and how we frame those conversations.</p>
<p>I’m a big fan of the whiteboard-driven dialogue part of threat modeling. That’s where we look at a design, find issues, and make tradeoffs together with developers, operations, and others. The core is the tradeoff: if we do this, it has this effect. I’m borrowing here John Allspaw’s focus on the <a href="https://www.kitchensoap.com/2017/08/12/multiple-perspectives-on-technical-problems-and-solutions/" target="_blank" rel="noopener">social nature of dialogue</a>: coming together to explore ideas. It’s rare to have a consultant as an active participant in these dialogues, because a consultant does not have ‘skin in the game,’ they do not carry responsibility for the tradeoffs. These conversations involve a lot of “what about?” and “what if” statements, and active listening is common.</p>
<p>Let me contrast that with the “threat model review.” When reviews happen late in a cycle, they are unlikely to be dialogues about tradeoffs, because the big decisions have been made. At their best, they are validation that the work has been done appropriately. Unfortunately, they frequently devolve into tools for re-visiting decisions that have been made, or arguments for bringing security in next time. Here, outside consultants can add a lot of value, because they’re less tied to the social aspects of the conversation, offer a “review" or “assessment.” These conversations involve a lot of “why” and “did you” questions. They often feel inquisitorial, investigatory and judgmental. Those being questioned often spend time explaining the tradeoffs that were made, and recording those tradeoff discussions was rarely a priority as decisions were made.</p>
<p>These social frames interleave with the activities and deliverables involved in threat modeling. We can benefit from a bit more reductionism in taking ‘threat modeling’ down to smaller units so we can understand and experiment. For example, my colleagues at RISCS <u>refer</u> [link to https://www.riscs.org.uk/2018/04/11/developer-centred-security-developers-den/ no longer works] to “traditional threat modeling approaches,” and we can read that lots of ways. At a technical level, was that an attacker-centric approach grounded in TARA? STRIDE-per-element? At a social level, was it a matter of security champs coming in late and offering their opinions on the threat modeling that had been done?</p>
<p>So I can read the discussion about the ThoughtWorks “Sensible Conversations” as a social shift from a review mode to a dialogue mode, in which case it seems very sensible to me, and I can read it as about the technical shift about their attacker/asset cards. My first read is that their success is more about the social shift which is the headline. The technical shift (or shifts) may be a part of enabling that by saying “hey, lets try a different approach.”</p>
<p>Image: <a href="https://unsplash.com/photos/UCZF1sXcejo" target="_blank" rel="noopener">Štefan Štefančík</a>. Thanks to FS & SW for feedback on the draft.</p>346,000 Wuhan Citizens’ Secrets2018-04-13T00:00:00Zhttps://shostack.org/blog/346000-wuhan-citizens-secrets/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/Chinese-Personal-Data-framed.jpg" alt=" " class="single-post-main-img" />
<p>"346,000 Wuhan Citizens’ Secrets" was an exhibition created with $800 worth of data by Deng Yufeng. From the New York Times:</p>
<blockquote>Six months ago, Mr. Deng started buying people’s information, using the Chinese messaging app QQ to reach sellers. He said that the data was easy to find and that he paid a total of $800 for people’s names, genders, phone numbers, online shopping records, travel itineraries, license plate numbers — at a cost at just over a tenth of a penny per person.</blockquote>
<p>"<a href="https://www.nytimes.com/2018/04/13/technology/china-personal-data-museum.html" target="_blank">The Personal Data of 346,000 People, Hung on a Museum Wall</a>," by Sui-Lee Wee and Elsie Chen.</p>Threat Modeling Thursday: #threatmodelhero2018-04-19T00:00:00Zhttps://shostack.org/blog/tmt-hashtag-threatmodelhero/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/threatmodelhero_Framed.jpg" alt="threat modeling hero badge" class="single-post-main-img" />
<p>My friends at Continuum Security have some cool swag here at RSA. Go get some at South 2125 (the Spanish Pavilion). Their meet us <u>blog post</u> [link to https://www.continuumsecurity.net/meet-us-rsa-discuss-easythreatmodeling/ no longer works].</p>Designing for Good Social Systems2018-04-23T00:00:00Zhttps://shostack.org/blog/designing-for-good-social-systems/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>There's a long story in the New York Times, "<a href="https://www.nytimes.com/2018/04/21/world/asia/facebook-sri-lanka-riots.html">Where Countries Are Tinderboxes and Facebook Is a Match</a>:"</p>
<blockquote>A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.</blockquote>
<p>I've written previously about <a href="https://adam.shostack.org/blog/2015/07/the-drama-triangle/">the drama triangle</a>, how social media drives <a href="https://adam.shostack.org/blog/2017/01/the-dope-cycle-and-the-two-minutes-hate/">engagement through dopamine and hatred</a>, and a <a href="https://adam.shostack.org/blog/2017/07/the-dope-cycle-and-a-deep-breath/">tool to help you breathe through such feelings.</a></p>
<p>These social media tools are dangerous, not just to our mental health, but to the health of our societies. They are actively being used to fragment, radicalize and undermine legitimacy. The techniques to drive outrage are developed and deployed at rates that are nearly impossible for normal people to understand or engage with. We, and these platforms, need to learn to create tools that preserve the good things we get from social media, while inhibiting the bad. And in that sense, I'm excited to read about "<a href="https://www.knightfoundation.org/articles/20-projects-will-address-the-spread-of-misinformation-through-knight-prototype-fund">20 Projects Will Address The Spread Of Misinformation Through Knight Prototype Fund</a>."</p>
<p>We can usefully think of this as a type of threat modeling.</p>
<ul>
<li>What are we working on? Social technology.</li>
<li>What can go wrong? Many things, including threats, defamation, and the spread of fake news. Each new system context brings with it new types of fail. We have to extend our existing models and create new ones to address those.</li>
<li>What are we going to do about it? The Knight prototypes are an interesting exploration of possible answers.</li>
<li>Did we do a good job? Not yet.</li>
</ul>
<p>These emergent properties of the systems are not inherent. Different systems have different problems, and that means we can discover how design choices interact with these downsides. I would love to hear about other useful efforts to understand and respond to these emergent types of threats. How do we characterize the attacks? How do we think about defenses? What's worked to minimize the attacks or their impacts on other systems? What "obvious" defenses, such as "real names," tend to fail?</p>
<p>Image: <a href="https://www.washingtonpost.com/graphics/2017/business/russian-ads-facebook-targeting/">Washington Post</a></p>$35M for Covering up A Breach2018-04-24T00:00:00Zhttps://shostack.org/blog/35m-for-covering-up-a-breach/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>"<a href="http://www.businessinsider.com/yahoo-hack-35-million-sec-fine-for-not-telling-investors-about-russian-hack-2018-4" target="_blank">The remains of Yahoo just got hit with a $35 million fine because it didn't tell investors about Russian hacking</a>." The headline says most of it, but importantly, "'We do not second-guess good faith exercises of judgment about cyber-incident disclosure. But we have also cautioned that a company's response to such an event could be so lacking that an enforcement action would be warranted. This is clearly such a case,' said Steven Peikin, Co-Director of the SEC Enforcement Division."</p>
<p>A lot of times, I hear people, including lawyers, get very focused on "it's not material." Those people should study the SEC's <a href="https://www.sec.gov/news/press-release/2018-71" target="_blank">statement</a> carefully.</p>Threat Model Thursday: Q&A2018-04-26T00:00:00Zhttps://shostack.org/blog/tmt-q-and-a/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>In a comment on "<a href="https://adam.shostack.org/blog/2018/04/threat-model-thursday-arms-network-camera-tmsa/">Threat Model Thursday: ARM’s Network Camera TMSA</a>", Dips asks:</p>
<blockquote>Would it been better if they had been more explicit with their graphics ? I am a beginner in Threat Modelling and would have appreciated a detailed diagram denoting the trust boundaries. Do you think it would help? Or it would further complicate?</blockquote>
<p>That's a great question, and exactly what I hoped for when I thought about a series. The simplest answer is 'probably!' More explicit boundaries would be helpful. My second answer is 'that's a great exercise!' Where could the boundaries be placed? What would enforce them there? Where else could you put them? What are the tradeoffs between the two?</p>
<p>My third answer is to re-phrase the question. Rather than asking 'would it help,' let's ask 'who might be helped by better boundary demarcation' 'when would it help them,' and 'is this the most productive thing to improve?' I would love to hear everyone's perspective.</p>
<p>Lastly, it would be reasonable to expect that Arm might produce a model that depends on the sorts of boundaries that their systems can help protect. It would be really interesting to see a model from a different perspective. If someone draws one or finds one, I'd be happy to look at it for the next article in the series.</p>Best Cyber News Blogs, thanks!2018-04-30T00:00:00Zhttps://shostack.org/blog/best-cyber-news-blogs-thanks/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>CyberDB was kind enough to include us in their "<a href="https://www.cyberdb.co/best-cyber-security-news-blogs-2018/" target="_blank">Best Cyber Security News Blogs 2018</a>. There's some standbys and some I wasn't familiar with on the list. Thank you for including us!</p>TESS Launch Closeup2018-05-01T00:00:00Zhttps://shostack.org/blog/tess-launch-closeup/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/FalconTessLaunchKraus1200_small-1.jpg" alt="Falcon TESS Launch" class="single-post-main-img" />
<p><a href="https://apod.nasa.gov/apod/ap180421.html" target="_blank">View larger.</a></p>
<p><a href="http://johnkrausphotos.com/" target="_blank">John Kraus</a>, via <a href="https://apod.nasa.gov/apod/" target="_blank">APOD</a>.</p>Redzone Podcast on threat modeling2018-05-07T00:00:00Zhttps://shostack.org/blog/redzone-podcast-on-threat-modeling/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>I enjoyed being a guest recently on Bill Murphy's RedZone podcast. You can take a listen with a variety of tools at "<a href="http://www.redzonetech.net/podcast/adam-shostack/" target="_blank">
How CIOs Can Use Threat Modelling to Benefit Their Organization: Build Out Your Defenses!</a>."</p>Just Culture and Information Security2018-05-09T00:00:00Zhttps://shostack.org/blog/just-culture-and-information-security/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<blockquote>Yesterday Twitter revealed they had accidentally stored plain-text passwords in some log files. There was no indication the data was accessed and users were warned to update their passwords. There was no known breach, but Twitter went public anyway, and was excoriated in the press and… on Twitter.<br /><br />
This is a problem for our profession and industry. We get locked into a cycle where any public disclosure of a breach or security mistake results in...</blockquote>
<p>Well, you can imagine what it results in, or you can go read "<a href="https://securosis.com/blog/the-security-profession-needs-to-adopt-just-culture" target="_blank">The Security Profession Needs to Adopt Just Culture</a>" by Rich Mogull. It's a very important article, and you should read it, and the links, and take the time to consider what it means. In that spirit, I want to reflect on something I said the other night. I was being intentionally provocative, and perhaps crossed the line away from being just. What I said was a password management company had one job, and if they expose your passwords, you should not use their password management software.</p>
<p>Someone else in the room, coming from a background where they have blameless post-mortems, challenged my use of the phrase 'you had one job,' and praised the company for coming forward. And I've been thinking about that, and my take is, the design where all the passwords are at a single site is substantially and predictably worse than a design where the passwords are distributed in local clients and local data storage. (There are tradeoffs. With a single site, you may be able to monitor for and respond to unusual access patterns rapidly, and you can upgrade all the software at once. There is an availability benefit. My assessment is that the single-store design is not worth it, because of the catastrophic failure modes.)</p>
<p>It was a fair criticism. I've <a href="https://shostack.org/archive/2015/07/security-lessons-from-healthcare-gov/">previously said</a> "we live in an 'outrage world' where it’s easier to point fingers and giggle in 140 characters and hurt people’s lives or careers than it is to make a positive contribution." Did I fall into that trap myself? Possibly.</p>
<p>In "<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3776518/" target="_blank">Just Culture: A Foundation for Balanced Accountability and Patient Safety</a>," which Rich links, there's a table in Figure 2, headed "Choose the column that best describes the caregiver's action." In reading that table, I believe that a password manager with central storage falls into the reckless category, although perhaps it's merely risky. In either case, the system leaders are supposed to share in accountability.</p>
<p>Could I have been more nuanced? Certainly. Would it have carried the same impact? No. Justified? I'd love to hear your thoughts!</p>Joining the Continuum Team2018-05-15T00:00:00Zhttps://shostack.org/blog/joining-the-continuum-team/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/continuum-logo.png" alt="Continuum logo" class="single-post-main-img" />
<p>I'm pleased to share the news that I've <u>joined</u> [link to https://www.continuumsecurity.net/adam-shostack-joins-continuum-securitys-board/ no longer works] <u>Continuum Security</u>'s [link to https://www.continuumsecurity.net/ no longer works] advisory board. I am excited about the vision that Continuum is bringing to software security: "We help you design, build and manage the security of your software solutions." They're doing so for both happy customers and a growing community. And I've come to love their framing: "Security is not special. Performance, quality and availability is everyone’s responsibility and so is security. After all, who understands the code and environment better than the developers and ops teams themselves?" They're right. Security has to earn our seat at the table. We have to learn to collaborate better, and that requires software that helps the enterprise manage application security risk from the start of, and throughout, the software development process.</p>4 Common Missteps in Threat Modeling2018-05-21T00:00:00Zhttps://shostack.org/blog/4-common-missteps-in-threat-modeling/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Joan Goodchild has is looking at threat modeling for IBM's Security Intelligence blog, and quotes me in "<a href="https://securityintelligence.com/ready-to-try-threat-modeling-avoid-these-4-common-missteps/" target="_blank">Ready to Try Threat Modeling? Avoid These 4 Common Missteps</a>."</p>Threat Model Thursday: Google on Kubernetes2018-05-24T00:00:00Zhttps://shostack.org/blog/tmt-google-on-kubernetes/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/network-isolation-diagram.png" alt="Network Isolation Diagram" class="single-post-main-img" />
<p>There's a recent post on the Google Cloud Platform Blog, "<a href="https://cloudplatform.googleblog.com/2018/05/Exploring-container-security-Isolation-at-different-layers-of-the-Kubernetes-stack.html">Exploring container security: Isolation at different layers of the Kubernetes stack</a>" that's the subject of our next Threat Modeling Thursday post. As always, <a href="https://adam.shostack.org/blog/2018/03/threat-model-thursday-synopsys/">our goal</a> is to look and see what we can learn, not to say 'this is bad.' There's more than one way to do it. Also, <a href="https://adam.shostack.org/blog/2018/04/threat-model-thursday-arms-network-camera-tmsa/">last time</a>, I did a who/what/why/how analysis, which turned out to be pretty time consuming, and so I'm going to avoid that going forward.</p>
<p>The first thing to point out is that there's a system model that intended to support multiple analyses of 'what can go wrong.' ("Sample scenario...Time to do a little threat modeling.") This is a very cool demonstration of how to communicate about security along a supply chain. In this instance, the answers to "what are we working on" vary with who "we" are. That might be the Kubernetes team, or it might be someone using Kubernetes to implement a Multi-tenant SaaS workload.</p>
<h3>What are we working on?</h3>
<p>The answers to this are either Kubernetes or the mutli-tenant system. The post includes a nice diagram (reproduced above) of Kubernetes and its boundaries. Speaking of boundaries, they break out <em>security</em> boundaries which enforce the <em>trust</em> boundaries. I've also heard 'security boundaries' referred to as 'security barriers' or 'boundary enforcement.' They also say "At Google, we aim to protect all trust boundaries with at least two different security boundaries that each need to fail in order to cross a trust boundary."</p>
<p>But you can use this diagram to help you either improve Kubernetes, or to improve the security of systems hosted in Kubernetes.</p>
<h3>What can go wrong?</h3>
<p>Isolation failures. You get a resource isolation failure...you get a network isolation failure, everyone gets an isolation failure! Well, no, not really. You only get an isolation fail if your security boundaries fail. (Sorry! Sorry?)</p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2018/isolation-failure.jpg" alt="Oprah announcing everyone gets an isolation failure" title="isolation-failure.jpg" border="0" width="309" height="232" />
<p>This use of isolation is interestingly different from STRIDE or ATT&CK. In many threat models of userland code on a desktop, the answer is 'and then they can run code, and do all sorts of things.' The isolation failure was the end of a chain, rather than the start, and you focus on the spoof, tamper or EoP/RCE that gets you onto the chain. In that sense, isolation may seem frustratingly vague for many readers. But isolation is a useful property to have, and more importantly, it's what we're asking Kubernetes to provide.</p>
<p>There's also mention of cryptomining (and cgroups as a fix) and running untrusted code (use sandboxing). Especially with regards to untrusted code, I'd like to see more discussion of how to run untrusted code, which may or may not be inside a web trust boundary, semi-safely, which is to say either attempting to control its output, or safely, which is to say, in a separate web namespace.</p>
<h3>What are we going to do about it?</h3>
<p>You can use this model to decide what you're going to do about it. How far up or down the list of isolations should you be? Does your app need its own container, pod, node, cluster?</p>
<p>I would like to see more precision in the wording of the controls — what does 'some' control-pane isolation mean? Is it a level of effort to overcome, a set of things you can rely on and some you can't? The crisp expression of these qualities isn't easy, but the authors are in a better position to express them than their readers. (There may be more on this elsewhere in the series.)</p>
<h3>Did we do a good job?</h3>
<p>There's no explicit discussion, but my guess is that the post was vetted by a great many people.</p>
<p>To sum up, this is a great example of using threat modeling to communicate between supplier and customer. By drawing the model and using it to threat model, they help people decide if GCP is right, and if so, how to configure it in the most secure way.</p>
<p>What do you see in the models?</p>NTSB on Uber (Preliminary)2018-05-25T00:00:00Zhttps://shostack.org/blog/ntsb-on-uber-preliminary/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>The NTSB has released "<a href="https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf">Preliminary Report Highway HWY18MH010</a>," on the Uber self-driving car which struck and killed a woman. I haven't had a chance to read the report carefully. </p>
<p>Brad Templeton has excellent analysis of the report at "<a href="https://ideas.4brad.com/ntsb-report-implies-serious-fault-uber-fatality">NTSB Report implies serious fault for Uber in fatality</a>" (and Brad's writings overall on the subject have been phenomenal.)</p>
<p>A few important things to note, cribbed from Brad.</p>
<ul>
<li>The driver was not looking at her phone, but a screen with diagnostic information from the self-driving systems.</li>
<li>The car detected a need to brake with approximately enough time to stop had it automatically applied the brakes.</li>
<li>That system was turned off for a variety of reasons that look bad (in hindsight, and probably could have been critiqued at the time).</li>
</ul>
<p>My only comment right now is wouldn't it be nice to have this level of fact finding in the world of cyber?</p>
<p>Also, it's very clear that the vehicle was carefully preserved. Can anyone say how the NTSB and/or Uber preserved the data center, cloud or other remote parts of the computer systems involved, including the algorithms that were deployed that day (versus reconstructing them later)?</p>The DREAD Pirates2018-05-31T00:00:00Zhttps://shostack.org/blog/tmt-the-dread-pirates/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<blockquote>Then he explained the name was important for inspiring the necessary fear. You see, no one would surrender to the Dread Pirate Westley.</blockquote>
<p>The DREAD approach was created early in the security pushes at Microsoft as a way to prioritize issues. It's not a very good way, you see no one would surrender to the Bug Bar Pirate, Roberts. And so the approach keeps going, despite its many problems.</p>
<p>There are many properties one might want in a bug ranking system for internally found bugs. They include:</p>
<ul>
<li>A cool name</li>
<li>A useful mnemonic</li>
<li>A reduction in argument about bugs</li>
<li>Consistency between raters</li>
<li>Alignment with intuition </li>
<li>Immutability of ratings: the bug is rated once, and then is unlikely to change</li>
<li>Alignment with post-launch/integration/ship rules</li>
</ul>
<p>DREAD certainly meets the first of these, and perhaps the second two. And it was an early attempt at a multi-factor rating of bugs. But there are many problems which DREAD brings that newer approaches deal with. </p>
<p>The most problematic aspect of DREAD is that there's little consistency, especially in the middle. What counts as a 6 damage versus 7, or 6 versus 7 exploitability? Without calibration, different raters will not be consistent. Each of the scores can be mis-estimated, and there's a tendency to underestimate things like discoverability of bugs in your own product.</p>
<p>The second problem is that you set an arbitrary bar for fixes, for example, everything above a 6.5 gets fixed. That makes the distinction between a 6 and a 7 sometimes matters a lot. The score does not relate to what needs to get fixed when found externally.</p>
<p>This illustrates why Discoverability is an odd things to bring into the risk equation. You may have a discoverability of "1" on Monday, and 10 on Tuesday. ("Thanks, Full-Disclosure!") So something could have a 5.5 DREAD score because of low discoverability but require a critical update. Suddenly the DREAD score of the issue is mutable. So it's hard to use DREAD on an externally discovered bug, or one delivered via a bug bounty. So now you have two bug-ranking systems, and what do you do when they disagree? This happened to Microsoft repeatedly, and led to the switch to <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/cc307404.aspx" target="_blank">a bug bar</a> approach.</p>
<p>Affected users is also odd: does an RCE in Flight Simulator matter less than one in Word? Perhaps in the grand scheme of things, but I hope the Flight Simulator team is fixing their RCEs.</p>
<p>Stepping beyond the problems internal to DREAD to DREAD within a software organization, it only measures half of what you need to measure. You need to measure both the security severity and the fix cost. Otherwise, you run the risk of finding something with a DREAD of 10, but it's a key feature (Office Macros), and so it escalates, and you don't fix it. There are other issues which are easy to fix (S3 bucket permissions), and so it doesn't matter if you thought discoverability was low. This is shared by other systems, but the focus on a crisp line in DREAD, everything above a 6.5 gets fixed, exacerbates the issue.</p>
<p>For all these reasons, with regards to DREAD? Fully skeptical, and I have been for <a href="https://social.msdn.microsoft.com/Forums/en-US/c601e0ca-5f38-4a07-8a46-40e4adcbc293/do-you-use-dread-as-it-is?forum=sdlprocess" target="_blank">over a decade</a>. If you want to fix these things, the thing to do is not create confusion by saying "DREAD can also be a 1-3 system!", but to version and revise DREAD, for example, by releasing DREAD 2. I'm exploring a similar approach with <a href="https://github.com/adamshostack/dfd3" target="_blank">DFDs</a>.</p>
<p>I'm hopeful that this post can serve as a collection of reasons to not use DREAD v1, or properties that a new system should have. What'd I miss?</p>Conway's Law and Software Security2018-06-06T00:00:00Zhttps://shostack.org/blog/conways-law-and-software-security/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>In "<a href="https://www.csoonline.com/article/3270644/application-development/conways-law-does-your-organization-s-structure-make-software-security-even-harder.html" target="_blank">Conway's Law: does your organization’s structure make software security even harder?</a>," Steve Lipner mixes history and wisdom:</p>
<blockquote>As a result, the developers understood pretty quickly that product security was their job rather than ours. And instead of having twenty or thirty security engineers trying to “inspect (or test) security in” to the code, we had 30 or 40 thousand software engineers trying to create secure code. It made a big difference.</blockquote>Eagle vs Fox2018-06-10T00:00:00Zhttps://shostack.org/blog/eagle-vs-fox/adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/Bald-Eagle-Fox-Rabbit-San-Juan-6874.jpg" alt="Bald Eagle in flight carrying fox" class="single-post-main-img" />
<p>Kevin Ebi captured an <a href="https://livingwilderness.photoshelter.com/image/I0000D0jSuxyfaQA" target="_blank">amazing set</a> of images of an eagle and a fox fighting over a rabbit. Check them out and read the story at <a href="https://livingwilderness.photoshelter.com/image/I0000D0jSuxyfaQA" target="_blank">his site</a>.</p>'EFAIL' Is Why We Can't Have Golden Keys2018-06-11T00:00:00Zhttps://shostack.org/blog/efail-is-why-we-cant-have-golden-keys/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>I have a new essay at Dark Reading, "<a href="https://www.darkreading.com/vulnerabilities---threats/efail-is-why-we-cant-have-golden-keys/a/d-id/1331938?">'EFAIL' Is Why We Can’t Have Golden Keys</a>." It starts:</p>
<blockquote>There's a newly announced set of issues labeled the "EFAIL encryption flaw" that reduces the security of PGP and S/MIME emails. Some of the issues are about HTML email parsing, others are about the use of CBC encryption. All show how hard it is to engineer secure systems, especially when those systems are composed of many components that had disparate design goals.</blockquote>Threat Model Thursday: Chromium Post-Spectre2018-06-14T00:00:00Zhttps://shostack.org/blog/tmt-chromium-post-spectre/AdamUnderstanding Google's Post-Spectre threat model<span class="single-post-excerpt"><!-- excerpt -->Understanding Google's Post-Spectre threat model<!-- /excerpt --></span>
<p>Today's Threat Model Thursday is a look at "<a href="https://chromium.googlesource.com/chromium/src/+/master/docs/security/side-channel-threat-model.md" target="_blank">Post-Spectre Threat Model Re-Think</a>," from a dozen or so folks at Google. As always, I'm looking at this from a perspective of what can we learn and to encourage dialogue around what makes for a good threat model.</p>
<h3>What are we working on?</h3>
<p>From the title, I'd assume Chromium, but there's a <em>fascinating</em> comment in the introduction that this is wider: "<strong>any software</strong> that both (a) runs (native or interpreted) code from more than one source; and (b) attempts to create a security boundary inside a single address space, is potentially affected." This is important, and in fact, why I decided to hightlight the model. The intro also states, "we needed to re-think our threat model and defenses for Chrome renderer processes." In the problem statement, they mention that there are other, out of scope variants such as "a renderer reading the browser’s memory." </p>
<p>It would be helpful to me, and probably others, to diagram this, both for the Chrome case (the relationship between browser and renderer) and the broader case of that other software, because the modern web browser is a complex beast. As James Mickens <a href="http://scholar.harvard.edu/files/mickens/files/towashitallaway.pdf" target="_blank">says</a>:</p>
<blockquote>A modern Web page is a catastrophe. It's like a scene from one of those apocalyptic medieval paintings that depicts what would happen if Galactus arrived: people are tumbling into fiery crevasses and lamenting various lamentable things and hanging from playground equipment that would not pass OSHA safety checks. This kind of stuff is exactly what you'll see if you look at the HTML, CSS, and JavaScript in a modern Web page. Of course, no human can truly "look" at this content, because a Web page is now like V'Ger from the first "Star Trek" movie, a piece of technology that we once understood but can no longer fathom, a thrashing leviathan of code and markup written by people so untrustworthy that they're not even third parties, they're fifth parties
who weren't even INVITED to the party...</blockquote>
<h3>What can go wrong</h3>
<p>There is a detailed set of ways that confidentiality breaks current boundaries. Most surprising to me is the claim that clock jitter is not as useful as we'd expect, and even enumerating all the clocks is tricky! (Webkit seems to have a different perspective, that <a href="https://webkit.org/blog/8048/what-spectre-and-meltdown-mean-for-webkit/" target="_blank">reducing timer precision</a> is meaningful.)</p>
<p>There is also an issue of when to pass autofilled data to a renderer, and a goal of "Ensure User Intent When Sending Data To A Renderer." This is good, but usability may depend on normal people understanding that their renderer and browser are different. That's mitigated by taking user gestures as evidence of intent. That seems like a decent balancing of usability and security, but as I watch people using devices, I see a lot of gesturing to explore and discover the rapidly changing meanings of gestures, both within applications and across different applications and passwords.</p>
<h3>What are we going to do about it?</h3>
<p>As a non-expert in browser design, I'm not going to attempt to restate the mitigations. Each of the defensive approaches is presented with clear discussion of its limitations and the current intent. This is both great to see, and hard to follow for those not deep in browser design. That form of writing is probably appropriate, because otherwise the meaning gets lost in verbosity that's not useful to the people most impacted. I would like to see more out-linking as an aide to those trying to follow along.</p>
<h3>Did we do a good job?</h3>
<p>I'm very glad to see Google sharing this because we can see inside the planning of the architects, the known limits, and the demands on the supply chain (changes to compilers to reduce gadgets, changes to platforms to increase inter-process isolation), and in the end, "we now assume any active code can read any data in the same address space. The plan going forward must be to keep sensitive cross-origin data out of address spaces that run untrustworthy code." Again, <strong>that's more than just browsers</strong>. If your defensive approaches, mitigations or similar sections are this clear, you're doing a good job.</p>
Threat Model Thursday: Architectural Review and Threat Modeling2018-06-21T00:00:00Zhttps://shostack.org/blog/tmt-architectural-review-and-threat-modeling/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/micro-houses_Framed.jpg" alt="warehouse with many micro houses in various stages of completion" class="single-post-main-img" />
<p>For Threat Model Thursday, I want to use current events here in Seattle as a prism through which we can look at technology architecture review. If you want to take this as an excuse to civilly discuss the political side of this, please feel free.</p>
<p>Seattle has a housing and homelessness <a href="https://www.seattletimes.com/seattle-news/homeless/more-affordable-housing-only-way-to-solve-seattles-homeless-crisis-new-report-says/" target="_blank">crisis</a>. The cost of a house has risen nearly <a href="https://seattlebubble.com/blog/2018/04/24/case-shiller-seattle-home-prices-rising-fastest-in-the-nation/" target="_blank">25% above the 2007 market peak</a>, and has roughly doubled in the 6 years since April 2012. Fundamentally, demand has outstripped supply and continues to do so. As a city, we need more supply, and that means evaluating the value of things that constrain supply. This <a href="https://www.thestranger.com/slog/2018/06/18/27810880/a-libertarian-response-to-seattles-homelessness-crisis" target="_blank">commentary</a> from the local Libertarian party lists some of them.</p>
<p>The rules on what permits are needed to build a residence, what housing is acceptable, or how many unrelated people can live together (no more than <u>eight</u> [link to https://www.seattle.gov/dpd/cs/groups/pan/@pan/documents/web_informational/dpdd016420.pdf no longer works] ) are expressions of values and priorities. We prefer that the developers of housing not build housing rather than build housing that doesn't comply with the city's Office of Planning and Community Development 32 pages of <a href="https://drive.google.com/file/d/1c2i08FTZH63ILlkEFNBEXz8UtcXKHoIh/view" target="_blank">neighborhood design guidelines</a>. We prefer to bring developers back after a building is built if the <a href="http://www.capitolhillseattle.com/2018/06/design-board-settles-on-big-new-mural-to-solve-capitol-hill-buildings-color-problem/" target="_blank">siding is not the agreed color</a>. This is a choice that expresses the values of the city. And because I'm not a housing policy expert, I can miss some of the nuances and see the effect of the policies overall.</p>
<p>Let's transition from the housing crisis here in Seattle to the architecture crisis that we face in technology.</p>
<p>No, actually, I'm not quite there. The city <a href="http://www.sightline.org/2016/09/06/how-seattle-killed-micro-housing/" target="_blank">killed micro-apartments</a>, only to replace them with ... <a href="http://www.capitolhillseattle.com/2018/06/true-hope-village-a-tiny-piece-of-seattles-big-homelessness-and-affordability-problem-moves-forward-in-the-central-district/" target="_blank">artisanal micro-houses</a>. Note the variation in size and shape of the two houses in the foreground. Now, I know very little about construction, but I'm reasonably confident that if you read the previous piece on micro-housing, many of the concerns regulators were trying to address apply to "True Hope Village," construction pictured above. I want you, dear reader, to read the questions about how we deliver housing in Seattle, and treat them as a mirror into how your organization delivers software. Really, please, go read "<a href="http://www.sightline.org/2016/09/06/how-seattle-killed-micro-housing/" target="_blank">How Seattle Killed Micro-Housing</a>" and the "<a href="https://drive.google.com/file/d/1c2i08FTZH63ILlkEFNBEXz8UtcXKHoIh/view" target="_blank">Neighborhood Design Guidelines</a>" carefully. Not because you plan to build a house, but as a mirror of your own security design guidelines.</p>
<p>They may be no prettier.</p>
<p>In some companies, security is valued, but has no authority to force decisions. In others, there are mandatory policies and review boards. We in security have fought for these mandatory policies because without them, products ignored security. And similarly, we have housing rules because of unsafe, unsanitary or overcrowded housing. To reduce the blight of slums.</p>
<p>Security has design review boards which want to talk about the color of the siding a developer installed on the now live product. We have design regulation which kills apodments and tenement housing, and then glorifies tiny houses. From a distance, these rules make no sense. I didn't find it sensible, myself. I remember a meeting with the Microsoft Crypto board. I went in with some very specific questions regarding parameters and algorithms. Should we use this hash algorithm or that one? The meeting took not five whole minutes to go off the rails with suggestions about non-cryptographic architecture. I remember shipping the SDL Threat Modeling Tool, going through the roughly five policy tracking tools we had at the time, discovering at the very last minute that we had extra rules that were not documented in the documents that I found at the start. It drives a product manager nuts!</p>
<p>Worse, rules expand. From the executive suite, if a group isn't growing, maybe it can shrink? From a security perspective, the rapidly changing threat landscape justifies new rules. So there's motivation to ship new guidelines that, in passing, spend a page explaining all the changes that are taking place. And then I see "Incorporate or acknowledge the best features of existing early to mid-century buildings in new development." What does that mean? What are the best features of those buildings? How do I acknowledge them? I just want to ship my peer to peer blockchain features! And nothing in the design review guidelines is clearly objectionable. But taken as a whole, they create a complex and unpredictable, and thus expensive path to delivery.</p>
<p>We express values explicitly and implicitly. In Seattle, implicit expression of values has hobbled the market's ability to address a basic human need. One of the reasons that <a href="https://shostack.org/blog/ries-on-gatekeepers/">embedding is effective</a> is that the embedded gatekeepers can advise, interpret in relation to real questions. Embedding expresses the value of collaboration, of <a href="https://shostack.org/blog/tmt-talking-dialogue-and-review/">dialogue over review</a>. Does your security team express that security is more important than product delivery? Perhaps it is. When Microsoft stood down product shipping for security pushes, it was an explicit statement. Making your values explicit and debating prioritization is important.</p>
<p>What side effects do your security rules have? What rule is most expensive to comply with? What initiatives have you killed, accidentally or intentionally?</p>Carpenter!2018-06-25T00:00:00Zhttps://shostack.org/blog/carpenter/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/google-location-history-930x517_orig_Framed.png" alt="google location history example" class="single-post-main-img" />
<p>The decision in Carpenter v. United States is an unusually positive one for privacy. The Supreme Court ruled that the government generally can’t access historical cell-site location records without a warrant. (SCOTUS Blog links to <a href="http://www.scotusblog.com/case-files/cases/carpenter-v-united-states-2/" target="_blank">court documents</a>. The court put limits on the "third party" doctrine, and it will be fascinating to see how those limits play out.</p>
<p>A few interesting links:</p>
<ul>
<li>"<a href="http://reason.com/volokh/2018/06/22/first-thoughts-on-carpenter-v-united-sta" target="_blank">First Thoughts on Carpenter v. United States</a>" by Orin Kerr, who is very well respected authority on the law of search and seizure.</li>
<li>"<a href="http://reason.com/blog/2018/06/22/neil-gorsuch-joins-sonia-sotomayor-in-qu" target="_blank">Neil Gorsuch Joins Sonia Sotomayor in Questioning the Third-Party Doctrine</a>"</li>
<li> "<u>Ten Thoughts on Today’s Blockbuster Fourth Amendment Decision – Carpenter v. United States</u>" [link to https://concurringopinions.com/archives/2018/06/ten-thoughts-on-todays-blockbuster-fourth-amendment-decision-carpenter-v-united-states.html no longer works], by Lior Strahilevitz, whose work on the topic was cited in a dissent by Justice Thomas.
<blockquote>The most important sentence in Justice Gorsuch’s opinion appears at page 20: “Nor can I fault the Court today for its implicit but unmistakable conclusion that the rationale of Smith and Miller is wrong; indeed, I agree with that.” Justice Gorsuch is going to be on the Court for a very long time and he is signaling that in a properly presented case he will reject the third-party doctrine. That’s huge. What is less certain is whether his characterization of the majority opinion is apt.</blockquote>
</li>
</ul>
<p>As I said <a href="https://shostack.org/blog/the-carpenter-case/">previously</a>, I am thankful to the fine folks at the Knight First Amendment Institute at Columbia University for the opportunity to help with their technologists amicus brief in this case, and I'm glad to see that the third party doctrine is under stress. That doctrine has weakened the clear aims of the fourth amendment in protecting our daily lives against warrantless searches as our lives have involved storing more of our "papers" outside our homes.</p>
<p>Image via <a href="http://www.themobilepcguys.com/news--blog/yes-google-maybe-tracking-you" target="_blank">the mobile pc guys</a>, who have advice about how to check your location history on Google, which is one of many places where it may be being captured. That advice might still be useful — it's hard to tell if the UI has changed, since I had turned off those features.</p>Continuum Interview2018-06-28T00:00:00Zhttps://shostack.org/blog/continuum-interview/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Continuum has <u>released</u> [link to https://www.continuumsecurity.net/continuum-security-interview-with-adam-shostack/ no longer works] a video of me and Stuart Winter-Tear in conversation at the Open Security Summit:</p>
<iframe width="753" height="423" src="https://www.youtube.com/embed/U_8_tZXlufg" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen=""></iframe>
<p>"At the recent Open Security Summit we had the great pleasure of interviewing Adam Shostack about his keynote presentation “A seat at the table” and the challenge of getting security involved in product and application design. We covered numerous topics from the benefits brought to business by threat modeling to pooping unicorns."</p>Threat Model Thursdays: Crispin Cowan2018-07-05T00:00:00Zhttps://shostack.org/blog/tmt-crispin-cowan/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/leviathan_Framed.png" alt=" " class="single-post-main-img" />
<p>Over at the Leviathan blog, Crispin Cowan writes about "<a href="https://www.leviathansecurity.com/blog/the-calculus-of-threat-modeling">The Calculus Of Threat Modeling</a>." Crispin and I have collaborated and worked together over the years, and our approaches are explicitly aligned around the four question frame.</p>
<h3>What are we working on?</h3>
<p>One of the places where Crispin goes deeper is definitional. He's very precise about what a security principal is:</p>
<blockquote>A principal is any active entity in system with access privileges that are in any way distinct from some other component it talks to. Corollary: a principal is defined by its domain of access (the set of things it has access to). Domains of access can, and often do, overlap, but that they are different is what makes a security principal distinct.</blockquote>
<p>This also leads to the definition of attack surface (where principals interact), trust boundaries (the sum of the attack surfaces) and security boundaries (trust boundaries for which the engineers will fight). These are more well-defined than I tend to have, and I think it's a good set of definitions, or perhaps a good step forward in the discussion if you disagree.</p>
<h3>What can go wrong?</h3>
<p>His approach adds much more explicit description of principals who own elements of the diagram, and several self-check steps ("Ask again if we have all the connections..") I think of these as part of "did we do a good job?" and it's great to integrate such checks on an ongoing basis, rather than treating it as a step at the end.</p>
<h3>What are we going to do about it?</h3>
<p>Here Crispin has assessing complexity and mitigations. Assessing complexity is an interesting approach — a great many vulnerabilities appear on the most complex interfaces, and I think it's a useful strategy, similar to 'easy fixes first' for a prioritization approach.</p>
<p>He also has "c. Be sure to take a picture of the white board after the team is done describing the system." "d. Go home and create a threat model diagram." These are interesting steps, and I think deserve some discussion as to form (I think this is part of 'what are we working on?') and function. To function, we already have "a threat model diagram," and a record of it, in the picture of the whiteboard. I'm nitpicking here for two very specific reasons. First, the implication that what was done isn't a threat model diagram isn't accurate, and second, as the agile world likes to ask "why are you doing this work?"</p>
<p>I also want to ask, is there a reason to go from whiteboard to Visio? Also, as Crispin says, he's not simply transcribing, he's doing some fairly nuanced technical editing, "Collapse together any nodes that are actually executing as the same security principal." That means you can't hand off the work to a graphic designer, but you need an expensive security person to re-consider the whiteboard diagram. There are times that's important. If the diagram will be shown widely across many meetings; if the diagram will go outside the organization, say, to regulators; if the engineering process is waterfall-like. </p>
<h3>Come together</h3>
<p>Crispin says that tools are substitutes for expertise, and that (a? the?) best practice is for a security expert and the engineers to talk. I agree, this is a good way to do it — I also like to train the engineers to do this without security experts each time.</p>
<p>And that brings me to the we/you distinction. Crispin conveys the four question frame in the second person (What are you doing, what did you do about it), and I try to use the first person plural (we; what are we doing). Saying 'we' focuses on collaboration, on dialogue, on exploration. Saying 'you' frames this as a review, a discussion, and who knows, possibly a fight. Both of us used that frame at a prior employer, and today when I consult, I use it because I'm really not part of the doing team. </p>
<p>That said, I think this was a super-interesting post for the definitions, and for showing the diagram evolution and the steps taken from a whiteboard to a completed, colored diagram.</p>
<p>The image is the frontspiece of Leviathan by Thomas Hobbes, with its famous model of the state, made up of the people.</p>Automotive Privacy2018-07-09T00:00:00Zhttps://shostack.org/blog/automotive-privacy/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/Car-sensors-framed.jpg" alt="Car Sensors" class="single-post-main-img" />
<p>[Update: clarified a sentence about whose privacy is touched, and where.]</p>
<p>I had missed the story "<a href="https://www.washingtonpost.com/news/innovations/wp/2018/01/15/big-brother-on-wheels-why-your-car-company-may-know-more-about-you-than-your-spouse/?noredirect=on&utm_term=.6e6f4572a383" target="_blank">Big Brother on wheels: Why your car company may know more about you than your spouse.</a>" There are surprising details, including that you might be able to shut it off, and the phrase “If a customer declines, we do not collect any data from the vehicle.” I do wonder how a customer can decline — does it involve not buying a GM car?</p>
<p>When we did a <a href="https://shostack.org/blog/privacy-threat-model-for-the-people-of-seattle/">privacy threat model</a> at the Seattle Privacy Coalition, we <u>found</u> [link to https://seattleprivacy.org/threat-modeling-the-privacy-of-seattle-residents/ no longer works] these issues. We also were surprised that the defense, taking a car driven by someone else (a taxi, or a Lyft/Uber) makes such a big difference, leaving the owner of the car associated with the trip via license plate, toll beacons, tire pressure monitors, traffic sensors, maps, and other technologies with tracking implications. And the passenger is associated if payment is by card, or the ride is booked via an app. <del>splits/confuses the difference.</del> It may also be that <em>driving</em> for Lyft/Uber acts as a defense, by classifying a car as a carshare, but it seems pretty easy to see through that to where the car is parked (especially overnight) and to repeated trips to dis-ambiguate between paid and personal rides.</p>Threat Modeling Thursday: 20182018-07-12T00:00:00Zhttps://shostack.org/blog/tmt-2018/AdamHelp me help you.<span class="single-post-excerpt"><!-- excerpt -->Help me help you.<!-- /excerpt --></span>
<p>So this week's threat model Thursday is simply two requests:</p>
<ol>
<li>What would you like to see in the series?</li>
<li>What would you like me to cover in my Blackhat talk, "<a href="https://www.blackhat.com/us-18/briefings/schedule/index.html#threat-modeling-in-2018-attacks-impacts-and-other-updates-10827" target="_blank">Threat Modeling in 2018</a>?"</li>
</ol>
<p>"Attacks always get better, and that means your threat modeling needs to evolve. This talk looks at what's new and important in threat modeling, organizes it into a simple conceptual framework, and makes it actionable. This includes new properties of systems being attacked, new attack techniques (like biometrics confused by LEDs) and a growing importance of threats to and/or through social media platforms and features. Take home ways to ensure your security engineering and threat modeling practices are up-to-date."</p>Friday Star Wars2018-07-13T00:00:00Zhttps://shostack.org/blog/friday-star-wars-july2018/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/mouse-traps-admiral-ackbar.jpg" alt="Admiral Ackbar painted on a mousetrap" class="single-post-main-img" />
<p>Oddly, I am unable to find this on Etsy. Perhaps the Disney Corporation, new owners of Star Wars, doesn't like mousetraps?</p>Games and Cards2018-07-16T00:00:00Zhttps://shostack.org/blog/games-and-cards/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/emergynt-risk-deck.jpg" alt="The Emergynt Risk Deck" class="single-post-main-img" />
<p>Emergynt has created the <a href="https://emergynt.com/risk-deck/" target="_blank">Emergynt Risk Deck</a>, a set of 51 cards, representing actors, vulnerabilities, targets, consequences and risks. It's more a discussion tool than a game, but I have a weakness for the word "emergent," and I've added it to my list of <a href="https://shostack.org/games.html">security games</a>.</p>
<p>Also, Lancaster University has created an <a href="https://www.securedevelopment.org/2017/10/12/games-to-help-learn-about-secure-development/" target="_blank">Agile Security Game</a>.</p>Keeping the Internet Secure2018-07-17T00:00:00Zhttps://shostack.org/blog/keeping-the-internet-secure/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/save-secure-australia.png" alt="Save Secure Australia" class="single-post-main-img" />
<blockquote>Today, a global coalition led by civil society and technology experts sent a <a href="https://www.accessnow.org/cms/assets/uploads/2018/07/Australia-Encryption-Coalition-Letter.pdf" target="_blank">letter</a> asking the government of Australia to abandon plans to introduce legislation that would undermine strong encryption. The letter calls on government officials to become proponents of digital security and work collaboratively to help law enforcement adapt to the digital era.
<br /><br />
In July 2017, Prime Minister Malcolm Turnbull held a press conference to announce that the government was drafting legislation that would compel device manufacturers to assist law enforcement in accessing encrypted information. In May of this year, Minister for Law Enforcement and Cybersecurity Angus Taylor restated the government’s priority to introduce legislation and traveled to the United States to speak with companies based there.
<br /><br />
Today’s letter signed by 76 organizations, companies, and individuals, asks leaders in the government “not to pursue legislation that would undermine tools, policies, and technologies critical to protecting individual rights, safeguarding the economy, and providing security both in Australia and around the world.” (Read the full announcement <a href="https://www.accessnow.org/global-coalition-calls-on-australias-government-to-reject-plans-to-undermine-encryption/" target="_blank">here</a>)</blockquote>
<p>I'm pleased to have joined in this effort by Accessnow, and you can sign, too, at <a href="https://secureaustralia.org.au/" target="_blank">https://secureaustralia.org.au</a>. Especially if you are Australian, I encourage you to do so.</p>Hey, this movie looks pretty interesting!2018-07-20T00:00:00Zhttps://shostack.org/blog/hey-this-movie-looks-pretty-interesting/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/1YTYZwuht9A" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen=""></iframe>Half the US population will live in 8 states2018-07-24T00:00:00Zhttps://shostack.org/blog/half-the-us-population-will-live-in-8-states/AdamOur country's population is shifting, which could have a dramatic effect on politics.<span class="single-post-excerpt"><!-- excerpt -->Our country's population is shifting, which could have a dramatic effect on politics.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/population-in-2040.png" alt="map identifying the most populated states in the US" class="single-post-main-img" />
<p>That's the subject of a thought-provoking Washington Post article, "<a href="https://www.washingtonpost.com/news/politics/wp/2018/07/12/in-about-20-years-half-the-population-will-live-in-eight-states/" target="_blank">In about 20 years, half the population will live in eight states</a>," and 70% of Americans will live in 15 states. "<a href="https://twitter.com/NormOrnstein/status/1016789064379334656?ref_src=twsrc%5Etfw" target="_blank">Meaning 30 percent will choose 70 senators. And the 30% will be older, whiter, more rural, more male than the 70 percent.</a>" Of course, as the census shows the population shifting, the makeup of the House will also change dramatically.</p>
<p>Maybe you think that's good, maybe you think that's bad. It certainly leads to interesting political times. Maybe even a bit of chaos, emerging.</p>Threat Modeling Thursday: 20182018-07-26T00:00:00Zhttps://shostack.org/blog/tmt-2018-2/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/tm-2018-larger_Framed.png" alt="talk synopsis for Blackhat" class="single-post-main-img" />
<p>Since I wrote <a href="https://amzn.to/2LHlMZo" target="_blank">my book</a> on the topic, people have been asking me "what's new in threat modeling?" My Blackhat talk is my answer to that question, and it's been taking up the time that I'd otherwise be devoting to the series.</p>
<p>As I've been practicing my talk*, I discovered that there's more new than I thought, and I may not be able to fit in everything I want to talk about in 50 minutes. But it's coming together nicely.</p>
<p>The current core outline is:</p>
<ul>
<li>What are we working on
<ul>
<li>The fast moving world of cyber</li>
<li>The agile world</li>
<li>Models are scary</li>
</ul>
</li>
<li>What can go wrong? Threats evolve!
<ul>
<li>STRIDE</li>
<li>Machine Learning</li>
<li>Conflict</li>
</ul>
</li>
</ul>
<p>And of course, because it's 2018, there's cat videos and emoji to <em>augment</em> logic. Yeah, that's the word. Augment. 🤷♂️</p>
<p>Wednesday, August 8 at 2:40 PM.</p>
<p>* Oh, and note to anyone speaking anywhere, and especially large events like Blackhat — as the <a href="https://www.blackhat.com/us-18/speaker-resources.html" target="_blank">speaker resources say</a>: practice, practice, practice.</p>Summer Reading List2018-07-31T00:00:00Zhttps://shostack.org/blog/summer-reading-list/Adam[no description provided}<span class="single-post-excerpt"><!-- excerpt -->[no description provided}<!-- /excerpt --></span>
<p>I'm honored to have my threat modeling book on this short list with Daniel Kahneman, Tony Hsieh, Nicole Forsgren, and Tom DeMarco: "<a href="https://medium.com/outsystems-engineering/summer-reading-list-top-recommendations-from-our-engineers-365581a14e7f" target="_blank">Summer Reading List: Top Recommendations from our Engineers</a>."</p>CSO on AppSec at the Speed of Devops2018-08-06T00:00:00Zhttps://shostack.org/blog/cso-on-appsec-at-the-speed-of-devops/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>“<u>20 Ways to Make AppSec Move at the Speed of DevOps</u>” [link to https://www.csoonline.com/article/3293924/security/20-ways-to-make-appsec-move-at-the-speed-of-devops.html no longer works] is in CSO. It’s a good collection, and I’m quoted.</p>
<p>[Update: Can’t find it on CSO, <a href="https://web.archive.org/web/20180802214447/https://www.csoonline.com/article/3293924/security/20-ways-to-make-appsec-move-at-the-speed-of-devops.html" target="_blank">https://web.archive.org/web/20180802214447/https://www.csoonline.com/article/3293924/security/20-ways-to-make-appsec-move-at-the-speed-of-devops.html</a></p>CyberSecurity Hall of Fame2018-08-06T00:00:00Zhttps://shostack.org/blog/cybersecurity-hall-of-fame/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Congratulations to the <u>2016 winners</u>! [link to http://www.cybersecurityhalloffame.com/#chof_2016 no longer works]</p>
<ul>
<li>Dan Geer, Chief Information Security Officer at In-Q-Tel;</li>
<li>Lance J. Hoffman, Distinguished Research Professor of Computer Science, The George Washington University;</li>
<li>Horst Feistel, Cryptographer and Inventor of the United States Data Encryption Standard (DES);</li>
<li>Paul Karger, High Assurance Architect, Prolific Writer and Creative Inventor;</li>
<li>Butler Lampson, Adjunct Professor at MIT, Turing Award and Draper Prize winner;</li>
<li>Leonard J. LaPadula, Co-author of the Bell-LaPadula Model of Computer Security; and</li>
<li>William Hugh Murray, Pioneer, Author and Founder of the Colloquium for Information System Security Education (CISSE)</li>
</ul>
<p>In a world where influence seems to be measured in likes, re-tweets and shares, the work by these 7 fine people really stands the test of time. For some reason this showed up on Linkedin as "Butler was mentioned in the news," even though it's a few years old. Again, test of time.</p>Aretha Franklin2018-08-13T00:00:00Zhttps://shostack.org/blog/aretha-franklin/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/Aretha-Franklin-Framed.jpg" alt="Aretha Franklin singing" class="single-post-main-img" />
<p>I remember an interview I read with Ahmet Ertegün, the founder of Atlantic Records. He was talking about Aretha, and he said that one of his producers came in, saying that she wasn't measuring up. He asked the producer what was up, and was told that they were trying to get her to sing like the other successful soul singers, and it wasn't working out.</p>
<p>Ertegün told the producer that he saw the problem, sitting right there. The fellow didn't want to let Aretha do what she knew, which was gospel.</p>
<p>There's a lot of wisdom in that short story, from not wanting to impose our vision of what people should be, to seeing the root of a problem.</p>
<p>In the meanwhile, I just hope that she pulls through. She's given a lot of joy to a lot of people, and she deserves a long, happy retirement.</p>Threat Modeling in 2018: Attacks, Impacts and Other Updates2018-08-13T00:00:00Zhttps://shostack.org/blog/threat-modeling-in-2018-attacks-impacts-and-other-updates/AdamThe slides from my Blackhat talk are now available.<span class="single-post-excerpt"><!-- excerpt -->The slides from my Blackhat talk are now available.<!-- /excerpt --></span>
<p>The slides from my Blackhat talk, "<a href="https://www.blackhat.com/us-18/briefings/schedule/index.html#threat-modeling-in-2018-attacks-impacts-and-other-updates-10827" target="_blank">Threat Modeling in 2018: Attacks, Impacts and Other Updates</a>" are now available either as a <a href="http://i.blackhat.com/us-18/Wed-August-8/us-18-Shostack-Threat-Modeling-in-2018.pdf" target="_blank">PDF</a> or <a href="https://1drv.ms/p/s!At_WCC0-WWpPi2nluEtR-poV8Kbc" target="_blank">online viewer</a>.</p>Toolbox: After a Conference2018-08-21T00:00:00Zhttps://shostack.org/blog/toolbox-after-a-conference/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/patrick-perkins-350622-unsplash_framed.jpg" alt="push pins and sticky notes" class="single-post-main-img" />
<p>Wow. Blackhat, Defcon, I didn't even make the other conferences going on in Vegas. And coming back it seems like there's a sea of things to follow up on. I think a little bit of organization is helping me manage better this year, and so I thought I'd share what's in my post-conference toolbox. I'm also sharing because I don't think my workflow is optimal, and would love to learn from how others are working through this in 2018 with its profusion of ways to stay in touch.</p>
<p>First, I have a stack of queues to process:</p>
<ol>
<li>Email. My inbox, but I also have a folder called "followup." I move a lot out of my inbox to the followup folder so I can see it when I'm back from travel. (I also have a set of monthly sub-folders: followup/august, followup/september, they let me say "I'll get back to you in three months.")</li>
<li>Signal</li>
<li>iMessage. For both of these, I go back through the conversations I've had, see if I had followups or if I dropped the ball on someone.
</li><li>Linkedin. I get a lot of linkedin requests, and I'm a fairly open networker. Sadly, the UI works very poorly for me. I would love to hear about tools that allow me to effectively move messages to something other than a LIFO queue.</li>
<li>Workflowy. I'm experimenting with this as a note taking tool, and it's not bad. <del>It's a bit of a pain to extract the data (for example, I can't email myself a branch of the tree), but copy and paste from the website is decent.</del> It turns out the website has great export, but still learning. </li>
<li>Business cards. I go through the whole stack of cards for todo items. I try to write notes on business cards. I discovered I did that on one of 6 cards where I remembered something. That's not very good odds, and forces me to consider what I might have missed. Still exploring how to make best use of cards without notes. Advice really welcome here.</li>
<li>Slack channels. Go through, look at DMs and channels. I suppose I should use some feature to note that I intend to followup. Is the Slack way to say "come back to this" to star a message? </li>
<li>Calendar. For each meeting, think about the meeting, check my notes, see if I remember followups or things that didn't make it to an email/workflowy note. And yes, there were several discussions that I know we discussed followups that I re-discovered by looking at my calendar.</li>
<li>Photos. Photographs are the new note-taking, and so going back through pictures you took is important.</li>
<li>Twitter, Facebook. I'm trying to break from Twitter, and don't use Facebook, but I figured I'd include them here because they're maybe worth remembering.</li>
</ol>
<p>After the queues, as a consultant, I have customer work to get back to and sales contacts to followup on. I have expenses. I haven't found an expense app that I really like, and so I stuff receipts in an envelope each evening, and then deal with them when I get home.</p>
<p>If I missed any followups, I'm sorry. Please reach out!</p>
<p>But more, I'm curious what works for you? What's in your toolbox?</p>
<p class="credit-text">Photo: <a href="https://unsplash.com/photos/ETRPjvb0KM0?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank">Patrick Perkins</a>.</p>Threat Model Thursday: Legible Architecture2018-08-23T00:00:00Zhttps://shostack.org/blog/tmt-legible-architecture/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/city-streets-orientations-osmnx.png" alt="City Streets orientation" class="single-post-main-img" />
<p>The image above is the frequency with which streets travel a certain orientation, and it's a nifty data visualization by <a href="http://geoffboeing.com/2018/07/comparing-city-street-orientations/">Geoff Boeing</a>. What caught my attention was not just the streets of Boston and Charlotte, but the lack of variability shown for Seattle, which is a city with two grids.</p>
<p>But then there was this really interesting tidbit, which relates to threat modeling:</p>
<blockquote>Kevin Lynch defined “legible” cities as those whose patterns lend themselves to coherent, organized, recognizable, and comprehensible mental images. These help us organize city space into cognitive maps for wayfinding and a sense of place.</blockquote>
<p>One of the questions I get all the time is 'what's the right way to model this system?' And the answer is the right way is the way that helps you find threats. A good system model balances detail with abstraction. It's laid out in a way that uses space and relative position to help the viewer follow a story.</p>
<p>Sometimes the underlying physical or logical reality makes that easy. Other times, the reality is more like the streets of Boston, and the official map draws a simple picture with the southern Red Lines being the same length, and similar visual portrayal of the Green Line. But the second map, from <a href="http://www.urbanrail.net/am/bost/boston.htm">http://www.urbanrail.net/am/bost/boston.htm</a>, shows a very different picture.</p>
<img style=" margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2018/Boston-Subway-Map-1011x1024.jpg" alt="Boston Subway Map official" title="Boston-Subway-Map-1011x1024.jpg" border="0" width="296" height="300" />
<img style=" margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2018/boston-map.png" alt="Boston map geographic" title="boston-map.png" border="0" width="251" height="300" />
<p>What's the right model? What's the legible architecture of a system?</p>
<p>Modeling a system that's grown organically over decades is a very challenging task. That's true of Windows, that's true of many large enterprise systems, that's true of the air traffic system. One of the advantages that cloud architectures bring is the opportunity to sweep away some of that historical complexity, and to create comprehensible models. That simplification carries value in terms of architectural consistency, makes it easier to impose checkpoints, and will be augmented over time with the accretion of complexity, inflexibility and eventually need to be swept away itself. That's rarely easy even when computers are like crops, rather than like pets.</p>
<p>As your threat modeling evolves, it's important to ask: what's the legible architecture of these systems?</p>
<p>That's emphatically not because legible architecture is a goal. It's a tool. Having understandable models of your systems makes it easier for everyone to interact with them, and that makes design easier, it makes evolution easier. Legible architecture is a property that makes other properties easier to achieve.</p>
Reflective Practice and Threat Modeling (Threat Model Thursday)2018-09-14T00:00:00Zhttps://shostack.org/blog/tmt-reflective-practice-and-threat-modeling/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Lately, I've been asking what takes threat modeling from a practice to a mission. If you're reading this blog, you may have seen that some people are nearly mad about threat modeling. The ones who say "you're never done threat modeling." The ones who've made it the center of their work practice. What distinguishes those people from those who keep trying to teach developers about the difference between a hactivist and a script kiddie?</p>
<p>A book I've read recently, "<a href="https://www.amazon.com/Reflective-Practitioner-Professionals-Think-Action/dp/0465068782/ref=sr_1_1?ie=UTF8&qid=1536190447&sr=8-1&keywords=reflective+practitioner" target="_blank">The Reflective Practitioner: How Professionals Think In Action</a>," gives some useful perspective. It's about how practitioners use the cases and issues before them to grapple with questions like 'is this the best way to approach this problem?' It's not an easy read by any stretch. It engages in analysis of both what makes a profession, and how several professions including architect, psychologist, and town planner engage with their work.</p>
<blockquote>They may ask themselves, for example, “What features do I notice when I recognize this thing? What are the criteria by which I make this judgment? What procedures am I enacting when I perform this skill? How am I framing the problem that I am trying to solve?” Usually reflection on knowing-in-action goes together with reflection on the stuff at hand. There is some puzzling, or troubling, or interesting phenomenon with which the individual is trying to deal. As he tries to make sense of it, he also reflects on the understandings which have been implicit in his action, understandings which he surfaces, criticizes, restructures, and embodies in further action. It is this entire process of reflection-in-action which is central to the “art” by which practitioners sometimes deal well with situations of uncertainty, instability, uniqueness, and value conflict.</blockquote>
<p>Those seeking to advance their practice of threat modeling would do well to pick up a copy and use it as a lens into reflecting on their practice of the arts.</p>
<p>After the jump, I'm going to quote more bits that struck me as I read, and offer some reflection on them.</p>
<blockquote>Alfred Schultz and his intellectual descendants have analyzed the tacit, everyday know-how that we bring to social interactions such as the rituals of greeting, ending a meeting, or standing in a crowded elevator. Birdwhistell has made comparable contributions to a description of the tacit knowledge embodied in our use and recognition of movement and gesture. In these domains, too, we behave according to rules and procedures that we cannot usually describe and of which we are often unaware.</blockquote>
<p>This was absolutely the situation in my own threat modeling (circa 1999). Being shocked by the cumbersome emergent processes descriptions helped me realized that the rules and procedures that were beneath our consciousness were so important. In fact it was an accumulation of rules and advice that were appropriate in some but not all situations that led me to the four-question framework.</p>
<blockquote>Many practitioners, locked into a view of themselves as technical experts, find nothing in the world of practice to occasion reflection. They have become too skillful at techniques of selective inattention, junk categories, and situational control, techniques which they use to preserve the constancy of their knowledge-in-practice. For them, uncertainty is a threat; its admission is a sign of weakness. Others, more inclined toward and adept at reflection-in-action, nevertheless feel profoundly uneasy because they cannot say what they know how to do...</blockquote>
<p>Yes. We need to consider ourselves as learning experts. That feeling that things are not quite right is not a sign of weakness, but of weakness leaving ones practice. At the same time, we need to practice. Real cases. Real systems. Real threat models.</p>
<blockquote>When design terms are ambiguous in this way, they may create confusion, but they also call attention to multiple consequences. Terms like “stair,” “ramp,” and “wall” refer both to particular building elements and to formal functions such as “marking” and “relating in.” “Gallery” refers both to an organization of space and to a particular precedent (“the sort of thing Aalto would invent”). Aspiring members of the linguistic community of design learn to detect multiple reference, distinguish particular meanings in context, and use multiple reference as an aid to vision across design domains.</blockquote>
<p>This relates closely to the question of "is this the right model?" Many models are layered, and understanding those layers, prying them apart and gluing them back together is a task along the way to expertise.</p>
<blockquote>As he reflects-in-action on the situation created by his earlier moves, the designer must consider not only the present choice but the tree of further choices to which it leads, each of which has different meanings in relation to the systems of implications set up by earlier moves. Quist’s virtuosity lies in his ability to string out design webs of great complexity. But even he cannot hold in mind an indefinitely expanding web. At some point, he must move from a “what if?” to a decision which then becomes a design node with binding implications for further moves. Thus there is a continually evolving system of implications within which the designer reflects-in-action.</blockquote>
<p>This touches on the virtue of the whiteboard and the "pointable model." The whiteboard holds part of the web, and as choices appear binding in unacceptable ways, we can wipe them away and explore anew</p>
<blockquote>It would be a mistake to attribute to the inquirer at the beginning of such a process the articulated description which he achieves later on—to say, for example, that Quist must have known unconsciously at the beginning just how this site is screwy and just how the geometry of parallels can be successfully imposed on it.</blockquote>
<p>Each real system has unique problems. The architect sees a site as 'screwy,' and must respond to its challenges. Sometimes a solution seems to appear in a flash, and sometimes that flash even fleshes out, but more frequently, the fleshing out alters our understanding as we go. When I ask what can go wrong, my understanding is guided and shaped by conversations and inquiry, and then grounded in listed threats, and guided again by a framework like STRIDE or a kill chain as a particular thread dries up. I almost never see all the problems all at once, although I frequently think I see the biggest ones quickly. Sometimes, I'm even right.</p>
<blockquote>When a civil engineer worries over what road to build rather than how to build it, he comes up against the politics of land taking and the organized resistance of neighborhoods. Indeed, he comes up against the whole economic, social, and political life of the region upon which the road may be imposed. And when, having designed a road, he begins to convert his design to reality, he encounters such additional problems as the constraints on city budgets, the reactions of organized labor, and the political machinations of contractors. The engineer may deal with these messy factors by placing them beyond the boundaries of his professional life; he may try to clear a space for narrowly defined professional work, treating the rest of the situation as a necessary evil. Or he may accept the intrusions of the larger situation as a part of his legitimate professional concern, opening himself to complexity, instability, and uncertainty.</blockquote>
<p>This is important on two levels. First, considering what road to build is an important investigation, and the technology sector as a whole is not doing enough of it. (This relates to Rogaway's "<a href="http://web.cs.ucdavis.edu/~rogaway/papers/moral.html" target="_blank">The Moral Character of Cryptographic Work</a>," and is amplified greatly by the impact of social media on our social systems.) Second, threat modeling gives us a way to abstract the road we're building, and to engage in dialogue about the tradeoffs we face, and so to engage with those messy factors which surround our work.</p>Space Elevator Test2018-09-25T00:00:00Zhttps://shostack.org/blog/space-elevator-test/AdamJapan has launched a miniature space elevator.<span class="single-post-excerpt"><!-- excerpt -->Japan has launched a miniature space elevator.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/STAR-space-elevator-2.jpg" alt="Space Elevator" class="single-post-main-img" />
<p>So cool!</p>
<blockquote>STARS-Me (or Space Tethered Autonomous Robotic Satellite – Mini elevator), built by engineers at Shizuoka University in Japan, is comprised of two 10-centimeter cubic satellites connected by a 10-meter-long tether. A small robot representing an elevator car, about 3 centimeters across and 6 centimeters tall, will move up and down the cable using a motor as the experiment floats in space.</blockquote>
<p>Via Science News, "<a href="https://www.sciencenews.org/article/japan-has-launched-miniature-space-elevator" target="_blank">Japan has launched a miniature space elevator</a>," and "<a href="http://stars.eng.shizuoka.ac.jp/english.html" target="_blank">the STARS project</a>."</p>CVE Funding and Process2018-10-02T00:00:00Zhttps://shostack.org/blog/cve-funding-and-process/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>I had not seen this <a href="https://web.archive.org/web/20180919134917/https://energycommerce.house.gov/wp-content/uploads/2018/08/082718-DHS-Recommendations-for-CVE-Program.pdf" target="_blank">interesting letter</a> (August 27, 2018) from the House Energy and Commerce Committee to DHS about the nature of funding and support for the CVE.</p>
<p>This is the sort of thoughtful work that we hope and expect government departments do, and kudos to everyone involved in thinking about how CVE should be nurtured and maintained.</p>
<p>(Updated March 2019, to use a wayback machine link rather than the original house.gov link.)</p>The Architectural Mirror (Threat Model Thursday)2018-10-04T00:00:00Zhttps://shostack.org/blog/tmt-the-architectural-mirror/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/brick-color.png" alt="Before and After building" class="single-post-main-img" />
<p>A few weeks ago, I talked about "<a href="https://shostack.org/blog/tmt-reflective-practice-and-threat-modeling/">reflective practice in threat modeling</a>", thinking about how we approach the problems we face, and asking if our approaches are the best we can do. Sometimes it's hard to reflect. It's hard to face the mirror and say 'could I have done that better?' That's human nature.</p>
<p>Sometimes, it can be easier to learn from an analogy, and I'll again go to physical buildings as a source. (I last discussed this in "<a href="https://shostack.org/blog/tmt-architectural-review-and-threat-modeling/">Architectural Review and Threat Modeling</a>".)</p>
<p><a href="https://www.capitolhillseattle.com/2018/09/80-pastrami-sandwich-high-78-unit-building-at-14th-and-pine-comes-to-design-review-boards-table/" target="_blank">Here</a>, we see 91 units of housing delayed for 3-4 months about the color of the exterior:</p>
<blockquote>A project to create 91 units of microhousing on First Hill will take a second try at getting final sign-off from the board...In June, the board asked that the project return for a second pass citing unhappiness with <em>the choice of cement fiber panel finish</em> to step down at the upper levels of the northern edge of the building and echoing public comment that <em>the color of bricks</em> selected for the building was too dark for the neighborhood’s existing “context.” (Capitol Hill Seattle blog) </blockquote>
<p>Now, Seattle has a very visible crisis of housing and homelessness. These 91 units will likely help 91 people or families get off the street. But...the color of the bricks is wrong, so stay on the streets for an extra few months? I exaggerate for effect and consideration, not of this choice, but to ask for reflection — are there choices imposed by security that make such a tradeoff in your organization?</p>
<p>Are you holding back revenue or customer satisfaction for goals that might wait, or might simply not be as important from an executive standpoint?</p>
<p>And if you have a tracking system for projects, it has to work.</p>
<blockquote>The number of Seattle permit applications completing initial review plummeted 75 percent from April to May, from 266 to 66. Builders say problems with the system are setting their projects back by weeks or months...Soon after launch, the new system repeatedly stalled and permit documents appeared to go missing. Tempers grew so hot that at one point the city called the police on a livid customer... In May, less than 11 percent of medium-complexity projects hit the two-week target. ("<a href="https://www.seattletimes.com/seattle-news/politics/rocky-launch-of-seattles-new-construction-permit-system-causes-delays-anger/" target="_blank">Rocky launch of Seattle’s new construction-permit system causes delays, anger.</a>")</blockquote>
<p>Security can be the reason projects are consistently randomized or miss their deadlines, and when it is, other teams work around us, ignore us, or question why they're paying for a security function that doesn't function.</p>
<p>The world is a fine source of opportunities to reflect, if only we take advantage.</p>Does PCI Matter?2018-10-09T00:00:00Zhttps://shostack.org/blog/does-pci-matter/AdamIt's certainly not a silver bullet...<span class="single-post-excerpt"><!-- excerpt -->It's certainly not a silver bullet...<!-- /excerpt --></span>
<p>There's an interesting article at the CBC, about how in Canada, "<a href="https://www.cbc.ca/news/politics/security-data-shared-services-it-1.4848688" target="_blank">More than a dozen federal departments flunked a credit card security test</a>:"</p>
<blockquote>Those 17 departments and agencies continue to process payments on Visa, MasterCard, Amex, the Tokyo-based JCB and China UnionPay cards, and federal officials say there have been no known breaches to date.</blockquote>
<p>There are some interesting details about the who and why, but what I want to focus on is the lack of (detected) breaches to date, and the impact of the audit failure.</p>
<p>The fact that there have been no breaches detected is usually a no-op, you can't learn anything from it, but with credit cards, there's a "Common Point of Purchase" analysis program that eventually turns a spotlight on larger "merchants" who've been breached. So the lack of detection tells us something, which is that a large set of PCI failures don't lead to breaches. From that we can, again, question if PCI prevents breaches, or if it does so better than other security investments.</p>
<p>The second thing is that this is now a "drop everything and fix it" issue, because it's in the press. Should passing PCI be the top priority for government agencies? I generally don't think so, but likely it will absorb the security budget for the year for a dozen departments.</p>GAO Report on Equifax2018-10-12T00:00:00Zhttps://shostack.org/blog/gao-report-on-equifax/AdamI still have questions...<span class="single-post-excerpt"><!-- excerpt -->I still have questions...<!-- /excerpt --></span>
<p>I have regularly asked why we don't know more about the Equifax breach, including in comments in "<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3081216" target="_blank">That Was Close! Reward Reporting of Cybersecurity 'Near Misses'</a>." These questions are not intended to attack Equifax. Rather, we can use their breach as a mirror to reflect, and ask questions about how defenses work, and learn things we can bring to our own systems.</p>
<p>Ian Melvin was kind enough to point out a GAO report, "<a href="https://www.gao.gov/assets/700/694158.pdf" target="_blank">Actions Taken by Equifax and Federal Agencies in Response to the 2017 Breach</a>." As you'd expect of a GAO report, it is level headed and provides a set of facts. </p>
<p>However, I still have lots of questions. Some very interesting details start on page 11:</p>
<blockquote>Equifax officials added that, after gaining the ability to issue system-level commands on the online dispute portal that was originally compromised, the attackers issued queries to other databases to search for sensitive data. This search led to a data repository containing PII, as well as unencrypted usernames and passwords that could provide the attackers access to several other Equifax databases. According to Equifax’s interim Chief Security Officer, the attackers were able to leverage these credentials to expand their access beyond the 3 databases associated with the online dispute portal, to include an additional 48 unrelated databases.<br /><br />The use of encryption allowed the attackers to blend in their malicious actions with regular activity on the Equifax network and, thus, secretly maintain a presence on that network as they launched further attacks without being detected by Equifax’s scanning software. (Editor's note: I've inverted the order of the paragraphs from the source.)</blockquote>
<p>So my questions include:</p>
<ul>
<li>How did the attackers get root?</li>
<li>Why wasn't the root shell noticed? Would our organization notice an extra root sell in production?</li>
<li>How did they get access to the other 48 databases?</li>
<li>Why didn't the pattern of connections raise a flag? "As before, Equifax
officials stated that the attackers were able to disguise their presence by
blending in with regular activity on the network." I find this statement to be surprising, and it raises questions: Does the dispute resolution database normally connect to these other databases and run the queries which were run? How was that normal activity characterized and analyzed? Encryption provides content confidentiality, not meta-data confidentiality. Would we detect these extra connections?</li>
</ul>
<blockquote>Specifically, while Equifax had installed a device to inspect network traffic for evidence of malicious activity, a misconfiguration allowed encrypted traffic to pass through the network without being inspected. According to Equifax officials, the misconfiguration was due to an expired digital certificate. The certificate had expired about 10 months before the breach occurred, meaning that encrypted traffic was not being inspected throughout that period. </blockquote>
<p>Would your organization notice if one of hundreds or dozens of IDSs shut up for a week, or if one ruleset stopped firing?</p>
<p>More published incident reports will help us get smarter, and provide better answers to the questions that CEOs and boards are asking: could this happen to us? With this report we an answer that better, but still not well.</p>Measuring ROI for DMARC2018-10-16T00:00:00Zhttps://shostack.org/blog/measuring-roi-for-dmarc/AdamI'm pleased to be able to share work that Shostack + Associates and the Cyentia Institute have been doing for the Global Cyber Alliance.<span class="single-post-excerpt"><!-- excerpt -->I'm pleased to be able to share work that Shostack + Associates and the Cyentia Institute have been doing for the Global Cyber Alliance.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/GCA-ROI-report-ipad.png" alt="Whitepaper cover: Measuring the Impact of DMARC's Part in Preventing Business Email Compromise" class="single-post-main-img" />
<p>I'm pleased to be able to share work that Shostack & Associates and the Cyentia Institute have been doing for the Global Cyber Alliance. In doing this, we created some new threat models for email, and some new statistical analysis of</p>
<blockquote>It shows the 1,046 domains that have successfully activated strong protection with GCA’s DMARC tools will save an estimated $19 million to $66 million dollars from limiting BEC for the year of 2018 alone. These organizations will continue to reap that reward every year in which they maintain the deployment of DMARC. Additional savings will be realized as long as DMARC is deployed.</blockquote>
<p>Their press release from this morning is at <a href="https://www.globalcyberalliance.org/technology-industry-sets-sights-on-bolstering-email-security/" target="_blank">here</a> and the report download is <a href="https://www.globalcyberalliance.org/dmarc-economic-benefits-report/" target="_blank">here</a>.</p>Privacy Extension to Elevation of Privilege game2018-10-17T00:00:00Zhttps://shostack.org/blog/privacy-extension-to-elevation-of-privilege-game/AdamAn extended version of Elevation of Privilege, now with Privacy.<span class="single-post-excerpt"><!-- excerpt -->An extended version of Elevation of Privilege, now with Privacy.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/4-Privacy-elevation-of-privilege-card.png" alt="card 4 - privacy" class="single-post-main-img" />
<p>The fine folks at Logmein have released a version of Elevation of Privilege that adds privacy! Check out the fine work by Mark Vinkovits at their blog, "<a href="https://blog.logmeininc.com/privacy-by-design-can-be-entertaining/" target="_blank">Privacy-By-Design Can Be Entertaining</a>" by Mark Vinkovits.</p>
Podcast with Ron Woerner2018-10-29T00:00:00Zhttps://shostack.org/blog/podcast-with-ron-woerner/AdamAnother podcast, another chance to talk about Threat Modeling<span class="single-post-excerpt"><!-- excerpt -->Another podcast, another chance to talk about Threat Modeling<!-- /excerpt --></span>
<p>Ron Woerner had me on as a guest in his business of security podcast series. It was fun to tease out some of the business justifications for threat modeling, and the podcast is now live at <a href="https://itunes.apple.com/us/podcast/business-of-security-podcast-series/id1347567820?mt=2" target="_blank">itunes</a>. You can learn more about the series at <a href="https://trustmapp.com/business-of-security-podcast-series/" target="_blank">Business of Security Podcast Series</a>.</p>Airline Safety2018-11-01T00:00:00Zhttps://shostack.org/blog/airline-safety/AdamAirplanes are filthy...<span class="single-post-excerpt"><!-- excerpt -->Airplanes are filthy...<!-- /excerpt --></span>
<p>There's an interesting <a href="https://www.cbc.ca/news/canada/flights-filthy-surfaces-airplane-marketplace-1.4873586" target="_blank">article</a> in the CBC, where journalists took a set of flights, swabbed surfaces, and worked with a microbiologist to culture their samples.</p>
<p>What they found will shock you!</p>
<p>Well, airplanes are filthy. Not really shocking. What was surprising to me was that the dirtiest of the surfaces they tested was the headrest. (They did not test the armrests.) Also, the seat pocket is a nice incubator and rarely cleaned. Not all that surprising, but I hadn't considered it.</p>Change in the Weather2018-11-09T00:00:00Zhttps://shostack.org/blog/change-in-the-weather/AdamSome recent changes in the weather...<span class="single-post-excerpt"><!-- excerpt -->Some recent changes in the weather...<!-- /excerpt --></span>
<ul>
<li>A remote Hawaiian island, East Island, was destroyed by Hurricane Walaka. East Island was 11 acres. It was also a key refuge for turtles and seals. Read more in <a href="https://www.theguardian.com/us-news/2018/oct/24/hawaiian-island-erased-by-powerful-hurricane" target="_blank" rel="noopener">The Guardian</a>.</li>
<li>Maersk has sent a ship, the Venta Maersk, through the Northern Passage. The journey and its significance were <a href="https://www.washingtonpost.com/world/europe/russias-suez-canal-ships-start-plying-an-ice-free-arctic-thanks-to-climate-change/2018/09/08/59d50986-ac5a-11e8-9a7d-cd30504ff902_story.html" target="_blank" rel="noopener">outlined</a> by the Washington Post, with predictions of 23 days (versus 34 to sail via Suez). In reality, it took 37 days, according to the <u>press release</u> [link to https://www.maersk.com/en/news/2018/09/28/maersk-concludes-trial-passage-of-northern-sea-route no longer works], “without incident.” The idea that there's a sailable Northern Passage is astounding, even if a first sailing took longer than expected.</li>
</ul>Threat Modeling in 2018: Attacks, Impacts and Other Updates2018-11-15T00:00:00Zhttps://shostack.org/blog/threat-modeling-in-2018-video-release/AdamCheck out my talk from Blackhat 2018<span class="single-post-excerpt"><!-- excerpt -->Check out my talk from Blackhat 2018<!-- /excerpt --></span>
<iframe width="560" height="315" src="https://www.youtube.com/embed/DMFF8zQqEVQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>Blackhat has released all the 2018 US conference videos. My <a href="https://www.youtube.com/watch?v=DMFF8zQqEVQ&t=0s&index=104&list=PLH15HpR5qRsVAXGmSVfjWrGtGLJjIJuGe" target="_blank">threat modeling in 2018</a> video is, of course, amongst them. Slides are linked <a href="https://adam.shostack.org/blog/2018/08/threat-modeling-in-2018-attacks-impacts-and-other-updates/">here</a>.</p>Books which are worth your time: Q42018-11-26T00:00:00Zhttps://shostack.org/blog/books-worth-your-time-q4-2018/AdamSome books worth reading.<span class="single-post-excerpt"><!-- excerpt -->Some books worth reading.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/stack-of-books_Framed.jpg" alt="Stack of books" class="single-post-main-img" />
<h3>Nonfiction</h3>
<ul>
<li><a href="https://amzn.to/2TC2N3k" target="_blank">The Brothers: John Foster Dulles, Allen Dulles, and Their Secret World War</a> is a fascinating biography of the Dulles brothers, and how the world changed through their lives and actions. One ran the State department, the other the CIA.</li>
<li><a href="https://amzn.to/2S61IPU" target="_blank">Weapons of Math Destruction</a> by Cathy O'Neil is an interesting overview of problems with machine learning and the ways in which it is often mis-applied. Sometimes verges towards the polemical, and readers with a statistical bent may want more. Still, interesting and worth your time.</li>
<li><a href="https://amzn.to/2TIu2t2" target="_blank">Angel: How to Invest In Technology Startups</a> is a bit of a breathless business book, but is an excellent overview of how to be a helpful angel investor.</li>
<li><a href="https://amzn.to/2S8jh1J" target="_blank">The Great Bridge and Engineers of Dreams</a>. I make no secret of my admiration for Petroski, and this history of the great bridge builders and the men and woman who built them is excellent. However, I think McCullough's <a href="https://amzn.to/2TGzqgv" target="_blank">The Great Bridge: The Epic Story of the Building of the Brooklyn Bridge</a> does a better job of deeply illustrating <a href="https://en.wikipedia.org/wiki/Emily_Warren_Roebling" target="_blank">Emily Warren Roebling's</a> place in the history of that bridge.</li>
<li><a href="https://amzn.to/2r2mWm3" target="_blank">Driverless</a> is a highly readable introduction to how driverless cars are coming to function. It also made a strong case that the right bar we should set and expect for driverless cars is not perfection, but killing and injuring fewer people than are killed by drunk, distracted, or otherwise non-competent drivers.</li>
<li><a href="https://www.amazon.com/Postmodern-Winemaking-Rethinking-Science-Ancient/dp/0520282590/ref=as_li_ss_tl?_encoding=UTF8&qid=1536080752&sr=8-1&linkCode=ll1&tag=adamshostack-20&linkId=d1427b81ec70d15278701dd2b4cb16e4&language=en_US" target="_blank">Postmodern Wine Making</a> is a memoir, a history and a bit of a manifesto about Clark Smith's time as a winemaker and what he's learned.</li>
</ul>
<h3>Fiction</h3>
<ul>
<li><a href="https://amzn.to/2TBbbjP" target="_blank">Void Star</a>, Zachary Mason. The best William Gibson novel in a while. The one with <a href="https://amzn.to/2TBbbjP" target="_blank">51 reviews</a> has 3.9 stars, while the one with <a href="https://amzn.to/2Qfp6gG" target="_blank">26</a> only gets 3.7. Other than their Amazon ratings, I am unsure of the difference. Ms. O'Neil would be appalled, or perhaps amused.</li>
</ul>
<p>What have you read lately that's worthwhile?</p>Gavelblocken, 20182018-11-29T00:00:00Zhttps://shostack.org/blog/gavelblocken-2018/AdamThe Gavle Goat is up.<span class="single-post-excerpt"><!-- excerpt -->The Gavle Goat is up.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/Gavle-Goat-2018.jpg" alt="Gavle Goat 2018" class="single-post-main-img" />
<p>The 2018 Gavle Goat is up and tweeting at <a href="https://twitter.com/Gavlebocken" target="_blank">@gavelebocken</a>. <a href="https://adam.shostack.org/blog/?s=gavle">Previously</a>.</p>Structures, Engineering and Security2018-12-07T00:00:00Zhttps://shostack.org/blog/structures-engineering-and-security/AdamJ.E. Gordon’s Structures, or Why Things Don’t Fall Down is a fascinating and accessible book. Why don’t things fall down? It turns out this is a simple question with some very deep answers.<span class="single-post-excerpt"><!-- excerpt -->J.E. Gordon’s Structures, or Why Things Don’t Fall Down is a fascinating and accessible book. Why don’t things fall down? It turns out this is a simple question with some very deep answers.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/structures-framed..jpg" alt="Cover of 'Structures' by J. E. Gordon" class="single-post-main-img" />
<p>J.E. Gordon's <cite><a href="https://amzn.to/2SwOax1">Structures, or Why Things Don't Fall Down</a></cite> is a fascinating and accessible book. Why don't things fall down? It turns out this is a simple question with some very deep answers. Buildings don't fall down because they're engineered from a set of materials to meet the goals of carrying appropriate loads. Those materials have very different properties than the ways you, me, and everything from grass to trees have evolved to keep standing. Some of these structures are rigid, while others, like tires, are flexible.</p>
<p>The meat of the book, that is, the part that animates the structural elements, really starts with Robert Hooke, and an example of a simple suspension structure, a brick hanging by a string. Gordon provides lively and entertaining explanations of what's happening, and progresses fluidly through the reality of distortion, stress and strain. From there he discusses theories of safety including the delightful dualism of factors of safety versus factors of ignorance, and the dangers (both physical and economic) of the approach.</p>
<p>Structures is entertaining, educational and a fine read that is worth your time. But it's not really the subject of this post.</p>
<p>To introduce the real subject, I shall quote:</p>
<blockquote>
We cannot get away from the fact that every branch of technology must be concerned, to a greater or lesser extent, with questions of strength and deflection.
...<br />
The 'design' of plants and animals and of the traditional artefacts did not just happen. As a rule, both the shape and the materials of any structure which has evolved over a long period of time in a competitive world represent an optimization with regard to the loads which it has to carry and to the financial and metabolic cost. We should like to achieve this sort of optimization in modern technology; but we are not always very good at it.
</blockquote>
<p>The real subject of this post is engineering cybersecurity. If every branch of technology includes cybersecurity, and if one takes the author seriously, then we ought to be concerned with questions of strength and deflection, and to the second quote, we are not very good at it.</p>
<p>We might take some solace from the fact that descriptions of laws of nature took from Hooke, in the 1600s, until today. Or far longer, if we include the troubles that the ancient Greeks had in making roofs that didn't collapse. </p>
<p>But our troubles in describing the forces at work in security, or the nature or measure of the defenses that we seek to employ, are fundamental. If we really wish to optimize defenses, we cannot layer this on that, and hope that our safety factor, or factor of ignorance, will suffice. We need ways to measure stress or strain. How cracks develop and spread. Our technological systems are like ancient Greek roofs — we know that they are fragile, we cannot describe why, and we do not know what to do.</p>
<p>Perhaps it will take us hundreds of years, and software will continue to fail in surprising ways. Perhaps we will learn from our engineering peers and get better at it faster.</p>
<p>The journey to an understanding of structures, or why they do not fall down, is inspiring, instructive, and depressing. Nevertheless, recommended.</p>House Oversight Committee on Equifax2018-12-11T00:00:00Zhttps://shostack.org/blog/house-oversight-committee-on-equifax/AdamThe House Oversight Committee has released a scathing report on Equifax...<span class="single-post-excerpt"><!-- excerpt -->The House Oversight Committee has released a scathing report on Equifax...<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/Oversight-committee-dec-2018-Equifax-Report_Framed.png" alt="Cover of House Oversight Committee report" class="single-post-main-img" />
<p>The House Oversight Committee has <u>released</u> [link to https://oversight.house.gov/report/committee-releases-report-revealing-new-information-on-equifax-data-breach/ no longer works] a scathing <u>report</u> [link to https://oversight.house.gov/wp-content/uploads/2018/12/Equifax-Report.pdf no longer works] on Equifax.</p>
<blockquote>Through the investigation, the Committee reviewed over 122,000 pages of documents, conducted transcribed interviews with three former Equifax employees directly involved with IT, and met with numerous current and former Equifax employees, in addition to Mandiant, the forensic firm hired to conduct an investigation of the breach.</blockquote>
<p>I haven't had time to review the report in detail, but I don't think it answers <a href="https://shostack.org/blog/gao-report-on-equifax/">questions I had</a> reading the GAO report. Four of their give key findings are about what happened before the breach, but the fifth, "unprepared to support affected consumers," goes to a point I've made consistently over nearly a dozen years: "<a href="https://shostack.org/blog/its-not-the-crime-its-the-coverup-or-the-chaos/">It’s Not The Crime, It’s The Coverup or the Chaos</a>."</p>Resources for Infosec Skillbuilding2018-12-12T00:00:00Zhttps://shostack.org/blog/resources-for-infosec-skillbuilding/AdamThe Threat Modeling Book has been featured on a list of resources by Digital Guardian.<span class="single-post-excerpt"><!-- excerpt -->The Threat Modeling Book has been featured on a list of resources by Digital Guardian.<!-- /excerpt --></span>
<p>Thanks to the kind folks Digital Guardian for including my <a href="https://amzn.to/2C4PEZP" target="_blank">threat modeling book</a> in their list of "<a href="https://digitalguardian.com/blog/best-resources-infosec-skillbuilding" target="_blank">The Best Resources for InfoSec Skillbuilding</a>."</p>
<p>It's particularly gratifying to see that the work is standing the test of time.</p>Pivots and Payloads2018-12-17T00:00:00Zhttps://shostack.org/blog/pivots-and-payloads/AdamA new game from SANS for understanding pen test methodology, tactics, and tools.<span class="single-post-excerpt"><!-- excerpt -->A new game from SANS for understanding pen test methodology, tactics, and tools.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/pen-test-poster-2018.png" alt="Pivots and Payloads pentest poster" class="single-post-main-img" />
<p>SANS has announced a new boardgame, "<a href="https://pen-testing.sans.org/blog/2018/10/02/sans-pen-test-poster-pivots-payloads-boardgame" target="_blank">Pivots and Payloads</a>," that "takes you through pen test methodology, tactics, and tools with many possible setbacks that defenders can utilize to hinder forward progress for a pen tester or attacker. The game helps you learn while you play. It's also a great way to showcase to others what pen testers do and how they do it."</p>
<p>If you register for their webinar, which is on Wednesday the 19th, they'll send you some posters versions that convert to boardgames.</p>
<p>If you're interested in serious games for security, I maintain a list at <a href="https://shostack.org/games.html">https://shostack.org/games.html</a>.</p>High ROI Security Advisory Boards2018-12-21T00:00:00Zhttps://shostack.org/blog/high-roi-security-advisory-boards/AdamDiscussing the value of Security Advisory Boards<span class="single-post-excerpt"><!-- excerpt -->Discussing the value of Security Advisory Boards<!-- /excerpt --></span>
<p>Lance Cottrell has a blog <cite><a href="https://ntrepidcorp.com/general/high-roi-security-advisory-boards/">The Why and How of High ROI Security Advisory Boards</a></cite> over at the Ntrepid blog.</p>
<p>I'm pleased to be a part of the board he's discussing, and will quibble slightly — I don't think it's easy to maximize the value of the board. It's taken effort on the part of both Ntrepid staff and executives and also the board, and the result is clearly high value.</p>Beyond Elf on a Shelf2018-12-24T00:00:00Zhttps://shostack.org/blog/beyond-elf-on-a-shelf/AdamHappy Holidays!<span class="single-post-excerpt"><!-- excerpt -->Happy Holidays!<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2018/Elf-on-a-shelf.jpg" alt="Elf on a shelf" class="single-post-main-img" />Scaling Threat Modeling Training2019-01-02T00:00:00Zhttps://shostack.org/blog/scaling-threat-modeling-training/AdamFor the last few years, I've been delivering in-person threat modeling training. I've trained groups ranging from 2 to 100 people at a time, and I've done classes as short as a few hours and as long as a week.<img src="https://shostack.org/blog/img/2019/Threat-Modeling-Training.png" alt="Threat Modeling Training video with Adam Shostack" class="single-post-main-img" />
<p>For the last few years, I've been delivering in-person threat modeling training. I've trained groups ranging from 2 to 100 people at a time, and I've done classes as short as a few hours and as long as a week.</p>
<p>That training is hands on and intense, and I'm very proud that my NPS customer satisfaction ratings tend to come in around 60-70, up there with Apple and Nordstroms. At the same time, in person training doesn't scale to the millions of developers, SRE, DevOps practitioners, and even security folks who could and should learn threat modeling.</p>
<p>That's why I'm super-excited to announce that Linkedin Learning (formerly Lynda.com) has launched my new course: <a href="https://www.linkedin.com/learning/learning-threat-modeling-for-security-professionals?courseClaim=AQHRBZNMfZ817wAAAWgQXCnpwsWeuCaR4xUIb8lIhj4O3BN2rJWH-AAEkjnBfjqDba_QY3zdym4i70ne_yCz7PdjCuy9Jy9uU0T4uevQ9ZL7Mo1ZDl67nXIDrjy9bZrObqH4ZIl5Zq_RFQEvakEgVGJFea5nPo0idVY9lIzWOt1L69CsaPDy4VNm2dimrKehePM4XqiVxBcGZXSt1SVmjrIdqxmrJa6BHAG5aUiF2amTLHsOShupEU_1CE6wVW_Zz1L124Bg9J2OqoA3M6NTyt_rf4SHIlfQ-Nw_o7IVM_ZHDCEsGbyOurRrxaXkfD3BIu6KnSkXBkUxNV78fUUIG1ZCZPvDkG68V7oyc0JWZtrmN1BPwYWr2k-QnRyl4LgjVCOIVtNKIfaou-T3o_3I7sGc1-oKRKJ0M6guv6pyfmuBlYS7GVfeeaxoiiro5DxAC7xuhZm5l0xC4Z68gyvB0CaM-UkPtZFZlc5VBa7S6lVbzLKkY6g3aFye4n4g0QuN89DVYc5KQiqcUNuAh6pP66u2_PRiZSfsmgfd7sBEpDUkjMdmXprXWUDrB_jl32tyLgm34oPiEPAxrZA0o7zDgfQlmjJiLVu4Mxbj7qxadin2xJdYD2fI9VkgBLlXM7ey3S5UN2vKbyNmdibL0R3-gJ6u-Jyf1DYgPHmEyt5KOuOZ&trk=feed-share_course_title_learning&lipi=urn%3Ali%3Apage%3Ad_flagship3_detail_base%3BEu%2Fs%2Bnj0RWuq7yzPXTHf3Q%3D%3D&licu=urn%3Ali%3Acontrol%3Ad_flagship3_detail_base-show_more&lici=pj9hE5daTpCB0ct%2BWoBkCw%3D%3D" target="_blank" rel="noopener">Introduction to Threat Modeling for Security Professionals</a>.</p>
<p>I'm also pleased to say that the complete 42 minute course is free via that link.</p>
<p>Lastly, I see the offerings as complimentary: each fits a niche and has its own advantages and disadvantages. In person, students get all the time they want to ask questions. Online, you get videos in 4 minute chunks.</p>IriusRisk 2.02019-01-10T00:00:00Zhttps://shostack.org/blog/iriusrisk-2-0/AdamI’m excited to be able to share “Announcement: IriusRisk Threat Modeling Platform 2.0 Released.”<span class="single-post-excerpt"><!-- excerpt -->I’m excited to be able to share “Announcement: IriusRisk Threat Modeling Platform 2.0 Released.”<!-- /excerpt --></span>
<p>I'm excited to be able to share "<a href="https://continuumsecurity.net/announcement-iriusrisk-threat-modeling-platform-2-0-released/" target="_blank" rel="noopener">Announcement: IriusRisk Threat Modeling Platform 2.0 Released</a>."</p>
<p>If you're looking to scale your enterprise threat modeling program, this is worth a look.</p>LinkedIn Learning: Producing a Video2019-01-10T00:00:00Zhttps://shostack.org/blog/linkedin-learning-producing-a-video/AdamMy Linkedin Learning course is getting really strong positive feedback. Today, I want to peel back the cover a bit, and talk about how it came to be.<span class="single-post-excerpt"><!-- excerpt -->My Linkedin Learning course is getting really strong positive feedback. Today, I want to peel back the cover a bit, and talk about how it came to be.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/Adam-on-stage.jpg" alt="Behind the scenes taping a LinkedIn Learning video" class="single-post-main-img" />
<p>My Linkedin Learning <a href="https://www.linkedin.com/learning/learning-threat-modeling-for-security-professionals" target="_blank" rel="noopener">course</a> is getting really strong positive feedback. Today, I want to peel back the cover a bit, and talk about how it came to be.</p>
<p>Before I struck a deal with Linkedin, I talked to some of the other popular training sites. Many of them will buy you a microphone and some screen recording software, and you go to town! They even "let" you edit your own videos. Those aren't my skillsets, and I think the quality often shines through. Just not in a good way.</p>
<p>I had a great team at Linkedin. From conceptualizing the course and the audience, through final production, it's been a blast. Decisions that were made were made because of what's best for the student. Like doing a video course so we could <em>show</em> me drawing on a whiteboard, rather than showing fancy pictures and implying that that's what you need to create to threat model like the instructor.</p>
<p>My producer Rae worked with me, and taught me how to write for video. It's a very different form than books or blogs, and to be frank, it took effort to get me there. It took more effort to get me to warm up on camera and make good use of the teleprompter(!), and that's an ongoing learning process for me. The team I work with there manages to be supportive, directive and push without pushing too hard. They should do a masterclass in coaching and feedback.</p>
<p>But the results are, I think, fantastic. The version of me that's recorded is, in a very real way, better than I ever am. It's the magic of <del>Holywood</del> 7 takes of every sentence. The team giving me feedback on how each sounded, and what to improve.</p>
<p>The first course is "<a href="https://www.linkedin.com/learning/learning-threat-modeling-for-security-professionals" target="_blank" rel="noopener">Learning Threat Modeling for Security Professionals</a>."</p>
Threat Modeling as Code2019-01-23T00:00:00Zhttps://shostack.org/blog/threat-modeling-as-code/AdamExploring threat models as code.<span class="single-post-excerpt"><!-- excerpt -->Exploring threat models as code.<!-- /excerpt --></span>
<p>Omer Levi Hevroni has a very interesting post exploring ways to represent <a href="https://www.omerlh.info/2019/01/19/threat-modeling-as-code/">threat models as code</a>.</p>
<p>The closer threat modeling practices are to engineering practices already in place, the more it will be impactful, and the more it will be a standard part of delivery.</p>
<p>There's interesting work in both transforming threat modeling thinking into code, and using code to reduce the amount of thinking required for a project. These are importantly different. Going from analysis to code is work, and selecting the right code to represent your project is work. Both, like writing tests, are an investment of effort now to increase productivity later.</p>
<p>It's absolutely worth exploring ways to reduce the unique thinking that a project requires, and I'm glad to see this work being done.</p>Threat Modeling: Attackers May Adapt, Respond2019-01-31T00:00:00Zhttps://shostack.org/blog/threat-modeling-attackers-may-adapt-respond/AdamReasons for failure in real-world security<span class="single-post-excerpt"><!-- excerpt -->Reasons for failure in real-world security<!-- /excerpt --></span>
<p><a href="https://threadreaderapp.com/thread/1083465039724433409.html" target="_blank" rel="noopener">This</a> is a really interesting post* about how many simple solutions to border security fail in the real world.</p>
<blockquote>
<ul>
<li>Not everywhere has the infrastructure necessary to upload large datasets to the cloud</li>
<li>Most cloud providers are in not-great jurisdictions for some threat models.</li>
<li>Lying to border authorities, even by omission, ends badly.</li>
</ul>
Fact is, the majority of "but why don't you just..." solutions in this space either require lying, reliance on infrastructure that may be non-existent or jurisdictionally compromised, or fails openly.</blockquote>
<p>The "post" was originally a long Twitter thread, which is archived, for the moment, at <a href="https://threadreaderapp.com/thread/1083465039724433409.html" target="_blank" rel="noopener">ThreadReader App</a>, which is a far, far better UI than Twitter.</p>Incentives and Multifactor Authentication2019-02-01T00:00:00Zhttps://shostack.org/blog/incentives-and-multifactor-authentication/AdamWhat if we gamified security?<span class="single-post-excerpt"><!-- excerpt -->What if we gamified security?<!-- /excerpt --></span>
<img src="https://s.aolcdn.com/hss/storage/midas/560b38c8ec7d436062b3316fac89617a/206615414/Fortnite%252FBoogieDown_GIF-1f2be97208316867da7d3cf5217c2486da3c2fe6.gif" />
<p>It's well known that adoption rates for multi-factor authentication are poor. For example, "<a href="https://www.theverge.com/2018/1/23/16922500/gmail-users-two-factor-authentication-google" target="_blank" rel="noopener">Over 90 percent of Gmail users still don’t use two-factor authentication</a>."</p>
<p>Someone was mentioning to me that there are bonuses in games. You get <a href="https://www.swtor.com/info/security-key" target="_blank" rel="noopener">access to special rooms</a> in Star Wars Old Republic. There's a <a href="https://www.engadget.com/2018/08/23/fortnite-2fa-emote/" target="_blank" rel="noopener">special emote</a> in Fortnite. (Above)</p>
<p>How well do these incentives work? Are there numbers out there?</p>Fire Doesn't Innovate by Kip Boyle (Book Review)2019-02-04T00:00:00Zhttps://shostack.org/blog/fire-doesnt-innovate-by-kip-boyle-book-review/AdamAn unexpected book review.<span class="single-post-excerpt"><!-- excerpt -->An unexpected book review.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/Fire-Doesnt-Innovate.jpg" alt="cover of Fire Doesn't Innovate by Kip Boyle" class="single-post-main-img" style="float:left;" />
<p>I hate reviewing books by people I know, because I am a picky reader, and if you can't say anything nice, don't say anything at all. I also tend to hate management books, because they often substitute jargon for crisp thinking. So I am surprised, but, here I am, writing a review of Kip Boyle's "<a href="https://amzn.to/2G9vu4b" target="_blank" rel="noopener">Fire Doesn't Innovate</a>." </p>
<p>I'm giving little away by saying the twist is that attackers do innovate, and it's a surprisingly solid frame on which Kip hangs a readable and actionable book for executives who need to make cybersecurity decisions. And it doesn't fall into the jargon trap either in security or management.</p>
<p>It is not a book for the CSO. It is a book for executives, including, but not limited, to CEOs. They need to understand why cyber risks aren't like fire risks, they need to drive action by their company, and they don't need, want, or have the time to be able to talk about the difference between Fancy Bear and SQL injection.</p>
<p>In this, it is less detailed by far than Peter Singer and Allan Friedman's "<a href="https://amzn.to/2HPi3IA" target="_blank" rel="noopener">Cybersecurity and Cyberwar</a>." That book is intended to act as a primer and get people ready for deeper learning. "Fire" is much more for the busy executive who needs to know what questions to act, what good answers look like, and what to tell their team to go do.</p>
<p>The book is organized into two major parts. Part I is basic cyber 'hygiene' for the exec, including actionable steps like turn on updates and backups and two factor auth. (I disagree with his blanket advice to never pay ransoms — getting your business back is probably better than losing it.) Part II is what to do. It's organized around the NIST CyberSecurity Framework, and makes it actionable. The action is in three parts: assess, plan and execute, and do so on an annual schedule.</p>
<p>Part of me burns with the urge to scream "that's too simplistic!" But I know that for a lot of executives, that's what they need as they get started. The nuance and complexity that we can bring to their problem leads to a feeling that cyber is overwhelming and impossible. So they do nothing. There's an important lesson and model here for those writing 'how to be safe on the internet' guidance, and maybe there's a second book here for normal folks.</p>
<p>There's another trap that Kip avoids, and that is the book that tells you about but doesn't reveal the secret sauce. Those books are essentially ads for the thing the author has to sell, and the book tells you enough to get you to pick up the phone. "Fire" doesn't do that. It lays out, specifically, here's the questions to ask. Here's the email to frame the project. Here's how to interpret results. It's a brave move, but one that I think is wise. (My threat modeling book tells you what you need to know, and people call me looking for help. The coaching, the "here's the nugget you need," and the comparisons all make for a good business.)</p>
<p>I don't know of another book at this level. Buy it for the execs you know.</p>
<p>Disclosure: I bought a copy of the Kindle Edition, and Kip gave me a signed copy of the paperback. He says nice things about me in the acknowledgements.</p>
Nature and Nurture in Threat Modeling2019-02-06T00:00:00Zhttps://shostack.org/blog/nature-and-nurture-in-threat-modeling/AdamWhat comes easily should still be taught and elaborated upon.<span class="single-post-excerpt"><!-- excerpt -->What comes easily should still be taught and elaborated upon.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/Sunset-Drought-Savannah-Tree-Africa-372778_Framed.jpg" alt="Sunset in African Savannah" class="single-post-main-img" />
<p>Josh Corman opened a bit of a can of worms a day or two ago, asking on Twitter: "<a href="https://twitter.com/joshcorman/status/1092859005175820288" target="_blank" rel="noopener">pls RT: who are the 3-5 best, most natural Threat Modeling minds? Esp for NonSecurity people. @adamshostack is a given.</a>" (Thanks!)</p>
<p>What I normally say to this is I don't think I'm <em>naturally</em> good at finding replay attacks in network protocols — my farming ancestors got no chance to exercise such talents, and so it's a skill I acquired. Similarly, whatever leads me to be able to spot such problems doesn't help me spot lions on the savannah or detect food that's slightly off. </p>
<p>If we're going to scale threat modeling, to be systematic and structured, we need to work from a body of knowledge that we can teach and test. We need structures like my four-question framework (what are we working on, what can go wrong, what do we do, did we do a good job), and we need structures like STRIDE and Kill Chains to help us be systematic in our approaches to discovering what can go wrong. Part of the reason the framework works is it allows us to have many ways to threat model, instead of “the one true way.”</p>
<p>But that's not a sufficient answer: from Rembrandt to Da Vinci, artists of great talent appear from nowhere. And they were identified and taught. The existence of schools, with curricula and codification of knowledge is important.</p>
<p>Even with brilliant artists (and I have no idea how to identify them consistently), we need more people to paint walls than we need people to paint murals. We need to scale the basic skills, and as we do so we'll learn how to identify the "naturals."</p>
<p>Photo: <a href="https://www.maxpixel.net/Sunset-Drought-Savannah-Tree-Africa-372778" target="_blank" rel="noopener">Max Pixel</a>.</p>
The Queen of the Skies and Innovation2019-02-10T00:00:00Zhttps://shostack.org/blog/the-queen-of-the-skies-and-innovation/AdamInnovation, regulation, and more.<span class="single-post-excerpt"><!-- excerpt -->Innovation, regulation, and more.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/Plane-phone2.jpg" alt="Pan Am 747 airliner and rotary phone" class="single-post-main-img" />
<p>The Seattle Times has a story today about how "<a href="https://www.seattletimes.com/seattle-news/50-years-ago-today-the-first-747-took-off-and-changed-aviation/" target="_blank" rel="noopener">50 years ago today, the first 747 took off and changed aviation</a>." It's true. The 747 was a marvel of engineering and luxury. The <a href="https://amzn.to/2SzQvLm" target="_blank" rel="noopener">book</a> by Joe Sutter is a great story of engineering leadership. For an upcoming flight, I paid extra to reserve an upper deck seat before the last of the passenger-carrying Queens of the Skies retires.</p>
<p>And in a way, the 747 represents a pinnacle of aviation engineering advancements. It was fast, it was long range, it was comfortable. There is no arguing that today's planes are lighter, quieter, have better air, in seat power and entertainment, but I'm still happy to be flying on one, and there are still a few left to be delivered as cargo airplanes until 2022. (You can get lost in the <a href="https://en.wikipedia.org/wiki/Boeing_747" target="_blank" rel="noopener">Wikipedia article.</a>)</p>
<p>And I want to talk a little not about the amazing aircraft, but about the regulatory tradeoffs made for aircraft and for computers.</p>
<p>As mentioned, the 50 year old design, with a great many improvements, remains in production. Also pictured is what's probably a 1960s era Bell Systems 500 (note the integrated handset cord). Now, if 747s crashed at the rate of computers running Windows, there wouldn't be any left. Regulation has made aviation safe, but the rate of innovation is low. (Brad Templeton has some thoughts on this in "<a href="https://ideas.4brad.com/tons-new-ideas-aviation-will-regulation-stop-them" target="_blank" rel="noopener">Tons of new ideas in aviation. Will regulation stop them?</a>.")</p>
<p>In contrast, innovation in phones, computers and networks have transformed roughly every aspect of life over the last 25 years. The iPhone has transformed phones from phones into computers full of apps.</p>
<p>This has security costs. It is nearly impossible to function in society without a mobile phone. Your location is tracked constantly. A vulnerability in your phone leads to compromise of astounding amounts of personal data. These security costs scale when someone finds a vulnerability. Bruce Schneier has <a href="https://amzn.to/2GhBhEV" target="_blank" rel="noopener">written recently</a> about how this all comes together and leads him to say that even bad regulation is probably better than no regulation.</p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2019/phone-equals-products.jpg" alt="iphone replaces lots of things" title="Phone replaces everything" border="0" width="349" height="200" />
<p>In some ways, we're already accepting these controls: see "<u>15 Controversial Apps That Were Banned From Apple’s App Store</u>," [link to https://www.cheatsheet.com/gear-style/kinds-of-apps-that-apple-has-banned-from-the-app-store.html/ no longer works] or "<a href="https://www.gadgetsnow.com/slideshows/9-android-apps-banned-on-google-play-store/Google-has-banned-these-14-apps-from-Play-Store/photolist/63169369.cms" target="_blank" rel="noopener">Google has 'banned' these 14 apps from Play Store</a>." Controls imposed by one of the two companies wealthy enough to compete in mobile phone operating systems are importantly different from government controls, except of course, when those companies remove apps at the behest of governments.</p>
<p>I don't know how to write regulation that allows for permission-less innovation at the pace we're used to, and balances that with security and privacy. Something's likely to give, and we need to think about how to make the societal tradeoffs well. Does anyone?</p>
<p>(Lastly, speaking of that upper-deck reservation, I want to give a shout-out to TProphet's <a href="https://award.cat/" target="_blank" rel="noopener">Award Cat</a>, who drew my attention to the aircraft type and opportunity.)</p>Podcast: DevSecOps2019-02-13T00:00:00Zhttps://shostack.org/blog/podcast-devsecops/AdamI recently chatted with Mark Miller over at DevSecOps<span class="single-post-excerpt"><!-- excerpt -->I recently chatted with Mark Miller over at DevSecOps<!-- /excerpt --></span>
<p>I did a podcast with Mark Miller over at DevSecOps days. It was a fun conversation, and you can have a listen at "<a href="https://www.devsecopsdays.com/devsecops-podcast-gallery/podcasts/anticipating-failure-through-threat-modeling-w/-adam-shostack" target="_blank" rel="noopener">Anticipating Failure through Threat Modeling w/ Adam Shostack</a>."</p>55 5 ⭐ Reviews?2019-02-14T00:00:00Zhttps://shostack.org/blog/fifty-five-5star-reviews/AdamAlmost 5 years after release, I'm looking for a few more Amazon reviews.<span class="single-post-excerpt"><!-- excerpt -->Almost 5 years after release, I'm looking for a few more Amazon reviews.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/og-tm-book-cover.jpg" alt="Cover of my book, Threat Modeling: Designing for Security" class="single-post-main-img" />
<p>I'm getting ready for the 5-year anniversary of my book, "<a href="https://www.amazon.com/Threat-Modeling-Designing-Adam-Shostack/dp/1118809998" target="_blank" rel="noopener">Threat Modeling: Designing for Security</a>." </p>
<p>As part of that, I would love to see the book have more than 55 5 ⭐ reviews on Amazon. If you found the book valuable, I would appreciate it if you could take a few minutes to write a review.</p>Dolphins and Pufferfish2019-02-16T00:00:00Zhttps://shostack.org/blog/dolphins-and-pufferfish/AdamMore chaos in nature<span class="single-post-excerpt"><!-- excerpt -->More chaos in nature<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/dolphin.jpg" alt="dolphin with pufferfish hanging from its mouth" class="single-post-main-img" />
<p>Apparently, "<a href="https://www.smithsonianmag.com/smart-news/dolphins-seem-to-use-toxic-pufferfish-to-get-high-180948219/" target="_blank" rel="noopener">Dolphins Seem to Use Toxic Pufferfish to Get High</a>." </p>
<p>Of course, pufferfish toxins are also part of why the fish is a <u>delicacy</u> [link to http://www.asahi.com/ajw/articles/AJ201805010006.html no longer works] in Japan.</p>
<p>It just goes to show that nature finds its own, chaotic, uses for things.</p>What Should Training Cover?2019-02-24T00:00:00Zhttps://shostack.org/blog/what-should-training-cover/AdamWhen suggesting that someone needs more training, consider what specific points should be covered.<span class="single-post-excerpt"><!-- excerpt -->When suggesting that someone needs more training, consider what specific points should be covered.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/curriculum-elements.jpg" alt="Curriculum Elements" class="single-post-main-img" />
<p>Chris Eng said "<a href="https://twitter.com/chriseng/status/1098652049422647296" target="_blank" rel="noopener">Someone should set up a GoFundMe to send whoever wrote the hit piece on password managers to a threat modeling class.</a>"</p>
<p>And while it's pretty amusing, you know, I teach threat modeling classes. I spend a lot of time crafting explicit learning goals, considering and refining instructional methods, and so when a smart fellow like Chris says this, my question is why?</p>
<p>Is this "threat modeling as our only hope?" That's when we take a hard security problem and sagely say "better threat modeling." Then we wander off. It's even better with hindsight.</p>
<p>Or is there a particular thing that a student should be learning in a threat modeling class? There was a set of flaws where master passwords were accessible in memory, and thus an attacker with a debugger could get your master password and decrypt all your passwords. </p>
<p>I'm not going to link the hit piece because they deserve to not have your clicks, impressions, or ad displays. It asserted that these flaws mean that a password manager is no better than a text file full of your passwords. </p>
<p>Chris' point is that we should not tell people that using a password manager is bad, and I agree. It's an essential part of defending against your passwords being leaked by a third party site. An attacker who can read memory can read memory, which includes backing stores like disk; in fact, reading disk is easier than reading RAM.</p>
<p>So to loop this around to threat modeling, we can consider a bunch of skills or knowledge that could be delivered via training:</p>
<ol>
<li>Enumerate attacker capabilities. "An attacker who can run code as Alice can do everything Alice's account can do." (I am, somewhat famously, not a fan of "think like an attacker", and while I remain skeptical of enumerating attacker <em>motivations</em>, this is about attacker <em>capabilities</em>.)</li>
<li>Understand how attacks like spoofing can take place. Details like password stuffing and how modern brute force attacks take place are a set of facts that a student could learn.</li>
<li>Perform multiple analyses, and compare the result. If "what can go wrong" is "someone accesses your passwords by X or Y," what are the steps to do that? What part of the defenses are in common? Which are unique? This is a set of tasks that someone could learn. </li>
</ol>
<p>I structure classes around the four-question frame of "what are we working on, what can go wrong, what are we going to do, did we do a good job." I work to build up skills in each of those, show how they interact, and how they interact with other engineering work. I think asking 'what else could that attacker do with that access' is an interesting sub of question 2. How attacks work and a selection of real world attacks is something I've done for non-security audiences (it feels like review for security folks). The third, comparing between models, I don't feel is a basic skill.</p>
<p>I'm curious: are there other ways in which a threat modeling class could or should help its students see that 'password managers are no better than text files' is bad threat modeling?</p>
<p>Image (model) from Flinders University, <u>Key elements and relationships in curriculum</u> [link to http://www.flinders.edu.au/teaching/teaching-strategies/curriculum-development/curriculum-process.cfm no longer works]</p>
Adam @ RSA2019-02-28T00:00:00Zhttps://shostack.org/blog/adam-at-rsa/AdamWhere and when to find Adam Shostack at RSA 2019<span class="single-post-excerpt"><!-- excerpt -->Where and when to find Adam Shostack at RSA 2019<!-- /excerpt --></span>
<p>At RSA, I'll be speaking 3 times at the conference, and once at a private event for Continuum:</p>
<ul>
<li>"<a href="https://www.rsaconference.com/events/us19/agenda/sessions/14316-2028-future-state-long-live-the-firewall" target="_blank" rel="noopener">2028 Future State: Long Live the Firewall?</a>" with Jennifer Minella, Harry Sverdlove and Marcus Ranum. March 5 | 1:00 PM – 1:50 PM | Moscone West 3001</li>
<li><a href="https://continuumsecurity.net/threat-modeling-brunch-with-iriusrisk/" target="_blank" rel="noopener">Threat modeling brunch with IriusRisk</a> March 6 | 10 - 11 AM | See site for registration</li>
<li><a href="https://www.rsaconference.com/events/us19/agenda/sessions/14908-how-to-measure-ecosystem-impacts" target="_blank" rel="noopener">How to Measure Ecosystem Impacts</a> with Jay Jacobs. March 7 | 1:30 PM – 2:20 PM | Moscone West 2011</li>
<li><a href="https://www.rsaconference.com/events/us19/agenda/sessions/14733-threat-modeling-in-2019" target="_blank" rel="noopener">Threat Modeling in 2019</a>. March 8 | 8:30 AM – 9:20 AM | Moscone South 205</li>
</ul>Spoofing In Depth2019-02-28T00:00:00Zhttps://shostack.org/blog/spoofing-in-depth/AdamI'm quite happy to say that my next Linkedin Learning course has launched! This one is all about spoofing.<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2019/Spoofing-in-depth-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2019/Spoofing-in-depth-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2019/Spoofing-in-depth-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2019/Spoofing-in-depth-800w.jpeg" width="800" height="452" alt="screencap of Adam in new LinkedIn Learning course" decoding="async" />
</picture>
<p>I'm quite happy to say that my next Linkedin Learning course has launched! This one is all about spoofing.</p>
<p>It's titled "<a href="https://www.linkedin.com/learning/threat-modeling-spoofing-in-depth?isLearningSubscriber=true&trk=feed-share_course_title_learning&lipi=urn%3Ali%3Apage%3Ad_flagship3_detail_base%3BCBJsMu9NQyCZcL8vuJvaWg%3D%3D&licu=urn%3Ali%3Acontrol%3Ad_flagship3_detail_base-show_more&lici=OGCQQOyqQwKw2C6ql1uYxA%3D%3D" target="_blank" rel="noopener">Threat Modeling: Spoofing in Depth</a>." It's free until at least a week after RSA.</p>
<p>Also, I'm exploring the idea that security professionals lack a shared body of knowledge about attacks, and that an entertaining and engaging presentation of such a BoK could be a useful contribution. A way to test this is to ask how often you hear attacks discussed at a level of abstraction that's puts the attacks into a category other than "OMG the sky is falling, patch now." Another way to test is to watch for fluidity in moving from one type of spoofing attack to another.</p>
<p>Part of my goal of the course is to help people see that attacks cluster and have similarities, and that STRIDE can act as a framework for chunking knowledge.</p>After a Conference2019-03-11T00:00:00Zhttps://shostack.org/blog/after-a-conference/AdamSome tips on how to regoup and catch up after being away for conferences.<span class="single-post-excerpt"><!-- excerpt -->Some tips on how to regoup and catch up after being away for conferences.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2019/view-from-stage-rsa-panel-2019-720w.webp 720w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2019/view-from-stage-rsa-panel-2019-720w.jpeg 720w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2019/view-from-stage-rsa-panel-2019-720w.jpeg" width="720" height="404" alt="View from stage" decoding="async" />
</picture>
<p>RSA was an amazing and overwhelming experience. I had four opportunities to share with the audience, including a panel with Marcus Ranum, Jennifer Minella, and Harry Sverdlove on firewalls; a joint talk with Jay Jacobs on the impact of DMARC on BEC; and threat modeling in 2019. (I was gratified to see so many folks awake at 8.30 on Friday morning.) I also spoke at a brunch for Continuum.</p>
<p>Now I’m home and trying to catch up. I wanted to share my workflow on the chance it might be useful. I go through each of these areas to find tasks:</p>
<ul><li><b>Email</b>. My inbox, but I also have a folder called “followup.” I move a lot out of my inbox to the followup folder so I can see it when I’m back from travel. (I also have a set of monthly sub-folders: followup/august, followup/september, they let me say “I’ll get back to you in three months.”)</li>
<li><b>Business cards</b>. I go through the whole stack of cards for todo items. I try to write notes on business cards. I discovered I did that on one of 6 cards where I remembered something. That’s not very good odds, and forces me to consider what I might have missed. Still exploring how to make best use of cards without notes. Advice really welcome here.</li>
<li><b>Signal, iMessage</b>. For both of these, I go back through the conversations I’ve had, see if I had followups or if I dropped the ball on someone.</li>
<li><b>Linkedin</b>. I get a lot of linkedin requests, and I’m a fairly open networker. Sadly, the UI works very poorly for me. I would love to hear about tools that allow me to effectively move messages to something other than a LIFO queue.</li>
<li><b>Workflowy</b>. I’m experimenting with this as a note taking tool, and it’s not bad.</li>
<li><b>Slack channels</b>. Go through, look at DMs and channels. I suppose I should use some feature to note that I intend to followup. Is the Slack way to say “come back to this” to star a message?</li>
<li><b>Calendar</b>. For each meeting, think about the meeting, check my notes, see if I remember followups or things that didn’t make it to an email/workflowy note. And yes, there were several discussions that I know we discussed followups that I re-discovered by looking at my calendar.</li>
<li><b>Photos</b>. Photographs are the new note-taking, and so going back through pictures you took is important.</li>
<li><b>Twitter, Facebook</b>. I’m trying to break from Twitter, and don’t use Facebook, but I figured I’d include them here because they’re maybe worth remembering.</li>
</ul>
<p>If I missed a followup, I’m sorry, please just let me know. It’s easy to misplace something. Also I’m curious: What’s in your toolbox?</p>Facebook's Privacy Constitution2019-03-11T00:00:00Zhttps://shostack.org/blog/facebooks-privacy-constitution/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<p>Bruce Schneier and I wrote an article on Facebook's privacy changes: <cite><a href="https://onezero.medium.com/a-new-privacy-constitution-for-facebook-a7106998f904" target="_blank" rel="noopener">A New Privacy Constitution for Facebook</a></cite>.</p>A Seat At The Table (AppSecCali)2019-03-13T00:00:00Zhttps://shostack.org/blog/a-seat-at-the-table-appseccali/AdamMy talks from AppSecCali 2019<span class="single-post-excerpt"><!-- excerpt -->My talks from AppSecCali 2019<!-- /excerpt --></span>
<p>The fine folks at AppSecCali have <a href="https://www.youtube.com/playlist?list=PLpr-xdpM8wG-bXotGh7OcWk9Xrc1b4pIJ" target="_blank" rel="noopener">posted videos</a>, including my talks, A Seat At The Table, and <a href="https://www.youtube.com/watch?v=uzOdpuAhr28&list=PLpr-xdpM8wG-bXotGh7OcWk9Xrc1b4pIJ&index=36&t=0s" target="_blank" rel="noopener">Game On! Adding Privacy to Threat Modeling - Adam Shostack & Mark Vinkovits</a></p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/FyGA-9WHFAQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>Happy Pi Day!2019-03-14T00:00:00Zhttps://shostack.org/blog/happy-pi-day-2019/AdamThere's only a few times to use a pie chart, but to help you celebrate, there's how to keep track of your intake:
<p>There's only a few times to use a pie chart, but to help you celebrate, there's how to keep track of your intake:</p>
<img src="https://shostack.org/blog/img/2019/pie-i-have-eaten.jpg" />India's Intermediary Guidelines2019-03-18T00:00:00Zhttps://shostack.org/blog/indias-intermediary-guidelines/AdamI've signed on to Access Now's letter to the Indian Ministry of Electronics and Information Technology, asking the Government of India to withdraw the draft amendments proposed to the Information Technology (Intermediary Guidelines) Rules.<p>I've signed on to Access Now's letter to the Indian Ministry of Electronics and Information Technology, asking the Government of India to withdraw the draft amendments proposed to the Information Technology (Intermediary Guidelines) Rules.</p>
<p>As they say in their <a href="https://www.accessnow.org/global-coalition-urges-india-to-withdraw-proposed-amendments-to-intermediary-guidelines/" target="_blank" rel="noopener">press release</a>:</p>
<blockquote>Today’s letter, signed by an international coalition of 31 organizations and individuals, explains how the proposed amendments threaten fundamental rights and the space for a free internet, while not addressing the problems that the Ministry aims to resolve. A key concern is the requirement for intermediaries to “enable tracing out of such originator” of content that an intermediary hosts, which could lead to demands that providers weaken the security features of their products and services. This threat to privacy would in turn endanger free expression.</blockquote>Threat Modeling in 20192019-03-19T00:00:00Zhttps://shostack.org/blog/threat-modeling-in-2019/AdamRSA has posted a video of my talk, “Threat Modeling in 2019”.
<p>RSA has posted a video of my talk, “Threat Modeling in 2019”.</p>
<iframe src="https://www.youtube-nocookie.com/embed/ZoxHIpzaZ6U" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" width="560" height="315" frameborder="0"></iframe>
<p>The slides for a later version of the talk are <a href="https://adam.shostack.org/Shostack-Threat-Modeling-in-2019_oct-owasp.pdf">here</a>.</p>Cybersecurity is not very important2019-03-22T00:00:00Zhttps://shostack.org/blog/cybersecurity-is-not-very-important/AdamSome points to consider, from Andrew Odlyzko.<span class="single-post-excerpt"><!-- excerpt -->Some points to consider, from Andrew Odlyzko.<!-- /excerpt --></span>
<p><cite><a href="http://www.dtc.umn.edu/~odlyzko/doc/cyberinsecurity.pdf" target="_blank" rel="noopener">Cybersecurity is not very important</a></cite> is a new paper by the very smart Andrew Odlyzko. I do not agree with everything he says, but it's worth reading and pondering if and why you disagree with it. I think I agree with it more than I disagree.</p>20 Years of STRIDE: Looking Back, Looking Forward2019-04-01T00:00:00Zhttps://shostack.org/blog/20-years-of-stride-looking-back-looking-forward/AdamHas it been that long already?<span class="single-post-excerpt"><!-- excerpt -->Has it been that long already?<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/" alt="" class="single-post-main-img" />
<p>"Today, let me contrast two 20-year-old papers on threat modeling. My first paper on this topic, "Breaking Up Is Hard to Do," written with Bruce Schneier, analyzed smart-card security. We talked about categories of threats, threat actors, assets — all the usual stuff for a paper of that era. We took the stance that "we experts have thought hard about these problems, and would like to share our results."</p>
<p>Around the same time, on April 1, 1999, Loren Kohnfelder and Praerit Garg published a paper in Microsoft's internal "Interface" journal called "<a href="https://shostack.org/files/microsoft/The-Threats-To-Our-Products.docx" target="_blank" rel="noopener">The Threats to our Products</a>." It was revolutionary, despite not being publicly available for over a decade. What made the Kohnfelder and Garg paper revolutionary is that it was the first to structure the process of how to find threats. It organized attacks into a model (STRIDE), and that model was intended to help people find problems, as noted..."</p>
<p>Read the full version of "<a href="https://www.darkreading.com/20-years-of-stride-looking-back-looking-forward/a/d-id/1334275" target="_blank" rel="tag">20 Years of STRIDE: Looking Back, Looking Forward</a>" on Dark Reading.</p>Leave Those Numbers for April 1st2019-04-02T00:00:00Zhttps://shostack.org/blog/leave-those-numbers-for-april-1st/AdamOver-inflated numbers won't scare me into buying your 'solution'.<span class="single-post-excerpt"><!-- excerpt -->Over-inflated numbers won't scare me into buying your 'solution'.<!-- /excerpt --></span>
<p>"90% of attacks start with phishing!*" "Cyber attacks will cost the world 6 trillion by 2020!"</p>
<p>We've all seen these sorts of numbers from vendors, and in a sense they're April Fools day numbers: you'd have to be a fool to believe them. But vendors quote insane because there's no downside and much upside. We need to create more and worse downside, and the road there lies through losing sales.</p>
<p>We need to call vendors on these number, and say "I'm sorry, but if you'd lie to me about that, what about the numbers you're claiming that are hard to verify? The door is to your left."</p>
<p>If we want to change the behavior, we have to change the impact of the behavior. We need to tell vendors that there's no place for made up numbers, debunked numbers, unsupported numbers in our buying processes. If those numbers are in their sales and marketing material, they're going to lose business for it.</p>
<p>* This one seems to trace back to analysis that 90% of APT attacks in the Verizon DBIR started with phishing, but APT and non-APT attacks are clearly different.</p>Books Worth Your Time (Q1 2019)2019-04-05T00:00:00Zhttps://shostack.org/blog/books-worth-your-time-q1-2019/AdamJust what the title says...<span class="single-post-excerpt"><!-- excerpt -->Just what the title says...<!-- /excerpt --></span>
<h3>Cyber</h3>
<ul>
<li><a href="https://amzn.to/2V9W3tT" target="_blank" rel="noopener">Making Software</a> "What Really Works, and Why We Believe It" by Andy Oram and Greg Wilson. This collection of essays is a fascinating view into the state of the art in empirical analysis software engineering.</li>
<li><a href="hhttps://amzn.to/2WDqd9m" target="_blank" rel="noopener">Agile Application Security</a> by Laura Bell, Michael Brunton-Spall, Rich Smith and Jim Bird. A really good overview of the many moving pieces in an agile SDL. Good enough that I bought a paper copy to augment the ebook. (Also, sometimes redundant, and says nice things about my work.)</li>
<li><a href="https://amzn.to/2V6ByOR" target="_blank" rel="noopener">Click Here to Kill Everybody</a> by Bruce Schneier. Thought-provoking survey of the problems that come from the book above not being better read. More seriously, we haven't scaled application security, and even if we do, there will be bad developers who'll do a crappy job at building things. What can we do about that as a society? I don't like all of Schneier's answers, but the reasoning is sound.</li>
</ul>
<h3>Nonfiction</h3>
<ul>
<li><a href="https://amzn.to/2WDAWAv" target="_blank" rel="noopener">Trust Me, I'm Lying: Confessions of a Media Manipulator </a> by Ryan Holiday lays out the toolbox of the fellow who used to run marketing for American Apparel. Shows how guerrilla marketing works in the age of Twitter, and outlines techniques now being used to screw up elections and people's lives.</li>
<li><a href="https://amzn.to/2VcHMge" target="_blank" rel="noopener">The Internet of Garbage</a> by Sarah Jeong. As a summary of the problems and challenges of the internet, it's aged sadly well since 2015.</li>
<li><a href="https://amzn.to/2HQVLW3" target="_blank" rel="noopener">The Tangled Tree: A Radical New History of Life</a>. We're used to thinking that genes are passed on from parents, but as David Quammen explains, there's also horizontal gene transfer (<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4536854/" target="_blank" rel="noopener">NIH</a>, <a href="https://en.wikipedia.org/wiki/Horizontal_gene_transfer" target="_blank" rel="noopener">Wikipedia</a>). Really fascinating history of both science and the personalities involved. Recommended despite the writing being somewhat rocky and uneven – these are hard topics and I do not envy the author's task of making an accessible and interesting read.</li>
<li><a href="https://amzn.to/2FLGSk2" target="_blank" rel="noopener">Things We Think About Games</a> by Will Hindmarch and Jeff Tidball is 140 micro-essays about games. Some I loved, some I hated, but I enjoyed the heck out of it.</li>
</ul>
<h3>Fiction</h3>
<p>As it turns out, all three fiction books are re-imaginings of other stories. If you find that wicked annoying, these are not for you.</p>
<ul>
<li><a href="https://amzn.to/2HPJhOt" target="_blank" rel="noopener">The Queens of Innis Lear</a>, by Tessa Gratton is a re-telling of Lear from the perspective of his daughters. </li>
<li><a href="https://amzn.to/2WDOtIr" target="_blank" rel="noopener">A Study in Honor</a>, Claire O'Dell re-tells the Holmes/Watston story in the aftermath of a second American Civil War.</li>
<li><a href="https://amzn.to/2I5gFA0" target="_blank" rel="noopener">Spinning Silver</a>, Naomi Novik is a re-telling of the Rumplestiltskin tale. (Hugo nominated)</li>
</ul>
<p>That's what I read last quarter that I want to share. What was memorable for you?</p>Hayabusa!2019-04-06T00:00:00Zhttps://shostack.org/blog/hayabusa/AdamCongratulations to the Hayabusa2 mission team, who flew to an asteroid, dropped multiple rovers, an impactor and a separate camera satellite to observe the impactor. The Hayabusa2 then flew around, to the far side of the asteroid to avoid ejecta from the impactor. In a few weeks, Hayabusa2 will probably land, collect more samples and then fly back to Earth.
<img src="https://shostack.org/blog/img/2019/hayabusa2_mission_01.jpg" alt="orbit of Hayabusa2 in relation to Sun and Earth" class="single-post-main-img" />
<p>Congratulations to the Hayabusa2 mission team, who flew to an asteroid, dropped multiple rovers, an impactor and a separate camera satellite to observe the impactor. The Hayabusa2 then flew around, to the far side of the asteroid to avoid ejecta from the impactor. In a few weeks, Hayabusa2 will probably land, collect more samples and then fly back to Earth.</p>
<p>For more: <a href="http://global.jaxa.jp/projects/sas/hayabusa2/" target="_blank" rel="noopener">Hayabusa 2</a> page at the Japan Aerospace Exploration Agency, and don't miss the "<a href="http://global.jaxa.jp/projects/sas/hayabusa2/instruments.html" target="_blank" rel="noopener">major onboard instruments</a> page, including an ion engine using 1/10th the power of a chemical propellant, and fixes to malfunctions that happened after 15,000 hours of operation; a better seal on the collection horn and more.</p>
<p><a href="https://www.sciencemag.org/news/2019/04/japanese-spacecraft-may-have-just-blown-crater-distant-asteroid" target="_blank" rel="noopener">A Japanese spacecraft may have just blown a crater in a distant asteroid</a> (Science Mag)</p>'No Need' to tell the public (?!?)2019-04-08T00:00:00Zhttps://shostack.org/blog/no-need-to-tell-the-public/AdamSecrecy isn't the best policy when it comes to public health.<span class="single-post-excerpt"><!-- excerpt -->Secrecy isn't the best policy when it comes to public health.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/otitisexterna_0314advisorforum_560969.jpg" alt="close-up of inner ear buildup" class="single-post-main-img" />
<p>When Andrew and I wrote The New School, and talked about the need to learn from other professions, we didn't mean for doctors to learn from 'cybersecurity thought leaders' about hiding their problems:</p>
<blockquote>...Only one organism grew back. C. auris.<br /><br />It was spreading, but word of it was not. The hospital, a specialty lung and heart center that draws wealthy patients from the Middle East and around Europe, alerted the British government and told infected patients, but made no public announcement.<br /><br />“There was no need to put out a news release during the outbreak,” said Oliver Wilkinson, a spokesman for the hospital.<br /><br />This hushed panic is playing out in hospitals around the world. Individual institutions and national, state and local governments have been reluctant to publicize outbreaks of resistant infections, arguing there is no point in scaring patients — or prospective ones...<br /><br />Dr. Silke Schelenz, Royal Brompton’s infectious disease specialist, found the lack of urgency from the government and hospital in the early stages of the outbreak “very, very frustrating.”<br /><br />“They obviously didn’t want to lose reputation,” Dr. Schelenz said. “It hadn’t impacted our surgical outcomes.” ("<a href="https://www.nytimes.com/2019/04/06/health/drug-resistant-candida-auris.html" target="_blank" rel="noopener">A Mysterious Infection, Spanning the Globe in a Climate of Secrecy</a>", NYTimes April 6, 2019)</blockquote>
<p>This is the wrong way to think about the problem. Mr. Wilkinson (as quoted) is wrong. There is a fiduciary duty to tell patients that they are at increased risk of C. auris if they go to his hospital.</p>
<p>Moreover, there is a need to tell the public about these problems. Our choices, as a society, kill people. We kill people when we allow antibiotics to be used to make fatter cows or when we allow antifungals to be used on crops.</p>
<p>We can adjust those choices, but only if we know the consequences we are accepting. Hiding outcomes hinders cybersecurity, and it's a bad model for medicine or public policy.</p>
<p>(Picture courtesy of <a href="https://www.clinicaladvisor.com/home/consultations/treating-antifungal-resistant-ear-infection%E2%80%A9s/" target="_blank" rel="noopener">Clinical Advisor</a>. I am somewhat sorry for my use of such a picture here, where it's unexpected.)</p>Science of Security, Science for Security2019-04-09T00:00:00Zhttps://shostack.org/blog/science-of-security-science-for-security/AlexRecent article in Bentham’s Gaze<span class="single-post-excerpt"><!-- excerpt -->Recent article in Bentham’s Gaze<!-- /excerpt --></span>
<p>There's an interesting article in Bentham's Gaze, "<a href="https://www.benthamsgaze.org/2019/03/13/science-of-or-for-security/" target="_blank" rel="noopener">Science 'of' or 'for' security?</a>"</p>
<p>It usefully teases apart some concepts, and, yes, it probably is consistent with the New School.</p>The White Box Essays (Book Review)2019-04-10T00:00:00Zhttps://shostack.org/blog/the-white-box-essays/AdamA resource for those developing games.<span class="single-post-excerpt"><!-- excerpt -->A resource for those developing games.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/wb_mockup.jpg" alt="The White Box and the White Box Essays" class="single-post-main-img" />
<p><a href="https://amzn.to/2CZRJGQ" target="_blank" rel="noopener">The White Box</a>, and its accompanying book, "The White Box Essays" are a FANTASTIC resource, and I wish I'd had them available to me as I designed <a href="https://github.com/adamshostack/eop" target="_blank" rel="noopener">Elevation of Privilege</a> and helped with <a href="http://www.controlalthack.com/" target="_blank" rel="noopener">Control-Alt-Hack</a>.</p>
<p>The book is for people who want to make games, and it does a lovely job of teaching you how, including things like the relationship between story and mechanics, the role of luck, how the physical elements teach the players, and the tradeoffs that you as a designer make as you design, prototype, test, refine and then get your game to market. In the go-to-market side, there are chapters on self-publishing, crowdfunding, what needs to be on a box.</p>
<p>The Essays don't tell you how to create a specific game, they show you how to think about the choices you can make, and their impact on the game. For example:</p>
<blockquote>Consider these three examples of ways randomness might be used (or not) in a design:
<ul>
<li>Skill without randomness (e.g., chess). With no random elements, skill is critical. The more skilled a player is, the greater their odds to win. The most skilled player will beat a new player close to 100% of the time.</li>
<li>Both skill and randomness (e.g., poker). Poker has many random elements, but a skilled player is better at choosing how to deal with those random elements than an unskilled one. The best poker player can play with new players and win most of the time, but the new players are almost certain to win a few big hands. (This is why there is a larger World Series of Poker than World Chess Championship—new players feel like they have a chance against the pros at poker. Since more players feel they have a shot at winning, more of them play, and the game is more popular.)</li>
<li>Randomness without skill (e.g., coin-flipping). There is no way to apply skill to coin-flipping and even the “best” coin flipper in the world can’t do better than 50/50, even against a new player.</li></ul></blockquote>
<p>The chapter goes on to talk about how randomness allows players to claim both credit and avoid blame, when players make choices about die rolls and the impact on gameplay, and a host of other tradeoffs.</p>
<p>The writing is solid: it's as long as it needs to be, and then moves along (like a good game). What do you need to do, and why? How do you structure your work? If you've ever thought about designing a game, you should buy this book. But more than the book, there's a boxed set, with meeples, tokens, cubes, and disks for you to use as you prototype. (And in the book is a discussion of how to use them, and the impact of your choices on production costs.)</p>
<p>I cannot say enough good things about this. After I did my first game design work, I went and looked for a collection of knowledge like this, and it didn't exist. I'm glad it now does.</p>
<p>Image from <a href="http://www.atlas-games.com/product_tables/AG2903.php" target="_blank" rel="noopener">Atlas Games</a>.</p>3 Arguments for Threat Modeling2019-04-24T00:00:00Zhttps://shostack.org/blog/3-arguments-for-threat-modeling/AdamTop 3, from Continuum<span class="single-post-excerpt"><!-- excerpt -->Top 3, from Continuum<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/" alt="" class="single-post-main-img" />
<p>There's a great post from my friends at Continuum, <cite><a href="https://continuumsecurity.net/three-killer-arguments-for-adopting-threat-modeling-forward-this-to-your-boss/" target="_blank" rel="noopener">Three Killer Arguments for Adopting Threat Modeling</a></cite>.</p>
<p>Their arguments are "Threat Modeling Produces Measurable Security," "Threat Modeling Done Right Encourages Compliance," and "Threat Modeling Saves You Money." (Actually, they have 6.)</p>Episode 9 Spoilers2019-05-04T00:00:00Zhttps://shostack.org/blog/episode-9-spoilers/AdamToday is the last Star Wars Day before Episode 9 comes out, and brings the Skywalker saga to its end.<img src="https://shostack.org/blog/img/2019/D5vGFd8U0AAJJ95.jpg" alt="The art of Rise of the Skywalker, written by Phil Szostak" class="single-post-main-img" />
<p>Today is the last Star Wars Day before Episode 9 comes out, and brings the Skywalker saga to its end.</p>
<p>Film critics have long talked about how Star Wars is about Luke's Hero's Journey, or the core trilogy is about his relationship to his father, but they're wrong. Also, I regularly say that Star Wars is fundamentally the story of information disclosure: from the opening shot of Princess Leia's ship being pursued through the climatic destruction of the Death Star, it's an information security metaphor. But I too am wrong.</p>
<p>Star Wars is a story of how power corrupts.</p>
<p><b>The</b> prophecy, that someone will bring (or restore) balance to the Force, was never precisely stated in the films*. There were allusions: someone will restore balance to the Force. <a href="https://starwars.fandom.com/wiki/Chosen_One" target="_blank" rel="noopener">Variously</a>, the one expected to do that was Anakin, and then Luke, and then everyone who'd heard of the prophecy was either its presumptive subject or dead. But the Force is not out of balance in a way that a Skywalker can fix. The Force is out of balance because of the Skywalkers, and it is only through the ending of their line that balance can be restored. </p>
<p>Justifying that claim requires some of the story from outside the movies. The story starts with a Sith, <a href="https://amzn.to/2Jk2d7X" target="_blank" rel="noopener">Darth Plagueis</a>. He was interested in life extension by control of the Force. He was also master to Darth Sideous, who later became the Emperor.</p>
<p>The virgin birth of Anakin Skywalker was not just cheesy adaptation of Christian symbology, it was a massive head-fake that, without ever being explicit, got people treating Anakin as if he was supposed to be a savior figure, who died to answer for the sins of the world. But that's not the reason for his fatherless birth.</p>
<p>It was the experiments Plagueis did which led to the creation of Anakin Skywalker and it was Plagueis who set the saga in motion. Those actions unbalanced the Force, and the prophecy speaks of one who will bring back the balance.</p>
<p>The extreme and exceptional power of the Skywalkers break both the Jedi and the Sith. This is a side effect of the Force being out of balance. The way to restore balance to the Force is to end them, and that is what Rey will do, by killing Kylo Ren, son of Leia Skywalker.</p>
<p>Star Wars is a story of how how power corrupts, and how heroic quests for justice can both restore the world, and cause tremendous damage along the way.</p>
<p>To the final film's title, either it's a final headfake, or a reference to Skywalker as a *title*, those who quest for justice in the galaxy. </p>
<hr />
<p>* It was <a href="https://www.slashfilm.com/star-wars-chosen-one-prophecy/" target="_blank" rel="noopener">retconned</a> last month; older versions are tracked in <a href="https://starwars.fandom.com/wiki/Chosen_One" target="_blank" rel="noopener">this Wiki.</a></p>
<p>Also, I want to acknowledge that Emily Asher-Perrin first put forth the explanation that Skywalker is a title, in her post "<a href="https://www.tor.com/2019/04/22/hey-star-wars-episode-ix-dont-retcon-rey-into-a-skywalker/" target="_blank" rel="noopener">Hey, Star Wars: Episode IX — Don’t Retcon Rey Into a Skywalker</a>."</p>
<p>If you like this, I have plenty more <a href="https://shostack.org/blog/category/star-wars/">geeky Star Wars content</a>.</p>Testing Building Blocks2019-05-07T00:00:00Zhttps://shostack.org/blog/testing-building-blocks/AdamThere are a couple of new, short (4-page), interesting papers from a team at KU Leuven discussin the building blocks of threat modeling.<span class="single-post-excerpt"><!-- excerpt -->There are a couple of new, short (4-page), interesting papers from a team at KU Leuven discussin the building blocks of threat modeling.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/models.png" alt="testing building blocks of threat modeling" class="single-post-main-img" />
<p>There are a couple of new, short (4-page), interesting papers from a team at KU Leuven including:</p>
<ul>
<li><a href="https://lirias.kuleuven.be/2786461" target="_blank" rel="noopener">Knowledge is Power: Systematic Reuse of Privacy Knowledge for Threat Elicitation</a></li>
<li><a href="https://sion.info/assets/pdf/SAC2019/db29d84ae139406cb606e06bc6116248.pdf" target="_blank" rel="noopener">A Comparison of System Description Models for Data Protection by Design</a></li>
</ul>
<p>What makes these interesting is that they are digging into better-formed building blocks of threat modeling, comparing them to requirements, and analyzing how they stack up.</p>
<p>The work is centered on threat modeling for privacy and data protection, but what they look at includes STRIDE, CAPEC and CWE. What makes this interesting is not just the results of the comparison, but that they compare and contrast between techniques (DFD variants vs CARiSMA extended; STRIDE vs CAPEC or OWASP). Comparing building blocks at a granular level allows us to ask the question "what went wrong in that threat modeling project" and tweak one part of it, rather than throwing out threat modeling, or trying to train people in an entire method.</p>
Promoting Threat Modeling Work2019-05-13T00:00:00Zhttps://shostack.org/blog/promoting-threat-modeling-work/Some thoughts on promoting others’ threat modeling work.<span class="single-post-excerpt"><!-- excerpt -->Some thoughts on promoting others’ threat modeling work.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/stephanie-krist-100516-unsplash_framed.jpg" alt="field of wildflowers" class="single-post-main-img" />
<p>Quick: are all the flowers the same species?</p>
<p>People regularly ask me to promote their threat modeling work, and I'm often happy to do so, even when I have questions about it. There are a few things I look at before I do, and I want to share some of those because I want to promote work that moves things forward, so we all benefit from it. Some of the things I look for include: </p>
<ul>
<li>Specifics. If you have a new threat modeling approach, that's great. Describe the steps concisely and crisply. (If I can't find a list in your slide deck or paper, it's not concise and crisp.) If you have a new variant on a building block or a new way to answer one of the four questions, be clear about that, so that those seeing your work can easily put it into context, and know what's different. The four question framework makes this easy. For example, "this is an extension of 'what are we working on,' and you can use any method to answer the other questions." Such a sentence makes it easy for those thinking of picking up your tool to put it immediately in context.</li>
<li>Names. Name your work. We don't discuss Guido's programming language with a strange dependence on whitespace, we discuss Python. For others to understand it, your work needs a name, not an adjective. There are at least half a dozen distinct 'awesome' ways to threat model being promoted today. Their promoters don't make it easy to figure out what's different from the many other awesome approaches. These descriptors also carry an implication that only they are awesome, and the rest, by elimination, must suck. Lastly, I don't believe that anyone is promoting The Awesome Threat Modeling Method — if you are, I apologize, I was looking for an illustrative name that avoids calling anyone out.<br /><br />
(Microsoft cast a pall over the development of threat modeling by having at least four different things labeled 'the Microsoft approach to threat modeling.' Those included DFD+STRIDE, Asset-entry, patterns and practices and TAM, and variations on each.) Also, we discuss Python 2 versus Python 3, not 'the way Guido talked about Python in 2014 in that video that got taken off Youtube because it used walk-on music..'</li>
<li>Respect. Be respectful of the work others have done, and the approaches they use. Threat modeling is a very big tent, and what doesn't work for you may well work for others. This doesn't mean 'never criticize,' but it does mean don't cast shade. It's fine to say 'Threat modeling an entire system at once doesn't work in agile teams at west coast software companies.' It's even better to say 'Writing misuse cases got an NPS of -50 and Elevation of Privilege scored 15 at the same 6 west coast companies founded in the last 5 years.' <br /><br />
I won't promote work that tears down other work for the sake of tearing it down, or that does so by saying either 'this doesn't work' without specifics of the situation in which it didn't work. Similarly, it's fine to say "it took too long" if you say how long it took to do what steps, and, ideally, quantify 'too long.'</li>
</ul>
<p>I admit that I have failed at each of these in the past, and endeavor to do better. Specifics, labels, and respectful conversation help us understand the field of flowers.</p>
<p>What else should we do better as we improve the ways we tackle threat modeling?</p>
<p><a href="https://unsplash.com/photos/xTbSUh4N_aM" target="_blank" rel="noopener">Photo</a> by <a href="https://unsplash.com/photos/xTbSUh4N_aM?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank" rel="noopener">Stephanie Krist</a> on Unsplash.</p>Polymorphic Warnings On My Mind2019-05-29T00:00:00Zhttps://shostack.org/blog/polymorphic-warnings-on-my-mind/AdamThe more we see it, the more we ignore it.<span class="single-post-excerpt"><!-- excerpt -->The more we see it, the more we ignore it.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/securitywarnings-1.png" alt="screenshot of paper mentioned in this post" class="single-post-main-img" />
<p>There's a fascinating paper, "<a href="https://neurosecurity.byu.edu/media/Vance_et_al._2017_MISQ.pdf" target="_blank" rel="noopener">Tuning Out Security Warnings: A Longitudinal Examination Of Habituation Through Fmri, Eye Tracking, And Field Experiments</a>." (It came out about a year ago.)</p>
<p>The researchers examined what happens in people's brains when they look at warnings, and they found that:</p>
<blockquote>
<p>Research in the fields of information systems and human-computer interaction has shown that habituation—decreased response to repeated stimulation—is a serious threat to the effectiveness of security warnings. Although habituation is a neurobiological phenomenon that develops over time, past studies have only examined this problem cross-sectionally. Further, past studies have not examined how habituation influences actual security warning adherence in the field. For these reasons, the full extent of the problem of habituation is unknown.</p>
<p>We address these gaps by conducting two complementary longitudinal experiments. First, we performed an experiment collecting fMRI and eye-tracking data simultaneously to directly measure habituation to security warnings as it develops in the brain over a five-day workweek. Our results show not only a general decline of participants’ attention to warnings over time but also that attention recovers at least partially between workdays without exposure to the warnings. Further, we found that updating the appearance of a warning—that is, a polymorphic design—substantially reduced habituation of attention.</p>
<p>Second, we performed a three-week field experiment in which users were naturally exposed to privacy permission warnings as they installed apps on their mobile devices. Consistent with our fMRI results, users’ warning adherence substantially decreased over the three weeks. However, for users who received polymorphic permission warnings, adherence dropped at a substantially lower rate and remained high after three weeks, compared to users who received standard warnings. Together, these findings provide the most complete view yet of the problem of habituation to security warnings and demonstrate that polymorphic warnings can substantially improve adherence.</p></blockquote>
<p>It's not short, but it's not hard reading. Worthwhile if you care about usable security.</p>When security goes off the rails2019-06-07T00:00:00Zhttps://shostack.org/blog/when-security-goes-off-the-rails/AdamMy newest post over at Dark Reading ponders regulation.<span class="single-post-excerpt"><!-- excerpt -->My newest post over at Dark Reading ponders regulation.<!-- /excerpt --></span>
<p>New at Dark Reading, my <a href="https://www.darkreading.com/threat-intelligence/when-security-goes-off-the-rails/a/d-id/1334851" target="_blank" rel="noopener">When Security Goes Off the Rails</a>, Cyber can learn a lot from the highly regulated world of rail travel. The most important lesson: the value of impartial analysis.</p>
<p>(As I watch the competing stories, "<a href="https://arstechnica.com/information-technology/2019/05/eternally-blue-baltimore-city-leaders-blame-nsa-for-ransomware-attack/" target="_blank" rel="noopener">Baltimore City leaders blame NSA for ransomware attack</a>," and "<a href="https://www.nytimes.com/2019/05/31/us/nsa-baltimore-ransomware.html" target="_blank" rel="noopener">N.S.A. Denies Its Cyberweapon Was Used in Baltimore Attack, Congressman Says</a>," I'd like to see an investigations capability that can give us facts.)</p>DNS Security2019-06-13T00:00:00Zhttps://shostack.org/blog/dns-security/AdamI'm happy to say that some new research by Jay Jacobs, Wade Baker, and myself is now available, thanks to the Global Cyber Alliance.<img src="https://shostack.org/blog/img/2019/economic-value-of-dns-security.png" alt="header: The Economic Value of DNS Security" class="single-post-main-img" />
<p>I'm happy to say that some new research by Jay Jacobs, Wade Baker, and myself is now available, thanks to the Global Cyber Alliance.</p>
<p>They asked us to look at the value of DNS security, such as when your DNS provider uses threat intel to block malicious sites. It's surprising how effective it is for a tool that's so easy to deploy. (Just point to a DNS server like 9.9.9.9).</p>
<p>The report is available from GCA's site: <a href="https://www.globalcyberalliance.org/dns-economic-value-report/" target="_blank" rel="noopener">Learn About How DNS Security Can Mitigate One-Third of Cyber Incidents</a></p>Happy Juneteenth!2019-06-19T00:00:00Zhttps://shostack.org/blog/happy-juneteenth-3/AdamJuneteenth is the celebration of the end of slavery in the US. We should have more holidays that celebrate freedom for the sake of freedom.<img src="https://shostack.org/blog/img/2019/juneteenth.jpg" alt="celebrate emancipation" class="single-post-main-img" style="float:left;" />
<p>Juneteenth is the celebration of the end of slavery in the US. We should have more holidays that celebrate freedom for the sake of freedom.</p>
<p>So happy Juneteenth, everyone!</p>Passwords Advice2019-06-26T00:00:00Zhttps://shostack.org/blog/passwords-advice/adamBruse Marshall has put together a useful comparison of password requirements from OWASP ASVS v3 and v4.<span class="single-post-excerpt"><!-- excerpt -->Bruse Marshall has put together a useful comparison of password requirements from OWASP ASVS v3 and v4.<!-- /excerpt --></span>
<p>Bruce Marshall has put together a comparison of OWASP ASVS v3 and v4 password requirements: <a href="https://docs.google.com/spreadsheets/d/1UbOsbgv4WsmuVuL8M3NoCRD7UQKAw7vl6BLYaLk-EtI/" target="_blank" rel="noopener">OWASP ASVS 3.0 & 4.0 Comparison</a>. This is useful in and of itself, and is also the sort of thing that more standards bodies should do, by default.</p>
<p>It's all too common to have a new standard come out without clear diffs. It's all too common for new standards to build closely on other standards, without clearly saying what they've altered and why. This leaves the analysis of 'what's different' to each user of the standards. It increases the probability of errors. Both drive cost and waste effort. We should judge standards on their delivery of these important contextual documents.</p>The Unanimous Declaration of the 13 United States2019-07-04T00:00:00Zhttps://shostack.org/blog/the-unanimous-declaration-of-the-13-united-states/AdamWhen was the last time you looked over what our country was founded upon?<span class="single-post-excerpt"><!-- excerpt -->When was the last time you looked over what our country was founded upon?<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2017/declaration-of-independence.jpg" alt="The Declaration of Independence of the United States of America" class="single-post-main-img" height="376" width="359" style="float: right;" />
<p>In CONGRESS, July 4, 1776</p>
<p>The unanimous Declaration of the thirteen united States of America,</p>
<p>When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature’s God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.</p>
<p>We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. –That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, –That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security. —Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government. The history of the present King of Great Britain [George III] is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world.</p>
<p>He has refused his Assent to Laws, the most wholesome and necessary for the public good.</p>
<p>He has forbidden his Governors to pass Laws of immediate and pressing importance, unless suspended in their operation till his Assent should be obtained; and when so suspended, he has utterly neglected to attend to them.</p>
<p>He has refused to pass other Laws for the accommodation of large districts of people, unless those people would relinquish the right of Representation in the Legislature, a right inestimable to them and formidable to tyrants only.</p>
<p>He has called together legislative bodies at places unusual, uncomfortable, and distant from the depository of their public Records, for the sole purpose of fatiguing them into compliance with his measures.</p>
<p>He has dissolved Representative Houses repeatedly, for opposing with manly firmness his invasions on the rights of the people.</p>
<p>He has refused for a long time, after such dissolutions, to cause others to be elected; whereby the Legislative powers, incapable of Annihilation, have returned to the People at large for their exercise; the State remaining in the mean time exposed to all the dangers of invasion from without, and convulsions within.</p>
<p>He has endeavoured to prevent the population of these States; for that purpose obstructing the Laws for Naturalization of Foreigners; refusing to pass others to encourage their migrations hither, and raising the conditions of new Appropriations of Lands.</p>
<p>He has obstructed the Administration of Justice, by refusing his Assent to Laws for establishing Judiciary powers.</p>
<p>He has made Judges dependent on his Will alone, for the tenure of their offices, and the amount and payment of their salaries.</p>
<p>He has erected a multitude of New Offices, and sent hither swarms of Officers to harass our people, and eat out their substance.</p>
<p>He has kept among us, in times of peace, Standing Armies without the consent of our legislatures.</p>
<p>He has affected to render the Military independent of and superior to the Civil power.</p>
<p>He has combined with others to subject us to a jurisdiction foreign to our constitution and unacknowledged by our laws; giving his Assent to their Acts of pretended Legislation:</p>
<ul style="list-style-type: none;">
<li>For Quartering large bodies of armed troops among us:</li>
<li>For protecting them, by a mock Trial, from punishment for any Murders which they should commit on the Inhabitants of these States:</li>
<li>For cutting off our Trade with all parts of the world:</li>
<li>For imposing Taxes on us without our Consent:</li>
<li>For depriving us, in many cases, of the benefits of Trial by Jury:</li>
<li>For transporting us beyond Seas to be tried for pretended offences:</li>
<li>For abolishing the free System of English Laws in a neighbouring Province, establishing therein an Arbitrary government, and enlarging its Boundaries so as to render it at once an example and fit instrument for introducing the same absolute rule into these Colonies:</li>
<li>For taking away our Charters, abolishing our most valuable Laws, and altering fundamentally the Forms of our Governments:</li>
<li>For suspending our own Legislatures, and declaring themselves invested with power to legislate for us in all cases whatsoever.</li>
</ul>
<p>He has abdicated Government here, by declaring us out of his Protection and waging War against us.</p>
<p>He has plundered our seas, ravaged our Coasts, burnt our towns, and destroyed the lives of our people.</p>
<p>He is at this time transporting large Armies of foreign Mercenaries to compleat the works of death, desolation and tyranny, already begun with circumstances of Cruelty and perfidy scarcely paralleled in the most barbarous ages, and totally unworthy the Head of a civilized nation.</p>
<p>He has constrained our fellow Citizens taken Captive on the high Seas to bear Arms against their Country, to become the executioners of their friends and Brethren, or to fall themselves by their Hands.</p>
<p>He has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages, whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions.</p>
<p>In every stage of these Oppressions We have Petitioned for Redress in the most humble terms: Our repeated Petitions have been answered only by repeated injury. A Prince whose character is thus marked by every act which may define a Tyrant, is unfit to be the ruler of a free people.</p>
<p>Nor have We been wanting in attentions to our British brethren. We have warned them from time to time of attempts by their legislature to extend an unwarrantable jurisdiction over us. We have reminded them of the circumstances of our emigration and settlement here. We have appealed to their native justice and magnanimity, and we have conjured them by the ties of our common kindred to disavow these usurpations, which, would inevitably interrupt our connections and correspondence. They too have been deaf to the voice of justice and of consanguinity. We must, therefore, acquiesce in the necessity, which denounces our Separation, and hold them, as we hold the rest of mankind, Enemies in War, in Peace Friends.</p>
<p>We, therefore, the Representatives of the united States of America, in General Congress, Assembled, appealing to the Supreme Judge of the world for the rectitude of our intentions, do, in the Name, and by the Authority of the good People of these Colonies, solemnly publish and declare, That these United Colonies are, and of Right ought to be Free and Independent States; that they are Absolved from all Allegiance to the British Crown, and that all political connection between them and the State of Great Britain, is and ought to be totally dissolved; and that as Free and Independent States, they have full Power to levy War, conclude Peace, contract Alliances, establish Commerce, and to do all other Acts and Things which Independent States may of right do. And for the support of this Declaration, with a firm reliance on the protection of divine Providence, we mutually pledge to each other our Lives, our Fortunes and our sacred Honor.</p>
<p>The signers of the Declaration represented the new states as follows:</p>
<dl>
<dt>New Hampshire</dt>
<dd>Josiah Bartlett, William Whipple, Matthew Thornton</dd>
<dt>Massachusetts</dt>
<dd>John Hancock, Samual Adams, John Adams, Robert Treat Paine, Elbridge Gerry</dd>
<dt>Rhode Island</dt>
<dd>Stephen Hopkins, William Ellery</dd>
<dt>Connecticut</dt>
<dd>Roger Sherman, Samuel Huntington, William Williams, Oliver Wolcott</dd>
<dt>New York</dt>
<dd>William Floyd, Philip Livingston, Francis Lewis, Lewis Morris</dd>
<dt>New Jersey</dt>
<dd>Richard Stockton, John Witherspoon, Francis Hopkinson, John Hart, Abraham Clark</dd>
<dt>Pennsylvania</dt>
<dd>Robert Morris, Benjamin Rush, Benjamin Franklin, John Morton, George Clymer, James Smith, George Taylor, James Wilson, George Ross</dd>
<dt>Delaware</dt>
<dd>Caesar Rodney, George Read, Thomas McKean</dd>
<dt>Maryland</dt>
<dd>Samuel Chase, William Paca, Thomas Stone, Charles Carroll of Carrollton</dd>
<dt>Virginia</dt>
<dd>George Wythe, Richard Henry Lee, Thomas Jefferson, Benjamin Harrison, Thomas Nelson, Jr., Francis Lightfoot Lee, Carter Braxton</dd>
<dt>North Carolina</dt>
<dd>William Hooper, Joseph Hewes, John Penn</dd>
<dt>South Carolina</dt>
<dd>Edward Rutledge, Thomas Heyward, Jr., Thomas Lynch, Jr., Arthur Middleton</dd>
<dt>Georgia</dt>
<dd>Button Gwinnett, Lyman Hall, George Walton</dd>
</dl>
<p>Image: <a href="http://www.loc.gov/exhibits/treasures/trt024.html" target="_blank" rel="noopener">Washington's copy of the Declaration of Independence</a>, from the Library of Congress.</p>The Unanimous Declaration of the Thirteen United States of America2019-07-04T00:00:00Zhttps://shostack.org/blog/the-unanimous-declaration-of-the-thirteen-united-states-of-america-3/AdamWhen was the last time you looked over what our country was founded upon?<span class="single-post-excerpt"><!-- excerpt -->When was the last time you looked over what our country was founded upon?<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/declaration-of-independence.jpg" alt="The Declaration of Independence of the United States of America" class="single-post-main-img" />
<p>(Reading the declaration of independence is a useful reminder of why we chose to dissolve the political bands that connected us to another. It's not about jingoism, or the results of a plebiscite, but about a "long train of abuses and usurpations, pursuing invariably the same Object," and the proper response to such acts.)</p>
<p>In CONGRESS, July 4, 1776</p>
<p>The unanimous Declaration of the thirteen united States of America,</p>
<p>When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature’s God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.</p>
<p>We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. –That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, –That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security. —Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government. The history of the present King of Great Britain [George III] is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world.</p>
<p>He has refused his Assent to Laws, the most wholesome and necessary for the public good.</p>
<p>He has forbidden his Governors to pass Laws of immediate and pressing importance, unless suspended in their operation till his Assent should be obtained; and when so suspended, he has utterly neglected to attend to them.</p>
<p>He has refused to pass other Laws for the accommodation of large districts of people, unless those people would relinquish the right of Representation in the Legislature, a right inestimable to them and formidable to tyrants only.</p>
<p>He has called together legislative bodies at places unusual, uncomfortable, and distant from the depository of their public Records, for the sole purpose of fatiguing them into compliance with his measures.</p>
<p>He has dissolved Representative Houses repeatedly, for opposing with manly firmness his invasions on the rights of the people.</p>
<p>He has refused for a long time, after such dissolutions, to cause others to be elected; whereby the Legislative powers, incapable of Annihilation, have returned to the People at large for their exercise; the State remaining in the mean time exposed to all the dangers of invasion from without, and convulsions within.</p>
<p>He has endeavoured to prevent the population of these States; for that purpose obstructing the Laws for Naturalization of Foreigners; refusing to pass others to encourage their migrations hither, and raising the conditions of new Appropriations of Lands.</p>
<p>He has obstructed the Administration of Justice, by refusing his Assent to Laws for establishing Judiciary powers.</p>
<p>He has made Judges dependent on his Will alone, for the tenure of their offices, and the amount and payment of their salaries.</p>
<p>He has erected a multitude of New Offices, and sent hither swarms of Officers to harass our people, and eat out their substance.</p>
<p>He has kept among us, in times of peace, Standing Armies without the consent of our legislatures.</p>
<p>He has affected to render the Military independent of and superior to the Civil power.</p>
<p>He has combined with others to subject us to a jurisdiction foreign to our constitution and unacknowledged by our laws; giving his Assent to their Acts of pretended Legislation:</p>
<ul style="list-style-type: none;">
<li>For Quartering large bodies of armed troops among us:</li>
<li>For protecting them, by a mock Trial, from punishment for any Murders which they should commit on the Inhabitants of these States:</li>
<li>For cutting off our Trade with all parts of the world:</li>
<li>For imposing Taxes on us without our Consent:</li>
<li>For depriving us, in many cases, of the benefits of Trial by Jury:</li>
<li>For transporting us beyond Seas to be tried for pretended offences:</li>
<li>For abolishing the free System of English Laws in a neighbouring Province, establishing therein an Arbitrary government, and enlarging its Boundaries so as to render it at once an example and fit instrument for introducing the same absolute rule into these Colonies:</li>
<li>For taking away our Charters, abolishing our most valuable Laws, and altering fundamentally the Forms of our Governments:</li>
<li>For suspending our own Legislatures, and declaring themselves invested with power to legislate for us in all cases whatsoever.</li>
</ul>
<p>He has abdicated Government here, by declaring us out of his Protection and waging War against us.</p>
<p>He has plundered our seas, ravaged our Coasts, burnt our towns, and destroyed the lives of our people.</p>
<p>He is at this time transporting large Armies of foreign Mercenaries to compleat the works of death, desolation and tyranny, already begun with circumstances of Cruelty and perfidy scarcely paralleled in the most barbarous ages, and totally unworthy the Head of a civilized nation.</p>
<p>He has constrained our fellow Citizens taken Captive on the high Seas to bear Arms against their Country, to become the executioners of their friends and Brethren, or to fall themselves by their Hands.</p>
<p>He has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages, whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions.</p>
<p>In every stage of these Oppressions We have Petitioned for Redress in the most humble terms: Our repeated Petitions have been answered only by repeated injury. A Prince whose character is thus marked by every act which may define a Tyrant, is unfit to be the ruler of a free people.</p>
<p>Nor have We been wanting in attentions to our British brethren. We have warned them from time to time of attempts by their legislature to extend an unwarrantable jurisdiction over us. We have reminded them of the circumstances of our emigration and settlement here. We have appealed to their native justice and magnanimity, and we have conjured them by the ties of our common kindred to disavow these usurpations, which, would inevitably interrupt our connections and correspondence. They too have been deaf to the voice of justice and of consanguinity. We must, therefore, acquiesce in the necessity, which denounces our Separation, and hold them, as we hold the rest of mankind, Enemies in War, in Peace Friends.</p>
<p>We, therefore, the Representatives of the united States of America, in General Congress, Assembled, appealing to the Supreme Judge of the world for the rectitude of our intentions, do, in the Name, and by the Authority of the good People of these Colonies, solemnly publish and declare, That these United Colonies are, and of Right ought to be Free and Independent States; that they are Absolved from all Allegiance to the British Crown, and that all political connection between them and the State of Great Britain, is and ought to be totally dissolved; and that as Free and Independent States, they have full Power to levy War, conclude Peace, contract Alliances, establish Commerce, and to do all other Acts and Things which Independent States may of right do. And for the support of this Declaration, with a firm reliance on the protection of divine Providence, we mutually pledge to each other our Lives, our Fortunes and our sacred Honor.</p>
<p>The signers of the Declaration represented the new states as follows:</p>
<dl>
<dt>New Hampshire</dt>
<dd>Josiah Bartlett, William Whipple, Matthew Thornton</dd>
<dt>Massachusetts</dt>
<dd>John Hancock, Samual Adams, John Adams, Robert Treat Paine, Elbridge Gerry</dd>
<dt>Rhode Island</dt>
<dd>Stephen Hopkins, William Ellery</dd>
<dt>Connecticut</dt>
<dd>Roger Sherman, Samuel Huntington, William Williams, Oliver Wolcott</dd>
<dt>New York</dt>
<dd>William Floyd, Philip Livingston, Francis Lewis, Lewis Morris</dd>
<dt>New Jersey</dt>
<dd>Richard Stockton, John Witherspoon, Francis Hopkinson, John Hart, Abraham Clark</dd>
<dt>Pennsylvania</dt>
<dd>Robert Morris, Benjamin Rush, Benjamin Franklin, John Morton, George Clymer, James Smith, George Taylor, James Wilson, George Ross</dd>
<dt>Delaware</dt>
<dd>Caesar Rodney, George Read, Thomas McKean</dd>
<dt>Maryland</dt>
<dd>Samuel Chase, William Paca, Thomas Stone, Charles Carroll of Carrollton</dd>
<dt>Virginia</dt>
<dd>George Wythe, Richard Henry Lee, Thomas Jefferson, Benjamin Harrison, Thomas Nelson, Jr., Francis Lightfoot Lee, Carter Braxton</dd>
<dt>North Carolina</dt>
<dd>William Hooper, Joseph Hewes, John Penn</dd>
<dt>South Carolina</dt>
<dd>Edward Rutledge, Thomas Heyward, Jr., Thomas Lynch, Jr., Arthur Middleton</dd>
<dt>Georgia</dt>
<dd>Button Gwinnett, Lyman Hall, George Walton</dd>
</dl>
<p>Image: <u>Washington's copy of the Declaration of Independence</u> [link to http://www.loc.gov/exhibits/treasures/trt024.html no longer works], from the Library of Congress.</p>The Road to Mediocrity2019-07-05T00:00:00Zhttps://shostack.org/blog/the-road-to-mediocrity/AdamThe road to mediocre writing is paved with over-simplification and distraction.<span class="single-post-excerpt"><!-- excerpt -->The road to mediocre writing is paved with over-simplification and distraction.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/feasible.png" alt="definition of feasible" style="max-width:760px; height:auto;" />
<p>Google Docs has chosen to red-underline the word "feasible," which, as you can see, is in its dictionary, to suggest "possible." "Possible," possibly, was not the word I selected, because it means something different.</p>
<p>Good writing is direct. Good writing respects the reader. Good writing doesn't tax the reader accidentally. It uses simple words when possible, effectively utilizing, no wait, utilize means you're attempting to make your writing sound fancier than it need be. Never use "utilize" when its feasible to say "use."</p>
<p>Good writing tools are unobtrusive. They don't randomize the writer away from what they're working on to try to figure out why in holy hell it's wrong to be using the word feasible and why it needs to be replaced.</p>
<p>The road to mediocre writing is paved with over-simplification and distraction.</p>
<p>My current go-to is Pinker's <a href="https://www.amazon.com/gp/product/0143127799/ref=as_li_tl?ie=UTF8&tag=adamshostack-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=0143127799&linkId=1af9ab3247a7c896386b04ded0748eeb" target="_blank" rel="noopener">The Sense of Style</a>. What else helps you think about writing?</p>Safety and Security in Automated Driving2019-07-08T00:00:00Zhttps://shostack.org/blog/safety-and-security-in-automated-driving/AdamLet’s explore the risks associated with Automated Driving.<span class="single-post-excerpt"><!-- excerpt -->Let’s explore the risks associated with Automated Driving.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/automated-driving-risk-cube.png" alt="Automated Driving risk cube" class="single-post-main-img" />
<p>"<cite><a href="https://www.aptiv.com/docs/default-source/white-papers/safety-first-for-automated-driving-aptiv-white-paper.pdf" target="_blank" rel="noopener">Safety First For Automated Driving</a>"</cite> is a big, over-arching whitepaper from a dozen automotive manufacturers and suppliers.</p>
<p>One way to read it is that those disciplines have strongly developed safety cultures, which generally do not consider cybersecurity problems. This paper is the cybersecurity specialists making the argument that cyber will fit into safety, and how to do so.</p>
<p>In a sense, this white paper captures a strategic threat model. What are we working on? Autonomous vehicles. What can go wrong? Security issues of all types. What are we going to do? Integrate with and extend the existing safety discipline. Give specific threat information and mitigation strategies to component designers.</p>
<p>I find some parts of it surprising. (I would find it more surprising if I were to look at a 150 page document and not find anything surprising.)</p>
<blockquote>Contrary to the commonly used definition of an [minimal risk condition, (MRC)], which describes only a standstill, this publication expands the definition to also include degraded operation and takeovers by the vehicle operator. Final MRCs refer to MRCs that allow complete deactivation of the automated driving system, e.g. standstill or takeover by the vehicle operator.</blockquote>
<p>One of the "minimal risk" maneuvers listed (table 4) is an emergency stop. And while an emergency stop may certainly be a risk minimizing action in some circumstances, describing it as such is surprising, especially when presented in contrast to a "safe stop" maneuver.</p>
<p>It's important to remember that driving is incredibly dangerous. In the United States in 2018, an <a href="https://www.nsc.org/road-safety/safety-topics/fatality-estimates" target="_blank" rel="noopener">estimated</a> 40,000 people lost their lives in car crashes, and 4.5 million people were seriously injured. (I've seen elsewhere that a million of those are hospitalized.) A great many of those injuries are caused by either drunk or distracted drivers, and autonomous vehicles could save many lives, even if imperfect.</p>
<p>Which brings me to a part that I really like, which is the 'three dimensions of risk treatment' figure (Figure 8, shown). Words like "risk" and "risk management" encompass a lot, and this figure is a nice side contribution of the paper.</p>
<p>I also like Figure 27 & 28 (shown), showing risks associated with a generic architecture. Having this work available allows systems builders to consider the risks to various components they're working on. Having it available lets us have a conversation about the systematic risks that exist, but also, allows security experts to ask "is this the right set of risks for systems builders to think about?"</p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2019/automotive-safety-fig28.png" alt="A chart of system components in an autonomous vehicle" title="automotive-safety-fig28.png" border="0" width="448" height="600" />NIST on SDLs2019-07-11T00:00:00Zhttps://shostack.org/blog/nist-on-sdls/AdamMitigating the Risk of Software Vulnerabilities by Adopting a Secure Software Development Framework (SSDF) from NIST is open for comment.<span class="single-post-excerpt"><!-- excerpt -->Mitigating the Risk of Software Vulnerabilities by Adopting a Secure Software Development Framework (SSDF) from NIST is open for comment.<!-- /excerpt --></span>
<p>There's a new draft available from NIST, "<cite><a href="https://csrc.nist.gov/CSRC/media/Publications/white-paper/2019/06/07/mitigating-risk-of-software-vulnerabilities-with-ssdf/draft/documents/ssdf-for-mitigating-risk-of-software-vulns-draft.pdf" target="_blank" rel="noopener">Mitigating the Risk of Software Vulnerabilities by Adopting a Secure Software Development Framework (SSDF)</a></cite>." They are accepting comments through August 5th.</p>Threat Modeling at Layer 82019-07-12T00:00:00Zhttps://shostack.org/blog/threat-modeling-at-layer-8/AdamDiscussing online conflict on the AppSec Podcast<span class="single-post-excerpt"><!-- excerpt -->Discussing online conflict on the AppSec Podcast<!-- /excerpt --></span>
<p>Conflict online — bullying, trolling, threats and the like are everywhere. The media coverage is shifting from "OMG what are we doing about this?!" to "<a href="https://time.com/5619999/instagram-mosseri-bullying-artificial-intelligence/" target="_blank" rel="noopener">Wow, this is really hard</a>." (Ayup)</p>
<p>I've been <a href="https://github.com/adamshostack/conflictmodeling" target="_blank" rel="noopener">exploring</a> how to engineer for these problems, and I joined Chris Romeo and Robert Hurlbut to talk about it on <u>the AppSec Podcast</u> [link to https://www.securityjourney.com/blog/adam-shostack-threat-modeling-layer-8-and-conflict-modeling/ no longer works]. I hope it's worth a listen, and I'm hopeful that you'll join in and collaborate.</p>Books Worth Reading: Q2 2019 (Apollo Edition)2019-07-15T00:00:00Zhttps://shostack.org/blog/books-worth-reading-q2-2019-apollo-edition/AdamSome books worth reading, particularly related to space and history<span class="single-post-excerpt"><!-- excerpt -->Some books worth reading, particularly related to space and history<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/IMG_3863.jpg" alt=" " class="single-post-main-img" />
<ul>
<li><a href="https://amzn.to/32vdnOm" target="_blank" rel="noopener">A Man on the Moon</a>, Andrew Chaikin is probably the best of the general histories of the moon landings.</li>
<li><a href="https://amzn.to/2jSSCKP" target="_blank" rel="noopener">Failure is not an Option</a>, by Gene Kranz, who didn't actually say that during Apollo 13.</li>
<li><a href="https://amzn.to/2lnU08J" target="_blank" rel="noopener">Marketing The Moon</a> by David Scott and Richard Jurek. I was surprised what a good history this was, and how much it brought in the overall history of the program and put it in context.</li>
<li><a href="https://amzn.to/30B6aud" target="_blank" rel="noopener">Spacesuit: Fashioning Apollo</a>, as <a href="https://adam.shostack.org/blog/2017/07/worthwhile-books-q2-2017/">mentioned</a> previously.</li>
<li><a href="https://amzn.to/2Y5yxTC" target="_blank" rel="noopener">Full Moon</a>. Gorgeous photography, printed from very high quality scans; the author convinced NASA to provide access to first generation negatives. You may need to search on Amazon to find a reasonably priced copy.</li>
</ul>
<p>Also worthwhile: From the Earth to The Moon (<a href="https://amzn.to/2lmaxtH" target="_blank" rel="noopener">DVD</a>, <a href="https://amzn.to/2lhb1Bg" target="_blank" rel="noopener">Blue Ray</a>), and the <a href="https://www.museumofflight.org/" target="_blank" rel="noopener">Museum of Flight</a> Apollo exhibit, in Seattle through September 2nd.</p>Happy Apollo Day!2019-07-20T00:00:00Zhttps://shostack.org/blog/happy-apollo-day/AdamToday is the 50th Anniversary of ‘One small step for a man, one giant leap for mankind.’<img src="https://shostack.org/blog/img/2019/11318018-3x2-940x627.jpg" alt="photo of astronaut after landing on the moon" class="single-post-main-img" />
<p>Today is the 50th Anniversary of ‘One small step for a man, one giant leap for mankind.’</p>
<p>It's an event worth celebrating, in the same way we celebrate <a href="https://yurisnight.net/about/" target="_blank" rel="noopener">Yuri's Night</a>.</p>
<p>The holy days — the holidays — that we celebrate say a great deal about us. They shape who we are. The controversies that emerge when we try to add (Martin Luther King) or remove a holiday (Columbus Day) are controversies because they express who we are, and how that could be changing.</p>
<p>Some of these new holidays are silly: <a href="https://adam.shostack.org/blog/2004/09/avast-ye-landlubbers-its-national-talk-like-a-pirate-day/">Talk Like a Pirate Day</a>, <a href="https://adam.shostack.org/blog/2017/05/threat-modeling-and-star-wars/">Star Wars Day</a>.</p>
<p>Some of the holidays are happy, some are somber.</p>
<p>Apollo Day could celebrate the engineering achievement, the risks and dangers of exploration, and the sadness that we haven't been back.</p>
<p>So, the only way to start a holiday is to start a holiday. Start by wishing people a happy Apollo day, and we'll see if it reaches escape velocity.</p>Valuing CyberSecurity Research Datasets2019-07-26T00:00:00Zhttps://shostack.org/blog/valuing-cybersecurity-research-datasets/AdamA paper at the Workshop on the Economics of Information Security titled “Valuing CyberSecurity Research Datasets” focuses on the value of the IMPACT data sharing platform at DHS, and how the availability of data shapes research.<span class="single-post-excerpt"><!-- excerpt -->A paper at the Workshop on the Economics of Information Security titled “Valuing CyberSecurity Research Datasets” focuses on the value of the IMPACT data sharing platform at DHS, and how the availability of data shapes research.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/Valuing-Cybersecurity-Research-Datasets_framed.png" alt="screenshot of article mentioned in this post" class="single-post-main-img" />
<p>There was a really interesting paper at the <a href="https://weis2019.econinfosec.org/program/agenda/" target="_blank" rel="noopener">Workshop on the Economics of Information Security</a>. The paper is "<a href="https://weis2019.econinfosec.org/wp-content/uploads/sites/6/2019/05/WEIS_2019_paper_41.pdf" target="_blank" rel="noopener">Valuing CyberSecurity Research Datasets</a>."</p>
<p>The paper focuses on the value of the IMPACT data sharing platform at DHS, and how the availability of data shapes the research that's done. </p>
<p>On its way to that valuation, a very useful contribution of the paper is the analysis of types of research data which exist, and the purposes for which it can be used:</p>
<blockquote>Note that there has been considerable attention paid to information sharing among operators through organizations such as ISACs. In contrast, we examine data provisioning done primarily for research purposes. Cybersecurity data resides on a use spectrum – some research data is relevant for operations and vice versa. Yet, as difficult as it can be to make the case for data sharing among operators, its even harder for researchers. Data sharing for research is generally not deemed as important as for operations. Outcomes are not immediately quantifiable. Bridging the gap between operators and researchers, rather than between operators alone, is further wrought with coordination and value challenges. Finally, research data is often a public good, which means it will likely be undervalued by the parties involved. </blockquote>
<p>The paper enumerates benefits of research, including advancing scientific understanding, enabling infrastructure, creating parity in access to ground truth(s) for academics, technology developers, and others who don't directly gather data. It also enumerates a set of barriers to research, including legal and ethical risk, costs, value uncertainty, and incentives.</p>
<p>These issues were highly resonant for me, because our <a href="https://shostack.org/blog/doing-science-with-near-misses/">near miss work</a> certainly encounters these issues of value uncertainty and cost as we consider how to move beyond the operational data sharing that ISACs enable.</p>
<p>I'm very glad to see the challenges crystalized in this way, and we haven't even reached the main goal of the paper, which is to assess how much value we get from sharing data.</p>
<p>While talking about this paper Robert Lemos has a <a href="https://www.darkreading.com/threat-intelligence/better-cybersecurity-research-requires-more-data-sharing/d/d-id/1334972" target="_blank" rel="noopener">story</a> at Dark Reading, and Ross Anderson <a href="https://www.lightbluetouchpaper.org/2019/06/03/weis-2019-liveblog/" target="_blank" rel="noopener">liveblogged</a> the WEIS conference.</p>Actionable Followups from the Capital One Breach2019-07-30T00:00:00Zhttps://shostack.org/blog/actionable-followups-from-the-capital-one-breach/AdamWhat have we learned and what steps can we take?<span class="single-post-excerpt"><!-- excerpt -->What have we learned and what steps can we take?<!-- /excerpt --></span>
<p>Alexandre Sieira has some very interesting and actionable advice from looking at the Capital One Breach in "<a href="https://www.infoseczanshin.com/learning-from-the-july-2019-capital-one-breach.html" target="_blank" rel="noopener">Learning from the July 2019 Capital One Breach</a>."</p>
<p>Alex starts by saying "The first thing I want to make clear is that I sympathize with the Capital One security and operations teams at this difficult time. Capital One is a well-known innovator in cloud security, has very competent people dedicated to this and has even developed and high quality open source solutions such as Cloud Custodian that benefit the entire community." I share that perspective - I've spent a lot of time at OWASP, DevSecCon and other events talking with the smart folks at Capital One.</p>
<p>One thing I'll add to his post is that the advice to "Avoid using * like the plague" is easy to implement with static analysis, by which I mean grep or diff in a commit hook. Similarly, if you want to block the grant of ListBuckets, you can look for that specific string.</p>
<p>Over time, you can evolve to check that the permissions are from a small subset of permissions you agree should be granted. One of the nice things about the agile approach to security is that you can start tomorrow, and then evolve.</p>
<p>At Blackhat next week, Dino Dai Zovi will be talking about how "<a href="https://www.blackhat.com/us-19/briefings/schedule/index.html#every-security-team-is-a-software-team-now-17280" target="_blank" rel="noopener">Every Security Team is a Software Team Now</a>." Part of that thinking is how can we take advice, like Alex's, and turn it into code that enforces our goals.</p>
<p>As we learn from breaches, as we share the code we build to address these problems, we'll see fewer and fewer incidents like these.</p>Blackhat Best Practice2019-08-02T00:00:00Zhttps://shostack.org/blog/blackhat-best-practice/AdamReminders about some conference best practices.<span class="single-post-excerpt"><!-- excerpt -->Reminders about some conference best practices.<!-- /excerpt --></span>
<p>Shortly, I'm off to Blackhat. My <a href="https://www.blackhat.com/us-19/training/schedule/index.html#threat-modeling-intensive-14405" target="_blank" rel="noopener">Threat Modeling Intensive</a> classes both sold out (thank you!)</p>
<p>Nearly a decade ago, I <a href="https://adam.shostack.org/blog/2010/07/black-hat-best-practices/">put forth</a> a set of best practices:</p>
<ul>
<li>Breath mints</li>
<li>Ricola</li>
<li>Purell</li>
<li>Advil</li>
<li>Gatorade</li>
<li>This year, I'm adding a <a href="https://amzn.to/2yAQUBD">travel humidifier</a>. I've been using this one, and it really needs to soak for 10 minutes, but then it adds a nice stream of moisture to the room.</li>
</ul>
<p>Also, at a conference, you'll ask, and get the question "What's new?" Learn to <a href="https://adam.shostack.org/blog/2015/04/conference-etiquette-whats-new/">answer it well</a>, it will make your conference more valuable.</p>Toolbox: After a Conference2019-08-13T00:00:00Zhttps://shostack.org/blog/toolbox-after-a-conference-2/AdamHow to get back into the workflow after time away at conferences.<span class="single-post-excerpt"><!-- excerpt -->How to get back into the workflow after time away at conferences.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/Blackhat-2019.jpg" alt="collage of group photos with Adam Shostack around Blackhat 2019" class="single-post-main-img" />
<p>Wow. Blackhat, Defcon, I didn’t make any of the other conferences going on in Vegas. And coming back it seems like there’s a sea of things to follow up on. A little bit of organization is helping me manage better this year, and so I thought I’d share what’s in my post-conference toolbox. I’m also sharing because I don’t think my workflow is optimal, and would love to learn how you're working through this in 2019, with its profusion of ways to stay in touch.</p>
<p>I've added a new first step relative to <a href="https://adam.shostack.org/blog/2018/08/toolbox-after-a-conference/">last year</a>, which is to write a trip report, for myself. It captures who I talked to, impressions, followup, and value of the event.</p>
<p>Next, I have a stack of queues to process:</p>
<ol>
<li>Email. My inbox, but I also have a folder called “followup.” I move a lot out of my inbox to the followup folder so I can see it when I’m back from travel. (I also have a set of monthly sub-folders: followup/august, followup/september, they let me deliver when I say “I’ll get back to you in three months.”)</li>
<li>Signal, iMessage. For both of these, I go back through the conversations I’ve had, see if I had followups or if I dropped the ball on someone.</li>
<li>Linkedin. I get a lot of linkedin requests, and I’m a fairly open networker. Sadly, the UI works very poorly for me. I would love to hear about tools that allow me to effectively move messages to something other than a LIFO queue.</li>
<li>Workflowy. I’m experimenting with this as a note taking tool, and it’s not bad. </li>
<li>Business cards. I go through the whole stack of cards for todo items. I try to write notes on business cards. I discovered I did that on one of 6 cards where I remembered something. That’s not very good odds, and forces me to consider what I might have missed. Still exploring how to make best use of cards without notes. Advice really welcome here.</li>
<li>Slack channels. Go through, look at DMs and channels.</li>
<li>Calendar. For each meeting, think about the meeting, check my notes, see if I remember followups or things that didn’t make it to an email/workflowy note. And yes, there were several discussions that I know we discussed followups that I re-discovered by looking at my calendar.</li>
<li>Photos. Photographs are the new note-taking, and so going back through pictures you took is important.</li>
<li>Twitter, Facebook. I’m trying to break from Twitter, and don’t use Facebook, but I figured I’d include them here because they’re maybe worth remembering. </li>
<li>Read Later queue in Instapaper (new!)</li>
</ol>
<p>After the queues, as a consultant, I have customer work to get back to and sales contacts to followup on. I have expenses. I haven’t found an expense app that I really like, and so I stuff receipts in an envelope each evening, and then deal with them when I get home.</p>
<p>If I missed any followups, I’m sorry. Please reach out!</p>
<p>But more, I’m curious what works for you? What’s in your toolbox?</p>Training At Embedded Systems Security Days2019-08-15T00:00:00Zhttps://shostack.org/blog/training-at-embedded-systems-security-days/AdamI'm excited to be teaming up with Alpha Strike and Limes Security to deliver training in Vienna November 6-8.<span class="single-post-excerpt"><!-- excerpt -->I'm excited to be teaming up with Alpha Strike and Limes Security to deliver training in Vienna November 6-8.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/embedded-systems-security-days.jpg" alt="banner for embedded systems security days, Nov 6-8, 2019" class="single-post-main-img" />
<p>I'm excited to be teaming up with Alpha Strike and Limes Security to deliver training in Vienna November 6-8. Details are available at <u>Embedded Systems Security Days</u> [link to https://security-days.io/en/embedded-systems-security-days-2/ no longer works].</p>Interesting Reads, August 192019-08-21T00:00:00Zhttps://shostack.org/blog/interesting-reads-august-19/AdamJust what the title says<span class="single-post-excerpt"><!-- excerpt -->Just what the title says<!-- /excerpt --></span>
<ul>
<li>If you needed more reasons to move away from using SMS-based authentication, and treating phone companies as trusted, "<a href="https://boingboing.net/2019/08/06/att-employees-took-over-1-mi.html" target="_blank" rel="noopener">AT&T employees took over $1 million in bribes to plant malware and unlock millions of smartphones: DOJ</a>".</li>
<li>Abuse reporting systems are being abused. You need to threat model and play the chess game. "<a href="https://www.nytimes.com/2019/08/14/science/nasa-flat-earth.html" target="_blank" rel="noopener">How Flat Earthers Nearly Derailed a Space Photo Book</a>". My <a href="https://github.com/adamshostack/conflictmodeling/">conflict modeling work</a> is a first draft of how to threat model such systems.</li>
<li>Author Marko Kloos talks about <a href="http://www.markokloos.com/?p=2208" target="_blank" rel="noopener">Life after FaceTwitStagram</a>: "And I have to tell you that the absence of social media has had a major positive effect in my life...It’s strange how the first few days after social media feel like kicking an addiction."</li>
</ul>Threat Modeling Building Blocks2019-09-04T00:00:00Zhttps://shostack.org/blog/threat-modeling-building-blocks/AdamThreat modeling isn’t one task — its a collection of tasks that build on each other to produce more valuable insights.<span class="single-post-excerpt"><!-- excerpt -->Threat modeling isn’t one task — its a collection of tasks that build on each other to produce more valuable insights.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/pile-of-legos.jpg" alt="small pile of legos in various shapes" class="single-post-main-img" />
<p>Threat modeling isn't one task — its a collection of tasks that build on each other to produce more valuable insights. One of the values of the four question frame is that it lets us reduce things into smaller, more assessable building blocks.</p>
<p>And in that vein, there are a couple of new, short (4-page), interesting papers from a team at KU Leuven including:
</p><ul>
<li><a href="https://lirias.kuleuven.be/2786461" target="_blank" rel="noopener">Knowledge is Power: Systematic Reuse of Privacy Knowledge for Threat Elicitation</a></li>
<li><a href="https://sion.info/assets/pdf/SAC2019/db29d84ae139406cb606e06bc6116248.pdf" target="_blank" rel="noopener">A Comparison of System Description Models for Data Protection by Design</a></li>
</ul>
<p>What makes these interesting is that they are digging into better-formed building blocks of threat modeling, comparing them to requirements, and analyzing how they stack up.</p>
<p>The work is centered on threat modeling for privacy and data protection, but what they look at includes STRIDE, CAPEC and CWE. What makes this interesting is not just the results of the comparison, but that they compare and contrast between techniques (DFD variants vs CARiSMA extended; STRIDE vs CAPEC or OWASP). Comparing building blocks at a granular level allows us to ask the question "what went wrong in that threat modeling project" and tweak one part of it, rather than throwing out threat modeling, or trying to train people in an entire method.</p>
<p>For some people, these might be unusual blocks that they rarely need. But I'm glad they're studying them.</p>Course announcement: Tampering in Depth!2019-09-10T00:00:00Zhttps://shostack.org/blog/course-announcement-tampering-in-depth/AdamI'm excited to announce that I'm hitting my STRIDE and Linkedin has released the second course in my in-depth exploration of STRIDE: Tampering.<span class="single-post-excerpt"><!-- excerpt -->I'm excited to announce that I'm hitting my STRIDE and Linkedin has released the second course in my in-depth exploration of STRIDE: Tampering.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/tampering_framed.png" alt="a view from tampering village" class="single-post-main-img" />
<p>I'm excited to announce that I'm hitting my STRIDE and Linkedin has released the <a href="https://www.linkedin.com/learning/threat-modeling-tampering-in-depth?isLearningSubscriber=true&trk=feed-share_course_title_learning&lipi=urn%3Ali%3Apage%3Ad_flagship3_detail_base%3BaOMAhKAfSj%2BzHMofzZWTmw%3D%3D&licu=urn%3Ali%3Acontrol%3Ad_flagship3_detail_base-show_more&lici=%2BAoz6NokT5y0aWsjZ7wEOw%3D%3D" target="_blank" rel="noopener">second course</a> in my in-depth exploration of STRIDE: Tampering.</p>
<p>I'm finding it fascinating to dive deep into the threats, organize my knowledge, and in doing so, hopefully help us chunk and remember what we're learning.</p>Capture the Flag events and eSports2019-09-11T00:00:00Zhttps://shostack.org/blog/capture-the-flag-events-and-esports/Jonathan MarcilA breakdown of CTFs and eSports<span class="single-post-excerpt"><!-- excerpt -->A breakdown of CTFs and eSports<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/nsec700-2.jpg" alt="NorthSec CTF 2019 in Montreal" class="single-post-main-img" style="max-width:760px; height:auto;" />
<p>Looking at what is popular with smaller niche crowds can give greater insight into the “next thing”. This natural selection of attention can inspire an evolution of methods and practices. Capture the Flag Events (CTFs) and electronic Sports (eSports) are good examples of a relatively new trend. I've had the chance to be front row with each in the past years and can say they both have vivid partisans. Spoiler alert! My conclusion is that CTFs are intrinsically an eSport with the attribute of having a strong educational value.</p>
<h4>eSports, what is this millennial aberration?</h4>
<p>Before jumping in to the “get off my lawn” reaction, I’d like you to picture an actual lawn within traditional sports. Usually it’s grass or some form of terrain where people are playing a game. It can have spectators either on-site or remote using a television. People are here to see action, to cheer for a team to achieve a goal or to win at the end. During a broadcast, commentators describe the game and do the animation of the overall spectacle.</p>
<p>Now take that, and imagine that for some people, the game is a video game and the lawn is a network.</p>
<p>Many eSports leagues exist around the world and important tournaments are broadcast online. The level of strategy, tactics and action can get heated and overwhelming. Commentators end up sounding like traditional sport announcers when shouting during intense moments. Spectators also fill arenas by the tens of thousands.</p>
<p>Winners return home with <a href="https://www.esportsearnings.com/" target="_blank" rel="noopener">cash prizes</a> that get pretty big, and with money coming from an industry that is <a href="https://lpesports.com/e-sports-news/the-video-games-industry-is-bigger-than-hollywood" target="_blank" rel="noopener">bigger than the movie industry</a>. This is where the level of competition broadens internationally. South Koreans are, more often than not, dominating in this respect.</p>
<p>Sponsors and gamers get paid to play, in what we call professional gaming. Those pro-gamers have this as their full-time job.</p>
<p>If your mind is not blown already, here’s a crowd of about six hundred people cheering for a video game to start. It was during a collective gathering of a StarCraft II tournament in 2011, called BarCraft Montreal.</p>
<figure><img src="https://shostack.org/blog/img/2019/barcraft2011-2.jpg" alt="BarCraft Montreal 2011" /><figcaption>BarCraft Montreal 2011</figcaption></figure>
<p>Of course, not everyone is a pro. LAN parties and online competitions have also have places for amateurs. That phenomenon is even older than the broadcasting era. People playing video games in the 90s would sometime gather to compete for a cash prize. Back then, eSports was already a thing without having the name attached to it.</p>
<p>Fast forward to 2019, permanent venues and bars are now dedicated to the phenomena, with some great examples such as the <a href="https://www.hyperxesportsarenalasvegas.com/" target="_blank" rel="noopener">eSports Arena</a> in Las Vegas and <a href="https://www.meltdown.bar/" target="_blank" rel="noopener">Meltdown</a>, an international franchise of eSports bars. Now stadiums fill with 20,000 spectators in North America during multi millions dollars gaming tournaments such as the <a href="https://www.theverge.com/2019/7/25/20708650/fortnite-world-cup-preview-streaming-twitch-youtube-esports" target="_blank" rel="noopener">Fortnite World Cup</a>.</p>
<h4>Capture the Flag events, a collective obsession</h4>
<p>In the hacking communities, CTF events have always been the practitioner's favorite. It’s generally a tournament that spawns over hours if not a few days, where teams don’t fight each other, but rather solve problems with security related techniques. Challenges are usually presented within a defined category and their worth in awarded points is proportional to their difficulty. Sometimes, when the organizers want to put more effort in the presentation layer of the competition, a simulated scenario can be used. Common examples include hacking company X or being part of covert operations that goes against a dictatorial government. The kind of stuff hackers fantasize about. Regardless of the presentation, the goal is easy: to find flags. In other terms, you’re exploiting systems to gain information disclosure. The more you successfully hack, the more you get flags that gives points. The winning team is the one who gets the most points.</p>
<p>These kinds of events have existed for quite a very long time now, with my first personal participation being around 2002. CTFs have grown in popularity over time, with NorthSec being one of the biggest in North America at more than 700 people on-site.</p>
<figure><img src="https://shostack.org/blog/img/2019/nsec700-2.jpg" alt="NorthSec CTF 2019 in Montreal" style="max-width:760px; height:auto;" /><figcaption><a href="https://nsec.io/competition/" target="_blank" rel="noopener">NorthSec CTF</a> 2019 in Montreal</figcaption></figure>
<p>Visually this looks similar to an eSports tournament, but the main difference is that most teams use Linux laptops instead of Windows gaming PCs.</p>
<p>The participants show equal amounts of passion for the subject matter, passing hours in front of their screen during the day to reach that good feeling of winning. In one case, it's finding a flag, and in another, it's to win a match against a competing team. When this happens, the sense of success doesn't only happen on the meta goal of winning the whole thing, but it also releases endorphins for every little gain that is made. Finding a flag is very rewarding!</p>
<h4>Enough about games, here's something serious: Learning!</h4>
<p>One of the undeniable effects of competing is that it makes people better at what they do. They push themselves further, they learn to improve. This is true in gaming, and especially true in CTF events.</p>
<p>In my opinion, a CTF event is one of the best ways to learn about offensive security and its impact. As the participants try to find ways of getting to the flag, they also teach themselves to work up to a successful kill chain. They have to construct the last step of exploitation that leads to the flag, thus gaining points from the impact of a successful attack. Instead of experiencing the stressful real-life scenario of a breach like a professional would, they have the dopamine release of winning. That's a pretty effective way to train a brain to reach a certain goal.</p>
<p>One important value of CTF challenges, versus real-life cases, is that the attacker understands there’s a final defined attack vector that is possible. The participant also might get a sense of the difficulty level by the amount of points associated with the flag. Additionally, flags can also have a category or name assigned, giving away the type of technique to use, such as "SQL Injection #2". This title is most certainly a good giveaway of the direction the attacker would need to take.</p>
<p>Competing by gathering more flags and points gives the contestant a motivation that is beyond the standard pentesting realm. More often than not, these end with running a Windows network exploitation tool and writing a report. This is somewhat less exciting than receiving a gold medal and applauds from a large crowd.</p>
<p>This type of motivation is palpable in the enthusiasm of participants. Go to any on-site event and see just how people are riveted to their screens and passionate about the results. There's a lot of team work involved and the social component of knowledge sharing is reinforced.</p>
<p>Professionals can use it to sharpen their skills, while more amateurish people can also gain satisfaction in hacking in a legalized and guided environment. This can be compared to the realm of online bug bounty, where the environment might have been legitimized for years, but has a natural lack of guidance for newcomers and some limitations on knowledge sharing.</p>
<p>In either artificial challenges or in real-case attacks, there's a common level of satisfaction to crack something open. The pleasure center of our brain are rewarded with accomplishment after working hard on a goal and reaching it. For the professional, the gains are also quicker and more frequent, as they can experience in hours what they would normally over weeks if not months.</p>
<p>CTFs effectively cover the full spectrum of skill levels. The top tier are looking to be challenged and win. Participants at the introductory level can use it to understand vulnerabilities and the impact of their exploitation. This is done on a first-hand basis, without getting into the overhead of a full-blown enterprise environment.</p>
<p>Aside from learning, the security industry can take advantage of the CTF event and use it as a recruiting mechanism. This especially benefits students as their cleverness and skills never cease to amaze in hacking. That might be not shown just by looking at their grades. Hacking and exploiting requires a certain amount of non-conformity and logical creativity which traditional schooling might never even test.</p>
<h4>eSports + CTF = a new breed of edutainment</h4>
<p>The art of hacking is difficult to grasp, as it requires a lot of learning and experimenting. One effective way people can improve is by observing others. If you are watching someone playing a game, and you analyze their strategy, you might improve your own just by observing. You will gain new tactics and strategies along the way. If you are lucky, you might even entertain yourself at the same time.</p>
<p>The whole concept of watching someone hacking is not fully exploited yet. Many step by step tutorial videos you can see online present ways of solving challenges, but leave out the exciting outcomes of a competition.</p>
<p>That is, until we did it back in 2015.</p>
<figure><img src="https://shostack.org/blog/img/2019/hackfestctf-2-1024x382.jpg" alt="OWASP Mini-CTF Live at Hackfest 2015" style="max-width:760px; height:auto;" /><figcaption>OWASP Mini-CTF Live at Hackfest 2015</figcaption></figure>
<p>In order to be able to achieve it, we actually made a special CTF event with the main goal of broadcasting it. The initial part, which wasn't part of the main show, was a qualifying round where single contestants would participate in a classical CTF. This has assured us that the top ranked individuals brought to the finals would be the best. The final round was a few specially crafted CTF challenges designed to be presented in front of spectators. We asked those finalists, who were in another room, to share their screen so the on-stage presenters could comment on what is happening without spoiling the results. The challenges author came on stage to give the audience insights into solving some of them. We also even got some funny moments where the participants were talking to us through their console prompt. The whole session was very entertaining, as it gave the audience a true understanding of what a CTF is really about: a bunch of people searching on Google 😊.</p>
<p>Our last recorded efforts were in 2018, where we incorporated the flag scoring system into visuals and managed to have a full HD view of the participants.</p>
<figure><img src="https://shostack.org/blog/img/2019/2018-hackfestminictf-splash.jpg" alt="OWASP Mini-CTF 2018 splash screen with visually appealing logo" width="715" height="402" /><figcaption>OWASP Mini-CTF 2018 splash screen with visually appealing logo</figcaption></figure>
<figure><img src="https://shostack.org/blog/img/2019/2018-hackfestminictf-casters.jpg" alt="Co-casters Jonathan Marcil and Philippe Arteau" width="714" height="401" /><figcaption>Co-casters Jonathan Marcil (me) and Philippe Arteau</figcaption></figure>
<figure><img src="https://shostack.org/blog/img/2019/2018-hackfestminictf-categories.jpg" alt="Icons and colors associated with the flag names" width="714" height="400" /><figcaption>Icons and colors associated with the flag names</figcaption></figure>
<figure><img src="https://shostack.org/blog/img/2019/2018-hackfestminictf-scoreboard.jpg" alt="Scoreboard view of the contestants with found flags highlighted" width="713" height="401" /><figcaption>Scoreboard view of the contestants with found flags highlighted</figcaption></figure>
<figure><img src="https://shostack.org/blog/img/2019/2018-hackfestminictf-quadview.jpg" alt="Quad view of the contestants and their screens" width="710" height="399" /><figcaption>Quad view of the contestants and their screens</figcaption></figure>
<figure><img src="https://shostack.org/blog/img/2019/2018-hackfestminictf-winningsub.jpg" alt="Full screen view of the winning flag submission by Ian Bouchard (Corb3nik)" width="711" height="400" /><figcaption>Full screen view of the winning flag submission by Ian Bouchard (Corb3nik)</figcaption></figure>
<p>This concept definitely captured the attention of CTF savvy people. A good part of the audience we had on-site were also competing in the qualifying round.</p>
<p>What we did is unparalleled as far as we know, but it needs to be refined, showcased and experimented more to reach a wider audience and gain its full potential. Publishing it online offers the chance of succeeding in popularity when taking into account the large number of views that a <a href="https://www.youtube.com/results?search_query=hacking+tutorial" target="_blank" rel="noopener">standard hacking tutorial</a> normally gets.</p>
<h4>Conclusion</h4>
<p>Without an expansive introduction, the whole concept might look silly. But really, a CTF is a video game. Both share the same related components such as the competition aspect and the utilization of problem solving skills. They also share the fact that the action is done in front of a computer screen.</p>
<p>In a sense, CTF challenges are like a bunch of mini educational games. They give the participant a motivation to gain the computer security knowledge required to solve them and win.</p>
<p>One of the main barriers of acceptance on the edutainment concept is related to the fact that even the well established eSports phenomenons are still very much on the fringes for many people. Unfortunately, those same people could actually be more interested in computer security than in actual gaming.</p>
<p>Learning is all about expanding our mind. It can be done with self-service education to reach a goal or by being a spectator of other people technical skills. When people worrying about having perpetrators getting off their lawn (or network!) seek to expand their grasp on pieces of the hacking ecosystem, it can actually improve their overall understanding of computer security, and this could ultimately help them to threat model and defend their space better.</p>Interesting reads2019-09-18T00:00:00Zhttps://shostack.org/blog/interesting-reads/AdamSharing for you, bookmarking for me.<span class="single-post-excerpt"><!-- excerpt -->Sharing for you, bookmarking for me.<!-- /excerpt --></span>
<ul>
<li>There were widely circulated <a href="https://thenextweb.com/security/2019/09/02/fraudsters-deepfake-ceos-voice-to-trick-manager-into-transferring-243000/" target="_blank" rel="noopener">reports</a> of voice cloning being used in phishing. I've been predicting these for a while (<a href="https://www.youtube.com/watch?v=DMFF8zQqEVQ&feature=youtu.be" target="_blank" rel="noopener">Threat modeling in 2018</a> at Blackhat, 28 minutes in), but Guillaume Ross asked <a href="https://www.linkedin.com/feed/update/urn:li:activity:6575424961991766016/" target="_blank" rel="noopener">some really good questions</a> about it.</li>
<li>A new John Locke manuscript, "The Toleration of Papists" has been discovered and <a href="https://www.cambridge.org/core/services/aop-cambridge-core/content/view/5218E6A9D0F58DA2D8FC075F46042E32/S0018246X19000207a.pdf/john_locke_and_the_toleration_of_catholics_a_new_manuscript.pdf" target="_blank" rel="noopener">published</a>. (<a href="https://www.theguardian.com/commentisfree/2019/sep/08/john-locke-hero-of-western-liberalism-not-as-tolerant-as-we-thought" target="_blank" rel="noopener">Short</a> analysis in the Guardian.)</li>
<li><a href="https://adversarialfashion.com/" target="_blank" rel="noopener">Adversarial Fashion</a> has a line of goods "designed to trigger Automated License Plate Readers, injecting junk data in to the systems used by the State and its contractors to monitor and track civilians and their locations." (Also, first non-Google use of <a href="https://developers.google.com/speed/webp/" target="_blank" rel="noopener">webp</a> graphics that I've noticed. It turns out that wordpress won't let you upload webp yet.)</li>
<li>Economist Tyler Cowen has an interview with <a href="https://reason.com/2019/09/01/neal-stephenson-wants-to-tell-big-stories/" target="_blank" rel="noopener">Neal Stephenson</a>.</li>
</ul>OWASP Portland: Talk and Podcast2019-10-02T00:00:00Zhttps://shostack.org/blog/owasp-portland-talk-and-podcast/adamI recently had a chance to speak at the meeting for the Portland, Oregon chapter of OWASP<span class="single-post-excerpt"><!-- excerpt -->I recently had a chance to speak at the meeting for the Portland, Oregon chapter of OWASP<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/portland-oregon-neon-sign.png" alt="large neon sign featuring th outline of the state of Oregon with the text 'Portland Oregon Old Town' and a leaping deer" class="single-post-main-img" />
<p>Podcast with me by OWASP's Portland, Oregon Chapter in advance of me speaking at <a href="https://www.meetup.com/OWASP-Portland-Chapter/events/263660173/" target="_blank" rel="noopener">their meeting</a> October 9.</p>
<p>You can <a href="https://soundcloud.com/owasp-portland-oregon/owasp-portland-oregon-adam-shostack-threat-modeling" target="_blank" rel="noopener">listen here</a>.</p>Quick Threat Model Links October 20192019-10-09T00:00:00Zhttps://shostack.org/blog/quick-threat-modeling-links-october-2019/AdamJust a few things for now<span class="single-post-excerpt"><!-- excerpt -->Just a few things for now<!-- /excerpt --></span>
<ul>
<li>Trail of Bits released a <u>threat model for Kubernetes</u> [link to https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Threat%20Model.pdf no longer works]. There's some <a href="https://www.helpnetsecurity.com/2019/08/12/kubernetes-security-matures/" target="_blank" rel="noopener">context</a> from Aaron Small, who made the project happen.</li>
<li>Continuum has a <a href="https://iriusrisk.com/threat-modeling-lambda/">blog</a> and a <a href="https://github.com/continuumsecurity/IriusRisk/blob/master/resources/AWS%20Lambda%20Threats%20%26%20Countermeasures.xlsx" target="_blank" rel="noopener">spreadsheet</a> on threat modeling lambdas (as a category, not specific to Amazon Lambda), and also a <a href="https://iriusrisk.com/capec-threat-modeling/" target="_blank" rel="noopener">post on threat modeling with CAPEC</a>.</li>
<li>Ntrepid has released a blog posts on "Threat Modeling for Managed Attribution" (<a href="https://ntrepidcorp.com/general/threat-modeling-for-managed-attribution/" target="_blank" rel="noopener">part 1</a>, <a href="https://ntrepidcorp.com/general/threat-modeling-for-managed-attribution-part-2/" target="_blank" rel="noopener">part 2</a>, <a href="https://ntrepidcorp.com/general/threat-modeling-for-managed-attribution-part-3/" target="_blank" rel="noopener">part 3</a>)</li>
<li>The W3C has updated the <a href="https://www.w3.org/TR/security-privacy-questionnaire/" target="_blank" rel="noopener">questionnaire</a> it uses for web feature development, including questions about "legitimate misuse" (Techcrunch: <a href="https://techcrunch.com/2019/09/11/web-feature-developers-told-to-dial-up-attention-on-privacy-and-security/" target="_blank" rel="noopener">Web feature developers told to dial up attention on privacy and security</a>.)</li>
<li>Omada Health has <a href="https://www.omadahealth.com/hubfs/nodirt.pdf" target="_blank" rel="noopener">released</a> an interesting threat model ("INCLUDES NO DIRT") for medical device modeling.</li>
</ul>
<p>What else have you seen? I'm hoping to find time to write more deeply on several of these. </p>
<p>Disclosure: I'm on advisory boards for Continuum and Ntrepid.</p>Interesting Reads: Risk, Automation, lessons and more!2019-10-15T00:00:00Zhttps://shostack.org/blog/interesting-reads-2/AdamJust what the title says.<span class="single-post-excerpt"><!-- excerpt -->Just what the title says.<!-- /excerpt --></span>
<ul>
<li>The <a href="https://www.cybok.org/knowledgebase/" target="_blank" rel="noopener">Cybok</a> project has released its v1 "Risk Management & Governance Knowledge Area"; I was a reviewer.</li>
<li><a href="https://sion.info/assets/pdf/SEAD2019/c432746eda1640bcbc156825c3707d0d.pdf" target="_blank" rel="noopener">Towards Automated Security Design Flaw Detection</a> is an interesting paper from academics in Belgium and Sweden.</li>
<li>Steve Lipner offers "<a href="https://www.csoonline.com/article/3440120/lessons-learned-through-15-years-of-sdl-at-work.html" target="_blank" rel="noopener">Lessons learned through 15 years of SDL at work</a>"</li>
<li>Charles Wilson has perspective on threat modeling devices in "<a href="https://charleswilson.blog/2019/09/17/does-that-come-in-a-large-os-scale-in-threat-modeling/" target="_blank" rel="noopener">Does That Come in a Large? OS Scale in Threat Modeling</a></li>
<li>Lastly, apparently cockroaches can be magnetized: "<a href="https://www.haaretz.com/science-and-health/.premium-magnetized-dead-cockroaches-study-wins-ig-nobel-and-it-s-amazingly-interesting-1.7853764" target="_blank" rel="noopener">Magnetized Dead Cockroaches Study Wins Ig Nobel — and It’s Amazingly Interesting</a>."</li>
</ul>Who Are We Kidding with Attacker-Centered Threat Modeling?2019-10-23T00:00:00Zhttps://shostack.org/blog/who-are-we-kidding-with-attacker-centered-threat-modeling/adamDon't go into Threat Modeling with this mindset.<span class="single-post-excerpt"><!-- excerpt -->Don't go into Threat Modeling with this mindset.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/ben-white-4K2lIP0zc_k-unsplash-framed.png" alt="young boy reading book and laughing" class="single-post-main-img" />
<p>I've spoken for over a decade against "think like an attacker" and the trap of starting to threat model with a list of attackers. And for my threat modeling book, I cataloged every serious grouping of attackers that I was able to find. And as I was reading "<a href="https://protectyoungeyes.com/12-ingenious-screen-time-hacks-how-to-beat-them/" target="_blank" rel="noopener">12 Ingenious iOS Screen Time Hacks</a>," I realized what they're all missing: kids. Kids, apparently, are very clever about getting around Apple's Screen Time. And in fact, it's not just kids that are missing from the lists. Family members in general, and especially estranged or former partners are an important group of threat actors. And they're a group who are intensely difficult to protect against. I've been talking about the issue of device management as a new subset of Elevation of Privilege since I read <a href="https://www.nytimes.com/2018/06/23/technology/smart-home-devices-domestic-abuse.html" target="_blank" rel="noopener">Thermostats, Locks and Lights: Digital Tools of Domestic Abuse</a> in 2018.</p>
<p>System designers need to consider the ways attackers with some access can abuse that access. The problems are some of the most fascinating and tricky I've encountered. For example, if someone's call history includes calls to a domestic violence hotline, that can trigger further abuse. Do you allow the call history to be edited? If someone is using private browsing mode to avoid showing history, do you create a visual difference? I've had advocates tell me that such differences can trigger abuse when people "are caught" "hiding" their browsing. The issues are simpler with kids, but I know of few parents who are happy with the tools they have. This trickiness is an opportunity for security designers to shine, and to make a real difference.</p>
<p>This yet is another way where starting from a list of attackers will lead you to miss important threats: things that impact your customers.</p>
<p>Previously: <a href="https://shostack.org/archive/2016/04/think-like-an-attacker-is-an-opt-in-mistake/"> “Think Like an Attacker” is an opt-in mistake</a>, <a href="https://shostack.org/archive/2008/09/think-like-an-attacker/">Think Like An Attacker?</a>, <a href="https://shostack.org/archive/2008/09/the-discipline-of-think-like-an-attacker/">The Discipline of “think like an attacker”</a>, <a href="https://shostack.org/archive/2014/11/think-like-an-attacker-flip-that-advice/">Think Like An Attacker? Flip that advice!</a> and <a href="https://shostack.org/blog/modeling-attackers-and-their-motives/">Modeling Attackers and Their Motives</a>.</p>
<p>"Kid laughing at your design" photo by <a href="https://unsplash.com/photos/4K2lIP0zc_k" target="_blank" rel="noopener">Ben White, Unsplash</a>.
</p>Interesting Finds: Liberalism, machine learning, encryption and learning2019-10-28T00:00:00Zhttps://shostack.org/blog/interesting-finds-liberalism-machine-learning-encryption/AdamSome interesting things I've recently found<span class="single-post-excerpt"><!-- excerpt -->Some interesting things I've recently found<!-- /excerpt --></span>
<ul>
<li>The Economist Reflects on Liberalism is the sort of in-depth writing and thinking that makes the magazine so great: "<a href="https://www.economist.com/essay/2018/09/13/the-economist-at-175" target="_blank" rel="noopener">Reinventing Liberalism for the 21st century</a>."</li>
<li><a href="https://towardsdatascience.com/evading-machine-learning-malware-classifiers-ce52dabdb713?gi=4f46ea7cac34" target="_blank" rel="noopener">Evading Machine Learning Malware Classifiers</a>, from the winner of the Defcon Machine Learning Static Evasion Competition.</li>
<li>The general counsel of the NSA and former general counsel of the FBI have editorials on encryption. Worth considering as a pair: <a href="https://www.nytimes.com/2019/09/10/opinion/nsa-privacy.html" target="_blank" rel="noopener">I Work for N.S.A. We Cannot Afford to Lose the Digital Revolution,</a> and <a href="https://www.lawfareblog.com/rethinking-encryption" target="_blank" rel="noopener">Rethinking Encryption</a>.</li>
<li><a href="http://www.liberatingstructures.com/" target="_blank" rel="noopener">Liberating Structures</a> is a catalog of structures that engage groups in dialogue and discussion.</li>
</ul>Includes No Dirt: Healthcare Threat Modeling (Thursday)2019-10-31T00:00:00Zhttps://shostack.org/blog/includes-no-dirt-healthcare-threat-modeling-thursday/Adam“Includes No Dirt” is a threat modeling approach by William Dogherty and Patrick Curry of Omada Health, and I've been meaning to write about it since it came out.<span class="single-post-excerpt"><!-- excerpt -->“Includes No Dirt” is a threat modeling approach by William Dogherty and Patrick Curry of Omada Health, and I've been meaning to write about it since it came out.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/soapy-hands-washing.jpg" alt="A pair of hands covered in soap bubbles" class="single-post-main-img" />
<p>“<cite><a href="http://www.includesnodirt.com/nodirt.pdf" target="_blank" rel="noopener">Includes No Dirt</a></cite>” is a threat modeling approach by William Dogherty and Patrick Curry of Omada Health, and I've been meaning to write about it since it came out. I like that it starts from context — the why this matters: Their goal is to have a single approach to security, privacy, and compliance. Reducing re-work is tremendously important to integrating into a development process and "shifting left." The paper is available for download without registration, and is self-contained.</p>
<p>First, let me say that I think this is really nice work. Clean, if you will, and clearly grounded in having reviewed prior work without feeling a need to include it all. The white paper contains 4 main parts:</p>
<ol>
<li>Context</li>
<li>The NO DIRT model for learning what can go wrong</li>
<li>Putting NO DIRT into action</li>
<li>Sample assessment questionnaires and brainstorming worksheets </li>
</ol>
<p>The section "putting NO DIRT into action" is particularly interesting as a list of "when", "who" and "how," to show how the same technical work can serve five different organizational goals. I also like that one of the "whens" is vendor risk management. No one involved likes the practice of sending long spreadsheets full of questions like "do you have a clean desk policy" back and forth. (Which is why there's 3 million job openings: people prefer to take a job as a TSA screener.) More seriously, an approach that focuses on exchanging threat models is a welcome and important development, and that approach can be seen as a new and useful building block.</p>
<p>The scoring system is interesting, and I am somewhat surprised to see the direct translation from a score to a high/medium/low, but I suspect that that relates to the oversight mechanisms in place.</p>
<p>I like that the questionnaire and brainstorming worksheets are clearly separate from the core model, both by the word sample, and by their inclusion as appendices. It took me a bit to understand that appendix A is samples (showing the work) and B is samples (for use).</p>
<p>My understanding is that the authors would be happy to see it adopted and adapted, and my very small critique is that I'd like to see explicit permission to build on the questionnaire, under a creative commons or similar license.</p>Medical Device Security Standards2019-11-02T00:00:00Zhttps://shostack.org/blog/medical-device-security-standards/AdamRecently, I've seen four cybersecurity approaches for medical devices, and we can learn by juxtaposing them.<p>Recently, I've seen four cybersecurity approaches for medical devices, and we can learn by juxtaposing them.</p>
<p>The <cite><a href="http://www.imdrf.org/docs/imdrf/final/consultations/imdrf-cons-ppmdc.pdf" target="_blank" rel="noopener">Principles and Practices for Medical Device Cybersecurity</a></cite> is a process-centered and comprehensive document from the International Medical Device Regulators Forum. It covers pre- and post- market considerations, as well as information sharing and coordinated vuln disclosure. It's important because for a device maker to have to comply with different standards in different countries at least drives up cost, and can easily lead to a situation where they're required to meet either contradictory or difficult to reconcile demands.</p>
<p>In contrast, the MITA has released a "material data sheet" for product cybersecurity. This one doesn't impose any requirements on the development process, but enumerates the information that professionals dealing with the device need to know. (Connectivity, updatability, et cetera. Roughly 250 questions of the form "Does the device employ any mechanism (e.g., release-specific hash key, checksums, digital signature, etc.) to ensure the installed software is manufacturer-authorized?")</p>
<p><u>MITA Releases National Standard for Medical Device Security</u>, [link to https://www.medicalimaging.org/2019/10/09/mita-releases-national-standard-for-medical-device-security/ no longer works] about disclosing what the device does - for example, does it send patient PII anywhere or listen on the network?</p>
<p>The MDS is factual — it doesn't judge the choices made, just makes them concrete. Compare to INCLUDES NO DIRT (covered yesterday) whose very name expresses intent.</p>
<p>Lastly, the Medical Device Innovation Consortium has been awarded funding for "<a href="https://www.businesswire.com/news/home/20191021005402/en/Medical-Device-Innovation-Consortium-MDIC-Awarded-Funding" target="_blank" rel="noopener">Expansion of Case for Quality and Cybersecurity Threat Modeling</a>." I cannot tell you how positive I think this development will be.</p>Message Sequence Charts2019-11-06T00:00:00Zhttps://shostack.org/blog/message-sequence-charts/AdamSwim lane diagrams have been formalized in message sequence charts - what that means.<span class="single-post-excerpt"><!-- excerpt -->Swim lane diagrams have been formalized in message sequence charts - what that means.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/MSC_TimeConstraint.png" alt="message sequence charts" class="single-post-main-img" />
<p>I was not aware that the ITU had formalized swim lane diagrams into <a href="https://en.wikipedia.org/wiki/Message_sequence_chart" target="_blank" rel="noopener">Message Sequence Charts</a>. While you don't need to use these formalizations, the choices they made, and the comparisons to UML's diagrams can be interesting, especially if there are tricky corners where you're having trouble modeling some flow. For example, "They work particularly great in opening up assumptions (e.g., so many times a message from server to another has proven to be actually relayed through a browser redirection) and also if you mark state machine transitions on it, you'll see false assumptions there too."</p>
<p>Thanks to Antti Vähä-Sipilä for pointing them out (along with the use case) in the OWASP Threat Modeling slack channel. (Join via <a href="https://join.slack.com/t/owasp/shared_invite/enQtNjExMTc3MTg0MzU4LWQ2Nzg3NGJiZGQ2MjRmNzkzN2Q4YzU1MWYyZTdjYjA2ZTA5M2RkNzE2ZjdkNzI5ZThhOWY5MjljYWZmYmY4ZjM" target="_blank" rel="noopener">OWASP Slack Channel</a>.)</p>Managed Attribution Threat Modeling2019-11-14T00:00:00Zhttps://shostack.org/blog/managed-attribution-threat-modeling/AdamLet's talk CAKED, a threat model for managed attribution.<span class="single-post-excerpt"><!-- excerpt -->Let's talk CAKED, a threat model for managed attribution.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/blur-cakes-close-up-cupcakes-416534.png" alt="close view of brightly-colored frosted cupcakes" class="single-post-main-img" />
<p>The more I learn about threat modeling, the more I think the toughest part is how we answer the question: "What can go wrong?" Perhaps that's "finding threats." Maybe it's "discovering" or "eliciting" them. Maybe it's analogizing from threats we know about. I'm not yet even sure what to call it. But what it does for us is valuable.</p>
<p>I was at a conference not long ago, where people were talking about how to make their threat modeling process lightweight, and one person said "We ask just one question: how would you attack this system you're building. And we get good results." Now, maybe that's true. They certainly get better results than they would from a heavyweight system that no one uses, and maybe that's the implicit point of comparison.</p>
<p>In response, I'm going to quote a student from one of my recent classes. They said: "When we brainstormed, I was pretty happy with what we were finding. Then I used this new way, and I felt bad about what we'd done before." (The "new way" in this was STRIDE, and we were discussing people's experience first brainstorming then using a structure.)</p>
<p>I am pretty certain that however we conceptualize it, whatever we call it, the way we do it needs structure. No, actually, that's not quite right. It doesn't <em>need</em> structure. The panelist had a point. But we get more results from the work, more people are able to do the work, and we get more consistency in the results that they give us if we structure the work.</p>
<p>But I think we can do far better by thinking about ways to answer the question "what can go wrong" that are focused on the needs of a given system. That might be focused on the issues a given system faces. It might be a way that helps us know if we've done a good job or have more work to do. It might help us avoid rabbit warrens.</p>
<p>And that brings me to CAKED, a threat model for "managed attribution." Managed attribution is a particular type of privacy: shaping how you're seen online, and it has an unusual collection of threats. (Ntrepid has a blog post, <a href="https://ntrepidcorp.com/general/what-is-managed-attribution/" target="_blank" rel="noopener">What is Managed Attribution?</a>)</p>
<p>CAKED is an acrostic for:
</p><ul>
<li>Correlation of entities: Can the attacker connect two accounts, servers, identities, etc. that are not meant to be seen as connected?</li>
<li>Attribution of actors: Can an outsider identify who is behind the activity? This could reveal the operator, their organization, or the MA provider.</li>
<li>Knowledge of operation: Can the adversary recognize that the activity is part of an operation and possibly understand the nature or purpose of the operation?</li>
<li>Exposure of aliases: Can someone discover that an alias account is not a real person?</li>
<li>Discovery of resources: Are there any loose threads that could help identify other previously unknown attribution management infrastructure?</li>
</ul>
<p>What's interesting about CAKED, is that it (and the accompanying list of threat actor perspectives) are specific to the problems that a managed attribution service should be thinking about as they build and evolve technology. Even if you're not interested in managed attribution, your organization has threats that it worries about. Taking those, grouping them results in a new way to structure work.</p>
<p>It is tempting to say 'we need more of these,' but again, that's not quite right. With more of these, we can do a better job of protecting things. We can involve more people in that work. And we can expect that the work done will be more consistent. That's a high leverage result from threat modeling.</p>
<p><a href="https://ntrepidcorp.com/general/threat-modeling-for-managed-attribution-part-2/" target="_blank" rel="noopener">part 2</a>, <a href="https://ntrepidcorp.com/general/threat-modeling-for-managed-attribution-part-3/" target="_blank" rel="noopener">part 3</a></p>
<p>Disclosure: I'm an advisor to Ntrepid, and provided feedback on at a draft of the model. They did not ask me to write this post, and I don't speak for them. Photo by <u>Pixabay</u> [link to https://www.pexels.com/photo/blur-cakes-close-up-cupcakes-416534/ no longer works].</p>Han Solo, Frozen in Carbonite2019-11-26T00:00:00Zhttps://shostack.org/blog/han-solo-frozen-in-carbonite/Adam[no description provided]<span class="single-post-excerpt"><!-- excerpt -->[no description provided]<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/han-solo-williams-sonoma.png" alt="casserole dish from Williams Sonoma featuring a relief replication of Han Solo in Carbonite on the lid" class="single-post-main-img" />
<p>Apparently, someone was baked at <a href="https://www.williams-sonoma.com/products/le-creuset-star-wars-han-solo-roaster/?pkey=cgifts-star-wars" target="_blank" rel="noopener">Williams Sonoma</a>.</p>The Gavle Goat is up2019-11-30T00:00:00Zhttps://shostack.org/blog/the-gavle-goat-is-up/AdamFor 51 years, the gallant people of Gavle, Sweden, have been putting up a straw goat, and arsonists have been burning it.<span class="single-post-excerpt"><!-- excerpt -->For 51 years, the gallant people of Gavle, Sweden, have been putting up a straw goat, and arsonists have been burning it.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/galve-goat-2019.jpg" alt="Gavle Goat" class="single-post-main-img" />
<p>[Update: The <a href="https://www.thelocal.se/20200102/new-year-record-swedish-straw-goat-survives-third-christmas-in-a-row/" target="_blank" rel="noopener">goat survived</a>, for the third year in a row!]</p>
<p>For 51 years, the gallant people of Gavle, Sweden, have been putting up a straw goat, and arsonists have been burning it. Apparently, they didn't have Twitter back then, and needed alternate ways to get into flame wars.</p>
<p>Previously: <a href="https://www.google.com/search?q=site%3Ahttps%3A%2F%2Fshostack.org&q=gavle">Gavle Goat at Shostack & Friends</a>.</p>Books Worth Your Time (Q4)2019-12-01T00:00:00Zhttps://shostack.org/blog/books-worth-your-time-q4/AdamJust what the title says...<span class="single-post-excerpt"><!-- excerpt -->Just what the title says...<!-- /excerpt --></span>
<h3>Cyber</h3>
<ul>
<li><a href="https://amzn.to/2Y2AHkn" target="_blank" rel="noopener">The Huawei and Snowden Questions</a>, by Olav Lysne is a deep dive into what happens when an untrusted vendor builds your trusted computing base, and more importantly, why a great many of the "obvious" ways to address those risks are subject to easy work-arounds. This is unhappy news for Huawei, but more importantly, as cyber is now treated as a "fifth domain" and there are no norms of conflict, bad news for technology in general. It's bad news for the makers of important technology, who are now subject to attacks by nation states. It's bad news for people and businesses who rely on technology, who are going to be subject to seeing their tax software hacked to distribute malware, as happened in Ukraine and led to <a href="https://en.wikipedia.org/wiki/Petya_(malware)" target="_blank" rel="noopener">NotPetya</a>. It's bad news for people who want to be able to control their computer, because these attacks are leading vendors to <a href="https://www.howtogeek.com/443611/how-macos-catalinas-new-security-features-work/" target="_blank" rel="noopener">lock down</a> even their traditional operating systems, as Jonathan Zittrain predicted in his 2008 book, <a href="https://amzn.to/37LkK6G" target="_blank" rel="noopener">The Future of the Internet--And How to Stop It</a>.</li>
<li><a href="https://amzn.to/2syXCZ9" target="_blank" rel="noopener">Cult of the Dead Cow</a>, Joseph Menn. A very solid look at the early days of the hacking scene. I had not thought about how different the "make money in cyber" world is from when I got started, reading textfiles. To me, it's always been normal to think about the ethical implications of what we do and how we do it in cyber and in privacy. A good deal of that is because of the world which the cDc helped to build. I'll probably have a fuller review, but this is worth your time and consideration. </li>
</ul>
<h3>Nonfiction</h3>
<ul>
<li><a href="https://amzn.to/2LafgZt" target="_blank" rel="noopener">Taming the Sun</a>, by Varun Sivaram is a big sweeping overview of "Innovations to Harnass Solar Energy and Power the Planet." It's generally quite good and covers technology, finance, regulation, and sets the scene very nicely. Sometimes degenerates into academic literature review, for example in discussion of perovskite photovoltaics.</li>
<li><a href="https://amzn.to/2L9Is33" target="_blank" rel="noopener">Endurance</a> by Scott Kelly is a great autobiography, and a great story of the space program. Full of fascinating little details like no one was willing to suggest the "twin study" until he did.</li>
<li><a href="https://amzn.to/34BCiQH" target="_blank" rel="noopener">The White Box Essays</a>, Jeremy Holcomb. How to design games. If you're a game designer, or thinking of publishing a game, you need to read this book. If you want to see how to write a concise instruction manual for a complex project, you should read this book. My <a href="https://adam.shostack.org/blog/2019/04/the-white-box-essays/">Longer review</a>.</li>
<li><a href="https://amzn.to/35OrajB" target="_blank" rel="noopener">Meeples Together</a>, Christopher Allen & Shannon Appelcline. The first analysis of cooperative game mechanics. I think that cooperation is incredibly important in making games with a purpose (aka serious games, teaching games.). Having a book that collects the games, summarizes them, and draws lessons from them is an important step forward for the genre, and thus for us as game designers. They also have a <a href="https://amzn.to/35OrajB" target="_blank" rel="noopener">blog</a>.</li>
<li><a href="https://www.justsecurity.org/63708/word-searchable-version-of-mueller-report/" target="_blank" rel="noopener">Report on the Investigation in the 2016 Presidential Election</a>, Robert Mueller. If you haven't read this, you might be confused as to why someone can say "no collusion." (It's because collusion is not a legal term, and so Mueller and his team do not attempt to reach a conclusion.). Reminder, here are the <a href="https://en.wikipedia.org/wiki/Special_Counsel_investigation_(2017%E2%80%932019)" target="_blank" rel="noopener">34 people and three companies</a> indicted as a result of the investigation, including the President's National Security Advisor and Campaign Chairman.</li>
</ul>
<h3>Fiction</h3>
<ul>
<li><a href="https://amzn.to/37UrJdK">A Memory Called Empire</a>, Arkady Martine. Interesting hard SF about an ambassador from a culture that has memory recording devices.</li>
<li><a href="https://amzn.to/35KzSiM">Space Opera</a>, Catherynne M. Valente. Tremendously fun Douglas Adams style romp of the Galatic Eurovision contest.</li>
</ul>Goodbye, Feedburner2019-12-03T00:00:00Zhttps://shostack.org/blog/goodbye-feedburner/AdamThe time has come.<span class="single-post-excerpt"><!-- excerpt -->The time has come.<!-- /excerpt --></span>
<p>Over the years, a number of people set up Feedburner accounts to proxy RSS from our blogs into their system. I generally have no issue with people reading how they choose, but I cannot provide support or management. Google is end of lifing the old Feedburner, and for those of you reading via Feedburner RSS, I humbly ask that you update to <a href="https://adam.shostack.org/blog/feed/">https://adam.shostack.org/blog/feed/</a> or <a href="https://adam.shostack.org/blog/comments/feed/">https://adam.shostack.org/blog/comments/feed/</a> (with comments).</p>Empirical Evaluation of Secure Development Processes2019-12-07T00:00:00Zhttps://shostack.org/blog/empirical-evaluation-of-secure-development-processes/AdamEarlier this year, I helped to organize a workshop at Schloss Dagstuhl on Empirical Evaluation of Secure Development Processes. I think the workshop was a tremendous success.<span class="single-post-excerpt"><!-- excerpt -->Earlier this year, I helped to organize a workshop at Schloss Dagstuhl on Empirical Evaluation of Secure Development Processes. I think the workshop was a tremendous success.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2019/dagstuhl.png" alt="screenshot of first page of paper cited in the post" class="single-post-main-img" />
<p>Earlier this year, I helped to organize a <a href="https://www.dagstuhl.de/en/program/calendar/semhp/?semnr=19231" target="_blank" rel="noopener">workshop</a> at <a href="https://www.dagstuhl.de/" target="_blank" rel="noopener">Schloss Dagstuhl</a> on Empirical Evaluation of Secure Development Processes. I think the workshop was a tremendous success, we've already seen publications inspired by it, such as <cite><a href="https://arxiv.org/pdf/1911.12139.pdf" target="_blank" rel="noopener">Moving Fast and Breaking Things: How to stop crashing more than twice</a></cite>, and I know there's more forthcoming.</p>
<p>I'm also pleased to say that the workshop report is now available at <a href="https://dx.doi.org/10.4230/DagRep.9.6.1" target="_blank" rel="noopener">https://dx.doi.org/10.4230/DagRep.9.6.1</a>. The framing of the workshop (from the announcement) was:</p>
<blockquote> The problem of how to design and build secure systems has been long-standing. For example, as early as 1978 Bisbey and Hollingworth[6] complained that there was no method of determining what an appropriate level of security for a system actually was. In the early years various design principles, architectures and methodologies were proposed: in 1972 Anderson described the “reference monitor” concept, in 1974 Saltzer described the “Principle of least privilege”, and in 1985 the US Department of Defense issued the Trusted Computer System Evaluation Criteria.
<br /><br />
Since then, although much progress has been made in software engineering, cybersecurity and industrial practices, much of the fundamental scientific foundations have not been addressed – there is little empirical data to quantify the effects that these principles, architectures and methodologies have on the resulting systems.
<br /><br />
This situation leaves developers and industry in a rather undesirable situation. The lack of this data makes it difficult for organizations to effectively choose practices that will cost-effectively reduce security vulnerabilities in a given system and help development teams achieve their security objectives. There has been much work creating security development lifecycles...</blockquote>
<p>Also, I am quite pleased that Dagstuhl takes a very open approach — the report is licensed as <a href="https://creativecommons.org/licenses/by/3.0/" target="_blank" rel="noopener">CC-BY-3</a>.</p>Encryption & Privacy Policy and Technology2019-12-11T00:00:00Zhttps://shostack.org/blog/encryption-privacy-policy-and-technology/AdamA few tidbits in recent news.<span class="single-post-excerpt"><!-- excerpt -->A few tidbits in recent news.<!-- /excerpt --></span>
<p>The Open Technology Institute has an Open Letter to Law Enforcement in the U.S., UK, and Australia: Weak Encryption Puts Billions of Internet Users at Risk. (<a href="https://www.newamerica.org/oti/press-releases/open-letter-law-enforcement-us-uk-and-australia-weak-encryption-puts-billions-internet-users-risk/" target="_blank" rel="noopener">press release</a>, <a href="https://newamericadotorg.s3.amazonaws.com/documents/Coalition_Response_Letter_-_Encryption_DOJ_event_and_letter_to_Facebook.pdf" target="_blank" rel="noopener">letter</a>.) I am pleased to be one of the signers.</p>
<p>In closely related news, nominations for the <a href="https://www.lightbluetouchpaper.org/2019/12/10/2020-caspar-bowden-award/" target="_blank" rel="noopener">2020 Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies</a> are open.</p>
<p>Lastly, the Port of Seattle is considering putting rules in place around facial recognition technology. The Identity Project has <a href="https://papersplease.org/wp/2019/12/03/seattle-port-commission-to-consider-rules-for-airport-facial-recognition/" target="_blank" rel="noopener">extensive documentation</a>, and I generally concur with their recommendations. Additionally, as I read them, I realized that perhaps there's an open question of "what's the harm from a photograph. My <a href="https://shostack.org/files/essays/Shostack-Seattle-CBP-biometrics.pdf" target="_blank" rel="noopener">comments</a> to the Port Commission are a start at grappling with that question. (After I wrote that, the Port approved a set of guidelines, covered <a href="https://www.geekwire.com/2019/seattle-port-officials-approve-face-recognition-guardrails-feds-waffle-controversial-tech/" target="_blank" rel="noopener">here</a> for the use of facial recognition. I would still love your feedback on how to think about the harm from a photograph.)</p>Star Wars Episode 9 is a week away!2019-12-13T00:00:00Zhttps://shostack.org/blog/star-wars-episode9-week-away/AdamAlmost there...<span class="single-post-excerpt"><!-- excerpt -->Almost there...<!-- /excerpt --></span>
<p>Emily Asher-Perrin has some of the most interesting writing on the Star Wars universe. I like her analysis of where Rey may come from in <a href="https://www.tor.com/2019/12/12/rey-should-choose-to-adopt-the-skywalker-name-not-be-retconned-into-the-family/" target="_blank" rel="noopener">Rey Should Choose to Adopt the Skywalker Name, Not Be Retconned Into the Family</a>. I half look forward to the day when Disney assimilates her into the official writing team. The stories will get better, and we'll lose her analysis.</p>
<p>Also, from last May, my own <a href="https://adam.shostack.org/blog/2019/05/episode-9-spoilers/">Episode 9 Spoilers</a>. But if Ms. Asher-Perrin is right, then the universe goes out of balance again.</p>Echo, Threat Modeling and Privacy2019-12-30T00:00:00Zhttps://shostack.org/blog/echo-threat-modeling-and-privacy/AdamI'm featured in (local NPR Affiliate) KUOW's Primed: Season 3, Episode 8.<span class="single-post-excerpt"><!-- excerpt -->I'm featured in (local NPR Affiliate) KUOW's Primed: Season 3, Episode 8.<!-- /excerpt --></span>
<p>I'm featured in (local NPR Affiliate) KUOW's <a href="https://www.kuow.org/stories/primed-season-3-episode-8" target="_blank" rel="noopener">Primed: Season 3, Episode 8</a>. I appreciate how the sense of fun that many security people bring to their work comes through. For me, it was fun learning about how Elevation of Privilege works for non-techies. (Spoiler: not super-well, you need to select the cards pretty carefully. Maybe there's another game there?). To be clear — I enjoyed the podcast, and think the story comes through well despite that.</p>Threat Modeling Thursday: Machine Learning2020-01-02T00:00:00Zhttps://shostack.org/blog/tmt-machine-learning/AdamFor my first blog post of 2020, I want to look at threat modeling machine learning systems.<span class="single-post-excerpt"><!-- excerpt -->For my first blog post of 2020, I want to look at threat modeling machine learning systems.<!-- /excerpt --></span>
<p>[Update Jan 16: Victor of the Berryville Machine Learning Security blog has some interesting analysis <a href="https://berryvilleiml.com/2020/01/16/on-recent-microsoft-and-nist-ml-security-documents/" target="_blank" rel="noopener">here</a>. That brings up a point I forgot to mention here: it would be great to name and version these models so we can discuss them more precisely.]</p>
<p>Microsoft recently released a set of documents including "<a href="https://docs.microsoft.com/en-us/security/threat-modeling-aiml" target="_blank" rel="noopener">Threat Modeling AI/ML Systems and Dependencies</a>" and "<a href="https://docs.microsoft.com/en-us/security/failure-modes-in-machine-learning" target="_blank" rel="noopener">Failure Modes in Machine Learning</a>" (the later also available in a more printer-friendly version at <a href="https://arxiv.org/abs/1911.11034v1" target="_blank" rel="noopener">arxiv</a>.). These build on last December's "<a href="https://docs.microsoft.com/en-us/security/securing-artificial-intelligence-machine-learning" target="_blank" rel="noopener">Securing the Future of Artificial Intelligence and Machine Learning at Microsoft</a>."</p>
<p>First and foremost, I'm glad to see Microsoft continuing its long investment in threat modeling. We cannot secure AI systems by static analysis and fuzz testing alone. Second, this is really hard. I've been speaking with the folks at OpenAI (eg, "<a href="https://d4mucfpksywv.cloudfront.net/GPT_2_August_Report.pdf" target="_blank" rel="noopener">Release Strategies and theSocial Impacts of Language Models</a>,") and so I've had firsthand insight into just how hard a problem it is to craft models of 'what can go wrong' with AI or ML models. As such, my criticisms are intended as constructive and said with sympathy for the challenges involved.</p>
<p>In "Securing the Future," the authors enumerate 5 engineering challenges:</p>
<ol>
<li>Incorporating resilience and discretion, especially as we handle voice, gesture and video inputs.</li>
<li>Recognizing bias.</li>
<li>Discerning malicious input from unexpected "black swans".</li>
<li>Built-in forensic capability.</li>
<li>Recognizing and safeguarding sensitive data, even when people don't categorize it as such</li>
</ol>
<p>I would have liked to see the first two flow more into this new work.</p>
<p>In "<cite>Failure Modes</cite>," the authors combine previous work to create a new set of 11 attacks and 6 failures. The attacks range from "perturbation attacks" to "Attacking the ML supply chain" or "exploiting software dependencies." I like the attempt to unify the attacks and failure modes into a single framework. I do think that the combination of attacks calls out for additional ways to slice it. For example, the people who are concerned about model inversion are likely not the same as those who are going to worry about software dependencies, unless there's a separated security team. (See more below on the threat modeling document, but in general, I've come to see failure to integrate security into mainline development as an inevitable precursor to insecure designs, and conflict over them.)</p>
<p>One of the ways in which the Failure Modes document characterizes attacks is a column of "Violates traditional notion of access/authorization," or "Is the attacker [technologically] misusing the system?" This column is a poster child for why diagrams are so helpful in threat modeling.</p>
<p>For example, the very first filled out attack (S.1, Perturbation) has a scenario of "Noise is added to an X-Ray image, which makes the predictions go from normal scan to abnormal," and "no special privileges are required..to perform the attack." My mental model of an X-Ray analysis system is roughly sketched here:</p>
<p><img src="https://shostack.org/blog/img/2019/perturbation-threat-model.png" alt="Perturbation threat model" title="perturbation-threat-model.png" width="450" height="254" border="0" /></p>
<p>Even if the X-Ray file goes to Instagram, and even if that upload is a DICOM file rather than a png, once the attacker has tampered with it, they can't re-insert the image without special privileges to write to the clinical images data store. Additionally, my take is that the attacker is engaged in at least one traditional technical misuse (tampering). Now, generally, I would not expect a STRIDE-centered analysis to result in discovery of perturbation attacks without special training for the analysts. However, I do think that showing the system model under consideration, and the location of the attackers, would clarify this part of the model. Another approach would be to precisely describe what they mean by 'misuse' and 'traditional notions.'</p>
<p>That brings me to the third document, "<cite>Threat Modeling AI/ML Systems</cite>," which "supplements existing SDL threat modeling practices by providing new guidance on threat enumeration and mitigation specific to the AI and Machine Learning space." I would expect that the documentation would tie to <a href="https://www.microsoft.com/en-us/securityengineering/sdl/threatmodeling" target="_blank" rel="noopener">current practices are</a>, and much of it seems to be refinements to the "diagram" and "identify" steps, along with "changing the way you view trust boundaries," and a deep focus on reviews.</p>
<p>ML-specific refinements to diagramming and enumeration are helpful. For example, questions such as "are you training from user-supplied inputs?" seem to be re-statable as "did you diagram? ("Create a data flow diagram that includes your training data?"). Threat identification seems to include both the 11 specific attacks mentioned above, and "Identify actions your model(s) or product/service could take which can cause customer harm online or in the physical domain." I like the implied multi-layered investigation: one level is very generalized, the other very specific. We might call that a "multi-scale view" and that's an approach I haven't seen explored much. At the same time, the set of review questions for identifying customer harm seems very Tay-focused: 3 of the 8 questions refer to trolling or PR backlash. I am surprised to see very little on bias in either the review questions, attacks or unintentional failures. Biases, like racial differences in facial recognition effectiveness, or rating racially stereotyped names being rated differently in a resume screening app, are an important failure mode, and might be covered under 'distributional shifts.' But bias can carry from training data which is representative, and so I think that's a poor prompt. It's also possible that Microsoft covers such things under another part of the shipping process, but a little redundancy could be helpful.</p>
<p>The "change the way you view trust boundaries" section puzzles me. It seems to be saying "make sure you consider where your training data is coming from, and be aware there are explicit trust boundaries there." I'm unsure why that's a change in how I view them, rather than ensuring that I track the full set.</p>
<p>Lastly, a lot of the 'threat modeling' document is focused on review and review questions. I've written about reviews versus conversations before ("<a href="https://shostack.org/blog/tmt-talking-dialogue-and-review/">Talking, Dialogue and Review</a>") and want to re-iterate the importance of threat modeling early. The heavy focus on review questions implies that threat modeling is something a security advisor leads at the end of a project before it goes live, and it would be a shame if that's a takeaway from these documents. The questions could be usefully reframed into the future. For example:</p>
<ul>
<li>If your data is poisoned or tampered with, how will you know?</li>
<li>Will you be training from user-supplied inputs?</li>
<li>What steps will you take to ensure the security of the connection between your model and the data?</li>
<li>What will you do to onboard each new data source? (This is a new question, rather than a re-frame, to illustrate an opportunity from early collaboration.)</li>
</ul>
<p>Again, the question of how we can analyze the security properties of a ML system before that system is fielded is an important one. Very fundamental questions, like how to scope such analysis remain unanswered. What the 'best' form of analysis might be, and for which audience, remains open. Is it best to treat these new attacks as subsets of STRIDE? (I'm exploring that approach in my recent Linkedin Learning classes). Alternately, perhaps it will be best, as this paper does, to craft a new model of attacks.</p>
<p>The easiest prediction is that as we roll into the 2020s these questions will be increasingly important for many projects, and I look forward to a robust conversation about how we do these things in structured, systematic, and comprehensive ways.</p>
<p>Update, Feb 28: See also <a href="https://shostack.org/blog/tmt-biml-machine-learning-risk-framework/">BIML Machine Learning Risk Framework</a>.</p>
Threat Modeling Thursday: The Human Element2020-01-09T00:00:00Zhttps://shostack.org/blog/tmt-the-human-element/AdamI joined Caroline Wong on the Humans of Infosec Podcast to discuss The Human Element of Threat Modeling.<span class="single-post-excerpt"><!-- excerpt -->I joined Caroline Wong on the Humans of Infosec Podcast to discuss The Human Element of Threat Modeling.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/humans-of-infosec-podcast.png" alt="Humans of Infosec Podcast" class="single-post-main-img" />
<p>Today's Threat Modeling Thursday is a podcast! I'm on The Humans of InfoSec Podcast, with Caroline Wong: <a href="https://soundcloud.com/humans-of-infosec/episode-34-adam-shostack-the-human-element-of-threat-modeling" target="_blank" rel="noopener">The Human Element of Threat Modeling</a>.</p>100,00 Moon Shots2020-01-12T00:00:00Zhttps://shostack.org/blog/10000-moon-shots/AdamAmazing and impressive photographs of the moon.<span class="single-post-excerpt"><!-- excerpt -->Amazing and impressive photographs of the moon.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/moonshot.jpg" alt="high definition close-up view of the moon's surface" class="single-post-main-img" />
<p>Andrew McCarthy has an amazing and impressive photographs of the moon on <a href="https://www.instagram.com/cosmic_background/" target="_blank" rel="noopener">Instagram</a>.</p>
<p>To call these photographs is somewhat provocative. In his trilogy, Ansel Adams focuses (sorry! Not sorry) on <a href="https://amzn.to/36UvzCQ" target="_blank" rel="noopener">the camera</a>, <a href="https://amzn.to/35O5ZO8" target="_blank" rel="noopener">the negative</a>, and <a href="https://amzn.to/2QM2SCk" target="_blank" rel="noopener">the print</a>. In The Negagive, he specifically discusses exposing film to light in controlled ways that caused chemical reactions on the film, and it remains common to hear photographers talk of 'an exposure', in much the same way that we dial phones.</p>
<p>In that sense, this is 100,000 exposures, and the development process brings them together in what we might have previously called a montage. </p>
<p>But in a very real sense, your camera stopped exposing film to light a long time ago, and started performing tricks. The conversion of RAW to jpg or other compressed format is not just tossing a bunch of light information from the sensor, but also a bit of saturation, sharpening, and other enhancements. (<del>If you have a fancy modern camera</del> If your fancy modern camera allows you to select scene modes, try setting the camera to "Faithful" or "neutral." That will expose some bits of what the camera does, but <a href="https://www.cambridgeincolour.com/tutorials/white-balance.htm" target="_blank" rel="noopener">white balance</a> is still being estimated and set behind the scenes.)</p>
<p>Is that "exposure" or "development?" Well, all models are wrong, some models are useful.</p>
<p>Your camera gets even more clever when it develops HDR (high dynamic range) photos: it takes 3 or 4 exposures in rapid succession, and combines them up to give you a high impact bit of trash. (Okay, I have opinions about the overuse of HDR. The "Your money back" button is on the side of the page.) Similarly, a panorama is developmental magic where a phone knows how quickly the sensor is moving, how much it's bouncing up and down, and uses that information as it stitches images together.</p>
<p>I don't even know how to explain the computational magic in portrait mode. There's recognition of a face or faces, separation from background, blurring of the background, and more, all in real time. ("What's an exposure again, grandma?")</p>
<p>And so, when Mr. McCarthy engages in developing magic that my <del>computer</del> <del>phone</del> camera doesn't automatically do for me, it's not that far from the developing that is done, either in the camera or in photoshop.</p>
<p>So... nice exposures. Enjoy!</p>
<p>Via <a href="https://www.diyphotography.net/this-incredible-image-of-the-moon-at-the-start-of-2020-is-a-blend-of-100000-photos/" target="_blank" rel="noopener">DIYPhotography</a>.</p>Enter the SpudNet2020-01-15T00:00:00Zhttps://shostack.org/blog/enter-the-spudnet/AdamA new game to teach networking and security concepts.<span class="single-post-excerpt"><!-- excerpt -->A new game to teach networking and security concepts.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/spudnet-cards-spread.png" alt=" spread of cards from new game Spudnet" class="single-post-main-img" />
<p><a href="https://www.kickstarter.com/projects/codomo/potato-pirates-2-enter-the-spudnet?ref=9avxec" target="_blank" rel="noopener">Spudnet</a> is a new game to teach networking and security concepts. The creators were kind enough to send me a pre-production copy, and I can tell you - it looks and feels super solid, and, more importantly, it plays well.</p>
<p>The Kickstarter has already met its goals, and while all Kickstarters have risk, the creators clearly have production down.</p>Cryptographic Excitement2020-01-16T00:00:00Zhttps://shostack.org/blog/cryptographic-excitement/AdamA couple big stories in the realm of cryptography that got me excited.<span class="single-post-excerpt"><!-- excerpt -->A couple big stories in the realm of cryptography that got me excited.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/slow-exposure-traffic.jpg" alt="slow exposure of busy traffic intersection at night" class="single-post-main-img" />
<p>In the last few days, we've seen two big stories in the realm of cryptography. The first is that SHA-1 breaks <a href="https://threatpost.com/exploit-fully-breaks-sha-1/151697/" target="_blank" rel="noopener">are now practical</a>, and those practical breaks impact things like PGP and git.</p>
<p>If you have code that depends on SHA-1, its time to fix that. If you have a protocol that uses SHA1, you need to rapidly version cycle.</p>
<p>Thinking a bit more strategically, SHA-1 was designed by the NSA, and published in 1993. It stood for roughly 25-30 years, which is impressive for a cryptographic algorithm, and reminds us that the NSA is very good at algorithm design. It's a shame that they lost sight of their defensive mission for many years.</p>
<p>The second is that Windows doesn't properly validate cryptographic certificates. Surprisingly, this was reported to Microsoft by the NSA, who decided to <a href="https://media.defense.gov/2020/Jan/14/2002234275/-1/-1/0/CSA-WINDOWS-10-CRYPT-LIB-20190114.PDF" target="_blank" rel="noopener">take credit for it</a>. Bruce Schneier <a href="https://www.schneier.com/blog/archives/2020/01/critical_window.html" target="_blank" rel="noopener">reports</a> some interesting details, including that the Agency's cybersecurity director did a press briefing(!), and this is not the first vuln reported by NSA to Microsoft. It's not that surprising to me that this was reported to Microsoft — the weaponization potential is massive, and <a href="https://arstechnica.com/information-technology/2020/01/researcher-develops-working-exploit-for-critical-windows-10-vulnerability/" target="_blank" rel="noopener">proof of concepts</a> exist. Breaking certificate authentication means you can use any update channel as a distribution point. To me, breaking updates, breaking people's trust in updates and updatability is close to bombing hospitals. It's simply outside what any civilized participant will do. The collateral damage is too high. In theory, the US Government runs a "vulnerabilities equities process" to balance interests, and there's a solid explanation of that process <a href="https://www.oodaloop.com/ooda-original/2020/01/14/vulnerabilities-the-search-for-buried-treasure-and-the-us-government/" target="_blank" rel="noopener">here</a>.</p>
<p>To go back to the SHA-1 story, NSA has been suffering from a lack of public trust as a result of putting back doors into things like <a href="https://en.wikipedia.org/wiki/Dual_EC_DRBG" target="_blank" rel="noopener">EC_DRBG</a>. As a result, we can no longer trust their input to standards processes, and one, two, or two hundred vulnerability reports, while appreciated and appropriate, are not going to fix that.</p>Threat Model Thursday: Files2020-01-23T00:00:00Zhttps://shostack.org/blog/tmt-files/AdamHave you considered the idea that “Files are Fraught With Peril” lately? Maybe you should...<span class="single-post-excerpt"><!-- excerpt -->Have you considered the idea that “Files are Fraught With Peril” lately? Maybe you should...<!-- /excerpt --></span>
<p>There's a fascinating talk by Dan Luu, "<a href="https://danluu.com/deconstruct-files/" target="_blank" rel="noopener">Files are Fraught With Peril.</a>" The talk itself is fascinating, in a horrifying, nothing works, we're going to give up and raise goats now sort of way.</p>
<p>He starts from the startling decision of Dropbox to drop support for all Linux filesystems except Ext4. This surprising decision stems from the fact that a filesystem is a leaky abstraction, The interaction between performance and reliability means that fsync behaves strangely. His article is worth reading as a whole to see the fractal horror involved (and he doesn't even consider virtual machines, which I would expect would exacerbate the situation).</p>
<p>His article is worth reading in full, but as I read it, I considered: what do these failures of our mental models of filesystems mean for threat modeling? The filesystem model is leaky, and the leaks impact security (specifically, the integrity property, in that read returns precisely what was written). So what does that mean: is the operating system a tampering threat? Do we throw out threat modeling because a set of models is dangerously wrong?</p>
<p>I do not think the OS is a tampering threat, nor should we throw out threat modeling. We need to consider that the bugs exposed by the failure of storage models are generally that: bugs, whose violation of our expectations leads to some shocking results ("everyone agrees Postgres should crash?!?" Read Mr. Luu's article, please.) But still: bugs. Things that can be fixed or addressed. Threat modeling still helps us find exciting design issues, even while accepting that all models are wrong.</p>Threat Model Thursday: Games2020-02-06T00:00:00Zhttps://shostack.org/blog/tmt-games/AdamFor reasons I can't quite talk about yet, this has been a super busy time, and I look forward to sharing the exciting developments that have kept me occupied.
<p>For reasons I can't quite talk about yet, this has been a super busy time, and I look forward to sharing the exciting developments that have kept me occupied.</p>
<p>In the meantime, my friends at Agile Stationery have transcribed a talk that Mark Vinkovits and I gave at AppSec Cali last year. Their posts are at: <a href="https://agilestationery.co.uk/blogs/pp/game-on-adding-privacy-to-threat-modelling" target="_blank" rel="noopener">Why card gaming helps teams model security and privacy threats (part 1)</a>, and <a href="https://agilestationery.co.uk/blogs/pp/why-card-gaming-helps-teams-model-security-and-privacy-threats-transcript-part-2" target="_blank" rel="noopener">part 2</a>. Part 3 will post later.</p>Repudiation Now Live on Linkedin Learning2020-02-11T00:00:00Zhttps://shostack.org/blog/repudiation-now-live-on-linkedin-learning/AdamMy course, “Repudiation in Depth” is now live on Linkedin Learning. This is the fourth course in my Learning Threat Modeling series.<span class="single-post-excerpt"><!-- excerpt -->My course, “Repudiation in Depth” is now live on Linkedin Learning. This is the fourth course in my Learning Threat Modeling series.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/repudiation.png" alt="screenshot from Repudiation course" class="single-post-main-img" />
<p>My course, "<a href="https://www.linkedin.com/learning/threat-modeling-repudiation-in-depth/the-threat-of-repudiation?u=2125562">Repudiation in Depth</a>" is now live on Linkedin Learning. This is the fourth course I've created, starting with "<a href="https://www.linkedin.com/learning/learning-threat-modeling-for-security-professionals/a-simple-approach-to-threat-modeling?u=2125562">Learning Threat Modeling</a>", and courses on "<a href="https://www.linkedin.com/learning/threat-modeling-spoofing-in-depth/mitigate-spoofing-threats?u=2125562">spoofing</a>", "<a href="https://www.linkedin.com/learning/threat-modeling-tampering-in-depth/mitigate-tampering-threats?u=2125562">tampering</a>", and now, repudiation. (You can probably see where this is going, and I'm making great strides towards the goal. Sorry not sorry.)</p>
<p>I'd say it's not my fault that this course took so long, but my producers have receipts, so what can I say?</p>
<p>What I can say is that putting together great, in depth content like this takes time.</p>Blackhat and Human Factors2020-02-26T00:00:00Zhttps://shostack.org/blog/blackhat-and-human-factors-2020/AdamAs a member of the BlackHat Review Board, I would love to see more work on Human Factors presented there.<span class="single-post-excerpt"><!-- excerpt -->As a member of the BlackHat Review Board, I would love to see more work on Human Factors presented there.<!-- /excerpt --></span>
<p>As a member of the BlackHat Review Board, I would love to see more work on Human Factors presented there. Over the past few years, we’ve developed an interesting track with good material <a href="https://www.blackhat.com/us-19/briefings/schedule/#track/human-factors" target="_blank" rel="noopener">year</a> over <a href="https://www.blackhat.com/us-18/briefings/schedule/#track/human-factors" target="_blank" rel="noopener">year</a>. The 2020 <a href="https://www.blackhat.com/call-for-papers.html" target="_blank" rel="noopener">call for papers is open</a> and closes April 6th. </p>
<p>I wrote a short <a href="https://adam.shostack.org/blog/2016/04/humans-in-security-blackhat-talks/">blog post</a> on what we look for.</p>
<p>The BlackHat CFP calls for work which has not been published elsewhere. We prefer fully original work, but will consider a new talk that explains work you’ve done for the BlackHat audience. Oftentimes, Blackhat does not count as “Publication” in the view of academic program committees, and so you can present something at BlackHat that you plan to publish later. (You should of course check with the other venue, and disclose that you’re doing so to BlackHat.)</p>
<p>If you’re considering submitting, I encourage you to read all three recommendations posts at <a href="https://usa-briefings-cfp.blackhat.com/" target="_blank" rel="noopener">https://usa-briefings-cfp.blackhat.com/</a></p>Threat Model Thursday: BIML Machine Learning Risk Framework2020-02-27T00:00:00Zhttps://shostack.org/blog/tmt-biml-machine-learning-risk-framework/AdamRisk Framework and Machine Learning<span class="single-post-excerpt"><!-- excerpt -->Risk Framework and Machine Learning<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/tmt-biml-ml-risk-framework.png" alt=" " class="single-post-main-img" />
<p>The Berryville Institute of Machine Learning (BIML) has released "<a href="https://berryvilleiml.com/results/" target="_blank" rel="noopener">An Architectural Risk Analysis of Machine Learning Systems</a>." This is an important step in the journey to systematic, structured, and comprehensive security analysis of machine learning systems, and we can contrast it with the work at Microsoft I <a href="https://shostack.org/blog/tmt-machine-learning/">blogged about last month</a>. As always, my goal is to look at published threat models to see what we can learn. Also, I'm following the authors' language here - Dr. McGraw and I have had discussions about the relationship between terms, and I don't think that there's much value in a fight over ARA vs TM, attack surface vs trust boundary, or threat vs risk.</p>
<p>BIML has released the work in two ways, an <a href="https://berryvilleiml.com/interactive/" target="_blank" rel="noopener">interactive risk framework</a> contains a subset of the information in the <a href="https://berryvilleiml.com/results/ara.pdf" target="_blank" rel="noopener">PDF version</a>. I'll focus on the PDF version (Version 1.0 (1.13.20)), which is formally divided into two parts. Part One is titled ML Security Risks, and Part Two is a set of principles. The authors communicate a nuanced understanding of those principles where they say "Some caveats are in order. No list of principles like the one presented here is ever perfect..." I have little to add except I would like a guarantee that if I follow their principles, my system will be secure (😀). And so I'll focus on Part One, which we can usefully divide into sections.</p>
<p>The first section of Part One is a threat analysis which follows the four question framework. They have a model of ML systems (reproduced at the top of this post). They address what can go wrong in a list of 78 specific risks, organized into both a top ten list and risks to each of the 9 components of their system. The by-component analysis also has sets of "associated controls", some of which tie to specific risks, others to specific components. This section doesn't contain controls for the system as a whole. </p>
<p>The specific risks are challenging in several ways. The first challenge is specificity. For example, risks to the raw data in the world include "raw:1:data confidentiality," and "raw:3:storage." I would think that risks to data confidentiality include those to storage, but a substantial part of what's covered in the former seems to be about how the data flows through and can be extracted from the outputs. I'm specifically not commenting that the risk applies in more than one place - rather I am puzzled at which issues I should be looking at in data confidentiality vs data storage. I would think that raw:3:storage would incorporate both confidentiality and integrity. The generality means that many of the risks (such as an attacker tampering with your inputs) recur, and that makes the list feel redundant and perhaps challenging to use. This also illustrates a tradeoff: had the authors applied their analysis to data flows, as well as processes and external entities, then my concern about overlap would be smaller, while my concern about redundancy would be larger.</p>
<p>There's also a challenge that the risks are quite dis-similar. Some of these seem like things attackers could do (text encoding), others like things which could go wrong (storage), others more the effects of things going wrong (legal). There are also some oddly named risks: tampering with input data is labeled 'trustworthiness'. Encoding integrity seems to relate to data selection. Encoding integrity relates closely (in my mind) to sensor risks. There are similar questions we can ask of the other parts of the system.</p>
<p>However, these are challenges, not fatal flaws. They can also be seen as usability challenges, the sort of thing which can be tested and measured, because we can see how well the document works for the intended audiences of ML practitioners and engineers, security practitioners, and ML security people. (I suspect it will work better for group 3, and again, that's a testable hypothesis.)</p>
<p>Section B is a map of attacks. (They say "You can think of a specific attack as a coordinated exploit of a set of risks that results in system compromise. For the most part we will ignore attacks on ML infrastructure or attacks that specifically circumvent ML-based defense." I think they are here using 'system compromise' in the sense of 'mission compromise,' rather than 'popping a shell.'</p>
<p>Section C is a very brief attack surface estimation. (Probably appropriately brief.)</p>
<p>Section D is a set of ten "system wide risks and broad concerns." I think of this as a second threat analysis, looking at risks that are, well, system-wide, rather than component-specific. It is a good complement to section A, and in the title and orientations of the risks are much more self-similar than in section A.</p>
<p>All of that said, let's step back and take a look at what's here. There are three lists of risks:</p>
<ol>
<li>A Top Ten (extracted from the 78)</li>
<li>78 specific risks, which I'm going to dub the BIML-78</li>
<li>10 System-wide risks</li>
</ol>
<p>It would be helpful if the authors would <del>explicitly tie the 78 to one or the other or both lists of ten, and to</del> apply the Spinal Tap treatment to one or the other, and make it go to 11, to mitigate risks of confusion. There's an implication of only including the BIML-78 in the interactive page, and I am unsure why that means. Also, the lists need names. (See <a href="https://shostack.org/blog/promoting-threat-modeling-work/">promoting threat modeling work</a>, bullet 2.)</p>
<p>Obviously, this set is different thing than STRIDE. Both help us answer the question of 'what can go wrong?' so in that frame, they're similar. But the BIML-78 (v1) seems more like a library than a mnemonic, checklist or prompt. (Somewhat entertainingly, there are 78 threat cards in an Elevation of Privilege deck - there's obviously a need for a card deck!)</p>
<p>It is interesting that the threats are tied to elements of the system design - that's a somewhat unusual property, and one that we'll also see in next week's TM Thursday post.</p>
<p>It would also be helpful to hear when and how each list (especially the long one) is intended to be used. The analysis seems like a longer, more elaborate project than a session using STRIDE to analyze a system. Is the plan a set of meetings in which senior engineers discuss the threats? </p>
<p>In summary, I think that this is more usefully viewed as an analysis, rather than an analytic tool, and that its core could be built into an analytic tool.</p>
<p>Update: The top ten are explicitly pulled from the BIML-78, and tagged as such in the text. My apologies.</p>
Threat Modeling Training at Blackhat 20202020-03-02T00:00:00Zhttps://shostack.org/blog/threat-modeling-training-at-blackhat-2020/AdamAt Blackhat this summer, I'll be offering threat modeling training at Blackhat. Last year, these sold out quickly, so don't wait!<img src="https://shostack.org/blog/img/2020/blackhat-2020-training-banner.png" alt="Trainer at blackhat 2020" class="single-post-main-img" />
<p>At Blackhat this summer, I'll be offering threat modeling training at Blackhat. Last year, these sold out quickly, so don't wait!</p>
<blockquote>This hands-on, interactive class will focus on learning to threat model by executing each of the steps. Students will start threat modeling early on day 1, followed by an understanding of traps that they might fall into, and then progressing through the four questions: what are we working on, what can go wrong, what are we going to do about it and did we do a good job. This is capped off with an end to end exercise that brings the skills together.</blockquote>
<p><a href="https://www.blackhat.com/us-20/training/schedule/index.html#applied-threat-modeling-19074" target="_blank" rel="noopener">August 1-2</a>, and <a href="https://www.blackhat.com/us-20/training/schedule/index.html#applied-threat-modeling-190741577138825" target="_blank" rel="noopener">August 3-4</a>.</p>Amazon's 'Alexa Built-in' Threat Model2020-03-05T00:00:00Zhttps://shostack.org/blog/amazons-alexa-built-in-threat-model/AdamExploring supply chain threat modeling with Alexa<span class="single-post-excerpt"><!-- excerpt -->Exploring supply chain threat modeling with Alexa<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/alexa-threat.jpeg" alt="amazon alexa units wearing dark glasses and black fedoras" class="single-post-main-img" />
<p>Amazon has released a set of documents, "<a href="https://developer.amazon.com/en-US/blogs/alexa/device-makers/2020/01/updates-to-device-security-requirements-for-alexa-built-in-products" target="_blank" rel="noopener">Updates to Device Security Requirements for Alexa Built-in Products</a>." I want to look at these as a specific way to express a threat model, which is threat modeling along the supply chain, talk about the proliferation of this different kind of model, and what it means for engineering. (More precisely, since I don't have an Amazon developer account, I'm going to look at the blog post, and infer some stuff about the underlying documentation.)</p>
<p>Alexa Built-In is a relatively new space for Amazon: offering APIs and a platform for consumers to access via someone else's devices. And Amazon is clearly thinking about what it means for their Alexa service to be accessed via, say, Sonos speakers, and they call out a set of seven required capabilities:</p>
<ol>
<li>Secure Boot</li>
<li>Secure Key Storage</li>
<li>Hardware-Based Cryptographic Engines</li>
<li>Up-to-Date and Operating Systems with Long-Term Support (LTS)</li>
<li>Host Hardening</li>
<li>Separation of Account Privileges</li>
<li>Threat Surface Reduction</li>
</ol>
<p>What they're saying is "we've thought about a set of threats, including someone replacing your boot code or stealing your keys, and we need you to act on those." The list includes a set of answers to 'what can go wrong,' and tells you what you need to do about it. For example, the full line reads "Secure Boot can be used to reduce the risk that a hacker can tamper with and gain a persistent foothold on their device."</p>
<p>We can frame that as a threat model without straining:</p>
<ul>
<li><strong>What are you* working on?</strong> Alexa Built-in devices.</li>
<li><strong>What can go wrong?</strong>? A hacker can tamper with and gain a persistent foothold.</li>
<li><strong>What are you going to do about it?</strong> Secure boot</li>
<li><strong>Did you do a good job?</strong> "We require device makers to submit a security assessment report before launch..."</li>
</ul>
<p>(*I'm swapping the form of the four questions from "we" to "you", which has all sorts of consequences I'm going to ignore for this post.)</p>
<p>If our analysis is more structured than brainstorming, then there's value in having skilled engineers analyze an idealized version of a system. They can catalog the threats that impact a the high-level design. (If we're just brainstorming, then it's hard to know if the analysis is worthwhile.). If they publish their high level design, then I can compare my high level design to theirs, and if they match, expect that my design inherits those threats. Better yet, here Amazon has said what they expect to be done about each.</p>
<p>But these lists of what you should do are not unique to Amazon. There's a tremendous amount of guidance for IoT makers, and the lists are not well aligned. For example, let's compare to the UK's "<a href="https://www.gov.uk/government/publications/code-of-practice-for-consumer-iot-security/code-of-practice-for-consumer-iot-security" target="_blank" rel="noopener">Code of Practice for consumer IoT security</a>." That has 13 guidelines. UK #3 roughly matches to Alexa's #4, and at first blush, number 4, 6, and 7 correspond directly. Nine UK guidelines, and three Alexa guidelines don't obviously line up. So someone making an Alexa device for sale in the UK has to deal with roughly 17 guidelines. The UK Code of Practice lists:</p>
<ol>
<li>No default passwords</li>
<li>Implement a vulnerability disclosure policy</li>
<li>Keep software updated (~A4) </li>
<li>Securely store credentials and security-sensitive data (=A2)</li>
<li>Communicate securely</li>
<li>Minimise exposed attack surfaces (=A7) </li>
<li>Ensure software integrity (=A1) </li>
<li>Ensure that personal data is protected</li>
<li>Make systems resilient to outages</li>
<li>Monitor system telemetry data</li>
<li>Make it easy for consumers to delete personal data</li>
<li>Make installation and maintenance of devices easy</li>
<li>Validate input data
</li></ol>
<p>Of course, there are not the only two sets of rules. Underwriter's Labs has the <a href="https://www.ul.com/resources/ul-cybersecurity-assurance-program-ul-cap">2900</a> series for Cybersecurity Assurance, DHS has <a href="https://www.dhs.gov/securingtheIoT" target="_blank" rel="noopener">IOT Security Guidance</a>, The FDA has <a href="https://www.fda.gov/regulatory-information/search-fda-guidance-documents/content-premarket-submissions-management-cybersecurity-medical-devices" target="_blank" rel="noopener">draft premarket guidance for cybersecurity</a>, which, importantly, require updatability, not on Amazon's Alexa requirements or the NCSC list. Other lists, such as Amazon AWS's list, are also different. ("<a href="https://aws.amazon.com/blogs/iot/ten-security-golden-rules-for-iot-solutions/" target="_blank" rel="noopener">Ten security golden rules for IoT solutions</a>.")</p>
<p>The differences in "what to do" indicate differences of one or more of implied architecture, analytic technique, and mitigative action. It would be helpful to both device makers and those creating new regulations if the threat model work product were more concretely revealed. (That is, what do you think these devices look like? What analysis techniques did you use?)</p>
<p>Some of these differences in the lists may reflect power differences: Amazon can say that you must do these things to be Alexa powered. The FDA can say 'you must do these things to sell your device, and perhaps the UK has a harder time demanding that devices meet its code of practice.</p>
<p>Our security engineering practices are just not that mature yet, and so some of this diversity may result in better security. Other parts of the diversity just add work. At each device maker, someone has to assess the requirements, find the commonalities, and decide what to do. (There may be a mapping document, but I was unable to find it.)</p>
<p>So with that, let me compare briefly to the BIML Risk Analysis, which I <a href="https://shostack.org/blog/tmt-biml-machine-learning-risk-framework/">talked about</a> last week. That document shows its work much more deeply, and the application of that thinking is harder to see. There's a real tension in how to balance between these, and I hope we see more documents that help us see what our choices look like. (Nominations welcome!)</p>Free Threat Modeling Training2020-03-17T00:00:00Zhttps://shostack.org/blog/free-threat-modeling-training/AdamWhile I can't fix things, I can at least make my LinkedIn courses free for a time.<span class="single-post-excerpt"><!-- excerpt -->While I can't fix things, I can at least make my LinkedIn courses free for a time.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/Threat-modeling-training-1.png" alt="LinkedIn Learning course: Threat Modeling: Spoofing In Depth" class="single-post-main-img" />
<p>The current situation is scary and anxiety-provoking, and I can't do much to fix that. </p>
<p>One thing I can do is give people a chance to learn, and so I'm making my Linkedin Learning classes free this week. (I'm told that each class is free for the day, so you'll need to watch each within a day of starting the course.)</p>
<p>These links should open the courses (and as I understand it, start the clock).</p>
<ul>
<li><a href="https://www.linkedin.com/learning/learning-threat-modeling-for-security-professionals?isLearningSubscriber=true&trk=feed-share_course_title_learning&lipi=urn%3Ali%3Apage%3Ad_flagship3_detail_base%3B22cj9k3tQIm%2FErXm46pxLA%3D%3D&licu=urn%3Ali%3Acontrol%3Ad_flagship3_detail_base-viewCourse&lici=fCI2HjzHQzmFRR64K9gkWA%3D%3D" target="_blank" rel="noopener">Learning Threat Modeling</a></li>
<li><a href="https://www.linkedin.com/learning/threat-modeling-spoofing-in-depth?isLearningSubscriber=true&trk=feed-share_course_title_learning&lipi=urn%3Ali%3Apage%3Ad_flagship3_detail_base%3BiD%2BjlFSgSDSPRhIU62fmoQ%3D%3D&licu=urn%3Ali%3Acontrol%3Ad_flagship3_detail_base-viewCourse&lici=9w4CyNWaT2ixM7G5g3poSg%3D%3D" target="_blank" rel="noopener">Spoofing in Depth</a></li>
<li><a href="https://www.linkedin.com/learning/threat-modeling-tampering-in-depth?isLearningSubscriber=true&trk=feed-share_course_title_learning&lipi=urn%3Ali%3Apage%3Ad_flagship3_detail_base%3BtwIdAbgISjepD8CunMeqmg%3D%3D&licu=urn%3Ali%3Acontrol%3Ad_flagship3_detail_base-viewCourse&lici=BdlVDAGfQ7%2BBDmL%2F3%2BXrsw%3D%3D" target="_blank" rel="noopener">Tampering in Depth</a></li>
<li><a href="https://www.linkedin.com/learning/threat-modeling-repudiation-in-depth?isLearningSubscriber=true&trk=feed-share_course_title_learning&lipi=urn%3Ali%3Apage%3Ad_flagship3_detail_base%3BROiCAoc6SGipD0Y2ckHdaQ%3D%3D&licu=urn%3Ali%3Acontrol%3Ad_flagship3_detail_base-viewCourse&lici=MN2dwZLcSVuj1Lzms5YzJg%3D%3D" target="_blank" rel="noopener">Repudiation In Depth</a></li>
</ul>Threat Modeling with Questionnaires2020-03-19T00:00:00Zhttps://shostack.org/blog/threat-modeling-with-questionnaires/AdamThis post comes from a conversation I had on Linkedin with Clint Gibler.<span class="single-post-excerpt"><!-- excerpt -->This post comes from a conversation I had on Linkedin with Clint Gibler.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/hand-completing-survey.jpg" alt="A hand using a pen to complete a survey" class="single-post-main-img" />
<p>This post comes from a <a href="https://www.linkedin.com/posts/clintgibler_quick-look-devsecops-state-of-the-union-activity-6635558428121137152-xDwW/" target="_blank" rel="noopener">conversation</a> I had on Linkedin with Clint Gibler. He wrote:</p>
<blockquote>One challenge I've heard from a number of companies is that, with say 3-5 AppSec engineers supporting 500 - 1000 devs, you can't TM every story, or even every epic. So what do you focus on?
<br /><br />
The high risk / most critical things. But what are those? It's not always easy to have visibility or even awareness of everything being built in fast moving, complex, large environments.
<br /><br />
One method discussed in a few talks (and I reference in mine) is having devs fill out self-service security questionnaires that provide some detail about the purpose of the service / new feature, what sensitive data it touches, potentially dangerous functionality it might have, etc. so that security engineers can do a meaningful TM of those high risk apps.</blockquote>
<p>As always, I'm picking this up to learn, not to criticize Clint for what he wrote. It is worthwhile to start by illuminating Clint's mental model of technical threat modeling work, where he implicitly characterize threat modeling as a heavyweight consulting engagement by appsec engineers. In that model, there are probably deliverables like a DFD and a list of threats. And while that's one fine model, there are other models, such as every engineer threat models, and the approach is lightweight. (I say technical work because there's also the interpersonal and inter-organizational work, and I'm not focused on those in this post.)</p>
<p>Let's think about what's really meant by "devs fill out self-service security questionnaires..." They're answering questions about the first two questions in threat modeling: "what are we working on," and "what can go wrong?" If the answers reach a threshold, then they consider what they're going to do about it.</p>
<p>And so, <strong>they're threat modeling using a questionnaire</strong>. In fact, every engineer (or scrum master) is threat modeling every feature. They're just using different tools than were in that initial mental model.</p>
<p>And so the question we can ask is not "what do you threat model", but "how can we best use the time available?" and even add, "...when many threat modeling activities turn out to provide assurance but not discovery of interesting threats?" Before I get there, I want to briefly explain that I'm avoiding the phrase "the highest ROI approach," because higher ROI activities might involve a high minimum investment, and in this scenario, avoiding that is the price of the seat at the table.</p>
<p>So with that, I'm modeling all questionnaires as the same, but that's obviously wrong. It's more useful to characterize the questionnaires. How long do they take? Are they a quick check in, or a tedious waste? How often does each element trigger? What's the result of those triggers? Are the triggers avoidable with other mechanisms? For example, can you <del>do static analysis</del> grep checkins to find obviously risky variables like SSN, CCN or COVID? Can we do a risk bounty, where the person who points out the highest risk code in a unit gets a thing? Can we make what I'll call "Big Wall Map" threat models part of scrum planning?</p>
<p>By "Big Wall Map" I mean a way of addressing what we're working on. There's a large map of the code on the wall, and scrum feature discussion starts with showing where the changes will take place. Alice might say "My code will change the way we calculate the ads, but still sending the ad request to this module, so there's no new data flow, and no new type of data crossing the trust boundary." And she's done. Billie might have a different change that adds a new data flow, and so she'll know (or be told) that she needs to do deeper analysis of what can go wrong. This is somewhat similar to a questionnaire, but the compliance check is done by the team and the scrum master. The analysis is dependent on the team. There's less effort, less output, less evidence. But to the question: "how can we best use the time available?" </p>
<p>Big Wall Map is exceptionally fast in steady state. When you start with Big Wall Map, making of the map is technical debt. Maintaining the map can be expensive. But it has payoff in reducing rework that's bigger than security. The physical nature of the map is also important in the same way that physical kanban boards are important. They're visible when people are doing their day jobs, and there's only so much "good wall space." What it's spent on is important. And yes, many people are working from home right now, which makes that less visible.</p>
<p>Back to the goal, what is the goal of the questionnaire? Is it to ensure that nothing slips through the cracks? That there's enough security analysis of each story? That there's a record of the analysis so there's someone to blame? Is it to allocate work by security engineers? Engineering is all about tradeoffs. Crisply defining what we want will help us get there.</p>
<p>Agile folks I work with love to say "if it hurts, do it more." That's a great approach for threat modeling, and it's worth talking through how we can re-factor threat modeling to integrate better. I see a lot of success with developers owning security, Big Wall Maps, and threat modeling every feature in super-lightweight ways. But it takes each company time and work to get to that point. It has to be a cultural journey as well as a technical one.</p>
<p><a href="https://pixabay.com/photos/survey-opinion-research-voting-fill-1594962/" target="_blank" rel="noopener">Image</a> by <a href="https://pixabay.com/users/andibreit-2748383/" target="_blank" rel="noopener">Andreas Breitling</a>.</p>The COVID Pandemic2020-03-23T00:00:00Zhttps://shostack.org/blog/the-covid-pandemic/AdamI know many readers are here for the threat modeling, and I could claim that this is the “what are we going to do about it” post, which it is, but I don't want to have to blog all threat modeling all the time. So this is the “Seattle is a month into COVID-19” post.<p>I know many readers are here for the threat modeling, and I could claim that this is the “what are we going to do about it” post, which it is, but I don't want to have to blog all threat modeling all the time. So this is the “Seattle is a month into COVID-19” post.</p>
<p>There are a huge number of tips on how to stay safe, how to work from home, etc. I'm a fan of discipline in lists, and so I'll share only one: "<a href="https://www.nytimes.com/2020/03/21/opinion/scott-kelly-coronavirus-isolation.html" target="_blank" rel="noopener">I Spent a Year in Space, and I Have Tips on Isolation to Share</a>." (Also, his book, "<a href="https://amzn.to/2UbDEz6" target="_blank" rel="noopener">Endurance</a>", was excellent, as mentioned in my <a href="https://adam.shostack.org/blog/2019/12/books-worth-your-time-q4/">Books Worth Your Time (Q4)</a>.)</p>
<p>No, what I want to talk about is what we're going to do about the possibility that this could be a year or more of isolation and other measures to flatten the curve, and what that means. <a href="https://institute.jpmorganchase.com/content/dam/jpmc/jpmorgan-chase-and-co/institute/pdf/institute-growth-vitality-cash-flows.pdf" target="_blank" rel="noopener">According</a> to Chase, a typical small business does not have cash on hand to cover a month of expenses, never mind a year. We're going to see a lot of those businesses disappear.</p>
<p>There are millions of people who do gig work. Those people typically lack health insurance. My consulting firm has a few people who typically do either a job here and there, or small monthly bits, like our bookkeeper. I've talked to all our contractors, and almost all of them have seen their pipelines and even promised work on signed contracts evaporate. I do a lot of training work, and that's all postponed indefinitely. That's mirrored in restaurant workers, artists, and all sorts of people who keep businesses humming along. (We're fortunate to have more than a month of cash on hand, and expect to be able to meet our obligations for what I'd normally call 'the foreseeable future.' We're spending some of it on our contractors, improving products for when this turns around.)</p>
<p>But I'd like to think about those who are less fortunate, especially gig workers, who are <a href="https://www.nelp.org/blog/amicus-brief-lowman-v-unemployment-compensation-board-review/" target="_blank" rel="noopener">generally not covered</a> by unemployment insurance when their gigs disappear.</p>
<p>But the way that the bailout is operating, which seems to be a small amount of cash to most Americans and then lots of loans, doesn't actually help the gig workers, and it won't help many small businesses. For example, read <a href="https://www.eater.com/2020/3/9/21166993/how-much-to-run-a-restaurant-cost-mei-mei-boston-finances" target="_blank" rel="noopener">What Does It Really Cost to Run a Restaurant?</a>. Assume that they don't do a takeout option, but try to get by. Some of their cost are highly variable: buying food (16%) and labor (44%) might go away. But equipment leases, operating permits are still $10,000 a year. Rent is $86,000. For rough numbers, let's call that fixed costs of $10,000 a month. And their net income is $21,000. If they take out a loan, then in two months, they've spent their net income for the year on those fixed expenses. If they need to stay closed for 3 months, then when they re-open, assuming all of their business reappears instantly, then they've paid those operating expenses, and spent their net income for this year and half of next. So, setting aside the dream, setting aside the obligations they may feel to staff, does it make economic sense for them to take out a loan? I think probably not. And this is the case for most small businesses — even with loans, there's a limit to how long they can pay salaries. Of course, there's always the old 'take out a government-backed loan you can't pay back and pay salaries with it' trick for transferring money from the taxpayer to employees. But we have a more efficient scheme for that: unemployment insurance.</p>
<p>Going back to the macro level, that means a lot of people unemployed, potentially for far longer. Many of their expenses, like rent, are fixed, and it's hard to move in a pandemic. (Assuming that we don't want people to go and see a new apartment before moving in, that we don't want them to hire movers to help, and that having them abandon many of their possessions in moving is bad.). Also, here in Seattle, we have a severe housing shortage, so finding inexpensive places to live is hard.</p>
<p>Back to unemployment. Washington, for example, if you earned <a href="https://esd.wa.gov/unemployment/calculate-your-benefit" target="_blank" rel="noopener">$42,000</a> or more per year, your unemployment would be the maximum amount, which is $790/week or $3,160/month, and benefits are stopped at 6 months.). The state median income is $<a href="https://www.dshs.wa.gov/esa/eligibility-z-manual-ea-z/state-median-income-chart" target="_blank" rel="noopener">4,237</a>. Waving hands a little over the math, that means even a median income family will be stretched.</p>
<p>So I think we need a massive increase in unemployment payouts, including new ways of treating those who were earning above $42,000 a year. We need to extend unemployment to gig workers. We need this more than loans to small businesses. (To be clear, I'm not opposed to business loans, but rather that even with them, we're going to see lots of people out of work. Hiring will be slow because of uncertainty and reliance on in-person interviews.)</p>
<p>If we're going to bail out airlines who've shoved massive profits into stock buybacks and cruise lines that kept operating as their passengers got sick, we should be bailing out gig workers. We should be helping families and individuals in unusual ways during these very unusual times.</p>
Medical Device Threat Modeling2020-03-26T00:00:00Zhttps://shostack.org/blog/medical-device-threat-modeling/AdamNew training being developed, seeking interest.<span class="single-post-excerpt"><!-- excerpt -->New training being developed, seeking interest.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/inperson-threat-modeling-training-with-adam.jpg" alt="Wide view of in-person threat modeling training with Adam Shostack at the whiteboards" class="single-post-main-img" />
<p>Threat modeling figures heavily in the FDA's thinking. It's been part of the first cybersecurity pre-market guidance, it was a big part of the workshop on '<u>content of premarket submissions</u>,' [link to https://www.fda.gov/medical-devices/workshops-conferences-medical-devices/public-workshop-content-premarket-submissions-management-cybersecurity-medical-devices-january-29-30 no longer works] etc. There have been lots of questions about how to make that happen.</p>
<p>I've been working with the FDA and the MDIC, and we have been planning for free boot camps for threat modeling. MDIC is now asking people to <a href="https://mdic.org/project/medical-device-cybersecurity-threat-modeling/" target="_blank" rel="noopener">apply to attend</a>. Dates TBD in light of the current pandemic, but please, if you're interested, let us know so to help us plan.</p>
<ul>
<li>2-day intensive hands-on sessions on threat modeling.</li>
<li>Learn about structured, systematic and comprehensive approach to threat modeling for engineering more secure systems from SMEs from public and private sector.</li>
<li>Learn the latest updates on medical device cybersecurity and related areas from representatives of FDA and industry.</li>
<li>Networking opportunity with SMEs from MedTech and non-MedTech sectors to learn on cybersecurity best practices that can be incorporated into the medical device industry.</li>
<li>Contribute to the discussions on the development of Medical Device Threat Modeling Playbook.</li>
</ul>Friday Star Wars2020-03-27T00:00:00Zhttps://shostack.org/blog/friday-star-wars-2/AdamPandemic Safety in Star Wars<span class="single-post-excerpt"><!-- excerpt -->Pandemic Safety in Star Wars<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/safe-social-chokehold.png" alt="Safe social chokehold" title="safe-social-chokehold.png" border="0" width="342" height="480" />Answering 'What Are We Working On' When Remote2020-03-30T00:00:00Zhttps://shostack.org/blog/answering-what-are-we-working-on-when-remote/AdamHow do we replace the in-person whiteboard sessions essential to Threat Modeling when we are distanced and working remotely?<span class="single-post-excerpt"><!-- excerpt -->How do we replace the in-person whiteboard sessions essential to Threat Modeling when we are distanced and working remotely?<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/remote-work-data-flow-diagram.png" alt="data flow diagram" class="single-post-main-img" />
<p>Practicing physical distancing has already dramatically changed how we work, and will continue to do so. Being physically distant means we can't use a whiteboard to help us talk through "what are we working on?"</p>
<p>There are technical facets of threat modeling, like using visual models to show and scope "what are we working on?" These can be done on a whiteboard, in Visio or Draw.io, or in specialized threat modeling tooling.</p>
<p>Threat modeling also has an interpersonal aspect. Being at a whiteboard shapes those interactions. Being at a whiteboard frames a dialog: we're working together, sharing knowledge or developing shared knowledge of a system. (More on dialog in <a href="https://shostack.org/blog/tmt-talking-dialogue-and-review/">Talking, Dialogue and Review</a>, and on whiteboarding <a href="https://shostack.org/archive/2016/09/diagrams-in-threat-modeling/">Diagrams in Threat Modeling</a>.)</p>
<p>We need to replace that tool with something else. Some points to consider:</p>
<ul>
<li>Do you want a threat modeling tool or a drawing tool?</li>
<li>Is the tool for developing a shared understanding, or recording that understanding? (Which matters: the journey or the destination?)</li>
<li>Usability & fluidity (single user)</li>
<li>Usability (team)</li>
<li>Integrations (into all sorts of things including source control, task management, and communication tooling like video conferencing and chat.</li>
<li>Accessibility</li>
</ul>
<p>Last week, I decided to do an experiment with online drawing tools, and I'm blown away by Miro. For me, it has a great mix of fluidity and ease of tweaking like making boxes the same size, aligning them. Going from a whiteboard diagram to a Visio diagram will usually take me 2-3x as long as the drawing work for the whiteboard. Using Miro, I realized how much of that is awkwardness UI design. The default Miro stencil search doesn't return a data flow diagram, but there's one at <a href="https://miro.com/templates/data-flow-diagram/" target="_blank" rel="noopener">https://miro.com/templates/data-flow-diagram/</a>. The picture above took me just under 4 minutes once I was logged in and had the stencil.</p>
<p><u>Awwapp</u> [link to https://awwapp.com/ no longer works] has a very nice jump in and go approach, but within a few minutes, I wanted the tools I'd just discovered in Miro. I was unable to find a way to dot lines for trust boundaries.</p>
<p>Google <a href="https://apps.apple.com/us/app/jamboard/id1143591418" target="_blank" rel="noopener">Jamboard</a> on an ipad is very whiteboard like, including the bad shapes. The erasure animation is something between cute and twee. The assistive drawing tools seemed overly sensitive, and I had a hard time with text. I'm told that there's a version in hangouts, but didn't explore.</p>
<p>Whatever tool you want to use, recognize that there's a learning curve of some form. My experiment above contains all sorts of biases - maybe if I'd used Awwapp before Miro, Miro would have felt constraining and not whiteboard-like? There's a social aspect. Maybe Miro does a bad job at collaboration? Maybe it has worse accessibility features in ways that matter to a teammate?</p>
<p>(PS: We had a discussion about this on The Appsec <u>Podcast</u> [link to https://podcast.securityjourney.com/application-security-podcast/episodes/ no longer works].)</p>
Power Dynamics in Threat Modeling2020-04-02T00:00:00Zhttps://shostack.org/blog/power-dynamics-in-threat-modeling/AdamOn Linkedin, Peter Dowdall had a very important response to my post on remote threat modeling.<span class="single-post-excerpt"><!-- excerpt -->On Linkedin, Peter Dowdall had a very important response to my post on remote threat modeling.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/JopwellCollection_image11-1536x807.jpg" alt="group office meeting" class="single-post-main-img" />
<p>On Linkedin, Peter Dowdall had a very important <a href="https://www.linkedin.com/feed/update/urn:li:activity:6651023610603454464?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A6651023610603454464%2C6651040347881263104%29" target="_blank" rel="noopener">response</a> to my post on <a href="https://shostack.org/blog/answering-what-are-we-working-on-when-remote/">remote threat modeling</a>. Because comments on Linkedin are a transient resource, I'm going to quote heavily:</p>
<blockquote>The team here ran a session with people in the same room using Miro (maybe 1 remote) and we found it stripped the barriers of either "taking the pen" or calling out threats to a board. That style of threat modelling can make some uncomfortable, resulting in people with great ideas staying quiet.<br /><br />
Being behind a laptop drawing on Miro, we saw more boldness from developers. It has a nice flow to it and allows you to get things down on virtual paper fast...and some of our more creative peoples drawings made it fun, which wasn't an expectation. Just an observation I thought I would share.</blockquote>
<p>This really hit home for me. I aspire to create inclusive ways to threat model, because different perspectives help us discover different problems. I'd like to use Peter's comment to think about power dynamics in threat modeling. I am fond of whiteboards, because whiteboards, for me, are contrasted with an architect controlling a projector with a Visio doc. It's tremendously uncomfortable, shocking even, to elbow them out of the way and start using their laptop to edit.</p>
<p>That's one example of a power dynamic, and Peter brings up another. These are important. They influence the quality of the work. If we want to leverage all the brains in the room, we need to find ways to let people speak and are heard. We need to ensure everyone has both permission and encouragement to engage, and to avoid having the conversation be dominated by one or two people.</p>
<p>There are other power dynamics, including gender and cultural origin, especially the way a culture treats power differentials and respect. (Just to be concrete, imagine the dynamic over a laptop with a man and a woman in each role. Imagine co-workers from Israel, Japan and India, and how each engages.) There's also organizational culture power dynamics, such as stem from seniority, length of time at the company or being part of a profit center or a cost center.</p>
<p>I've talked about one of the useful properties of the Elevation of Privilege game being power leveling: you can record a threat "to get the point," and that's why there's a point system in the game. Another mechanism that can help is surveys as part of addressing "did we do a good job?" </p>
<p>I'm very curious, what else have you seen that helps reduce power differentials and get everyone engaged?</p>
<p>Image via <a href="https://jopwell.pixieset.com/thejopwellcollection/?pid=975806456&id=5&h=MTM5NTEzOTM4Nw" target="_blank" rel="noopener">Jopwell</a>.</p>Worthwhile Books (Q1 2020)2020-04-14T00:00:00Zhttps://shostack.org/blog/worthwhile-books-q1-2020/AdamThese are the books I read in the first quarter (and forgot to mention last quarter) that I think are worth your time.<img src="https://shostack.org/blog/img/2020/worthwhile-books-1536x650.jpg" alt="an open book with a pen and journal" class="single-post-main-img" />
<p>These are the books I read in the first quarter (and forgot to mention last quarter) that I think are worth your time.</p>
<h3>Cyber</h3>
<ul>
<li><a href="https://amzn.to/33Z1IIv" target="_blank" rel="noopener">Secrets of a Cyber Security Architect</a>, by Brook S. E. Schoenfield. I was honored to write the Foreword, and think there's a great deal of hard-won wisdom.</li>
<li><a href="https://amzn.to/3bBrCVw" target="_blank" rel="noopener">Sandworm</a>, by Andy Greenberg. In depth the story of the NotPetya worm. Much of it's been published in Wired, but seeing the story in one place was powerful.</li>
<li><a href="https://amzn.to/2JuaJjf" target="_blank" rel="noopener">Crash Override</a>, by Zoe Quinn. I put off reading this for quite a while, and I regret that. I expected more of a polemic, or (justifiably) a re-telling of a story of traumatization. And while there were elements of that, I found Crash Override to be clear-headed, witty and insightful about the awful experiences that Ms. Quinn went through and what we all can learn from them.</li>
</ul>
<h3>Non-fiction</h3>
<ul>
<li><a href="https://amzn.to/3asoA5P" target="_blank" rel="noopener">The Weather Experiment</a> by Peter Moore is fascinating history of the very gradual process of understanding the weather. The shape of a hurricane is obvious today because of satellites, but was worked out by people writing each other letters containing observations. The existence of weather offices was highly political, and at times defunded for the offense of offering to predict the weather. There's an interesting relationship to cybersecurity, in that it took quite some time to even figure out what was worth observing, and much more time to start to collect, correlate and understand it all. The benefit of all that work wasn't visible at the start. I don't think that we know the shape of a hurricane yet, and our fits and starts at collecting and sharing knowledge might not be capturing the right things, or making it available to the right people.</li>
<li><a href="https://amzn.to/2Uu4Clv" target="_blank" rel="noopener">Tolkien in the Great War</a> by John Garth. A mix of biography and literary criticism. I was reading this the day I learned that Christopher Tolkien had passed. Speaking of which, this article on his maps was a fascinating, and related tidbit.</li>
<li><a href="https://amzn.to/2UszDGs" target="_blank" rel="noopener">Bad Blood</a>. The story of Theranos. When I read these books, I play a game with myself where I ask "at what point would I quit?" In this case, the lesson that comes out is the abuse of loyalty. Theranos' founder Holmes asked for or even demanded loyalty from those around her. Not a focus on the mission or goals, but loyalty to her. There's a lesson in how she allowed that to trump everything else.</li>
<li><a href="https://amzn.to/3bEPOGx" target="_blank" rel="noopener">Humans Need Not Apply</a> by Jerry Kaplan is a really interesting look at the dual rise of what the author calls synthetic intellects and forged laborers. The analysis of what that means for people is fascinating, deep and wide ranging. Importantly, he identifies 'winner take all' as an outcome that's magnified by each of these developments separately, and that build on each others. The proposals to address the problems are less convincing. In this, and I mean this in a complimentary way, the work reminds me of Das Kapital. Excellent analysis of the problems with capitalism, and I don't believe in the solutions offered.</li>
</ul>
<h3>Fiction</h3>
<ul>
<li><a href="https://amzn.to/2yhkcYU" target="_blank" rel="noopener">Famous Men Who Never Lived</a> by K. Chess is a fascinating story of trans-universe refugees, settled in a world that doesn't understand their shared heritage. Well-done storytelling and character development.</li>
<li><a href="https://amzn.to/2XMHC3p" target="_blank" rel="noopener">Torchship</a> by Karl Gallagher is a fun story of a torchship taking on various cargos and problems after the AIs and grey goo have taken over a good chunk of the universe. The pilot uses a slide rule because computers cannot be trusted. Awesomeness. Also free if you have Kindle unlimited, easily worth it if not.</li>
<li><a href="https://amzn.to/34zosiy" target="_blank" rel="noopener">Children of Ruin</a>, Adrian Tchaikovsky. The second book in a fascinating universe, this one with uplifted octopi.</li>
<li><a href="https://amzn.to/2XyTt4Q" target="_blank" rel="noopener">Semiosis</a> by Sue Burke is a multi-generational story of settlers on a planet with intelligent plans. Finalist for lots of awards. I did not love everything about this, but it stretches in interesting ways.</li>
</ul>
<p>That's my list! What have you read lately that was worthwhile? Please leave a comment?</p>
<p><a href="https://unsplash.com/photos/QJDzYT_K8Xg" target="_blank" rel="noopener">Photo</a> by <a href="https://unsplash.com/@aaronburden?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank" rel="noopener">Aaron Burden</a>.</p>Threat Model Thursday: Data Flow Diagrams2020-04-23T00:00:00Zhttps://shostack.org/blog/tmt-data-flow-diagrams/AdamThis week's threat model Thursday looks at an academic paper, Security Threat Modeling: Are Data Flow Diagrams Enough? by Laurens Sion and colleagues.<span class="single-post-excerpt"><!-- excerpt -->This week's threat model Thursday looks at an academic paper, Security Threat Modeling: Are Data Flow Diagrams Enough? by Laurens Sion and colleagues.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/What-is-a-trust-boundary-1536x1040.png" alt="data flow diagram" class="single-post-main-img" />
<p>This week's threat model Thursday looks at an academic paper, <cite><a href="https://lirias.kuleuven.be/3010840?limo=0" target="_blank" rel="noopener">Security Threat Modeling: Are Data Flow Diagrams Enough?</a></cite> by Laurens Sion and colleagues. The short (4 page), readable paper looks at the strengths and weaknesses of forms of DFDs, and what we might achieve with variations on the form and different investments of effort. I take issue with the framing of 'enough', as if there's a single definition of enough that's enough for all of us, but that's the authors' choice. Anyone who thinks deeply about how threat modeling works will benefit from reading the paper and thinking about how those tradeoffs work for their organization.</p>
<p>Also interesting is their discussion of the meaning of a trust boundary. They lay out three meanings:</p>
<ol type="i">
<li>Levels of trust</li>
<li>Assumptions of attacker capabilities</li>
<li>Deployment information</li>
</ol>
<p>I think there's a 4th meaning of trust boundary, which is a trust boundary indicates separation of principals by some control, and the boundary is instantiated by the control. All of these are facets of the same thing, but that doesn't tell us what a trust boundary is, it tells us what we mean by it or what it indicates.</p>
<p>Another way of saying that is the boundary is 'the place' where the principals interact, but consider my diagram with an API endpoint and two boundaries. From the perspective of the client, the clients are mutually mistrustful, but does the server care about that distinction? Perhaps it does - the server probably wants to keep client data segregated. The clients want the same, but after it's crossed the boundary, by definition, that's a matter of trust (or encryption).</p>
<p>So two takeaways for today: first, go read "<cite><a href="https://lirias.kuleuven.be/3010840?limo=0" target="_blank" rel="noopener">Are Data Flow Diagrams Enough?</a></cite>" Second, please tell me what a trust boundary means to you. </p>
<p>Proper, academic citation: Laurens Sion, Koen Yskout, Dimitri Van Landuyt, Alexander van den Berghe, Wouter Joosen, <a href="https://lirias.kuleuven.be/3010840?limo=0" target="_blank" rel="noopener">Security Threat Modeling: Are Data Flow Diagrams Enough?</a>, 1st International Workshop on Engineering and Cybersecurity of Critical Systems (EnCyCriS), In EEE/ACM 42nd International Conference on Software Engineering Workshops (ICSEW’20), Seoul, South Korea, May 23-29, 2020.</p>Bounce and Range2020-05-08T00:00:00Zhttps://shostack.org/blog/bounce-and-range/AdamI want to talk about two books: Bounce, by Matthew Syed and Range, by David Epstein. I want to talk about them together in part because Range is explicitly framed as a response to Bounce.<span class="single-post-excerpt"><!-- excerpt -->I want to talk about two books: Bounce, by Matthew Syed and Range, by David Epstein. I want to talk about them together in part because Range is explicitly framed as a response to Bounce.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/bounce-book-cover.png" alt="cover of Bounce by Matthew Syed" class="single-post-main-img" style="float:left;" />
<p>I want to talk about two books: <cite><a href="https://amzn.to/2YkCELr">Bounce</a></cite>, by Matthew Syed and <cite><a href="https://amzn.to/2Yt0g0g">Range</a></cite>, by David Epstein. I want to talk about them together in part because <em>Range</em> is explicitly framed as a response to <em>Bounce</em>. [Update: Bill Gates has selected Range as one of his <a href="https://www.gatesnotes.com/About-Bill-Gates/Holiday-Books-2020">5 good books for a lousy year</a>.]</p>
<p><em>Bounce</em> is focused on the relationship between talent and training. Syed starts with a discussion of ping pong stars, and the belief that they have some special talent or affinity for the game. He points out that most of Britian's Olympic team came from the same town, and how unlikely it is that everyone in that town had some genetic or other inbuilt advantage. It's about training and practice. Not the junk-science of 10,000 hours of practice, but the use of deliberative practice to develop specific skills.</p>
<p>This is resonant because I've spent a long time arguing against the need for an attacker mindset, or '<a href="https://adam.shostack.org/blog/2019/10/who-are-we-kidding-with-attacker-centered-threat-modeling/">think like an attacker</a>.' If we require some special, unteachable mindset as a prerequisite to building secure systems, we might as well give up. But, as <em>Bounce</em> teaches us, there's no magic, just practice. (Affinity for a topic, enjoyment of the practice doubtless helps, and early success probably contributes to that enjoyment, and thus practice.)</p>
<p>And that practice needs to be deliberate. That is, it needs to focus in on weaknesses, and address them. The exercises need to be constructed to develop new skills. I took a lot from this book to revising the training I deliver in threat modeling.</p>
<p>Of course, when I say this is resonant, I'm at risk of letting confirmation bias dominate my perception, and so it's good to have <em>Range</em> to critique <em>Bounce</em>. Epstien constructs a powerful argument that much success comes from acting at the intersection of fields or ideas, and that if you specialize early to get to thousands of hours of deliberative practice, you won't have range to draw apon.</p>
<p>This is interesting because in my time in security, we've gone from a field in which no one was trained, degreed or certified to one in which those are essential to starting out. We've struggled through the first attempts to figure out what security is, why it matters to an organization, how to "do security", "measure security" and reward employees for their work. We have answers, and to address demands from executives, IT departments, developers, and others, we've created structures and roles that define how work gets done. We now teach people how to be a SOC operator, a threat hunter, a penetration tester, an IAM manager, etc.</p>
<p>There is strong resistance to these forms, because a little reflection shows that these organizations are neither natural nor effective. There's a move recently to cast aside security's hard-won right to stop shipment. It turns out that being able to stop ship comes with accountability without authority, and so product owners pressure security to sign off, despite having done no work to secure a product. It's a bad situation all around, and the solution, shockingly, is to give up the power to stop a product from shipping, and move to an advisory role.</p>
<p>We have, probably, the same number of people with 'non-standard' origin or training as we did twenty years ago, but the field is far larger, making such difference far less common as a fraction of practitioners. We have fewer conversations about how we set our goals or why we do things, and a lot more about the tasks and how to accomplish them.</p>
<p>I learned important things from <em>Range</em>, and it delivers value. At the same time, I think it critiques an argument that I didn't read in <em>Bounce</em>. That argument is 'specialize early.' That's certainly an argument that's out there. There are regular complaints that people coming out of school don't have the particular technical skills that employers want, and that's frustrating to employers and students. What are they paying for if not to learn? And what they need to learn are not how to use Nessus version 2.2, but how to think critically, how to communicate, and how to learn. They need the range in which to embed their skills. </p>
<p>Below are some highlights that caught my attention in <em>Range</em>. I read Bounce on paper, so adding quotes is harder.</p>
<font size="smaller">
<ul>
<li>No tool is omnicompetent. There is no such thing as a master-key that will unlock all doors. —Arnold Toynbee, A Study of History (Page 4)</li>
<li>...so that jazz musicians could improvise while inside an MRI scanner. Limb saw that brain areas associated with focused attention, inhibition, and self-censoring turned down when the musicians were creating. “It’s almost as if the brain turned off its own ability to criticize itself,” he told National Geographic. While improvising, musicians do pretty much the opposite of consciously identifying errors and stopping to correct them. (Page 69)</li>
<li>Improv masters learn like babies: dive in and imitate and improvise first, learn the formal rules later. “At the beginning, your mom didn’t give you a book and say, ‘This is a noun, this is a pronoun, this is a dangling participle,’” Cecchini told me. “You acquired the sound first. And then you acquire the grammar later.” (Page 70)</li>
<li>One of those desirable difficulties is known as the “generation effect.” Struggling to generate an answer on your own, even a wrong one, enhances subsequent learning. (Page 79)</li>
<li>Mention Kepler if you want to get Northwestern University psychologist Dedre Gentner excited. She gesticulates. Her tortoiseshell glasses bob up and down. She is probably the world’s foremost authority on analogical thinking. Deep analogical thinking is the practice of recognizing conceptual similarities in multiple domains or scenarios that may seem to have little in common on the surface. (Page 94)</li>
<li>If you’re asked to predict whether a particular horse will win a race or a particular politician will win an election, the more internal details you learn about any particular scenario—physical qualities of the specific horse, the background and strategy of the particular politician—the more likely you are to say that the scenario you are investigating will occur. Psychologists have shown repeatedly that the more internal details an individual can be made to consider, the more extreme their judgment becomes. For the venture capitalists, they knew more details about their own project, and judged that it would be an extreme success, until they were forced to consider other projects with broad conceptual similarities. (Page 101)</li>
<li>Dunbar witnessed important breakthroughs live, and saw that the labs most likely to turn unexpected findings into new knowledge for humanity made a lot of analogies, and made them from a variety of base domains. The labs in which scientists had more diverse professional backgrounds were the ones where more and more varied analogies were offered, and where breakthroughs were more reliably produced when the unexpected arose. (Page 109)</li>
<li>They’ve been there, many times, and now have to re-create a well-understood process that they have executed successfully before. The same goes for airline crews. Teams that have experience working together become exceedingly efficient at delegating all of the well-understood tasks required to ensure a smooth flight. When the National Transportation Safety Board analyzed its database of major flight accidents, it found that 73 percent occurred on a flight crew’s first day working together. Like surgeries and putts, the best flight is one in which everything goes according to routines long understood and optimized by everyone involved, with no surprises. (Page 194)</li>
<li>Griffin’s research team noticed that serial innovators repeatedly claimed that they themselves would be screened out under their company’s current hiring practices. “A mechanistic approach to hiring, while yielding highly reproducible results, in fact reduces the numbers of high-potential [for innovation] candidates,” they wrote. (Page 196)</li>
</ul>
</font>SDL Article in CACM2020-05-11T00:00:00Zhttps://shostack.org/blog/sdl-article-in-cacm/AdamMost of my time, I'm helping organizations develop the skills and discipline to build security in. We give the best advice available, and I recognize that we're early in developing the science around how to build an SDL that works.<p>Most of my time, I'm helping organizations develop the skills and discipline to build security in. We give the best advice available, and I recognize that we're early in developing the science around how to build an SDL that works.</p>
<p>That's why I spend time working with academics who can objectively study what we're working on. Mary Ellen Zurko and I have a short article on the subject in the May Communications of the ACM: "<a href="https://cacm.acm.org/magazines/2020/5/244328-secure-development-tools-and-techniques-need-more-research-that-will-increase-their-impact-and-effectiveness-in-practice/fulltext" target="_blank" rel="noopener">Secure Development Tools and Techniques Need More Research That Will Increase Their Impact and Effectiveness in Practice</a>."</p>Models and Accuracy (Threat Modeling Thursday)2020-05-14T00:00:00Zhttps://shostack.org/blog/models-and-accuracy-threat-modeling-thursday/AdamFor Threat Model Thursday, I want to look at models and modeling in a tremendously high-stakes space: COVID models.<span class="single-post-excerpt"><!-- excerpt -->For Threat Model Thursday, I want to look at models and modeling in a tremendously high-stakes space: COVID models.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/model-airplane-Chris-Leipelt.jpg" alt="boy holding model airplane with shadow cast upon a boulder" class="single-post-main-img" />
<p>For Threat Model Thursday, I want to look at models and modeling in a tremendously high-stakes space: COVID models. There are a lot of them. They disagree. Their accuracy is subject to a wide variety of interventions. (For example, few disease models forecast a politicized response to the disease, or a massively inconsistent response within an area where people can travel freely.) Policy makers need to make decisions about life and death, and they must assess model quality. There's an interesting paper in Science, "<cite><a href="https://science.sciencemag.org/content/368/6491/577" target="_blank" rel="noopener">Harnessing multiple models for outbreak management</a></cite>," and a more accessible <a href="https://medicalxpress.com/news/2020-05-covid-policy-decisions.html" target="_blank" rel="noopener">writeup</a>.</p>
<p>I am often asked to judge threat models. People like to ask me questions that have at their heart, 'is this system model right' or 'did we find the right threats.' Sometimes they'll ask 'did we approach this right?' Sadly, there is rarely a quick answer to that question, but one of the things I've learned is that the answer, but not the logic, follows Betteridge's law of headlines. The answer is always no. The reason people are asking me to judge their models is that they are uncomfortable with them, and they'd like help figuring out why.</p>
<p>That's not 100% true. Sometimes they're really proud of the model, and want to show off. That's usually accompanied by a relieved story of 'we almost did this..' Those are great stories. I love them. I love hearing what people emphasize as they tell the stories - there's gold there in how organizations change and mature. Rarely are the models in these stories perfect, or even great. They are good enough to expose a choice, an impact, or something else, and good enough to drive change.</p>
<p><a href="https://unsplash.com/photos/IYOJOCQOtBg" target="_blank" rel="noopener">Photo</a> by Chris Leipelt.</p>How Are Computers Compromised (2020 Edition)2020-05-20T00:00:00Zhttps://shostack.org/blog/how-are-computers-compromised-2020-edition/AdamUnderstanding the way intrusions really happen is a long-standing interest of mine.<span class="single-post-excerpt"><!-- excerpt -->Understanding the way intrusions really happen is a long-standing interest of mine.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/cisa-logo.png" alt="CISA logo" class="single-post-main-img" style="float:right;" />
<p>Understanding the way intrusions really happen is a long-standing interest of mine. This is quite a different set of questions compared to "how long does it take to detect," or "how many records are stolen?" How the intrusion happens is about questions like: Is it phishing emails that steal creds? Email attachments with exploits? SQL injection? Is it APTs or scripts? Which intrusions lead to major breaches? Without knowing these things, it's hard to evaluate the ways in which we engineer defenses. Taking answers from the headlines is sane if the breaches that result in headlines are distinguishable at the start in some way.</p>
<p>And that's what makes US CERT's new alert <a href="https://www.us-cert.gov/ncas/alerts/aa20-133a" target="_blank" rel="noopener">AA20-133A</a>, "Top 10 Routinely Exploited Vulnerabilities" interesting. The US Government has some interesting advantages: a large collection of attractive targets, a mandate that all CFO agencies have a security process, published investments in security, a large and skilled incident response force. And so when they tell us that these vulnerabilities are 'routinely exploited,' that is both fascinating and prompts me to ask additional questions.</p>
<ul>
<li>What fraction of incidents have a discovered initial access method?</li>
<li>What fraction of those initial access methods are "use of vuln" (as opposed to credential theft, USB in the parking lot, evil maid attacks, attacks on servers in the cloud.</li>
<li>What fraction of incidents are covered by the top 10?</li>
<li>What's the relationship between #1 and #10?</li>
<li>Who's excluded from the set "state, nonstate, and unattributed cyber actors"?</li>
<li>Has there been a "5 whys" or other analysis of why those patches were missing? (I'm not saying "root cause" because we all know there's never one root cause.)</li>
<li>What was the investment of controls in the organizations attacked? Was patch management a priority?</li>
</ul>
<p>For some of these, releasing specific answers are going to be tricky because of details of a specific incident, where there's concern that even saying 'attacker jumped an airgap' exposes information. For others, such as the first, there's a risk that journalists are going to say 'really, we only know how 15% of incidents start?' (I would be surprised if it's that high.)</p>
<p>Nevertheless, having details like these are going to help us move forward. What's more, we don't really need incident by incident details - much like the advisory is generalized, we can also hear what program issues are correlated with intrusion. For example, I believe that patch management is way harder than you'd believe if you read infosec twitter, but so what? What would be interesting is "80% of the entities breached were rated as 'needs improvement' in patching, while only 54% of entities were rated at 'needs improvement.' That's not only interesting, but if we have a collection of such statements, then we can prioritize advice by correlation with not being breached. That would be exciting and actionable.</p>
<p>There is a tremendous amount that governments can do with data that they gather about themselves, and I look forward to the day we expect them to do it.</p>
<p>Related: My 2013 SIRA talk, "<a href="https://1drv.ms/p/s!At_WCC0-WWpPiEMFmUxxloeHU3rr" target="_blank" rel="noopener">Building a Science of Security</a>", "<a href="http://download.microsoft.com/download/0/3/3/0331766E-3FC4-44E5-B1CA-2BDEB58211B8/Microsoft_Security_Intelligence_Report_volume_11_Zeroing_in_on_Malware_Propagation_Methods_English.pdf" target="_blank" rel="noopener">Zeroing in on Malware Propagation Methods </a>."</p>Code: science and production2020-05-26T00:00:00Zhttps://shostack.org/blog/code-science-and-production/AdamPhil Bull presents an interesting, generally convincing, argument in 'Why you can ignore reviews of scientific code by commercial software developers', with a couple of exceptions.<span class="single-post-excerpt"><!-- excerpt -->Phil Bull presents an interesting, generally convincing, argument in 'Why you can ignore reviews of scientific code by commercial software developers', with a couple of exceptions.<!-- /excerpt --></span>
<p>There's an interesting article by Phil Bull, "<a href="https://philbull.wordpress.com/2020/05/10/why-you-can-ignore-reviews-of-scientific-code-by-commercial-software-developers/amp/" target="_blank" rel="noopener">Why you can ignore reviews of scientific code by commercial software developers</a>". It's an interesting, generally convincing argument, with a couple of exceptions. (Also worth remembering: <a href="https://www.wired.com/2015/10/can-learn-epic-failure-google-flu-trends/" target="_blank" rel="noopener">What We Can Learn From the Epic Failure of Google Flu Trends</a>.)</p>
<p>The first interesting point is the difference between production code and exploratory code. Production code is intended to do the same thing every time it's run. Exploratory code can reasonably be intended to do different things from year to year or even day by day.</p>
<p>The cost of going from exploratory code to production code is large. Fred Brooks taught us that a program is 1/9th of a product. This distinction really hurts in the ML space, where ML specialists are used to producing exploratory code (and models) which are often pushed to production.</p>
<p>The world doesn't have a large amount of production-quality pandemic modeling code. We can bemoan this state of affairs ad nauseam (literally), but we can't rapidly change it.</p>
<p>The second interesting point relates to test suites, and here, I actually disagree respectfully. Of all the practices discussed, of maintainability, documentation, error checking, I think that automated system tests, included in the 'makefile,' are the least excusable. Forgetting to run tests is human. Failing to understand the impact of a change across a program is easy.</p>
<p>When the models were used purely for science, the errors were egg on the scientists' faces. When the models were used purely for science, the pressure to get results was, relatively speaking, negligible. Many of the demands, for documentation, maintainability, and the like, are fine ideas. In a perfect world, each might be addressed. But like demands for security, many of these things involve tradeoffs - work to improve the model's accuracy might be sacrificed for work to improve its maintainability.</p>
<p>Talking about tradeoffs, I'll digress to add that we live in a world where sites like Twitter and Facebook reward 'engagement', by which I generally mean yelling at each other, rather than thoughtful commentary. Responding to those attacks is additional work that has to be prioritized against the work that will get us out of this pandemic faster, with fewer lives lost.</p>
<p>Interesting work involves making tradeoffs between different meanings of quality. What qualities matter, and how to best achieve them, is obvious far less often than we think.</p>SLR as a Webcam2020-05-28T00:00:00Zhttps://shostack.org/blog/slr-as-a-webcam/AdamAs I built out my home studio to record videos for my distributed classes, I was lucky enough to be able to find an in-stock HDMI capture card, but those are harder and harder to find. As it turns out, you may be able to avoid the need for that with a mix of apps.<p>As I built out my home studio to record videos for my distributed classes, I was lucky enough to be able to find an in-stock HDMI capture card, but those are harder and harder to find. As it turns out, you may be able to avoid the need for that with a mix of apps.</p>
<p><a href="https://petapixel.com/2020/05/08/camera-hack-use-your-main-camera-as-a-webcam-without-a-capture-card/" target="_blank" rel="noopener">This</a> article on PetaPixel points to <a href="https://github.com/v002/v002-Camera-Live" target="_blank" rel="noopener">Camera Live</a> (Maybe Canon only), <a href="http://camtwiststudio.com/" target="_blank" rel="noopener">Camtwist</a>.</p>
<p>The key step in getting this to work with Zoom is <code>sudo codesign --remove-signature /Applications/zoom.us.app/</code></p>
<p>This is an odd step, explained as "first, you remove the “signature authorization” required by Zoom to recognize the camera as a webcam," but I don't think that's what that command does. (Entertainingly, while codesign is one of the relatively few bits of MacOS that comes with a fleshed-out manual page, the manual page doesn't document a remove-signature option.)</p>
<p>I'd thought that what this would do would be to allow Camera Live to drop a plugin into Zoom, and not have Zoom's signature break. <a href="https://www.ecamm.com/mac/iglasses/" target="_blank" rel="noopener">iGlasses</a> has worked for me for other conferencing apps, but not Zoom. After removing the signature and loading up iGlasses, it works in Zoom, but does not seem to drop anything into /Applications/Zoom.us.app or ~/Library/. However, that codesign command now breaks something in Zoom's ability to access the microphone, which I think is likely a self-integrity check. I've had to downgrade to an earlier version of Zoom.</p>
<p>Incidentally, there are HDMI capture cards on Amazon like <a href="https://amzn.to/2zt73MV" target="_blank" rel="noopener">this</a> or <a href="https://amzn.to/3bqyKUg" target="_blank" rel="noopener">this</a>. Those are 1080p, not 4k, but generally, 1080p is plenty of resolution. In my own experiments, OBS starts to spin the fans if I record at 2550x1440 for a minute on my reasonably recent imac, and so I've been capping my video resolution and delivering the best quality sound - I suspect most people are watching in pretty small video windows anyway.</p>One Bad Apple2020-06-03T00:00:00Zhttps://shostack.org/blog/one-bad-apple/AdamI generally try to stay on technical topics, because my understanding is that's what readers want. But events are overwhelming and I believe that not speaking out is now a political choice.<p>I generally try to stay on technical topics, because my understanding is that's what readers want. But events are overwhelming and I believe that not speaking out is now a political choice.</p>
<p>I want to start from this Chris Rock video:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/f8CsN-2Ht80?start=21" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>I hadn't seen it before, but I have spent a lot of time studying how airlines respond to problems, and you know what?</p>
<p>When German Wings had one bad apple, Europe <a href="https://www.reuters.com/article/us-france-crash-germanwings-eu/europe-adopts-new-pilot-mental-health-rules-after-germanwings-crash-idUSKBN1KF1UA" target="_blank" rel="noopener">rolled out</a> new rules on pilot mental health.</p>
<p>That's how you deal with the bad apples. You don't let them spoil the whole lot. So what does that mean? The obvious answers are things like "fire them! Prosecute them!" Those are attractive answers. They seem like good ideas. Like a key part of justice for the victims. A key deterrent to future incidents.</p>
<p>Another element from my work is to improve, we need to learn. Learning is hard when emotions run high (this is not a criticism, it's a biological reality.) Learning is hard when people are getting blamed. Etsy has done great work in how to <a href="https://codeascraft.com/2016/11/17/debriefing-facilitation-guide/" target="_blank" rel="noopener">facilitate a blameless postmortems</a>. But their post-mortems are not literal ones. No one knelt on someone's neck for 9 minutes. But, in this incident and almost every one like it, it turns out that the officer had a history of less impactful incidents. Today, we use adversarial processes to investigate those (review boards, courts). We know that adversarial approaches are at odds with learning. They result in dug-in heels, justification, righteousness, not understanding of the other side's position.</p>
<p>A large part of me wants to be righteous, and declare that even these smaller incidents should bring down the wrath of the system; that when police are failing to serve and protect, there should be Consequences. </p>
<p>But is that desire for consequences actually reducing our ability to change? If so, what do we do?</p>
<p>I make no claims to answers to these questions. There are clearly important differences between a computer security issue and death or "even" serious injury, and I don't claim that the analogies are perfect.</p>
<p>I want to thank Nicole Forsgren for <a href="https://www.linkedin.com/feed/update/urn:li:activity:6673724167101259776/?commentUrn=urn%3Ali%3Acomment%3A(activity%3A6673613724726767617%2C6673724096024584193)" target="_blank" rel="noopener">inspiring</a> me to write this.</p>Evidence Based Security2020-06-04T00:00:00Zhttps://shostack.org/blog/evidence-based-security/AdamCheck out “The Need for Evidence Based Security” by Chris Frenz.<span class="single-post-excerpt"><!-- excerpt -->Check out “The Need for Evidence Based Security” by Chris Frenz.<!-- /excerpt --></span>
<blockquote>As security professionals, have we ever sat down and truly made an effort to empirically determine what controls are actually effective in our environment and what controls do very little to protect our environment or, worse yet, actually work to undermine our security.</blockquote>
<p>That's from <cite><a href="https://chimecentral.org/mediaposts/the-need-for-evidence-based-security/" target="_blank" rel="noopener">The Need for Evidence Based Security</a></cite>, by Chris Frenz, is worth reading.</p>
<p>His focus on moving from compliance with untested standards to demonstrating effectiveness is very welcome, and I appreciate the tie to evidence based medicine for his audience.</p>
<p>Go have a look.</p>'Best Practices for IoT Security'2020-06-08T00:00:00Zhttps://shostack.org/blog/best-practices-for-iot-security/AdamThere's an interesting new draft, Best Practices for IoT Security: What Does That Even Mean? by Christopher Bellman and Paul C. van Oorschot.<span class="single-post-excerpt"><!-- excerpt -->There's an interesting new draft, Best Practices for IoT Security: What Does That Even Mean? by Christopher Bellman and Paul C. van Oorschot.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/best-practices.png" alt="screenshot of opening to quoted article" class="single-post-main-img" />
<p>There's an interesting new draft, <a href="https://arxiv.org/pdf/2004.12179.pdf" target="_blank">Best Practices for IoT Security:What Does That Even Mean?</a> It's by Christopher Bellman and Paul C. van Oorschot. The abstract starts: "Best practices for Internet of Things (IoT) security have recently attracted considerable attention worldwide from industry and governments, while academic research has highlighted the failure of many IoT product manufacturers to follow accepted practices. We explore not the failure to follow best practices, but rather a surprising lack of understanding, and void in the literature, on what (generically) "best practice" means, independent of meaningfully identifying specific individual practices. Confusion is evident from guidelines that conflate desired outcomes with security practice to achieve those outcomes." </p>
<p>Highly readable, and worth your time. The contrast they provide between activity descriptions and outcome goals is one of many points I hadn't noticed.</p>Contextualisation of Data Flow Diagrams...2020-06-09T00:00:00Zhttps://shostack.org/blog/contextualisation-of-data-flow-diagrams/AdamContextualisation of Data Flow Diagrams for security analysis is a new paper to which I contributed.<span class="single-post-excerpt"><!-- excerpt -->Contextualisation of Data Flow Diagrams for security analysis is a new paper to which I contributed.<!-- /excerpt --></span>
<p><a href="https://arxiv.org/abs/2006.04098" target="_blank" rel="noopener">Contextualisation of Data Flow Diagrams for security analysis</a> is a new paper to which I contributed:</p>
<blockquote>Abstract: Data flow diagrams (DFDs) are popular for sketching systems for subsequent threat modelling. Their limited semantics make reasoning about them difficult, but enriching them endangers their simplicity and subsequent ease of take up. We present an approach for reasoning about tainted data flows in design-level DFDs by putting them in context with other complementary usability and requirements models. We illustrate our approach using a pilot study, where tainted data flows were identified without any augmentations to either the DFD or its complementary models.</blockquote>
<p>The paper is available on arXiv.</p>Sonatype Report on DevSecOps2020-06-12T00:00:00Zhttps://shostack.org/blog/sonatype-report-on-devsecops/AdamThe Sonatype 2020 DevSecOps Community Survey is a really interesting report. Most interesting to me is the importance of effective communication, with both tools and human communication in developer happiness.<span class="single-post-excerpt"><!-- excerpt -->The Sonatype 2020 DevSecOps Community Survey is a really interesting report. Most interesting to me is the importance of effective communication, with both tools and human communication in developer happiness.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/trek-star-wars.png" alt="Survey results." class="single-post-main-img" />
<p>The <a href="https://www.sonatype.com/2020survey" target="_blank" rel="noopener">Sonatype 2020 DevSecOps Community Survey</a> is a really interesting report. Most interesting to me is the importance of effective communication, with both tools and human communication in developer happiness.</p>
<img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2020/happydevelopers.png" alt="Happydevelopers" title="happydevelopers.png" border="0" width="600" height="459" />
<p>But even more important is my belief that to reach developers Star Wars is better than Star Trek is confirmed. No bias there.</p>Threat Research: More Like This2020-06-14T00:00:00Zhttps://shostack.org/blog/threat-research-more-like-this/AdamI want to call out some impressive aspects of a report by Proofpoint.<span class="single-post-excerpt"><!-- excerpt -->I want to call out some impressive aspects of a report by Proofpoint.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/email-screenshot.png" alt="email screenshot" class="single-post-main-img" />
<p>I want to call out some impressive aspects of a report by Proofpoint: <cite><a href="https://www.proofpoint.com/us/blog/threat-insight/ta410-group-behind-lookback-attacks-against-us-utilities-sector-returns-new" target="_blank" rel="noopener">TA410: The Group Behind LookBack Attacks Against U.S. Utilities Sector Returns with New Malware</a></cite>.</p>
<p>There are many praise-worthy aspects of this report, starting from the amazing lack of hyperbole, and the focus on facts, rather than opinions. The extraordinary lack of adjectives is particularly refreshing, as is the presence of explanations for the conclusions drawn. ("This conclusion is based on the threat actor’s use of shared attachment macros, malware installation techniques, and overlapping delivery infrastructure.")</p>
<p>But most important to me is the clear and detailed exposition of how the attack itself worked. Proofpoint shared both sample emails, showing the human-level hooks, and the way the attacks worked ("Microsoft Word documents with malicious macros...the FlowCloud macro used privacy enhanced mail (“.pem”) files which were subsequently renamed to the text file “pense1.txt”. This file is next saved as a portable executable file named “gup.exe” and executed using a version of the certutil.exe tool named “Temptcm.tmp”.)</p>
<p><img style="display:block; margin-left:auto; margin-right:auto;" src="https://shostack.org/blog/img/2020/Figure-6.jpg" alt="Figure 6" title="Figure 6.jpg" border="0" width="760" height="432" /><br />
This is important because, as a defender focused on building products, I can use these details to conceptualize defenses. For example, we can see in their figure 6 the use of cmd. Perhaps we could block the use of cmd from macros, or require that the files executed be in certain locations? The malware copies certutil into %tmp% (I am unsure why.) Perhaps we could block execution of code in %tmp% (and %downloads%, while we're at it.) Perhaps we could block the renaming of files? That appears hard - once we can run arbitrary commands, there are a plethora of confusable deputies. Perhaps we could prevent anything inside a macro from making a file executable?</p>
<p>These models of the malicious acts and models of defense can be considered both for these details or other attacks. We might look at these attacks and their common features as we design new defenses. All too often, we only talk about what the malware does after it gets the ability to run code, not how it gets there. (For example, what we see in their Figure 9 is often Figure 1 or 2.) And while that's useful to the anti-malware community, more detail of the attacks help us design better defenses.</p>
<p>Very nice work by Michael Raggi, Dennis Schwarz, and Georgi Mladenov.</p>The Jenga View of Threat Modeling2020-06-16T00:00:00Zhttps://shostack.org/blog/the-jenga-view-of-threat-modeling/AdamI'm happy to announce Shostack + Associate's new, first, corporate white paper! It uses Jenga to explain why threat modeling efforts fail so often.<img src="https://shostack.org/blog/img/2020/jenga-view-cover.png" alt="cover of white paper: The Jenga View of Threat Modeling" class="single-post-main-img" style="float:left;padding-right:2em;" />
<p>I'm happy to announce Shostack + Associate's new, first, corporate white paper! It uses Jenga to explain why threat modeling efforts fail so often.</p>
<p>I'm excited for a lot of reasons. I care about learning from failure. I love games as teaching tools. But really, I'm excited because the paper has helped the people who read early copies. </p>
<p>It's also exciting because as it turns out, the Jenga metaphor is way bigger than threat modeling. I'm talking about threat modeling because people tell me that's what they want to hear about, but really, threat modeling requires culture change. It requires organizational work, and thinking about Jenga blocks will help you achieve that.</p>
<p>Only time will tell, but I think this is going to be as important as the <a href="https://shostack.org/files/papers/modsec08/Shostack-ModSec08-Experiences-Threat-Modeling-At-Microsoft.pdf" target="_blank" rel="noopener">'experiences' paper</a> where I broke threat modeling into attacker-centric, asset-centric and technology-centric views. This feels like an equally important step forward.</p>
<p>Because I really want people to read this paper, there's no registration required. Because I want people to use the ideas, I'm releasing it under a creative commons license. </p>
<p>You can get your copy at <a href="https://shostack.org/resources/whitepapers">https://associates.shostack.org/whitepapers</a></p>
<p>[Added Wednesday: And I joined the <u>Application Security Podcast</u> [link to https://podcast.securityjourney.com/ no longer works] to talk about the paper, you can watch or listen <u>here</u> [link to https://podcast.securityjourney.com/adam-shostack-the-jenga-view-of-threat-modeling/ no longer works].]</p>
Happy Juneteenth!2020-06-19T00:00:00Zhttps://shostack.org/blog/happy-juneteenth-4/AdamJuneteenth is the celebration of the end of slavery in the US. We need more holidays that celebrate freedom. Freedom isn't always comfortable or easy, but it is the precondition to the pursuit of happiness.<img src="https://shostack.org/blog/img/2020/usct-reenactment.jpg" alt="Reenactment of black civil war soldiers" class="single-post-main-img" />
<p>Juneteenth is the celebration of the end of slavery in the US. We need more holidays that celebrate freedom. Freedom isn't always comfortable or easy, but it is the precondition to the pursuit of happiness.</p>
<p>BTW, we've been celebrating Juneteenth here on this blog for <a href="https://adam.shostack.org/blog/2006/06/happy-juneteenth/">a long time</a>, if no more consistently than anything else we do.</p>
<p>Photo: <a href="https://www.flickr.com/photos/lblanchard/169180628/" target="_blank" rel="noopener">Laura Blanchard</a>, and I'll note that there's a real lack of Juneteenth stock imagery on <a href="https://www.foleon.com/blog/5-sites-for-free-stock-photos" target="_blank" rel="noopener">all these sites</a>.</p>The Cyentia Library Relaunches2020-06-22T00:00:00Zhttps://shostack.org/blog/the-cyentia-library-relaunches/AdamI'm excited to see that they're Re-introducing the Cyentia Research Library, with cool (new?) features like an RSS feed. There are over 1,000 corporate research reports with data that companies paid to collect, massage, and release in a way they felt would be helpful to the rest of the world.<span class="single-post-excerpt"><!-- excerpt -->I'm excited to see that they're Re-introducing the Cyentia Research Library, with cool (new?) features like an RSS feed. There are over 1,000 corporate research reports with data that companies paid to collect, massage, and release in a way they felt would be helpful to the rest of the world.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/iristopicplot.png" alt="Information Risk Insights Study 20/20 plot" class="single-post-main-img" />
<p>I'm excited to see that they're <a href="https://www.cyentia.com/re-introducing-the-cyentia-research-library/" target="_blank" rel="noopener">Re-introducing the Cyentia Research Library</a>, with cool (new?) features like an RSS feed. There are over 1,000 corporate research reports with data that companies paid to collect, massage, and release in a way they felt would be helpful to the rest of the world.</p>
<p>The Cyentia Library lets us see what people are doing in terms of research and data. We can get an understanding of what the industry thinks is important or at least visible, and what we write about in consistent ways.</p>
<p>And as I look at the topic map, I note some things. There's topics for 'misconfiguration,' and 'human error' but not 'vulnerability exploitation' or even the ever-popular 'hacking' as a cause of breach. There's no 'feature abuse' or 'account takeover.' My longstanding interest in how attackers get access is underserved. There's no category for attack vector or root cause or kill chain analyses. There are 'criminal group,' 'extortion,' 'financial gain,' 'larceny and loss,' 'ransomware' and 'threat actor.'</p>
<p>To be clear: I mean no criticism of Cyentia for exposing these things. I am grateful for their work, and how it shines a light on where we are.</p>
<p>When I comment that "how attackers get access" is underserved, what I mean is that its insufficiently well addressed in 2000 reports to have emerged or been noticed by the NLP and manual analyses.</p>Threat Modeling and the SAFE Framework2020-06-30T00:00:00Zhttps://shostack.org/blog/threat-modeling-safe-framework/AdamMy thoughts on an interesting blog post discussing how to bring threat modeling into the Scaled Agile Framework.<span class="single-post-excerpt"><!-- excerpt -->My thoughts on an interesting blog post discussing how to bring threat modeling into the Scaled Agile Framework.<!-- /excerpt --></span>
<p>There's an interesting and detailed blog post from Antti Vähä-Sipilä and Heli Syväoja at the F-Secure blog, <a href="https://www.f-secure.com/en/consulting/our-thinking/using-safe-to-align-cyber-security-and-executive-goals" target="_blank" rel="noopener">Using SAFe® to align cyber security and executive goals in an agile setting</a>.</p>
<p>What I find most useful is the detailed and specific elements of how to bring threat modeling into the Scaled Agile Framework, in particular:</p>
<ul>
<li>Security and privacy work need to be visible on backlogs.</li>
<li>Don’t use non-functional requirements with security. The attackers will not care if you have valiant statements in your acceptance criteria. (👈 ❤️) </li>
<li>Use different tools to answer the question what can go wrong. At the Epic Refinement phase, look for negative business outcomes; at feature refinement, developers can use something like STRIDE.</li>
</ul>
<p>(The first two are direct quotes from their "key points" summary, the last is my restatement.)</p>
<p>The article also goes into very interesting detail about failings that they've observed, from 'Perceived barriers to threat modelling' to the importance of documenting risks that have no solution, to problems that result from centralizing work into security epics, and I don't want to re-state their points here, but rather build on them, by pointing out that most of what they have to say is not about the technical skills of threat modeling, but about the organizational discipline that's involved.</p>
<p>If only we had a framework for thinking about such distinctions!</p>
<p>Oh wait, we do! I published <a href="https://shostack.org/blog/the-jenga-view-of-threat-modeling/">the Jenga framework</a> earlier this month as a way of thinking about the organizational, technical and soft skills involved in threat modeling.</p>
<p>I hope that the F-Secure team will be sharing more specifics about and examples of how they're making attacker stories work (I regularly hear that they take too long), and also share the 5 question triage checklist they're using. (Although, I see them as <a href="https://shostack.org/blog/threat-modeling-with-questionnaires/">doing threat modeling work</a>, not 'knowing when threat modeling is needed.')</p>Threat Model In My Devops2020-07-02T00:00:00Zhttps://shostack.org/blog/threat-model-in-my-devops/AdamA recent talk by Alyssa Miller focuses on integrating threat modeling in devops.<span class="single-post-excerpt"><!-- excerpt -->A recent talk by Alyssa Miller focuses on integrating threat modeling in devops.<!-- /excerpt --></span>
<p><a href="https://www.youtube.com/watch?v=4KL7t1-FYBk" target="_blank" rel="noopener">This</a> talk by Alyssa Miller is fascinating and thought provoking.</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/4KL7t1-FYBk" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>She frames a focus on integrating threat modeling into devops. The question of 'what are we working on' is answered with use cases, and threat modeling for that sprint is scoped to the use cases. 'What can go wrong' is focused on a business analysis of what can go wrong with private data, critical functions, financial assets, people assets or secrets.</p>
<p>I like the business integration. I do have a couple of reactions: first, the approach to assets - assets in <em>this story</em> is a really nice hack that addresses many of the failure modes of starting from assets. Those problems include definitions (things attackers want/things you protect/stepping stones), generality, such as wanting to protect the company's reputation in a vaguely defined way, and the need to invest time in creating a list of assets and generating agreement around it.</p>
<p>Second, I wonder if getting rid of diagrams that show data flows a good tradeoff? (I'm intentionally not saying DFD.) It's clearly a good tradeoff if the alternative is 'do nothing.' It may be that this business-focused approach helps obviate the need for data flows, but I'm not sure. </p>
<p>Regardless, the talk is worth watching.</p>Internet Society Opposition to LAED Act2020-07-07T00:00:00Zhttps://shostack.org/blog/internet-society-opposition-to-laed-act/AdamThe Internet Society Open Letter Against Lawful Access to Encrypted Data Act was published this morning.<span class="single-post-excerpt"><!-- excerpt -->The Internet Society Open Letter Against Lawful Access to Encrypted Data Act was published this morning.<!-- /excerpt --></span>
<p>The <a href="https://www.globalencryption.org/open-letter-against-laed-act/" target="_blank" rel="noopener">Internet Society Open Letter Against Lawful Access to Encrypted Data Act</a> was published this morning. It's an important and broad coalition to protect the ability of American companies to deliver security to their customers. I'm honored to be one of the signers.</p>Amicus Brief on CFAA2020-07-13T00:00:00Zhttps://shostack.org/blog/amicus-brief-on-cfaa/AdamI recently signed onto the amicus brief on the Van Buren/Computer Fraud and Abuse Act filed by the Electronic Frontier Foundation.<span class="single-post-excerpt"><!-- excerpt -->I recently signed onto the amicus brief on the Van Buren/Computer Fraud and Abuse Act filed by the Electronic Frontier Foundation.<!-- /excerpt --></span>
<p>The EFF has filed an amicus brief on the Van Buren case about the Computer Fraud and Abuse Act:</p>
<blockquote>Washington, D.C.—The Electronic Frontier Foundation (EFF) and leading cybersecurity experts today urged the Supreme Court to rein in the scope of the Computer Fraud and Abuse Act (CFAA)—and protect the security research we all rely on to keep us safe—by holding that accessing computers in ways that violate terms of service (TOS) does not violate the law.</blockquote>
<p>Their announcement is <a href="https://www.eff.org/press/releases/eff-asks-supreme-court-rule-violating-terms-service-isnt-crime-under-cfaa" target="_blank" rel="noopener">here</a>. </p>
<p>I am honored to be <a href="https://www.eff.org/cases/van-buren-v-united-states/security-researcher-amici" target="_blank" rel="noopener">one</a> of the experts who's signed onto the brief.</p>
<p>Modified to add explicit mention of van Buren.</p>
Software Engineering Radio2020-07-15T00:00:00Zhttps://shostack.org/blog/software-engineering-radio/AdamI enjoyed being a guest on Software Engineering Radio in this in depth interview.<span class="single-post-excerpt"><!-- excerpt -->I enjoyed being a guest on Software Engineering Radio in this in depth interview.<!-- /excerpt --></span>
<p>I enjoyed being a guest on Software Engineering Radio: <a href="https://www.se-radio.net/2020/07/episode-416-adam-shostack-on-threat-modeling/" target="_blank" rel="noopener">Adam Shostack on Threat Modeling</a>. It's a substantial, in depth interview, running nearly 80 minutes, and covering a wide variety of topics.</p>Video Series2020-07-21T00:00:00Zhttps://shostack.org/blog/video-series/AdamNot usually one for the video format, I'm expanding my horizons thanks to 2020 being what it is.<span class="single-post-excerpt"><!-- excerpt -->Not usually one for the video format, I'm expanding my horizons thanks to 2020 being what it is.<!-- /excerpt --></span>
<p>It will come as no surprise to regular <em>readers</em> of this blog that I prefer the written word to audio and video, but 2020 being 2020, I now have a <a href="https://www.youtube.com/channel/UCsEDO1_A9kMms0kQgV2uWVw/" target="_blank" rel="noopener">YouTube Channel</a>, with the first video below:</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/_RAup6cvbfY" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>Sociotechnical Approach to Cyber Security2020-07-24T00:00:00Zhttps://shostack.org/blog/sociotechnical-approach-to-cyber-security/AdamA recent post from Helen L. of the UK’s NCSC, A sociotechnical approach to cyber security, shares the context of socio-technical approaches.<span class="single-post-excerpt"><!-- excerpt -->A recent post from Helen L. of the UK’s NCSC, A sociotechnical approach to cyber security, shares the context of socio-technical approaches.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/stsg-cartoon-st-approach.png" alt="drawing showing various connects among people and products" class="single-post-main-img" />
<p>There's a post from Helen L. of the UK's NCSC, <a href="https://www.ncsc.gov.uk/blog-post/a-sociotechnical-approach-to-cyber-security">A sociotechnical approach to cyber security</a>. Her post shares the context of socio-technical approaches, discussed the (re-named) RISCS institute, and shares the current problem book.</p>
<p>The post and the problem book are both worth a careful read. </p>
<p>(I'm honored to be an advisor to the RISCS Institute, and have had some minor input into the problems book.)</p>Maximizing the Value of Virtual Security Conferences2020-07-31T00:00:00Zhttps://shostack.org/blog/maximizing-the-value-of-virtual-security-conferences/AdamA few tips on getting the most out of attending a virtual security conference.<span class="single-post-excerpt"><!-- excerpt -->A few tips on getting the most out of attending a virtual security conference.<!-- /excerpt --></span>
<p>Nathan Hamiel has a really good post on <a href="https://research.kudelskisecurity.com/2020/07/14/maximizing-the-value-of-virtual-security-conferences/" target="_blank" rel="noopener">Maximizing The Value of Virtual Security Conferences</a>. To his key point of 'know what you want to get out of it' and 'know what it would take to make it happen,' I want to add two ideas.</p>
<p>First, take notes with a pen and paper. This is a key lesson for me as I deliver training through computers. Having a pen and paper means that I have to take effort to switch, and I've even taken to physically moving my keyboard across the office so I have to stand up to get it. Transcribing your notes into something like a trip report gives you an additional chance to focus on what you and your organization get out of it.</p>
<p>Second, choose your conference based on what you want to get out of it. If you want cutting edge research, Black Hat is great, as is Usenix Enigma or some of the more academic conferences like IEEE S&P. If you want a small intimate event where you have a chance to socialize with a smaller group, a smaller event might be better. You're trading the chance to see the latest work as its unveiled for a smaller group, often focused on a sub-discipline or a community. (You'll also find a lot of #blackhat channels on slacks and discords in addition to the formal conference site.)</p>
<p>Lastly, I want to mention Dan Cuthbert's "<a href="https://www.linkedin.com/pulse/blackhat-usa-virtual-edition-daniel-cuthbert/" target="_blank" rel="noopener">Blackhat USA - The Virtual Edition</a>. He's done a great job of rounding up interesting talks, although I'm particularly looking forward to Matt Wixey's "<a href="https://www.blackhat.com/us-20/briefings/schedule/index.html#breaking-brains-solving-problems-lessons-learned-from-two-years-of-setting-puzzles-and-riddles-for-infosec-professionals-20294" target="_blank" rel="noopener">Breaking Brains, Solving Problems: Lessons Learned from Two Years of Setting Puzzles and Riddles for InfoSec Professionals</a> and Ron Bitton's <a href="https://www.blackhat.com/us-20/briefings/schedule/index.html#a-framework-for-evaluating-and-patching-the-human-factor-in-cybersecurity-20695" target="_blank" rel="noopener">A Framework for Evaluating and Patching the Human Factor in Cybersecurity</a>. I was skeptical of both talks when I saw the titles, but the details sold me, and I'm glad we accepted each.</p>When to Threat Model2020-08-12T00:00:00Zhttps://shostack.org/blog/when-to-threat-model/AdamA talk from the Biohacking Village at DefCon brought up a good point.<span class="single-post-excerpt"><!-- excerpt -->A talk from the Biohacking Village at DefCon brought up a good point.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/when-to-threat-model.jpg" alt="screenshot from virtual talk discussed in the article" class="single-post-main-img" />
<p>At the Biohacking Village at Defcon, there was an <a href="https://www.youtube.com/watch?v=z-j3gqUUYzM&t=2s" target="_blank" rel="noopener">interesting talk</a> on Includes No Dirt threat modeling. I thought this slide was particularly interesting. As threat modeling moves from an idea through pilots and deployments, and we develop the organizational disciplines of threat modeling, the question of 'when do we do this' comes up. There's good appsec focused answers like 'every sprint', or 'in line with your waterfall, but those answers aren't universal. For example, they don't help when you're thinking about your supply chain.</p>
<p>The talk by William Dogherty and Patrick Curry (shown) covers a lot of these organizational discipline factors, and this slide appears about 53 minutes in. The whole talk is worth watching.</p>
<p>My previous discussion of the approach overall is in <a href="https://adam.shostack.org/blog/2019/10/includes-no-dirt-healthcare-threat-modeling-thursday/">Includes No Dirt: Healthcare Threat Modeling</a>.</p>MDIC Annual Public Forum2020-08-13T00:00:00Zhttps://shostack.org/blog/mdic-annual-public-forum-2020/AdamI'll be speaking at the MDIC's Annual Public Forum today, discussing how threat modeling helps bring maturity to the medtech sector.<span class="single-post-excerpt"><!-- excerpt -->I'll be speaking at the MDIC's Annual Public Forum today, discussing how threat modeling helps bring maturity to the medtech sector.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/MDIC-panel.jpg" alt="MDIC Panel - Cybersecurity: Strides Toward Maturity Benchmarking for the MedTech Sector; Thursday, Aug 13, 2020, 11am Pacific/2pm Eastern" class="single-post-main-img" />
<p>I'll be speaking at the MDIC's <a href="https://apf.mdic.org/" target="_blank" rel="noopener">Annual Public Forum</a> today, discussing how threat modeling helps bring maturity to the medtech sector. <a href="https://apf.mdic.org/agenda/" target="_blank" rel="noopener">Join us</a> shortly!</p>Information Disclosure In Depth2020-08-14T00:00:00Zhttps://shostack.org/blog/information-disclosure-in-depth/AdamI have something to disclose...<span class="single-post-excerpt"><!-- excerpt -->I have something to disclose...<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/Information-Disclosure-In-Depth.jpg" alt="torn brown paper revealing the word SECRET" class="single-post-main-img" />
<p>I have something to disclose: the release of my new course on <a href="https://www.linkedin.com/learning/threat-modeling-information-disclosure-in-depth?u=2125562" target="_blank" rel="noopener">information disclosure</a>has just launched on Linkedin! 🎉🥂</p>
<p>To celebrate, I've made it easier to disclose the contents by making it free for you <a href="https://www.linkedin.com/posts/shostack_threatmodeling-learning-cybersecurity-activity-6699772506540167168-qK9v" target="_blank" rel="noopener">link here</a>.</p>
<p>Please help me disclose this information to the world!</p>Better Taught Than Caught!2020-08-18T00:00:00Zhttps://shostack.org/blog/better-taught-than-caught/AdamInformal training may work in some cases, but Threat Modeling skills should be passed on through more formal means.<span class="single-post-excerpt"><!-- excerpt -->Informal training may work in some cases, but Threat Modeling skills should be passed on through more formal means.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/derek-owens-cnzamNXuwo4-unsplash.jpg" alt="adult male teaching young child to fish at the beach" class="single-post-main-img" />
<p>So Chris Romeo has a blog post, “<u>Threat modeling: better caught than taught</u>” [link to https://blog.securityjourney.com/threat-modeling-better-caught-than-taught/ no longer works]. In it, he advocates for threat modeling being a skill passed on informally. And, like many things in threat modeling, that's attractive, sounds fun, and is utterly wrong. </p>
<p>Let's threat model this:</p>
<ul>
<li>What are we working on? <strong>Scaling threat modeling</strong> across all developers. (Cool!)</li>
<li>What can go wrong? <strong>A game of telephone</strong>, where one person tells another something, and they pass it on. And we all know how well the last person gets the message. But it's <em>worse</em> than telephone: people will do web searches, and find all sorts of crazy advice, from "think like a hacker" to "develop a list of assets," or worse. And because they only sort of got what they were doing, and because we don't name and version our techniques, they'll end up with a scrum of continuously integrated ideas in their heads. And odds are that will be confusing.</li>
<li>What are we going to do about it? We're gonna train, thoughtfully.</li>
<li>Did we do a good job? We can tell using surveys; using tools to see if threat models are similar across teams. We can tell by dropping in and observing.</li>
</ul>
<p>All that said, and teasing aside, Chris is right in much of what he says: diving in and threat modeling is a win. Asking leading questions is a helps people get to the answers. Learning by doing is the only way to learn. There's also a reality that some people come in with adjacent security knowledge, from which they can bridge to threat modeling. Others don't. The ones without that knowledge need more structure and more granular structure to help them learn. That includes proper labeling of what we're doing: DFD3, STRIDE, asset/attacker/technical centering. Without those labels, students are left struggling with 'what is this threat modeling thing you want me to learn?' Calling yet another set of steps 'agile threat modeling' hurts everyone, or at least everyone who uses a search engine for supplemental information.</p>
<p>He's also right that talking to developers about the relationship between STRIDE and PASTA is a mistake. They don't need to know about that. The threat modeling experts need to know about that. The security champs might need to know about that. In my core 201 class ("Threat Modeling for Architects") there's exactly one way to answer each question (DFDs (DFDv3, in fact), STRIDE, bugs, retrospectives). That class takes about a day in person and a week at a couple hours a day when it's delivered distributed. There's lecture, exercise and discussion to help ensure that the lesson lands well.</p>
<p>Good training involves carefully setting the stage for students to catch the lessons. It requires defining what you want a student to catch, what you're going to make them do to catch it. One of the first goals is the why: showing the students that threat modeling helps them find real problems quickly. When I train, I am to be showing that inside of the first 30 minutes. As Chris says, lectures on</p>
<p>Look, I have a bias for training. It's a big part of my business. Consistency doesn't happen by accident, and that's why you need a teaching and learning plan for your organization's security journey. And for those who don't know Chris, his company, <a href="https://www.securityjourney.com/" target="_blank" rel="noopener">Security Journey</a> helps build culture. We share the perspective that culture is an important part of that journey, and having the last part of the "fan-out" be carefully designed "catching" of threat modeling can be a great plan.</p>
<p>Image from <a href="https://unsplash.com/@derekowensheart" target="_blank" rel="noopener">Derek Owens</a>, on Unsplash.</p>Worthwhile Books Q2 20202020-08-19T00:00:00Zhttps://shostack.org/blog/worthwhile-books-q2-2020/AdamThese are the books that I read in Q2 2020 that I think are worth your time.<span class="single-post-excerpt"><!-- excerpt -->These are the books that I read in Q2 2020 that I think are worth your time.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/debby-hudson-asviIGR3CPE-unsplash.jpg" alt="stack of 5 old books next to an open journal with pen" class="single-post-main-img" />
<p>These are the books that I read in Q2 2020 that I think are worth your time. Sorry it's late. They're still worthwhile. :)</p>
<h3>Cyber</h3>
<ul>
<li><a href="https://amzn.to/33RCKNs" target="_blank" rel="noopener">You'll See This Message When It Is Too Late</a>, by Josephine Wolff. This is an interesting examination of the effects of finger-pointing and blame avoidance on the cybersecurity landscape, with chapter titles like "How the TJX breach set the sate for a decade of payment card conflict" and "what they aren't telling you is their rules are archaic." The most interesting contribution is a deep examination of who might be in the best position to deal with problems, which may be the victims of a crime, or it may be that the victims were forced to follow archiac, ineffective, or blame shifting rules. She points out the role that governments, ISPs, software vendors and others might play. </li>
<li><a href="https://amzn.to/2DLsL1f" target="_blank" rel="noopener">Permanent Record</a>, by Edward Snowden is an interesting autobiography. Today, it's popular to form opinions of people by reading tweets or screeds for or against them. I found Snowden's thinking about what he did and why to be interesting and well reasoned.
</li><li><a href="https://amzn.to/3iJaUqT" target="_blank" rel="noopener">Securing DevOps</a> by Julien Vehent. Very solid advice without a lot of fluff. Sometimes wished for just a bit more on they 'why.'</li>
</ul>
<h3>Non-fiction</h3>
<ul>
<li><a href="https://amzn.to/30KHOB9" target="_blank" rel="noopener">Zucked</a>, Roger McNamee. Interesting views of how Facebook is causing harm, and how Mark Zuckerberg is unwilling to listen. Most interesting is that it comes from an early investor. </li>
<li><a href="https://amzn.to/3kDTWM3" target="_blank" rel="noopener">Because Internet</a> by Gretchen McCulloch. Fascinating and fun guide to how language is evolving on the internet, how our use of language is driven by when we started using the internet, and how emojis are like gestures that the fluent use to add depth to their words delivered through internet. 🤷♂️</li>
<li><a href="https://amzn.to/31LUFCc" target="_blank" rel="noopener">Meeples Together</a> by Christopher Allen & Shannon Appelcline is a study of "How and why cooperative board games work." If you care about designing for instructional goals, cooperation mechanisms are an incredibly attractive toolbox. Getting your students to work together to solve problems has incredible potential.</li>
<li><a href="https://amzn.to/3iKsBGy" target="_blank" rel="noopener">What If</a>: serious scientific answers to absurd questions, by Randall Munroe of xkcd fame.</li>
<li><a href="https://amzn.to/3kKYTCO" target="_blank" rel="noopener">Leading Change</a> by John Kotter. Let me be frank: I hate most business books. They are full of platitudes, puffery and other crap. This is not that. Many of my customers are going through change to deliver more secure software, and I had stumbled across some of the techniques here, reinvented others, and missed more. My practice is more helpful since I learned about <a href="https://www.kotterinc.com/8-steps-process-for-leading-change/" target="_blank" rel="noopener">the 8 steps for leading change</a>.</li>
<li><a href="https://amzn.to/30VdHXG" target="_blank" rel="noopener">Where Good Ideas Come From</a> by Steven Johnson. I've been a fan of his work since <a href="https://amzn.to/30UGziK">The Ghost Map</a> gave us the key that unlocked the final form of <a href="https://amzn.to/34dyFTZ" target="_blank" rel="noopener">The New School</a>. This is a study of how ideas come about, examining the myth of the lone inventor, and showing how networks and interaction help ideas mature and find a niche.</li>
<li><a href="https://amzn.to/3kJ1sW5" target="_blank" rel="noopener">I Am C-3PO: The Inside Story</a> by Anthony Daniels, the only person to appear in all 9 Star Wars episodes. A delightful memoir.</li>
</ul>
<h3>Fiction</h3>
<ul>
<li><a href="https://amzn.to/3gZi2yO" target="_blank" rel="noopener">Fall; Or, Dodge in Hell</a> by Neal Stephenson. This is what you get for complaining, incorrectly, that he can't write endings. An 892 page extended ending to <a href="https://amzn.to/3iIQG0k" target="_blank" rel="noopener">Reamde</a>, in which (SPOLIER) the protagonist, Dodge, dies at the very beginning. Way less depressing than <a href="https://amzn.to/2XYPtK7" target="_blank" rel="noopener">Seveneves</a>, even though most of the characters in this one also die. (That joke will be much funnier after you learn, early on, that uploading consciousness to supercomputers is a core plot device of "Fall.")</li>
</ul>
<p>
<a href="https://unsplash.com/photos/asviIGR3CPE" target="_blank" rel="noopener">Photo</a> by <a href="https://unsplash.com/@hudsoncrafted?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank" rel="noopener">Debby Hudson</a>.</p>Elevation of Privilege In The Time of Cholera2020-08-24T00:00:00Zhttps://shostack.org/blog/elevation-of-privilege-in-the-time-of-cholera/AdamHow to play in person games while maintaining safe distances.<span class="single-post-excerpt"><!-- excerpt -->How to play in person games while maintaining safe distances.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/eop-cards-and-box.jpg" alt="cards from the Elevation of Privilege game" class="single-post-main-img" />
<p>The <em>Elevation of Privilege</em> game has had way more staying power than I would have expected. But the online experience in this time of global pandemic has left out some of the magic that made it work.</p>
<p>So I was really skeptical when Simon Gibbs from Agile Stationery mailed me about <a href="https://agilestationery.co.uk/blogs/pp/how-to-play-elevation-of-privilege-and-cornucopia-remotely">an approach to playing remotely</a>. But when I look at it, and I look at the logic behind it, I find myself intrigued: </p>
<ul>
<li>The player receives the (physical) deck and looks it over.</li>
<li>The player receives his hand from the Games Master by email.</li>
<li>They accept a calendar invite to play one game targeting one system. They might get the system diagram at this point too.</li>
<li>They fetch out their hand ahead of the game.</li>
<li>The player joins a video call to play the game.</li>
</ul>
<p>"<strong>The player will interact with the deck on at least three of these occasions.</strong>"</p>
<p>Physically pulling the cards as a way to engage before a meeting is deep. I might even suggest that the way to do it would be say "you must pull out your cards before you accept the invitation. Please include questions about the cards if you have any."</p>
<p>Also, professional production makes a difference. You're free to print yourself, and having a well-made thing in your hand tells you that someone cared enough to do it. The most expensive thing in a meeting of developers playing EoP is not the card deck, it's not the $50 microphone or $100 webcam or the thousand dollar computer. It's the developer's time. And using a card deck to prompt and anchor developer thinking about what can go wrong is worth your time.</p>Podcast with Sidney Dekker2020-08-26T00:00:00Zhttps://shostack.org/blog/podcast-with-sidney-dekker/AdamThis is a really interesting podcast interview with Sidney Dekker, who's one of the most important thinkers in safety.<span class="single-post-excerpt"><!-- excerpt -->This is a really interesting podcast interview with Sidney Dekker, who's one of the most important thinkers in safety.<!-- /excerpt --></span>
<p>This is a really interesting podcast interview with Sidney Dekker, who's one of the most important thinkers in safety. <u>The Jay Allen Show on Safety</u> [link to https://pod.co/the-jay-allen-show-on-safety-fm-2/sidney-dekker no longer works] . (Fast forward through the first 3 minutes, the content is quite interesting.)</p>
<p>Particularly interesting is his discussion of some 'best practices' which come out of a poorly supported chain of work by an insurance analyst. "It turns out, the deeper you dig, he made it up. There's literally no basis for the triangle."</p>The Uber CSO indictment2020-08-28T00:00:00Zhttps://shostack.org/blog/the-uber-cso-indictment/AdamThoughts on Mark Rasch's essay, Conceal and Fail to Report - The Uber CSO Indictment<span class="single-post-excerpt"><!-- excerpt -->Thoughts on Mark Rasch's essay, Conceal and Fail to Report - The Uber CSO Indictment<!-- /excerpt --></span>
<p>Mark Rasch, who created the Computer Crime Unit at the United States Department of Justice, has an essay, "<cite><a href="https://securitycurrent.com/conceal-and-fail-to-report-the-uber-cso-indictment/">Conceal and Fail to Report - The Uber CSO Indictment</a></cite>."</p>
<blockquote>The case is causing great consternation in the InfoSec community partly because it is the first instance in which a CSO or CISO has been personally held responsible (other than by firing) for a data breach response, and the first time that criminal sanctions of any kind have been sought against the corporate victim of a data breach for handling (mishandling) the data breach itself.</blockquote>
<p>Mark spends a lot of energy explaining the law of the case and some of the subtleties, for example: "It’s also clear that Uber and Sullivan did not want the FTC to know about the 2017 breach. But I’m not sure that, as a matter of law, this constitutes “misrepresenting, concealing or falsifying” materials actually produced to the FTC." As someone who does expert witness work now and again, I've learned to recognize skilled analysis, and this is skilled analysis, the kind you'd want on your side, especially if you're one of those with great concern. </p>
<p>I have a few small things to add, and one weighty one.</p>
<p>First, Joe Sullivan is innocent until proven guilty. There's no need, and many opportunities for mistakes, in pre-judging him or the case. The indictment is literally written to make the case against him, and portray him in the worst possible light. There's lots written in general about how being prosecuted is an emotionally wretched experience, and how even innocent people will plead guilty to reduce the shadow of uncertainty when prosecutors will ask for the maximum possible sentence for those who exercise their rights to a trial by jury. If you're not familiar with that, searches such as "why do innocent people plead guilty" may be eye opening.</p>
<p>Second, CSOs are obviously concerned about what this means for them. My advice is to get your lawyer's advice in writing. "My lawyer told me this was ok" is a pretty good defense. Keep a copy in your personal safe.</p>
<p>Third, I've long seen breach disclosure as a way to learn from our mistakes, and I've been struggling with what this case means for learning from breach disclosures. In reading Mark's essay, I think it's a net negative. We're going to see substantially more caution from lawyers. That might mean:</p>
<ul>
<li>Less specific language used (if that's even possible). </li>
<li>More claims that "something may have happened, and we are reporting this to you out of an abundance of caution" sorts of language.</li>
<li>More specific and factual statements, bookended by "we have a good faith belief based on what we know today"</li>
</ul>
<p>I'm glad that my research has led me towards <a href="https://shostack.org/blog/doing-science-with-near-misses/">near miss analysis</a>. The opportunity to demonstrate constructive engagement was important before. Mark describes Uber's response as "less than ideal," and speaking more generally, many responses to things which turn out to be somewhere on a spectrum from nuisances, incidents or breaches involve decisions that will be judged harshly with 20/20 hindsight. As a society, we would benefit from ways to demonstrate constructive engagement, to enable us to understand from the problems which are happening, and to draw more lessons from them.</p>
<p>I don't claim that near miss analysis is the only frame for that, but in light of the prosecution, we should be thinking about ways to give those who want to do the right things incentives to do those things.</p>Phil Venables Blogging2020-09-07T00:00:00Zhttps://shostack.org/blog/phil-venables-blogging/AdamIt's not LinkedIn posts or Tweets, but a real live blog.<span class="single-post-excerpt"><!-- excerpt -->It's not LinkedIn posts or Tweets, but a real live blog.<!-- /excerpt --></span>
<p>Phil Venables is one of the more reflective and thoughtful CSOs out there, and in this era where everything is a tweet or a linkedin post (sigh) you may have missed that Phil has a <a href="https://www.philvenables.com/home" target="_blank" rel="noopener">blog</a>.</p>
<p>This Labor day, why not take the time to catch up on his writing?</p>Threat Modeling, Insiders and Incentives2020-09-10T00:00:00Zhttps://shostack.org/blog/threat-modeling-insiders-and-incentives/AdamInspired by the recent story of Tesla's insider, I'd like to discuss insider threat as it fits into threat modeling.<span class="single-post-excerpt"><!-- excerpt -->Inspired by the recent story of Tesla's insider, I'd like to discuss insider threat as it fits into threat modeling.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/insider.png" alt="3 individuals working from laptops on a couch" class="single-post-main-img" />
<p>There's been a lot of talk over the last week about "updating threat models" in light of the Tesla insider story. (<a href="https://www.wired.com/story/tesla-ransomware-insider-hack-attempt/" target="_blank" rel="noopener">For example</a>.) I'm getting this question a fair bit, and so wanted to talk about insiders in particular, and how to use the news in threat modeling more generally. This also is a great opportunity to think about incentives.</p>
<p>So first: the story is that a Russian gang approached a Tesla employee and offered $1 million to plant malware. So: should you update your threat models?</p>
<p>The first question to ask is "do your threat models already include insiders?" They should. Many people don't like to talk about insiders. They don't want to think that Bob is going to turn against them, and that's natural. But "insiders" can be framed as a focus on an the attacker who's used a phishing link to steal credentials, or an attachment to run code inside your soft, gooey interior. If Bob can go wild inside your systems, Yuri can use Bob's account in the same ways.</p>
<p>It's easy for me to say that "they should," and there's also a reality that many defenses against insiders take substantial engineering effort, and if your organization hasn't committed to that, then there can be a real discouraging effect from repeatedly discovering and disregarding these threats.</p>
<p>So when the issue is on the front page of the newspaper can be a fine time to revisit what's in your threat models. News stories can motivate management to say 'we don't want to be the next version of that,' and so if you've lacked the drive to address insiders, maybe Tesla can help you get there.</p>
<p>Speaking of getting there: the offer of a million dollars doesn't mean anyone will get paid a million dollars, and even if you do get the money, you're going to be blackmailed out of that, and a lot more. But <a href="https://www.cia.gov/library/readingroom/docs/CIA-RDP88-01070R000301750013-0.pdf" target="_blank" rel="noopener">MICES</a> starts with money for a reason: there's always people in financial distress, and long term distress makes thinking clearly difficult.</p>
<p>And thinking about that distress is key to helping your employees do the right thing in such situations. Idan Shoham has an interesting post "<a href="http://shoham.ca/blog/employer-pays-not-bad-guys.html" target="_blank" rel="noopener">Paying insiders to expose the bad guys</a>." He touches on paying more as the bad guys are indicted, arrested, etc, which is sensible. Not everyone wants to engage in dealing with criminals or helping to bring in more specific evidence. Even filing a police report is an important step, and it can be stressful. It's also a strong deterrent to lies: filing a false report is a crime. A small bonus can help people get over that hump. (Incidentally, you're thinking about insider threats when you think your folks might abuse your bonus system. It's easier to think about when the reason Bob's screwing you is obvious.)</p>
<p>Especially in 2020, when everyone is stressed all the time, aligning incentives is a good strategy to include in your answers to "What are we going to do about it?"</p>
<p><a href="https://unsplash.com/photos/_S7-KX8geL0" target="_blank" rel="noopener">Photo</a> by Austin Distel.</p>Starting Threat Modeling: Focused Retrospectives are Key2020-09-17T00:00:00Zhttps://shostack.org/blog/starting-threat-modeling-focused-retrospectives-are-key/AdamDon't skip this important step.<span class="single-post-excerpt"><!-- excerpt -->Don't skip this important step.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/starfish-retrospective.jpg" alt="sample starfish retrospective" class="single-post-main-img" />
<p>There's a good, long article at MartinFowler.com "<a href="https://martinfowler.com/articles/agile-threat-modelling.html" target="_blank" rel="noopener">A Guide to Threat Modelling for Developers</a>." It's solid work and I'm glad its out there. And I want to do something I don't usually do, which is quibble with footnotes.</p>
<p>Jim writes in footnote 2:</p>
<blockquote>Adam Shostack, who has written extensively on threat modelling and has provided feedback on this guide takes credit for the three question structure. He also adds a fourth question "Did we do a good enough job?" I don't disagree with Adam that we need to reflect and improve on the outcomes of our threat modelling. However, I have omitted this question from the basic structure as I believe it can be addressed elsewhere.</blockquote>
<p>So why is that worth quibbling with? It's certainly kind and respectful, and everything one could hope for in a disagreement, and I hope to answer in the same spirit.</p>
<p>The reason I include "did we do a good enough job" in my Four Question Framework is twofold. First, retrospectives are often given short shrift in practice. When your software development is in a reasonably steady state, it may be okay to hold a 30 minute session monthly... or quarterly... or when someone makes noise about it. But far more importantly, threat modeling is usually an addition to an existing software process. Because it is new, different work, setting aside a time to specifically ask 'did we do a good enough job at threat modeling' is helpful. </p>
<p>In fact, in my consulting work, I recommend that the first time a team is exposed to threat modeling, you set aside as much time for a retrospective as you do for performing the work. Many times the problems you hear about are "this is new, and we're still building muscle." But that's okay. People want to be heard. They want a chance to express those perspectives. Making listening part of threat modeling helps them, and you, get it done. </p>
<p>Image from <a href="https://t2informatik.de/en/smartpedia/scrum-retrospective/" target="_blank" rel="noopener">T2 Informatik</a>.</p>Mentions2020-09-23T00:00:00Zhttps://shostack.org/blog/mentions/AdamA few recent mentions<span class="single-post-excerpt"><!-- excerpt -->A few recent mentions<!-- /excerpt --></span>
<p>I <a href="https://multi-hazards.libsyn.com/cybersecurity-threat-modeling-in-an-up-down-world-conversation-with-adam-shostack" target="_blank" rel="noopener">joined</a> Vin Nelsen for the Multi-Hazards podcast. If you're looking for me to go beyond the bounds of technology threat modeling, this was, an interesting, far-ranging conversation about the state of the world. He also creates a study guide per episode — don't miss the subtly labeled pdf there.</p>
<p>I didn't join in <a href="https://cisoseries.com/security-is-suffering-from-devops-fomo/" target="_blank" rel="noopener">Security Is Suffering From DevOps FOMO</a>, but they discuss my blog fight with Chris Romeo over should threat modeling be taught or caught.</p>
<p>At 6PM BST (London) today the 23rd, I'll be joining <a href="https://www.equalexperts.com/expert-talks/gamification-of-threat-modelling-using-owasp-cornucopia/?utm_source=linkedin&utm_medium=social&utm_campaign=eto-sec-cornucopia" target="_blank" rel="noopener">Gamification of threat modelling using OWASP Cornucopia with Grant Ongers</a>. </p>
<p>Lastly, I'm interviewed as part of Ed Moyle & Diana Kelley's new book, "<a href="https://www.packtpub.com/product/practical-cybersecurity-architecture/9781838989927" target="_blank" rel="noopener">Practical Cybersecurity Architecture</a>," now available for pre-order.</p>A PCI Threat Model2020-09-24T00:00:00Zhttps://shostack.org/blog/pci-threat-model/AdamCompliance isn't Security, oh and something I wrote.<span class="single-post-excerpt"><!-- excerpt -->Compliance isn't Security, oh and something I wrote.<!-- /excerpt --></span>
<p>The reason I hate compliance programs is because they're lists of things we need to do, and many times, those things don't seem to make a great deal of sense. In threat modeling, I talk about the interplay between threats, controls, and requirements, and I joke that "a requirement to have a control absent any threat" is why we hate compliance programs (not joking).</p>
<p>So when Anton Chuvakin wrote an article on <a href="https://securityboulevard.com/2020/05/data-security-and-threat-models/" target="_blank" rel="noopener">Data Security and Threat Models</a> and closed it with "explicit threat models do make security better," I remembered that I'd actually written up a threat model for PCI, but not shared it. It's now at <a href="https://www.shostack.org/files/papers/A_PCI_Threat_Model_2020.pdf" target="_blank" rel="noopener">A PCI Threat Model</a>, and I have a column in Dark Reading explaining how this can <a href="https://www.darkreading.com/threat-intelligence/solving-the-problem-with-security-standards-/a/d-id/1338944" target="_blank" rel="noopener">solve the problem with security standards</a>.</p>
Training: Threat Modeling for Security Champions2020-10-07T00:00:00Zhttps://shostack.org/blog/training-threat-modeling-for-security-champions/AdamExpanding on our distributed class structure.<span class="single-post-excerpt"><!-- excerpt -->Expanding on our distributed class structure.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/yoda-video.jpg" alt="puppet yoda teaching video screenshot" class="single-post-main-img" />
<p>I haven't talked about it much, but I spent the first few months of the pandemic learning how to deliver effective training in a distributed (online) model. I'm really proud that our distributed class NPS customer satisfaction scores are now comparable to our in-person classes. Also it's been a lot of hard work, and in addition to our core classes (Threat Modeling for Architects and Threat Modeling in Depth), we now have classes for champs and trainers.</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/RV8xLzNlrTQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>We have a few seats available in an upcoming distributed class, "Threat Modeling for Security Champs." Oct 19-24. Details at <a href="https://shostack.org/training/courses/301">https://associates.shostack.org/301</a> .</p>
<p>This will be small group instruction for experienced threat modelers looking to get to the next level.</p>On Monopolies2020-10-09T00:00:00Zhttps://shostack.org/blog/on-monopolies/AdamIn a simpler age, Matt Stoller famously lost his job for critiquing Google.
<p>In a simpler age, Matt Stoller famously lost his job for critiquing Google.</p>
<p>He has a really interesting article summarizing and analyzing the massive anti-trust report at <a href="https://mattstoller.substack.com/p/congress-gets-ready-to-smash-big" target="_blank" rel="noopener">Congress Gets Ready to Smash Big Tech Monopolies</a>.</p>
<p>If you're like me, unsure if or how this might matter, take the time to read what he said. (Via <a href="https://www.pelicancrossing.net/netwars/2020/10/incoming.html" target="_blank" rel="noopener">Wendy Grossman</a>, who also has interesting commentary.)</p>Notice the Outrage Machines2020-10-31T00:00:00Zhttps://shostack.org/blog/notice-the-outrage-machines/AdamWith three days to the US election, the outrage machines are running on all cylinders. It'll be easier to stay happy if you remember to notice them.<p>With three days to the US election, the outrage machines are running on all cylinders. It'll be easier to stay happy if you remember to notice them.</p>
<p>To be clear, I'm not using a metaphor. Websites from news to social media use data to drive stories. Twitter's top tweets, Facebook's timeline, your local newspaper, but also Linkedin, Medium, Buzzfeed, - all are focused on keeping you on their site as long as possible to show you as many ads as possible. The key way that they do that is by measuring and optimizing engagement. Engagement is not a single number. It's a function of what the site thinks of you. Are you a woke liberal? A law and order conservative? An evangelical? Of course, because it's all code, it's not a single characterization, but dozens or hundreds of characteristics, and those characteristics are used to find and display the content that best provokes people like you. (They say "engage," I say "provoke.")</p>
<p>So if you're a woke liberal, you'll see outrageous content about voter suppression. If you're a conservative, you see content about voter fraud. If you're an evangelical.. Everyone is being shown content that sets us up with an expectation of violence on election day. (I gotta stop doing searches for this stuff.) Even being aware of it, the outrage machine is frustrating to watch. The most extreme comments and posts are the ones that you'll see.</p>
<p>There's all sorts of things to say about the threats that these systems pose to our self-conception, to our conceptions of those around us, and our understanding of the world. These tools are making a small set of people very rich. But those are posts for another day.</p>
<p>Today, and for the next few days, it will be very hard to avoid engagement engines. Being aware of the manipulation can help. Look for the tactics that are enraging you and those around you. Watch for the <a href="https://adam.shostack.org/blog/2015/07/the-drama-triangle/">drama triangle</a>.</p>
<p>The practice of <a href="https://www.insightmeditationcenter.org/books-articles/mental-noting/" target="_blank" rel="noopener">noting</a>: simply being reflective and conscious of how you feel, and why is really helpful. For example, "I'm angry" or "I'm angry that this blogger is taking sides while trying to appear reasonable." "I'm angry that this blogger isn't condemning the folks I don't like, who are clearly at fault!"</p>
<p>The outrage machines have been running on all cylinders, and they're going to go into overdrive. Don't let them drag you down.</p>On Legitimacy2020-11-02T00:00:00Zhttps://shostack.org/blog/on-legitimacy/AdamThe legitimacy of the outcomes of our election are already under aggressive and sustained attack.<span class="single-post-excerpt"><!-- excerpt -->The legitimacy of the outcomes of our election are already under aggressive and sustained attack.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/iraq-elections-2005.png" alt=" " class="single-post-main-img" />
<p>The first function of democracy is to enable the peaceful handover of power from one group to another. For this, all its myriad sins are forgiven. The peaceful handover of power from one group to another is not a sure thing. Historically, it's something of an aberration. There are all sorts of reasons, when you are powerful, to not want to hand over that power. But if we don't have peaceful handoffs, then we have violent ones.</p>
<p>Violence is a terrible way to decide who's in charge, or what they can do.</p>
<p>The second function of democracy is to convince everyone that it produces legitimate and correct choices. The importance of this was a primary message of Al Gore's <a href="https://www.americanrhetoric.com/speeches/algore2000concessionspeech.html" target="_blank" rel="noopener">concession speech</a> in 2000.</p>
<p>The legitimacy of the outcomes of our election are already under aggressive and sustained attack. Those attacks include attacks on people's ability to vote, on the collection and counting of ballots, on expectations of speediness, and on the personal integrity of the good people who spend their time administering our elections. Those attacks come from all sides, and I think many of the questions start from good places.</p>
<p>But even the well-intentioned quickly descend into a partisan mire. They are amplified by the <a href="https://shostack.org/blog/notice-the-outrage-machines/">outrage machines</a> from whatever their intent was into fuel for a set of fires.</p>
<p>They are also amplified by enemies of democracy. We know for a fact that the Russians put their thumbs on the scale of the American elections in 2016. That scheme is now clearly one of the most effective and important intelligence operations since the cracking of the Enigma. The chaos that's been injected into the western, liberal system which held authoritarianism in check will take decades to rebuild. </p>
<p>My hope is that Tuesday's results will be so clear and overwhelming that no one can seriously claim fraud. Trump got 3 million fewer votes than Hillary Clinton. She was thus, clearly, the second least popular candidate in a long time. Trump, the least popular candidate in a long time, is running again. In 2016, many people chose not to vote, or chose to vote for a third party in that election, which was why Trump was able to squeak in. It's easy to see him losing, but he, and his supporters also need to see that he's a loser.</p>
<p>That's the second function of democracy, and it's also undercut by outrage machines and filter bubbles.</p>Maps and Visualization2020-11-04T00:00:00Zhttps://shostack.org/blog/maps-and-visualization/AdamA colorful map shares a powerful message.<span class="single-post-excerpt"><!-- excerpt -->A colorful map shares a powerful message.<!-- /excerpt --></span>
<p>I <a href="https://adam.shostack.org/blog/2004/11/stop-hurting-america/">posted</a> this image in 2004.</p>
<img src="https://shostack.org/blog/img/2020/th_Purple-USA.jpg" width="325" height="255" alt="A map of America in gradations of purple" />.
<p>Even more relevant now. While we have a country that is clearly divided, the dividing lines are not so neat as the maps showing states going one way or the other.</p>
<p>Updated Nov 7, see also <a href="https://purplestatesofamerica.org/">purplestatesofamerica.org/</a>.</p>Friday Star Wars: Lego Holiday Celebration2020-11-06T00:00:00Zhttps://shostack.org/blog/friday-star-wars-lego-holiday-celebration/AdamA little something to make you smile today:<p>A little something to make you smile today:</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/-bZxO5Dn9x0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>On Legitimacy (After the Election)2020-11-11T00:00:00Zhttps://shostack.org/blog/on-legitimacy-after-the-election/AdamAdditional thoughts on the subject.<span class="single-post-excerpt"><!-- excerpt -->Additional thoughts on the subject.<!-- /excerpt --></span>
<p>Before the election, I wrote about <a href="https://shostack.org/blog/on-legitimacy/">legitimacy</a>. In that, I said 'The second function of democracy is to convince everyone that it produces legitimate and correct choices.'</p>
<p>There are two important things worth watching. First, President Trump is attempting to cast doubt on an election in which he was thoroughly rejected by voters. Second, we spent the better part of a week wondering who was going to win the election. Both are worth some noticing and reflection.</p>
<p>First, the current gap is 5 million votes (out of 140 or so million cast). There are no credible reports of large scale fraud. There's no way enough votes were miscounted to lead to a change in the result. President Trump and his allies are engaged in a reprehensible assault on our democracy. </p>
<p>Second, the country was on pins and needles over the winner of the election even after Joe Biden had a clear lead of millions of votes. Setting aside politics and history for a moment, if I told you about an election system in which half the time the eventual winner got fewer total votes than the loser, would you think that a fair system for deriving legitimacy? What if I told you that 3/4 of the recent wins by one party were won with fewer votes than went to the other party's candidate? You probably would think that's also a rigged system. And both are accurate descriptions of the Electoral College.</p>
<p>In 2016, I wrote "the Electoral College exists for a reason. (See <a href="http://avalon.law.yale.edu/18th_century/fed68.asp" target="_blank" rel="noopener">Federalist #68</a>. But it did not ensure "that the office of President will never fall to the lot of any man who is not in an eminent degree endowed with the requisite qualifications," or prevent someone who only had "Talents for low intrigue, and the little arts of popularity," from becoming President. And so, having failed in its role as a backstop, what remains is a <a href="https://en.wikipedia.org/wiki/List_of_United_States_presidential_elections_by_popular_vote_margin" target="_blank" rel="noopener">system</a> that allocates the Presidency that's out of line with the popular vote. Over the last 25 years, the Democratic party candidate has won all but one popular vote total (George Bush won a majority of the popular cote in 2004. In 2000, Bush won with 543,816 votes less than Al Gore, and in 2016, Trump won with 2.8 million fewer votes than Hillary Clinton. (The pattern doesn't show before that.)</p>
<p>Maybe this is a statistical anomaly. If it is, then we can get through it. If not, if one party will regularly need 3-5 million more votes than the other to eke out victory, then our system of allocating legitimacy to wins is in trouble. I've written <a href="https://adam.shostack.org/blog/2016/11/election-2016/">before</a> about the National Popular Vote compact, and the <a href="https://adam.shostack.org/blog/2016/11/electoral-chaos/">chaos</a> that ensued after Trump eked out a victory from behind. </p>
<p>Also, credible people can write that <a href="https://balkin.blogspot.com/2020/10/the-electoral-college-is-not-broken.html" target="_blank" rel="noopener">The Electoral College Is Not Broken</a>. That's the argument from history that it's working precisely as designed. Other credible people can put forth the argument that <a href="https://balkin.blogspot.com/2020/10/part-two-national-popular-majority-rule.html" target="_blank" rel="noopener">a constitutional amendment is required</a>.</p>
<p>There is, of course, a political argument that a divided government gives us checks and balances, but that system seems seriously out of whack.</p>
<p>Legitimacy is important, and if our institutions are not up to delivering it, we're going to have problems.</p>A Threat Modeling Manifesto2020-11-17T00:00:00Zhttps://shostack.org/blog/threat-modeling-manifesto/AdamA diverse set of experts and advocates for threat modeling are releasing a threat modeling manifesto, modeled after the agile manifesto and focused on values and principles.<span class="single-post-excerpt"><!-- excerpt -->A diverse set of experts and advocates for threat modeling are releasing a threat modeling manifesto, modeled after the agile manifesto and focused on values and principles.<!-- /excerpt --></span>
<p>There's a <a href="https://www.threatmodelingmanifesto.org/" target="_blank" rel="noopener">threat modeling manifesto</a> being released today by a diverse set of experts and advocates for threat modeling. We consciously modeled it after the agile manifesto and it's focused on values and principles.</p>
<p>Also, there's a podcast that gives you a chance to listen, behind-the-scenes at <u>The Threat Modeling Manifesto – Part 1</u>. [link to https://podcast.securityjourney.com/the-threat-modeling-manifesto-part-1/ no longer works]</p>Breaking Encryption Myths (EU Commission on Encryption)2020-11-22T00:00:00Zhttps://shostack.org/blog/breaking-encryption-myths/AdamSpeaking up<span class="single-post-excerpt"><!-- excerpt -->Speaking up<!-- /excerpt --></span>
<p>I've signed onto a <a href="https://www.globalencryption.org/2020/11/breaking-encryption-myths/" target="_blank" rel="noopener">letter</a> to the European Commission on end to end encrypted communications.</p>Stencils and Sketch Books2020-11-23T00:00:00Zhttps://shostack.org/blog/stencils-and-sketch-books/AdamGoing beyond the whiteboard.<span class="single-post-excerpt"><!-- excerpt -->Going beyond the whiteboard.<!-- /excerpt --></span>
<p>We get many things from whiteboards. One of those is a sense of impermanence - that the work on them is a work in progress. That it's a sketch, rather than a final product. And I missed whiteboards, so working with my partners at Agile Stationery, we created not only whiteboards, but also stencils to help you neaten up your threat models as you iterate through them.</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/x7T2FOXho00" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>And here's a random tidbit of "products are hard" for you: we started out with the ruler in centimeters (hey, you're an engineer!) but the books were already printed with 1/4" grid (didn't think it through). So the final production stencils have an inch-denominated ruler.</p>
<p>If you have other geeky ideas for the next rev of the stencil, let us know!</p>It's Not Working!2020-11-25T00:00:00Zhttps://shostack.org/blog/its-not-working/AdamHelp me help you...<span class="single-post-excerpt"><!-- excerpt -->Help me help you...<!-- /excerpt --></span>
<p>As we launched the <a href="https://www.threatmodelingmanifesto.org/" target="_blank" rel="noopener">threat modeling manifesto</a>, we ran into some trouble with TLS. Some of you even reported those troubles, by saying "it's not working."</p>
<p>Thanks.</p>
<p>That's so helpful.</p>
<p>Sarcasm aside, there's a basic form to a helpful bug report: "I did A, and observed B." If you want to make it really useful, add "I expected C," or even "and the impact is D."</p>
<p>Let me compare and contrast with an example:</p>
<p>"I clicked on the link I see in your post at (URL) in Chrome 86 on MacOS big sur, and I get a message "Host not found." </p>
<p>Again, to compare and contrast: "I tried to follow the link..." (How? Which link?)</p>Mitigating Social Bias in Knowledge Graphs2020-12-04T00:00:00Zhttps://shostack.org/blog/mitigating-social-bias-in-knowledge-graphs/AdamSomething to consider<span class="single-post-excerpt"><!-- excerpt -->Something to consider<!-- /excerpt --></span>
<p>There's an interesting paper, <a href="https://www.amazon.science/blog/mitigating-social-bias-in-knowledge-graph-embeddings" target="_blank" rel="noopener">Mitigating social bias in knowledge graph embeddings</a> from a team at Amazon, which was presented at an academic workshop on <a href="https://kg-bias.github.io/" target="_blank" rel="noopener">bias in knowledge graph construction</a>. The work is interesting, and the availability of approaches like this will be a welcome shift in how we deal with these important issues. Of course, these approaches are not panaceas, but starting to define and address relatively low hanging fruit is important.</p>We Need a Discipline of Cybersecurity Public Health2020-12-07T00:00:00Zhttps://shostack.org/blog/we-need-a-discipline-of-cybersecurity-public-health/AdamThe video of my Distinguished Lecture at Ruhr University Bochum is now online, and I've got reference to share as well.<span class="single-post-excerpt"><!-- excerpt -->The video of my Distinguished Lecture at Ruhr University Bochum is now online, and I've got reference to share as well.<!-- /excerpt --></span>
<p>A few weeks back, I mentioned the Distinguished Lecture I gave at Ruhr University Bochum. I'm happy to say that the <a href="https://www.youtube.com/watch?v=2PobV1jH4a8" target="_blank" rel="noopener">video</a> is now online, and I also want to share the <a href="https://adam.shostack.org/publichealth/References_for_Shostack_Cyber_Public_Health_November2020.pdf" target="_blank" rel="noopener">references</a>.</p>Fireeye Hack & Culture2020-12-09T00:00:00Zhttps://shostack.org/blog/fireeye-hack-culture/AdamThoughts on the recent Fireeye Hack and the culture surrounding breaches<span class="single-post-excerpt"><!-- excerpt -->Thoughts on the recent Fireeye Hack and the culture surrounding breaches<!-- /excerpt --></span>
<p>[Update: 3 comments] Fireeye's <a href="https://www.fireeye.com/blog/products-and-services/2020/12/fireeye-shares-details-of-recent-cyber-attack-actions-to-protect-community.html" target="_blank" rel="noopener">announcement</a> of their discovery of a breach is all over the news. The Reuters <a href="https://www.reuters.com/article/idUSL1N2IO2S4" target="_blank" rel="noopener">article</a> quotes a 'Western security official' as saying "Plenty of similar companies have also been popped like this."</p>
<p>I have two comments. First, it's easy for anyone to label attackers "sophisticated." Fireeye certainly has more data and experience in assessing that, and I'd like to see their scale. I'd like to hear specifics of what makes them call the hack top-tier. OK, they "tailored their capabilities"? How? When you say "a novel combination of techniques" is that "novel techniques" or "novel combinations"? I understand that that's unlikely to come out for a while because of investigations.</p>
<p>Second, nearly fifteen years ago, when we wrote the <a href="https://amzn.to/3n2z6Hb" target="_blank" rel="noopener">New School</a>, the way we perceived breaches was very different. Now, almost all of what I'm seeing is the message that we should be compassionate and see how we can learn from it, for example: <a href="https://www.linkedin.com/posts/oliviarosecybersecurity_lets-give-some-grace-to-fireeye-anyone-activity-6742282231051493376-YUdx" target="_blank" rel="noopener">Let’s see how they can react to this and ultimately strengthen the industry. </a>
</p><p>It's very positive to see that change has really taken hold.</p>
<p>Third, after writing a first version, I'm seeing lots of compliments about them releasing lots of IoCs, and that release is a great step. Also, I want to say that, if your ability to detect these attacks is dependent on these IoCs, you may be in trouble. And if you're rushing to add those detections to your defenses, I want to encourage you to ask: how likely is it you'll be attacked with these specific tools? Never waste a good crisis, sure, but that doesn't make implementing these IoCs the right use of your crisis energy.</p>Charley Pride (1934-2020)2020-12-13T00:00:00Zhttps://shostack.org/blog/charley-pride-1934-2020/AdamEarly DRM artist recently passed away.<span class="single-post-excerpt"><!-- excerpt -->Early DRM artist recently passed away.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/charley-pride.png" alt="Charley Pride: A Tribute to Jim Reeves" class="single-post-main-img" style="float:left;" />
<p>Charley Pride has passed away of complications of Covid-19. I knew of his work because one of his albums, A Tribute to Jim Reeves, was <a href="https://www.stereophile.com/news/11178/" target="_blank" rel="noopener">initially sold</a> with digital rights management. I bought a copy to explore the DRM before news came out that you could just take a sharpie and draw over the bits that made it into a CD-ROM. The CD would then play, or rip, just fine. Of course, because the internet has a strange recency bias, finding pictures of that is hard.</p>
<p>He had an undeniably great voice, even if his style wasn't what I usually listen to.</p>
<p>I think the DRM'd <a href="https://amzn.to/37cbBps" target="_blank" rel="noopener">edition</a> of the album is the one that now sells for $985, with his name in green on the top left. Of course, there's also an edition for <a href="https://amzn.to/3mhwq7j" target="_blank" rel="noopener">$12.97</a> - A strange footnote to his legacy.</p>Elevation of Privilege In The Time of Cholera, Redux2020-12-15T00:00:00Zhttps://shostack.org/blog/elevation-of-privilege-in-the-time-of-cholera-redux/AdamSo far, so good.<span class="single-post-excerpt"><!-- excerpt -->So far, so good.<!-- /excerpt --></span>
<p>I had not seen <a href="https://medium.com/ft-product-technology/threat-modelling-at-the-ft-4d628ea3c5a4" target="_blank" rel="noopener">Threat modelling at the FT</a>. In in Lisa Fiander and Costas K share their experiences with <a href="https://github.com/adamshostack/eop" target="_blank" rel="noopener">Elevation of Privilege</a> played remotely.</p>
<p>It's a pleasant surprise to see how well EoP works in this remote world. I'd written <a href="https://shostack.org/blog/elevation-of-privilege-in-the-time-of-cholera/">about</a> and then done <u>a session</u> [link to https://agilestationery.co.uk/blogs/pp/our-first-play-to-learn-sessions-with-threat-modeling-games no longer works] with Agile Stationery; seeing independent reports is great!</p>The Asset Trap2020-12-16T00:00:00Zhttps://shostack.org/blog/the-asset-trap/AdamAs we look at what's happened with the Russian attack on the US government and others via Solarwinds, I want to shine a spotlight on a lesson we can apply to threat modeling.<img src="https://shostack.org/blog/img/2020/types-of-assets.jpeg" alt="File folders with the focus on one labeled Assets" class="single-post-main-img" />
<p>As we look at what's happened with the Russian attack on the US government and others via Solarwinds, I want to shine a spotlight on a lesson we can apply to threat modeling.</p>
<p>An example of asset-driven thinking leads the article <a href="https://apnews.com/article/technology-hacking-coronavirus-pandemic-russia-350ae2fb2e513772a4dc4b7360b8175c" target="_blank" rel="noopener">Hack may have exposed deep US secrets; damage yet unknown</a>. And I don't want to pick on this article in particular — anyone can fall into this trap:</p>
<blockquote>Some of America’s most deeply held secrets may have been stolen in a disciplined, monthslong operation being blamed on elite Russian government hackers. The possibilities of what might have been purloined are mind-boggling. Could hackers have obtained nuclear secrets? COVID-19 vaccine data? Blueprints for next-generation weapons systems?</blockquote>
<p>This seems perfectly reasonable list, doesn't it?</p>
<p>But you know what? This is <em>assets</em> in the sense of things which are valuable to us. What are the Russians going to do with plans for our nuclear weapons? They have their own. What are they going to do with vaccine data? Save the lives of their citizens? Plans for our next generation weapons may, indeed, be useful to them for finding weaknesses in those systems or copying them. But there are also assets in the sense of <em>things attackers want</em>.</p>
<p>I have a bet for their top prize: DKIM keys. You know, the things which add digital signatures to email as they pass through mail servers? With those, they can craft fake emails from government officials, but which pass any validity check. Back in 2016, Robert Graham <a href="https://blog.erratasec.com/2016/10/yes-we-can-validate-wikileaks-emails.html" target="_blank" rel="noopener">wrote</a>:</p>
<blockquote>Recently, in response to a leaked email suggesting Donna Brazile gave Hillary's team early access to debate questions, she defended herself by suggesting the email had been "doctored" or "falsified". That's not true. We can use DKIM to verify it.</blockquote>
<p>Now, we have an adversary who has developed a broad and modern understanding of the use of information operations to amplify wedges in Western societies, and they've been particularly successfully in the United States and the UK. And that adversary can forge emails. Emails saying things like "We need to fake election results" or "we should bollox up the rollout of the President's plan." Such lies will have a life of their own. (These may be assets in the third sense of the term, which is stepping stones, but I think, here, that's a distinction without a difference.)</p>
<p>One important defense is to rotate DKIM keys regularly, but I'm pretty sure DKIM keys are not in your assets list.</p>
<p>I hope you're nodding along, believing this plausible, because I'm going to tell you again: assets are a trap. How I think I might abuse them is dependent on my background and orientation. That's probably different than what an attacker is going to do.</p>
<p>I have a perspective on what I'd do as an attacker. The reporter for the AP has a perspective, informed by national security officials. Am I right? Is he? I don't know. Focusing in on assets helps us tell stories about those assets. As we think about defenses being structured, systematic and comprehensive, it's important to focus in on what we understand - the things we're working on, and to defend those things. If we predicate our defenses on these stories, we may be lead astray. If we focus our defenses on the vaccine secrets, then we may mis-construe the attacker's tactics. These are avoidable mistakes.</p>
<p>It's early days in this story: I don't mean to criticize the defenders or distract from their forensic work. But for those whose work is threat modeling, the anticipation of future problems, there's already lessons we can bring back to our work.</p>
<p>Previously: <a href="https://shostack.org/blog/fireeye-hack-culture/">Fireeye Hack & Culture</a>, and the attacker-centricity version of this trap in <a href="https://adam.shostack.org/blog/2019/10/who-are-we-kidding-with-attacker-centered-threat-modeling/">Who Are We Kidding with Attacker-Centered Threat Modeling?</a>, and links therein.</p>Chang'e 5!2020-12-23T00:00:00Zhttps://shostack.org/blog/change5/AdamCongratulations to the Chinese for the success of their Chang'e 5 lunar sample return mission!<img src="https://shostack.org/blog/img/2020/Change5samplereturn.jpg" alt="Scientists extract the samples from the Chang'e 5 spacecraft." class="single-post-main-img" />
<p>Congratulations to the Chinese for the success of their Chang'e 5 lunar sample return mission!</p>
<p>The complexity of landing a robot on the moon and returning it safely to Earth is enormous. In contrast to the Apollo series of missions, which launched and returned inside of a week, Chang'e took a week to get to the moon, and two weeks to return. It's a nice optimization of fuel, and complicates thermal management.</p>Dinosaur Feathers2020-12-24T00:00:00Zhttps://shostack.org/blog/dinosaur-feathers/AdamCheck out this close-up of a dinosaur tail preserved in amber!<span class="single-post-excerpt"><!-- excerpt -->Check out this close-up of a dinosaur tail preserved in amber!<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2020/01_dinosaur_tail.adapt_.1900.1.jpg" alt="Close-up of dinosaur tail in amber" class="single-post-main-img" />
<p>Scientists have discovered a chunk of amber with a dinosaur tail in it. (Poor dinosaur!) National Geographic has the <a href="https://www.nationalgeographic.com/news/2016/12/feathered-dinosaur-tail-amber-theropod-myanmar-burma-cretaceous/" target="_blank" rel="noopener">story</a>, which is not brand-new, but is a nice bit of scientific joy for the day.</p>Just the Great Conjunction of Saturn and Jupiter...shot from the moon2020-12-27T00:00:00Zhttps://shostack.org/blog/just-the-great-conjunction-of-saturn-and-jupiter-shot-from-the-moon/AdamIt's easy to forget that the Lunar Reconnaissance Orbiter has been circling the moon for nearly a dozen years..<img src="https://shostack.org/blog/img/2020/saturn-jupiter-close-01.jpg.optimal.jpg" alt="Close-up of Saturn/Jupiter conjunction" class="single-post-main-img" />
<p>It's easy to forget that the Lunar Reconnaissance Orbiter has been circling the moon for nearly a dozen years..</p>
<p>Via <a href="https://www.diyphotography.net/this-is-what-the-jupiter-saturn-conjunction-looks-like-from-a-moon-orbiting-probe/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Diyphotographynet+%28DIYPhotography.net+-+Photography+and+Studio+Lighting%29" target="_blank" rel="noopener">DIY Photography</a></p>Vaccines2020-12-28T00:00:00Zhttps://shostack.org/blog/vaccines/AdamYou may have noticed that my end of the year posts are all science focused. Today, a set of resources on the COVID vaccines.
<img src="https://shostack.org/blog/img/2020/covid19spikeproteins.jpg" alt="microscopic rendering of a COVID-19 spike protein" class="single-post-main-img" />
<p>You may have noticed that my end of the year posts are all science focused. Today, a set of resources on the COVID vaccines.</p>
<p>First, the FDA has authorized two vaccines for emergency use. The review memoranda (<a href="https://www.fda.gov/media/144416/download" target="_blank" rel="noopener">Pfizer</a>, <a href="https://fda.report/media/144673/Moderna+COVID-19+Vaccine+review+memo.pdf" target="_blank" rel="noopener">Moderna</a>) are all sorts of fascinating. As the kids say, TL;DR: both vaccines are safe and no meaningful side effects were seen in testing approximately 44,000 and 30,400 test subjects.</p>
<p>There's also a fascinating relationship to computer security where the vaccines work because, as Meredith Patterson <a href="https://threadreaderapp.com/thread/1342891437537505280.html" target="_blank" rel="noopener">says</a>, "a parser differential between how the immune system interprets an unfamiliar token (the <small>🔱</small> nucleoside) and how ribosomes interpret it, and a subtle semantic bug arising from disparate handling of different representations of an ambiguously representable parse result." (She says bug, I say exploitable feature. Ambiguous representation coding is probably a helpful redundancy.) She's looking at the highly readable <a href="https://berthub.eu/articles/posts/reverse-engineering-source-code-of-the-biontech-pfizer-vaccine/" target="_blank" rel="noopener">Reverse Engineering the source code of the BioNTech/Pfizer SARS-CoV-2 Vaccine</a>. You should, too.</p>
<p>(Image: <a href="https://news.uga.edu/searching-the-covid-19-spike-protein-for-a-potential-vaccine/" target="_blank" rel="noopener">Searching the COVID-19 spike protein for a potential vaccine</a>, April 24, 2020, University of Georgia)</p>It's 2021: Have you checked your backups?2021-01-03T00:00:00Zhttps://shostack.org/blog/its-2021-check-backups/AdamAs the expression goes, no one cares about backups, they care about restores. Do yours work?
<span class="single-post-excerpt"><!-- excerpt -->As the expression goes, no one cares about backups, they care about restores. Do yours work?<!-- /excerpt --></span>
<p>As the expression goes, no one cares about backups, they care about restores. Do yours work? </p>
<p>Some lessons learned over the last few days:</p>
<ul>
<li>Apple has <a href="https://support.apple.com/en-us/HT201255#help" target="_blank" rel="noopener">disabled</a> single user mode as of Mojave, and many recovery options are not available if you use a firmware password. Do not forget that availability is a security property.</li>
<li>Using a <a href="https://pi-hole.net/" target="_blank" rel="noopener">pi.hole</a> made getting to Apple's network Recovery Installer difficult.</li>
<li>According to the Internet, Time Machine and Synology NAS servers don't play well together. I have not been told this by Apple support reps (who have generally been very nice if not effective at solving my problem).</li>
<li>There's a cool business mode in Apple's Messages.app where it can send text messages to Apple for you. </li>
<li>Time Machine has an undocumented internal format that you can't look at without Apple tools. And when they don't work, you are officially roughly out of luck. There's lots of very large files which contain data, rather than copies of your files in something that looks like a mirror of your file system.</li>
<li>Amazon can deliver hard drives overnight, but there's no obvious way to search for just those. Add to cart, see what the fastest delivery option was. I ordered at 7PM, had a spare drive when I woke up. <a href="https://amzn.to/3pGT0IH" target="_blank" rel="noopener">4TB USB3 drives</a> are under $100, and by the time you read this, will be under $50.</li>
<li>Disk Warrior <a href="https://www.alsoft.com/diskwarrior5apfs" target="_blank" rel="noopener">doesn't yet have full support for APFS</a>.</li>
<li><a href="https://www.prosofteng.com/mac-data-recovery" target="_blank" rel="noopener">Data Recovery</a> does, but somehow the app is not yet signed and so you get Gatekeeper errors.</li>
<li>Agilebits no longer lets you <a href="https://support.1password.com/cs/find-license/?src=bitbot" target="_blank" rel="noopener">download your license file</a> if you have a standalone 1Password installation.</li>
<li><strong>Updated</strong> to add: Time machine works better over a wired connection. If you encounter failures, try connecting via an ethernet hub.</li>
</ul>
<p>If you don't - RIGHT NOW - GO AND CHECK YOUR GODDAMNED ABILITY TO RECOVER, I will, on your worst day, jump out of your disabled computer and say "I told you so."*</p>
<p style="font-size:50%">* Terms and conditions apply. Offer not valid without coupon. Subject to locale restrictions. Not available in Quebec.</p>Digital Guru Books2021-01-15T00:00:00Zhttps://shostack.org/blog/digital-guru-books/AdamRupin Gupta runs Digital Guru books. He's one of the nicest people you'll ever meet, a real joy to work with, and he works hard to put books on shelves so that you can discover them. With the conference business changing, Digital Guru needs some help.<img src="https://shostack.org/blog/img/2021/digital-guru-books.jpg" alt="photo of the DigitalGuru Books Team" class="single-post-main-img" />
<p>Rupin Gupta runs Digital Guru books. He's one of the nicest people you'll ever meet, a real joy to work with, and he works hard to put books on shelves so that you can discover them. With the conference business changing, Digital Guru needs some help.</p>
<p>Borrowing some words from my editor Jim Minatel: "If you've ever bought a book at a technical conference bookstore - RSA, dozens of Microsoft events, and so on - chances are you've bought it from the Digital Guru team. Rupin and his small business operate this specialty service that benefits everyone in the technical world - attendees, publishers, authors, event organizers. They don't just sell books though, they know the authors, the customers, they're the epitome of a small customer focused business."</p>
<p>Help him out at <a href="https://www.gofundme.com/f/save-the-independent-technical-bookseller/thankyou/share" target="_blank" rel="noopener">https://www.gofundme.com/f/save-the-independent-technical-bookseller</a>?</p>
Podcast on Using Games2021-01-25T00:00:00Zhttps://shostack.org/blog/podcast-on-using-games/AdamIt would be trite writing to say it was fun to be on a podcast with Volko Ruhnke and Hadas Cassorla to talk about using games to teach. And while it was, it was really educational and inspirational. I learned from both of them, and I hope you enjoy the podcast as well!
<p>It would be trite writing to say it was fun to be on a podcast with Volko Ruhnke and Hadas Cassorla to talk about using games to teach. And while it was, it was really educational and inspirational. I learned from both of them, and I hope you enjoy the podcast as well!</p>
<p><a href="https://www.buzzsprout.com/775379/7439884-volko-ruhnke-adam-shostack-and-hadas-cassorla-building-games-to-teach-real-world-security" target="_blank" rel="noopener">Volko Ruhnke, Adam Shostack and Hadas Cassorla - Building Games to Teach Real-World Security</a></p>Irius Risk & Gary McGraw2021-01-26T00:00:00Zhttps://shostack.org/blog/irius-risk-gary-mcgraw/AdamDr. Gary McGraw joins the IriusRisk Technical Advisory Board<span class="single-post-excerpt"><!-- excerpt -->Dr. Gary McGraw joins the IriusRisk Technical Advisory Board<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/irius-risk-gary-mcgraw.jpg" alt="Dr. Gary McGraw joins the IriusRisk Technical Advisory Board" class="single-post-main-img" />
<p>I'm very excited that Gary McGraw is <a href="https://iriusrisk.com/press-release-gary-mcgraw/" target="_blank" rel="noopener">joining</a> the Irius Risk Technical Advisory Board as board chair. Gary's a pioneer in software security, and his work in machine learning was my choice to <a href="https://shostack.org/blog/tmt-machine-learning/">kick off blogging</a> 2020.</p>Threat Modeling and Social Issues2021-01-28T00:00:00Zhttps://shostack.org/blog/threat-modeling-and-social-issues/AdamFor Data Breach Today, I spoke with Anna Delaney about threat modeling for issues that are in the news right now.<span class="single-post-excerpt"><!-- excerpt -->For Data Breach Today, I spoke with Anna Delaney about threat modeling for issues that are in the news right now.<!-- /excerpt --></span>
<p>For <cite>Data Breach Today</cite>, I spoke with Anna Delaney about threat modeling for issues that are in the news right now: "Does your organization have a plan in place if one of your employees is accused via Twitter of being an insurrectionist? If your software was being used to spread plans for a riot, could you detect that? Threat modeling expert Adam Shostack discusses how companies should be prepared to respond to issues in the news."</p>
<p><a href="https://www.databreachtoday.com/threat-modeling-for-social-issues-a-15854" target="_blank" rel="noopener">Threat Modeling for Social Issues</a></p>Better OKRs Through Threat Modeling2021-02-15T00:00:00Zhttps://shostack.org/blog/better-oks-through-threat-modeling/AdamEffective Threat Modeling by itself can ensure that your OKRs and AppSec Program are not only in great tactical shape, but also help define a strategic roadmap for your AppSec Program.<span class="single-post-excerpt"><!-- excerpt -->Effective Threat Modeling by itself can ensure that your OKRs and AppSec Program are not only in great tactical shape, but also help define a strategic roadmap for your AppSec Program.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/okr-in-tm.jpg" alt="OKR in Threat Modeling" class="single-post-main-img" />
<p>Abhay Bhargav has a really excellent post on <a href="https://www.abhaybhargav.com/better-okrs-for-security-through-effective-threat-modeling/" target="_blank" rel="noopener">Better OKRs for Security through Effective Threat Modeling</a>. I really like how he doesn't complain about the communication issues between security and management, but offers up a concrete suggestion for improvement.</p>
<p>Key quote: "Effective Threat Modeling by itself can ensure that your OKRs and AppSec Program are not only in great tactical shape, but also help define a strategic roadmap for your AppSec Program."</p>
<p>I like the post so much that I have only a small amount to add. I think we could benefit by sharing sample OKRs around either threat modeling, the four questions, or smaller deliverables. </p>
<p>I am not a master of the OKR form, but some simple examples might include:</p>
<ul>
<li>Increase assurance that we're making the right security investments by having current threat model documents for 95% of our apps and operational environments. (Where current is some time metric related to velocity) or</li>
<li>Improve system resilience by revisiting 'what can go wrong' for <i>a system that's led to lots of operational problems.</i></li>
<li>Improve defenses by adding one test case for each of STIDE per sprint to existing code for at least 75% of sprint stories.</li>
<li>Reduce security debt by 10% over Q1 by going back and creating appropriate system models for 5 of the Legacy Deployments.</li>
</ul>
<p>Ideally, these would be anonymized versions of real OKRs, possibly with history. (For example, "I started by asking for one of each ST<b>R</b>IDE test cases, but there was strong pushback because we concentrate our model of repudiation in our support use cases.")</p>My Year Without Flying2021-02-18T00:00:00Zhttps://shostack.org/blog/my-year-without-flying/AdamIt was just over a year ago that I last walked out of the Seattle airport. Some thoughts from a very frequent flyer on the pandemic so far.<span class="single-post-excerpt"><!-- excerpt -->It was just over a year ago that I last walked out of the Seattle airport. Some thoughts from a very frequent flyer on the pandemic so far.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/snack-box.jpg" alt="Snack Box" class="single-post-main-img" />
<p>It was just over a year ago that I last walked out of the Seattle airport. Before the pandemic, I was a very frequent flyer. As the pandemic was starting, I was under the weather and chose to skip RSA, having little idea what was coming.</p>
<p>That trip, in early February, was also the last training I did in front of a room. Now, a year later, I want to look back and look forward a little. First, I want to acknowledge that my family has been very lucky in escaping the worst of the damage of the pandemic, and I'm aware that that luck is tied in part to us being able to work remotely via computer, and having sufficient space to make that function.</p>
<p>On a professional level, the pandemic forced me to learn a great deal about teaching and learning and how to make distributed classes work. I also learned a lot about video production and editing and learning management tools and wow... our classes are now available online, and we're building out more open classes (more on that soon). Every class, I observe some students struggling, and my compassion for teachers and students in elementary school grows. My students and I have all sorts of advantages including discipline, developed learning skills, etc. I can barely imagine how someone who's seven years old can do it.</p>
<p>But moving from me to a larger view, the magnitude of the calamity of 2020 is hard to overstate.</p>
<p>On a political level, the United States has seen violent interference with the peaceful transfer of power, and the consequences for the person most responsible for that seem to be minimal. That's an astounding level of political dysfunction, whose consequences are likely to be awful. If you want to hold "elites" accountable, there's a clear place to start: Donald Trump. We can think about a future in which the US returns to a stable political system, and possibly acts as a global anchor for human rights, democracy, free markets, and enlightenment values, or one in which we careen back and forth, with each administration working to undo the damage wrought by its predecessors. I've written recently on legitimacy <a href="https://shostack.org/blog/on-legitimacy/">before</a> and <a href="https://shostack.org/blog/on-legitimacy-after-the-election/">after</a> the election.</p>
<p>On the level of the pandemic, we have somehow normalized and accepted the idea that thousands of people per day are dying. Hospitals are overwhelmed. We can reasonably anticipate that as many as a million Americans may die of this disease. Nothing in my lifetime relates.</p>
<p>We can roughly model the future of the pandemic as vaccines work — or not. By "not," I mean the rate of variants is high, and we fail to develop "pan-corona" vaccines (see <a href="https://www.nytimes.com/2021/02/09/health/universal-coronavirus-vaccine.html" target="_blank" rel="noopener">here</a>). The "vaccines working" scenario is vaccines, in combination with medical advances to reduce the impact to those who've caught covid, and public health measures like contract tracing, masks, and other forms of isolation all join together to cut the impact of the disease enough that things can go back to normal.</p>
<p>If vaccines don't work, well, for most of human history, we lived with plagues, and there were many responses that took advantage of the time involved in travel. We may have seen the end of the golden age of fast travel. Demand will go down: we have seen organizations figure out remote work, teams figure out remote on boarding and team building, and businesses are learning how to learn without putting everyone in a room. Acknowledgements of old and new difficulties will rise. Those with young children or other dependents are less able to be away. We may see quarantines on arrival. My year without flying may become less remarkable.</p>
<p>The combined bleak view seems remarkably bleak, and we haven't even talked about climate change. We can imagine and work for better futures. Part of getting to a better future will entail doing something about the <a href="https://shostack.org/blog/notice-the-outrage-machines/">outrage machines</a> that select and promote content that upsets people and drives them to commit violent acts against each other. We'll need to get back to the idea that long-form content is worthwhile, and in that mode, I want to recommend <a href="https://youtu.be/dU5EKbkolUw?t=875" target="_blank" rel="noopener">Rebuilding the Civic Square</a> with Kaye Husbands Fealing, a dean at Georgia Tech and Professor Peter Swire. It's roughly 35 minutes, and it's very good. They start out discussing issues of outrage engines and the legal framework for them in the US. The link jumps in a bit later where Peter discusses being willing to understand that people who think differently are not monsters, and it is very much worth listening to. </p>
<p>If you've read this far, thank you. Let me tie this back to threat modeling as a reward. 😉 This post started out with what I'm working on, transitioned through two sets of what can go wrong (with distributed learning, and then broader societal issues). It then moved to what are we going to do about it, and in noting that you're here, at least my writing did a good job of keeping you reading.</p>Linkedin Learning2021-02-23T00:00:00Zhttps://shostack.org/blog/linkedin-learning/AdamBringing threat modeling to more and more people, now through a series of courses on LinkedIn.<span class="single-post-excerpt"><!-- excerpt -->Bringing threat modeling to more and more people, now through a series of courses on LinkedIn.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/linkedin-launch-dos-eop.jpg" alt="headphones, Threat Modeling book, and mug on a desk with a screen snippet overlay of the Denial of Service and Elevation of Privilege course on LinkedIn" class="single-post-main-img" />
<p>I am very excited to announce that Linkedin Learning has released "<a href="https://www.linkedin.com/learning/threat-modeling-denial-of-service-and-elevation-of-privilege?u=0" target="_blank" rel="noopener">Threat Modeling: Denial of Service and Elevation of Privilege</a>." This is the sixth course I've done with them, and completes a cycle which starts with "<a href="https://www.linkedin.com/learning/learning-threat-modeling-for-security-professionals?u=0" target="_blank" rel="noopener">Learning Threat Modeling for Security Professionals</a>," and then steps through each of the STRIDE threats in depth. (We combined DoS and EoP because neither felt like a full course as I wrote the scripts.)</p>
<p>This is important to me because we're working to bring threat modeling to more and more people. To achieve that goal, we need scalability. These self-paced courses are infinitely scalable, and on a learning platform that's very widely available.</p>Happy (Belated) Pi Day!2021-03-15T00:00:00Zhttps://shostack.org/blog/belated-pi-day/AdamFor pi day, we celebrated with a set of pies<span class="single-post-excerpt"><!-- excerpt -->For pi day, we celebrated with a set of pies<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/bacon-liver-pie.jpg" alt="a pizza with bacon and liver" class="single-post-main-img" />
<img src="https://shostack.org/blog/img/2021/chicken-pot-pie.jpg" alt="A pizza with chicken" title="chicken-pot-pie.JPG" border="0" width="300" height="225" />
<img src="https://shostack.org/blog/img/2021/Cherry-pie.jpg" alt="A pizza with cherries on it" title="Cherry-pie.JPG" border="0" width="300" height="225" />
<p>For pi day, we celebrated with a set of pies - a <a href="http://allrecipes.co.uk/recipe/3558/liver--bacon-and-mushroom-pie.aspx" target="_blank" rel="noopener">British</a> style bacon and liver pie, a chicken pot pie, and a cherry pie.</p>
<p>The bacon and liver pie, with roasted carrot and shallot, was intended as a joke and came out well enough that we'll make it again. The cherry pie, with dried cherry and prosciutto, was also good as a sauce-free design.</p>Mmmm, Pandemic Puppies2021-03-24T00:00:00Zhttps://shostack.org/blog/mmmm-pandemic-puppies/Adam<span class="single-post-excerpt"><!-- excerpt --> <!-- /excerpt --></span>
<p>This is a really encouraging set of trends that Sandy Carielli <a href="https://go.forrester.com/blogs/make-application-security-a-top-priority/" target="_blank" rel="noopener">reports</a> on:</p>
<blockquote>My latest report, “The State Of Application Security, 2021,” draws heavily from that security survey mentioned above, and by far the most encouraging piece of data I share in the report is about how security pros are prioritizing application security. When asked about their top tactical priorities for the coming year, improving application security was the most common response, and building security into the development process was in the top 10. That latter priority shows up in the number of security teams continuing to shift left and build security tooling into the design and development phases of the software development lifecycle.</blockquote>
<p>Where are the pandemic puppies? Well, you'll just have to read her blog post.</p>Ever Given & Suez2021-03-26T00:00:00Zhttps://shostack.org/blog/ever-given-suez/adamThoughts on the issues with the Ever Given blocking the Suez Canal.<span class="single-post-excerpt"><!-- excerpt -->Thoughts on the issues with the Ever Given blocking the Suez Canal.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/digging-a-deeper-hole.jpg" alt="Excavator digging out sand around the box of the Ever Given in the Suez Canal, March 2021" class="single-post-main-img" />
<p>There's lots of fascinating details in <a href="https://interestingengineering.com/the-ship-blocking-the-suez-canal-could-take-weeks-to-remove" target="_blank" rel="noopener">The Ship Blocking the Suez Canal Could Take Weeks to Remove</a> at Interesting Engineering.</p>
<p>Two tidbits: first, the denial of service is blocking $9.6 billion dollars a day of cargo, but the eventual cost may be lower. Second, Egypt didn't outlaw slavery until 1863. (Happy Passover, everyone!) </p>
<p><a href="https://www.cnbc.com/2021/03/26/satellite-images-of-ship-ever-given-in-suez-canal-shows-work-underway.html" target="_blank" rel="noopener">This</a> CNBC story has an interesting image captured by Capella Space, showing how deeply wedged it is (but may be misleading because of angle of capture)</p>
<p><img src="https://image.cnbcfm.com/api/v1/image/106860130-1616772730994-ExW_9uTWQAgthk7-orig.jpg?v=1616772845&w=630&h=354" /></p>
<p>Also this shot's fascinating - notice all the gear above the ship on the maintenance road, and the two access paths they've cut down to the canal:</p>
<p><img src="https://image.cnbcfm.com/api/v1/image/106860125-1616772288363-01_overview_of_ever_given_ship_suez_canal_26march2021_wv2.jpg?v=1616772459&w=630&h=354" /></p>
<p>Any other really good views? Please share in the comments!</p>
<p>Update: Thanks to Ellen for reminding me about the Suez Canal's previous cameo, in "<a href="https://shostack.org/blog/change-in-the-weather/">Change in the Weather</a>."</p>
Microsoft Autoupdate hangs Excel 16.47.210323012021-03-26T00:00:00Zhttps://shostack.org/blog/microsoft-autoupdate-hangs-excel-16-47-21032301/AdamMicrosoft AutoUpdate for Mac has gotten exceptionally aggressive about running. Even if you use launchctl to disable it, you get a pop up roughly every 15 minutes of using an Office program.<p>Microsoft AutoUpdate for Mac has gotten exceptionally aggressive about running. Even if you use launchctl to disable it, you get a pop up roughly every 15 minutes of using an Office program. </p>
<p>That's probably a good thing, overall. There's plenty of evidence that update failures leave folks vulnerable. Note that I'm saying "update failures," rather than "failure to update", because updates fail.</p>
<p>For example, today, I'm being harangued over an Excel update. Trouble is, I've tried to install it, and MAU is failing to install it for reasons that aren't obvious even after I looked at the log files. </p>
<p>If you're having this problem, you can fix it by opening a terminal, and running these commands:</p>
<code>
cd $TMPDIR
ls -ltd MSau* |head
</code>
<p>In one of those directories will be a file Excel_16.47.21031401_to_16.47.21032301_Delta.pkg . Run that. I don't know why the updater is failing, and it's unclear if MAU is sending telemetry back.</p>Threat Modeling Classes2021-03-30T00:00:00Zhttps://shostack.org/blog/threat-modeling-classes/AdamThrough the pandemic, I’ve rebuilt the way I teach threat modeling. The new structure and the platforms I needed to adapt for my corporate clients also allows me to offer the courses to the public.<span class="single-post-excerpt"><!-- excerpt -->Through the pandemic, I’ve rebuilt the way I teach threat modeling. The new structure and the platforms I needed to adapt for my corporate clients also allows me to offer the courses to the public.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/Threat-modeling-training.jpg" alt="group of professionals reviewing threat model diagrams on window-cling whiteboards in a city office" class="single-post-main-img" />
<p>I have been lucky through these unprecendented and challenging times, and I'm grateful to have avoided many of the awful problems that others have faced. In my own little way, I spent a lot of time worried that delivering threat modeling training was only possible with us in the same room together.</p>
<p>Through the pandemic, I've rebuilt the way I teach threat modeling. The new structure and the platforms I needed to adapt for my corporate clients also allows me to offer the courses to the public.</p>
<p>Why does that matter to you? If you want to bring structured, systematic and comprehensive security to your work, threat modeling is the way to go. And if you want to learn threat modeling, I aim to offer the very best training available.</p>
<p>I'm happy to announce that I'm offering two open courses in May: <a href="https://shostackassociates.thinkific.com/courses/threat-modeling-intensive-222" target="_blank" rel="noopener">Threat Modeling Intensive</a> the week of May 10th, and <a href="https://shostackassociates.thinkific.com/courses/threat-modeling-for-security-champs-301-may-2021" target="_blank" rel="noopener">Threat Modeling for Security Champs</a>, the week of May 24th.</p>
<p>People want to know how they work! I'm leading each of these. The intensive is two hours of live classes daily, and the champs course is one hour. The live portion is augmented with short (recorded) video lectures and homework. And there's no reason to make everyone do those on a zoom call, so it's up to each participant to schedule.</p>
<p>Each class is limited to 25 people so that there's time and space to interact and learn. Those limited seats are probably going to go quickly, so if you're interested, you should act fast.</p>Passover Pie2021-04-01T00:00:00Zhttps://shostack.org/blog/passover-pie/AdamFor Passover, we made a lamb and bitter greens pizza. Now, you may be saying to yourself that that’s wrong, but allow me to explain.<span class="single-post-excerpt"><!-- excerpt -->For Passover, we made a lamb and bitter greens pizza. Now, you may be saying to yourself that that’s wrong, but allow me to explain.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/passover-pizza.jpg" alt="a pizza topped with lamb and bitter greens" class="single-post-main-img" />
<p>For Passover, we made a lamb and bitter greens pizza. Now, you may be saying to yourself that that's wrong, but allow me to explain.</p>
<p>A few years ago, Seattle Food Geek wrote about a <a href="https://seattlefoodgeek.com/2012/07/no-yeast-no-rise-champagne-pizza-dough/" target="_blank" rel="noopener">No-Yeast, No-Rise, Champagne Pizza Dough</a>. It makes use of an encapsulated leavener called WRISE. I had a sample of the WRISE leavener from a friend, and Modernist Pantry has it, listed as encapsulated baking powder <a href="https://modernistpantry.com/products/im-free-perfect-gluten-free-baking-powder.html?_ga=2.142964593.511678341.1617028974-70002299.1612879401" target="_blank" rel="noopener">here</a>. Anyway, I decided to use it to make a passover pizza, taking just under 18 minutes from adding liquid to flour to putting it in the oven.</p>
<p>So, turned on the oven, set up my ingredients (always important if you want to move fast without breaking things), and had at it.</p>
<p>To make it into the oven in 18 minutes, I had to reduce the rest time from Scott's ten minutes to six. That was crucial for getting the entire process done. And it turns out that without the second, 15 minutes of resting, the dough is very stiff and hard to roll. I wasn't far from bruising my hands as I rolled it out. The second dough ball, which we allowed to leaven (rise), was far more forgiving, and rose better in the oven.</p>
<p>Also essential to getting it done in 18 minutes was parbaking the crust. If I'd tried to sauce and dress the pizza, I don't think I could have finished in time. The cooked crust was decent. Not nearly as good as others with time to rest, but it wasn't bad.</p>
<p>Now, if you want to play entirely by the rules (and I'm not sure why else you'd bother with this approach), you may notice the presence of chametz in the recipe and in the kitchen as you roll out the dough.</p>
<p>Fortunately, Ken Forkish has an answer in his <a href="https://www.amazon.com/Elements-Pizza-Unlocking-Secrets-World-Class/dp/160774838X#reader_160774838X" target="_blank" rel="noopener">Elements of Pizza</a>. You can parbake, wrap, and freeze dough, then thaw and use it. So you can parbake your crust — at which point it's not chametz, any more than matzoh is — and use it at will.</p>
<p>Disclaimer: I'm not a Rabbi, much to my mom's disappointment.</p>Behind the Scenes: Training Development2021-04-06T00:00:00Zhttps://shostack.org/blog/behind-the-scenes-training-development/AdamDeveloping a training program is hard, especially when it will be delivered remotely.<span class="single-post-excerpt"><!-- excerpt -->Developing a training program is hard, especially when it will be delivered remotely.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/Teleprompter.png" alt="teleprompter" class="single-post-main-img" />
<p>I've talked about <a href="https://courses.shostack.org/" target="_blank" rel="noopener">our new training</a>, and I want to provide a little behind the scenes view. I regularly talk with folks who've gone through the pain of developing their own training, or worse, put others through the pain of their alpha-version training, and then paid the price in having to convince people to give another training a chance.</p>
<p>When you're in a room with people, able to observe body language and facial expressions, it's easier to adjust on the fly, and 'rescue' a training going badly. That's much harder when everyone's distributed.</p>
<p>Behind the scenes, there's a huge amount of planning that helps deliver quality. Many of the deliverables (learning goals, scripts, lesson plans) are like software product planning deliverables — the end user never sees them. If we take the agile motto of "You Ain't Gonna Need It" to heart, we don't bother with them. We yolo the training, and, frankly, it shows.</p>
<p>At the start of the pandemic, I didn't know very much about these. The ability to adjust on the fly, in a room, was a crutch. The training got great customer satisfaction scores, customers brought us back, we must be doing great?</p>
<p>One of the things I did know about a year ago was flipped classrooms: you put the lectures on video, and do higher-level work in the classroom. And early in the pandemic, there was a lot of advice for teachers making the transition to remote classrooms very quickly: only worry about audio quality, you only have time to record a single take... and that was all good advice for teachers engaging in a very rapid transition.</p>
<p>I now own a decent teleprompter. Along with lights, microphones, and the whole kit and kaboodle. I resisted the teleprompter for a long time. I had an ipad mounted alongside the camera, and you know what? It showed. You could watch my eyes track over to the slides and back. And while that seems like a small thing, our brains have a huge amount of grey matter devoted to interpreting small signals like eye movement and micro-expressions. You could say that it's literally what makes us human - the brainpower we devote to it makes our heads too big.</p>
<p>But I digress. Wow, do I ever! I appreciate you reading about our heads, but for a training, students are left wondering "how does that relate to data flow diagrams?" In fact, I digress a lot, which brings me back to the teleprompter. A teleprompter, with a script, helps me stay on topic. I can plan the lessons and know that I'm covering the important points.</p>
<p>And by the way, the results of all this work are available at <a href="https://courses.shostack.org/" target="_blank" rel="noopener">courses.shostack.org</a> — we're running <a href="https://shostack.org/training/open">open courses</a> regularly.</p><p>
</p><p>Those are the important points for me, and I'm curious: what have you learned about training or learning during the pandemic?</p>
Can Training Work Remotely?2021-04-13T00:00:00Zhttps://shostack.org/blog/can-training-work-remotely/AdamI get this question a lot: Can distributed/remote training work as well as in person? Especially for threat modeling, where there's a strong expectation that training involves whiteboards...<span class="single-post-excerpt"><!-- excerpt -->I get this question a lot: Can distributed/remote training work as well as in person? Especially for threat modeling, where there's a strong expectation that training involves whiteboards...<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/children-coloring-group-tables.jpg" alt="groups of children sitting at tables, coloring, in a classroom setting" class="single-post-main-img" />
<p>I get this question a lot: Can distributed/remote training work as well as in person? Especially for threat modeling, where there's a strong expectation that training involves whiteboards. (I remember one course in particular, about 15 minutes in, the buyer said: "Let's get to the whiteboards already!")</p>
<p>And there's no doubt: people learn by doing. They also learn by having their work respectfully critiqued by an instructor, and they learn by having their work critiqued by peers, and critically examining the work done by their fellow students. And so it's helpful to include all three elements in the design of a course.</p>
<p>There's advantages to being in person: as an instructor, I've learned to read the energy of a room. It's easier to add energy, both my own, and getting people to stand up and move around.</p>
<p>It's literally harder to see the advantages to being distributed. I see two sets of advantages: the exercises and time budgets.</p>
<p>On the exercises, people can spend the time they need. Some do them faster, some do them more slowly. In a classroom, I have to allocate a time block for that: 15 or 30 minutes. The folks who can do them faster are waiting, the ones who are going more slowly might feel cut off. In the distributed course, there's no need to time box them for the benefit of the other students. (This can also be a curse, and so our exercises have a time range per exercise so people can see when to ask for help.) Also, doing the exercises in small groups does have its advantages, and a key disadvantage: the weaker students can just nod along, rather than struggling through the exercises. Struggle isn't a particularly positive word, but learning happens as you push through and figure out how to do the exercise.</p>
<p>Production-wise, it's easier to switch tasks: instructors don't need to be onsite and so running the course a few hours per day doesn't entail being away from home for longer. Participants can get other work done during the day.</p>
<p>Over the past year, I've learned to make lemonade, and to be honest, I'm now at a point where I expect to continue with distributed classes into the foreseeable future. The lectures, on video, are better than they can be in person. They're better because you can speed them up, you can re-wind, you can turn on closed captions (only some of our courses so far). And the many takes that go into them mean that I'm fresh, I'm concise, and maybe I'll even use them in person.</p>
<p>You can check out our current offerings at <a href="https://shostackassociates.thinkific.com/" target="_blank" rel="noopener">https://shostackassociates.thinkific.com/</a>, and seats are available for both May courses.</p>
The Updates Must Go Through2021-04-14T00:00:00Zhttps://shostack.org/blog/the-updates-must-go-through/AdamThe timing of updates is not coincidental.<span class="single-post-excerpt"><!-- excerpt -->The timing of updates is not coincidental.<!-- /excerpt --></span>
<p>On Monday, the Department of Justice <a href="https://www.justice.gov/opa/pr/justice-department-announces-court-authorized-effort-disrupt-exploitation-microsoft-exchange" target="_blank">announced</a> that it had cleaned malware ("webshells") off of hundreds of infected mail systems running Microsoft Exchange. Microsoft has been trying to get folks to apply critical security patches to address a problem that's being actively exploited.
<img src="https://shostack.org/blog/img/2021/msau.png" style="max-width: 75%;" alt="screencapture of Microsoft autoupdater not working" />
</p><p>A few minutes ago, I posted a this screencapture of Microsoft's autoupdater going haywire on my Mac. The two stories are intimately related to people not wanting to roll patches. This is a story I've been watching closely for twenty years or so. (For example, see our <cite><a href="https://adam.shostack.org/time-to-patch-usenix-lisa02.pdf" target="_blank" rel="noopener">Timing the Application of Security Patches for Optimal Uptime</a></cite>, Usenix Lisa 2002).</p>
<p>There were some improvements along the way. In 2004, Microsoft shipped the Blaster removal tool to remove a virulent worm. That worm had code which targeted Windowsupdate.com with a denial of service attack. I remember being in a meeting where Jason Garms described the history of that tool, and how it morphed into the Malicious Software Removal Tool (MSRT). Controversial at the time, Apple now has it's own variant of that built quietly into MacOS, and we'll come back to MSRT vs DoJ.</p>
<p>Microsoft evolved Windows Update into Microsoft Update, with a promise that you'd only need one updater. Today, I have at least three on this computer: Microsoft Update, which seems to update itself more than anything else, but also manages updates for Office. Teams has its own updater, as does Skype. Probably other tools do as well, because Microsoft seems to have taken its eye off the ball of flawless updating.</p>
<p>Low flaw, low impact updating is possible; observe Chrome and Firefox, with their restore after restart. (Admittedly, a browser is easier than an operating system, and I have many <a href="https://shostack.org/blog/its-2021-check-backups/">brickbrats</a> for Cupertino there, too.)</p>
<p>When your updates are not reliable and low impact, people avoid them.</p>
<p>When Jason talked about Blaster, he said "We made the decision that Windowsupdate was going to be the last Microsoft property standing. We can only get back if we can clean this thing up."</p>
<p>Cleaning this thing up is what the DoJ did last week. So, why DoJ, not Microsoft? Probably because some mix of: MSRT runs on fewer corporate machines than consumer ones, hesitancy to set a precedent that the DoJ can order Microsoft to execute code on machines, the DoJ's desire to flex a new precedential muscle. There may have been other factors. And when I say "hesitancy," I know at least a few folks still at Microsoft who would have shown up with baseball bats to help people see reason in the face of that idea.</p>
<p>And really, we do not want companies being told by governments to run code on people's computers. Updates are too important to the health of the digital world to see them disrupted. Trust in those updates is easily lost and hard to regain. Leaving Microsoft out of it was the right thing to do. I just wish that team would improve the quality of the update pathway, because that's also the right thing to do.</p>Threat Model Thursday: Github's Approach2021-04-16T00:00:00Zhttps://shostack.org/blog/tmt-githubs-approach/AdamA bunch of people recently asked me about Robert Reichel’s post 'How We Threat Model,' and I wanted to use it to pick up on Threat Model Thursdays.<span class="single-post-excerpt"><!-- excerpt -->A bunch of people recently asked me about Robert Reichel’s post 'How We Threat Model,' and I wanted to use it to pick up on Threat Model Thursdays.<!-- /excerpt --></span>
<p>A bunch of people recently asked me about Robert Reichel's post "<a href="https://github.blog/2020-09-02-how-we-threat-model/" target="_blank" rel="noopener">How We Threat Model</a>," and I wanted to use it to pick up on Threat Model Thursdays, where I talk about process and practices. My goal is always to build, and sometimes that involves criticism.</p>
<p>So let me start by saying I like the way that they frame it: "At GitHub, threat modeling isn’t necessarily a specific tool or set of deliverables—it’s a process to help foster ongoing discussions between security and engineering teams around a new or existing system. A threat model is a collaborative security exercise where we evaluate and validate the design and task planning for a new or existing service. This exercise involves structured thinking about potential security vulnerabilities that could adversely affect your service." I think they probably have many useful conversations in that frame.</p>
<p>I did want to point out and discuss four aspects of the post: three that are there and one that's not. They are reviews, meeting and process, universality, and the one that's not: specific steps.</p>
<p>The first thing to say is that their process is centered around review meetings ("our security team will provide documentation and examples to the engineering teams on effective threat modeling. We typically ask each engineering team to generate a model in advance, covering a significant part of a system to review"). Note that the product team is generating documentation and bringing it for review. That often leads to threat modeling happening at the end, after decisions have been made. (More on <a href="https://shostack.org/blog/tmt-talking-dialogue-and-review/">reviews in threat modeling</a>.) Later in the post, they do talk about shifting left, so I think they may have avoided this trap.</p>
<p>Second, they state "Every threat modeling conversation should have at least the following goals." One of the three is "holistically evaluate the entire surface area and develop the most likely points of compromise." This is at odds with a lot of agile practices, such as "threat model every story." </p>
<p>Third, in 'bringing it together,' they have 5 steps in "a quick summary of our process." Three of those steps relate to meeting hygiene and effective project management. I have no doubt that 'following up' and 'leaving with specific action items' are important. Similarly, we could have 'don't call the other team idiots' on that list. It's excellent advice, often violated by security teams who are getting started threat modeling. (This, by the way, is why I wrote my <a href="https://shostack.org/files/papers/The_Jenga_View_of_Threat_Modeling.pdf">Jenga</a> white paper, talking about how we have technical skills, soft skills and organizational disciplines.)</p>
<p>And that brings me to what threat modeling specific steps are they following? I would love to hear more about the documentation and examples they provide. They mention using the MS TMT or OWASP Threat Dragon, and then have an hour-long conversation about STRIDE threats to it. Are development teams expected to provide just a system model or do they also do a preliminary analysis?</p>
<p>All of this said, they're getting value from the process, and they're clearly thinking about 'what goes wrong' as they threat model, which is crucial. Each organization that succeeds with threat modeling has some commonalities, and a lot of adaption to how they deliver products.</p>
<p>I'm glad to see Github sharing how they're threat modeling, and I hope that they, and you, find this constructive and helpful in understanding where they are.</p>IoT Security & Threat Modeling2021-04-22T00:00:00Zhttps://shostack.org/blog/iot-security-threat-modeling/AdamExpanding on the UK Government's ‘The Uk Code of Practice for Consumer IoT Security’ and how it aligns with Threat Modeling.<span class="single-post-excerpt"><!-- excerpt -->Expanding on the UK Government's ‘The Uk Code of Practice for Consumer IoT Security’ and how it aligns with Threat Modeling.<!-- /excerpt --></span>
<p>There's a new report out from the <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/978692/The_UK_code_of_practice_for_consumer_IoT_security_-_PETRAS_UCL_research_report.pdf" target="_blank" rel="noopener">UK Government, The UK Code of Practice for Consumer IoT Security</a>.</p>
<p>One of the elements I want to draw attention to is:</p>
<blockquote>The use of IoT devices by perpetrators of domestic abuse is a pressing and deeply concerning problem that is largely hidden from view. Collecting data (and therefore evidence) on this is challenging for a number of reasons outlined in this section by Leonie Tanczer. There are concrete steps that both industry and the policy community could take to address the misuse of consumer IoT in this setting and we include a number of these as well as lessons from other countries.</blockquote>
<p>But all three of the elements in the report, IoT-Facilitated tech abuse, Fitness systems, and childrens things, require technologists to start thinking more deeply and broadly about the questions of what can go wrong with their products. We are past time when we can look reactively, or use a list of 'vulnerabilities' and avert our eyes from the other impacts our technology has.</p>
<p>And that means a lot more threat modeling, where the way we answer 'what can go wrong' moves beyond STRIDE.</p>This time for sure, Pinky!2021-04-23T00:00:00Zhttps://shostack.org/blog/this-time-for-sure-pinky/AdamIf everyone agrees on what we should do, why do we seem incapable of doing it?<span class="single-post-excerpt"><!-- excerpt -->If everyone agrees on what we should do, why do we seem incapable of doing it?<!-- /excerpt --></span>
<blockquote>If everyone agrees on what we should do, why do we seem incapable of doing it? Alternately, if we are doing what we have been told to do, and have not reduced the risks we face, are we asking people to do the wrong things?</blockquote>
<p>Read Mike Tanji's full article, <a href="https://web.archive.org/web/20210731095412/http://www.haftofthespear.com/solar-sunrise-solar-winds-the-questionable-value-of-two-decades-of-cybersecurity-advice/" target="_blank">From Solar Sunrise to Solar Winds: The Questionable Value of Two Decades of Cybersecurity Advice</a>.</p>'Stop Vaccine Finger Wagging'2021-04-26T00:00:00Zhttps://shostack.org/blog/stop-vaccine-finger-wagging/AdamJust stop.<span class="single-post-excerpt"><!-- excerpt -->Just stop.<!-- /excerpt --></span>
<blockquote>The U.S. political divide on whether to get the coronavirus vaccine suggests that "maybe there’s been too much finger wagging,"" said the head of the National Institutes of Health.
<br /><br />
"I've done some of that; I'm going to try to stop and listen, in fact, to what people's specific questions are," NIH Director Francis Collins said Sunday on NBC’s "Meet the Press."
</blockquote>
<p>(<a href="https://www.seattletimes.com/nation-world/stop-vaccine-finger-wagging-says-top-u-s-health-official/" target="_blank" rel="noopener">https://www.seattletimes.com/nation-world/stop-vaccine-finger-wagging-says-top-u-s-health-official/</a>)</p>
<p>No metaphor or anything, just a simple takeaway: stop finger wagging over vaccines. Cybersecurity professionals are way more effective than public health professionals, and certainly don't need to hear any finger wagging over our finger wagging over things like long passwords or failure to patch, so I shall keep my fingers where they belong.</p>Threat Model Thursday: Technology Consumers2021-04-29T00:00:00Zhttps://shostack.org/blog/threat-model-thursday-technology-consumers/Adam“It depends on your threat model...”<span class="single-post-excerpt"><!-- excerpt -->“It depends on your threat model...”<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/smokescreen.jpg" alt="person creating smoke with handheld device" class="single-post-main-img" />
<p>There's an interesting paper by Becky Kazansky, <a href="https://journals.sagepub.com/doi/full/10.1177/2053951720985557" target="_blank" rel="noopener"><cite>"It depends on your threat model": the anticipatory dimensions of resistance to data-driven surveillance</cite></a>. The author critiques 'anticipatory data practices', a collection of techniques that include my own work, as presented to civil society activists. It opens “While many forms of data-driven surveillance are now a ‘fact’ of contemporary life amidst datafication, obtaining concrete knowledge of how different institutions exploit data presents an ongoing challenge, requiring the expertise and power to untangle increasingly complex and opaque technological and institutional arrangements.”</p>
<p>Her work, advances my thinking and concerns that I've had since our expriment with <u>threat modeling the privacy of Seattle residents</u> [link to https://seattleprivacy.org/threat-modeling-the-privacy-of-seattle-residents/ no longer works]. In particular, it shows some of the limits of the Four Question Framework. I see two particular limits with the questions "what are we working on" and "what are we going to do about it".</p>
<p>First, normal people don't "work on" technology the way technologists do. They're working on living their lives, and hoping technology helps. Most people don't have a good way to understand the technology they're using, and especially not at reasonable cost. Much of the technology which influences our lives and our privacy and security is outside of our control — for example, the electronic locks on an apartment building, the RFID readers on a bus, the cameras and microphones on telephone poles. That's technology working on us, not us working on technology.</p>
<p>Second, the options available to normal people are often exceptionally limited: phones are tracked by phone companies despite anything we do (except possibly putting them in airplane mode, which seemed to be getting more "nuanced" every year). Living without a phone or without Facebook is hard, and even if you're not on Facebook, Facebook maintains a shadow profile of you. We might spend hours configuring apps, "opting out", protecting ourselves, only to discover that the latest IoS turns on Bluetooth or iCloud again. If you use a Pi-Hole to protect your home from the devices in it, things randomly break in hard to debug ways. You can't </p>
<p>Her paper is thoughtful and challenging. We need to do better at helping real people navigate the world and protect themselves. There's an interesting approach in Apple's recent guidance, <a href="https://manuals.info.apple.com/MANUALS/1000/MA1976/en_US/device-and-data-access-when-personal-safety-is-at-risk.pdf" target="_blank" rel="noopener">Device and Data Access when Personal Safety is At Risk</a>. (I hope to have more to say about that soon.)</p>
<p>I do think that some degree of modeling and analysis is helpful for protecting normal folks, and even activists, who want to have a digital life. But the tooling we have isn't yet what they need.</p>
<p class="credit-text">Photo: <a href="https://unsplash.com/photos/GwU52kCDOpM" target="_blank" rel="noopener">Jamie Street</a>.</p>Apple Guidance on Intimate Partner Surveillance2021-05-06T00:00:00Zhttps://shostack.org/blog/apple-guidance-on-intimate-partner-surveillance/AdamApple has released ‘Device and Data Access when Personal Safety is At Risk’ and I wanted to explore it a bit.<span class="single-post-excerpt"><!-- excerpt -->Apple has released ‘Device and Data Access when Personal Safety is At Risk’ and I wanted to explore it a bit.<!-- /excerpt --></span>
<p>Apple has released (or I've just come across) a document <a href="https://manuals.info.apple.com/MANUALS/1000/MA1976/en_US/device-and-data-access-when-personal-safety-is-at-risk.pdf" target="_blank" rel="noopener">Device and Data Access when Personal Safety is At Risk</a>. </p>
<blockquote>Apple makes it easy to connect and share your life with the people closest to you. What you share, and whom you share it with, is up to you — including the decision to make changes to better protect your information or personal safety. If you’d like to revisit what you share with other people, or restore your device’s original settings for any reason, this guide can help you understand what information you are sharing via your Apple devices, and how to make changes to protect your safety.</blockquote>
<p>The document includes checklists, which are available separately:</p>
<ul>
<li><a href="https://support.apple.com/en-us/HT212021" target="_blank" rel="noopener">If you want to see if anyone else has access to your device or accounts</a></li>
<li><a href="https://support.apple.com/en-us/HT212022" target="_blank" rel="noopener">If you want to stop sharing with someone whom you previously shared with</a></li>
<li><a href="https://support.apple.com/en-us/HT212023" target="_blank" rel="noopener">If you want to make sure no one else can see your location</a></li>
</ul>
<p>Defending against attackers who are both authorized and "interface-bound" is a weird problem for information security, as traditionally defined. We object with phrases like, "of course people can do the things you've authorized them to do." "How do you expect to defend against an attacker who can configure the device?"</p>
<p>These objections are simultaneously reasonable and not. They're reasonable from the perspective that information security has long been defined as the ability of system owners to implement their policies, and to have assurance in those implementations. They're unreasonable because ever year in the United States, 1,000 to 1,600 women die at the hands of their male partners, and some fraction of those are enabled by technology-aided surveillance. (The National Institute of Justice Journal's issue on <a href="https://www.ncjrs.gov/pdffiles1/jr000250.pdf" target="_blank" rel="noopener">Intimate Partner Homicide</a> is mainly disturbing and also fascinating. For security practitioners, that's worth reading both for the key messages, and also to see how another field marshalls its information for practical use, and the quality that comes from dedicated editing.)</p>
<p>That "simultaneously reasonable and not" has made it hard for researchers working in intimate partner violence to bring their problem to the attention of information security, and that's fed my thinking that <a href="https://shostack.org/blog/we-need-a-discipline-of-cybersecurity-public-health/">we need a discipline of cybersecurity public health</a> which complements information security the way public health compliments medicine.</p>
<p>Relatedly, Apple has been thinking about the abuse of Airtags for stalking, and has <a href="https://9to5mac.com/2021/04/30/airtag-stalking-protections/" target="_blank" rel="noopener">built in some protections</a>. We can debate if those are the <em>right protections</em>, which is a much better debate to have than <em>should there be protections</em>?</p>Tracking Company Says 96% of iPhone Users Block Tracking2021-05-08T00:00:00Zhttps://shostack.org/blog/tracking-company-says-96-of-iphone-users-block-tracking/AdamSo there's some good news and some bad news in this story: 'Too Bad, Zuck: Just 4% of U.S. iPhone Users Let Apps Track Them After iOS Update'.<span class="single-post-excerpt"><!-- excerpt -->So there's some good news and some bad news in this story: 'Too Bad, Zuck: Just 4% of U.S. iPhone Users Let Apps Track Them After iOS Update'.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/flurry.jpg" alt="A child wearing a backpack walks down a snowy city neighborhood street" class="single-post-main-img" />
<p>So there's some good news and some bad news in this story: <a href="https://gizmodo.com/too-bad-zuck-just-4-of-u-s-iphone-users-let-apps-tra-1846851013" target="_blank" rel="noopener">Too Bad, Zuck: Just 4% of U.S. iPhone Users Let Apps Track Them After iOS Update</a>. The good news is that, given a choice, 96% of Americans don't accept targeted ads. I'm sure that the advertisers will accept that, move on, and not oppose new restrictions on intrusive practices.</p>
<p>But the bad news is... the data comes from a tracking and analytics company, Flurry. (I know nothing about them, and they may be very fine people.) "Flurry based its findings on a sample size of 2.5 million daily mobile active users with iOS 14.5 in the U.S. and a sample size of 5.3 million such users worldwide. According to the company, its analytics tool is installed in more than 1 million mobile applications and it aggregates data from about 2 billion devices per month."</p>
<p>So, umm, if this anti-tracking stuff is working — how does Flurry get that data, and what are they collecting from the 96% of Americans who've opted out of tracking via Apple's Anti-tracking technology?</p>
<p class="credit-text">Photo: <a href="https://unsplash.com/@sigmund?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank" rel="noopener">Sigmud</a></p>Pacific Northwest Appsec Conference2021-05-14T00:00:00Zhttps://shostack.org/blog/pacific-northwest-appsec-conference/AdamAppSec Pacific Northwest Conference is a free application security conference that will be held Saturday, June 19th. It is a virtual, online event sponsored by the OWASP chapters of Portland, Vancouver, and Victoria.
<p>AppSec Pacific Northwest Conference is a free application security conference that will be held Saturday, June 19th. It is a virtual, online event sponsored by the OWASP chapters of Portland, Vancouver, and Victoria.</p>
<p>We love to see brand new speakers, seasoned speakers and everyone in between. Their <a href="https://docs.google.com/document/d/1Zwky-6Pa06fZ6ol0pO1yOiJxh24qnCcpYnZBQIDejqo/edit" target="_blank" rel="noopener">call for presentations</a> is now open.</p>Colonial Pipeline, Darkside and Models2021-05-15T00:00:00Zhttps://shostack.org/blog/colonial-pipeline-darkside-and-models/AdamThe Colonial Pipeline shutdown story is interesting in all sorts of ways, and I can't delve into all of it.I did want to talk about one small aspect, which is the way responders talk about Darkside.<span class="single-post-excerpt"><!-- excerpt -->The Colonial Pipeline shutdown story is interesting in all sorts of ways, and I can't delve into all of it.I did want to talk about one small aspect, which is the way responders talk about Darkside.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/Mandiant-Darkside-May15-2021.png" alt="Mandian Darkside May15 2021" class="single-post-main-img" />
<p>The Colonial Pipeline shutdown story is interesting in all sorts of ways, and I can't delve into all of it.</p>
<p>I did want to talk about one small aspect, which is the way responders talk about Darkside. Blog posts from <a href="https://news.sophos.com/en-us/2021/05/11/a-defenders-view-inside-a-darkside-ransomware-attack/" target="_blank" rel="noopener">Sophos</a> and <a href="https://www.fireeye.com/blog/threat-research/2021/05/shining-a-light-on-darkside-ransomware-operations.html" target="_blank" rel="noopener">Mandiant</a> seem really useful! Information sharing is working, and what the heck does a Cyber Review Board have left to do? I want to comment first on the models that they're using, second on the data within them, and thirdly on a few of the things that the new Cyber Safety Review Board might do if they were charged with looking at this.</p>
<p>The first thing I did was to compare the kill chain models. Sophos is categorizing their chain with at least two steps that Mandiant leaves out. Most of the mapping is pretty minor differences in titles, but I can't tell if Mandiant's establish foothold stage is the same as Sophos' execution stage, and would need to dig in deeper.</p>
<img src="https://shostack.org/blog/img/2021/Sophos-Darkside-May15-2021.png" alt="Sophos Darkside May15 2021" title="Sophos-Darkside-May15-2021.png" border="0" width="757" height="437" />
<table>
<tr>
<th>Mandiant</th>
<th>Sophos</th>
</tr>
<tr>
<td>Initial Compromise</td>
<td>Initial access</td>
</tr>
<tr>
<td>Establish Foothold</td>
<td>Execution(?)</td>
</tr>
<tr>
<td>Escalate Privileges</td>
<td>Defense evasion</td>
</tr>
<tr>
<td>Maintain Presence</td>
<td>Persistence</td>
</tr>
<tr>
<td>Move Laterally</td>
<td>Lateral Movement</td>
</tr>
<tr>
<td>Internal Recon</td>
<td>Discovery</td>
</tr>
<tr>
<td>Complete Mission</td>
<td>Impact</td>
</tr>
<tr>
<td></td>
<td>Exfiltration</td>
</tr>
<tr>
<td> </td>
<td>Command & Control</td>
</tr>
</table>
<p>The second thing to note is far more important: the contents of the columns differ a lot. For move laterally, Mandiant lists: Beacon, RDP, plink, F-Secure C3, while Sophos lists PSExec, RDP, SSH. The only element in common is RDP. So, who to believe? Is the accurate information the union of the two, in which case, both are seriously off? Is one better than the other? I think that both are basing their data on five investigations ("Mandiant currently tracks five clusters of threat activity that have involved the deployment of DARKSIDE," "The Sophos Rapid Response team has been called in for incident response or to intervene during an attack involving DarkSide on at least five different instances in the past year.")</p>
<p>Let me be clear: I am not saying this to pick on either team or their members. I have every reason to believe that both companies employ smart, hardworking people, and are sincerely trying to share information to help defenders as best they can. Further, I appreciate that both have taken cycles from their response teams to assemble the information to help defenders.</p>
<p>Rather, and this is my third point: we exist in a world apparently awash in data about threat actors, and there are plenty of opportunities to dig deeper. The Mandiant and Sophos posts came to my attention in a conversation, and I didn't attempt to find others. I haven't done a element by element comparison of the chains, but I do see that Mandiant lists "suspected password attacks on perimeter", CVE-2021-20016 and malicious emails with links to Sophos' external remote access and credential phishing. (I very much appreciate that both companies are being more specific than "phishing.")</p>
<p>Today, each defender has to do this work for themselves, and there's not enough hours in the day. Tales of under-staffed, overworked, and burnt-out defense operations teams are not just common, they're the norm, much like under-staffed and overworked air traffic control was the norm, and there was 8 minutes of work for each takeoff or landing, and each controller had 30 takeoffs or landings to manage in an hour, meaning the work as imagined was 8 minutes, and the work as performed had to fit into 2 minutes. (Numbers are approximate, but the ATC situation is a major focus of work in human factors engineering, after a series of NTSB reports drew attention to the discrepancies.)</p>
<p>It would be fascinating to know if Colonial was focused on a threat informed defense, a NIST CSF informed defense, or something else, and more, over a set of investigations, is one working better than the other? A Cyber Safety Review Board could also ask more focused questions: Does one prevent better, but detect worse? Does chasing these indicators help defenders get to effectiveness, or run them ragged, drinking from a firehose? A Board could help us find empirical answers.</p>
<p>A Board could also delve into specifics: do investigating one or another type of indicator result in faster detection? How many new indicators does a typical investigation find? What's the rate of change of indicators found? (That is, "is the world of cyber really fast moving or molasses?")</p>
<p>The new Board will need to demonstrate its value and there's a plethora of ways it could do so. We all hope it chooses thoughtfully, and should give it the space and support it will need to do so.</p>
<p>[Update: the first version of the table aligned exflitration and C&C with internal recon, I am grateful to Steve Bellovin for pointing out the error. Additionally, I am aligning defense evasion and escalate privilege because least privilege is a defense, being evaded, and an argument can be made it's a presence maintenance technique.]</p>
NSF Wants Data on Your Data Needs2021-05-20T00:00:00Zhttps://shostack.org/blog/nsf-wants-data-on-your-data-needs/AdamThe National Science Foundation is looking for information on needs for datasets.<span class="single-post-excerpt"><!-- excerpt -->The National Science Foundation is looking for information on needs for datasets.<!-- /excerpt --></span>
<p>The National Science Foundation is looking for information on needs for datasets, <a href="https://www.nsf.gov/pubs/2021/nsf21056/nsf21056.jsp?WT.mc_id=USNSF_25%26WT.mc_ev=click" target="_blank" rel="noopener">Dear Colleague Letter: Request for Information on the specific needs for datasets to conduct research on computer and network systems</a>.</p>
<p>A draft of my responses is on <a href="https://docs.google.com/document/d/1V7G6UQgr-kPjQHkojjJ56Ce9dFRg6UQ3ZbyuZJ-Fg2Q/edit" target="_blank" rel="noopener">Google Docs</a>.</p>
<p>Comments are due Friday at 5 PM EST. (I thought I'd posted this earlier.)</p>Using Threat Modeling to Improve Compliance (TM Thursday)2021-05-20T00:00:00Zhttps://shostack.org/blog/using-threat-modeling-to-improve-compliance-tm-thursday/AdamThreat model Thursday is not just back, but live again!<p>Threat model Thursday is not just back, but live again!</p>
<p>This week is my <a href="https://path.rsaconference.com/flow/rsac/us21/rsacus21session/page/sessionagenda/session/1602603770712001zMnn" target="_blank" rel="noopener">Using Threat Modeling to Improve Compliance</a> at RSAC 2021.</p>
<p>The video replay is available if you have an RSA pass, and the <a href="https://static.rainfocus.com/rsac/us21/sess/1602603770712001zMnn/finalwebsite/2021_US21_RMG-R03_01_Using-Threat-Modeling-to-Improve-Compliance_1620746631000001QMkA.pdf" target="_blank" rel="noopener">slides</a> are available to all.</p>Review: Practical Cybersecurity Architecture2021-05-26T00:00:00Zhttps://shostack.org/blog/review-practical-cybersecurity-architecture/AdamAdam Shostack's review of the book Practical Cybersecurity Architecture<span class="single-post-excerpt"><!-- excerpt -->Adam Shostack's review of the book Practical Cybersecurity Architecture<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/practical-security-architecture.jpg" alt="Cover of the book Practical Cybersecurity Architecture bu Ed Moyle and Diana Kelley" class="single-post-main-img" style="float:left; padding-right:1em;" />
<p>There's an insightful <a href="https://twitter.com/stahnma/status/634849376343429120" target="_blank" rel="noopener">comment</a>, "Everybody has a testing environment. Some people are lucky enough enough to have a totally separate environment to run production in." Similarly, everybody has both enterprise and product architecture. Some people are lucky enough to be able to design them.</p>
<p>I have to say that because "architecture" is much maligned for being heavyweight, disconnected, and irrelevant in today's world of Dev-Opsy CI/CD moving fast and breaking things. But much like you can only allow things to be broken if you have detection, response, and recovery when things break, you can move faster by when you've thought about the security framework that allows you to do that.</p>
<p>And that brings me to the only book on security architecture that I've ever enjoyed, <cite><a href="https://amzn.to/3fM30gl" target="_blank" rel="noopener">Practical Security Architecture</a></cite> by Diana Kelley and Ed Moyle. </p>
<p>The property I enjoy most about this book is a focus on what we might call a YAGNI approach to architecture, doing only what is needed to serve some customer need. They're agnostic to SABSA vs TOGAF vs Open Group, drawing on each where it's strong and helpful. Sometimes, I wish they'd make a call (for example, in the application security section, they talk about both NIST 800-160 and BSIMM). But in practice, what they deliver is a good guide to both and a reminder that you likely want to flip through each as a source.</p>
<p>Importantly, the book is short, readable and grounded, and I've learned a lot reading it. it's a good quick reference to make sure you're covering something reasonably. It's also thought provoking in a good way, even when I disagree with it.</p>
<p>And they have a really important bit which improves my thinking about risk management and threat modeling.</p>
<blockquote>For an application, though, the likelihood dimension is exaggerated somewhat. A public-facing application such as the one in this example will likely always be available (or nearly so) to anyone from anywhere. This means that while likelihood is still important in the risk equation, most threats — over a long enough time horizon — have a high likelihood of being relevant... it is significantly easier to focus analysis efforts and mitigation/remediation planning on threats directly, rather than attempting to quantify or qualify likelihood, which, as we discussed, can be challenging to do.</blockquote>
<p>If we can simply set likelihood to "1" then we just look at impact, and it turns out, that looks a lot like <a href="https://github.com/UnityTech/unity-ssdlc/blob/master/Security%20Process/Bug-Bar.md" target="uhfHeaderId" rel="MSDocsHeader-Archive">bug bars</a>. Bonus, they're now more intellectually grounded. Also, you don't need to believe that all threats will come to pass, you can get the same effect with ".9," or in fact any constant.</p>
<p>Similarly, I learned a lot about scope, and how to use scoping as a tool, which influences my thinking about the threat modeling question of "what are we working on."</p>
<p>All in all, worth your time if what you do touches architecture, and more so if you had thought architecture a dirty word.</p>
<p>(Disclaimers: They interviewed me for the book, quote me, and they sent me a copy. They say nice things about my Threat Modeling book. Also, I'm lucky to count Diana and Ed as friends.)</p>
Recording Lectures2021-06-01T00:00:00Zhttps://shostack.org/blog/recording-lectures/AdamPeople sometimes ask me about my recording setup, and I wanted to share some thoughts about recording good learning content.<span class="single-post-excerpt"><!-- excerpt -->People sometimes ask me about my recording setup, and I wanted to share some thoughts about recording good learning content.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/Teleprompter.png" alt="teleprompter" class="single-post-main-img" />
<p>People sometimes ask me about my recording setup, and I wanted to share some thoughts about recording good learning content. The most important thing I've learned is the importance of conceptualizing what you want it to look like. The other thing I've learned is that the more expensive gear is usually more expensive for decent reasons, and it's easy to be penny-wise and pound foolish. It is also possible to spend crazy sums of money for features and properties that you don't need, and what you need depends on what you're trying to record. (The other thing I've learned is that Casey Ellis keeps his <a href="https://cje.io/2021/05/23/my-office-setup-part-2/" target="_blank" rel="noopener">office</a> WAY NEATER than I do.)</p>
<p>Great learning content requires a learning plan. <a href="https://amzn.to/3yFQOpU" target="_blank" rel="noopener">Teaching Effectively with Zoom</a> is a quick read and covers both a lot of learning theory and how to apply it with Zoom. Recorded lectures are one part of that and the focus of this post. It's certainly possibly to create great content with nothing more than taking out your phone (witness TikTok, but realize many of those 'impromptu' videos are anything but.). It's also possible to spend an inordinate amount of time and money without getting useful results. For the gear, I'll tell you what I have and if it's different, what I'd get today and why.</p>
<p>Again, the first question to ask is what you want your work to be. I've roughly settled on short videos with me in a chair, and the most important element of the setup is the audio. In fact, for education content, the sound is always the most crucial thing — if folks are struggling to hear what you say through staticy or choppy audio, their cognitive energy is literally being spent on understanding the words coming out of your mouth, not the message you're trying to convey.</p>
<p><strong>Audio</strong>: recording audio starts with a microphone. A lavalier microphone is the kind that attaches to a shirt, like when you're on stage. A lav is great because it's at a constant distance to your mouth, which means your audio is recording at a consistent level. I use an <a href="https://amzn.to/2TbmTFX" target="_blank" rel="noopener">Audio-Technica AT803</a> because I wanted a fully wired high-quality lav, and that's somewhat hard to find. Most are designed for wireless use, which means different connector formats.</p>
<p>An audio interface is a box that helps deal with those connector formats. It has a bunch of input and output formats, and converts between them. For example, it can take the XLR plug of my microphone, and produce both something that my computer understands as USB audio, and also a "line out" for headphones. The line-out on unit is useful for driving a teleprompter. You are likely to spend an inordinate amount of time on physical connectors and other requirements of your audio gear. If you're recording only yourself (versus you and a guest) than one input is probably ok. I now have a <a href="https://www.bhphotovideo.com/c/product/1514482-REG/motu_3120_m2_2x2_usb_c_audio.html" target="_blank" rel="noopener">Motu M2</a>, which I like. Unusually at the $200 price point, it has nuaunced level and output monitoring on the device. I returned a <a href="https://amzn.to/2QRcXAO" target="_blank" rel="noopener">Focusrite Scarlett</a>, which does most of what I need, but it turns out the level indicator is merely "clip" or "ok," and maybe that's great if you have an engineer managing sound for you. (<a href="https://rebootrecording.com/audio-interface-io-explained/" target="_blank" rel="noopener">This</a> is a good article on the physical connectors, with, crucially, lots of pictures; the <a href="https://www.nytimes.com/wirecutter/reviews/best-usb-audio-interface/" target="_blank" rel="noopener">wirecutter</a> has a review of Audio Interface devices.)</p>
<p>Those physical connections drove my microphone choice. I needed something that worked with XLR, because I didn't know about the Rode <a href="https://amzn.to/34bdUXt" target="_blank" rel="noopener">TRS to XLR</a> converter, which opens a huge number of possibilities. For the microphone, I'd buy another AT-803, but generally a cardiod (directional) or hypercardiod pattern for microphones helps you avoid picking up background noise and so something like <a href="https://amzn.to/3yDFASO" target="_blank" rel="noopener">this</a> looks like a good, less expensive option. There are more choices when you have more conversion options, but if you get <a href="https://amzn.to/34bsAWI" target="_blank" rel="noopener">this</a> you have to figure out how to plug a 4 pin mini... and that way lies madness. </p>
<p>Things I wanted to avoid (or at least not pay for):</p>
<ul>
<li>A big "podcast" microphone. They're a distraction in frame. The switchable modes and mute switches are yet more things to go wrong. They're heavy, meaning you need higher quality arms to hold them in place.</li>
<li>Wireless. Most lav mikes are wireless and, for my setup, that just more things to fail.</li>
<li>Top-end lavs with lay-flat pickups or kevlar windings. These are great if you're recording a movie and need to survive the talent sweating or jumping around.</li>
<li>More physical layer interconnects.</li>
</ul>
<p><strong>Teleprompter</strong>: I wasted a lot of time and money producing worse video before giving in and getting a teleprompter. I now have a <a href="https://glidegear.net/products/glide-gear-tmp-750-professional-video-camera-tablet-teleprompter" target="_blank" rel="noopener">Glide Gear 750</a> because it has a large (16-inch?) mirror and holds an ipad, which means I don't need to squint to read what's displayed. I'm using <a href="https://promptsmart.com/products/pro" target="_blank" rel="noopener">Promptsmart Pro</a> to autoscroll. It's pretty good, and it works much better with a wire from the audio interface, but not perfectly, so I also have a bluetooth mouse to help me scroll. The 750 is a reasonably solid unit, but the case feels flimsy. If I were moving it outside my home, I would look at a different unit or at least a replacement case.</p>
<p><strong>Tripod</strong>: The teleprompter means a tripod that can hold substantial weight (15 or more pounds.) I have an <a href="https://amzn.to/3bQy6Cd" target="_blank" rel="noopener">Alta 263</a> because wirecutter recommended it, and before my next big recording push, <del>I'll get a pan head</del> I got a <a href="https://www.bhphotovideo.com/c/product/1347333-REG/leofoto_lb_68_levelling_head.html" target="_blank" rel="noopener">levelling head</a>.</p>
<p>Things to avoid if buying a tripod only for video:</p>
<ul>
<li>"Travel" in the description. Lightweight material like carbon fiber or aluminum. Nice if you have to carry it, but the wieght of a steel tripod helps stabilize it, and you pay extra for portability.</li>
<li>Ball heads: hard to get perfectly level, hard to adjust one axis at a time.</li>
</ul>
<p><strong>Camera</strong>: I have a Sony Alpha, and today would buy either the <a href="https://amzn.to/3hXxO0a" target="_blank" rel="noopener">ZV-1</a> or the <a href="https://amzn.to/3oJj8n0" target="_blank" rel="noopener">A7C</a>, depending on budget. The second link contains a baseline Sony lens, and I've become a fan of renting lenses before buying. I'm using a <a href="https://amzn.to/3fJM2iC" target="_blank" rel="noopener">Tamron 28-75</a> set fairly open (f-stop 3 or 4) for most video.</p>
<p>Camera accessories: I worry about the mechanical stress of little tiny cable heads hanging off the camera forever, so have a <a href="https://amzn.to/3oKgvkR" target="_blank" rel="noopener">cage</a>, a <a href="https://amzn.to/34dqUvV" target="_blank" rel="noopener">cable clamp</a>, and a <a href="https://amzn.to/3uggtSP" target="_blank" rel="noopener">screw converter</a> kit. I've used a capture card like <a href="https://amzn.to/3bNUfRW" target="_blank" rel="noopener">this</a> to be able to both see what's on camera and record via OBS, but have moved to in-camera capture for quality. The inexpensive capture cards will drop video quality from 4K to to (at best) UHD (1920-1080), and they hiccup in various ways. Getting 4K video live into your computer is expensive — hundreds of dollars, not dozens. Watch for USB3 out, and read the output specs very carefully. </p>
<p><strong>Lights</strong>: I spent a <strong>lot</strong> of time on lighting. <a href="https://amzn.to/34gHcUG" target="_blank" rel="noopener">This</a> umbrella set seems to work pretty well with the lights roughly directly at head level facing me. I also got smaller glasses with a thinner frame so that they don't throw as much of a shadow. I don't really use the silver or gold umbrellas. Illya Ovchar has some <a href="https://petapixel.com/2021/07/08/11-popular-light-modifiers-compared-which-one-is-right-for-you/" target="_blank" rel="noopener">posts</a> at PetaPixel showing how light modifiers work on a manequin head. Things to avoid:</p>
<ul>
<li>Point light sources that can't easily be diffused.</li>
<li>Low intensity lights that don't throw enough light to make a difference.</li>
<li>Lights whose color temperature doesn't adjust.</li>
<li>Box lights may work for you, but they're even bigger than umbrellas, meaning they require bigger, heavier stands, and tend to need a higher-power light which gets hotter.</li>
<li>If you have glasses, avoid ring lights. They show up in my glasses however much I try to diffuse them.</li>
<li>Sunlight, because the sun moves* and thus your light changes in ways that make it hard to get consistent video.</li>
</ul>
<p>(* ok, smarty, the Earth moves. You know what I mean.)<br />
</p><p>Update June 6: when trying to solve a problem, <a href="https://www.bhphotovideo.com/" target="_blank" rel="noopener">B&H</a> and <a href="https://www.adorama.com/" target="_blank" rel="noopener">Adorama</a> have photographer-centered product categories that are way better than Amazon's mess of a search system. For example, I discovered the screw-driven leveling head when looking at B&H's <a href="https://www.bhphotovideo.com/c/browse/Tripods-Support/ci/8310/N/4075788771" target="_blank" rel="noopener">Tripods and support</a> > "<a href="https://www.bhphotovideo.com/c/browse/Tripod-Head-Accessories/ci/412/N/4075788751" target="_blank" rel="noopener">Tripod head accessories</a>" where you can scan the various <a href="https://www.bhphotovideo.com/c/product/1513552-REG/arca_swiss_860112_core_75_leveler_classic.html" target="_blank" rel="noopener">designs</a> (and price points!)</p>
<p>The other thing I've learned is to maintain a set of <a href="https://adam.shostack.org/blog/wp-content/uploads/2021/05/Adam-video-setup-notes-2021.pdf" title="Adam-video-setup-notes-2021.pdf" alt="Adam video setup notes 2021" target="_blank" rel="noopener">notes</a> on my setup — my full notes are bigger and longer, I've trimmed this for privacy, but every time I make a setup adjustment it goes in to help with the next setup. Every bit of pedantry in this is because I can't keep it all in my head, and why would I try?</p>
<p>It's been a heck of a learning journey and I'm sure there's things I've missed — what else would you like me to share?</p>
<p>Updates: July 9, added <a href="https://petapixel.com/2021/07/08/11-popular-light-modifiers-compared-which-one-is-right-for-you/" target="_blank" rel="noopener">light modifiers</a>, Dec 16, adding <a href="https://www.youtube.com/watch?v=jaFHhQktJiw" target="_blank" rel="noopener">Andrew Lo's teaching setup</a>, which is different than a recording setup. Jan, 2021, added <a href="https://www.nytimes.com/wirecutter/reviews/best-usb-audio-interfa/ce/?utm_source=rss&utm_medium=feed&utm_campaign=RSS%20Feed" target="_blank" rel="noopener">wirecutter</a> audio interface reviews. </p>
Van Buren2021-06-04T00:00:00Zhttps://shostack.org/blog/van-buren/AdamThe Supreme Court has ruled in the van Buren case, and there's a good summary on the Eff's blog.<span class="single-post-excerpt"><!-- excerpt -->The Supreme Court has ruled in the van Buren case, and there's a good summary on the Eff's blog.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/van-buren.png" alt="" class="single-post-main-img" />
<p>The Supreme Court has <a href="https://www.supremecourt.gov/opinions/20pdf/19-783_k53l.pdf" target="_blank" rel="noopener">ruled</a> in the van Buren case, and there's a good <a href="https://www.eff.org/deeplinks/2021/06/van-buren-victory-against-overbroad-interpretations-cfaa-protects-security" target="_blank" rel="noopener">summary</a> on the EFF's blog: "The decision is a victory for all Internet users, as it affirmed that online services cannot use the CFAA’s criminal provisions to enforce limitations on how or why you use their service..."</p>
<p>As I <a href="https://shostack.org/blog/amicus-brief-on-cfaa/">said at the time</a>, I was honored to be a part of EFF's <a href="https://www.eff.org/cases/van-buren-v-united-states/security-researcher-amici" target="_blank" rel="noopener">amicus</a> brief in this case.</p>
<p>There were nuanced <a href="https://www.supremecourt.gov/DocketPDF/19/19-783/153099/20200903144811054_19-783%20bsac%20Amicus%20Curiae%20Brief%20in%20Support%20of%20Repondent%20by%20Electronic%20Privacy%20Information%20Center.pdf" target="_blank" rel="noopener">arguments</a> that the CFAA can and should be used to protect sensitive personal information. I agree with their goal, but I am not persuaded that the distinctions are as easy to make as they argued. The crisp distinctions they laid out remind me of the distinctions made in the Electronic Communications Privacy Act of 1986. It too has clear distinctions, such as emails left on a server for 30 days lose privacy protections — a distinction that was justifiable when storage was expensive. Their distinctions are more nuanced. However, I've had several long conversations with my employer's lawyers about exactly how words in the CFAA might be parsed. Congress is the right place for those debates.</p>Thoughts on the Executive Order2021-06-07T00:00:00Zhttps://shostack.org/blog/thoughts-on-the-executive-order/AdamA new article by Steve Bellovin and myself at Lawfare.<span class="single-post-excerpt"><!-- excerpt -->A new article by Steve Bellovin and myself at Lawfare.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/EO-cyber-reviewboard.png" alt="Screenshot of Executive Order on White House website" class="single-post-main-img" />
<p><a href="https://www.lawfareblog.com/finally-cybersecurity-safety-review-board" target="_blank" rel="noopener">Finally! A Cybersecurity Safety Review Board</a> is a new article by Steve Bellovin and myself at Lawfare.</p>
<blockquote>One element of President Biden’s executive order on cybersecurity establishes a board to investigate major incidents involving government computers in somewhat the way that the National Transportation Safety Board investigates aviation disasters. The two of us, among many others, have been advocating for such a board for many years. The creation of the board is a good first step, possibly as much as can be done without legislative action. But we think that additional action is needed and will magnify the value the board offers.</blockquote>Ransomware is Not the Problem2021-06-09T00:00:00Zhttps://shostack.org/blog/ransomware-is-not-the-problem/AdamArbitrarily powerful software -- applications, operating systems -- is a problem, as is preventing it from running on enterprise systems.<span class="single-post-excerpt"><!-- excerpt -->Arbitrarily powerful software -- applications, operating systems -- is a problem, as is preventing it from running on enterprise systems.<!-- /excerpt --></span>
<blockquote>There's an infinite number of studies of ransomware lately, all breathlessly talking about how to fight this dangerous threat. They're all dangerously wrong. Ransomware is not the problem.</blockquote>
<p>I'm being intentionally provocative in my <a href="https://www.darkreading.com/attacks-breaches/ransomware-is-not-the-problem/a/d-id/1341171" target="_blank" rel="noopener">latest</a> Dark Reading Column</p>'Not in my threat model'?2021-06-15T00:00:00Zhttps://shostack.org/blog/not-in-my-tm/AdamYou know what's not in my threat model? A meteor hitting a volcano... And that's ok!<span class="single-post-excerpt"><!-- excerpt -->You know what's not in my threat model? A meteor hitting a volcano... And that's ok!<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/meteor-volcano.jpeg" alt="volcano and meteor" class="single-post-main-img" />
<p>You know what's not in my threat model? A meteor hitting a volcano... And that's ok! Your threat modeling should be focused on the threats that are likely to impact your systems. So unless your system is your evil supervillain volcano lair, a meteor is likely out of scope. And unless you have giant space lasers, there's not a lot you can do about that meteor.</p>
<p>You'll learn this, and more in my threat modeling courses, and speaking of which, we have some upcoming.</p>
<ul>
<li>Applied Threat Modeling, <a href="https://www.blackhat.com/us-21/training/schedule/index.html#applied-threat-modeling-21943" target="_blank" rel="noopener">July 31/Aug 1</a> or <a href="https://www.blackhat.com/us-21/training/schedule/index.html#applied-threat-modeling-219431619628066" target="_blank" rel="noopener">August 2/3</a> at Blackhat. This is our most popular course.</li>
<li>Threat modeling for security champs, <a href="https://courses.shostack.org/courses/threat-modeling-301-for-security-champs" target="_blank" rel="noopener">July 19-24</a>. This course is for those with a strong technical base, looking to improve their ability to help others threat model.</li>
</ul>
<p>You can see our upcoming open courses at <a href="https://courses.shostack.org/">courses.shostack.org/</a>, and you can sign up there for course announcements.</p>
Fast threat modeling videos2021-06-17T00:00:00Zhttps://shostack.org/blog/fast-threat-modeling-videos/AdamI'm exploring the concept of very fast threat modeling videos.<span class="single-post-excerpt"><!-- excerpt -->I'm exploring the concept of very fast threat modeling videos.<!-- /excerpt --></span>
<p>I'm exploring the concept of very fast threat modeling videos, and have <a href="https://youtu.be/Yt0PhyEdZXU" target="_blank" rel="noopener">posted</a> the first one.</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/Yt0PhyEdZXU" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>Feedback welcome!</p>Juneteenth: A New Federal Holiday2021-06-19T00:00:00Zhttps://shostack.org/blog/juneteenth/AdamThoughts on the new federal holiday, Juneteenth<span class="single-post-excerpt"><!-- excerpt -->Thoughts on the new federal holiday, Juneteenth<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/Fourth_US_Infantry_Detail__U_S__Colored_Troops__1864.jpg" alt="black and white photography of members of the Fourth US Infantry detail, 1864" class="single-post-main-img" />
<p>I'm thrilled that Juneteenth will be a Federal holiday. We need more holidays that celebrate freedom, and there's few events that increase freedom as much as emancipating people who were enslaved. That is, freeing them from the threat violence would be used against them, and they would have no recourse. The <strong>United States</strong> also needs more holidays that celebrate the <strong>United</strong> part.</p>
<p>Here at Adam & Friends and its antecedents, we've been celebrating Juneteenth since <a href="https://shostack.org/archive/2006/06/happy-juneteenth/">2006</a>. Of course, like most things here, we do it erratically.</p>
<p class="credit-text">Photocredit: <a href="https://www.blackpast.org/african-american-history/united-states-colored-troops-1863-1865/" target="_blank" rel="noopener">Blackpast</a>.</p>
Why Threat Model?2021-06-23T00:00:00Zhttps://shostack.org/blog/why-threat-model/AdamThe second video in my 60 second series!<p>The second video in my 60 second series!</p>
<iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/YP4mNRXGcks?rel=0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
Applied Threat Modeling at BlackHat 20212021-06-28T00:00:00Zhttps://shostack.org/blog/applied-tm-at-blackhat21/AdamAt Blackhat USA, I'll be teaching Applied Threat Modeling.<p>At Blackhat USA, I'll be teaching Applied Threat Modeling.</p>
<iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/uMaYjxJUE2Q?rel=0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>This hands-on, interactive class will focus on learning to threat model by executing each of the steps. Students will start threat modeling early on the first day and then going deep into each of the four questions: what are we working on, what can go wrong, what are we going to do about it, and did we do a good job? This is capped off with an end-to-end exercise that brings the skills together. <a href="https://www.blackhat.com/us-21/training/schedule/index.html#applied-threat-modeling-21943" target="_blank" rel="noopener">July 31/Aug 1</a> and <a href="https://www.blackhat.com/us-21/training/schedule/index.html#applied-threat-modeling-219431619628066" target="_blank" rel="noopener">Aug 2-3</a></p>Threat Model Thursday: 5G Infrastructure2021-07-01T00:00:00Zhttps://shostack.org/blog/tmt-5g-infra/AdamThe US Government's lead cybersecurity agencies have released an interesting report, and I wanted to use this for a Threat Model Thursday, where we take a respectful look at threat modeling work products to see what we can learn.<span class="single-post-excerpt"><!-- excerpt -->The US Government's lead cybersecurity agencies have released an interesting report, and I wanted to use this for a Threat Model Thursday, where we take a respectful look at threat modeling work products to see what we can learn.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/5G-architecture.png" alt="5G architecture map" class="single-post-main-img" />
<p>The US Government's lead cybersecurity agencies (CISA, NSA, and ODNI) have released an interesting report, <a href="https://media.defense.gov/2021/May/10/2002637751/-1/-1/1/POTENTIAL%20THREAT%20VECTORS%20TO%205G%20INFRASTRUCTURE.PDF" target="_blank" rel="noopener">Potential Threat Vectors To 5G Infrastructure</a>. (<u>Press release</u> [link to https://www.nsa.gov/news-features/press-room/Article/2601078/nsa-odni-and-cisa-release-5g-analysis-paper/ no longer works] ), and I wanted to use this for a Threat Model Thursday, where we take a respectful look at threat modeling work products to see what we can learn.</p>
<p>The first thing I look for is a statement about who did the work and why. I think the report from a Threat Model Working Panel of the Enduring Security Framework of CIPAC, but then there's language like "identified by the ESF <strong>and</strong> 5G TM working panel... (emphasis added)" which implies its separate. Not a big deal, and in light of some of the ungrounded claims that 5G causes cancer, and the propensity of people to jump to death threats, I'm more accepting than normal of the report not having names on it.</p>
<p>The report lists a set of vectors (where the threats apply) and then threat scenarios. They have three main groups of vectors (things which are vulnerable to threats): policy and standards, supply chain and systems architecture. I think vectors are being used to address the question of where things can go wrong, and the vectors do relate to a scoping in an interesting way that's worth grappling with. (Nancy Leveson's work on control functions for development and operations seems related.) The grouping by vectors and scenarios is an interesting overall approach, and makes for a manageable sized report, which is exceptionally useful given the complexity of 5G.</p>
<p>This set of threat vectors, is similar to both "what are we working on" and "what can go wrong" and I think explicitly talking about those leads to useful results, which I'll discuss question by question.</p>
<h3>What are we working on?</h3>
<p>We should know what's in scope. Is it the networks? The radio networks, or something more broadly? Page 2 states "Additionally, 5G networks will use more ICT components than previous generations of wireless networks..." which sounds like the scope of the systems go deeper, and the glossary mentions core networks. Are end user devices in scope? Anything with a 5G chip? The accounting systems? The data gathered by 5G? "Telesurgical devices" are explicitly mentioned, are they in scope? (Why anyone would run a telesurgical device over a long-range radio, rather than switching to more reliable fiber as soon as economically feasible is beyond me. There are probably scenarios where it's helpful to have a device not tethered to fiber, but is the entire device in scope for this work, or is just the 5g modem in scope? Or neither, because end devices are not trusted?)</p>
<p>A diagram showing that scope would help. I'm borrowing one from <a href="https://calhoun.nps.edu/handle/10945/67090" target="_blank" rel="noopener">An Attack Vector Taxonomy for Mobile Telephony Security Vulnerabilities</a> for this post.</p>
<p>The diagram and discussion should be explicit about trust boundaries. What trusts what? For example, what's the trust relationship between User Equipment and other things? Missing boundaries may be a flaw in the design of 5G, rather than a flaw in the analysis. However, that should be called out by the people doing the analysis, rather than the person writing this meta-analysis.</p>
<p>An explicit statement of the scope of the work, and the trust boundaries that are supposed to delineate participants would be a great help. To be fair to the panel, cellular network systems are complex, and are made more complex because the people building them have these, ummm, unique perspectives, with terminology like "non-standalone network" and a "core network" which isn't the internet. Their baroque language and expansive documentation are both barriers. Nevertheless, knowing what's in what we used to call the "trusted compute base" is essential. We'll return to the concept of the TCB.</p>
<p>In fact, there's a lesson <a href="https://dl.acm.org/doi/10.1145/358549.358561" target="_blank" rel="noopener">from</a> Sir Tony Hoare: "there are two ways of constructing a software design. One way is to make it so simply that there are <em>obviously</em> no deficiencies and the other way is to make it so complicated that there are no <em>obvious</em> deficiencies." The 5G folks are clearly in the complexity camp, and that complexity dogs this analysis. Again, I want to be clear that I mean no disrespect to the folks doing the analysis work. Everyone would benefit from more layering and segmentation in cellular network designs, and treating systems like telesurgery or video streaming as services which run on top of the network layers. You know, the way the internet's layered architecture has enabled, video streaming to be added without any changes to the underlying layers. We'll come back to why that is so important when we get to what's being done about the problems.</p>
<h3>What can go wrong?</h3>
<p>The nature of the threats listed is unusual and worth some attention. They're grouped into scenarios, starting on page 5 with "nation state influence on 5G standards." </p>
<p>Let's return to that telesurgical device, mentioned earlier. Again, we have the idea that a surgical device will have 5G integrated into it, rather than modularized, accepted as if that's a reasonable future. The complexity, and associated lack of layering is a bug, not a feature. The device should distrust the network, and encrypt its data in ways that are opaque to the network, and apply trust from one endpoint to another, rather than decrypting it at various points within the system. Unfortunately, the cellular network folks hate that because they are stuck on the idea of smart networks and dumb end devices. (This is why hotspotting is broken on my phone: the cell company gets to send a "carrier settings" file and Apple trusts it, without ever letting me inspect or fix it.) </p>
<h3>What are we going to do about it?</h3>
<p>The report contains a good discussion of the dangers of the "optional controls" in the design, and I want to acknowledge that, and then go further.</p>
<p>Complexity is not only a threat to understanding, but also limits the ability of systems operators or users to deploy additional controls. In fact, implicit in the idea of mandatory and optional controls is that all the threats are known, and appropriate controls can be specified in advance. We can, and should, argue about how quickly threats evolve, and how much variation in operation is helpful. But we also know that there is distrust between the suppliers, operators and regulators, and it seems reasonable to think that we might want to layer additional controls, especially on the "ICT" componentry, the OT componentry, and the system integrity.</p>
<p>It used to be that some folks trusted manufacturers to prescribe the appropriate systems security; others relied on hardening guides. What I read from this is that independent hardening guides are not going to exist for these critical infrastructure components.</p>
<h3>Did We Do a Good Job?</h3>
<p>There's an interesting sentence in the intro "This product is [...] derived from the considerable amount of analysis that already exists on this topic, to include public and private research and analysis." Someone with time on their hands could find out what's new here, and get an understanding of what that private analysis focsed on, somewhat like how analysis of the old Data Encryption Standard (eventually) lead to an understanding of new forms of cryptanalysis that were not previously public.</p>
<p>There's a lot going on under the covers here about the role of China and Chinese companies in developing the standards, building, and in many cases, operating the equipment that makes up 5G. Frankly, there seems to be an element of surprise about what it's like when your technology is made in a country you don't trust – a situation that's been present for non-US companies for decades. The state of the art in systems security is that supply chains are trusted, in the sense that they can betray you. Chip makers and software makers can insert extra functionality in ways that are exceptionally hard to detect, and if detected, hard to distinguish from mistakes.</p>
<p>For many years, the strategy has been to address these via trusted manufacturers, using only cleared staff, but that approach has never been economical. This is why a small TCB is useful; SBOMs and <a href="https://blogs.cisco.com/security/mud-is-officially-approved-by-ietf-as-an-internet-standard-and-cisco-is-launching-mud1-0-to-protect-your-iot-devices" target="_blank" rel="noopener">MUDs</a> will be as well. They are not perfect, but they will allow more focused analysis, easier detection of abnormal behavior, and better audits of what's actively deployed. </p>
<p>To sum up, looking at this report provokes an awful lot of thinking about what a threat model could or should be. (Sorry! Is it a comfort to know I cut half the words in editing?) My conviction that explicit system modeling and scoping are part of threat modeling, rather than an input to it, is strengthened by looking at "what are we working on?"</p>Sketching to Answer 'What are we working on?'2021-07-07T00:00:00Zhttps://shostack.org/blog/sketching-video/AdamThe latest in the World's Shortest Threat Modeling Videos.
<p>The latest in the World's Shortest Threat Modeling Videos.</p>
<iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/oZWy-PEhBT8?rel=0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>Collaboration in Threat Modeling2021-07-13T00:00:00Zhttps://shostack.org/blog/collaboration-video/AdamIt's the latest in the World's Shortest Threat Modeling videos!
<p>It's the latest in the World's Shortest Threat Modeling videos!</p>
<p>Also, I set up <a href="https://bit.ly/adam-yt" target="_blank" rel="noopener">https://bit.ly/adam-yt</a> to make it easy to find my Youtube channel.</p>
<iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/C3QtxbkF7pY?rel=0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
Threat Model Thursday: NIST’s Code Verification Standard2021-07-15T00:00:00Zhttps://shostack.org/blog/tmt-NIST-code-verification-standard/AdamEarlier this week, NIST released a Recommended Minimum Standard for Vendor or Developer Verification of Code. I want to talk about the technical standard overall, the threat modeling component, and the what the standard means now and in the future.<span class="single-post-excerpt"><!-- excerpt -->Earlier this week, NIST released a Recommended Minimum Standard for Vendor or Developer Verification of Code. I want to talk about the technical standard overall, the threat modeling component, and the what the standard means now and in the future.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/NIST-developer-verification-Exec-order.png" alt="screenshot from NIST website referencing Executive Order 14028" class="single-post-main-img" />
<p>Earlier this week, NIST released a <a href="https://www.nist.gov/itl/executive-order-improving-nations-cybersecurity/recommended-minimum-standard-vendor-or-developer" target="_blank" rel="noopener">Recommended Minimum Standard for Vendor or Developer Verification of Code</a>. I want to talk about the technical standard overall, the threat modeling component, and the what the standard means now and in the future. To summarize: new requirements are coming to a project near you, and getting ready now is a good idea.</p>
<h3>The standard</h3>
<p>The standard is a Recommended Minimum Standard for Vendor or Developer Verification of Code. It was produced in response to Executive Order 14028, Improving the Nation's Cybersecurity. It covers 11 techniques in 6 classes:</p>
<ul>
<li>Threat modeling (🎉 🥂)</li>
<li>Automated testing</li>
<li>Static analysis (code scanning, hardcoded secrets)</li>
<li>Dynamic analysis (use the built in protections, black box, structural tests, regressions, fuzzing, web app scanning)</li>
<li>Check included software</li>
<li>Fix bugs</li>
</ul>
<p><strong>This is a really good list</strong>. I want to emphasize that. I really like several of the framings, including especially "check included software" being more than "run a software component analysis tool to check for CVEs." I would have liked to see explicit mention that gcc's -Wall option <a href="https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Warning-Options.html#Warning-Options" target="_blank" rel="noopener">still</a> does not in fact <a href="https://adam.shostack.org/blog/2006/07/gcc-wall-wereallymeanit/" target="_blank" rel="noopener">turn on all warnings</a>. </p>
<p>Nominally, this is a standard about software verification, and they're considering vendors who are not the original developers verifying fitness. (<a href="https://www.nist.gov/itl/executive-order-improving-nations-cybersecurity/recommended-minimum-standards-vendor-or-1" target="_blank" rel="noopener">FAQ</a> #3). </p>
<h3>Threat modeling within the standard</h3>
<p>I am glad to see threat modeling included in the standard. The task NIST was given was to craft a testing standard, and threat modeling is an unusual thing to include there. They do address that, and I want to expand on what they've said:</p>
<blockquote><p>Threat modeling should be done multiple times during development, especially when developing new capabilities, to capture new threats and improve modeling.</p></blockquote>
<p>As we discussed in the <a href="https://www.threatmodelingmanifesto.org/" target="blank" rel="noopener">Threat Modeling Manifesto</a>, there are many ways to get value. Threat modeling can be a great test planning technique, and if that's all you're using it for, you'll find extreme value in ensuring you consider what you're working on as a whole. It's also useful in verifying fitness for purpose as a developer selects and commits to software developed elsewhere. (Ideally, the developer will start to provide such threat models, or consumers will start to share their work. I look forward to either and both.)</p>
<p>However, I do disagree that improving modeling should be a goal in and of itself. Modeling has to be a task in service of a goal, and good enough, thoughtfully considered, is good enough.</p>
<h3>The future</h3>
<p>Currently, this document exists in an odd state. It is titled "Recommended Minimum Standards", but it is not a standard. Question 4 of the FAQ clarifies: NIST sets the standards, other parts of the government set procurement requirements. So if you sell to the Federal government, expect to see these requirements in your procurement questions soon, and that will trickle across the market.</p>
<p>These standards are also of interest to anyone who writes words like "We take industry standard steps to protect your security", say, in a privacy policy. Much like the FTC's <a href="https://www.ftc.gov/tips-advice/business-center/guidance/start-security-guide-business" target="_blank" rel="noopener">Start with Security</a>, if you're ignoring these steps, it may well come back to haunt you. All of these techniques can be implemented easily, at least for a start. How deep you need to go for each is dependent on the unique circumstances of your business. For threat modeling, the Manifesto and my <a href="https://www.youtube.com/c/shostack" target="_blank" rel="noopener">Worlds Fastest Threat Modeling</a> Videos series are both good places to start.</p>
<p>The technical work involved in each of these can be pretty small. However, change is always hard at scale, your developers are busy, and figuring out what tools to use, what your requirements are, how you're going to track those requirements, et cetera, will all take time and energy. If you start now, you'll minimize disruption and have an easier time of it.</p>Zen and the art of not quantifying risk2021-07-27T00:00:00Zhttps://shostack.org/blog/zen-and-the-art-of-not-quantifying-risk/AdamMany people want their threat modeling work to produce risk numbers, and in this post you'll learn why that's a mistake.<span class="single-post-excerpt"><!-- excerpt -->Many people want their threat modeling work to produce risk numbers, and in this post you'll learn why that's a mistake.<!-- /excerpt --></span>
<p>Many people want their <a href="https://shostack.org/">threat modeling work</a> to quantify risk. This is understandable, but it turns out to be counter-productive. That's because the work to address a risk by changing a control or a design is not dominated by the risk, but by the change. If you have a risk that someone will steal data because there's weak authentication on an API that reveals confidential information and is used by thousands of customers, the need is real, and there's a business challenge in fixing it.</p>
<p>Quantifying risk in the wrong way creates avoidable problems. These include getting security "spun up" about about issues that are hard to fix, trouble with auditors when things which are "high risk" aren't fixed, and conflict between security and development, or security and operations.</p>
<p>This isn't to say that we should ignore real problems because they're hard to fix. It's not to say that we should ignore security risk — as in the API example, that's a real problem that should be fixed. But the fix comes after dialog with others who are involved in the system. For this, a "T-shirt sizing" may be enough. We aim to get involved in overall improvement prioritization. I first heard of this approach when hearing about a security person (I think Window Snyder) sat in war room for Windows XP SP2. War room was where decisions were made about various bits that either were or were not going to ship in the service pack (or other release). Having representation there mattered. </p>
<p>There are two side effects of moving from risk management to integration with prioritization. The first, frankly, is more work for security. Someone has to make time for the meetings in which priorities are set. (That you/y'all have something better to do may say a lot, and it's not pleasant.) The second side effect is that you may see more lows fixed. There are issues which are easy to fix "while the hood is open." Sometimes, in the process of fixing one issue, you can cluster a set of fixes together.</p>
<p>Lastly, there's a variant on the usual small/medium/large, and that is possible, plausible, probable. (Along with "pagers-going-off" and "post-mortemed".)</p>
The COVID testbed and AI2021-08-04T00:00:00Zhttps://shostack.org/blog/ai-and-covid/AdamThe pandemic gives us a chance to evaluate AI tools...you'll be shocked to discover how they did.<span class="single-post-excerpt"><!-- excerpt -->The pandemic gives us a chance to evaluate AI tools...you'll be shocked to discover how they did.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/AI-prove-usefulness.png" alt="quote from article cited in the post" class="single-post-main-img" />
<p>There's a really interesting article in MIT Tech Review, <a href="https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/" target="_blank" rel="noopener">Hundreds of AI tools have been built to catch covid. None of them helped</a>.</p>
<p>Oops, I think I gave away the ending. But there's a lot of fascinating details:</p>
<ul>
<li> Many unwittingly used a data set that contained chest scans of children who did not have covid as their examples of what non-covid cases looked like. But as a result, the AIs learned to identify kids, not covid.</li>
<li> Because patients scanned while lying down were more likely to be seriously ill, the AI learned wrongly to predict serious covid risk from a person’s position.</li>
<li> AIs were found to be picking up on the text font that certain hospitals used to label the scans.</li>
<li> ...</li>
</ul>
<p>The most important internal problem is that no one seemed to have asked "what could go wrong," or "are there things in the data other than what we care about?" Of course no one asks that, because there always are, and the nifty thing about machine learning is that it can sometimes overcome such problems.</p>
<p>That's exacerbaged by the lack of inter-disciplinarity on teams. Some teams were strong on ML, some were strong on medicine. Developing a good working relationship, especially across inter-disciplinary boundaries, takes time and energy.</p>
<p>Lastly, the most important problem is not a lack of forethought, it's a lack of independent analysis. Models are being developed for a variety of reasons, none of which lead to people wanting to say "let's have someone else look at this." That's expensive. It's slow. It may result in critiques of your work. It may violate corporate desire to keep secrets. But as models get used to make more and more decisions, we're going to need to sort out how to do independent evaluation at scale.</p>
<p>If we don't, the natural tendency of systems will be to privatize gain, and externalize the costs. That's not a critique of anyone's motives, just a natural result of incentive distribution.</p>
25 Years in AppSec: Looking Back2021-08-09T00:00:00Zhttps://shostack.org/blog/25-years-in-appsec-looking-back/AdamTime flies and things change... A look back on the growth of this industry.<span class="single-post-excerpt"><!-- excerpt -->Time flies and things change... A look back on the growth of this industry.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/code-review-photo-bylagos-techie-unsplash.jpg" alt="multiple individuals using laptops on a shared desk" class="single-post-main-img" />
<p>Twenty-five years ago I published a set of <a href="https://shostack.org/files/essays/review/">code review guidelines</a> that I had crafted while working for a bank. I released them (thanks, SteveMac!) to get feedback and advice, because back then, there was exceptionally little in terms of practical advice on what we now call AppSec.</p>
<p>Looking back at what's there: it's explicitly a review document for a firewall group, taking code that's ‘thrown over a wall’ to be run and operated by that group. The document includes a mix of design advice, coding requirements, and operational needs, along with some admin bits like setting the rule that the least positive review would be the one we record.</p>
<p>There's some goodness in there: avoiding risky system calls, fuzzing, using lint and compiler warnings. Static analysis is just lint and compiler based — the first tools like RATS were not yet available. I had built or was starting to build a tool too embarrassing to release — it was a large shell script that used ldd and grep to find calls to dangerous functions. In hindsight, it was a small step forward. Competition in commercial tooling with companies like Coverity, Ounce Labs or Fortify was a good decade away, and memory safety in usable languages was not even a hint.</p>
<p>There was no concept that what we were doing was modeling threats, no hint towards standardizing how we got to an understanding of the code. The idea of paying a bounty on bugs was not unheard of (Netscape had a bounty program), but the idea that a bank would do so... I don't think it ever came up, even over beer.</p>
<p>Also the laptops were... clunkier.</p>
<p>If you've been around for a while, what else is brand new since you joined the field?</p>
<p class="credit-text">Photo: <a href="https://unsplash.com/@heylagostechie" target="_blank" rel="noopener">Lagos Techie</a> on Unsplash.</p>
Training - October2021-08-16T00:00:00Zhttps://shostack.org/blog/training-october/AdamAre you tired of escalations and fights after pen tests find crucial security issues at the last minute? I have upcoming threat modeling training that can help!<span class="single-post-excerpt"><!-- excerpt -->Are you tired of escalations and fights after pen tests find crucial security issues at the last minute? I have upcoming threat modeling training that can help!<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/exhaustion.jpg" alt="An exhausted young man" class="single-post-main-img" />
<p>Are you tired of escalations and fights after pen tests find crucial security issues at the last minute?</p>
<p>Threat modeling can help. Bringing systematic, structured and comprehensive security into your development can avoid these unpleasant surprises. I aim to provide the very best training available.</p>
<p>My next open course will be Oct 11-16 (2 hours class, 2 hours homework daily.) Details and signup at the <a href="https://shostack.org/training/courses/222">Threat Modeling Intensive</a> page.</p>Threat Modeling Through the JoHari Window2021-08-20T00:00:00Zhttps://shostack.org/blog/threat-modeling-through-the-johari-window/AdamLet me call your attention to a new post by Irene Michlin, “Where Threat Modelling fits in the matrix?” (with a few comments on why it matters).<span class="single-post-excerpt"><!-- excerpt -->Let me call your attention to a new post by Irene Michlin, “Where Threat Modelling fits in the matrix?” (with a few comments on why it matters).<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/joharitm.jpg" alt="The grid of knowns and unknowns, with a threat modeling overlay" class="single-post-main-img" />
<p>I want to call your attention to a new post by Irene Michlin <a href="https://www.linkedin.com/pulse/where-threat-modelling-fits-matrix-irene-michlin/" target="_blank" rel="noopener">Where Threat Modelling fits in the matrix?</a>. I like two things about it: first, it uses a well known model, the <a href="https://en.wikipedia.org/wiki/Johari_window" target="_blank" rel="noopener">Johari window</a> to shine light on a problem. (Some models are useful!)</p>
<p>Second, her analysis shines light on an important problem, which is how do we express how and when tooling helps in threat modeling? This is a problem that has been nagging me since 2007 or so when we built the SDL TM Tool. What sort of threats will tooling help with? When we shift from random set from brainstorming, possibly including really interesting threats, to a more predictable set, how do we characterize what was left out?</p>
<p>Irene's post is a useful step forward and well worth your time.</p>This is the blog you're looking for2021-08-24T00:00:00Zhttps://shostack.org/blog/this-is-the-blog-youre-looking-for/AdamMaking it easier to check feed updates<span class="single-post-excerpt"><!-- excerpt -->Making it easier to check feed updates<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/the-blog-youre-looking-for.jpg" alt="A woman standing on a sand dune" class="single-post-main-img" />
<p>As I migrate adam.shostack.org and various predecesors to this cool new site (hey, take a minute, look around!), I wanted to add a post that helps you see that you have the latest feed as of August 2021 in your feed reader. This is that post.</p>
<p class="credit-text">Photo by <a href="https://unsplash.com/@katekerdi?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank" rel="noopener">Katerina Kerdi</a> on <a href="https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank" rel="noopener">Unsplash</a>.</p>
Training discounts!2021-09-02T00:00:00Zhttps://shostack.org/blog/training-discount/AdamAre you tired of escalations and fights after pen tests find crucial security issues at the last minute? I have a discount code for upcoming threat modeling training that can help!<span class="single-post-excerpt"><!-- excerpt -->Are you tired of escalations and fights after pen tests find crucial security issues at the last minute? I have a discount code for upcoming threat modeling training that can help!<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/exhaustion.jpg" alt="An exhausted young man" class="single-post-main-img" />
<p>Are you tired of escalations and fights after pen tests find crucial security issues at the last minute?</p>
<p>Threat modeling can help. Bringing systematic, structured and comprehensive security into your development can avoid these unpleasant surprises. I aim to provide the very best training available.</p>
<p> You know what else can help? Letting people know about discounts! If you <a href="https://courses.shostack.org/courses/threat-modeling-intensive-222-October">sign up</a> by end of day Friday, code earlybird15 will get you 15% off!</p>
<p>My next open course will be Oct 11-16 (2 hours class, 2 hours homework daily.) Details and signup at the <a href="https://shostack.org/training/courses/222">Threat Modeling Intensive</a> page.</p>
What can go wrong?2021-09-10T00:00:00Zhttps://shostack.org/blog/what-can-go-wrong/AdamThe World's Shortest Threat Modeling Video series continues with .. what can go wrong?<p>The World's Shortest Threat Modeling Video series continues with .. what can go wrong?</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/Ckf1joYGH2o" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
A Vulnerable System2021-09-13T00:00:00Zhttps://shostack.org/blog/a-vulnerable-system/AdamAndrew Stewart has an excellent new book, A Vulnerable System.<span class="single-post-excerpt"><!-- excerpt -->Andrew Stewart has an excellent new book, A Vulnerable System.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/a-vulnerable-system.png" alt="A close up of the cover of a vulnerable system" class="single-post-main-img" />
<p><cite><a href="https://amzn.to/38cZF6K" target="_blank" rel="noopener">A Vulnerable System</a></cite> is a new book from Andrew J. Stewart. Stewart traces the history of computer security from before the very start of computing in a rigorous and approachable way. Doing this helps us understand 'how we got to now' and why some of the problems we have persist. The book is worth your time if you work in security.</p>
<p>The book begins with the very first computers, and moves quickly through the emergence of timesharing to the question "can we rent time on this machine to the Air Force and others?" From there, the book takes the reader on a tour of the early papers and the folks who wrote them. (He misses the funny story that the "* property" was supposed to be renamed before publication, but very little else.) He covers the history of the rainbow books and their failures, and then gets to the rise of vulnerability discovery as a major driver of the field, the growth of the security products market, data breaches, and nation state hacking.</p>
<p>He also discusses how the field has reached a state of epistemic closure, where the debates we have are narrow, and (separately) how the field has achieved this state before we have deep understanding of what we're doing. I was at first taken aback when he pointed out that "The CIA triad, 'no security through obscurity,' and 'defense in depth' are not scientific or mathematical laws derived from first principles, nor are they inerrant. They are simply mantras, and so can be recast." </p>
<p>But he's right. These are more rules of thumb than they are statements like "Entropy always decreases." (Thermodynamics and information security may be opposites, and my cryptographic background leads me to think we never have enough entropy. That situation has changed somewhat with the rise of in-chip sources of unpredictability, but how do we measure how much it has changed?) Moving from the specifics to the general point, having studied the history, he can say that the field is young. He can also point out that while we have mantras that can help us day to day, as a field or aspiring profession, need to sometimes look more broadly at what we're doing.</p>
<p>Some readers may recognize Andrew's name because I was his co-author for <cite><a href="https://shostack.org/books/new-school-infosec-book">The New School of Information Security</a></cite>. I get too much credit — he made that book happen. We both learned a tremendous amount from writing it, and writing together, and I'm thrilled that he has a book with only his name on it. </p>NIST Brings Threat Modeling into the Spotlight2021-09-23T00:00:00Zhttps://shostack.org/blog/nist-brings-threat-modeling-into-the-spotlight/AdamNew at Darkreading, a post on NIST and threat modeling<span class="single-post-excerpt"><!-- excerpt -->New at Darkreading, a post on NIST and threat modeling<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/nist-darkreading.png" alt="A screencapture of the headline" class="single-post-main-img" />
<p>In July, NIST released a new minimum standard for vendor or developer verification of code, and I commented <a href="https://shostack.org/blog/tmt-nist-code-verification-standard/">here</a>. I have more to say about what it means in a new Dark Reading Article, <a href="https://www.darkreading.com/threat-intelligence/nist-brings-threat-modeling-into-the-spotlight" target="_blank" rel="noopener">NIST Brings Threat Modeling into the Spotlight</a>.</p>
Lessons Learned: Playing Elevation of Privilege2021-09-28T00:00:00Zhttps://shostack.org/blog/lessons-learned-elevation-of-privilege/AdamWe learn while we're having fun. Some takeaways from a recent play to learn session.<span class="single-post-excerpt"><!-- excerpt -->We learn while we're having fun. Some takeaways from a recent play to learn session.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/takeaways.png" alt="Amongst the takeaways image" class="single-post-main-img" />
<p>After the previous <a href="https://agilestationery.com/pages/play-elevation-of-privilege-with-adam-shostack" target="_blank" rel="noopener">Elevation of Privilege Play to Learn</a> session, Nigel Hanson shared some of his thoughts on Slack. I thought they were worth sharing more broadly.</p>
<blockquote>"Absolutely loved today's game as I can guarantee that some of the misunderstandings that we all had will be the same issues that my teams will strike around the globe."</blockquote>
<p>After I asked for more details, he continued:</p>
<p>"What I was meaning by misunderstandings (or what I interpreted as misunderstandings) was what the underlying focus of the game actually is. I was initially caught up (and suspect others were too) about the details of the game (scoring etc), but then had a few “Aha” moments. I had 4 key takeaways:"</p>
<ol>
<li>The cards are simply prompts/ideas of things that could go wrong. They help ensure that we think broadly - even for a system we may have never seen before.</li>
<li>The usage of physical cards help with keeping people’s attention when playing remotely. Suspect if I just had a list of cards on screen, I’d easily find myself distracted when it wasn’t my turn. Holding the physical cards I found myself really thinking about which one to play next.</li>
<li>The game format will really help with teams where I have 1-2 strong members (who probably claim they know all the things that could go wrong so don’t even need to play the game) as it will somewhat focus them on the threats on their cards.</li>
<li>Flipside of #3 is that it will give the less vocal team members an opportunity to voice their ideas and have them equally have the opportunity to put them forward.</li>
<li><b>All models</b> are <b>wrong</b>, but some are useful</li>
<li>Don’t feel overwhelmed by the size of the deck of cards. It’s perfectly ok to play a couple of rounds, get some findings, work on those, rinse+repeat (get some wins, feel positive about the experience). Look forward to the next round!</li>
</ol>
What are we going to do: CO2 edition2021-10-05T00:00:00Zhttps://shostack.org/blog/what-are-we-going-to-do-co2-edition/AdamWhat happened when Microsoft tried to buy climate abatements<span class="single-post-excerpt"><!-- excerpt -->What happened when Microsoft tried to buy climate abatements<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/carbon-sequestration.png" alt="Amongst the takeaways image" class="single-post-main-img" />
<p>There's a fascinating article in Nature, <cite><a href="https://www.nature.com/articles/d41586-021-02606-3" target="_blank" rel="noopener">Microsoft’s million-tonne CO2-removal purchase — lessons for net zero</a></cite>. It's an analysis of the 154 bids the company received to remove carbon emissions. And the short form is almost none of them met Microsoft's criteria. That's interesting in and of itself as we think about climate change. They point out that pests, fire, and other threats limit the estimated lifetime of forest, soil, and ocean based storage to roughly 100 years (Part of me wishes the figure was log scale, but perhaps that short term thinking, and it just doesn't matter that much.) </p>
<p>The paper is fascinating in and of itself, and I want to use it as a chance to talk about the question 'what are we going to do about it?' as we ask it in threat modeling.</p>
<p>What the authors are doing is to say that many proposed mitigations don't meet their criteria of long term storage, confidence in the technology and others. Expressing criteria like this help us align thinking about mitigations or risk management techiniqes across an organization much more than a CVSS score. They allow decision makers to express their preferences and weights. Nominally, the CVSS component scores might allow you to do this, and it would not surprise me if Microsoft had internal weights expressed, but the focus is not on those weights, but the higher level issues.</p>
<p>Some of the criteria we might consider in a complex mitigation of a cybersecurity issue include cost to implement, ease of bypass, usability or other side effects, "unusual" circumstances like dropped my phone in the ocean or being abused by an intimate partner. I don't think my list is complete, but I'm curious what other people's lists are.</p>
Trainings at Global Appsec 20212021-10-20T00:00:00Zhttps://shostack.org/blog/trainings-global-appsec-2021/AdamTremendous training opportunities in threat modeling and other topics at Appsec Global 2021<span class="single-post-excerpt"><!-- excerpt -->Tremendous training opportunities in threat modeling and other topics at Appsec Global 2021<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/training-room.jpg" alt="People being trained in a conference room" class="single-post-main-img" />
<p>The trainings for OWASP Appsec Global have just been <a href="https://usa.globalappsec.org/trainings/" target="_blank" rel="noopener">announced!</a> There's a great set of trainings, and I want to highlight how much threat modeling training is available from a variety of trainers:</p>
<ul>
<li>Advanced Whiteboard Hacking (2 day, Steven Wierckx)</li>
<li>Introduction to Threat Modeling (1 day, me)</li>
<li>Threat Modeling: A Master Class (1 day, me)</li>
<li>Threat Modeling: From None to Done (1 day, John DiLeo)</li>
</ul>
<p>There's also other training courses available:</p>
<ul>
<li>AppSec - Secure Coding and DevSecOps (ASCD) (3-day training course, Magno Logan)</li>
<li>Browser Fingerprinting - Past, Present, and Future (1-day, Nick Nikiforakis)</li>
<li>Hacking Modern Web apps - Master the Future of Attack Vectors (2-day, Anirudh Anand)</li>
<li>Introduction to Web Application Hacking & Bug Bounty (3-day, Ben Sadeghipour)</li>
<li>Kubernetes Security Masterclass (2-day, Nithin Jois)</li>
<li>Snakes Crawling On The Web: Finding Security Vulnerabilities With Python (1 day, Ruben Ventura)</li>
</ul>
<p class="credit-text">Photo by <a href="https://unsplash.com/photos/Oalh2MojUuk?utm_source=unsplash&utm_medium=referral&utm_content=creditShareLink">Leon</a></p>
Breaking into threat modeling2021-11-01T00:00:00Zhttps://shostack.org/blog/breaking-into-threat-modeling/AdamA video interview by OWASP leader Vandana Verma, on the topic of breaking into threat modeling.
<p>A video interview by OWASP leader Vandana Verma, on the topic of breaking into threat modeling.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/HIr1k9Hbm0w" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
25 Years of Appsec - Appsec Global2021-11-11T00:00:00Zhttps://shostack.org/blog/25-years-of-appsec-owasp/AdamAdam is delivering the opening keynote for OWASP Global Appsec 2021 with a 25 year restrospective on the history of appsec and a look into its future.<span class="single-post-excerpt"><!-- excerpt -->Adam is delivering the opening keynote for OWASP Global Appsec 2021 with a 25 year restrospective on the history of appsec and a look into its future.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/global-appsec.png" alt="screenshot from video: breaking into threat modeling" class="single-post-main-img" />
<p>Adam is delivering the opening keynote for <a href="https://usa.globalappsec.org/schedule/" target="_blank" rel="noopener">Global AppSec US 2021</a>, 25 Years in AppSec: Looking Back, Looking Forward.</p>
<blockquote>25 years ago, Adam was working at a bank doing source code security reviews, and got permission to release internal [source code review] guidelines. 15 years ago he joined the Microsoft SDL team ... hear some highlights and some lowlights from the journey, and more importantly, what can we expect over the next 25 years? Where is appsec going? What new frontiers will we get to secure? What problems will still be with us?</blockquote>
<p>The slides are <a href="https://shostack.org/files/conference/25-years-of-appsec-shostack-owasp-global-2021-final.pptx">here</a>. The most useful links are below:</p>
<ul>
<li><a href="https://shostack.org/files/essays/review.html">My 1996 source code review guidelines</a></li>
<li>Loren Kohnfelder's new book: <a href="https://designingsecuresoftware.com/" target="_blank" rel="noopener">Designing Secure Software</a></li>
<li><a href="https://www.threatmodelingmanifesto.org/" target="_blank" rel="noopener">The Threat Modeling Manifesto</a></li>
<li><a href="https://doi.org/10.1145/3433949" target="_blank" rel="noopener"> The (Im)possibility of fairness: different value systems require different mechanisms for fair decision making.</a></li>
<li><a href="https://shostack.org/blog/tmt-machine-learning/">Threat modeling in machine learning</a> (my perspective, including links to BIML & Microsoft resources</li>
<li><a href="https://doi.org/10.1145/3134681" target="_blank" rel="noopener">Digital Technologies and Intimate Partner Violence: A Qualitative Analysis with Multiple Stakeholders</a></li>
<li><a href="https://www.emerald.com/insight/content/doi/10.1108/978-1-83982-848-520211049/full/html" target="_blank" rel="noopener">Threat Modeling Intimate Partner Violence: Tech Abuse as a Cybersecurity Challenge in the Internet of Things</a></li>
<li><a href="https://www.nytimes.com/2018/06/23/technology/smart-home-devices-domestic-abuse.html" target="_blank" rel="noopener">https://www.nytimes.com/2018/06/23/technology/smart-home-devices-domestic-abuse.html</a></li>
<li><a href="https://github.com/adamshostack/conflict/" target="_blank" rel="noopener">Github.com/adamshostack/conflict/</a> — my work to bring the 4 Question Framework to internet conflict and user-generated content.</li>
<li><a href="https://www.belfercenter.org/learning-cyber-incidents" target="_blank" rel="noopener">Learning from Cyber Incidents</a> at Harvard's Belfer Center. The <a href="https://www.belfercenter.org/publication/learning-cyber-incidents-adapting-aviation-safety-models-cybersecurity" target="_blank" rel="noopener">report</a> is now live, but not yet linked.</li>
</ul>
Learning Lessons from Aviation2021-11-15T00:00:00Zhttps://shostack.org/blog/cyber-lessons-learned/AdamThe definition of insanity is doing the same thing over and over and expecting different results. We can do better, and a major new report explains how.<span class="single-post-excerpt"><!-- excerpt -->The definition of insanity is doing the same thing over and over and expecting different results. We can do better, and a major new report explains how.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/learning-from-incidents-banner.png" alt="Cover of a workshop report: learning from cyber incidents" class="single-post-main-img" />
<p>For literally 30 years, people have been talking about the idea of a “cyber NTSB.” Unfortunately, most people have stopped at the metaphor, and so has the idea. And so we see the same problems impact system after system, organization after organization, year after year. And rather than learning from these incidents so we can do better, we blame the victims. We insist that they should read through the dozens of standards, apply risk management techniques, select their defenses and somehow defend themselves from persistent attackers. We know how that's working out.</p>
<p><a href="https://www.belfercenter.org/person/robert-knake" target="_blank" rel="noopener">Rob Knake</a> and I have been working for the last several years on a <a href="https://www.belfercenter.org/learning-cyber-incidents" target="_blank" rel="noopener">project</a> to adopt learning models from aviation to cyber. We had planned to convene a workshop in early 2020, and the pandemic forced us to delay and then move online, but that allowed us to bring together over 70 experts, including leaders from the NTSB and ASRS over an extended period earlier this year, and we learned so much that it's only now that the <a href="https://www.belfercenter.org/publication/learning-cyber-incidents-adapting-aviation-safety-models-cybersecurity" target="_blank" rel="noopener">report</a> is ready for release. I also want to fully acknowledge <a href="https://twitter.com/tarah" target="_blank" rel="noopener">Tarah Wheeler</a>, our third author on the report.</p>
<p>The report is almost certainly the fullest investigation into what an NTSB for cyber might be, and as we were preparing the report, the May Executive Order on Cyber security was released, including a Cyber Safety Review Board. (Steve Bellovin and I shared our thoughts on that in <a href="https://www.lawfareblog.com/finally-cybersecurity-safety-review-board" target="_blank" rel="noopener">Lawfare</a> in June.) Because we had this amazing confluence of a convening of experts and the Order, we went beyond our mandate, and mined the discussions and our notes for a section of recommendations for the nascent board. I was excited to see that reach CISA over the weekend:</p>
<img src="https://shostack.org/blog/img/2021/cisa-linkedin.png" />
<p>Since the report was for the National Science Foundation, we also cataloged questions that are worthy of further scientific study, and collected over 50 research questions that we hope will be pursued.</p>
<p>Our major findings are excerpted below — each is further explained in the <a href="https://www.belfercenter.org/publication/learning-cyber-incidents-adapting-aviation-safety-models-cybersecurity" target="_blank" rel="noopener">executive summary</a>:</p>
<ul>
<li>Third party and in-house investigations are no substitute for objective, independent investigations.</li>
<li>Companies are unlikely to fully cooperate under a voluntary regime.</li>
<li>Product, tool, and control failure must be identified in an objective manner.</li>
<li>Findings may be sensitive but should be disseminated as widely as possible.</li>
<li>Fact finding should be kept separate from fault finding.</li>
<li>“Near Miss” reporting can complement incident investigations.</li>
</ul>
<p>We're grateful to Harvard's Belfer Center, the National Science Foundation, the Hewlett Foundation, and Northeastern's Global Resillience Institute for support, and to all of the workshop participants.</p>
<p>Lastly, I want to close this post with the closing words of the report:</p>
<blockquote>Secret knowledge is mysticism, not science or engineering. We heard a great deal in our workshop about how various groups have access to useful data which drives decisions that they believe are good. Yet the decisions they come to are different, which has a cost both to those trying to comply with the advice, and in the credibility of the advice. There are certainly challenges: informing opponents, ranging from threat actors to lawyers, of what you know can be worrisome. Subjecting one’s reasoning to criticism is scary. It is also a constant in fields with high rates of engineering success, ranging from bridge building to medical device manufacture. The consequences for leaving the field of cybersecurity in a prolonged adolescence are now too great; it’s time for us to grow up. </blockquote>Medical Device Threat Modeling Webinar2021-11-21T00:00:00Zhttps://shostack.org/blog/medical-device-threat-modeling-webinar/AdamAn important webinar by MDIC about the medical device threat modeling playbook is now available!<span class="single-post-excerpt"><!-- excerpt -->An important webinar by MDIC about the medical device threat modeling playbook is now available!<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/medical-device-threat-modeling-webinar.png" alt="screenshot from video: threat modeling webinar" class="single-post-main-img" />
<p>
In October, the <a href="https://mdic.org/event/playbook-for-threat-modeling-medical-devices-webinar/" target="_blank" rel="noopener">MDIC</a> held a webinar to update the medical device community on the upcoming medical device threat modeling playbook. If you missed it, the video is available on their page or on youtube:
</p>
<p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/B4JRc9DVfko" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
</p>
FDA Threat Modeling Playbook Now Available2021-11-30T00:00:00Zhttps://shostack.org/blog/fda-threat-modeling-playbook-available/AdamHow to threat model medical devices? The FDA has released a playbook!<span class="single-post-excerpt"><!-- excerpt -->How to threat model medical devices? The FDA has released a playbook!<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/TM-playbook.jpg" alt="A woman studying a medical image" class="single-post-main-img" />
<p>I am really excited that this is now available to the public. MITRE and MDIC have been working on a threat modeling playbook for medical device security: <em>The “Playbook for Threat Modeling Medical Devices” was developed to increase knowledge of threat modeling throughout the medical device ecosystem in order to further strengthen the cybersecurity and safety of medical devices.</em> I've been helping and I know it will drive improvements to the ecosystem.</p>
<p><a href="https://www.mitre.org/publications/technical-papers/playbook-threat-modeling-medical-devices" target="_blank" rel="noopener">Get a copy now</a>!</p>
Gävle Goat, 2021 edition2021-12-11T00:00:00Zhttps://shostack.org/blog/gavle-goat-2021/AdamThere are some things the pandemic can't stop. Gävle, Sweden putting up a straw goat is one of them.<span class="single-post-excerpt"><!-- excerpt -->There are some things the pandemic can't stop. Gävle, Sweden putting up a straw goat is one of them.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/gavle-2021.png" alt="An enourmous straw goat" class="single-post-main-img" />
<p>If ever we needed a metaphor for human perseverance and courage, I'm pretty sure we would not pick the town of Gävle, Sweden putting up a large straw goat and daring arsonists to burn it down. And yet, the story speaks to something in all of us. Or maybe it's just an entertaining holiday tradition in a time when outdoor activity is more important than ever.</p>
<p>Either way, the 2021 goat is now up, and you can read all about it on the <a href="https://www.visitgavle.se/en/gavlebocken" target="_blank" rel="noopener">towns website</a>, or this impressive history thread at <a href="https://boards.theforce.net/threads/g%C3%A4vle-goat-watch-2021.50056856/" target="_blank" rel="noopener">Theforce.net forums</a>.</p>
<p>Update, Dec 17: <a href="https://twitter.com/Gavlebocken/status/1471676229740814343?s=20" target="_blank" rel="noopener">And... another one's gone...</a>
</p><p class="credit-text">Photo: Daniel Bernstål. Previously: <a href="https://www.google.com/search?q=site%3Ahttps%3A%2F%2Fshostack.org&q=gavle">Gavle Goat at Shostack & Friends</a></p>
Fast, Cheap + Good Whitepaper2021-12-15T00:00:00Zhttps://shostack.org/blog/fast-cheap-good/AdamThreat modeling doesn't need to be a slow, heavyweight activity!<span class="single-post-excerpt"><!-- excerpt -->Threat modeling doesn't need to be a slow, heavyweight activity!<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/Fast-Cheap-Good.png" alt="A whitepaper cover page, with the title fast, cheap and good: an unusual tradeoff available in threat modeling" class="single-post-main-img" />
<p>Threat modeling work can be very rewarding. There is a common pattern where a lightweight proof of concept run by security experts leads to the creation of a heavier process. This heavier process is designed to help developers, operations and others with less security expertise. These approaches are often too heavy for low-risk projects, too big for agile projects, and they don’t consistently produce results worthy of the invested energy.</p>
<p>This creates a situation in which fast and cheap ways to figure out ‘What can go wrong?’ and ‘Do we need to dig deeper?’ are better than heavyweight approaches. This paper presents a set of approaches as simple as asking, “What can go wrong?” It also provides a framework that allows us to consider direct return on effort as well as other common goals for security assurance such as consistency and assurance.</p>
<p>This paper:</p>
<ul>
<li>Collects a set of fast, cheap, and good ways to begin threat modeling</li>
<li>Measures them on several metrics</li>
<li>Provides you with a decision framework to illuminate when to use more in-depth approaches</li>
</ul>
<p>Two podcasts have released on the paper:</p>
<ul>
<li><a href="https://www.youtube.com/watch?v=htX43oIKP0c" target="_blank" rel="noopener">Application Security Podcast</a> is a bit longer.</li>
<li><a href="https://www.buzzsprout.com/1660360/9679737" target="_blank" rel="noopener">The Well Aware Security Show</a></li>
</ul>
<p><a href="https://shostack.org/files/papers/Fast-Cheap-and-Good.pdf" target="_blank" rel="noopener">Fast, Cheap and Good: An unusual tradeoff available in threat modeling</a> is now available to everyone! If you want early access to papers like this, sign up for <a href="https://shostack.org/contact">Adam Shostack's New Thing</a>.</p>
Missed it by that much!2021-12-16T00:00:00Zhttps://shostack.org/blog/missed-it-by-that-much/AdamMissed it by that much!<span class="single-post-excerpt"><!-- excerpt -->Missed it by that much!<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/pantone-2022.png" alt="Pantone's color of the year, 2022" class="single-post-main-img" />
<p>
I knew our re-branding was forward looking, and I appreciate Pantone's acknowledgement. 😂
</p>
The Allegory of Rocks and Sand2021-12-29T00:00:00Zhttps://shostack.org/blog/allegory-rocks-sand/AdamAs the year closes out, I'm thinking a lot about the allegory of the rocks and the sand. You should, too.<span class="single-post-excerpt"><!-- excerpt -->As the year closes out, I'm thinking a lot about the allegory of the rocks and the sand. You should, too.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2021/jar-rocks.png" alt="Jars with sand and rocks in them" class="single-post-main-img" />
<p>2021 has been a tough year. Millions have lost friends or loved ones to disease. Many have continued to shelter in place, amazed at how society at large seems to be reacting.</p>
<p>In that context, I know I've been very lucky. I've been healthy, able to pivot my business to deliver training that I think is even better than what I did in person, and we've had more requests than we could take on. I'm grateful to those customers, to the team that's helped me deliver not only great training, but a lovely new website and new self-paced training options.</p>
<p>And as the year ends, I realized how much of my year has been interrupt-driven. I also thought about projects that had been on hold, and why. There have been some big projects — one I'll talk about soon that just wrapped up a week ago. But others have been on hold, and I've been working roughly ten hours a day for a year, without advancing another project that I'm really, really excited about.</p>
<p>And this brings me to the allegory of the rocks and the sand. It's pretty simple. You have a jar. You fill it with sand, and then try to add some rocks. The rocks don't fit. You pour out the sand, add the rocks, and you can fill in sand around the rocks. Are you ready for the big reveal? The rocks are the important things in life. Family. Friends. Hobbies. The projects that matter to you. The sand is all the little things. A request for 30 minutes of my time to help review this. Yet another email inquiring if someone plans to pay for an event I agreed to do on something like the schedule on which they agreed to pay. Doomscrolling through sea chanties about password changes. It adds up to a full jar.</p>
<p>I've decided to focus on the rocks for the next year. There's a few parts to that. One is being mindful of what's in what category. Another is each week, I'm going to write down what are the big projects and what I plan to do for them that week. (I also plan to read the Getting Things Done book, which I hear talks about this very technique.)</p>
<p>What are your rocks? Are they getting the attention they deserve?</p>
25 Years in AppSec: Looking Back, Looking Forward2022-01-10T00:00:00Zhttps://shostack.org/blog/25-years-appsec-keynote/AdamMy opening keynote from Appsec Global 2021<span class="single-post-excerpt"><!-- excerpt -->My opening keynote from Appsec Global 2021<!-- /excerpt --></span>
<p>I was honored to give the opening keynote at OWASP AppSec Global 2021 - 25 years in appsec: looking back, looking forward. OWASP made it available to members first, and it's now available to everyone. I hope you enjoy it!</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/pB1EqwwXoNk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>Letterlocking2022-01-12T00:00:00Zhttps://shostack.org/blog/letterlocking/AdamLetterlocking is a lost art of protecting letters from surriptitious readers.<span class="single-post-excerpt"><!-- excerpt -->Letterlocking is a lost art of protecting letters from surriptitious readers.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/letterlocking-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/letterlocking-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/letterlocking-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/letterlocking-800w.jpeg" width="800" height="333" alt="Schematic of a letter lock" decoding="async" />
</picture>
<p><a href="https://www.bl.uk/eblj/2021articles/pdf/ebljarticle112021.pdf" target="_blank" rel="noopener"><cite>The Spiral-Locked Letters of Elizabeth I and Mary, Queen of Scots</cite></a> is a new article in the Electronic British Library Journal, documenting and explaining <i>letterlocking</i>, "the process by which a flat writing substrate is turned into its own envelope – is a system of document security used to send letters for centuries before the invention of the mass-produced gummed envelope in the early nineteenth century." These techniques clearly show an evolution of attack and defense over many, many years.</p>
<p>There are some fascinating elements to the design, which incorporates both manipulating the paper and a wax seal covering it. The authors are concerned with elements of preservation and history. Their distinction of high/medium/low security are relative to what's available within letterlocking techniques. Having spent time with tamper evidence enthusiasts, I think I can see how to disassemble and re-assemble a lock, but that's not the point.</p>
<p>The available technology to protect letters, especially between mutually distrustful parties, didn't include cryptography, because what we now call cryptographic and stenographic techniques were closely held secrets.</p>
<p>For more, <a href="http://letterlocking.org/about" target="_blank" rel="noopener">Letterlocking.org</a>. The <a href="https://documents.pub/document/cia-flaps-and-seals-manual.html">CIA Flaps and Seals</a> manual doesn't seem to mention letterlocking (although I didn't review closely).</p>
Threat Modeling Open Training: First Quarter, 20222022-01-13T00:00:00Zhttps://shostack.org/blog/threat-modeling-open-training/AdamOpen threat modeling training, Q1 2022<span class="single-post-excerpt"><!-- excerpt -->Open threat modeling training, Q1 2022<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/dfd-first-quarter-695w.webp 695w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/dfd-first-quarter-695w.jpeg 695w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/dfd-first-quarter-695w.jpeg" width="695" height="333" alt="A data flow diagram" decoding="async" />
</picture>
<p><b>What</b>: I'm really excited to announce my next threat modeling training (open, remote-only, instructor-led) will be Feb 28-March 4, 8AM-10AM Pacific time, and seats are still available! We limit these courses to 24 people so everyone can get personalized attention. Everyone in the course gets a certificate of completion (suitable for CPE credit) as well as a package of cool threat modeling physical tools.</p>
<p><b>Why</b>: Threat modeling is the language of security, and it's the techniques we use to understand the problems a future system can encounter so we can deal with them in a strategic way. Individuals have signed up for the course to advance their careers, while companies often send people to learn to bring more value earlier in the process.</p>
<p><b>Who</b>: Course participants are usually senior, experienced engineers who are involved in technology delivery. That includes SWE or software developers, SRE, program and project managers. You do not need to know how to code or hack to participate and learn in this course. We've also had line of business owners, recent graduates, and even PhD candidates. Also, this is earlier than many of our courses, and so it's great for those in Europe.</p><p>
</p><p><b>How</b>: Sign up at <a href="https://courses.shostack.org/courses/threat-modeling-intensive-222-Q1-2022" target="_blank" rel="noopener">https://courses.shostack.org/courses/threat-modeling-intensive-222-Q1-2022</a>. Also, if you use coupon adamblog2022 by next Friday (Jan 24), you get 10% off! </p>
<p class="small-text">Note: the DFD is an exercise from previous class, who wanted to threat model a Spotify-style music service. We had no idea how close any of that might be. All models are wrong, some models are useful.</p>Elevation of Privilege: New Cards for 20222022-01-20T00:00:00Zhttps://shostack.org/blog/elevation-of-privilege-2022/AdamHoly cow, we’ve added new cards to Elevation of Privilege!<span class="single-post-excerpt"><!-- excerpt -->Holy cow, we’ve added new cards to Elevation of Privilege!<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/Elevation-of-Privilege-2022-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/Elevation-of-Privilege-2022-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/Elevation-of-Privilege-2022-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/Elevation-of-Privilege-2022-800w.jpeg" width="800" height="321" alt="Elevation of Privilege cards" decoding="async" />
</picture>
<p>I recently had a burst of enthusiasm for updating the Elevation of Privilege card game, and there are now 7 new cards, and a bunch of minor edits. (“Everyone in the world” is now “anyone with a Facebook account”, similarly, “ACLs” are now “permissions” etc.)</p>
<p>The new cards are:</p>
<ul>
<li>T 2: An attacker can modify your build system and produce signed builds of your software</li>
<li>DoS 2: An attacker can make your authentication system unusable or unavailable</li>
<li>Dos 3: An attacker can drain our easily replacable battery (battery, temporary)</li>
<li>Dos 4: An attacker can drain a battery that’s hard to replace (sealed in a phone, an implanted medical device, or in a hard to reach location) (battery, persist)</li>
<li>Dos 5: An attacker can spend our cloud budget (budget, persist)</li>
<li>E2: An attacker has compromised a key technology supplier</li>
<li>E3: An attacker can access the cloud service which manages your devices</li>
<li>E4: An attacker can escape from a container or other sandbox</li>
</ul>
<p>A complete list can be found in the <a href="https://github.com/adamshostack/eop/commits/master/cards.yaml" target="_blank" rel="noopener">github history</a> for cards.yaml.</p>
<p>The “log4j card” remains unchanged: “(R2) An attacker can pass data through the log to attack a log reader, and there’s no documentation of what sorts of validation are done.”</p>
<p>In other Elevation of Privilege news, there’s a good article in Dark Reading, <a href="https://www.darkreading.com/edge-articles/lets-play-raising-the-stakes-for-threat-modeling-with-card-games" target="_blank" rel="noopener">Let’s Play! Raising the Stakes for Threat Modeling With Card Games</a> by Andrada Fiscutean.</p>
<p>You can of course get the cards from <a href="https://github.com/adamshostack/eop" target="_blank" rel="noopener">Elevation of Privilege github</a>, or do like I do, and buy decks from <a href="https://agilestationery.com/products/elevation-of-privilege-game" target="_blank" rel="noopener">Agile Stationery</a>.</p>Wearing Many Hats2022-01-24T00:00:00Zhttps://shostack.org/blog/wearing-many-hats/AdamFascinating history of a transformation in how hackers were seen.<span class="single-post-excerpt"><!-- excerpt -->Fascinating history of a transformation in how hackers were seen.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/Wearing-many-hats-timeline-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/Wearing-many-hats-timeline-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/Wearing-many-hats-timeline-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/Wearing-many-hats-timeline-800w.jpeg" width="800" height="321" alt="A timeline" decoding="async" />
</picture>
<p>There's a fascinating new history, <a href="https://datasociety.net/library/wearing-many-hats-the-rise-of-the-professional-security-hacker/" target="_blank" rel="noopener">Wearing Many Hats: The Rise of the Professional Security Hacker</a> by Matt Goerzen and Gabriella Coleman, which charts "the movements of the digital underground during the 1990s to reveal what underground technologists or “hackers”, did—technically, linguistically, and culturally—to establish their legitimacy as employable, trustworthy security experts."</p>
<p>Having been present for much of this, I think they've done a great job at capturing the history. My only comment, a bit of a nit, is that early in the report, the authors claim that "Other types of social insecurity and risk stemming from the use of networking technologies—such as harassment, surveillance, and the targeting of civil society activists—were only substantially addressed later by different types of communities and actors." In fact, the early cypherpunks were very focused on surveillance and possible targeting of activists by government in the mid-nineties, and there was substantial overlap between the hacker and cypherpunk communities. There was substantial activism around cryptographic export controls; also Lance Cotrell's Mixmaster, Zero-Knowledge's Freedom, and other software was created in the time covered by the report.</p>Worthwhile Books Q1 20222022-01-28T00:00:00Zhttps://shostack.org/blog/worthwhile-books-q1-2022/AdamThese are the books that I read in the second half of 2021 that I think are worth your time.<span class="single-post-excerpt"><!-- excerpt -->These are the books that I read in the second half of 2021 that I think are worth your time.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2020/debby-hudson-asviIGR3CPE-unsplash-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2020/debby-hudson-asviIGR3CPE-unsplash-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2020/debby-hudson-asviIGR3CPE-unsplash-800w.jpeg" width="800" height="530" alt="stack of 5 old books next to an open journal with pen" decoding="async" />
</picture>
<p>I realize it's been a little while since I did this, but I wanted to pick up my now and then mention of books that I read in Q4 2021 that I think are worth your time.</p>
<h3>Cyber</h3>
<ul>
<li><a href="https://amzn.to/3zy7OiK" target="_blank" rel="noopener"><cite>A Vulnerable System</cite></a> by Andrew Stewart. <a href="https://shostack.org/blog/a-vulnerable-system/">My review</a>.</li>
<li><a href="https://amzn.to/3JO9ERC" target="_blank" rel="noopener"><cite>Designing Secure Software</cite></a> by Loren Kohnfelder. I was honored to write the foreword.</li>
<li><cite><a href="https://amzn.to/3HM7Arr" target="_blank" rel="noopener">Ethics and Data Science</a></cite> by Mike Loukides, Hilary Mason and DJ Patil is a very solid, short primer. Technically not cybersecurity, but there's a lot of overlap.</li>
</ul>
<h3>Non-fiction</h3>
<ul>
<li><a href="https://amzn.to/3f0AqaZ" target="_blank" rel="noopener"><cite>Resilience Engineering</cite></a> by David Woods and Erik Hollnagel. We were honored to have Dr. Woods present at our <a href="https://www.belfercenter.org/learning-cyber-incidents" target="_blank" rel="noopener">Learning from Cyber Incidents</a> workshop.</li>
<li><a href="https://amzn.to/3HF1lFN" target="_blank" rel="noopener"><cite>How Emotions are Made</cite> </a> by Lisa Feldman Barrett. Quite fascinating. Argues that our bodies and minds react, and we learn to label those reactions with emotions. Very down to earth writing, grounded in lots of science.</li>
<li><a href="https://amzn.to/3Gj5wHb" target="_blank" rel="noopener"><cite>Teaching Effectively with Zoom</cite></a> by Dan Levy. One of several books with very similar titles and perhaps rushed out at the start of the pandemic, but this one is by a Harvard faculty member who has thought deeply about effective teaching, and has written an incredibly useful short text for those who are being forced to teach in an unfamiliar way. He covers things like "use this Zoom function", but along the way he also sneaks in a masterclass on how to teach.</li>
<li><a href="https://amzn.to/3t46fI9" target="_blank" rel="noopener"><cite>Flying Blind</cite></a> by Peter Robison, and apparently the only non-fiction in this roundup written by a journalist, and one who's been convering Boeing for 25 years or so, and has a strong perspective on the changes the company underwent, and how they lead to the 777 battery issues and the MAX tragedies.</li>
</ul>
<h3>Fiction</h3>
<ul>
<li><cite><a href="https://www.amazon.com/Fated-Sky-Lady-Astronaut-Novel-ebook/dp/B0756JKWCJ/ref=sr_1_2?crid=QL9UNV12XQNR&keywords=fated+sky&qid=1643388509&sprefix=fated+sky%2Caps%2C214&sr=8-2" target="_blank" rel="noopener">The Fated Sky</a></cite> by Mary Robinette Kowal continues her Lady Astronaut series.</li>
</ul>
<p class="credit-text"><a href="https://unsplash.com/photos/asviIGR3CPE" target="_blank" rel="noopener">Photo</a> by <a href="https://unsplash.com/@hudsoncrafted?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank" rel="noopener">Debby Hudson</a>.</p>Ten Questions we hope the CSRB answers2022-02-09T00:00:00Zhttps://shostack.org/blog/ten-questions-for-the-csrb/AdamThe new Cyber Safety Review Board is an opportunity to get better faster.<span class="single-post-excerpt"><!-- excerpt -->The new Cyber Safety Review Board is an opportunity to get better faster.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/homeland-security-seal-800-796w.webp 796w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/homeland-security-seal-800-796w.jpeg 796w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/homeland-security-seal-800-796w.png 796w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/homeland-security-seal-800-796w.jpeg" width="796" height="334" alt="Homeland security" decoding="async" />
</picture>
<p>Steve Bellovin, Tarah Wheeler and I have a new article in Lawfare, <a href="https://www.lawfareblog.com/ten-questions-we-hope-cyber-safety-review-board-answers%E2%80%94and-three-it-should-ignore" target="_blank" rel="noopener">Ten Questions We Hope the Cyber Safety Review Board Answers—and Three It Should Ignore</a>. The title is, I hope, self-explanatory.</p>How To Choose a Threat Modeling Training2022-02-18T00:00:00Zhttps://shostack.org/blog/how-to-choose-threat-modeling-training/Karen WalshUnderstanding how to choose the right threat modeling training can give you the education you want for the skills you need.<span class="single-post-excerpt"><!-- excerpt -->Understanding how to choose the right threat modeling training can give you the education you want for the skills you need.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/threat-modeling-training-graphic-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/threat-modeling-training-graphic-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/threat-modeling-training-graphic-800w.jpeg" width="800" height="431" alt="graphic depicting two adults conducting threat modeling training with laptop and whiteboard" decoding="async" />
</picture>
<p>Anticipa----tion. Threat modeling is really about anticipation. It’s about wondering what <em>could</em> happen. This is different from prediction, of course. Prediction is about taking an educated or informed guess at what <em>will</em> happen. The problem: in security, you can’t really predict the future. You can, however, anticipate problems, and that’s where threat modeling is valuable. If you’re new to threat modeling or want to advance your knowledge, you might be asking: how do I choose a threat modeling training?</p>
<h2>What is threat modeling?</h2>
<p>In cybersecurity, threat modeling is a structured process that empowers you to analyze threats to software and looks at some combination of:</p>
<ul><li>Business processes</li>
<li>Software and hardware that implement those processes</li>
<li>Clients and partners that connect from outside the business’ control.</li>
<li>Vulnerabilities</li>
<li>Threats</li>
<li>Controls</li>
<li>Assets</li>
<li>attackers</li></ul>
<p>A good threat analysis uses a methodology and structured techniques to help identify risks so you can proactively mitigate them so the work is consistent across the many participants.</p>
<p><a href="https://shostack.org/blog/why-threat-model/">We threat model</a> so that we can anticipate problems and, hopefully, fix them before they become a problem. If we can fix something before it’s an issue, then it’s less expensive. We don’t have to re-do work, we don’t have to adjust dependent code.</p>
<h2>What makes a good threat model?</h2>
<p>A good threat model starts with the Four Question Framework:</p>
<ul><li><a href="https://shostack.org/blog/sketching-video/">What are we working on?</a></li>
<li><a href="https://shostack.org/blog/what-can-go-wrong/">What can go wrong?</a></li>
<li>What are we going to do?</li>
<li>Did we do a good job?</li></ul>
<p>You might notice, these questions are not really technical, and that makes it a great language for communicating with executives about security issues.</p>
<h3>What are we working on?</h3>
<p>You can model anything. We’re used to talking about software and classic enterprise architectures. Today, we also need to think about operational, mobile app, and <a href="https://shostack.org/blog/iot-security-threat-modeling/">IoT</a> threats. On the other hand, if you haven’t defined <em>what</em> you’re working on, then you won’t be able to scope your analysis and move confidently to the next question.</p>
<h3>What can go wrong?</h3>
<p>This is where you start thinking about threats to your specific environment. You can be as structured as you want. Many people brainstorm with an open-ended question, while others start with a very structured approach. This can include specific threat types, like spoofing. You can also look at it from how threat actors operate, using Kill Chains.</p>
<h3>What are we going to do?</h3>
<p>Now you start going through the list of what <em>could</em> go wrong to try and address each threat. You can take four types of actions:</p>
<ul><li>Mitigate - apply a control of some type</li>
<li>Eliminate - remove risky features</li>
<li>Transfer - via insurance or user interface</li>
<li>Accept - by the right person, and tracked</li></ul>
<p>The choice you make depends on your capacity - technical and financial. You prioritize then take action. The action you take may depend on your risk tolerance.</p>
<h3>Did we do a good job?</h3>
<p>This is where you validate what you did. You make sure that you addressed all the identified threats. Basically, you’re checking your work. </p>
<p>This step also gives you the opportunity to summarize for non-technical executives. You can assess to see if you did a good job in the organizational senses, as well. Organizational means looking at both whether you had a good workflow and whether you have the right team in place. </p>
<p>Some questions to ask might include:</p>
<ul><li>Did we document the work and the threats we discovered?</li>
<li>Are documents in the right place? </li>
<li>Are people happy to have spent time threat modeling? </li>
<li>Is cross-team and cross-functional communication improving?</li></ul>
<h2>What to look for in threat modeling training content</h2>
<p>Great training requires great content, great instructional design and great execution, in that order. Any issues will limit what can be learned. </p>
<p>When you’re choosing technical tools, you have a certain set of specifications for what you want and need. Choosing a threat modeling training isn’t really that different. You might know that you want to learn the threat analysis process, but you might not realize there are different approaches. </p>
<p>To help you, I have some questions that you can ask when deciding on a threat modeling course so that you get the education you want. </p>
<h3>What is the learning approach?</h3>
<p>No matter what kind of course you take, the learning approach should be the first thing you consider. Training and education are different. With training, you gain skills. With education, you learn so that you can apply those skills to new problems. </p>
<p>Let’s think about how people view traditional cybersecurity awareness trainings. Most don’t work because attendees don’t actually <em>learn</em> in meaningful ways. The same is true for any training course. With threat modeling, the learning approach might be even more important because you need the ability to adapt to new threats. </p>
<p>This means that when evaluating a threat modeling course, you want to ask the following questions:</p>
<ul><li>What are the learning goals?</li>
<li>How are the learning modules structured?</li>
<li>What does the execution of these goals and modules look like?</li>
<li>What happens to bring it all together?</li></ul>
<h4>What is the threat modeling approach used?</h4>
<p>Some trainings will focus on one threat modeling methodology or technology. So, you should know the differences so you can choose the one that’s right for you. </p>
<p>More often than not, trainings will focus on the different threat modeling methodologies and how to choose the right one. For example, some focus solely on STRIDE, which is the acronym for the six threat types it focuses on:</p>
<ul><li><b>S</b>poofing</li>
<li><b>T</b>ampering</li>
<li><b>R</b>epudiation</li>
<li><b>I</b>nformation disclosure</li>
<li><b>D</b>enial of Service (DoS)</li>
<li>Privilege <b>E</b>scalation</li></ul>
<p>STRIDE is important and should definitely be discussed. It’s not the only framework for threat modeling. </p>
<p>The problem is that instead of starting with the threat types addressed, you need to start with an organizing principle, like the Four Question Framework. Threat modeling is about creating a repeatable process, and that means following a series of steps like identifying, brainstorming, analyzing, solving, and reviewing. Then, you pick the threat modeling techniques that support <em>into</em> those steps. </p>
<h3>What is the core content provided?</h3>
<p>When you’re choosing an online training course, you’re also buying use of the content provided. Any class is really just for the time you’re going through the course, but you also pay for any resources that you can reference later. </p>
<p>This means that you should consider what types of downloadable resources come with the class. Some things to consider are:</p>
<ul><li>How do the materials reinforce learning objectives? </li>
<li>How many exercises are provided? How deep can you go with them?</li>
<li>How are the <a href="https://shostack.org/blog/recording-lectures/">videos structured</a>?</li>
<li>What is the instructor’s experience?</li>
<li>How do these materials help you continue your learning after the course ends?</li></ul>
<h3>Does the training meet my needs?</h3>
<p>Often, online training gives you basic skills. This is great if you’re just getting started. However, learning is a journey, trite though that may sound. </p>
<p>The question you need to ask yourself is, “what do I need and want to learn?” If you already know the basics of threat modeling, then you need a course that gives you learning outcomes that apply to your everyday life. If you’re unsure what you need to learn, you probably want a respected and trustworthy source to make those choices for you.</p>
<p>As you start your search for advanced coursework, you should consider whether the course provides training to help with:</p>
<ul><li>Cybersecurity</li>
<li>Application security</li>
<li>Security design principles</li>
<li>Specific frameworks, like <a href="https://courses.shostack.org/courses/Introduction-to-threat-modeling-global-appsec-2021" target="_blank" rel="noopener">OWASP</a> </li>
<li>Job function, <a href="https://courses.shostack.org/courses/threat-modeling-for-engineers-201-self-paced" target="_blank" rel="noopener">like engineers</a> </li>
<li>Technology area, such as IoT or cloud</li>
<li>Business vertical, such as medical devices, automotive or aviation</li></ul>
<h3>Is the training focused on a specific threat modeling technology?</h3>
<p>Some trainings focus on giving you experience that focuses on how to use a vendor technology. With so many threat modeling technologies available, this makes sense. After all, you can’t use something if you haven’t been trained on it. </p>
<p>On the other hand, threat modeling doesn’t <em>have</em> to use a specific technology. You can just as easily get started with threat modeling using a whiteboard. Technology can certainly add value once you have the skills and the framework or process to apply them.</p>
<p>You also want a threat modeling course that takes a modern, thoughtful, and applicable approach to threat modeling. This means giving you a way to really learn the underlying skills - including the non-technical ones - so that you can successfully threat model anything, even new technologies. </p>
<h3>How does the training help evaluate or validate the threat model exercises?</h3>
<p>The course you take needs to be practical and applicable. You need to learn, and a lot of times that’s hard, especially with a <a href="https://shostack.org/blog/can-training-work-remotely/">remote course</a>. We deliver either self-paced or instructor-led trainings, and have instructions for organizations that apply a cohort strategy to keep self-paced learners motivated.</p>
<h4>Self-Paced Courses</h4>
<p>If you’re an independent learner with limited time, self-paced might be perfect for you. This is especially true if you’re someone who’s good at learning from doing, then reviewing. <a href="https://courses.shostack.org/collections?category=self-paced" target="_blank" rel="noopener">Self-paced trainings</a>, also called computer-based trainings (CBT), give you that flexibility and independence.</p>
<h4>Instructor-Led Courses</h4>
<p>Some people need the structure of <a href="https://courses.shostack.org/collections?category=instructor-led" target="_blank" rel="noopener">instructor-led coursework</a>. They need a defined time where they interact. Some people want the structure of sitting in a lecture. This is also ok. After all, no two learners are the same. If part of taking the time out of your day to attend lectures is getting instructor or peer feedback, you should make sure that the course includes time for these activities. </p>
<p>With a “flipped classroom” approach, lectures are pre-recorded, and you spend time interacting with the instructor rather than just listening to them drone on. If you’re taking a course with a flipped-classroom approach, you can watch lectures on your own time. You can speed up or slow down the video. You can re-wind. You can read the subtitles. (Adam will teach you to take notes with pen and paper to help you stay focused.) You can spend as much time as you need doing exercises. Then, you can spend the dedicated live session time engaging in discussion and instructor feedback. </p>
<h2>Our Threat Modeling Trainings</h2>
<p>What works in person doesn’t alway work online, so we worked with educational designers to create trainings modeled on research around how people learn best and made them technology agnostic, taking a modern, thoughtful, and applicable approach.</p>
<p>Are we effective? We like to think we are, but we also know that everyone thinks their service offers value. Instead of listening to us, why not see what some of our students have said:</p>
<blockquote>The course was very thought provoking about how our program should be run. I like the emphasis on Adam not having the answers for us, but having things that we should consider.</blockquote>
<p>For more information about the courses Shostack + Associates currently have available, please <a href="https://courses.shostack.org/collections" target="_blank" rel="noopener">see the listings here</a>.</p>I need an extension!2022-02-23T00:00:00Zhttps://shostack.org/blog/i-need-an-extension/AdamA few lessons from the Mazda radio incident.<span class="single-post-excerpt"><!-- excerpt -->A few lessons from the Mazda radio incident.<!-- /excerpt --></span><picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/extension-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/extension-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/extension-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/extension-800w.jpeg" width="800" height="333" alt="A broken car computer" decoding="async" />
</picture>
<p>[Updated Feb 26 - see below]</p>
<p>There was an <a href="https://www.geekwire.com/2022/youre-listening-to-kuow-like-it-or-not-mysterious-glitch-has-mazda-drivers-stuck-on-public-radio/" target="_blank" rel="noopener">entertaining story</a> recently, about a set of radios in Mazdas that got stuck in a reboot cycle. It turns out that the issue was a local radio station was sending files without extensions via one of the digital channels that are now in a lot of what I still think of as analog radios. (These are how song names, and apparently more, can show up in your in-dash entertainment (IDE) units.) And as much fun as it was to make jokes like “but then you've pwned a Mazda,” I think there are some useful security engineering lessons we should take away.</p>
<p> The first is that inputs are unpredictable. I admit, I rarely give thought to what happens if a file comes in without an extension, but it's now worth adding to any longer, more thorough test suites. You keep up on those, right? The second, I'm guessing here, is check return codes better. (I'd say exceptions, but I'm guessing these IDEs were coded in C (“some 2014-2017 Mazda vehicles with older software.”). The third is that this is most likely a case where some degree of least privilege would have saved Mazda a lot of money. Let me unpack why.</p>
<p>After this blew the heck up in the press, Mazda provided “$1,500 connectivity master units” to those affected - they probably are not paying that per unit, but they are paying dealers for the 90 minute installation, and they're probably upgrading a lot of units not yet impacted to prevent another instance. The description of the issue, a “reboot cycle,” seems to indicate to me that there's not a set of independent processes running, for example, with radio.app talking to the X11 server to display things, but a smaller set of highly integrated processes, possibly even a single process model where something provides just enough services to keep these things mostly running. A set of privileges would allow the still working code to notice and reject the problems that other code was having.</p>
<p>Obviously, I'm speculating, but the folks who make in dash entertainment units are highly price-sensitive, and the code changed as minimally as possible for long periods, so the units shipped in 2013 were likely selected in 2011, which means they could reasonably have been code-complete in 2010. Or because it only impacted Mazdas, it's possible that these are in-house units.</p>
<p>From a threat modeling perspective, it's nitfy that this is an honest to goodness example of a one-way data flow carrying a threat. That's pretty unusual.</p>
<p>In closing, these amusing sorts of incidents often carry important lessons, and you can use them for security education. Why miss the chance to drive awareness?</p>
<p>Updated Feb 26: There's a <a href="https://github.com/Hamled/mazda-format-string-bug#readme" target="_blank" rel="noopener">claim</a> that this can be triggered by program names that contain a % character, which I find more convincing than the 'lack of extension' root cause. (Via <a href="https://www.kuow.org/stories/99-invisible-great-red-car-conspiracy/" target="_blank" rel="noopener">Risks Digest</a>.)</p>Elevation of Defenses2022-03-15T00:00:00Zhttps://shostack.org/blog/elevation-of-defenses/AdamUsing games to help us explore engineering techniques<span class="single-post-excerpt"><!-- excerpt -->Using games to help us explore engineering techniques<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2022/levees.jpg" alt="Soldiers building an emergency levee" class="single-post-main-img" />
<p>Once a month or so, I run an Elevation of Privilege session with Agile Stationery. We spend 90 minutes playing the game, talking about threat modeling, how to introduce it, and showing how to make it work.</p>
<p>One of the reasons I do it is that I learn from the participants, and in a recent session for a bank, I got a new perspective on scoring that I want to share.</p>
<p>Kit M. asked “can we give a point for coming up with a defense?” First: heck yes. You <em>can</em> give points for whatever you want. And you <em>should</em> give points in ways that reward the behaviors you want to see. So, giving a point for a defense is good, and makes me wonder: is the game too focused on offense? Is there a variant where you get points for coming up with threats that are already blocked by the defenses that software engineers have built and tested? Maybe that's a team variant, and we could do something like...</p>
<ul>
<li>Discovering a threat: 1 point</li>
<li>Noting a potential defense: 1 point</li>
<li>Noting an implemented defense: 2 points</li>
<li>Discovering a variant of the threat that bypasses the defense: 1 point</li>
<li>Explaining how an implemented defense prevents the variant: 3 points</li>
</ul>
<p>Games are an important tool for engineering — they open the door to playful exploration of possibilities. When playing, we choose to move into a space where we arbitrarily limit ourselves with a set of rules. (Soccer is way easier if you pick up the ball with your hands, but we agree in playing soccer not to do that.) A correlate is that we know we're there to have fun, and it's ok to make suggestions like “what if we change the rules?”</p>
<p>You can play <a href="https://shostack.org/games/elevation-of-privilege">with your team</a>, you can join an <a href="https://shostack.org/games/elevation-of-privilege" target="_blank" rel="noopener">open session</a>, and we do private sessions - <a href="https://shostack.org/contact">get in touch</a>.</p>
<p class="credit-text">Photo: <a href="https://www.flickr.com/photos/usacehq/5793394999/in/photostream/" target="_blank" rel="noopener">US Army Corps of Engineers</a>.</p>
How Executives Can Use Threat Modeling2022-03-18T00:00:00Zhttps://shostack.org/blog/how-executives-can-use-threat-modeling/AdamYou don’t have to be technical, but you can’t make informed decisions about your business without threat modeling.<span class="single-post-excerpt"><!-- excerpt -->You don’t have to be technical, but you can’t make informed decisions about your business without threat modeling.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2022/pexels-anna-shvets-5324939.jpg" alt="Focused colleagues brainstorming in boardroom; Photo by Anna Shvets from Pexels" class="single-post-main-img" />
<p>My <a hre="https://courses.shostack.org/collections?category=self-paced" target="_blank" rel="noopener">threat modeling trainings</a> usually have a good mix of technical people, like developers, network architects, and <a href="https://courses.shostack.org/courses/threat-modeling-for-engineers-201-self-paced" target="_blank" rel="noopener">engineers</a>. These technical folks need to use threat modeling, and they learn where it fits into their daily functions. Every so often, though, I get business executives - like CISOs, VPs engineering, and even a CEO, or CFO. It’s a general misunderstanding in cybersecurity - and the rest of the technology world - that business leadership can’t do threat modeling. In fact, most executives use threat modeling everyday without realizing it. The good news? Executives can learn to apply specific threat modeling techniques to help make more informed decisions across all areas of the business, especially cybersecurity. </p>
<h2>What is threat modeling?</h2>
<p>We <a href="https://shostack.org/blog/why-threat-model/">threat model</a> everyday. First, let’s set out a definition of threat modeling that makes sense to everyone - no matter what their job function is. Threat modeling is a structured process that helps you understand what can go wrong so that you can decide whether it’s something that you want to spend time fixing. In the end, threat modeling is basically a way to identify potential threats and decide what to do about them — sometimes adding a control, sometimes accepting the risk.</p>
<h2>What is the threat modeling process?</h2>
<p>I like to use my Four Question Framework:</p>
<ul><li><a href="https://shostack.org/blog/sketching-video/">What are we working on?</a></li>
<li><a href="https://shostack.org/blog/what-can-go-wrong/">What can go wrong?</a></li>
<li>What are we going to do about it?</li>
<li>Did we do a good job?</li></ul>
<p>Let’s put threat modeling into an everyday hypothetical.</p>
<blockquote>You just got the notification on your smartphone that you need to install the most recent software update. You’re also on your way to go run errands. You don’t know how long it’s going to take for the update to download and install, and you don’t want to head out without your phone. You decide that you’ll wait until you get home and do it before you go to sleep. You run the errands, Netflix and chill, then install the update without any data being stolen.</blockquote>
<p>This is an example of threat modeling. Even though you might not realize it, your brain went through the following process:</p>
<ul><li><strong>What are we working on</strong>: Installing a smartphone update and running errands</li>
<li><strong>What could go wrong</strong>: Someone could steal the personal information that your store on your phone. Your phone could be out of commission while you’re running errands.</li>
<li><strong>What are we going to do about it</strong>: Compare the time it takes to install versus getting errands done. Decide on the trade-off between getting to Netflix and chill or protecting data sooner.</li>
<li><strong>Did we do a good job</strong>: Your data was ok - this time! So you did a good job.</li></ul>
<h2>4 Reasons Threat Modeling Is Valuable for Executives</h2>
<p>We talk about creating a culture of security all the time. To do this, it really does have to start at the top, with executive leadership and even the board. You don’t have to be technical, but you can’t make informed decisions about your business without threat modeling. If you can’t express what can go wrong with your technical systems, how are you making choices about if or how to defend them?</p>
<h3>Being a Leader</h3>
<p>The best leaders lead by doing. To be an effective senior leadership team member, you need to set an example. You can’t create a culture of security if you’re not actively engaged in the process yourself.</p>
<h3>Participating in Conversations</h3>
<p>You can’t be actively engaged in conversations that you don’t fully understand. You don’t need to be able to <em>do</em> the work. You need to be able to <em>lead</em> the work. </p>
<p>If data is the new oil, then as a CEO or CFO, you should know where your data is. I don’t mean you need to know this database field-by-database-field, but you should understand the way data flows through your systems and networks. </p>
<h3>Building Business Strategies</h3>
<p>A fundamental part of your job is to build and communicate about strategic business goals - what we’re working on. Today, those will almost always involve technology or data. </p>
<p>As you’re trying to grow revenue, your marketing and sales teams want to adopt a new Software-as-a-Service (SaaS) customer relationship management (CRM) platform. This makes sense from a business perspective. They can share data to accelerate the sales cycle. Also, the CRM will collect, store, transmit, or process sensitive personally identifiable information. </p>
<p>When you’re making these strategic decisions, you have to understand the potential threats so you can determine whether it’s the tool you should add to your business. </p>
<h3>Meeting Compliance Requirements</h3>
<p>The first thing to explain here: threat modeling is not risk management. They should be integrated, but they’re not the same. </p>
<p>Threats are things that can go wrong. You need to understand <em>how</em> things can go wrong so that you can prove governance over your security program. Threat modeling provides input into risk management. You make decisions about risk, using strategies like:</p>
<ul><li>Accept: it’s low enough that you don’t think it will have a high impact to the business, relative to the reward</li>
<li>Transfer: when you find someone else, like an insurer, to manage it </li>
<li>Eliminate: Decide not to go there, because other strategies aren’t easily applied, cost effective, or otherwise won’t mitigate a threat.</li></ul>
<p>Let’s return to the SaaS CRM example. You’ve decided to onboard this new SaaS CRM platform, but a lot of personally identifiable information (PII) is involved. The CRM might integrate with the following:</p>
<ul><li>Email servers</li>
<li>Calendars</li>
<li>Social media accounts</li>
<li>Communication tools</li>
<li>Form or survey tools</li>
<li>Conference call software</li>
<li>Ticketing systems</li></ul>
<h4>Applying Threat Threat Modeling to Compliance</h4>
<p>Now, let’s take a look at where threat modeling fits in.</p>
<ul><li><strong>What are we working on</strong>: Setting up a CRM platform that connects to your networks and other applications</li>
<li><strong>What can go wrong</strong>: Insecure APIs connecting all these technologies can lead to a data breach</li>
<li><strong>What are we going to do about it</strong>: Put security controls in place to protect the data, like encryption</li></ul>
<p>Most executives understand the idea of insecure APIs. It’s a point where data goes from one application to another. Understanding the real threat to your company is a bit different. You don’t need to <em>know</em> how to secure an API. You do need to <em>understand</em> how data moves between the applications. If you don’t understand the way data flows across your networks and applications, you can’t understand the threat. </p>
<h4>Threat Modeling and Risk Management</h4>
<p>This is where you quantify the threats that you can’t simply mitigate. You assess likelihood and impact, or other factors which help you make risk tradeoffs. Once you’ve discovered the threats, you can understand the real risk. </p>
<p>If you have all your sensitive information on one fully trusted network, the impact of any threat is higher and you might want to segment and isolate or even apply zero-trust strategies to limit the impact of an attack. This means you need to understand the way that data flows so that you can adequately assess risk and make sure that you’re mitigating risks. </p>
<p>This is where the fourth question comes in. Now that you’ve managed risk, you need to ask:<br />
<strong style="text-indent: 40px;">Did we do a good job?</strong><br />
You can’t meet compliance requirements without reviewing things from a strategic perspective. Did we cover all the threats to a reasonable level? </p>
<h4>Threat Modeling and Governance</h4>
<p>Since you understood the potential threats to data security, you were able to make an informed decision about what network to use and what controls to put in place. You’re also able to understand how the control failed and why the steps the security team took helped you secure the data better. </p>
<h2>Threat Modeling is for Everyone</h2>
<p>Making good trade-offs is a part of being an executive. Every day, we make conscious or subconscious trade-offs when making decisions. When you start to threat model, you’re creating a structured process for consistent outcomes. When you have consistency, you have repeatable processes for strategic decision-making that helps you achieve business objectives, like protecting your customer’s data the way you promise. </p>
<p>For more information about the courses I currently have available, please <a href="https://courses.shostack.org/collections" target="_blank" rel="noopener">see the listings here</a>.
</p>The Evergreen Running Aground Problem2022-03-23T00:00:00Zhttps://shostack.org/blog/ever-forward/AdamThe Evergreen line has had another ship run aground.<span class="single-post-excerpt"><!-- excerpt -->The Evergreen line has had another ship run aground.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/everforward-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/everforward-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/everforward-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/everforward-800w.jpeg" width="800" height="534" alt="The US Coast Guard approaches a grounded Evergreen ship" decoding="async" />
</picture>
<p>"🎶 Ooops. I did it again..." The Ever Forward has <a href="https://www.popularmechanics.com/technology/infrastructure/a39438984/ever-forward-is-stuck-in-the-chesapeake-bay/" target="_blank" rel="noopener">run aground</a> in Chesapeake Bay, mirroring the problem with its fellow Evergreen Marine ship the Ever Given, which blocked the Suez canal last year. And while I passionately want to laugh, I have a very serious question.</p>
<p>Ten companies, of which Evergreen is one, control roughly 75% of global container shipping. (<a href="https://www.globaltrademag.com/our-top-ten-list-these-shipping-companies-control-nearly-75-of-the-market/" target="_blank" rel="noopener">Global Trade magazine</a>.) Evergreen has suffered two newsworthy accidents in a year, the others have suffered none. I'm not going to commit the statistical sin of computing on small cherry-picked data, but this strike me as surprising and unusual, and begs the question: Is something wrong with Evergreen, or is this bad luck?</p>
<p>Fortunately, I don't have to answer that. But if I did, I wouldn't want to take Evergreen's word for it. I'd want an independent investigator who can analyze what's happened, what contributing factors might exist, and make a report available. Also fortunately, we have just such a body for transportation accidents: The National Transportation Safety Board. I look forward to the day when we have a computer security equivallent.</p>
Short reads, March 20222022-03-28T00:00:00Zhttps://shostack.org/blog/shortreads-march/AdamInteresting articles from around the internet, March edition<span class="single-post-excerpt"><!-- excerpt -->Interesting articles from around the internet, March edition<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/short-reads-march-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/short-reads-march-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/short-reads-march-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/short-reads-march-800w.jpeg" width="800" height="501" alt="Four people reading at a church" decoding="async" />
</picture>
<p>I've been meaning to blog about a few things, and not quite had time. Events in Ukraine are tremendously depressing, and I don't have much useful to say. I hope you find one or more of these interesting:</p>
<ul>
<li><a href="https://www.rstreet.org/2022/03/08/announcing-the-white-hat-cyber-forecasting-tournament/" target="_blank" rel="noopener">Announcing the White Hat Cyber Forecasting Tournament</a> by R Street, and <a href="https://www.metaculus.com/tournament/white-hat/" target="_blank" rel="noopener">Metaculus's tournament page.</a> The user interface takes a bit of getting used to, but there's ongoing predictions that prediction markets will be helpful in security, and I love a good experiment.</li>
<li><a href="https://www.theatlantic.com/health/archive/2022/02/covid-anti-vaccine-smoking/622819/" target="_blank" rel="noopener">COVID Won’t End Up Like the Flu. It Will Be Like Smoking.</a> (Self-explanatory title, at the Atlantic)</li>
<li><a href="https://nickmerrill.substack.com/p/pockets-of-liberation?s=r" target="_blank" rel="noopener">Nick Merrill on economic motives for crypto</a>: "They pay two-thirds of their income to their landlord. Why not try minting their music as NFTs?"</li>
</ul>
<p class="credit-text">Photo by <a href="https://unsplash.com/photos/AUkKng1cOrM" target="_blank" rel="noopener">Dahiana Waszaj</a></p>The Grimes Model of Scams2022-03-31T00:00:00Zhttps://shostack.org/blog/grimes-model-of-scams/AdamRoger Grimes has an exciting new model of scams that's going to transform how we teach people ot defend against them.<span class="single-post-excerpt"><!-- excerpt -->Roger Grimes has an exciting new model of scams that's going to transform how we teach people ot defend against them.<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2022/grimes-model.png" alt="The Grimes model of scams" class="single-post-main-img" />
<p>Over the years, I've had the opportunity to look at scams and cons and how they translate to technical systems. And while scams are fascinating, technical cybersecurity tends to treat them with even more disdain than social engineering concerns. The scorn and victim blaming can be something to see.</p>
<img src="https://shostack.org/blog/img/2022/grimes-original.png" align="right" />
<p>So I was pretty excited to see an article from Roger Grimes, <a href="https://www.linkedin.com/pulse/traits-most-scams-roger-grimes/" target="_blank" rel="noopener">Traits of Most Scams</a>. This is a model of threats, a way of answering the question "what can go wrong?" It's largely analagous to STRIDE in that sense. It has a couple of nice properties to it. They include simplicity and longevity. The simplicity is somewhat self-evident. The longevity is a result of the properties being hard for scammers to avoid: an unexpected message, urgency, a downside to you, and a thing you haven't been asked to do before.</p>
<p>It's likely that the message is unexpected - if it's expected, the scammer has either spent a long time on preparation ("the long con"). It's urgent because a lot of the infrastructure scammers use will get taken down, and it's also likely that you'll notice that something is odd or too good to be true. The downside to you is the reason the scammer is scamming, and it's new to you because if they've asked you, you're probably more wary. Of course, there are romance scams and advance fee frauds where the requests cascade, but there was a first request in the chain.</p><p>
</p><p>Similarly, the "confirm using an alternate method" is what we called "a path to safety." (I think this was originally pointed out by Ellen Cram Kowalczyk, who said taking control of a situation is a powerful technique as you travel. Scammers initiate contact with you - the random person you ask for help is less likely to be involved.) Visiting a bookmark, using a search engine, calling the number on the back of your card all get you to that alternate channel and out of the scammer's space.</p>FDA Draft Premarket Guidance2022-04-08T00:00:00Zhttps://shostack.org/blog/fda-premarket-draft/AdamThe FDA has issued draft guidance for pre-market security<span class="single-post-excerpt"><!-- excerpt -->The FDA has issued draft guidance for pre-market security<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/fda-cybersecurity-premarket-draft-2022-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/fda-cybersecurity-premarket-draft-2022-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/fda-cybersecurity-premarket-draft-2022-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/fda-cybersecurity-premarket-draft-2022-800w.jpeg" width="800" height="440" alt="Document title: Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions Draft Guidance for Industry and Food and Drug Administration Staff" decoding="async" />
</picture>
<p>The FDA has issued a draft, <a href="https://www.fda.gov/media/119933/download" target="_blank" rel="noopener">Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions Draft Guidance for Industry and Food and Drug Administration Staff</a>. Healthcare Infosecurity has a summary and interview with Dr. Suzanne Schwartz in <a href="https://www.healthcareinfosecurity.com/fda-document-details-cyber-expectations-for-device-makers-a-18863" target="_blank" rel="noopener">FDA Document Details Cyber Expectations for Device Makers</a>. Comments are due within 90 days.</p>
Future of Appsec podcast2022-04-21T00:00:00Zhttps://shostack.org/blog/future-of-appsec/AdamA really fun episode with Adam joining Harshill Parikh of Tromzo's Future of Appsec podcast.<span class="single-post-excerpt"><!-- excerpt -->A really fun episode with Adam joining Harshill Parikh of Tromzo's Future of Appsec podcast.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/TROMZO_Podcast_FoAS__Guest_800x300px-1-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/TROMZO_Podcast_FoAS__Guest_800x300px-1-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/TROMZO_Podcast_FoAS__Guest_800x300px-1-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/TROMZO_Podcast_FoAS__Guest_800x300px-1-800w.jpeg" width="800" height="300" alt="Podcast episode billboard" decoding="async" />
</picture>
<p> Most people think about threat modeling as an extensive, costly and heavyweight exercise. But what if it didn’t have to be? What if threat modeling could be as easy as asking and answering a few simple questions?</p>
<p>Topics discussed in this episode:</p>
<ul>
<li>Why threat modeling shouldn’t only be for organizations with large teams of application security engineers. </li>
<li>How to bridge the gap between the security team focused on threat modeling and the development/engineering team.</li>
<li>How security engineers can support and train their developers on how to incorporate threat modeling into their day-to-day work. </li>
<li>Where threat modeling should fit into your application security program priorities. </li>
<li>The surprising benefits that threat modeling brings — outside of knowing the risks that exist.</li>
<li>How most organizations let perfect be the enemy of good (and what they should be doing instead). </li>
</ul>
<p>Listen — and subscribe — at:</p>
<ul>
<li><a href="https://podcasts.apple.com/lt/podcast/future-of-application-security/id1615681970" target="_blank" rel="noopener">Apple podcasts</a></li>
<li><a href="https://open.spotify.com/episode/7yvNSUsU25Lpo8FTVIWciZ?si=46cb9cd3ca6441ec&nd=1" target="_blank" rel="noopener">Spotify</a></li>
</ul>
CyberPeace2022-04-25T00:00:00Zhttps://shostack.org/blog/cyberpeace/AdamA new book on cyberpeace!<span class="single-post-excerpt"><!-- excerpt -->A new book on cyberpeace!<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/cyber-peace-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/cyber-peace-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/cyber-peace-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/cyber-peace-800w.jpeg" width="800" height="305" alt="An excerpt of the cover of the new book, Cyberpeace" decoding="async" />
</picture>
<p>I'm excited that <a href="https://www.cambridge.org/core/books/cyber-peace/8C458021C6FEC398064867A9B5EA938D" target="_blank" rel="noopener">Cyber Peace: Charting a Path Toward a Sustainable, Stable, and Secure Cyberspace</a>, edited by Scott J. Shackelford, Indiana University, Bloomington, Frederick Douzet, Université Paris 8, Christopher Ankersen, New York University is now published by Cambridge University press, and is open access. Rob Knake and I have a chapter, "Trust But Verify" on the importance of having many ways to assess the state of the world, and how that ties to science of security.</p>
Worthwhile Books May 20222022-05-02T00:00:00Zhttps://shostack.org/blog/worthwhile-books-q2-2022/AdamThese are the books that I read in the second quarter, 2022 that are worth your time.<span class="single-post-excerpt"><!-- excerpt -->These are the books that I read in the second quarter, 2022 that are worth your time.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/books-star-wars-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/books-star-wars-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/books-star-wars-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/books-star-wars-800w.jpeg" width="800" height="298" alt="A stack of Star Wars books" decoding="async" />
</picture>
<p>Books I've read lately that I think you might find enjoyable are the focus of this 100% foreshadowing-free edition of “worthwhile books.”</p>
<h3>Cyber</h3>
<ul>
<li><a href="https://www.cambridge.org/core/books/cyber-peace/8C458021C6FEC398064867A9B5EA938D" target="_blank" rel="noopener"><cite>Cyber Peace</cite></a>, edited by Scott J. Shackelford, Frederick Douzet, and Christopher Ankersen. Rob Knake and I have a chapter, Trust But Verify. Open access.</li>
</ul>
<h3>Non-fiction</h3>
<ul>
<li><a href="https://amzn.to/3ru9NSo" target="_blank" rel="noopener"><cite>The Age of Surveillance Capitalism</cite></a>, Shoshana Zuboff.</li>
<li><a href="https://amzn.to/3KJ5FFT" target="_blank" rel="noopener"><cite>The Sense of Style</cite></a>, Steven Pinker (My short <a href="https://shostack.org/blog/the-road-to-mediocrity/">review</a>).</li>
<li><a href="https://writeusefulbooks.com/" target="_blank" rel="noopener"><cite>Write Useful Books</cite></a>, Rob Fitzpatrick.</li>
<li><a href="https://amzn.to/3vlbd2V" target="_blank" rel="noopener"><cite>The Ultimate Star Wars and Philosophy</cite></a> and <a href="https://amzn.to/3jHUzoI" target="_blank" rel="noopener"><cite>Star Wars and Philosophy: More Powerful than You Can Possibly Imagine</cite></a>. Both edited “chapter” volumes.</li>
<li><a href="https://amzn.to/3JQ9LKW" target="_blank" rel="noopener"><cite>The World According To Star Wars</cite></a>, Cass Sunstein.</li>
<li><a href="https://amzn.to/377tc4I" target="_blank" rel="noopener"><cite>Stencil Wars - PocketArt: The Ultimate Book on Star Wars Inspired Street Art</cite></a>.</li>
</ul>
<h3>Fiction</h3>
<ul>
<li><a href="https://amzn.to/3KQ8p4m" target="_blank" rel="noopener"><cite>Termination Shock</cite></a>, Neal Stephenson.</li>
<li><a href="https://amzn.to/3vndkmX" target="_blank" rel="noopener"><cite>The Pursuit of the Pankera: A Parallel Novel About Parallel Universes</cite></a>, Robert Heinlein. This is the manuscript which became <cite>Number of the Beast</cite> and is most interesting either to his fans or in seeing the power of good editing.</li>
<li><a href="https://amzn.to/37c2how" target="_blank" rel="noopener"><cite>From a Certain Point of View: The Empire Strikes Back</cite></a>. The second volume of Star Wars, told from the perspective of minor characters.</li>
</ul>
Happy Star Wars Day: A Big Announcement & Small Gift2022-05-04T00:00:00Zhttps://shostack.org/blog/star-wars-day-2022/AdamExciting news from Adam Shostack on Star Wars Day 2022<span class="single-post-excerpt"><!-- excerpt -->Exciting news from Adam Shostack on Star Wars Day 2022<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/beginning-crawl-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/beginning-crawl-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/beginning-crawl-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/beginning-crawl-800w.jpeg" width="800" height="299" alt="A snippet from the opening crawl of the newest Shostack online course" decoding="async" />
</picture>
<h2>May the Fourth Be With You!</h2>
<p>This is a particularly joyous Star Wars Day for me because I finally get to share some very exciting news! Later this year, Wiley will be publishing my newest book <cite>“Threats: What Every Engineer Should Learn From Star Wars.”</cite></p>
<p>After decades implementing and teaching cybersecurity, I’ve learned — we can’t wait for “Jedi Knights” to show up and secure our systems. There aren’t enough security experts who wield their powers for good, designing secure systems. Developing deep expertise will take time — time we no longer have. </p>
<p>But they are not our only hope, there is another way… when I started at Microsoft, I very quickly learned that even when we had engineers who wanted to secure large products (like Windows) or relatively simple products (like Word), that I couldn't do it alone. We all need to progress towards mastery at a pace that works for us.</p>
<p>If we want secure systems we must teach engineers what security means and how to do it. We have to find compelling and relatable ways to ensure everyone has the same foundational knowledge. This must include software engineers and project managers who aren’t cybersecurity experts. In my decades of teaching, I've found the answers often lie in stories from a long time ago in a galaxy far, far away. </p>
<h2>Timeless Threats</h2>
<p>Just like there are timeless tropes in fiction, there are broad, timeless sets of threats.</p>
<p><cite>Star Wars</cite> gives us a large and accessible set of examples. Concrete examples, whether they are real or fictional, help people grasp complex ideas. Being playful encourages creative thinking about threats and that leads to more secure design. </p>
<p>Threats will bring together examples from both <cite>Star Wars</cite> and real life to teach every engineer the fundamentals of how to find security problems as they build systems. These lessons will include: </p>
<ul><li>Which timeless threats keep raising their ugly heads</li>
<li>The concrete details and true stories of where those threats have emerged (struck? manifested?)</li>
<li>A framework for organizing the complex, sprawling world of security threats</li></ul>
<p>Even though we don’t have a pre-order link yet, you can sign up below and we’ll email you as soon as pre-orders open up. </p>
<p>You can also get early access to some of the content if you sign up as a “beta reader.” I’m looking for technologists (including developers, SRE, or other IT professionals) who’d like a chance to read the book early and provide feedback. If you’re interested just check “Beta Reader” on the form below. </p>
[Edit: Outdated form removed]
<!-- Begin Mailchimp Signup Form
<link href="//cdn-images.mailchimp.com/embedcode/classic-10_7.css" rel="stylesheet" type="text/css">
<div id="mc_embed_signup">
<form action="https://twitter.us3.list-manage.com/subscribe/post?u=2cf11d868166877ee6c3e7806&id=efe0f4a2fd" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank" novalidate>
<div id="mc_embed_signup_scroll">
<div class="indicates-required"><span class="asterisk">*</span> indicates required</div>
<div class="mc-field-group">
<label for="mce-EMAIL">Email Address <span class="asterisk">*</span></label>
<input type="email" value="" name="EMAIL" class="required email" id="mce-EMAIL">
</div>
<div class="mc-field-group">
<label for="mce-FNAME">First Name </label>
<input type="text" value="" name="FNAME" class="" id="mce-FNAME">
</div>
<div class="mc-field-group">
<label for="mce-LNAME">Last Name </label>
<input type="text" value="" name="LNAME" class="" id="mce-LNAME">
</div>
<div class="mc-field-group input-group">
<strong>Even More Goodness </strong>
<ul>
<li><input type="checkbox" value="1" name="group[479625][1]" id="mce-group[479625]-479625-0"><label for="mce-group[479625]-479625-0"> Be a beta reader</label></li>
<li><input type="checkbox" value="2" name="group[479625][2]" id="mce-group[479625]-479625-1"><label for="mce-group[479625]-479625-1"> Get course announcments</label></li>
</ul>
</div>
<div hidden="true"><input type="hidden" name="tags" value="10074265"></div>
<div id="mce-responses" class="clear">
<div class="response" id="mce-error-response" style="display:none"></div>
<div class="response" id="mce-success-response" style="display:none"></div>
</div> < !-- prevent form bot signups-- >
<div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_2cf11d868166877ee6c3e7806_efe0f4a2fd" tabindex="-1" value=""></div>
<div class="clear"><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div>
</div>
</form>
</div>
<script type='text/javascript' src='//s3.amazonaws.com/downloads.mailchimp.com/js/mc-validate.js'></script>
<script type='text/javascript'>(function($) {window.fnames = new Array(); window.ftypes = new Array();fnames[0]='EMAIL';ftypes[0]='email';fnames[1]='FNAME';ftypes[1]='text';fnames[2]='LNAME';ftypes[2]='text';fnames[3]='ADDRESS';ftypes[3]='address';fnames[4]='PHONE';ftypes[4]='phone';}(jQuery));var $mcj = jQuery.noConflict(true);</script>
< !--End mc_embed_signup-->
<h2>A Star Wars Day Gift</h2>
<p>In honor of Star Wars Day, I also wanted to share four lessons from one of our favorite droids. I hope you have as much fun watching them as I did putting them together. If you’re feeling inspired, please submit your answers to the extra credit questions in the YouTube comments or via email to <a id="mlink" href="https://shostack.org/blog/star-wars-day-2022/#">threatmodeling [at] gmail.com</a> for a chance to win a free book.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/4sFFdIEEJyk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>May the Fourth Be With You!</p>
</div>
<script>// "protect" email address from scraping
const toname = "threatmodeling"; const toaddy = "gmail.com";
let elink = document.getElementById("mlink");
elink.href = `mailto:${toname}@${toaddy}`;
</script>on the security of Star Wars2022-05-09T00:00:00Zhttps://shostack.org/blog/on-security-of-star-wars/AdamAdam joined Josh and Kurt on the Open Source Security podcast<span class="single-post-excerpt"><!-- excerpt -->Adam joined Josh and Kurt on the Open Source Security podcast<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/Star-Wars-Legos-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/Star-Wars-Legos-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/Star-Wars-Legos-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/Star-Wars-Legos-800w.jpeg" width="800" height="301" alt="Lego storm troopers" decoding="async" />
</picture>
<p>Adam joined Josh and Kurt on the Open Source Security podcast, <a href="https://opensourcesecurity.io/2022/05/08/episode-322-adam-shostack-on-the-security-of-star-wars/" target="_blank" rel="noopener">Episode 322</a>. It was a super-geeky and fun conversation. </p>
Application Security Roundup - May2022-05-12T00:00:00Zhttps://shostack.org/blog/appsec-roundup-may/AdamA collection of interesting appsec posts.<span class="single-post-excerpt"><!-- excerpt -->A collection of interesting appsec posts.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/may-appsec-roundup-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/may-appsec-roundup-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/may-appsec-roundup-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/may-appsec-roundup-800w.jpeg" width="800" height="300" alt="A set of locks" decoding="async" />
</picture>
<p>A collection of interesting reads over the last few months:</p>
<ul>
<li><a href="https://leaddev.com/security/supporting-influencing-and-leading-security-practitioner" target="_blank" rel="noopener">Supporting, influencing, and leading as a security practitioner</a>, Izar Tarandach</li>
<li><a href="https://b-vieira.medium.com/scaling-up-appsec-within-your-enterprise-1d1b072a868e" target="_blank" rel="noopener">Scaling up appsec</a>, Bárbara Vieira</li>
<li><a href="https://blog.relyabilit.ie/the-curse-of-systems-thinkers/" target="_blank" rel="noopener">The Curse of Systems Thinkers (Part 1)</a>, Niall Murphy. Focuses on systems without making an explicit call to security, but security is always a systems property.</li>
<li><a href="https://www.wired.com/story/arian-taherzadeh-haider-ali-fake-agents-case/" target="_blank" rel="noopener">The Fake Federal Agents Case Baffling US Intelligence Experts</a>. People often ask ‘why would someone do that’ as we threat model. Keep this one in mind: motives are hard, and even in the police world of ‘means, motive, and opportunity’, they may be baffling. (See also: crazy, obsessive people.)</li>
<li><a href="https://assets.ey.com/content/dam/ey-sites/ey-com/en_us/topics/assurance/accountinglink/ey-ttp15346-221us-03-10-2022.pdf?download" target="_blank" rel="noopener">SEC proposes requiring more cybersecurity disclosures</a> (E&Y)</li>
<li><a href="https://corpgov.law.harvard.edu/2022/04/11/proposed-sec-cyber-rules-a-game-changer-for-public-companies/https://corpgov.law.harvard.edu/2022/04/11/proposed-sec-cyber-rules-a-game-changer-for-public-companies/" target="_blank" rel="noopener">Proposed SEC Cyber Rules: A Game Changer for Public Companies</a> Paul Ferrillo and Christophe Veltsos, Harvard Law forum on corporate governance</li>
<li><a href="https://shostack.org/blog/future-of-appsec/">Adam on the Future of Appsec podcast</a></li>
<li><a href="https://www.siliconrepublic.com/people/threat-modeller-cybersecurity-jobs" target="_blank" rel="noopener">What exactly does a threat modeler do?</a> featuring Judy Kelly.</li>
<li><a href="https://www.ncsc.gov.uk/blog-post/the-technology-assurance-principles" target="_blank" rel="noopener">Technology Assurance principles</a> from the UK's NCSC. An interestingly holistic group, including ‘Product development', 'Design and functionality' and ‘Though-life’</li>
</ul>
<p class="credit-text">Photo: <a href="https://www.flickr.com/photos/davelawler/50286690202/" target="_blank" rel="noopener">Dave Lawler</a></p>
Plants grow in lunar soil2022-05-13T00:00:00Zhttps://shostack.org/blog/moon-dirt/AdamLife will find a way!<span class="single-post-excerpt"><!-- excerpt -->Life will find a way!<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/lunar-soil-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/lunar-soil-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/lunar-soil-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/lunar-soil-800w.jpeg" width="800" height="300" alt="Plants growing in lunar soil." decoding="async" />
</picture>
<p>More than 50 years after the samples were returned from the moon, scientists have grown plants in lunar soil. Fascinating results, including that older soil samples were harsher on the plants. The <a href="https://www.cbc.ca/news/science/moon-dirt-plant-scientists-nasa-1.6451351" target="_blank" rel="noopener">CBC article</a> is a good starting place, as is the NASA release.</p>
<p class="credit-text">Photo: <a href="https://www.nasa.gov/feature/biological-physical/scientists-grow-plants-in-soil-from-the-moon" target="_blank" rel="noopener">NASA</a></p>
OWASP podcast with Matt Tesauro2022-05-31T00:00:00Zhttps://shostack.org/blog/owasp-podcast/AdamAdam joined Matt Tesauro on the OWASP podcast<span class="single-post-excerpt"><!-- excerpt -->Adam joined Matt Tesauro on the OWASP podcast<!-- /excerpt --></span>
<img src="https://shostack.org/blog/img/2022/owasp-shostack-Tesauro.png" alt="The OWASP podcast logo" class="single-post-main-img" />
<p>Adam <a href="https://soundcloud.com/owasp-podcast/threat-modeling-using-the-force-with-adam-shostack-owasp-podcast-e001" target="_blank" rel="noopener">joined</a> Matt Tesauro on the OWASP podcast, and much fun was had by all!</p>
A Science of Cybersecurity Public Health2022-06-02T00:00:00Zhttps://shostack.org/blog/cybergreen/AdamI've been working with the CyberGreen Institute to develop public health as a way of thinking about cybersecurity.<span class="single-post-excerpt"><!-- excerpt -->I've been working with the CyberGreen Institute to develop public health as a way of thinking about cybersecurity.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/cybergreen-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/cybergreen-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/cybergreen-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/cybergreen-800w.jpeg" width="800" height="303" alt="Cybergreen's mission to establish a science of cybersecurity public health" decoding="async" />
</picture>
<p>For more than a year, I've been working with the CyberGreen Institute to develop public health as a way of thinking about cybersecurity, and this morning have a <a href="https://cybergreen.net/exploring-public-health-as-a-frame-for-cybersecurity/" target="_blank" rel="noopener">long blog post</a> at their new website explaining what that means.</p>Authentic Thoughts About What Can Go Wrong2022-06-09T00:00:00Zhttps://shostack.org/blog/what-can-go-wrong-authentic-thoughts/AdamThreat modeling doesn't need to be big and complex<span class="single-post-excerpt"><!-- excerpt -->Threat modeling doesn't need to be big and complex<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/backupcode-798w.webp 798w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/backupcode-798w.jpeg 798w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/backupcode-798w.png 798w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/backupcode-798w.jpeg" width="798" height="298" alt="Printed backup codes for mutlifactor authentication" decoding="async" />
</picture>
<p>This is a backup code I printed recently for some account, and I want to talk about threat modeling by asking: what can go wrong?</p>
<p>Take a minute and look at it and ask that question. I have an answer which is very real: I have no idea what site this is for. I dilligently used the print button on the site, it sat on my printer, and I forgot. I have some guesses, based on when it was printed, but I don't know, because the print button led to it being printed without a URL, without a site name or any other identifiers.</p>
<p>This doesn't require a framework — simply asking what can go wrong may well illuminate this. We can also use a framework like NEAT (necessary, explained, actionable, tested) to see that while this is necessary, there's no explanation. An explanation could have a few elements: where do I go ("just try to login to foo.com, and when prompted for your password, there'll be a "use backup code" link. Click that and enter this code.") What do I do next (do I need make sure to print additional access codes, or maybe this code will work repeatedly?).</p>
<p>It's also pretty clear that this is not tested. One beautiful element that I've seen in these systems is a prompt after you supposedly printed the codes and hit next: Please enter the code you supposedly just printed and "oops let me print more codes." I've needed that oops button at least once.</p>
<p>Threat modeling doesn't need to be big and complex. It's a way of thinking about the world, pausing to ask, what can go wrong. If you're not doing it, we have open classes coming up in July and August. These courses are longer and more intensive than just asking what can go wrong. Participants will learn how to use the Four Question Framework, tools for answering each question, apply them hands on, and get feedback from both me and their fellow students. It's awesome training, and especially beneficial for people who want to take their careers to the next level.</p>
<p>The week of <a href="https://courses.shostack.org/courses/threat-modeling-intensive-july-2022" target="_blank" rel="noopener">July 11</a>, I have a distributed training, and in August, I have two courses at Blackhat in Las Vegas one <a href="https://www.blackhat.com/us-22/training/schedule/#adam-shostacks-threat-modeling-intensive-25825" target="_blank" rel="noopener">Aug 6+7</a> the other <a href="https://www.blackhat.com/us-22/training/schedule/#adam-shostacks-threat-modeling-intensive-258251645117907" target="_blank" rel="noopener">Aug 8+9</a>. Both Blackhat courses are in person only.</p>
Application Security Roundup - June2022-06-28T00:00:00Zhttps://shostack.org/blog/appsec-roundup-june/AdamInteresting appsec posts: from medical devices to bridges.<span class="single-post-excerpt"><!-- excerpt -->Interesting appsec posts: from medical devices to bridges.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/June-roundup-puzzle-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/June-roundup-puzzle-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/June-roundup-puzzle-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/June-roundup-puzzle-800w.jpeg" width="800" height="300" alt="A set of puzzle pieces" decoding="async" />
</picture>
<p>The most interesting #appsec articles I read this month were all about requirements — from medical devices to bridges. </p>
<ul>
<li><a href="https://www.fda.gov/media/119933/download" target="_blank" rel="noopener">Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions Draft Guidance for Industry and Food and Drug Administration Staff</a>. The title says it all. Comments due soon.</li>
<li><a href="https://www.lawfareblog.com/medical-device-security-offers-proving-ground-cybersecurity-action" target="_blank" rel="noopener">Medical Device Security Offers Proving Ground for Cybersecurity Action</a> (Jim Dempsey, Lawfare.) Perspective on a new law passed by the House, and how the draft guidance should be seen from a policy and implementation perspective. </li>
<li><a href="https://www.cbc.ca/news/canada/saskatchewan/engineer-designed-bridge-collapsed-facing-discipline-1.6450110" target="_blank" rel="noopener">Engineer who designed Sask. bridge that collapsed hours after opening facing disciplinary hearing</a> (Geoff Leo, CBC.) The client, a “Rural Municipality,” insisted that “no geotechnical investigation should be obtained as the RM was concerned about the additional cost and delay.”</li>
</ul>
<p>Many of us in cybersecurity are told to avoid what the client thinks is meaningless cost and delay — or worse, looking under rocks for fear of what we'll see. Clearer guidance can help, but as Dr. Depmsey says in his Lawfare article, a lot is subsumed into what risk management is appropriate. Licensed engineers can be sanctioned, which gives them a basis to assert that steps are essential. We often lack data to quantify the probability that specific threats will manifest or to predict the outcomes, but that doesn't mean we should skip the geotechnical investigation.</p>
Congratulations to the CSRB!2022-07-14T00:00:00Zhttps://shostack.org/blog/csrb-report/AdamI'm thrilled the first CSRB report is available.<span class="single-post-excerpt"><!-- excerpt -->I'm thrilled the first CSRB report is available.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/csrb-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/csrb-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/csrb-800w.jpeg" width="800" height="300" alt="The cover from the CSRBs inaugural report." decoding="async" />
</picture>
<p>I'm thrilled that the CSRB has released <a href="https://www.cisa.gov/sites/default/files/publications/CSRB-Report-on-Log4-July-11-2022_508.pdf" target="_blank" rel="noopener">Review of the December 2021 Log4j Event</a>, their inaugural report. There's also a short Twitter <a href="https://threadreaderapp.com/thread/1547552265774448641.html" target="_blank" rel="noopener">thread</a> from Undersecretary Rob Silvers. I'm happy to see him call the board "a new institution."</p>
<p>I've not yet had a chance to read the report, and I'm sure I'll have more to say when I do, but given my <a href="https://shostack.org/resources/lessons">longstanding interest I wanted to share.</a></p>Webb Telescope comparitor2022-07-18T00:00:00Zhttps://shostack.org/blog/webb-comparator/AdamA beautiful comparison site that allows you to see how the Webb telescope compares to Hubble.<span class="single-post-excerpt"><!-- excerpt -->A beautiful comparison site that allows you to see how the Webb telescope compares to Hubble.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/webbcompare-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/webbcompare-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/webbcompare-800w.jpeg" width="800" height="281" alt="The Carina Nebula, as imaged by Hubble and Webb" decoding="async" />
</picture>
<p>Last week, the President revealed the first images from the Webb Telescope. The images were beautiful, but it was hard to see how much better they are. <a href="https://web.archive.org/web/20220714081110/https://johnedchristensen.github.io/WebbCompare/basic.html" target="_blank" rel="noopener">This comparitor site</a> has aligned images of the same parts of the universe, as imaged by the two telescopes.</p>
Major Cyber Incidents Investigations2022-07-19T00:00:00Zhttps://shostack.org/blog/mciib/AdamI'm thrilled this how to guide for standing up new investigations is available.<span class="single-post-excerpt"><!-- excerpt -->I'm thrilled this how to guide for standing up new investigations is available.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/mciib-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/mciib-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/mciib-800w.jpeg" width="800" height="300" alt="The cover from the MCIIB how-to." decoding="async" />
</picture>
<p>Victoria Ontiveros, Tarah Wheeler and I have a new report out at Harvard's Belfer Center, <a href="https://www.belfercenter.org/publication/how-stand-major-cyber-incident-investigations-board" target="_blank" rel="noopener">How to Stand Up a Major Cyber Incident Investigations Board</a>. We document the lessons and tradeoffs that we learned about or crystalized as we worked on <a href="https://www.belfercenter.org/publication/learning-cyber-incidents-adapting-aviation-safety-models-cybersecurity" target="_blank" rel="noopener">Learning from Cyber Incidents</a>. We took the name from Steve Bellovin's work on the subject to avoid confusion with the newly created CSRB. Also, Victoria and Tarah have a talk on the subject at Blackhat, <a href="https://www.blackhat.com/us-22/briefings/schedule/index.html#no-one-is-entitled-to-their-own-facts-except-in-cybersecurity-presenting-an-investigation-handbook-to-develop-a-shared-narrative-of-major-cyber-incidents-27152" target="_blank" rel="noopener">No One Is Entitled to Their Own Facts, Except in Cybersecurity? Presenting an Investigation Handbook To Develop a Shared Narrative of Major Cyber Incidents</a>. </p>
<blockquote>
The goal of this document is to provide guidance for any organization that wishes to set up an independent cyber incident review board. The document serves as a blueprint for an independent review board which may be needed by private or public organizations, such as municipalities, counties, hospitals, utilities, or other organizations that anticipate experiencing cyberattacks and wish to maximize their learning from them. We offer considerations and analysis throughout the document to present alternative options and insights. An organization such as a think tank, local or federal government agency, university, or other non-profit organization may also set up a MCIIB. Such a Board would conduct investigations of major cybersecurity incidents and deliver a report outlining the sequence of events, contributing factors, and recommendations for security practices.
</blockquote>
The Buffet Overflow cafe2022-07-22T00:00:00Zhttps://shostack.org/blog/buffet-overflow/AdamThe Buffet Overflow Cafe is now accepting orders, both local and remote 🥁<span class="single-post-excerpt"><!-- excerpt -->The Buffet Overflow Cafe is now accepting orders, both local and remote 🥁<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/buffet-overlow-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/buffet-overlow-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/buffet-overlow-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/buffet-overlow-800w.jpeg" width="800" height="300" alt="The headline" decoding="async" />
</picture>
<p><a href="https://www.computer.org/csdl/magazine/sp/2022/04/09826523/1EVdFa4nQty" target="_blank" rel="noopener">The Buffet Overflow Café</a> is a restaurant for dining cryptographers and cybersecurity professionals. A bit of humor and puns to help present important concepts. Cite: The Buffet Overflow Café, T. Kohno, C. Cobb, A. Lerner, M. Lin, A. Shostack, IEEE Security & Privacy July-Aug. 2022, pp. 4-7, vol. 20 DOI Bookmark: 10.1109/MSEC.2022.3173122</p>
Application Security Roundup - July2022-07-30T00:00:00Zhttps://shostack.org/blog/appsec-roundup-july/AdamInteresting appsec posts: machine learning, performance, and C4<span class="single-post-excerpt"><!-- excerpt -->Interesting appsec posts: machine learning, performance, and C4<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/jigsaw-july-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/jigsaw-july-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/jigsaw-july-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/jigsaw-july-800w.jpeg" width="800" height="300" alt="A set of puzzle pieces" decoding="async" />
</picture>
<p>The most interesting #appsec articles I read this month included a deep dive into attacks on machine learning, Spotify's post aboutthe C4 model, and a rant that's nominally about performance, but applies equally to security.</p>
<ul>
<li><a href="https://research.nccgroup.com/2022/07/06/whitepaper-practical-attacks-on-machine-learning-systems/" target="_blank" rel="noopener">Practical Attacks on Machine Learning Systems</a> (Chris Anley, NCC) A 40 page review, including a new taxonomy, how traditional attacks impact ML systems and a set of categorized references.</li>
<li><a href="https://engineering.atspotify.com/2022/07/software-visualization-challenge-accepted/" target="_blank" rel="noopener">Software Visualization — Challenge, Accepted </a> (Renato Kalman and Johan Wallin) Spotify discusses how they've adapted and extended the C4 model to help them understand and track their system architecture. I've been aware of C4 for a while, and most of the discussion I see is "this looks interesting," not "we adopted this."</li>
<li><a href="https://ricomariani.medium.com/a-few-good-performance-men-a8d836afa2e7" target="_blank" rel="noopener">A Few Good Performance Men</a> (Rico Mariani) Nominally about performance, but in truth — you can't handle the truth! — it's also about security. </li>
</ul>Podcast: A Fully Trained Jedi2022-08-02T00:00:00Zhttps://shostack.org/blog/ITSP-jedi/AdamA fun podcast with the ITSP team.<span class="single-post-excerpt"><!-- excerpt -->A fun podcast with the ITSP team.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/itsp-podcast-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/itsp-podcast-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/itsp-podcast-800w.jpeg" width="800" height="450" alt="A zoom call, with a Yoda" decoding="async" />
</picture>
<p><a href="https://itspmagazine.simplecast.com/episodes/a-fully-trained-jedi-you-are-not-a-conversation-with-adam-shostack-black-hat-2022-and-def-con-30-las-vegas-event-coverage-redefining-cybersecurity-podcast-with-sean-martin-and-marco-ciappelli-mDz7X6Lm?utm_source=itspmagazine&utm_medium=web" target="_blank" rel="noopener">A Fully Trained Jedi You Are Not | A Conversation With Adam Shostack | Black Hat 2022 And DEF CON 30 Las Vegas Event Coverage | Redefining CyberSecurity Podcast With Sean Martin And Marco Ciappelli</a></p>Threats — The Cover2022-08-19T00:00:00Zhttps://shostack.org/blog/threats-cover/AdamSo excited to share the cover with you<span class="single-post-excerpt"><!-- excerpt -->So excited to share the cover with you<!-- /excerpt --></span>
<!-- post-specific css for wipe reveal -->
<style>
.cover-reveal { position: relative; aspect-ratio: 0.65 / 1; max-height: 650px; margin: 3em auto; }
.cover-hidden, .cover-shown { position: absolute; inset:0; }
.cover-hidden { background-color: #000; background-size:cover; background-image: url('/blog/img/2022/starfield.png'); }
.cover-hidden p { font-size: 2.4em; font-weight: bold; font-family: 'Sarabun', sans-serif; position: absolute; top:10%; left:24%; text-align:center; color: #e89e10; }
.cover-shown {
background-size:contain; background-repeat:no-repeat; background-image: url('/blog/img/2022/threats-what-every-engineer-should-learn-from-star-wars.png');
-webkit-mask-image: linear-gradient(to top, transparent 47.5%, #fff 52.5%);
-webkit-mask-size: auto 210%;
-webkit-mask-position: bottom;
}
.cover-reveal:is(:hover, :focus) .cover-shown {
-webkit-mask-position: top;
transition: -webkit-mask-position 2.5s linear;
}
</style>
<p>Drawing on Star Wars can be risky. The stories, the imagery, they're so iconic that if you get them wrong, like not having the suns of Tatooine properly aligned, it can be jarring. That's why I am so excited to reveal the cover for <cite>Threats: What Every Engineer Should Learn From Star Wars</cite>!
</p>
<p>There are some obvious allusions ... and some looming, like a proper threat, or even, foreshadowing.</p>
<p>What's more, we even have a planned publication date: February 7, 2023</p>
<p>And now... the cover!</p>
<div class="cover-reveal">
<div class="cover-hidden">
<p>Hover or<br />Tap / Hold<br />to reveal</p>
</div>
<div class="cover-shown">
</div>
</div>
<p>The <a href="https://amzn.to/3Pu8axg" target="_blank" title="affiliate link opens in a new tab">pre-order link is here</a>, or you can <a href="https://mailchi.mp/shostack.org/adams-new-book" target="_blank" title="sign up form opens in a new tab">sign up for updates</a> and we’ll email you as soon as it’s actually available. You can sign up for just announcements, to be a beta reader, or to hear about other educational content from Adam.</p>
Threat Modeling Training Announcements Fall, 20222022-09-06T00:00:00Zhttps://shostack.org/blog/course-announcement-fall/AdamOur fall course offerings<span class="single-post-excerpt"><!-- excerpt -->Our fall course offerings<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/threat-modeling-courses-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/threat-modeling-courses-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/threat-modeling-courses-800w.jpeg" width="800" height="299" alt="a person learning in a distributed course" decoding="async" />
</picture>
<p>Lots of cool news to share!</p>
<p>Linkedin Learning has released my latest course with them, <a href="https://www.linkedin.com/comm/learning/creative-problem-solving-for-technologists" target="_blank" rel="noopener">Creative Problem Solving for Technologists</a>. To be blunt, we can't keep doing the same things and expecting better results. We need to get creative in how we solve problems, and in this course, I share my tricks and techniques for doing exactly that. Threat modeling uses both structures and creativity to help us build security in, and so this is a great compliment to other courses.</p>
<p>We also have an great live instruction course lineup for the fall! Our students say things like this — from last week's corporate offering:
<img src="https://shostack.org/blog/img/2022/praise.jpg" alt="Student praise: I loved this course! I learned so much, felt that it was paced really nicely, and was forced to think critically in ways that I know will carry into my work (and problem solving in general). Can’t thank you enough!" align="center" width="95%" />
</p>
<p>The open enrollment courses include:</p>
<ul>
<li>Seats are still available for next week's Threat Modeling Intensive (Sept 12-16, distributed) [<a href="https://courses.shostack.org/courses/threat-modeling-intensive-september-2022" target="_blank">Details</a>] </li>
<li><b>Threat Modeling for Security Champs</b> — (Oct 3-7, distributed) Delivered by Shostack + Associates. [<a href="https://courses.shostack.org/courses/threat-modeling-intensive-222-Q1-2022" target="_blank" rel="noopener">Details</a>]</li>
<li><b>Threat Modelling Hands-on</b> — Nov 10 + 11, In person in London, UK. Hosted by QA. [<a href="https://www.qa.com/course-catalogue/courses/threat-modelling-hands-on-qashotm/?daterange=06%2F09%2F2022%20-%2006%2F09%2F2024&learningMethod=Classroom&" target="_blank" rel="noopener">Details</a>]</li>
</ul>
<p>Are these not what you're looking for? I'm thinking about what course to build next, let me know your thoughts on what it should be!</p>
Doing an AMA2022-09-09T00:00:00Zhttps://shostack.org/blog/reddit-privacy-ama/AdamI'm doing a reddit AMA in /r/privacy<span class="single-post-excerpt"><!-- excerpt -->I'm doing a reddit AMA in /r/privacy<!-- /excerpt --></span><picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/adam-shostack-ama-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/adam-shostack-ama-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/adam-shostack-ama-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/adam-shostack-ama-800w.jpeg" width="800" height="544" alt="Adam holding a reddit AMA sign" decoding="async" />
</picture>
<p>Over on <a href="https://www.reddit.com/r/privacy/comments/xaaw3x/im_adam_shostack_ask_me_anything/?utm_source=share&utm_medium=web2x&context=3" target="_blank" rel="noopener">Reddit</a> /r/privacy/ I'm doing an AMA.</p>
Threat Modeling for Security Champs2022-09-14T00:00:00Zhttps://shostack.org/blog/oct-champs-course/AdamOur next open course is in just a few weeks!<span class="single-post-excerpt"><!-- excerpt -->Our next open course is in just a few weeks!<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/threat-modeling-for-security-champs-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/threat-modeling-for-security-champs-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/threat-modeling-for-security-champs-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/threat-modeling-for-security-champs-800w.jpeg" width="800" height="400" alt="A banner for the course threat modeling for security champs" decoding="async" />
</picture>
<p>Threat modeling often fails because it seems like a game of ‘ask two experts, get three answers.’ (And let's be honest, some days people feel lucky that it stops at three.)</p>
<p>One of the keys to scaling threat modeling (and software security more generally) is consistency. Ensuring that your champs have the skills, beliefs and attitudes to support a threat modeling program at scale.</p>
<p>Because we saw that fail so often, we built a course, <i>Threat Modeling For Security Champs</i>, that’s all about consistency. How do we introduce threat modeling? How do we encourage people to do useful work in useful ways? In class today, someone shared a story about a security champ who asked them to focus on the use case, “what if someone's username and password were stolen? What would we do then?”</p>
<p>Maybe that champ would tell a different story? Maybe they tried to push for multi-factor authentication? Maybe they wanted to think about misuse detection? I don't know, but more importantly, my student didn’t know.</p>
<p>That’s why getting your champs to consistency is sooo important. You leave with partners, not confusion.</p>
<p>Our next open Champs course kicks off in just a few weeks, on October 3, and we have some seats remaining, so we’re sharing a last early bird coupon for 15% off: Use eboct301-15 when you sign up at <a href="https://courses.shostack.org/courses/threat-modeling-for-security-champs-october-2022" target="_blank" rel="noopener">Threat Modeling for Security Champs</a>, and we hope to see you there!</p>
Oregon Forestry2022-09-16T00:00:00Zhttps://shostack.org/blog/oregon/AdamYou think shipping takes you a long time?<span class="single-post-excerpt"><!-- excerpt -->You think shipping takes you a long time?<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/Smiley-Face-Oregon-Forest-Boing-Boing-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/Smiley-Face-Oregon-Forest-Boing-Boing-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/Smiley-Face-Oregon-Forest-Boing-Boing-800w.jpeg" width="800" height="450" alt="A Smiley face in trees" decoding="async" />
</picture>
<p>In Oregon, Hampton Lumber used a mix of trees (Larch, Douglas Fir) to create a smiley face that appears in the autumn. <a href="https://boingboing.net/2022/09/15/enormous-smiley-face-appears-in-oregon-forest-every-fall.html" target="_blank" rel="noopener">via Boing Boing</a>.</p>Application Security Roundup - September2022-09-30T00:00:00Zhttps://shostack.org/blog/appsec-roundup-sept/AdamInteresting appsec posts: machine learning, performance, and C4<span class="single-post-excerpt"><!-- excerpt -->Interesting appsec posts: machine learning, performance, and C4<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/appsec-roundup-sept2022-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/appsec-roundup-sept2022-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/appsec-roundup-sept2022-800w.jpeg" width="800" height="300" alt="An attack tree" decoding="async" />
</picture>
<p>The most interesting #appsec articles I read this month included a sad, lawyerly response to an appsec failure, an application of STRIDE to a hotel minibar, perspective on devsecops, attack trees, human factors and more:</p>
<ul>
<li><a href="https://www.databreaches.net/urls-are-not-passwords-and-sadly-that-needed-to-be-said/" target="_blank" rel="noopener">URLs Are NOT Passwords, and Sadly, That Needed to Be Said</a> (Dissent Doe, Databreaches.net). Discusses an issue with sequential URLs and lacking access controls, which wasn't fixed for three years. (A former employee accessed the information to whistleblow, and was prosecuted for CFAA violations.)</li>
<li><a href="https://www.linkedin.com/posts/mllamazares_stride-hacking-threatmodeling-activity-6975008455971467264-kd0A" target="_blank" rel="noopener">STRIDE as applied to a hotel minibar</a> (Miguel Llamazares, Linkedin)</li>
<li><a href="https://diginomica.com/youre-probably-doing-devsecops-wrong?utm_content=bufferf210d&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer" target="_blank" rel="noopener">You’re probably doing devsecops wrong</a> (Kymberlee Price, Diginomica) Kymberlee brings the 🔥: “DevSecOps is popular with engineers because security teams are not.”</li>
<li><a href="https://arxiv.org/pdf/2204.04008.pdf" target="_blank" rel="noopener">Taxonomy of Attacks on Open-Source Software Supply Chains</a> (Piergiorgio Ladisa et al, Arxiv). An interesting taxonomy, and features one of the larger attack trees I've seen, with an <a href="https://sap.github.io/risk-explorer-for-software-supply-chains/" target="_blank" rel="noopener">interactive version</a>. Even if you're not an attack tree fan, or currently focused on supply chain security, the user interface is worth playing with. (<a href="https://sap.github.io/risk-explorer-for-software-supply-chains/#/documentation" target="_blank" rel="noopener">Documentation</a>)</li>
<li><a href="https://bobsullivan.net/cybercrime/poor-customer-service-is-our-greatest-cybersecurity-vulnerability/" target="_blank" rel="noopener">Poor customer service is our greatest cybersecurity vulnerability</a> (Bob Sullivan on his blog) Tells a predictably sad story of a scam, made possible in part by godawful customer service and eroded trust. This is an engineering problem. Designing so that customers trust your system, designing your customer-facing service so customers can always call the number on the back of their card (or initiate contact through your website) and get to a person who can actually see the relevant information in a reasonable time without being transferred six times... As Bob points out, these are engineering tasks with security implications. </li>
<li><a href="https://thehackernews.com/2022/09/warning-pypi-feature-executes-code.html" target="_blank" rel="noopener">Warning: PyPI Feature Executes Code Automatically After Python Package Download</a> (Ravie Lakshmanan, The Hacker News) Documents how PyPi has a feature that runs setup.py when you pip install or pip download code, and related the, <a href="https://pytorch.org/docs/stable/generated/torch.load.html#torch.load" target="_blank" rel="noopener">torch.load</a> function of the PyTorch open source ML framework does the same thing. It even has a big red warning, right there in the docs. It’s RCE-as-a-Service.</li>
<li>Which brings me to the Wikipedia entry for <a href="https://en.wikipedia.org/wiki/Auguste_Kerckhoffs" target="_blank" rel="noopener">Auguste Kerckhoffs</a>. It contains a banner that “This article may be expanded with text translated from the corresponding article in Volapük.” It turns out that in addition to his work in cryptography, Kerckhoffs was an outspoken proponent of the constructed language Volapük (even the Director of the Academy of Volapük), and that article has more detail than the English one.</li>
<li>Last, but not least, we announced a return of the <a href="https://mdic.org/project/2022-threat-modeling-bootcamps/" target="_blank" rel="noopener">Medical Device Threat Modeling Boot Camps</a>, organized with the Medical Device Innovation Consortium.</li>
</ul>
Bic Transit Gloria Mundi2022-10-15T00:00:00Zhttps://shostack.org/blog/pens/AdamIs it really cheating if you learn?<span class="single-post-excerpt"><!-- excerpt -->Is it really cheating if you learn?<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/spanish-cheater-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/spanish-cheater-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/spanish-cheater-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/spanish-cheater-800w.jpeg" width="800" height="300" alt="words carved into Bic pens" decoding="async" />
</picture>
<p>The image of these intricately and carefully carved notes on pens is really striking. It’s even more striking when (assuming the redittors are right) the text is largely the Spanish civil code. Taking notes, re-writing things to be concise, and then copying them out are great study aids. More, it’s really unclear to me why an exam like this would be closed book. You have to know where the material is, and be able to efficiently find it.</p>
<p>The first phases of the pandemic brought many changes to how we teach and learn. Some were good, some bad. The rushed transition was hard for teachers and administrators, losing the babysitting function of schools was hard for families. Overall, a lot of broad metrics are way down, including at the 9 year old level (<a href="https://apnews.com/article/covid-science-health-race-and-ethnicity-education-4d02ce3fe0fe432efc68373ee961c5bb" target="_blank" rel="noopener">math and reading</a>) and high school level (<a href="https://www.npr.org/2022/10/12/1128376442/act-test-scores-pandemic" target="_blank" rel="noopener">ACT falls to a 30 year low</a>). But it also gave us a chance to grapple with things like how we test and why.</p>
<p>A single exam is generally a worse indicator of learning than project work. Exams are needlessly stressful, and so we’re testing function under stress more than we're testing learning. Of course, function during stress matters for a few jobs, like police or firefighters. And, back to the pens, lawyering requires an ability to think on ones feet in front of a judge. But much of lawyering is gathering and integrating facts and developing arguments. That work doesn’t require a stressful exam.</p>
<p>When I taught Security Engineering at UW, the final was a project, with layers of check-in so that students who put in the work could succeed. When I teach threat modeling, I don’t grade, because the real grade is “how does a team do” after I’ve taught them. But anyone who wants to cheat in my threat modeling courses by carving notes into a pen will get an automatic A.</p>
<p>Source: Reddit, <a href="https://www.reddit.com/r/interestingasfuck/comments/y4t8pw/confiscated_pens_containing_cheat_notes/" target="_blank" rel="noopener">confiscated pens containing cheat notes intricately carved by a student at the University of Malaga, Spain</a></p>
Worthwhile Books Q3 20222022-10-23T00:00:00Zhttps://shostack.org/blog/worthwhile-books-q3-2022/AdamThe books I read in the third quarter of 2022 that are worth your time.<span class="single-post-excerpt"><!-- excerpt -->The books I read in the third quarter of 2022 that are worth your time.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/book-piles-750w.webp 750w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/book-piles-750w.jpeg 750w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/book-piles-750w.jpeg" width="750" height="360" alt="A big stack of books" decoding="async" />
</picture>
<p>The books I read in the third quarter of 2022 that are worth your time.</p>
<h3>Cyber</h3>
<ul>
<li><a href="https://amzn.to/3TQnrei" target="_blank" rel="noopener"><cite>Secure Coding in C and C++</cite></a>, and <a href="https://amzn.to/3eXb95b" target="_blank" rel="noopener"><cite>Effective C: An Introduction to Professional C Programming</cite></a> Robert Seacord. Research for my next book. He's authoritative.</li>
</ul>
<h3>Non-fiction</h3>
<ul>
<li><a href="https://amzn.to/3FasRwY" target="_blank" rel="noopener"><cite>Good Strategy/Bad Strategy</cite></a> by Richard Rumelt. I don't like strategy books much, and this one explains why: they're not actually strategy, but rather just pablum like “we're going to work smarter.” Strategy starts from a clear-headed search for advantage, and then using it. What gives you that ability to work smarter?</li>
<li><a href="https://amzn.to/3z6PX3J" target="_blank" rel="noopener"><cite>A Woman of No Importance: The Untold Story of the American Spy Who Helped Win World War II</cite></a> by Sonia Purnell. A really amazing story of a woman who fought Nazis while overcoming sexism from both the British SOE and American OSS, and did it all after losing a leg to gangrene. Shockingly, not yet a major motion picture. </li>
<li><a href="https://amzn.to/3MXgiXr" target="_blank" rel="noopener"><cite>Meltdown: Why Our Systems Fail and What We Can Do About It</cite></a>, Chris Clearfield, András Tilcsik. Draws heavily on the work of Charles Perrow, with a lot of focus on tight coupling and complexity as causes. Sadly, written in 2018, uses Boeing's 737 as an example of safety done right. Still broad and thought provoking.</li>
<li><a href="https://amzn.to/3CXJppe" target="_blank" rel="noopener"><cite>The Writing of the Gods: The Race to Decode the Rosetta Stone</cite></a>,
Edward Dolnick. Fascinatingly explains why the Rosetta stone was
one of several keys to regaining understanding of
hierrogliphs. (Includes some slightly irksome misunderstandings of cryptography,
such as calling Whit Diffie a cryptanalyst.) </li>
<li><a href="https://amzn.to/3f0G1BV" target="_blank" rel="noopener"><cite>The world in a Grain: The Story of Sand and How It Transformed Civilization</cite></a>, Vince Beiser. Enjoyable in the way of many books that take a single topic and expound on its crucial history to civilization. But you're reading this on a screen made from sand, with instructions sent to sand-derived chips, over sand-derived fiber optic cables. If you get this book in physical form, it will be printed in a building made from sand, warehoused in another, and driven over sand-derived roads. (One of many books with similar titles.) </li>
</ul>
<h3>Fiction</h3>
<ul>
<li><cite><a href="https://www.amazon.com/Fated-Sky-Lady-Astronaut-Novel-ebook/dp/B0756JKWCJ/ref=sr_1_2?crid=QL9UNV12XQNR&keywords=fated+sky&qid=1643388509&sprefix=fated+sky%2Caps%2C214&sr=8-2" target="_blank" rel="noopener">The Fated Sky</a></cite> by Mary Robinette Kowal continues her Lady Astronaut series.</li>
<li><a href="https://shostack.org/blog/worthwhile-books-q3-2022/" target="_blank" rel="noopener"><cite>Rule of Capture</cite></a> by Christopher Brown. Depressing, and excellent. The story of a lawyer trying to defend their client when the system has become twisted. </li>
<li><a href="https://shostack.org/blog/worthwhile-books-q3-2022/" target="_blank" rel="noopener"><cite>Hench</cite></a>, Natalie Zina Walschots is a delightful little book about the world of supervillian henchmen, hired through temp agencies. </li>
<li><a href="https://amzn.to/3gBGRFt" target="_blank" rel="noopener"><cite>The Apollo Murders</cite></a>, by astronaut Chris Hadfield. Entertaining, and delves deep into...can’t tell you, spoilers, but many of the strange little parts of the world that I find fascinating.</li>
<li><a href="https://amzn.to/3TPdOgg" target="_blank" rel="noopener"><cite>Leviathan Falls</cite></a>, by James Corey. Getting book 9 of any series to be reasonable is a huge task, and they executed remarkably well. </li>
</ul>
<h3>And, uhhh....</h3>
<p>There is another ... <a href="https://threatsbook.com/">book</a> that is. Another book I spent quite a bit of time reading and re-reading... Now publishing in February.</p>
Medical Device Threat Modeling Boot Camp2022-11-01T00:00:00Zhttps://shostack.org/blog/bootcamp/AdamOh my gosh, the boot camps are back!<span class="single-post-excerpt"><!-- excerpt -->Oh my gosh, the boot camps are back!<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/boot-camp-dec-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/boot-camp-dec-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/boot-camp-dec-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/boot-camp-dec-800w.jpeg" width="800" height="271" alt="People learning together." decoding="async" />
</picture>
<p>I'm quite excited to share news of the next two Medical Device Threat Modeling Bootcamps! Delivered in partnership with the Medical Device Innovation Consortium, these boot camps are the most intensive, in-depth training in our catalog.</p>
<p>The first boot camps integrated community building and opportunities to contribute to the <a href="https://mdic.org/resource/playbook-for-threat-modeling-medical-devices/" target="_blank" rel="noopener">Playbook for Threat Modeling Medical Devices</a>, and were sponsored by FDA. These are offered commercially, like other training, in a sustainable way, while still delivering a great ratio of participants to instructors.</p>
<p>If you want the best medical device threat modeling training available, it's December 12-16th. Visit <a href="https://mdic.tech/ThreatModeling" target="_blank" rel="noopener">mdic.tech/threatmodeling</a> for more details. </p>
Miro Threat Modeling Template for EoP2022-11-10T00:00:00Zhttps://shostack.org/blog/eop-miro/Brett CrawleyA Miro template for Elevation of Privilege<span class="single-post-excerpt"><!-- excerpt -->A Miro template for Elevation of Privilege<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/miroverse_threat_modeling_with_eop-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/miroverse_threat_modeling_with_eop-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/miroverse_threat_modeling_with_eop-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/miroverse_threat_modeling_with_eop-800w.jpeg" width="800" height="408" alt="A new template for EoP" decoding="async" />
</picture>
<p>This is a guest post from Brett Crawley</p>
<p>Threat modeling with teams is a process that requires visuals, interaction between team members and discussion and so lends itself to everyone being in a room together. This has been quite hard the last two years. It also doesn't look to be getting any easier, so we should probably get used to it. Here's how I've been doing it with several teams.</p>
<p>To get them started threat modeling their applications, I’ve been using the Elevation of Privilege card game remotely. I've trialed several collaboration tools and methods of play before I came up with the <a href="https://miro.com/miroverse/threat-modeling-with-eop/" target="_blank" rel="noopener">Threat Modeling with EoP Miro Template</a>.</p>
<p>Teams have found this quite helpful in documenting their threat models. They can export them and save them alongside the code in their repo so they have a record of the model for the version of the application they're designing. </p>
<p>I imagine if you're reading this on Adam's site you're already familiar with Elevation of Privilege, so I won't go into the details of play here.</p>
<h3>What do you need to play?</h3>
<p>You'll need:</p>
<ul>
<li>The requirements for what you're building</li>
<li>An architecture (data flow) diagram which shows your trust boundaries (where ownership or access rights to the data change in the flow)</li>
<li>A neuro diverse group of people who know what they're building.</li>
</ul>
<h3>How many people and who?</h3>
<p>More than one and as many as needed. One pair of eyes is more likely to miss something, so if there are a few of you it works better. When I said a neuro diverse group, by that I mean but don't limit to</p>
<ul>
<li>frontend or backend engineers</li>
<li>someone from QA</li>
<li>someone from product</li>
<li>an architect</li>
</ul>
<p>People in different roles think differently, they may have different context and they/you don't have to be technical (think the door is unlocked, I don't need to be a locksmith to understand that). </p>
<h3>How to use the board</h3>
<p>The board is split into 3 sections, section 1 contains some instructions for getting setup, section 2 a worked example and section 3 is where the team can model their architecture.</p>
<p>These are the steps to use the board:</p>
<ol>
<li>Create an architecture diagram</li>
<li>Insert your diagram into each of the STRIDE/STRIPED sections of the board</li>
<li>Lock all elements apart from the stickies</li>
<li>select all the stickies and "bring to front" from the context menu</li>
<li>Deal the cards (see below) and share with the players</li>
<li>Start playing EoP with the 3 of Tampering</li>
<li>Use the red sticky for the T3 card on the tampering section of the board, add your name and a description of the threat after the number</li>
<li>You can also add existing mitigations or proposed mitigations with green or orange stickies respectively</li>
<li>For cards not of the same suite, just grab their sticky and drag them over to the hand you're currently playing.</li>
</ol>
<p>Finshed the first hand, the winner chooses the next suite and everyone moves to the corresponding section of the board.</p>
<h3>Ticket up the threats</h3>
<p>If the Jira app is installed in the miro board, you can also create any tickets for mitigation work directly from the board so you don't lose track of them. This might require the administrator to configure the application link but then from the board it's as easy as clicking on a sticky and then clicking the convert to Jira issue button on the context menu as seen below:</p>
<img src="https://www.ostering.com/images/convert_to_jira_issue.png" width="50%" alt="Convert to Jira Issue dialog" />
<p>This will open the modal dialog where you can fill out the details of the ticket and then click convert as in the below example:
<img src="https://www.ostering.com/images/convert_to_jira_issue_dialog.png" width="50%" alt="convert to Jira issue dialog" />
</p>
<h3>Dealing the cards virtually</h3>
<p>However, if you're playing remotely, you (the facilitator) might also find the <a href="https://croupier.agilestationery.co.uk/" target="_blank" rel="noopener">croupier app</a> from Agile Stationery useful for dealing the cards. The app allows you to deal for Elevation of Privilege, Elevation of Privilege with Privacy, Cornucopia and LINDDUN Go. I recommend you email each player their respective hand because if you don't finish in one session, I can guarantee in the second session someone won't remember what their cards were.</p>
<p>You can read about some of my other initiatives including the OWASP Application Security Awareness Campaigns and CAPEC-STRIDE Mappings on my blog <a href="https://www.ostering.com/" target="_blank" rel="noopener">www.ostering.com</a></p>
<p><b>Hope you enjoy it</b><br />
Author: Brett Crawley, Principal Application Security Engineer @ Mimecast</p>
Trainings and discounts2022-11-11T00:00:00Zhttps://shostack.org/blog/trainings-and-discounts/AdamPeople learning together<span class="single-post-excerpt"><!-- excerpt -->People learning together<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/training-virtual-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/training-virtual-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/training-virtual-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/training-virtual-800w.jpeg" width="800" height="289" alt="undefined" decoding="async" />
</picture>
<p>In training-related news, Shostack + Associates has published our 2023 open courses schedule. We're offering a mix of our Threat Modeling Intensive, Threat Modeling for Engineers (shorter, less intense), Threat Modeling for Security Champs (more focused on organizational elements: How do we consistently threat model <i>here</i>?) and Medical Device Threat Modeling Boot Camps. All of these courses are instructor-led by Adam Shostack, and each is online-only.</p>
<p>You can see our full schedule at <a href="https://shostack.org/training/open">https://shostack.org/training/open</a></p>
<p>Also, we're offering our first-ever Black Friday and cyber Monday discounts, because who doesn't like a good discount? <b>BLKFRI10</b> can be applied at checkout to any individual purchase of our upcoming open courses between now and end of day Friday, November 25. <b>CYBRMON10</b> can be applied at checkout to any individual purchase of our upcoming open courses between Friday, November 25 and end of day Monday November 28. (All times 'anywhere on Earth.') You can apply these to open courses or self-paced (computer-based training).</p>
<p>Photo by <a href="https://unsplash.com/@jasongoodman_youxventures" target="_blank" rel="noopener">Jason Goodman</a> on <a href="https://unsplash.com/" target="_blank" rel="noopener">Unsplash</a></p>
Application Security Roundup - October and Nov2022-11-30T00:00:00Zhttps://shostack.org/blog/appsec-roundup-oct/AdamInteresting reads this month include signals from the administration, a history of appsec by one of the originals, and a longread from Apple about kernel memory design.<span class="single-post-excerpt"><!-- excerpt -->Interesting reads this month include signals from the administration, a history of appsec by one of the originals, and a longread from Apple about kernel memory design.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/jigsaw-nov-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/jigsaw-nov-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/jigsaw-nov-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/jigsaw-nov-800w.jpeg" width="800" height="301" alt="A set of puzzle pieces" decoding="async" />
</picture>
<ul>
<li><a href="https://www.wsj.com/articles/u-s-officials-say-tech-companies-must-build-secure-products-11666901221" target="_blank" rel="noopener">U.S. Officials Say Tech Companies Must Build Secure Products</a> (James Rundle, The Wall St Journal) National Cyber Director Chris Inlgis implies that we'll see liability in the next National Cybersecurity Strategy, and Anne Neuberger says "tech providers must make fundamentally secure products, starting at the earliest design phases, at no extra cost to buyers." I expect to see a lot more in this area.</li>
<li><a href="https://blog.crashoverride.com/a-personal-history-of-the-appsec-industry" target="_blank" rel="noopener">A Personal History of the AppSec Industry</a> (John Viega, Crash Override blog) John's been in appsec as long as I have, and his personal recollections are a fascinating view into how the industry has changed.</li>
<li><a href="https://security.apple.com/blog/towards-the-next-generation-of-xnu-memory-safety/" target="_blank" rel="noopener">Towards the next generation of XNU memory safety: kalloc_type</a> (Apple Security Engineering) Exposes the design of Apple's new allocator, compares it with other choices. Part of how the industry has changed? Not only is Apple doing this work, they're sharing it. </li>
<li><a href="https://kerrsecure.com/2022/09/24/the-hybrid-approach-to-threat-modeling/" target="_blank" rel="noopener">The Hybrid Approach to Threat Modeling </a> (Chris Romeo, Kerr Secure) Interesting discussion of mandatory versus voluntary threat modeling.</li>
<li><a href="https://thehackernews.com/2022/09/warning-pypi-feature-executes-code.html" target="_blank" rel="noopener"> PyPI Feature Executes Code Automatically After Python Package Download</a> (Ravie Lakshmanan, Hacker News). Unexpected that a package manager unavoidably results in code execution. I might want to install code to inspect it for suitability or other reasons.</li>
<li><a href="https://miro.com/miroverse/threat-modeling-with-eop/" target="_blank" rel="noopener">Threat Modeling with EoP (on Miro)</a> (Brett Crawley, Miroverse) What it says, and Brett guest blogged about it <a href="https://shostack.org/blog/eop-miro/">here</a>.</li>
<li><a href="https://www.threatmodelingconnect.com/" target="_blank" rel="noopener">Threat Modeling Connect</a> is a new community site, powered by Iriusrisk.</li>
</ul>
<p>Also, I’m formalizing an informal approach I’ve had: I won't include anything behind a paywall in these roundups. I’ve lost track of the time I’ve wasted trying to get access to papers whose authors have chosen to lock them away. I have library access at the University of Washington, and despite that, I can’t figure out how to get access to IEEE, Elsevier, and others, and you know? I’m done working extra hard to get workable links. “The literature” is open access.</p>
<p>If you have something open access you'd like to nominate, send it along!</p>
The Threats book is complete2022-12-08T00:00:00Zhttps://shostack.org/blog/threats-announce/AdamThreats is almost in bookstores<span class="single-post-excerpt"><!-- excerpt -->Threats is almost in bookstores<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/book-crawl-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/book-crawl-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/book-crawl-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/book-crawl-800w.jpeg" width="800" height="312" alt="A Star Wars styled text, reading A long time ago, in a galaxy far, far away. . . I started work on Threats: What Every Engineer Should Learn from Star Wars, and I could not be more excited that it's done." decoding="async" />
</picture>
<p> When my first threat modeling book came out, I talked about STRIDE in terms of Star Wars – Luke Skywalker spoofing a Stormtrooper and being caught because he was too short. And as I thought about turning that into a book, my publisher suggested “maybe you can get a chapter out of it?” And so, I started writing, because, as we know, there’s do, or do not.</p>
<p> As I wrote, I discovered both how hard it is to explain some of the concepts which we use, and how important it is that we explain them. For example: </p>
<ul>
<li>What is privilege in computing?
</li><li>How does it relate to permissions? We know it when we see it — root has privilege, but is creating a new account a privilege (as per Windows) or a matter or permissions (as on Unix)? Which is right?
</li><li>How do we teach it if the simple things we say are all complicated?
</li><li>Why is parsing safely hard, and what should every engineer know about that?
</li></ul>
<p>Once I’d opened these cans of worms, and saw students struggling with them in classes, I realized how valuable this book could be to readers, and I pushed to ensure that it’s accessible and fun, even if you’ve never seen Star Wars. (I mean, kids today …. no knowledge of the classics.)</p>
<p> Feeling hatred or anger at engineers for not understanding security? These are the easy path. Supporting engineers who aren’t security specialists — explaining our concepts and making them accessible — was far harder. But... there’s good in them. I’ve seen it. In 2022, security matters as a property of what people deliver. But engineers struggle to understand what they should do. This book aims to solve that.</p>
<p>The book ended up longer than I expected. Mostly, that’s because it turns out there’s a lot of threats that every engineer needs to know. It’s also longer because I’ve learned iteration and variation helps people learn, so we revisit threats a little.</p>
<p> As I said, I’m tremendously excited, and I’m grateful that as I’ve told people about the book, that excitement has been contagious … nope, I’m skipping that joke … that excitement has been palpable, and people have asked how they can help. Pre-ordering can certainly help and getting the word out can help too. There are some additional ideas at <a href="https://threatsbook.com/">threatsbook.com</a>.</p>
GPT-32022-12-09T00:00:00Zhttps://shostack.org/blog/gpt-3-threat-modeling/AdamText captured from GPT-3<span class="single-post-excerpt"><!-- excerpt -->Text captured from GPT-3<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/Terminators-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/Terminators-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/Terminators-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/Terminators-800w.jpeg" width="800" height="299" alt="Text from GPT3, claiming that terminators cannot take over the world in the same way that real machines or robots could." decoding="async" />
</picture>
<p>On the OWASP Slack, DS <a href="https://owasp.slack.com/archives/C1CS3C6AF/p1670563102016699" target="_blank" rel="noopener">posted:</a></p>
<blockquote>
Chatgpt can create entire STRIDE libraries. Just a sample of spoofing threats for a backend to backend service in kubernetes. I did this with all other type of threats.Also, created a library for client side threats in a similar fashion. Saved hours of work 😅 May not be perfect, but that’s why you need a human layer to it to make right decisions ;) </blockquote>
<p><a href="https://phpc.social/@andrewfeeney" target="_blank" rel="noopener">Andrew Feeney</a>:</p>
<p>
</p><div class="center">
<img loading="lazy" src="https://shostack.org/blog/img/2022/mansplaining-as-a-service.png" alt=" @webber described ChatGPT as Mansplaining As A Service, and honestly I can't think of a better description. A service that instantly generates vaguely plausible sounding yet totally fabricated and baseless lectures at an instant with unflagging confidence in it's own correctness on any topic, without concern, regard or awareness even of the level of expertise of it's audience." />
</div>
<p></p>
* Most threats are simpleMost threats are simple
* surprise
* reason by analogy
* habituation
*
* Great TM vs run of the mill TM
* book
https://www.linkedin.com/pulse/uncertain-futures-using-chatgpt-generate-risk-fair-inputs-whitsitt/
GPT-32022-12-09T00:00:00Zhttps://shostack.org/blog/gpt-3/AdamThe OpenAI chatbot is shockingly improved — its capabilities deserve attention.<span class="single-post-excerpt"><!-- excerpt -->The OpenAI chatbot is shockingly improved — its capabilities deserve attention.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/Terminators-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/Terminators-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/Terminators-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/Terminators-800w.jpeg" width="800" height="299" alt="Text from GPT3, claiming that terminators cannot take over the world in the same way that real machines or robots could." decoding="async" />
</picture>
<p>This week, it’s been hard to avoid text from OpenAI’s GPT-3 text generator, which has gotten transformationally better over the last year. Last year, as I prepared for my OWASP Appsec keynote (<cite><a href="https://shostack.org/blog/25-years-appsec-keynote/">25 Years in AppSec: Looking Back, Looking Forward</a></cite>), I was given early access, and gave it the prompt “In 25 years, application security will be...” and, after filtering through some answers, it gave me some ok bullet points. This year, it gave me...something quite different, and I inserted the text into my slides:</p>
<div class="center">
<img loading="lazy" src="https://shostack.org/blog/img/2022/GPT3-ai-appsec.png" alt="It is difficult to predict exactly how application security will evolve in 25 years, as it will likely depend on a variety ot factors such as critical issue and will require ongoing attention and investment. Some potential developments in the field of application security in the next 25 years could include the adoption of new technologies such as quantum computing, the development of more sophisticated security protocols, and the integration of artificial intelligence and machine learning into security systems. Additionally, it is likely that there will be an increasing emphasis on protecting user data and privacy in the digital world." />
</div>
<p>The impact of freely available text that’s reasonably convincing is something that OpenAI (and others) have been thinking about, but it’s now viscerally here. A few interesting longreads I’ve come across are:</p>
<ul>
<li><a href="https://clivethompson.medium.com/on-bullshit-and-ai-generated-prose-611a0f899c5" target="_blank" rel="noopener">On Bullshit, And AI-Generated Prose</a> (Clive Thompson, Medium) </li>
<li><a href="https://www.bleepingcomputer.com/news/technology/openais-new-chatgpt-bot-10-dangerous-things-its-capable-of/" target="_blank" rel="noopener">OpenAI's new ChatGPT bot: 10 dangerous things it's capable of</a> (Ax Sharma, Bleeping Computer) </li>
<li><a href="https://www.vice.com/en/article/xgyp9j/openais-new-chatbot-will-tell-you-how-to-shoplift-and-make-explosives" target="_blank" rel="noopener">OpenAI’s New Chatbot Will Tell You How to Shoplift And Make Explosives</a> (Janus Rose, Vice) </li>
<li><a href="https://arxiv.org/pdf/2212.03551.pdf" target="_blank" rel="noopener">Talking About Large Language Models</a> (Murray Shanahan, Arxiv) makes the point that Large Language Models (including GPT-3) are really answering the question “what's the most common next token after my prompt?” (Added Dec 11)</li>
<li><a href="http://glinden.blogspot.com/2022/12/ml-and-flooding-zone-with-crap.html" target="_blank" rel="noopener">ML and flooding the zone with crap</a> (Greg Linden, Geeking With Greg) points out that the majority is not always right, especially when there’s an incentive to manipulate appearances. (Added Dec 11)</li>
<li><a href="https://arxiv.org/abs/2210.07321" target="_blank" rel="noopener">Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods</a> (Evan Crothers, Nathalie Japkowicz, Herna Viktor, Arxiv) an extensive survey, added Dec 11.</li>
<li><a href="https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html" target="_blank" rel="noopener">A New Chat Bot Is a ‘Code Red’ for Google’s Search Business</a> (Nico Grant, Cade Metz, New York Times) added Dec 22</li>
<li><a href="https://www.sentinelone.com/labs/11-problems-chatgpt-can-solve-for-reverse-engineers-and-malware-analysts/" target="_blank" rel="noopener">11 Problems ChatGPT Can Solve For Reverse Engineers and Malware Analysts</a> (Aleksandar Milenkoski & Phil Stokes, SentinelOne), but note that Jordan <a href="https://infosec.exchange/@psifertex/109592064688809178" target="_blank" rel="noopener">points out</a> that “the diagram is very wrong...locals are too high (they go below the saved return address), saved registers are too low (they go above arguments), and who knows what's up with the padding bit it doesn't make sense at that location.”</li>
<li><a href="https://www.nytimes.com/interactive/2022/12/26/magazine/yejin-choi-interview.html" target="_blank" rel="noopener">An A.I. Pioneer on What We Should Really Fear</a> (David Marchese, NY Times) added Dec 28</li>
</ul>
<p>Also, not a longread, but attributed to <a href="https://phpc.social/@andrewfeeney" target="_blank" rel="noopener">Andrew Feeney</a>:</p>
<div class="center">
<img loading="lazy" src="https://shostack.org/blog/img/2022/mansplaining-as-a-service.png" alt=" @webber described ChatGPT as Mansplaining As A Service, and honestly I can't think of a better description. A service that instantly generates vaguely plausible sounding yet totally fabricated and baseless lectures at an instant with unflagging confidence in it's own correctness on any topic, without concern, regard or awareness even of the level of expertise of it's audience." />
</div>
Human-Centered Security2022-12-14T00:00:00Zhttps://shostack.org/blog/human-centered-podcast/AdamThreat Modeling for UX Designers with Adam Shostack on Heidi Trost's podcast<span class="single-post-excerpt"><!-- excerpt -->Threat Modeling for UX Designers with Adam Shostack on Heidi Trost's podcast<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/human-centered-security-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/human-centered-security-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/human-centered-security-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/human-centered-security-800w.jpeg" width="800" height="300" alt="A podcast user interface, showing a 40 minute episode, Threat Modeling for UX Designers with Adam Shostack" decoding="async" />
</picture>
<p>Heidi Trost is a user researcher and strategist, who's writing a book about improving the UX of security. She was kind enough to have me on her podcast, <a href="https://share.transistor.fm/s/ad97b9b4" target="_blank" rel="noopener">Human-Centered Security</a>. We hurt ourselves and the people who use our software when we make it hard-to-use, and I really enjoyed the conversation. It's a tremendously important topic, and I'm really looking forward to her book.</p>
<p>One of the themes of the interview was how usability people should engage with security, and usability has so much to offer even if the usability folks know nothing about security. I was browsing Mastodon, and a <a href="https://infosec.exchange/@adamshostack/109490631084786685" target="_blank" rel="noopener">poll I ran there</a>. The poll was inspired by a post that “only” 17% of users on the infosec.exchange Mastodon instance were using MFA. I asked why, and the reason that came back in the comments was: usability, and especially discoverability:</p>
<ul>
<li> <a href="https://hachyderm.io/@ezekiel/109490964109261711" target="_blank" rel="noopener">”Not shown here is "I didn't know it was available" which was the reason I would've given before I enabled it”</a> </li>
<li> <a href="https://infosec.exchange/@benfromkc/109490655230366793" target="_blank" rel="noopener">”Must have missed it”</a> </li>
<li> <a href="https://infosec.exchange/@TristanDostaler/109490672876735233" target="_blank" rel="noopener">”Didn't know I could”</a> </li>
<li> <a href="https://techhub.social/@empress/109490682817357832" target="_blank" rel="noopener">”Never got a prompt”</a> </li>
<li> <a href="https://infosec.exchange/@e38383/109490815563021729" target="_blank" rel="noopener">”actually I just don’t want to set it up on mobile and keep forgetting it when I’m at my desk”</a> </li>
</ul>
<p>These are not subtle points — they are lack of awareness because the onboarding and account setup features don't make setting up 2FA part of the default path, or even a prompt of “would you like to...?” This is not a criticism per se. Perhaps the decision was made to focus on the number of steps to get a person setup and using the software. Maybe it’s something else? But we need partnership between security and usability to make things like this better. </p>
Liability for the Second Death Star2022-12-16T00:00:00Zhttps://shostack.org/blog/death-star-liability/AdamLegal commentary on the Second Death Star Project<span class="single-post-excerpt"><!-- excerpt -->Legal commentary on the Second Death Star Project<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/2nd-death-star-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/2nd-death-star-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/2nd-death-star-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/2nd-death-star-800w.jpeg" width="800" height="296" alt="Blueprint for the second death star" decoding="async" />
</picture>
<p>In <a href="https://thelegalgeeks.com/2018/02/02/did-the-general-contractor-on-the-death-star-project-have-to-build-the-second-death-star-for-free/" target="_blank" rel="noopener">Did the General Contractor on the Death Star Project Have to Build the Second Death Star for Free</a>, attorneys Michael Hogue and Katharine Malone look at contract law as it might result in the contractor spending $825 quadrillion to perform to contract.</p>
<p>It’s a fun and fascinating read, and my primary quibble is that Galen Erso’s claim in Rogue One that the defects are subtle and hard to find is nonsense.</p>
<p>The design defects are palpable, and even a cursory review by the buyer should have noticed a lack of blow-out panels or baffles around the reactor in general, or screens or other barriers in the ventilation shafts. The Empire accepted the work, performed accelerated acceptance testing and flew the station into battle.</p>
<p>I’m not going to assert it was clearly reckless only because that’s a specific legal standard. Also, it’s unclear if Erso was working for the Empire or the defense contractor at various crucial moments.</p>
<p>Now, hindsight is 20/20, and so perhaps those defects aren’t as visible as I claim? The experts working for the Empire would like you to think so, and I find their lack of analysis...disturbing. The uncontested fact is that the Rebel team was able to perform the analysis and prepare a plan while the first Death Star was in hyperspace, pursuing the Falcon. The record shows no analysis of the delivered product by Imperial Engineering; in fact, officers as senior as Grand Moff Tarkin chose to be aboard while test firing a planet destroying laser for the first time.</p>
<p>As an engineer, the design evidence is easily analyzed. The Death Star needed better answers to the question of what can go wrong. The Death Star Architect can <a href="https://www.youtube.com/watch?v=agcRwGDKulw" target="_blank" rel="noopener">complain</a> all he wants about not being told about space wizards. Those complaints don’t change the fact that his methodology was overly-focused on attackers and their capabilities. (I’ve <a href="https://shostack.org/blog/who-are-we-kidding-with-attacker-centered-threat-modeling/"> written extensively</a> about that on this blog and elsewhere.) Also, can we just for a minute address the shocking religious intolerance here? Referring to members of an ancient and respected religion who were subjected to a genocide as “space wizards” is really beneath the architect, especially when he’s on stage.</p>
<p>The question of resisting subtle design flaws is a fascinating one, but we shouldn't allow ourselves to be distracted from good engineering practice.</p><p>
</p><p>If you enjoyed this, you might want to check out my new book, <a href="https://threatsbook.com/"><cite>Threats: What Every Engineer Should Learn from Star Wars</cite></a>, available for pre-order wherever fine books are sold, and publishing February 7, 2023.</p>
<p>Credit: <a href="https://starwars.fandom.com/wiki/DS-2_Death_Star_II_Mobile_Battle_Station" target="_blank" rel="noopener">Death Star II plans</a> via Wookiepedia.</p>
Space News2022-12-18T00:00:00Zhttps://shostack.org/blog/spacenews/AdamInteresting space news, including ISS and 3d printed engines<span class="single-post-excerpt"><!-- excerpt -->Interesting space news, including ISS and 3d printed engines<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/nasa_iss_on_orbit_status_report_120720_945-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/nasa_iss_on_orbit_status_report_120720_945-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/nasa_iss_on_orbit_status_report_120720_945-800w.jpeg" width="800" height="575" alt="6 - count em! spacecraft docked with the international space station" decoding="async" />
</picture>
<p>There are seven spacecraft (including the ISS) in this schematic from NASA. Seven. Take a moment and think about that. For comparison, here’s Deep Space 9, the imagined future of 30 years ago, with ... three, and docking points for only twelve. And I had to search for an image that showed anything other than the station.</p>
<p><img loading="lazy" src="https://shostack.org/blog/img/2022/DS9_Space_Wiki2.png" alt="The fictional deep space 9 station" width="960" height="540" /></p>
<p>Also, I came across a company that 3d printed an aerospike rocket engine, claiming “It’s easily 15-20% more efficient than bell nozzle engines.” (The engine is upside-down as shown, and also this is a concept model printed as a cutaway.)</p>
<p><img loading="lazy" src="https://shostack.org/blog/img/2022/aerospike-engine-4.png" alt="A very organic looking rocket engine" width="682" height="1024" /></p>
<p>You can read <a href="https://www.hyperganic.com/magazine/works/its-rocket-science/" target="_blank" rel="noopener">Hyperganic's explanation of why it’s interesting</a>, and I'll flag that they take a lot of words to say <a href="https://www.hyperganic.com/blog/but-does-it-work/" target="_blank" rel="noopener">they haven’t fired it</a>. Their approach centers on software that allows rapid iteration and analysis within a constraint space. It's fascinating.</p>
<p class="credit-text">Credits: NASA, via <a href="https://spaceref.com/space-stations/nasa-space-station-on-orbit-status-7-december-2020-six-spacecraft-docked/" target="_blank" rel="noopener">SpaceRef</a>, Hyperganic.</p>
Worthwhile Books Q4 20222022-12-19T00:00:00Zhttps://shostack.org/blog/worthwhile-books-q4/AdamBooks that I read in the fourth quater that are worth your time include several about safety with lessons for cybersecurity<span class="single-post-excerpt"><!-- excerpt -->Books that I read in the fourth quater that are worth your time include several about safety with lessons for cybersecurity<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/books-dec-2022-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/books-dec-2022-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/books-dec-2022-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/books-dec-2022-800w.jpeg" width="800" height="312" alt="a big pile of books" decoding="async" />
</picture>
<h3>Cyber</h3>
<p>Both of my so-called cyber books are not about cyber — if you judge a book by its cover. But both have some pretty important lessons for us.</p>
<ul>
<li><a href="https://amzn.to/3TARAPq" target="_blank" rel="noopener"><cite>Do Safety Differently</cite></a>, Sidney Dekker and Todd E Conklin. This book is about safety and the tension between form and function, between bureaucracy and work, and it’s beautiful. Almost everything in it makes sense for cybersecurity. Substitute ‘security’ for ‘safety,’ and almost all the critiques and suggestions hold. </li>
<li><a href="https://www.amazon.com/Into-Raging-Sea-Thirty-Three-Megastorm-ebook/" target="_blank" rel="noopener"><cite>Into the Raging Sea</cite></a>, Rachel Slade. The story of how over-confidence, bad information, cost cutting and pressure to put business over safety led to the first American-flagged merchant ship to sink in decades. Each of these has direct analogies in cybersecurity. The excellent book by Ms. Slade is complemented by a formal NTSB report, described and linked by <a href="https://gcaptain.com/ntsb-releases-el-faro-investigation-final-report/" target="_blank" rel="noopener">GCaptian</a>. I am optimistic that the CSRB will be giving us such reports for cybersecurity. </li>
</ul>
<h3>Non-fiction</h3>
<ul>
<li><a href="https://amzn.to/3VTmK60" target="_blank" rel="noopener"><cite>Index, A History of the</cite></a>. Duncan, Dennis. A delightful little history.</li>
<li><a href="https://amzn.to/3VUbQwF" target="_blank" rel="noopener"><cite>Chasing New Horizons: Inside the Epic First Mission to Pluto</cite></a>, Alan Stern and David Grinspoon. A really well told story of the first Pluto mission. Inexplicably, the authors spend time explaining why the vote to demote Pluto was academic politics. But also convincingly. I especially enjoyed the engineering and risk management discussions, which were accessible and went to good depth.</li>
<li><a href="https://amzn.to/3Dmb7xe" target="_blank" rel="noopener"><cite>Extraterrestrial: The First Sign of Intelligent Life Beyond Earth</cite></a>, Avi Loeb, chair of Harvard’s Astronomy department. Makes an interesting case that Oumuamua was the product of intelligent life elsewhere in the universe. Sadly, we also learn that Dr. Loeb doesn’t like science fiction, and he somehow missed that von Neumann proposed self-replicating machinery, which seem relevant to probabilities and origins.</li>
</ul>
<h3>Fiction</h3>
<ul>
<li><a href="https://amzn.to/3eV3NPE" target="_blank" rel="noopener"><cite>Black Sun</cite></a>, by Rebecca Roanhorse. A rare choice for me to anti-recommend, but the exceptionally graphically violent opening pages were too much for me. I was looking forward to the setting, and I enjoyed her <a href="https://www.amazon.com/Resistance-Reborn-Star-Wars-Skywalker-ebook/dp/B07RLV3NCP/" target="_blank" rel="noopener">Resistance Reborn</a> (in the Star Wars universe).</li>
<li><a href="https://europeanastrobiology.eu/strangest-of-all/" target="_blank" rel="noopener"><cite>Strangest of all: Anthology of Astrobiological SF</cite></a> from the Auropean Astrobiology Institute, stories that explore unexpected biological possibilities. In the strongest of the hard-SF tradition.</li>
</ul>
Usable Security Matters2022-12-21T00:00:00Zhttps://shostack.org/blog/security-alerts/joeUsable security matters<span class="single-post-excerpt"><!-- excerpt -->Usable security matters<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/usable-security-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/usable-security-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/usable-security-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/usable-security-800w.jpeg" width="800" height="339" alt="A person screaming into a phone" decoding="async" />
</picture>
<p> Well, I didn’t need any coffee this morning, because my bank sent me email:</p>
<blockquote>Phone number changed<br />You changed your phone number on your account Profile.<br /> If you made this change, you don’t need to do anything.<br /> If you did not make this change, please call us immediately at #<br /></blockquote>
<p> No one in my household made that change. And the message is awful.</p>
<p> Both the message and the experience could be improved by usability engineering. This isn’t a matter of me looking at it and suggesting changes, it’s a simple matter of usability testing.</p>
<p>First, don’t say it was <i>my</i> account if it wasn’t. Say exactly whose account it was. Say <i>which</i> account it was, in this case an IRA. If there’s a privacy concern, have that information available to the first account rep I reach at the number you gave me. The second CSR, Matt, was very helpful, and <i>believed</i> that he knew which account was the cause, but frankly, that’s not as reassuring as I’d like. I’d like to know that someone knows precisely the cause of the alert.</p>
<p>At least the phone number they included in the alert was one that’s on their website. But the reason I care about usable security is at the end of the day, it’s people I hope to protect.</p>
<p> As it turns out, it was not a phone number on <b>my</b> account, but a new IRA being set up, and the phone number on it was a phone number on other accounts. I’m not naming the bank, because I learned a long time ago: praise specifically, criticize generally. </p>
<p class="credit-text">Photo: <a href="https://www.pexels.com/photo/annoyed-black-woman-having-video-chat-on-smartphone-at-home-5699823/" target="_blank" rel="noopener">Alex Green, Pexels</a></p>
Darkreading: Threat Modeling in the Age of OpenAI's Chatbot2022-12-22T00:00:00Zhttps://shostack.org/blog/gpt3-threatmodeling-darkreading/AdamPointer to Adam’s latest Darkreading article<span class="single-post-excerpt"><!-- excerpt -->Pointer to Adam’s latest Darkreading article<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/AI-Brain_light-alamy-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/AI-Brain_light-alamy-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/AI-Brain_light-alamy-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/AI-Brain_light-alamy-800w.jpeg" width="800" height="306" alt="A stock image about AI" decoding="async" />
</picture>
<p>Over at Dark Reading, I have a new article, <a href="https://www.darkreading.com/operations/threat-modeling-in-the-age-of-openai-s-chatbot" target="_blank" rel="noopener">Threat Modeling in the Age of OpenAI's Chatbot</a>. It opens:</p>
<blockquote>There's been a flood of news about OpenAI's new GPT-3 Chatbot. For all the very real critiques, it does an astounding and interesting job of producing reasonable responses. What does it mean for threat modeling? There's real promise that it will transform threat modeling as we know it.</blockquote>
What do you get the person who has everything?2022-12-23T00:00:00Zhttps://shostack.org/blog/spacepostcards/AdamThe best gift for 2023<span class="single-post-excerpt"><!-- excerpt -->The best gift for 2023<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/space-postcards-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/space-postcards-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/space-postcards-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/space-postcards-800w.jpeg" width="800" height="300" alt="The words send a postcard to space and some postcards" decoding="async" />
</picture>
<p> While they won't deliver as a last-minute gift, next year, you can <a href="https://www.clubforfuture.org/missions/" target="_blank" rel="noopener">send a postcard to space</a>:</p>
<blockquote>Earth is finite. To protect our gem of a planet and enable a future of abundance and growth, Earth needs space. Write or draw your vision on a postcard for what you think the future of life in space will look like—or any other message you’d like to send to space. Send your postcard to Club for the Future, we'll launch it to space and back on a New Shepard rocket, and return it to you stamped “Flown to Space.” Send us as many postcards as you'd like—we'll fly them all.</blockquote>
<p>And if you read those words and say “Challenge Accepted!” do keep me, ummm, posted.</p>
Gavle Goat2022-12-25T00:00:00Zhttps://shostack.org/blog/gavle/AdamA straw goat that has not been burned.<span class="single-post-excerpt"><!-- excerpt -->A straw goat that has not been burned.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/gavlegoat2022-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/gavlegoat2022-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/gavlegoat2022-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/gavlegoat2022-800w.jpeg" width="800" height="336" alt="The Gavle Goat on Christmas, 2022" decoding="async" />
</picture>
<p> Apparently, the Gavle Goat <a href="https://twitter.com/Gavlebocken/status/1607021566520352769?s=20&t=wfWs3sTdoMnrqWG9UBlwAA" target="_blank" rel="noopener">made it</a>. Merry Chrismas! </p>
More on GPT-3 and threat modeling2022-12-25T00:00:00Zhttps://shostack.org/blog/more-on-gpt3/AdamMore thoughts about AI and threat modeling<span class="single-post-excerpt"><!-- excerpt -->More thoughts about AI and threat modeling<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/AI-tm-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/AI-tm-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/AI-tm-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/AI-tm-800w.jpeg" width="800" height="408" alt="A Disney princess hacking" decoding="async" />
</picture>
<p>Since writing my <a href="https://www.darkreading.com/operations/threat-modeling-in-the-age-of-openai-s-chatbot" target="_blank" rel="noopener">Dark Reading article</a>, I had some time to continue investigating ChatGPT and threat modeling, and a few notes. First, I fed it the same question that DS did, and got very different answers. This may relate to the Dec 15th software update, or it may be a feature of GPT that it tries to emit unique answers. Second, the answers I get are much more about IP level attacks than Kubernetes. (And in fact, I got exceptionally similar responses when asking about a rust environment, and it failed to fall into my traps of asking about a covfefe or flempt environments.)</p>
<p>One of the things I focus on in training people is consistency; we want to avoid the problems that come with “ask two experts, get three answers.” So inconsistency on the part of the chatbot is an issue if we’re naively taking the output. But we might not be. We might be using a chatbot to provoke thinking or coverage.</p>
<p>That use, as I mention in my Dark Reading article, might lull people into relaxing and not paying close attention. And that seems really bad. But what other effects might we see?</p>
<p>One of the values of threat modeling is that it gives time, space and name for a ritual of assembling people and talking about design. The agile community has learned the value of naming and making time for work that’s otherwise invisible: “We haven’t done a retro in a while,” or “we haven’t done backlog grooming for our next sprint.” If threat modeling evolves to be a ‘conversation’ with an AI, then we lose that social space. Part of the magic of threat modeling can be that bringing people together for a conversation that otherwise doesn’t happen.</p>
<p>Another value from the work may be that when people invest time and energy in a project, the project takes on psychological significance. This effect is widespread and has many names: sunk cost, ‘the Ikea effect’...whatever we call it, if people invest in threat modeling work, then they’re invested in having it matter, and that will include driving fixes to completion. (Perhaps the Ikea effect is part of why code review seems to work better than static analysis? No one’s offended when you ignore gcc -W3.)</p>
<p>Driving fixes to completion is a strange liminal space. It’s both part of threat modeling and more traditional software engineering. If we consider it as part of threat modeling, does the chatbot pentulently nag you about ‘what happened with that conversation we had about missing authentication on the calendar interface?’ Does it @ you in Jira? (How chatbots might replace junior project managers is a fascinating topic for another day, as is how we make senior project managers in such a world.)</p>
<p>With that, my first tries are reproduced below, as text not as screencaptures because text is better than pictures of text.</p>
<hr />
<p><b>Prompt</b>: "List all spoofing threats for a system which has a backend to backend interaction in a kubernetes environment in a table format with columns threats, description, and mitigations"</p>
<p>Response: Here is a table listing several spoofing threats for a system with a backend-to-backend interaction in a Kubernetes environment, along with a description and potential mitigations for each threat:</p>
<table>
<thead>
<tr>
<th>Threat</th>
<th>Description</th>
<th>Mitigations</th>
</tr>
</thead>
<tbody>
<tr>
<td>IP spoofing</td>
<td>An attacker can pretend to be a trusted server by sending packets with a forged source IP address, potentially gaining access to restricted resources or disrupting communication between the backend and other services.</td>
<td>Implement IP filtering at the network level to block packets with suspicious or forged source IP addresses. Use strong authentication mechanisms, such as mutual TLS, to verify the identity of servers and prevent unauthorized access.</td>
</tr>
<tr>
<td>DNS spoofing</td>
<td>An attacker can manipulate the Domain Name System (DNS) to redirect traffic intended for a legitimate server to a malicious one, potentially stealing sensitive data or conducting a man-in-the-middle attack.</td>
<td>Use DNS security measures such as DNSSEC and DNS over HTTPS to prevent DNS spoofing attacks. Implement network segmentation and access controls to limit the ability of an attacker to manipulate DNS records.</td>
</tr>
<tr>
<td>ARP spoofing</td>
<td>An attacker can send forged Address Resolution Protocol (ARP) messages to manipulate the ARP cache of a target device, potentially redirecting traffic intended for the legitimate device to the attacker's own device.</td>
<td>Use ARP spoofing detection and prevention tools to detect and block forged ARP messages. Implement network segmentation and access controls to limit the ability of an attacker to manipulate ARP caches.</td>
</tr>
<tr>
<td>MAC spoofing</td>
<td>An attacker can change the Media Access Control (MAC) address of their device to match that of a trusted device, potentially gaining unauthorized access to restricted resources or disrupting communication between the backend and other services.</td>
<td>Implement MAC filtering at the network level to block packets with suspicious or forged MAC addresses. Use strong authentication mechanisms, such as mutual TLS, to verify the identity of devices and prevent unauthorized access.</td>
</tr>
</tbody>
</table>
<p>It's important to note that these are just a few examples of spoofing threats that could potentially impact a backend-to-backend system in a Kubernetes environment. It's always a good idea to conduct a thorough threat assessment and implement appropriate security measures to protect against potential spoofing attacks.</p>
<p><b>Prompt</b>: make that more specific to kubernetes</p>
<P>Here are some additional details on how these spoofing threats could specifically impact a backend-to-backend system in a Kubernetes environment:<p></p>
<ul>
<li>IP spoofing: In a Kubernetes environment, an attacker could potentially use IP spoofing to gain access to restricted resources within the cluster by pretending to be a trusted service or pod. For example, an attacker could send packets with a forged source IP address that appears to be a trusted database service, potentially gaining access to sensitive data stored in the database. To mitigate this threat, you could implement network policies within the cluster to block packets with suspicious or forged source IP addresses, and use mutual TLS to verify the identity of servers and prevent unauthorized access.</li>
<li>DNS spoofing: In a Kubernetes environment, an attacker could potentially use DNS spoofing to redirect traffic intended for a legitimate service to a malicious one, potentially stealing sensitive data or conducting a man-in-the-middle attack. To mitigate this threat, you could use DNS security measures such as DNSSEC and DNS over HTTPS to protect against DNS spoofing attacks, and implement network segmentation and access controls to limit the ability of an attacker to manipulate DNS records.</li>
<li>ARP spoofing: In a Kubernetes environment, an attacker could potentially use ARP spoofing to redirect traffic intended for a legitimate pod to their own device, potentially stealing sensitive data or conducting a man-in-the-middle attack. To mitigate this threat, you could use ARP spoofing detection and prevention tools to detect and block forged ARP messages, and implement network segmentation and access controls to limit the ability of an attacker to manipulate ARP caches.</li>
<li>MAC spoofing: In a Kubernetes environment, an attacker could potentially use MAC spoofing to gain unauthorized access to restricted resources or disrupt communication between the backend and other services by changing the MAC address of their device to match that of a trusted one. To mitigate this threat, you could implement MAC filtering at the network level to block packets with suspicious or forged MAC addresses, and use strong authentication mechanisms such as mutual TLS to verify the identity of devices nd prevent unauthorized access.</li>
</ul>
<p>It's important to note that these are just a few examples of how spoofing threats could potentially impact a backend-to-backend system in a Kubernetes environment. It's always a good idea to conduct a thorough threat assessment and implement appropriate security measures to protect against potential spoofing attacks.</p>
<p><b>Prompt</b>: what about kubernetes specific attacks</p>
<p>There are several spoofing attacks that could specifically impact a backend-to-backend system in a Kubernetes environment:</p>
<ol>
<li>Service spoofing: An attacker could create a malicious service with the same name as a legitimate one, potentially stealing sensitive data or disrupting communication between the backend and other services.</li>
<li>Pod spoofing: An attacker could create a malicious pod with the same name as a legitimate one, potentially gaining access to restricted resources or disrupting communication between the backend and other services.</li>
<li>Node spoofing: An attacker could create a malicious node with the same name as a legitimate one, potentially gaining access to restricted resources or disrupting communication between the backend and other services.</li>
<li>Namespace spoofing: An attacker could create a malicious namespace with the same name as a legitimate one, potentially gaining access to restricted resources or disrupting communication between the backend and other services.</li>
</ol>
<i>Editorial comment: Adam doesn’t know why this list was numbered, and the previous was bulleted.</i>
<p>To mitigate these threats, you could implement network policies within the cluster to block traffic from suspicious or unauthorized services, pods, nodes, and namespaces. You could also use strong authentication mechanisms such as mutual TLS to verify the identity of services, pods, nodes, and namespaces and prevent unauthorized access. Additionally, you could implement access controls to limit the ability of an attacker to create malicious resources with the same name as legitimate ones.</p>
<p class="credit-text">Image credit: DALL-E, “a stock art hacker breaking into computers with the hacker as a disney character” </p>
</P>Fast, Cheap and Good, Redux2022-12-28T00:00:00Zhttps://shostack.org/blog/fast-cheap-good-redux/AdamA new paper on how fast, cheap and good can combine into something we usually discount.<span class="single-post-excerpt"><!-- excerpt -->A new paper on how fast, cheap and good can combine into something we usually discount.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/fast-cheap-good-2-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/fast-cheap-good-2-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2022/fast-cheap-good-2-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/fast-cheap-good-2-800w.jpeg" width="800" height="288" alt="A screenshot of the paper's header" decoding="async" />
</picture>
<p>I'm happy to announce a new article, <a href="https://shostack.org/files/papers/fast-cheap-good-shostack.pdf" target="_blank" rel="noopener">Fast, Cheap, Good: Lightweight Methods Are Undervalued</a>. The title really says what I want to say: we don’t like to study fast and cheap techniques, which are, in practice, good enough for satisficing humans. Too often, we discount such things, and instead believe the signs car mechanics hang on their walls.</p>
<p>This is a more formal analysis of some of the intuitions that led me to write <a href="https://shostack.org/files/papers/Fast-Cheap-and-Good.pdf" target="_blank" rel="noopener">Fast, Cheap and Good: An Unusual Tradeoff Available in Threat Modeling</a>, and a more formal version is in press. There's also both <a href="https://shostack.org/files/papers/fastcheapgood22/fastcheapgood.epub" target="_blank" rel="noopener">epub</a> and <a href="https://shostack.org/files/papers/fastcheapgood22/fastcheapgood.html" target="_blank" rel="noopener">HTML</a> versions (created with tex4ebook, htlatex respectively) if you prefer one of those.</p>
<p>Brett Watston pointed out <a href="https://ctan.org/pkg/tex4ebook" target="_blank" rel="noopener">tex4ebook</a> which basically works, and I'll add epub and html versions soon. <del>Also, I'd like to make a epub or html version available for accessibility, but Pandoc won't parse the tex that overleaf produces for use with Arxiv. Advice on simple ways to do it would be appreciated.</del></p>
Threat Model Thursday: curl2022-12-29T00:00:00Zhttps://shostack.org/blog/threat-model-thursday-curl/AdamLooking at a threat model for curl, the command line web client.<span class="single-post-excerpt"><!-- excerpt -->Looking at a threat model for curl, the command line web client.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/curl-ThreatModel-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/curl-ThreatModel-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/curl-ThreatModel-800w.jpeg" width="800" height="300" alt="A data flow diagram for curl" decoding="async" />
</picture>
<p>For Threat Model Thursday, I want to look at a fairly unusual threat model, for cURL, the command line web client. It was done by Alex Useche and Anders Helsing of Trail of Bits, and kudos to Daniel Stenberg for publishing the results. (He blogs about motive and choice <a href="https://daniel.haxx.se/blog/2022/12/21/the-2022-curl-security-audit/" target="_blank" rel="noopener">here</a>, and the report is <a href="https://raw.githubusercontent.com/trailofbits/publications/master/reviews/2022-12-curl-threatmodel.pdf" target="_blank" rel="noopener">here</a>.)</p>
<p>As always, the goal of this work is to constructively engage with and learn from the threat model we’re examining. So what are we examining? It’s a threat model for cURL. It starts with context: who’s doing the work, why it’s being done. It continues with a set of system diagrams (models), and a list of threat actors and possible attack vectors, which I might call threats. That’s followed by detailed findings and a methodology section.</p>
<p>Generally this is solid work. It’s also unusual — I rarely see anyone threat modeling what we might initially think of as a local client binary.</p>
<h3>The Models</h3>
<p>Let’s start with the model. The “High-Level Data Flow” on page 8 is nice. It sets the context for what we’re looking at and how it works. I think the arrangement could be improved. Generally, time in diagrams flows like text: left to right and top to bottom, and so this diagram might be better mirrored. Also, what’s up with the local attacker and no boundary?</p><p>
<img loading="lazy" src="https://shostack.org/blog/img/2022/curl-Local-System.jpg" height="430" width="444" alt="A subset of the system diagram showing few internal boundaries" />
</p><p>We can add trust boundaries to how the local system is portrayed. I think the CA store is owned by root, while cURL’s memory storage is in the user’s boundary. To re-draw a little, I’d like to see boundaries added, maybe like these: </p>
<img loading="lazy" src="https://shostack.org/blog/img/2022/curl-local-redrawn.jpg" height="454" width="476" alt="a redrawn version of the previous diagram with additional boundaries sketched in." />
<p>To be fair, in the binary data flow model on the next page, the CA is in a “local system” boundary, which is good. And while this diagram is interesting and I’m learning things from reading it, I’m finding myself asking: what’s the point of the binary data flow model? Clearly, a lot of work went into documenting all the things happening inside the libcurl binary, but there are no futher boundaries as I usually think of them. What are the logical blocks here? Perhaps this is an important part of the consultants, or even the developers, solidifying their mental models to help with other analysis?</p>
<img loading="lazy" src="https://shostack.org/blog/img/2022/curl-binary-data-flow.jpg" height="410" width="800" alt="An excerpt of the local data flows within curl." />
<h3>The Threats</h3>
<p>The list of attack paths includes “An attacker sitting on the same machine where cURL application is being run. Has the same or lower level of privileges as the end user.” These (“the same” and “lower”) seem like importantly different paths to me. An attacker with exactly the same privileges as the victim is largely uninteresting. There are confused deputy problems, where, say, bash, gets input from an outsider, but if there’s a case where that’s happening, perhaps we should portray it in the diagram and create a restricted version where that can be done?</p>
<p>The attack list is also interesting. Is “Invalid usage of libcurl by third-party application developers” really a threat which cURL can address? And what about “Flawed cross-endpoint transfers such as insufficient Same Origin Policy correctness and insecure HTTP redirects”? What should cURL do with an insecure redirect? Should I, the end user need to include --yes-pretend-to-be-a-web-browser, or should I have an option --strict-security? I don’t have an answer for this, and I’m glad to see them thinking about these issues.</p>
<p>I was also surprised to learn that curl includes email, telnet and even gopher. (But not archie? 🤯 do these kids have no respect for heritage?)</p>
<p>I don’t have a lot to add to the list of findings.</p>
<h3>The Methodology</h3>
<p>The methodology seems generally solid. I’m on record as saying that I’m not a fan of listing attackers, but here the attackers are characterized by position, rather than by motivation.</p>
<p>I was somewhat confused seeing ‘risk assessment’ as a security control. It’s listed in Appendix B, which says “Risk assessment policies, vulnerability scanning capabilities, and risk management solutions.” (To get a little meta, is this where cURL should be doing their own threat modeling of new features?)</p>
<p>All in all, this cURL threat model is different than models for a three-tier web app, or a mobile banking app, and it was a lot of fun to read. Thanks to the team who did the work, and to Daniel Stenberg for releasing it.</p>
The Last 7472022-12-30T00:00:00Zhttps://shostack.org/blog/747/AdamThoughts on the last 747, and engineering culture.<span class="single-post-excerpt"><!-- excerpt -->Thoughts on the last 747, and engineering culture.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2022/last-747-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2022/last-747-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2022/last-747-800w.jpeg" width="800" height="449" alt="The last 747 rolls out." decoding="async" />
</picture>
<p><a href="https://www.geekwire.com/2022/boeing-rollout-last-747/" target="_blank" rel="noopener">The last 747</a> has rolled off the assembly line, which was older than I am. There are still <a href="https://simpleflying.com/fly-on-a-boeing-747/" target="_blank" rel="noopener">a few 747s</a> in active passenger service, the final years of production were all cargo configured.</p>
<p>The upper decks were something special. I’ve had the chance to fly them a couple of times, once in what I remember as a crowded and dark setup on El Al, and a few years ago on British Airways where the upper deck was mostly empty. That plane was almost certainly never getting updated. Parts like the overhead bins might have even been original — the plastic was thick, 1980s style, rather than newer compounds. But I digress.</p>
<p>The assembly line isn’t the only thing that’s going away. Five hundred senior Boeing engineers are retiring as a result of financial incentives and pension plans being linked to inflation in ways that are hard to change. It would be easy to assert that “spreadsheet engineering” seems more important than aerospace engineering to the new management, who’ve decamped from Seattle to Chicago and then Virginia, home of precisely one customer.</p>
<p>When I was growing up, Boeing was perhaps the consumate example of an engineering-driven company that made great profits on fantastic products. But that’s changed, as documented in <a href="https://amzn.to/3t46fI9" target="_blank" rel="noopener"><cite>Flying Blind</cite></a>, the excellent book by Peter Robison.</p>
<p> The days of management walking over to the assembly lines are over, but more than that, so are the days when engineers could all walk over to the assembly lines. Engineering is now spread around the globe for both cost and talent reasons. In a way, that’s great — Boeing can employ engineers can’t tolerate our weather in Seattle. What’s lost is harder to express, but includes opportunities to physically see the assembly line and the products. (<a href="https://www.seattletimes.com/business/boeing-aerospace/ditching-russia-boeings-engineer-search-intensifies-in-india-brazil/" target="_blank" rel="noopener">Seattle time story on closing of Boeing Russia.</a>)</p>
<p>On retention, the Seattle Times <a href="https://www.seattletimes.com/business/boeing-aerospace/boeing-lost-hundreds-of-experienced-seattle-area-engineers-last-month/" target="_blank" rel="noopener">reports</a>:</p>
<blockquote>
<p>Boeing has pushed out the launch of an all-new airplane toward the end of this decade. By then, there’ll be a limited number of senior engineers left who worked on development of the last all-new plane — the 787, delivered in 2011 — and have detailed knowledge of that complex process and its pitfalls.</p>
<p>Boeing’s white-collar union said more than 500 highly experienced U.S. engineers and more than 130 technical staff retired in November.</p>
<p>For most of them, it was because a pending interest rate adjustment would otherwise have dramatically slashed their lump sum pension payouts by as much as an entire year’s salary.</p>
<p>To try to retain the most critical expertise, Boeing identified 26 key engineers represented by its white-collar union, the Society of Professional Engineering Employees in Aerospace, and offered them specific financial enticements to stay, SPEEA spokesperson Bill Dugovich said...Only nine accepted the offer.</p>
</blockquote>
<p> That last story closes, “And Boeing has been documenting how its engineering work is done and standardizing technical design reviews so the “tribal knowledge” acquired through experience can be more readily passed down to younger engineers. The loss of so many senior people exiting this year will be a stiff test of that strategy.”</p>
<p>While engineering experience and a good culture of design reviews are both exceptionally important, the really crucial test will be: do executives prioritize safe planes and great engineering over short term results? They didn’t with the 737 MAX, and I’d hope to hear more about that from leaders as they talk about what they’ll be delivering in 2030 and beyond.</p>
<p>The theme of how managerial priorities impact product development is important to both safety and security. Each can be either squeezed or underfunded, or asked to operate more efficiently, depending on your perspective. Each seems most successful when “nothing’s happening.” That’s when leadership (in contrast to management) is most important.</p>
<p class="credit-text">Photo: Boeing/Paul Weatherman.</p>
The Appsec Landscape in 20232023-01-05T00:00:00Zhttps://shostack.org/blog/the-appsec-landscape-in-2023/AdamExternal changes will be driving appsec in 2023. It’s time to frame the decisions in front of you.<span class="single-post-excerpt"><!-- excerpt -->External changes will be driving appsec in 2023. It’s time to frame the decisions in front of you.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/landscape-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/landscape-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/landscape-800w.jpeg" width="800" height="301" alt="a fuzzy landscape with people reading a map" decoding="async" />
</picture>
<p>2023 brings new challenges and new opportunities for software companies, and all companies are now software companies. Many of those will come from regulation coming into force. This post provides you with context, and then some economic, regulatory, and engineering factors that play into decisions your organization needs to make.</p>
<p>Today, new American regulations are sectoral. Medical device makers have been hearing the FDA increasing pre-market requirements for years in a series of drafts. New rules from OMB are less prominent, but much more impactful to those who want to sell to governments: “Federal agencies must only use software provided by software producers who can attest to complying with the Government-specified secure software development practices.” (<a href="https://www.whitehouse.gov/wp-content/uploads/2022/09/M-22-18.pdf" target="_blank" rel="noopener">OMB-M-22-18</a>.) There are important European regulations like the Digital Markets act, but I’m less familiar with them.</p>
<p>That’s today. Over the next few years, you should expect regulation to cover more software more stringently. Requirements like these often lean on <a href="https://csrc.nist.gov/publications/detail/sp/800-218/final" target="_blank" rel="noopener">NIST SSDF</a>. Even if they don’t have a line “you must threat model,” because there’s a lot of ways you can threat model, it’s hard to comply if you aren’t threat modeling.</p>
<p>If you’re not asking “what can go wrong” with each feature, and with the design as a whole, how will you attest that you “have been following a risk-based approach for secure software development?” (OMB again.) Threat modeling is the activity that lets us identify those risks.</p>
<p>The roll-out of new regulations like these will, probably, generally, have some grandfather clauses that accepts some fraction of legacy code created before the regulation. And in today’s economic climate, you may want to take a wait and see approach.</p>
<p>Your decisions depends on how you expect the future to unfold, and there’s (at least) three elements which you should consider.</p>
<h3>Economic Factors</h3>
<p>Disruption is complex. Many companies will invest in lobbyists to argue against rules like those in 22-18. Others will invest in appsec programs and paying down some technical debt. Those are bets on customers caring more, more regulatory requirements, or that an ounce of security prevention is worth a pound of <del>Log4shell</del> CircleCI pain avoidance.</p>
<p>Buyers are starting to treat supply chain as an engineering issue, augmenting their questionnaires with hard questions about how you build products. Like the rest of the future, it’s unevenly distributed. The FDA and many governments are starting to care deeply. Are you at a company or in a sector that’s had “lighthouse events?” Has a key customer had them?</p>
<p><i>Are your good customers going to drive hard on security improvements?</i></p>
<h3>Regulatory Factors</h3>
<p>In my Oct/Nov Appsec Roundup (<a href="https://shostack.org/blog/appsec-roundup-oct/">here</a>) I wrote:</p>
<blockquote>National Cyber Director Chris Inlgis implies that we'll see liability in the next National Cybersecurity Strategy, and Anne Neuberger says "tech providers must make fundamentally secure products, starting at the earliest design phases, at no extra cost to buyers." I expect to see a lot more in this area.</blockquote>
<p>I stand by that, and expect that the techlash is going to get worse, and that the regulations designed to bring big tech into line will get more complex and onerous. SBOM is a part of this, and I expect to see more tools, such as VEX, carried on top of it. We’re also likely to see requirements for architecture diagrams. For example, the FDA’s latest <a href="https://www.fda.gov/media/119933/download" target="_blank" rel="noopener">pre-market cyber draft</a> includes:</p>
<blockquote>[Architecture views including]...Detailed diagrams and supporting explanatory text that identify all manufacturer and network assets of the system in which the device will operate, including but not limited to:
<ol type="i">
<li>Device hardware itself (including assessments for any commercial platforms);</li>
<li>Applications, hardware, and/or other supporting assets that directly interact with the targeted device, such as configuration, installation/upgrade, and data transfer applications;</li>
<li>Health care facility-operated assets;</li>
<li> Communications/networking assets; and</li>
<li> Manufacturer-controlled assets, including any servers that interact with external entities</li>
</ol> (Page 37, lines 1392-1405)
</blockquote>
<p>Other than changing “health care” to “customer,” that can be adopted by roughly any buyer or industry regulator. There’s an argument to be made that a single, broad law would be less complex or onerous to comply with than a slew of sectoral laws with slight differences. (I know of large companies that employ teams of people whose entire jobs is mapping these compliance regimes to the internal engineering guidance, and producing visibly high quality documentation defending their mappings.)</p>
<p>A Lawfare article <a href="https://www.lawfareblog.com/medical-device-security-offers-proving-ground-cybersecurity-action" target="_blank" rel="noopener">Medical Device Security Offers Proving Ground for Cybersecurity Action</a> provides some context, and the title really says a lot: medical is a proving ground. [Update, Jan 9, the author has <a href="https://www.lawfareblog.com/one-small-legislative-step-cybersecurity" target="_blank" rel="noopener">another article</a>; apparently the omnibus spending bill included medical device updates, “the first time since the Energy Policy Act of 2005 that Congress has expressly authorized any agency to regulate the cybersecurity of privately owned and operated systems of any kind.”]</p>
<p><i>Are your regulators going to move quickly or slowly?</i></p>
<h3>Engineering Factors</h3>
<p>The sorts of products you deliver influence your engineering costs. The more intensely your customers integrate your products, the harder it is to change them. For example, if you sell an operating system, changing the APIs that your customers use imposes costs on them. Microsoft invests in supporting old APIs. It’s expensive, but contributes to Microsoft’s success.</p>
<p>Threat modeling reduces the need for security-driven refactoring, by helping you anticipate problems and design for them. It’s the measure twice, cut once of software. If your dependencies are small, you may get less value from threat modeling.</p>
<p>Similarly, if the data you process is lower-value, you may get less value from threat modeling.</p>
<p>In contrast, if your dependencies are high, your data is valuable, or you need to avoid refactoring or technical debt, more threat modeling may be a good part of your 2023 plans.</p>
<p><i>How important is reducing re-work in your product engineering?</i></p>
<h3>Going Forward</h3>
<p>External forces beyond features will continue to influence product delivery in 2023. Smart executives are examining those forces, engaging with their leadership teams and making plans that take them into account. When I talk to execs, the first things I ask are “what do you hear from your leadership team?” “Are they aware of these changes?”</p>
<p class="credit-text">Credit: Dall-E, “a black and white photo of a landscape with a meandering and forking path, desert on the left and green and lush on the right. businesspeople on the path reading a map”</p>
Threat Modeling is Measure Twice, Cut Once2023-01-12T00:00:00Zhttps://shostack.org/blog/threat-modeling-is-measure-twice-cut-once-for-software/AdamThreat Modeling is the software version of measure twice, cut once.<span class="single-post-excerpt"><!-- excerpt -->Threat Modeling is the software version of measure twice, cut once.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/measure-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/measure-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/measure-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/measure-800w.jpeg" width="800" height="300" alt="A pile of tape measures" decoding="async" />
</picture>
<p>Anyone who’s taken a shop class knows that spending time to measure carefully saves you time and material. The mistakes you prevent mean that you don’t spend energy figuring out how to use a bit of wood that’s too short, or cut into a shape that makes it scrap.</p>
<p>It’s easy to think that software’s different. After all, there’s no raw materials, so what could be wasted? There are several sets of answers. The direct answers include:</p>
<ul>
<li>Developer time on the feature</li>
<li>Developer time on dependencies</li>
<li>Computer cycles (especially in AI and cloud systems, these get pricey)</li>
</ul>
<p>Staff time is expensive. Time spent on a feature that’s getting re-worked is less efficient. When the rework flows to dependencies, the time and energy grows. (“There’s never time to do it right, there’s always time to do it over.”) There are also indirect costs, including:</p>
<ul>
<li>Cost of communication</li>
<li>Costs of uncertainty</li>
<li>Reward for working efficiently</li>
</ul>
<p>Communication costs are simply explaining what’s happening, and what should be done now. The costs of uncertainty are both higher and less obvious. When there’s a clear plan and people’s role is clear, they tend to work harder. “Let’s get this done.” “Let’s wait. The folks at Acme always revise the spec a couple of times.”</p>
<p>People don’t like to admit to slowing down like that, but we learn to do it. (There’s a reason that email clients have a sending delay feature.)</p>
<p>Threat modeling can give us a space to measure, discuss and then develop features.</p>
<p>Another way to think of this is the maxim from Fred Brooks, “Plan to throw one away, you will anyway.” The one we throw away can be the whiteboard model, or it can be fully written (but insecure) software.</p>
<p> One of the inflection points here is learning. Do you need to develop it to get feedback? Agile methodologies bring value by developing in small units, so you can make it work and then adjust. There are times when that’s crucial. Other times the code demands things you don’t learn from, say, error handling. You need to do it, and you often need to do it to get a prototype that people can use. And if you throw away that code, there’s no learning from the code you write to handle weird DNS edge cases. (Or whatever it was.)</p>
<p>When rework is cheap, it makes little sense to do this. (There are plenty of articles out there on why developers shouldn't measure twice, seemingly based on the idea that creating software is inexpensive and easy. If that’s the case for you, cool. You do you.)</p>
<p>But if you find yourself closer to typical ratios of 30% new code, 70% maintenance and refactoring, then maybe some “measure twice, cut once” makes sense for you.</p>
<p class="credit-text">Credit: <a href="https://unsplash.com/photos/zPZ9vqqDNBA" target="_blank" rel="noopener">patricia serna</a>, unsplash. </p>
Threats: The Table of Contents2023-01-16T00:00:00Zhttps://shostack.org/blog/threats-table-of-contents/AdamLike the Force, each threat has a light side, and a dark side.<span class="single-post-excerpt"><!-- excerpt -->Like the Force, each threat has a light side, and a dark side.<!-- /excerpt --></span>
<p>Let me share the table of contents for <cite>Threats</cite>. Each chapter is devoted to a threat. And like the Force, each threat has a light side and a dark side. For example, the book opens with Spoofing and Authenticity. But unlike the Force, we don't hope for balance. We hope to understand the threat, because that's the best way to anticipate and address the threat, and bring peace to the galaxy. (Or so we hope.)</p>
<div class="center">
<img loading="lazy" src="https://shostack.org/blog/img/2023/threats-table-of-contents-med.png" height="1005" width="840" alt="1. Spoofing and Authenticity; 2. Tampering and Integrity; 3. Repudiation and Proof; 4. Information Disclosure and Confidentiality; 5. Denial of Service and Availability; 6. Expansion of Authority and Isolation; 7. Predictability and Randomness; 8. Parsing and Corruption; 9. Kill Chains" />
</div>
<p>I'm excited that we have a simple table of contents that works as a quick guide to the book. It's common for technical books to have complex detailed tables of contents, but here, each chapter is structured the same:</p>
<ul>
<li>The threat</li>
<li>Mechanisms</li>
<li>Specific scearios or technologies</li>
<li>Defenses</li>
</ul>
<p>Many trees could have died to bring you that information. (There's also a bibliography and two indexes, because like the Force... no, that's not it. But there really is a second index, which I'll reveal soon.)</p>
<p><cite>Threats: What Every Engineer Should Learn From Star Wars</cite> will be available January 25th wherever fine books are sold. You can pre-order on <a href="https://amzn.to/3Pu8axg" target="_blank" rel="noopener">Amazon</a>, <a href="https://www.adasbooks.com/book/9781119895169" target="_blank" rel="noopener">Ada's Technical Books</a> (my local technical bookstore), or <a href="https://bookshop.org/p/books/threats-what-every-engineer-should-learn-from-star-wars-adam-shostack/18577959?ean=9781119895169" target="_blank" rel="noopener">bookshop.org</a>.</p>
Threats Book is Complete2023-01-17T00:00:00Zhttps://shostack.org/blog/threats-books-complete/AdamThe serious side of the book<span class="single-post-excerpt"><!-- excerpt -->The serious side of the book<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/stack-of-threats-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/stack-of-threats-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/stack-of-threats-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/stack-of-threats-800w.jpeg" width="800" height="312" alt="A stack of copies of threats: what every engineer should learn from star wars" decoding="async" />
</picture>
<p>I’m really excited to have my first copies of the new book in hand. I’ve said a lot about how much fun I’ve had writing it, and perhaps haven’t written enough about what a serious book it really is. Let me share that, and then close with some information about signed copies, launch parties, and other fun things.</p>
<p>Computer security is a young field, and evolving rapidly. People arrive through many paths, few of which include fundamental understanding of threats. As we learn from Rick Proto (and as I discuss in the introduction) theories of insecurity inform theories of security. Checkbox security is inevitable unless people understand what they’re defending against. And as I’ve learned from teaching, my students often don’t understand the threats well enough to discuss them even for a few sentences.</p>
<div class="center">
<a href="https://shostack.org/blog/threats-table-of-contents/%E2%80%9D%3E%3Cimg%20loading=%E2%80%9Dlazy%E2%80%9D%20src=%E2%80%9D/blog/img/2023/threats-table-of-contents-small.png" height="670" width="571" alt="1. Spoofing and Authenticity; 2. Tampering and Integrity; 3. Repudiation and Proof; 4. Information Disclosure and Confidentiality; 5. Denial of Service and Availability; 6. Expansion of Authority and Isolation; 7. Predictability and Randomness; 8. Parsing and Corruption; 9. Kill Chains"></a>
</div>
<p>These foundations are crucial, and they are the focus of the book, which starts with STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service and Expansion of Authority.) If you’re paying attention, you may notice that I’ve redefined the E from Elevation of Privilege. That chapter was a dark moment in my journey. When you try to explain it, Privilege in cybersecurity is a squishy concept. The best definition I found was ‘the ability to change security configuration,’ and then we get to mechanism – is it special like the ability to bind to a low port? A matter of who can sudo? The concept of authority is more crisp and measurable.</p>
<p>Those chapters are followed by ones on Predictability and Randomness, Parsing and Corruption, and Kill Chains. Corruption often leads to Expansion of Authority, but not always, and is complex enough that it deserves its own chapter, and I think is in many ways a new contribution that will be helpful even to those steeped in security. So all that to say: this is a serious book in a fun package, and I hope you enjoy it and learn from it.</p>
<p>For more on the book, please visit <a href="https://threatsbook.com/" target="_blank" rel="noopener">threatsbook.com</a>. Available — soon — wherever fine books are sold.</p>
Threats Book Launch Party2023-01-19T00:00:00Zhttps://shostack.org/blog/threats-book-launch-seattle/AdamThe live launch party for Threats!<span class="single-post-excerpt"><!-- excerpt -->The live launch party for Threats!<!-- /excerpt --></span>
<img loading="lazy" src="https://shostack.org/blog/img/2023/Book_Launch-800x600.png" width="800" height="600" alt="6PM, Feb 7, Adas Books, 425 15th Ave East, Seattle" />
<p>(Today! Tomorrow!)
I’m excited to share that I’ll be doing a book reading and signing at Ada’s Books, Feb 7th at 6PM.
<a href="https://www.adasbooks.com/author-talk-what-every-engineer-should-learn-star-wars-adam-shostack" target="_blank" rel="noopener">official site</a>, and they ask you <a href="https://fb.me/e/3PGwokKjh" target="_blank" rel="noopener">RSVP on Facebook.</a></p>
<p>I’ll also be doing a virtual launch party, 3PM Feb 8th for those not in Seattle, or otherwise unable to attend. Details on that are on the <a href="https://threatsbook.com/">threatsbook page</a> <del>will follow</del>.</p>
<p class="credit-text">Poster: <a href="https://www.fiverr.com/oskardraws?source=order_page_summary_seller_link" target="_blank" rel="noopener">oskardraws</a></p>
Friday Star Wars: Presidents Daily Brief2023-01-20T00:00:00Zhttps://shostack.org/blog/friday-star-wars-jan-2023/AdamThe return of the (my) Friday Star Wars posts<span class="single-post-excerpt"><!-- excerpt -->The return of the (my) Friday Star Wars posts<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/typewriter-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/typewriter-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/typewriter-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/typewriter-800w.jpeg" width="800" height="300" alt="An evil looking person at an old fashioned typewriter" decoding="async" />
</picture>
<p>Years ago, I tried to drop a fun Star Wars themed post each Friday, and I'm going to try to get back to that rhythm. Today's choice is <a href="https://medium.com/@aaronbman/" target="_blank" rel="noopener">Aaron Berman</a>, who claims to have written many articles for the President's Daily Brief. You'll have to read it to see why it's a Friday Star Wars pick. </p>
<p class="credit-text">Credit: Dall-E “an author at a typewriter turning to the dark side of the force in the style of the maltese falcon”</p>
Threats, To The Supply Chain2023-01-22T00:00:00Zhttps://shostack.org/blog/threats-to-the-supply-chain/AdamThe threats book is in the supply chain, inconsistently.<span class="single-post-excerpt"><!-- excerpt -->The threats book is in the supply chain, inconsistently.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/overnight-threatsbooks-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/overnight-threatsbooks-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/overnight-threatsbooks-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/overnight-threatsbooks-800w.jpeg" width="800" height="300" alt="An amazon screencapture showing threats as the #1 new release in Computer Network Security, and delivering overnight" decoding="async" />
</picture>
<p>Many people are asking about the lack of an ebook version of <cite>Threats</cite>. I think the real reason is Wiley’s a Microsoft shop, the ebook is on an Azure server, and AWS is being petulant.</p>
<p>More seriously, ebook is now here [as of Jan 26] and audio book is forthcoming.</p>
<p>Let me pull back the curtain a bit and explain. Books are complex products. My Wiley team included an acquisition editor, a project editor, a content refinement specialist, a copyeditor, a marketing lead, a cover designer — and those are the folks whose names I know. There was also a proofreader, a layout person, an indexer, and probably more. Each of those people works on multiple books, and there’s staging and pipelining so that everyone stays busy.</p>
<p>Something around October, I got an email telling me that our publication date of Feb 7th was at risk, and we were out of slop days in the schedule. (They had a fancier name for them.) I made the call to buckle down and hit my dates, and accept that there might be some quality or assurance tradeoffs.</p>
<p>If you’re paying close attention, you may notice that I said Feb 7th, and people have books in hand today, in January? First, the original publication date was February 7. If you read the end of the <a href="https://threatsbook.com/" target="_blank" rel="noopener">the Threats book site</a>, it still says the 7th, we’re still doing the launch parties then, because launch parties also have some production lead times, and we wanted to make sure we could have a stack of books ready for signing.</p>
<p>But the awesome team at Wiley had the printed books done early, and we made the call to put them in the supply chain, because we didn’t expect my fans to be lined up outside bookstores at midnight, clamoring. 😂 And so the physical books are flowing. In fact, Amazon is now delivering them overnight, 2 days ahead of the revised publication date!</p>
<p>I assume that ebooks take longer to produce because there’s more platforms to test on, including e-ink readers and phones and desktop software. There may be text-to-speech to check? (I don’t know if that’ll be enabled, I hope so.) And similarly, the audiobook takes time to record, check, master, and all the rest. Barnes and Noble is taking pre-orders and shows it as available March 21st.</p>
<p>As William Gibson famously said, the future is already here, it’s just unevenly distributed. Or as Yoda said, “Patience! Adventure? Excitement? A Jedi craves not these things.” And while this book is an exciting adventure, it’s not the sort that Padawans get themselves killed chasing, so it’s ok to crave.</p>
<p>So: the books are flowing, and the various forms are just catching up.</p>
Not all developers can be Jedi2023-01-23T00:00:00Zhttps://shostack.org/blog/not-all-jedis/AdamAdam joined Paul Roberts on the Conversing Labs podcast<span class="single-post-excerpt"><!-- excerpt -->Adam joined Paul Roberts on the Conversing Labs podcast<!-- /excerpt --></span>
<p>Adam joined Paul Roberts on the Conversing Labs podcast (43 minutes, video):</p>
<p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/DhKJWPTL_pc" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>
</p>
The Hacker Mind2023-01-27T00:00:00Zhttps://shostack.org/blog/hackermind/AdamAdam spoke with Robert Vamosi of The Hacker Mind podcast<span class="single-post-excerpt"><!-- excerpt -->Adam spoke with Robert Vamosi of The Hacker Mind podcast<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/hackermind-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/hackermind-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/hackermind-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/hackermind-800w.jpeg" width="800" height="330" alt="A Star Warsy character in a hoodie" decoding="async" />
</picture>
<p>I joined Robert Vamosi on <i>The Hacker Mind Podcast</i> for an episode titled <a href="https://forallsecure.com/blog/the-hacker-mind-podcast-what-star-wars-teaches-us-about-threats" target="_blank" rel="noopener">What Star Wars Can Teach Us About Threat Modeling</a>. (43 minutes, transcribed.) We covered a lot of ground, and I think it came out super well.</p>
<p class="credit-text">Credit: Dreamstudio, “a hacker in a hoodie in the style of star wars cantina”</p>
Fumée d'incendie2023-01-29T00:00:00Zhttps://shostack.org/blog/ocean-serpent/Adam<span class="single-post-excerpt"><!-- excerpt --><!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/fumeedincendie-799w.webp 799w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/fumeedincendie-799w.jpeg 799w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/fumeedincendie-799w.jpeg" width="799" height="533" alt="A sea monster skeleton at sunset" decoding="async" />
</picture>
<p>
I have a very occaisonally indulged fascination with giant art installations, and enjoyed digging into the story behind <a href="https://www.flickr.com/photos/134360779@N04/52656383984/in/feed-134328640-1675001590-1-72157721655485244" target="_blank" rel="noopener">Fumée d'incendie</a>, a 130 meter art installation where the Loire reaches the Atlantic Ocean. (<a href="https://en.wikipedia.org/wiki/Serpent_d%27oc%C3%A9an" target="_blank" rel="noopener">Wikipedia</a>.)</p>
<p class="credit-text">Photo: <a href="https://www.flickr.com/photos/134360779@N04/" target="_blank" rel="noopener">Antoine Bouyer</a></p>
Application Security Roundup - January2023-01-30T00:00:00Zhttps://shostack.org/blog/appsec-roundup-jan-2023/AdamSo many interesting articles from AI to an organizatoion of socio-technical harms, fascinating incident reports about Uber and Circle CI and some history of attack trees.<span class="single-post-excerpt"><!-- excerpt -->So many interesting articles from AI to an organizatoion of socio-technical harms, fascinating incident reports about Uber and Circle CI and some history of attack trees.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/appsec-jigsaw-1-2023-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/appsec-jigsaw-1-2023-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/appsec-jigsaw-1-2023-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/appsec-jigsaw-1-2023-800w.jpeg" width="800" height="299" alt="A set of puzzle pieces" decoding="async" />
</picture>
<p>Let me kick off with an entertaining and thoughtful adaptation of Maslow’s Hierarchy of Needs, by Irfaan Santoe, <a href="https://www.iriusrisk.com/resources-blog/hierarchy-of-needs-for-threat-modeling?utm_content=234235770&utm_medium=social&utm_source=linkedin&hss_channel=lcp-10045607" target="_blank" rel="noopener">The Hierarchy of Needs for Threat Modeling</a> (IriusRisk blog), and <a href="https://www.softwaremaxims.com/blog/not-a-supplier" target="_blank" rel="noopener">I am not a supplier</a> in whcih Thomas Depierre objects to that label and its implications.</p>
<p>There are several very interesting retrospectives: </p>
<ul>
<li> <a href="https://circleci.com/blog/jan-4-2023-incident-report/" target="_blank" rel="noopener">CircleCI incident report for January 4, 2023 security incident</a> (Rob Zuber, CircleCI)</li>
<li><a href="https://magoo.medium.com/a-blameless-post-mortem-of-usa-v-joseph-sullivan-a137162f7fc9" target="_blank" rel="noopener">A blameless post-mortem of USA v. Joseph Sullivan</a> (Ryan McGeehan) are both interesting incident analyses. McGeehan is admirably up-front about his relationship to Uber and Joe Sullivan.</li>
</ul>
<p>From there, we come to three interesting academic articles:</p>
<ul>
<li> <a href="https://arxiv.org/abs/2211.03622" target="_blank" rel="noopener">Do Users Write More Insecure Code with AI Assistants?</a> (Neil Perry and colleagues on Arxiv) and a <a href="https://www.theregister.com/2022/12/21/ai_assistants_bad_code/" target="_blank" rel="noopener">shorter summary</a> from the Register. Even shorter: Yes — in flagrant violation of Betteridge's Law!</li>
<li> <a href="https://arxiv.org/abs/2210.05791" target="_blank" rel="noopener">Sociotechnical Harms: Scoping a Taxonomy for Harm Reduction</a> (Renee Shelby and colleagues on Arxiv). An in-depth literature review and analysis of harms from algorithmic systems.</li>
<li> <a href="https://jolt.richmond.edu/files/2022/11/Pfefferkorn-Manuscript-Final.pdf" target="_blank" rel="noopener">Shooting the Messenger: Remediation of Disclosed Vulnerabilities as CFAA “Loss,”</a> (Riana Pfefferkorn) Law review article explaining that remediation costs should not count as CFAA costs.</li>
</ul>
<p>Lastly, some history of attack trees. My understanding of their first formal writeup had been Ed Amoroso's 1994 book, but Alex Gantman <a href="https://infosec.exchange/@MildlyAggrievedScientist@mastodon.social/109699277938404323" target="_blank" rel="noopener">shared</a> a link to <a href="https://csrc.nist.gov/CSRC/media/Publications/conference-paper/1991/10/01/proceedings-14th-national-computer-security-conference-1991/documents/1991-14th-NCSC-proceedings-vol-2.pdf" target="_blank" rel="noopener">D. Weiss, "A System Security Engineering Process,"</a> Proceedings of the 14th National Computer Security Conference, 1991. (page 572), and Stuart Schecter <a href="https://infosec.exchange/@MildlyAggrievedScientist@mastodon.social/109699277938404323" target="_blank" rel="noopener">pointed out</a> that there's history back to at least 1961, and the history is in <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/thesis-6.pdf" target="_blank" rel="noopener">his PhD thesis</a> (Section 2.3.1).</p>
Watermarks2023-02-03T00:00:00Zhttps://shostack.org/blog/watermarks/AdamWatermarks show us wierd edges of AI work
<span class="single-post-excerpt"><!-- excerpt -->Watermarks show us wierd edges of AI work<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/dreamtime-excerpt-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/dreamtime-excerpt-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/dreamtime-excerpt-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/dreamtime-excerpt-800w.jpeg" width="800" height="300" alt="An AI generated image with a watermark of dreamtime" decoding="async" />
</picture>
<p>I’ve been playing with diffusion image generators lately. I think they’re interesting as technical tools, and worth investigation. They’re also challenging and working with them a little helps me think about those challenges. </p>
<p>One of the challenges is getting neat images. I really liked the <a href="https://shostack.org/blog/hackermind/">hacker in a hoodie</a> and the <a href="https://shostack.org/blog/more-on-gpt3/">Disney character</a>, but got meh results for <a href="https://shostack.org/blog/friday-star-wars-jan-2023/">Presidents Daily Brief</a> and <a href="https://shostack.org/blog/the-appsec-landscape-in-2023/">Appsec Landscape posts</a>. Exploring this is a useful reality check on the media, who tend to be selecting amazing results for their stories.</p>
<p>These tools are trained on millions or billions of scraped images, and some of the things they do are strange artifacts of that. For example, the excerpt that headlines this post seems to have learned that stock photos have watermarks on them.</p>
<p>It makes sense. Lots of bloggers make fair use of images in their posts, and stock image businesses want to get paid. So they watermark their images, and we go to sites like Flickr and Unsplash. Some of the unwitting providers of training images, for example, <a href="https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit" target="_blank" rel="noopener">Getty Images</a>, are suing. Copyright law is rarely simple, and attorney Kate Downing has a deep <a href="https://katedowninglaw.com/2023/01/26/an-ip-attorneys-reading-of-the-stable-diffusion-class-action-lawsuit/" target="_blank" rel="noopener">analysis</a> of the suit against Stable Diffusion. My prediction: no one but the copyright attorneys will be happy at the end of it. The copyright laws are the result of intense and ongoing lobbying, and don’t make a lot of sense the way they might if they were reasoned from first principles. That’s shaky ground for the courts to build on. The best outcome I can see might be a form of mechanical licensing, following player piano precedents. (And since those rules allow cover music, it’s not the worst place for us to end up.)</p>
<p>The full image which prompted this post was:</p>
<img loading="lazy" src="https://shostack.org/blog/img/2023/dreamtime.png" width="512" height="512" alt="dreamtime" /> <p></p>
<p>There’s also interesting ethical challenges. Especially with a new book that draws on Star Wars, I’ve spent a lot of time over many years talking to lawyers about “fair use.” I think it’s reasonable for me to use AI images in places that I wouldn’t otherwise pay someone. I also pay people, like <a href="https://www.fiverr.com/oskardraws" target="_blank" rel="noopener">Oskar</a> who did the <a href="https://shostack.org/blog/threats-book-launch-seattle/">launch party posters</a>. I'm comfortable with that, and each business will need to consider their policy.</p>
<p>More generally, these tools really feel like something new, and how they roll out will reflect and amplify power structures. Some people will use them to drive costs down; others will use them to amplify their voices or drown out other voices.</p>
<p>The change isn’t restricted to images. After I recorded a podcast with Bob Gourley of OODA, he showed me new tools he’s building, like <a href="https://unrestrictedintelligence.com/cyber_threat_intelligence" target="_blank" rel="noopener">ask a cyber threat analyst</a> and <a href="https://unrestrictedintelligence.com/ask_the_corporate_board" target="_blank" rel="noopener">ask a corporate board director</a> (it’s interesting to give them the same task and see the results.) These are also being used for marketing, for example, see <a href="https://www.demandsage.com/jasper-ai-review/" target="_blank" rel="noopener">My Experience After Using For 18 Months</a> (not to pick on Mr. Ruby, note the grammatical error in the headline.)</p>
<p>Make time to play with these, and think about what they mean for you. AI can’t do that until the next round of training data: blog posts like this one.</p>
<p class="credit-text">Dreamstudio, “fence with a wide open field. fence toward on bottom sky on top dark greys and blacks. stock photography. HQ, 4k” Other params — steps: 40; sampler: automatic; seed: 3007701878; cfgScale: 9; model: Stable Diffusion v1.5; CLIP enabled: true </p>
Usable Security and Privacy for Engineers2023-02-09T00:00:00Zhttps://shostack.org/blog/usablesecurity/AdamThe new IEEE S+P is all about usable security.<span class="single-post-excerpt"><!-- excerpt -->The new IEEE S+P is all about usable security.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/usablesec-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/usablesec-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/usablesec-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/usablesec-800w.jpeg" width="800" height="248" alt="The cover of the Jan 2023 IEEE Security magazine" decoding="async" />
</picture>
<p>
The <a href="https://www.computer.org/csdl/magazine/sp/2023/01" target="_blank" rel="noopener">latest issue</a> of IEEE Security & Privacy is a special issue on "Usable Security and Privacy for Security and Privacy Workers." I have an article, <a href="https://www.computer.org/csdl/magazine/sp/2023/01/10035553/1KrcceiUNaM" target="_blank" rel="noopener">Nothing Is Good Enough: Fast and Cheap Are Undervalued as Influencers of Security Tool Adoption</a>. The guest editors have a great overview of the issue <a href="https://www.computer.org/csdl/magazine/sp/2023/01/10035552/1Krcb8KX24U" target="_blank" rel="noopener">here</a>.
</p>
2001, as directed by George Lucas2023-02-11T00:00:00Zhttps://shostack.org/blog/2001-lucas/AdamAn amazing mash-up<span class="single-post-excerpt"><!-- excerpt -->An amazing mash-up<!-- /excerpt --></span>
<iframe width="560" height="315" src="https://www.youtube.com/embed/v_suy7gARps" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>
<p>An amazing mash-up. Thanks, T!</p>
Bing’s ChatGPT2023-02-21T00:00:00Zhttps://shostack.org/blog/bing-chatgpt/AdamChatGPT in the headlines again<span class="single-post-excerpt"><!-- excerpt -->ChatGPT in the headlines again<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/bing-chatgpt-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/bing-chatgpt-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/bing-chatgpt-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/bing-chatgpt-800w.jpeg" width="800" height="380" alt="An AI generated image of a robot making converation at a cocktail party" decoding="async" />
</picture>
<p>Last week, Microsoft picked up Tay’s shovel and...wow. Headlines like: <a href="https://simonwillison.net/2023/Feb/15/bing/" target="_blank" rel="noopener">Bing: “I will not harm you unless you harm me first”</a> (Simon Willison), <a href="https://www.geekwire.com/2023/nasty-conversations-with-bing-chat/" target="_blank" rel="noopener">More news outlets get caught up in nasty conversations with Bing chatbot over facts</a> (Alan Boyle, Geekwire), <a href="https://stratechery.com/2023/from-bing-to-sydney-search-as-distraction-sentient-ai/" target="_blank" rel="noopener">From Bing to Sydney</a> (Stratechery), and <a href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html" target="_blank" rel="noopener">A Conversation With Bing’s Chatbot Left Me Deeply Unsettled</a> (Kevin Roose, New York Times).</p>
<blockquote>Still, I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward....</blockquote>
<p>But I think the kicker is this: <a href="https://www.theverge.com/2023/2/15/23599072/microsoft-ai-bing-personality-conversations-spy-employees-webcams" target="_blank" rel="noopener">Microsoft’s Bing is an emotionally manipulative liar, and people love it</a> (James Vincent, the Verge). Mr. Vincent is both right, and incomplete. ChatGPT has been trained on a corpus that includes an infinite stream of emotionally manipulative language. And given the right prompts, it veers towards using that language in its production. The prompt included things like “engaging”, “informative” and “rigorous.” It’s like giving a freshman a description of how sophmores think about debate. (There's a list in the Simon Wilson article that Marvin von Hagen talked Bing into revealing.) So Mr. Vincent is right that people love it, because they are being fed the tropes that draw them in at a rate and quality that people have never experienced. (There’s also an element of wanting to watch something awful happening.)</p>
<p>There’s a lot more to say about the impact of these things on the world, but for now, I wanted to share the more interesting stories.</p>
<p class="credit-text">Dreamstudio, prompted with "A robot charming everyone at a cocktail party in the matrix, painting, HQ, 4k" </p>
Roman Concrete2023-02-24T00:00:00Zhttps://shostack.org/blog/roman-concrete/AdamRoman concrete was cool, but the new MIT result may be off-base<span class="single-post-excerpt"><!-- excerpt -->Roman concrete was cool, but the new MIT result may be off-base<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/concrete-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/concrete-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/concrete-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/concrete-800w.jpeg" width="800" height="306" alt="Corrosion in steel-reinforced concrete" decoding="async" />
</picture>
<p>There was a story last month, <a href="https://news.mit.edu/2023/roman-concrete-durability-lime-casts-0106" target="_blank" rel="noopener">Riddle solved: Why was Roman concrete so durable?</a>, which attributed the longevity of the material to a self-healing form of lime. Cool story! But, as Brian Potter wrote in <a href="https://constructionphysics.substack.com/p/roman-vs-modern-concrete" target="_blank" rel="noopener">Construction Physics</a>, the problem that the Romans were solving is not the problem that we solve for today. They solved for durability and strength in compression, but we use concrete quite differently, as steel-reinforced, which is a very nice building material with a different failure mode. Steel reinforced concrete fails because water gets in and degrades the steel, turning it (essentially) into rust-reinforced concrete. And as you may imagine, rust is not a particularly helpful reinforcement material. </p>Application Security Roundup - Feb2023-02-27T00:00:00Zhttps://shostack.org/blog/appsec-roundup-feb/AdamThis month is all about memory safety, unless you’re a standards group.<span class="single-post-excerpt"><!-- excerpt -->This month is all about memory safety, unless you’re a standards group.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/appsec-jigsaw-feb-2023-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/appsec-jigsaw-feb-2023-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/appsec-jigsaw-feb-2023-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/appsec-jigsaw-feb-2023-800w.jpeg" width="800" height="301" alt="A set of puzzle pieces" decoding="async" />
</picture>
<p>Before we get to the memory safety, two great trip reports from Appsec Dublin: <a href="https://ismisepaul.github.io/owasp-global-appsec-dublin-day1/" target="_blank" rel="noopener">OWASP AppSec Dublin 2023 - Day 1</a> by Paul McCann. (Day 2 hasn’t dropped as I draft this.) and <a href="https://shehackspurple.ca/2023/02/21/owasp-global-appsec-dublin-2023/" target="_blank" rel="noopener">OWASP Global AppSec Dublin 2023</a> by Tanya Janca, who cleverly baited time travellers into revealing themselves at our secret get-together. (There were a lot of them!)</p>
<h3>Memory Safety</h3>
<p>There’s a lot happening in memory safety, and important progress from Microsoft, Google and Apple.</p>
<ul>
<li> Microsoft <a href="https://www.microsoft.com/en-us/security/blog/2023/01/26/introducing-kernel-sanitizers-on-microsoft-platforms/" target="_blank" rel="noopener">Introduc[ed] kernel sanitizers on Microsoft platforms</a>, somewhat confusingly calling them “detection features.”</li>
<li> Google discussed the impact of <a href="https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html" target="_blank" rel="noopener">Memory Safe Languages in Android 13 </a> by Jeffrey Vander Stoep</li>
<li> <a href="https://blog.yossarian.net/2023/02/11/The-unsafe-language-doom-principle" target="_blank" rel="noopener">The unsafe language doom principle</a> is an interesting blog on the topic by William Woodruff</li>
<li> <a href="https://saaramar.github.io/memory_safety_blogpost_2022/" target="_blank" rel="noopener">Survey of security mitigations and architectures, December 2022</a>, by Saar Amar of Microsoft, but covering a lot of neat improvements in Apple and Linux as well.</li>
<li> <a href="https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2759r0.pdf" target="_blank" rel="noopener">Direction Group Opinion On Safety For ISO C++</a>, H. Hinnant, R. Orr, B. Stroustrup, D. Vandevoorde, M. Wong. The abstract states “This paper describes the opinion of the DG on the matter of Safety of C++...We do not aim to define a solution.” Many people found this a surprising lack of urgency.</li>
<li> Meanwhile, Consumer Reports issued an impassioned report, <a href="https://advocacy.consumerreports.org/research/report-future-of-memory-safety/" target="_blank" rel="noopener">Future of Memory Safety</a>, which I hope to say more about this coming month.</li>
</ul>
<h3>Microsoft</h3>
<p>Well, it's not all memory safety — there were a stack of interesting posts from Microsoft:</p>
<ul>
<li>Shawn Hernan shared how <a href="https://techcommunity.microsoft.com/t5/azure-sql-blog/repudiation-threats-and-ledger-in-azure-sql-database-sql-server/ba-p/3746204" target="_blank" rel="noopener">Azure Security expands variant hunting capacity at a cloud tempo</a>. I particularly liked this paragraph:
<blockquote>In addition to detailed technical lessons, variant hunting also seeks to understand the frequency at which certain bugs occur, the contributing causes that permitted them to escape SDL controls, the architectural and design paradigms that mitigate or exacerbate them, and even the organizational dynamics and incentives that promote or inhibit them. It is popular to do root cause analysis, looking for the single thing that led to the vulnerability, but variant hunting seeks to find all of the contributing causes.</blockquote>
</li>
<li>Michael Howard discusses <a href="https://techcommunity.microsoft.com/t5/azure-sql-blog/repudiation-threats-and-ledger-in-azure-sql-database-sql-server/ba-p/3746204" target="_blank" rel="noopener">Repudiation Threats and ledger in Azure SQL Database/SQL Server</a>, with the trenchant observation “Sadly, because repudiation is not well understood, it is often ignored when building threat models or designing systems.” (Hey, have you seen my new book? )</li>
<li>I'd missed the blog post <a href="https://www.microsoft.com/en-us/security/blog/2022/12/19/gatekeepers-achilles-heel-unearthing-a-macos-vulnerability/" target="_blank" rel="noopener">Gatekeeper’s Achilles heel: Unearthing a macOS vulnerability</a>, which exploits MacOS’s extended ACLs to set permissions that prevent Safari from appending com.apple.quarantine attributes to the file. This is the sort of complex flaw that security experts aspire to discover: it’s subtle and amusing. But if you skip the more mundane ones, your system is still vulnerable.</li>
</ul>
Threat Modeling Google Cloud (Threat Model Thursday)2023-03-01T00:00:00Zhttps://shostack.org/blog/threat-modeling-google-cloud/AdamNCC has released a threat model for Google Cloud Platform. What can it teach us?<span class="single-post-excerpt"><!-- excerpt -->NCC has released a threat model for Google Cloud Platform. What can it teach us?<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/threat-modeling-google-cloud-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/threat-modeling-google-cloud-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/threat-modeling-google-cloud-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/threat-modeling-google-cloud-800w.jpeg" width="800" height="300" alt="A subset of a data flow diagram" decoding="async" />
</picture>
<p> In <a href="https://research.nccgroup.com/2023/01/31/threat-modelling-cloud-platform-services-by-example-google-cloud-storage/" target="_blank" rel="noopener">Threat Modelling Cloud Platform Services by Example: Google Cloud Storage</a> Ken Wolstencroft of NCC presents a threat model for Google Cloud Storage, and I’d like to take a look at it to see what we can learn. As always, and especially in these Threat Model Thursday posts, my goal is to point out interesting work in a constructive way.</p>
<p>Let me start by saying that I love that there’s a methodology section at the top. I’ll add that not everything in the document is introduced in methodology, and I’ll list those as we go. The approach is reasonably and implicitly tied to the Four Question Frame, so I’ll use that to organize this post.</p>
<h3>What are we working on</h3>
<p>There are three views into what NCC looks at. They are “key features”, a diagram and a list of assets. The relationship between the feature list and the diagram is not stated. I think that the key features includes both processes and data flows shown in the diagram. The diagram also includes decoration for assets, threat actors, and “default base security controls.” The last includes https, which is, surprisingly, not applied everywhere (or not shown as applying everywhere). Interestingly, the diagram shows three nested boundaries, “Google Cloud Platform”, “Organization level” and “project level”, but nothing is shown in the cloud platform level.</p>
<p>The diagram could be improved by higher contrast text. The gray on white is hard on my eyes. I don’t want to make too much of this, but making our diagrams easy to read so we can spend our mental energy on other things pays off.</p><p>
</p><p>The list of threat actors includes both “internal attacker,” “internal malicious user,” and Google engineers. I think that means the first two are internal to the GCP customer. It also includes compromised services? (The list inexplicably stops before listing TA007, James Bond.)</p>
<h3>What can go wrong?</h3>
<p>This starts with a section “attack goals”, which nicely acknowledges that “An attacker’s motives and goals are often hard to accurately predict...” 5 of the 6 goals are closely aligned with STRIDE (lacking repudiation) and adding a “host malicious content,” a nice illustration of the danger of limiting oneself to STRIDE. That said, repudiation does show up, for example in T11, and 12.</p>
<p>This is followed by a list of “Potential System Weaknesses” and a “List of Potential Threats”, an interesting split. Here, weaknesses are “opportunities for stronger security configurations,” and threats seem to be threat actions/impacts. T02 Guessing of Google Cloud Platform credentials" is an action, “T04 Authenticated access to Google Cloud Storage bucket” is an impact. Also, here, the weakness of STRIDE as a categorization shines through — many threats are listed in multiple categories. (I prefer to use STRIDE to prompt my thinking, rather than categorize what I find.)</p>
<h3>What are we going to do about it? (threat mitigation)</h3>
<p>There’s a nice list of controls, motivated by the threats, and traceable to the threats in a nice way. They re-iterate the threat as a headline and propose many controls that may help mitigate it. I'm somewhat surprised that they've stopped assigning short tags here (there's no C03, C04...) and those seem both natural and a useful way to assess what controls would work most broadly.</p>
<h3>Did we do a good job?</h3>
<p>The explicit methodology section isn’t complemented by an explicit assessment of the work, but rather by a conclusion, which is that threat modeling does find interesting weaknesses that can be addressed by appropriate use of features.</p>
Leonardo da Vinci’s Gravity Experiment2023-03-10T00:00:00Zhttps://shostack.org/blog/davincisgravity/AdamAn interesting discovery, hidden in Leonardo da Vinci's notebooks<span class="single-post-excerpt"><!-- excerpt -->An interesting discovery, hidden in Leonardo da Vinci's notebooks <!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/davinci-gravity-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/davinci-gravity-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/davinci-gravity-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/davinci-gravity-800w.jpeg" width="800" height="300" alt="An image from Davinci showing water pouring from a jar" decoding="async" />
</picture>
<p>There’s an interesting <a href="https://www.nytimes.com/2023/02/17/science/leonard-da-vinci-gravity.html" target="_blank" rel="noopener">article</a> in the New York Times about an experiment da Vinci did to estimate the force of gravity. The article is at <a href="https://direct.mit.edu/leon/article-abstract/56/1/21/113863/Leonardo-da-Vinci-s-Visualization-of-Gravity-as-a" target="_blank" rel="noopener">Leonardo da Vinci’s Visualization of Gravity as a Form of Acceleration</a>. (Sadly, both paywalled.)
</p>
My David Prouse Moment2023-03-17T00:00:00Zhttps://shostack.org/blog/my-david-prouse-moment/AdamSearching my feelings as the audiobook of Threats is released.<span class="single-post-excerpt"><!-- excerpt -->Searching my feelings as the audiobook of Threats is released.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/where-are-those-transmissions-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/where-are-those-transmissions-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/where-are-those-transmissions-800w.jpeg" width="800" height="340" alt="A screencapture from Star Wars, the first time we hear Darth Vader speak" decoding="async" />
</picture>
<p>I have some exciting news, and ... I’m terrified of my David Prouse moment.</p>
<p>Who’s David Prouse you ask? You’ve seen him. You’ve been terrified and thrilled by his iconic performances.</p>
<p>He was the original Darth Vader. And no one told him that his lines had been re-recorded by James Earl Jones. He learned while watching the movie for the first time, in the theater.</p>
<p>And that brings me to my exciting news: The audiobook of <cite><a href="https://threatsbook.com/" target="_blank" rel="noopener">Threats</a></cite> is now available!</p>
<p>It’s 10 hours, 7 minutes and you can get copies from <a href="https://amzn.to/3FybaXz" target="_blank" rel="noopener">Amazon</a>, <a href="https://www.barnesandnoble.com/w/threats-adam-shostack/1141686383?ean=2940178331217" target="_blank" rel="noopener">Barnes and Noble</a>, and probably other places.†</p>
<p>When I read my book, I read it in my voice. I don’t know what it would be like to hear someone else doing it. That said, since someone asked, I have no reason at all to doubt the work of Joe Hempel here. I’m sure he’s done an awesome job.</p>
<p>If you’re a fan of audiobooks, I encourage you to get your copy. It’s available wherever ummm, fine audiotapes are sold? 😁 </p>
<p class="credit-text">† People often kindly ask where I make more money. Thank you, and ... I don’t actually know. The Amazon link is an affiliate link, I get a little extra, but then there’s the free audible trial. How do those balance out on the bottom line? I really don’t know. Do what’s easy for you.</p>
The National CyberSecurity Strategy: Liability is Coming2023-03-21T00:00:00Zhttps://shostack.org/blog/the-national-cybersecurity-strategy/Adam<span class="single-post-excerpt"><!-- excerpt --><!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/liability-is-coming-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/liability-is-coming-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/liability-is-coming-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/liability-is-coming-800w.jpeg" width="800" height="301" alt="" decoding="async" />
</picture>
<p>After months of <a href="https://www.foreignaffairs.com/united-states/stop-passing-buck-cybersecurity" target="_blank" rel="noopener">signals</a>, the new <a href="https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf" target="_blank" rel="noopener">US National CyberSecurity Strategy</a> is out, and I can <a href="https://shostack.org/blog/the-appsec-landscape-in-2023/">stop beating around the bush</a> and be explicit:</p>
<p><strong>Liability is coming.</strong></p>
<p>There’s lots more in the Strategy. Bruce Schneier has a good roundup of first <a href="https://www.schneier.com/blog/archives/2023/03/new-national-cybersecurity-strategy.html" target="_blank" rel="noopener">responses</a>. I didn’t to try to provide a hot take on it, because I think others did a fine job. (I will say that I’m thrilled to see lesson-learning and assessing effectiveness in the implementation section.)</p>
<p>Liability is going to be a big shift, and it will have undesirable consequences. Let me start by quoting a summary from Jim Dempsey:</p>
<blockquote>We must begin, the administration says, to shift liability onto those who should be taking reasonable precautions to secure their software. This will require three elements, according to the strategy: preventing manufacturers and service providers from disclaiming liability by contract, establishing a standard of care, and providing a safe harbor to shield from liability those companies that do take reasonable measurable measures to secure their products and services. Together, the three points are based on a recognition that the goal is not perfect security but, rather, reasonable security. (<a href="https://www.lawfareblog.com/cybersecuritys-third-rail-software-liability" target="_blank" rel="noopener">Lawfare</a>) </blockquote>
<p>The first of these, eliminating the “as is” and “no warranty” from commercial software sales, seems like a no-brainer unless you sell software, in which case, it’s the end of the world. Expect a chorus of doom, with arguments much like the ones Andy Ellis makes <a href="https://www.csoonline.com/article/3689870/software-liability-reform-is-liable-to-push-us-off-a-cliff.html" target="_blank" rel="noopener">here</a>. This post is long enough, so I may come back to Andy’s list in a different post.</p>
<p>The second of these, a standard of care, is simply not that hard — if you’re a lawyer. NIST’s <a href="https://csrc.nist.gov/Projects/ssdf" target="_blank" rel="noopener">SSDF</a>, FDA’s <a href="https://www.fda.gov/media/119933/download" target="_blank" rel="noopener">Pre-market Guidance</a>, and FTC’s <a href="https://www.ftc.gov/news-events/news/press-releases/2015/06/ftc-kicks-start-security-business-education-initiative" target="_blank" rel="noopener">Start With Security</a> are all out there today to establish minimum standards. If you’re doing nothing, you need to start doing something. If you’re doing something, you should assess if it’s generally in line with those. Standards like OWASP’s <a href="https://owasp.org/www-project-samm/" target="_blank" rel="noopener">OpenSAMM</a> provide a measurable tool for assessing how you’re doing. Yes, there’s nuance, but:</p>
<p>The third point is a safe harbor. A lot of the action will be around defining who gets to be in that safe zone. There’s a spectrum of possible answers. Will big tech companies with billions in annual profit be able to join and remain? Companies that ship new code in languges without memory safety? Those without bug bounties or safe harbors for reporing security issues? We can expect a bruising fight over where the safe harbor starts, and how it ratchets over time.</p>
<p>I believe the White House position is that breaches continue, critical infrastructure is at risk and adversaries run essentially unchecked, “nothing” is no longer a viable national strategy, so let’s explore liability. Many of the critiques conflate liability with ‘new regulation.’ Liability is not new regulation, it’s consequences for failing to avoid forseeable consequences of your choices that harm others. I think the interesting thing about liability is that, along with attestation, it’s a response to industry’s unyeilding anti-regulation position. (Attestation is in <a href="https://www.whitehouse.gov/wp-content/uploads/2022/09/M-22-18.pdf" target="_blank" rel="noopener">OMB-M-22-18</a>, which I discussed in my January <a href="https://shostack.org/blog/the-appsec-landscape-in-2023/">The Appsec Landscape in 2023</a>). </p>
<p>Getting your software and operations processes up to speed is not a quick adjustment like rolling out Zero-Trust. 😉 It requires both technical changes and culture changes, and the culture changes will take a while. It’s time to get started.</p>
<p>What this means for you is that “nothing” is no longer a good enough engineering strategy. If your response to the National Strategy is to take a “wait and see” approach, lawyers are probably going to have a field day with a big stamp that says “negligent.” And there’s going to be plenty of work for expert witnesses in sorting out the meaning of reasonable, which is expensive and leads to a lot of uncertainty. </p>
<p>My estimate that this will take “a while” comes both from my time at Microsoft, where, after the Trustworthy Computing memo, we took years to codify our SDL, evolve it, and get to a point of excellence, and also from my more recent experience with the Shostack + Associates <a href="https://shostack.org/coaching" target="_blank">coaching service</a>, where we help organizations through a similar journey. That journey is faster when someone else has made many mistakes that are obvious in retrospect, but even that doesn’t mean that changing the way an organization works is easy or fast.</p>
<p>Liability is coming. The sooner you start working through what that means, the happier you’ll be.</p>
<p class="credit-text">Image: A random “Winter is coming” image from Game of Thrones. </p>
Star Wars, The Infographic2023-03-24T00:00:00Zhttps://shostack.org/blog/star-wars-as-infographic/AdamThis version of Star Wars is quite spectacular<span class="single-post-excerpt"><!-- excerpt -->This version of Star Wars is quite spectacular<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/a-new-hope-theinfographic-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/a-new-hope-theinfographic-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/a-new-hope-theinfographic-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/a-new-hope-theinfographic-800w.jpeg" width="800" height="300" alt="An infographic style presentation of star wars" decoding="async" />
</picture>
<p>I hate starting posts like this, but I have to admit to hating the infographic format. But... as I scrolled through <a href="https://swanh.net/" target="_blank" rel="noopener">https://swanh.net/</a>, I went from “ehh” to “huh” to “wow.” It’s beautifully done, and would be 123 meters long if printed out. (Don’t miss the <a href="https://swanh.net/making-of/" target="_blank" rel="noopener">making of</a> page like I did.)</p>
<p>Anyway, a little Friday Star Wars, to inspire your weekend.</p>
Imperium for Men2023-03-31T00:00:00Zhttps://shostack.org/blog/imperium-for-men/AdamToday in Friday Star Wars fun<span class="single-post-excerpt"><!-- excerpt -->Today in Friday Star Wars fun<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/imperium-recognize-566w.webp 566w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/imperium-recognize-566w.jpeg 566w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/imperium-recognize-566w.jpeg" width="566" height="800" alt="An advertising style poster featuring Grand Moff Tarkin and Princess Leia" decoding="async" />
</picture>
<p>Today in Friday Star Wars fun, I want to share:
<a href="https://imgur.com/a/JhR7PUH" target="_blank" rel="noopener">Imperium For Men. The New Fragrance, from Tarkin</a>, by OnlyMrBlack. Check out the originals on Imugr for more posters and some views of them in the real world.
</p>
When will Adam be replaced by ChatGPT?2023-04-01T00:00:00Zhttps://shostack.org/blog/when-will-adam-be-replaced-by-chatgpt/AdamWhen will Adam Shostack be replaced by ChatGPT<span class="single-post-excerpt"><!-- excerpt -->When will Adam Shostack be replaced by ChatGPT<!-- /excerpt --></span>
<p>
Irius Risk, a company specializing in automating threat modeling for software security, hosted a webinar on Machine Learning and Threat Modeling March 30, 2023. Adam Shostack participated along with <a href="https://berryvilleiml.com/" target="_blank" rel="noopener">BIML CEO Gary McGraw</a>.
</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/9k3scZFKYYA" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>
<p>And the kicker? As of April 1st, Adam is already letting ML models, along with deepfake video, fill in for him on carefully selected webinars.</p>
Application Security Roundup - March2023-04-05T00:00:00Zhttps://shostack.org/blog/appsec-roundup-march-2023/AdamA few tools, some thoughts on injection, some standards, and some of Adam’s appsec news.<span class="single-post-excerpt"><!-- excerpt -->A few tools, some thoughts on injection, some standards, and some of Adam’s appsec news.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/appsec-jigsaw-march-2023-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/appsec-jigsaw-march-2023-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/appsec-jigsaw-march-2023-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/appsec-jigsaw-march-2023-800w.jpeg" width="800" height="330" alt="A set of puzzle pieces" decoding="async" />
</picture>
<p>The March appsec roundup includes few tools, some thoughts on injection, some standards, and some of my own appsec news.</p>
<h3>Tools</h3>
<ul>
<li><a href="https://semgrep.dev/blog/2023/gpt4-and-semgrep-detailed" target="_blank" rel="noopener">Semgrep now integrates GPT4</a>. What could go wrong?</li>
<li><a href="https://blog.doyensec.com/2023/02/14/pesd-extension-public-release.html" target="_blank" rel="noopener">Introducing Proxy Enriched Sequence Diagrams (PESD)</a> is a new tool from Doyensec that builds sequence diagrams for your extant systems, which is pretty neat.</li>
</ul>
<h3>Injection and Parsing</h3>
<p><a href="https://greshake.github.io/" target="_blank" rel="noopener">Indirect Prompt Injection on Bing Chat</a> is an interesting and powerful attack which relies on a mix of unclear boundaries and the unique programming model of LLMs, which is that ‘everything is part of the prompt.’ Bob Gourley took advantage of that to create <a href="https://unrestrictedintelligence.com/" target="_blank" rel="noopener">Unrestricted Intelligence</a>, where he submits a carefully crafted pre-prompt to ChatGPT, followed by your submission. These attacks are worked through in more depth in <a href="https://arxiv.org/pdf/2302.12173.pdf" target="_blank" rel="noopener">More than you’ve asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models</a>. (Speaking of which, did you know you can swap <a href="https://ar5iv.labs.arxiv.org/html/2302.12173" target="_blank" rel="noopener">ar5iv</a> for arxiv and get readable HTML? )</p>
<p>All of these injection attacks can be seen as parsing attacks, where code and data intermingle. Another example of that is in <a href="https://wrv.github.io/h26forge.pdf" target="_blank" rel="noopener">The Most Dangerous Codec in the World: Finding and Exploiting Vulnerabilities in H.264 Decoder</a>, which presents a new toolset for “analyzing, generating, and manipulating syntactically correct but semantically spec-non-compliant video files.” Video decoding has always been intensely dangerous. People mocked Microsoft for putting graphics into the NT kernel, but note footnote 2, “Some Twitter commentary about CVE-2022-22675 assumed that Apple only recently moved video parsing into the iOS kernel. Not so. In fact, the first bug we identified was present in the kernel as far back as iOS 10.” (Also, what do we expect of Twitter commentary?)</p>
<h3>Standards</h3>
<ul>
<li><a href="https://www.wired.com/story/a-us-agency-rejected-face-recognition-and-landed-in-big-trouble/" target="_blank" rel="noopener">A US Agency Rejected Face Recognition—and Landed in Big Trouble</a> is a Wired story about a challenge of standards, which is that they may require you to do certain things which may be sub-optimal in your situation.</li>
<li><a href="https://berthub.eu/articles/posts/eu-cra-secure-coding-solution/" target="_blank" rel="noopener">The EU's new Cyber Resilience Act is about to tell us how to code</a> by Burt Hubert and <a href="https://berthub.eu/articles/posts/eu-cra-practicalities/" target="_blank" rel="noopener">his update post</a>.</li>
<li><a href="https://www.forbes.com/sites/bradtempleton/2023/03/22/waymo-makes-a-new-safety-case-and-challenges-all-to-also-do-sobut-is-it-right/" target="_blank" rel="noopener">Waymo Makes A New Safety Case</a> by Brad Templeton which asks some really good questions about the fuller <a href="https://storage.googleapis.com/waymo-uploads/files/documents/safety/Waymo%20Safety%20Case%20Approach.pdf" target="_blank" rel="noopener">Safety Case</a>, which takes an interesting perspective of “the absence of unreasonable risk.”</li>
<li><a href="https://journals.sagepub.com/doi/epub/10.1177/0952076712458111" target="_blank" rel="noopener">Whitehall wiring: The Communications-Electronics Security Group and the struggle for secure speech</a> is a fascinating look back into the UK’s struggle to ship a secure communications system, including politics, economics, and the struggle to get funding for defense. Some really interesting details such as, even after the UK had broken Enigma, they declined to look at their own ciphers. It’s interesting how far back the bias goes: offensive success is easily seen and celebrated, defense has long played second fiddle. (Sadly, paywalled.)</li>
</ul>
<h3>Training and Adam Notes</h3>
<ul>
<li>Today’s the last day to get the early bird discount for our <a href="https://courses.shostack.org/courses/threat-modeling-intensive-may-2023" target="_blank" rel="noopener">May Threat Modeling Intensive</a>!</li>
<li>My fellow Star Wars geeks at Panther are giving away copies of <cite><a href="https://threatsbook.com/" target="_blank" rel="noopener">Threats</a></cite> as part of both an April 11 webinar <a href="https://panther.com/resources/webinars/adam-shostack-what-every-engineer-should-learn-from-star-wars/" target="_blank" rel="noopener">registration required</a>, and a signing at RSA at their booth (#228 in the South Expo Hall) on Wednesday, April 26 at 11 am).</li>
<li>I’m keynoting <a href="https://www.appsecpnw.org/" target="_blank" rel="noopener">Appsec PNW</a>, with a working title of “From Tacoma Narrows to West Seattle...Lessons from a century of PNW bridges.”</li>
<li>Last, but not least, my article “<a href="https://www.computer.org/csdl/magazine/sp/2023/01/10035553/1KrcceiUNaM" target="_blank" rel="noopener">Nothing is Good Enough</a>” got a callout on the cover of <a href="https://www.computer.org/csdl/magazine/sp/2023/01" target="_blank" rel="noopener">the Jan/Feb 2023 IEEE S+P</a> all about how “nothing” is often seen as “good enough” and how we should not ignore that in process design. (Paywalled, sorry)</li>
</ul>
Cumulus2023-04-06T00:00:00Zhttps://shostack.org/blog/cumulus-threat-model-thursday/AdamCumulus is a cloud-oriented version of Elevation of Privilege<span class="single-post-excerpt"><!-- excerpt -->Cumulus is a cloud-oriented version of Elevation of Privilege<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/cumulus-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/cumulus-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/cumulus-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/cumulus-800w.jpeg" width="800" height="300" alt="A set of cards with threats like our deployment artifacts contain secrets that can be extracted" decoding="async" />
</picture>
<p><a href="https://github.com/TNG/cumulus" target="_blank" rel="noopener">Cumulus</a>, from <a href="https://www.tngtech.com/" target="_blank" rel="noopener">TNG Technology Consulting</a>, is a cloud-centered version of Elevation of Privilege. Their suits are “access + secrets”, “delivery”, “recovery” “monitoring” and “resources.” I think its a good group, and I think they’ve done a good job defining threats. They seem likely enough in the real world, broad enough that they’ll provoke a discussion, and they written clearly.
</p>
Reflecting on Threats: The Frame2023-04-10T00:00:00Zhttps://shostack.org/blog/reflecting-on-threats-the-frame/AdamReflecting on the framing of the Threats book<span class="single-post-excerpt"><!-- excerpt -->Reflecting on the framing of the Threats book<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/threats-frame-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/threats-frame-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/threats-frame-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/threats-frame-800w.jpeg" width="800" height="448" alt="An AI-drawn sith with ... frames" decoding="async" />
</picture>
<p>Now that the Threats book is out and the first reviews are in (thank you!), I want to talk more about the frame of the book and how the subtitle influenced where the book needed to go.</p>
<p>The subtitle, “What every engineer should learn from Star Wars” is very close to “What every engineer should know,” but catchier. When I questioned if something should be there, I asked ‘does every engineer need to know this?’ and so cut elements like ‘how password storage works’ and called it a side-quest.</p>
<p>This is close to popularization as a goal, and I tried to align with common understanding as much as I could. But there were places that didn’t work, and I’ll come to those. Before I do, I want to talk about the importance and challenge of common understandings.</p>
<p>First, common understandings are important to communication in engineering. If we can’t talk about materials in tension or compression, and expect everyone understands those in the same way, then our jobs are far more difficult, and that inhibits how large or complex the projects we undertake can be.</p>
<p>Second, common understandings allow us to divide and conquer. If I ask Alice to assess spoofing threats, and Bob to assess tampering (etc) and we have a common understsanding of those terms, than I can have a degree of assurance that the components have been addressed. (System complexities remain, of course.) </p>
<p>Third, common understandings allow us to talk about what ‘a reasonable engineer’ would do. Today, no reasonable engineer would fail to do a wind-tunnel test of a new bridge design. Many of the flaws we see in real systems are obvious once we know where to look. (I’m drafting this April 9, and the 3 most recent CVEs are: <a href="https://www.cvedetails.com/cve/CVE-2023-29475/" target="_blank" rel="noopener">2023-29475</a>, unauthenticated attackers can run arbitrary commands; <a href="https://www.cvedetails.com/cve/CVE-2023-29478/" target="_blank" rel="noopener">2023-29478</a>, path traversal; and <a href="https://www.cvedetails.com/cve/CVE-2023-30450/" target="_blank" rel="noopener">2023-30450</a>. (I’m not sure what the bug really is. The <a href="https://github.com/redpanda-data/redpanda/pull/7719" target="_blank" rel="noopener">pull request</a> mentions a YAML file, so maybe it was expansion of authority from a developer user to arbitrary code in prod?) For a subset of these, I think we’re nearing the point where we can ask questions like ‘were tools available that would have found this at a reasonable cost?’ and if so, were those tools in use, but mis-used, or not in use? Without a standard or even a norm, the best we have is opinion and maybe 20/20 hindsight.</p>
<p>So common understandings are important, and as I wrote, there were several places the common understandings failed. The first was ‘elevation of privilege.’ I use the term out of habit, and even writing about 30450 there, I wrote that it was an elevation issue before editing. The trouble is, privilege and permissions are implementation choices. I write about how adding a user on unix is a matter of permissions (on /etc/passwd) and on windows its the netuseradd() privilege. Setting OS-design religious wars aside, that’s a shaky ground on which to explain fundamentals. Another way to say that is if we can have a debate over which of those is ‘right,’ we’re not talking about fundamentals, but design choices.</p>
<p>Two other places where I felt I had to push the envelope were parsing and kill chains. For parsing, some of the work being done by Langsec is excellent, when its feasible to apply. And for kill chains, I really wanted to draw more on the work of others, but the general-use chains I wanted just didn’t exist. But those will be future posts.</p>
<p class="credit-text">Midjourney: a set of sith, floating in a frame, cinematic, dramatic, professional photography, studio lighting, studio background, advertising photography, intricate details, hyper-detailed, ultra realistic, 8K UHD --v 5 --aspect 16:9 </p>
Five Threat Model Diagrams for Machine Learning2023-04-13T00:00:00Zhttps://shostack.org/blog/five-threat-model-diagrams-for-ml/AdamSome diagrams to help clarify machine learning threats<span class="single-post-excerpt"><!-- excerpt -->Some diagrams to help clarify machine learning threats<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/threats-to-ml-tools-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/threats-to-ml-tools-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/threats-to-ml-tools-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/threats-to-ml-tools-800w.jpeg" width="800" height="300" alt="A diagram showing threats such as prompt injection and model theft that are threats to a ML tool" decoding="async" />
</picture>
<p>When we talk about “threat modeling machine learning,” some people seem to be talking about threats <em>to</em> machine learning systems, some about threats <em>from</em> those systems, and sometimes even about threat to <em>our jobs</em>. (That last comes up more when talking about “AI and cybersecurity.”)</p>
<p>For threat model Thursday, I wanted to share a few sketches I’m working on to help clarify what we’re talking about. Diagrams that we can point to are an incredibly powerful tool that are so simple that they can be hard to talk about.</p>
<p>So this first set of diagrams shows threats <em>to</em> and threats <em>from</em> the ML system.</p>
<img loading="lazy" src="https://shostack.org/blog/img/2023/threats-to-from-ml.png" width="780" height="674" alt="two diagrams whose details are explained in the main text" /><br />
<p>It also lets me talk about an element of sketching, which is what data flow arrows do we need? In the first diagram, I show only the threats, expecting that the response is implicit. In the second, I show both the “write email” and the “phishing response,” and I didn’t bother with sequence numbers. In a more complex diagram, I might have. I could also have used a message sequence diagram to show the second one, and then time would have been more visible, but I’d have lost the similarity between the diagrams which helps us see the “to/from” nature that I wanted to emphasize.</p>
<p>This second set is about training data, and where it comes from. These are intentionally similar on the left, to draw attention to differences on the right. (In fact, they’re copy/pasted.) This allows us to think about the training data, and how system design decisions about how the system is trained impact it.</p>
<img loading="lazy" src="https://shostack.org/blog/img/2023/training-data.png" width="880" height="423" alt="three data flow diagrams whose details are explained in main text" />
<p>I’m pretty sure the model of how “racist garbage” from Twitter got to Tay is not accurate. Microsoft didn’t retrain a full model each time it got an @. What’s important is that there’s no ‘data quality process.’ I hope the relevant differences jump out.</p>
<p>So, this threat model Thursday is entirely focused on these diagrams to help people think about what can go wrong in different scenarios. (And hopefully obviously, the threats shown are intended to be illustrative, not complete.)</p>
Threats Book News and Updates for April2023-04-19T00:00:00Zhttps://shostack.org/blog/threats-news-updates/AdamBook signings at RSA, big discounts and more!<span class="single-post-excerpt"><!-- excerpt -->Book signings at RSA, big discounts and more!<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/threats-updates-april-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/threats-updates-april-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/threats-updates-april-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/threats-updates-april-800w.jpeg" width="800" height="307" alt="People holding the threats book and smiling" decoding="async" />
</picture>
<p>There’s so much exciting stuff happening with the Threats book (currently running 4.7 stars on Amazon, with all but two reviews as 5-⭐️, and only one less than 4 stars️.) Everyone I hear from has been thrilled, and I’d love it if we can get to 50 5-⭐️ reviews on Amazon by Star Wars day. If you’ve read and enjoyed the book, can you take a moment to either give it 5 stars, and if you’d like, even a review? </p>
<p>That said, some of the new exciting bits include:</p>
<ul>
<li>Upcoming at RSA on April 26th, I’ll be signing books at Panther’s booth. 11 AM booth #228 in the South Expo Hall. I'll be doing a reading and signing, and Panther will be supplying books. If you can’t make that, I’m also doing a signing at the conference bookstore at
3PM.</li>
<li>The audiobook is still 40% off <a href="https://www.audiobooks.com/promotions/promotedBook/654778/threats-what-every-engineer-should-learn-from-star-wars?refId=69036&utm_medium=email&_hsmi=253691596&_hsenc=p2ANqtz-9-ob87aWnMo6cXmZ_x8vS6pVW69ie-aMppJPFehzUFGjMzcUlVMtO9SpccKZr9-bI3k5Doxws0LbnEeq2pOxbgAl42eg&utm_content=253691596&utm_source=hs_automation" target="_blank" rel="noopener">here</a>, and you have a few more days to get it and that great price.</li>
<li>I joined Jeremy Snyder of Firetail for the Ask a CISO podcast, links <a href="https://www.linkedin.com/feed/update/urn%3Ali%3Aactivity%3A7052191209934454785/?midToken=AQGNPnqN6KrpiA&midSig=1YeZgJ9FhGCaI1&trk=eml-email_notification_single_mentioned_you_in_this_01-notifications-1-hero%7Ecard%7Efeed&trkEmail=eml-email_notification_single_mentioned_you_in_this_01-notifications-1-hero%7Ecard%7Efeed-null-1wrj%7Elgeubpl9%7Er1-null-voyagerOffline&lipi=urn%3Ali%3Apage%3Aemail_email_notification_single_mentioned_you_in_this_01%3Bm7%2BvIR5DRcSLuY8UYSYgUg%3D%3D" target="_blank" rel="noopener">here</a>.</li>
<li>I also did an interview with Ken Westin of Panther, available <a href="https://www.youtube.com/watch?v=ZHSGDNAau88" target="_blank" rel="noopener">here</a>.</li>
</ul>
<p>You can find <cite>Threats: What Every Engineer Should Learn from Star Wars</cite> wherever fine books are sold, and I maintain a list of places to buy on the <a href="https://threatsbook.com/">threatsbook.com</a> site.</p>Layoffs in Responsible AI Teams2023-04-21T00:00:00Zhttps://shostack.org/blog/responsible-ai-layoffs/AdamSome inferences from layoffs in responsible AI teams<span class="single-post-excerpt"><!-- excerpt -->Some inferences from layoffs in responsible AI teams<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/ai-layoffs-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/ai-layoffs-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/ai-layoffs-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/ai-layoffs-800w.jpeg" width="800" height="376" alt="An AI generated image of scientists" decoding="async" />
</picture>
<p>Wendy Grossman <a href="https://netwars.pelicancrossing.net/2023/03/31/unclear-and-unpresent-dangers/" target="_blank" rel="noopener">asks</a> “what about all those AI ethics teams that Silicon Valley companies are disbanding? Just in the last few weeks, these teams have been axed or cut at Microsoft and Twitch...” and I have a theory.</p>
<p>My theory is informed by a conversation that I had with Michael Howard, maybe 20 years ago. I was, at the time, a big proponent of <a href="https://shostack.org/files/essays/review.html">code reviews</a>, and I asked about Microsoft’s practices. He said, “oh, they don’t scale, we don’t <del>do</del> [require] things that don't scale.” (Or something like that. It was a long time ago.) After I joined the SDL team, and we started working together, I saw the tremendous focus that the team had on bugs. (My first day on the job included an all-hands, and I saw GeorgeSt present how many bugs the Secure Windows Initiative had managed through the Vista process.)</p>
<p>Bugs, tickets, stories and the like are all intended as actionable elements of software development. ‘Stop and reflect’ doesn’t fit that mold. One of the advantages we get moving from 'evil brainstorming' to the Four Question Frame for threat modeling is that it lets us define specific tasks and relate them to other work.</p>
<p>But it seems ‘stop and reflect’ is often a key part of what responsible AI researchers advocate. One possible takeaway is such a focus would be helpful for ethical AI teams. I suspect that there are extreme ‘impedance mismatches’ between software developers and people who get PhDs in ethical AI. It’s also possible to think that such a focus would be putting a bandaid on when we need a tourniquet. The DAIR Institute’s <a href="https://www.dair-institute.org/blog/letter-statement-March2023" target="_blank" rel="noopener">response to the AI Pause letter</a> calls for transparency and accountability, enforced by regulation: “but organizations building these systems should also be required to document and disclose the training data and model architectures. The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, which means that builders of these systems should be made accountable for the outputs produced by their products.”</p>
<p>Since we do not have such regulation, and (as far as I know) no AI lab has actually stepped up to announce that they’re pausing, perhaps a short-term focus on short-termism would be helpful. What makes for a stop-ship bug in an ML model? Perhaps I’ve missed the work that provides such definitions.</p>
<p>A crucial lesson is the value that actionability provides. For example, one laid off person said: “People would look at the principles coming out of the office of responsible AI and say, ‘I don’t know how this applies,’” (<a href="https://www.theverge.com/2023/3/13/23638823/microsoft-ethics-society-team-responsible-ai-layoffs" target="_blank" rel="noopener">The Verge</a>). That same article characterizes the work as “the team has been working to identify risks posed by Microsoft’s adoption of OpenAI’s technology throughout its suite of products.” Again, a lesson from how we’ve come to define threat modeling is that we cannot stop with the question “what can go wrong,” but we must get to “what are we going to do about it?”</p>
<p>Many people are arguing that risks of AI are so monumental that we shouldn’t try to work within the structures which exist within companies, but regulate. There are complexities there, including that regulation mainly impacts ethical players in the market. But perhaps we can learn from similar efforts and make headway while we wait for regulatory action.</p>
<p class="credit-text">Image: Midjourney, “a large team of scientists in lab coats teaching a mainframe computer about ethical behavior. Scientists, some with clipboards, can be seen displaying deep concern and are in intense discussion in small groups 1960s cinematic, hyper-realistic, nasa. --v 5 --ar 16:9” I chose to leave the all-white output, and appreciated the various AI oddities. <br />Edited based on a comment from Michael that he probably said something about not treating code reviews as mandatory.</p>
May the Fourth Secure You2023-05-04T00:00:00Zhttps://shostack.org/blog/may-the-fourth-secure-you/AdamCelebrating Star Wars Day in style!<span class="single-post-excerpt"><!-- excerpt -->Celebrating Star Wars Day in style!<!-- /excerpt --></span>
<p>May The Fourth Be With you!</p>
<p>To celebrate Star Wars day, we have a cornucopia of fan-tastic Star Wars security content for you.</p>
<p>I am most excited to have gotten to know an honest to goodness movie star who was in <i>The Force Awakens</i> (and a little jealous, to be honest) and who is also an honest to goodness security expert whose work touches every email you send. We’re excited to share a video today to celebrate.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/MQJjtYG7KYA" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>
</div>
<p>I’m also excited to share the new <a href="https://www.applause.com/resources/podcasts/ep-8-digital-quality-lessons-from-star-wars" target="_blank" rel="noopener">Digital Quality Lessons from Star Wars</a> (Episode 8 of the Ready, Test, Go podcast), and the Digital Minds podcast has re-released their <a href="https://forallsecure.com/blog/the-hacker-mind-podcast-what-star-wars-teaches-us-about-threats" target="_blank" rel="noopener">What Star Wars Can Teach Us About Threat Modeling</a>.</p>
<p>We also have some amazing discounts for you:</p>
<ul>
<li>Wiley is offering 35% off as a May 4th <a href="https://www.wiley.com/en-us/Threats%3A+What+Every+Engineer+Should+Learn+From+Star+Wars-p-9781119897699" target="_blank" rel="noopener">special</a> with code "May4"</li>
<li>Ascent Audio has a massive 70% off the <a href="https://cscsk04.na1.hubspotlinks.com/Ctc/T7+113/cscsk04/VX4K6b4dk0jnW6CCpfL6f8f7JW6B8lDm4-670lN8NyyCX5jyLQV3Zsc37CgVLFN57JSFLn_6j5Vbgc7-5Cqwk_N21wxql-8yrxW5mwcDY4BqhrPW57nLrP3L_NLjW8W-RFR6pVt1jW5kHcp22csRnwW6j13m02YsMX4W7075bB5pbZ5nW2SYgWs4qxYDTW4QlP3b69ZKbfN2Bjwhb2MVt9W17TZts7JtZ3kN3PPLnHnT2T9W7ytJX36pLWdbW503JVm4qXsPlW6kFjgZ4ql7s_N2nRwhjj_JrhW2ZZMh24FH_MCW2qTXvM6g8XB1W5QqYpc2RzJXKW5Ys5kn4WLjp5W52Hx8p91mh40VPzJNB5jwWYSW7DQH6X3zt0bqW8JMYJW6JbpDzW4nd-VB7S5cDgW3gPSL85PscNCW8c2jhZ3798B1W291x2x5Wlj5rW8n8Bxl4wwdBJW6vKYXl7gRYJn3mwn1" target="_blank" rel="noopener">audiobook</a></li>
<li>And of course — you can buy copies wherever fine books are sold.</li>
</ul>
The Cyber Safety Review Board Should Investigate Major Historical Incidents2023-05-25T00:00:00Zhttps://shostack.org/blog/cyber-safety-review-board-historical-incidents/AdamTarah Wheeler and Adam write in CFR<span class="single-post-excerpt"><!-- excerpt -->Tarah Wheeler and Adam write in CFR<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/cyber-safety-review-board-history-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/cyber-safety-review-board-history-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/cyber-safety-review-board-history-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/cyber-safety-review-board-history-800w.jpeg" width="800" height="307" alt="A small council meets to review an incident in a well lit room in Rivendell" decoding="async" />
</picture>
<p>The Council on Foreign Relations has published an essay by Tarah Wheeler and myself, <a href="https://www.cfr.org/blog/cyber-safety-review-board-should-investigate-major-historical-incidents" target="_blank" rel="noopener">The Cyber Safety Review Board Should Investigate Major Historical Incidents</a>. It starts:</p>
<blockquote>The U.S. Cyber Safety Review Board (CSRB) was established on February 3, 2022 after the major cyber incident known as SolarWinds. This board was intended to investigate and report on significant cyber incidents, following the example of the U.S. National Transportation Security Board’s ability to build and disseminate a respected shared history of aviation incidents. This newly-established CSRB was to investigate SolarWinds first, but the CSRB has not done so yet, and apparently will not be choosing to do so. The one report issued so far, on the open source Log4shell vulnerabilities, was clear and helpful, but there are much more impactful events that can be investigated to provide a pattern from which history can be established by careful investigation.</blockquote>
<p>Spoiler: I quote Tolkien, not Star Wars.</p>
<p class="credit-text">Image by Midjourney, "A small council meets to review an incident in a well lit room in Rivendell. Cinematic. High fantasy. lord of the rings. --ar 16:9" </p>
Application and AI roundup - May2023-05-30T00:00:00Zhttps://shostack.org/blog/appsec-roundup-may-2023/AdamThis month runs quite heavy on AI, but the CISA Safe by Design and Default document is going to be important for the next several years.<span class="single-post-excerpt"><!-- excerpt -->This month runs quite heavy on AI, but the CISA Safe by Design and Default document is going to be important for the next several years.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-april-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-april-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-april-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/appsec-roundup-april-800w.jpeg" width="800" height="356" alt="an AI reading a book" decoding="async" />
</picture>
<p>CISA’s <a href="https://www.cisa.gov/resources-tools/resources/secure-by-design-and-default" target="_blank" rel="noopener">new guidance</a> on safety by design and by default is another large brick in an emerging strategy, and it’s not just a US strategy. The document carries logos from the US, UK, Canada, New Zealand and Australia, the “five eyes” intelligence alliance, but also the Germans and Dutch, with multiple agencies stepping up from several of those countries. This sort of alignment is hard work, and will likely be followed by regulation and law in many of those places.</p>
<h3>AI</h3>
<ul>
<li>Ram Shankar Siva Kumar and Hyrum Anderson have announced <a href="https://amzn.to/3n6GoyY" target="_blank" rel="noopener">Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them</a>. I enjoyed it, review to follow, but highly recommended as an overview of the field.</li>
<li>Kai Greshake has an article, <a href="https://betterprogramming.pub/the-dark-side-of-llms-we-need-to-rethink-large-language-models-now-6212aca0581a?gi=fdaeb512f471" target="_blank" rel="noopener">The Dark Side of LLMs: We Need to Rethink Large Language Models</a> with the subtitle “We cannot deploy the current crop of LLMs safely.”</li>
<li>A group led by Gadi Evron released <a href="https://team8.vc/wp-content/uploads/2023/04/Team8-Generative-AI-and-ChatGPT-Enterprise-Risks.pdf" target="_blank" rel="noopener">Generative AI and ChatGPT Enterprise Risks</a>. I contributed.</li>
<li>OpenAI released a <a href="https://cdn.openai.com/papers/gpt-4-system-card.pdf" target="_blank" rel="noopener">GPT-4 System Card</a>, an extended writeup of its safety and security risks. (There are critics who say it doesn’t go deep enough, and is too focused on surface metrics. #include Jeff Goldberg meme.)</li>
</ul>
<h3>AI meets Appsec</h3>
<p>Static analysis tool Semgrep <a href="https://semgrep.dev/blog/2023/gpt4-and-semgrep-detailed" target="_blank" rel="noopener">announced a GPT-4 integration</a>. Their first example is fascinating: the code hardcodes a password, and they say it’s safe to ignore. I think it’s not safe, the sample code should show how to get the password from a secret store API. I had a good conversation with their folks about the tradeoff, and what I take here is the threat and the need for vigilance as we think about tooling.</p>
<h3>Application Security</h3>
<ul>
<li><a href="https://www.aboutlawsuits.com/fda-cybersecurity-vulnerabilities-dna-sequencing-devices/" target="_blank" rel="noopener">FDA Warns of Cybersecurity Vulnerabilities in Certain DNA Sequencing Devices</a> explains that Illumina’s DNA sequencer can be accessed without a password.</li>
<li><a href="https://www.theverge.com/2023/4/27/23700612/google-authenticator-end-to-end-encryption-e2ee" target="_blank" rel="noopener">Google plans to add end-to-end encryption to Authenticator</a> is a bit of a jaw-dropper. How did you roll out a feature that copies super-sensitive data to the cloud and not encrypt it?</li>
</ul>
<p>In her <a href="https://www.patreon.com/posts/cybersecurity-2-82384773" target="_blank" rel="noopener">cybersecurity roundup</a> where I saw both of those, Violet Blue asks the same question: “How do medical research devices get made without passwords? How do Google employees stay employed at Google or anywhere on Earth after releasing a security tool to move critical security data with no end-to-end encryption?” My answer comes in two parts. First, what to look for is far more obvious with hindsight. These systems are big and complex, and security is a wierd niche, and so building security into engineering processes is hard. The second part is that historically, Google has hired really smart people and trusted them to do the right thing. They’ve been described as ‘process alergic,’ and that works better when you’re smaller.</p>
<p>The reason I wrote <i>Threats</i> is that ‘security is a wierd niche’ is less and less acceptable as a reason to be insecure. As that happens, we need to make it easy to access the knowledge that people need.</p>
<p>I’ll close this month with a quite unusual denial of service: a performance of the musical <i>The Bodyguard</i> was halted <a href="https://www.cbc.ca/news/world/u-k-musical-the-bodyguard-halted-after-fans-ejected-for-refusing-to-stop-singing-i-will-always-love-you-1.6805525" target="_blank" rel="noopener">after a fan sang along</a>. </p>
<p class="credit-text">Image by Midjourney: an AI reading a book, while being hacked cinematic, dramatic, professional photography, studio lighting, studio background, advertising photography, intricate details, hyper-detailed, ultra realistic, 8K UHD --ar 8:3 --v 5 </p>
Phishing Defenses2023-06-07T00:00:00Zhttps://shostack.org/blog/phishing-defense/AdamPhishing behaviors, as observed in the wild.<span class="single-post-excerpt"><!-- excerpt -->Phishing behaviors, as observed in the wild.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/phishing-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/phishing-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/phishing-800w.jpeg" width="800" height="322" alt="A sunken boat, surrounded by phish" decoding="async" />
</picture>
<p>There’s a good article on the UK’s National Cyber Security Centre blog, <a href="https://www.ncsc.gov.uk/blog-post/telling-users-to-avoid-clicking-bad-links-still-isnt-working" target="_blank" rel="noopener">Telling users to ‘avoid clicking bad links’ still isn’t working</a>. It starts:</p>
<blockquote>Let's start with a basic premise: several of the established tenets in security simply don’t work. One example is advising users not to click on bad links. Users frequently need to click on links from unfamiliar domains to do their job, and being able to spot a phish is not their job. The NCSC carries out and reviews red team operations, and a common observation is that red teamers (and indeed criminals or hostile states) only need one person to fall for a ruse for an attacker to access a network.<br />
We're even aware of some cases where people have forwarded suspicious emails from their home accounts to their work accounts, assuming that the security measures in place in their organisations will protect them. </blockquote>
<p><b>That</b> is a heck of an observation, and deserves deeper analysis. Are the people forwarding suspicious emails to work being rational? Conflating burden and effectiveness? Cleverly offloading response work onto professionals? Lacking any other methods for testing a suspicious email?</p>
<p>Almost the entire article is excellent, but there’s a fly in the ointment, and that is a sentence which starts out well: “Firstly, because one of the above controls may fail, and so defence in depth is always good.” </p>
<p>Defense in depth may, indeed, sometimes be useful. But there’s an ocean between ‘sometimes useful’ and ‘always good.’ The article touches on one of the costs in the very preceding sentences, “... worth training users to spot suspicious links.” That training has cost, as does the work to spot and report links. Is that cost worthwhile? Is it the best use of our time in training people? The sentence reminded me of a lesson from J.E Gordon. As he writes in <cite>The New Science of Strong Materials</cite>,</p>
<blockquote>If we make the structure too weak we may save weight and money, but then the chance of the thing breaking too soon will become unacceptably high. Contrariwise, if we make a structure so strong that, in human terms, it is likely to last ‘for ever’ – which is what the public would like – then it will probably be too heavy and expensive. As we shall see, there are many cases where more danger is incurred by extra weight than is avoided by the corresponding increase of strength. (Chapter 15, and <a href="https://shostack.org/blog/structures-engineering-and-security/">My review of Gordon’s Structures</a>)</blockquote>
<p>People, and their time, are our most precious resource. We should be exceptionally cautious in how we spend it.</p>
<p class="credit-text">Image: Midjourney, “the underwater scene with different types of fish underwater character illustration, In the water there are fish, cleverly avoiding being caught by an evil fisherman in a rowboat on top of the water. The fish are the heros. in the style of hazy landscapes, light brown and azure, lush landscape backgrounds, orange and azure, flat backgrounds.” (disappointed)</p>
<p class="credit-text">Disclaimer: I really wanted to spell it <i>Cybre</i>, to go with Centre.</p>
AppSecPNW 2023 2023-06-12T00:00:00Zhttps://shostack.org/blog/appsecpnw2023/AdamAdam's AppSecPNW 2023 keynote<span class="single-post-excerpt"><!-- excerpt -->Adam's AppSecPNW 2023 keynote<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/sarahconnor-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/sarahconnor-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/sarahconnor-800w.jpeg" width="800" height="328" alt="A movie still of Sarah Connor from Terminator 2, saying ‘The Future is not set, there is no fate but what we make’" decoding="async" />
</picture>
<p>Over the weekend, I delivered the opening keynote of OWASP's <a href="https://www.appsecpnw.org/" target="_blank" rel="noopener">AppSec PNW 2023</a> in Portland. It was a very nice regional event, organized by OWASP Portland, Seattle, Vancouver and Victoria. I hope to find a little time to write more about it, but my slides are below.</p>
<iframe src="https://www.slideshare.net/slideshow/embed_code/key/uI6QLk9XJrS06m?hostedIn=slideshare&page=upload" width="476" height="400" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe>
<p><a href="https://www.slideshare.net/AdamShostack/from-tacoma-narrows-to-west-seattleappsec-lessons-from-a-century-of-pacific-northwest-bridge-failures" target="_blank" rel="noopener">archival link</a> at slideshare.</p>
AI will be the high interest credit card of 20232023-06-16T00:00:00Zhttps://shostack.org/blog/ai-high-interest-credit-card/Adam<span class="single-post-excerpt"><!-- excerpt --><!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/ai-high-interest-credit-card-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/ai-high-interest-credit-card-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/ai-high-interest-credit-card-800w.jpeg" width="800" height="434" alt="" decoding="async" />
</picture>
<p>I haven’t done a lot of work in Python, and I’ve never used it to produce graphs. But after an hour of pair programming and then using Chatgpt and Github Copilot got me quite far in writing a set of Jupyter Notebooks, and dramatically shrunk the effort to use and debug a new tool. I wanted to record some thoughts on the experience, and what it means for programming and for application security.</p>
<p>The world’s <a href="https://www.theregister.com/2020/12/04/microsoft_excel_lambda/" target="_blank" rel="noopener">most popular programming language</a> has long been ... Excel. So called “no code/low code” platforms like IFTTT, Zapier and more are hugely popular, and “software robots” that allow anyone to add layers of automation on top of extant software have brought a billion (with a B) dollars of revenue to UIPath. People want to take control of the systems that are in front of them, and fighting the syntax of a programming language is a barrier.</p>
<p>The strong temptation to distinguish these things from “real programming” mis-leads us. The reality is that even professional programmers routinely encounter scenarios beyond their expertise. Also, <a href="https://www.zdnet.com/article/github-developer-survey-finds-92-of-programmers-using-ai-tools/" target="_blank" rel="noopener">a new study</a> by GitHub finds 92% of programmers are using AI tools, and 70% of them think it’s helping them code.</p>
<p>My own experience is, if not for the LLM help, I’d likely have given up along the way. The faster cycles (compared to combing through Stack Exchange/Reddit threads) are meaningful, and frankly, that's going to be complex waters for appsec folks. I’m not, by any stretch, proud of the code, and much of what’s ok about it (parameterization) the tools didn’t really help with. But it runs. ChatGPT was fine as a ‘rubber duck debugging’ tool, even if it didn’t tell me that the problem was always indentation. (Who designs a programming language like that?!?)</p>
<p>These new ways of writing code require new structures to help us program and engineer systems. Much like compilers took developers away from writing machine code and let us focus on algorithms, AI assistance will take us further from the machine and what it’s doing. That will result in code that’s more bloated and slower, and in many cases, that’ll be ok. The added abstraction will help us do more.</p>
<p>In many ways, it’ll be like how NPM pulls in modules we’ve never heard of, with functionality we don’t understand and vulns we can’t stay on top of. But developers will get to first functionality faster, they’ll ship faster, and that will lead to technical debt accumulating ever faster. Like credit card debt, you can get shiny new things, and then find yourself unable to pay the cost of owning or maintaining them.</p>
<p>One engineering challenge is how to exploit the new tools without paying that cost. The survey mentions people checking in code they don’t understand(!?) and getting caught in code review. The issue of vulns in the code is well documented; when I search on Panda (one of the libraries I’m using) I get things like <a href="https://johnsyvineela.medium.com/python-for-cybersecurity-lesson-3-data-analysis-with-pandas-501441e14fe0" target="_blank" rel="noopener">Lesson 3: Data analysis with Pandas</a> rather than something about how to use Panda safely. </p>
<p>All in all, these models are not coming fast, but already here, and understanding what that means involves both considering things systematically, and also with the benefit of experimentation and experience. If you haven’t tried the tools yet, you’re missing out.</p>
<p class="credit-text">The title is an intentional reference to <a href="https://research.google/pubs/pub43146/" target="_blank" rel="noopener">Machine Learning: The High Interest Credit Card of Technical Debt</a>; the image is by Midjourney, with a prompt “:a human programmer, sitting in front of many monitors filled with software code, with a swarm of fairies pointing things out and making changes. --ar 8:3” (v5 was a big shift in this one). </p>
Worthwhile Books Q2 20232023-06-28T00:00:00Zhttps://shostack.org/blog/worthwhile-books-q2-2023/AdamBooks that I read in the second quater that are worth your time include two memoirs, a great book on the security of ML, and more!<span class="single-post-excerpt"><!-- excerpt -->Books that I read in the second quater that are worth your time include two memoirs, a great book on the security of ML, and more!<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/worthwhile-books-second-quarter-2023-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/worthwhile-books-second-quarter-2023-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/worthwhile-books-second-quarter-2023-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/worthwhile-books-second-quarter-2023-800w.jpeg" width="800" height="299" alt="a big pile of books" decoding="async" />
</picture>
<h3>Cyber</h3>
<ul>
<li><a href="https://books2read.com/spacerogue" target="_blank" rel="noopener"><cite>Space Rogue: How the Hackers Known as the L0pht Changed the World</cite></a>, Cris Thomas (aka Space Rouge) was a page-turner and a fun stroll down memory lane. Fortunately he’s forgotten how my first startup tried and failed to license L0phtCrack. </li>
<li><a href="https://amzn.to/3CNbyQc" target="_blank" rel="noopener"><cite>Password (Object Lessons series)</cite></a>, Martin Eve. A short book reflecting on the history of thinking about passwords, treating the secret knowledge of how to solve the labyrinth as password adjacent. Lovely and fun.</li>
<li><a href="https://amzn.to/42YSDfy" target="_blank" rel="noopener"><cite>A Hacker’s Mind</cite></a>, Bruce Schneier. Interesting thesis around the use of power, and how power allows people to twist (“hack”) the rules of systems.</li>
<li><a href="https://amzn.to/3Ntec2t" target="_blank" rel="noopener"><cite>Project Zero Trust</cite></a>, George Finney. A didactic story of how a business rolls out zero-trust, in the model of <i>The Goal</i> or <i>The Phoenix Project</i>.</li>
<li><a href="https://amzn.to/3NsMmDe" target="_blank" rel="noopener"><cite>The Fifth Domain</cite></a>, Richard Clarke and Rob Knake. (Re-read). A very solid broad view of the cybersecurity landscape, why it matters, and what we can do to improve things, from two folks who've worked for many administrations.</li>
<li><a href="https://amzn.to/3Np8cHR" target="_blank" rel="noopener"><cite>Not With a Bug, But with A Sticker</cite></a>,Ram Shankar Siva Kumar and Hyrum Anderson. A highly readable overview of the state of attacks on machine learning systems. The book orients around what can be done more than the technical details of how. (I gave it 5 stars on Amazon; my publisher sent me an e-book and I bought the hardcover.)</li>
</ul>
<h3>Non-fiction</h3>
<ul>
<li><a href="https://amzn.to/3NswjoW" target="_blank" rel="noopener"><cite>Creating the Twentieth Century: Technical Innovations of 1867-1914 and Their Lasting Impact</cite></a>, Vaclav Smil. A fascinating history. Smil makes the case that almost everything that defined the last century was invented in a roughly 50 year period starting in 1867. His (2005) argument that the computer doesn’t matter is looking somewhat, but not exceptionally, dated. Smil’s writing is not the most fluid, which I say not as a criticism but a heads-up.</li>
<li><a href="https://amzn.to/444cifO" target="_blank" rel="noopener"><cite>The art of writing technical books</cite></a>, Peter Gregory. A technical manual for writers of technical books. Lots of good advice, some of which I’d learned at the school of hard knocks, some of which led me to restructure my book template directory.</li>
<li><a href="https://shostack.org/blog/worthwhile-books-q2-2023/" target="_blank" rel="noopener"><cite>A Fish Has No Word for Water</cite></a>, Violet Blue. A memoir of growing up in a very different San Francisco. This was a challenging read because of what she’s lived through, but worthwhile and reminded me just how much San Francisco has changed since I first visited. </li>
<li><a href="https://amzn.to/3NLAsWm" target="_blank" rel="noopener"><cite>The New Science of Strong Materials</cite></a>, J.E. Gordon. Wonderfully readable summation of the science of strong materials, at least as the state of the art existed 50 years ago. Required reading before you can be minted as an expert on carbon fiber.</li>
</ul>
<h3>Fiction</h3>
<ul>
<li><a href="https://amzn.to/432xaCH" target="_blank" rel="noopener"><cite>Light from Uncommon Stars</cite></a> by Ryka Aoki is the joyful story of a runaway trans violinist, her teacher, who happens to have sold the souls of her students to hell, and the aliens who run the local donut shop.</li>
</ul>
<p>Also, I remembered to publish the <a href="https://app.box.com/file/1233637575450?s=sm6ekis3z6pul4tf91qucjjcc4f3rooy" target="_blank" rel="noopener">sample chapter</a> for <cite>Threats</cite>, and still have times available for my virtual book tour. The way it works is an organization buys a reasonable number of books, and I show up for a reading via your favorite meeting tool. If you're interested, <a href="https://shostack.org/contact">contact us</a>.</p>
<p class="credit-text">Image by Midjourney: a pile of books, watercolor, renoir, with a greenish theme, reminiscent of plants --ar 8:3. If you’re paying attention to these, note the very strong allusion to handwritten captions, memorized somewhere. </p>No guns please2023-06-30T00:00:00Zhttps://shostack.org/blog/no-guns-please/AdamArt imitating life in ways you didn’t think possible<span class="single-post-excerpt"><!-- excerpt -->Art imitating life in ways you didn’t think possible<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/no-guns-please-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/no-guns-please-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/no-guns-please-800w.jpeg" width="800" height="299" alt="People fighting at the top of the world" decoding="async" />
</picture>
<p>One of the plotlines in Neal Stephenson’s <a href="https://amzn.to/3r6yaZc" target="_blank" rel="noopener"><cite>Termination Shock</cite></a> revolves around increased tensions between China and India, fought at the “line of actual control”, with an agreed ban on guns. The sides invest tremendous amounts in national pride in this strangely-constrained way of fighting. And the wierd thing is — the “no guns” agreement is a real thing.</p>
<p>Last year, the <a href="https://www.economist.com/asia/2022/12/15/chinas-frontier-aggression-has-pushed-india-to-the-west" target="_blank" rel="noopener">Economist</a> wrote about how things are really heating up.</p>
<p class="credit-text">Midjourney: “two people are fighting with sleek carbon-fiber versions of traditional polearms. They're wearing fancy brightly colored tight modern mountaineering gear in team colors, made from high-tech fabrics. It's high noon. tall, craggy, snow-covered mountains frame the scene. cinematic, superhero movie. dramatic lighting, bright --ar 8:3”</p>
Microsoft Can Fix Ransomware Tomorrow2023-07-05T00:00:00Zhttps://shostack.org/blog/microsoft-can-fix-ransomware-tomorrow/AdamMy latest at Dark Reading draws attention to how Microsoft can fix ransomware tomorrow.<span class="single-post-excerpt"><!-- excerpt -->My latest at Dark Reading draws attention to how Microsoft can fix ransomware tomorrow.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/fix-ransomware-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/fix-ransomware-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/fix-ransomware-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/fix-ransomware-800w.jpeg" width="800" height="308" alt="An AI generate image for the blog" decoding="async" />
</picture>
<p>My latest article at Dark Reading is <a href="https://www.darkreading.com/vulnerabilities-threats/microsoft-can-fix-ransomware-tomorrow" target="_blank" rel="noopener">Microsoft Can Fix Ransomware Tomorrow</a>. It starts:</p>
<blockquote>
<p>Recently, I was at a private event on security by design. I explained that Microsoft could fix ransomware tomorrow, and was surprised that the otherwise well-informed people I was speaking to hadn't heard about this approach.</p>
<p>Ransomware works by going through files, one by one, and replacing their content with an encrypted version. (Sometimes it also sends copies elsewhere, but that turns out to be slow, and sometimes sets off alarms.) Software on Microsoft Windows uses an application programming interface (API) called "CreateFile" to access files. Somewhat confusingly, CreateFile not only creates files but is also the primary way to open them.</p>
<p>Microsoft should rate-limit the CreateFile() API. That is to say,
it should limit how often a given program can use the API. Because
you can't encrypt a file until you can open it, this would have a
dramatic impact on ransomware. It would slow it down, and help
defensive tools catch it in time for humans to react.</p>
</blockquote>
<p class="credit-text">Image by Midjourney: "microsoft fixing ransomware --ar 8:3" </p>
Valorizing Rule-breaking2023-07-17T00:00:00Zhttps://shostack.org/blog/valorizing-rule-breaking/Adam<span class="single-post-excerpt"><!-- excerpt --><!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/titan-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/titan-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/titan-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/titan-800w.jpeg" width="800" height="296" alt="An AI produced image four clowns building a submarine in a large, bright, well-lit hanger. The submarine dominates the scene and is much larger than the clowns. It has badly-fitted parts. The clowns are wearing baggy polka-dotted clown suits, bright clown makeup and wigs." decoding="async" />
</picture>
<blockquote>[Stockton Rush] understood that his approach “flies in the face of the submersible orthodoxy, but that is the nature of innovation,” he wrote. “We have heard the baseless cries of ‘you are going to kill someone’ way too often. I take this as a serious personal insult.” (<a href="https://www.newyorker.com/news/a-reporter-at-large/the-titan-submersible-was-an-accident-waiting-to-happen" target="_blank" rel="noopener">An Accident Waiting to Happen</a>, The New Yorker)</blockquote>
<p>I’ve been reading about Oceangate with increasing horror and anger, and the confusion over the relationship between engineering and — frankly — bullshit. I don’t think anyone has ever told me ‘you are going to kill someone.’ Certainly not ‘way too often.’ If you’re hearing that, the right response is to slow down, not take it as an insult.</p>
<p>Engineering is about making tradeoffs in complex situations. Sometimes, as you do so, you have to violate rules. But the causal relationship is rulebreaking sometimes allows innovation, not vice versa. What Stocktom Rush says here is not “just” upsetting, it literally killed people:</p>
<blockquote>“If you’re not breaking things, you’re not innovating,” Rush said, at the GeekWire Summit last fall. “If you’re operating within a known environment, as most submersible manufacturers do—they don’t break things. To me, the more stuff you’ve broken, the more innovative you’ve been.”</blockquote>
<p>This is so very dangerously wrong that I want to write something equally outrageous, like: the most useful thing Mr. Rush ever did was to die spectacularly so we can judge his perspective.</p>
<p>“To me, the more stuff you’ve broken, the more innovative you’ve been.” If it’s not unclear to you why that’s wrong, pick up a hammer and hit things until you’ve gotten clear about the amount of innovation you’re producing.</p>
<p>The trouble is the confusion over what comes first. The trouble is exacerbated by valorizing rule-breaking. That is, we ascribe value to the rule breaking. For example, the New York Times story, <a href="https://www.nytimes.com/interactive/2023/07/14/us/titan-submersible-implode-design.html" target="_blank" rel="noopener">The Maverick Design Choices That May Have Doomed Titan</a> actually does a good job of explaining some of the engineering tradeoffs they wanted to make. There were admirable goals being pursued, such as more people in the sub. But the headline? “Maverick?”</p>
<p>These were not admirable choices. Using <a href="https://futurism.com/oceangate-ceo-expired-carbon-fiber-submarine" target="_blank" rel="noopener">expired carbon fiber</a> to save money was, simply, dangerous. Manufacturers set expiration dates for a reason, and it may have been relevant. Of course, carbon fiber works better in tension than compression, so it may have not mattered. And when that’s the direction of your conversation, it’s time to quit.</p>
<p>The video <a href="https://www.youtube.com/watch?v=6LcGrLnzYuU" target="_blank" rel="noopener">The Questionable Engineering of Oceangate</a> is by someone who wrote his Master’s thesis on predicting failures in composites. The decisions being made by Oceansgate were foolhardy. That’s why Rush’s peers sent repeated warnings. It’s why people quit and then blew the whistle.</p>
<p>As the New Yorker article points out, the most innovative thing that Oceansgate did was to carefully structure their operations to minimize liability: “It is truly hard to imagine the discernment it took for Stockton to string together each of the links in the chain,” Patrick noted. “ ‘How do I avoid liability in Washington State? How do I avoid liability with an offshore corporate structure? How do I keep the U.S. Coast Guard from breathing down my neck?’”</p>
<p>Again, sometimes rule-breaking is required to achieve new results. But good engineering, by definition, includes an awareness that you’re venturing into the unknown, and caution and respect for the environment as you do so.</p>
<p>One of the reasons that I teach people to ask “what can go wrong” is that it’s expansive. (Expansive, not expensive! It can be very inexpensive.) It creates space to bring up concerns. And if what you’re building is a website, maybe moving fast and breaking things is ok. I don’t have a one size fits all approach to “how to threat model” because you can, and should, adjust your engineering effort and due care to the scenario in which you expect to operate.</p>
<p>Anything less is worth a response, from a raised eyebrow to outraged scorn.</p>
<p class="credit-text">Image: Midjourney four clowns building a submarine in a large, bright, well-lit hanger. The submarine dominates the scene and is much larger than the clowns. It has badly-fitted parts. The clowns are wearing baggy polka-dotted clown suits, bright clown makup and wigs. ultrarealistic, photorealistic, documentary. --ar 8:3 </p>
Threat Modeling and Secure by Design2023-07-19T00:00:00Zhttps://shostack.org/blog/cisa-secure-by-design-feedback/AdamOur feedback to CISA is now public<span class="single-post-excerpt"><!-- excerpt -->Our feedback to CISA is now public<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/letter-to-cisa-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/letter-to-cisa-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/letter-to-cisa-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/letter-to-cisa-800w.jpeg" width="800" height="295" alt="An AI generated image of a group of experts offering friendly advice on a complex technical problem to other experts hyperrealistic" decoding="async" />
</picture>
<p>The folks behind the <a href="https://www.threatmodelingmanifesto.org/" target="_blank">Threat Modeling Manifesto</a> have written a fairly detailed letter to CISA in response to CISA’s Secure By Design Guidance. We published it, and Infosecurity Magazine has the scoop at <a href="https://www.infosecurity-magazine.com/news/experts-cisa-update-secure-design/" target="_blank" rel="noopener">Industry Experts Urge CISA to Update Secure by Design Guidance</a>.</p>
<p class="credit-text">Image: Midjourney, a group of experts offering friendly advice on a complex technical problem to other experts hyperrealistic 8k --ar 8:3 </p>
Chuck, Acme, and Remediation Avoidance2023-07-27T00:00:00Zhttps://shostack.org/blog/chuck-acme-remediation-avoidance/AdamThreat modeling really CAN save you money, just ask Chuck!<span class="single-post-excerpt"><!-- excerpt -->Threat modeling really CAN save you money, just ask Chuck!<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/chuck-acme-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/chuck-acme-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/chuck-acme-800w.jpeg" width="800" height="390" alt="man sitting on gray arm chair using silver laptop computer on building balcony at daytime" decoding="async" />
</picture>
<p>Back in April, Forrester published <cite><a href="https://www.iriusrisk.com/forrester-tei-study" target="_blank">The Total Economic Impact™ Of The IriusRisk Automated Threat Modeling Platform</a></cite>. They looked at a composite of three organizations that moved from ad-hoc, manual threat modeling to automated threat modeling. One of the report’s key findings was that cost savings from “remediation avoidance” was the biggest cost saving category with $4.9 million over a three-year period. The others included (all over three years):</p>
<ul><li> Automation efficiencies ($1.8m)</li>
<li> Increased productivity in compliance and reporting ($3.9m)</li>
<li> Productivity from integration ($108k)</li>
<li> Avoiding security incidents ($35k)</li></ul>
<p>That’s a lot of money. With companies struggling right now, the cost savings increases your margins this year. And to be honest, I think that the avoidance of security incidents savings are surprising, and assume that the pen testers who find these things will be the kind that get forgiveness and negotiate rules of engagement before planting their flag.</p>
<p>That said, you should be asking the question: How do I get those savings?</p>
<h2>“Beep-Beep!”</h2>
<p>Let’s think about a hypothetical company, Acme. Chuck Jones is working on an app for drone-delivered anvils. He knows that his primary user-base are <a href="https://en.wikipedia.org/wiki/Wile_E._Coyote_and_the_Road_Runner" target="_blank">coyotes attempting to catch road runners</a>. Chuck and Acme didn’t realize that road runners only survive in this crazy world because of their mad skills — now including compromising mobile apps. </p>
<p>Acme has heard angry complaints about these problems, and now pays a lot for penetration testing. They get lots of ugly findings when they think they’re ready to ship. This leads to big, shouting escalations and difficult fights over what to fix first. They’re going to spend several months to a year trying to remediate all these things while still running the business, and, frankly, writing more vulnerable features.</p>
<h2>When threat modeling backfires</h2>
<p>After this pen test, <dn id="TPTB">The Powers That Be</dn> (<abbr>TPTB</abbr>) at Acme send Chuck a memo saying that he needs to threat model. Maybe they’ve heard about it from the pen testers, or they’ve seen it in some cyber resilience compliance requirement. Since TPTB doesn’t have a plan for rolling out the program, they just say “we’ll figure that out later.”</p>
<img style="float:right;height:auto;width:250px;padding-left:1em;" loading="lazy" src="https://shostack.org/blog/img/2023/workplace-conflict.webp" alt="Standing woman with arms outstretched in a questioning stance apparently upset at her seated male coworker who is looking away and is resting a hand on top of his head." />
<p>Chuck rolls his eyes. He’s been told to make this new “threat modeling” thing happen, but he doesn't know why. Worse, no one’s given him resources to successfully complete the task. Now he’s given another item on an already long “to do” list. Without the right skills and resources, Chuck flails like a falling coyote, and TPBT that sent the memo lose credibility.</p>
<p>Without a plan, threat modeling just becomes another waste of time.</p>
<h2>Threat modeling is a journey, not a destination</h2>
<p>Going from no threat modeling to good automated threat modeling takes time. If Acme had planned better, they would have given Chuck some resources and time to learn.</p>
<p>Let’s look at what happens if Acme gives Chuck some free resources, like our <cite><a href="https://shostack.org/files/papers/Fast-Cheap-and-Good.pdf" target="_blank">Fast, Cheap and Good: An Unusual Tradeoff Available in Threat Modeling</a></cite> whitepaper. </p>
<p>He learns that if he’d asked “what could go wrong” he’d have realized:</p>
<ul><li> Road runners are really good at modifying app data to protect themselves from flying anvils.</li>
<li> Gravity is a coyote’s natural enemy. </li>
<li> A coyote can overtake heavier objects to fall faster and get squished by them. </li></ul>
<p>If he’d thought about these threats as he designed the app, he’d have been able to address them. Customers would be happier, and less likely to squash themselves. There’d be fewer late nights working to re-work the laws of gravity.</p>
<p>Now that Chuck has proven to Acme that even lightweight threat modeling can save money and time, Acme decides to give him a training course so that he can uplevel these skills and apply them consistently to all software he develops. </p>
<h2>Automating for road runner speed</h2>
<p>To go back to the report, manual threat modeling is hard to scale. With a small team of five or ten developers, a threat modeling course alone may be enough. With small teams, you can manage the ambient information, or shared knowledge, that people have when working together on a project. As your team grows, ambient information needs to become more crisp so you can create consistency. </p>
<p>If we look at the report, its description of the composite company is:</p>
<blockquote>
multinational financial organization with headquarters in North America and Europe, and it generates revenues of $10 billion to $20 billion each year. It has an employee base of 50,000 to 100,000; the software security group consists of 50 security architects and 150 security champions. There are a total of 1,000 developers. The composite organization has a portfolio of 1,000 products.
</blockquote>
<p>With automation, you formalize the work and share the information across all these different people, but that doesn’t mean you can just use technology. That’s different from, say, static analysis, which can find insecure API use without a code review.</p>
<img style="float:left;height:auto;width:250px;padding-right:1em;" loading="lazy" src="https://shostack.org/blog/img/2023/frankie-lopez-unsplash-desert-speed.jpg" alt="Winding road through desert canyon at sunset, taken at slow shutter speed to create streaks of light from a passing car." />
<p>If your team only relies on the technology, you’ll be able to bring your threat modeling time down to eight hours from eighty. The question is whether that automation delivers the fullest value? If your team doesn’t understand how to threat model, then they won’t be able to gain the full benefit.</p>
<p>You’ll be achieving some remediation avoidance, saving some time and money. With the right skills to manage the automation, you optimize the value of that technology investment. </p>
<p>Even more importantly, you want the different people within your organization to have the appropriate skills for their jobs. This means that you want to look for courses focused on:</p>
<ul><li> <a href="https://courses.shostack.org/courses/threat-modeling-for-engineers-201-self-paced" target="_blank">Engineers</a> who aren’t security specialists.</li>
<li> Security architects who want an <a href="https://courses.shostack.org/courses/threat-modeling-222-self-paced" target="_blank">in-depth course</a> with information about things like tools, kill chains and risk management</li>
<li> <a href="https://courses.shostack.org/courses/threat-modeling-for-security-champs-oct-2023" target="_blank">Security champions</a> who introduce, lead, and evaluate threat modeling work</li></ul>
<p>With the right skills, you’ll find that your team can optimize the technology, reducing the time and costs of remediating product security laws with:</p>
<ul><li> Developers highlighting issues and implementing controls prior to them ever being an issue in the code</li>
<li> Security teams receiving the list of tasks before any code is written</li>
<li> Security champions creating metrics that prove the business value of threat modeling and the tool</li></ul>
<h2>People skills support technology investments</h2>
<p>Avoiding the time and money spent on remediating issues is one of threat modeling’s key business values. It’s the digital version of “measure twice, cut once.” If you’re just getting started on your threat modeling journey, you might want to check out some of <a href="https://courses.shostack.org/" target="_blank">our courses</a> to help your team build their skills. If you’re looking for something to help a larger team, we also provide customized <a href="https://shostack.org/training/corporate">corporate trainings</a> that can include incorporating your real systems and working with any threat modeling technologies you have. </p>
<p>To learn more about what’s available or to get notified when new courses go live, <a href="https://shostack.org/contact">contact us today</a>!</p>
<p class="credit-text">Primary photo by <a href="https://unsplash.com/@linkedinsalesnavigator?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank">LinkedIn Sales Solutions</a> on <a href="https://unsplash.com/photos/nysVe5rsJtc?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank">Unsplash</a>.<br />Photo "Conflict Avoidance" from 3back.com.<br />Photo by <a href="https://unsplash.com/@frankielopez?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank">Frankie Lopez</a> on <a href="https://unsplash.com/photos/sMzlrCmZSwY?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText" target="_blank">Unsplash</a>.
</p>
SEC Cybersecurity Rules2023-08-01T00:00:00Zhttps://shostack.org/blog/sec-cybersecurity-rules/AdamThe SEC has important new cybersecurity rules<span class="single-post-excerpt"><!-- excerpt -->The SEC has important new cybersecurity rules<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/sec-breach-reporting-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/sec-breach-reporting-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/sec-breach-reporting-800w.jpeg" width="800" height="289" alt="A Board meeting" decoding="async" />
</picture>
<p>Last week, the SEC <a href="https://www.sec.gov/news/press-release/2023-139" target="_blank" rel="noopener">issued new cybersecurity guidance</a>. It includes a requirement to disclose material breaches within four days, and does not, contrary to drafts, require boards to disclose their cyber expertise.</p>
<p>Fifteen years ago, Andrew Stewart and I published <a href="https://amzn.to/3KlP3pn" target="_blank" rel="noopener"><cite>The New School of Information Security</cite></a>, in which we called for greater disclosure of cyber incidents and learning more from them. At the time, it was a controversial position, and I’m glad to see it broadly coming to pass in the United States. There will certainly be companies who continue to sweep their issues under the rug, mis-lead their shareholders, and are punished not for the breach, but for the cover-up. The SEC has nice whistleblower rewards.</p>
<p>Some of the things I note in reviewing the 186(!) page final rule:</p>
<ul>
<li>“[W]e are streamlining Item 1.05 to focus the disclosure primarily on the impacts of a material cybersecurity incident, rather than on requiring details regarding the incident itself.” (Page 29) I find this balancing unfortunate, but it’s may be inline with what the SEC is able to do today. Investors have access to sophisticated advice, and can use details regarding the incident to better assess the firm’s investments in cybersecurity. More on this below.</li>
<li>“[W]e consider prompt reporting of material cybersecurity incidents to investors critical to investor protection and well-functioning, orderly, and efficient markets.” (Page 45) This is a new, and, as they discuss, different basis for disclosure than other laws.</li>
<li>“[A] large number of commenters expressed concern about the aggregation requirement, saying, for example, that companies experience too many events to realistically communicate internally upward to senior management, and that retaining and analyzing data on past events would be too costly.” (Page 49) Both parts of this, frankly, are fascinating positions to take. If either a company is experiencing too many events to communicate to management about, or if management doesn’t want to hear about them, how is that not of interest to investors? And if organizations fail to retain or analyze data on past events, how are they learning or improving?</li>
<li>The “interesting” parts of the rule ends around page 80, moving to technical matters like foreign issuers and structured information in submissions.</li>
</ul>
<p>Overall, the final rule clearly demonstrates thoughtful engagement with the comments. I don’t love every decision they made, but maybe the fix for that ought to have been me writing a comment in response to the draft rules.</p>
<p>Some other people have published worthwhile commentary, including a webinar, <a href="https://www.iansresearch.com/portal/executive-communications/briefing-sec-finalizes-cyber-disclosure-rules" target="_blank" rel="noopener">SEC Finalizes Cyber Disclosure Rules</a> (Jake Williams, Anand Singh, George Gerchow, IANS customer-only). They point out that the SEC’s attempt to avoid requiring details isn’t going to work, because your customers are going to demand those details from you, and if you’re a public company, scale will require you to publish them. Here, I think the SEC got the answers wrong, and they did so giving overly much attention to the idea that attackers need roadmaps. It seems pretty clear to me that they don’t. I want to renew the request to CISA to clarify that we made in the Threat Modeling Manifesto group’s <a href="https://www.threatmodelingmanifesto.org/cisa-feedback/" target="_blank" rel="noopener">letter to CISA</a>.</p>
<p>Looking to the future, I think there’s a <i>fascinating</i> question of the relationship between a breach and controls. If a company is breached, and fails to detect it for more than 180 days, how can the officers of the company be confident in their financial results?</p>
<p class="credit-text">Thanks to David Mortman for the nudge, and Midjourney: “A Wes Anderson style board meeting”</p>
Use the Defcon Wifi2023-08-02T00:00:00Zhttps://shostack.org/blog/use-the-defcon-wifi/AdamWhy it’s ok to use the Defcon wifi<span class="single-post-excerpt"><!-- excerpt -->Why it’s ok to use the Defcon wifi<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/use-the-defconwifi-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/use-the-defconwifi-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/use-the-defconwifi-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/use-the-defconwifi-800w.jpeg" width="800" height="299" alt="" decoding="async" />
</picture>
<p>Many security professionals, especially on social media, have an unfortunate tendency towards what we might call <i>performative security</i>. It’s where people broadcast their security measures to show how aware they are, and they suggest others follow their lead. It’s the inverse of <i>security theater</i> where ineffective security is imposed on us by organizations. It’s often ineffective, inconvenient, or both.</p>
<p>And today’s bad advice is “Don't use the defcon wifi.”</p>
<p>The Defcon and Blackhat networks are some of the most monitored networks anywhere. No one's going to blow an 0-day by using it on either network. This assumes everything's up to date and fully patched, and that you join the official networks, which are listed on signage around the venues. It also assumes that all your apps are using TLS everywhere. In contrast, there is a never-ending <a href="https://www.nsa.gov/About/Cybersecurity-Collaboration-Center/Enduring-Security-Framework/" target="_blank" rel="noopener">parade</a> of warnings about <a href="https://www.nytimes.com/2023/07/29/us/politics/china-malware-us-military-bases-taiwan.html" target="_blank" rel="noopener">malware in telecom infrastructure</a>. There are routinely reports of extra base stations around Las Vegas. (I’ve heard numbers on the order of an extra 50, of which I’d guess many are simply just-in-time capacity from authorized suppliers.) The lack of authentication of base stations is apparently a ...feature... that’s never going to be fixed.</p>
<p>Now, there’s another way to interpret this, which is to put your devices in airplane mode or a Faraday cage, and that’s not awful advice. Disconnect. Be present. Enjoy the events. Talk to the people around you. If you want to disconnect, a well-constructed Faraday cage is safer than airplane mode, which let bluetooth and wifi work.</p>
<p>When I was at Microsoft, some of my co-workers made a big deal of how they locked down their laptop, or bought a burner for Defcon. Me? I asked why our products weren’t safe enough to use in that environment, given that they’re certainly used in more dangerous places.</p>
<p class="credit-text">Image: Midjourney</p>
Airline Close Calls2023-08-22T00:00:00Zhttps://shostack.org/blog/airline-close-calls/AdamThoughts on an article on near misses<span class="single-post-excerpt"><!-- excerpt -->Thoughts on an article on near misses<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/near-misses-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/near-misses-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/near-misses-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/near-misses-800w.jpeg" width="800" height="351" alt="An AI image of two modern 737 airplanes nearly missing each other, styled as a 1950s pulp paperback cover" decoding="async" />
</picture>
<p>Many people have been sending me the New York Times story, <a href="https://www.nytimes.com/interactive/2023/08/21/business/airline-safety-close-calls.html?unlocked_article_code=jIjrzRvOrWA5dsjBxFbOfXSgCGjOFnIAiDEjMLWoluQQ-riav_9OWotdkBlDl29SK0iYeAK_DvUanMioDO5tzno5QvIEDn5Zlawt6gTo7fwxTiQqmTPXm_Up-3HXyN4TUGZzweLdk_LN8_Tb0iL_4-f3PNgy1SFBIiFtpiCgliHTeMYV1yclAYQRmP5vyXj5RWJaFdqeN3YYixM4LwOtQZb0ZU3GgfWDzA8ZdejrFY9EaysOlVOw4oXcMjDJ836KlBuqoxloQh4JIwG7aHs8Hz6XP_hfu0UKABf-o2GfS_w4WUXBnZzb2D4tNwWLUAFdy8gIWmvQYqf0AGZK-1qweTbs1qa6ka1vQT-029aIWN8upO_05Q&smid=url-share" target="_blank" rel="noopener">Airline Close Calls Happen Far More Often Than Previously Known</a>. The article analyzed both private FAA records, and the ASRS database maintained by NASA, and comments “The number of such near misses in the NASA database [...] has more than doubled over the past decade, though it is unclear whether that reflects worsening safety conditions or simply increased reporting.”</p>
<p> A few comments:</p>
<ul>
<li>First, I’m glad to see ASRS getting attention and use. The system of confidential reports, anonymously released being analyzed by outsiders illustrates a key strength, and one we should borrow for cybersecurity.</li>
<li>You might be asking, why does it take the Times to do this analysis? Shouldn’t NASA be doing it? First, it’s possible that NASA has done it, but not trumpeted the results. ASRS doesn’t tend to be flashy. Second, I understand that the FAA hasn’t asked for a budget increase for ASRS in a long time. It’s reasonable to think that ASRS could do more with a bit more money.</li>
<li>It’s clear that a big chunk of the problem is a lack of Air Traffic Control staffing. This has been a visible problem for forty years, and requires some political will to properly budget. We probably also need an approach where getting medical help for medical issues doesn’t cause traffic controllers to lose their jobs.</li>
<li>Overall, while this is not great, I prefer to see near misses reported, rather than covered up. It’s worrisome, but hiding the issues makes them harder to fix, not easier.</li>
</ul>
<p><a href="https://shostack.org/resources/lessons" target="_blank">Previous writings on learning lesson learning lessons from aviation</a>.</p>
<p class="credit-text">Image: Midjourney, two modern 737 airplanes nearly missing each other, styled as a 1950s pulp paperback cover </p>
Application and AI roundup - August2023-08-30T00:00:00Zhttps://shostack.org/blog/appsec-roundup-aug/AdamLots of interesting work in LLMs (again)<span class="single-post-excerpt"><!-- excerpt -->Lots of interesting work in LLMs (again)<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-aug2023-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-aug2023-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-aug2023-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/appsec-roundup-aug2023-800w.jpeg" width="800" height="346" alt="an AI reading a book" decoding="async" />
</picture>
<p>You can spell appsec without “LLM,” but only if you’re living in the past. And so, this roundup starts off with several general articles about LLMS, then we get to the appsec applications and appsec in general.</p>
<h3>AI</h3>
<ul>
<li><a href="https://www.honeycomb.io/blog/hard-stuff-nobody-talks-about-llm" target="_blank" rel="noopener">All the Hard Stuff Nobody Talks About when Building Products with LLMs</a>, by Phillip Carter.</li>
<li><a href="http://neil-clarke.com/ai-statement/" target="_blank" rel="noopener">AI Statement</a> by publisher Neil Clarke, whose publication was DDoS'd by a flood of LLM-generated stories because they pay good rates.</li>
<li><a href="https://davistreybig.substack.com/p/software-systems-in-a-world-of-llms" arget="_blank" rel="noopener">Software Systems in a World of LLMs</a>, by Davis Treyberg.</li>
<li><a href="https://www.cioapplications.com/cxoinsights/the-threat-model-of-ai-nid-10765.html" target="_blank" rel="noopener">The Threat Model of AI</a>, by Matt Clapham.</li>
<li>The <a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/" target="_blank" rel="noopener">OWASP Top 10 for Large Language Model Applications</a> was released.</li>
</ul>
<h3>Application Security</h3>
<ul>
<li><a href="https://medium.com/@Jamie_Dicken/letting-go-of-perfect-scaling-threat-modeling-aa3663b49895" target="_blank" rel="noopener">Letting go of perfect</a>, Jamie Dicken writes about how perfection is the enemy of scaling threat modeling.</li>
<li><a href="https://jfrog.com/blog/examining-openssh-sandboxing-and-privilege-separation-attack-surface-analysis/" target="_blank" rel="noopener">Examining OpenSSH Sandboxing and Privilege Separation</a>, by Yair Mizrahi. OpenSSH does an excellent job at attenuating its privileges, and anyone who writes code that sometimes needs extra privilege can learn from this.</li>
<li><a href="https://www.iriusrisk.com/forrester-tei-study" target="_blank" rel="noopener">Forrester Total Economic Impact(™) of IriusRisk Threat Modeling</a> looks at the IriusRisk Threat Modeling Tool’s economic impact. Much of it generalizes to threat modeling overall, and the way the impacts are broken out are worth some thought.</li>
<li>The Center for Medicare Security has a <a href="https://security.cms.gov/policy-guidance/threat-modeling-handbook" target="_blank" rel="noopener">threat modeling handbook</a>.</li>
<li><a href="https://www.threatmodelingconnect.com/general-discussion-32/regulatory-requirements-on-threat-modelling-reaches-the-apac-region-243" target="_blank" rel="noopener">Regulatory Requirements on Threat Modelling reaches the APAC region</a>, “Singapore’s 2018 Cybersecurity Act indirectly makes it a criminal offence not to perform cybersecurity risk assessments which include threat modelling on computers and systems that have been designated by the Cybersecurity Agency (CSA) as Critical Information Infrastructure (CII).”</li>
<li>Amazon has released a <a href="https://github.com/awslabs/threat-composer" target="_blank" rel="noopener">Threat Composer</a>, a “simple threat modeling tool to help humans to reduce time-to-value when threat modeling.”</li>
</ul>
<p class="credit-text">Image by Midjourney: an AI reading a book, while being hacked by evil disney characters. cinematic, dramatic, professional photography, studio lighting, studio background, advertising photography, intricate details, hy↩ per-detailed, ultra realistic, 8K UHD --ar 8:3 --v 5.0 </p>
Open training: Threat Modeling for Champs (October)2023-09-14T00:00:00Zhttps://shostack.org/blog/threat-modeling-for-security-champs/AdamSeats are available in our October training<span class="single-post-excerpt"><!-- excerpt -->Seats are available in our October training<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/threat-modeling-for-security-champs-october-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/threat-modeling-for-security-champs-october-800w.jpeg 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/threat-modeling-for-security-champs-october-800w.jpeg" width="800" height="802" alt="An AI generated image of people learning" decoding="async" />
</picture>
<p>Calling all technology professionals!</p>
<p>Are you ready to strengthen your ability to deliver rock-solid secure systems? Look no further than our <a href="https://courses.shostack.org/courses/threat-modeling-for-security-champs-oct-2023" target="_blank" rel="noopener">October Threat Modeling for Security Champs</a> course!</p>
<p>👨🏫This live-instruction Security Champs course kicks off October 2, and features 5 instructor-led discussions over a week in addition to individual, self-paced work. Our Security Champs 301 is equivallent to a 1-day in-person course, spanning 11 hours, including classes and homework. (Hey, things take a bit longer online!)</p>
<p>🔐As the name suggests, we recommend having prior experience, often from either our Engineers or Intensive Threat Modeling Courses. Familiarity with multiple ways to answer each question of the Four Question Framework is a great indicator that you’re ready.</p>
<p>📝 Ready to Assess Your Skills? Not sure if this course is the right fit for you? Take our <a href="https://forms.office.com/Pages/ResponsePage.aspx?id=DQSIkWdsW0yxEjajBLZtrQAAAAAAAAAAAAN__jwTWXdUMFo4WEpQMkhGR1lHTThKWU1BRlZUVEdJRy4u&wdLOR=c1E282901-3E8B-4AFC-B109-3C6800D9339D" target="_blank" rel="noopener">short skills assessment</a> to gauge your readiness for Threat Modeling for Security Champs.</p>
<p>💪Invest in your career and master the art of threat modeling with us! Secure your spot today and join the ranks of security champions!</p>
<p class="credit-text"> </p>
Comparing Retrospectives2023-09-19T00:00:00Zhttps://shostack.org/blog/comparing-retrospectives/AdamWe can learn a lot from comparing retrospectives<span class="single-post-excerpt"><!-- excerpt -->We can learn a lot from comparing retrospectives<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/comparing-retrospectives-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/comparing-retrospectives-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/comparing-retrospectives-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/comparing-retrospectives-800w.jpeg" width="800" height="305" alt="An AI generated image of a bright watercolor of storm clouds passing over a corporate campus, filled with low square concrete buildings. All are surrounded by well manicured lawns. In the background is the iconic arecebo radio telescope" decoding="async" />
</picture>
<p>Last week, I read two retrospectives. One, by Microsoft, was <a href="https://msrc.microsoft.com/blog/2023/09/results-of-major-technical-investigations-for-storm-0558-key-acquisition/" target="_blank" rel="noopener">Results of Major Technical Investigations for Storm-0558 Key Acquisition</a>, while the other was Thornton Tomasetti’s <a href="https://www.thorntontomasetti.com/resource/arecibo-telescope-collapse-forensic-report" target="_blank" rel="noopener">Arecibo Telescope Collapse Forensic Investigation</a>.</p>
<p>The contrast between the reports is remarkable, and so I’m gonna remark. Reports like these serve several purposes. The most important is to create and preserve an authoritative report of what happened, and ideally to extract lessons from it so the organization and perhaps others can learn from experience. Another is to reassure stakeholders that the organization has done so. In this post, I’m going to look at the reports as a mismatched pair, and ask what we can learn about reports by comparing and contrasting them. Smart retrospectives are important not only to our ability to anticipate and manage within our organization; sharing them helps us all get smarter faster, and get a higher return on our threat modeling work.</p>
<p>Looking at these reports, the first thing that jumps out is the length. Microsoft’s post is 748 words. I believe the executive summary of the Arecibo report is longer. Certainly, the full report, at 362 pages (with appendices) is longer — nearly a page for every two words in the Microsoft post. The Arecibo report is authored by three named and credentialed authors. The Microsoft report isn’t even attributed beyond “by MSRC.” Having written these, I have little doubt that’s because the list of lawyers and marketing people is far longer than the list of engineers who wrote the draft and checked what emerged.</p>
<p>I did want to comment on one specific point made by Microsoft: “Due to log retention policies, we don’t have logs with specific evidence of this exfiltration by this actor, but this was the most probable mechanism by which the actor acquired the key.” Those log retention policies were last revisited when disk cost dollars per megabyte, and after another breach at Microsoft, even less well documented, I advocated for longer retention, using free disk space to hold arbitrary amounts of old logs. I tweeted about that <a href="https://infosec.exchange/@adamshostack/111024172140973638" target="_blank" rel="noopener">here</a> (where Karl Baron points out a GDPR question), and also remembered <a href="https://infosec.exchange/@adamshostack/111020189356038095" target="_blank" rel="noopener">some of the complexities</a>. To be clear: log retention policies are a security design choice, and logs seem to be deleted by design more than by attackers.</p>
<p>The log retention issues, unlike several other issues, is not followed by a parenthetical “(this issue has been corrected).” It’s hard to interpret that omission.</p>
<p>There’s more ways to use these reports to reflect on how we learn from experience than simply the length. They include:</p>
<ul>
<li> <b>Private vs public</b>. Microsoft is a private firm, not a public agency. We reasonably expect more transparency from government (“our tax dollars at work”) than we do from companies. On the other hand, in this instance, Microsoft was serving at least two government departments, and this incident has gotten attention from <a href="https://www.wyden.senate.gov/imo/media/doc/wyden_letter_to_cisa_doj_ftc_re_2023_microsoft_breach.pdf" target="_blank" rel="noopener">the Senate</a>, and so there’s an argument for more transparency than if serving private companies. (I’ll return to this.)</li>
<li> <b>Physical vs software</b>. The failures of the Arecibo were failures of physical systems, which can be easier to observe or analyze.</li>
<li> <b>Records</b>. The engineering diagrams and plans of record for the telescope go back 53 years, and specifically record as designed, as built, and modifications. They are specific and clear, and the report includes, for example, scanned in cable tension tables from the structural drawings. We rarely invest in similarly clear records for software, and don’t require an engineer of record to sign off on the designs or changes.</li>
<li> <b>Standards</b>. The Arecibo report cites (for example) the American Society of Civil Engineers ACSE-19-16 Structural Applications of Steel Cables for Buildings and the American Association of State Highway and Transportation Officials, Bridge Design Specifications (page 13). There are fewer standards for software construction, and those that we have are far less specific.</li>
<li> <b>Adversaries vs nature</b>. The Microsoft breach was was the result of action by “Storm-0558”, who are presumed to be the Chinese government. People are able to adapt and adjust their attacks, and there is a faction that argues that Microsoft should not “tip its hand” as to what it knows about the intrusion, in case there are things that Microsoft knows that Storm-0558 doesn’t know that Microsoft noticed or knows. There is no need for this hall of mirrors in dealing with a failed zinc-filled spelter socket assembly.</li>
<li> <b>Specific lessons</b>. The telescope failure provides lessons that can clearly be applied elsewhere, including choices of safety factors in similar systems (especially those exposed to hurricanes or earthquakes), and the inadvisability of ignoring cable slip. It is less clear if the issues in the Microsoft report are generally applicable, but this is a defect of the Microsoft report. What, precisely, was the race condition that allowed key material in a crash dump? That is, what process or thread was racing with what? Are the issues “The key material’s presence in the crash dump was not detected by our systems”, and “Our credential scanning methods did not detect its presence” discussing the same issue or separate issues? Both are commented as “this issue has been corrected,” which indicates that perhaps they’re separate issues? The “post incident review” lists them separately as items 2 and 3.</li>
</ul>
<p>I said that I’d return to the issue of transparency. There are several goals which transparency can support. They include:</p>
<ul>
<li> Recording specifics for learning and sharing engineering lessons</li>
<li> Informing investors</li>
<li> Reassuring customers</li>
</ul>
<p>As I discuss in the “specific lessons” point above, the blog post falls short in the recording of lessons and the sharing of engineering lessons.</p>
<p>With regards to informing investors, in recent guidance, the SEC has encouraged firms to disclose material cybersecurity incidents to allow investors to incorporate that into their investment decisions. Microsoft has <a href="https://shostack.org/blog/sec-cybersecurity-rules/" target="_blank" rel="noopener">apparently</a> decided that this issue is not material, and materiality is a legal term. I’ll simply argue that how the flaws crept into Microsoft’s operational systems and were not discovered is information that an investor might want to understand.</p>
<p>Microsoft states that “As part of our commitment to transparency and trust, we are releasing our investigation findings.” I would like to encourage them to demonstrate that commitment to transparency with a release of the full internal report, possibly with small redactions.</p>
<p class="credit-text">[Update Sept 25: Fixed link to Arecibo report.]<br />Image by midjourney, “a bright watercolor of storm clouds passing over a corporate campus, filled with low square concrete buildings. All are surrounded by well manicured lawns. In the background is the iconic arecebo radio telescope --ar 8:3”</p>FDA Final Cyber Guidance is out2023-09-26T00:00:00Zhttps://shostack.org/blog/fda-final-cyber-guidance/AdamThe FDA has released their new guidance, which will be broadly impactful.
<span class="single-post-excerpt"><!-- excerpt -->The FDA has released their new guidance, which will be broadly impactful.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/cybersecurity-in-medical-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/cybersecurity-in-medical-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/cybersecurity-in-medical-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/cybersecurity-in-medical-800w.jpeg" width="800" height="226" alt="The title of the FDA's new guidance, 'Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions'" decoding="async" />
</picture>
<p>The FDA has released their new final guidance, <a href="https://www.fda.gov/regulatory-information/search-fda-guidance-documents/cybersecurity-medical-devices-quality-system-considerations-and-content-premarket-submissions" target="_blank" rel="noopener">Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions</a>. This formally replaces guidance that’s nearly 9 years old. They’ve also released an <a href="https://www.fda.gov/medical-devices/digital-health-center-excellence/cybersecurity-medical-devices-frequently-asked-questions-faqs#6512f1fc068bb" target="_blank" rel="noopener">FAQ</a>.</p>
<p>I haven’t yet had a chance to dig in, but on the surface, it seems similar to the April draft.</p>
<p>This is, I think, the most detailed guidance from a regulator on quality systems and the security of devices. And while it’s roughly mandatory for medical device makers, product security professionals in other fields would do well to study it and ask what the reasonable implications are for their own work.</p>
<p>This guidance is far more specific, and far more product-focused, than say, the FTC rules for businesses, or the SEC rules to protect investors. It says, “In general, FDA’s guidance documents do not establish legally enforceable responsibilities.” You have choices in how you follow it. Some companies want more specific rules so they can follow them, others want less specificity so they can make choices about how to achieve their goals.</p>
<p>The first impact that this will have is obviously on medical device makers, who need to ensure their new submissions are aligned with the guidance. The second impact will be on other industries. Other regulators are paying attention, both in the medical space internationally, and US regulators who’ll treat this as a thoughtful approach that will inform other guidance and regulation. The third impact will be on product liability lawsuits. The guidance will be treated as a point of reference for reasonable practice in other areas. Those arguing that their SDL is sufficient will say it only needs to be a subset of what’s done for medical devices. Those arguing that a product’s insecurity harmed them will claim that the other side ought to have been closer to what FDA suggested.</p>
Application and AI roundup - September2023-10-04T00:00:00Zhttps://shostack.org/blog/appsec-roundup-september/AdamSeptember was a big month in appsec for both memory safety and policy<span class="single-post-excerpt"><!-- excerpt -->September was a big month in appsec for both memory safety and policy<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/roundup-appsec-oct2023-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/roundup-appsec-oct2023-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/roundup-appsec-oct2023-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/roundup-appsec-oct2023-800w.jpeg" width="800" height="300" alt="an AI reading a book" decoding="async" />
</picture>
<p>September was a month big appsec month for both memory safety and policy, with a lot of sharp elbowed takes on C, and a lot of important developments in policy, including medical devices and open source.</p>
<h3>C and Memory Safety</h3>
<ul>
<li><a href="https://danakj.github.io/2023/08/20/cpp-in-three-parts.html" target="_blank" rel="noopener">A tale of C/C++ development in three parts</a>. Dana Jansens writes about adding a markdown parser to a program. Their blog is titled “An Update on Writing Memory Safety Bugs,” which is either amusing or frustrating.</li>
<li><a href="https://research.swtch.com/ub" target="_blank" rel="noopener">C and C++ Prioritize Performance over Correctness</a>, by Russ Cox. </li>
<li><a href="https://blog.logrocket.com/getting-started-zig-programming-language/" target="_blank" rel="noopener">Getting started with the Zig programming language</a>, by Shalitha Suranga. I’ve been hearing about Zig a bunch lately. One of the interesting things is Zig’s focus on doing everything it can at compile-time, which means that everything in memory at runtime is tainted. This article doesn’t cover that, which is reasonable for a getting started, but I wanted to mention it.</li>
<li><a href="https://googleprojectzero.blogspot.com/2023/08/summary-mte-as-implemented.html" target="_blank" rel="noopener">Summary: MTE As Implemented</a> from Google Project Zero. I believe MTE is the production name for the CHERI project. The aspects of it not addressing speculative execution are interesting regardless. </li>
<li><a href="https://youtube.com/live/1EulLwyd2_U?si=eAPynHbxc2wgY_p5%E2%80%A6" target="_blank" rel="noopener">Safety and Security: the future of C and C++</a> (rough version) by Robert Seacord, a leader in C standards (and the security of C programming).</li>
</ul>
<h3>AI</h3>
<p>There's a short article, <a href="https://www.cisa.gov/news-events/news/software-must-be-secure-design-and-artificial-intelligence-no-exception" target="_blank" rel="noopener">Software Must Be Secure by Design, and Artificial Intelligence Is No Exception</a>, Christine Lai and Jonathan Spring, at CISA. I like their diagrams, and this will probably influence policy going forward. </p>
<h3>Policy</h3>
<ul>
<li>The FDA has released their new final guidance, <a href="https://www.fda.gov/regulatory-information/search-fda-guidance-documents/cybersecurity-medical-devices-quality-system-considerations-and-content-premarket-submissions" target="_blank" rel="noopener">Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions</a>. This formally replaces guidance that’s nearly 9 years old, my comments are <a href="https://shostack.org/blog/fda-final-cyber-guidance/" target="_blank">here</a>, and since I wrote that, an <a href="https://www.govinfosecurity.com/interviews/inside-look-fdas-cyber-review-process-for-medical-devices-i-5294" target="_blank" rel="noopener">excellent interview</a> with Jessica Wilkerson of FDA appeared on GovInfoSecurity. [Update: The FDA has announced a <a href="https://www.fda.gov/medical-devices/workshops-conferences-medical-devices/webinar-final-guidance-cybersecurity-medical-devices-quality-system-considerations-and-content" target="_blank" rel="noopener">webinar</a> on the Final Guidance.]</li>
<li>CISA has released their <a href="https://www.cisa.gov/sites/default/files/2023-09/CISA-Open-Source-Software-Security-Roadmap-508c%20%281%29.pdf" target="_blank" rel="noopener">open source strategy</a>. It opens with the importance of partnering with the community.</li>
<li>The Office of the National Cyber Director is seeking comments on <a href="https://www.federalregister.gov/documents/2023/08/10/2023-17239/request-for-information-on-open-source-software-security-areas-of-long-term-focus-and-prioritization" target="_blank" rel="noopener">Open-Source Software Security: Areas of Long-Term Focus and Prioritization</a> by October 9.</li>
<li>The city of Seattle is suing Hyudnai and Kia for not installing certain anti-theft technology which is standard across other car makers. (Story at <a href="https://www.vice.com/en/article/93kdmp/kias-and-hyundais-keep-getting-stolen-by-the-thousands-and-cities-are-suing" target="_blank" rel="noopener">Vice</a>, <a href="https://storage.courtlistener.com/recap/gov.uscourts.wawd.318515/gov.uscourts.wawd.318515.1.0_1.pdf" target="_blank" rel="noopener">complaint</a>.) The core legal theory is “Car thefts are expensive and dangerous; Measures to Prevent Vehicle Theft Have Existed for Over a Century; Adoption of Modern Engine Immobilizers is widespread; Defendants’ Deviation from the Industry Standard.” (Pulling from the table of contents of the complaint). While I’m not a lawyer, it seems to me that this theory could apply to many things we do in the software world.</li>
</ul>
<h3>Threat Modeling</h3>
<ul>
<li><a href="https://excalidraw.com/" target="_blank" rel="noopener">Excalidraw</a> is a new, simple, free drawing tool that produces sketch-like drawings. The default shapes don’t include a drum, I added the “IT Icons” set to get that.</li>
<li>Seats remain available for my <a href="https://owasp2023globalappsecwashin.sched.com/event/1MXE5/2-day-trainingadam-shostacks-threat-modeling-intensive">Threat Modeling Intensive</a> at OWASP Gloabl AppSec DC, Nov 1-2. (In person only).</li>
</ul>
<p class="credit-text">Image by Midjourney: a set of star wars critters being rounded up by a bot. Updated to add FDA webinar.</p>
Threat Modeling on Sale2023-10-06T00:00:00Zhttps://shostack.org/blog/threat-modeling-on-sale/AdamBest price ever for Threat Modeling<span class="single-post-excerpt"><!-- excerpt -->Best price ever for Threat Modeling<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/threat-modeling-sale-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/threat-modeling-sale-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/threat-modeling-sale-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/threat-modeling-sale-800w.jpeg" width="800" height="310" alt="a screencapture from Amazon" decoding="async" />
</picture>
<p>Amazon is having a one day sale on my <cite><a href="https://www.amazon.com/Threat-Modeling-Designing-Adam-Shostack-ebook/dp/B00IG71FAS/" target="_blank" rel="noopener">Threat Modeling: Designing for Security</a></cite>. $4.99 is the lowest price I remember.</p>
<p>If you already have a copy and enjoyed it, let me mention that Amazon ratings and reviews are incredibly important to what Amazon promotes, which is now an important way people learn about new books. If you have a second to give it 5 ⭐️ or a few minutes to write a nice review, I — like any author — would very much appreciate it.</p>
Adversarial Thinking and Wargames2023-10-12T00:00:00Zhttps://shostack.org/blog/adversarial-thinking-wargames/AdamThinking about adversarial thinking<span class="single-post-excerpt"><!-- excerpt -->Thinking about adversarial thinking<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/adversarial-thinking-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/adversarial-thinking-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/adversarial-thinking-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/adversarial-thinking-800w.jpeg" width="800" height="298" alt="cadets at a high tech military academy" decoding="async" />
</picture>
<p>At a recent private event, I had the pleasure of meeting a professor from a military academy, and we had a brief conversation about how they teach adversarial thinking. They said something like “its at the core of what we teach.”</p>
<p>Regular readers know <a href="https://shostack.org/archive/2008/09/think-like-an-attacker/" target="_blank" rel="noopener">my dislike</a> of demands to “Think like an attacker”, and that extends to both reliance on “adversarial thinking” and worrying about specific attackers. But I’ve been thinking a lot about the conversation and what we might learn about adversarial thinking.</p>
<p>First, I think that it would be silly, naive or arrogant for me to declare that the military academies don’t understand their own pedagogy. So, what are they teaching and how does that relate to cybersecurity adversarial thinking? How do they teach it?</p>
<p>To help me understand, I read Martin van Creveld’s <a href="https://amzn.to/45bnhUR" target="_blank" rel="noopener"><cite>Wargames</cite></a>, in which he discusses the use of wargames as a teaching tool. He also discusses how war differs from things like gladiatorial combat or dueling. In those, there is less opportunity for surprise, mis-direction, or selecting terrain. You know where your opponent is, and when you’ll fight. You have little opportunity to bring superior force to bear on an unprepared enemy. War also differs in that physical factors like fatigue, weather, and bad food impact combatants, as does physical risk. Those are hard to simulate in either a board or computerized wargame.</p>
<p>Those sorts of wargames allow participants to focus on learning strategy, and even adversarial thinking. You have resources X and Y, and you can deploy them in various ways to fight your enemies. Your enemy will be doing the same, and you need to learn to think “adversarially” to anticipate how they may behave, and organize your activity to beat them as they try the same. This is similar to business strategy asking what advantages do we have, and how can we maximize them?</p>
<p>In commercial cybersecurity, we have a somewhat different problem. Companies do not have forces that we can bring to bear to break the enemy’s will or ability to fight. They are not continuing politics by other means. We are convincing customers to give us their money, time, attention or personal data. The goal of a cybersecurity team is to allow their employer to meet those business goals. (I’ll ignore the idea of hacking back, because of the hard to surmount legal and business challenges, and note that governments have diferent priorities.)</p>
<p>We can think about how companies educate their employees or even teach them specific skills, and I'll note that cybersecurity ranges, where staff can practice with specific technologies, have an interesting relationship to wargames, where both tend to eliminate the physical factors like fatigue and the stress of the CEO calling every 30 minutes for updates. Tabletops do something similar for executives; neither really touches on adversarial thinking as a learning goal.</p>
<p>Those differences aside, we can ask how they teach adversarial thinking. It’s first worth noting that people who apply to the military academies first self-select and then are selected for those who focus on conflict. Those who are admitted and succeed are taught over several years to view the world in terms of fights and winning those fights. They learn this through the framing that their teachers use, and through leaning experiences including war games and maneuvers. (There are infinite other frames we can use. Examples include an engineering frame of “will the bridge bear weight?” a business frame of “how can we best serve customers?”)</p>
<p>And so we can distinguish between adversarial thinking for the military and adversarial thinking for cybersecurity on at least three grounds. First is the situation in which they find themselves, second is the amount of time they spend learning to think adversarially, and third is the people who learn that.</p>
<p class="credit-text">Midjourney: “cadets at a high tech military academy, standing around a table with a map and miniatures. The are actively learning to think about strategy and defeating an opponent. The scene is sunny, brightly lit, outdoors in a field”</p>
Security Principles in 20232023-10-27T00:00:00Zhttps://shostack.org/blog/security-principles-in-2023/AdamPrinciples are lovely, but do they lead us to actionable results?<span class="single-post-excerpt"><!-- excerpt -->Principles are lovely, but do they lead us to actionable results?<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/security-principles-2023-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/security-principles-2023-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/security-principles-2023-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/security-principles-2023-800w.jpeg" width="800" height="300" alt="Image by Midjourney, “stone tablets displayed on tall pedastals under spotlight in a museum, professor giving a lecture. The room is brightly lit with sunlight streaming in.”" decoding="async" />
</picture>
<p>Security principles are one of those secure design approaches that are so obvious that it’s hard to argue with them. Twenty years ago, I used Star Wars to illustrate <a href="https://shostack.org/blog/the-security-principles-of-saltzer-and-schroeder">Saltzer and Schroeder’s principles</a>.</p>
<p>Right now, I want to talk about the evolution of my thinking to contribute to a dialog about how and when to use principles in security engineering.</p>
<p>In 2005, my view of security engineering was centered on adversarial reviews. I felt the best reviews had the most cutting insights, and conflict was a natural, even desirable result of quality findings, often ones that were hard to address. Principles are an excellent tool for that. <i>“Your (software) is not (principle) enough!”</i></p>
<p>Fast forward to when we cleaned up the Microsoft SDL in 2008 or so (maybe v2?), I had an argument with several people because I really wanted to keep the principles in our developer education. They made the case that the principles were imprecise, hard to evaluate, and almost never came up in engineering discussion. Those were all fine reasons to stop. The only reason to keep them was to avoid giving outside critics ammo. But critics are going to critique, and we wanted to focus on the highest return tools we had. (In hindsight, there was another choice we might have made, to improve either our teaching or the principles themselves. But I don’t think that came up, because there wasn’t enough evidence that we should invest in principles.)</p>
<p>My views on design principles have evolved. Since writing my Principles illustrated with Star Wars posts nearly twenty years ago and <a href="https://shostack.org/training">teaching</a> thousands of developers about threat modeling and security design. By 2023, I’ve observed that design principles are harder to learn and apply than <a href="https://shostack.org/threatmodeling">threat modeling techniques</a>. It’s not that the principles are bad — I think most have stood the test of time — but I’ve seen that applying principles in design analysis requires careful analysis, abstraction, and the ability to conceive and compare a set of designs. It’s both a more nuanced skillset, and the results are harder to assess. If you ask two experts to apply the principles, you may get very different results. For example, is “Run as a normal user, not Administrator” a good application of Least Privilege? Does it require running as a sandboxed “Modern” app? More segmentation? The choice about how far to take a principle makes it both harder to teach and harder to apply.</p>
<p>Today, I think of principles as one of the types of design tool that can be used to evaluate or improve a design. That set also includes formal methods, security patterns, root cause analyses, and threat modeling. Formal methods are finally making a transition to practice. Security patterns seem stuck as another approach that’s so obvious that they’re hard to argue with, but developing useful patterns seems empirically hard. Root cause analysis is helping us deal with families of memory corruption by showing the need to improve platforms and languages.</p>
<p>Above, I said that principles are excellent tools for “Your X is not Y enough!” That’s true, but incomplete. Senior engineers can and should use principles as rubrics, as distillations of wisdom, and as reminders of properties their systems should have. All else being equal. Which it never is, and that’s what makes principles more useful to more senior practitioners. Their experience leads them to be more willing and able to consider alternative designs. That’s what makes them more worth teaching in a college course than a two or three day training.</p>
<p>I also said that <em>most</em> of the Principles had stood the test of time, and that begs the question, which haven’t? I think the worst faring is Least Common Mechanism. We’ve learned it’s hard to get software right, and the principle seems to lead people to re-invent the wheel. (I think the principle could possibly be restated today to “Isolate system components”, but it’s only about that when you get to the explanation. The principle itself is misleadingly named/summarized.) The second worst faring is Least Privilege. Privilege is simply a hard concept, and as I discuss at length in <cite>Threats</cite>, Authority is a good replacement.</p>
<p>For all those issues, I think that understanding principles and how to use them is part of what makes a senior engineer. Today in my Masters-level, quarter-long <a href="https://courses.cs.washington.edu/courses/csep590b/23au/" target="_blank" rel="noopener">Security Engineering</a> course, I teach both their work and those of Paul Van Oorschot (in Chapter 1 of his excellent <cite><a href="https://people.scs.carleton.ca/%7Epaulv/toolsjewels.html" target="_blank" rel="noopener">Computer Security and the Internet: Tools and Jewels from Malware to Bitcoin</a></cite>).</p>
<p>Lastly, as an aside, a princip<b>le</b> is “a fundamental truth or proposition that serves as the foundation for a system of belief.” A princip<b>al</b> is “first in order of importance; main” or “the person with the highest authority or most important position.”
</p><p class="credit-text">Image by Midjourney, “stone tablets displayed on tall pedastals under spotlight in a museum, professor giving a lecture. The room is brightly lit with sunlight streaming in.” </p>
Application and AI roundup - October2023-11-09T00:00:00Zhttps://shostack.org/blog/appsec-roundup-oct-2023/AdamExciting news from the SEC, lots of AI, and lots of threat modeling.<span class="single-post-excerpt"><!-- excerpt -->Exciting news from the SEC, lots of AI, and lots of threat modeling.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-oct-2023-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-oct-2023-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-oct-2023-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/appsec-roundup-oct-2023-800w.jpeg" width="800" height="294" alt="a robot reading a book" decoding="async" />
</picture>
<p>Perhaps the largest news I saw in October was the SEC <a href="https://www.sec.gov/news/press-release/2023-227" target="_blank" rel="noopener">charging</a> Solarwinds and CISO Tim Brown with fraud. The full complaint is on the upper right of the press release page. The essence of the complaint seems to be “It’s not the crime, it’s the coverup,” and in particular, telling the public things that were at odds with what Brown was saying internally. Note that a “Complaint” is the government’s case — as strongly as they can make it — and as far as I know, neither Brown nor Solarwinds has responded. Key takeaway: Make sure that your public discussion of your appsec program doesn’t mislead. And as you’ll learn in the AI section... it’s important to your career.</p>
<h3>AI</h3>
<ul>
<li><a href="https://www.semanticscholar.org/paper/Survey-of-Vulnerabilities-in-Large-Language-Models-Shayegani-Mamun/4f63c5a89c7299a864c6c48aa1844fb0fe8c9437" target="_blank" rel="noopener">Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks</a> by Efran Sheygani and collaborators. A long academic survey of the state of the field. (40 pages + 14 pages of references.)</li>
<li>Alex Stamos and Chris Krebs <a href="https://intel.ks.group/p/ai-executive-order-key-takeaways" target="_blank" rel="noopener">analyze</a> the new <a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" rel="noopener">AI Executive Order</a>.</li>
<li><a href="https://arxiv.org/abs/2307.11760" target="_blank" rel="noopener">Large Language Models Understand and Can be Enhanced by Emotional Stimuli</a>, Cheng Li and collaborators. Adding “it’s important to my career” gets better results from LLMs, via <a href="https://fedi.simonwillison.net/@simon/111342409859246588" target="_blank" rel="noopener">Simon Willison</a>.</li>
</ul>
<h3>Threat Modeling</h3>
<ul>
<li>Eric Lawrence <a href="https://textslashplain.com/2023/10/04/security-tradeoffs-privacy/" target="_blank" rel="noopener">makes</a> an excellent point about browser privacy: URL reputation checks might result in the security provider knowing what URLs are visited, but if you can’t trust the vendor of your web browser, your threat model has bigger (insurmountable) problems. He’s both right and wrong. Browser makers need privacy clearer policy, both in the sense of “what we expect to collect” and “what we allow you to configure.” That’s not easy, and it not being easy is not an excuse.</li>
<li>Dan Goodin starts from the headline <a href="https://arstechnica.com/information-technology/2023/11/no-okta-senior-management-not-an-errant-employee-caused-you-to-get-hacked/" target="_blank" rel="noopener">No, Okta, senior management, not an errant employee, caused you to get hacked</a> and continues with “The fault, instead, lies with the security people who designed the support system that was breached.” Dan packs a lot of analysis into his article, and I think Dan’s underlying and accurate message is better threat modeling might have prevented this, or perhaps better followup on issues found by threat modeling.</li>
<li>Dana Epp writes about <a href="https://danaepp.com/adversarial-thinking-for-bug-hunters" target="_blank" rel="noopener">Adversarial Thinking for Bug Hunters</a>. I think this is an important thread, and so I want to respectfully pull on it: what makes this ‘adversarial?’ </li>
<li><a href="https://futurism.com/neoscope/brain-implant-removed-consent" target="_blank" rel="noopener">Doctors Remove Woman’s Brain Implant Against Her Will</a> is a fascinating story. It seems likely the the implant was not designed to stay in forever, and that the creators were concerned it would physically degrade and cause worse damage if left in. After a lot of thinking, I think that’s a reasonable concern with really tragic consequences.</li>
</ul>
<h3>Application Security</h3>
<ul>
<li><a href="https://theupdateframework.github.io/specification/latest/" target="_blank" rel="noopener">The Update Framework</a> is an open source framework designed to enable secure updates, which is harder than it sounds, and an ideal target for open source to solve once, and solve well.</li>
<li>In <a href="https://research.swtch.com/nih" target="_blank" rel="noopener">Running the “Reflections on Trusting Trust” Compiler</a>, Russ Cox asked Ken Thompson for the code, and Ken gave it to him(!) Russ does a phenomenal job explaining the backdoor. Like Russ, I’m amazed at how simple it turns out to be.</li>
<li><a href="https://www.microsoft.com/en-us/security/blog/2023/11/02/announcing-microsoft-secure-future-initiative-to-advance-security-engineering/" target="_blank" rel="noopener">Announcing Microsoft Secure Future Initiative to advance security engineering</a> is a memo by four of Microsoft’s security leaders, echoing the original Trustworthy Computing memo. Overall, some <a href="https://www.theverge.com/2023/11/2/23943178/microsoft-security-secure-future-initiative-cybersecurity" target="_blank" rel="noopener">analysis</a> by Tom Warren at The Verge. I’m, frankly, confused by a goal to “cut the time it takes to mitigate cloud vulnerabilities by 50 percent.” That seems far from audiacious or transformative. I’m also surprised by the heavy inclusion of Confidential Computing, which also seems tactical. But overall, strong echos of both the Trustworthy Computing memo and the response it got. Let’s see where it goes. </li>
</ul>
<p class="credit-text">Image by Midjourney: “A robot reading many books::2, while being hacked. The background is a library with walls of books. The image is cinematic, dramatic, professional photography, studio lighting, studio background, advertising photography, intricate details, hyper-detailed, 8K UHD --ar 8:3 --v 5.0”</p>
Threat Modeling Thursday: Thanksgiving2023-11-23T00:00:00Zhttps://shostack.org/blog/threat-model-thursday-turkey/AdamWhat can we learn from Gunnar Peterson’s Threat Model for Thanksgiving?<span class="single-post-excerpt"><!-- excerpt -->What can we learn from Gunnar Peterson’s Threat Model for Thanksgiving?<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/thanksgiving-threat-model-banner-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/thanksgiving-threat-model-banner-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/thanksgiving-threat-model-banner-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/thanksgiving-threat-model-banner-800w.jpeg" width="800" height="300" alt="a threat graph for Thanksgiving" decoding="async" />
</picture>
<p>Sunil Yu updated Gunnar Peterson’s “Thanksgiving Threat Model” and on Linkedin, I posted that this threat model fails to consider important elements of what can go wrong:</p>
<ul>
<li>Insufficient hygiene in the kitchen</li>
<li>Spills from your turkey fryer</li>
<li>Overly pedantic threat modeling experts getting judgy with it</li>
</ul>
<p>And as I thought about it, I decided I wasn’t judgy enough! But more seriously, I decided to revive Threat Model Thursday and talk about this. Especially on Thanksgiving, my goal is not to start a fight, but to be thankful for their work, and constructively ask “what can we learn from it?” I think the answer is quite a bit, and I hope that this doesn’t ruin anyone’s fun.</p>
<p>The first question we ask in threat modeling is “<b>what are we working on?</b>” The question helps us scope and ensure we have good coverage, and we usually have a diagram and the The model is labeled “Thanksgiving,” and it maybe doesn’t even include gravy and mashed potatoes. It does include “inadequate side dishes,” but it’s not clear what adequate or inadequate means to the creators.</p>
<p>It also doesn’t include travel chaos. Should it? We use diagrams to help us paint a picture.</p>
<img loading="lazy" src="https://shostack.org/blog/img/2023/thanksgiving-threat-model.png" width="960" height="540" alt="Thanksgiving Threat Model" />
<p>This model seems more like a flowchat, and it works surprisingly well to integrate a set of use cases and outcomes. To be blunt, I’ve given up on abuse and misuse cases because they don’t work <i>even in the academic papers that advocate for them</i>. I’ve yet to see a paper that has more than a few obvious samples, or thats explains how the style and mechanisms of use cases relate to structured analysis. But the idea is dreadfully hard to kill, and those working on abuse cases might want to experiment with this.</p>
<p>While I’m talking about diagrams, I want to comment on the graphical presentation. The diagram uses attractive colors as well as shading and 3d boxes to make it pop visually. And the boxes take up space, and distract from the text enough that my critique about spills from a fryer was inaccurate -- there’s a box for that. And the text (especially white on gray) is hard to read, and so I missed it.</p>
<p>The second question we ask is “<b>What can go wrong?</b>” and this is a smorgasboard of answers. It’s not clear how Uncle Billy relates to the inebriated cook, nor, really, why assigning him to do the dishes will help. Is someone universalizing a bit here?</p>
<p>It’s also not clear to what extent this threat model should consider the trifecta of airborne pathogens (RSV, flu, and Covid) which are on track to kill a great many Americans thisyear, and especially with covid’s high rates of mutation, people should be aware of the additional risk, especially as travel helps strains migrate to new areas.</p>
<p>The next question is “<b>What are we going to do about it</b>?” For about a decade, I’ve been using Kenji Lopez-Alt’s deconstructed approach with <a href="https://www.seriouseats.com/sous-vide-turkey-breast-crispy-skin-recipe-thanksgiving" target="_blank" rel="noopener">sous vide breast</a> and <a href="https://www.seriouseats.com/turkey-in-parts-white-dark-recipe" target="_blank" rel="noopener">roasted legs</a>. You don’t have the 2 minutes of oohing and ahhing over the bird, but you lose hours of cooking time, hours of stress over when the bird will be done, and you get better food. (The dates on those recipies are ‘updates’ because Google prioritizes recency.)</p>
<p>And the final question we ask is “<b>Did we do a good job?</b>” And frankly, Gunnar and Sounil did a better job than me at keeping it fun, but I hope you enjoyed this post, learned a little, and, if you celebrate, have or had a lovely Thanksgiving.</p>
C2PA Threat Modeling2023-11-29T00:00:00Zhttps://shostack.org/blog/threat-model-thursday-c2pa/AdamWhat can we learn from the C2PA security considerations document?<span class="single-post-excerpt"><!-- excerpt -->What can we learn from the C2PA security considerations document?<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/C2PA-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/C2PA-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/C2PA-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/C2PA-800w.jpeg" width="800" height="298" alt="A diagram showing a camera, storage, photoshop and validation. The camera and photoshop are in boundaries" decoding="async" />
</picture>
<p>The <dfn id="C2PA">Coalition for Content Provenance and Authenticity</dfn> (<abbr>C2PA</abbr>) has released a “Security Considerations” document, which is pretty much a threat model, and I want to use it for a Threat Model Thursday post. As always, my goal is to analyze the document to see what we can learn, and even engage in criticisms, which are intended to be helpful.</p>
<p>C2PA is an industry coalition focused on the threat of AI misinformation by creating ways to authenticate the images that haven’t been altered, and by showing what alterations have been performed on an image.</p>
<p>There is a lot to like here. <a href="https://c2pa.org/specifications/specifications/1.0/security/Security_Considerations.html#_trust_model" target="_blank" rel="noopener">The document</a> is generally easy to read, it starts off with a methodology overview, which is great. I’d like to have seen more specifics about process choices (for example, did they ever create a data flow diagram, or do a full STRIDE analysis or create a “kill chain?”) I think my understanding is “we brought together really smart people, and they worked it out as they went.” In a process like this, where a presumably group of experts come together across time and versions of the system, both changing with time, it’s nice to express how they worked on it. </p>
<h3>What are we working on?</h3>
<p>It would be good to have a graphic representation of the system as they see it. There are some in the parent doc, and I will admit to not having read the 172 pages, but Figure 3 (Elements) and Figure 13 (Entities) are not helpful-to-me overviews. My swag at a representation is above.</p>
<p>The way it’s represented might be a data flow diagram, it might be some other form of diagram to concisely show the relationship between parties. And yes, my ”Signed photograph” is a dramatic simplification of their manifest with its signature, claims and assertions. Models simplify, and <i>that’s why we use them.</i></p>
<h3>What can go wrong?</h3>
<p>Section 3, the security model is part of “what can go wrong,” and is tied to Integrity, Availability and Confidentiality (in that order), and protection of personal information is in a separate section. I would have liked to see explicit discussion of the non-repudiation property here.</p>
<p>Section 4 is labeled as a threat model, and it starts off with assumptions. Nice! All too rare to see this. Threats are grouped by goals (aka impact), and each has a specific threat, with description, pre-requisites, impact analysis, and security guidance (which is akin to mitigations, for a standards document). Without trying to re-do the work, it seems reasonably complete and communicative.</p>
<p>I’d like to see linkability considered as a first-class threat: Being able to determine that the same person took these two photos is meaningful for journalists and activists. How to handle linkability is complex. A journalist may want all their news photos to be seen as a set, and not want their family photos taken with the same camera at home, to be in that set.</p>
<p>I’d also like to see an “accepted dangers” section. For example, is the threat of confusion between one John Smith and another in scope? What about confusion between johnsmith@example.org and johnsmith@example.com? A standard probably can’t address the former, but can mandate that compliant implementations display both human-readable and globally unique names, and talk about the levels of validation of those names. (Valid when the account was created? Valid when the image was signed?) </p>
<p>At least one threat is the violation of an assumption. Many assumptions are violations of threats, and methodology designers should consider if they need to be explicitly listed and consistency. (4.2.2.2, spoofing signed C2PA via a stolen key violates assumption 4.1, “Attackers do not have access to private keys.”) I think the rest of the assumptions don’t list the threats that manifest as they’re violated.</p><p>
</p><h3>What are we going to do about it?</h3>
<p>There are no separated out list of “What are we going to do about it?” The way a document like this considers ‘what to do’ is generally to build the defenses it can into the specification and here the form is much closer to, say, IETF RFCs than it is to Jira tickets with mitigations. It would be nice to see an implementor-focused list of threats and required security controls. That is, if you make cameras, ensure you address threats like key extraction. If you’re making something editing software, do these other things... The C2PA group has clearly thought a lot about those, summarizing them would be great.</p>
<h3>Did we do a good job?</h3>
<p>The relationship to other C2PA documents is complex. I’d have liked a cleaner relationship between the <a href="https://c2pa.org/specifications/specifications/1.3/specs/C2PA_Specification.html#_information_security" target="_blank" rel="noopener">Technical Specification</a> and the <a href="https://c2pa.org/specifications/specifications/1.0/security/Security_Considerations.html" target="_blank" rel="noopener">Security Considerations</a>, because the Spec contains a section (18, Information Security) that has threats and security considerations and also a section on harms, misuse and abuse that branches into yet another document, <a href="https://c2pa.org/specifications/specifications/1.0/security/Harms_Modelling.html" target="_blank" rel="noopener">C2PA Harms Modelling</a>, which seems to have a related set of issues. I think the way in which they fit together is Security Considerations is threats <i>to</i> the system, while Harms is threats <i>via</i> the system.</p>
<h3>Final thoughts</h3>
<p>As a nit, the word “asset” is confusing, even though they explain their unusual use. They’d have been better off with “image,” even though asset refers to more than images.</p>
<p>I've spoken to a person involved with the process to help me understand what's going on, all remaining mistakes are my own.</p>Application and AI roundup - November2023-11-30T00:00:00Zhttps://shostack.org/blog/appsec-roundup-nov2023/AdamA threat modeling conference, lots of government appsec guidance, and some updates from Shostack + Associates<span class="single-post-excerpt"><!-- excerpt -->A threat modeling conference, lots of government appsec guidance, and some updates from Shostack + Associates<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-nov2023-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-nov2023-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/appsec-roundup-nov2023-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/appsec-roundup-nov2023-800w.jpeg" width="800" height="300" alt="Jigsaw pieces fitting together" decoding="async" />
</picture>
<h3>Threat Modeling</h3>
<ul>
<li>The first ever threat modeling conference (ThreatModCon) was held in Washington DC! It was sold out with 130 attendees from around the world. I delivered a segment for the keynote which you can <a href="https://youtu.be/4ngYKVAs22c" target="_blank" rel="noopener">watch here</a>. You can also read the <a href="https://www.threatmodelingconnect.com/welcome-announcements-38/recap-on-threatmodcon-2023-the-first-threat-modeling-conference-ever-257" target="_blank" rel="noopener">official recap</a> and <a href="https://shehackspurple.ca/2023/11/07/trip-report-threatmodcon-and-owasp-global-appsec-2023/" target="_blank" rel="noopener">Tanya Janca’s recap</a> which covers ThreatModCon and also OWASP and her joining me to deliver training. </li>
<li><a href="https://excalidraw.com/" target="_blank" rel="noopener">Excalidraw</a> has added a text to diagram feature. It’s under the tools menu on the far right of the toolbar. (<a href="https://mastodon.social/@excalidraw/111483429122236164" target="_blank" rel="noopener">video demo</a>.)</li>
</ul>
<h3>AI</h3>
<ul>
<li>The UK’s National Cyber Security Center released <a href="https://www.ncsc.gov.uk/files/Guidelines-for-secure-AI-system-development.pdf" target="_blank" rel="noopener">Guidelines for secure AI system development</a> with CISA and a score of agencies from around the world. These multi-national guidelines are a fascinating trend.</li>
<li>I think something happened at OpenAI? Perhaps the most entertaining detail was Microsoft apparently had to promise that OpenAI employees <a href="https://duhaone.substack.com/p/move-fast-and-break-teams" target="_blank" rel="noopener">wouldn’t have to use Microsoft Teams</a> if they took jobs at Microsoft.</li>
</ul>
<h3>Application Security</h3>
<ul>
<li>CISA released v2 of their <a href="https://www.cisa.gov/sites/default/files/2023-10/SecureByDesign_1025_508c.pdf" target="_blank" rel="noopener">Security By Design</a> document</li>
<li>Alex Gantman has posted an <a href="https://www.linkedin.com/pulse/security-engineering-proceed-withcaution-alex-gantman-b6iec%3FtrackingId=PC2LY%252Bn0SDGx9JtKPLyvvQ%253D%253D/?trackingId=PC2LY%2Bn0SDGx9JtKPLyvvQ%3D%3D" target="_blank" rel="noopener">interesting critique</a> of Ross Anderson’s <i>Security Engineering</i>.</li>
<li>The paper <a href="https://arxiv.org/pdf/2311.09394.pdf" target="_blank" rel="noopener">GWP-ASan: Sampling-Based Detection of Memory-Safety Bugs in Production</a> reports on a sampling approach to detecting memory safety bugs now deployed in Chrome, Firefox, Android, both of Apple's operating systems and Linux. Microsoft is noticably absent. They explicitly frame that “GWP-ASan is <i>not a security mitigation</i> tool due to its low detection probability.” But it’s still very cool. Appsec practitioners should pay attention to the focus on practicality that eluded earlier versions of similar ideas. </li>
<li>A blog post titled <a href="https://www.reidmore.online/post/accessibility-training-will-not-save-you" target="_blank" rel="noopener">Accessibility training will not save you</a> would be the latest in my ongoing series “We can replace the string X with ‘security’”. Except the author discusses security, saying “.. everyone who writes software should have a concept of security and best practices, it's a complex field, and if you want to do it well you need people with expertise.” Such optimism! I still see lots of organizations which treat security as a tool problem, where they have not really reached that understanding. Regardless, it may be useful to see security challenges reflected through this lens.</li>
</ul>
<h3>Shostack + Associates updates</h3>
<p>Above I've rounded up the most important industry news I saw in November. I also want to share that here at Shostack + Associates, we have a couple of companies who are spending “use it or lose it” budget on our self-paced training courses. I’m very appreciative, and if you’re a leader wondering what to get your folks, let me suggest some training?</p>
<div class="courses-widget-area">
<p><a href="https://courses.shostack.org/courses/threat-modeling-for-engineers-201-self-paced" t="_blank"><img src="https://shostack.org/img/site/TM-Essentials-card.jpg" alt="Course card for “Threat Modeling Essentials (201)” course by Shostack+Associates" style="width:auto;max-width:250px;" /><br /><strong>Buy Now! (Threat Modeling Essentials self-paced version)</strong></a> (6-12 hours)</p>
<p><a href="https://courses.shostack.org/courses/threat-modeling-222-self-paced" target="_blank"><img src="https://shostack.org/img/site/TM-Intensive-card.jpg" alt="Course card for “Threat Modeling Intensive (222)” course by Shostack+Associates" style="width:auto;max-width:250px;" /><br /><strong>Buy now! (Threat Modeling Intensive self-paced edition)</strong></a> (14-16 hours)</p>
</div>
<p>Also, here at Shostack + Associates, we’ve got a new <a href="https://shostack.org/training">training page</a>, we renamed a course from “Engineers” to “Essentials”, we added some of our associates to a new <a href="https://shostack.org/about">about</a> page, and updated the menu at the top of the website to mention all the Services we offer.</p>
<p>Lastly, there's some great discounts on my new <i>Threats</i> book, including 60% off as an <a href="https://www.audiobooks.com/promotions/promotedBook/654778/threats-what-every-engineer-should-learn-from-star-wars?refId=106914&utm_campaign=ASMPpromosends&utm_medium=email&_hsmi=281538268&_hsenc=p2ANqtz-_DEJPF3FVNrNjj9cZM2IwA2NYLp89NOZHL-Cfx5AeAIrS_J1hI_SgfG1zcMrU6SnBjdvspCCgRzh-sXxrqV9JMbIC5Rg&utm_content=281538268&utm_source=hs_automation" target="_blank" rel="noopener">audio book</a>, and a <a href="https://www.humblebundle.com/books/cybersecurity-2023-wiley-books?hmb_source=&hmb_medium=product_tile&hmb_campaign=mosaic_section_1_layout_index_1_layout_type_threes_tile_index_1_c_cybersecurity2023wiley_bookbundle" target="_blank" rel="noopener">Humble Bundle</a> with it and many other great books from Wiley. I may humbly suggest, even a Trek geek would appreciate a Star Wars book as a stocking stuffer.</p>
<p class="credit-text">Image by Midjourney: “puzzle pieces, people collaborating and putting together puzzle pieces, green, blue simplistic, puzzle pieces, studio background, primitivist style, precisionist lines --ar 8:3”</p>
Think like Alph-V?2023-12-12T00:00:00Zhttps://shostack.org/blog/think-like-alphav/Adam<span class="single-post-excerpt"><!-- excerpt --><!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/think-like-alphv-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/think-like-alphv-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/think-like-alphv-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/think-like-alphv-800w.jpeg" width="800" height="296" alt="" decoding="async" />
</picture>
<p>Recently, ransomware operators AlphV were in the news for telling the SEC that AlphV’s victim, MeridianLink, had not filed the relevant SEC form 8K about their breach. While this is something between clever, humorous and outrageous, I’d like to ask a question of those who advocate for adversarial thinking: Did you predict this?</p>
<p>For years, I’ve been talking about how it’s hard to think like an attacker. Now, we have an innovative manuever from an attacker, and we can ask: did advocates of attacker emulation anticipate it?</p>
<p>This is an important question because if thinking like an attacker didn’t lead to predicting what an attacker would do, then we should increase our skepticism of the approach and replace it with effective techniques, like threat modeling with STRIDE or Kill Chains.</p>
<p>We have lots of evidence that we can teach those skills, and most people can learn them with reasonable reproducibility in a short time of a few hours to a few days. Obviously reproducibility rises with more practice, but to the best of my knowledge, no one makes any claim for any consistency from ‘think like an attacker.’</p>
<p>Source: <a href="https://www.databreaches.net/alphv-files-an-sec-complaint-against-meridianlink-for-not-disclosing-a-breach-to-the-sec/" target="_blank" rel="noopener">AlphV files an SEC complaint against MeridianLink for not disclosing a breach to the SEC</a> at Databreaches.net, who also note that the SEC rule may not be in effect until December 15.</p>
<p>Relatedly:</p>
<ul>
<li><a href="https://shostack.org/blog/think-like-alphav/">Think like Alph-V</a></li>
<li>
<a href="https://shostack.org/blog/who-are-we-kidding-with-attacker-centered-threat-modeling/">Who
are we kidding with attacker-centered threat modeling?</a></li>
<li> <a href="https://shostack.org/archive/2016/04/think-like-an-attacker-is-an-opt-in-mistake/">“Think Like an Attacker” is an opt-in mistake</a>
</li>
<li>
<a href="https://shostack.org/archive/2008/09/think-like-an-attacker/">Think Like An Attacker?</a>
</li>
<li>
<a href="https://shostack.org/archive/2008/09/the-discipline-of-think-like-an-attacker/">The Discipline of “think like an attacker”</a>
</li>
<li>
<a href="https://shostack.org/archive/2014/11/think-like-an-attacker-flip-that-advice/">Think Like An Attacker? Flip that advice!</a>
</li>
<li>
<a href="https://shostack.org/blog/modeling-attackers-and-their-motives/">Modeling Attackers and Their Motives</a>
</li>
</ul>
<p class="credit-text">Image by Midjourney, "a computer scientist in the foreground and a criminal hacker in the background. The computer scientist and the criminal hacker are both thinking. However, the computer scientist is trying to think like the hacker and is getting frustrated. He's eyeing the hacker suspiciously. FRUSTRATED. THINKING. Cartoon and CORPORATE and PROFESSIONAL style with light green, blue, simplistic, studio background, primitivist style, precisionist lines --ar 8:3 --s 750" </p>
The Nazgul of Threat Modeling2023-12-15T00:00:00Zhttps://shostack.org/blog/nazgul-of-threat-modeling/Adam(no description available)<span class="single-post-excerpt"><!-- excerpt --> (no description available) <!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/nazgul-of-threat-modeling-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/nazgul-of-threat-modeling-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/nazgul-of-threat-modeling-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/nazgul-of-threat-modeling-800w.jpeg" width="800" height="567" alt="A still from Lord of the Rings, showing nazgul crossing a river, with the four questions of threat modeling added, and a note ‘There are five nazgul, their manager is tagging along’." decoding="async" />
</picture>
<p> Igor Andriushchenko has a really interesting presentation from SEC-T, <a href="https://www.youtube.com/watch?v=25ln8eINFl0" target="_blank" rel="noopener">Autonomous Offensive AI Threat Modeling and Exploitation</a>, and opened with this awesome version of the Four Questions. </p>
Take Control of What You Read2023-12-26T00:00:00Zhttps://shostack.org/blog/take-control-of-what-you-read/AdamAs 2023 draws to a close, take control of what you read.<span class="single-post-excerpt"><!-- excerpt -->As 2023 draws to a close, take control of what you read.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/take-control-of-what-you-read-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/take-control-of-what-you-read-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/take-control-of-what-you-read-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/take-control-of-what-you-read-800w.jpeg" width="800" height="284" alt="Technology controlling what people read" decoding="async" />
</picture>
<p>As we close out the year, I discovered that the blog feed to email which I’d set up on Substack had broken. I like to make my content available in convenient ways, but for the small number of readers, it’s just not feasible for me to spend time on it.</p>
<p>And as we close out, I want to urge everyone to take control of what they read. Take control back from the mega corporations. Take control back from algorithms that are designed to enrage and outrage you so you keep coming back. Choose what you pay attention to, rather than letting someone else choose how you spend your precious time.</p>
<p>For example, the <a href="https://www.youtube.com/c/shostack" target="_blank" rel="noopener">Shostack Channel on Youtube</a> nominally has 832 subscribers as I write this. And of those, 100 have watched my latest video. Is that because they don’t want to see it, or because Youtube prioritizes other content, from people who adjust how they speak, act, and the topics they speak about to feed the algorithm and become effective Youtubers? Maybe it’s my bad lighting or audio. But I think the folks who subscribed to that channel didn’t do it for my bubbly personality. They know what they wanted, and YouTube is giving them something else. As the Mozilla Foundation <a href="https://foundation.mozilla.org/en/youtube/user-controls/" target="_blank" rel="noopener">discovered</a>, YouTube shows you what they want to show you.</p>
<p>Pay attention to my choice of words. I don’t say ‘my youtube channel,’ because, frankly, it’s not my channel. It’s theirs. They control the horizontal. They control the vertical. They control how it looks and how often it’s shown.</p>
<p>As Prince <a href="https://www.marketplace.org/2023/06/07/30-years-ago-prince-changed-the-way-artists-negotiate-with-the-music-industry/" target="_blank" rel="noopener">said</a>, “If you don’t own your masters, your masters own you.” He also <a href="https://shostack.org/blog/take-control-of-what-you-read/" target="_blank" rel="noopener">said</a> “Tell me a musician who’s got rich off digital sales. Apple’s doing pretty good though, right?”</p>
<p>And so I’m going to encourage everyone to take control of their reading and writing. As you read or listen, look for tools that let you control what you read and listen to. That probably means “RSS” and “ATOM”, but can also mean Mastodon. If you really want blog posts via email, there are services like blogtrotter and Feedrabbit, but I haven’t used either and want less email, so I’m not making any recommendation. And yes, as you learn to use these new tools, there’s a learning curve that can be hard. It's harder than other places because no one is paying a team of people to spend their lives making the onboarding as “frictionless” or addictive as possible. And while it would be easy to claim that’s a feature, it’s not. It’s unfortunate. But as you struggle, ask yourself: why is that investment worthwhile to Mark Zuckerberg, to Tiktok, to others? Hint: It’s not because they’re selflessly serving your interests.</p>
<p>The general philosophy here is called “<a href="https://www.theverge.com/2023/10/23/23928550/posse-posting-activitypub-standard-twitter-tumblr-mastodon" target="_blank" rel="noopener">POSSE</a>” (Publish Own Site, Syndicate Everywhere.) I’m going to spend less and less energy to support the “content platforms” and encourage you to do the same.</p>
<p class="credit-text">Image: Midjourney </p>
Giant Pink Bunny, Redux2023-12-27T00:00:00Zhttps://shostack.org/blog/giant-pink-bunny-redux/AdamArt is transient<span class="single-post-excerpt"><!-- excerpt -->Art is transient<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/mountain-bunny-498w.webp 498w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/mountain-bunny-498w.jpeg 498w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/mountain-bunny-498w.jpeg" width="498" height="361" alt="A giant pink rabbit, no more" decoding="async" />
</picture>
<p> Way back when, we reported on a <a href="https://shostack.org/archive/2005/09/giant-pink-bunny-on-the-mountaintop/">giant pink bunny on a mountaintop</a>. Atlas Obscura reports <a href="https://www.atlasobscura.com/places/colletto-fava" target="_blank" rel="noopener">that it's now gone</a>. The Google Maps <a href="https://maps.app.goo.gl/6sf7ACszNt1WFGFEA" target="_blank" rel="noopener">pin</a> no longer shows the bunny.</p>
<p>We thought you'd want to know.</p>
Think like Sieged-sec?2023-12-29T00:00:00Zhttps://shostack.org/blog/think-like-siegedsec/AdamYet again, attackers surprise us<span class="single-post-excerpt"><!-- excerpt -->Yet again, attackers surprise us<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2023/think-like-siegedsec-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2023/think-like-siegedsec-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2023/think-like-siegedsec-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2023/think-like-siegedsec-800w.jpeg" width="800" height="300" alt="AI generated image of a brightly colored AT-AT Cat" decoding="async" />
</picture>
<p>Apparently, SiegedSec has hacked Idaho National Labs, and their demands are not for money, but for ... genetically engineered catgirls:</p>
<blockquote>"We're willing to make a deal with INL. If they research creating IRL catgirls we will take down this post," the group said.</blockquote>
<p>For years, I’ve been talking about how it’s hard to think like an attacker. Now, we have an innovative manuever from an attacker, and we can ask: did advocates of attacker emulation anticipate it?</p>
<p>Also, while I support people's right to be imaginative, I urge you: don't naively ask Midjourney for catgirls.</p>
<p>Source: <a href="https://www.theregister.com/2023/11/22/nuclear_lab_hacked/" target="_blank" rel="noopener">US nuke reactor lab hit by 'gay furry hackers' demanding cat-human mutants</a>.</p>
<p>Relatedly:</p>
<ul>
<li><a href="https://shostack.org/blog/think-like-alphav/">Think like Alph-V</a></li>
<li>
<a href="https://shostack.org/blog/who-are-we-kidding-with-attacker-centered-threat-modeling/">Who
are we kidding with attacker-centered threat modeling?</a></li>
<li> <a href="https://shostack.org/archive/2016/04/think-like-an-attacker-is-an-opt-in-mistake/">“Think Like an Attacker” is an opt-in mistake</a>
</li>
<li>
<a href="https://shostack.org/archive/2008/09/think-like-an-attacker/">Think Like An Attacker?</a>
</li>
<li>
<a href="https://shostack.org/archive/2008/09/the-discipline-of-think-like-an-attacker/">The Discipline of “think like an attacker”</a>
</li>
<li>
<a href="https://shostack.org/archive/2014/11/think-like-an-attacker-flip-that-advice/">Think Like An Attacker? Flip that advice!</a>
</li>
<li>
<a href="https://shostack.org/blog/modeling-attackers-and-their-motives/">Modeling Attackers and Their Motives</a>
</li>
</ul>
<p class="credit-text">Image by Midjourney, "make a deal with idaho national labs if they do certain research we'll take down this post overrun with mutant robot cats in outlandish colors --v 6.0 --ar 8:3" I did not ask for an AT-AT cat, but couldn't resist.</p>
The State of Appsec in 20242024-01-02T00:00:00Zhttps://shostack.org/blog/the-state-of-appsec-2024/Adam2024 is bringing lots of AI, and Liability, too<span class="single-post-excerpt"><!-- excerpt -->2024 is bringing lots of AI, and Liability, too<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/Ai-is-coming-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/Ai-is-coming-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/Ai-is-coming-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/Ai-is-coming-800w.jpeg" width="800" height="299" alt="A messed up image generated by AI" decoding="async" />
</picture>
<p>At the start of 2024, appsec is moving through two major inflection points: liability and AI. The first has two facets: how do we secure AI systems, and how do we use AI in appsec? The second major inflection is driven by governments re-arranging liability from software operators to software makers. And as I think about where we are in 2024, I’m optimistic and hopeful because of a third change, much more nascent, that lays groundwork for assessing and improving both of those transformations. Let me start with the AI changes, because while they have lots of crucial details, they’re conceptually very simple.</p>
<h3>Appsec will be a crucial part of safer AI deployments</h3>
<p>Your executives want to deploy AI. AI, especially generative LLMs, replace finicky, unpredictable, expensive people with finicky, unpredictable, expensive software that doesn’t complain when told to come into the office. But importantly, you can’t just “deploy” an LLM. You have to build it into business processes, and if you don’t think about what can go wrong and what you’re going to do about it, don’t worry, other people will help you figure those things out while making off with your crown jewels. So you’re going to need to threat model, and you’re going to need to determine what defenses make sense around your new AI systems. The problems are not new, but they are increasingly urgent. Bolting security onto an LLM is going to be exceptionally, embarrassingly ineffective, and so we’ll see growth in appsec. We’ll need to learn a lot to make this work, but we cannot do it without appsec.</p>
<h3>AI will improve appsec</h3>
<p>Many of the challenges that appsec brings challenges which are solvable with specialized knowledge, and LLMs, used well, can make that specialized knowledge easier to find. Of course, it will remain tricky to evaluate the information (not knowledge) that gets returned, and doing so will get more important. Nevertheless, we’ll see organizations learning to use LLMs to assess features, code and designs, and to suggest improvements. There will be embarrassing mis-steps along the way that will make it easy to focus on how AI is hurting appsec. Many people will focus on how AI is writing insecure code, but that will likely get sorted out by the creation of code-specific LLMs with better training data, by better prompts for getting secure code, and better output filtering/checking. It probably won’t be in 2024, but I expect we’ll see a time where machine generated code is safer than human generated code.</p>
<h3>Liability is here, and shifting</h3>
<p>Today, we tend to blame breached companies for their woes, and there’s often a lot of reasons for that. Underinvestment in patching or configuring systems is rampant! But there’s another reality, which is that a lot of the software which needs patching and configuration is made by companies who are making billions of dollars in annual profits, rather than investing more in making that software more robust or easier to configure. CISA has been talking a lot about Ralph Nader’s “Unsafe at Any Speed,” and how many cars were unsafe until government stepped up to measure and regulate more strongly.</p>
<p>Liability for software makers isn’t just coming, it’s here. It’s here in the form of the European CRA, and it’s here in the US for anyone who makes medical devices, sells to the federal government, moves money, makes software that kids use, rents videos, operates transportation systems or other critical infrastructure, works with location data... who needs to do what is very, very complicated, and it’s going to get worse. </p>
<p>It’s easy to think that “software makers” are just a few big tech companies in Silicon Valley or Seattle, but as Marc Andreessen pointed out a decade ago, every company was already a software company. Every company makes software, if only in Excel, IFTTT, Salesforce, Hubspot and the like.</p>
<p>It’s also easy to get confused because the regulatory train is rolling down the tracks that we can wait for it to get to the station before the effects can be felt. That would be a mistake. Many of these changes are going to take time to implement. For example, changes to “Secure by Default” (CISA Secure by Design, page 9) may entail changes to documentation or installers. Security configuration checkers are implied by the phrase “The complexity of security configuration should not be a customer problem.” Should you be building one? I don’t know your circumstances, but I would encourage everyone to start figuring it out now, rather than waiting until OMB releases their attestation requirements for selling to the US Government. Similarly, when the same guidance calls for “organizational structure and leadership,” those are going to take time. Do you want to be explaining in your SEC filings that you haven’t done those things? After a breach, do you want to be explaining to plaintiff counsel that you were going to get to it once the rules were firmed up? There’s plenty of detail, and the longer you wait, the more clearly tenuous your explanations are going to be.</p>
<p>Let me be frank: This is going to be a hard transition. Lawyers will increasingly try to influence software development. (Smart lawyers will make time to learn to code a little.) Leadership who can continue development at speed while managing these new challenges will be scarce. Good threat modeling to reduce rework, rather than bad threat modeling that enables paperwork, is going to be important.</p>
<h3>The emergence of Cyber Public Health</h3>
<p>One of the things that makes liability bad is we don’t have a quantified understanding of what’s going wrong, in the sense of root causes and contributing factors to the problems, and that makes it impossible to pass laws which effectively and narrowly target just those root causes. Instead we get broad laws which require things like security awareness training and insanely short-trigger provisions about telling regulators. We get requirements to change passwords every 90 days baked into regulations.</p>
<p>The last trend is one that we’ve been building towards: a science of cyber public health. CyberGreen has released a set of <a href="https://cybergreen.net/research/" target="_blank" rel="noopener">interesting technical reports</a>, run a <a href="https://cybergreen.net/news/" target="_blank" rel="noopener">seminar series</a> jointly with the <a href="https://ostromworkshop.indiana.edu/" target="_blank" rel="noopener">Ostrom Workshop</a>, and with support from Google, is <a href="https://rsvp.withgoogle.com/events/cyber-public-health-workshop" target="_blank" rel="noopener">organizing</a> the world’s first workshop on cyber public health in January 2024.</p>
<p>The public health frame is one I’ve used as far back as Project Broad Street, which led Microsoft to fix Autorun. Focusing on population health gives us an way to identify impactful problems, and the tools of public health give us proven ways to address them.</p>
<p>Update: Fixed an extraneous “not” so people should start figuring it out now, not wait. Thanks, LK!</p>
<p class="credit-text">Image by Microsoft copilot: “create for me a stock art image of 2024 with the words "liability is coming" and "AI" and "Appsec" in it. bright and airy. impressionist colorism, bokeh panorama, molecular structures. aspect ratio of 8:3/Yes, I can make the first image bigger and more detailed for you. Here is the improved version of the image. I hope it meets your expectations. 😊” < LIAR</p>
Threat Modeling Capabilities Released2024-01-12T00:00:00Zhttps://shostack.org/blog/threat-modeling-capabilities/AdamA great new resource for threat modeling<span class="single-post-excerpt"><!-- excerpt -->A great new resource for threat modeling<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/tm-capabilities-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/tm-capabilities-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/tm-capabilities-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/tm-capabilities-800w.jpeg" width="800" height="301" alt="A screencapture of the capabilities." decoding="async" />
</picture>
<p>My collaborators on the Threat Modeling Manifesto have just released a set of <a href="https://www.threatmodelingmanifesto.org/capabilities/#" target="_blank" rel="noopener">Threat Modeling Capabilities</a>. A <i>capability</i> delineates what an organization does. Others led this important work, and I don’t want to take much credit. I do think this fits into the “organizational” part of my <a href="https://www.shostack.org/files/papers/The_Jenga_View_of_Threat_Modeling.pdf" target="_blank" rel="noopener">Jenga Framework</a>. </p>
<p> I’m really excited for this release because the document provides a way to think about a specific organization’s capabilities, and that means we can also talk about them and start to compare. It’s a very important step forward for effective threat modeling practice. If you threat model or run a threat modeling process, take a look. You can start to use this to answer the question “Are we doing a good enough job?” By design, we didn’t try to provide maturity scales. </p>
<p>Anytime there’s a document authored by 15 people, there are disagreements. I’m not a fan of the attackers bit for reasons I’ve written on extensively, most recently in <a href="https://shostack.org/blog/think-like-siegedsec/">Think Like Sieged-Sec</a>. I’m also not a big fan of including risk: I think risk is informed by threat modeling and we should keep them conceptually separate, and that was hard in the context of a document like this.</p>
<p>My comments on attackers led to a lot of conversation, including:</p>
<ul>
<li>Think of actors in the “where does the attacker need to be” sense, not the threatintel sense.</li>
<li>Similarly, think of both external and internal attackers. The latter might be unaware they’re breaking rules, especially in privacy.</li>
<li>It’s important and tough to avoid getting fixated on what attackers are provably doing today. But if you’re fixated, you’re only locking barn doors after the horses have escaped.</li>
</ul>
<p>But I appreciate that many of the folks involved did much more work than I did, and I share these not to cast shade, but to demonstrate that effective collaboration is a give and take process of rough consensus, which we achieved.</p>
CSRB Senate Hearing2024-01-19T00:00:00Zhttps://shostack.org/blog/csrb-senate-hearing-comments/AdamComments following the Senate’s CSRB hearing<span class="single-post-excerpt"><!-- excerpt -->Comments following the Senate’s CSRB hearing<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/tarah-wheeler-csrb-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/tarah-wheeler-csrb-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/tarah-wheeler-csrb-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/tarah-wheeler-csrb-800w.jpeg" width="800" height="446" alt="Tarah Wheeler testifying" decoding="async" />
</picture>
<p>On Wednesday, the Senate held a hearing, <a href="https://www.hsgac.senate.gov/hearings/the-cyber-safety-review-board-expectations-outcomes-and-enduring-questions-2/" target="_blank" rel="noopener">The Cyber Safety Review Board: Expectations, Outcomes, and Enduring Questions</a>. There were very solid questions, and I want to offer perspective on two of them: recommendations and classified information.</p>
<p>One issue that seemed to be on the minds of several Senators was how to ensure that the CSRB is listened to, not ignored, by either government agencies or private companies.</p>
<p>I’ve come to understand that the NTSB’s ability to express recommendations and let other act on them (or not) is a strength. It allows the NTSB to investigate and express recommendations that may be complex, expensive or otherwise perhaps sub-optimal. This is a useful separation of power and duties, and lets regulators and private actors make decisions about how to engage with those recommendations. Regulatory agencies can take a recommendation, engage with stakeholders, draft guidance or rules, get feedback, and more.</p>
<p>As we build out the CSRB as an institution, I think that's a solid model.</p>
<p>The other issue is classification, and I’m with Tarah Wheeler on this: The CSRB does not need access to classified information, and having classified information reduces the transparency that’s essential to building credibility. Information about how a system was attacked will generally not be classified. (If it’s a private sector system, it’s almost certainly not classified.) Information about <i>who</i> or <i>why</i> might be available in classified form, but the CSRB probably doesn’t need to answer “whodunnit” to find lessons that are generally applicable. It would be fine to say “The NSA informed us that a highly capable foreign power did this, and we relied on that information as we made these following assessments.”</p>
<p>More, it’s reasonable to think information about how a system was attacked should not be classified. The formal bar for classification is that the release of the information would damage national security. (There are, as I understand it, other protections for law enforcement investigations which are separate from the classification system.)</p>
<p>The attacker already knew about the means they used to attack, and they’re probably using that knowledge against other targets. Making that information widely available quickly is essential to adapting to attacks, increasing costs to attackers, and informing effective defense. Classification is in active conflict with each of those goals.</p>
<p>Other attackers might not have known about it, but I would hope that the specific route is rapidly addressed. (And as CISA says, “Attackers are doing fine without roadmaps.”) So the information most useful to the CSRB probably does not meet the bar for either classification or remaining classified. Overclassification is a well-understood problem in Washington. The CSRB should be protected from it.</p>
Red Teaming2024-01-21T00:00:00Zhttps://shostack.org/blog/red-teaming/AdamRed Teaming by Bryce Hoffman is a thought-provoking read.<span class="single-post-excerpt"><!-- excerpt -->Red Teaming by Bryce Hoffman is a thought-provoking read.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/red-teaming-hoffman-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/red-teaming-hoffman-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/red-teaming-hoffman-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/red-teaming-hoffman-800w.jpeg" width="800" height="299" alt="A screencapture of the book's cover." decoding="async" />
</picture>
<p><cite><a href="https://www.amazon.com/Red-Teaming/dp/0349431469/" target="_blank" rel="noopener">Red Teaming</a></cite> by Bryce Hoffman is a thought-provoking book for those in threat modeling, and worth reading for its descriptions of how red teaming fits into business. Hoffman went to the US Army’s Red Teaming school, which is about applying red teaming to concepts and plans, rather than the technical red teaming of breaking into a system. He’s since become a consultant, which brings me to the first major flaw that you’ll have to get past: He spends a lot of time telling you how important red teaming is, how his great consulting helps customers and the like. The second major flaw is that his first book was a hagiography of Alan Mulally, whose legacy is tainted by his time at Boeing, and whose opinions of Mulally were distracting. This Red Teaming book was written before the 737 MCAS crisis or the 737-9 MAX crisis. Technical readers may be happier having skimmed or skipped the first two chapters.</p>
<p>The core of the book is a collection of tools for thinking critically, and getting groups to think critically, about what can go wrong with a plan. Those are intertwined with quite good advice about the interpersonal elements of effective red teaming in a business environment.</p>
<p>The tools are a collection, mostly collected from the US Army, the UK Ministry of Defense and others, that include Liberating Structures, String of Pearls analysis, SWOT, How Others See, and similar tools. One recurring theme is have people write down their ideas before you start discussing. I’d heard about this technique recently and have started incorporating it into my training to good effect. It’s inexpensive and surprisingly powerful. You can see the Army’s list of 48 structures in the Table of Contents of <a href="https://usacac.army.mil/sites/default/files/documents/ufmcs/The_Red_Team_Handbook.pdf" target="_blank" rel="noopener">The US Army Red Team Handbook</a>. I think the Army’s list is longer than Hoffman’s.</p>
<p>The Army manual has a great list of techniques, and some helpful advice, and it’s also focused on how to use the techniques in a military setting. Hoffman adds value in organizing these into these of how to start, the problem and the solution, questioning the unquestionable, thinking the unthinkable, and challenging everything (Chapters 4-8). Where Hoffman shines is the business integration of red teaming. He doesn’t just say “Don’t be a jerk,” but explains how red teaming often appeals to people who are critical and they end up acting like jerks, limiting their own effectiveness. He also discusses how “you don’t always have to be right, but you can’t always be wrong.”</p>
<p>Let me talk about the subtitle, “Transform your business by thinking like the enemy.” It’s a fine tagline, but it’s just not what the book teaches. When I look at the Army list (easier because of the nature of their Table of Contents), I see the following as trying to think like the enemy:</p>
<ol>
<li>4 Ways of Seeing</li>
<li>Cultural Perception Framework</li>
<li>Devil’s Advocacy</li>
</ol>
<p>Now, Hoffman adds, “Being your own worst enemy,” and notes “The U.S. Army simply calls this method <em>Threat Emulation</em> or simply <em>Traditional Red Teaming</em>.” He may have a couple of others along the way, But regardless it’s only one technique in ten. And I think this — unintentionally — makes a crucial point: we don’t have to think like the enemy to bring critical and even contrarian thinking to our analyses. There is a long list of tools that help us do so. (Depending on which edition you get the subtitle may also be “How your business can conquer the competition by challenging everything.” The cover Amazon currently showns me is the “think like the enemy” version. Mr. Hoffman is lucky that his publisher is willing to re-cover the book several times.)</p>
<p>There’s a fairly deep set of similarities between red teaming and threat modeling. Each is frequently seen as an inborn skill, but turns out to be a set of skills that we can teach. Is there an aspect where temperment, perspective, cunning, or other innate quality come in? Possibly! But much like threat modeling has STRIDE and kill chains, red teaming has its own structures that allow us to teach and scale the practices. Almost all of Hoffman’s advice on the business side of red teaming applies well to threat modeling, and so leaders responsible for threat modeling will get at least that out of it.</p>
<p>Relatedly, a few months ago, I wrote about the role of wargames in <a href="https://shostack.org/blog/adversarial-thinking-wargames/">Adversarial Thinking and Wargames</a>, and have a collection of links about adveraries in posts like <a href="https://shostack.org/blog/think-like-siegedsec/">Think Like SiegedSec</a> (they go back further than the formal <a href="https://shostack.org/blog/category/adversaries">adversaries</a> category.)</p>
Application and AI roundup - Jan 20242024-01-29T00:00:00Zhttps://shostack.org/blog/appsec-roundup-jan-2024/AdamA busy month+ in appsec, AI, and regulation.<span class="single-post-excerpt"><!-- excerpt -->A busy month+ in appsec, AI, and regulation.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/appsec-roundup-jan-2024-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/appsec-roundup-jan-2024-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/appsec-roundup-jan-2024-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/appsec-roundup-jan-2024-800w.jpeg" width="800" height="302" alt="a robot reading a book" decoding="async" />
</picture>
<h3>Application Security</h3>
<p></p><ul>
<li>The <a href="https://terrapin-attack.com/" target="_blank" rel="noopener">Terrapin Attack</a> on SSH is fascinating,
and on Hacker News, Colm Macc <a href="https://news.ycombinator.com/item?id=38684904" target="_blank" rel="noopener">says</a> “Formal
verification is still the most comprehensive and exhaustive form
of testing that we have, but gaps can really bite you. It is like
relying Pythagoras' theorem for a triangle, but then it turns out
that your triangles are on the surface of a sphere, which isn't a
plane. The math itself is right, but the environment was different
all along.” This is an important point, and I think it's important
to consider the costs of creating and working with formal
verification, which perhaps distracted from the missing
transcript handshakes. (Tom Ptacek’s comment on the thread
explains those transcripts succinctly.)</li>
<li>The Debian project has issued a <a href="https://bits.debian.org/2023/12/debian-statement-cyber-resillience-act.md.html" target="_blank" rel="noopener">statement on
the EU CRA</a>, including a
mention of the “need to
perform risk assessments and
produce technical
documentation.” I think they
meant to object to that on the
basis that free software is a
gift to society, but I don’t
know how they develop ‘an
integrated system of
high-quality materials’
without those as
inputs. Anyway, it’s not clear
because despite the quote in
the preamble they don’t
explicitly return to the idea.</li>
<li>Bert Hubert continues to do outstanding, thoughtful work around
what the EU’s CRA means. His
latest <a href="https://berthub.eu/articles/posts/eu-cra-what-does-it-mean-for-open-source/?trk=feed-detail_main-feed-card_feed-article-content" target="_blank" rel="noopener">EU CRA: What
does it mean for open
source?</a> includes a
response to the Debian statement.</li>
<li>Sean Baxter writes about evolving C++ in backwards compatible ways
in <a href="https://github.com/seanbaxter/circle/blob/master/new-circle/README.md" target="_blank" rel="noopener">Circle</a>.</li>
</ul>
<p></p>
<h3>AI</h3>
<p></p><ul>
<li>Brendan Bycroft has an amazing <a href="https://bbycroft.net/llm" target="_blank" rel="noopener">LLM Visualization</a>. </li>
<li>In <a href="https://visualstudiomagazine.com/articles/2024/01/25/copilot-research.aspx" target="_blank" rel="noopener">New GitHub Copilot Research Finds
'Downward Pressure on Code Quality'</a> David Ramel reports on a
regwalled report about code quality and LLMs. According
to GitClear it results in lots of code churn and bad code that
violated DRY coding.</li>
<li>The Berryville Institute of Machine Learning released <a href="https://berryvilleiml.com/2024/01/24/all-your-llm-are-belong-to-us/" target="_blank" rel="noopener">An Architectural Risk Analysis of
Large Language Models: Applied Machine Learning Security</a>
(Regwalled), a refinement of their ML risk analysis.</li>
</ul><p></p>
<h3>Threat Modeling</h3>
<p>A lot of people are exploring how we can use LLMs in threat
modeling. I think of these explorations as a continuum from “use
ChatGPT” to “train a custom model,” with many points in between,
such as “LangChain that!” The other crucial spectrum is how much
work the threat modeler or threat modeling team needs to do to
prepare the the LLM.
</p>
<h3>Regulation</h3>
<p>My <a href="https://shostack.org/blog/the-state-of-appsec-2024/" target="_blank" rel="noopener">State of Appsec in 2024</a>
started with the rise of liability. As I think about it more, and
as I read Jim Dempsey’s article, I think I want to talk about “legal
consequences” to be more clear.</p>
<p></p><ul>
<li>Let’s start with that paper: Jim Dempsey has a new paper
<a href="https://www.lawfaremedia.org/article/standards-for-software-liability-focus-on-the-product-for-liability-focus-on-the-process-for-safe-harbor" target="_blank" rel="noopener">Standards
for Software Liability: Focus on the Product for Liability, Focus
on the Process for Safe Harbor</a> issued as the start of a
Security By Design series by Lawfare. It starts with a concise and
excellent survey of the
legal frameworks (warranty, negligence, liability, certification),
and the challenges that our responses like SDLs are about processes
not software quality outcomes and certainly not about safety of
operation. </li>
<li>John Voorhees wrote about <a href="https://www.macstories.net/stories/understanding-apples-response-to-the-dma/" target="_blank" rel="noopener">Understanding Apple’s Response to the
DMA</a>, (the EU’s Digital Markets Act) and how it’s changing Apple’s
App store. </li>
<li>In an editorial at
DataBreaches.net, “<a href="https://www.databreaches.net/if-entities-continue-to-obfuscate-and-lie-its-time-to-mandate-more-transparency-in-breach-disclosures/" target="_blank" rel="noopener">If entities continue to obfuscate and lie, it’s time to mandate more transparency in breach disclosures</a>,”
the inexhaustible blogger points out that “there are entities who rush
to assure people that they have no evidence that data has been misused
even though it’s early days, and even though they know that the data
is in the hands of criminals who wouldn’t hesitate to misuse it.” They continue
“DataBreaches believes that incomplete and misleading breach
disclosures constitute an unfair practice as defined in the FTC Act.”
They were building on a fairly outrageous story in which staff decided
to ignore open meeting laws, ignore breach disclosure laws, and not
discuss details that were already public because <a href="https://www.databreaches.net/if-youre-in-rock-county-wisconsin-do-not-read-this-post-absolutely-do-not-read-this-post/" target="_blank" rel="noopener">people
could connect dots</a> that could negatively affect their
employer. I think such actions are going to result in harsher and
harsher penalties. We as a community need to reset norms, soon, by
condemning these choices and making clear that they’re
unacceptable. You can’t go wrong remembering: it’s not the crime,
it’s the coverup.
</li></ul><p></p>
<h3>Shostack + Associates updates</h3>
<p>We’ve made a set of changes to our courses website to continue
simplifying how we communicate. There’s now a list of our most
popular courses across the top of <a href="https://courses.shostack.org/" target="_blank" rel="noopener">courses.shostack.org</a>
and we created course pages for them:
</p><ul>
<li><a href="https://courses.shostack.org/pages/essentials-201" target="_blank" rel="noopener">Threat Modeling Essentials</a></li>
<li><a href="https://courses.shostack.org/pages/intensive-222" target="_blank" rel="noopener">Threat Modeling Intensive</a> </li>
<li><a href="https://courses.shostack.org/pages/champs-301" target="_blank" rel="noopener">Threat Modeling for Security
Champs</a> </li>
</ul>
There’s still complexity, because our clients routinely want different
things to meet their needs, and our <a href="https://shostack.org/training" target="_blank" rel="noopener">/training</a> page continues to evolve to help you
make sense of it.
<p></p>
<p class="credit-text">Image by Midjourney: “A robot that looks
shocked and outraged by what it’s reading. The background is a
library lined with books. The image is cinematic, dramatic,
professional photography, studio lighting, studio background,
advertising photography, intricate details, hyper-detailed, 8K UHD --ar 8:3 --v 6.0”</p>
My Instructional Journey2024-02-06T00:00:00Zhttps://shostack.org/blog/instructional-journey/AdamThoughts on my instructional journey - and what yours might be<span class="single-post-excerpt"><!-- excerpt -->Thoughts on my instructional journey - and what yours might be<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/learning-journey-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/learning-journey-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/learning-journey-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/learning-journey-800w.jpeg" width="800" height="284" alt="a boy walking through a the forest and passing by mushrooms, in the style of chiaroscuro figurative paintings, light sky-blue and gold, ephemeral installations, intricate, delicate flower and garden paintings, texture exploration, joyful celebration of nature, nightscape --ar 8:3 --v 6.0 " decoding="async" />
</picture>
<p>The very first time I was asked to deliver threat modeling
training, I had no idea how. And I thought I was fortunate, because
my officemate at the time had been a professor, and so I asked
him. And he told me “I have no idea! They just throw us into the
deep end!” And while that’s sadly often true, it turns out there’s a
huge field of study around pedagogy, instructional design, and
more.
</p><p>I’ve recently learned about the second edition of
<a href="https://www.amazon.com/How-Learning-Works-Research-Based-Principles/dp/1119861691/" target="_blank" rel="noopener">How Learning Works</a>, and as I
read it, I realize I want to talk more about the instructional
design that now goes into our courses, and my journey to being a competent
instructional designer.</p>
<p>To do that, let me go back to 2015. I’d left Microsoft to build a
startup, and someone called and said: “can you come teach a one day
threat modeling class? We liked your book!” and I said, sure. And so
I went to their offices with a stack of powerpoint decks based on
the book, and I raced through material because there was <i>soooo
much</i> I wanted to share. </p>
<p>At the end of the day, I went and sat in my car, completely
exhausted, for so long that site security came over to check if I was
ok. Long story short, I asked: “Why am I so tired?” There were
two answers. The first is because I care about my students and want
them to learn, and the second is because I didn’t know how to
teach. I talked to colleagues and for the first time understood the words
“the person doing the work is the person doing the learning.” And I
started to learn about teaching.</p>
<p>Fast forward to 2020, I had learned a fair bit about teaching. I
had created formal learning goals, I’d adjusted what I taught to fit
better in a day. And as I was about to be confronted with how much I
still relied on whiteboards and how much I relied on face to face
interaction to show that I cared about student progress. From that
perspective, the pandemic was a chance to make some lemonade. I
could hire instructional designers from all over to help with my
design and delivery. I didn’t have to fly them anywhere to watch, I
didn’t have to worry about if they’d be presentable in a corporate
setting, and I got a lot out of value by bringing them in. It’s why
I’m comfortable saying I’m competent at course design: Those
consultants could have charged me a lot more if they had useful
improvements to suggest. But I’m still learning, thus reading the
2nd edition of <i>How Learning Works</i>.</p>
<P>Recently, I was talking to a smart friend, and he said “I've been
asked to create a training workshop for threat modeling here at work,
and they gave me some old slides and syllabus. LOL if it wasn't so
sad.”<p></p>
<p>Exactly.</p>
<p>Writing good TM content is time consuming, both for drafts and to
refine it. I’ve been refining content for eight years. I’ve gone so
far as to buying a teleprompter to help me record high quality
video. We have subtitled video, exercises, instructor guides, timing
sheets and more. As I mentioned, we’ve brought in instructional
designers to help refine it. It’s hard to say how long I’ve spent on
it, but for my first Linkedin Learning courses, it was about an hour
of work for a minute of content. But Linkedin Courses don’t include
the time students spend on exercises, so that’s not a perfect guide.</p>
<p>And so let’s say you “only” spend ten minutes per minute of the
course. A one day course at 7 hours would be 70 hours of course
design, or about 2 weeks. More likely, it’s a months-long project to
get to alpha quality, and then more to hone and refine. And as you
do, you’re spending the exceptionally expensive time and goodwill of
your engineers on your beta.</p>
<p>So I’ll ask you what I asked him: Why not license ours? Much like
adapting from in person to zoom, or live instruction to self-pace,
the way we deliver can update and evolve. </p>
<p class="credit-text">Image by Midjourney</p>
</P>Conoscere Il Lato Oscuro2024-02-14T00:00:00Zhttps://shostack.org/blog/il-lato-oscuro/AdamThreats is available in Italian!<span class="single-post-excerpt"><!-- excerpt -->Threats is available in Italian!<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/threats-italian-cover-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/threats-italian-cover-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/threats-italian-cover-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/threats-italian-cover-800w.jpeg" width="800" height="320" alt="The cover of Il Lato Oscuro Del Software" decoding="async" />
</picture>
<p>
</p><p>Così come la Forza è una proprietà di tutti gli esseri viventi, la
sicurezza è una proprietà di tutti i sistemi tecnologici. E come
la Forza ha un lato chiaro e un lato oscuro, la sicurezza ha
difese e attacchi.</p>
<p> È importante concentrarsi sugli attacchi, le minacce, i problemi e
comprenderli per predisporre le difese appropriate. </p>
<p>Adam Shostack l'ha fatto partendo da alcune intuizioni tratte
dall’universo di Star Wars. E il risultato è una guida pratica e
divertente che illustra modelli di sicurezza rilevanti per un'ampia
varietà di sistemi e racconta le strategie attraverso cui gli hacker
tentano di raggiungere i loro dannosi obiettivi. </p>
<p>Se ti interessa creare da zero codice a prova di attacco, in questa
newsletter trovi articoli, letture e spunti per fare un primo passo
nel lato oscuro del software.</p>
<P>In other words, the <a href="https://www.apogeonline.com/libri/il-lato-oscuro-del-software-adam-shostack/" target="_blank" rel="noopener">Italian
Edition of Threats is available!</a> My Italian publisher, Apogeo, also has an <a href="https://www.apogeonline.com/articoli/e-ora-di-finirla-con-la-rotazione-delle-password-la-sicurezza-secondo-adam-shostack-autore-de-il-lato-oscuro-del-software-adam-shostack/" target="_blank" rel="noopener">interview</a>
with me, translated into Italian.<p></p>
<p>
</p><p class="credit-text">Image by Apogeo</p>
</P>Blackhat and Human Factors2024-02-19T00:00:00Zhttps://shostack.org/blog/blackhat-and-human-factors-2024/AdamBlackHat invites human factors work<span class="single-post-excerpt"><!-- excerpt -->BlackHat invites human factors work<!-- /excerpt --></span>
<p>Blackhat 2024 will be August 7-8 in Las Vegas, The <a href="https://www.blackhat.com/call-for-papers.html" target="_blank" rel="noopener">call for
papers</a> is open, and will close on April 10. (Please check all
dates in the official CFP.) </p><p>
<P>As a member of the BlackHat Review Board, I’m responsible for the
Human Factors track. Over the last decade, we’ve developed a good track with a wide
variety of content. (You can click on “briefings” and “archive” to
see past events and filter the talks shown. I recommend focusing
on the ones with “human factors” listed first.)
<p>This year, we’ve extend the track description to incorporate
“interesting” uses of generative AI. New text is in bold.
</p><blockquote>
The Human Factors track focuses on people in security: how
their decisions can affect the security of the organization,
and how engineering and technology can help. This includes the
way people make decisions and how to influence those decisions
as an attacker or defender. It also includes how to reduce
their decision load and the organizational (and potentially
economic) factors that surround those decisions. This track
welcomes submissions on how to get individuals or groups to
act against their interest, including the use of
disinformation or misinformation. <b>This track is open to new
and original ideas about use of generative AI insofar as they
manipulate or influence people.</b> It also includes new ways to
strengthen technology and other solutions to decrease
harm. This track is not about career development/BOFH
stories/simple ploys like buying a UPS outfit <b>or using voice
cloning or deepfake video</b>/sploits to make the browser draw a
fake UI.</blockquote>
</P></p>
<p>Additionally, the submission requirements specify: “Submissions
that include text generated entirely from a large-scale language model
(LLM) such as ChatGPT, Google Bard, etc. are prohibited. Submitters
may use LLM tools for editing or polishing author-written text ONLY.”
</p><p>In 2016, I wrote a short <a href="https://shostack.org/archive/2016/04/humans-in-security-blackhat-talks/">blog
post</a> on what we were looking
for. I’m pleased we’re executing on
that vision and growing it.</p>
<p>The BlackHat CFP calls for work which has not been published
elsewhere. We prefer fully original work, but will consider a new talk
that’s designed for the BlackHat audience. Oftentimes, Blackhat does
not count as “Publication” in the view of academic program committees,
and so you can present something at BlackHat that you plan to Publish
later. (You should, of course, check with the other venue, and
disclose what you’re doing to BlackHat.)</p>
<p>If you’re considering submitting, I encourage you to read both <a href="https://www.blackhat.com/call-for-papers.html" target="_blank" rel="noopener">/call-for-papers.html</a> and <a href="https://usa-briefings-cfp.blackhat.com/" target="_blank" rel="noopener">usa-briefings-cfp.blackhat.com/</a> in
their entirety. Both have useful information for submitters.</p>
ThreatModCon Lisbon 20242024-02-21T00:00:00Zhttps://shostack.org/blog/threatmodcon-lisbon/AdamSubmit your papers for ThreatModCon 2024 now!<span class="single-post-excerpt"><!-- excerpt -->Submit your papers for ThreatModCon 2024 now!<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/threatmodcon2024-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/threatmodcon2024-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/threatmodcon2024-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/threatmodcon2024-800w.jpeg" width="800" height="271" alt="A banner for the conference" decoding="async" />
</picture>
<p>The call for papers for ThreatModCon Lisbon is open for another week!</p>
<p>
Hosted by the Threat Modeling Connect community, <a href="https://shostack.org/blog/threatmodcon-lisbon/threatmodcon2024.png" target="_blank" rel="noopener">ThreatModCon made</a>
waves last year as the first-of-its-kind conference dedicated to
threat modeling. Since then, it has quickly become one of the
trendiest boutique cybersecurity events around. The CFP is
currently open through March 1, 2024. You can find out more
details about the CFP on portal:
<a href="https://sessionize.com/threatmodcon-2024-lisbon" target="_blank" rel="noopener">https://sessionize.com/threatmodcon-2024-lisbon</a>
</p>Solving Hallucinations2024-02-23T00:00:00Zhttps://shostack.org/blog/solving-hallucinations/AdamSolving hallucinations in legal briefs is playing on easy mode —— and still too hard<span class="single-post-excerpt"><!-- excerpt -->Solving hallucinations in legal briefs is playing on easy mode —— and still too hard<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/solving-hallucinations-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/solving-hallucinations-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/solving-hallucinations-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/solving-hallucinations-800w.jpeg" width="800" height="299" alt="a modern paper on a desk. it's a few pages long, and is in 12 point times roman font, double spaced and with lots of complex blue book citations in the text. The content of the text is a disciplinary hearing for a lawyer. The text is wierdly distorted, and the edges show signs of hallucination " decoding="async" />
</picture>
<p>All LLMs ever do is hallucinate. Sometimes we like their
hallucinations, sometimes we don’t. And a lot of people are
claiming to have solved LLM hallucinations, because they’re a
clear impediment to business adoption.</p>
<p>Those people are hallucinating, along with their machines.</p>
<p>Solving hallucinations in legal briefs is playing on easy mode. You
chain your main LLM output through a thing which detects legal
citations. This is not a trivial thing. You can find a lot looking
for <code>/v\./</code>, as in <i>Brown v. Board of
Education</i>. That gets you a lot, and it may not be
enough. That’s ok! A quick search reveals code like <a href="https://gist.github.com/mlissner/dda7f6677b98b98f54522e271d486781" target="_blank" rel="noopener">open source code</a> that matches a
case name and rewrites it to be a link to Cornell’s law
site. Looking up the case was going to be my step 2: check the
citation you’ve matched against a database of cases. There are
lots of those. You can chain that code with your LLM today, and if
there are too many failures, toss the whole response. I could have
ChatGPT write the code for me in an afternoon.</p>
<p>Sure, my afternoon’s code would be imperfect, and there are
certainly edge cases that make this hard. There’s probably more
sophisticated approaches available... but that’s not even getting
done. An article in Lawnext,
<a href="https://www.lawnext.com/2024/02/not-again-two-more-cases-just-this-week-of-hallucinated-citations-in-court-filings-leading-to-sanctions.html" target="_blank" rel="noopener">Two More Cases, Just this Week, of
Hallucinated Citations in Court Filings Leading to Sanctions</a>
documents a brief with 22 out of 24 cases cited being
inaccurate. That’s not a result of edge cases slipping
through. It’s a result of not trying. In fact, in <a href="https://www.lawnext.com/wp-content/uploads/2024/02/Opinion_ED111172.pdf" target="_blank" rel="noopener">that case</a>, 20 out of the 22
citations would be identified by a check for ‘v.’ Of those, 5 are
real cases, what the court refers to as ‘Fictitious citation using
a real case name.’ Those would be distinguished by my second
check. So really trivial validation could have
revealed that 17 of 22 cases didn’t even exist.</p>
<p>Perhaps we can attribute this to LLMs not being fine tuned (in the
English sense, not the technical one) for legal cases. But all the
public chatbots are wrapped in layers of ‘safety,’ and it’s hard
to see this as an oversight, or the many, many ways in which LLMs
might need to be tuned means that general purpose chatbots are
going to remain dangerous. It’s more reasonable to see it as an
indication that preventing damaging hallucinations is not just a
complex task or an hard bit of engineering: it’s unclear if it’s
even possible when you get beyond easy mode.</p>
<p class="credit-text">The observation that all LLMs do is hallucinate
is by <a href="https://twitter.com/karpathy/status/1733299213503787018?lang=en" target="_blank" rel="noopener">Andrej Karpathy</a>. Image by midjourney. </p>
Insecurity of Government Infrastructure2024-02-26T00:00:00Zhttps://shostack.org/blog/insecurity-of-government-infrastructure/AdamWe have a new paper at NDSS<span class="single-post-excerpt"><!-- excerpt -->We have a new paper at NDSS<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/insecurity-of-gov-infra-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/insecurity-of-gov-infra-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/insecurity-of-gov-infra-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/insecurity-of-gov-infra-800w.jpeg" width="800" height="265" alt="A graph showing use of DNSSec" decoding="async" />
</picture>
<p>In security, there’s work we do to protect ourselves or our business, and there’s
work we do to protect others, or to enable people to securely interact with
us. Guess which CISOs tend to prioritize? </p>
<p>Giving society a way to think about, measure, and improve the later
is an important value of the “cyber public health” framing. That’s
why a group of us, led by Evan Lam and Sudheesh Singanamalla, have
a new short paper* in NDSS, “<a href="https://www.ndss-symposium.org/wp-content/uploads/ndss24-posters-42.pdf" target="_blank" rel="noopener">On the (In)Security of Government Web
and Mail Infrastructure</a>:”
</p><blockquote>Abstract: Government web infrastructure is a critical
part of today’s Internet and the functioning of society. Citizens’
interactions with digital government infrastructure needs to be
secure since they might contain important and sensitive information.
These interactions can be through various web applications providing
digital public services, or through communication mechanisms such as
email. Government websites and mail servers typically form the long
tail of today’s Internet and do not appear on large top million
Internet datasets making them very understudied. DNS infrastructure
forms the center piece for citizens to interact with government
services allowing resolution of IP addresses, and enabling email
communication and sender policy enforcement between mail service
providers. In this poster, due to their inter-dependent nature, we
present a comprehensive security evaluation of government web
infrastructure covering both web and mail services in addition to
understanding the security of the DNS services they rely on. We open
source our implementation of the security scanner to the community,
invite collaborators to engage with the data periodically scanned, and
release the largest public dataset of government hostnames.</blockquote>
<p>* It’s technically accepted as a “poster,” and NDSS lets you post two
page explainers with those posters. </p><p>
</p>Archimedes Early Bird Ends Soon2024-02-29T00:00:00Zhttps://shostack.org/blog/archimedes-early-bird/AdamEarly bird pricing ends March 4<span class="single-post-excerpt"><!-- excerpt -->Early bird pricing ends March 4<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/archimedes-early-bird-2024-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/archimedes-early-bird-2024-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/archimedes-early-bird-2024-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/archimedes-early-bird-2024-800w.jpeg" width="800" height="266" alt="" decoding="async" />
</picture>
<p>You know, you can't always get what you want, but if you try
sometime, you just might find, you get what you need. And if what
you need is a fast introduction to threat modeling...
At the Archimedes Health Care Security Week, I’ll be presenting my
“Threat Modeling Essentials” course in a rare in-person
version. There’s going to be lots of amazing content all week
long. <a href="https://www.secure-medicine.org/events" target="_blank" rel="noopener">details here</a>.
</p>
<p>Also, I hear there’s <a href="https://www.nojazzfest.com/music/" target="_blank" rel="noopener">some sort of music festival</a>
happening around the same time.
</p>Application and AI roundup - Feb 20242024-03-05T00:00:00Zhttps://shostack.org/blog/appsec-roundup-feb-2024/AdamA busy month in appsec, AI, and regulation.<span class="single-post-excerpt"><!-- excerpt -->A busy month in appsec, AI, and regulation.<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/appsec-roundup-feb-2024-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/appsec-roundup-feb-2024-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/appsec-roundup-feb-2024-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/appsec-roundup-feb-2024-800w.jpeg" width="800" height="301" alt="a robot reading a book" decoding="async" />
</picture>
<p>Noted cryptographer Ari Juels has a new novel, <a href="https://www.oraclenovel.com/" target="_blank" rel="noopener">The
Oracle</a> (Update: Wendy Grossman has a <a href="https://netwars.pelicancrossing.net/2024/03/05/review-the-oracle/" target="_blank" rel="noopener">review</a>.)
</p><h3>Application Security</h3>
<p></p><ul>
<li>Qualys found <a href="https://blog.qualys.com/vulnerabilities-threat-research/2024/01/30/qualys-tru-discovers-important-vulnerabilities-in-gnu-c-librarys-syslog" target="_blank" rel="noopener">a set of local
privilege escalation bugs in glibc</a> that had gone un-noticed
for 30 years. </li>
<li>Zach Hanley of Horizon3 did an analysis of the Known Exploited
Vulns list, and found that 48% exploit “insecure exposed
functions,” and concludes that <a href="https://www.horizon3.ai/analysis-of-2023s-known-exploited-vulnerabilities/" target="_blank" rel="noopener">Rust Won’t Save Us</a>.</li>
<li>Speaking of Rust, Speykious released <a href="https://crates.io/crates/cve-rs" target="_blank" rel="noopener">CVE-rs</a>, “Blazingly 🔥 fast 🚀 memory vulnerabilities, written in 100% safe Rust.” </li>
</ul>
<p></p>
<h3>AI</h3>
<p></p><ul>
<li>A widely reported story, but: <a href="https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454" target="_blank" rel="noopener">Air Canada's chatbot gave a B.C. man the wrong information. Now, the airline has to pay for the mistake</a>. </li>
</ul><p></p>
<h3>Threat Modeling</h3>
<p>
</p><ul> <li>Corey Quinn has a good article, <a href="https://www.lastweekinaws.com/blog/are-aws-account-ids-sensitive-information/" target="_blank" rel="noopener">Are AWS
account IDs sensitive information?</a> He says: “I don’t
particularly care whether or not the account IDs are
sensitive, personally. If they are, great! If not, super! Just
answer the question authoritatively so I can avoid the mental
overhead of wondering.” It’s a great point, and while he
doesn’t mention threat modeling, this is the sort of
question that high-functioning threat modeling can identify
and prioritize for resolution.</li>
<li>The 2nd Threat Modeling Connect <a href="https://www.threatmodelingconnect.com/events/spring-2024-hackathon-50" target="_blank" rel="noopener">Hackathon</a> is still
open. I’ll give an opening keynote, and one of the prizes for the
winning team is an Elevation of Privilege game with me.
<p></p>
</li></ul>
<h3>Regulation</h3>
<p></p><ul>
<li>NIST released CSF2.0. Randy Sabbet of law firm Cooley has a
good <a href="https://cdp.cooley.com/nist-unveils-cybersecurity-framework-2-0/" target="_blank" rel="noopener">summary</a>. I’ll note
that more than a decade after Mark Andreesen said “All companies
are software companies,” CSF 2 treats software as a “supply
chain” issue, not a core competency. </li>
<li>The White House has released a paper, <a href="https://www.whitehouse.gov/oncd/briefing-room/2024/02/26/press-release-technical-report/" target="_blank" rel="noopener">Future Software Should Be Memory
Safe</a>. Michal Zalewski
(aka lcamtuf) has <a href="https://lcamtuf.substack.com/p/a-reactionary-take-on-memory-safety" target="_blank" rel="noopener">A
reactionary take</a> in which he questions the cost. Regardless, your
management will increasingly care, and you need a plan.</li>
</ul><p></p>
<h3>Shostack + Associates updates</h3>
<p>Open trainings: We have a one day Essentials class at Archimedes in
New Orleans (<a href="https://www.secure-medicine.org/events/2024healthcaresecurityweek" target="_blank" rel="noopener">April 30</a>), and two Intensives
in person at Blackhat (both two days), which are <a href="https://www.blackhat.com/us-24/training/schedule/#adam-shostacks-threat-modeling-intensive-36722" target="_blank" rel="noopener">Aug 3-4</a> or
<a href="https://www.blackhat.com/us-24/training/schedule/#adam-shostacks-threat-modeling-intensive-367221705524592" target="_blank" rel="noopener">Aug 5-6</a>
</p>
<p>And last but not least, my book <a href="https://www.linkedin.com/pulse/conoscere-il-lato-oscuro-apogeo-pypwf%3FtrackingId=bP9LB5hJTXm9N1DqUikqwA%253D%253D/?trackingId=bP9LB5hJTXm9N1DqUikqwA%3D%3D" target="_blank" rel="noopener">Threats</a> is available
in Italian!
</p><ul> Una guida pratica per scrivere applicazioni sicure, con l'aiuto dei
tuoi maestri Jedi, Sith e droidi preferiti.
<li>
Le principali minacce informatiche che ogni ingegnere dovrebbe
conoscere.
</li><li> Semplici framework di sicurezza del software da integrare nei
propri sistemi.
</li><li> Strategie per costruire sistemi sicuri per team di lavoro di
grandi dimensioni.
</li><li>Strategie usate dagli hacker per violare sistemi.
</li></ul>
<p class="credit-text">Image by Midjourney: “A robot that looks shocked and outraged by what it's reading. The background is a library lined with books. The image is cinematic, dramatic, professional photography, studio lighting, studio background, advertising photography, intricate details, hyper-detailed, 8K UHD --ar 8:3 --v 6.0”</p>
The British Library’s Incident Review2024-03-09T00:00:00Zhttps://shostack.org/blog/british-library-incident-report/AdamThoughts on the British Library incident<span class="single-post-excerpt"><!-- excerpt -->Thoughts on the British Library incident<!-- /excerpt --></span>
<picture class="single-post-main-img">
<source type="image/webp" srcset="https://shostack.org/images/blog/img/2024/british-library-800w.webp 800w" sizes="100vw" />
<source type="image/jpeg" srcset="https://shostack.org/images/blog/img/2024/british-library-800w.jpeg 800w" sizes="100vw" />
<source type="image/png" srcset="https://shostack.org/images/blog/img/2024/british-library-800w.png 800w" sizes="100vw" />
<img src="https://shostack.org/images/blog/img/2024/british-library-800w.jpeg" width="800" height="288" alt="The report title, ‘LEARNING LESSONS FROM THE CYBER-ATTACK British Library cyber incident review’" decoding="async" />
</picture>
<p>The British Library has been operating at reduced capacity for
months as a result of a ransomware attack. They’ve just released a
quite solid analysis of what they’ve undergone and learned: <a href="https://www.bl.uk/home/british-library-cyber-incident-review-8-march-2024.pdf" target="_blank" rel="noopener">Cyber Incident Review</a>.</p>
<p>There’s an awful lot to like. It’s readable, thoughtful, transparent, and not
finger-pointing or blame-dodging. It has a good history of the
events, separated from the analysis and lessons. And as I read it,
I did have questions, and before I get to them, I want to
emphasize that I don’t mean to nit-pick, and respect that they got
a report like this out while they’re still in what they call the ‘adapt’
phase.</p>
<p>More, I feel like they’ve staked out and demonstrated a modern way
to respond to a cyber incident. They’ve taken the reasonable
perspective that this was done to them, they show concern about
the individuals whose data was taken, they seem to be engaged with
the police and Information Commissioner’s office, and seem to not
have “lawyered up.” All of that is easier for a public
institution, but we know that <a href="https://www.r-klaw.com/2010/08/18/want-to-avoid-a-lawsuit-admit-your-mistakes/" target="_blank" rel="noopener">admitting mistakes reduces
lawsuits</a>, even when people have died from preventable medical errors.</p>
<p>My questions are largely targeted at the promise the report
makes in its title: Learning lessons. There are several places
where I’d like more details, and I’d love to see an appendix or
somesuch with IOCs/TTPs and other technical details that didn’t
make it to the report.</p>
<blockquote>
While the Library’s monitoring software did not automatically
isolate the intrusion at source, it did intervene in some of the
actions and prevented further intrusion into parts of the
Library’s technology estate. </blockquote>
<p>I’d like to know more about the monitoring software. Where did it
intervene? Were there places that someone might have expected it
to intervene and it didn’t? Was its intervention and misses a
matter of defaults, configuration, or capabilities?</p>
<blockquote>
We believe that the unedited Electoral Roll database held as part of the collection was not
compromised, as all indications are that the enhanced levels of
encryption in place on that particular database functioned as
intended and protected it from the attack method described above.
</blockquote>
<p>I don’t understand this. Is “the attack method described above”
copying the network drives? (That’s the first attack method
described.) Because it seems that would not work against “forcibly
creating backup copies of 22 of our databases.” So a bit more about
what happened would be helpful.</p>
<p>I also like that they thought about the ways in which they are not unique:
</p><blockquote>“Many of the major collections institutions in the DCMS family and the wider sector are likely to have
similar risks to the British Library in terms of investment levels in cyber-security, legacy
infrastructure, and difficulties attracting and retaining sufficient
IT talent.”</blockquote>
That’s a point that was made by the NCSC’s founder, Ciaran Martin, who
<a href="https://ciaranmartin.substack.com/p/on-the-matter-of-the-british-library" target="_blank" rel="noopener">wrote</a>: “Indeed,
an incident of the
severity of the BL
attack is likely in
each of the next five
years.”<p></p>
<p>Overall, an excellent report, and it’s worth learning from.</p>
<p class="credit-text"> Ok, I do have one nit I’m going to pick: Why is the report not on a
white background? Does the Library not know that people print things?</p>