MSDN Magazine > Issues and Downloads > 2005 > November > A Look Inside the Security Development Lifecycl...
Michael Howard
This article discusses:
Overview of the Security Development Lifecycle
Security in the design and development processes
Threat modeling and testing
Security reviews and responses
Contents
Leadership and Education
The Design Phase
Threat Modeling
The Development Phase
Security Testing
Starting a Security Push
Final Security Reviews
The Security Response
Does SDL Work?
The goals of the Security Development Lifecycle (SDL), now embraced by Microsoft, are twofold: to
reduce the number of security-related design and coding defects, and to reduce the severity of any
defects that are left. This follows our oft-cited motto, "Secure by Design, Secure by Default, Secure
in Deployment and Communication" (also known as SD3+C). SDL focuses mainly on the first two
elements of this motto. Secure by Design means getting the design and code secure from the
outset, while Secure by Default is a recognition that, realistically, you never will get the code 100
percent correct; more on this later when I discuss attack surface reduction.
This article outlines how to apply the SDL to your own software development processes. I will
explain how you can take some of the lessons we have learned at Microsoft when implementing
SDL, so you can use these concepts in your own development process. But before I get started, I
want to make clear that SDL is process-agnostic as far as how you go about developing software.
Whether you use a waterfall model, a spiral model, or an agile model, it really doesn't matter; you
can still use the process improvements that come from SDL. SDL involves modifying a software
development organization's processes by integrating measures that lead to improved software
security. The really great news is SDL does improve software quality by reducing security defects.
SDL adds security-specific checks and measures to any existing software development process.
Figure 1 shows how SDL maps onto a "generic" process. If it makes you happy, wrap the SDL
around a spiral or down a waterfall.
I'll take a look at each major phase and outline what you can do within your own organization to
implement SDL.
Leadership and Education
For leadership, you need to nominate one or more individuals to be the point people for security.
Their jobs include staying on top of security issues, pushing the security practices on the
development organization and being the voice of reason when it comes to making tough security
decisions. (If you're reading this, that person is probably you.) The leadership person or people
should monitor the various security-related mailing lists, such as Bugtraq (www.securityfocus.com).
If your engineers know nothing about the basic security tenets, common security defect types,
basic secure design, or security testing, there really is no reasonable chance they could produce
secure software. I say this because, on average, software engineers don't pay enough attention
to security. They may know quite a lot about security features, but they need to have a better
understanding of what it takes to build and deliver secure features. It's unfortunate that the term
security can imply both meanings, because these are two very different security realms. Security
features are about how things work: for example, the inner operations of the Java or common
language runtime (CLR) sandbox, or how encryption algorithms such as DES or RSA work. While
these are all interesting and useful topics, knowing that the DES encryption algorithm is a 16-
round Feistel network isn't going to help people build more secure software. Knowing the
limitations of DES, and the fact that its key size is woefully small for today's threats, is very useful,
and this kind of detail is the core tenet of how to build secure features.
The real concern is that most schools, universities, and technical colleges teach security features,
and not how to build secure software. This means there are legions of software engineers being
churned out by these schools year after year who believe they know how to build secure software
because they know how a firewall works. In short, you cannot assume that anyone you hire
understands how to build security defenses into your software unless you specifically ask about
their background and knowledge on the subject.
Some good sources for online and instructor-led security education include Microsoft eLearning
(www.microsoftelearning.com/security). The security guidance for developers is derived from some
of the security basics material we present at Microsoft.
You should also build a library of good security books, such as those listed in Figure 2.
The Design Phase
All functional and design specifications, regardless of document size, should contain a section
describing how the component impacts security. To get some ideas on what to add to this section,
you should review RFC 3552 "Guidelines for Writing RFC Text on Security Considerations".
An important part of the design process is to understand how you will reduce the attack surface of
your application or component. Anonymously accessible and open UDP ports to the Internet
represent a larger attack surface than, say, an open TCP port accessible only to a restricted set of
IP addresses. I don't want to spend too much time on the subject here; instead you should refer to
my article on attack surface reduction "Mitigate Security Risks by Minimizing the Code You Expose
to Untrusted Users" in MSDN Magazine, November 2004. Figure 3 should give you a quick
reference for reducing the attack surface of your code.
Figure 3 contrasts higher attack surface choices (open sockets, UDP, anonymous access,
Internet-wide access, weak ACLs) with their lower attack surface counterparts (closed sockets,
TCP, authenticated user access, strong ACLs).
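One of those contrasts in concrete terms: binding a listener to the loopback interface rather than to all interfaces takes the port off the network entirely. A sketch using POSIX sockets (the function name and design here are illustrative, not from the article):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Reduce attack surface: listen on 127.0.0.1 only, not INADDR_ANY,
   so the port is unreachable from other machines. */
int open_local_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0); /* TCP, not UDP */
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    /* Loopback only; INADDR_ANY would expose the port to the network. */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(fd, 16) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

The same idea generalizes: every service that can default to the narrower option (authenticated instead of anonymous, local instead of Internet-facing) removes a class of attackers before any code is even exercised.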
Threat Modeling
Threat modeling must be completed during the product design process. A team cannot build a
secure product unless it understands the assets the product is trying to protect (customers'
personal information such as credit card numbers, not to mention their computers), the threats
and vulnerabilities introduced by the product, and details of how the product will mitigate those
threats. Additionally, it is important to consider threats and vulnerabilities present in the
environment in which the product is deployed or those that arise due to interaction and interfacing
with other products or systems in end-to-end real world solutions. To this end, the design phase
of a product cannot be considered complete until a threat model is in place. Threat models are
critical components of the design phase and will reference both a product's functional and design
specifications to describe both vulnerabilities and mitigations.
Understanding the threats to your software is a critical step to creating a secure product. Too
many people bolt security technology onto their applications and declare them secure, but the code
is not secure unless the countermeasures address real-world threats. That's the goal of
threat modeling. To get a good feel for the process in less than 30 minutes, I would recommend
you read the blog entry "Guerrilla Threat Modeling" written by my colleague Peter Torr.
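The article doesn't prescribe a notation, but Microsoft's threat-modeling practice commonly enumerates each entry point against the STRIDE categories (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege). A minimal sketch; the structure and the sample entries are hypothetical:

```c
#include <stddef.h>

/* STRIDE threat categories as bit flags. */
enum threat {
    SPOOFING        = 1 << 0,
    TAMPERING       = 1 << 1,
    REPUDIATION     = 1 << 2,
    INFO_DISCLOSURE = 1 << 3,
    DENIAL          = 1 << 4,
    ELEVATION       = 1 << 5
};

struct entry_point {
    const char *name;        /* where untrusted input arrives */
    unsigned    threats;     /* STRIDE categories that apply */
    const char *mitigation;  /* how the design addresses them */
};

/* Hypothetical model for a small networked service. */
static const struct entry_point model[] = {
    { "login form",  SPOOFING | ELEVATION,    "strong auth, account lockout" },
    { "file upload", TAMPERING | DENIAL,      "validate type, cap size"      },
    { "audit log",   REPUDIATION | TAMPERING, "append-only store, signing"   },
};

/* The design phase is not complete while any threatened
   entry point lacks a documented mitigation. */
int model_complete(const struct entry_point *m, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (m[i].threats != 0 && m[i].mitigation == NULL)
            return 0;
    return 1;
}
```

Even a table this small forces the conversation the article calls for: what are the assets, where does untrusted data enter, and what mitigates each threat.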
The Development Phase
PREfast PREfast is a static analysis tool for C/C++ code. It can find some pretty subtle security
defects, and some egregious bugs, too. This is lint on security steroids.
Standard Annotation Language (SAL) Of all the tools we have added to Visual Studio 2005,
this is the technology that excites me the most because it can help find some hard to spot bugs.
Imagine you have a function like this:
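The original listing is not reproduced on this page; the following hypothetical prototype (the name DoStuff and its body are mine, for illustration) captures the shape being described:

```c
/* buffer is cbBufferLength bytes long, but nothing in the prototype
   records that relationship; the compiler sees only a pointer and a
   32-bit unsigned integer. Illustrative body: sum the bytes. */
unsigned DoStuff(const char *buffer, unsigned int cbBufferLength)
{
    unsigned sum = 0;
    for (unsigned int i = 0; i < cbBufferLength; i++)
        sum += (unsigned char)buffer[i];
    return sum;
}
```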
You know that buffer and cbBufferLength are tied at the hip; buffer is cbBufferLength bytes long.
But the compiler does not know that; all it sees is a pointer and a 32-bit unsigned integer. Using
SAL, you can link the two. So the header that includes this function prototype might look like the
following:
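A sketch of the annotated form, using the __in_bcount spelling from the Visual Studio 2005 betas' specstrings.h (stubbed out below so the fragment compiles anywhere; the function itself is still my hypothetical example):

```c
/* Without specstrings.h the annotation expands to nothing. */
#ifndef __in_bcount
#define __in_bcount(size)
#endif

/* The annotation ties the two parameters together: buffer must point
   to cbBufferLength readable bytes, so an analyzer such as PREfast can
   flag callers that pass a length larger than the actual buffer. */
unsigned DoStuff(__in_bcount(cbBufferLength) const char *buffer,
                 unsigned int cbBufferLength);

/* Illustrative definition so the sketch is self-contained. */
unsigned DoStuff(const char *buffer, unsigned int cbBufferLength)
{
    unsigned sum = 0;
    for (unsigned int i = 0; i < cbBufferLength; i++)
        sum += (unsigned char)buffer[i];
    return sum;
}
```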
Please note the final syntax used for SAL may change before Visual Studio 2005 ships.
FxCop You may already know of FxCop: it's a tool for finding defects, including security defects, in
managed code. It's available as a download from www.gotdotnet.com, but the version in Visual
Studio 2005 is fully integrated, and includes some new issues to watch out for.
Application Verifier AppVerifier is a runtime tool that monitors a running application. It can
be used to trap memory-related issues at run time, including heap-based buffer overruns.
Other tools and requirements at Microsoft include:
All unmanaged C/C++ code must be compiled with the /GS stack overrun detection capability.
All unmanaged C/C++ code must be linked using the /SafeSEH option.
All RPC code must be compiled with the MIDL /robust flag.
Security issues flagged by FxCop and PREfast must be fixed.
The functions shown in Figure 4 are banned for new code, and should be removed over time
for legacy code.
Banned APIs               Strsafe Replacement                 Safe CRT Replacement
strcpy, wcscpy            String*Copy or String*CopyEx        strcpy_s
strcat, wcscat            String*Cat or String*CatEx          strcat_s
sprintf, swprintf         String*Printf or String*PrintfEx    sprintf_s
_snwprintf, _snprintf     String*Printf or String*PrintfEx    _snprintf_s or _snwprintf_s
vsprintf, vswprintf       String*VPrintf or String*VPrintfEx  _vstprintf_s
_vsnprintf, _vsnwprintf   String*VPrintf or String*VPrintfEx  _vsntprintf_s
strncpy, wcsncpy          String*CopyN or String*CopyNEx      strncpy_s
strncat, wcsncat          String*CatN or String*CatNEx        strncat_s
scanf, wscanf             None                                sscanf_s
strlen, wcslen            String*Length                       strlen_s
You can read about the Strsafe string replacement code in "Strsafe.h: Safer String Handling in C".
The Safe C library is the new C runtime library replacement built into Visual Studio 2005. You can
read about it at "Safe! Repel Attacks on Your Code with the Visual Studio 2005 Safe C and C++
Libraries".
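The common thread in both replacement families is that the destination size is passed explicitly and the result is always null-terminated. A sketch in the spirit of Strsafe's StringCchCopy (not the real implementation; the name and return convention are mine):

```c
#include <stddef.h>

/* Bounded copy in the spirit of StringCchCopy: never writes past
   cchDest characters, always null-terminates, reports truncation.
   Returns 0 on success, -1 on truncation or bad arguments. */
int safe_copy(char *dest, size_t cchDest, const char *src)
{
    if (dest == NULL || src == NULL || cchDest == 0)
        return -1;
    size_t i = 0;
    for (; i + 1 < cchDest && src[i] != '\0'; i++)
        dest[i] = src[i];
    dest[i] = '\0';                 /* unlike strncpy, always terminate */
    return src[i] == '\0' ? 0 : -1; /* -1: the source was truncated */
}
```

Contrast this with strcpy, which has no idea how big dest is, and strncpy, which can leave dest unterminated; those two failure modes account for a large share of classic buffer overrun defects.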
Security Testing
A very useful testing technique for finding security defects is "fuzzing," which means taking valid
data, morphing that data, and then observing an application that consumes the data. In its
simplest form, you could build a library of valid files that your application consumes, and then use
a tool to systematically corrupt a file and have your application play or render the file. Run the
application under Application Verifier with heap checking enabled to help uncover more errors.
Examples of morphing data include:
Exchanging random bytes in a file
Writing a random series of bytes of a random size at a random location in the file
Changing the sign of known integers, or making them too large or too small
Finding ASCII or Unicode strings and setting the trailing NULL character to be non-NULL
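The first two bullets can be sketched as a simple mutation routine (illustrative only; a real harness would feed the result to the target application running under AppVerifier and watch for crashes):

```c
#include <stdlib.h>
#include <string.h>

/* Simplest fuzz mutation: copy the valid input, then overwrite a few
   randomly chosen bytes with random values. The caller renders the
   result with the application under test. Caller frees the buffer. */
unsigned char *mutate(const unsigned char *valid, size_t len, int flips)
{
    unsigned char *fuzzed = malloc(len);
    if (fuzzed == NULL || len == 0)
        return fuzzed;
    memcpy(fuzzed, valid, len);
    for (int i = 0; i < flips; i++)
        fuzzed[(size_t)rand() % len] = (unsigned char)(rand() % 256);
    return fuzzed;
}
```

Starting from valid files matters: wholly random input rarely gets past the first sanity check, while lightly corrupted valid input exercises the parsing code deep inside the application.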
Michael Sutton and Adam Greene gave an interesting session at Blackhat USA 2005 about fuzzing.
You can read it at The Art of File Format Fuzzing. Ejovi Nuwere and Mikko Varpiola also gave an
interesting presentation on fuzzing the VoIP networking protocol, available at The Art of SIP
Fuzzing and Vulnerabilities Found in VoIP.
Starting a Security Push
Security pushes to date have been gated by code quantity. Teams are strongly encouraged to
conduct security code reviews throughout the development process, once the code is fairly stable,
because the quality of code reviews suffers when too many reviews are condensed into too short a
time period.
The rule of thumb is to determine critical code, using heuristics such as exposure to the Internet,
handling sensitive or personally identifiable information and so on, and mark that as priority one
code. That code must be reviewed during the push, and the push cannot be complete until that
code is reviewed. Assign an owner to every code file, and assign owners and priorities to the code
before the push begins.