History
By the mid-1960s, the growing popularity of online time-sharing computer systems, which made their resources accessible to users over communications lines, had created new concerns about system security. As the scholars Deborah Russell and G. T. Gangemi, Sr. explain, "the 1960s marked the true beginning of the age of computer security."[5] In June 1965, for example, several of the country's leading computer security experts held one of the first major conferences on system security, hosted by the government contractor System Development Corporation (SDC). During the conference, it was noted that one SDC employee had been able to easily undermine the various system safeguards added to SDC's AN/FSQ-32 time-sharing computer system. Hoping that further study of system security would prove useful, the attendees requested "studies to be conducted in such areas as breaking security protection in the time-shared system." In other words, the conference participants initiated one of the first formal requests to use computer penetration as a tool for studying system security.[6]
At the Spring 1967 Joint Computer Conference, many of the country's leading computer specialists met again to discuss their concerns about system security. During this conference, the computer security experts Willis Ware, Harold Petersen, and Rein Turn, all of the RAND Corporation, and Bernard Peters of the National Security Agency (NSA) used the phrase "penetration" to describe an attack against a computer system. In a paper, Ware referred to the military's remotely accessible time-sharing systems, warning that "deliberate attempts to penetrate such computer systems must be anticipated." His colleagues Petersen and Turn shared the same concerns, observing that online communication systems "are vulnerable to threats to privacy," including "deliberate penetration." Bernard Peters of the NSA made the same point, insisting that computer input and output "could provide large amounts of information to a penetrating program." During the conference, computer penetration was formally identified as a major threat to online computer systems.[7]
The threat posed by computer penetration was next outlined in a major report organized by the United States Department of Defense (DoD) in late 1967. Essentially, DoD officials turned to Willis Ware to lead a task force of experts from NSA, CIA, DoD, academia, and industry to formally assess the security of time-sharing computer systems. By relying on many of the papers that had been presented during the Spring 1967 Joint Computer Conference, the task force largely confirmed the threat to system security posed by computer penetration. Although Ware's report was initially classified, many of the country's leading computer experts quickly identified the study as the definitive document on computer security.[8] Jeffrey R. Yost of the Charles Babbage Institute has more recently described the Ware report as "by far the most important and thorough study on technical and operational issues regarding secure computing systems of its time period."[9] In effect, the Ware report reaffirmed the major threat posed by computer penetration to the new online time-sharing computer systems.
To get a better understanding of system weaknesses, the federal government and its contractors soon began organizing teams of penetrators, known as tiger teams, to use computer penetration as a means of testing system security. Deborah Russell and G. T. Gangemi, Sr. stated that during the 1970s "'tiger teams' first emerged on the computer scene. Tiger teams were government and industry sponsored teams of crackers who attempted to break down the defenses of computer systems in an effort to uncover, and eventually patch, security holes."[10] One of the leading scholars on the history of computer security, Donald MacKenzie, similarly points out that "RAND had done some penetration studies (experiments in circumventing computer security controls) of early time-sharing systems on behalf of the government."[11] Jeffrey R. Yost of the Charles Babbage Institute, in his own work on the history of computer security, also acknowledges that both the RAND Corporation and the SDC had "engaged in some of the first so-called 'penetration studies' to try to infiltrate time-sharing systems in order to test their vulnerability."[12] In virtually all of these early studies, the tiger teams succeeded in breaking into their targeted computer systems, as the country's time-sharing systems had very poor defenses.
Of the earliest tiger team actions, the efforts at the RAND Corporation demonstrated the usefulness of penetration as a tool for assessing system security. At the time, one RAND analyst noted that the tests had "demonstrated the practicality of system-penetration as a tool for evaluating the effectiveness and adequacy of implemented data security safe-guards." In addition, a number of the RAND analysts insisted that the penetration test exercises all offered several benefits that justified its continued use. As they noted in one paper, "a penetrator seems to develop a diabolical frame of mind in his search for operating system weaknesses and incompleteness, which is difficult to emulate." For these reasons and others, many analysts at RAND recommended the continued study of penetration techniques for their usefulness in assessing system security.[13]
Perhaps the leading computer penetration expert during these formative years was James P. Anderson, who had worked with the NSA, RAND, and other government agencies to study system security. In early 1971, the U.S. Air Force contracted with Anderson's private company to study the security of its time-sharing system at the Pentagon. In his study, Anderson outlined a number of the major factors involved in computer penetration. The general attack sequence, as Anderson described it, involved a number of steps: "1. Find an exploitable vulnerability. 2. Design an attack around it. 3. Test the attack. 4. Seize a line in use... 5. Enter the attack. 6. Exploit the entry for information recovery." Over time, Anderson's description of the general steps involved in computer penetration would help guide many other security experts, who continued to rely on this technique to assess the security of time-sharing computer systems.[14]
In the following years, the use of computer penetration as a tool for security assessment would only become more refined and sophisticated. In the early 1980s, the journalist William Broad briefly summarized the ongoing efforts of tiger teams to assess system security. As Broad reported, the DoD-sponsored report by Willis Ware had "showed how spies could actively penetrate computers, steal or copy electronic files and subvert the devices that normally guard top-secret information. The study touched off more than a decade of quiet activity by elite groups of computer scientists working for the Government who tried to break into sensitive computers. They succeeded in every attempt."[15] While these various studies may have suggested that computer security in the U.S. remained a major problem, the scholar Edward Hunt has more recently made a broader point about the extensive study of computer penetration as a security tool. As Hunt suggests in a recent paper on the history of penetration testing, the defense establishment ultimately "created many of the tools used in modern day cyberwarfare," as it carefully defined and researched the many ways in which computer penetrators could hack into targeted systems.[16]
Standards and certification
The Information Assurance Certification Review Board (IACRB) manages a penetration testing certification known as the Certified Penetration Tester (CPT). The CPT requires that the exam candidate pass a traditional multiple-choice exam, as well as a practical exam that requires the candidate to perform a penetration test against servers in a virtual machine environment.[17]
Tools
Specialized OS distributions
There are several operating system distributions geared towards performing penetration testing.[18] A distribution typically contains a pre-packaged and pre-configured set of tools, so the penetration tester does not have to hunt down a tool when it is required. Acquiring tools mid-test can lead to complications such as compile errors, dependency issues, and configuration errors, or may simply not be practical in the tester's context.
Popular examples are Kali Linux (replacing BackTrack as of December 2012) based on Debian Linux, Pentoo based on Gentoo Linux and WHAX based on Slackware Linux. There are many other specialized operating systems for penetration testing, each more or less dedicated to a specific field of penetration testing.
Software frameworks
Automated testing tools
The process of penetration testing may be simplified into two parts:
- Discovering a combination of legal operations that will let the tester execute an illegal operation: unescaped SQL commands, unchanged salts in source-visible projects, human relationships, or the use of old hash or cryptographic functions.
  - A single flaw may not be enough to enable a critically serious exploit. Leveraging multiple known flaws and shaping the payload so that it appears to be a valid operation is almost always required. Metasploit provides a Ruby library for common tasks and maintains a database of known exploits.
  - Under budget and time constraints, fuzzing is a common technique for discovering vulnerabilities. It aims to trigger an unhandled error through random input, which lets the tester exercise less-often-used code paths; well-trodden code paths have usually been rid of errors. Errors are useful because they either expose more information, such as an HTTP server crashing with a full traceback, or are directly usable, such as a buffer overflow. To see the practicality of the technique, imagine a website with 100 text input boxes, a few of which are vulnerable to SQL injection on certain strings. Submitting random strings to those boxes for a while will hopefully hit a vulnerable code path, and the error shows itself as a broken, half-rendered HTML page caused by a SQL error. In this case, only the text boxes are treated as input streams, but software systems have many other possible input streams, such as cookie and session data, uploaded file streams, RPC channels, and memory, and errors can occur in any of them. The goal is first to obtain an unhandled error, and second to form a theory about the nature of the flaw based on the failing test case. The tester then writes an automated tool to test the theory until it is confirmed. After that, with luck, it becomes obvious how to package the payload so that its execution is triggered. If this is not viable, one can hope that another error produced by the fuzzer yields more fruit. Using a fuzzer saves time that would otherwise be wasted checking adequate code paths where exploits are unlikely to occur.
- Specifying the illegal operation, known as the payload in Metasploit terminology: examples include a remote mouse controller, webcam peeker, ad pop-upper, botnet drone, or password hash stealer. Refer to the Metasploit payload list for more examples.
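The fuzzing workflow described above can be sketched in a few lines. The following is a minimal, self-contained illustration rather than a real scanner: `handle_query` is a hypothetical stand-in for a web form handler that fails to escape quotes, and the fuzzer simply records which random inputs trigger unhandled errors.

```python
import random
import string

def handle_query(user_input: str) -> str:
    """Hypothetical stand-in for a request handler backed by SQL.

    It mimics a server that fails to escape single quotes: an unescaped
    quote breaks the SQL string literal and surfaces as an unhandled
    error (the half-rendered error page case described above).
    """
    if "'" in user_input:
        raise RuntimeError("SQL syntax error near: " + user_input)
    return "<html>OK</html>"

def fuzz(handler, trials=1000, seed=0):
    """Throw random strings at the handler; collect inputs that crash it.

    Each crashing input is a failed test case the tester can use to
    form a theory about the underlying flaw.
    """
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    alphabet = string.ascii_letters + string.digits + string.punctuation
    failures = []
    for _ in range(trials):
        candidate = "".join(
            rng.choice(alphabet) for _ in range(rng.randint(1, 12))
        )
        try:
            handler(candidate)
        except Exception as exc:
            failures.append((candidate, str(exc)))
    return failures

failures = fuzz(handle_query)
print(f"found {len(failures)} crashing inputs out of 1000 trials")
```

Inspecting `failures` would show that every crashing input contains a single quote, which is the kind of pattern that lets a tester move from "random input causes errors" to a concrete flaw theory (unescaped SQL).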
Some companies maintain large databases of known exploits and provide products that automatically test whether target systems are vulnerable.
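One common form such a product takes is matching a service banner against a database of known-vulnerable versions. The sketch below is illustrative only: the database contents, product name, and banner format are entirely invented, and real products use far larger, curated databases keyed by standard vulnerability identifiers.

```python
# Entirely invented example database: maps (product, version) pairs to
# descriptions of known flaws. Real scanners use curated vulnerability
# feeds rather than a hard-coded dictionary like this.
KNOWN_VULNS = {
    ("exampled", "1.0"): ["unauthenticated remote code execution (invented)"],
    ("exampled", "1.1"): ["information disclosure via crafted request (invented)"],
}

def check_banner(banner: str) -> list:
    """Parse a 'product/version' service banner and report known flaws.

    Returns an empty list when the observed version has no entry in the
    database (i.e. no known vulnerability, as far as this database goes).
    """
    product, _, version = banner.partition("/")
    return KNOWN_VULNS.get((product, version), [])

print(check_banner("exampled/1.0"))  # matches a database entry
print(check_banner("exampled/9.9"))  # unknown version: nothing reported
```

The design choice worth noting is that version matching only reports *known* issues; it says nothing about unpatched flaws absent from the database, which is why such products complement rather than replace manual penetration testing.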
See also
Notes
- "Penetration Testing". O'Reilly Media. Retrieved 16 January 2014.
- "Penetration Testing: Assessing Your Overall Security Before Attackers Do". SANS Institute. Retrieved 16 January 2014.
- "Penetration test". Network Security Services. Retrieved 16 April 2012.
- "Corporate IT Security Courses". eLearnSecurity. 16 April 2012.
- Russell and Gangemi, Sr. (1991), p. 27
- Hunt (2012), pp. 7-8
- Hunt (2012), p. 8
- Hunt (2012), p. 8
- Yost (2007), p. 602
- Russell and Gangemi, Sr. (1991), p. 29
- MacKenzie (2001), p. 156
- Yost (2007), pp. 601-602
- Hunt (2012), p. 9
- Hunt (2012), p. 9
- Broad, William J. (September 25, 1983). "Computer Security Worries Military Experts", New York Times
- Hunt (2012), p. 5
- "CWAPT - CERTIFIED PENETRATION TESTER". IACRB. Retrieved 17 January 2012.
- Faircloth, Jeremy (2011). "1". Penetration Tester's Open Source Toolkit, Third Edition (Third ed.). Elsevier. ISBN 1597496278.[need quotation to verify]
References
- Hunt, Edward (2012). "US Government Computer Penetration Programs and the Implications for Cyberwar", IEEE Annals of the History of Computing 34(3)
- Long, Johnny (2007). Google Hacking for Penetration Testers, Elsevier
- MacKenzie, Donald (2001). Mechanizing Proof: Computing, Risk, and Trust. The MIT Press
- MacKenzie, Donald and Garrell Pottinger (1997). "Mathematics, Technology, and Trust: Formal Verification, Computer Security, and the U.S. Military", IEEE Annals of the History of Computing 19(3)
- McClure, Stuart (2009). Hacking Exposed: Network Security Secrets and Solutions, McGraw-Hill
- Russell, Deborah and G. T. Gangemi, Sr. (1991). Computer Security Basics. O'Reilly Media
- Yost, Jeffrey R. (2007). "A History of Computer Security Standards," in The History of Information Security: A Comprehensive Handbook, Elsevier
External links
- List of Network Penetration Testing software, Mosaic Security Research