Operations Security (OPSEC) is the process by which one identifies information critical to the accomplishment of a mission or the function of an organization and applies appropriate safeguards to that information after considering the nature and capabilities of adversarial systems. Critical information is the “specific facts about friendly capabilities, activities, limitations (including vulnerabilities), and intentions needed by adversaries for them to plan and act effectively so as to degrade friendly mission accomplishment” (1).
Formally, OPSEC is an iterative, five-step process that comprises identifying critical information, analyzing the threat, analyzing the vulnerabilities, assessing risks and applying countermeasures (1); implied in the process is the need to periodically revisit existing countermeasures to gauge effectiveness and ensure proper and continued application. OPSEC is not, in itself, a security discipline. Rather, as the name implies, it is an operations discipline that may be applied to any venture, task or effort. OPSEC can also be described as information risk management (2; 3/p.14), drawing a clear parallel to Operational Risk Management (ORM).
The implementation of an OPSEC program is the responsibility of an organization’s senior leadership or, in a military organization, command. While specific requirements are generally delegated to lower levels, the senior leadership must appropriately resource, fund and champion the program in order for it to be effective. Furthermore, it is the leadership that is entrusted with the responsibility of weighing vulnerabilities against countermeasure cost and accepting any remaining risk on behalf of the organization.
This information paper will explore the role that OPSEC plays in the realm of Information Technology, and how traditional security disciplines may be leveraged to protect an organization’s critical information from the threats that are identified during the process. It will also report on the OPSEC challenges to Information Technology and discuss the threats, vulnerabilities and proposed countermeasures that may be employed by any organization in order to reduce its adversary’s ability to kill, counter or clone its mission.
OPSEC’S Role in Information Technology
Information is the lifeblood of most ventures, organizations or missions, and may take the form of trade secrets, military plans, secret recipes, strategies or other essential elements. It follows that certain information must be protected in order to prevent an organization’s adversary from developing systems or mechanisms for reducing the effectiveness of friendly capabilities (4). The domain of Information Technology is well-equipped to protect information that traverses networks or exists in digital form: firewalls and intrusion detection systems are in place to prevent unauthorized users from gaining access to the network; antivirus software detects and removes malicious software; encryption ensures that data is protected both “at rest” and in transit; and standardization programs exist to help ensure a uniform approach to security (5/p442).
The Achilles heel of the aforementioned technologies, however, is that they are only as good as their programming or signatures, and they cannot overcome that most fundamental of security concerns – the human element. In order to be effective, technology must be configured to defend against identified threats or, in the case of a heuristics-based system, certain suspect behaviors. Without understanding the threat, including its capabilities and intent, the Information Technology specialist is unable to effectively leverage technologies to help protect the critical information of an organization. This is most noticeable during the research, development, test and evaluation (RDT&E) activities of an organization, which are themselves often dependent on Information Technology assets such as simulations, electronic records and confidential communications. The loss of RDT&E data could result in an adversary developing strategies to counter the new technology or clone the capability, resulting in myriad wide-ranging losses.
OPSEC, therefore, is a critical component of an organization’s Information Technology protection strategy and must be nested in the existing security disciplines that protect an organization’s information. Emissions Security (EMSEC) may protect the signals emanating from a controlled area, Physical Security (PHYSEC) may help to secure a facility from unauthorized access and Information Security may help to protect data at-rest and in transit, but OPSEC supports each from an operations perspective.
The OPSEC Survey and the Information Technology Sector
An OPSEC survey is “a method to determine if there is adequate protection of critical information during planning, preparation, execution, and post-execution phases of any operation or activity. It analyzes all associated functions to identify sources of information, what they disclose, and what can be derived from the information” (1). This is a systematic analytical process, by which one may view their organization from the perspective of their adversary; it is this adversarial perspective that makes OPSEC unique as a discipline and allows the analyst to consider and protect against threat characteristics that may not have been otherwise identified.
There are two broad types of OPSEC surveys, each of which considers Information Technology as both an enabler and a consideration within the scope of the survey itself (1). The first type of survey is restricted to the specific organization being evaluated and uses the resources of that organization; it may be referred to as an in-house survey or a command survey, depending on the environment. The second type, called a formal survey, includes activities that extend beyond the specific organization being considered. Both formal and in-house surveys comprise three basic conceptual stages: the planning stage, the survey stage, and the analysis and reporting stage.
The planning stage of an OPSEC survey establishes the scope and identifies the resources required for successful completion of the event. During this phase the survey team is identified and ensures familiarity with the procedural requirements; this familiarity is especially critical for those team members selected for their specific subject matter expertise, such as Information Technologists chosen for their ability to evaluate databases or properly analyze source code. During this stage, concurrence is elicited from senior leadership, who may then be counted upon to support both the goals and the execution of the survey. The survey stage, the actual execution of the survey itself, is the stage in which information is gathered but not necessarily processed. By its nature, the survey frequently must compete with operational requirements, particularly manpower, a concern that is ideally mitigated by the previous buy-in from senior leadership. In keeping with the spirit of OPSEC, that is, viewing the organization from an adversarial perspective, the survey team must remain flexible and be able to adapt to observed phenomena that may not have been previously considered. In this sense, an OPSEC survey is as much an art as it is a science. Finally, the analysis and reporting stage involves the correlation of information and its presentation, which may take the form of a compiled report, a briefing, or both (1).
Characteristics of the Survey
Each organization’s OPSEC program is as unique as its critical information, threats and vulnerabilities. There are, however, common elements that must be considered in order to conduct the survey effectively. Among those common elements is the role of Information Technology systems, which may be both a target for evaluation and a means by which survey information is collected, collated and disseminated. Because of the sensitive nature of the survey’s data, the computer systems and network communications assets involved must themselves be protected and controlled, a fitting example of the interdependent relationship that OPSEC has with traditional security disciplines; such protection counters the adversary’s collection capability, which becomes a greater concern once data packets leave the organization’s control (e.g., e-mail). When the Information Technology systems are themselves the target of evaluation, specific emphasis must be placed on those systems and their supporting infrastructure, which generally necessitates appropriate subject matter expertise in order to properly consider the technological factors from the adversary’s perspective. This is especially true for the Information Technology sector, which relies upon systems and infrastructure for business operations, and for military organizations, which rely upon systems and infrastructure to maintain critical intelligence and Information Operations (IO) superiority on the battlefield. Areas that are likely to expose vital information to an adversary are noted and addressed immediately, which means that those tasked with developing and implementing countermeasures must have accurate insight into the strategic capabilities of the adversaries and their potential for accessing critical information (4).
An effective OPSEC survey is not possible without first accurately and completely identifying an organization’s Critical Information. According to Daryl Haegley (7/p5), a lack of defined Critical Information may result in a failure to capture, and thus address, all organizational vulnerabilities. Assuming that the organization’s Critical Information is identified, the OPSEC survey may proceed with respect to the unique factors that define each survey: the nature of the organization, the points that the survey emphasizes and the external factors that influence the organization’s environment. The last point is especially concerning to the OPSEC professional, as multiple factors may affect an organization yet lie outside of its control (5).
OPSEC Challenges to the Information Technology Sector
The Information Technology sector is a truly global industry that is among the largest in the world. As a whole, the sector has increased revenue by 4% annually over the last decade, adding up to over $550 billion in 2010. The industry increased research and development expenditures to $35 billion in the same year, an increase of 7% over the previous year. There are more than 100,000 software and Information Technology service companies in the United States alone, with the number worldwide totaling in the millions, providing or servicing billions of computers and mobile devices (8). Each of those billions of devices has potential technical vulnerabilities that may be exploited, resulting in a threat that is an amalgamation of technical and operational vulnerabilities (9).
Identification of Critical Information (CI)
Critical Information comprises the specific facts about friendly activities, capabilities, limitations and intentions that are required by adversarial elements in order to counter, clone or degrade the mission or organizational function. Like OPSEC in general, Critical Information is considered from the viewpoint of the adversary: it is the information that the adversary would attempt to obtain, correctly analyze and act upon in order to achieve its objectives. It can be determined by first identifying the Essential Elements of Friendly Information, or EEFI. The EEFI are the questions that the adversary is likely to ask in order to pursue its objectives; the answers to the EEFI are the Critical Information (1).
For example, consider a hypothetical financial corporation from the viewpoint of foreign hackers with ties to Al Qaeda. Their objective, in this scenario, is both to obtain funds to support traditional terrorist activities and to send a political message by defacing the corporation’s public-facing website. The attackers will first ask themselves several questions (the EEFI) that will likely include, but are not limited to:
What Internet Protocol (IP) range is registered to the company?
What is the web URL?
What security technologies does the company employ, such as firewall or router models or Intrusion Detection Systems?
Who has elevated privileges on or remote access to the network, and how can we contact them?
What are the existing technical vulnerabilities? Passwords? Response processes?
Considering these Essential Elements of Friendly Information will allow the OPSEC officer to develop the Critical Information List (CIL). Note that not all of the answers will result in CI, as some of the resulting information cannot be reasonably protected. The IP range is a necessary piece of information for the attacker; however, the information is published in an online directory that is publicly available. The Web URL, or Web address, is another example of information that is needed by the adversary but must be publicly available in order to support business functions and cannot be reasonably restricted without impacting operations.
The remaining questions give an indication of the CI that a technology-based organization, or even an organization that simply utilizes technology to achieve its aims, will need to consider. The security technologies in use can be revealed through purchase orders, press releases or technical personnel job descriptions (in which desired proficiencies reveal the technologies employed). Elevated privileges and remote access, as well as response processes, may be revealed through appointment memos, internal Standard Operating Procedures (SOPs), recall rosters and other similar pieces of typically unclassified CI. Vulnerabilities may be found in reports or scan data, while passwords can be found on official documents or discarded sticky-notes. The list goes on, but a working CIL can be assembled simply by asking specific questions from an adversarial perspective. Note that a CIL must be specific enough to identify particular items of information, yet broad enough to allow all CI to be captured. For example, not all purchase orders may be CI, but purchase orders showing the acquisition of specific security technologies to support a mission or exercise likely would be.
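The EEFI-to-CIL reasoning above can be kept explicit by recording, for each adversary question, whether the answering information can reasonably be protected. The sketch below uses the hypothetical financial-corporation questions from this scenario; it is an illustrative bookkeeping structure, not a prescribed CIL format.

```python
# Each EEFI entry pairs the adversary's likely question with a judgment on
# whether the answering information can reasonably be protected. Only
# protectable answers become Critical Information on the CIL; the IP range
# and web URL are public by necessity and so are excluded.
eefi = [
    {"question": "What IP range is registered to the company?", "protectable": False},
    {"question": "What is the web URL?", "protectable": False},
    {"question": "What security technologies does the company employ?", "protectable": True},
    {"question": "Who has elevated privileges on or remote access to the network?", "protectable": True},
    {"question": "What are the existing vulnerabilities, passwords and response processes?", "protectable": True},
]

# The working CIL: answers the organization can and should safeguard.
cil = [entry["question"] for entry in eefi if entry["protectable"]]
print(len(cil))  # 3
```

Treating the CIL as data rather than prose also makes the "living document" requirement easier to honor: entries can be added, retired or reclassified as the mission and threat evolve.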
Information Technology must be represented in the OPSEC Working Group (OWG), especially when the CIL is being developed. In the case of a non-IT organization (e.g., a restaurant or clothing chain), this may be a subject matter expert for the IT department. A technology-heavy organization, such as an Information Technology consultancy, may have one or more subject matter experts for each type of technology (e.g., firewalls and intrusion detection, networking, desktop support, etc.). When properly educated as to the nature of Critical Information and CIL development, cognizant subject matter experts will be able to identify specific elements for inclusion that could otherwise have been missed. Ideally, this is completed during the initializing stage of a process, where OPSEC considerations may be included during planning (9). The CIL must be seen as a living document, subject to change as the organization’s mission, functions or tactics evolve. An ever-evolving threat must also be considered. As such, the CIL must be revised as required in order to remain relevant and valuable to the organization’s OPSEC program.
The OPSEC practitioner must understand the intent and capabilities of each potential threat in order to develop effective countermeasures. This allows for consideration as to how the adversary may collect Critical Information, which in turn defines the measures that may be taken to counter these capabilities. Although there are general threat categories that are common to multiple organizations (e.g. hackers, domestic terrorism, etc.) the specific threats to the given organization must be identified.
Generally, it is not the OPSEC practitioner who defines and analyzes the threat, but rather the security officer (in the military, the S2/G2 section). At this stage a threat assessment is created, upon which the subsequent steps will be based (10). In order to be considered a threat, an entity must have both the intent and the capability to act against the target organization; if the entity is not able to act (lacks the capability) or is unwilling to act (lacks the intent), it is not a threat and no resources will be expended to guard against it. Conversely, if the entity demonstrates both the intent and the capability to act, then countermeasures must be considered.
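The intent-and-capability test reduces to a simple conjunction, which can be sketched as follows. The entity names are hypothetical examples, not assessments of any real actor.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    has_intent: bool       # willing to act against the organization
    has_capability: bool   # able to act against the organization

    def is_threat(self) -> bool:
        # An entity is a threat only when intent AND capability coexist;
        # lacking either one, no resources are expended against it.
        return self.has_intent and self.has_capability

# Hypothetical candidate entities from a threat assessment
candidates = [
    Entity("hacktivist collective", has_intent=True, has_capability=True),
    Entity("disinterested vendor", has_intent=False, has_capability=True),
    Entity("hostile but unskilled actor", has_intent=True, has_capability=False),
]

threats = [e.name for e in candidates if e.is_threat()]
print(threats)  # ['hacktivist collective']
```

Only the first entity survives the test, mirroring the rule stated above: absence of either factor removes the entity from consideration.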
Common threats against the Information Technology sector include, but are not limited to:
Hackers: Individuals or groups motivated by profit, ideology or challenge. Hackers may have the ability to disrupt computing services, modify websites and obtain information. Gaining possession of an organization’s Critical Information allows an adversary to clone, counter or defeat friendly operations (12).
Competitors: Business competitors routinely exploit open source information (i.e., information that may be legally obtained from public sources) in order to effect a business advantage (3). It is also feasible that a competitor will use illegal methods to exploit an information system and obtain trade secrets.
Foreign Intelligence Services (FIS): Certain Information Technology implementations support national defense (either directly, in the case of military systems, or indirectly, in the case of defense contractors) and may be the target of Foreign Intelligence Services (1). It is reported that Chinese hackers have infiltrated a significant number of critical networks in Washington, DC, including government systems and those belonging to defense contractors (11).
This list, of course, is not exhaustive and merely represents a few of the many threats that an Information Technology-based organization or department may face.
Critical Information is not always directly revealed; it may instead be discovered through the presence of observables. An observable is distinct from an indicator, which is data that can be pieced together and interpreted to reach conclusions or estimates with varying degrees of certainty. Observables are actions that may convey indicators but must nonetheless be carried out in order to plan or execute activities (1). Given the specialized nature of certain Information Technology implementations, organizations frequently partner with one another in order to share resources. This necessary partnership, when conducted publicly, may reveal friendly intentions prior to any official announcement. For example, if an Information Technology company were to begin partnering with a company that manufactures capacitive touch screens, it is possible to infer that the firm may be entering the mobile device market. A military example: if the J6 is working extensively with a combat unit prior to deployment, it may indicate that the unit is deploying with a new system or IT capability.
Adversaries employ multiple means by which they hope to collect CI (22). Those means include:
Human Intelligence (HUMINT): Collecting information from human sources either overtly (via military personnel, diplomats, members of foreign delegations, etc.) or clandestinely (via spies, internal sources, etc.). A disgruntled employee providing information on network vulnerabilities to a competitor would be an example of HUMINT.
Signals Intelligence (SIGINT): Intercepting communications or non-communication transmissions in order to obtain sensitive information. Capturing telemetry data for a prototype airframe in order to determine capabilities and “war-driving” (looking for open WiFi connections to connect to networks) are examples of exploiting SIGINT.
Measurements and Signatures Intelligence (MASINT): Obtaining scientific and technical intelligence by quantitatively and qualitatively analyzing data obtained from technical sensors in order to identify the source or the sender. MASINT is generally not considered a threat to Information Technology, but it can reveal capabilities that an Information Security program would otherwise protect, such as the capabilities of a new radar system.
Imagery Intelligence (IMINT): Viewing images obtained from multiple sources and designed to be displayed in print or on electronic display devices. The images may be obtained from photography, sensors, lasers or other optics, and provide visual ground-truth data that can be correlated with other forms of intelligence. Google Earth collects satellite imagery from multiple sources and allows users to upload their own images from ground level; the program also provides street-view and high-resolution imagery in many areas. This allows adversaries to view any images captured by satellites or the street-view vehicle, or added by users.
Open-Source Intelligence (OSINT): Collecting information that is available to the public, whether free or purchased. OSINT represents a significant source of intelligence information available to adversaries, and is very difficult to control once made available. Examples of OSINT include product release statements, deployment announcements, embedded reporters, military and corporate web pages, critical infrastructure data, personal information, conference speeches, professional journals, etc., that may reveal the capabilities of Information Technology or security assets.
Computer Intrusion for Collection Operations: Engaging computer hackers to obtain proprietary data or sensitive government information. The Interagency OPSEC Support Staff notes that this type of intelligence collection was observed as early as 1989, when the KGB sponsored a hacking collective that was able to access at least 28 government computer systems. Increasing intrusion efforts from China demonstrate that this threat is not decreasing.
All Source Intelligence: While not an intelligence discipline in itself, All Source Intelligence is the employment of all collection activities in order to examine every facet of an intelligence target. Engaging in All Source Intelligence allows an adversarial analyst to reinforce information and confirm or deny assumptions. The thwarting of a terrorist act, now seen with increasing regularity, is typically the result of all-source intelligence used to correlate, understand and predict future actions.
Often, adversaries will simultaneously exploit multiple means of data collection. For example, when a soldier posts pictures (which may include GPS location data) of IED damage from a mission, the adversary has potential access to both OSINT and IMINT.
The nature of Information Technology means that vulnerabilities may exist both in the processes employed and in the technology itself. Only by identifying the vulnerabilities can appropriate countermeasures be developed and implemented (9). Because of the diverse and complex nature of technology, subject matter experts are required to evaluate systems and advise the OPSEC practitioner, who will combine the technical findings with the organizational ones.
Vulnerabilities must be considered from a holistic point of view; it may be tempting to consider only technological vulnerabilities when evaluating an Information Technology implementation while ignoring the other security considerations upon which it depends. For example, a computer containing sensitive information may be completely inaccessible from any public network, but if physical security-related vulnerabilities remain, an adversary may simply gain direct access to the data contained within. Similarly, the same information may be obtained if personnel can be manipulated into divulging it on the phone or in person.
The human factor represents the greatest risk to Critical Information, as it is often the most vulnerable to exploitation with little effort by the adversary. Social Networking Sites like Facebook and Twitter allow for nearly instantaneous worldwide communication, which can result in a massive flow of information from private parties rather than organizational spokespersons or media outlets. This was made very apparent when Twitter user @ReallyVirtual unknowingly tweeted specifics of the 2011 Osama Bin Laden raid before it was officially announced by the Department of Defense or the news media (22); this single Twitter user was the only one documenting the event as it occurred. The tweets, posts and updates affiliated with Social Networks pass unfiltered through servers that are outside the control of the poster and are often archived; this means that Social Network updates can remain readable long after posting, even when the poster attempts to remove them.
Another human-related vulnerability is connected to a person’s potential susceptibility to Social Engineering, which is the act of manipulating people into performing desired actions or divulging confidential information using interpersonal methods. This approach is often preferable for the hacker, who can save a great deal of time and effort by simply asking for information rather than compromising a secure network. In a 2011 interview with Salon Magazine, Kevin Mitnick, the former hacker turned security consultant, noted that “Both social engineering and technical attacks played a big part in what I was able to do. It was a hybrid. I used social engineering when it was appropriate, and exploited technical vulnerabilities when it was appropriate.”
Vulnerabilities may be introduced from sources outside the organization. In 1998, the Tokyo police department contracted for software to track police patrol cars. The winning contractor was affiliated with Aum Shinri Kyo, the “Supreme Truth Sect,” which was responsible for the 1995 nerve gas attack on the Tokyo subway system. This compromised software allowed the sect to receive tracking data for 115 police vehicles (6). This is an example of the sort of vulnerability that may be introduced with new technologies, particularly when obtained from external sources.
Each vulnerability must be paired with at least a tentative OPSEC measure which may reduce the impact or likelihood of exploitation and inadvertent release of Critical Information. Operations Security measures may be grouped into three broad categories (1):
Action Controls are measures implemented to control friendly activities in order to reduce indicators or vulnerabilities. This may include requirements to encrypt data, review website content, change behaviors when working in public (to address concerns related to shoulder surfing) or other procedures.
Countermeasures are designed to disrupt adversarial collection capabilities or to prevent the recognition of indicators. Countermeasures include technical protection measures, such as the employment of Intrusion Detection Systems or systems that protect against compromising emanations.
Counter-analysis measures are designed to frustrate the efforts of adversarial analysts to correctly interpret indicators. A common counter-analysis measure is deception, in which misleading information is presented to the adversary in order to force an inaccurate conclusion.
OPSEC measures that may be applied to Information Technology vulnerabilities may include, but are not limited to, the use of deceptive data transmissions, conducting security inspections, screening new employees (particularly those with elevated access to the network), deceptive targets (i.e., honeypots) and destroying vulnerability scan outputs rather than throwing them in the trash.
Assessing the risk allows an organization’s leadership to make decisions as to which OPSEC measures to implement based on the resources available. Risk is the likelihood or probability that an event will occur, considered against the consequences of occurrence (1). After each vulnerability is assessed separately, the highest risks are addressed first while few resources are allotted to low risks. Risk may be represented on a simple two-axis graph with four quadrants, ranging from low-impact/low-probability to high-impact/high-probability.
The IOSS OPSEC-2500 course explains that risk is calculated as the probability of compromise (P) multiplied by the impact (I) of that compromise. As probability is the product of threat and vulnerability, risk becomes (Threat x Vulnerability) x Impact.
| Threat | Threat Value | Vulnerability | Probability (T x V) | Impact | Risk (P x I) |
| ------ | ------------ | ------------- | ------------------- | ------ | ------------ |
| OSINT  | High (.80)   | Medium (.60)  | Medium (.48)        | High (90) | Medium (43) |
| MASINT | Low (0)      | Low (.25)     | Low (0)             | High (80) | Low (0)     |
Note that the threat and vulnerability values are expressed as probabilities while impact is a whole number. Also note that, due to the properties of multiplication, a value of 0 in any cell renders the risk 0; this accords with logic, since if there is zero probability of compromise or zero impact from a threat, then there is no actual risk.
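The IOSS formulation can be sketched in a short script. The numeric values below are the illustrative figures from the table above, not authoritative assessments.

```python
def opsec_risk(threat: float, vulnerability: float, impact: float) -> float:
    """Risk = (Threat x Vulnerability) x Impact, per the IOSS OPSEC-2500 model.

    threat and vulnerability are probabilities in [0, 1]; impact is a
    whole-number severity score. A zero in any factor zeroes the risk.
    """
    probability = threat * vulnerability  # P = T x V
    return probability * impact           # Risk = P x I

# Illustrative values from the table above
osint_risk = opsec_risk(0.80, 0.60, 90)   # 0.48 * 90 = 43.2 -> "Medium"
masint_risk = opsec_risk(0.0, 0.25, 80)   # zero threat -> zero risk

print(round(osint_risk), masint_risk)
```

Rounding the OSINT result reproduces the table’s Medium (43) entry, while the zero MASINT threat value zeroes that row’s risk regardless of its high impact.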
Application of Appropriate OPSEC measures and countermeasures
Those vulnerabilities that pose an unacceptable level of risk will be selected to have countermeasures applied in order to reduce the threat value, vulnerability, impact or any combination thereof. Ultimately, the applied OPSEC measures are designed to increase the effectiveness of friendly capabilities (10). Common countermeasures related to Information Technology include technical INFOSEC measures (such as security patches or configurations), procedural measures (such as shredding important documents) or other tactics unique to the specific threats and vulnerabilities previously identified. While countermeasures may be recommended by the OPSEC analyst, it remains the responsibility of management to approve and allocate resources to them.
Implied in the process of applying countermeasures is periodically revalidating them in order to ensure effectiveness and necessity.
Characteristics of the Information Technology Sector
There is a tendency for some technologically-inclined individuals to believe that all information should be free, as evidenced by the rise in avenues by which information may be shared. Peer-to-peer sharing systems, such as torrents and file sharing websites, allow individuals to share files worldwide, while sites like WikiLeaks (which remains active in various forms) encourage the free exchange of sensitive information. The Onion Routing network allows information to be shared across an encrypted network that “overlays” the public internet in such a way that attribution is generally not possible. Never before has information been so easily shared, which represents both an opportunity for collaboration and a significant risk to organizations.
Due in part to the widespread availability of information online, 80% of actionable intelligence may be found in open, publicly available sources, including the Internet (19). Due to the ease of data retention online (made possible by mirror sites, internet archives and file storage sites), sensitive data, once released, may be very difficult to remove from public view. An example of this phenomenon is found in the 1995 case of Religious Technology Center v. Netcom On-Line Communication Services, Inc. A former Scientology minister posted copyrighted information and secret Scientology doctrines to the internet; the copyright holder, the Religious Technology Center, filed a lawsuit against the Internet Service Provider in an effort to remove the information from public view. The effort was unsuccessful, as the information had already spread past the possibility of reasonable control, prompting the court to rule that posted documents that become “generally known” lose trade secret protection (20). Despite the efforts of the Religious Technology Center to control the spread of its confidential information, that information is still easily found online. This is further complicated by the so-called Streisand Effect, in which attempting to remove information from the internet only serves to call attention to it.
While safeguarding information is critical, a balance must be sought between usability and security. A computer system that has had the keyboard, mouse and monitor removed before being encased in concrete is a very secure system, but not a very useful one. Conversely, an information system with no security controls in place will be very usable but highly vulnerable to exploitation. Therefore, any countermeasures that are put in place must consider both the function and the security requirements of the information system being addressed. This means that any network or computer will have remaining vulnerabilities that cannot be addressed by purely technical solutions, and personnel controls must therefore be put in place.
OPSEC Mitigating Strategies for the Information Technology Sector
As technical countermeasures alone cannot sufficiently protect against the threat to information systems, particularly given the requirement to balance security and functionality, a blend of technical controls and OPSEC measures must be applied to the missions and operations supported by Information Technology implementations. The following elements should be included as a part of a holistic approach to OPSEC within an organization:
- Utilize subject matter experts, knowledgeable in the technologies employed within the organization, in order to obtain accurate information and gain credibility for mitigating measures. Ensure that each broad technical category is represented in the organization’s OPSEC working group.
- Implement OPSEC training as part of the overall security program in order to highlight the connection between OPSEC and traditional security disciplines such as INFOSEC.
- Ensure that the CIL is disseminated to all levels in such a way that it is relevant to their functions. For example, the Information Technology division must be aware of the CI for which they are responsible.
- Ensure that public releases of information are reviewed for OPSEC impact prior to release. This is especially true for those releases that are distributed via the Internet, such as those on the organization’s website or through press releases.
- Ensure that technical solutions are nested within the overarching OPSEC program, and that any potential indicators caused by the implementation of technical countermeasures are considered. For example, it is beneficial to encrypt certain information that traverses the Internet. However, a sudden increase in encrypted traffic may itself indicate that a sensitive task or operation is underway.
- Consider outside organizations with which data is shared. Data sharing is growing increasingly common and vital for globally connected and interconnected organizations. Organizations outside of the sphere of the OPSEC Manager’s control may introduce risks that must be considered during the OPSEC survey.
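One of the measures above, the pre-release OPSEC review, can be partially automated. The sketch below, a minimal illustration rather than a fielded tool, scans a draft public release against a list of Critical Information List (CIL) terms and reports any matches for human review. The CIL terms and the draft text are hypothetical placeholders, not any real organization's critical information:

```python
import re

# Hypothetical CIL terms for illustration only; a real CIL would be
# produced during step one of the OPSEC process.
CIL_TERMS = [
    "project aurora",
    "deployment schedule",
    "vpn gateway address",
]

def flag_cil_matches(draft: str, terms=CIL_TERMS):
    """Return (term, line_number) pairs for every CIL term found in the draft.

    Matching is case-insensitive and literal; a real review would still
    require a human analyst, since paraphrased CI will not match keywords.
    """
    hits = []
    for lineno, line in enumerate(draft.lower().splitlines(), start=1):
        for term in terms:
            if re.search(re.escape(term), line):
                hits.append((term, lineno))
    return hits

draft = "Press release:\nThe deployment schedule moves to Q3.\n"
print(flag_cil_matches(draft))  # [('deployment schedule', 2)]
```

A keyword scan of this kind is only a first filter; it flags obvious disclosures cheaply but cannot recognize indicators formed by aggregating individually innocuous details, which is precisely why the human-led OPSEC survey remains necessary.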
This paper has presented an analysis of the OPSEC challenges to Information Technology. While the continued development of technology allows for unprecedented opportunities for collaboration and data sharing, those same characteristics introduce new and complex risks that must be considered by the OPSEC analyst. The nature of Information Technology represents a logical dichotomy, in which both the technology itself and the way it is utilized may introduce vulnerabilities to an organization, and each of those concepts must be fully explored.
Because of the specialized nature of certain Information Technology implementations, the OPSEC analyst often cannot fully understand the characteristics of the systems in place. In order to overcome this limitation, the analyst must draw upon the expertise of the organization’s subject matter experts to capture the required information. This, however, does not address the cases in which the human factor is the vulnerability; social engineering is a favored tool for hackers, who are often able to obtain Critical Information simply by asking for it, eliminating the need to hack into a network or system.
In his book Secrets and Lies, Bruce Schneier wrote, “If you think technology can solve your security problems, then you don’t understand the problems and you don’t understand the technology.” Expensive security solutions, such as firewalls or Intrusion Detection Systems, are helpful in remediating vulnerabilities, but they cannot identify or eliminate all possible risks to an organization’s Information Technology implementation. A strong, comprehensive OPSEC program can ensure that the traditional security programs are protecting against the actual threats while reducing risk to an organization by identifying countermeasures that would otherwise have been neglected.
1. United States Army. 2005. Operations Security (OPSEC): Army Regulation 530-1. Washington, DC.
2. United States Army, Range Commanders Council. 2011. Operations Security (OPSEC) Guide.
3. DeGenaro, Bill. 2005. A Case for Business Counterintelligence. Competitive Intelligence Magazine 8: 12-16.
4. Department of Defense. 2006. DoD Operations Security (OPSEC) Program-DoD Directive 5205.02. Washington D.C.
5. Davies, Howard, and Pun-Lee Lam. 2001. Managerial Economics: An Analysis of Business Issues. Harlow: Prentice Hall.
6. Associated Press. 2000. Cult Reportedly Tapped Classified Police Data, March 3. http://articles.chicagotribune.com.
7. Haegley, Daryl. 2012. Operations Security Professionals: Enabling Critical Information Identification, Vulnerability Analysis and Risk Management. The OPSEC Professional Society Newsletter 2: 1-10.
8. Select USA. n.d. The Software and Information Technology Services Industry in the United States. http://selectusa.commerce.gov. (accessed December 21, 2012).
9. Cichonski, Paul, Tom Millar, Tim Grance, and Karen Scarfone. 2012. Computer Security Incident Handling Guide. http://csrc.nist.gov/publications/drafts/800-61-rev2/draft-sp800-61rev2.pdf (accessed December 21, 2012).
10. Fisher, Patricia. n.d. Operations Security and Controls. http://www.cccure.org/Documents/HISM/655-661.html (accessed December 21, 2012).
11. Burn, Janice, and Maris Martinsons. 1997. Information technology and the challenge for Hong Kong. Hong Kong: Hong Kong University Press.
12. Joint Publication 3-05.1. 2001. Joint Tactics, Techniques, and Procedures for Joint Special Operations Task Force Operations. www.bits.de/NRANEU/others/jp-doctrine/jp3_05_1(01).pdf (accessed December 21, 2012).
13. Asa, Norman. 2012. Cyberattacks on Iran – Stuxnet and Flame. http://topics.nytimes.com/top/reference/timestopics/subjects/c/computer_malware/stuxnet/index.html (accessed December 21, 2012).
14. Al-Deen, Hana, and John Hendricks. 2011. Social Media: Usage and Impact. London: Lexington Books.
15. Langheinrich, Marc, and Gunter Karjoth. 2010. Social networking and the risk to companies and institutions. Information Security Technical Report, 15: 51-56.
16. Findlay, Michael. 2000. SOCJFCOM: Integrating SOF into Joint Task Forces. Special Warfare, 10-17.
17. Joint Center for Lessons Learning. 2003. Special Operations II. Quarterly Bulletin, 5: 1-35.
18. The Operations Security Professional's Association. 2009. State of OPSEC Survey. (accessed December 21, 2012).
19. Hulnick, A. S. 2008. “OSINT: Is It Really Intelligence?” Paper presented at the APSA 2008 Annual Meeting, Hynes Convention Center, Boston, Massachusetts, August 28. http://www.allacademic.com/meta/p281211_index.html (accessed June 22, 2012).
20. Religious Technology Center v. Netcom-Online Communication Services, Inc., 907 F. Supp. 1361 (N.D. Cal. 1995)
21. Jolie, O. n.d. One Twitter User Reports Live from Bin Laden Raid. http://mashable.com (accessed March 1, 2013).
22. Interagency OPSEC Support Staff. 1996. Intelligence Threat Handbook.