Neither the 2013 rule nor the 2015 and 2016 DFARS rules contain penalties for contractors whose information systems do not comply with the rules’ requirements. Nor does any version of the rules attach a penalty to experiencing a cyber incident, at least with respect to safeguarding CUI. In fact, the rules indicate that a cyber incident is not, by itself, evidence that a contractor failed to meet the requirements imposed by the rules.[EN-1]
All three iterations of the rule, the 2013, 2015, and 2016 versions, require contractors to report a “cyber incident”. There are no other reporting requirements, such as a requirement to report whether a contractor’s systems actually comply with the security standards.
Whether a contractor’s system meets the various security standards, however, is likely a very subjective determination. The security controls are broadly worded, and there simply are no hard and fast rules about how they are to be achieved or how compliance should be measured. In fact, the only thing that is likely measurable is whether or not the contractor has experienced a “cyber incident”.
A cyber incident is defined identically in all versions of the rules as “actions taken through the use of computer networks that result in an actual or potentially adverse effect on an information system and/or the information residing therein.” Remarkably, this definition is not very helpful when determining a contractor’s reporting requirements. After all, what is an adverse effect, whether actual or potential?
The 2013 version of the rule provides two examples of a cyber incident. The first is “possible exfiltration, manipulation, or other loss or compromise of any unclassified controlled technical information resident on or transiting through Contractor’s, or its subcontractors’, unclassified information systems.”[EN-1] The second is “Any other activities not included in [the first example] that allow unauthorized access to the Contractor’s unclassified information system on which unclassified controlled technical information is resident on or transiting.”[EN-2] Thus, under the 2013 version, a cyber incident includes both a loss or compromise and a simple unauthorized access, even though no exfiltration, manipulation, loss, or compromise of the protected data may have occurred.
Interestingly, neither the 2015 nor the 2016 version contains any examples of a cyber incident. Rather, those versions simply provide the definition, which arguably could be limited to some kind of impairment or exposure to liability or damage. Thus, without a statement in the rules that a simple unauthorized access is a cyber incident, an unauthorized access alone arguably does not rise to the level of a compromise. In fact, if the system and its data were not impaired, it is arguable that the safeguards were effective and prevented any impairment, damage, or exposure to liability, and therefore any resulting cyber incident.
Some may want to argue that any unauthorized access results in a “possible exfiltration, manipulation, or other loss or compromise.” While it is possible that a review of an unauthorized access will be unable to confirm that there was no exfiltration, manipulation, or other loss, it is also possible that the review will conclude that there was none. In the latter situation, it is quite logical that the unauthorized access is not a cyber incident as defined in the 2015 or 2016 versions of the rule.
Once a cyber incident is discovered, contractors have 72 hours in which to report it. As with the other terms involving cyber incidents, there is no definition, illustration, or other explanation of the term “discovered” or of when the reporting clock is actually triggered. The issue is that an anomaly worthy of investigation could be detected on one day, but several more days may pass before it can be analyzed and classified as a cyber incident. Indeed, the 72-hour clock could expire before the analysis of the anomaly is complete. Thus, arguably, discovery is not achieved until it is realized that there actually has been a cyber incident.
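The ambiguity can be made concrete with a short sketch. In this hypothetical Python timeline (the dates are invented for illustration), the reporting deadline differs by three days depending on whether “discovery” means detecting the anomaly or classifying it as a cyber incident:

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # rapid-reporting window in the rules

def report_deadline(discovery: datetime) -> datetime:
    """Deadline assuming the clock starts at 'discovery'."""
    return discovery + REPORTING_WINDOW

# Hypothetical timeline: an anomaly is detected, but only classified
# as a cyber incident after three days of analysis.
detected = datetime(2016, 10, 3, 9, 0, tzinfo=timezone.utc)
classified = datetime(2016, 10, 6, 17, 0, tzinfo=timezone.utc)

# If "discovery" means detecting the anomaly, the window has already
# expired by the time analysis concludes; if it means classifying the
# event as a cyber incident, the contractor still has the full 72 hours.
print(classified > report_deadline(detected))   # True: window already expired
print(report_deadline(classified) - classified) # 3 days, 0:00:00 remaining
```

Nothing in the rules settles which reading is correct; the sketch simply shows how much turns on the choice.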
A contractor’s reporting requirements are not limited to simply reporting the incident. Contractors are expected to expend some effort analyzing the event to determine its nature and scope, as well as the functionality of any malware. A contractor or subcontractor that discovers and isolates malicious software in connection with a reported cyber incident must submit the malicious software in accordance with instructions provided by the Contracting Officer.
In addition to the analysis, contractors must preserve evidence of the incident in sufficient detail to permit subsequent analysis by other defense elements: “When a Contractor discovers a cyber incident has occurred, the Contractor shall preserve and protect images of all known affected information systems identified in paragraph (c)(1)(i) of this clause and all relevant monitoring/packet capture data for at least 90 days from the submission of the cyber incident report to allow DoD to request the media or decline interest.”
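The preservation window itself is simple arithmetic. A minimal sketch, assuming the clock starts on the date the report is submitted (the date below is invented), computes the earliest date the media could be discarded absent a DoD request:

```python
from datetime import date, timedelta

PRESERVATION_PERIOD = timedelta(days=90)  # minimum retention after the report

def preserve_until(report_submitted: date) -> date:
    """Earliest date the preserved images and packet-capture data may be
    discarded, if DoD has neither requested the media nor declined interest."""
    return report_submitted + PRESERVATION_PERIOD

print(preserve_until(date(2016, 10, 6)))  # 2017-01-04
```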
Both the 2013 version and the 2015 and 2016 versions describe specific data elements to be included in the contractor’s cyber incident report. The 2013 rule identifies 13 data elements, while the 2015 and 2016 versions identify 20. For the most part, these data elements are contractor, contract, and contact identifiers: names, dates, CAGE codes, DUNS numbers, phone numbers, e-mail addresses, and the like. Only a limited amount of data concerns the outcome and nature of the incident.
All rule versions require contractors, including subcontractors, to rapidly report cyber incidents to DoD at http://dibnet.dod.mil. Thus, subcontractors report not to their customers but directly to DoD.
For most contractors, there will likely be two distinct challenges in satisfying the reporting requirement. The first will be determining whether a cyber incident has actually occurred. The second will be the actual reporting.
Performing the necessary analysis is one area where contractors are likely to fall short. While most technology professionals will start their analysis with event logs, numerous other artifacts could shed light on the events under investigation. On a Windows system, for example, other sources of potentially useful information include the registry hives, browser history, link files, and jump lists, to mention a few. In addition, there could be network switch and router logs.
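As an illustration, a responder’s collection script might first enumerate which of these artifact locations are actually present. The paths below are common Windows defaults and are assumptions for the sketch; actual locations vary by Windows version and configuration:

```python
import os

# Typical Windows artifact locations a responder might collect beyond the
# event logs (paths are common defaults and may vary by system).
ARTIFACT_SOURCES = {
    "registry hives": [r"C:\Windows\System32\config\SYSTEM",
                       r"C:\Windows\System32\config\SOFTWARE"],
    "event logs":     [r"C:\Windows\System32\winevt\Logs"],
    "jump lists":     [r"%APPDATA%\Microsoft\Windows\Recent\AutomaticDestinations"],
    "link files":     [r"%APPDATA%\Microsoft\Windows\Recent"],
}

def existing_sources() -> dict:
    """Return only the artifact paths actually present on this host."""
    found = {}
    for kind, paths in ARTIFACT_SOURCES.items():
        hits = [p for p in (os.path.expandvars(p) for p in paths)
                if os.path.exists(p)]
        if hits:
            found[kind] = hits
    return found
```

A real collection effort would also pull browser history databases and, from the network side, switch and router logs, which live outside the host entirely.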
Another significant limitation could be whether these other sources retain sufficient history. Unless their storage sizes have been deliberately increased, sufficient capacity may not exist. It is not uncommon on network servers, for example, that the event logs are not large enough to have retained all of the data necessary to evaluate the history of the suspicious activity.
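A back-of-the-envelope calculation shows why default log caps fall short. The figures below, a log cap on the order of 20 MB, roughly 500 bytes per event, and roughly 50,000 events per day on a busy server, are illustrative assumptions rather than measured values:

```python
def retention_days(max_log_bytes: int, avg_event_bytes: int,
                   events_per_day: int) -> float:
    """Rough estimate of how many days of history a fixed-size,
    circular event log retains before old entries are overwritten."""
    return max_log_bytes / (avg_event_bytes * events_per_day)

# With a ~20 MB cap, ~500-byte events, and ~50,000 events/day, a busy
# server retains less than one day of history.
print(round(retention_days(20 * 1024**2, 500, 50_000), 2))  # 0.84
```

Under these assumptions, an intrusion investigated even a day after the fact may have already rolled out of the log.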
The second area where many contractors are likely to fall short is providing the data specified by the reporting requirement. One of the most significant hurdles is imaging the affected systems. Contractors are expected to provide images, but typical technology personnel may not have the knowledge or skills to capture a proper forensic image and confirm its suitability for forensic analysis.
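Confirming an image’s suitability typically includes hashing it and comparing the digest to the one recorded at acquisition. A minimal sketch of that verification step in Python (the file name and recorded hash in the usage comment are hypothetical):

```python
import hashlib

def image_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of an acquired image, read in chunks so that large
    images do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# A verified acquisition compares this digest against the hash recorded
# when the image was taken, e.g.:
#   assert image_digest("evidence.dd") == recorded_acquisition_hash
```

Hash verification is only one part of proper acquisition; write-blocking the source media and documenting chain of custody matter just as much, and are precisely the steps general IT staff tend to miss.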
Although there is considerable substance to the reporting requirement, and despite its importance to assessing failures and improving system security, there does not seem to have been much focus on it. Rather, the focus has centered on the security standards, as evidenced by the earlier discussion of the changes that have been made, and are currently being proposed, to those standards. (See Untangling the Governing Set of Security Standards for CUI.)
The next part in this series of articles on safeguarding CUI is on the Penalties and Certifications.
The previous part in this series of articles on safeguarding CUI is on the Security Standards.