All three rule iterations, the 2013, 2015, and 2016 versions, require contractors to report a “cyber incident”. There are no other reporting requirements, such as a requirement to report whether a contractor’s systems actually comply with the security standards. Prior to the CMMC requirements and their full implementation in the years to come, the compliance aspect was more of an honor system: actual compliance was assumed when contractors submitted their proposals for contracts containing the rule requirements.
Whether a contractor’s system meets the various system standards is likely a very subjective determination, however. The various security controls are broadly worded. In addition, there simply are no hard and fast rules about how they are to be achieved or how compliance should even be measured. In fact, the only thing that likely is measurable is whether or not the contractor has experienced a “cyber incident”. With the advent of the CMMC requirements it will be quite interesting to see how the various security controls are interpreted and their requirements imposed on contractors by the third party certifying officials.
For most contractors, there will likely be two distinct challenges in satisfying the reporting requirement. The first will be determining whether a cyber incident has actually occurred. In all but the obvious situations, like a ransomware attack, considerable expertise is required, both in how to conduct the analysis and make the determination and in handling the variability of equipment and systems that could be involved. A contractor's IT personnel, who are typically chosen for their ability to support day-to-day operations, are not well suited for this task.
The second challenge will be the actual reporting and all that can go along with it. Once contractors realize that their reporting obligations can include providing DoD with copies of all of the forensic images and analysis data they collected when making their incident determination, as well as providing DoD with wide-ranging access to the contractor's other equipment and information, it will be too late to redesign their systems to minimize their reporting obligations. The toothpaste will be out of the tube, as they say.
Both of these challenges as well as the reporting requirements as a whole are discussed further in the sections that follow.
A cyber incident is defined in both the 2013 and 2015 versions of the rule as, “actions taken through the use of computer networks that result in an actual or potentially adverse effect on an information system and/or the information residing therein.” Because the wording turns on the term "effect," it speaks to the usefulness of the system or its information; it would not cover the situation where information was simply taken but otherwise unaffected.
The definition was revised in the 2016 version to include the prospect of a compromise. Specifically, the 2016 version defines a cyber incident as, “actions taken through the use of computer networks that result in a compromise or an actual or potentially adverse effect on an information system and/or the information residing therein.” A compromise is then defined as, essentially, an unauthorized disclosure of information to persons, or the violation of a security policy of a system in which unauthorized disclosure, modification, destruction, loss, or copying of information to unauthorized media may have occurred. Thus, a compromise covers the situation where the data has been taken or, as more precisely described, "disclosed".
While the adverse effect mentioned in the definition of a cyber incident can be either actual or potential, a compromise is not similarly described in the definition as actual or potential. Thus, a compromise appears limited to situations that actually occurred, such as an unauthorized disclosure or the actual violation of a security policy. It is only if an actual violation of the security policy occurs that its consequence, for determining whether a cyber incident occurred, can be actual or potential.
Despite the more permissive nature of the issues related to a security policy violation and whether it "may have" resulted in an unauthorized disclosure, modification, destruction, loss, or copying of information to unauthorized media, an organization's security policy may not be that helpful in determining whether a cyber incident has occurred. In other words, it is not a catch-all for everything not covered by the actual unauthorized disclosure requirement. An organization's security policy identifies the rules and procedures individuals must follow when using and accessing the organization's IT resources. The policy could be too general to be of any real value when determining whether a cyber incident has occurred. In addition, while a disregard for these rules could have permitted a cyber incident, it is also possible that even following those rules would not have prevented whatever has happened. For example, perhaps the exploit is the result of a vulnerability in an operating system that is up-to-date with patches as required by the security policy. Also, the security policy could include things like awareness training, but if an employee is duped nonetheless, would that still be a violation of the security policy? Thus, even with the more permissive treatment of events that "may have" occurred, a security policy violation may not be relevant, and the issue still turns on whether any compromise has involved an actual unauthorized disclosure.
While the 2013 and 2015 rule versions did not include the term compromise in the definition of a cyber incident, both versions did address compromise in their discussion of cyber incident reporting; the term was not added to the definition itself until the 2016 version. Thus, regardless of how the definition has been constructed, a cyber incident is either an unauthorized disclosure of information or an adverse effect on the system or the information it contains.
The 2013 version of the rule also provides two examples of a cyber incident. The first is a "possible exfiltration, manipulation, or other loss or compromise of any unclassified controlled technical information resident on or transiting through a contractor’s, or subcontractor's, unclassified information systems.”[EN-1] The second is “Any other activities not included in [the first example] that allow unauthorized access to the Contractor’s unclassified information system on which unclassified controlled technical information is resident on or transiting.”[EN-2] Thus, under the 2013 version, a cyber incident includes both a compromise of information or an adverse effect on a system and a simple unauthorized access, even though exfiltration, manipulation, loss, or other compromise of the protected data may not have occurred. It seems, therefore, that the second example in the 2013 version is not consistent with the actual rule requirement, unless the criterion in the definition of a cyber incident for a "potentially adverse effect" is to be interpreted so broadly that there is simply no escaping it unless it can be positively proved that there was no cyber incident. As described previously, however, the term "effect" in this part of the definition appears more related to system or information usefulness, not disclosure.
Interestingly, neither the 2015 nor the 2016 versions contain any examples of a cyber incident. Rather, those versions simply provide the definition of a cyber incident. Thus, without a statement in the rules that a simple unauthorized access is a cyber incident, a simple unauthorized access would arguably not rise to the level of a compromise. In fact, if the system was not impaired and its data not disclosed, it is arguable that the safeguard systems were effective and prevented any kind of cyber incident from happening.
Some may want to argue that any unauthorized access results in a, “possible exfiltration, manipulation, or other loss or compromise.” While it is possible that a review of an unauthorized access is unable to confirm that there was no exfiltration, manipulation or other loss, it is also possible that a determination can conclude that there was no exfiltration, manipulation or other loss. In this latter situation, it is quite logical that the unauthorized access is not a cyber incident as defined in the 2015 or 2016 versions of the rule if it can be conclusively shown that it did not result in an unauthorized disclosure. As for the situation where it cannot be conclusively shown, the definitions of a cyber incident are all phrased somewhat actively. In other words, they are for situations that "result in" a particular condition or "effect" a particular system in some way. Thus, the mere speculation that a disclosure might have happened because it has not been established conclusively that it did not happen would seem not to rise to the level of a cyber incident.
The description of a cyber incident as being "actions taken through the use of computer networks" has significance too and would exclude several kinds of data compromises and losses. For example, consider the situation where an employee leaves his briefcase in a taxi and the briefcase contains a laptop holding CDI. Since that loss does not involve an "action taken through the use of computer networks," it arguably does not satisfy the definition of a cyber incident even if it otherwise met the definition of a compromise. For example, if the laptop is suitably encrypted, there could be no unauthorized disclosure, since the data would be unreadable and meaningless. On the other hand, if the laptop was not suitably encrypted, there could possibly be an unauthorized disclosure, although, assuming the laptop is recovered, a forensic examination might be able to rule that out as well. In either case, however, it would not be a cyber incident because it was not an action taken through the use of computer networks.
Once a cyber incident is discovered, contractors have 72 hours in which to report it. As with the other terms involving cyber incidents, there is no definition, illustration, or other explanation of the term discovered or of when the reporting clock is actually triggered. Presumably it runs from when an event has been determined to actually exist, not from the date when a concern was escalated for examination or even the date when malware first penetrated the network. Unfortunately, it could take days, weeks, or even months from the penetration date to when the situation is detected and actually elevated for analysis. In addition, in many cases the detection of an anomaly worthy of investigation could occur on one day, but it could take several more days before it can be analyzed and classified as a cyber incident. In fact, in all but very obvious cases, such as a ransomware attack, it is possible that once an anomaly is detected, the 72-hour clock could expire before the analysis of the anomaly can be completed and the anomaly classified as a cyber incident. Thus, arguably, discovery is not achieved until it is actually confirmed that a cyber incident has occurred.
The 2013, 2015, and 2016 versions all describe specific data elements that are to be included in the contractor’s cyber incident report. The 2013 rule identifies 13 data elements, while the 2015 and 2016 versions identify 20. For the most part these data elements are contractor, contract, and contact identifiers: names, dates, CAGE codes, DUNS numbers, phone numbers, e-mail addresses, etc. There is a limited amount of data about the incident itself and its outcome, such as the detection date, type of compromise, technique employed, the outcome, and the impact on CDI. There is also a place for a narrative description of the incident and its particulars. Thus, even completing the report requires a certain amount of definitively known information.
All rule versions require contractors, including subcontractors, to rapidly report cyber incidents to DoD at http://dibnet.dod.mil. Thus, subcontractors do not report to their customers but directly to DoD.
The reporting is performed through a web interface. To connect to the web interface and complete the reporting form the contractor must have a medium assurance certificate. Contractors can obtain their certificate from http://iase.disa.mil/pki/eca/Pages/index.aspx.
The NIST 800-171 standards at 3.6.2 impose reporting requirements as well, but there are no real details about what is to be reported. Rather, 3.6.2 simply requires compliance with whatever reporting requirements exist in the circumstances. Thus, with regard to the DFARS rules, the particulars about the nature of the report and its contents are really only provided in the DFARS rule itself.
A contractor’s reporting requirements are not limited to simply reporting that a cyber incident has occurred. In the 2015 and 2016 versions of the rules, contractors are expected to expend some effort analyzing the event to determine its nature and scope as well as the extent of compromise. The 2013 version of the rule did not mention forensic analysis, although unless something obviously bad has happened, such as a ransomware attack, a contractor is not likely even to know whether an incident has occurred, and thus be able to fulfill its reporting requirement, until it performs a forensic analysis and determines that a compromise has occurred.
The analysis is not limited to just the infected machine. Indeed, the analysis should look beyond the infected device and include a determination about the extent to which other devices on the network have been accessed and potentially compromised.
A forensic analysis is defined in the 2015 and 2016 versions of the rules as, ". . . [T]he practice of gathering, retaining, and analyzing computer-related data for investigative purposes in a manner that maintains the integrity of the data." Performing the kind of analysis necessary to determine if there actually has been a compromise and the extent of that compromise is one area where contractors are likely to fall short. This is true for several reasons.
First, as clearly stated in the definition, the analysis is to be done in a way that maintains the integrity of the data. All too often, in-house IT personnel dive into the analysis on a live machine and do not take steps to maintain the integrity of the data. They do not use write-blocking tools, and they do not work with a forensic image of the storage media.
Second, it is not uncommon for IT professionals to simply perform various malware scans to clean a system and then go no further. A malware scan may be able to spot a problem and remove it but that does not answer the other questions such as what kind of malware was it? How long has it been there? What has it done? In addition, the lag time between malware appearing in the wild and when it is included in a virus checker can be months. Thus, simply having something detected and deleted by a virus checker is not enough and likely still requires a forensic analysis.
Of course, a lot of issues are first detected based on some kind of behavior and not the result of a virus checker. The forensic analysis skills necessary to determine whether there has been a compromise and to what extent it has occurred can require significant expertise. In-house IT personnel typically do not have the tools, training, or expertise to conduct meaningful forensic analysis in order to determine whether a cyber incident actually occurred and then provide the details required by the reporting requirement.
Third, for those that do go further, whether in-house personnel or outside consultants, most will start their analysis by examining things like event logs, but there are actually numerous other artifacts that could shed light on the events giving rise to the analysis. Other sources of potentially useful information on a Windows system, for example, are the file system, registry hives, browser history, link files, and jump lists, to mention a few. For malware that covers its tracks by deleting those artifacts, data recovery is another essential skill. In addition, there could be network firewall, switch, and router logs, as well as packet data collected by the contractor's intrusion detection system, that could also be analyzed.
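To make the breadth of such a collection concrete, the following Python sketch copies a few common Windows artifact locations into an evidence folder and records a SHA-256 hash for each copy. The artifact paths, function name, and manifest structure are illustrative assumptions only; a real collection would be tailored to the system under examination and performed with forensically sound tooling.

```python
import hashlib
import shutil
from pathlib import Path

# Common Windows artifact locations an examiner might preserve.
# These paths are illustrative; real collections are tailored per system.
DEFAULT_ARTIFACTS = [
    "Windows/System32/config/SYSTEM",                # registry hive
    "Windows/System32/config/SOFTWARE",              # registry hive
    "Windows/System32/winevt/Logs/Security.evtx",    # event log
]

def collect_artifacts(root: Path, evidence_dir: Path,
                      artifacts=DEFAULT_ARTIFACTS) -> dict:
    """Copy each artifact found under `root` into `evidence_dir`,
    returning a manifest of relative path -> SHA-256 of the copy."""
    evidence_dir.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for rel in artifacts:
        src = root / rel
        if not src.is_file():
            continue  # artifact absent on this system; skip it
        dst = evidence_dir / src.name
        shutil.copy2(src, dst)  # copy2 also preserves timestamps
        manifest[rel] = hashlib.sha256(dst.read_bytes()).hexdigest()
    return manifest
```

Hashing each copy at collection time gives the examiner a baseline for later proving the preserved artifacts were not altered.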
If the machine being investigated is up and running and the activity or behavior causing the concern is ongoing, then examination of memory is also likely in order. When memory is examined, its contents need to be dumped to an external source so that they can be analyzed. If the machine is cold, then volatile memory is not available for examination, but a review of things like the swap file and potentially hibernation files could still reveal something. While a lot can be learned from memory analysis, one must realize that a review of memory will most likely reveal only the here and now. If the malware has been present and active for some time, as is often the case prior to detection, an analysis of disk and other network artifacts will be needed to determine the full scope of any compromise.
Fourth, even if whoever performs the contractor's forensic analysis has the necessary expertise, they could still be hamstrung by not having enough data to examine. Again, the behavior or activity could have been occurring for some time prior to its detection and the decision to conduct an analysis. It is not uncommon that the time periods for collecting data on a machine's activity have not been set large enough to cover the period needed for an effective examination. Oftentimes the storage sizes for things like event logs are left at their defaults and have not been deliberately increased to ensure that sufficient capacity exists to hold several months' worth of data. On network servers, for example, it is not uncommon that event logs are not large enough to have retained all of the data necessary to evaluate the history of the suspicious activity. Event logs are also often configured to capture only errors rather than all events, in which case a lot of activity goes unrecorded and cannot support any kind of analysis. Similarly, the Windows Shadow Copy feature may not have been enabled broadly enough, or its storage limits set large enough, for significant historical data to have been captured about the machine's use and functioning. In fact, a number of logging and archiving features often go unimplemented because the strategy was about real-time operational debugging rather than after-the-fact forensic analysis. Since Windows 8, updating of the file system's last-access date has been considerably desensitized; while it is possible to re-enable this feature, few organizations do, even though it can be quite valuable for interpreting file activity on a Windows system. Even good security practices, such as disk encryption on portable devices like laptops, limit the future ability to conduct forensic analysis if the data retention horizons have not been suitably increased.
So, there are just a lot of different elements that can impede a good forensic analysis.
Essentially, the real problem, at least for this particular issue, is a chicken-and-egg situation: the people setting up and configuring the equipment usually are not forensic people experienced in performing forensic analysis, yet configuring a machine to facilitate subsequent forensic analysis requires someone experienced with that effort and the problems that can be encountered.
Fifth, the final problem contractors will face is that the requirements imposed by 800-171 at 3.6, and the references to other NIST publications involving incident handling (800-61) and forensics (800-101), are too basic and generic to be helpful to anyone not already knowledgeable about incident response and digital forensics.
The forensic analysis element is an integral part of the entire DFARS cyber security effort, particularly since the definition of a cyber incident requires a compromise or loss. The third party certification will be just about as meaningless as the previous honor system if the disposition of potential incidents is either not seriously undertaken or purposefully designed to avoid detection of cyber incidents. Of course, the lack of seriousness or the purposeful design to avoid incident detection could also increase the likelihood of False Claims Act type controversies. Thus, the entire subject of forensic analysis is another area where the actual implementation of the security standards under CMMC will be interesting to watch, specifically, to what extent the forensic analysis requirement proves troublesome for contractors using either in-house personnel or outside consultants. Will there be any procedural or proficiency requirements for however the contractor plans to meet this requirement? Equally interesting will be what recourse contractors will have should the certifying officials exceed the requirements of the standards when evaluating a contractor's system.
The consequences of a cyber incident do not end with the submission of the incident report and the associated forensic analysis. Indeed, there are several other consequences when a contractor experiences and reports a cyber incident. These can result in additional contractor submittals as well as expanded analysis of the contractor's system. Remarkably, numerous unexpected consequences stem from these additional actions. The following sections discuss these additional actions and their consequences.
The Contractor or subcontractor that discovers and isolates malicious software in connection with a reported cyber incident is required by DFARS 52.204-7012(d) to submit the malicious software to the Defense Cyber Crime Center (DC3) as instructed by DC3 or the Contracting Officer. Contractors are instructed not to send the malware to the Contracting Officer, however. Rather, the contractor is supposed to simply follow the Contracting Officer's instructions.
The subject of malicious software in the DFARS rules has changed several times over the years. In the 2013 version the issue was not even addressed. In fact, it was not even mentioned or defined. In the 2015 version it appeared, although at that time the contractor was to simply follow the submission instructions provided by the contracting officer. It was not until the final 2016 version of the rules that DC3 was identified and named as the entity to whom the malicious software was to be sent.
The definition of malicious software has not changed since it first appeared in the 2015 version of the rules. The definition describes malicious software as, "Computer software or firmware intended to perform an unauthorized process that will have adverse impact on the confidentiality, integrity, or availability of an information system. This definition includes a virus, worm, Trojan horse, or other code-based entity that infects a host, as well as spyware and some forms of adware."
A contractor may have malicious software identified for it by various tools and techniques. One of those techniques is identification of the malicious code by a virus checker as one of the types of software appearing in the definition of malicious software. In fact, such an identification may have been what triggered the contractor's incident response and the forensic analysis to determine whether a cyber incident had actually occurred. The presence of malicious software does not, by itself, constitute a cyber incident if it has not caused a compromise or adverse effect, however.
If the contractor finds software that is linked to whatever behavior or events it is investigating, and the software has not been identified by a virus checker as malicious software, there are other techniques the contractor can employ to evaluate what it has found. Of course, if the contractor has already determined that a cyber incident has occurred, then time is short for any further investigation it can perform, unless the instructions it receives from DC3 allow additional time for additional analysis.
In addition to the analysis, contractors are to preserve the incident in sufficient detail to permit subsequent analysis by other defense elements. Under DFARS 52.204-7012(e), "When a Contractor discovers a cyber incident has occurred, the Contractor shall preserve and protect images of all known affected information systems identified in paragraph (c)(1)(i) of this clause and all relevant monitoring/packet capture data for at least 90 days from the submission of the cyber incident report to allow DoD to request the media or decline interest." While this requirement appears somewhat benign, since the retention period mentions 90 days, that is not actually the limitation. Indeed, the requirement is for "at least" 90 days or until DoD declines interest, and who knows how long that could be.
The whole image issue happens to be an area where many contractors are likely to fall short. Images from their forensic analysis are something contractors are expected to provide, but typical technology personnel may not have the knowledge or skills to capture a proper forensic image and confirm its suitability for forensic analysis. A forensic image is a special kind of copy of an entire storage medium, unless identified as something else, like a logical image, which is not the entire storage medium but an entire partition on it. All too frequently, when in-house IT personnel go to make an image, they end up making a backup of active data, or maybe just some portion of active data. A backup is not nearly as comprehensive or as data rich as a forensic image.
An image is different from a copy in that it captures all data, including active data, deleted data, and slack space. Consequently, an image is very extensive, since it captures everything within its boundary. In fact, forensic analysts typically need everything within the boundary, and many of their tests will scan everything, looking for the items of interest and determining the consequence of whatever has happened.
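The difference can be illustrated with a toy example. The Python sketch below simulates a tiny 64-byte "disk" holding one active file and the remnant of a deleted file: a file-level copy misses the remnant, while a byte-for-byte image preserves it. All names, sizes, and offsets here are contrived purely for illustration.

```python
import hashlib

# Simulate a tiny 64-byte "disk": an active file occupies bytes 0-15,
# and a remnant of a deleted file still sits in unallocated space.
ACTIVE = b"QUARTERLY-REPORT"        # 16 bytes of live (allocated) data
REMNANT = b"DELETED-SECRET"         # residue of an erased file
disk = ACTIVE + b"\x00" * 10 + REMNANT + b"\x00" * (64 - 16 - 10 - len(REMNANT))

def logical_copy(disk: bytes) -> bytes:
    """A backup-style, file-level copy: only the allocated file's bytes."""
    return disk[:16]

def raw_image(disk: bytes) -> bytes:
    """A forensic (raw) image: every byte of the media, allocated or not."""
    return disk[:]

copy = logical_copy(disk)
image = raw_image(disk)

assert REMNANT not in copy   # the backup misses the deleted remnant
assert REMNANT in image      # the image preserves unallocated space
# The image hashes identically to the media: an exact duplicate.
assert hashlib.sha256(image).hexdigest() == hashlib.sha256(disk).hexdigest()
```

The same logic is why analysts insist on full images: the evidence that matters is often precisely the data a backup would never collect.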
Generally, creating images of the entire media is preferred, although when the media is extremely large, as is often the case for network storage devices, it may not be practical to image the entire media, and some lesser unit, like a partition, could be desired. The decision to image an entire media or some lesser unit is typically made by the forensic analyst at the start of their work. In fact, the image is typically the first thing they make, since all of their subsequent analysis will be done using the forensic image and not the original media itself.
Another special feature of an image is that the captured data has been collected inside a protective file wrapper. The file wrapper prevents the data from accidentally being altered after imaging. There are several different formats for the protective file wrappers used to hold image data. Some use compression, whereas others do not; some have integrity checks within the structure of the wrapper format, whereas others do not. In addition, an image can be segmented, which means it may not be a single file containing all of the captured data but rather many files that together comprise it. The downside to a forensic image is that the data inside can only be read with software tools capable of interpreting the protective wrapper format as well as its data contents.
When creating the image, certain tests are also performed to validate that the image was created correctly and that its contents are an exact duplicate of the media being imaged. These tests validate with very high probability that the image was created properly, and if any problems are found, such as bad sectors that could not be read, those problems are documented in the imaging log.
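The core of that validation is hashing. As a minimal sketch, the hypothetical Python routines below hash the data stream during acquisition, write it out as numbered raw segments, and then re-read the segments to hash them again; matching digests indicate an exact duplicate. Real imaging tools add much more (imaging logs, bad-sector handling, standard container formats), so this is only an illustration of the principle.

```python
import hashlib
from pathlib import Path

def acquire(source: Path, image_dir: Path, segment_size: int = 1024) -> str:
    """Write `source` out as numbered raw segments (image.001, image.002, ...)
    while hashing the data stream; return the acquisition SHA-256."""
    image_dir.mkdir(parents=True, exist_ok=True)
    h = hashlib.sha256()
    seg = 0
    with source.open("rb") as f:
        while chunk := f.read(segment_size):
            seg += 1
            (image_dir / f"image.{seg:03d}").write_bytes(chunk)
            h.update(chunk)
    return h.hexdigest()

def verify(image_dir: Path) -> str:
    """Re-read the segments in order and hash them; a match with the
    acquisition hash shows the image is an exact duplicate."""
    h = hashlib.sha256()
    for part in sorted(image_dir.glob("image.*")):
        h.update(part.read_bytes())
    return h.hexdigest()
```

Computing the hash once during acquisition and once on re-read is the same two-pass verification scheme professional imaging tools record in their logs.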
A final feature of images is that they are most often created using some kind of write-blocking device that prevents the data on the original media from being altered. Without these write-blocking devices, there are a lot of different ways the original data could be altered. For example, file system date stamps could be changed. If the storage media is imaged while the machine runs, other aspects could change as well: portions of free space could be overwritten, elements within the Windows registry could be altered, and various normal housekeeping functions could run at the same time. There are simply a lot of different ways that the condition of the data on the original media could be altered.
An image of the storage media may not be all that is at issue with forensic analysis of potential malware. If the machine is still running when the incident is detected then getting a memory dump and preserving memory contents for subsequent analysis could also be useful and part of the preservation process. Getting a memory dump can require very special software as well. The issue is not so much knowing how to grab memory but how to grab the correct portions of memory without causing other system issues and crashing the entire effort.
Memory dumps, along with network logs and packet data, typically come as some kind of raw data file. To prevent this data from being damaged after collection, it should be encapsulated within a forensic container, similar to what happens with the acquisition of drive data. Thus, these artifacts essentially need to be imaged too, although the kind of image involved will likely be limited to loose files. While technically not an image in the same sense as a hard drive or logical partition, they will be encapsulated in a protective file wrapper just like images of hard drives.
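The principle can be sketched in a few lines of Python. The container format below, a length-prefixed JSON header carrying a SHA-256 followed by the payload, is invented purely for illustration and is not any real evidence format; actual practice uses established wrappers, but the idea of binding the raw bytes to an integrity hash at collection time is the same.

```python
import hashlib
import json
from pathlib import Path

def wrap(raw: Path, container: Path) -> None:
    """Encapsulate a raw dump (memory, pcap, log export) in a simple
    container: a length-prefixed JSON header with a SHA-256, then the
    payload. Illustrative format only, not a real evidence container."""
    data = raw.read_bytes()
    header = json.dumps({
        "source": raw.name,
        "length": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),
    }).encode()
    with container.open("wb") as out:
        out.write(len(header).to_bytes(4, "big"))  # 4-byte header length
        out.write(header)
        out.write(data)

def unwrap(container: Path) -> bytes:
    """Read the payload back and confirm it still matches its hash."""
    blob = container.read_bytes()
    hlen = int.from_bytes(blob[:4], "big")
    header = json.loads(blob[4:4 + hlen])
    data = blob[4 + hlen:]
    assert hashlib.sha256(data).hexdigest() == header["sha256"], "altered!"
    return data
```

Once wrapped this way, any later alteration of the payload, accidental or otherwise, is detectable when the container is opened.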
All three versions of the rules included provisions for allowing DoD to conduct its own review and analysis of a contractor's cyber incident. In such an event, the contractor is supposed to share whatever data it has including the images that it created. In addition, DoD can access other contractor equipment to collect additional forensic images for DoD analysis.
While a lot of people have focused on the NIST 171 security controls and the phase-in of CMMC, providing DoD with access to the forensic images the contractor created, as well as access to the contractor's other equipment to create additional forensic images for its own analysis, is very disturbing. As recognized in the various paragraphs ((f), (g), (h), and (i)) of the CUI rules at DFARS 52.204-7012, DoD could easily have access to a contractor's proprietary data. This could include data the contractor has never before provided to DoD as a result of exemptions under the various Data Rights clauses. Remarkably, it could also include the proprietary data of the contractor's customers, vendors, and other partners with whom it may have entered a Non-Disclosure Agreement (NDA) prior to receiving that data.
The kinds of information these images can contain are not limited to trade secrets or intellectual property, however. Even devices holding other kinds of financial or personnel information could be captured if DoD were to decide to image those systems, too, as part of its own analysis. In fact, there is just no limit to the kinds of information that could be swept up into the expansive access to a contractor's equipment and information provided by the DFARS rule on safeguarding CUI. Thus, DoD's access to the contractor's own proprietary data, or that of its customers, vendors, or partners, is not all that could be revealed. The forensic images created for forensic analysis of the cyber incident typically involve the entire storage device, so whatever was present on the machine being examined will be contained in those images. Consequently, they could include sensitive personnel information such as pay rates and other personal information about key employees. Of course, they could also include some very embarrassing information. As an example, ever wonder how some people are spending their time at work, or perhaps using their laptops at home once they are removed from whatever firewall screening policies are employed at work?
Under the 2013 version of the rule, the contractor could resist providing any data it had a legal basis not to provide. Perhaps something like a Non-Disclosure Agreement (NDA) with another commercial entity would have been suitable legal authority not to share that data with DoD. Another basis might have been exemptions under the FAR Data Rights clauses or contractual implementation of those clauses. Perhaps HIPAA information and other Personally Identifiable Information (PII) could also have served as bases to exempt disclosure. In the 2015 and 2016 versions of the rule, however, this option has vanished.
Generally, the best way to protect something is never to share it with anyone. Remember that these forensic images are exact copies, warts and all, of the media storing information on a contractor's information systems. Under this rule, contractors can now be required to share all kinds of things they have never shared before. Since the 2013 version of the rule, the only protection offered to contractors is the government's claim that it will safeguard and protect contractor information, although no specifics are provided. (see DFARS 252.204-7012(f), (g), and (h)) Worse, under the 2016 version of the rule, the contractor's information can even be shared outside of DoD. (see DFARS 252.204-7012(i)) Furthermore, there is apparently no time limit on how long DoD might retain the images it collects, nor is there any rule about how the information would be destroyed and the storage media holding the forensic images sterilized.
Clearly, the aspects of the DFARS rule that give DoD access to a contractor's forensic images, as well as the ability to access other contractor equipment and create its own images, put considerable pressure on how the determination of a cyber incident is rendered.
When considering the prospect of cyber incidents and the coverage of the security standards, people tend to think only in terms of network connectivity and things like routers, switches, and the attached computing devices such as personal computers, network servers, and network storage units. Even then, they tend to think of those devices mostly in terms of loose files.
Actually, the scope of the requirements can be broader than that, extending to business systems like accounting, HR, engineering, and other ERP and enterprise-based applications and functions. The analysis of those kinds of systems is far different from a media examination. These business systems are typically built on databases, and that kind of analysis is much more specialized. Furthermore, one might think such business systems are far more secure and harder to access than the loose files on computing devices. In reality, once you understand how these database systems typically work, accessing their data is usually quite simple.
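The ease of direct access can be sketched with a minimal example. The table and column names below are hypothetical, and SQLite stands in for whatever back end a real ERP or accounting package uses, but the principle holds: the business application enforces its logins and role-based screens, while the underlying database largely just answers queries from anyone who can reach it.

```python
import sqlite3

# Hypothetical stand-in for an ERP/accounting back end. Production
# systems differ in transport and authentication details, but once a
# direct connection to the database is possible, ordinary SQL reads
# the data with none of the application's access controls in the way.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (employee TEXT, annual_rate REAL)")
conn.execute("INSERT INTO payroll VALUES ('J. Smith', 185000.0)")
conn.commit()

# No login screen, no application-level permissions: a direct query
# returns whatever the tables hold.
rows = conn.execute("SELECT employee, annual_rate FROM payroll").fetchall()
```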
Another area not usually considered within the coverage of the CUI requirements is factory and shop floor systems, including CNC equipment and even manufacturing management functions like MRP systems, but they are part of the coverage area as well. They are likely quite different, too. The various pieces of equipment could have non-standard or even proprietary file systems. In some respects this could help keep them secure, although it did not seem to matter in the case of the Stuxnet exploit. On the other hand, they could be built on less robust platforms like Windows CE or even open-source Linux systems, which could make them more vulnerable to attack. In any event, monitoring, detection, and forensic analysis of these systems for cyber incidents is very likely far more specialized and well beyond what a typical IT person can handle. The point is: do not assume that the applicability of the various CUI requirements, including the requirement for cyber incident reporting, is limited to conventional network devices and connectivity.
Although there is considerable substance to the reporting requirement, and although it is important for assessing failures and improving system security, there does not seem to have been much focus on the requirement for cyber incident reporting. Rather, the focus has centered on the security standards, as evidenced in earlier discussions of the various changes that have been made, and are currently being proposed, to those standards. (see Untangling the Governing set of Security Standards for CUI) Nonetheless, there are likely bigger concerns once a contractor has its first cyber incident.
It is not so much the reporting per se, or that the contractor has to submit the report with some kind of secure certificate, or even that it will have to perform some kind of forensic analysis as part of its incident response, which, most likely, its own IT people won't know how to do. Rather, the really big bugaboo is that a contractor may have to provide DoD with all of the forensic images it made and the data it examined as part of its incident response and forensic analysis. Even worse, a contractor may have to give DoD wide-ranging access to its data and equipment so DoD can conduct its own imaging and forensic analysis.
Where it gets really scary is when all of that data and equipment contains highly sensitive proprietary data that the contractor has never previously provided to DoD under exemptions to the FAR Data Rights clauses. Even worse, DoD might use a contractor to perform its assessment and review all of that data and equipment, and that contractor could be a different branch of a major competitor or customer that has historically been stiff-armed from conducting various audits for competitive reasons.
They say it is not a matter of IF you will have a cyber incident but WHEN. Once you consider the cyber incident reporting rules, one has to wonder: from whom are you really protecting your network?
The next part in this series of articles on safeguarding CUI is on the Penalties and Certifications.
The previous part in this series of articles on safeguarding CUI is on the Security Standards.