In April, the government announced a new law requiring many companies and government agencies to report cyber incidents to India’s Computer Emergency Response Team (CERT-In) within six hours of detection. The law covers a broad range of reportable events and requires logs to be retained for a rolling 180-day period. Although the law is well intended, it opens the door to a range of unintended consequences that may overburden companies and agencies.
There should be no question about the need for greater cooperation among government agencies, cybersecurity agencies and private-sector organizations. But measured against the practical realities of cybersecurity operations, this law needs refinement, preferably before it takes effect.
Given the complexities of this new requirement, lawmakers have delayed its enactment until September. While an in-depth analysis of the law is beyond the scope of this article, consider the following examples of how the inevitable “law of unintended consequences” could lessen the law’s effectiveness and, indeed, distract agencies like CERT-In. Agencies could be inundated with reports that cybersecurity professionals know are low-risk and low-priority, leaving analysts little ability to separate the wheat from the chaff. It remains to be seen whether the law will be amended before September.
What needs to be reported?
Right now, “targeted” scanning or probing of networks must be reported. Many large organizations see scanning attempts dozens of times every day, or even every hour. While such scans may be targeted, they are usually automated, and cybersecurity professionals consider them low-risk and simply part of the environment in which large firms operate. But what happens if a company is compelled to generate dozens or even hundreds of reports a day? What happens to the analysts receiving those reports, and to what extent will handling high volumes of reports pull resources from higher-priority tasks, both for the reporting entity and for the people receiving those reports? Perhaps a government-appointed public-private action group could review the types of issues to be reported and, based on actual experience, narrow the list to the highest-risk categories of incidents.
Is six hours enough time to generate a report?
The European Union’s General Data Protection Regulation provides 72 hours for submitting a report of a covered event. In the U.S., current guidelines are aiming at a 24-hour deadline. The reality, well understood by incident responders, is that in the early hours of a potential incident, the focus is on understanding whether an incident actually took place, stopping it if it is ongoing, identifying the root cause of the breach and identifying any data that was exfiltrated.
Many of today’s incidents combine data theft with a ransomware attack, making them particularly intense for the victim company, which must deal with ransomware encryption while trying to determine what, if anything, was stolen. Aside from the volume of reports that may be received for incidents that turn out to be low-risk or “false positives,” expecting reports within six hours may lead organizations to simply file reports without doing basic analysis, for fear of running afoul of the law by not reporting quickly enough.
It’s also unusual that the law seems to require everyone to use a single designated time source for network time measurement. Clearly, there is a need for a reliable source, but there are many, and it isn’t clear what advantage is gained by forcing everyone to use one and only one source of time data.
What’s in the report?
To the extent that a thorough analysis of report content is done, reports can be generated in a format amenable to automated triage. But where there are many free-text, fill-in-the-blank fields (as currently available forms seem to show), human analysts will need to spend time reading and classifying the reports by some measure of risk. Is that an efficient or effective way to handle the flood of data that can come in, given the tight reporting deadline and the breadth of incidents that must be reported?
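To illustrate the difference structured fields make, even a trivial rule can pre-sort machine-readable reports before a human ever sees them. This is a minimal sketch, assuming a hypothetical report schema with an `incident_type` field; the field name and risk categories are invented for illustration and are not taken from the actual CERT-In forms.

```python
# Hypothetical triage of structured incident reports.
# Category lists are illustrative, not drawn from the CERT-In directions.
LOW_RISK = {"port_scan", "probe"}
HIGH_RISK = {"ransomware", "data_exfiltration"}

def triage(report: dict) -> str:
    """Route a structured incident report using a simple risk heuristic."""
    itype = report.get("incident_type", "")
    if itype in HIGH_RISK:
        return "escalate"       # send straight to an analyst
    if itype in LOW_RISK:
        return "auto-archive"   # log it, but don't consume analyst time
    return "manual-review"      # anything unrecognized gets human eyes

print(triage({"incident_type": "port_scan"}))   # auto-archive
print(triage({"incident_type": "ransomware"}))  # escalate
```

With free-text forms, no rule this simple is possible, which is precisely why triage would fall back on human readers.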
What logs must be kept?
Maintaining logs is not an insignificant exercise. Systems can generate many gigabytes of log files every day. For complex network environments, that adds up quickly, both in sheer volume of data and in the cost of maintaining it. At a time when there are very real supply chain constraints on storage hardware, it could be quite costly to require companies to retain potentially petabytes (a petabyte is one million gigabytes) of data.
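To make the storage arithmetic concrete, here is a back-of-the-envelope sketch; the daily log volumes used are hypothetical illustrations, not figures from the law or from any particular organization.

```python
# Rough estimate of storage needed for a rolling 180-day log retention window.
# Daily volumes below are hypothetical examples.

def retention_tb(daily_gb: float, days: int = 180) -> float:
    """Total storage, in terabytes, for `days` of logs at `daily_gb` GB/day."""
    return daily_gb * days / 1024  # convert GB to TB

# A mid-size enterprise generating 50 GB of logs per day:
print(f"{retention_tb(50):.1f} TB")    # 8.8 TB

# A large network generating 5 TB (5120 GB) of logs per day:
print(f"{retention_tb(5120):.0f} TB")  # 900 TB, approaching a petabyte
```

Even at modest daily volumes, 180 days of retention quickly reaches storage scales that carry real hardware and operational costs.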
This also leaves open the question of exactly which logs must be retained, and how granular those logs should be. Which fields must be maintained? Which logs? Are all log records to be kept, or only certain ones? Simply requiring 180-day log availability can be expensive. Alternatively, companies may be forced to retain only a minimal set of logs, which may be of limited use to forensic analysts or data scientists.
Laws written by lawmakers are generally well intentioned, as this one most certainly is. But with something as technical as incident reporting and mandated logging, getting it right should matter as much as getting it done immediately.
The government and the private sector should work hand in hand to determine how to operationalize the positive principles laid out in the law, collaborating with lawmakers to maintain their vision while providing the best implementation of that vision. Experience from the cybersecurity, audit, corporate compliance and legal communities is vital, and applying that experience will lead to a more effective and efficient cybersecurity result for the nation and its people.
Alan Brill is the Senior Managing Director, Kroll Cyber Risk Practice, and Kroll Institute Fellow.