
ABOUT THIS EPISODE

This month Synopsys put out its State of Fuzzing 2017 report. It's useful data, but the context of the collection and the metrics used to evaluate failures are very important. I talked with Chris Clark, Principal Security Engineer for Strategic Initiatives at Synopsys, to discuss the report.

Key points from the podcast and report include:

  • The data comes from yearly usage totals of Synopsys customers and is anonymized.
  • This means the protocol stacks tested are likely in development or QA. Some percentage of the flaws found are fixed prior to release, so the numbers may not be as grim as they appear.
  • You need to understand the market and protocol history to make sense of the numbers.

Two examples of the last point:

  1. The CAN protocol testing for the automotive sector is almost entirely of ECUs in a lab environment, not deployed in an automobile. There are large numbers of ECUs in a car, as opposed to a single or very small number of protocol stacks in a PLC, or even in an entire SCADA or DCS.
  2. The Modbus TCP numbers look terrible. Modbus TCP is such a simple protocol that many, if not most, vendors code up their own Modbus TCP stack. Most large and established ICS vendors have fuzzed and fixed their Modbus TCP stacks, so the numbers likely do not reflect the robustness of the deployed Modbus TCP stacks. Contrast this with DNP3, which has a smaller number of stacks, most purchased from a single vendor, or OPC UA, where the complexity of the stack encourages buying rather than building.
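The Modbus TCP point is easier to see with a concrete sketch. The frame format is just a 7-byte MBAP header (transaction ID, protocol ID of 0, length, unit ID) followed by a function code and data, which is why so many vendors write their own stack, and why it is so easy to generate mutated frames for fuzzing. The following is a minimal illustrative sketch, not Synopsys's tooling or any vendor's test harness; the mutation strategy and the function codes chosen are my own assumptions for illustration.

```python
import random
import struct

def build_mbap(transaction_id: int, unit_id: int, pdu: bytes) -> bytes:
    """Build a Modbus TCP frame: MBAP header followed by the PDU.

    MBAP header: transaction ID (2 bytes), protocol ID (2 bytes, always 0),
    length (2 bytes, = len(pdu) + 1 to cover the unit ID), unit ID (1 byte).
    """
    return struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id) + pdu

def mutate_pdu(rng: random.Random) -> bytes:
    """Generate a random PDU: a function code plus a random payload.

    Mixes common function codes (3 = read holding registers, 6 = write
    single register, 16 = write multiple registers) with arbitrary bytes,
    so both valid and invalid requests are produced.
    """
    function_code = rng.choice([3, 6, 16, rng.randrange(0, 256)])
    payload = bytes(rng.randrange(0, 256) for _ in range(rng.randrange(0, 16)))
    return bytes([function_code]) + payload

# Deterministic RNG so the sketch is repeatable; a real fuzzer would not do this.
rng = random.Random(0)
frames = [build_mbap(i, 1, mutate_pdu(rng)) for i in range(5)]
for frame in frames:
    print(frame.hex())
```

A real fuzzer would send each frame to the target over TCP port 502 and watch for crashes or hangs; the report's tests were run with Synopsys's commercial tooling, not anything this simple.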

Other points:

  • The number of protocol stacks in each protocol (Modbus TCP, OPC UA, etc.) varies and is based solely on customer use. It could be one stack or 21 stacks.
  • The number of tests and the test methodology are entirely determined by the Synopsys customer. They are not uniform.
  • The detection of failure is also not as rigorous as it would be if the device were monitored for performance of its full role.
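To illustrate the failure-detection point: a common lightweight check during fuzzing is simply whether the target still accepts a TCP connection between test cases. The sketch below shows such a check; it is my own illustration of the general technique, not how Synopsys's tooling detects failures. A device can pass this check while its scan cycle, I/O handling, or alarm processing has silently degraded, which is why full-role monitoring would catch more failures.

```python
import socket

def is_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Crude liveness check: can we still open a TCP connection to the device?

    This only shows the device still accepts connections. It says nothing
    about whether the device is still performing its control function
    correctly (scan timing, I/O updates, alarm handling, and so on).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Between mutated frames a harness might call `is_alive(target_ip, 502)` and flag a failure only when the connection is refused or times out, so subtler failure modes go uncounted.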

As Chris stated, this should be considered a top-level view of the state of ICS protocol robustness. The key is to understand where these numbers come from and not to read more into them than the constraints warrant. And we should appreciate that ICS vendors are doing this type of testing.

Note: I apologize for the voice quality of this episode. It was a combination of a mistake I made in setup and marginal line quality. It is not difficult to understand, but it is not pleasant to the ear. I will do better.



Disclaimer: The podcast and artwork embedded on this page are from Dale Peterson: ICS Security Catalyst and S4 Conference Chair, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.
