With few exceptions, control architecture has changed little over the past 40 years. However, advances in processing power, network technologies and software will enable greater value for end users in the near future by changing the way controllers are implemented and interface with the field.
In most cases, the basic architecture of a process (DCS) or discrete (PLC) control system consists of a set of I/O cards logically connected or assigned to a single control processor housed in dedicated hardware. This has been the general state of affairs since the first digital controllers were introduced more than 40 years ago. The earliest control systems consisted of a card rack in which a local real-time control processor communicated with a set of I/O cards coupled directly to the same backplane.
As network technologies advanced, systems began to employ architectures in which a single control processor might support several card racks of I/O connected via proprietary, deterministic protocols. While this approach remains predominant today, it is effective but potentially wasteful.
Generally speaking, every control processor is limited by three main parameters: the ability of the controller to complete I/O scans, diagnostics and program execution in a timely fashion; the capacity to store code, I/O maps and program variables; and the ability to handle data transfer with the I/O and the level 2 network. Because applications rarely stress all three limits equally, capacity often goes to waste.
An application may reach the limit of the number of supported I/O for a single controller, yet the controller may be able to support far more logic processing than the application requires. This means the user has paid for processing capacity that either is not required or cannot be put to use.
Alternatively, logic-intensive applications, like some batch applications, reduce the amount of I/O the control processor can support. If the application requires high availability, the extra hardware and software required amplify the waste.
For remote I/O applications, the user may be required to have multiple racks of co-located I/O assigned to different control processors. Alternatively, the user might choose to have all the field data pass through one processor and pass those values (or other relevant field data) to another controller with available processing capability.
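The stranded-capacity problem described above can be sketched as a simple sizing calculation. The class names, limits and load figures below are illustrative assumptions, not vendor specifications; the point is only that whichever limit binds first strands the controller's remaining capacity.

```python
# Hypothetical sketch of stranded controller capacity.
# All numbers and names are illustrative assumptions, not vendor specs.

from dataclasses import dataclass

@dataclass
class ControllerSpec:
    max_io_points: int      # I/O connectivity limit
    max_logic_load: float   # logic execution capacity (arbitrary units)

@dataclass
class AppDemand:
    io_points: int
    logic_load: float

def stranded_capacity(ctrl: ControllerSpec, app: AppDemand) -> dict:
    """Return the unused fraction of each controller resource once the
    application hits whichever limit binds first."""
    io_util = app.io_points / ctrl.max_io_points
    logic_util = app.logic_load / ctrl.max_logic_load
    return {
        "io_unused": 1 - io_util,
        "logic_unused": 1 - logic_util,
        "binding_limit": "io" if io_util >= logic_util else "logic",
    }

# An I/O-heavy application: the I/O limit binds, while most of the
# controller's logic processing goes unused.
result = stranded_capacity(
    ControllerSpec(max_io_points=1000, max_logic_load=100.0),
    AppDemand(io_points=1000, logic_load=20.0),
)
print(result)  # logic_unused = 0.8, i.e. 80% of paid-for processing stranded
```

A logic-intensive batch application would simply flip the binding limit, stranding I/O connectivity instead of processing.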
Controller and I/O Interfacing
Unlike most current architectures, a new approach would have a common I/O network shared by all controllers and all field devices. This network would support a deterministic communication standard and allow any controller to address any field device. It would even allow multiple controllers and/or other applications to access the same data without intermediaries and permit peer-to-peer communications between field devices. The I/O network would support both traditional (analog) and intelligent (digital) field devices. Because such a network would support peer-to-peer communications, some applications would be implemented at the field level.
Through this decoupling of previously dedicated I/O and controllers, end users would be able to buy the appropriate amount of I/O for each physical area without the constraints of the controllers. Controllers would be less likely to have unused processing capacity and/or unused I/O connectivity. Details that would need to be worked out include the number of network connections an I/O device or a controller could handle, network efficiencies, speed impediments and how to migrate existing users. However, ARC does not believe that these are insurmountable challenges.
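The idea of a common I/O network that any controller can address might be sketched as a publish/subscribe model. The classes, tag names and values below are illustrative assumptions; a real implementation would ride on a deterministic fieldbus or industrial Ethernet layer, not an in-process callback list.

```python
# Minimal sketch of a shared I/O network: any controller (or other
# application) subscribes directly to any field device's data, with no
# intermediary controller. Names and structure are assumptions.

from collections import defaultdict
from typing import Callable

class IONetwork:
    def __init__(self):
        self._subscribers = defaultdict(list)  # tag -> list of callbacks

    def subscribe(self, tag: str, callback: Callable[[str, float], None]):
        self._subscribers[tag].append(callback)

    def publish(self, tag: str, value: float):
        # Every subscriber sees the same field value directly.
        for cb in self._subscribers[tag]:
            cb(tag, value)

class Controller:
    def __init__(self, name: str, network: IONetwork, tags: list):
        self.name = name
        self.values = {}
        for tag in tags:
            network.subscribe(tag, self._on_update)

    def _on_update(self, tag: str, value: float):
        self.values[tag] = value

net = IONetwork()
ctrl_a = Controller("A", net, ["FT-101"])  # a process controller
ctrl_b = Controller("B", net, ["FT-101"])  # a second consumer of the same point
net.publish("FT-101", 42.5)                # the field device publishes once
print(ctrl_a.values, ctrl_b.values)        # both see {'FT-101': 42.5}
```

Because field devices could subscribe to one another's tags the same way, some applications could run peer-to-peer at the field level, as noted above.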
Control in the "Cloud"
The "Cloud," as used in the IT world, is not deterministic enough, available enough or fast enough for most level 2 control applications, though it may be in the future. However, the decoupled architecture would enable a "local cloud," or virtualized control platform, much like today's virtualized IT environments. This architecture could meet the requirements of determinism, availability and speed of response.
In this scenario, ARC envisions a set of hardware hosting multiple real-time control instances or hosting a single control entity that grows with the application.
The hardware would run a real-time virtualization platform similar to the corresponding IT equivalent, and could be dispersed throughout the facility. This platform would ensure real-time communications between the virtualized controller instance(s), and between the controller instance(s) and the I/O. In a manner similar to IT virtualization, the platform would also handle load balancing and failure mode recovery.
The hosted controller instance(s) would run in a real-time manner similar to current controller implementations, with each instance running an execution environment similar to today's equivalents. From a user standpoint, the interface to these controller instances could be nearly identical to today's. Because the controllers are purely software, they could be licensed just like any other virtualized software platform, "spun up" nearly as quickly, and managed with tools similar to current IT virtualization tools.
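The spin-up, load-balancing and failure-recovery behavior described above can be sketched in a few lines. The host names, placement policy (least-loaded host) and class names are assumptions for illustration only; a real platform would also have to preserve controller state and guarantee real-time deadlines during recovery.

```python
# Illustrative sketch of a "local cloud" control platform: controller
# instances are pure software, so the platform can place them on the
# least-loaded host and restart them elsewhere on failure.
# All names and the placement policy are illustrative assumptions.

class Host:
    def __init__(self, name: str):
        self.name = name
        self.instances = []

class ControlPlatform:
    def __init__(self, hosts):
        self.hosts = hosts

    def spin_up(self, instance_name: str) -> str:
        """Place a new controller instance on the least-loaded host."""
        host = min(self.hosts, key=lambda h: len(h.instances))
        host.instances.append(instance_name)
        return host.name

    def fail_over(self, failed_host_name: str):
        """Recover instances from a failed host onto the remaining ones."""
        failed = next(h for h in self.hosts if h.name == failed_host_name)
        self.hosts.remove(failed)
        for inst in failed.instances:
            self.spin_up(inst)

platform = ControlPlatform([Host("host-1"), Host("host-2")])
platform.spin_up("unit-100-controller")
platform.spin_up("unit-200-controller")
platform.fail_over("host-1")  # host-1's instance restarts on host-2
print([(h.name, h.instances) for h in platform.hosts])
```

The same mechanism is what would let a single control entity "grow with the application": adding capacity becomes a placement decision rather than a hardware purchase.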
Senior Consultant Mark Sen Gupta leads ARC's coverage of process automation and automation supplier services. He also covers topics in process safety and SCADA. Mark has over 24 years of experience in process control, alarm management, SCADA, and IT applications. He holds a Bachelor of Electrical Engineering and a Master of Science in Electrical Engineering from Georgia Institute of Technology.