True Analytics™ - Energy Savings, Comfort, and Operational Efficiency
In the Dark Age before DOS and green screens, it was by no means a given that machines would be able to communicate with each other in so many different ways. Automation at that time was mostly PLC organized. Such a structure usually included one large controller, kilometers of wires between controller, sensors and actuators, and eventually one PC station with an HMI interface called Supervisory Control and Data Acquisition (SCADA). Simple control applications were not connected to such an expensive architecture. Connectivity between controller, sensors and actuators was mainly delivered through the 4 to 20 mA current loop as a method of transferring information. A typical current loop maps a sensing range of 0 to 100% onto the current range between 4 and 20 mA. The transmitter impresses a certain current into the loop and the receiver measures the current in the loop. After measuring the current at the receiver side, internal receiver logic determines the present level of the signal within the defined range. The same principle was used with voltage signaling. Since one station is always the transmitter and the other one is the receiver, this is a unidirectional system.
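The 4-20 mA scaling described above is simple linear arithmetic; a minimal sketch (the function name and the broken-loop check are illustrative, not from any particular product) might look like this:

```python
def loop_current_to_percent(current_ma, low_ma=4.0, high_ma=20.0):
    """Map a 4-20 mA loop current onto a 0-100% sensing range."""
    if current_ma < low_ma:
        # A live-zero loop never drops below 4 mA; less usually means a broken wire
        raise ValueError("current below 4 mA: possible broken loop")
    return (current_ma - low_ma) / (high_ma - low_ma) * 100.0

# 12 mA sits exactly halfway through the 4-20 mA span
print(loop_current_to_percent(12.0))  # 50.0
```

The offset zero (4 mA rather than 0 mA) is what lets the receiver distinguish a genuine 0% reading from a broken loop.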
We all know that our industry is not so simple and straightforward. There are a lot of technologically different subsystems in buildings with different requirements, needs and behavior history. As microprocessor power became more affordable, such "mainframe" systems were exchanged for the Direct Digital Controller (DDC) approach. There were a lot of benefits in the new structure: applications were distributed to DDC substations, wiring got shorter, and the structure generally became more manageable. But a new problem arose: we needed a new way of communicating between substations. It had to be bidirectional and cover the whole plant. That requirement delivered, back in 1983, the RS-485 serial interface, still in use today as one of the major electrical interfaces at the field level. EIA RS-485 was made a standard, derived from the RS-422 standard. It is an OSI layer 1 specification. RS-485 provides half-duplex communication, since a station cannot simultaneously transmit and receive independent data streams. Each station in an RS-485 system has a transmitter and a receiver, together commonly called a transceiver. When one transceiver is transmitting, all others should be receiving. Which station is allowed to transmit at a given time is not specified in that standard; it is covered by higher-layer protocols through the media access layer. Media access is the method by which individual stations determine when they are permitted to use the media. Modbus, BACnet and a lot of proprietary serial protocols use RS-485 as an electrical interface. At the very beginning, each manufacturer developed their own automation system, mainly using the RS-485 electrical interface as a base for their proprietary serial protocol. As a result, systems from different vendors were incompatible, and a need for intercommunication between them quickly became a must.
A fast solution was the gateway, a device which could talk the language of two foreign systems and exchange information between them. The need for common application protocols for multivendor installations was understood as an important direction. Somehow Modbus, which was designed in the late 1970s to communicate with programmable logic controllers, was accepted by many vendors. Its acceptance became so wide that today there is hardly a vendor not offering it, and Modbus remains one of the dominant application protocols on the market. Unfortunately, as it was designed for PLC interconnection, the number of data types in Modbus is limited to those understood by a PLC. Somewhere in the eighties, both LonWorks and BACnet started their voyage. Through the nineties and this decade, many new protocols were delivered.
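At its core, a gateway of the kind described above is a translation table between two systems' point naming and addressing schemes. A minimal sketch (the point names and the `gateway_forward` helper are hypothetical, purely for illustration) could look like this:

```python
# Hypothetical point map: names in vendor A's system -> names in vendor B's system
POINT_MAP = {
    "AHU1_SAT": "supply_air_temp_1",
    "AHU1_RAT": "return_air_temp_1",
}

def gateway_forward(source_points, point_map):
    """Translate point names from system A and forward their values to system B.

    Points with no mapping are dropped, just as a real gateway only
    exchanges the points it has been configured to translate.
    """
    return {point_map[name]: value
            for name, value in source_points.items()
            if name in point_map}

readings = {"AHU1_SAT": 18.5, "AHU1_RAT": 22.1, "UNMAPPED_POINT": 0}
print(gateway_forward(readings, POINT_MAP))
# {'supply_air_temp_1': 18.5, 'return_air_temp_1': 22.1}
```

Real gateways also translate data types, units and protocol semantics, which is exactly why they are expensive to configure and maintain.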
Regardless of electrical interface, there are three media access methods used today: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) (Ethernet, LonTalk, CAN), token passing (BACnet) and master-slave (Modbus). CSMA/CD systems allow all stations on a network equal access; each station must listen to the network to determine periods of inactivity before transmitting. In token-passing networks there is a logical token which is exchanged among stations by network messages. The station that holds the token is allowed to transmit; all other stations are only permitted to receive messages. In master-slave networks there is just one master and the others are slaves. Slaves respond only to the master, and only when the master initiates communication by sending them a message.
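The master-slave discipline is the easiest of the three to sketch in code. The classes below are an illustrative model (the names and register layout are invented, not any real Modbus API): the master is the only party that ever initiates a request, and a slave only ever answers one.

```python
class Slave:
    """A station that transmits only in response to the master's request."""
    def __init__(self, address, registers):
        self.address = address
        self.registers = registers  # register number -> value

    def respond(self, register):
        # Slaves never speak unprompted; this is only called by the master
        return self.registers.get(register)

class Master:
    """The single station allowed to initiate communication on the bus."""
    def __init__(self, slaves):
        self.slaves = {s.address: s for s in slaves}

    def poll(self, address, register):
        # Address one slave at a time; silence (None) if nobody answers
        slave = self.slaves.get(address)
        return slave.respond(register) if slave else None

bus = Master([Slave(1, {0: 215}), Slave(2, {0: 198})])
print(bus.poll(1, 0))  # 215
print(bus.poll(3, 0))  # None -- no slave at address 3
```

Because only the master transmits unsolicited, collisions are impossible by construction; the price is that a slave with urgent data must wait to be polled.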
The general growth of networks, dominantly Ethernet and IP through Internet and intranet structures, built the need for most building application protocols to have the ability to use IP as a transport medium. Today, we are not really thinking about what will be used at the physical level, copper or fiber. Generally, we can assume that the Internet/intranet itself is the medium. The systems and subsystems are increasingly sophisticated, since they have to answer rigorous demands. The need for speed and the ability to communicate and deliver information to various corporate levels creates a demand for integration to remove these communication roadblocks and improve facility performance. The variety of protocols that we have today, with their complexity and purpose, makes it impossible to integrate them all unconditionally into a single seamless integrated system.
For years we listened to stories about interoperability and interchangeability, but in reality most manufacturers are trying to find a way to close the system, so they can control the automation and control structure in the building forever. Even when they are using open protocols like LonWorks or BACnet, they are establishing a BMS structure which is able to accept other devices and protocols but cannot itself be exchanged for another BMS. We are living in an era of proprietary systems built on top of open protocols. Facility executives are locked into a single manufacturer's system, and if there is a need for a function not supported by the manufacturer, they will simply be unable to fulfill that need. Moreover, for system expansions, modifications or upgrades, facility executives are forced to accept the system manufacturer's pricing.
Regardless of the use of published interoperability protocols, their implementation in software does not make a facility independent of the manufacturers. The most widespread protocols today are BACnet, LonMark, and Modbus. All three are used very successfully in building automation systems, but they address interoperability in very different ways. Furthermore, when we come to general ICT integration, their use will not be straightforward. In most cases we will look for an OPC interface or a web service, if available. Again, we will create a new tier, which will be used to integrate building data into the new data model.
Generally, we can divide these protocols into two platforms: the application protocol platform and the technology protocol platform. Service-oriented architecture with web services basically represents the application protocol platform. Traditional building application protocols (LonWorks, BACnet, Modbus, as well as others) we can position as the technology protocol platform. Positioning them all in the same group does not make them equal. The purpose of a protocol can generally be used as a key for this micro positioning. Putting protocols side by side with the automation layers of the subsystems, together with their purpose, responsibility, interactions and application self-sufficiency, can be used to define each protocol's scope.
Following the above definitions, we can assume there are four general layers:
- Integration platforms (JSON, XML structures, oBIX, OPC UA, BrightCore)
- Network protocols (BACnet, LonWorks, Modbus, SNMP, N1)
- Field protocols (LonWorks, BACnet, Modbus, N2, KNX, DALI, M-Bus, SMI)
- Edge protocols (4-20mA, 0-10V, digitals, ZigBee, Z-Wave)
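One way to read this taxonomy is as a lookup table: several protocols appear at more than one layer. A small sketch (the table layout and helper function are my own illustration of the list above, not a standard classification):

```python
# The four general layers from the list above, as a simple lookup table
PROTOCOL_LAYERS = {
    "integration": ["JSON", "XML", "oBIX", "OPC UA", "BrightCore"],
    "network":     ["BACnet", "LonWorks", "Modbus", "SNMP", "N1"],
    "field":       ["LonWorks", "BACnet", "Modbus", "N2", "KNX",
                    "DALI", "M-Bus", "SMI"],
    "edge":        ["4-20mA", "0-10V", "digitals", "ZigBee", "Z-Wave"],
}

def layers_for(protocol):
    """Return every layer at which a given protocol appears."""
    return [layer for layer, protos in PROTOCOL_LAYERS.items()
            if protocol in protos]

print(layers_for("Modbus"))  # ['network', 'field']
print(layers_for("OPC UA"))  # ['integration']
```

The overlap visible here, with BACnet, LonWorks and Modbus spanning both the network and field layers, is exactly why those three dominate the discussion that follows.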
The buzzword of today is "cloud computing", but there are a lot of different definitions. In most cases "the cloud" refers to virtual servers and the ability to have a kind of updated version of utility computing. What does that mean for the user? First of all, increasing capacity on the fly without investing in new infrastructure: a kind of pay-per-use, real-time service over the Internet. To use such a way of computing in our industry, we have to have an absolutely open system at all four mentioned protocol levels. Until we reach that, we will struggle with demarcation points and responsibility. More or less, everything will be performed like "Who's on First?" by Abbott and Costello. The general idea of integration platforms is to simplify building infrastructure with ICT-type applications, using all the reliable technology delivered to us by the Internet revolution as well as its evolution. Some of the mentioned technologies have very strict object models; others don't. Again I say: if a technology is well documented and simple to use, the market will distinguish it from the average one.
For the last fifteen years we have been in the middle of a protocol battle. Who will win? The simple answer is that everyone will win, since all have a long history of usage and everyone serves a need. Some are pushed by standards, others through manufacturer strength or a good price ratio. What is clear is that there are three which address the largest market. BACnet, LonWorks and Modbus are definitely the three kings of our building automation networks. BACnet looks like the strongest one, LonWorks has the largest product base, and Modbus is the cheapest one. Why am I not putting KNX in the same row as these protocols? Simply because it is best addressed through the BACnet protocol. The key device in the network should be the network controller, whose purpose is to sustain network manageability if the connection to the service cloud or "BMS" station is not functioning. This is the critical position. Whose network controller controls the network? This is exactly the place where an open system becomes the appropriate system. Accessibility of the network controller structure to a system integrator other than the one who installed the system is the key point for the enhancement of the system and its ability to accept a new service. As long as that point remains closed, we will not really be able to enjoy the full power of the "Cloud".
Typically, subsystems like AHU units, zone controls or boiler rooms are interconnected with the rest of the network by a field bus. The characteristic of such subsystems is that they have to have independent intelligence for autonomous operation and the ability to get high-authority data for changing their day-to-day routine. A lot of protocols could fit at that level. Which one is used depends on various reasons. Some are used because of their historical background. Others, like DALI, M-Bus or SMI, are particular industry standards and can live without interfering with the rest. At that level we do not need high speed; what we need is reliability and exchangeability. At that level, most devices can be exchanged, within the same protocol, for devices from other manufacturers. I would say this is the really interoperable layer of our networks. We should not forget the new breed of wireless protocols like ZigBee and Z-Wave. They are here, and they aim to fill both the field protocol area and the edge area, especially in sensor networks.
These protocols are the oldest ones and will always be here in some form. Mostly, they are used for very simple unidirectional communication with sensors and actuators. That simplicity keeps their price very affordable. Therefore, at that level our networks are really open. Advances in technology are opening that space to the wireless protocols. They are very aggressive in taking this protocol space; simplicity of deployment is their biggest ally. Once their application profiles are fully developed, they will surely take over this area.
Cloud computing is at the beginning, and everyone is trying to address cloud-based services. Software as a Service will surely prevail in the future. Everyone will offer their scope of web services, APIs and even cloud computing development environments, trying to dominate by providing fast application development and assuring a big customer base. However, this will not be an easy transformation. Our industry will not follow the full social-network scenario; it will need more than just opening an account and documenting all that we do. New value that we will be able to deliver at an affordable price, together with the security of the service and a logarithmic scale of profitability for the end user, will be the most important trigger. We will see that evolution through gradual steps in the next decade. That evolution will open the building automation service space globally to the professionals who know the trade. How the industry will behave in that process is not easy to predict. What is sure is that small players will accept it fully and quickly, but the larger players will try to keep their domination untouched. Until the killer application gets its place in the world, there will be no dramatic changes. Once such "a must" application appears, everyone will be forced to open up all network layers, and we will achieve full-scale interoperability and manageability at the speed of light. Enterprises will become nodes in the cloud, creating an absolutely new perception of the organization and of society as a whole. We will move from loosely coupled collections of parts into fully organized interoperable systems which will be able to accomplish the overall desired goals within an affordable time frame and price. This is probably the main task of this century, which will be seen from the future as the century of Food, Water, Energy and Organisation.