December 2010



EMAIL INTERVIEW: John Petze & Ken Sinclair

John Petze, former CEO of Tridium, joins a new software startup focused on analytics.

John Petze, C.E.M., has over 25 years of experience in building automation, energy management and M2M, having served in senior-level positions for manufacturers of hardware and software products including Andover Controls, Tridium, and Cisco Systems. At SkyFoundry he joins Brian Frank, co-founder and chief architect of Tridium’s Niagara Framework, as they look to bring the next generation of information analytics to the “Internet of Things”.

“Finding What Matters” in the sea of data


Sinclair:  Tell us about SkyFoundry and the concept of analytics.

Petze:  Our focus is to help people “find what matters” in the sea of data contained in smart systems – whether that be building automation systems, smart meters, energy management systems or other smart devices. When you think about it for a moment, things have really changed in relation to the information we get from control systems and smart devices. With the advances we have seen over the past 5-10 years it is now possible to “get the data” – whether via open communication protocols, by tapping into SQL databases or by connecting to a web services interface. And because technology has made it increasingly economical to instrument, acquire, and store any piece of data, the amount of data has become overwhelming.

So overall it is fair to say that we can now get at the data and that the new frontier is “how do we derive value from all of this data”. That is what we focus on at SkyFoundry – automating the analysis of data to find what is important.

Sinclair:  Is there a difference between analytics and information dashboards?

Petze:  That’s a great question because they are related yet very different. Here is a way to look at the difference – dashboards are tools to present information to users – pictures of equipment systems, graphs and charts of temperatures or energy consumption, etc. There is a lot of progress being made in improved presentation techniques to make dashboards more effective, but there is still a lot of data to look at.

Analytics, on the other hand, is the process of determining what data should be presented. For example, do you have time to look through 100 graphics of equipment systems to see if everything is OK? Do you want to look at displays showing the values of hundreds of temperature sensors, or be directed to a display that only shows the temperature readings that are out of bounds? Or only the temperatures that have been out of bounds for more than one hour? Or only temperatures that are out of bounds by more than 2 degrees for more than one hour? Analytics adds those additional factors that tell us that something really matters and is worth your time.
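That kind of layered filter is easy to picture in code. The sketch below is illustrative only – the reading format, the 68-75 degree band, and the function name are assumptions for the example, not part of any SkyFoundry product:

```python
from datetime import datetime, timedelta

# Sketch of the filtering idea above: flag only sensors that sit more
# than `margin` degrees outside the allowed band for longer than
# `min_duration`. Readings are assumed to be (timestamp, sensor_id, temp).
def find_sustained_excursions(readings, low=68.0, high=75.0,
                              margin=2.0, min_duration=timedelta(hours=1)):
    flagged = set()
    run_start = {}  # sensor_id -> time its current excursion began
    for ts, sensor, temp in sorted(readings):
        if temp < low - margin or temp > high + margin:
            start = run_start.setdefault(sensor, ts)
            if ts - start >= min_duration:
                flagged.add(sensor)  # out of bounds long enough to matter
        else:
            run_start.pop(sensor, None)  # back in range; reset the clock
    return flagged
```

A hundred sensors go in; only the handful worth a person’s attention come out.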

Those are fairly simple examples to help convey the essence of what we mean by analytics. The next step is to add correlation to the analysis process. For example, identify the fact that temperatures are out of bounds for more than 15 minutes in certain zones, automatically determine that the reason is a demand response event that will end in 30 minutes, and tell that to the operator. That is where the real value comes from – giving people the whole picture so the operator doesn’t end up going on a wild goose chase simply to determine that a condition is “acceptable” based on a combination of factors.
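The correlation step might look something like this. The event list, message wording, and function name here are invented for illustration – the point is only that a second data source turns a raw excursion into an explained one:

```python
from datetime import datetime

# Given the time of a zone-temperature excursion, check whether a known
# demand-response window accounts for it before bothering the operator.
def explain_excursion(excursion_time, dr_events):
    """Return an explanatory note if a DR event covers this time, else None."""
    for start, end in dr_events:
        if start <= excursion_time <= end:
            minutes_left = int((end - excursion_time).total_seconds() // 60)
            return f"Expected during demand response; event ends in {minutes_left} min"
    return None  # no known cause; this one is worth the operator's attention
```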

Sinclair:  Some of the examples you describe sound similar to alarms. Is there a difference?

Petze:  That’s a question I often hear. There is a big difference between alarms and analytics. First of all, alarms require that you fully understand what you want to look for at the time you program the system. For example, did you know that you would participate in a demand response program? Did you know that you would not want to create a work order in response to a temperature alarm when the building is participating in a DR event if the value is only a bit out of bounds for a short period of time?

Alarms are also very local in nature, typically evaluating a single item or point. A local controller operating an air handler simply can’t combine data from throughout an enterprise to discern relationships or patterns that are potentially important.

Alarms are also not well suited to exploration of data relationships. For example, would you really be able to justify the cost of programming alarm logic into 1,000 sites because you have an idea about a correlation that could be resulting in energy waste? It’s costly and complex to reprogram control systems. These types of data analysis situations are just not well served by alarms.

Similarly, in large facilities there are a lot of interactions between the various systems that simply can’t be known until the building is operational. A key part of analytics is accepting the reality that we are going to discover new things over time, and having tools to do that effectively. Analytics allows you to test new ideas and identify new patterns and correlations. It’s about deriving new value from the data generated by our smart devices.

Sinclair:  It sounds compelling, but is it a highly complex undertaking that only large facilities can take advantage of?

Petze:  The interesting thing about analytics is that it is multilayered – like peeling an onion. There are many things you can find quickly and easily that result in immediate financial benefit. How about finding malfunctioning temperature sensors with a “rule” that says that if a sensor value doesn’t change by more than 0.1 degrees in 24 hours, the operator should be notified? Bad sensors could be causing units to run wild and consume excess energy. It’s a simple rule that can result in immediate benefit. Another example could be identifying HVAC units that heat and cool within the same hour (or 15-minute period) – perhaps that means the control algorithms are not set up correctly and the unit is overheating and overcooling, resulting in excess energy use.

Even basic analytic relationships can result in significant operational benefits, and the barrier to entry is very low in both software cost and complexity of setup. From there you move on to more sophisticated rules and relationships to assess the operation of complex systems like central plants, or system performance vs. baseline or ideal conditions. If a domain expert can describe a pattern or rule they want to use, it’s an easy matter to implement it in the system.
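Both of those starter rules are a line or two of logic once the history is in hand. This sketch assumes hypothetical function names and windows of sampled values; the thresholds come straight from the examples above:

```python
# Rule 1: a sensor whose reading barely moves over a 24-hour window
# may have failed, and could be driving a unit to run wild.
def flat_sensor(temps, threshold=0.1):
    """True if the reading changed by no more than `threshold` degrees."""
    return max(temps) - min(temps) <= threshold

# Rule 2: a unit commanded to both heat and cool inside the same
# hour (or 15-minute) window may have a mis-tuned control algorithm.
def heats_and_cools(heat_log, cool_log):
    """True if heating and cooling were each active at some point in the window."""
    return any(heat_log) and any(cool_log)
```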

Sinclair:  What does it look like for the user?

Petze:  Here are a few examples of how information is presented to the user. It all starts when a rule finds a hit – conditions that match a set of definitions. When it hits, it generates what we call a “spark”. The software then automatically assembles information into a display that graphically shows the correlation between the “spark” and related data – for example, weather conditions, electrical demand, status of associated control points, zone temperatures, process variables, occupancy schedules, etc.

Screen Capture 1

Screen Capture 2

So in essence we’ve created a way to automatically assess data against a set of rules, and then present the user with the information that’s related to the issue to show correlations. It’s really amazing what you find. It shows you things you never knew were happening. It’s like mining your data for money.
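The rule-hit-to-spark flow described above can be sketched roughly as follows. Every name here is hypothetical – the source doesn't show SkyFoundry's actual data model – but the shape of the idea is: evaluate rules over a snapshot of points, and bundle the correlated context with each hit so the display can show it:

```python
# Minimal sketch: rules are (name, predicate) pairs; a snapshot maps
# point names to current values; `context` carries the related data
# (weather, demand, schedules, ...) that gets attached to each spark.
def run_rules(rules, snapshot, context, ts):
    sparks = []
    for name, predicate in rules:
        for point, value in snapshot.items():
            if predicate(value):
                sparks.append({"rule": name, "point": point, "value": value,
                               "time": ts, "context": context})
    return sparks
```

Only the points that trip a rule produce sparks; everything else stays out of the operator’s way.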



