
Truth: Data Without Assurance Is Noise
Why this matters across every facility
Most teams still assume that clean graphics equal a healthy system, and that is the trap. Offices feel it on the utility bill. Data centers see PUE, ELC, and MLC creep. Labs get humidity and pressurization excursions. Hospitals see room-relationship alarms. Factories watch yield drift. Pretty graphics can hide real problems when sensors drift, names do not match, trends are missing, or clocks are out of sync. Data Assurance Commissioning turns data from a byproduct into a deliverable so dashboards, invoices, and outcomes tell the same story. It starts in the OPR so goals and KPIs are plain, and it is detailed in Division 25 so every system follows the same naming, trend, and time plan. It is a team scope with a budget, not a favor from one vendor. And yes, sometimes the bill is wrong, which is why meter hierarchy and reconciliation are part of Data Assurance before anyone starts tuning.
It is all one data system
BAS, SCADA, EIS, EPMS, CMMS, FDD, and AI. Different logos, one bloodstream. If the data is wrong at the source, it is wrong everywhere downstream. One bad scale factor can mislead six systems at once. Controls look normal, SCADA floods alarms, EIS flags a spike, analytics chase ghosts, CMMS opens tickets, and finance fights a bill. Screens do not make truth. Validated data does. Data Assurance Commissioning makes the source true with one naming rule, one trend plan, and one clock.
A short story you might recognize
A chiller looks good and efficient on the dashboard. Everyone relaxes. Commissioning checks the field and finds the leaving water sensor was never verified after install. The controller is set to the wrong sensor type, units and scaling were assumed, the probe is not fully seated or insulated, and it sits near a mixing tee. The same point is labeled three different ways in BAS, graphics, and CMMS. Trends are thin and the clock is a few minutes off, so no one can compare apples to apples. Now swap the chiller for an air handler, a boiler, a lab room, or a data hall CRAH and the plot is the same.
Fix the inputs, not the interface. That is Data Assurance: verify the critical sensors in the field against a trusted reference and record curve, range, units, and location. Use one naming and tagging rule across BAS, SCADA or EPMS, analytics, and CMMS. Start with Project Haystack so names and meaning hold still across all projects. Run a simple burn-in trend plan, then settle into steady trends with units stored and at least a year of data retained. Keep one clock for devices and servers. Start it in the OPR so goals and KPIs are plain. Detail it in Division 25 so naming, required points, trend rates, time sync, and sensor checks are not optional. Require proof at turnover with calibration and point-to-point logs, a time-sync report, and a live trend library. Once the foundation is true, turn on FDD to catch drift fast and let AI help prioritize. When Haystack is truly standard across your work, add Brick to map relationships at portfolio scale.
Billing is important, and it comes next. Put a meter hierarchy and a monthly reconciliation in Division 25 so invoices align with the now trustworthy data.
What Data Assurance Commissioning is
Think of two tracks that run together. Track one is operational readiness. Equipment, sequences, and safeties perform as intended under real conditions. Track two is data assurance. Inputs are accurate, named and tagged consistently, trended with intent, and traceable from sensor to screen. If either track fails, analytics underperform and optimization does not stick. Data assurance does not compete with commissioning. It completes it.
Write the target in the OPR, in plain English
Start with the Owner’s Project Requirements. Add a one-page data objective that anyone on the team can read and repeat. List the decisions you want to support. Name five to eight KPIs that prove success. Identify a small set of required points by system. Specify the naming and tagging approach, the minimum trend plan, the time source, and how the commissioning team will verify results. If an operator cannot explain the objective back in their own words, it is not clear enough to enforce.
Division 25 is where you make data rules real
Division 25 does not invent a new standard. It gives you one home for the rules that every discipline follows. Use it to make data quality visible, testable, and shared. Then repeat the same pattern across the portfolio.
- Data objective: state that the BAS must deliver validated, traceable, timely data that supports operations, optimization, and analytics.
- Naming and tags: adopt one convention for the whole portfolio. Use a readable pattern that includes site, building, system, equipment, and point. Keep suffixes short and clear, such as Cmd, Sts, Sp, Temp, Flow, and kW. Use the same IDs in BAS, dashboards, analytics, EPMS or SCADA, and CMMS; a naming-check sketch follows this list.
- Required points and trends: publish standard point lists by equipment type. Separate must-have points from nice-to-have points. Trend the few points that prove behavior: one minute during burn-in and functional testing, five minutes ongoing. Retain at least thirteen months. Store units and quality flags with each sample.
- Time and lineage: require a single time source for devices and servers. Keep drift within one second, tighter where event order matters. For each KPI, keep a one-page sensor-to-screen map that shows the sensor, scaling, unit, historian tag, dashboard tile, and who owns each step.
- Acceptance tests: require pass-or-fail evidence, not screenshots. Point-to-point checks, basic calibration on the sensors that matter most, trend-based sequence tests, and a few safe fault tests for alarms and interlocks. Capture everything in a Data Quality Log that is signed at turnover.
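To make the naming rule testable rather than aspirational, the convention can be expressed as a pattern and checked automatically during submittal review and point-to-point verification. Here is a minimal sketch, assuming a site-building-system-equipment-point layout with the short suffixes listed above; the segment lengths and example names are illustrative, not a mandated standard.

```python
import re

# Illustrative pattern: SITE-BLDG-SYSTEM-EQUIPnn-PointSuffix,
# e.g. "HQ1-B2-CHW-CH01-LwTemp". The segment rules below are assumptions
# for this sketch; adopt whatever the owner's dictionary actually specifies.
POINT_NAME = re.compile(
    r"^[A-Z0-9]{2,4}-"                      # site code
    r"[A-Z0-9]{1,4}-"                       # building code
    r"[A-Z]{2,6}-"                          # system, e.g. CHW, AHU
    r"[A-Z]{2,6}\d{2}-"                     # equipment with a two-digit index
    r"[A-Za-z]*(Cmd|Sts|Sp|Temp|Flow|kW)$"  # point with an approved suffix
)

def nonconforming(names):
    """Return the names that break the convention so they get fixed at the source."""
    return [n for n in names if not POINT_NAME.match(n)]

print(nonconforming(["HQ1-B2-CHW-CH01-LwTemp", "chiller1_LWT"]))
# -> ['chiller1_LWT']
```

Run the same check again at turnover; a name that fails the pattern is a finding to close out, not a style debate.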
Haystack first. Then Brick when the foundation is real
Start with Project Haystack as the operational tagging standard. Haystack gives your data a common vocabulary so names, units, and intent stop drifting between systems. It speeds submittals, reduces custom mapping, and makes validation obvious. Once Haystack is in use on all projects and the owner’s dictionary is established, add Brick Schema for portfolio scale. Brick is a structured map of equipment, points, spaces, and relationships. It helps future-proof analytics, digital twins, and AI by making relationships queryable across many sites. In short, Haystack for daily operations and consistency. Brick for long-term scale after Haystack is truly standard.
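For a feel of what Haystack tagging buys you, here is a minimal sketch of one point, the leaving chilled water temperature from the chiller story. The tag names (point, sensor, temp, water, chilled, leaving) are standard Project Haystack vocabulary; the ids and refs are hypothetical.

```python
# Haystack models tags as key-value pairs; MARKER stands in for Haystack's
# marker type. The ids and refs below are hypothetical examples.
MARKER = object()

leaving_chw_temp = {
    "id": "@HQ1.B2.CH01.LwTemp",     # hypothetical point id
    "dis": "CH-01 Leaving CHW Temp",
    "point": MARKER,
    "sensor": MARKER,
    "temp": MARKER,
    "water": MARKER,
    "chilled": MARKER,
    "leaving": MARKER,
    "unit": "°F",
    "equipRef": "@HQ1.B2.CH01",      # hypothetical equip id
    "siteRef": "@HQ1",               # hypothetical site id
}
```

Because every downstream system reads the same tags, a query for leaving chilled water temperature sensors returns the same points in the BAS, the analytics platform, and the CMMS, with units attached.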
FDD and AI help after the foundation is true
Turn on a small, high-value FDD rule set once names, units, time, and trends are trustworthy. Start with the plant and the air handlers. Look for plant efficiency drift, sensor disagreement beyond tolerance, setpoint tracking errors, economizers that do not produce the expected trend shape, pressure cascades that are out of order, and fan power rising while inlet temperatures rise. Meet weekly to triage the top findings, confirm causes, and close or assign work orders. Add AI when the labels and the clock are right. Then it accelerates triage instead of inventing problems.
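Two of those rules are simple enough to sketch. Assuming five-minute trend samples with units already normalized, sensor disagreement and setpoint tracking look roughly like this; the tolerances are illustrative starting points to tune during weekly triage, not fixed limits.

```python
def sensor_disagreement(a, b, tolerance=2.0):
    """Flag sample indexes where two sensors that should agree differ
    by more than the tolerance (e.g. redundant supply temps)."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if abs(x - y) > tolerance]

def setpoint_tracking_error(pv, sp, band=1.5, min_run=6):
    """Flag the start of any run where the process value sits outside the
    setpoint band for min_run consecutive samples (30 minutes at 5-minute data)."""
    run, findings = 0, []
    for i, (v, s) in enumerate(zip(pv, sp)):
        run = run + 1 if abs(v - s) > band else 0
        if run == min_run:
            findings.append(i - min_run + 1)
    return findings
```

Notice that both rules presume the foundation: matched units, a shared clock, and trends dense enough to show the behavior.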
Yes, sometimes the bill is wrong. Prove it fast
Before you tune the plant to chase a suspicious invoice, run a quick check. Look for estimated reads. Confirm multipliers and constants for CTs and PTs match your EPMS or BAS. Align the time windows between the utility interval data and your historian. Reconcile main and submeter sums and power-factor penalties. Pull the utility 15-minute load shape and compare it with yours. Put this in Division 25 as a monthly billing reconciliation requirement with a clear meter hierarchy that includes IDs, multipliers, and tariff.
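A reconciliation pass can be a few lines once the intervals align. The sketch below assumes kWh readings already resampled to the same 15-minute timestamps in one time zone; the two-percent tolerance is an illustrative threshold, not a tariff rule.

```python
def reconcile(utility, main_meter, submeters, tolerance=0.02):
    """Compare utility interval data, the main meter, and the submeter sum.
    Each argument maps interval timestamp -> kWh for that interval;
    submeters is a list of such maps."""
    findings = []
    for ts, u in utility.items():
        m = main_meter.get(ts)
        if m is None:
            findings.append((ts, "interval missing from historian"))
            continue
        if u and abs(u - m) / u > tolerance:
            findings.append((ts, f"utility {u:.1f} vs main {m:.1f} kWh"))
        sub_total = sum(s.get(ts, 0.0) for s in submeters)
        if m and abs(m - sub_total) / m > tolerance:
            findings.append((ts, f"main {m:.1f} vs submeters {sub_total:.1f} kWh"))
    return findings
```

A persistent ratio between the utility and main-meter totals usually points at a CT or PT multiplier; scattered mismatches usually point at time alignment or estimated reads.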
MBCx keeps it honest after day one
Commissioning sets the baseline. Monitoring-based commissioning keeps it true.
- Week 0 to 2: burn-in at one-minute trends. Fix mis-tags, mis-scales, and clock drift; a sketch of these checks follows this list.
- Week 3: enable a small FDD rule set and start a standing review.
- Weekly: triage the top findings. Confirm root causes. Close or assign work orders.
- Seasonal: re-check resets, changeover logic, and schedules under new conditions.
- Annual: re-validate critical sensors and update thresholds.
This rhythm turns commissioned once into commissioned always.
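The burn-in checks from week 0 to 2 are mechanical enough to automate. A minimal sketch, assuming one-minute samples carried as (timestamp, value) pairs and a one-second drift budget; the thresholds are starting points, not rules.

```python
def flatlined(samples, window=60):
    """A sensor stuck at one value for a full hour of one-minute samples."""
    values = [v for _, v in samples[-window:]]
    return len(values) == window and len(set(values)) == 1

def out_of_range(samples, lo, hi):
    """Values outside the physical range usually mean wrong units or scaling."""
    return [(t, v) for t, v in samples if not lo <= v <= hi]

def clock_ok(device_time, reference_time, budget_s=1.0):
    """Check a device clock against the single time source within the drift budget."""
    return abs((device_time - reference_time).total_seconds()) <= budget_s
```

Run these against the burn-in trends each morning and the mis-tag and mis-scale list empties fast.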
Team sport, not vendor heroics. Budget it
Data assurance is QA and QC work. It takes hours, tools, witnesses, and retest time. Put it in scope for every role or you will buy it later with change orders.
- Owner and Facilities: set non-negotiables in the OPR; fund the team scope; approve the tag dictionary; own change control; run the MBCx cadence.
- EOR/AOR: EOR writes Division 25 requirements, points, trends, time source, and acceptance. AOR supplies the spatial hierarchy and room IDs so BAS, analytics, and CMMS match drawings.
- Controls: implement names and tags, units and scaling, and trend setup; fix point-to-point and calibration findings.
- MSI (Div 25): co-develops the master tag dictionary with Owner, Facilities, and EOR; implements integrations; administers the historian; maintains lineage and change control; preserves units and meaning across systems.
- CxA: author the Data Assurance plan; witness tests; verify time sync and trends; sign the Data Quality Log.
- EPMS/SCADA: integrate and validate meters, multipliers, and tariff mapping.
- IT/OT: provide networks, roles, certificates, time service, backups, and access governance.
- TAB/Electrical: provide calibrated references for temperature, flow, pressure, and power.
Create a RACI so nothing falls through the cracks
Attach a one-page RACI to Division 25. It takes time to prepare, and all parties must agree to their roles.
Responsible • Accountable • Consulted • Informed
Review the RACI at kickoff, first equipment startup, and before turnover.
Single building vs campus
A single site can survive with a local host if names, trends, and time are disciplined. A campus or portfolio needs site and building codes embedded in every name, a shared time source and certificate management, a global point dictionary, reusable graphics, and portfolio dashboards and FDD that reuse the same IDs. That is how you compare buildings fairly and scale without rework.
Owner guidance: show it, do not just say it
Vague requests create vague results. Do not just tell the design team what you want it to look like. Show it and specify it so there is no confusion, no rework, and no change order parade.
- Include a one-page Data Assurance objective in the OPR with the actual KPIs you will use.
- Attach example point packs for two or three key systems, marked must-have vs nice-to-have.
- Provide a naming and tagging dictionary excerpt and require it verbatim.
- Add a simple trend schedule table: sample rates, retention, and units. An example excerpt follows this list.
- Name the time source and the acceptable drift.
- Include pass or fail acceptance tests with example plots.
- Add a meter hierarchy sketch with CT or PT multipliers and tariff codes.
- Attach a one-page RACI and identify who signs the Data Quality Log.
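As one concrete attachment, a trend schedule excerpt can be as small as this. The rates and retention echo the Division 25 numbers earlier in this article; the suffixes and units are illustrative, and COV means trend on change of value.

```python
# (point suffix, burn-in rate, ongoing rate, retention, stored unit)
TREND_SCHEDULE = [
    ("Temp", "1 min", "5 min", "13 months", "°F"),
    ("Flow", "1 min", "5 min", "13 months", "gpm"),
    ("kW",   "1 min", "5 min", "13 months", "kW"),
    ("Sts",  "COV",   "COV",   "13 months", "on/off"),
]
```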
When owners show the target with concrete examples, the team can build it once, verify it cleanly, and operate it confidently.
Bottom line
Get the fundamentals right and the rest follows. Put Data Assurance in the OPR, detail and enforce it in Division 25, and fund it as real QA and QC. Use one naming rule, one trend plan, one clock, and an owner-approved Haystack tag dictionary on every project. When that foundation is steady, add Brick to scale relationships across sites. Publish a simple RACI and budget so it stays a team effort. Include a meter hierarchy and monthly reconciliation so invoices and trends agree. Decide what matters, name and tag it once, trend it with intent, and test it until it is true. That is how dashboards, work orders, and outcomes align for a single building and for a campus.