EIS Reality Check: Meters, CTs, and Data Assurance

We have too many siloed standards. We need a best-practices playbook built from real lessons learned in the field.

Why this matters now

How many projects have you seen without metering issues? Be honest. Wrong meter. Wrong location. CTs installed backwards. No safe access. “We will fix it later.” “I just installed it.” “Who tests it?” The list goes on.

When it all comes to a head, the rabbit hole deepens. Finger-pointing starts over who designed it, who chose it, who was supposed to verify it for the owner, how it was proven before turnover, and who will pay to fix it. Too often it simply stays wrong, and you can probably name a project right now. These are not edge cases. They are common. Owners pay for hardware that does not deliver reliable, useful data. Dashboards look great on day one, then drift out of sync with reality. Operations teams stop trusting the numbers. Savings go unmeasured. Fault detection starts labeling noise as signal. Frustration is earned.

This is a call to do better, not by creating another standard, but by turning the references we already respect into a field-ready sequence that teams can follow, test, and accept. The Engineer of Record still stamps design. Contractors and installing subs still own means and methods. The Master Systems Integrator coordinates interfaces and the data model so systems connect and can be proven. Controls contractors integrate meter data to BAS or SCADA, apply naming, units, intervals, and time sync, and align BAS trends to the EIS model. Commissioning protects outcomes with witnessed evidence. The result is Day 1 data you can trust.

Also, a mindset shift. Start with the end in mind. Plan for ongoing commissioning at the beginning. Anticipate closeout. Follow the equipment from Day 1 to Day 3,650. Meters do not live in isolation. They depend on devices, sequences, time sources, and naming that either support truth or undermine it. One weak link puts the whole system at risk.

Field lessons that keep repeating

Selection mistakes.
Display-grade meters show up in cost recovery or M&V roles. CTs sized for 800 A land on 200 A feeders, so low loads vanish in the noise. Split-core CTs get installed on VFD outputs where they saturate. PT ratios do not match the one-line. Neutrals go unmonitored on multi-wire feeders, so kW never balances.

Placement mistakes.
CT arrows face the load. Taps land on the wrong side of a breaker. Test switches are omitted or buried behind deadfronts. Devices sit where verification requires outages or hot work. The building inherits blind spots.

Access and safety gaps.
Working clearances are ignored. Labels and read-only ports are missing. Proving polarity or ratio would require energized work, so it never gets proven. Safety wins, as it should. Data truth loses.

Integration mismatches.
Names on the panel do not match the BAS, and the BAS does not match the tags in the EIS. Intervals differ across systems. Time is not synchronized. Phases are swapped and kW goes negative at noon. Reports do not reconcile. Trust erodes.

Commissioning shortcuts.
No secondary injection before energization. No comparison to a reference analyzer after. No 24 to 72 hour burn-in to catch gaps, flatlines, drift, or sign errors. The job closes on a promise instead of proof.

Consequences you actually feel.
Weekend shutdowns to re-land CTs. Five-figure change orders to relocate hardware that should have been accessible. Schedule slips while teams argue scope. Incentive or compliance reporting rejected. Operators stop trusting dashboards and go back to guesswork.

Data assurance, the most important lesson of all.
Data must be correct, complete, and on time, or nothing downstream matters. Treat it with life-safety rigor, because decisions ride on it. Calibrate, verify, and prove before anyone relies on it. Data is the gold.

  • One data model applied everywhere: names, tags, units, scaling, interval, timezone, sign conventions.
  • Time synchronized to a common source.
  • Three proofs: secondary injection before energization, comparative check after energization, then a 24 to 72 hour burn-in with a QA report for completeness, timeliness, validity, and reasonableness (a sketch of these checks follows this list).
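
To make those proofs tangible, here is a minimal sketch, in Python, of the kind of burn-in QA checks an EMIS or a short script could run against interval data. The function name, the 15-minute interval, and the limits are illustrative assumptions, not requirements from any of the references.

```python
# A minimal sketch of burn-in QA checks on 15-minute interval kW data.
# `readings` is assumed to be a list of (timestamp, kw) tuples pulled from
# the EIS; the interval, field names, and limits are illustrative only.
from datetime import datetime, timedelta

INTERVAL = timedelta(minutes=15)

def burn_in_qa(readings, window_start, window_end, feeder_max_kw):
    expected = int((window_end - window_start) / INTERVAL) + 1
    timestamps = [t for t, _ in readings]
    values = [v for _, v in readings]

    # Completeness: every expected interval is present, no gaps.
    completeness = len(set(timestamps)) / expected

    # Timeliness: timestamps land on interval boundaries (flags clock drift).
    off_interval = [t for t in timestamps
                    if (t - window_start) % INTERVAL != timedelta(0)]

    # Validity: no unexplained negative kW (sign or CT-orientation errors).
    negatives = [(t, v) for t, v in readings if v < 0]

    # Reasonableness: no flatlines, no values beyond what the feeder can carry.
    flatline = len(values) > 4 and len(set(values)) == 1
    out_of_range = [(t, v) for t, v in readings if abs(v) > feeder_max_kw]

    return {
        "completeness": round(completeness, 3),
        "off_interval": len(off_interval),
        "negatives": len(negatives),
        "flatline": flatline,
        "out_of_range": len(out_of_range),
    }

# Example (a hypothetical one-hour slice of a longer burn-in window):
start = datetime(2024, 1, 2, 0, 0)
readings = [(start + i * INTERVAL, 395.0 + i) for i in range(5)]
print(burn_in_qa(readings, start, start + 4 * INTERVAL, feeder_max_kw=800))
```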

The references are solid. The sequence is missing.

We are not short on guidance. We are short on a single, jobsite-ready order of operations that ties it all together.

  • DOE FEMP Metering Best Practices explains why to meter, what to meter, where to meter, and how to prioritize at the program level. It is the umbrella that sets direction and intent.
  • LBNL EMIS and EIS resources define what good data looks like, how to specify data fields and QA rules, and how to use that data to drive action.
  • ANSI C12 family covers meter performance and accuracy.
  • IEEE C57.13 covers CT and PT selection, ratio, burden, and polarity.
  • ANSI or NETA ATS covers acceptance tests and reporting.
  • ASHRAE Guideline 0 and Standard 202 cover commissioning process, documentation, and roles.
  • NEC and UL cover clearances, labeling, and safe panel-shop practices.
  • Protocols and IT hygiene such as BACnet, Modbus, and NTP keep data moving on time and with integrity.

These are the ingredients. The playbook is the recipe.

From standards to a proven Lessons Learned Playbook

What follows is a performance-based sequence you can adopt without adding liability or erasing professional boundaries. It converts the references into eight gates with owners and evidence.

Gate 1. Program and OPR

Connect FEMP to owner goals. Define the metering strategy, the priority list, and Day 1 acceptance thresholds. Clarify which circuits support billing transfer, cost allocation, demand management, analytics, and FDD.
Deliverables: OPR with FEMP cross-references. Prioritized scope. Acceptance rubric.

Gate 2. Basis of Design

Apply ANSI C12 and IEEE C57.13 on paper. Call out meter accuracy class and demand interval. Specify CT and PT ratios and burden. Show exact tap points, P1 orientation to source, phase order, test switch location, read-only verification port, and working clearances on one-lines and details.
Deliverables: Stamped one-lines and details. Burden calculation sheet. Accuracy class callouts.
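
For the burden calculation sheet, the arithmetic is simple enough to sanity-check in a few lines. A minimal sketch follows; the wire size, lead length, meter input burden, and the B-0.2 rating are illustrative assumptions, not design values.

```python
# A minimal sketch of a CT secondary-burden check. All inputs are
# illustrative; use the actual CT rating, lead lengths, and device burdens
# from the stamped design and approved submittals.
def ct_burden_va(secondary_amps, one_way_lead_ft, ohms_per_kft, device_va):
    """Total burden in VA seen by the CT secondary at rated current."""
    lead_ohms = 2 * one_way_lead_ft * ohms_per_kft / 1000.0  # round-trip lead resistance
    lead_va = (secondary_amps ** 2) * lead_ohms              # I^2 * R loss in the leads
    return lead_va + device_va                               # plus the meter input burden

# Example: 5 A secondary, 50 ft one-way of #12 AWG (~1.6 ohm per 1000 ft),
# 0.5 VA meter input, checked against an assumed B-0.2 rating (5 VA at 5 A).
total_va = ct_burden_va(5.0, 50, 1.6, 0.5)
rated_va = 5.0
print(f"Burden {total_va:.2f} VA vs rated {rated_va} VA -> "
      f"{'OK' if total_va <= rated_va else 'exceeds rating'}")
```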

Gate 3. Integrated coordination and data model

Use LBNL EMIS guidance to publish a single data model. Names, tags, units, scaling, interval, timezone, and sign conventions are defined once and used everywhere. The MSI, working with Owner Facilities and IT, issues an owner-coordinated Interface Coordination Map that reconciles vendor submittals to the EOR stamped risers and makes interfaces testable.

In plain terms, the ICM shows who talks to whom, where they connect, what they exchange, and how each interface will be proven. It lists endpoints and protocols, the source of truth and any write permissions, required units and intervals, NTP time source, alarm routing, and the pass criteria and screenshots that will serve as evidence.

EOR reviews for conformance to the stamped design. Cx and DACx confirm that test steps, tolerances, and artifacts are clear. Controls contractors then map meter points to BAS or SCADA using the same model so BAS trends match the EIS. The ICM is a coordination document, not a stamped design or means and methods, and it is version-controlled at FAT, pre-energization, post-energization burn-in, and turnover.
Deliverables: Data model and tag dictionary; owner-coordinated Interface Coordination Map; vendor conformance matrices.
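
As a concrete illustration of "defined once and used everywhere," here is a minimal sketch of one data-model entry and a vendor conformance check. The tag names, fields, and values are hypothetical; the real model is whatever the MSI publishes in the tag dictionary.

```python
# A minimal sketch: one slice of a published data model and a conformance
# check against a vendor point list. All names and fields are hypothetical.
DATA_MODEL = {
    "B1.MSB.MTR-01.kW":  {"unit": "kW",  "interval_min": 15, "timezone": "UTC", "sign": "import_positive"},
    "B1.MSB.MTR-01.kWh": {"unit": "kWh", "interval_min": 15, "timezone": "UTC", "sign": "import_positive"},
}

def conformance_issues(vendor_points):
    """Compare a vendor point list to the model and return any mismatches."""
    issues = []
    for name, attrs in vendor_points.items():
        if name not in DATA_MODEL:
            issues.append(f"{name}: not in the data model")
            continue
        for field, expected in DATA_MODEL[name].items():
            if attrs.get(field) != expected:
                issues.append(f"{name}: {field} is {attrs.get(field)!r}, model says {expected!r}")
    return issues

# Example: a submittal with a unit and interval mismatch.
vendor = {"B1.MSB.MTR-01.kW": {"unit": "W", "interval_min": 5,
                               "timezone": "UTC", "sign": "import_positive"}}
for issue in conformance_issues(vendor):
    print(issue)
```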

Gate 4. Factory or panel-shop prove-out

Prove what you can while the covers are off. Verify polarity, ratio, scaling, addressing, and alarming. Label test switches. Record configuration screenshots for interval, demand, and time source.
Deliverables: FAT script and results. Label photos. Exception log.

Gate 5. Installation and prewire verification

Install to the stamp. Photograph P1 to source, A-B-C order, labels, clear access, test switch, and read-only port. Update the as-built riser before energization.
Deliverables: Photo set. Red-lined risers. As-built updates.

Gate 6. Pre-energization tests

Follow NETA ATS. Perform secondary injection through the test switch. Confirm insulation and wiring. Controls contractor completes point-to-point from field to BAS or SCADA staging. Cx witnesses. MSI validates names, units, scaling, and intervals in staging.
Deliverables: Signed secondary-injection reports tied to each circuit. Device configuration screenshots. Cx field forms.
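
The pass or fail arithmetic for the injection test is worth writing down so the witnessed report and the field crew agree on it. A minimal sketch follows, with an assumed 800:5 CT, 2.5 A injected, and a 1 percent tolerance, none of which are prescriptive.

```python
# A minimal sketch of the expected-reading check for a secondary-injection
# test. The CT ratio, injected current, measured value, and tolerance are
# hypothetical; the NETA ATS report and the stamped design set the real ones.
def expected_primary_amps(ratio_primary, ratio_secondary, injected_amps):
    """What the meter should display for a known current injected into the secondary."""
    return injected_amps * (ratio_primary / ratio_secondary)

expected = expected_primary_amps(800, 5, 2.5)   # 800:5 CT, 2.5 A injected -> 400 A
measured = 399.2                                # hypothetical value read at the meter
tolerance_pct = 1.0
error_pct = abs(measured - expected) / expected * 100
print(f"Expected {expected:.1f} A, measured {measured:.1f} A, error {error_pct:.2f}% "
      f"-> {'PASS' if error_pct <= tolerance_pct else 'FAIL'}")
```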

Gate 7. Post-energization proving and burn-in

Compare to a reference analyzer or upstream meter. Then run a 24 to 72 hour burn-in. Use LBNL QA rules for completeness, timeliness, validity, and reasonableness. Investigate and close gaps, time drift, negative kW, and phase swaps.
Deliverables: Comparison window with method and tolerance. Burn-in QA summary with issue log and resolutions.
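
The method and tolerance for the comparison window should be stated before the test, not argued afterward. Here is a minimal sketch of one reasonable method, worst-case percent error over paired samples; the 2 percent tolerance and the sample values are assumptions for illustration.

```python
# A minimal sketch of the post-energization comparative check: paired kW
# samples from the new meter and a reference analyzer over the same window.
def comparative_check(meter_kw, reference_kw, tolerance_pct=2.0):
    """Worst-case percent error against the reference, and pass/fail."""
    errors = []
    for m, r in zip(meter_kw, reference_kw):
        if r == 0:
            continue                          # skip zero-load samples
        errors.append(abs(m - r) / abs(r) * 100)
    worst = max(errors) if errors else 0.0
    return worst, worst <= tolerance_pct

# Example: five simultaneous samples taken during the comparison window.
meter     = [412.0, 398.5, 405.2, 410.8, 401.3]
reference = [410.5, 400.1, 404.0, 409.9, 403.0]
worst, passed = comparative_check(meter, reference)
print(f"Worst-case error {worst:.2f}% -> {'PASS' if passed else 'FAIL'}")
```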

Gate 8. Turnover and post-occupancy QA

Deliver the evidence pack. Turn on EMIS QA rules. Assign ownership for QA alarms and calibration. Provide a one-page field SOP so technicians can re-verify safely without energized work.
Deliverables: Interface map. Data model and tag dictionary. Certificates. FAT and field reports. Final point list. Field SOP. QA alarm routing and escalation.

Make Division 25 the backbone

Engineer of Record. Provide buildable, testable drawings. Specify meter class, CT and PT selections with burden, exact tap points, P1 orientation, phase order, and accessible test switch locations. Stamp the one-lines and details. When the details are explicit, field crews do not guess and Cx can verify without outages.

Master Systems Integrator. Be the bridge, not the builder. Publish the Interface Coordination Map for coordination only and the Data Model for names, tags, units, scaling, interval, timezone, and sign conventions. Reconcile vendor submittals to the EOR drawings. Do not change design intent. Withhold integration sign-off when interfaces or data fail the model.

Controls contractor. Integrate meter points to BAS or SCADA using the Division 25 data model. Apply units, intervals, scaling, and time sync. Prove that BAS trends match EIS values over a defined window. Provide point-to-point screenshots and configuration backups.
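
One way to demonstrate that BAS trends match EIS values is a simple reconciliation over shared timestamps. A minimal sketch follows, assuming the BAS trends in watts while the EIS stores kilowatts; the point values, scale factor, and 1 percent tolerance are illustrative only.

```python
# A minimal sketch of a BAS-to-EIS trend reconciliation over a defined window.
# Timestamps are ISO strings for simplicity; all values are hypothetical.
def reconcile(bas_trend, eis_trend, bas_to_eis_scale=1.0, tolerance_pct=1.0):
    """Align the two trends on shared timestamps and flag samples that disagree."""
    mismatches = []
    shared = sorted(set(bas_trend) & set(eis_trend))
    for ts in shared:
        bas_val = bas_trend[ts] * bas_to_eis_scale      # e.g., W in the BAS -> kW in the EIS
        eis_val = eis_trend[ts]
        if eis_val and abs(bas_val - eis_val) / abs(eis_val) * 100 > tolerance_pct:
            mismatches.append((ts, bas_val, eis_val))
    return len(shared), mismatches

# Example: one of two shared samples disagrees after scaling.
bas = {"2024-01-02T10:00": 402000.0, "2024-01-02T10:15": 398500.0}
eis = {"2024-01-02T10:00": 402.0,    "2024-01-02T10:15": 512.0}
count, bad = reconcile(bas, eis, bas_to_eis_scale=0.001)
print(f"{count} shared samples, {len(bad)} outside tolerance: {bad}")
```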

Contractors and vendors. Install and calibrate per the stamp and approved submittals. Provide factory evidence for polarity, ratio, scaling, addressing, and alarms. Label clearly and leave safe access and test switches.

Testing firm or NETA. Perform acceptance testing and issue traceable, witnessed reports that tie to circuits and devices. Capture instrument calibration certificates and photographs that confirm P1 to source and phase order.

Commissioning. Verify design and submittals, witness factory and field tests, run data QA, and sign only when the trends tell the truth. Convert promises into proof.

Owner Facilities or the EMS and EIS team. Set acceptance thresholds, approve gate artifacts, and own post-occupancy QA. Keep the feedback loop alive so the data stays healthy.

Minimum acceptance to write into Division 25

  • Meter accuracy class per ANSI C12 with certificate at turnover.
  • CT and PT selection, burden, and polarity per IEEE C57.13 with P1 to source and phase photos.
  • Accessible test switch and read-only verification port so proof does not require energized work.
  • Pre-energization secondary-injection results attached and witnessed.
  • Post-energization comparative test within a defined tolerance to a reference meter or analyzer.
  • Complete EIS point list: names, tags, units, scaling, timezone, interval, and system of record.
  • Burn-in period of 24 to 72 hours with no gaps, correct timestamps, no unexplained negatives, and no phase swaps.

Write those seven lines into your spec and you will change outcomes.

Go one step further: Data Assurance Commissioning

Traditional Cx often stops at "the system runs" or "it is reading data." That is not enough. Add explicit scope for Data Assurance Commissioning (DACx) so what the owner sees and uses to make decisions is correct, complete, on time, and retained.

The DACx person or team is not a typical Cx provider. They must be fluent in equipment, controls, data, and electrical systems. This is a new hybrid role that should become the norm if these services are to be delivered well. It takes real time and effort to grow this skill set, and owners should understand how valuable these professionals are. They are rare, like unicorns, and should be compensated accordingly, because the risks they mitigate and the savings they unlock far outweigh their cost. DACx defines acceptance up front, owns the data model, proves meters before and after energization, keeps QA rules alive, and hands over an evidence pack that stands up to audits. In short, DACx turns standards into sequence, sequence into proof, and proof into better decisions on Day 1.

DACx scope at a glance

  • Define acceptance up front. Day 1 thresholds, required artifacts and screens, and who signs each gate.
  • Model the data. One Division 25 model across BAS, EIS, and SCADA for names, tags, units, scaling, interval, timezone, and sign. Artifacts: tag dictionary, point list, mapping tables.
  • Prove before energization. Secondary injection through a test switch with signed results. Configuration screenshots for ratios, vectors, demand, and NTP.
  • Prove after energization. Comparative check to a reference analyzer. Then a 24 to 72 hour burn-in with a QA report for completeness, timeliness, validity, and reasonableness.
  • Keep QA rules live. Automated checks for gaps, flatlines, time drift, and phase or sign errors with a clear escalation path.
  • Turnover that matters. Final interface map. Point list mapped to the data model. Certificates. FAT and field tests. Burn-in QA. One-page field SOP for safe re-verification.
  • After Day 1. Document calibration cadence. Assign ownership of QA alarms. Use a simple RACI so issues do not fall through the cracks.

DACx criteria to include in Division 25

  • Acceptance criteria: no unexplained negatives or phase swaps; timestamps correct; interval and timezone consistent; trends reconcile from panel labels to BAS to EIS over a defined window; all metering points present with correct names, units, and scaling; NTP status shown.
  • Evidence pack: Pre-energization secondary injection results, post-energization comparative checks to a reference meter or analyzer, 24 to 72 hour burn-in QA summary with issues and resolutions.
  • Responsibilities: the contractor provides secondary-injection results before energization, witnessed by Cx; the controls contractor submits screenshots showing names, units, scaling, interval, timezone, NTP, and demand method; the MSI withholds integration sign-off until the data model is met; DACx delivers burn-in QA and the full evidence pack.
  • Signers per gate: Owner accepts, Cx and DACx recommend, MSI and Controls confirm conformance.

Add DACx and you move from "the system runs" or "it is reading data" to "the data is true." That is the difference between dashboards that impress and dashboards that deliver.

What good looks like

Picture a single electrical room on a Tuesday. The EOR’s one-line shows tap locations, P1 orientation, and a hinged test switch mounted where a technician can reach it with both feet on the floor. The MSI’s Interface Coordination Map and data model are posted on the wall. The controls contractor has already mapped points into BAS staging with names, units, and scaling that match the model. The testing firm closes the shorting block, performs secondary injection, and hands Cx a signed report with instrument traceability. After energization, a reference analyzer is clipped on for a comparative check. The team runs a 48 hour burn-in. The EMIS flags one interval gap and a small time offset. The team corrects the NTP setting and reruns the window. The QA report goes green. Operations reviews the trend, nods, and accepts the meter.

That room is not a fantasy. It is a sequence. You can repeat it.

Procurement that rewards proof, not adjectives

Ask each bidder for a short evidence pack you can score.

  • One integrated coordination example that aligns BAS, metering, lighting, fire, security, AV, and IT.
  • A Division 25 point-list sample with names, tags, units, scaling, interval, timezone, and cyber fields mapped to EIS and CMMS.
  • A FAT script and a redacted result showing polarity, ratio, scaling, addressing, and alarms.
  • A pre-energization secondary-injection report.
  • A burn-in summary with completeness and sign checks.
  • A one-page field SOP for safe verification without energized work.
  • A roles map with references where Day 1 data acceptance was achieved.

Score it on examples and clarity of sequence, proven outcomes, data model and QA approach, and team references. Keep contract language performance-based. Require sequence and proof, not means and methods.

Pilot before you scale

Choose one feeder or one AHU and run the full sequence end to end. Prove the riser locations, the point list, the FAT, the pre-energization test, and the burn-in. Fix misses while the room is still open. Roll the pattern to the rest of the project. Update the playbook with each lesson learned.

A short glossary for clarity

EIS or EMIS. The analytics, dashboards, and QA rules that make data useful for decisions.
Data assurance. Proof that data is complete, on time, valid, and reasonable.
P1 to source. CT orientation for correct sign and phase order.
Secondary injection. Controlled test through a test switch to prove scaling and polarity.
Burn-in. A 24 to 72 hour observation period to catch gaps, time drift, sign errors, or phase swaps before acceptance.

A human closing

People make buildings work. Not drawings. Not meters. People. The controls tech who labels a test switch at 2 a.m. The commissioning agent who refuses to sign a trend that lies. The engineer who redraws a riser so a field crew does not have to guess. The operator who wants numbers she can trust because she is on the hook when the lights stay on and the bills arrive.

We do not need a new standard to honor that work. We need a clear, shared playbook that turns good intent into good outcomes. One sequence. Clear handoffs. Evidence at every gate. Built from lessons learned, and improved with every project.

Standards are the parts list. The playbook is the assembly and test manual. Build it once. Use it on every project. Improve it every time. That is how we get Day 1 data that stays good on Day 3,650. And that is how we turn dashboards that impress into dashboards that deliver.
