What Is Metadata?


Simply put

Metadata is “data about data.” In BAS, it gives every point identity, meaning, and context so people and tools can trust it.

Why the one-liner is deceptively simple

“Data about data” sounds obvious, but making dashboards, alarms, work orders, and invoices all agree is hard. It only works when the data about the data is complete, consistent, and connected.

What people think vs what it actually means

  • Think: “We named the point, done.”
    Reality: The same name must appear in drawings, controllers, graphics, CMMS, and analytics.
  • Think: “BACnet has the info.”
    Reality: BACnet tells you where the value lives. It does not tell you what it means.
    Tags are short keywords that say what it is.
    Ontology is the formal map of types and relationships so tools understand whole systems.
    Why this matters: Tags let tools understand a single point. Ontology lets tools understand the building.
  • Think: “We can tell it’s °F from context.”
    Reality: Units must be explicit or trends are meaningless.
  • Think: “Trends exist, we’re good.”
    Reality: Trend interval and one clock must be defined or time series cannot be compared.

What “data about data” really includes

  • Identity: Building → System → Equipment → Point. One clear name everywhere.
  • Meaning: short tags that say what it is. Example tags: sensor, air, supply, temp.
  • Context: how to use it, including unit, trend interval, writable flag, limits, relationships, location, and clock.
  • Mapping: the crosswalk to BACnet, graphics IDs, CMMS asset IDs, history IDs.

30-second example

  • Identity: Building_AHU-05.SAT
  • Meaning: tags = sensor, air, discharge, temp, supply
  • Context: unit °F, trend 5 min, writable false, high 70, low 40, time synced
  • Mapping: BACnet device 4005 AI-32; graphics id ahu05_sat; CMMS tag ASSET-12345.SAT

    BACnet note: “device 4005” is the controller’s unique network address and “AI-32” is Analog Input 32 inside that controller. Together they tell you where to fetch the value (likely the SAT sensor); the name and tags define what it means. AI = Analog Input, not artificial intelligence.
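
The four layers above fit naturally into one machine-readable record. A minimal sketch in Python, using the same hypothetical values as the 30-second example; the field names are illustrative, not a mandated schema:

```python
# One point's metadata as a single machine-readable record.
# Field names and values mirror the 30-second example; not a required schema.
sat_point = {
    "canonical_id": "Building_AHU-05.SAT",                     # identity
    "tags": {"sensor", "air", "discharge", "temp", "supply"},  # meaning
    "context": {
        "unit": "degF",
        "trend_interval_s": 300,   # 5 minutes
        "writable": False,
        "high_limit": 70,
        "low_limit": 40,
        "time_synced": True,
    },
    "mapping": {
        "bacnet": {"device": 4005, "object": "AI-32"},  # where to fetch it
        "graphics_id": "ahu05_sat",
        "cmms_id": "ASSET-12345.SAT",
    },
}

# A downstream tool can now answer "what is this?" without guessing.
assert "temp" in sat_point["tags"]
assert sat_point["context"]["unit"] == "degF"
```

Note how the BACnet address lives only in the mapping layer: it tells tools where the value is, while the tags and context say what it means.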

Why data quality matters

Bad data looks cheap and becomes very expensive. Poor naming, missing units, and unsynced clocks drive nuisance alarms, slow troubleshooting, bad energy reports, confused work orders, and “mystery” change orders. Good metadata is the difference between guessing and managing.

How commissioning assures data quality

Cx treats data as a deliverable. The goal is simple: the number you see on dashboards, alarms, work orders, and invoices is the same number, with the same name, unit, and time.

Start with the end in mind in the OPR

  • Naming rule with examples per equipment type
  • Core tag dictionary with allowed extensions
  • Required units, trend plan, single time source, acceptance tests

Lock it into Division 25

  • Required point table and mapping file to BACnet, graphics, CMMS, and FDD
  • RACI for who names, who tags, who maps, who verifies
  • Submittal timing and change-log format

Build and test

  • Point-to-point (P2P) checkout includes names, units, ranges, writable flags
  • Alarm limits documented; trends enabled at the specified interval
  • NTP time sync verified on servers and controllers

Verify with evidence

  • Tag coverage and unit presence pass rate
  • Trend intervals match spec; one clock confirmed
  • Cross-checks: canonical ID ↔ BACnet object ↔ graphics ID ↔ asset ID ↔ history ID
  • Spot tests under normal and fault conditions
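
The cross-check bullet can be automated: given the crosswalk table, confirm that every point has an ID in each downstream system and that no ID is duplicated. A sketch with one invented sample row, assuming a simple list-of-dicts table:

```python
# Round-trip crosswalk check: every point must carry one ID per system,
# and no ID may appear twice. Sample row is invented for illustration.
crosswalk = [
    {"canonical_id": "Building_AHU-05.SAT", "bacnet": "4005/AI-32",
     "graphics_id": "ahu05_sat", "asset_id": "ASSET-12345.SAT",
     "history_id": "hist_0192"},
    # ... one row per point
]

def crosswalk_errors(rows):
    """Return a list of missing or duplicated IDs across all systems."""
    errors = []
    columns = ["canonical_id", "bacnet", "graphics_id", "asset_id", "history_id"]
    for col in columns:
        seen = set()
        for row in rows:
            value = row.get(col)
            if not value:
                errors.append(f"{row.get('canonical_id', '?')}: missing {col}")
            elif value in seen:
                errors.append(f"{col} duplicated: {value}")
            else:
                seen.add(value)
    return errors

assert crosswalk_errors(crosswalk) == []  # a clean table passes
```

The same function doubles as a submittal gate: a nonempty error list means the mapping file is not ready for approval.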

Turnover package

  • As-built point table and mapping file
  • Calibration certs, time-sync screenshots, 7-day trend pack
  • Exceptions list with fixes and owners

Operate and assure

  • MBCx and FDD onboarding with watchlists
  • Change control for names, tags, and mappings
  • Periodic data audits tied to warranty or service

Pay for proof, not promises

  • Tie milestone payments to passing these data QA checks.

Make it “Crystal Clear,” not “Clear as Mud”

Spell out roles, responsibilities, and acceptance criteria in plain language. Use a simple RACI so everyone knows who does what.

RACI keys
R = Responsible (does the work)
A = Accountable (final yes or no)
C = Consulted (inputs and reviews)
I = Informed (kept in the loop)

Who’s who (plain English)
Owner = pays for and accepts the work
MSI = Master Systems Integrator, steward of the data model across platforms
Controls = BAS contractor
Engineer = Engineer of Record
Cx = Commissioning provider
IT = owner’s IT/OT team

Example RACI for metadata and trends

  • Tag dictionary ownership: MSI R, Owner A, Engineer C, Controls I
    Acceptance: published v1.0 with examples per system, versioned in a shared location.
  • Point table delivery: Controls R, MSI A, Cx C, Owner I
    Acceptance: table includes canonical_id, display_name, unit, kind, writable, min/max, BACnet device/type/instance, trend interval, alarm limits, location, relationships.
  • Time sync policy and proof: MSI R, IT A, Controls C, Cx C
    Acceptance: NTP source documented, screenshots of sync status, skew less than 2 seconds at BAS server and field controllers.
  • Trend plan and retention: Controls R, Cx A, MSI C, Owner I
    Acceptance: list of trending points with intervals and retention, 7-day sample pack exported and reviewed.
  • FDD onboarding: MSI R, Cx A, Controls C, Owner C
    Acceptance: mapping file loaded, tag coverage threshold met, initial ruleset enabled, watchlists created, findings review held.
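
The skew acceptance test in the time-sync row reduces to comparing each device's clock offset against the 2-second limit. A sketch assuming offsets have already been captured from sync-status exports; device names and values are invented:

```python
# Max clock skew across BAS server and controllers vs a common NTP reference.
# Offsets (seconds) would come from sync-status exports; values are invented.
reported_offsets = {
    "BAS-server": 0.12,
    "AHU-05-controller": -0.8,
    "VAV-2-12-controller": 1.4,
}

MAX_SKEW_S = 2.0  # acceptance limit from the spec

worst = max(abs(o) for o in reported_offsets.values())
failing = [d for d, o in reported_offsets.items() if abs(o) > MAX_SKEW_S]

assert worst <= MAX_SKEW_S and not failing  # all devices within limit
```

Capturing the offsets alongside the screenshots turns "time is synced" from a claim into evidence.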

Add these two to close common gaps

  • Mapping file maintenance (IDs ↔ BACnet ↔ graphics ↔ CMMS/FDD): MSI R/A, Controls C, Cx C, Owner I
    Acceptance: round-trip crosswalk delivered and updated through turnover.
  • Change control for names, tags, mappings: MSI R, Owner A, Controls C, Cx C, IT C
    Acceptance: change log with who/what/when/why, monthly diff report, downstream systems updated.
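
The monthly diff report in the change-control row is just a comparison of two versions of the mapping file, listing what was added, removed, or remapped. A sketch with hypothetical entries:

```python
def mapping_diff(old, new):
    """Compare two {canonical_id: bacnet_ref} mapping versions."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(k for k in set(old) & set(new) if old[k] != new[k])
    return {"added": added, "removed": removed, "changed": changed}

# Hypothetical v1 and v2 of the mapping file.
v1 = {"AHU-05.SAT": "4005/AI-32", "AHU-05.RAT": "4005/AI-33"}
v2 = {"AHU-05.SAT": "4005/AI-32", "AHU-05.RAT": "4005/AI-40",
      "AHU-05.MAT": "4005/AI-34"}

report = mapping_diff(v1, v2)
# MAT was added, RAT was remapped, nothing was removed.
assert report == {"added": ["AHU-05.MAT"], "removed": [],
                  "changed": ["AHU-05.RAT"]}
```

Any entry in the report without a matching change-log line is exactly the kind of silent rename that breaks downstream systems.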

If there is no formal MSI: name a Data Integration Lead who inherits MSI duties above. Cx independently verifies.

Owner playbook to avoid the GMP surprise

  • Ask for examples early: one sample AHU and one VAV with full point table, tags, units, BACnet mapping, trend intervals, and alarm limits.
  • Request mockups: a graphics screenshot labeled with canonical IDs and tags, plus a 7-day sample trend showing the right units and time sync.
  • Require a sample RACI: for metadata, time sync, trends, analytics, turnover.
  • Set gateways: at 30-60-90, review and approve examples before the rest is replicated.
  • Hold the line: “Crystal Clear” specs, submittal checklists, and pay-for-proof milestones prevent the “this isn’t what I expected” moment and stop change orders from flowing.

Owner quick test you can use today

  • Show the naming rule with two examples per equipment type.
  • Where is the approved tag list? Provide the core dictionary and any site extensions.
  • Give me the mapping file that links Canonical ID to BACnet, graphics, CMMS, and FDD.
  • What is the time source and how was sync verified? Include screenshots and max skew.
  • Which points are trending and at what intervals? Show at least 7 days of trends.
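
The trend question at the end of the list is easy to verify from an export: check the spacing between consecutive timestamps against the specified interval. Timestamps and tolerance below are fabricated for illustration:

```python
from datetime import datetime, timedelta

# Fabricated trend-export timestamps; spec says one sample every 5 minutes.
samples = [
    datetime(2024, 3, 1, 8, 0),
    datetime(2024, 3, 1, 8, 5),
    datetime(2024, 3, 1, 8, 10),
    datetime(2024, 3, 1, 8, 15),
]

SPEC = timedelta(minutes=5)
TOLERANCE = timedelta(seconds=30)  # allow minor collection jitter

gaps = [b - a for a, b in zip(samples, samples[1:])]
bad = [g for g in gaps if abs(g - SPEC) > TOLERANCE]

assert not bad  # every gap matches the specified interval
```

Run this over the 7-day sample pack and the "trends exist, we're good" claim becomes a pass/fail result.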

Common failure modes

  • Inconsistent names or late renames with no change log
  • Missing units or mixed units across systems
  • Mixed time zones or unsynced clocks
  • No mapping file across Canonical ID ↔ BACnet ↔ graphics ↔ CMMS/FDD
  • Tags that do not match behavior or site conventions
  • Trends enabled but wrong interval or retention
  • Vendor-specific labels that ignore the tag dictionary the spec requires (for example, Haystack)
  • Graphics built before IDs and tags were approved

How to avoid this up front

  • One steward: name the MSI in the contract as R/A for the data model.
  • Submittal gates: no graphics or programming until the point table and tag coverage are approved.
  • Pilot first: require a 1-AHU or 10-point sample pack with naming, tags, units, trend plan, and mapping file before full build.
  • Acceptance criteria:
    • 100% units present, 95% tag coverage minimum
    • Trend intervals match spec, one time source, skew < 2 seconds
    • Round-trip crosswalk delivered (CSV or JSON)
  • Payment tied to proof: release milestones only after the above pass Cx checks.
  • Change control: any rename or tag change requires a change-log entry and updated mapping file.
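
The numeric acceptance criteria above can be computed directly from the point table. A sketch with invented sample points, showing how a table that looks "mostly done" can still fail the gates:

```python
# Compute unit presence and tag coverage against the acceptance thresholds.
# Sample points are invented; a real run would read the as-built point table.
points = [
    {"id": "AHU-05.SAT", "unit": "degF", "tags": {"sensor", "air", "temp"}},
    {"id": "AHU-05.SF-S", "unit": "", "tags": {"fan", "run", "sensor"}},
    {"id": "VAV-2-12.ZNT", "unit": "degF", "tags": set()},
]

unit_rate = sum(1 for p in points if p["unit"]) / len(points)
tag_rate = sum(1 for p in points if p["tags"]) / len(points)

passes = unit_rate == 1.0 and tag_rate >= 0.95

# This sample fails both gates: one point lacks a unit, one lacks tags.
assert not passes
```

Tying the milestone payment to `passes` being true is the "pay for proof" mechanism in code form.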

*If the spec requires Haystack (or similar) and a contractor submits non-conforming labels, treat it as a non-compliant submittal and require correction, not a change order.

If you are already off track: recovery plan

  1. Freeze new graphics and naming until a baseline is built.
  2. Inventory: export all live points and create a single canonical point table.
  3. Map: build the crosswalk Canonical ID ↔ BACnet device/type/instance ↔ graphics ID ↔ CMMS/FDD IDs.
  4. Retag: apply the approved tag dictionary (Haystack style) and document exceptions.
  5. Units and clocks: normalize units and enforce NTP; capture sync screenshots.
  6. Trends: enable per spec; export a 7-day sample for review.
  7. Validate: Cx spot tests; target 95% tag coverage, 100% unit presence, skew < 2 seconds.
  8. Unfreeze graphics and rebuild only after the table passes.
  9. Lock: put the mapping file under version control and require monthly diffs.

Fast-lane options while the site runs

  • Map-now, rename-later: keep live BACnet names; expose Canonical IDs via mapping for analytics and CMMS; schedule renames by zone.
  • Aliases: add Haystack tags and Canonical IDs as aliases where renaming risks downtime.
  • Sprints: time-box corrections per system (for example, AHUs this week, VAVs next).

*Fast-lane options exist to keep operations stable while we correct metadata. They are temporary, owner-approved, and expire on a fixed date. They do not change the spec or the required end state.
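
The map-now, rename-later option works because analytics and CMMS never need the live controller name directly; they resolve it through an alias table until renames are scheduled. A sketch with invented legacy names:

```python
# Live BACnet names stay untouched; tools resolve canonical IDs through an
# alias table until each zone is renamed. Legacy names are invented.
aliases = {
    "Building_AHU-05.SAT": "AHU5_SA_TMP",   # legacy controller label
    "Building_AHU-05.RAT": "AHU5_RA_TMP",
}

def live_name(canonical_id):
    """Return the live point name, falling back to the canonical ID
    once a zone has been renamed and its alias retired."""
    return aliases.get(canonical_id, canonical_id)

assert live_name("Building_AHU-05.SAT") == "AHU5_SA_TMP"   # not yet renamed
assert live_name("Building_AHU-06.SAT") == "Building_AHU-06.SAT"  # renamed
```

Because the alias table is temporary and owner-approved, retiring an entry is itself a change-log event, which keeps the fast lane from quietly becoming the end state.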

Bottom line

Metadata works when identity, meaning, context, and mapping are defined up front, verified by Cx, and owned in a clear RACI. That’s how you get performance without surprises.
