A digital twin is a 3D model of a physical asset with live operational data mapped onto it. A warehouse where bin locations glow red when stock is low. A factory floor where machine stations pulse with throughput data. A building where HVAC zones show temperature readings overlaid on the floor plan.
The value is spatial context. A table of sensor readings tells you that sensor 47 is above threshold. A digital twin shows you where sensor 47 is, what it is attached to, what is adjacent to it, and what the trend looks like. Operators develop spatial intuition about their facility that spreadsheets cannot provide.
Sensor Monitor
A Three.js 3D floor plan with sensors that change state over time. Click any sensor to see its type, current value, and recent history. Orbit the camera to explore the facility from different angles. In a production digital twin, this floor plan becomes a full 3D model with the same state management pattern.
The Constraint: Live Data on Complex Geometry
A digital twin combines two hard problems: rendering complex 3D models (building geometry, equipment details, piping runs) and displaying live data that updates every few seconds.
Rendering Scale
A commercial building model exported from Revit can contain millions of polygons. A warehouse with 10,000 bin locations needs 10,000 interactive elements. A factory with 200 machines needs 200 sensor overlays updating in real-time. Raw CAD geometry is far too heavy for the browser.
Data Volume
A facility with 500 sensors, each reporting every 10 seconds, generates 50 readings per second. Each reading must find its corresponding 3D object, update the visual state (colour, label, animation), and do this without blocking the render loop.
These two problems interact. A frame budget spent on complex geometry leaves less room for data overlay updates. A data update that triggers material creation eats into the rendering budget. The architecture must balance both.
The Naive Approach
The proof-of-concept digital twin works with one floor and 10 sensors: load the full model as exported, give every object its own material, and search the scene graph for the right object on each update. None of that survives a real facility.
Model Preparation: Layered Architecture
Break the facility model into layers that load independently and serve different purposes. Each layer has different quality requirements and update frequencies.
Structure layer
Walls, floors, roof. Static geometry, loaded once, never changes. Aggressively decimated. This is context, not the focus. Load first as the visual anchor.
Equipment layer
Machines, HVAC units, bins, racks. Each is a named group in the scene graph, addressable by equipment ID. Geometry optimised per type using instancing where items are identical (warehouse bins, desk positions, light fixtures).
Data overlay layer
Visual indicators (colour coding, labels, icons) that represent sensor state. These are the dynamic elements. Kept separate from model geometry so updates are cheap. Material changes, opacity shifts, and label updates happen here.
Export layers as separate glTF files. Load the structure layer first (users see the building immediately), then load equipment and data overlays progressively. Users perceive responsiveness even on slow connections.
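A minimal sketch of that loading order using Three.js's GLTFLoader; the layer file names and paths are placeholders for whatever your export pipeline produces.

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const loader = new GLTFLoader();

// Structure first: users see the building shell immediately.
async function loadFacility() {
  const structure = await loader.loadAsync('/models/structure.glb');
  scene.add(structure.scene);

  // Equipment and data overlays stream in afterwards, in priority order.
  const [equipment, overlays] = await Promise.all([
    loader.loadAsync('/models/equipment.glb'),
    loader.loadAsync('/models/overlays.glb'),
  ]);
  scene.add(equipment.scene, overlays.scene);
}

loadFacility();
```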
Sensor Data Architecture
Use WebSocket or Server-Sent Events for live data. The backend pushes only changed sensor values, not the full dataset on every update. The data flow is a pipeline from physical sensor to visual state.
1. Backend receives a reading from the IoT gateway (MQTT, HTTP, or proprietary protocol).
2. Evaluates thresholds and determines the sensor state: normal, warning, alarm, or offline.
3. Pushes the state change to connected frontends via WebSocket: `{ sensorId: "HVAC-03", state: "warning", value: 28.4 }`.
4. Frontend maps sensorId to a 3D object using a pre-built lookup table (equipment ID to Three.js object reference): O(1) lookups, not scene traversals.
5. Updates the visual state: change material, update label text, trigger an attention animation if the state is alarm.
The lookup table is built once when the model loads: traverse the scene graph, match equipment names or userData to sensor IDs, store references in a Map. Subsequent updates are O(1).
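A sketch of both steps, assuming `equipmentLayer` is the loaded equipment group and each sensor-bearing object carries its sensor ID in `userData` at export time. The WebSocket URL is a placeholder, and `applyVisualState` is the update function sketched under Visual State Encoding below.

```js
// Build the lookup once after the equipment layer loads.
const sensorObjects = new Map();
equipmentLayer.traverse((obj) => {
  if (obj.userData.sensorId) {
    sensorObjects.set(obj.userData.sensorId, obj);
  }
});

// The backend pushes only changed sensor states.
const socket = new WebSocket('wss://example.com/sensor-stream');
socket.onmessage = (event) => {
  const { sensorId, state, value } = JSON.parse(event.data);
  const object = sensorObjects.get(sensorId); // O(1), no scene traversal
  if (object) applyVisualState(object, state, value);
};
```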
Visual State Encoding
Sensor states need clear, unambiguous visual encoding. Use a small set of shared materials (one per state) rather than creating materials per object. Assign objects to the appropriate material when state changes. Material count stays constant regardless of sensor count.
The offline state is often overlooked. "No data" looks like "everything is fine" if offline sensors default to a normal appearance. A desaturated, slightly transparent treatment makes the absence of data visible.
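A sketch of the shared-material approach, including the offline treatment. Colours are illustrative, and it assumes each sensor maps to a single mesh.

```js
// One shared material per state; meshes are reassigned on state change,
// so the material count stays constant regardless of sensor count.
const stateMaterials = {
  normal:  new THREE.MeshStandardMaterial({ color: 0x4caf50 }),
  warning: new THREE.MeshStandardMaterial({ color: 0xffb300 }),
  alarm:   new THREE.MeshStandardMaterial({ color: 0xe53935 }),
  // Offline is desaturated and slightly transparent so "no data"
  // never looks like "everything is fine".
  offline: new THREE.MeshStandardMaterial({ color: 0x9e9e9e, transparent: true, opacity: 0.4 }),
};

function applyVisualState(mesh, state, value) {
  mesh.material = stateMaterials[state] ?? stateMaterials.offline;
  mesh.userData.lastValue = value; // surfaced later in labels and the detail panel
}
```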
Navigation
Digital twins need purpose-built navigation, not generic orbit controls. Operators are looking for specific equipment and zones, not exploring freely. A floor picker with clip planes slices multi-storey buildings to reveal interior spaces. Clicking equipment zooms the camera and opens a detail panel with sensor history. Saved preset views (loading dock, server room, production line A) give operators one-click access to the positions they use most. A 2D minimap in the corner shows the camera's current position and lets users jump to a location without navigating through 3D space.
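Two of those behaviours sketched with Three.js: a floor picker driven by a global clipping plane, and saved preset views that move an OrbitControls camera. Floor heights and camera positions are illustrative.

```js
// Floor picker: a horizontal clipping plane hides everything above the
// chosen storey. Heights (in metres) are illustrative.
const floorTopHeights = { ground: 4, first: 8, second: 12 };

function showFloor(renderer, floorName) {
  const cutHeight = floorTopHeights[floorName];
  // Normal points down, so fragments above cutHeight are clipped away.
  renderer.clippingPlanes = [new THREE.Plane(new THREE.Vector3(0, -1, 0), cutHeight)];
}

// Saved preset views: one-click camera positions for common tasks.
const presetViews = {
  loadingDock: { position: new THREE.Vector3(30, 12, 5), target: new THREE.Vector3(25, 0, 0) },
  serverRoom:  { position: new THREE.Vector3(-10, 6, 18), target: new THREE.Vector3(-12, 2, 10) },
};

function goToPreset(camera, controls, name) {
  const { position, target } = presetViews[name];
  camera.position.copy(position);
  controls.target.copy(target); // OrbitControls look-at target
  controls.update();
}
```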
Alert Integration
The 3D view is one channel, not the sole alert mechanism. Not all operators watch the 3D view continuously. Alarms must also appear in tables, notification panels, and potentially external systems. The 3D twin provides spatial context when an operator investigates, not as the only way to learn about a problem.
- Visual state in 3D: colour change, animation, and spatial highlighting for immediate visual context.
- Alarm panel: HTML overlay or side panel listing active alarms with timestamps and sensor details.
- External notifications: browser notifications, email, Slack, or PagerDuty for critical alarms that need immediate response.
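A sketch of fanning a single alarm out to all three channels. The alarm shape, the `#alarm-panel` element, and the `critical` flag are assumptions; `sensorObjects` and `applyVisualState` come from the earlier sketches, and email/Slack/PagerDuty dispatch would happen server-side.

```js
function handleAlarm(alarm) {
  // 1. Visual state in 3D: immediate spatial context.
  const object = sensorObjects.get(alarm.sensorId);
  if (object) applyVisualState(object, 'alarm', alarm.value);

  // 2. Alarm panel: HTML overlay listing active alarms.
  const row = document.createElement('li');
  row.textContent = `${new Date(alarm.timestamp).toLocaleTimeString()} ` +
    `${alarm.sensorId}: ${alarm.value}`;
  document.querySelector('#alarm-panel').prepend(row);

  // 3. External notification for critical alarms (browser notification shown here).
  if (alarm.critical && Notification.permission === 'granted') {
    new Notification(`Alarm: ${alarm.sensorId}`, { body: `Value ${alarm.value}` });
  }
}
```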
Implementation Patterns
Three additional patterns address common requirements in production digital twin deployments.
IFC Loading
Building Information Models use the IFC format. Pre-process IFC to glTF on the server using IfcConvert. Serve the optimised glTF to the browser. Keep the IFC source for metadata queries.
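A server-side sketch of that pre-processing step in Node.js, assuming IfcConvert (from IfcOpenShell) is on the PATH and built with glTF/GLB output support; file names are placeholders.

```js
// Convert the IFC source to a GLB for the browser; keep the IFC for metadata.
const { execFile } = require('node:child_process');

execFile('IfcConvert', ['building.ifc', 'building.glb'], (error) => {
  if (error) throw error;
  console.log('building.glb ready to serve; building.ifc retained for metadata queries');
});
```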
LOD for Facilities
Overview: building outlines, equipment as simplified blocks, sensors as dots. Zone: equipment geometry within the visible zone. Detail: full component geometry for the focused item.
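A sketch of those three levels using Three.js's built-in LOD object. The switch distances and input meshes are illustrative; the renderer selects the active level automatically each frame.

```js
// One LOD object per equipment item: detail up close, a simplified block
// at zone range, a dot/sprite at overview range. Distances in metres.
function buildEquipmentLOD(detailMesh, blockMesh, dotSprite) {
  const lod = new THREE.LOD();
  lod.addLevel(detailMesh, 0);    // Detail: full component geometry
  lod.addLevel(blockMesh, 25);    // Zone: simplified block
  lod.addLevel(dotSprite, 100);   // Overview: a dot
  return lod;
}
```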
Time-Series Overlay
Clicking equipment shows sensor history as a small 2D chart in a floating panel. Spatial (3D) plus temporal (chart) context for diagnosis.
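A sketch of that panel's chart, assuming a hypothetical history endpoint (`/api/sensors/:id/history`) and a `<canvas>` element inside the floating panel.

```js
async function showHistory(sensorId) {
  const response = await fetch(`/api/sensors/${sensorId}/history?minutes=60`);
  const readings = await response.json(); // assumed shape: [{ t, value }, ...]
  if (readings.length < 2) return;

  const canvas = document.querySelector('#history-sparkline');
  const ctx = canvas.getContext('2d');
  ctx.clearRect(0, 0, canvas.width, canvas.height);

  const values = readings.map((r) => r.value);
  const min = Math.min(...values);
  const range = Math.max(...values) - min || 1; // avoid division by zero

  // Scale readings into the canvas and draw a simple polyline sparkline.
  ctx.beginPath();
  readings.forEach((r, i) => {
    const x = (i / (readings.length - 1)) * canvas.width;
    const y = canvas.height - ((r.value - min) / range) * canvas.height;
    i === 0 ? ctx.moveTo(x, y) : ctx.lineTo(x, y);
  });
  ctx.stroke();
}
```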
| What operators see | Sensor table | Digital twin |
|---|---|---|
| Alert notification | "HVAC-03 warning: 28.4 °C" | Floor 2, east wing, amber glow on unit |
| Context | Scroll to find HVAC-03 in table | See adjacent units, proximity to load |
| Trend | Open separate chart page | Click unit, sparkline appears in situ |
| Spatial awareness | None (requires facility knowledge) | Built into the interface |
The Business Link
A digital twin gives operations teams spatial awareness of their facility without being physically present. Operators see where and what, not just that. Spatial context reduces diagnosis time from minutes to seconds: instead of scrolling a sensor table to find HVAC-03, the operator sees it glowing amber on floor 2, east wing, next to the server room. Equipment placement, workflow routing, and capacity analysis all benefit from visual context rather than spreadsheet coordinates. For distributed facilities, out-of-hours monitoring, and multi-site management, a digital twin provides operational awareness without physical walkthroughs.
Build a Digital Twin
We build digital twin interfaces that overlay real-time sensor data on 3D models of your facility. WebSocket integration, visual state encoding, purpose-built navigation, and alert systems that give operators spatial awareness.
Let's talk about your facility →