HVAC Duct Machinery for Data Centers
Hyperscale, colocation and AI-infrastructure data centers drive the tightest HVAC duct specification in the commercial market — SMACNA Seal Class A, EN 1507 Class D, ±0.2 mm flange tolerance, and increasingly stainless steel for liquid-cooled halls. This page covers the standards, the math behind the specification, and the SBKJ machines that fit.
Why data center duct is a different specification
A commercial office building leaks 10–15 percent of its supply air into the ceiling plenum and nobody notices — the building still works, the occupants are still comfortable, and the energy cost is absorbed into the base operating budget. A hyperscale data center cannot absorb that loss. At 30 MW of IT load the cooling air has to hit the server inlet at the specified volume and temperature, every second, forever. Bypass air that leaks into the ceiling or through unsealed flange joints is pure waste: it raises the CRAH fan work required to deliver the target airflow, it raises the return-air temperature set point, and it directly raises PUE. At that scale, 5 percent extra fan power at a PUE of 1.3 translates to roughly 130–180 kW of continuous parasitic load, which is USD 115–155 thousand per year in wasted electricity at USD 0.10 per kWh. That is why hyperscale operators specify SMACNA Seal Class A, and why the duct workshop choice matters.
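The waste arithmetic above can be reproduced in a few lines. The 2.6–3.6 MW CRAH fan-power figure is an assumption chosen to match the quoted 130–180 kW parasitic range; everything else follows the numbers in the paragraph.

```python
# Annual cost of the parasitic fan load caused by duct leakage.
# Assumed: 30 MW IT load at PUE 1.3 implies roughly 2.6-3.6 MW of
# CRAH fan power (an assumption); 5 % extra fan work; USD 0.10/kWh.
HOURS_PER_YEAR = 8760

def parasitic_cost_usd(fan_power_kw, extra_fraction, tariff_usd_per_kwh=0.10):
    """Return (continuous parasitic kW, annual electricity cost in USD)."""
    parasitic_kw = fan_power_kw * extra_fraction
    return parasitic_kw, parasitic_kw * HOURS_PER_YEAR * tariff_usd_per_kwh

low_kw, low_cost = parasitic_cost_usd(2600, 0.05)    # ~130 kW, ~USD 114k/yr
high_kw, high_cost = parasitic_cost_usd(3600, 0.05)  # ~180 kW, ~USD 158k/yr
print(f"{low_kw:.0f}-{high_kw:.0f} kW parasitic, "
      f"USD {low_cost / 1e3:.0f}k-{high_cost / 1e3:.0f}k per year")
```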
The standards that govern data center HVAC duct
- SMACNA HVAC Duct Construction Standards (3rd ed.) — Seal Class A for supply and return air serving IT load, Class B for general building and emergency ventilation. Class A allows a maximum leakage rate of roughly 2.4 L/s per square metre of duct surface area at 250 Pa static pressure, equivalent to CL3 in the old SMACNA leakage classification system.
- EN 1507 (rectangular) and EN 12237 (circular) — European equivalent, with Class D the tightest class corresponding to Seal Class A. Commonly specified on European colocation builds in Frankfurt, Dublin, Amsterdam and Stockholm.
- ASHRAE TC 9.9 Thermal Guidelines for Data Processing Environments — governs the supply-air temperature and humidity envelope at the server inlet (typically 18–27 °C dry bulb, 5.5 °C dew point lower limit, 60 percent RH upper limit for Class A1 equipment). Duct design and construction affect the delivered conditions at the rack inlet.
- Uptime Institute Tier Classification — Tier III and Tier IV require concurrently maintainable and fault-tolerant cooling, which usually means redundant CRAH units each with their own duct riser. Tier rating drives duct quantity more than duct specification, but it also increases scrutiny of leakage class compliance because failure of one riser has to be recoverable.
SMACNA Seal Class A in simple math
The Seal Class A limit is 2.4 L/s per square metre of duct surface at 250 Pa. For a typical hyperscale data center with roughly 15,000 square metres of supply and return duct surface area (including risers, branches and CRAH plenum), the total allowable leakage is about 36,000 L/s or 36 m³/s. At a delivered supply airflow of roughly 500 m³/s for the full 30 MW load, that is a hard cap of 7.2 percent total system leakage. Miss the Seal Class A target and you are looking at 12–18 percent leakage in practice, which is the difference between a PUE of 1.30 and a PUE of 1.38 — roughly 2.4 MW of extra continuous facility load on a 30 MW facility. The duct workshop is the cheapest place to fix this. Trying to fix it in commissioning is expensive, slow and usually incomplete.
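The leakage budget reduces to a short calculation. This sketch simply restates the figures above (15,000 m² of duct surface, 500 m³/s of delivered supply air):

```python
# SMACNA Seal Class A leakage budget for the worked example above.
SEAL_CLASS_A_LPS_PER_M2 = 2.4  # L/s per m2 of duct surface at 250 Pa

def leakage_budget(duct_surface_m2, supply_airflow_m3s):
    """Return (allowable leakage in m3/s, fraction of delivered supply air)."""
    allowable_m3s = duct_surface_m2 * SEAL_CLASS_A_LPS_PER_M2 / 1000.0
    return allowable_m3s, allowable_m3s / supply_airflow_m3s

allowable, fraction = leakage_budget(15_000, 500)
print(f"{allowable:.0f} m3/s allowable = {fraction:.1%} of supply")  # 36 m3/s = 7.2%
```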
Why the machine matters
Seal Class A is achievable by hand in well-run traditional shops with skilled craftsmen, but at hyperscale volume (8,000–15,000 square metres of duct per building, delivered in 6–12 months) the dimensional consistency required is beyond what manual fabrication can sustain. The critical machine specifications are:
- TDF flange-bend tolerance of ±0.2 mm across the full 1600 or 2000 mm duct width. Loose tolerance on the flange directly causes gasket compression variation, which is the dominant source of transverse-joint leakage.
- Calibrated longitudinal-seam beading station with electronic position feedback. Pittsburgh-lock formed on a worn lockformer with slop in the anvil is the second-biggest source of field leakage.
- CNC repeatability of ±0.1 mm on duct length. Out-of-square duct cannot be joined tightly no matter how good the flange is.
- Auto notching and seam forming in a single pass to eliminate the handling errors that accumulate when duct moves between stations manually.
The SBAL-V auto duct production line meets all four of these specifications as standard. The SBAL-III meets three of them and is cost-effective for colocation projects under roughly 20 MW IT load, where the surface area is smaller and the leakage-class economics are less punishing. The SBAL-II is suitable for enterprise data centers (under 5 MW) where Seal Class B may be acceptable.
Recommended SBKJ machines for data center work
- SBAL-V Auto Duct Production Line — ±0.2 mm TDF flange tolerance, 2,000–2,500 m² per day single-shift throughput, handles 0.5–1.5 mm galvanized or stainless steel, 200–2,500 mm duct width. The standard recommendation for hyperscale and large colocation projects.
- SBAL-III Auto Duct Production Line — ±0.3 mm flange tolerance, 1,200–1,600 m² per day throughput. Fits enterprise and smaller colocation builds where Seal Class A is specified but daily volume does not justify the SBAL-V. Also the typical choice for contractors serving multiple smaller data center projects in parallel.
- SBTF-1602 Spiral Tubeformer — Φ80–Φ1600 mm spiral round duct. The default choice for CRAH return plenum, supply risers and cable-tray adjacent round duct. Stainless-capable configuration available for liquid-cooled halls.
- SBTF-2020 Large-Diameter Spiral Tubeformer — Φ200–Φ2000 mm range for oversized plenum supply duct serving multi-CRAH manifolds. Less common but specified on the largest hyperscale builds.
- SBPC Plasma Cutting Machine — for complex branch take-offs, CRAH penetrations and custom fittings that fall outside the SBAL's standard tooling range.
Liquid cooling and the stainless steel specification
The shift toward AI infrastructure and rack densities above 40 kW per cabinet has pushed a growing share of new-build data centers to liquid cooling — rear-door heat exchangers, direct-to-chip cold plates, or single-phase immersion tanks. Liquid cooling does not eliminate air-side ventilation (humidity control, gaseous-contaminant removal and general ventilation still need supply and return duct), but it does change the risk profile. Close-coupled cooling raises localised humidity, condensate can form on cold metal surfaces, and galvanized duct in that environment develops white rust within 2–5 years. Stainless steel duct (typically 304 grade, sometimes 316 for coastal sites) is the standard mitigation.
The SBAL-V handles 0.5–1.5 mm stainless with a roller swap and a tooling adjustment that adds roughly 4 hours of line setup time but does not require a second machine. The SBTF-1602 and SBTF-2020 both accept stainless coil with the optional stainless-roller kit. Contractors serving the AI-infrastructure segment are increasingly ordering SBKJ machines in stainless-capable configuration as the base specification, then dropping back to galvanized on traditional projects.
Hot-aisle/cold-aisle containment and duct routing
Hot-aisle and cold-aisle containment has been the dominant data center cooling architecture for over a decade, and it constrains duct routing in specific ways. Supply-air plenum feeds the cold aisle through perforated floor tiles or overhead drop-down plenum. Return-air plenum collects from the hot aisle at ceiling level through either a dedicated return duct or a plenum ceiling above the containment cap. Both routes require tight duct because any leakage short-circuits the containment and reintroduces hot-aisle air into the cold-aisle supply path — the single biggest driver of hotspot formation and the most common reason a nominally well-designed data center runs 3–5 °C hotter than design.
From a machine-specification perspective this means two things: the duct forming tolerance has to be tight enough that transverse joints actually seal against a standard TDF gasket without remedial caulk, and the duct dimensions have to match the containment enclosure tolerances so the supply plenum sits flush against the cold-aisle ceiling. Both requirements are easier to meet with a CNC-controlled SBAL-V than with a manually indexed lockformer workflow.
PUE sensitivity and the business case for tight duct
The power usage effectiveness (PUE) of a data center is the ratio of total facility power to IT equipment power. The best hyperscale operators target PUE below 1.2 and achieve it; the industry average is closer to 1.55. Duct leakage is a direct contributor to the cooling system portion of PUE. A simple sensitivity model: for every 1 percent of system supply-air that leaks into unconditioned space, the CRAH fan work required to deliver the target rack-inlet airflow increases by roughly 2.5–3 percent. At a 30 MW IT load with PUE 1.25, cooling system power is roughly 6 MW. A 5 percent leakage reduction saves about 360–450 kW of continuous cooling power, which is USD 320–400 thousand per year at USD 0.10 per kWh.
Against that operating saving, the capital cost delta between an SBAL-III line meeting Class B and an SBAL-V line meeting Class A is approximately USD 80–120 thousand. The SBAL-V pays back inside the first year of operation on any project above 20 MW. This is the core economic argument for specifying an SBAL-V on hyperscale projects — not that the machine is intrinsically better, but that at hyperscale the PUE arithmetic makes the capital delta trivial.
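The sensitivity model and payback arithmetic above can be sketched in a few lines. The 3 MW CRAH fan-power figure is an assumption chosen to land inside the quoted 360–450 kW savings range, and the function names are illustrative:

```python
# PUE leakage sensitivity and SBAL-V payback, following the model above.
# Assumed: ~3 MW of CRAH fan power inside the ~6 MW cooling system
# (an assumption), 2.5-3 % extra fan work per 1 % of leakage, USD 0.10/kWh.
HOURS_PER_YEAR = 8760

def leakage_saving_kw(fan_power_kw, leakage_reduction_pct, sensitivity_per_pct):
    """Continuous fan-power saving for a given leakage reduction."""
    return fan_power_kw * leakage_reduction_pct * sensitivity_per_pct

def payback_years(capital_delta_usd, saving_kw, tariff_usd_per_kwh=0.10):
    """Years for the operating saving to repay the capital delta."""
    return capital_delta_usd / (saving_kw * HOURS_PER_YEAR * tariff_usd_per_kwh)

low_kw = leakage_saving_kw(3000, 5, 0.025)   # 375 kW
high_kw = leakage_saving_kw(3000, 5, 0.030)  # 450 kW
# Worst case: USD 120k capital delta repaid by the low-end saving.
print(f"{low_kw:.0f}-{high_kw:.0f} kW saved, "
      f"payback {payback_years(120_000, low_kw):.2f} years")
```

Even with the high-end capital delta and the low-end saving, the payback lands well inside the first year, which is the point of the paragraph above.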
Daily volume and line sizing
A rule of thumb for sizing the duct line against a data center project: budget 250–450 square metres of duct surface area per megawatt of IT load, depending on cooling architecture (lower end for overhead cooling with short duct runs, upper end for raised-floor with long horizontal runs). A 30 MW hyperscale project needs roughly 7,500–13,500 square metres of duct. Delivered in a 6–9 month window that is 40–90 square metres per working day — well within a single-shift SBAL-V production rate. A 100 MW campus delivered in 12 months is 100–180 square metres per day, which still fits a single SBAL-V on double shift. Above 150 MW, or under compressed delivery windows, contractors usually run two lines in parallel rather than one larger line, to keep both risk and changeover time down.
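The sizing rule of thumb can be sketched as a short calculation. The working-days-per-month figure is an assumption not stated in the text, so the exact daily rates shift slightly with that choice:

```python
# Duct-line sizing rule of thumb from the paragraph above.
# Assumes roughly 21 working days per month (not stated in the text).
WORKING_DAYS_PER_MONTH = 21

def duct_surface_m2(it_load_mw, low_m2_per_mw=250, high_m2_per_mw=450):
    """Estimated duct surface area range (m2) for a given IT load."""
    return it_load_mw * low_m2_per_mw, it_load_mw * high_m2_per_mw

def daily_rate_m2(total_m2, delivery_months):
    """Required production rate (m2 per working day) over the window."""
    return total_m2 / (delivery_months * WORKING_DAYS_PER_MONTH)

low_m2, high_m2 = duct_surface_m2(30)  # 7,500-13,500 m2 for 30 MW
print(f"{low_m2}-{high_m2} m2 of duct")
# Best case (low area, 9 months) to worst case (high area, 6 months):
print(f"{daily_rate_m2(low_m2, 9):.0f}-{daily_rate_m2(high_m2, 6):.0f} m2/day")
```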
Common mistakes first-time data center duct buyers make
- Ordering a line sized for commercial work and then bidding hyperscale projects with it. The SBAL-II is a perfectly good commercial duct line but will not meet Seal Class A in a field test. Quoting a hyperscale project with an SBAL-II specification almost always ends with the contractor either missing the schedule or missing the leakage class.
- Skipping the stainless option on liquid-cooled builds. Retrofitting stainless capability to a delivered galvanized-only line is possible but costs 40–60 percent of the base machine price and takes 6–10 weeks. Ordering stainless-capable from day one adds 8–15 percent to the base price and 10–15 days of lead time.
- Under-specifying the TDF flange tolerance on the purchase order. The default SBKJ quotation assumes Class A tolerance, but some competitors quote looser tolerances to hit a lower price point. Always verify the TDF flange-bend tolerance specification in writing before accepting a quotation from any vendor.
- Not budgeting for a calibrated air-leakage tester. The duct shop that can produce Class A duct but cannot test its own work in-shop is running blind. A portable duct leakage tester (USD 8–15 thousand) should be in the capital budget alongside the line.
- Assuming the SBAL-V can be operated by a single operator trained on Pittsburgh-lock work. The CNC controls and the tolerance workflow require a retraining cycle of 2–4 weeks for an experienced traditional operator. Budget the training time and plan for it during commissioning.
SBKJ data center project experience
SBKJ has supplied auto duct production lines and spiral tubeformers to contractors working on data center projects in China, Southeast Asia, the Middle East and Australia. Specific hyperscale reference projects are typically subject to contractor and operator confidentiality requirements, so detailed case studies are not always published, but the SBKJ engineering team can speak directly to peer contractors under NDA for buyers who need reference contacts. For a published case study on an SBAL-III installation that was specified partly so the workshop could quote for data-center mechanical work, see the Australia SBAL-III sheet metal workshop case study.
What a data center duct workshop costs to set up
A complete SBKJ duct workshop configured for hyperscale data center work — SBAL-V stainless-capable line, SBTF-1602 spiral tubeformer stainless-capable, plasma cutter, TDF flange former, duct leakage tester and material handling — typically comes in at USD 420–620 thousand capital ex-works Jiangyin. Add 8–15 percent for freight, duty and installation. Full breakdown of pricing and lead time for every SBKJ machine is covered in the pricing and lead time guide.
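As a quick sketch of the landed cost, applying the quoted 8–15 percent freight, duty and installation adder to the ex-works range above (the function name is illustrative):

```python
# Landed-cost sketch: ex-works package price plus the quoted
# 8-15 % adder for freight, duty and installation.
def landed_cost_usd(exworks_usd, adder_low=0.08, adder_high=0.15):
    """Return (low, high) delivered-and-installed cost estimate in USD."""
    return exworks_usd * (1 + adder_low), exworks_usd * (1 + adder_high)

low = landed_cost_usd(420_000)   # ~USD 454k-483k landed
high = landed_cost_usd(620_000)  # ~USD 670k-713k landed
print(f"USD {low[0] / 1e3:.0f}k-{high[1] / 1e3:.0f}k landed")
```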
Request a data-center duct workshop quotation →
Frequently asked questions
Why do hyperscale data centers specify SMACNA Seal Class A?
Seal Class A caps duct air-leakage at roughly 2.4 L/s per square metre of duct surface at 250 Pa static pressure, which is the tightest SMACNA class. Hyperscale operators specify it because every litre of bypassed supply air increases the CRAH fan work and directly raises PUE. For a 30 MW IT load, the difference between Class A and Class C leakage can be 150–250 kW of continuous extra cooling power, which translates to USD 130–220 thousand per year in wasted electricity at typical commercial rates. Tight duct is a direct PUE lever.
Can auto duct lines actually hit Seal Class A in field tests?
Yes, when the line is configured correctly. The critical specification is TDF flange-bend tolerance of ±0.2 mm or tighter across the full duct length, plus a calibrated beading station for the longitudinal seam. The SBAL-V meets these tolerances as standard, and SBKJ has supplied the machine to contractors working on hyperscale data center projects in the Asia-Pacific region that passed Class A air-leakage tests on the first attempt. The common failure mode on Class A is not the machine — it is the transverse-joint gasket and the installer's sealant workmanship. If the TDF flange is formed correctly, on-site leakage tests pass.
What is the right SBKJ line for a hyperscale data center build?
The standard recommendation is an SBAL-V auto duct production line with TDF flange forming station, plus an SBTF-1602 spiral tubeformer for round supply-air risers. The SBAL-V handles rectangular duct from 200 mm to 2500 mm wide at throughput of roughly 2000–2500 m² per day for a single shift, which is enough for a single phase of a 50–100 MW data center build. The SBTF-1602 forms spiral round duct from 80 mm to 1600 mm diameter, which covers the full range of CRAH return plenum and riser sizing. For stainless-steel builds (common on liquid-cooled halls to resist condensate corrosion) both machines can be ordered in a stainless-capable configuration.
Does a data center project justify a dedicated auto duct line?
For projects above roughly 20 MW IT load, yes. Below that, a good fabricator can often meet the duct delivery schedule with an SBAL-III or a combination of lockformer plus separate TDF flange former, provided the shop already has experience hitting Class A leakage. Above 20 MW the duct surface area crosses roughly 8,000–12,000 square metres, and the combination of tight tolerance, tight schedule and repeatable quality tips the economics firmly toward a dedicated SBAL-V. Hyperscale operators running 100 MW-plus typically specify SBAL-V on every contractor because the quality consistency is worth the extra capital cost at their scale.
Do I need stainless steel duct for data center HVAC?
Not for traditional air-cooled raised-floor designs — galvanized steel is still the default for hot-aisle/cold-aisle containment and CRAH plenum work. Stainless is specified on liquid-cooled halls where close-coupled cooling (rear-door heat exchangers, direct-to-chip cold plates, immersion tanks) creates localised high humidity and condensate risk. The SBAL-V handles both materials with a roller swap, so contractors serving both traditional and liquid-cooled builds typically order one stainless-capable line rather than two separate machines.
Other industries we serve
Page reviewed by SBKJ Engineering & QA · Last verified