Emission Factor Quality and Selection

A core differentiator in these approaches is how emission factors are chosen and validated. Bardo follows a clear doctrine:

Specificity Trumps Generality

The most specific applicable factor for an activity is chosen. For example, when calculating emissions for a flight, an emission factor per passenger-kilometer for the specific route and class is used rather than a generic "average air travel" factor. When accounting for office electricity, the factor for the specific utility supplier's product mix is used, not a regional average. This ensures the calculation reflects the actual activity as closely as possible. By contrast, many tools default to generic factors (for convenience or due to data limitations), which can misrepresent the activity. As one source notes, spend-based factors are generic and ignore differences between specific processes, geographies, and suppliers – Bardo's approach avoids that pitfall whenever possible.
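The "specificity trumps generality" rule can be sketched as a simple ranked selection. This is an illustrative sketch, not Bardo's implementation: the specificity scale, factor names, and values are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class EmissionFactor:
    name: str
    kg_co2e_per_pkm: float  # kg CO2e per passenger-kilometer
    specificity: int        # higher = more specific (0 = sector average, 3 = route/class level)

def most_specific(candidates: list[EmissionFactor]) -> EmissionFactor:
    """Pick the most specific applicable factor from the candidates."""
    return max(candidates, key=lambda f: f.specificity)

# Hypothetical candidate factors for one flight leg:
flight_factors = [
    EmissionFactor("average air travel", 0.150, 0),
    EmissionFactor("short-haul economy", 0.130, 1),
    EmissionFactor("ARN-LHR economy, narrow-body", 0.105, 3),
]

chosen = most_specific(flight_factors)
print(chosen.name)  # the route- and class-specific factor wins over the averages
```

The point of the sketch is only the ordering: when a route- and class-specific factor is applicable, it outranks the generic average.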

Evidence and Source Quality

Each factor in Bardo's system carries metadata about its origin: was it from a peer-reviewed LCA study, a government database (like EPA or DEFRA), an industry average, or a custom calculation? It also notes the year of the data, the geographic relevance, system boundaries (what's included in the footprint), and any reliability notes. Bardo ranks factors by a quality score (considering technological representativeness, temporal and geographic relevance, completeness, and reliability – similar to the pedigree matrix concept in LCA). For instance, a factor from a supplier's recent LCA report (with cradle-to-gate scope, covering the region of production) would rank as high quality; an older generic factor from a different region would rank lower. This approach of assessing factor quality is in line with GHG Protocol guidance to document data quality indicators for emission factors. In practice, if Bardo has to use a lower-quality factor (say, because no better data exists), it flags it as an area for improvement and will often engage with the client on how to get better data next cycle.
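A pedigree-matrix-style score across those dimensions can be sketched as follows. The 1–5 scale, equal weighting, and the two example ratings are assumptions for illustration, not Bardo's actual scoring scheme.

```python
def quality_score(tech: int, temporal: int, geographic: int,
                  completeness: int, reliability: int) -> float:
    """Average of five 1-5 ratings (5 = best), in the spirit of an LCA pedigree matrix."""
    ratings = [tech, temporal, geographic, completeness, reliability]
    assert all(1 <= r <= 5 for r in ratings), "ratings must be on a 1-5 scale"
    return sum(ratings) / len(ratings)

# Recent supplier LCA, cradle-to-gate, same region of production (illustrative ratings):
supplier_lca = quality_score(tech=5, temporal=5, geographic=5, completeness=4, reliability=5)
# Older generic factor from a different region (illustrative ratings):
generic_old = quality_score(tech=2, temporal=2, geographic=1, completeness=3, reliability=3)

print(round(supplier_lca, 1), round(generic_old, 1))  # 4.8 2.2
```

The supplier LCA outranks the generic factor, and any factor scoring below a chosen threshold could be flagged for improvement in the next cycle.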

Constructed Factors vs. Averages

Bardo constructs emission factors when no specific one exists. For example, if a company buys a unique material that isn't in any database, Bardo combines data from multiple sources: e.g. take an emission factor for a similar material's production energy, adjust for known differences, include an upstream feedstock factor, etc., to build a proxy. These evidence-constructed LCAs are fully documented (so an auditor or the company can review the assumptions). While this involves some estimation, it yields a more representative factor than using an unrelated sector average. For instance, if you purchase a specialized electronic component, a spend-based method might map it to "electronics, average" (which includes a broad industry mix), whereas Bardo constructs a factor from known data about semiconductors, assembly electricity, etc. – giving a result that reflects that specific component's likely footprint. Constructed factors are marked and then updated when better data (a specific LCA or supplier info) becomes available, ensuring continuous improvement. This strategy acknowledges that not all data gaps can be filled by off-the-shelf factors, and sometimes a thoughtful estimate is better than a poor proxy.

It also means Bardo's factors are often higher-resolution than what generic platforms use by default. Take flight emissions as an example: rather than using a generic "0.15 kg CO₂ per passenger-km" for any flight, Bardo uses leg-specific data (taking into account exact distance/route, aircraft type, seating class, etc.), which is known to significantly affect emissions. This aligns with guidelines that say, for accurate reporting, one should consider distance bands and class for air travel, because emissions per passenger can vary several-fold between long-haul business class and short-haul economy. Many carbon calculators skip these details; Bardo includes them.
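The electronic-component example can be sketched as a sum of documented building blocks. Every component value and source note below is a hypothetical placeholder; the point is that each assumption is recorded so it can be reviewed and later replaced by better data.

```python
# Sketch of an "evidence-constructed" proxy factor for a specialized
# electronic component. All values and source notes are hypothetical.
components = {
    # name: (kg CO2e per unit, documented source/assumption)
    "wafer fabrication":    (1.80, "semiconductor LCA study, adjusted for die size"),
    "assembly electricity": (0.35, "0.7 kWh x 0.5 kg CO2e/kWh regional grid factor"),
    "upstream feedstock":   (0.45, "proxy from a similar material's cradle-to-gate data"),
}

proxy_factor = sum(value for value, _ in components.values())

print(f"constructed factor: {proxy_factor:.2f} kg CO2e/unit")
for name, (value, source) in components.items():
    print(f"  {name}: {value} kg CO2e ({source})")
```

Because each line item carries its own source note, an auditor can challenge one assumption without discarding the whole factor, and any component can be swapped out when supplier-specific data arrives.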

Uncertainty Quantification

Alongside each factor and activity, Bardo can assign an uncertainty range (e.g. ±20%). This is important for context – the numbers are not absolute truth but estimates with confidence levels. Traditional reports often don't convey uncertainty, giving a false sense of precision. Bardo's method acknowledges it and rolls uncertainties up to the total, so one can say, for example, "Total Scope 3 is 10,000 ± 1,500 tCO₂e at 95% confidence." This is helpful in audit situations or internal risk assessment, showing where the largest uncertainties lie. It's part of being transparent and honest about data quality, which neutralizes one common criticism of Scope 3 figures (that they're too error-prone). With Bardo's granular approach, many high-impact categories end up with lower uncertainty because primary data was used (as noted, primary data is audit-ready and precise), whereas areas using secondary data carry higher uncertainty flags that can be targeted for improvement.
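One common way to roll line-item uncertainties up to a total is to combine absolute uncertainties in quadrature, which assumes the errors are independent. The sketch below uses that standard technique with illustrative figures; it is not Bardo's actual aggregation method or a real inventory.

```python
import math

# Illustrative line items: (emissions in tCO2e, relative uncertainty)
line_items = [
    (6000, 0.05),  # primary supplier data -> tight bounds
    (3000, 0.20),  # secondary database factor
    (1000, 0.50),  # spend-based proxy, flagged for improvement
]

total = sum(e for e, _ in line_items)
# Combine absolute uncertainties in quadrature (assumes independent errors):
sigma = math.sqrt(sum((e * u) ** 2 for e, u in line_items))
half_width_95 = 1.96 * sigma  # ~95% confidence interval half-width

print(f"Total: {total} ± {half_width_95:.0f} tCO2e at ~95% confidence")
```

Note how the spend-based line contributes disproportionately to the combined uncertainty despite being the smallest in emissions – exactly the signal that tells you where improving data quality pays off most.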

Summary: Factor Selection Doctrine

In summary, Bardo's factor doctrine is about maximizing the relevance and reliability of the data behind each activity. This stands in contrast to the "one-size-fits-all" approach of some tools. The stance is neutral in that it doesn't automatically trust any single database or factor – each is weighed and chosen on merit. Over time, as more companies and suppliers provide specific emissions data, a client's inventory can seamlessly incorporate it (often yielding immediate improvements in accuracy, and sometimes reductions in reported emissions, as specific data replaces conservative estimates). This approach is aligned with the direction of regulations and standards – for example, the EU's CSRD and ISO standards are pushing for more supply-chain-specific data, and Bardo's methodology is built to use exactly that, whereas a static spend-based tool might struggle to integrate a mix of specific and generic data.


Conclusion and Common Concerns

To conclude this methodology, it's useful to address a few common objections or misconceptions that often arise when moving from incumbent approaches to a more advanced activity-based approach like Bardo's:

"We already follow the GHG Protocol; isn't that enough?"

GHG Protocol provides the framework, but how you follow it can differ. Spend-based tools also claim GHG Protocol compliance; the difference with Bardo is in the rigor of data quality within that framework. Bardo's approach is still fully aligned with GHG Protocol (in fact, it's using the Protocol in "strict mode", meaning clearly delineating categories, using hybrid data methods as recommended, etc.). The ledger approach simply implements the Protocol at a more detailed level, ensuring completeness and accuracy that generic implementations might miss. Essentially, Bardo doesn't change what you report, it changes the fidelity and trustworthiness of the numbers you report, all within GHG Protocol's rules. So it's not a departure from GHG Protocol – it's an enhancement in data quality and auditability while staying within that standard.

"We only need a high-level number for annual reporting; why go to this effort?"

It might seem easier to just get a single total for Scope 3 for the sustainability report. However, consider the risks and lost opportunities of an inflated or dubious number: If it's too high (due to crude estimates), the company might be allocating budget or attention inefficiently, or worse, making decisions that hurt business (imagine avoiding outsourcing because the generic emissions factor made it look bad, even though a detailed analysis might show it's efficient). If it's inaccurate, and later auditors check it, the company could face compliance issues or restatements. Furthermore, a single number with no detail doesn't help in reducing that number – it's hard to manage what you don't measure properly. A detailed ledger might sound like overkill if one only cares about disclosure, but it actually saves time and cost in the long run. The ledger can be reused and updated easily each year, whereas starting from scratch or dealing with inconsistent data can become a yearly headache (and cost center). Also, regulations are moving toward requiring more than just a number – they'll want to see how you derived it and that you have internal control over it. Investing in a proper system now can avoid fire drills later.

"Our services (or our supply chain) are too complex to model in detail."

This is a fair concern; getting data for every little thing is hard. But that's precisely why an approach like Bardo's, which is highly automated, managed, and iterative, is useful. It doesn't demand perfect data on day one. Yes, it's challenging to gather data for, say, a legal services firm's footprint. But Bardo uses AI-powered agents to do extensive research and construct reasonable models (office use, IT use, etc.), which is an order of magnitude better than a flat spend factor, and then flags it for improvement (maybe next time get actual data on that law firm's operations). Over time, even services can be refined (or at least kept transparent about uncertainty). The worst approach is to just throw up one's hands and assign one generic factor – that often does the most harm by overstating or misallocating emissions (and provides zero insight). By tackling services, one can find efficiencies (perhaps discovering that a cloud provider in one region has a much lower footprint than in another, influencing IT choices). Complexity is a reason to choose a solution built to handle complexity, rather than to avoid doing it.

"This sounds like a lot of change and work for our team."

It might seem daunting to adopt a new system, but if it's a managed service, the internal workload is actually lower than trying to do it yourself. Bardo explicitly states "you do not run software". That means no new tool for your team to learn and operate day-to-day. The primary tasks for the company are: provide data access (which IT/finance does), review results (which is much easier than creating them), and collaborate on edge cases (which is minor effort compared to building everything). So rather than burdening the team, it frees them to focus on interpretation and action. Bardo will guide the process – so the change is more about mindset (trusting a more data-driven process) than about heavy operational load. Once the initial setup is done, each cycle should be routine and faster. In fact, many organizations find that once this data is available, internal interest grows – finance might want to integrate carbon metrics into their planning, procurement might weave it into supplier scorecards, etc., which increases the utility of the sustainability team's work without a proportional increase in their labor, because the data flows are largely automated.

See how your data performs in a quick CSRD check

Book a demo, or open the ROI calculator to estimate time and cost.

Norra Stationsgatan 93a Stockholm
113 64, Sweden

Follow

Copyright © 2025 Bardo Technology AB. All Rights Reserved.
