We believe any product’s environmental or health profile should be easily retrievable for decision making. Regrettably, that’s not the case today. Consider that UNSPSC lists roughly 12,000 physical product categories, yet none of the larger standalone LCA databases contains more than about 3,000 product models. It’s not for lack of hard work: the global community creates thousands of assessments and datasets every year. However, they are often confined behind paywalls and firewalls, in spreadsheets, or in siloed databases. That leads to a patchwork of data that eventually becomes obsolete or redundant. A lack of disclosure and poor documentation means that data cannot be extended or improved upon by the community. Yet the LCA method itself depends on connected industrial process chains, which require connected data. Add a lack of transparency and low incentives to that fragmentation, and everyone is held back.

We think it’s time for a better way.

Makersite.net is a network that connects professionals and data, and its technology excels at connecting and computing product data. So we tested its use for calculating product environmental impacts.[i] The results were astonishing:

  • Computing and visualizing supply chains across thousands of data points takes less than a second.
  • Connecting data is a result multiplier: aggregated LCIA results were created for existing unit processes, yielding over 15,000 LCIA impacts not previously available as open data (a sketch of how such aggregated scores are computed follows this list).
  • Makersite automatically establishes connections between unit process data and other data, such as regulatory, chemical, material or cost information. This allows Makers to use LCA data not only for environmental impacts but also to analyze other properties (for instance, over 80 million chemicals are referenced in Makersite).
  • By default, the impacts in existing databases are calculated the way those databases were designed. But because they are all in Makersite, users can now make connections between them manually. This is how chains can be connected across today’s disparate databases.
  • There is full transparency of models. Like Wikipedia, data models can be extended and improved upon by the community through robust, integrated publishing workflows that help ensure data quality.
  • How do updates and derived data work? First, create the data in your own account. When you are ready to share it, create a pull request. Then the experts decide: experts are the people who created the original data set, and once you contribute, you become one of them. There is no obscure “selection committee”. If you don’t like a connection, simply fork it, update it and submit it for peer review.
  • We built Makersite to include anyone’s data, commercial or open, public or private. The inability to accommodate different commercial terms prevents collaboration in the community today. And while Makersite is open by default, whatever data is confidential will remain so.
  • Everyone can keep data up to date, and receive recognition and reward for their contributions. We pay royalties to those who provided commercial data, based on usage. We think that’s a win-win for all.
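As a rough illustration of what the aggregation mentioned in the second bullet involves, here is a minimal sketch of the standard matrix-based LCA calculation (technosphere matrix A, intervention matrix B, characterization matrix Q) that turns unit process data into an aggregated LCIA score. The numbers are tiny hypothetical examples, not Makersite data, and this is not Makersite’s actual implementation.

```python
# Minimal sketch of aggregating LCIA results from unit process data using
# the standard matrix formulation of LCA. All figures are hypothetical.
import numpy as np

# Technosphere matrix A: columns are unit processes, rows are product flows.
# Process 1 produces 1 kWh electricity; process 2 produces 1 kg steel and
# consumes 0.5 kWh electricity per kg (hypothetical values).
A = np.array([[1.0, -0.5],
              [0.0,  1.0]])

# Intervention matrix B: elementary flows per unit of each process,
# here just kg CO2 emitted per functional unit (hypothetical).
B = np.array([[0.4, 1.8]])

# Characterization matrix Q: impact per elementary flow, e.g. kg CO2-eq
# per kg CO2 for a climate change midpoint indicator.
Q = np.array([[1.0]])

# Functional unit: demand for 1 kg of steel (the second product flow).
f = np.array([0.0, 1.0])

# Scaling vector: how much each unit process must run to deliver f.
s = np.linalg.solve(A, f)

# Life cycle inventory and aggregated LCIA score for the whole chain.
g = B @ s          # total elementary flows
h = Q @ g          # characterized impact (e.g. kg CO2-eq)

print("process scaling:", s)
print("inventory:", g)
print("aggregated impact:", h)
```

Doing this once per unit process, across an entire connected database, is what yields aggregated impact scores for thousands of processes at a time.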


[i]    We used data from multiple, fragmented sources such as Agribalyse, BioEnergieDat, NEEDS, ProBas and USDA. The data contains the material and energy product flows on the input and output sides, as well as selected process metadata such as the original authors. All data sets support the LCIA methods CML, TRACI 2.1 and USEtox 2.01 for the entire supply chain of the processes. To assure data quality, the datasets were created using both automated scripts and expert assessment.
