
Plant Data

Contributors: Tyler Engelhardt

PI2eLogger Overview


The Plant Data application is a tool built to bring multiple data sources into a single location. It combines manually entered meter data used in reports, internal combustion runtime data for emissions reports, breaker operation counts, plant orientation logs, and a mechanism for sending notifications when required data entries are missing or when fuel levels are low.

Prior to this application, meter and internal combustion runtime data was entered into an Excel spreadsheet. It was easy to make errors when recording readings or to miss readings altogether. This application resolved these issues by sending notifications when readings were not recorded and by highlighting erroneous data entries using the Asset Framework. Low fuel levels were recorded manually during daily rounds, but notifying others of the status was also a manual step. In this project, the Asset Framework was used to automatically send notifications when the fuel level PI Points were marked as low.
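The missing-entry check at the heart of the notification mechanism can be sketched as follows. This is a minimal illustration, not the application's actual implementation: the function name, the daily entry interval, and the meter names are all assumptions.

```python
from datetime import datetime, timedelta

# Assumed entry cadence for illustration: one reading per day.
MAX_AGE = timedelta(days=1)

def find_missing_entries(last_readings, now):
    """last_readings maps meter name -> timestamp of its most recent entry.
    Returns the meters whose data is overdue and should trigger a notification."""
    return [meter for meter, ts in last_readings.items() if now - ts > MAX_AGE]

# Hypothetical data: meter_a has not been read in over a day.
readings = {
    "meter_a": datetime(2023, 5, 1, 8, 0),
    "meter_b": datetime(2023, 5, 3, 8, 0),
}
overdue = find_missing_entries(readings, now=datetime(2023, 5, 3, 9, 0))
# overdue == ["meter_a"], so a notification would be sent for meter_a
```

In the real application this kind of check would run against PI Point data rather than an in-memory dictionary, with the notification delivered through the Asset Framework.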

Breaker operation counts were viewable in the application, including how the values changed over time. Viewing this trend helped determine whether an issue was developing in a breaker and provided an easier means of reviewing the data historically. This portion of the application replaced a weekly email notification.
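The change-over-time view described above amounts to differencing consecutive snapshots of a cumulative operation count. A sketch, under the assumption that counts are stored as (timestamp, cumulative count) pairs:

```python
def operation_deltas(counts):
    """counts is a chronological list of (timestamp, cumulative_operation_count).
    Returns the change between consecutive snapshots; a sudden jump suggests
    a breaker is operating unusually often and may have an issue."""
    return [(t2, c2 - c1) for (t1, c1), (t2, c2) in zip(counts, counts[1:])]

# Hypothetical weekly snapshots of one breaker's cumulative count.
history = [("2023-01-01", 120), ("2023-01-08", 121), ("2023-01-15", 140)]
deltas = operation_deltas(history)
# [("2023-01-08", 1), ("2023-01-15", 19)] -- the jump of 19 stands out
```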

Plant contractor orientation logs were maintained in the plant logging system and were difficult to parse due to that application's schemas. The Plant Data application hooked into the plant logging system database to read the complex schemas and return the desired search results. This allowed searches by company, personnel, and date, which was previously difficult, if not impossible.
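The search described above reduces to a filtered join across the logging system's tables. The sketch below uses a deliberately simplified stand-in schema (two tables, invented names); the actual plant logging schema is more complex, but the query pattern is the same.

```python
import sqlite3

# Simplified stand-in schema for illustration; not the real logging tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orientation (
        person TEXT, company_id INTEGER, orientation_date TEXT);
    INSERT INTO company VALUES (1, 'Acme Electric'), (2, 'Delta Services');
    INSERT INTO orientation VALUES
        ('J. Smith', 1, '2023-04-10'),
        ('R. Jones', 2, '2023-04-12');
""")

def search_orientations(conn, company=None, person=None, since=None):
    """Filter orientations by any combination of company, personnel, and date."""
    sql = """SELECT o.person, c.name, o.orientation_date
             FROM orientation o JOIN company c ON c.id = o.company_id
             WHERE 1 = 1"""
    params = []
    if company:
        sql += " AND c.name = ?"
        params.append(company)
    if person:
        sql += " AND o.person = ?"
        params.append(person)
    if since:
        sql += " AND o.orientation_date >= ?"
        params.append(since)
    return conn.execute(sql, params).fetchall()

rows = search_orientations(conn, company="Acme Electric")
# [('J. Smith', 'Acme Electric', '2023-04-10')]
```

Building the WHERE clause from only the supplied filters is what lets a single query serve company, personnel, and date searches interchangeably.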


The goal of the Plant Data application was to make these scattered data sources easier to view and to bring attention to missing datasets. When datasets were missed under the previous model, ensuring the data was backfilled was a slow process. Since the data was used in reports, the process needed to become proactive instead of reactive.

Contractor orientations needed to be tracked to determine whether personnel needed to undergo training. The system where the data was entered made filtering and searching for personnel difficult due to its underlying schema. There was a need to make searching for personnel easy and painless while keeping the data in the original system.


The main challenge in building the system was determining the best practices moving forward to ensure the existing issues could be addressed and the reports could be repeatable and reliable. This required trial and error along with coordination with management to ensure the reports displayed the correct information under the correct context, and that the new standard operating procedure would address the concerns and previous shortcomings of the process.

Another challenge was parsing the plant logging system database schema to determine how to read the requisite tables and display the required information. The third-party system was not set up to filter the dataset in the way the data's purpose required. This resulted in the need to find the most efficient way to filter the data, using the required filter parameters, within the schema provided by the plant logging system.