
Leveraging Knowledge Graphs to Add Context to Your Data

Introduction

What are knowledge graphs? Have you ever come across this technology? Before diving into a formal definition, it helps to understand the idea conceptually.

Scenario 1: “Hey, the temperature is shooting up, and some action needs to be taken. I seriously don’t know what could have caused this.”

What temperature are we talking about? Which process, which equipment, and what deviation? – Context is missing.

Scenario 2: “A failure is recorded in one of the compressors. We need to troubleshoot the issue, for which I will need the equipment ID, design sheet, P&ID, and a couple of hours of time series data to investigate.”

The data are of different types and sit in various locations (some offline). The investigation has to be done manually by gathering all of it.

You see? In each case, context was missing, and most of the actions within the workflow are manual.

What is a knowledge graph?

Think of it as a methodology for developing relationships among multiple data sources – files, PDFs, images, tables, and columns – across data stored in silos within an organization. These relationships are often referred to as context, and the mapping is stored in natural language that can be further processed by NLP or generative AI. With continuously increasing initiatives for capturing data, more than ever before, managing that data and realizing the benefits of digitalization has become challenging. Knowledge graphs have shown tremendous potential not only for enhancing data management strategies but also for making the data more usable for further AI initiatives.
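As a minimal sketch of this idea, the Python snippet below links a few made-up plant records – an equipment entry, a P&ID drawing, and a historian tag – with edges whose labels carry the relationship in plain language. The asset names and relations are hypothetical, and the networkx library is used only for illustration; any graph store could play the same role.

import networkx as nx

# Each node is a data item that lives in a different silo or format.
graph = nx.MultiDiGraph()
graph.add_node("Compressor-101", source="asset register (Excel)")
graph.add_node("PID-0042.pdf", source="document management system")
graph.add_node("TI-2001", source="historian (time-series tag)")

# Each edge stores the relationship ("context") as a natural-language label.
graph.add_edge("Compressor-101", "PID-0042.pdf", relation="is shown on")
graph.add_edge("TI-2001", "Compressor-101", relation="measures discharge temperature of")

# Walking the edges recovers the context around any single item.
for subject, obj, data in graph.edges(data=True):
    print(f"{subject} --[{data['relation']}]--> {obj}")

The natural-language edge labels are exactly the kind of mapping that NLP or generative AI tooling can later consume.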

Why knowledge graphs?

Traditionally, relating data from a variety of sources and multiple formats has always been a challenging task. A lot of data remains untapped, mainly because it is unstructured or semi-structured and requires tremendous effort just to extract meaningful information. In a typical manufacturing organization today, plant personnel and plant engineers rely on standard engineering documents, P&IDs, Excel data, logbook entries, and many other sources. For daily operations or troubleshooting activities, it has been standard practice to refer to these data, which come in varied formats and shapes. Every engineer interprets the data differently based on the experience they have gathered in the organization, and every engineer acts differently based on that experience. Imagine the overall time and effort that goes into this manual process of data collection and interpretation, where the results are not guaranteed because of differences in interpretation. Knowledge graphs are exactly suited for such applications, where the variety of data sources and the data itself are mapped based on their relationships. In fact, they allow a layer of domain knowledge to be added that is typically never captured digitally using traditional approaches.

How to contextualize?

Once all sources and types of data are identified, a scalable knowledge graph can be built that becomes the foundation for all contextual data across the organization. The build process is a simple workflow in which different entities are mapped to each other based on the physical or abstract relationships they have. Subject – predicate – object is the foundation of knowledge graphs.
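A minimal sketch of that subject – predicate – object pattern, assuming a made-up namespace and asset identifiers, could look like the following (using the rdflib Python library; any RDF-style triple store works the same way conceptually).

from rdflib import Graph, Literal, Namespace

PLANT = Namespace("http://example.org/plant/")  # hypothetical namespace
kg = Graph()

# Each statement is one triple: subject, predicate, object.
kg.add((PLANT["Compressor-101"], PLANT["locatedIn"], PLANT["Unit-300"]))
kg.add((PLANT["Compressor-101"], PLANT["documentedBy"], PLANT["PID-0042"]))
kg.add((PLANT["TI-2001"], PLANT["measures"], PLANT["Compressor-101"]))
kg.add((PLANT["TI-2001"], PLANT["unit"], Literal("degC")))

# Serialize the graph in Turtle to inspect the stored relationships.
print(kg.serialize(format="turtle"))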

Knowledge graphs scale easily to multiple pieces of equipment, plants, and locations, which makes managing the data very convenient. Moreover, with their natural language capabilities, they are easy to learn and adopt for various utilities.
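As a hypothetical example of such a utility, the sketch below queries a small graph for everything related to one compressor – the kind of lookup an engineer would otherwise assemble by hand from several systems. SPARQL is used here only as one concrete query layer; a natural-language interface on top would translate an engineer's question into a similar query. The namespace and asset identifiers are again made up.

from rdflib import Graph, Namespace

PLANT = Namespace("http://example.org/plant/")  # hypothetical namespace
kg = Graph()
kg.add((PLANT["Compressor-101"], PLANT["documentedBy"], PLANT["PID-0042"]))
kg.add((PLANT["TI-2001"], PLANT["measures"], PLANT["Compressor-101"]))

# Everything linked to the compressor, in either direction.
results = kg.query("""
    PREFIX plant: <http://example.org/plant/>
    SELECT ?related ?relation WHERE {
        { plant:Compressor-101 ?relation ?related . }
        UNION
        { ?related ?relation plant:Compressor-101 . }
    }
""")
for row in results:
    print(row.related, row.relation)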