Business Trends
Generative AI for SAP. Part V: Models and Knowledge Graphs (KG)
Opening
We have been storing data in SAP for decades; the volume of data grew so large that we turned to Data Warehouses and similar technologies.
Now, with Transformer models, we have the capacity to understand extremely large corpora of data. Imagine showing a general knowledge model the relationships between BKPF and BSEG and having it answer queries about them. If I have caught your attention, keep reading.
–
In evaluating how Language Models interact with SAP data, I am still exploring interactions with existing models, technologies like RAG, and even some fine-tuning techniques. I am avoiding pre-training models with domain data. And I believe I am not going in the right direction.
I believe Foundation Models have the power to reshape all industries, but existing Large Language Models fall short on multilingualism and their applicability to complex industry use cases.
From embedding and fine-tuning a generic Foundation Model, I am slowly moving toward incorporating Vertical and Domain-Specific data into models. My final goal is to build a custom model to process comprehensive company knowledge from SAP, because none of the academic research I am reviewing, and sometimes implementing, goes specifically in that direction.
Apart from academia, I have a significant interest in how the software industry is approaching SAP, and broadly speaking, I see two large groups of vendor approaches.
On one side, we have the traditional Data vendors, who are exceptionally skilled in Data Management, Governance, Data Quality, Integration, Ingestion, etc. These companies talk a lot about building Domain-Specific Models because it is in their interest to show their customers that everything to their right in the picture will not be relevant if you don’t have a suitable dataset for the models to interact with.
A strong message like that is correct but not 100% accurate. Foundation Models are trained on a large variety of (let’s hope public) data that might be relevant for enterprises and enterprise queries, and there is a lot of research on letting models interact with other corpora of data by performing an API call or a database query.
Shifting to the right end of the picture, we see companies, primarily startups or research groups, bringing a lot of innovation on the LLM layer: how we can master Prompt Engineering and how to apply RAG.
On this right end, the entry barrier for corporations is much lower. We can easily interact with a given model, apply some of these techniques with a few API calls, and get some fantastic results. But API-call scalability becomes a risk very quickly. Third-party model API calls are charged per token (roughly, fragments of words), and a quick look at Cohere and OpenAI pricing shows Cohere can be between 10 and 500 times cheaper than OpenAI; depending on the selected engine, costs can climb fast.
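As a rough illustration of how quickly per-token pricing adds up, here is a minimal back-of-the-envelope calculation; the per-1k-token prices, engine names, and query volumes are invented placeholders, not actual vendor rates.

```python
# Back-of-the-envelope estimate of hosted-LLM API spend.
# Prices per 1k tokens below are hypothetical placeholders, not vendor quotes.

def monthly_cost(queries_per_day: int, tokens_per_query: int,
                 price_per_1k_tokens: float, days: int = 30) -> float:
    """Rough monthly spend for a given query volume and price."""
    total_tokens = queries_per_day * tokens_per_query * days
    return total_tokens / 1000 * price_per_1k_tokens

# Example: 5,000 SAP user queries a day, ~1,500 tokens each (prompt + answer)
for engine, price in {"cheap_engine": 0.0005, "premium_engine": 0.03}.items():
    print(engine, f"{monthly_cost(5000, 1500, price):,.2f} USD/month")
```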
RAG is cool, but Vector DBs are not everything.
Retrieval Augmented Generation (RAG) has disrupted the field of open-domain question answering, enabling systems to generate responses that mimic human-like behavior for various queries. At the core of RAG is a retrieval module, which scans an extensive collection of texts to identify relevant context passages. These passages are then processed by a neural generative module, typically a pre-trained language model such as GPT-3, to formulate a final answer.
The primary advantage of vector search is its ability to search for semantic similarity rather than relying solely on literal keyword matches. The vector representations capture the conceptual meaning, enabling the identification of more relevant search results that may differ linguistically but share similar semantic concepts. This allows a higher quality of search compared to traditional keyword matching.
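As a minimal sketch of what searching by semantic similarity means in practice, the snippet below ranks passages by cosine similarity between embedding vectors; random vectors stand in for real embeddings, which would normally come from an embedding model or API.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
passages = [
    "Vendor invoices are posted to BKPF and BSEG",
    "Plant maintenance orders and work centers",
    "Customer master data and payment terms",
]
passage_vecs = [rng.normal(size=384) for _ in passages]  # stand-in embeddings
query_vec = rng.normal(size=384)                          # stand-in query embedding

# Rank passages by semantic closeness to the query, not by keyword overlap
ranked = sorted(zip(passages, passage_vecs),
                key=lambda pair: cosine_sim(query_vec, pair[1]), reverse=True)
for text, _ in ranked:
    print(text)
```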
However, there are limitations when converting data into vectors and conducting searches in high-dimensional semantic space. Vector search struggles to capture the diverse relationships and intricate interconnections between documents.
Performing vector search over a corpus of structured SAP data, especially when considering complex table relations, has several limitations and challenges.
Addressing these limitations may require a combination of specialized algorithms, data preprocessing, and domain-specific knowledge to create an effective vector search system for SAP data with complex table relations. Additionally, ongoing monitoring and optimization are essential to ensure the system remains accurate and efficient as the SAP data evolves.
LLMs have demonstrated their prowess in understanding and generating human languages, among other applications. However, their ability to work with graph data remains underexplored. Evaluating LLMs on graph-related tasks can enhance their utility, especially in relationship detection, knowledge inference, and graph pattern recognition.
Numerous studies have focused on assessing LLMs, primarily GPT models, in handling graph-related queries and producing accurate responses. However, these studies have mainly concentrated on closed-source models from OpenAI, overlooking other open-source alternatives. Additionally, they have not thoroughly explored aspects like fidelity in multi-answer questions or self-rectification capabilities in LLMs.
Knowledge Graphs and LLMs
To overcome these limitations, Knowledge Graphs (KGs), and in particular Knowledge Graph Prompting (KGP), emerge as a promising alternative. KGP explicitly encodes various relationships into an interconnected graph structure, enhancing the richness of reasoning capabilities.
A graph database structure is beneficial for modeling and analyzing complex relationships between different data types, such as customer data, product data, market data, third-party data, and balance sheet data.
- Nodes and Labels:
- In a graph database, each piece of data is represented as a node. Nodes can be labeled to categorize them into different types. For instance, you might have labels like “Customer,” “Product,” “Market,” “ThirdParty,” and “BalanceSheet.”
- Properties:
- Each node can have properties associated with it. For example, a “Customer” node might have properties like “customer_id,” “name,” and “email.” Similarly, a “Product” node may have properties like “product_id,” “name,” and “price.”
- Relationships:
- Relationships between nodes are represented as edges in the graph. These edges define how different pieces of data are connected. For instance:
- A “Customer” node might have relationships with “Product” nodes to represent their purchased products.
- “Market” nodes may be linked to “Product” nodes to show which products are available in specific markets.
- “ThirdParty” nodes could be connected to “Product” nodes to indicate third-party services or data sources associated with specific products.
- “BalanceSheet” nodes may be linked to “Customer” nodes to show financial transactions and balances.
- Queries:
- Graph databases support powerful query languages, such as Gremlin, designed explicitly for traversing and querying graph data. These queries can retrieve information about relationships and patterns within the data.
Let’s say you want to find all the products purchased by a specific customer, their associated markets, any third-party services used during those transactions, and the corresponding balance sheet information. In a graph database, the traversal looks like this (a short code sketch follows the steps):
- Start with the “Customer” node representing the specific customer.
- Traverse the edges labeled as “purchased” to reach the “Product” nodes.
- Traverse the edges labeled as “available_in” to find the associated “Market” nodes.
- Traverse the edges labeled as “third_party_service” to discover third-party data or services.
- Traverse the edges labeled as “transaction” to find “BalanceSheet” nodes and associated financial data.
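To make this traversal concrete, here is a minimal sketch using networkx as an in-memory stand-in for a graph database; the node IDs, labels, and property values are invented for illustration, and a real deployment would use a graph store queried with Gremlin or a similar language.

```python
import networkx as nx

# Tiny property graph with the labels and edge types described above
g = nx.MultiDiGraph()
g.add_node("C001", label="Customer", name="ACME Corp")
g.add_node("P100", label="Product", name="Pump X1", price=950.0)
g.add_node("M10", label="Market", name="EMEA")
g.add_node("T01", label="ThirdParty", name="Logistics Partner")
g.add_node("B900", label="BalanceSheet", amount=12500.0)

g.add_edge("C001", "P100", key="purchased")
g.add_edge("P100", "M10", key="available_in")
g.add_edge("P100", "T01", key="third_party_service")
g.add_edge("C001", "B900", key="transaction")

def follow(graph, node, edge_label):
    """Return target nodes reached via outgoing edges with a given label."""
    return [v for _, v, k in graph.out_edges(node, keys=True) if k == edge_label]

customer = "C001"
for product in follow(g, customer, "purchased"):
    print("product:", g.nodes[product]["name"])
    print("  markets:", [g.nodes[m]["name"] for m in follow(g, product, "available_in")])
    print("  third parties:", [g.nodes[t]["name"] for t in follow(g, product, "third_party_service")])
print("balance sheets:", follow(g, customer, "transaction"))
```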
Advantages of Knowledge Graphs: the diverse relationships they can model
- Structural Relationships: KGP encodes the contextual hierarchy of information by linking passages to specific documents or sections, aiding in determining importance, validity, and relevance during reasoning.
- Temporal Relationships: KGP factors in temporal dynamics by ordering passages chronologically, facilitating reasoning about unfolding narratives and timelines.
- Entity Relationships: KGP’s entity-centric approach allows focused exploration of the knowledge graph, facilitating the aggregation of facts about specific entities across documents.
While vector search has been a significant leap in open-domain question answering, it has limitations, particularly in handling complex queries and diverse relationships between content. Knowledge Graph Prompting offers a promising solution by explicitly modeling various relationships, enhancing reasoning capabilities, and addressing some of the shortcomings of vector search. As AI systems become increasingly integrated into our lives, understanding these strengths and weaknesses becomes paramount in harnessing their full potential.
A typical automotive data model represents the structure and relationships of data within the automotive industry. It organizes and manages information related to vehicles, their components, customers, dealerships, and other relevant entities. Below, I’ll outline some key entities and their relationships in a simplified automotive data model:
- Vehicle:
- Attributes: VIN (Vehicle Identification Number), make, model, year, color, engine type, etc.
- Relationships:
- Many-to-One with Manufacturer (each vehicle is made by one manufacturer)
- Many-to-Many with Dealership (a car can be sold by multiple dealerships)
- One-to-Many with Service Records (a vehicle can have multiple service records)
- Manufacturer:
- Attributes: Name, headquarters location, founding year, etc.
- Relationships:
- One-to-Many with Vehicles (a manufacturer produces many vehicles)
- Customer:
- Attributes: Customer ID, name, contact information, etc.
- Relationships:
- Many-to-Many with Vehicles (a customer can own multiple vehicles)
- Many-to-Many with Dealership (a customer can buy from multiple dealerships)
- Dealership:
- Attributes: Dealer ID, name, location, contact information, etc.
- Relationships:
- Many-to-Many with Vehicles (a dealership can sell multiple vehicles)
- Many-to-Many with Customers (a dealership can have multiple customers)
- One-to-Many with Sales Transactions (a dealership can have multiple sales transactions)
- Service Record:
- Attributes: Service ID, date, description, cost, etc.
- Relationships:
- Many-to-One with Vehicle (a service record is associated with one vehicle)
- Sales Transaction:
- Attributes: Transaction ID, date, price, payment method, etc.
- Relationships:
- Many-to-One with Vehicle (a transaction is associated with one vehicle)
- Many-to-One with Customer (a transaction is associated with one customer)
- Many-to-One with Dealership (a transaction occurs at one dealership)
- Inventory:
- Attributes: Inventory ID, quantity, price, etc.
- Relationships:
- Many-to-One with Dealership (inventory is managed by one dealership)
- Many-to-One with Vehicle (inventory includes one type of vehicle)
- Employee:
- Attributes: Employee ID, name, role, contact information, etc.
- Relationships:
- Many-to-One with Dealership (an employee works at one dealership)
This is a simplified representation, and in a real-world scenario with ECC or S/4HANA, the data model could be more complex, bringing Ariba, Fieldglass, or additional entities and attributes of the automotive industry, including parts, suppliers, warranties, and more.
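As a small sketch of how a few of these entities and cardinalities could be expressed in code (class and field names are illustrative, not SAP table definitions):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Manufacturer:
    name: str
    headquarters: str

@dataclass
class ServiceRecord:
    service_id: str
    date: str
    cost: float

@dataclass
class Vehicle:
    vin: str
    model: str
    year: int
    manufacturer: Manufacturer                                          # many-to-one
    service_records: List[ServiceRecord] = field(default_factory=list)  # one-to-many

maker = Manufacturer(name="Example Motors", headquarters="Munich")
car = Vehicle(vin="VIN123", model="Model A", year=2021, manufacturer=maker)
car.service_records.append(ServiceRecord("S-1", "2023-06-01", 240.0))
print(car.manufacturer.name, len(car.service_records))
```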
The interaction between KGs and LLMs has particular benefits for enterprises that work with relational databases (case a). I spent some days reading 55 papers that could guide me on how LLMs could better understand the SAP data model; this is how I summarize them.
Several works have explored using KGs to assist LLMs in making predictions (Yasunaga et al., 2021; Lin et al., 2019; Feng et al., 2020). For example, KagNet (Lin et al., 2019) employs graph neural networks to model relational graphs, enabling relational reasoning in symbolic and semantic spaces. QA-GNN (Yasunaga et al., 2021) learns representations by connecting QA context and KGs in joint graphs. However, these methods often involve training additional knowledge-aware modules like graph neural networks (GNNs), which can be challenging to adapt to novel domains and may underutilize the strengths of LLMs.
1. KG-enhanced LLM Pre-training: This research group focuses on incorporating knowledge graphs during the pre-training stage of Large Language Models. By doing so, it aims to improve the ability of LLMs to express and understand knowledge. This typically involves modifying the pre-training process to make the model more aware of structured knowledge from KGs.
2. KG-enhanced LLM Inference: In this category, research uses knowledge graphs during the inference stage of LLMs. This allows LLMs to access the latest information from KGs without requiring retraining. This is particularly useful for keeping LLMs up-to-date with current knowledge.
3. KG-enhanced LLM Interpretability: This group focuses on leveraging knowledge graphs to better understand the knowledge learned by LLMs and to interpret the reasoning process of LLMs. This can help make the decision-making process of LLMs more transparent and interpretable.
Integrating knowledge graphs with large language models is an exciting area of research that can potentially improve the performance and interpretability of these models across a wide range of applications. It allows them to incorporate structured knowledge, stay current with evolving information, and provide more transparent reasoning in their outputs.
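A minimal sketch of the second group (KG-enhanced inference): look up triples about the entities in a question and prepend them to the prompt, so the model can use current facts without retraining. The triples and the prompt wording are invented placeholders; the assembled prompt would then be sent to whichever LLM API you use.

```python
# Toy knowledge graph as (subject, predicate, object) triples
KG = [
    ("ACME Corp", "is_customer_of", "Company X"),
    ("ACME Corp", "purchased", "Pump X1"),
    ("Pump X1", "available_in", "EMEA"),
]

def triples_for(entity: str):
    """Return every triple that mentions the entity."""
    return [t for t in KG if entity in (t[0], t[2])]

def build_prompt(question: str, entity: str) -> str:
    """Serialize KG facts into the prompt ahead of the actual question."""
    facts = "\n".join(f"{s} {p} {o}" for s, p, o in triples_for(entity))
    return (f"Known facts:\n{facts}\n\n"
            f"Question: {question}\n"
            f"Answer using only the facts above.")

print(build_prompt("Which products does ACME Corp buy, and where are they sold?",
                   "ACME Corp"))
```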
Let’s do another example with a Supply Chain Graph, incorporating SAP data on the entities involved: materials, components, products, suppliers, and customers.
This approach allows us to structure the complex SAP data model into nodes and relationships, thereby generating a holistic picture of how materials, components, and products flow from suppliers to customers. The inherent interconnections and dependencies become evident and analyzable. We believe the future of LLM-based applications is combining a vector similarity search approach coupled with database query languages such as Gremlin or SPARQL.
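One hedged sketch of that combination: use vector similarity to link a free-text query to the closest node in the graph, then expand context with a traversal. Random vectors stand in for real embeddings, networkx stands in for the graph store, and the supply chain nodes and relations are illustrative.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
g = nx.DiGraph()
for node in ["Supplier A", "Raw Material 42", "Plant 1000", "Finished Good 7"]:
    g.add_node(node, vec=rng.normal(size=64))          # stand-in node embedding
g.add_edge("Supplier A", "Raw Material 42", rel="supplies")
g.add_edge("Raw Material 42", "Plant 1000", rel="consumed_at")
g.add_edge("Plant 1000", "Finished Good 7", rel="produces")

def closest_node(graph, query_vec):
    """Vector-similarity step: pick the node whose embedding best matches the query."""
    def sim(v):
        return float(query_vec @ v / (np.linalg.norm(query_vec) * np.linalg.norm(v)))
    return max(graph.nodes, key=lambda n: sim(graph.nodes[n]["vec"]))

query_vec = rng.normal(size=64)                        # stand-in query embedding
start = closest_node(g, query_vec)

# Graph step: one-hop expansion around the matched node as extra LLM context
context = [(start, d["rel"], v) for _, v, d in g.out_edges(start, data=True)]
print("matched node:", start)
print("graph context:", context)
```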
How Knowledge Graphs Help LLMs Reason Better
Knowledge graphs have emerged as a powerful way to structure and encode world knowledge in a machine-readable format. Traditional passage retrieval techniques based on vector similarity have limitations in their reasoning abilities. This article explores how modern large language models (LLMs) can enhance the construction of high-quality knowledge graphs and how these augmented graphs, combined with content retrieval, are poised to shape the future of retrieval systems.
Building knowledge graphs traditionally involved complex processes like entity extraction, relationship extraction, and graph population. However, tools like Llama Index’s KnowledgeGraphIndex leverage LLMs to automate these tasks, including entity recognition and relation extraction, reducing the barriers to using knowledge graphs effectively.
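A hedged sketch of that automation with LlamaIndex’s KnowledgeGraphIndex; exact module paths have moved between LlamaIndex releases, an LLM API key is required for the triplet extraction, and the folder name and question are illustrative.

```python
from llama_index.core import SimpleDirectoryReader, KnowledgeGraphIndex, StorageContext
from llama_index.core.graph_stores import SimpleGraphStore

documents = SimpleDirectoryReader("./sap_exports").load_data()   # illustrative folder
storage_context = StorageContext.from_defaults(graph_store=SimpleGraphStore())

# The LLM extracts (subject, predicate, object) triplets from each text chunk
index = KnowledgeGraphIndex.from_documents(
    documents,
    storage_context=storage_context,
    max_triplets_per_chunk=5,
)

query_engine = index.as_query_engine(include_text=False)
print(query_engine.query("Which vendors are linked to overdue invoices?"))
```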
Knowledge graphs offer a promising path beyond traditional vector similarity for passage retrieval when combined with LLMs and content retrieval methods. They enhance multi-hop reasoning capabilities and have the potential to shape the future of intelligent retrieval systems, providing greater flexibility and depth of understanding.
The convergence of knowledge graphs and Large Language Models exemplifies a robust strategy for optimizing innovative search systems over structured data, particularly when addressing inquiries necessitating the harmonious integration of structured and unstructured data sources to provide contextually relevant responses.
Figure 1, extracted from research at Southeast University Nanjing: Knowledge Solver. An example comparing the vanilla LLM in (a) and the zero-shot Knowledge Solver in (b) for question-answering tasks. LLMs search for the necessary knowledge to perform tasks by harnessing their own generalizability. Purple represents nodes and relations in the LLM’s chosen correct path.
Figure 2, extracted from research at the University of Illinois. For each question-answer choice pair, the relevant knowledge subgraph is retrieved and encoded into a text prompt injected directly into LLMs to help them perform knowledge-required tasks. In this question-answering scenario, LLMs interact with the provided external knowledge to choose the correct path for answering the question.
Conclusion
Some pioneering research efforts involve fine-tuning various LLMs, such as BART and T5, for KG-to-text generation. These approaches often represent the input graph as a linear traversal, and this simple approach has shown success, outperforming many existing state-of-the-art KG-to-text generation systems.
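A small sketch of the "linearize the graph, then generate" idea: triples are flattened into a single string and fed to a seq2seq model. A base t5-small is used only to show the plumbing; producing fluent text would require a model actually fine-tuned on KG-to-text pairs, and the task prefix below is invented.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

triples = [("ACME Corp", "purchased", "Pump X1"),
           ("Pump X1", "available in", "EMEA")]

# Linear traversal of the graph: one flat subject-predicate-object string
linearized = " | ".join(f"{s} : {p} : {o}" for s, p, o in triples)

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("graph to text: " + linearized, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```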
While graph databases have been around for many years, they have never gained much popularity, although this may change fast. In this blog, I introduced how the RAG framework can combine the capabilities of knowledge graphs and large language models (LLMs) to enhance reasoning and information retrieval. These are some of the benefits:
- Structured Context: Knowledge graphs provide structured data, enhancing an LLM’s understanding of relationships between entities and improving context-aware reasoning.
- Efficient Information Retrieval: Knowledge graphs enable precise and efficient information retrieval, reducing noise and improving the accuracy of responses.
- Multi-hop Reasoning: LLMs can perform multi-hop reasoning by traversing knowledge graph paths, allowing them to answer complex questions and make inferences based on structured data.
Organizations can create custom knowledge graphs tailored to their specific domain or industry. This gives LLMs access to domain-specific knowledge, making them more valuable in specialized applications. However, this is a real challenge for SAP environments: very complex on ECC systems and less complicated, though still not easy, on S/4HANA.
Interestingly, one of the first courses SAP has introduced on Generative AI covers how the new Graph API can be utilized to develop Large Language Model (LLM) based applications.
In the next blog, I will discuss how Knowledge Graphs can help with a ubiquitous problem for LLMs in the enterprise, especially for SAP, and how to overcome Role-Based authorization issues.
Hi Mario De Felipe,
The Enterprise ChatGPT is at the beginning of its evolution, but it will improve as it proves its efficiency and gains the trust of users concerned with privacy. Many companies are still using ECC and have not yet transitioned to S/4HANA due to other priorities set by third parties and authorities, as well as the significant challenge of making holistic modifications to their systems and the entire company. I have published a series of blogs containing working code to help readers understand what is behind the scenes of LLMs' capabilities. My latest blog, "Building Trust in AI: YouTube Transcript OpenAI Assistant", provides an online app built with RAG libraries that lets you ask questions about extracted transcripts.
Regards,
Sergiu
Hi Sergiu, I love your demo!
I would say, for the readers, the difference between HANA ML and an LLM like ChatGPT is the following (and please add your thoughts here)
SAP has already had Supervised/Unsupervised learning for some years now. With SAP Leonardo, we already had RESTful API calls on the former SAP Cloud Platform, now BTP, where data could be transferred using Data Intelligence Cloud (back then called SAP Data Hub) to build the pipelines and the ingestion.
The data had to be processed and the algorithms built, on top of the former SAP Vora, Hadoop, or other Big Data engines, by a Data Engineer, an ML specialist, or a similar person familiar with TensorFlow.
In 2023, we deploy models that have been pre-trained to understand human language and generate high-quality text. We will not use the SAP platform to build our models, but we can use it to interact with a model through an API (like ChatGPT), while using a framework or platform like Streamlit to develop and orchestrate the workflow: what needs to feed the LLM, and whether the model needs to call an external API (from SAP or others) to formulate the reply.
In your demo, you created the ML Model from scratch, using the HANA Cloud framework from SAP.
How would you build it by leveraging an LLM and not becoming an ML specialist?
Maybe we could still load the provided CSV file into a DataFrame using Pandas (for local processing) and classify the data, using Llama Index for that.
Given that LLMs are not great at this kind of classification, we could then create relevant features that could be useful for predicting employee retention. This might involve transforming or combining existing elements using techniques like one-hot encoding or label encoding.
Choosing the correct model could be quite interesting, since LLMs like GPT-3 are primarily designed for natural language understanding and generation. Classic approaches such as Random Forest, Gradient Boosting, or Logistic Regression, or encoder models like BERT or T5, could do the work, and we could focus on preparing the dataset for the model to understand, and on the graph representation of the entities.
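As a rough sketch of that classic route (the file name, column names, and model choice are invented for illustration):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("employee_data.csv")                    # illustrative file
X = pd.get_dummies(df.drop(columns=["left_company"]))    # one-hot encode categoricals
y = df["left_company"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))
```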
What do you think?
Hi Mario,
Data scientists can accelerate their projects with ChatGPT Plus by following these initial steps without worrying about privacy leaks:
1. Generate synthetic data from legacy data.
2. Create prompt instructions: "You are a machine learning expert. Help me with the exploratory data analysis (EDA) process step by step. Generate Python code, execute the code, and improve it. Build models with XGBoost, Keras, and other libraries, and compare results." Prompt engineering instructions help to get closer to an AI agent behaviour like Auto-GPT.
3. Upload the data and follow the process.
4. Afterward, you can use the code locally with legacy data.
5. Of course, you can go further and give instructions to ChatGPT Plus to be an expert AutoML and build the model without specific instructions. Once the model is built, you can even request to predict and output results in JSON or CSV format.
6. If you want to automate this entire process, you have to use the GPT-4 API. There is one problem with using the GPT-4 API for a large amount of data: the huge cost of tokens! So this brings us back to getting working Python code and predicting any amount of data for free!
Regards,
Sergiu
Concepts are understood faster with a playground. For experiments, I found the series of blogs "Improving GPT-3 Q&A Experiences with In-context Learning over Knowledge Graph" interesting.
Beautiful article listing the limitations of RAG with vector search and how KGs can help overcome them.
Currently, we are also leveraging RAG and a vector DB and are planning to move to KGs, but the initial challenge remains how to create a good KG from SAP data.
Can you please plan an article on the same?
Thanks, Manoj; we are heading in the same direction.
Next is using the graph capability of HANA Cloud; integrating third-party data on HANA Cloud became easier with DataSphere.
My question is, can I expose HANA Cloud through an API? I think so, but I am not sure. Denys van Kempen, do you know?
Good one, Mario. I like the KG perspective (new to me) in association with Gen AI.
Thanks, Viren. Then you will love my next blog about building a Generative AI application on BTP that interacts with Amazon Bedrock for Q&A, with RAG.