About

WHAT IS STRING

[Image: STRING overview diagram]

We developed a methodology—codenamed STRING—to support the design and deployment of integrated risk and business intelligence systems grounded in dynamic decision systems, knowledge graphs, and quantitative risk analysis.

STRING stands for Strategic and Tactical Risk Intelligence using Networks and Graphs. The name also reflects an intentional design choice: in STRING, analysis begins with natural-language artifacts (strings), such as narratives, assumptions, objectives, and decision rationales. These artifacts are progressively transformed into structured representations suitable for computation and governance.

Importantly, natural language is treated as a starting material, not as an execution mechanism. STRING enforces a staged transformation from narrative inputs to formal decision representations.

The methodological pipeline follows an explicit entropy-reduction path:

Unstructured narrative (strings) → Conceptual decision model → Typed entities and relations → Application knowledge graph → Executable decision and risk models

 

This progression mirrors how decision-makers reason in practice while enabling formalization, validation, and automation.

STRING is not a prompt-engineering framework, nor a replacement for quantitative risk methods, nor an attempt at autonomous decision-making. It is a methodology for progressively constraining narrative uncertainty into executable decision systems.

As a result, decision-makers gain the ability to explore the system of interest and understand its performance and shortcomings, increasing their ability to make sound decisions under uncertainty. Furthermore, the graph structure allows timelines to be captured and enables decisions to be revisited for evaluation and learning.

STRING is agnostic with respect to the underlying graph representation model. In practice, implementations may rely on either RDF-based graphs (subject–predicate–object triples with formal semantics) or labeled property graphs (nodes and edges with attached properties), depending on architectural constraints and use-case requirements.
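As a minimal sketch of the difference (all identifiers and property values below are illustrative, not taken from STRING), the same fact can be encoded in either model in plain Python:

```python
# The same fact, "Control C1 mitigates Risk R7", in both models.
# All identifiers and property values are illustrative.

# RDF style: a set of subject-predicate-object triples with URI-like terms;
# semantics live in the shared vocabulary (rdf:type, ex:mitigates).
rdf_triples = {
    ("ex:Control_C1", "rdf:type", "ex:Control"),
    ("ex:Risk_R7", "rdf:type", "ex:Risk"),
    ("ex:Control_C1", "ex:mitigates", "ex:Risk_R7"),
}

# Labeled property graph style: nodes and edges carry properties directly,
# which suits execution and quantitative queries.
lpg_nodes = {
    "C1": {"label": "Control", "name": "MFA rollout"},
    "R7": {"label": "Risk", "name": "Credential theft"},
}
lpg_edges = [
    {"from": "C1", "to": "R7", "type": "MITIGATES", "effectiveness": 0.6},
]
```

The RDF form keeps meaning in a shared vocabulary of terms; the LPG form attaches quantitative properties (such as effectiveness) directly to the edge, which is convenient for execution.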

STRING recognizes three layers of graph representation:

 

Enterprise ontology

→ formal specification of concepts and relations (typically using RDF/OWL);

Domain ontology

→ reusable semantic layer for a specific field (cyber, supply chain, ESG, health and safety, etc.);

Application knowledge graph

→ instantiated graph for a concrete decision system (typically using a labeled property graph).

STRING treats RDF-based ontologies as semantic constraints and LPG-based graphs as execution substrates.

This may sound like a cumbersome technical discussion for newcomers, but remember that LLMs handle the translation from natural language into the appropriate formal representation. The key point is understanding what these concepts represent.

STEP 1

We recommend that organizations begin with a pilot case.

Step 1 involves creating a conceptual model that represents inputs (risks and metrics), the decision to be made, objectives and assumptions, and the expected outcomes of a specific decision system (actions).

A conceptual model is a free-form representation of the causal chain that drives a decision. It can even be drawn by hand, since Large Language Models are now multimodal and can interpret sketches.

Ideally, conceptual models should be cyclical, because actions generate outcomes that alter risks and metrics—a feedback loop typical of any system.

 

STEP 2

The next step is to move into a computerized environment.

If there is no intention to simulate the decision using Generative AI (for example, ChatGPT, Claude, Gemini, Grok, or Copilot), or to connect live data sources, the conceptual model alone may be sufficient. A simulation can be run using available data, results can be presented to management, and a decision can be made. This already represents good risk analysis practice, although it is limited compared to what current technology enables.

STRING becomes particularly relevant when an LLM is used, which we strongly recommend. Beyond supporting modeling and simulation, LLMs can also act as orchestrators of agents.

LLMs thrive on structured inputs and memory. A domain ontology improves in-context learning and allows the exercise to be replicated elsewhere in the organization and scaled up. In a pilot case, the enterprise ontology has probably not been developed yet, so this is the time to start drafting the domain level using the instance at hand.

An ontology is made up of:

 

  • Semantics: a controlled vocabulary for disambiguation.

  • Taxonomy: the hierarchy of each entity and its sub-entities.

  • A knowledge graph schema: the set of relationships between entities and a specification of their properties.
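These three components can be sketched as plain Python data structures (every term and definition below is an illustrative placeholder, not a normative vocabulary):

```python
# The three ontology components as plain Python data.
# Every term and definition below is an illustrative placeholder.

vocabulary = {  # semantics: one agreed definition per term
    "Risk": "An uncertain event that affects objectives",
    "Control": "A measure that modifies risk",
    "Decision": "A commitment to a course of action",
}

taxonomy = {  # hierarchy of each entity and its sub-entities
    "Risk": ["CyberRisk", "SupplyChainRisk", "CreditRisk"],
    "Control": ["PreventiveControl", "DetectiveControl"],
}

relations = [  # allowed relationships between entities, with their properties
    {"subject": "Control", "predicate": "mitigates", "object": "Risk",
     "properties": ["effectiveness"]},
    {"subject": "Risk", "predicate": "affects", "object": "Decision",
     "properties": ["weight"]},
]
```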

 

In computer science, ontologies must have a formal syntax that adheres to standards like RDF or OWL. Ontologies can be represented as hypergraph-like structures, since relationships and axioms tend to involve more than two entities.

An ontology is not itself a knowledge graph to be used in production; it defines the vocabulary, constraints, and relations that application knowledge graphs instantiate.

For example, an ontology defines RISK as an entity. An application knowledge graph designed for a cyber security decision system defines REMOTE CONTROL OF OT/ICS INSTALLATION as an instance of RISK.
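A minimal sketch of that type/instance relationship, with a simple conformance check (the identifiers are illustrative):

```python
# The ontology declares the types; the application knowledge graph
# instantiates them. Identifiers are illustrative.

ontology_types = {"Risk", "Control", "Decision"}

instance = {
    "id": "risk-001",
    "type": "Risk",
    "name": "Remote control of OT/ICS installation",
}

def conforms(node, allowed_types):
    """A node is valid only if its declared type exists in the ontology."""
    return node["type"] in allowed_types

# conforms(instance, ontology_types) -> True
```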

Several researchers have demonstrated that LLMs can construct effective formal ontologies from natural language. If you feed a real-case conceptual model to an LLM and ask it to suggest an ontology to sit on top, it will likely provide a useful draft, based on its training corpora in the domain.

This is a potential starting point, and you can always refine the ontology as you move on and become more experienced. Analyzing ontologies is an important skill for anyone who wants to communicate effectively with GenAI tools, and ontologies give those tools a structured way to store and retrieve domain knowledge.

 

STEP 3

A knowledge graph is a network of triples. Each triple is composed of a subject (the start node), a predicate (the relationship, or edge), and an object (the end node).

 

  • Control mitigates Risk.

  • Assumption affects Risk.

  • Risk affects Decision.

  • Performance Attribute affects Decision.
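The triples above can be stored and queried directly in plain Python, which is enough for a paper-based pilot (the helper function is a hypothetical sketch, not part of any platform):

```python
# The four example triples, stored and queried in plain Python.
triples = [
    ("Control", "mitigates", "Risk"),
    ("Assumption", "affects", "Risk"),
    ("Risk", "affects", "Decision"),
    ("PerformanceAttribute", "affects", "Decision"),
]

def objects_of(subject, predicate):
    """Return every end node reached from `subject` via `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# objects_of("Risk", "affects") -> ["Decision"]
```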

 

There are hundreds of platforms that design and host knowledge graphs; some of them require using a standardized ontology, while others let the user design KGs without a predefined ontology.

If you are not going to use online data from several sources, the application knowledge graph for the pilot case can be just a list of triples written on a piece of paper. Its utility lies in providing the LLM with context about the situation in a structured way (in-context learning). In that case, data is fed to the LLM manually, just as in a risk quantification task where the user inputs data into Excel.

On the other hand, if live data is to be connected automatically to the DDN application, you will need graph database software to host the knowledge graph. We have tried many products, including the incumbent market leader, Neo4j. Many are excellent, so there is plenty of choice.

Graph platforms can maintain a persistent connection to the systems where the underlying data is stored, using connector code. The graph structure allows certain classes of queries—especially relationship traversal and dependency analysis—to be expressed and executed more naturally than in SQL-based relational databases.
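A small sketch of such a traversal: finding everything that directly or transitively influences a decision, the kind of multi-hop question graph databases answer natively (the edge names here are illustrative):

```python
from collections import deque

# Illustrative edge list; a graph database stores these relationships natively.
edges = [
    ("Assumption", "affects", "Risk"),
    ("Control", "mitigates", "Risk"),
    ("Risk", "affects", "Decision"),
    ("Metric", "affects", "Decision"),
]

def upstream_of(target):
    """Every node that directly or transitively influences `target` --
    a multi-hop traversal that graph queries express naturally."""
    incoming = {}
    for source, _, destination in edges:
        incoming.setdefault(destination, []).append(source)
    found, queue = set(), deque([target])
    while queue:
        node = queue.popleft()
        for source in incoming.get(node, []):
            if source not in found:
                found.add(source)
                queue.append(source)
    return found

# upstream_of("Decision") -> {"Assumption", "Control", "Risk", "Metric"}
```

In SQL, the same question would require a recursive join; in a graph query language it is a one-line variable-length pattern.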

 

STEP 4

There are many open-source graph software tools that individuals can experiment with, but large organizations running Agentic AI projects require secure platforms to ensure proper curation and controlled access to production or strategically sensitive data.

Many financial institutions and insurers are already investing in this technology (Agentic AI) in areas such as credit analysis, fraud detection, and claims management. However, manufacturing and services companies remain reluctant for now.

A graph database connects metadata from a knowledge graph with live data, making it easier to integrate information from disparate sources and answer a wide range of questions. Inference capabilities are expanded.

 

For example:

>What is the relevance of the following attributes to farmers’ default risk in the Southern region?

 

  • Purchasing large amounts of pesticide in the last three years (>100 kg)

  • Net Promoter Score in the most recent survey

>If we reduce the current 12-month number of violations associated with vulnerability V-564 by half, what would be the impact on total cyber risk?

 

STEP 5

The next step is to run a risk simulation.

STRING relies on quantitative tools such as FAIR, but quantitative analysis is a broad category. Monte Carlo simulation is only one option. Other techniques—including bow-tie analysis with probabilities, kill-chain modeling, and regression analysis—already represent a significant improvement over qualitative scoring approaches.

Simulations may be automated in digital twin or agentic AI applications, or executed manually during proof-of-concept phases.

STRING does not introduce new quantitative risk models. Instead, it provides a decision-system envelope around established quantitative techniques, such as Monte Carlo simulation, Bayesian inference, regression analysis, bow-tie models, or the FAIR methodology.
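As an illustration of the kind of quantitative engine STRING wraps, here is a minimal FAIR-style Monte Carlo sketch in Python (event frequency times severity per event; all parameter values are invented for illustration, not calibrated estimates):

```python
import math
import random

def poisson(rng, lam):
    """Knuth's algorithm: draw a Poisson-distributed event count."""
    threshold, count, product = math.exp(-lam), 0, 1.0
    while True:
        product *= rng.random()
        if product <= threshold:
            return count
        count += 1

def simulate_annual_loss(n_trials=10_000, freq=1.5,
                         sev_low=50_000, sev_mode=120_000, sev_high=500_000,
                         seed=7):
    """Annual loss = number of loss events (Poisson) times severity per
    event (triangular). All parameter values are illustrative."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        events = poisson(rng, freq)
        totals.append(sum(rng.triangular(sev_low, sev_high, sev_mode)
                          for _ in range(events)))
    totals.sort()
    return {"mean": sum(totals) / n_trials,
            "p95": totals[int(0.95 * n_trials)]}
```

The output is a loss-distribution summary (mean and 95th percentile) of the kind STRING then links to decisions, assumptions, and monitoring.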

Quantitative risk models remain responsible for producing probabilistic outputs (e.g., loss distributions). STRING ensures that these outputs are:

 

  • explicitly linked to decisions,

  • contextualized within assumptions and objectives,

  • monitored over time,

  • and embedded in governance processes.

 

If the application you are developing is to be a digital twin or an Agentic AI workflow, the risk simulation can be automated to run whenever there is an important change in variables or a new query arrives. In fact, many risk management platforms already do this.

With the simulation outcomes in hand, decision-making can already take place.

 

STEP 6

Risk professionals envision a future in which risk information is fully integrated into business performance dashboards. There is only one caveat: executives do not like to admit uncertainty publicly.

As a result of this reluctance, risk monitoring is often treated as a secondary, behind-the-scenes activity, in which the risk function follows up on action plans, checks isolated indicators, and looks for hidden patterns.

If an organization runs Business Intelligence applications, risk indicators such as simulation outcomes, probability of success, warnings, external volatility, etc., can be fed into the platform (Power BI, Tableau, Qlik, etc.) and integrated into dashboard analytics. All major graph database platforms have built-in integration capabilities with BI applications.

Risk integration with BI is doable without AI or graphs, of course, but these technologies open up a new level of dynamism through automation. This means: new data arrives in a data lakehouse; an agent triggers a simulation; the outcomes are transferred to BI and interpreted.
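That automation loop can be sketched as a simple trigger rule (every function name and threshold here is a hypothetical placeholder for a platform-specific integration):

```python
def should_trigger(old_value, new_value, threshold=0.10):
    """Agent rule: re-run the simulation only when a monitored variable
    moves by more than `threshold` (10% by default)."""
    base = abs(old_value) if old_value else 1.0
    return abs(new_value - old_value) / base > threshold

def on_new_data(record, simulate):
    """Pipeline step: data lakehouse event -> optional simulation ->
    payload for the BI dashboard. `simulate` is injected so any
    quantitative engine (Monte Carlo, FAIR, ...) can plug in."""
    if not should_trigger(record["old_value"], record["new_value"]):
        return None  # minor fluctuation: dashboard keeps the last results
    return {"variable": record["variable"],
            "result": simulate(record["new_value"])}
```

Injecting the `simulate` callable keeps the trigger logic independent of whichever quantitative model the organization adopts.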

It is hard to understand why board directors are still not demanding this, since they are entitled to question how uncertainty influences decisions and should be able to receive early warnings as well.

 

STEP 7

There is a fundamental difference between an AI-driven enterprise intelligence platform and other enterprise-level OLTP applications, such as ERP, CRM, Supply Chain Management, Billing, and Inventory.

The latter applications show what happened, not what may happen in the future. In these transactional systems, correctness and throughput matter more than insight.

 

  • OLTP systems are event stores. They execute the business.

  • Graph systems are meaning stores. They explain, connect, and reason about the business.

 

The enterprise intelligence platform observes the outputs of transactional platforms, combines them with external scenarios, and brings to management’s attention decisions that are not process-driven or systematic (already handled by BI), but are instead driven by changes (real or hypothetical), early warnings, and relevant discrepancies that require strategic intervention.

 

STEP 8

When an organization reaches the stage of connecting BI to an enterprise intelligence platform, it needs a map of critical decision systems to obtain a top-level, eagle-eye portfolio view.

After accomplishing this task, the organization has a strategic monitoring system in which risk is embedded. We are not in favor of labeling this a risk management system, as standards such as ISO 31000 do. In advanced organizations, risk management systems tend to become diluted within broader strategic systems.

This shift is already being recognized by some regulators in banking and insurance (such as the PRA in the UK), which are requiring inventories of models used for critical decision-making, accompanied by assessments of each critical model’s risk (a meta-risk, by the way).

 

STEP 9

Finally, the time to discuss Enterprise Risk (or Decision, if you will) Ontology has arrived. This is an important step toward scaling up and connecting ontologies from vertical domains (compliance, cyber, supply chain, ESG, etc.) to a portfolio-level view, without losing meaning.

If we—risk management and business intelligence professionals—believe that AI will be an important part of our working lives in the future, then we need to start defining the ontologies that will teach AI agents how our organizations should function, given a strategy.

The STRING image above shows that, in theory, Enterprise Ontology would come before anything else. However, we are simply acknowledging the fact that most organizations will prefer to start with a pilot case.

Project Enchiridion

Enchiridion is our first attempt to map all the major decisions made in a fictitious company called Verdenost, an agribusiness cooperative.

It also includes the first draft of a Decision-centric Risk Ontology and related material.

CONTACT

Get in touch

Av. Cassiano Ricardo 601

12246-870

São José dos Campos - SP- Brasil

LOOPNUT CONSULTORIA LTDA.

CNPJ: 17.551.435/0001-27

nutini@riskleap.com
+55 (12) 99121-4336

© 2025 by Loopnut. All rights reserved.
