
How to Identify Decision Hotspots (Is Cyber one of them?)

Marco Nutini



In my last post on LinkedIn, I encouraged readers to identify a Hotspot to begin implementing Contextual Risk Assessment and Decision-Making (CRADM), a method that leverages the synergy between systems thinking, knowledge graphs, and AI Agents.

I’m not the only one who believes uncertainty should be wired into strategy and decision processes. The challenge isn’t believing in it—it’s executing it.

Let's look at one ubiquitous example.


The Decision Process Embedding Cyber Risk

Norman Marks recently highlighted, in a blog post, a fundamental issue that cuts to the core of the CRADM methodology. He describes a trade-off involving cyber risk that exists in every company, and the decision process that manages it:


 How much should cyber matter?

The answer is not the same as the quantified level of risk.

Other factors need to be considered, such as the effect of any further spending on the level of risk (i.e., how much will spending reduce it).

What is the ROI for further spending, and can the organization live through a breach?

What is the ROI for spending on other business needs, whether to mitigate different sources of risk or to seize opportunities?

You can’t decide between different spending needs if they are quantified and measured differently.

We need to understand how a breach (and all other sources of risk, but we are focusing on cyber for now) could affect the running of the business and the achievement of management and the board’s objectives.

This is what I would do and recommend to all:


  • Recognize that cyber is a business risk, not a technology risk. We say that, but what does it mean? It means that we should be concerned with and measure cyber risk based on how it affects the business, not just how it affects technology.

  • Facilitate a workshop with attendance by operating management; supporting functions like Finance, Compliance, and so on; the IT applications and other managers; the InfoSec leads; and others as needed.

  • Identify how a breach could affect the business. That requires the involvement of all at the workshop.

  • Identify and assess the likelihood of a severe impact; how much should be spent (in theory) to bring the likelihood of an unacceptable impact down to manageable levels.

Consider the options, which could include not only increasing defenses but enhancing responses. It may be possible to obtain insurance, but its usefulness should be challenged. Is it even possible to bring the risk down to acceptable levels?

  • For each option, determine how much it will reduce the risk. What is the ROI for any investment?

  • Will the level of risk still be unacceptable? Is further investment worth it?


Now the information on cyber risk is ready to be compared to similar information on other sources of risk and opportunity. How does the ROI on cyber investment compare?


The full article is available on Norman Marks' blog.



I think he is right as usual. My interpretation of Norman’s point is that the relative importance of cyber security depends on the company's strategic context and that its risk level is just one variable in the equation. Yet, the only way to contextualize a decision is to use a proper standardized risk quantification procedure that encompasses all types of uncertainties and allows comparisons.
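
To make that comparison concrete, here is a minimal sketch (in Python) of how different spending options, cyber and non-cyber, can be put on the same footing: expected annual loss reduction per unit of spend. The option names and figures are purely illustrative assumptions, not data from any real assessment.

```python
# Compare spending options on a common, quantified basis.
# All figures are hypothetical placeholders, not real assessment data.

options = [
    # (name, annual cost, expected annual loss before, expected annual loss after)
    ("Extra cyber defenses",     400_000, 2_000_000, 1_100_000),
    ("Enhanced breach response", 250_000, 2_000_000, 1_400_000),
    ("Supply-chain redundancy",  300_000, 1_500_000,   900_000),
]

for name, cost, before, after in options:
    reduction = before - after          # expected annual loss avoided
    roi = (reduction - cost) / cost     # return per unit of spend
    print(f"{name:28s} risk reduction: {reduction:>10,.0f}  ROI: {roi:6.1%}")
```

Only when every option is expressed in the same units can the trade-off Norman describes actually be decided.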


Anyone who has tried bringing properly quantified, aggregated and contextualized risk into an ongoing strategic debate knows this is not a trivial task.


It’s much easier to isolate risk in its cage, assign a color code, and hold a “let’s-get-worried-together” session to request an annual action plan from IT, than to feed uncertainty insights that bring real-time business meaning to a high-level decision process.


The Contextual RA & DM methodology ensures proper governance of decisions and recognizes interdependencies.


This requires overcoming business complexity, where AI and Machine Learning provide significant assistance. In my opinion, it is a fair price for an immense payoff.


What is a Decision Hotspot?

Imagine a T-shaped framework where the horizontal bar represents a pipeline. Flowing through it are:


  • Assumption: The performance or risk tolerances derived from objectives and built into the operational and financial plans. In cyber security, this means defining the maximum loss from a breach that the plans would still handle effectively, using available mitigations (flexibility, contingencies, insurance, economic capital, and so on).

  • Decision: The need to solve the trade-off between more investment in security vs. other business priorities. Do we spend more, accept the uncovered tail of the risk distribution, or manage it differently? In cyber security, these questions will never go away.

  • Key Result (or Target): The expression of the overarching business objective, e.g., uptime of mission-critical systems, customer experience, supply chain continuity (just so we don't forget what we are trying to achieve).
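
If it helps to see the horizontal bar as something machines could hold, here is a minimal sketch of the three elements as simple data structures, using the cyber example. The field names and values are illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str            # tolerance derived from objectives into the plans
    max_tolerable_loss: float

@dataclass
class Decision:
    question: str             # the trade-off to be solved, revisited continuously
    options: list[str]

@dataclass
class KeyResult:
    objective: str            # the overarching business objective we must not lose sight of
    target: str

cyber_t = (
    Assumption("A breach costing up to this amount is absorbed by the plans", 5_000_000),
    Decision("Spend more on security, accept the tail, or manage it differently?",
             ["increase defenses", "enhance response", "insure", "accept"]),
    KeyResult("Uptime of mission-critical systems", ">= 99.9% monthly"),
)
```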


All companies test assumptions, refine decisions, and reset targets as new data emerges—even if not consciously. Continuous adaptation is crucial and already happens everywhere.

The problem is that too many companies, or sectors inside a company, do it in an unstructured, ungoverned manner and with ill-timed, insufficient, or invalid information. The T is broken or clogged, not working.

A typical major company holds over a hundred relevant decision hubs like cyber security. However, only a few usually qualify as Hotspots.

The Vertical Bar of the T: The Data Pipeline

Every IT security operation generates a tsunami of data. Right now, even in major organizations, someone is still tasking an intern to analyze incoming data—resulting in those infamous messy PowerPoint decks.

This is where AI Agents shine. An application can access new data, analyze it, and inform decision-making through a predictive model, creating the vertical pipeline. This is going to happen with or without Risk Management's participation.

AI Agents need context. Context is key—poor or unclear context means a bad connection between data and decision-making. Without the key, there is no T.

Are we talking about key risk indicators? Yes—but not only. We’re also talking about incidents and KPIs all mixed up in that vertical pipeline. They need sorting and structuring, which a graph provides.

In some cases (cyber risk, for example) we need regressions, simulations, and scenario analysis, because the day-to-day process data are insufficient. In other words, we generate and feed synthetic data to supplement the lack of observable instances. The same thing happens with some ESG materiality assessments and every assumption that is founded on an unknown future.
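
As a concrete (and deliberately simplified) illustration of generating synthetic data, the sketch below simulates annual cyber loss outcomes from assumed frequency and severity distributions. The parameters are illustrative guesses; in practice they would come from calibrated estimates or a method such as FAIR.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_losses(n_years=10_000,
                           breach_rate=0.8,        # assumed expected breaches per year
                           median_loss=1_500_000,  # assumed median cost of one breach
                           sigma=1.0):             # assumed dispersion (lognormal)
    """Generate synthetic annual aggregate loss data (frequency x severity)."""
    losses = np.zeros(n_years)
    counts = rng.poisson(breach_rate, size=n_years)   # how many breaches each simulated year
    for year, k in enumerate(counts):
        if k:
            losses[year] = rng.lognormal(np.log(median_loss), sigma, size=k).sum()
    return losses

annual_losses = simulate_annual_losses()
print(f"Mean annual loss: {annual_losses.mean():,.0f}")
print(f"95th percentile:  {np.percentile(annual_losses, 95):,.0f}")
```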

The T-junction is where an assumption meets data for decision-making.

The main mission of CRADM is to test assumptions with real and synthetic data until they fail, and to explain the chance of that happening within a proper decision environment.
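
Building on the previous sketch, the "test until it fails" step can be as simple as comparing the synthetic (or observed) data with the assumption's tolerance and reporting the chance of breaching it. The tolerance and acceptable breach probability below are illustrative assumptions.

```python
def assumption_sensor(losses, max_tolerable_loss, acceptable_breach_prob=0.05):
    """Compare data flowing through the pipeline with a planning assumption.

    Returns the estimated chance that the plans' tolerance is exceeded and
    whether that chance is still within what the decision-makers accepted.
    """
    prob_exceed = (losses > max_tolerable_loss).mean()
    return prob_exceed, prob_exceed <= acceptable_breach_prob

prob, holds = assumption_sensor(annual_losses, max_tolerable_loss=5_000_000)
print(f"Chance of exceeding the tolerable loss: {prob:.1%} "
      f"({'assumption holds' if holds else 'assumption fails: candidate Hotspot'})")
```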

What Makes a Hotspot?


A Hotspot arises when a critical assumption is:


  • Not being properly validated against real-world or synthetic data (the “T-junction” reality check is turned off).

  • Producing unexpected performance outcomes or bearing too much risk (in comparison to a valid benchmark), i.e., it has failed the test.

  • Being decided and acted upon at an inadequate governance level or with insufficient risk information.


You don’t need to map every relevant assumption-decision pair in the company—but it would be extremely useful to do so, and that’s exactly what Project Enchiridion (publish.obsidian.md/enchiridion) is working on: providing templates.

An important remark: Hotspot is not a synonym for “top risk”. Top risk is a risk-centric invention, commonly based on the belief that risk is simply the product of likelihood and impact and needs no context.


Summarizing: the horizontal bar of the T is Context, and the vertical bar is the Data Pipeline.

In Contextual RA & DM, risk is wired (or woven, if you prefer) into both and is not assessed in isolation anymore. A contextual graph does exactly that: provides the wiring.

In the image, I depict risk assessment as a type of “pipeline sensor”, comparing assumptions with data as planned. Of course, it is only a drawing, but the analogy has always worked for me.

The same applies to ad-hoc, special-occasion decisions, such as an investment or a critical go/no-go situation. Many disasters are avoided every day because of contextual reasoning and clarification of assumptions, and many others would not have occurred had assumptions been discussed more deeply.


Would FAIR Methodology Help?

Yes! FAIR (Factor Analysis of Information Risk) is great for quantifying cyber risk—it’s a bow-tie model with embedded simulation, much like Archer Insight.

However, FAIR is a risk taxonomy, not a context model. It belongs to the risk assessment universe, as a quantitative tool. Even as a valuable synthetic data generator, FAIR doesn’t answer the top-level strategic questions about cyber risk.
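
For readers unfamiliar with it, here is a minimal sketch of the FAIR-style factorization: loss event frequency derived from threat event frequency and vulnerability, then combined with loss magnitude in a simulation. The parameter values are illustrative assumptions, and the frequency-times-magnitude product is a simplification; a real FAIR analysis decomposes the factors much further and samples individual loss events.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Assumed, illustrative inputs for one cyber loss scenario.
tef = rng.uniform(2, 10, n)             # threat event frequency (attempts / year)
vulnerability = rng.beta(2, 8, n)       # probability an attempt becomes a loss event
lef = tef * vulnerability               # loss event frequency (loss events / year)
loss_magnitude = rng.lognormal(np.log(800_000), 0.9, n)   # cost per loss event

annualized_loss = lef * loss_magnitude  # simplified annualized exposure per trial
print(f"Mean annualized loss: {annualized_loss.mean():,.0f}, "
      f"90th percentile: {np.percentile(annualized_loss, 90):,.0f}")
```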


How to Identify a Hotspot?

1. Ask Management. Before diving into data, start with leadership—they should have a good idea of critical risky assumptions.

2. Use a specialized Large Language Model like RAW@AI* to help identify risks and scan for potential hotspots based on assumptions.

* An LLM trained by Alex Sidorenko (radar.riskacademy.ai).

My Verdenost Applied Ontology at Enchiridion also provides a suitable platform for brainstorming over decision hubs and assumptions.

3. Ask the right question:


  • What is the most dangerous assumption we are either betting on or downplaying?


Ironically, even though cyber risk is hyped as the #1 global concern, many companies have actually developed quite reasonable cyber security assumptions—and the decisions are well founded and backed by data. In that case, cyber would not be a hotspot, at least for the time being.

What about the risk registers in spreadsheet format? Keep maintaining them; external auditors and regulators still demand them. They help create discipline at the operational level, but for strategy they are unimportant. What difference does it make if cyber is red, orange, or yellow? Is management going to stop making decisions about cyber based on that?

For each hotspot that you contextualize properly, that context becomes the risk register. That’s what wiring means.


7 STEPS for Contextual Risk Assessment and Decision-Making

So, you’ve identified a T-Hotspot? Here’s how to turn it into action:

1. Adopt an Ontology—either for the company or for the specific Hotspot.

A semantic layer is essential for structuring context. It is like a systems diagram that machines can read, understand and memorize.

I’ll be covering this in more detail in the upcoming webinar (check below).

There’s free software for this—you don’t need a big budget.

Take action before upper management invests in an expensive software solution like Palantir that may not fully address the real challenges.
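
As one example of the free tooling mentioned above, the sketch below uses the open-source rdflib library to declare a tiny semantic layer for a Hotspot. The class and property names follow the T framework in this post and are illustrative only, not a finished ontology; the namespace URI is a placeholder.

```python
from rdflib import Graph, Namespace, RDF, RDFS

CRADM = Namespace("http://example.org/cradm#")   # placeholder namespace
g = Graph()
g.bind("cradm", CRADM)

# Core classes of the T framework (illustrative subset).
for cls in ("Assumption", "Decision", "KeyResult", "Indicator", "DataSource"):
    g.add((CRADM[cls], RDF.type, RDFS.Class))

# A few relationships machines can read and reason over.
g.add((CRADM.testedBy, RDF.type, RDF.Property))
g.add((CRADM.testedBy, RDFS.domain, CRADM.Assumption))
g.add((CRADM.testedBy, RDFS.range, CRADM.Indicator))
g.add((CRADM.informs, RDF.type, RDF.Property))
g.add((CRADM.informs, RDFS.domain, CRADM.Assumption))
g.add((CRADM.informs, RDFS.range, CRADM.Decision))

print(g.serialize(format="turtle"))
```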

2A. Improve the existing decision process using tools from the Quality toolkit.

When the company really owns an ontology, the gaps in the context will become clearer. These could be a lack of policy and tolerances, unclear targets and assumptions, poor indicators, corrupt data, etc.

This is good old continuous improvement of internal controls, which may have been downplayed and gone unaudited for a while. Very few internal auditors know how to audit a decision process.

2B. Only if this is new for the company: select a proper quantitative risk assessment method for the context at hand. Qualitative scoring is dangerous for decision-making, especially when working strategically. Why would a company go to the trouble of mapping its strategic assumptions only for someone to say that the chance of success is ‘medium-low’?

3. Select and train a Large Language Model (LLM).

A generic LLM will not help much, because it was not trained in the specific context. Even if you are not going for full automation, a dedicated, distilled LLM can significantly improve decision-making through querying (i.e., a specialized chat), especially in complex situations. It will become a cherished advisor, I guarantee.

Additionally, the LLM will help you in the next phase.
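
A minimal sketch of what "querying a specialized chat" can look like in practice: facts retrieved from the contextual graph are placed in the prompt so the model answers inside the company's context. The facts and the call_llm function below are hypothetical placeholders for whatever graph and model you end up using.

```python
def build_contextual_prompt(question: str, graph_facts: list[str]) -> str:
    """Ground the question in facts pulled from the contextual graph."""
    context = "\n".join(f"- {fact}" for fact in graph_facts)
    return (
        "You are the decision-support advisor for this Hotspot.\n"
        f"Known context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the context above; flag anything that is missing."
    )

facts = [
    "Assumption: plans absorb a breach costing up to 5,000,000.",
    "Latest sensor reading: 9% chance of exceeding that loss this year.",
    "Decision pending: increase defenses vs. enhance response vs. insure.",
]
prompt = build_contextual_prompt("Is our cyber assumption still holding?", facts)
# response = call_llm(prompt)   # hypothetical placeholder for your chosen model/API
print(prompt)
```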

4. Develop a Contextual Graph for the Hotspot.

Based on the ontology, you map the specific improved decision process and document it using software such as Neo4j, GraphWise, ArangoDB, or other similar graph applications.

By then, you will have reinforced the training of the LLM with the graph. This is self-learning, because the LLM helped design the graph.
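
Here is a minimal sketch of documenting one Hotspot in Neo4j using its official Python driver. The connection details, node labels, and relationship names are illustrative assumptions following the T framework, not a prescribed schema.

```python
from neo4j import GraphDatabase

# Illustrative connection details; adjust to your environment.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

cypher = """
MERGE (a:Assumption {name: $assumption})
MERGE (d:Decision   {name: $decision})
MERGE (k:KeyResult  {name: $key_result})
MERGE (i:Indicator  {name: $indicator})
MERGE (a)-[:INFORMS]->(d)
MERGE (d)-[:PURSUES]->(k)
MERGE (i)-[:TESTS]->(a)
"""

with driver.session() as session:
    session.run(
        cypher,
        assumption="Plans absorb a breach costing up to 5M",
        decision="Cyber spend vs. other business priorities",
        key_result="Uptime of mission-critical systems",
        indicator="Simulated annual loss distribution",
    )
driver.close()
```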

5. Map the data pathways.

The graph will provide the connection points for real data, forming a graph database that will grow with time. You’ll need to explore and define how the data is created, collected, and stored, so that an AI Agent will know where to pull it from.
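
One lightweight way to record those pathways is a small registry that tells an agent, for each indicator node in the graph, where its data lives and how often to pull it. The system names, locations, and cadences below are illustrative assumptions.

```python
# Illustrative registry linking graph indicator nodes to their data sources.
data_pathways = {
    "Simulated annual loss distribution": {
        "source": "risk-simulation job",                          # where the data is created
        "location": "s3://example-bucket/cyber/losses.parquet",   # where it is stored
        "refresh": "monthly",
    },
    "Incident count (last 30 days)": {
        "source": "SIEM export",
        "location": "warehouse.security.incidents",
        "refresh": "daily",
    },
}

def where_to_pull(indicator: str) -> dict:
    """Tell an agent where to fetch the data behind a graph indicator node."""
    return data_pathways[indicator]

print(where_to_pull("Incident count (last 30 days)"))
```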

6. Design the Solution Blueprint.

Using n8n, Snorkel, Crew, Flowise or similar tools, you will create a workflow with function calls. These low-code tools can take you a long way before coding starts.
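
Before committing to a specific low-code tool, the blueprint can be prototyped as a plain sequence of function calls. The sketch below reuses functions from the earlier sketches in this post and is only an illustrative stand-in for what n8n or a similar tool would orchestrate; the escalation rule and the commented model call are assumptions.

```python
def run_hotspot_workflow(indicator: str, max_tolerable_loss: float) -> str:
    """Illustrative end-to-end flow: fetch data, run the sensor, report."""
    pathway = where_to_pull(indicator)                 # step 5: locate the data
    losses = simulate_annual_losses()                  # stand-in for fetching real data
    prob, holds = assumption_sensor(losses, max_tolerable_loss)
    facts = [
        f"Indicator '{indicator}' refreshed from {pathway['source']}.",
        f"Chance of exceeding the tolerable loss: {prob:.1%}.",
    ]
    prompt = build_contextual_prompt("Should we revisit the cyber decision?", facts)
    # response = call_llm(prompt)                      # hypothetical model call
    return "no action needed" if holds else "escalate to the decision forum"

print(run_hotspot_workflow("Simulated annual loss distribution", 5_000_000))
```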

7. Develop and deploy the solution.

Now, you’ll need developers (internal or outsourced) to move on to production.


But if you’ve followed these steps, you’ll be speaking their language, and they’ll understand the context.


Even if you stop at step 3 above (documenting the improved decision process and training the LLM), a lot of value will already have been created simply from better governance and decision-making support.


Full Agentic AI solutions are still rare, but the trend is clear: the number is growing fast. In Risk and Compliance Management, Fraud Detection has been the main target in these early days of the Age of GenAI.


2025 has been called the year of AI Agents by most tech gurus. Be prepared!


 
 
 
