Data Integration, Full-Scale Mobilization, Scenario Construction -- The World's Largest Sovereign Wealth Fund Shares "How to Use AI"

Recently, Norges Bank Investment Management (NBIM), the manager of Norway’s sovereign wealth fund, held its first Artificial Intelligence Seminar. During this publicly accessible event, senior executives and key staff disclosed in detail the logic behind their internal AI strategy, the accompanying organizational changes, and ten specific application cases covering investment decision-making, trade execution, legal compliance, and more.

At the opening, the fund’s executives candidly pointed out that the development of this technology “has not been smooth sailing, but has been continuously climbing, almost reaching a level of vertical growth.” For asset management organizations, the real challenge lies in how to absorb and utilize these technologies across a large organization.

The fund has set an ambitious goal: “By the end of 2028, reduce all manual processes by half.” This is not just a technological upgrade but a profound transformation touching corporate culture and operational models.

Laying the Foundation: In-house Operations, Cloud Migration, and Data Architecture Overhaul

NBIM staff reviewed that since 2015, the organization has undergone multiple transformations, with three major initiatives paving the way for AI:

  • In-house operations: Bringing back settlement, fund accounting, valuation, etc., from outsourcing to “control the process and also master the knowledge”;
  • Moving IT to public cloud: Pursuing “unlimited data growth” and real-time scaling, freeing from hardware refresh cycles;
  • Modernizing databases: Old databases couldn’t match cloud scalability, so they migrated everything to modern architectures.

To “cleanse the data,” Tangen described the difficulty bluntly: “Cleaning data is not fun at all. It’s the most boring job in the world. Will anyone thank you for cleaning data? No… Basically, you tell them, ‘On January 31, we’re shutting down the old data.’ If you find no data available the next day, you look stupid.” After these upgrades, NBIM claims to have “a high-quality, clean, and well-organized internal and external data warehouse that can be used for artificial intelligence.”

Full Staff Mobilization: 20 AI Ambassadors + Mandatory Training

NBIM defines AI promotion as organizational engineering. Its AI leader mentioned that the organization established a “network of ambassadors/advocates”: 20 volunteers within the organization tasked with identifying practical use cases and pushing forward with AI projects supported by the AI team and Anthropic, “which helps us start projects twice a week.”

Training is “mandatory.” Tangen emphasized this with a repeated stern message: “This is mandatory. Do people like mandatory? No, they hate it because it’s like going back to elementary school. Can they volunteer? No, because the ones who need help most are the ones who don’t want to participate. It must be enforced, and we have to keep a close eye on them, understand, okay?”

Regarding tool adoption, NBIM reports that “more than half of employees are using Claude Code to create solutions” and “more than two-thirds have registered and started using it”; additionally, about 70% of users work with the development tools.

Starting with Fragmentation: 171 Projects Identified, but “No Perfect AI Use Cases”

NBIM divides its AI transformation into three phases: first, providing tools/training/experiments; second, seeking high-value use cases that can “comprehensively improve”; and finally, continuous iteration and upgrades.

However, their conclusion in the second phase is not “feel-good.” The AI leader said that the team identified “171 new projects” through interviews and workshops but “did not find the so-called ‘perfect AI use case’.” He summarized pragmatically: “The good news is, before starting the transformation, our efficiency wasn’t low. The bad news is, we had to complete all these small projects first to truly improve efficiency. So, the workload is enormous.”

Meanwhile, NBIM is adjusting its R&D approach: traditional Scrum “is very time-consuming,” and a better method is “keeping only two developers and one business person,” working in smaller teams to accelerate delivery with AI.

10 Use Cases Displayed on the Wall: From Investment Decisions and Cybersecurity to Financial Report Generation

NBIM showcased real-world applications at a pace of three minutes per use case, covering front-to-back investment processes and support functions:

  • Investment: Hour-deadline decisions on block trades, using collaborating intelligent agents to save time: The investment team receives about “200 similar requests” annually, in which investment banks propose large stock sales requiring a response “within an hour.” The team deploys multiple agents for web searches, clause extraction, index-effect calculations, and more, aiming to “obtain a complete decision basis in a very short time.” As the team put it: “When Goldman Sachs asks questions, we spend less time collecting data and more time analyzing it.”

  • Cybersecurity: Handling about “10 trillion” data points annually, AI cuts triage from half an hour to five minutes: The security team collects about “10 trillion data points” yearly, filtering them down to between “100,000 and 1 million” suspicious signals. Now, “I get calls in the middle of the night, and one of our agents starts working simultaneously,” and “it can complete in five minutes what used to take me half an hour… it never gets tired.”

  • Meeting Preparation: Over 3,000 meetings annually, reclaiming nearly “10,000 hours”: NBIM states that in 2025 it held “more than 3,000 corporate meetings,” each requiring “about three hours” of preparation, totaling nearly “10,000 hours” annually. Multi-agent systems analyze the materials, with a final agent evaluating output quality and emphasizing traceability to avoid fabricated information; the team plans to add a “simulation component” to predict the other side’s responses.

  • Compliance Monitoring: 6 sub-agents + main agent “Eva” to reduce false-alarm fatigue: The compliance team dissects each trade alert along six dimensions (trade background, index rebalancing, company news, industry news, timing patterns, and corporate interactions), evaluated in parallel, with results aggregated by the enhanced alert agent “Eva,” which escalates only cases that are ambiguous, uncertain, or require a final human decision.

  • Financial Fraud Detection: Building a case library to train models, outputting “probability of stock price decline”: For about 7,000 companies, the team cleans “the past 16 years” of accounts, training models to identify financial embellishments. They built a dataset of “thousands of historical cases,” with models outputting “the percentage probability that such cases cause stock price drops,” and “it’s already in production.”

  • Automated Financial Disclosure: A 2.5-person team saves 8 days, front-loading analysis before closing: The fund’s accounting team previously relied on complex Excel sheets and manual work. Now, they rebuild from “a single data source,” using Claude Code tools for automatic summaries and imports; “forex and tax analysis can be done with one click on the second working day… full automation saves our small team (2.5 people) eight days.”

  • Responsible Investment Screening: An 8-person team uses AI to screen 7,000+ companies across 60 countries: The responsible investment team states that manual screening would require “3,000 analysts working overtime on weekends”; now, a two-stage model screens public information and produces structured risk reports, with “analysts re-engaging to make decisions,” including company dialogue or divestment if needed.

  • Legal Negotiations: Negotiation simulator predicts “over 80%” of arguments: Legal team says AI can assist in planning negotiation strategies and voice simulations, “we can predict over 80% of arguments,” and extend AI to contract analysis, uncovering clause patterns and relationships.

  • Trade Execution & “Market Impact”: Estimated at about $14 billion last year: NBIM notes that the fund trades in over 60 markets, with about 250 internal portfolios, causing “market impact”—“estimated around $14 billion last year.” Their approach includes using AI for price trend predictions to “cultivate patience,” and better internal fund allocation. “I checked our cash reserves this morning. We have $10 billion on hand. Last year, we stored over $120 billion.” He added that, based on current cost structures, “this figure could approach $20 billion,” emphasizing AI as a “cherry on top,” but also as a driver for process and understanding improvements.

In conclusion, Tangen summarized that this presentation reflects “at least our current situation,” because “model updates are so rapid: this technology evolves daily and weekly, with new models and opportunities emerging.”

Below is the transcript

Speaker 1 00:00

Warm welcome to everyone attending Norges Bank’s first AI seminar. This is a momentous occasion because we’ve never seen such technology before. Its development has not been smooth, but has been continuously climbing, almost reaching a level of vertical growth.

Speaker 1 00:20

What can this technology do now? The question is about the oversupply of technology—we wonder if we can fully leverage all these tools. I believe the real challenge is how organizations can absorb and utilize these technologies. As you know, we are the most transparent fund in the world. Why is transparency good? I think it’s because it allows people to understand our operations, and we believe this builds trust. Equally important, it enables us to look outward and see what the world is doing. That’s why we’ve been very eager to adopt this new technology, because we also communicate with leaders worldwide to understand what benefits it can bring if applied correctly.

Speaker 1 01:14

We think inviting you all is fantastic, not because we see ourselves as perfect, but because we want to learn from you. We share experiences, and we hope you share with us. Even better, there’s no competition between us; we can collaborate and share best practices nationwide. I dream that we can work together, with both public and private sectors, to boost productivity—an ambitious goal for this country.

Our company has many application cases. We could have chosen many different ones, but ultimately selected ten that showcase our various efforts. Some help us earn more money, some save costs, some improve efficiency, some enhance accuracy and quality, and others help us avoid tedious tasks. In this new era, we shouldn’t waste time on boring work. I hope everyone agrees with this.

Speaker 1 02:27

First, I’ll briefly introduce our current technological position. Bikita will give a quick overview of our AI development journey and how we’re working to benefit the entire organization. Lydia will explain the framework we’ve built to ensure proper implementation—compliant, safe, and reliable. Then Tron, along with ten colleagues, will showcase the use cases. But first, please let me finish my part.

Speaker 2 03:06

Thank you, Nicolai. Since 2015, NBIM has undergone multiple transformations. Today, I want to focus on three major initiatives that laid the foundation for our AI strategy.

First, we achieved in-house operations. Previously, we outsourced all activities like settlement, corporate actions, fund accounting, and valuation. But as we expanded into new markets, we needed deeper expertise and richer data. So, our solution was to bring all these activities back in-house, not only to control the process but also to master the knowledge.

Next, we migrated all IT infrastructure and systems to the public cloud. Before, we rented space in external data centers and outsourced technology. But we found a limit to data capacity, and we wanted unlimited data growth, instant scaling, and to escape server refresh cycles. After moving to the cloud, we quickly realized that the migrated old databases couldn’t meet the same demands or fully utilize the cloud providers’ scalability. So, we decided to migrate all databases to modern architectures to achieve similar scalability.

Speaker 1 05:06

Cleaning data is not fun at all. It’s the most boring job in the world. No one will thank you for cleaning data. How do you get people to do it? Basically, you tell them: ‘On January 31, we’re shutting down the old data.’ If the next day you find no data available, you look stupid. So, we often work late, and the entire team’s workload is huge.

Speaker 2 05:36

For all staff, organizing and rewriting code is crucial. Now, we have a high-quality, clean, and well-organized internal and external data warehouse for AI. These three points are the foundation of our success in AI. Next, I’ll hand over to Steham, who will introduce our subsequent work.

Speaker 4 06:14

Next, I’ll review our AI journey and the many challenges we faced, such as data errors and massive computational resource needs. It all started about two years ago when Nicolai invited Sam Altman from OpenAI and Dario Amodei from Anthropic to his podcast. He believed our efficiency should improve by 20%, so he said: “Steham, you handle it.” I replied: “Thanks, that’s an easy goal to achieve.”

How did we do it? Sam is like an inexhaustible battery, inspiring us all over the past two years. Everyone got the tools they needed, invested time, experimented, and created many projects exploring how to leverage AI to enhance capabilities. But if we want to improve efficiency by 20%, that’s still far from enough. To truly start using AI, we must change our habits. We need continuous encouragement and reinforcement. To do this, we created a skills enhancement program for everyone, which I’ll detail later. We also built a network to help everyone get started quickly and maintain momentum.

Speaker 4 07:35

In 2025, we launched a series of activities aimed at making AI a focus for everyone. First, we created an ambassador network, also called an advocate network.

Speaker 4 07:53

There are 20 volunteers within the organization. Their task is to identify valuable AI use cases within teams and push forward with projects supported by the AI team and Anthropic, which helps us start projects twice a week.

Speaker 4 08:17

We prepared two months of training for these ambassadors and AI teams to ensure smooth project progress. The ambassadors not only solve project challenges and showcase team strengths but also participate in other activities. Soon, we will see AI’s value across the company. The core message of this slide is: by 2025, everything happening at NBIM will be closely related to AI.

Speaker 4 08:56

If your organization holds meetings, AI will definitely be on the agenda. We held large tech seminars in London, Oslo, and Singapore, focusing on tech stacks, cloud computing, data warehouses, and of course, AI. AI is also a key topic at leadership summits. The promotion of AI continues, constantly reminding everyone to apply AI in daily work.

We not only trained AI ambassadors but also other organizational members. We designed seven 30-minute training courses, each on a different theme, such as AI ethics and interacting with Claude, aiming to foster critical thinking and responsible use. These trainings are open to all.

Speaker 1 09:40

This is mandatory. Do people like mandatory? No, they hate it because it’s like elementary school again. Can they volunteer? No, because those who need help most are the ones who don’t want to participate. It must be enforced, and we have to keep a close watch.

Speaker 4 10:08

Everyone has received training, time for experiments, and support from the AI team. We started with three people, now there are ten. We are catalysts, not the sole drivers of AI—this is driven by the entire organization, as you can see from the use cases. We are simply empowering AI, providing tools and platforms.

Regarding tools, we are using Claude-based solutions that everyone works with daily. More than half of employees are creating solutions with Claude Code, meaning over half of NBIM staff are coding. More than two-thirds have registered and started using these tools, and about 70% are using the Keshia development tools, with more users shifting to these solutions.

Speaker 4 11:30

Our AI transformation has gone through three distinct phases.

The first phase provided everyone with tools, training, and ample experimentation time. This bottom-up approach generated many projects internally—thousands of initiatives to test and get hands-on.

The second phase focused on: since our efficiency could improve by 20%, is there a use case that can comprehensively enhance NBIM? We interviewed CEOs, business leaders, held workshops, and identified 171 new projects. But we did not find the so-called “perfect AI use case.” The good news is, before the transformation, our efficiency wasn’t low. The bad news is, we had to complete all these small projects first to truly improve efficiency, which is a huge workload.

Speaker 4 12:37

In the final phase, we need to deliver all planned resources, including tools, experimental support, and projects we want to implement. But by fall, we realized AI was evolving so rapidly that our previous upgrade plans were outdated, so we had to do a second round of upgrades for everyone. We also focused on expanding Claude Code applications, especially for core developers, because we saw the data and its chain effects across the organization. Just last week, we held another two-day hackathon, again targeting core developers, to push AI implementation further.

Speaker 4 13:26

Finally, I want to mention that NBIM’s traditional project culture is based on Jeff Sutherland’s Scrum methodology from the 1990s, involving eight developers and one business person collaborating on business cases. This approach has many rituals, like daily stand-ups and sprint reviews, which are very time-consuming.

But with AI, we found this model no longer suitable. A better approach is to discard almost all cumbersome Scrum steps, keeping only two developers and one business person working together. They have autonomy and decision-making power, leveraging AI to accelerate project speed to a new level.

Speaker 4 14:16

But trusting AI to handle so much means we must ensure it also provides high-quality service—good code and reliable delivery. We need to trust everything it does and use it in a compliant manner. To this end, we’ve established a framework. Next, Lydia, our AI compliance officer, will explain in detail. Thank you.

Speaker 5 14:49

Welcome everyone. AI is evolving rapidly, changing how we work, interact with data, and make decisions. Our fund recognizes the importance of ensuring AI is always used responsibly, so we built a Responsible AI Framework. What does this mean in practice? Let me demonstrate.

Speaker 5 15:19

First, we set rules. Our “Responsible AI Guidelines” establish requirements for every employee when purchasing, building, or using AI. The guidelines comply with laws like the EU AI Act and globally recognized AI standards. They cover key areas such as protecting personal data and ensuring all AI systems used for investment support or HR decisions involve human intervention. The guidelines adopt a risk-based approach, meaning our handling of simple email filtering differs greatly from AI systems impacting users’ decisions. The rules are set, but how do we ensure they’re enforced in practice?

Speaker 5 16:15

Our operational model is a document translating AI principles into a practical governance framework. It details key processes from AI development to deployment and post-deployment, covering risk management, legal compliance, security, and more.

Speaker 5 16:42

The core of our governance is the AI Governance Working Group, composed of representatives from key NBIM teams. Their role is simple: ensure responsible AI is not just talk but practice. They closely monitor regulatory and industry developments, discuss AI issues, and seek solutions.

Speaker 5 17:22

We understand that the robustness of governance depends on the competence of its members, so we have trained all staff in responsible AI. We want everyone to understand AI’s current capabilities and limitations, critically evaluate outputs, and raise concerns. Responsible AI is not just compliance; it’s everyone’s responsibility.

Speaker 5 17:55

As technology advances rapidly, our governance must keep pace. We are confident in our effective system: a set of guiding principles, an operational model translating these into actionable processes, and a working group ensuring smooth operation. Most importantly, our staff—trained properly and embedded in a strong corporate culture—are practicing responsible AI daily.

These elements foster a responsible innovation culture, enabling us to work more efficiently, make bolder decisions, and always comply with laws and high standards. Next, I’ll hand over to Tron, who will introduce our AI strategy and showcase some applications. Thank you.

Speaker 1 19:02

Thank you. The key question is how to turn this foundation into something with real business value. First, what are your goals? At NBIM, our goal is to achieve the highest possible long-term returns in a safe, responsible, cost-effective, and transparent manner.

Speaker 1 19:22

We need a shorter-term strategy to achieve this. What are our goals for the next three years? In our recent report, we repeatedly mention AI applications across functions, roles, and every employee. Reducing all manual processes by half by 2028 is a bold target.

Speaker 1 19:55

We look forward to seeing the results. The key is everything we’ve experienced—having a cloud-native infrastructure, infrastructure as code instead of physical hardware; data lakes and Snowflake cloud data; the tools we have; organizational skills improved; capabilities in place; and proper safeguards. Now, fully leveraging this new technology depends on each of us.

Next, we’ll quickly show you ten different use cases, each in three minutes, starting with our core business—investment. ULA will explain how a team of five manages $2 trillion.

Speaker 1 20:45

European stock markets. Thanks, Tron. Imagine Goldman Sachs contacts you, saying Ferrari’s largest hedge asset wants to sell $30 billion worth of stock—more than three weeks’ normal trading volume. Goldman Sachs is reaching out to several major investors, and you need to know whether to participate, how much to bid, and at what price, all within an hour.

We receive about 200 such requests annually. Over time, these trades have contributed billions of dollars to the fund’s excess returns. We see clearly that the better we are at using data to decide when to give up, when to participate, and when to go all-in, the more money we make—because each trade involves risk and is different.

Speaker 1 21:59

To decide on this trade, we need to understand many things: who is the seller, why the market expects this trade, how similar past trades were conducted, what’s a reasonable price, and whether it will trigger index-tracking funds or other institutions to buy.

Speaker 1 22:19

But the challenge is the limited time. Data is everywhere—external and internal sources, in text, numbers, databases, and web searches. Moreover, one data source’s output can influence another’s, making automation very difficult because code can’t solve all problems, nor can language models. We need both.

Speaker 1 22:49

So, we built intelligent agents—dedicated AI programs with specific tasks and tools that work together. For example, one agent searches online to find the true owners behind holdings; another extracts key data points from trade texts and sends them to a third agent; the third runs an algorithm to see if it triggers index effects. In fact, more agents and tools are involved.

The key is that within a very short time, we can obtain a complete decision basis, with more data and better analysis than ever before. We first built a prototype within the investment team, then with help from developer Yifan, who helped turn our initial ideas into reality. Thus, when Goldman Sachs asks questions, we spend less time collecting data and more analyzing it, making better decisions and earning more profit. Thank you.
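The agent chain described here can be sketched minimally. Everything in the example below is an illustrative assumption (function names, the offer format, the three-weeks-of-volume threshold); it is not NBIM's actual system, and real agents would call a language model rather than parse text with string splits.

```python
# Hypothetical sketch of the block-trade agent pipeline: one agent extracts
# key data points from the broker's offer text, another checks whether the
# block is large enough to trigger index effects. Illustrative only.

def extract_terms(offer_text: str) -> dict:
    """Agent 2: pull key data points out of the broker's offer text."""
    terms = {}
    for line in offer_text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            terms[key.strip().lower()] = value.strip()
    return terms

def check_index_effect(terms: dict, adv_shares: float) -> bool:
    """Agent 3: flag whether the block could move index-tracking funds."""
    size = float(terms.get("shares", 0))
    # Assumed rule of thumb: more than three weeks of normal volume.
    return size > 3 * adv_shares

offer = "Ticker: RACE\nShares: 9000000\nDiscount: 4%"
terms = extract_terms(offer)
print(terms["ticker"], check_index_effect(terms, adv_shares=2_000_000))
```

In the real system the output of one agent feeds the next, which is exactly why, as the speaker notes, neither pure code nor a language model alone suffices.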

Speaker 1 24:10

Next, let’s talk about communication. Transparency is vital for our fund—we might be the most transparent fund. Our communications team’s strength is not only strong communication skills but also a focus on data-driven approaches. To better illustrate use cases, please listen to the relevant introduction.

Speaker 6 24:30

Echo is our real-time overview tool for all communication channels. We are not developers, but we independently developed it using AI tools. Over the past year, we’ve worked to turn it from a statistical tool into one providing real insights. The fund has been mentioned in nearly 50,000 articles, more than 5,000 of them this year alone. For a media team of only two people, tracking all coverage is nearly impossible, so we built an AI-driven sentiment analysis system to address this.

It’s a multi-agent system: each article is processed by a main agent, which assigns tasks to specialized sub-agents that classify sentiment, engagement, media priority, article type, prominence of the fund, and topics and figures mentioned. All data is stored directly in our Snowflake data warehouse. Existing media monitoring tools are expensive and not very effective, so building our own system is cheaper and customizable.
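The fan-out pattern the team describes, a main agent dispatching each article to specialised sub-agents and merging their labels into one record, can be sketched as follows. The classifier stubs stand in for LLM calls, and the Snowflake write step is omitted; all names here are assumptions for illustration.

```python
# Illustrative main-agent / sub-agent fan-out for article labeling.
# Each sub-agent labels one field; the main agent merges the results.

def classify_sentiment(text: str) -> str:
    # Stub for an LLM call; a trivial keyword heuristic for the sketch.
    negative = {"criticised", "loss", "scandal"}
    return "negative" if any(w in text.lower() for w in negative) else "neutral"

def classify_prominence(text: str) -> str:
    # Stub: how prominently the fund features in the article.
    return "high" if "the fund" in text.lower() else "low"

SUB_AGENTS = {
    "sentiment": classify_sentiment,
    "prominence": classify_prominence,
}

def main_agent(article: str) -> dict:
    """Dispatch the article to every sub-agent and merge the labels."""
    record = {"text": article}
    for field, agent in SUB_AGENTS.items():
        record[field] = agent(article)
    return record  # in the real system this row would land in Snowflake

rec = main_agent("The fund criticised for quarterly loss.")
print(rec["sentiment"], rec["prominence"])  # negative high
```

The appeal of this shape is that adding a new label (engagement, media priority, article type) is just one more entry in the sub-agent table.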

Here you see the sentiment analysis dashboard in Echo. This is historical data showing many negative reports. We built a timeline to easily see who wrote what. We also built an insights feature that uses AI to summarize reports and highlight key points, helping us understand coverage and underlying drivers, and identify where action may be needed.

Finally, we built a chatbot on Echo that is proficient in all communication data. We no longer need to manually search dashboards; just ask questions like “Analyze social media engagement,” and Echo will fetch data from Snowflake, pulling from LinkedIn, Instagram, and YouTube to generate reports. This isn’t a static view but an instant cross-channel analysis, trend detection, and strategic recommendation system. Previously, we had to log into each platform, gather data, and compile manually. AI enables us to build our own system, automate analysis, and make smarter, faster decisions.

Speaker 1 27:34

Accelerating like this is essentially risk management. Recently, you saw markets go up and down, energy prices fluctuate—these are manageable, and we can even leverage them. But one risk keeps us awake at night: cybersecurity. What’s the situation there?

Speaker 7 27:58

I work in the fund’s cybersecurity department. One responsibility is to think about how someone might attack us, steal funds, or commit fraud. My colleagues and I maintain a vast, invisible early warning network across all our digital infrastructure. It’s a huge data collection effort. To give an idea, we collect about 10 trillion data points annually about NBIM and its operations, then filter them down to somewhere between 100,000 and 1 million potentially suspicious signals, and further narrow those down to the most valuable few.

Speaker 7 28:54

For example, if a football-loving employee is watching a live match online and visits some untrustworthy sites, our intelligence shows these sites contain inappropriate content. I might get an alert in the middle of the night, needing to understand what’s happening. Usually, I get an alert indicating that this computer connected to suspicious sites. I then gather all relevant background info, sift through massive data points, and determine it’s just a normal user browsing a website, reconstructing the full event.

This requires human judgment: I decide where to investigate, what info to focus on, which signals are important, and whether it’s abnormal or problematic.

Speaker 7 29:58

When I get an alert at midnight, one of our AI agents starts working immediately. While I handle it, the agent performs the same process—identifies what info to check, what data to extract, and makes judgments, ultimately generating a report or investigation result similar to my initial triage.

Speaker 7 30:28

It performs very well, with rapid progress over the past year. To give a clearer picture, it can complete in five minutes what used to take me half an hour. Another advantage is it never gets tired; even with repetitive, similar content, it performs with consistent energy. Thank you.
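The triage flow the speaker walks through (gather context around an alert, judge the signals, emit a report) can be sketched like this. The category names and the escalate/benign rule are assumptions for illustration, not the fund's actual detection logic.

```python
# Minimal sketch of automated alert triage: collect context events around
# an alert, check them against known-bad categories, and produce the kind
# of verdict report a human analyst would write. Illustrative only.

SUSPICIOUS_CATEGORIES = {"malware-host", "phishing", "c2"}

def triage(alert: dict, context_events: list) -> dict:
    """Reconstruct the event around an alert and judge it."""
    hits = [e for e in context_events
            if e["category"] in SUSPICIOUS_CATEGORIES]
    verdict = "escalate" if hits else "benign"
    return {
        "host": alert["host"],
        "suspicious_events": len(hits),
        "verdict": verdict,
    }

report = triage(
    {"host": "laptop-42", "reason": "connection to untrusted site"},
    [{"category": "streaming"}, {"category": "ads"}],
)
print(report["verdict"])  # benign
```

The point of the real agent is that this judgment loop, which a human performs at 3 a.m. in half an hour, runs in minutes with consistent attention on every repetitive case.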

Speaker 1 30:55

I believe one of the fund’s key advantages is its scale and long-term investment horizon, making us an ideal partner for target companies. We can easily access these companies and meet with chairpersons, CEOs, and key teams. To better prepare for these meetings, we developed an AI application case, which Christina from London will explain in detail.

Speaker 8 31:30

Hello everyone, I’m from London. Over the past few months, we’ve been working on an exciting AI project that significantly improves key processes in our stock investments and interactions with portfolio companies. The AI team, portfolio managers, and the London team have maintained close collaboration.

Speaker 8 31:48

In 2025, NBIM held over 3,000 company meetings, each requiring about three hours of preparation—totaling nearly 10,000 hours annually that could be used more effectively.

Speaker 8 32:02

This is the core of our system development. First, it establishes our competitive advantage. As mentioned, we are a large long-term investor with a direct communication channel to company management. Second, we have a unique approach to company meetings, extensively trained in interview and inquiry techniques—difficult for leading external solutions to replicate.

Here you see an early version of our solution. It lists upcoming meetings with companies in the next few weeks. The model loads data, but only we have access to investment assumptions and meeting records.

Speaker 8 32:43

You can select the AI model, add instructions, attach documents, and these inputs are fed into a multi-agent system. First, one agent creates a plan; then three to five sub-agents research different resources; finally, an agent receives the output. This agent has been trained on our carefully prepared meeting examples and internal interview materials, evaluating input quality and judging whether it’s sufficient. The output includes our prompts and referenced resources to prevent false information. You can also identify our approach—posing questions that help build long-term relationships and focus on strategic development.
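The planner / researcher / evaluator chain described here can be sketched minimally. The function names are hypothetical; the one structural point taken from the talk is that the final agent keeps source references on every output item so the briefing stays traceable and avoids fabricated information.

```python
# Illustrative planner -> researchers -> evaluator chain for meeting prep.
# Research functions are stubs for LLM sub-agents querying real documents.

def planner(company: str) -> list:
    """First agent: break meeting prep into research topics."""
    return [f"{company} strategy", f"{company} recent results"]

def researcher(topic: str) -> dict:
    # Stub: a real sub-agent would search filings, news, and meeting notes.
    return {"topic": topic, "note": f"findings on {topic}", "source": "internal"}

def evaluator(notes: list) -> list:
    """Final agent: keep only points that carry a source reference,
    so every line of the briefing can be traced back."""
    return [f"{n['note']} [{n['source']}]" for n in notes if n.get("source")]

agenda = evaluator([researcher(t) for t in planner("Acme")])
print(len(agenda))  # 2
```

In the described system three to five researchers run in parallel and the evaluator is additionally trained on curated meeting examples to judge whether the input is good enough.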

Speaker 8 33:27

We also ensure the agenda can be iterated via chat, and we plan to develop a simulation component that uses podcasts, past meetings, and corporate communications to predict what the other side might say, helping you refine plans, achieve meeting goals, and get voice feedback on your hosting.

In short, this system helps portfolio managers automate information collection and structuring, allowing focus on strategic issues. Trained on best cases, the simulation feature will make us all better and maximize our competitive edge. Thank you.

Speaker 1 34:24

This fund conducts millions of trades annually across more than 60 markets, all heavily regulated, so we must ensure everything is legal and compliant. Oscar, how do you use technology to improve this?

Speaker 9 34:44

The risk is that real cases of insider trading and market manipulation are common. We see related reports and active enforcement in the Nordics. Market integrity is crucial for all market participants, and for investors like NBIM, record-keeping is fundamental. So, how do we achieve this?

Speaker 9 35:10

As regulations tighten, buy-side institutions now need to demonstrate trading surveillance capabilities, previously relying on trade units. To address this, NBIM adopted an external system in 2018 that uses advanced market risk models and issues alerts for manual investigation by compliance teams. But this system doesn’t understand our context; it doesn’t know whether a trade was due to rebalancing, index events, or prior contact with the company—such information still requires manual collection. Frankly, handling these alerts is a tedious process—checking the same things repeatedly, which leads to fatigue and wastes a lot of time on false positives.

Speaker 9 35:59

That’s what we’re changing now. We introduced an AI monitoring team of six sub-agents, each reviewing every alert the system generates along one of six dimensions: trade background, index rebalancing, company news, industry news, timing patterns, and corporate interactions. They evaluate alerts simultaneously and consistently. All assessments are consolidated by a main agent, called “Eva,” which generates a complete audit trail for each case. Eva is an expert in pattern recognition.

Cases are escalated to manual review only when they are ambiguous, cannot be judged automatically, or require a final human decision; those cases go to the compliance department. The team is the same size as before, but its coverage is now far broader. Thank you.
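A minimal sketch of this triage pattern: one scoring function per dimension, a main-agent aggregator that produces an audit trail, and escalation only for the ambiguous middle. The thresholds, score defaults, and decision labels are invented for the example; in the real system each dimension would be an LLM sub-agent with access to internal records.

```python
DIMENSIONS = ["trade_background", "index_rebalancing", "company_news",
              "industry_news", "timing_patterns", "corporate_interactions"]

def score_dimension(alert: dict, dim: str) -> float:
    # Stand-in for an LLM sub-agent; 0 = clearly benign, 1 = clearly suspicious.
    return alert.get(dim, 0.5)

def triage(alert: dict, low: float = 0.2, high: float = 0.8) -> dict:
    scores = {d: score_dimension(alert, d) for d in DIMENSIONS}
    avg = sum(scores.values()) / len(scores)
    if avg <= low:
        decision = "close"         # consistent benign explanation found
    elif avg >= high:
        decision = "escalate"      # consistently suspicious across dimensions
    else:
        decision = "human_review"  # ambiguous -> compliance team decides
    # The aggregator keeps every sub-score, giving a complete audit trail.
    return {"alert_id": alert["id"], "scores": scores, "decision": decision}
```

The design choice worth noting is that the audit trail records all six sub-assessments even for closed cases, which is what makes the automated triage defensible to a regulator.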

Speaker 1 37:14

Our management of this fund is based on a benchmark provided by the Ministry of Finance, covering about 7,000 companies. If we follow this benchmark, we buy roughly 1.5% of each company’s shares. The question is: do you really want to hold shares in all these companies? Morten, what do you think?

Speaker 7 37:36

Probably not. In forensic accounting, the challenge is weeding out the “bad apples.” On average, an analyst spends about two weeks thoroughly researching a single company, reviewing its financial statements and notes; screening all 7,000 companies that way would cost millions of dollars in manpower. So we need to expose the tricks that make these companies look better than they are—combing through the past 16 years of their accounts—and then train machines to recognize that kind of financial fraud.

Speaker 7 38:34

How do we do it? For example, we search for keywords such as “accounts payable delay.” When we find the keyword in a footnote, AI extracts the relevant sentences from the surrounding pages and pulls out the numbers of interest. In one case, a donut manufacturer had delayed accounts payable by $745 million. We store this data and learn from it. We are building multiple agents to identify such financial constructions—cases where companies manipulate income, costs, earnings, or cash flows to look better than reality, sometimes with very large adjustments.
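The keyword-then-extract step can be illustrated with a simple sentence scan. This is a toy sketch, not the production parser: the regex, the sentence splitter, and the sample footnote text are all assumptions, and a real pipeline would run over parsed filings rather than a string.

```python
import re

def extract_amounts(text: str, keyword: str) -> list[dict]:
    """Find sentences containing `keyword` and pull out dollar amounts."""
    hits = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if keyword.lower() in sentence.lower():
            amounts = re.findall(r"\$[\d,.]+\s*(?:million|billion)?", sentence)
            hits.append({"sentence": sentence.strip(), "amounts": amounts})
    return hits

# Illustrative footnote text echoing the example from the talk.
note = ("Other items were immaterial. The company recorded an accounts "
        "payable delay of $745 million in the fourth quarter.")
hits = extract_amounts(note, "accounts payable delay")
```

In practice the extraction step described in the talk is done by an AI agent reading the neighbouring pages, not a regex; the sketch only shows the shape of the output being stored (sentence plus the numbers of interest).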

Speaker 7 39:40

We use machine learning models to learn from all this data. We built a unique dataset: we reviewed the forensic accounting literature and collected thousands of historical cases of accounting manipulation—cases where, once the market discovered the manipulation, the stock price plummeted. We are training models to recognize similar patterns. The model outputs a probability: the likelihood that a company’s forensic-accounting red flags will lead to a stock-price decline. It is already in production and used daily, and we are developing further models to detect other types of financial fraud. Thank you.
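The modelling step—train on labelled historical cases, output a probability of a fraud-driven price decline—can be sketched with a tiny hand-rolled logistic regression. Everything here is illustrative: the two features, the toy training rows, and the labels are invented, and NBIM's actual model and features are not disclosed in the talk.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.5, epochs=500):
    """Fit logistic regression by stochastic gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def fraud_probability(x, w, b) -> float:
    """P(manipulation case leads to a price decline) for feature vector x."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy features: [payable_delay_ratio, accrual_spike];
# label 1 = a price decline followed disclosure, 0 = it did not.
rows = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7], [0.1, 0.2], [0.2, 0.1], [0.0, 0.3]]
labels = [1, 1, 1, 0, 0, 0]
w, b = train(rows, labels)
```

The point of the sketch is the output contract described in the talk: given a company's red-flag features, the model returns a percentage probability rather than a binary verdict, so analysts can rank companies for review.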

Speaker 1 40:54

Now, let’s look at financial statement generation. This is Todius, who will explain how we regularly produce full IFRS-compliant financial reports from millions of transactions across a wide range of financial instruments. Each quarter we prepare NBIM’s financial statements, notes, and analyses—a rigorous but resource-intensive process involving complex Excel workbooks with lengthy formulas and many manual steps.

Speaker 1 41:32

To meet our quality expectations, we invest significant time and effort. But this comes at a cost: we spend so much time in production mode that little time remains for deeper analysis and insight. Critical knowledge is concentrated in a few people, a dependency we wanted to eliminate. It is a high-quality process that deserved better infrastructure.

Speaker 1 42:10

Good processes can always improve, and AI provides us with tools for that. We decided to rebuild from scratch, working with the fund accounting team. We start with basic accounting data, establishing a single data source to ensure clean, structured, and reliable data so AI tools can perform optimally.

Our team has only two people, neither of whom is a developer, so we use Claude and Cursor to write code. Even the most complex calculations and summaries now run directly on our custom-built underlying dataset and flow automatically into our notes, financial statements, and analysis workflows. We maintain human oversight, applying accounting expertise and business logic to ensure accuracy, and we have built internal controls into the new workflows.
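The "single data source" idea—every statement line and note derived from one transaction-level dataset instead of hand-maintained spreadsheet formulas—can be shown with a minimal aggregation. The field names and sample transactions are invented for illustration; they are not NBIM's schema.

```python
from collections import defaultdict

# Toy transaction-level records; a real dataset holds millions of rows.
transactions = [
    {"instrument": "equity", "type": "dividend", "amount": 120.0},
    {"instrument": "equity", "type": "fx_gain",  "amount": -15.0},
    {"instrument": "bond",   "type": "interest", "amount": 80.0},
    {"instrument": "bond",   "type": "fx_gain",  "amount": 5.0},
]

def income_note(txns: list[dict]) -> dict:
    """Aggregate once per (instrument, type); every note and statement
    line reads from this single result rather than its own formula."""
    totals: dict = defaultdict(float)
    for t in txns:
        totals[(t["instrument"], t["type"])] += t["amount"]
    return dict(totals)

note = income_note(transactions)
```

The design benefit is exactly the one described: because each note is derived from the same aggregation, a correction to the underlying data propagates everywhere, instead of having to be re-keyed into several Excel workbooks.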

Speaker 1 43:10

The final result is a platform that delivers higher-quality analysis with earlier, faster output. Our reports can be generated before the 10th working day, with FX and tax analysis done in one click by the second day and securities-lending analysis by the seventh day.

Speaker 1 43:32

All this gives us time for investigation and correction, preventing delays. Full automation will save our small team (2.5 people) eight days annually, which can be used for analysis, control, quality assurance, and auditing, advancing the entire process.

Speaker 1 43:56

An excellent example is the note on collateral offset, previously prepared by one person over a week, now completed in a few hours. Our AI initiatives at NBIM are just beginning. We’re working with the Norwegian government to implement a new reporting tool. With this, most portfolio reports—from trading to official external reports—will be fully automated.

Speaker 1 44:29

To clearly demonstrate our current achievements, AI now automatically generates income and expense details for stocks, bonds, and derivatives, listing 11 FX gains/losses in the 2025 annual report.
