Industry Insights

Transfer Pricing in Artificial Intelligence: A Sector Deep Dive

Explore the emerging transfer pricing issues in artificial intelligence: training data ownership, model IP location, GPU compute allocation, and how AI business models challenge traditional benchmarking.

As of early 2026, neither the OECD nor the United States Treasury has issued transfer pricing guidance specifically directed at AI businesses. The OECD’s planned 2026 revision of the Transfer Pricing Guidelines may eventually address some of the issues discussed below, but the current authoritative position is that AI businesses are subject to the same general framework as any other multinational group. The analysis that follows applies the existing framework to AI fact patterns and is explicit where the application is settled, where it is contested, and where it is genuinely unsettled.

Foundation Model Providers and AI Applications: A Distinction That Matters

The transfer pricing profile of a foundation model provider differs substantially from that of an AI application company.

A foundation model provider trains large machine learning models from scratch. The economics are characterized by very large training compute costs (often the single largest line item), concentration of frontier research and engineering in a small number of jurisdictions (predominantly the United States, with some activity in the UK, Canada, France, and Israel), and the model weights themselves as the central intangible asset. The major frontier labs fit this profile.

An AI application company builds products on top of foundation models, whether through API access to third-party models or through fine-tuning of open-weight models. The economics resemble those of traditional SaaS more closely: customer acquisition costs, deferred revenue, regional sales operations, and product engineering. The principal AI-specific element is typically the cost of inference (model serving) and any proprietary fine-tuning, training data, or model orchestration that the company has developed.

For the foundation model sub-segment, the existing transfer pricing framework is stretched in ways that warrant separate analysis. For the AI application sub-segment, the framework largely holds, with some specific adjustments. The remainder of the article addresses each in turn.

Foundation Model Providers: Where the Framework Is Stretched

Three issues in foundation model businesses do not have clean answers under the existing framework.

Compute as a major intercompany flow. Training a frontier foundation model typically requires capital expenditure on compute infrastructure that is large relative to the company’s other operating costs. This compute is often procured at the parent or a designated entity, frequently through long-term commitments with hyperscale cloud providers, and then allocated to operating subsidiaries that consume it for development or inference. The transfer pricing question is what intercompany pricing applies to the allocation. The existing framework provides several candidate methods: cost allocation under a services framework, market-based pricing by reference to third-party cloud rates, or a cost-plus markup. Each produces a different result, and the choice depends on the functional characterization of the entity that procures and that consumes the compute. There is no settled view on which method is most appropriate, and the answer may differ by jurisdiction.
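The divergence among the three candidate methods can be made concrete with a sketch. The figures below are hypothetical and chosen only to illustrate that cost allocation, market-based pricing, and cost-plus produce different intercompany charges for the same compute; nothing here reflects actual rates or markups.

```python
# Illustrative comparison of three candidate methods for pricing an
# intercompany allocation of training compute. All figures ($M) are
# hypothetical and chosen only to show that the methods diverge.

procured_cost = 100.0      # procuring entity's cost for the compute block
market_rate_equiv = 130.0  # what comparable third-party cloud rates would charge
markup = 0.10              # assumed markup for a routine services characterization

def cost_allocation(cost):
    """Pass-through of cost with no markup (cost-sharing treatment)."""
    return cost

def market_based(market_price):
    """Charge by reference to comparable third-party cloud rates."""
    return market_price

def cost_plus(cost, markup):
    """Routine services characterization: cost plus a markup."""
    return cost * (1 + markup)

charges = {
    "cost allocation": cost_allocation(procured_cost),
    "market-based": market_based(market_rate_equiv),
    "cost-plus": cost_plus(procured_cost, markup),
}
for method, charge in charges.items():
    print(f"{method:>15}: ${charge:.1f}M charged to the consuming entity")
```

The spread between the lowest and highest charge is profit that lands in one jurisdiction or another depending on the method chosen, which is why the functional characterization of the procuring and consuming entities matters so much.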

Training data as a contested asset. The training data used to develop a foundation model has uncertain status under transfer pricing principles. Where data is acquired through licensing, the licensing arrangements themselves are intercompany flows that require analysis. Where data is collected through web scraping or partnerships, its legal status is itself a subject of pending litigation in multiple jurisdictions, and the allocation of associated risk between group entities is not addressed by any current TP guidance. Practitioners typically treat training data acquisition as a function within R&D for cost-sharing purposes, but the value of data to the resulting model is difficult to isolate from the value of the architecture and the training process.

Model weights as the central intangible. Trained model weights are arguably the single most valuable intangible asset that a foundation model provider owns. The OECD’s existing intangibles framework in Chapter VI of the 2022 Transfer Pricing Guidelines applies, but the practical application is not straightforward. Model weights are developed through a combination of architecture research, data acquisition and curation, large-scale training compute, and post-training refinement (fine-tuning, alignment, evaluation). The DEMPE analysis (development, enhancement, maintenance, protection, and exploitation of intangibles) under the 2022 Guidelines is the analytical tool, but applying it to the development of a foundation model produces a more dispersed set of contributing entities than is typical for, say, a piece of enterprise software. The hard-to-value intangibles (HTVI) provisions are directly relevant: the commercial value of a trained model is uncertain at the time of training and may diverge significantly from projections, in either direction.

A consequence of these features is that cost-sharing arrangements (CSAs under Treas. Reg. §1.482-7 in the United States, or cost contribution arrangements under OECD Chapter VIII) are an analytically attractive structural choice for foundation model groups developing IP across jurisdictions, but they raise hard questions about the valuation of platform contributions, the buy-in payments required, and the inclusion of stock-based compensation in the cost base (a settled issue within the Ninth Circuit following Altera, less settled elsewhere).

AI Application Companies: Mostly Familiar Territory

For AI application companies, the transfer pricing analysis is closer to that of any SaaS business, with three specific points worth flagging.

Inference cost allocation. Inference (the cost of running a model to serve user requests) is the largest variable cost line for many AI applications. Where inference is procured centrally and allocated to operating subsidiaries, the transfer pricing analysis follows the standard services framework, with the choice of cost-plus or services pricing depending on the functional characterization. The analysis is not novel, although the magnitude of the costs makes the choice of method more consequential than in traditional SaaS.

Proprietary fine-tuning and orchestration. Where an application company has invested in fine-tuning an open-weight foundation model, or in proprietary model orchestration and prompt engineering, these activities can create intangible assets in their own right. The valuation question is comparable to that for traditional software IP, although the speed at which fine-tuned weights become obsolete (because the underlying foundation model has been replaced by a newer version) compresses the relevant useful life and warrants careful documentation.

Benchmarking challenges. The comparable company sets used to benchmark traditional SaaS tested parties may not yet include enough AI-application companies to produce a tight industry-specific range. A study covering AI application businesses may need to draw from broader software comparables and to document the basis for inclusion or exclusion carefully.
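Mechanically, the benchmarking exercise reduces to deriving an arm's-length range (typically the interquartile range) from the comparable set and testing the tested party's margin against it. The sketch below uses invented operating margins for a hypothetical broadened software comparable set; it shows only the arithmetic, not a real study.

```python
# Minimal sketch of deriving an arm's-length range (interquartile range
# of operating margins) from a comparable set. Margins are hypothetical
# placeholders, not real company data.
import statistics

comparable_margins = [0.04, 0.06, 0.07, 0.09, 0.10, 0.12, 0.15, 0.21]

# statistics.quantiles with n=4 returns the three quartile cut points.
q1, median, q3 = statistics.quantiles(comparable_margins, n=4)
print(f"Arm's-length range (IQR): {q1:.2%} to {q3:.2%}, median {median:.2%}")

tested_party_margin = 0.05
in_range = q1 <= tested_party_margin <= q3
print(f"Tested party at {tested_party_margin:.2%} inside range: {in_range}")
```

A tested party falling outside the range, as in this invented example, is typically adjusted toward the median, which is one reason the composition of the comparable set is worth documenting carefully.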

What Practitioners Are Doing in the Absence of Guidance

In the absence of AI-specific authoritative guidance, the practical approach taken by experienced practitioners has converged on three principles. First, the existing framework is treated as the default: HTVI provisions for model weight valuations, CSA mechanics for cross-jurisdictional development, the standard services framework for compute and inference allocation, and standard SaaS benchmarking adapted for AI applications. Second, documentation is prepared with the expectation that guidance may evolve and that contemporaneous reasoning may need to be defended against later interpretive shifts. Third, jurisdictional positions are coordinated, recognizing that different tax authorities may converge on different views before any consensus emerges.

For mid-market companies in either sub-segment, the practical implication is that the transfer pricing file should be built on the existing framework, with additional documentation of the assumptions, projections, and methodologies used in the most uncertain areas (model weight valuations, training data treatment, compute allocation). Positions that may seem aggressive against a future guidance regime should be flagged and the rationale documented at the time the position is taken.

A Closing Note

Transfer pricing analysis for AI businesses is a current example of a recurring pattern: a new commercial activity outpaces the development of authoritative tax guidance, and practitioners apply the existing framework with appropriate documentation while waiting for guidance to catch up. Foundation model providers face the most analytically novel issues, particularly around model weight valuations and compute allocation, and would benefit from specialist transfer pricing attention. AI application companies face mostly familiar issues, with the AI-specific elements layered on top of an analysis that resembles that of any SaaS business.


Frequently Asked Questions

Where should AI model IP be located for transfer pricing purposes?

The location depends on where the development activity, decision-making, and risk-bearing occur. Models developed primarily by a single entity that bears the training costs and development risks should generally be owned by that entity. Cost-sharing arrangements allow joint development across multiple entities, but require careful documentation of each participant's contribution and benefit.

How is GPU compute capacity allocated between affiliated entities?

Compute capacity can be treated as an internal service or as a cost-shared resource. Service-based models charge a markup over cost, while cost-shared models allocate the underlying expense based on usage. The choice has significant implications for the location of profit and the nature of the documentation required.
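The profit consequence of the choice can be sketched numerically. The entity names, cost figures, and usage shares below are invented for illustration; the point is that the service model leaves a margin at the procuring entity while the cost-shared model does not.

```python
# Hypothetical contrast of the two treatments: internal service (markup
# over cost) versus cost-shared resource (expense allocated by usage).
# Entities, costs ($M), and usage hours are invented for illustration.

total_compute_cost = 80.0
usage_hours = {"US_parent": 600_000, "EU_sub": 300_000, "APAC_sub": 100_000}

# Cost-shared resource: allocate the underlying expense by usage, no markup.
total_hours = sum(usage_hours.values())
cost_shared = {e: total_compute_cost * h / total_hours for e, h in usage_hours.items()}

# Internal service: each consumer pays its usage share plus a markup,
# leaving a service profit at the procuring entity.
markup = 0.08
service_charge = {e: c * (1 + markup) for e, c in cost_shared.items()}
service_profit = sum(service_charge.values()) - total_compute_cost

for entity in usage_hours:
    print(f"{entity}: cost-shared ${cost_shared[entity]:.1f}M, "
          f"service ${service_charge[entity]:.1f}M")
print(f"Profit retained by procuring entity under service model: ${service_profit:.1f}M")
```

Under the cost-shared treatment the procuring entity retains nothing; under the service treatment it books the markup, which shifts profit to its jurisdiction and changes the documentation the arrangement requires.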

What are the comparability challenges for AI companies?

Pure-play public AI companies are few, and most are at early stages without stable margins. Benchmarking often requires using broader software or technology comparables with adjustments, or relying on internal comparables where they exist. The rapid evolution of AI business models also makes historical financial data potentially less relevant.

How does training data ownership affect transfer pricing?

Training datasets, particularly proprietary or curated datasets, can themselves be valuable intangibles. Their location and the licensing or contribution arrangements between affiliated entities are increasingly important transfer pricing considerations, especially as data restrictions and regulations expand globally.