Abstract

While animal welfare governance continues to be influenced by technological advancement and automation, there is an absence of a longitudinal, cross-country, quantitative index that simultaneously captures the governance baseline, the direction of policy change, and the compounding risk posed by agricultural artificial intelligence (AI) adoption. This paper introduces the Animal Welfare and Policy Risk Index (AWPRI), a composite risk index covering 25 countries over the period 2004–2022 (N = 475 country-year observations). The AWPRI is constructed from 15 variables organised across three equal-weighted conceptual layers: Current Welfare State (L1), Policy Trajectory (L2), and AI Amplification Risk (L3). Variables are normalised to [0, 1] using min-max scaling, with higher values denoting greater policy risk. The index is validated through k-means cluster analysis (k = 4; silhouette coefficient = 0.447), principal component analysis (PCA) of the 15-variable cross-section, and sensitivity analysis under ±10 percentage-point layer weight perturbation (mean Spearman ρ = 0.993, minimum 0.979; mean Adjusted Rand Index (ARI) = 0.684, range 0.477–1.000). Our Hausman specification test favours random-effects (RE) panel estimation (H = 2.55, p = 0.467). We use a difference-in-differences (DiD) design to exploit the 2019 AI governance risk classification divergence and find that countries identified as high-AI-governance-risk carry AWPRI scores 0.080 points higher than their low-risk counterparts, after controlling for country and year fixed effects (β = 0.080, SE = 0.005, p < 0.001). The L3 layer records the highest mean score in the 2022 cross-section (0.552, SD = 0.175), significantly exceeding both L1 (Wilcoxon W = 102,651, p < 0.001) and L2 (W = 99,295, p < 0.001). China (0.802), Vietnam (0.612), and Thailand (0.586) record the highest composite risk scores in 2022; the United Kingdom (0.308) records the lowest.
AutoRegressive Integrated Moving Average (ARIMA)-based projections indicate that Thailand, Brazil, and Argentina face AWPRI risk deterioration by 2030. The AWPRI and its interactive visualisation are publicly accessible at https://awpri-dashboard.streamlit.app.

Keywords: animal welfare policy; artificial intelligence; precision livestock farming; composite index; governance gap; difference-in-differences; panel data

1. Introduction

Approximately 80 billion land animals are slaughtered annually within global food systems [1]. The scale of this figure renders the institutional underinvestment in animal welfare governance both empirically significant and policy-relevant. Comparative political science and public policy scholarship have been slow to develop quantitative frameworks for cross-country welfare governance assessment. Where animal welfare receives scholarly attention, the analysis is predominantly framed in normative or legal terms [2, 3], or confined to case studies of discrete regulatory regimes [4]. There is an absence of a longitudinal, cross-country, quantitative instrument that tracks how animal welfare governance performs over time, whether those trajectories are improving or deteriorating, and whether emerging technological forces compound pre-existing governance gaps.

The rapid commercialisation of artificial intelligence (AI) in livestock production makes the discussion of technologically facilitated animal welfare regulation increasingly timely and relevant. Computer vision systems for automated lameness detection, AI-driven feed optimisation, and predictive disease modelling are now commercially deployed across major livestock-producing economies [7, 8]. Market projections for the precision livestock farming (PLF) sector reach USD 19.87 billion by 2032 [9]. A body of literature argues that AI enables earlier detection of welfare problems and reduces reliance on invasive interventions [10, 11]. Additional literature raises substantive concerns that PLF’s welfare-positive claims remain unproven at commercial scale, and that AI-driven intensification poses systemic threats to welfare in jurisdictions whose regulatory frameworks were not designed to address algorithmic accountability [12, 13].

Existing composite measures of animal welfare governance, including the World Animal Protection’s Animal Protection Index [5] and Hårstad’s scoping review [3], assess legislative text at a single point in time. Neither instrument tracks enforcement dynamics, policy reform trajectories, or the compounding effect of technological adoption on governance gaps. This paper addresses these limitations through the introduction and analysis of the Animal Welfare and Policy Risk Index (AWPRI).

1.1 Research Questions and Contributions

This paper pursues three research questions. First, how can animal welfare policy risk be operationalised as a measurable, cross-country comparable composite index sensitive to the governance implications of AI adoption in agriculture? Second, what patterns of risk distribution, clustering, and temporal change emerge across 25 countries between 2004 and 2022? Third, how does AI adoption in agriculture interact with pre-existing governance conditions and trajectories, and what national risk profiles are projected to emerge by 2030?

The paper makes four contributions.

  1. It introduces the AWPRI: the first longitudinal, cross-country, AI-sensitive composite risk index for animal welfare governance, covering 25 countries over 19 years.
  2. It validates the index through k-means cluster analysis, principal component analysis (PCA), Hausman specification testing, and a sensitivity analysis under layer weight perturbation.
  3. It employs a difference-in-differences (DiD) design to estimate the effect of AI governance risk classification divergence on AWPRI trajectories, providing the first quasi-experimental evidence linking AI governance status to animal welfare policy risk.
  4. It presents AutoRegressive Integrated Moving Average (ARIMA)-based projections to 2030 for all 25 countries with 95% confidence intervals, identifying which national risk profiles are projected to deteriorate absent policy intervention.

2. Related Work

2.1 Animal Welfare Governance and Composite Indices

Composite indices are well-established instruments for cross-country governance comparison. The Human Development Index [15], the Environmental Performance Index [16], and the Global Peace Index [17] demonstrate that multidimensional governance circumstances can be reduced to measurable scores while retaining policy interpretability. In the animal welfare domain, Browning [18] argues explicitly that multidimensional welfare measurement frameworks can support policy analysis. However, to date, no composite index applies such a framework to the intersection of animal welfare governance and AI adoption.

The World Animal Protection’s Animal Protection Index [5] rates 50 countries on legislative capacity at a single point in time. Hårstad’s scoping review of farm animal welfare governance [3] similarly prioritises legislative text and political drivers, finding that policy change is neither linear nor easily predictable. Neither instrument accounts for enforcement dynamics, temporal trajectories, or the risk introduced by AI-driven agricultural intensification. The Animal Law Foundation [6] has documented that fewer than 2.5% of farms in England were inspected in 2024, with 19% of inspected farms found in breach of welfare laws and fewer than 1% of violations resulting in prosecution. This enforcement gap illustrates precisely the dimension that static legislative indices fail to capture.

2.2 AI and PLF

Tuyttens et al. [12] identify 12 welfare threats specific to PLF adoption, including the displacement of human observation by algorithms, the intensification of stock enabled by automated monitoring, and the commercial incentives to use AI for productivity maximisation over welfare improvement. These concerns are amplified in jurisdictions with weak baseline welfare legislation, in which AI adoption can accelerate production intensification without triggering comparable regulatory responses [14]. Elliott and Werkheiser [13] argue that existing PLF transparency frameworks remain conceptually underdeveloped, and that most AI agricultural systems operate without welfare-specific accountability mechanisms. The AWPRI’s L3 layer is designed specifically to quantify the compounding risk associated with this governance-technology asymmetry.

2.3 Panel Data Approaches to Governance Measurement

Fixed-effects (FE) and random-effects (RE) panel regression are standard approaches for exploiting longitudinal cross-country variation in governance indices. Hausman [19] specification tests are the conventional criterion for model selection: a significant Hausman statistic indicates that country-specific effects are correlated with the regressors, favouring the FE estimator, while a non-significant outcome renders the RE estimator valid and more efficient. DiD designs have been applied to identify causal effects of policy interventions in governance research [20]. Our DiD analysis provides, to our knowledge, the first quasi-experimental evidence of this kind in the animal welfare governance literature.

3. Methods

3.1 Country and Time Period

The AWPRI panel dataset covers 25 countries across six global regions over 19 years (2004–2022), resulting in a total of 25 × 19 = 475 country-year observations. Countries were selected to maximise regional diversity, data availability, and variation in welfare legislative capacity, incorporating the world’s largest livestock producers, the most progressive welfare legislatures, and major emerging economies undergoing rapid agricultural AI adoption. The 25 countries featured in this study are: Argentina, Australia, Brazil, Canada, China, Denmark, France, Germany, India, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, Poland, South Africa, South Korea, Spain, Sweden, Thailand, the United Kingdom, the United States, and Vietnam. The full dataset is accessible at https://awpri-dashboard.streamlit.app. All analyses reported in this paper use the panel_awpri_normalized.csv dataset, the same source used for visualisation in the AWPRI interactive dashboard.

3.2 Variable Selection and Operationalisation

Fifteen variables are assigned across three equal-weighted conceptual layers, with five variables per layer. Layer 1 (L1: Current Welfare State) measures the governance baseline: (1) animal rights legislative framework; (2) rule of law index (risk-coded); (3) farmed animals per capita; (4) aquaculture share of production; and (5) meat consumption per capita. Layer 2 (L2: Policy Trajectory) captures the direction and pace of governance change: (6) animal rights trend score (year-on-year legislative change); (7) plant protein risk; (8) civic space risk; (9) civil liberties risk; and (10) public concern proxy. Layer 3 (L3: AI Amplification Risk) quantifies the compounding effect of AI adoption in agriculture: (11) AI governance risk; (12) AI welfare research alignment; (13) AI sentience research risk; (14) specialist bias ratio in AI systems; and (15) livestock AI patent intensity. All 15 variables are coded such that higher values represent greater policy risk. Data are drawn from the World Animal Protection’s Animal Protection Index [5], FAO FAOSTAT [1], the V-Dem Democracy Index [22], the Oxford Insights Government AI Readiness Index [23], the Stanford AI Index [24], and patent databases via OpenAlex. Missing values (approximately 7.3% of observations) are imputed using linear interpolation within country time series.
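
The interpolation step can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it assumes a long-format panel with hypothetical columns country, year, and value.

```python
import numpy as np
import pandas as pd

def impute_within_country(df: pd.DataFrame, value_col: str = "value") -> pd.DataFrame:
    """Linearly interpolate missing values within each country's time series."""
    out = df.sort_values(["country", "year"]).copy()
    out[value_col] = (
        out.groupby("country")[value_col]
           .transform(lambda s: s.interpolate(method="linear", limit_direction="both"))
    )
    return out
```

Interpolating inside the groupby ensures that one country's series never borrows values across the boundary with the next country's rows.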

3.3 AWPRI Construction

All 15 variables are normalised to [0, 1] using min-max normalisation across the full 2004–2022 panel, enabling valid cross-country and cross-temporal comparison. Each layer score is the unweighted mean of its five constituent variables:

L₁it = (1/5) ∑_{k ∈ 𝒦₁} vkit

L₂it = (1/5) ∑_{k ∈ 𝒦₂} vkit

L₃it = (1/5) ∑_{k ∈ 𝒦₃} vkit

where 𝒦₁, 𝒦₂, 𝒦₃ denote the five-variable indicator sets for Layer 1, Layer 2, and Layer 3, respectively, as defined in Table A1 (Appendix), and where vkit denotes the normalised value of variable k for country i in year t.

The composite AWPRI score is the unweighted mean of the three layer scores:

AWPRIit = (L₁it + L₂it + L₃it) / 3

Equal weighting is applied following Organisation for Economic Co-operation and Development (OECD) and Joint Research Centre (JRC) recommendations for composite indicators when no strong prior evidence exists for differential weighting across dimensions [21]. The robustness of this decision is evaluated through a sensitivity analysis described in Section 3.7.
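
The normalisation and aggregation steps above can be sketched in a few lines. This is an illustrative implementation in which layer membership is passed as a dictionary; the actual variable lists are those in Table A1.

```python
import pandas as pd

def min_max_normalise(df: pd.DataFrame, cols) -> pd.DataFrame:
    """Scale each column to [0, 1] across the full panel."""
    out = df.copy()
    for c in cols:
        lo, hi = out[c].min(), out[c].max()
        out[c] = (out[c] - lo) / (hi - lo)
    return out

def awpri_scores(df: pd.DataFrame, layers: dict) -> pd.DataFrame:
    """Layer scores = unweighted means of constituent variables; AWPRI = mean of layers."""
    scores = pd.DataFrame(index=df.index)
    for name, cols in layers.items():
        scores[name] = df[cols].mean(axis=1)
    scores["AWPRI"] = scores[list(layers)].mean(axis=1)
    return scores
```

Normalising across the full 2004–2022 panel (rather than year by year) is what makes scores comparable over time as well as across countries.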

3.4 Cluster Analysis and Validation

A four-tier risk typology is defined using score-based thresholds: Critical (≥ 0.55), High (0.45–0.55), Moderate (0.35–0.45), and Low (< 0.35). These boundaries are validated using k-means cluster analysis on the 2022 cross-section of composite and layer scores, with the optimal k determined through the elbow method and silhouette coefficient analysis. Cluster robustness is evaluated using three complementary metrics, namely (1) the silhouette coefficient, (2) the Calinski–Harabasz index, and (3) the Davies–Bouldin index.
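
The validation loop can be sketched with scikit-learn; the candidate k range and inputs below are illustrative, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

def validate_kmeans(X, k_range=range(2, 9), seed=0):
    """Fit k-means for each candidate k and report the three validation metrics."""
    rows = []
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        rows.append({
            "k": k,
            "silhouette": silhouette_score(X, labels),                # higher is better
            "calinski_harabasz": calinski_harabasz_score(X, labels),  # higher is better
            "davies_bouldin": davies_bouldin_score(X, labels),        # lower is better
        })
    return rows
```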

3.5 Forecasting

Country-level AWPRI trajectories are projected to 2030 using ARIMA models estimated separately for each country, with model order selection via Akaike Information Criterion (AIC) minimisation. Forecast uncertainty is represented by 95% confidence intervals. All models are implemented in Python using the statsmodels library.

3.6 Statistical Analysis

A total of seven complementary inferential analyses are conducted.

First, Wilcoxon signed-rank tests are used to evaluate whether L3 scores are systematically higher than L1 and L2, both across the full panel (N = 475) and in the 2022 cross-section (n = 25). The signed-rank test is preferred over the parametric t-test given the bounded, non-normal distribution of normalised layer scores.
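
A minimal sketch of this paired one-sided comparison with SciPy; the layer scores below are synthetic, for illustration only.

```python
import numpy as np
from scipy.stats import wilcoxon

def layer_exceeds(l3, other, alpha=0.05):
    """One-sided Wilcoxon signed-rank test of H1: l3 > other, paired by country-year."""
    stat, p = wilcoxon(l3, other, alternative="greater")
    return stat, p, p < alpha
```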

Second, a Spearman rank correlation matrix is computed for the 15 constituent variables on the 2022 cross-section to assess construct validity and detect potential multicollinearity in the index structure.

Third, a Kruskal–Wallis test followed by pairwise Mann–Whitney U tests with Bonferroni correction are applied to test whether AWPRI scores differ significantly across risk tiers.
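
The two-stage procedure can be sketched as follows, with illustrative group data; the Bonferroni factor is the number of pairwise comparisons.

```python
import itertools
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

def tier_tests(groups: dict):
    """Kruskal-Wallis across tiers, then pairwise Mann-Whitney U with Bonferroni."""
    h, p = kruskal(*groups.values())
    pairs = list(itertools.combinations(groups, 2))
    adjusted = {}
    for a, b in pairs:
        _, p_ab = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
        adjusted[(a, b)] = min(p_ab * len(pairs), 1.0)  # Bonferroni-adjusted p-value
    return h, p, adjusted
```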

Fourth, a Hausman specification test is conducted to choose between FE and RE panel estimators. 

The test statistic is:

H = (β̂FE − β̂RE)ᵀ [Var(β̂FE) − Var(β̂RE)]⁻¹ (β̂FE − β̂RE) ∼ χ²(K)

where β̂FE and β̂RE denote the FE and RE coefficient vectors, respectively, and K is the number of time-varying regressors. A significant statistic (p < 0.05) implies a systematic difference between the estimators, favouring FE.
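
The statistic can be computed directly from the fitted coefficient vectors and covariance matrices; a generic sketch (not the paper's estimation code) in which the inputs would come from separately fitted FE and RE models.

```python
import numpy as np
from scipy.stats import chi2

def hausman(b_fe, b_re, V_fe, V_re):
    """Hausman statistic H = d' [V_fe - V_re]^{-1} d with d = b_fe - b_re, ~ chi2(K)."""
    d = np.asarray(b_fe) - np.asarray(b_re)
    H = float(d @ np.linalg.inv(np.asarray(V_fe) - np.asarray(V_re)) @ d)
    return H, chi2.sf(H, d.size)  # statistic and p-value
```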

Fifth, a DiD design exploits the divergence in country-level AI governance risk classification that emerged from the 2019 Oxford Insights Government AI Readiness Index. Countries are classified as treated (ai_governance_risk = 1.0 in 2019, n = 14) and control (ai_governance_risk = 0.0, n = 11). The pre-period covers 2004–2016; the post-period covers 2019–2022, omitting the 2017–2018 transition years. The estimating equation is:

AWPRIit = α + β(Postt × Treati) + γi + δt + εit

where Postt is an indicator for the post-treatment period, Treati is the treatment indicator, γi and δt denote country and year fixed effects, respectively, and β is the DiD estimator of the average treatment effect on the treated (ATT). Standard errors are clustered by country. The parallel pre-trends assumption is tested through an interaction of year trend with treatment indicator in the pre-period.
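
The estimating equation can be sketched with statsmodels' formula interface. Column names are illustrative; the specification includes only the interaction because the main effects of Post and Treat are absorbed by the year and country fixed effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def estimate_did(df: pd.DataFrame):
    """OLS DiD with country and year fixed effects and country-clustered SEs."""
    groups = pd.factorize(df["country"])[0]  # integer cluster ids for the cov estimator
    m = smf.ols("awpri ~ post:treat + C(country) + C(year)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": groups}
    )
    return m.params["post:treat"], m.bse["post:treat"]
```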

Sixth, a sensitivity analysis evaluates the AWPRI score ranking stability under ±10 percentage-point layer weight perturbation. For each perturbed weight combination, the Spearman rank correlation with the base AWPRI ranking and the Adjusted Rand Index (ARI) for cluster assignment stability are computed.

Seventh, a PCA of the standardised 15-variable cross-section is conducted to identify the latent dimensional structure of the index and determine whether governance gaps are domain-general or thematically structured.
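
A sketch of this step with scikit-learn, standardising the variables before decomposition; the 90% variance target and the input matrix are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_structure(X, var_target=0.90):
    """Return explained-variance ratios and the number of components reaching var_target."""
    pca = PCA().fit(StandardScaler().fit_transform(X))
    cum = np.cumsum(pca.explained_variance_ratio_)
    n_components = int(np.searchsorted(cum, var_target) + 1)
    return pca.explained_variance_ratio_, n_components
```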

3.7 Sensitivity Analysis

The robustness of the equal-weighting scheme is assessed through a systematic perturbation analysis. Layer weights are varied by ±10 percentage points from the baseline equal allocation (w₁ = w₂ = w₃ = 1/3), subject to the constraint that all weights remain strictly positive and sum to unity. All feasible weight combinations within this tolerance are enumerated at five percentage-point intervals, creating a set of alternative composite specifications. For each alternative specification, two robustness criteria are evaluated, namely (1) the Spearman rank correlation between the perturbed AWPRI ranking and the baseline ranking, and (2) the ARI between the cluster assignments derived from the perturbed scores and those derived from the baseline scores. The Spearman criterion tests whether country rank orderings are stable under plausible reweighting; the ARI criterion tests whether countries would be assigned to different risk tiers under alternative weighting assumptions. A mean Spearman ρ above 0.95 is adopted as the primary threshold for acceptable rank-ordering robustness; the ARI is reported as a supplementary indicator of cluster assignment stability, following conventions for composite indicator stability assessment [21].
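
The enumeration can be sketched as follows. The tier thresholds and weight grid mirror the description above, while the layer-score matrix is illustrative.

```python
import itertools
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import adjusted_rand_score

def weight_sensitivity(L, tol=0.10, step=0.05, bins=(0.35, 0.45, 0.55)):
    """L: (n, 3) layer scores. Perturb weights within ±tol of 1/3 on a step grid
    (weights sum to one) and compare rankings and tier assignments to the base case."""
    base = L.mean(axis=1)                # equal-weighted baseline composite
    base_tier = np.digitize(base, bins)  # four-tier classification
    grid = np.arange(1/3 - tol, 1/3 + tol + 1e-9, step)
    rhos, aris = [], []
    for w1, w2 in itertools.product(grid, grid):
        w3 = 1.0 - w1 - w2
        if not (1/3 - tol - 1e-9 <= w3 <= 1/3 + tol + 1e-9):
            continue                     # keep the implied w3 within the same tolerance
        score = L @ np.array([w1, w2, w3])
        rhos.append(spearmanr(base, score)[0])
        aris.append(adjusted_rand_score(base_tier, np.digitize(score, bins)))
    return float(np.mean(rhos)), float(np.min(rhos)), float(np.mean(aris))
```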

4. Results

4.1 Descriptive Statistics

Table 1 presents summary statistics for the AWPRI composite score and its three constituent layer scores across the full panel (N = 475). The AWPRI has a full-panel mean of 0.472 (SD = 0.086) and is positively skewed (skewness = 0.97), reflecting a long upper tail of critical-risk country-years. L3 records the highest full-panel mean (0.550, SD = 0.125) and L1 the lowest (0.421, SD = 0.085). The narrow within-country standard deviation of L1 (0.008) relative to L2 (0.073) and L3 (0.051) indicates the structural stability of animal welfare legislation relative to the more volatile policy trajectory and AI governance components between 2004 and 2022.

Table 1. Summary Statistics: AWPRI and Layer Scores (Full Panel, N = 475, 2004–2022)

Note. Full-panel (2004–2022) statistics. Higher values indicate greater policy risk.

In the 2022 cross-section (n = 25), the sample mean AWPRI is 0.461 (SD = 0.111). L3 records the highest mean at 0.552 (SD = 0.175), exceeding L2 (mean = 0.410, SD = 0.138) and L1 (mean = 0.422, SD = 0.094). The maximum L3 score is recorded by China (0.884); the minimum by the United States (0.218). Wilcoxon signed-rank tests show that L3 scores are significantly higher than both L1 (W = 102,651, p < 0.001) and L2 (W = 99,295, p < 0.001) across the full panel. In the 2022 cross-section, L3 exceeds L1 (W = 271, p = 0.001) and L2 (W = 302, p < 0.001).

Figure 1: AWPRI Rankings by Country, 2022. Countries ordered by AWPRI score (ascending). Dashed line = sample mean (0.461). Shading indicates risk tier.

4.2 Spearman Correlation Structure

Figure 2 presents the 15 × 15 Spearman rank correlation matrix for the constituent variables in the 2022 cross-section. Two within-layer pairs are highly correlated: ai_aw_research_risk and ai_sentience_risk (ρ = 0.97), and rule_of_law_risk and civil_liberties_risk (ρ = 0.93). These high within-layer correlations indicate theoretically coherent constructs (AI knowledge indicators and governance quality indicators, respectively). Both near-redundant pairs are retained on theoretical grounds: ai_aw_research_risk measures the degree to which AI welfare research aligns with commercial incentives, whereas ai_sentience_risk captures researcher scepticism about AI moral consideration, representing distinct mechanisms; likewise, rule_of_law_risk captures formal institutional constraints on arbitrary state action, whereas civil_liberties_risk captures the practical exercise of individual freedoms, representing separable dimensions of the governance environment. Cross-layer correlations are modestly lower, with a maximum of ρ = 0.80 between meat_consumption_kg and plant_protein_risk.

Figure 2: Spearman Rank Correlation Matrix, 15 Constituent Variables, 2022 Cross-Section. Bold horizontal and vertical lines delineate L1, L2, and L3 boundaries. Values displayed where |ρ| > 0.40.

4.3 Cross-Country AWPRI Scores, 2022

Table 2 presents the AWPRI composite scores and layer decompositions for all 25 countries as of 2022. In 2022, China recorded the highest AWPRI score (0.802), driven by the highest L2 score in the sample (0.895), indicating a deteriorating legislative reform trajectory layered on an already concerning governance baseline. Vietnam (0.612) and Thailand (0.586) record the second and third highest composite scores, with Vietnam recording an L3 score of 0.680 and Thailand 0.768, both substantially above the L3 sample mean (0.552). At the lower end, the United Kingdom records the lowest AWPRI score (0.308), followed by the United States (0.325) and Sweden (0.345).

Figure 3 presents the layer score decomposition across all 25 countries. A notable pattern is that L3 scores exceed their L1 and L2 counterparts for the majority of the sample, including countries with comparatively strong governance baselines such as Germany (L3 = 0.352), Sweden (L3 = 0.388), and the United Kingdom (L3 = 0.260). These findings offer preliminary evidence that stronger animal welfare legislation and favourable policy trajectories do not systematically lower AI amplification risk, a proposition tested in Section 4.5.

Figure 3: Layer Score Decomposition by Country, 2022. Countries ordered by AWPRI score (descending). Dashed horizontal lines indicate sample means for each layer.

Table 2. AWPRI Scores and Layer Decomposition by Country, 2022

Note. Trend column reports direction and significance of Ordinary Least Squares (OLS) trend slope (AWPRI ~ year, 2004–2022). * p < 0.05; ** p < 0.01; *** p < 0.001; ns = non-significant.

4.4 Risk Cluster Typology and Validation

Table 3 presents the risk cluster typology derived from threshold-based score classification. The Critical Risk tier (n = 5) comprises China, Vietnam, Thailand, Brazil, and Argentina. All three layer scores of these five countries approach or exceed the sample means (L1 ≥ 0.422; L2 ≥ 0.410; L3 ≥ 0.552), except Thailand’s L1 score (0.403), which falls marginally below the L1 mean, indicating that (1) weak legislative frameworks, (2) deteriorating reform trajectories, and (3) rapid AI adoption compound one another. The High Risk tier (n = 7) is dominated by L3 as the primary contributor, with member countries exhibiting identifiable legislative frameworks and moderate reform activity but an AI adoption trajectory that outpaces regulatory capacity. The Moderate Risk tier (n = 9) shows unevenly distributed risks across layers, with L1 recording relatively higher scores than L2 or L3, suggesting that the primary concern is not the absence of legislation but its reform pace and emerging AI governance gaps. The Low Risk tier (n = 4) comprises the Netherlands, Sweden, the United States, and the United Kingdom, which record the strongest governance baselines and the lowest L3 scores in the sample.

Table 3. AWPRI Risk Cluster Typology, 2022 (k = 4)

Note. Cluster boundaries: Critical ≥ 0.55; High = 0.45–0.55; Moderate = 0.35–0.45; Low < 0.35.

Figure 4 presents the cluster validation results. The elbow method applied to the within-cluster sum of squares identifies k = 4 as the point of diminishing returns, beyond which additional clusters produce only marginal improvements. The silhouette coefficient for k = 4 is 0.447 (Calinski–Harabasz index = 35.56; Davies–Bouldin index = 0.659), indicating an adequate to good cluster solution. Notably, k = 3 yields a marginally higher silhouette coefficient (0.492); the difference arises because the k = 4 solution isolates China as a singleton cluster. This is itself informative: when unconstrained, k-means identifies China as an outlier of sufficient magnitude to warrant its own cluster. The four-tier typology is retained on theoretical grounds, as the threshold boundaries carry policy-interpretive value.

A Kruskal–Wallis test shows that AWPRI scores differ significantly across the four risk tiers (H = 20.77, p < 0.001). Pairwise Mann–Whitney U tests with Bonferroni correction reveal that all adjacent tier comparisons are statistically significant (e.g., High vs Moderate (p = 0.009), High vs Low (p = 0.017), and Moderate vs Low (p = 0.001)). Income group comparisons show that AWPRI scores differ significantly by World Bank classification in 2022 (Kruskal–Wallis H = 12.130, p = 0.002), driven primarily by higher L2 (H = 14.602, p = 0.001) and L3 (H = 9.74, p = 0.008) scores among upper-middle and lower-middle income countries relative to high-income countries.

Figure 4: Cluster Validation. (A) Elbow plot of within-cluster sum of squares by k. (B) Silhouette coefficient by k. Dashed vertical line at k = 4 indicates the selected solution.

4.5 Temporal Dynamics, 2004–2022

Figure 5 illustrates AWPRI temporal trajectories for selected countries. The Trend column in Table 2 reports OLS trend slope directions and significance levels for all 25 countries. Fifteen of 25 countries display statistically significant trends over the 2004–2022 period (p < 0.05), with five worsening and ten improving. Among worsening trajectories, Thailand records the steepest slope (β = 0.005 per year, p < 0.001), followed by Brazil (β = 0.0035, p = 0.003) and South Africa (β = 0.003, p = 0.010). Among improving trajectories, Canada records the steepest improvement (β = −0.005 per year, p < 0.001), followed by the Netherlands (β = −0.004, p = 0.002). The full-panel mean AWPRI declines from 0.490 in 2004 to 0.461 in 2022, a net improvement driven primarily by the Moderate and Low Risk clusters. The Critical Risk cluster, however, registers a net worsening from 2004 to 2022.

Figure 5: AWPRI Temporal Trajectories, 2004–2022. (A) Critical Risk countries. (B) Low and selected Moderate Risk countries. Dashed vertical line at 2017 indicates the onset of AI governance risk differentiation.

Figure 6 presents OLS trend slope coefficients for all 25 countries with 95% confidence intervals. Countries with statistically significant worsening trajectories (positive slope, p < 0.05) are concentrated in the Critical Risk tier, while statistically significant improving trajectories (negative slope, p < 0.05) predominate in the Low and Moderate Risk clusters. This concentration is consistent with the AWPRI framework’s expectation that countries already exposed to high baseline animal welfare risk also experience the most rapid deterioration in policy conditions.

Figure 6: OLS Trend Slope Coefficients by Country, 2004–2022. Bars indicate β coefficients from country-level OLS regressions of AWPRI on year. Error bars indicate 95% confidence intervals. Bars shaded by risk tier. Countries ordered by slope magnitude. * p < 0.05; ** p < 0.01; *** p < 0.001.

4.6 DiD Analysis

Figure 7 presents the DiD analysis. The treatment group comprises 14 countries classified as high AI governance risk by the 2019 Oxford Insights assessment (Argentina, Brazil, China, India, Italy, Kenya, Mexico, New Zealand, Nigeria, Poland, South Africa, Spain, Thailand, Vietnam); the control group comprises 11 countries with low AI governance risk (Australia, Canada, Denmark, France, Germany, Japan, the Netherlands, South Korea, Sweden, the United Kingdom, the United States). Table 4 reports the pre-treatment trend test. The interaction between year trend and treatment indicator in the pre-period is statistically non-significant (β = 0.000, p = 0.673): we cannot reject the hypothesis that treated and control countries followed parallel AWPRI trajectories prior to 2017, supporting the validity of the DiD design.

The DiD estimator indicates that treated countries carry AWPRI scores 0.080 points higher than control countries in the post-treatment period (Table 5), after controlling for country and year fixed effects (β = 0.080, p < 0.001). The raw ATT is likewise 0.080: the treated group’s mean AWPRI increased by 0.030 (from 0.506 to 0.536) while the control group’s decreased by 0.050 (from 0.433 to 0.383) over the same period. When the DiD is estimated with L3 as the outcome, the coefficient rises to 0.200 (p < 0.001) (Table 5). This finding suggests that the divergence operates primarily through the AI Amplification layer (L3), rather than through the governance baseline (L1) or policy trajectory (L2) components.

It is noteworthy that the treatment variable (ai_governance_risk) is one of five constituent variables within L3, which itself constitutes one third of the AWPRI composite outcome. This composition structure introduces partial endogeneity. More importantly, however, the DiD analysis demonstrates that the 2019 AI governance risk classification predicts AWPRI trajectories beyond the L3 component, including the L1 governance baseline and L2 policy trajectory.

Figure 7: DiD. (A) Parallel pre-trends for treated and control groups. Dashed vertical line at 2017 indicates treatment onset; grey band indicates 2017–2018 transition years excluded from estimation. (B) Pre- and post-treatment mean AWPRI scores by group. DiD β = 0.080 (p < 0.001).

Table 4. Pre-Treatment Trend Test: OLS Regression of AWPRI on Year × Treatment Interaction, Pre-Period (2004–2016)

Note. Dependent variable: AWPRI composite score. Pre-period defined as 2004–2016 (years prior to treatment onset). Treatment group (n = 14): countries classified as high AI governance risk by the 2019 Oxford Insights assessment. Control group (n = 11): countries classified as low AI governance risk. Year trend is mean-centred. Treatment indicator is absorbed by country fixed effects and, therefore, not separately estimated. The non-significant Year × Treatment coefficient (p = 0.673) indicates that treated and control countries followed statistically indistinguishable AWPRI trajectories in the pre-period, supporting the validity of the DiD design. Standard errors are heteroskedasticity-robust.

Table 5. DiD Estimates by Outcome Variable (Treatment: High AI Governance Risk, 2019; N = 475)

Note. β (DiD) is the coefficient on the interaction term (post × treated) from OLS with country and year fixed effects. The L3 standard error is near zero because ai_governance_risk (the treatment variable) is one of five constituent variables of L3, introducing mechanical overlap; the L3 result should be interpreted with this caveat in mind. The L1 result exactly meets but does not fall below the conventional α = 0.05 threshold and should be interpreted carefully. ** p < 0.01; *** p < 0.001.

4.7 PCA

The Hausman specification test yields H = 2.55 (p = 0.467): we cannot reject the null hypothesis of no systematic difference between the FE and RE estimators, so the RE estimator is valid and more efficient for descriptive panel modelling of AWPRI trajectories. Figure 8(A) presents the PCA scree plot. PC1 accounts for 51.6% of total variance and PC2 for 17.8%; five components are required to reach 91.4% cumulative variance. The 15 indicators are therefore not reducible to a single dimension. Figure 8(B) presents the PC1–PC2 biplot. The loading arrows indicate that the top-loading variables on PC2 point in a broadly similar direction, while country scores show no clean separation by risk tier along either axis. Critical Risk countries (the darkest markers) are concentrated in the positive region of PC1 and Low Risk countries cluster in the negative region, but several Moderate and High Risk countries overlap substantially along PC1, indicating that the first principal component alone does not reliably discriminate between risk tiers. The three-layer composition of the AWPRI is therefore not redundant with a single principal component, supporting the retention of the composite structure rather than collapsing the index to a single dimension.

Figure 8: PCA, 15-Variable Cross-Section, 2022. (A) Scree plot. Dashed line at 90% cumulative variance. (B) PC1–PC2 biplot. Arrows indicate the top-loading variables; country scores shaded by risk tier.

4.8 Sensitivity Analysis

Figure 9 presents the sensitivity analysis under ±10 percentage-point layer weight perturbation. Figure 9(A) shows that the mean Spearman rank correlation between the perturbed and base AWPRI rankings is 0.993 (minimum: 0.979), indicating that country rank orderings are highly stable across alternative weighting schemes. Figure 9(B) shows that the ARI for cluster assignment stability ranges from 0.477 to 1.000, with a mean of 0.684. While rank orderings are robust, cluster assignments are more sensitive to weight perturbation: under certain weight combinations, some countries cross risk tier boundaries. These findings indicate that AWPRI country rankings are robust to plausible alternative weighting schemes, but risk tier assignments remain sensitive to the relative weight assigned to each layer.
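The perturbation exercise can be sketched as follows, assuming three layer scores per country and equal base weights; the layer scores are synthetic and the four perturbation patterns are illustrative, not the full perturbation grid used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic layer scores for 25 countries (stand-ins for L1, L2, L3)
layers = rng.uniform(0, 1, size=(25, 3))
base = layers @ np.full(3, 1 / 3)  # equal-weight composite

def rankdata(x):
    """Simple rank transform (no tie handling; fine for continuous draws)."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(len(x))
    return r

def spearman(a, b):
    """Spearman rho = Pearson correlation of the ranks."""
    return np.corrcoef(rankdata(a), rankdata(b))[0, 1]

# Shift one layer weight by +/-10 percentage points, offset the others,
# and renormalise so the weights still sum to one
rhos = []
for delta in ([0.10, -0.05, -0.05], [-0.10, 0.05, 0.05],
              [-0.05, 0.10, -0.05], [-0.05, -0.05, 0.10]):
    w = np.full(3, 1 / 3) + np.array(delta)
    w /= w.sum()
    rhos.append(spearman(base, layers @ w))

print(f"min Spearman rho across perturbations: {min(rhos):.3f}")
```

The ARI computation for cluster stability would follow the same pattern: re-run k-means on each perturbed composite and compare the assignments against the base clustering.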

Figure 9: Sensitivity Analysis under ±10 Percentage-Point Layer Weight Perturbation. (A) Distribution of Spearman ρ between perturbed and base AWPRI ranking. (B) Distribution of ARI for cluster assignment stability.

4.9 ARIMA Projections to 2030

Figure 10 presents ARIMA-based AWPRI projections to 2030. Table 6 reports point forecasts and 95% confidence intervals for all 25 countries. Within the Critical Risk cluster, China (0.775) and Vietnam (0.602) are projected to improve by 2030, while Thailand (0.642), Brazil (0.590), and Argentina (0.585) are projected to deteriorate further. Within the High Risk cluster, Mexico (0.535), India (0.517), Spain (0.498), New Zealand (0.493), South Africa (0.485), and South Korea (0.481) are all projected to worsen, while Poland (0.501) and Italy (0.466) are projected to improve marginally. Among Moderate and Low Risk countries, the majority are projected to improve, with the notable exceptions of Kenya (0.429), Nigeria (0.424), Australia (0.387), and the United States (0.325), which are projected to worsen. France (0.369) and Canada (0.318) record the largest absolute improvements among all countries from 2022 (measured) to 2030 (projected), each declining by 0.035 points over that period.

Figure 10: ARIMA-Based AWPRI Projections to 2030, All 25 Countries. Lines indicate mean forecasts from 2023; shading indicates 95% confidence intervals. Trajectory colours indicate 2022 risk tier. Dashed vertical line at 2022 marks the projection onset.

Table 6. ARIMA-Based AWPRI Projections to 2030 (95% Confidence Intervals)

Note. ↑ = projected worsening; ↓ = projected improvement. Forecasts derived from country-level ARIMA models with AIC-based order selection.
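The forecasting step can be illustrated with the simplest member of the ARIMA family, a random walk with drift (ARIMA(0,1,0) plus constant), on a synthetic series. The paper's country-level models use AIC-based order selection, for which a library such as statsmodels would be used in practice; the series below is illustrative only.

```python
import numpy as np

# Synthetic AWPRI series for one country, 2004-2022 (illustrative values)
years = np.arange(2004, 2023)
scores = (0.45 + 0.005 * (years - 2004)
          + np.random.default_rng(3).normal(0, 0.01, len(years)))

# ARIMA(0,1,0) with drift: difference once, take the mean year-on-year
# change as drift, and let forecast uncertainty grow with the horizon
diffs = np.diff(scores)
drift, sigma = diffs.mean(), diffs.std(ddof=1)

horizon = np.arange(1, 9)  # 2023-2030
forecast = scores[-1] + drift * horizon
half_width = 1.96 * sigma * np.sqrt(horizon)  # 95% interval widens as sqrt(h)

print(f"2030 point forecast: {forecast[-1]:.3f} "
      f"[{forecast[-1] - half_width[-1]:.3f}, "
      f"{forecast[-1] + half_width[-1]:.3f}]")
```

The widening shaded bands in Figure 10 reflect exactly this horizon-dependent interval growth; richer ARIMA orders change the point path and interval shape but not that qualitative behaviour.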

5. Discussion

5.1 AI Amplification as the Dominant Risk Driver

Across our analyses, L3 scores are systematically and significantly higher than both L1 and L2, in the full panel and in the 2022 cross-section. Countries such as India (L3 = 0.708), New Zealand (L3 = 0.679), and Poland (L3 = 0.687) record L3 scores substantially above their L1 and L2 counterparts, supporting the theoretical argument that PLF deployment accelerates agricultural intensification and displaces direct human oversight with algorithmic monitoring in jurisdictions whose governance frameworks were not designed for AI accountability [12].

It is also notable that no country in the sample records an L3 score below 0.217 (United States). Even the United Kingdom, which records the lowest composite AWPRI and the strongest overall governance baseline in the sample, records an L3 score of 0.260. This indicates that even countries with mature animal welfare legislative frameworks and comparatively strong enforcement capacity face non-trivial AI amplification risks, aligning with the argument that most agricultural AI systems run without welfare-specific accountability mechanisms irrespective of jurisdictional governance quality [13].

Our DiD analysis, furthermore, provides quasi-experimental reinforcement for this interpretation. The divergence in AI governance risk classification in 2019 is associated with an AWPRI gap of 0.080 points between treated and control countries, with a significantly larger effect of 0.200 on the L3 component specifically. This pattern indicates that the institutional gaps in AI governance captured by the L3 layer are correlated with overall governance quality, in addition to representing a distinct and independent animal welfare risk pathway.

5.2 Differences in Geographic Patterns and Income Groups

The geographic distribution of risk is associated with income classification. Upper-middle income countries record a mean AWPRI of 0.583 in 2022, significantly higher than high-income countries (0.406; Kruskal–Wallis H = 12.130, p = 0.002). This income gradient is most pronounced for L2 (H = 14.602, p = 0.001), reflecting the more volatile and deteriorating legislative reform trajectories in major emerging economies. Lower-middle income countries record a mean AWPRI of 0.490, and their L3 scores (mean = 0.652) are substantially above the high-income mean (0.459), indicating that lower-middle income countries face significant AI amplification risk despite moderate governance baselines.
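The Kruskal–Wallis statistic reported here can be computed from pooled ranks. The sketch below uses synthetic group scores and omits the tie correction (a full implementation, e.g. scipy.stats.kruskal, handles ties and returns the p-value against a chi-square reference with k − 1 degrees of freedom).

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 2022 AWPRI scores by income group (illustrative values only)
groups = [rng.normal(0.58, 0.05, 7),    # upper-middle income
          rng.normal(0.49, 0.05, 6),    # lower-middle income
          rng.normal(0.41, 0.05, 12)]   # high income

# Kruskal-Wallis H from pooled ranks (no tie correction; continuous draws)
pooled = np.concatenate(groups)
ranks = np.empty(len(pooled))
ranks[np.argsort(pooled)] = np.arange(1, len(pooled) + 1)

n_total, start, h = len(pooled), 0, 0.0
for g in groups:
    r_mean = ranks[start:start + len(g)].mean()   # mean rank within group
    h += len(g) * (r_mean - (n_total + 1) / 2) ** 2
    start += len(g)
h *= 12 / (n_total * (n_total + 1))

print(f"H = {h:.3f}")  # compare against chi-square with k - 1 = 2 df
```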

Relevant scholarship widely acknowledges the United Kingdom as a global leader in animal welfare legislation [5, 3], and its lowest composite AWPRI score (0.308) in our sample aligns with that assessment. However, as the enforcement gap documented in Section 2 illustrates, legislative leadership and enforcement capacity can diverge substantially. Although the L2 layer is designed to capture policy trajectory, this enforcement gap cannot be fully identified by our model. Future iterations of the AWPRI will refine the index to better capture policy risk at the intersection of AI and animal welfare.

5.3 Temporal Trajectories and Policy Urgency

Our temporal trend analysis reveals that risk trajectories diverge significantly between country groups. Five countries exhibit statistically significant worsening trends (Thailand, Brazil, South Africa, China, Argentina), while ten show statistically significant improvements (Canada, the Netherlands, France, Japan, the United Kingdom, Sweden, Germany, the United States, Denmark, Australia). The countries recording the steepest worsening—Thailand (β = 0.005 per year) and Brazil (β = 0.004)—are major global livestock producers with limited domestic AI governance frameworks and deteriorating civic space indicators. The ARIMA projections indicate that this divergence will persist to 2030 absent intervention, with Thailand, Brazil, and Argentina projected to remain at or above the Critical Risk threshold.

5.4 Implications for Governance

This study carries three implications for governance. First, AI governance frameworks must explicitly incorporate animal welfare as a regulatory domain. The L3 dominance finding, and the DiD result that high-AI-governance-risk classification predicts broader AWPRI deterioration, indicate that generic AI readiness metrics are insufficient to identify welfare-specific risks. Second, the enforcement dimension of animal welfare governance, inadequately captured by legislative text alone, requires investment. The United Kingdom case illustrates that legislative leadership and enforcement capacity can diverge substantially. Third, the projected worsening trajectories for Thailand, Brazil, and Argentina suggest that international governance instruments analogous to the EU Deforestation Regulation [25]—which conditions market access on land-use compliance—could be extended to encompass verifiable animal welfare compliance along agricultural supply chains.

6. Limitations

This paper is subject to several limitations. First, the AWPRI relies on publicly available data sources, with approximately 7.3% of observations imputed via linear interpolation within country time series. The imputation preserves country-level temporal trends but may introduce bias in years where missing data are non-random with respect to animal welfare governance conditions.
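Linear interpolation within a country time series can be sketched as follows; the series values and gap positions are illustrative, not AWPRI data.

```python
import numpy as np

# Synthetic country time series with internal gaps (NaN), standing in for
# the ~7.3% of observations imputed via linear interpolation
years = np.arange(2004, 2023)
series = np.linspace(0.40, 0.55, len(years))
series[[5, 6, 12]] = np.nan  # missing observations

mask = np.isnan(series)
filled = series.copy()
# np.interp fills each missing year linearly from the observed years
filled[mask] = np.interp(years[mask], years[~mask], series[~mask])

print(filled[[5, 6, 12]])  # recovered values lie on the observed trend
```

Because interpolation connects observed endpoints linearly, it preserves within-country trends but, as noted above, cannot recover genuine departures from trend in the missing years.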

Second, equal layer weighting, while justified by the OECD–JRC handbook [21] and examined in the sensitivity analysis, remains a methodological assumption. The sensitivity analysis demonstrates that ±10 percentage-point perturbations leave country rankings essentially unchanged, although cluster assignments can shift; perturbations beyond this range may produce further changes in both.

Third, the aforementioned partial endogeneity of the DiD analysis is a substantive limitation. The treatment variable (ai_governance_risk) is one of five constituent variables within L3, creating a mechanical component in the DiD coefficient. The DiD result should therefore be interpreted as evidence of an association between AI governance risk classification and broader AWPRI trajectories, not as a causal estimate of the effect of AI governance divergence on animal welfare outcomes. Fourth, the AWPRI measures policy risk, not animal welfare outcomes directly. Cross-validation against farm-level indicators, such as mortality rates and stocking density violations, is required to establish whether risk scores correspond to observable differences in animal welfare conditions. Fifth, the 25-country sample in this exploratory study is not globally representative. Key livestock-producing economies such as Indonesia, Pakistan, and Ethiopia are absent due to data constraints. A scaled-up follow-on study will address this shortcoming with more extensive data collection and analysis.

7. Conclusion

This paper introduces the AWPRI as the first longitudinal, cross-country, AI-sensitive composite risk index for animal welfare governance. Applied to 25 countries over 2004–2022 (N = 475), the AWPRI identifies AI Amplification Risk (L3) as the dominant contributor to composite policy risk. The DiD analysis finds that countries identified as high-AI-governance-risk carry AWPRI scores 0.080 points higher than their low-risk counterparts (β = 0.080, p < 0.001), with the effect concentrated in the L3 component (β = 0.200, p < 0.001). Country rankings are robust to layer weight perturbation (mean Spearman ρ = 0.993), although cluster assignments are more sensitive (mean ARI = 0.684). Our ARIMA projections, furthermore, indicate that Thailand, Brazil, and Argentina face continued deterioration by 2030 absent policy intervention.

We reiterate that regulatory frameworks for agricultural AI must incorporate welfare-specific accountability mechanisms, as the current AI governance landscape systematically neglects this dimension. Enforcement investment must accompany legislative development, given that the United Kingdom case illustrates that global leadership in legislative text is compatible with substantial enforcement gaps. Finally, we recommend that international trade instruments be extended to encompass verifiable animal welfare compliance, especially for high-risk supply chains originating in countries projected to worsen over the next decade.

Remark: The AWPRI interactive dashboard (https://awpri-dashboard.streamlit.app) provides public access to all country-year scores, layer decompositions, cluster classifications, and ARIMA projections.

References

[1] Food and Agriculture Organisation of the United Nations (FAO). (2023). FAOSTAT: Livestock primary data. https://www.fao.org/faostat

[2] Blattner, C. E., & Tselepy, J. (2024). For whose sake and benefit? A critical analysis of leading international treaty proposals to protect nonhuman animals. American Journal of Comparative Law, 72(1), 1–32. https://doi.org/10.1093/ajcl/avae018

[3] Hårstad, R. M. B. (2024). The politics of animal welfare: A scoping review of farm animal welfare governance. Review of Policy Research, 41(5), 679–702. https://doi.org/10.1111/ropr.12554

[4] Chaney, P., Jones, I. R., & Narayan, N. (2024). Beyond the unitary state: Multi-level governance, politics, and cross-cultural perspectives on animal welfare. Animals, 14(1), Article 79. https://doi.org/10.3390/ani14010079

[5] World Animal Protection. (2020). Animal protection index 2020. https://www.worldanimalprotection.us/siteassets/reports-programmatic/animal-protection-index-2020-report.pdf

[6] Animal Law Foundation. (2024). The enforcement problem: 2024 data. https://animallawfoundation.org/enforcement

[7] Neethirajan, S. (2024). Artificial intelligence and sensor innovations: Enhancing livestock welfare with a human-centric approach. Human-Centric Intelligent Systems, 4(1), 77–92. https://doi.org/10.1007/s44230-023-00050-2

[8] Papakonstantinou, G. I., Voulgarakis, N., Terzidou, G., Fotos, L., Giamouri, E., & Papatsiros, V. G. (2024). Precision livestock farming technology: Applications and challenges. Agriculture, 14(4), Article 620. https://doi.org/10.3390/agriculture14040620

[9] DataM Intelligence. (2024). AI in precision livestock farming market report 2024–2032. DataM Intelligence.

[10] Schillings, J., Bennett, R., & Rose, D. C. (2021). Exploring the potential of precision livestock farming technologies to help address farm animal welfare. Frontiers in Animal Science, 2, Article 639678. https://doi.org/10.3389/fanim.2021.639678

[11] Zhang, L., Guo, W., Lv, C., Guo, M., Yang, M., Fu, Q., & Liu, X. (2024). Advancements in artificial intelligence technology for improving animal welfare. Animal Research and One Health, 2(1), 93–109. https://doi.org/10.1002/aro2.44

[12] Tuyttens, F. A. M., Molento, C. F. M., & Benaissa, S. (2022). Twelve threats of precision livestock farming (PLF) for animal welfare. Frontiers in Veterinary Science, 9, Article 889623. https://doi.org/10.3389/fvets.2022.889623

[13] Elliott, K., & Werkheiser, I. (2023). A framework for transparency in precision livestock farming. Animals, 13(21), Article 3358. https://doi.org/10.3390/ani13213358

[14] Parlasca, M., Knößlsdorfer, I., Alemayehu, G., & Doyle, R. (2023). How and why animal welfare concerns evolve in developing countries. Animal Frontiers, 13(1), 26–33. https://doi.org/10.1093/af/vfac082

[15] United Nations Development Programme (UNDP). (1990). Human development report 1990. https://hdr.undp.org/system/files/documents/hdr1990encompletenostats.pdf

[16] Wolf, M. J., Emerson, J. W., Esty, D. C., de Sherbinin, A., & Wendling, Z. A. (2022). 2022 environmental performance index. Yale Center for Environmental Law & Policy.

[17] Institute for Economics and Peace (IEP). (2024). Global peace index 2024. https://www.economicsandpeace.org/wp-content/uploads/2024/06/GPI-2024-web.pdf

[18] Browning, H. (2022). Assessing measures of animal welfare. Biology & Philosophy, 37(4), Article 36. https://doi.org/10.1007/s10539-022-09862-1

[19] Hausman, J. A. (1978). Specification tests in econometrics. Econometrica, 46(6), 1251–1271. https://doi.org/10.2307/1913827

[20] Angrist, J. D., & Pischke, J.-S. (2009). Mostly harmless econometrics: An empiricist’s companion. Princeton University Press.

[21] Organisation for Economic Co-operation and Development (OECD) & Joint Research Centre (JRC). (2008). Handbook on constructing composite indicators: Methodology and user guide. OECD Publishing. https://doi.org/10.1787/9789264043466-en

[22] V-Dem Institute. (2024). Country-year: V-Dem full + others version 15 [Dataset]. University of Gothenburg. https://v-dem.net/data/the-v-dem-dataset/country-year-v-dem-fullothers-v15/

[23] Oxford Insights. (2023). Government AI readiness index 2023. Oxford Insights. https://oxfordinsights.com/wp-content/uploads/2023/12/2023-Government-AI-Readiness-Index-1.pdf

[24] Stanford University Human-Centered Artificial Intelligence (HAI). (2024). The 2024 AI index report. Stanford University. https://hai.stanford.edu/ai-index/2024-ai-index-report

[25] European Parliament & Council of the European Union. (2023). Regulation (EU) 2023/1115 of the European Parliament and of the Council. Official Journal of the European Union. http://data.europa.eu/eli/reg/2023/1115/oj

Appendix

Table A1. AWPRI Constituent Variables, Data Sources, and Coverage

Table A2. Pairwise Mann–Whitney U Tests with Bonferroni Correction (2022 AWPRI by Risk Tier)

Note. Critical tier n = 1 in k-means solution (China as singleton); comparisons involving Critical tier should be interpreted with caution. * p < 0.05; *** p < 0.001.

Cite This

Hung, J. (2026). Animal Welfare and Policy Risk Index (AWPRI): Constructing and Validating a Cross-National Governance Risk Measure, 25 Countries, 2004–2022. AI in Society. https://aiinsocietyhub.com/articles/animal-welfare-and-policy-risk-index-awpri

AI Opportunities Board

  • General Intelligence Fellow

    Organization: The General Intelligence Company

    Location: Global

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Mar 18, 2026

    The General Intelligence Fellowship is a 30-day program where individuals receive $1,000 upfront and daily platform credits to launch a company using the Cofounder 2 agentic platform. Participants retain full ownership and IP of their business while helping test next-generation agent orchestration and infrastructure management systems.

    View Opportunity

  • Grantee - CHAI Hub Research Call Round 2

    Organization: Causality in Healthcare AI Hub (CHAI)

    Location: United Kingdom

    Region: UK

    Type: On-site

    Category: Funding

    Posted: Mar 17, 2026

    This funding opportunity supports academic researchers and post-doctoral research associates collaborating with industry partners and clinicians to advance causal AI research. Projects focus on aligning with the hub's mission to address healthcare challenges through AI innovation and interdisciplinary partnership.

    View Opportunity

  • Queens’ AI Scholar

    Organization: Queens' College, University of Cambridge

    Location: Cambridge, UK

    Region: UK

    Type: On-site

    Category: Funding

    Posted: Mar 17, 2026

    This scholarship supports students pursuing an MPhil in Advanced Computer Science at the University of Cambridge. It provides full fee coverage for domestic students or a partial award of £25,000 for international students.

    View Opportunity

  • AI and Society Fellow

    Organization: Center for AI Safety

    Location: San Francisco, CA

    Region: US/Canada

    Type: On-site

    Category: Fellowship

    Posted: Mar 14, 2026

    The AI and Society Fellowship is a fully-funded, three-month program supporting scholars in economics, law, and international relations to explore questions regarding AI power, wealth, and oversight. Fellows pursue autonomous research projects and engage with experts in the Bay Area to produce shareable academic outputs.

    View Opportunity

  • Micsion Global Scholar and Fellow

    Organization: Micsion

    Location: Global

    Region: US/Canada, UK, EU, Asia, Australia, Remote, Others

    Type: Remote

    Category: Funding

    Posted: Mar 12, 2026

    The Micsion Global Scholarship and Fellowship provides financial assistance to students, professionals, entrepreneurs, community organizers, and independent changemakers pursuing higher education opportunities worldwide. This program aims to support high-achieving individuals by covering tuition and educational expenses to foster academic excellence.

    View Opportunity

  • AI Ethics Fellow

    Organization: Code for Africa

    Location: Benin, Burkina Faso, Cameroun, Chad, Ethiopia, Guinea, Mali, Mauritania, Niger, Senegal, Somalia, South Sudan, Sudan, Togo

    Region: Remote, Others

    Type: Remote

    Category: Fellowship

    Posted: Mar 12, 2026

    This three-month fellowship supports mid-career professionals in developing research and policy recommendations for the ethical adoption of AI across Africa. Fellows will analyze regional regulations and global standards to design inclusive AI policies and mitigate algorithmic bias.

    View Opportunity

  • FDA Artificial Intelligence and Machine Learning Fellow

    Organization: U.S. Food and Drug Administration (FDA)

    Location: White Oak, Maryland

    Region: US/Canada

    Type: On-site

    Category: Fellowship

    Posted: Mar 6, 2026

    This fellowship program offers research and developmental opportunities within the Center for Devices and Radiological Health's Artificial Intelligence Regulatory Science Program. Participants will conduct regulatory science research to ensure the safety and effectiveness of AI/ML-enabled medical devices in healthcare applications like disease detection and diagnosis.

    View Opportunity

  • 2026 Summer Research Fellow

    Organization: Center on Long-Term Risk

    Location: London, UK

    Region: UK, Remote

    Type: On-site

    Category: Fellowship

    Posted: Mar 6, 2026

    This eight-week program invites fellows to conduct research projects focused on reducing long-term suffering risks and advancing technical AI safety. Participants receive mentorship from experienced researchers and collaborate on empirical AI safety agendas such as understanding malicious traits in LLM personas.

    View Opportunity

  • AI Senior Research Innovation Fellow

    Organization: University of Hertfordshire

    Location: Hatfield, UK

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Mar 6, 2026

    This senior fellowship offers a high-impact opportunity to lead the integration of AI across health, medicine, and life sciences through strategic digital transformation initiatives. The role involves driving AI adoption in predictive diagnostics, mentoring junior colleagues, and securing research funding through collaborative partnerships.

    View Opportunity

  • Cambridge Digital Minds Fellow

    Organization: Cambridge Digital Minds

    Location: Cambridge, UK

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Feb 21, 2026

    This intensive seven-day residential programme aims to build research capacity in the fields of AI consciousness, AI welfare, and the societal implications of digital minds. Fellows receive expert mentorship, strategic project scoping support, and fully funded travel to participate in technical and philosophical workshops.

    View Opportunity

  • Awardee - AI for Good Impact Awards

    Organization: AI for Good

    Location: Global

    Region: US/Canada, UK, EU, Asia, Australia, Remote, Others

    Type: Remote

    Category: Others

    Posted: Feb 18, 2026

    The AI for Good Impact Awards feature three categories (AI for People, AI for Planet and AI for Prosperity), honoring innovation and impact across various sectors.

    View Opportunity

  • Visiting Fellow

    Organization: Constellation Institute

    Location: Berkeley, California

    Region: US/Canada

    Type: On-site

    Category: Fellowship

    Posted: Feb 13, 2026

    This 3-6 month program supports full-time AI safety researchers by providing access to a research center and professional network. Fellows receive full funding for travel, housing, meals, and office space while continuing their technical or governance projects.

    View Opportunity

  • AI4X Postdoctoral Fellow

    Organization: NTU Singapore

    Location: Singapore

    Region: Asia

    Type: On-site

    Category: Fellowship

    Posted: Feb 13, 2026

    The AI4X Postdoctoral Fellowship supports outstanding early-career researchers who leverage Artificial Intelligence (AI) to accelerate breakthroughs across science, technology, engineering, and mathematics, including medicine (STEM).

    View Opportunity

  • 2026 Critical AI Policy Virtual Fellow

    Organization: Manchester Metropolitan University

    Location: Global

    Region: UK

    Type: Remote

    Category: Fellowship

    Posted: Feb 13, 2026

    Critical AI Policy Virtual Fellowship 2026 is an opportunity for humanities or social science researchers to understand, challenge and reshape current policies around generative AI by joining a virtual collective of researchers working on AI and society.

    View Opportunity

  • Turing AI Global Fellow

    Organization: UKRI/EPSRC

    Location: United Kingdom

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Feb 13, 2026

    Turing AI Global Fellowships attract up to five exceptional researchers who are either established international leaders in AI or who can demonstrate outstanding potential to shape the future of AI research globally. They must relocate to the UK and undertake transformational AI research that strengthens the UK’s position as a global leader in AI.

    View Opportunity

  • Postdoctoral Research Assistant in AI + Security

    Organization: Department of Engineering Science, Oxford

    Location: Oxford, UK

    Region: UK

    Type: On-site

    Category: Full-time

    Posted: Feb 10, 2026

    This is a full-time Postdoctoral Research Assistantship opportunity to join the Oxford Witt Lab for Trust in AI (OWL) in the Department of Engineering Science (Central Oxford), to conduct hands-on empirical research in multi-agent security and agentic AI security, focused on adversarial testing (red-teaming) and mitigation of hard-to-detect failure modes in interactive AI systems (e.g., covert communication, collusion, strategic behaviour).

    View Opportunity

  • Book Grant

    Organization: Alfred P. Sloan Foundation

    Location: New York City, NY

    Region: US/Canada

    Type: Remote

    Category: Funding

    Posted: Feb 6, 2026

    The Alfred P. Sloan Foundation provides direct support to authors for the research and writing of books aimed at enhancing public understanding of science and technology, including artificial intelligence. Grants are typically awarded to individual authors or through host institutions like universities to simplify complex scientific subjects for a general audience.

    View Opportunity

  • Global Early-Career Short Research Fellow

    Organization: Imperial College London

    Location: London, UK

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Jan 31, 2026

    This programme invites early-career researchers from Least Developed and Lower Middle-Income Countries to spend four to eight weeks at Imperial College London conducting high-impact research. Fellows will focus on accelerating innovation through AI in Science and Open Hardware for Lab Automation to foster new global collaborations.

    View Opportunity

  • Fellow - Metascience Research Grants Round 2

    Organization: UK Research and Innovation (UKRI)

    Location: United Kingdom

    Region: UK

    Type: On-site

    Category: Funding

    Posted: Jan 24, 2026

    This opportunity provides funding for cutting-edge metascience research focused on optimizing R&D processes, research institutions, and the impact of AI. Projects must be based at a UK research organization, though collaborative efforts with international partners are strongly encouraged.

    View Opportunity

  • Oskar Morgenstern Fellow

    Organization: Mercatus Center

    Location: Online

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Jan 23, 2026

    The Oskar Morgenstern Fellowship is a one-year online program for scholars and graduate students interested in political economy and emerging technologies like artificial intelligence. Participants engage in seminar-style colloquia to explore how different schools of political economy address institutional governance and the philosophy of science.

    View Opportunity

  • IAPS AI Policy Fellowship 2026

    Organization: Institute for AI Policy and Strategy

    Location: Washington, D.C. or Remote

    Region: US/Canada, Remote

    Type: Hybrid

    Category: Fellowship

    Posted: Jan 15, 2026

    The IAPS AI Policy Fellowship is a three-month program designed for professionals to strengthen practical skills for securing a positive future with powerful AI. Fellows conduct independent research projects, such as writing policy memos and briefing officials, while receiving mentorship and financial support.

    View Opportunity

  • Promoting AI Research

    Organization: Artificial Intelligence Journal (AIJ)

    Location: Global

    Region: Others

    Type: Remote

    Category: Funding

    Posted: Jan 13, 2026

    The Artificial Intelligence Journal provides substantial funds to support the promotion and dissemination of AI research through competitive open calls and sponsorships. Approximately 160,000 USD is allocated annually to support activities such as studentships and specialized AI research initiatives.

    View Opportunity

  • OpenAI's Cybersecurity Grant Program

    Organization: OpenAI

    Location: Global

    Region: Remote

    Type: Remote

    Category: Funding

    Posted: Jan 13, 2026

    Funding program for thoughtful, focused ideas at the intersection of AI and security

    View Opportunity

  • Carr-Ryan Center’s Technology and Human Rights Fellow

    Organization: Harvard Kennedy School Carr-Ryan Center for Human Rights

    Location: Global

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Jan 3, 2026

    This program focuses on exploring how technological developments impact human rights protections, specifically addressing challenges related to surveillance capitalism. Fellows participate in a multi-year effort to investigate the intersection of democracy and technology through research and academic collaboration.

    View Opportunity

  • 2026 Mozilla Fellow

    Organization: Mozilla Foundation

    Location: Global

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Jan 3, 2026

    The 2026 Mozilla Fellows program supports visionary leaders including technologists, researchers, and creators building a better tech future. Fellows receive financial backing, professional development, and access to a global network to lead impactful projects and share expertise.

    View Opportunity

  • Summer Fellowship 2026, Research Track

    Organization: Centre for the Governance of AI (GovAI)

    Location: London, UK

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Dec 24, 2025

    This three-month fellowship is designed to launch or accelerate impactful careers in AI governance and policy through independent research projects. Participants receive mentorship from leading experts, engage in expert seminars, and develop research outputs such as white papers or policy analysis.

    View Opportunity

  • MATS Summer Fellow

    Organization: MATS (ML Alignment & Theory Scholars)

    Location: Berkeley, CA

    Region: US/Canada

    Type: On-site

    Category: Fellowship

    Posted: Dec 20, 2025

    The MATS Program is a 12-week independent research fellowship that connects emerging researchers with top mentors in AI alignment, interpretability, governance, and security. Fellows conduct intensive research while participating in workshops, talks, and networking events to advance safe and reliable AI.

    View Opportunity

  • TARA Teaching Assistant

    Organization: TARA

    Location: Remote

    Region: Remote

    Type: Remote

    Category: Part-time

    Posted: Dec 19, 2025

    The Teaching Assistant supports the TARA educational program by assisting in the delivery of curriculum and student engagement. The role involves working closely with lead instructors to facilitate a productive learning environment for all participants.

    View Opportunity

  • SPAR Research Fellow

    Organization: SPAR

    Location: Remote

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Dec 19, 2025

    SPAR is a part-time research program that pairs aspiring AI safety and policy researchers with expert mentors to address risks from AI. Mentees work on impactful research projects for three months, culminating in a Demo Day and career fair with leading safety organizations.

  • Research Scientist – CBRN Risk Modeling

    Organization: SaferAI

    Location: Paris, France; London, UK; and Remote

    Region: UK, EU, Remote

    Type: On-site

    Category: Full-time

    Posted: Dec 19, 2025

    SaferAI is seeking a Research Scientist to lead the development of CBRN risk models and monitoring systems for a European Commission tender. The role involves conducting technical research at the intersection of biosecurity and AI safety to inform regulatory enforcement for general-purpose AI systems.

  • Grantee - AI4PG Fast Grants

    Organization: Recerts Journal

    Location: Global

    Region: Remote

    Type: Remote

    Category: Funding

    Posted: Dec 19, 2025

    This program provides fast grants of up to $10,000 to support AI research and development that improves decision-making, allocation, and impact assessment in public goods. Selected projects aim to develop AI-powered tools such as grant allocation algorithms and predictive analytics while undergoing peer review through journal publication.

  • Grantee - AI for Safety & Science Nodes 2026

    Organization: Foresight Institute

    Location: San Francisco, USA and Berlin, Germany

    Region: US/Canada, EU

    Type: On-site

    Category: Funding

    Posted: Dec 19, 2025

    This initiative provides financial grants, office space, and dedicated compute resources to researchers and builders using AI to advance science and safety. The program aims to create a decentralized ecosystem that supports open and secure AI-driven progress across security, biotechnology, and nanotechnology.

  • AI and Society Researcher

    Organization: ELLIS Institute Tübingen and MPI-IS

    Location: Tübingen, Germany

    Region: EU

    Type: On-site

    Category: Full-time

    Posted: Dec 19, 2025

    The COMPASS research group is hiring researchers across all levels to focus on safe, aligned, and steerable AI agents. Research areas include AI security, multi-agent dynamics, and mitigating risks like prompt injection and deceptive alignment.

  • Accelerator Fellow

    Organization: Accelerating AI Ethics

    Location: Global

    Region: US/Canada, UK, EU, Asia, Australia, Others

    Type: On-site

    Category: Fellowship

    Posted: Dec 18, 2025

    The Accelerator Fellowship Programme is a global AI ethics hub dedicated to tackling the toughest ethical challenges posed by artificial intelligence. It brings together leading thinkers and experts to collaborate on impactful contributions to AI regulation, industry practices, and public awareness.

  • AI-for-Science Postdoctoral Fellow

    Organization: FutureHouse

    Location: San Francisco, CA

    Region: US/Canada

    Type: Hybrid

    Category: Fellowship

    Posted: Dec 18, 2025

    This fellowship offers early-career scientists the opportunity to pursue independent research at the intersection of AI and science with full access to computational and laboratory resources. Fellows divide their time between San Francisco and academic partner institutions to accelerate high-impact scientific discoveries.

  • Grantee - Engineering Ecosystem Resilience

    Organization: ARIA (Advanced Research and Invention Agency)

    Location: United Kingdom

    Region: UK

    Type: Remote

    Category: Funding

    Posted: Dec 18, 2025

    This opportunity provides seed funding for individuals or teams pursuing research focused on advanced monitoring and resilience-boosting interventions to prevent ecological collapse. High-potential proposals that align with or challenge core beliefs in ecosystem engineering can receive up to £500,000 to uncover new pathways for planetary prosperity.

  • Grantee - Sustained Viral Resilience

    Organization: Advanced Research and Invention Agency (ARIA)

    Location: Global

    Region: US/Canada, UK, EU, Asia, Australia, Others

    Type: Remote

    Category: Funding

    Posted: Dec 18, 2025

    This £46m programme seeks to create a new class of medicines called sustained innate immunoprophylactics to provide durable protection against respiratory viruses. ARIA is funding ambitious projects across synthetic biology, systems immunology, and AI to foster radical advances in viral resilience.

  • Grantee - Enduring Atmospheric Platforms

    Organization: Advanced Research and Invention Agency (ARIA)

    Location: United Kingdom

    Region: UK

    Type: Remote

    Category: Funding

    Posted: Dec 18, 2025

    This £50m programme aims to develop low-cost, persistent, and autonomous atmospheric platforms capable of keeping a 20 kg payload aloft and powered for seven days. It seeks interdisciplinary proposals for novel architectures that can provide a scalable alternative to orbital satellites for high-performance connectivity.

  • Grantee - Precision Mitochondria

    Organization: ARIA (Advanced Research and Invention Agency)

    Location: United Kingdom

    Region: UK

    Type: On-site

    Category: Funding

    Posted: Dec 18, 2025

    This programme provides at least £55m to support the creation of a foundational toolkit for engineering the mitochondrial genome in vivo. It funds ambitious interdisciplinary projects focused on delivering, expressing, and maintaining nucleic acids within the mitochondrial matrix to enable new therapeutic interventions.

  • Grantee - AI Futures Fund

    Organization: AI Futures Fund

    Location: Global

    Region: US/Canada, UK, EU, Asia, Australia, Remote, Others

    Type: Remote

    Category: Funding

    Posted: Dec 18, 2025

    The AI Futures Fund is a collaborative initiative designed to accelerate AI innovation by providing startups with equity funding and early access to advanced Google DeepMind models. Participants receive technical expertise from Google researchers and Cloud credits to support the scaling of AI-powered products.

  • Heron AI Security Fellow

    Organization: Apart Research and Heron AI Security

    Location: London, Tel Aviv, and San Francisco

    Region: US/Canada, UK, Remote, Others

    Type: Hybrid

    Category: Fellowship

    Posted: Dec 18, 2025

    A part-time research program where cybersecurity professionals collaborate with field leaders to secure transformative AI systems through concrete technical projects. Research teams work for four months to produce publishable results, open-source prototypes, or technical reports under expert guidance.

  • Postdoctoral Fellow

    Organization: University of Toronto / Vector Institute

    Location: Toronto, Canada

    Region: US/Canada

    Type: On-site

    Category: Full-time

    Posted: Dec 18, 2025

    This role involves leading research on methodological and theoretical advances at the intersection of uncertainty quantification and reasoning in large language models. Successful candidates will have a PhD, strong programming skills, and a track record of publications at top machine learning venues like NeurIPS or ICML.

  • Grantee - Interpretability Challenge

    Organization: Martian

    Location: Global

    Region: Remote

    Type: Remote

    Category: Funding

    Posted: Dec 18, 2025

    The Martian Interpretability Challenge offers a $1 million prize to advance the field of interpretability with a specific focus on code generation. This initiative aims to transform AI development from 'alchemy' into 'chemistry' by developing principled ways to understand and control how models function.

  • Project Incubator

    Organization: Sentient Futures

    Location: Global

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Dec 18, 2025

    This eight-week incubator pairs fellows with expert mentors to execute projects aimed at improving the welfare of future sentient beings across various cause areas. Participants work at least five hours per week to deliver a finished output or a detailed funding proposal for long-term impact.

  • Fellow

    Organization: Tarbell Center for AI Journalism

    Location: San Francisco Bay Area and various newsroom locations

    Region: US/Canada, UK, Others

    Type: On-site

    Category: Fellowship

    Posted: Dec 18, 2025

    The Tarbell Fellowship is a one-year program for journalists to cover artificial intelligence through nine-month newsroom placements and specialized training. Fellows receive stipends ranging from $60,000 to $110,000 alongside mentorship from expert reporters and a weeklong summit in the San Francisco Bay Area.

  • Research Assistant - Science and Emerging Technology

    Organization: RAND Europe

    Location: Cambridge, UK

    Region: UK

    Type: Hybrid

    Category: Full-time

    Posted: Dec 18, 2025

    The Research Assistant will support policy-oriented research projects within the Science and Emerging Technology team at RAND Europe. This role involves conducting literature reviews, data analysis, and contributing to high-quality reports for various public and private sector clients.

  • Research Engineer, Cybersecurity RL

    Organization: Anthropic

    Location: San Francisco, CA; New York City, NY

    Region: US/Canada

    Type: Hybrid

    Category: Full-time

    Posted: Dec 18, 2025

    This role involves advancing AI capabilities in secure coding and vulnerability remediation through reinforcement learning research and engineering. Candidates will design RL environments and conduct experiments to enhance defensive cybersecurity workflows within Anthropic's Horizons team.

  • Visiting Fellows

    Organization: Constellation Research Center

    Location: Berkeley, CA

    Region: US/Canada

    Type: On-site

    Category: Fellowship

    Posted: Dec 18, 2025

    The Visiting Fellows program brings together professionals from diverse sectors to join Constellation's Berkeley-based workspace for three to six months to advance their research. Fellows receive comprehensive support including travel reimbursement, housing, meals, and 24/7 access to a collaborative environment with leading AI researchers.

  • Policy Advisor, UK

    Organization: Anthropic

    Location: London, UK

    Region: UK

    Type: On-site

    Category: Full-time

    Posted: Dec 18, 2025

    This role involves leading the development of UK legislative and regulatory positions while engaging with government and parliamentary stakeholders to advance AI safety. The advisor will translate technical research into policy recommendations and collaborate with global legal and technical teams to shape Anthropic's strategic outlook.

  • Anthropic AI Safety Fellow

    Organization: Anthropic

    Location: London, UK; Ontario, CA; San Francisco, CA; Berkeley, CA

    Region: US/Canada, UK, Remote

    Type: Hybrid

    Category: Fellowship

    Posted: Dec 18, 2025

    The Anthropic Fellows Program is a four-month initiative designed to accelerate AI safety research by providing funding, mentorship, and stipends to technical talent. Fellows work on empirical projects aligned with research priorities such as scalable oversight and mechanistic interpretability, aiming to produce public research papers.

  • Anthropic AI Security Fellow

    Organization: Anthropic

    Location: San Francisco, Berkeley, London, Ontario, or Remote

    Region: US/Canada, UK, Remote

    Type: Hybrid

    Category: Fellowship

    Posted: Dec 18, 2025

    The Anthropic Fellows Program provides funding, mentorship, and compute resources for technical talent to conduct empirical research on AI security and safety for four months. Fellows work with Anthropic researchers to produce public outputs, such as papers, focusing on defensive AI use and securing infrastructure.

  • Summer Fellowship 2026, Research Track

    Organization: Centre for the Governance of AI (GovAI)

    Location: Oxford, United Kingdom

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Dec 18, 2025

    This three-month program is designed to launch or accelerate impactful careers in AI governance through independent research and expert mentorship. Fellows conduct projects of their choice while participating in professional development seminars and networking with practitioners across government and industry.

  • Summer Fellowship 2026, Applied Track

    Organization: Centre for the Governance of AI (GovAI)

    Location: Oxford, UK

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Dec 18, 2025

    The Summer Fellowship Applied Track is a three-month program designed to accelerate careers in AI governance through projects in fields like communications, policy, and operations. Fellows participate in expert seminars and receive mentorship to develop non-research skill sets for the AI safety ecosystem.

  • AI Biosecurity Manager

    Organization: Frontier Model Forum

    Location: U.S. (Select States)

    Region: US/Canada

    Type: On-site

    Category: Full-time

    Posted: Dec 18, 2025

    The AI Biosecurity Manager will drive consensus on threat models, evaluations, and mitigations for biological and chemical risks associated with frontier AI models. This role involves coordinating expert workshops, managing collaborative research projects, and documenting emerging industry practices for managing high-level biosecurity threats.
