Since November 2025, I have been building a periodically updated global panel dataset on artificial intelligence (AI). As a quantitative social and health data scientist and applied policy researcher transitioning into AI safety and AI societal impact research, I was disappointed to find that global panel data on AI are scattered.

Existing Data and Research Problems

Without centralised global panel data on AI, researchers and data scientists cannot easily access comprehensive AI datasets for research. At present, different institutions publish their global AI data reports or datasets on their own websites for public download, while other organisations present their internal AI data on interactive dashboards without allowing public downloads. I know that I can do something about this: make global panel data on AI more centralised, standardised, and curated, and ready for public access and download.

The other issue I have noticed since the beginning of 2025 is the lack of non-academic and non-paywalled publications that exclusively address AI in society. While some academic publications do so, such as Oxford Intersections: AI in Society and AI & SOCIETY, I have been unable to find non-paywalled equivalents outside academia. Therefore, in November 2025, I decided to build my own site that exclusively presents non-academic and non-paywalled articles on AI societal impacts to both the professional AI safety research community and the general public.

The Global AI Dataset (GAID) Project

By the end of December 2025, I had little idea where addressing the above two data and research problems would lead my work. All I knew was that once these gaps had been addressed, I should, sooner or later, have a clearer picture of how to scale up my work. In December 2025, in the midst of software-engineering a web app that hosts non-academic and non-paywalled articles on AI societal impact and data-engineering version 1 of a global panel dataset on AI, I decided to dub the entirety of my work the Global AI Dataset (GAID) Project. In this article, I would like to present what the GAID Project is about and how, as of writing this post, it has been designed as a milestone-based project. I will explain all milestones (or phases) of the GAID Project that I have already completed, as well as those I plan to develop and deliver in the coming months.

On Harvard Dataverse—a free, open-source, web-based repository, managed by the Institute for Quantitative Social Science (IQSS) at Harvard University—I describe the GAID Project (https://dataverse.harvard.edu/dataverse/gaidproject) as a comprehensive, longitudinal research repository designed to track the multi-dimensional evolution of AI across over 200 countries and territories. I explain that the GAID aims to bridge the data gap between fragmented raw data and high-integrity academic research, by unifying, centralising, curating, and standardising global panel data on AI to allow researchers, data scientists, and policy professionals to observe the global trajectory of the AI revolution.

While that description on Harvard Dataverse outlines the main purpose of my work, the GAID Project, as currently planned, goes well beyond global panel AI data curation, compilation, and documentation. In this article, I explain each of Phases 0–4 of the GAID Project: I have already completed and delivered the outputs of Phases 0–2, and I have decided to spend the coming months working on Phases 3–4.

Phase 0: Building a Web App, AI in Society

I engineered this web app, AI in Society (https://aiinsocietyhub.com/), in December 2025 for multiple reasons. One of the primary reasons, as indicated at the beginning of this article, is the lack of non-academic and non-paywalled publications that exclusively address AI in society.

To give some background about myself, I have 10 years of training (PhD, MSc, and BA) in quantitative sociology and social epidemiology. I have also been trained in economics (especially the relationships between human capital and labour market participation), geopolitics (especially China–Hong Kong and China–Southeast Asia relations), gender studies (with a specific focus on child sexual abuse, gender-based violence, gender inequalities, and women’s empowerment), human, international, and sustainable development (in alignment with the values of the United Nations’ Sustainable Development Goals), and public policy.

In early 2025, when I was developing multiple original research papers on AI societal, economic, and geopolitical impacts (all published as of writing this article), I realised two problems. The first was that, when searching for potential journal outlets for my work, I found only a handful of academic publications exclusively covering AI in society topics. As the influence of AI has been growing exponentially (I don’t have the data to back this up, but I reckon its influence has been growing at a much faster rate than that of social media and the dot-coms decades ago), I believe there is an increasing need for non-academic researchers and the general public to gain access to data-driven, evidence-based, and narratively presented in-depth analysis of AI societal impacts.

The second problem was that I had to inconveniently download datasets from multiple AI-focused databases, manually merge them into one, and then carry out econometric analysis for original research. Such a process was very time-consuming and labour-intensive. In the AI era, the sexy terms are automation, efficiency, and productivity; data analysis workflows that still demand heavy manual input from AI researchers and data scientists are, by comparison, decidedly user-unfriendly. Therefore, the initial design of the GAID Project was to address these two problems that I encountered.

Our world is at a stage, as of writing this article, where global AI players have been working on the societal and economic integration of AI technology. We have seen US tech giants, while chasing continual gains in compute efficiency, increasingly emphasise how ever-advancing AI technology can be translated into positive societal and economic returns over time. Meanwhile, China’s AI Plus (AI+) initiative, launched in 2024, has been strategically pursuing a 10-year plan to fully integrate the country’s advanced AI technology across as many industries as possible. This growing awareness of the importance of optimising societal and economic impacts, rather than solely prioritising continual gains in compute efficiency in pursuit of artificial general intelligence (AGI), supports my decision to build a site that presents AI societal impacts to a wider audience.

Therefore, when engineering my web app, AI in Society, I decided to add an Articles section that features non-academic and non-paywalled articles on AI societal impacts. Given my data science expertise, I expect most of the articles shared periodically will be data-driven. Yet, when applicable, some articles might be theoretically or methodologically focused, for example. In addition to the Articles section, I decided to build a curated opportunities board on my web app. In the early 2020s, when I was still a PhD student, I spent much of my time browsing the Internet for both pre-doctoral and postdoctoral fellowships and funding opportunities. As we know, scientists spend much of their time on grant searching and writing rather than on actual research, so finding eligible funding opportunities that fit our expertise is a big deal for us. For AI funding opportunities, while some established sites, such as the EA Opportunities Board, the 80,000 Hours Job Board, and AISafety.com, constantly feature new AI-focused fellowships and grants, many opportunities go unfeatured on these sites. Therefore, I engineered my curated AI Opportunities Board page to share AI fellowships and funding opportunities that I am aware of but that may or may not be featured on those established sites.

The other, and more important, reason why I engineered my web app, AI in Society, is that I believe there is a need for me to establish my own site to host the deliverables of my GAID Project. As I mentioned, my GAID Project is milestone-based (which means it is ever-scaling). Therefore, rather than hosting my GAID Project deliverables across different online platforms, it is much easier for me to build my own site so that any future deliverables of the GAID Project can be directly featured on the centralised web app, AI in Society.

Phases 1–2: Compiling, Curating, and Documenting GAID Datasets

To further benefit the AI research community, between November and December 2025, I spent weeks data-engineering version 1 of the GAID dataset. As mentioned, it is very researcher-unfriendly to manually identify and download AI-focused datasets, then merge them in a software package and carry out data cleaning and standardisation before analysis. I believe that if there is a publicly accessible global panel dataset covering AI across different domains, the time researchers, data scientists, and policy teams need to conduct AI research and evaluate AI impacts will be greatly shortened. Therefore, in November 2025, I identified what are arguably the three most comprehensive global AI databases, namely Stanford’s AI Index, OECD.ai (AI Policy Observatory), and the Global Index on Responsible AI, and set out to compile, clean, standardise, and document their public-access data as a new dataset for public use.

I finished engineering the version 1 GAID dataset and published it on Harvard Dataverse in late December 2025 (https://doi.org/10.7910/DVN/QYLYSA). The version 1 GAID dataset is a longitudinal panel dataset providing a comprehensive and harmonised overview of the global AI landscape currently available. This curated, compiled, and documented dataset covers 214 unique countries and territories, from 1998 to 2025, across eight AI domains, including economy, policy, and governance. I carried out a total of 123 clinical cleaning and deduplication steps to optimise the data integrity of this version 1 dataset. The dataset can be immediately ingested in R, Stata, Python, and SPSS, for example, for statistical data analysis. For my GAID datasets, including this version 1, I strategically include only country-level data, which means regional data (such as Europe or Asia) and city- or state-level data (such as California or New York) are excluded. Please note that by country-level data I mean data for any place with an official three-letter International Organisation for Standardisation (ISO3) identifier, which is assigned to countries, dependent territories, and special areas worldwide. For example, Hong Kong has its own country-level ISO identifier, independent of China’s. Therefore, data from Hong Kong are included in my GAID datasets.
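The compile-and-standardise step described above, aligning heterogeneous sources into one country-year panel keyed on ISO3 codes, can be sketched in pandas roughly as follows. The source names, metric names, and values below are hypothetical placeholders rather than real GAID figures:

```python
import pandas as pd

# Two hypothetical source extracts, already reshaped to long format:
# one row per (ISO3 code, year). Values are illustrative only.
ai_index = pd.DataFrame({
    "iso3": ["USA", "CHN", "HKG"],
    "year": [2024, 2024, 2024],
    "ai_investment_usd_bn": [67.2, 7.8, 0.4],
})
oecd_policy = pd.DataFrame({
    "iso3": ["USA", "CHN", "DEU"],
    "year": [2024, 2024, 2024],
    "national_ai_policies": [85, 79, 41],
})

# An outer merge on the (iso3, year) panel keys keeps every
# country-year observed in either source; missing metrics become NaN.
panel = ai_index.merge(oecd_policy, on=["iso3", "year"], how="outer")

# Keep only rows with a valid three-letter ISO3 code, so regional
# aggregates (e.g. "EU", "World") would be excluded at this stage.
panel = panel[panel["iso3"].str.fullmatch(r"[A-Z]{3}")]
```

In the real pipeline, each source would first be cleaned and reshaped into this long format before the merge, and the same two keys would anchor every subsequent join.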

In total, my version 1 dataset has over 24,000 unique metrics. The definitions of these unique metrics can be found in my codebook, which was published along with the version 1 GAID dataset at https://doi.org/10.7910/DVN/QYLYSA. For researchers, data scientists, and policy teams who would like to use my version 1 GAID dataset for AI research, please feel free to read the corresponding 186-page codebook that details how the unique metrics are measured and defined.

Publishing my web app, AI in Society, and the version 1 dataset marked the completion of Phase 0 and Phase 1 of my milestone-based GAID Project, respectively. From 26th December 2025, I spent roughly three weeks data-engineering, documenting, and publishing the version 2 dataset (https://doi.org/10.7910/DVN/PUMGYU) on Harvard Dataverse, which constitutes Phase 2 of the GAID Project. The version 2 dataset is a significant expansion of the version 1 longitudinal panel dataset: I integrated, standardised, and surgically cleaned high-fidelity AI indicators from eight additional premier AI databases and websites into the existing version 1 dataset. These eight additional data sources are: (1) MacroPolo Global AI Talent Tracker, (2) UNESCO Global AI Ethics and Governance Observatory, (3) IEA’s Energy and AI Observatory, (4) Epoch AI, (5) Tortoise Media – The Global AI Index, (6) WIPO (World Intellectual Property Organisation) – AI Patent Landscapes, (7) Coursera – Global Skills Report (AI & Digital Skills), and (8) World Bank – GovTech Maturity Index (GTMI).

Data from these eight additional sources were collected either by direct ingestion or by web-scraping, where applicable. Like version 1, the version 2 dataset is optimised for easy and immediate statistical data analysis in R, Stata, Python, and SPSS, for example. The version 2 dataset contains almost 26,000 unique metrics (versus 24,000+ in version 1), covering 227 unique countries and territories (versus 214), all with existing ISO3 codes, from 1998 to 2025 across 20 AI domains (versus eight). Overall, version 2 is therefore a far more comprehensive and in-depth global panel dataset on AI than version 1.

Version 2 was published on Harvard Dataverse in mid-January 2026. Since version 2 is the most up-to-date and comprehensive GAID dataset as of writing this article, I recommend that researchers, data scientists, and policy teams conducting AI research use this version for statistical data analysis. When using the version 2 GAID dataset, please also consult the accompanying 200-page codebook at https://doi.org/10.7910/DVN/PUMGYU to understand how the unique metrics, across 20 domains, are measured and defined.

Phases 3–4: The Global AI Bias Audit—An Automated Evaluation and Interpretability Dashboard for Foundation Models and AI Agents

While finishing the version 2 GAID dataset, I spent the past weeks designing the scale-up phases (i.e. Phases 3–4) of my GAID Project. I dub this scale-up project “The Global AI Bias Audit—An Automated Evaluation and Interpretability Dashboard for Foundation Models and AI Agents”. It is designed to deliver the following two milestones:

  • Phase 3: Engineering an interactive dashboard, hosted as a separate page on my AI in Society web app, that interactively, dynamically, and programmatically shares national profiles and data visualisations from my version 2 GAID dataset.
  • Phase 4: Stress-testing foundation models against my ground-truth GAID dataset for AI safety, fairness, and readiness, and addressing digital colonialism in AI-driven decision-making.

Description & Aims

This project aims to establish an automated AI Eval and interpretability dashboard built upon my GAID (1998–2025), whose wave 1 datasets (versions 1 and 2) were published on Harvard Dataverse, as discussed above. The dashboard will be hosted on my software-engineered web app, AI in Society. As of today, generative AI models lack a proactive validation mechanism to ensure their outputs are factually grounded and free from geographical bias. This project addresses that interpretability gap by transforming the GAID into an automated benchmarking ecosystem for global AI researchers and policymakers. I built both my GAID dataset and my web app, AI in Society, in Python. I am updating the version 2 dataset to include composite AI indices (based on the GAID data) for global AI readiness, fairness, and safety. I will extend my Python scripts so that the interactive dashboard utilises the structure of the GAID indices to programmatically audit foundation models, quantifying the discrepancy between model-generated assessments and my AI index scores.
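For the composite indices mentioned above, one common construction (a sketch only; the final GAID indices may be built differently) is to z-standardise each domain score so no single domain dominates, then average across domains per country. The domain names and values here are hypothetical:

```python
import pandas as pd

# Hypothetical domain-level scores for four countries; the real GAID
# composite would draw on many more domains and countries.
scores = pd.DataFrame({
    "iso3": ["USA", "CHN", "HKG", "KEN"],
    "readiness": [0.9, 0.8, 0.6, 0.2],
    "fairness": [0.7, 0.5, 0.6, 0.4],
    "safety": [0.8, 0.6, 0.7, 0.3],
}).set_index("iso3")

# Z-standardise each domain column, then average the standardised
# scores per country to obtain one composite index value.
z = (scores - scores.mean()) / scores.std(ddof=0)
composite = z.mean(axis=1)
```

A by-product of this construction is that the composite is centred near zero across countries, which makes cross-country comparison straightforward.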

This technical project has three objectives. The first objective is to develop a visual interpretability layer. I will refine my existing Python scripts to generate high-fidelity, interactive visualisations (e.g., geographical heatmaps and radar charts) of all 227 unique countries and territories across 20 GAID domains. Such an approach provides national profiles for all countries, serving as a visual ground truth against which large language models’ outputs can be compared. The second objective is to engineer an automated AI Eval pipeline. I will build a Python-based testing framework that programmatically evaluates the factual reliability of generative AI models via APIs. The engine will task generative AI models with estimating AI safety, fairness, and readiness for specific regions and will automatically calculate error metrics (e.g., mean absolute error) by comparing model responses to the GAID standards. The third objective is to quantify and explain geographical bias among generative AI models. I will launch the interactive dashboard that visualises model performance across different socioeconomic tiers, in order to see whether the discrepancy between models’ assessments and the ground-truth GAID data and index scores is larger for less developed countries than for their wealthier counterparts. By utilising interpretability techniques, this project will, furthermore, identify specific domains (e.g., energy, talent, ethics) where models consistently fail to align with the ground-truth data, exposing the systemic hallucinations that compromise AI safety, fairness, and readiness in the Global South and non-Western democratic societies.
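The second objective, comparing model-generated estimates to GAID ground truth and computing an error metric such as mean absolute error, reduces to a loop like the minimal sketch below. `query_model` is a hypothetical stand-in for a real model API call, and all scores are illustrative:

```python
# Hypothetical ground-truth index scores for three countries.
ground_truth = {"USA": 0.85, "HKG": 0.64, "KEN": 0.22}

def query_model(iso3: str) -> float:
    """Stand-in for an API call asking a model to estimate a country's score."""
    canned_responses = {"USA": 0.80, "HKG": 0.50, "KEN": 0.45}
    return canned_responses[iso3]

def mean_absolute_error(truth: dict, predict) -> float:
    """Average absolute gap between model estimates and ground truth."""
    errors = [abs(predict(country) - score) for country, score in truth.items()]
    return sum(errors) / len(errors)

mae = mean_absolute_error(ground_truth, query_model)  # 0.14 for these values
```

In the real pipeline, `query_model` would wrap an LLM API call and parse a numeric estimate from the response, and the MAE would be logged per model, per country, and per domain.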

Goals

The project is strategically designed to advance our collective AI safety progress by making generative models more reliable, trustworthy, and customised for real-world governance. This project directly addresses the research priority of Responsible AI by transitioning from my recently completed work on static data curation to the scale-up phase of automated AI Eval and interpretation. This project aims to satisfy three goals:

  • Achieving responsible AI and bias mitigation: I will build a bias audit infrastructure that uses longitudinal ground-truth data from my GAID dataset to quantify geographical hallucinations. Such work provides a rigorous technical framework to identify and mitigate systemic inaccuracies in how foundation models represent the Global South.
  • Designing innovative methodology by engineering a Python-based AI Eval pipeline: I will provide a scalable methodology for testing the factual reasoning of AI agents. Instead of building simplistic benchmarks, this project aims to deliver an evaluation-as-a-service platform, where model reliability is continuously verified against the high-scale, cross-domain, and yearly-updated GAID dataset.
  • Offering transparency and interpretability: The interactive dashboard utilises the 20 domains of GAID ground-truth data to explain why a generative AI model fails, locating specific knowledge gaps in areas such as energy infrastructure or ethical governance.

Timeline and Deliverables

The feasibility of this project is supported by my published foundational work: the GAID wave 1 datasets (versions 1 and 2) and the software-engineered AI in Society web app. I resolved the primary technical hurdles (data acquisition, data cleaning, and the development of the Python scripts) during the completed phases. These delivered outputs indicate that implementing this scale-up project is realistic and focused on engineering rather than data collection. Below are the three milestones I aim to reach for this project:

  • Coming months 1–4: Developing the Python-based visualisation backend to transform existing structured, clean data and corresponding composite AI index scores into interactive national profiles.
  • Coming months 5–8: Implementing the AI Eval engine, leveraging my expertise in Python and API integration to automate the stress-testing of foundation models (such as Gemini, GPT-5, Claude).
  • Coming months 9–12: Large-scale auditing and bias reporting on my interactive dashboard, as well as in a technical report paper (published on arXiv) and a conference presentation (at NeurIPS 2027).

Note: A new wave of data from all data sources will be programmatically extracted at the end of each year for the GAID dataset, so the interactive dashboard will receive periodic data updates and deliver living benchmarks.

Impact Assessment

The primary impact is the establishment of a global standard for auditing the reliability of generative AI across different domains (e.g., policy and governance). By providing the first automated tool, hosted as an interactive dashboard, that quantifies geographical hallucination, this project enables developers, technologically enabled researchers, and policymakers to identify where models fail the Global South or non-Western democratic societies, preventing digital colonialism in AI-driven decision-making.

This project provides a positive scientific impact by introducing new benchmarks for interpretability through a statistically rigorous system with 20-domain quantification of model errors. It also offers positive economic and policy impact: the interactive dashboard enables governments and industry professionals to verify the safety, fairness, and readiness of AI agents before deployment in global markets. This project supports, for example, the UK’s leadership in AI safety by providing a diagnostic tool that ensures AI-driven monitoring is factually accurate and geographically inclusive. The project also optimises sustainability, as the automated pipeline with annually updated global panel GAID data ensures the tool remains relevant as AI strategies evolve. I will open-source the AI Eval Python scripts to foster a collaborative ecosystem where developers and researchers can contribute to a more equitable global AI landscape.

Collaborations across Disciplines and Sectors

This scale-up project is interdisciplinary, bridging computational data science, AI interpretability, and international political economy and governance. The project fosters cross-sectoral collaboration between academia and intergovernmental monitoring bodies. As the version 2 GAID dataset was developed by ingesting and web-scraping data feeds from 11 authoritative sources such as OECD.ai, WIPO, and UNESCO, the project connects academic benchmarking with the practical needs of global governance organisations. The AI Eval engine is designed to be used by developers to stress-test their generative AI models for factual accuracy and bias. Not only will I publish a technical report paper to unveil the white-box details of the Python-based AI Eval pipeline, but I will also disseminate outputs demonstrating how the GAID ground truth can improve model fine-tuning for global applications (through a technical paper on arXiv, a conference presentation at NeurIPS 2027, and public posts on the Effective Altruism Forum, LessWrong, and the AI Alignment Forum). Furthermore, I will facilitate knowledge exchange by sharing outputs with the wider AI safety research community within and beyond academia. This ensures the technical insights gained from the Python-based AI Eval pipeline lead to better-informed regulations and more reliable AI tools globally.

Responsible AI

This project aligns with the core AI safety principle of responsible AI through the design and implementation of the Python-based AI Eval pipeline. This project is designed to mitigate systemic bias. The automated bias audit infrastructure will be presented in a leaderboard as part of the interactive dashboard, which quantifies geographical hallucinations and identifies precisely where foundation models fail to accurately represent the Global South and non-Western democratic societies.

Also, my project goes beyond traditional benchmarking. Based on my 20-domain global panel country-level GAID dataset, this project will establish a scalable methodology for testing the factual interpretability and reasoning fidelity of AI agents against high-scale, cross-domain evidence (meaning data from my GAID dataset). By delivering a rigorous, evidence-based diagnostic tool, this project ensures that we can develop generative AI that is not only high-performing but factually grounded and geographically equitable.

Equality, Diversity, and Inclusion

First, the primary objective of this project is to bridge the interpretability gap that leaves non-Western societies vulnerable to biased AI-driven decision-making. By setting an objective to quantify geographical hallucination, the research design includes the 227 unique countries and territories from the GAID dataset as equal subjects of study, rather than focusing on the high-resource AI ecosystems of the Global North alone. Such an approach ensures that the technical definitions of AI safety, fairness, and readiness are inclusive of, for example, diverse socioeconomic circumstances, energy constraints, and policy and ethical frameworks found across the Global South.

Second, the methodology of the AI Eval pipeline is built to detect and expose systemic bias. The methodology for stress-testing foundation models and AI agents includes a stratified analysis across different socioeconomic tiers. This ensures that the evaluation of reasoning fidelity is not biased toward countries with high data density. Also, by integrating 20 domains (e.g. talent, ethics, energy, policy, and governance), the methodology acknowledges that AI readiness is intersectional. This project audits whether a model’s bias in metrics from a single domain is compounded by a lack of understanding of factors from other domains in developing countries. Furthermore, to promote inclusion within the developer and technologically enabled researcher community, the Python scripts for the AI Eval pipeline will be open-sourced. Such an approach allows developers, researchers, and policymakers from low-resource institutions to utilise high-level interpretability tools that are often locked behind proprietary paywalls.

Third, my GAID dataset itself is an exercise in inclusive data gathering with global representation. Unlike many AI benchmarks that only cover OECD countries, GAID programmatically synthesises data for 227 unique countries and territories globally. By ingesting data from 11 diverse global AI databases and websites, such as UNESCO and WIPO, this project ensures that the ground truth is not derived from a single Western perspective but from a collection of international monitoring bodies. Moreover, the global panel nature of the country-level data (1998–2025) of my GAID dataset allows for an inclusive understanding of how AI safety, fairness, and readiness have evolved differently across various regions over time, preventing researchers, developers, and policymakers who use my GAID dataset or the expected outputs of this project (i.e. the interactive dashboard) from ignoring the progress of emerging economies.

Fourth, the reporting phase of this project is designed to be accessible and transparent to a global audience of stakeholders. The interactive dashboard will be hosted on my non-paywalled AI in Society web app with high-fidelity visualisations such as geographical heatmaps. This project ensures that expected outputs are interpretable for researchers and policymakers who may not have a technical or computational data science background. I software-engineered my web app as an interactive and non-paywalled site; its design avoids static presentation and visualisation and optimises reader-friendliness.

Also, the expected outputs will be disseminated across both highly visible technical academic platforms (such as NeurIPS and arXiv) and public-facing forums for AI safety research communities (such as the Effective Altruism Forum, LessWrong, and the AI Alignment Forum) by the end of the scale-up project. This multi-tiered reporting ensures that insights into geographical bias reach both the technically enabled researchers and developers of AI agents and the policy-making community.

As a supplement, the final reporting will rank model performance by country-income tier (i.e. high-income, upper-middle-income, lower-middle-income, and low-income countries). Such a design creates a public record of which AI agents are failing non-Western democratic societies, acting as a diagnostic and corrective measure and providing the evidence needed to advocate for more equitable AI development and deployment.
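The income-tier ranking described above is a straightforward aggregation once per-country audit errors exist. The sketch below uses hypothetical per-country errors and tier labels:

```python
import pandas as pd

# Hypothetical per-country absolute audit errors with World Bank-style
# income-tier labels matching the four groups named above.
errors = pd.DataFrame({
    "iso3": ["USA", "DEU", "CHN", "IND", "KEN", "MWI"],
    "income_tier": ["high", "high", "upper-middle",
                    "lower-middle", "lower-middle", "low"],
    "abs_error": [0.04, 0.06, 0.10, 0.18, 0.21, 0.30],
})

# Mean model error within each income tier; sorting in descending
# order surfaces the tiers where the audited model is least reliable.
tier_mae = (
    errors.groupby("income_tier")["abs_error"]
          .mean()
          .sort_values(ascending=False)
)
```

The resulting per-tier ranking is exactly the kind of public record described above: the tier at the top of the sorted series is where the model is failing hardest.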

To Wrap Up

So far, I have self-funded the completion and delivery of the outputs of Phases 0–2 of the GAID Project. This week, I began submitting funding applications to support the scale-up project (Phases 3–4). Like everyone else working on responsible AI, AI safety, AI alignment, and related fields, I don’t have a clear answer upfront to how my ever-scaling work can benefit humanity. I simply start the work, reach a small milestone, scale the project up, and repeat the process until I gain a clearer picture and a more solid footing on how to contribute to building positive, constructive, and responsible AI.

Over the past few weeks, I have also thought about Phase 5 of my GAID Project. As of writing this article, Phase 5 is still at the concept stage. My concept is that Phase 5 will involve building an AI-based forecasting model trained on my ground-truth, yearly updated GAID dataset(s), ideally also hosted on my AI in Society web app. I will continue to consolidate my thinking and come up with a more concrete design for Phase 5 while working on Phases 3–4.

Cite This

Hung, J. (2025). The Global AI Dataset (GAID) Project: From Closing Research Gaps to Building Responsible and Trustworthy AI. AI in Society. https://aiinsocietyhub.com/articles/the-global-ai-dataset-gaid-project-from-closing-research-gaps-to-building-responsible-and-trustworthy-ai


    View Opportunity

  • CBAI Summer Research Fellow in AI Safety ’26

    Organization: Cambridge Boston Alignment Initiative

    Location: Cambridge, MA

    Region: US/Canada

    Type: On-site

    Category: Fellowship

    Posted: Mar 27, 2026

    This intensive nine-week program supports talented researchers in advancing AI safety domains such as interpretability, multi-agent safety, and risk management frameworks. Fellows receive a stipend and dedicated mentorship while engaging with the AI safety community at institutions like Harvard and MIT.

    View Opportunity

  • Mila AI Policy Fellow

    Organization: Mila

    Location: Montreal, Canada

    Region: US/Canada, Remote

    Type: On-site

    Category: Fellowship

    Posted: Mar 27, 2026

    The Mila AI Policy Fellowship (in-person or virtual) is designed for professionals and researchers focused on AI governance, policy, and inclusion. This program provides an opportunity to contribute to responsible AI practices and engage with Mila’s world-class research ecosystem.

    View Opportunity

  • AI Ethics and Governance Fellow

    Organization: Policy Innovation Centre (PIC)

    Location: Africa

    Region: Remote, Others

    Type: Remote

    Category: Fellowship

    Posted: Mar 27, 2026

    This 12-week program equips African leaders with the technical literacy and governance expertise needed to design responsible, rights-based AI frameworks centered on African ethical traditions. Fellows will move from foundational learning to producing practical governance artifacts such as policy frameworks and AI audit protocols.

    View Opportunity

  • Ethical AI Governance Fellow

    Organization: Globethics

    Location: Geneva, Switzerland

    Region: EU

    Type: On-site

    Category: Fellowship

    Posted: Mar 27, 2026

    This competitive international programme is designed for early-career professionals to advance ethical and values-driven AI governance through mentorship and networking. Selected participants benefit from a one-week residency in Geneva and the opportunity to unlock seed funding for impactful AI ethics projects.

    View Opportunity

  • General Intelligence Fellow

    Organization: The General Intelligence Company

    Location: Global

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Mar 18, 2026

    The General Intelligence Fellowship is a 30-day program where individuals receive $1,000 upfront and daily platform credits to launch a company using the Cofounder 2 agentic platform. Participants retain full ownership and IP of their business while helping test next-generation agent orchestration and infrastructure management systems.

    View Opportunity

  • Grantee - CHAI Hub Research Call Round 2

    Organization: Causality in Healthcare AI Hub (CHAI)

    Location: United Kingdom

    Region: UK

    Type: On-site

    Category: Funding

    Posted: Mar 17, 2026

    This funding opportunity supports academic researchers and post-doctoral research associates collaborating with industry partners and clinicians to advance causal AI research. Projects focus on aligning with the hub's mission to address healthcare challenges through AI innovation and interdisciplinary partnership.

    View Opportunity

  • Queens’ AI Scholar

    Organization: Queens' College, University of Cambridge

    Location: Cambridge, UK

    Region: UK

    Type: On-site

    Category: Funding

    Posted: Mar 17, 2026

    This scholarship supports students pursuing an MPhil in Advanced Computer Science at the University of Cambridge. It provides full fee coverage for domestic students or a partial award of £25,000 for international students.

    View Opportunity

  • AI and Society Fellow

    Organization: Center for AI Safety

    Location: San Francisco, CA

    Region: US/Canada

    Type: On-site

    Category: Fellowship

    Posted: Mar 14, 2026

    The AI and Society Fellowship is a fully-funded, three-month program supporting scholars in economics, law, and international relations to explore questions regarding AI power, wealth, and oversight. Fellows pursue autonomous research projects and engage with experts in the Bay Area to produce shareable academic outputs.

    View Opportunity

  • Micsion Global Scholar and Fellow

    Organization: Micsion

    Location: Global

    Region: US/Canada, UK, EU, Asia, Australia, Remote, Others

    Type: Remote

    Category: Funding

    Posted: Mar 12, 2026

    The Micsion Global Scholarship and Fellowship provides financial assistance to students, professionals, entrepreneurs, community organizers, and independent changemakers pursuing higher education opportunities worldwide. This program aims to support high-achieving individuals by covering tuition and educational expenses to foster academic excellence.

    View Opportunity

  • AI Ethics Fellow

    Organization: Code for Africa

    Location: Benin, Burkina Faso, Cameroun, Chad, Ethiopia, Guinea, Mali, Mauritania, Niger, Senegal, Somalia, South Sudan, Sudan, Togo

    Region: Remote, Others

    Type: Remote

    Category: Fellowship

    Posted: Mar 12, 2026

    This three-month fellowship supports mid-career professionals in developing research and policy recommendations for the ethical adoption of AI across Africa. Fellows will analyze regional regulations and global standards to design inclusive AI policies and mitigate algorithmic bias.

    View Opportunity

  • FDA Artificial Intelligence and Machine Learning Fellow

    Organization: U.S. Food and Drug Administration (FDA)

    Location: White Oak, MD

    Region: US/Canada

    Type: On-site

    Category: Fellowship

    Posted: Mar 6, 2026

    This fellowship program offers research and developmental opportunities within the Center for Devices and Radiological Health's Artificial Intelligence Regulatory Science Program. Participants will conduct regulatory science research to ensure the safety and effectiveness of AI/ML-enabled medical devices in healthcare applications like disease detection and diagnosis.

    View Opportunity

  • 2026 Summer Research Fellow

    Organization: Center on Long-Term Risk

    Location: London, UK

    Region: UK, Remote

    Type: On-site

    Category: Fellowship

    Posted: Mar 6, 2026

    This eight-week program invites fellows to conduct research projects focused on reducing long-term suffering risks and advancing technical AI safety. Participants receive mentorship from experienced researchers and collaborate on empirical AI safety agendas such as understanding malicious traits in LLM personas.

    View Opportunity

  • AI Senior Research Innovation Fellow

    Organization: University of Hertfordshire

    Location: Hatfield, UK

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Mar 6, 2026

    This senior fellowship offers a high-impact opportunity to lead the integration of AI across health, medicine, and life sciences through strategic digital transformation initiatives. The role involves driving AI adoption in predictive diagnostics, mentoring junior colleagues, and securing research funding through collaborative partnerships.

    View Opportunity

  • Cambridge Digital Minds Fellow

    Organization: Cambridge Digital Minds

    Location: Cambridge, UK

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Feb 21, 2026

    This intensive seven-day residential programme aims to build research capacity in the fields of AI consciousness, AI welfare, and the societal implications of digital minds. Fellows receive expert mentorship, strategic project scoping support, and fully funded travel to participate in technical and philosophical workshops.

    View Opportunity

  • Awardee - AI for Good Impact Awards

    Organization: AI for Good

    Location: Global

    Region: US/Canada, UK, EU, Asia, Australia, Remote, Others

    Type: Remote

    Category: Others

    Posted: Feb 18, 2026

    The AI for Good Impact Awards feature three categories (AI for People, AI for Planet and AI for Prosperity), honoring innovation and impact across various sectors.

    View Opportunity

  • Visiting Fellow

    Organization: Constellation Institute

    Location: Berkeley, CA

    Region: US/Canada

    Type: On-site

    Category: Fellowship

    Posted: Feb 13, 2026

    This 3-6 month program supports full-time AI safety researchers by providing access to a research center and professional network. Fellows receive full funding for travel, housing, meals, and office space while continuing their technical or governance projects.

    View Opportunity

  • AI4X Postdoctoral Fellow

    Organization: NTU Singapore

    Location: Singapore

    Region: Asia

    Type: On-site

    Category: Fellowship

    Posted: Feb 13, 2026

    The AI4X Postdoctoral Fellowship supports outstanding early-career researchers who leverage artificial intelligence (AI) to accelerate breakthroughs across science, technology, engineering, and mathematics (STEM), including medicine.

    View Opportunity

  • 2026 Critical AI Policy Virtual Fellow

    Organization: Manchester Metropolitan University

    Location: Global

    Region: UK

    Type: Remote

    Category: Fellowship

    Posted: Feb 13, 2026

    Critical AI Policy Virtual Fellowship 2026 is an opportunity for humanities or social science researchers to understand, challenge and reshape current policies around generative AI by joining a virtual collective of researchers working on AI and society.

    View Opportunity

  • Turing AI Global Fellow

    Organization: UKRI/EPSRC

    Location: United Kingdom

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Feb 13, 2026

    Turing AI Global Fellowships attract up to five exceptional researchers who are either established international leaders in AI or who can demonstrate outstanding potential to shape the future of AI research globally. They must relocate to the UK and undertake transformational AI research that strengthens the UK’s position as a global leader in AI.

    View Opportunity

  • Postdoctoral Research Assistant in AI + Security

    Organization: Department of Engineering Science, Oxford

    Location: Oxford, UK

    Region: UK

    Type: On-site

    Category: Full-time

    Posted: Feb 10, 2026

    This is a full-time Postdoctoral Research Assistant position with the Oxford Witt Lab for Trust in AI (OWL) in the Department of Engineering Science (Central Oxford). The role involves hands-on empirical research in multi-agent and agentic AI security, focused on adversarial testing (red-teaming) and mitigating hard-to-detect failure modes in interactive AI systems (e.g., covert communication, collusion, strategic behaviour).

    View Opportunity

  • Book Grant

    Organization: Alfred P. Sloan Foundation

    Location: New York City, NY

    Region: US/Canada

    Type: Remote

    Category: Funding

    Posted: Feb 6, 2026

    The Alfred P. Sloan Foundation provides direct support to authors for the research and writing of books aimed at enhancing public understanding of science and technology, including artificial intelligence. Grants are typically awarded to individual authors or through host institutions like universities to simplify complex scientific subjects for a general audience.

    View Opportunity

  • Global Early-Career Short Research Fellow

    Organization: Imperial College London

    Location: London, UK

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Jan 31, 2026

    This programme invites early-career researchers from Least Developed and Lower Middle-Income Countries to spend four to eight weeks at Imperial College London conducting high-impact research. Fellows will focus on accelerating innovation through AI in Science and Open Hardware for Lab Automation to foster new global collaborations.

    View Opportunity

  • Fellow - Metascience Research Grants Round 2

    Organization: UK Research and Innovation (UKRI)

    Location: United Kingdom

    Region: UK

    Type: On-site

    Category: Funding

    Posted: Jan 24, 2026

    This opportunity provides funding for cutting-edge metascience research focused on optimizing R&D processes, research institutions, and the impact of AI. Projects must be based at a UK research organization, though collaborative efforts with international partners are strongly encouraged.

    View Opportunity

  • Oskar Morgenstern Fellow

    Organization: Mercatus Center

    Location: Online

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Jan 23, 2026

    The Oskar Morgenstern Fellowship is a one-year online program for scholars and graduate students interested in political economy and emerging technologies like artificial intelligence. Participants engage in seminar-style colloquia to explore how different schools of political economy address institutional governance and the philosophy of science.

    View Opportunity

  • IAPS AI Policy Fellowship 2026

    Organization: Institute for AI Policy and Strategy

    Location: Washington, D.C. or Remote

    Region: US/Canada, Remote

    Type: Hybrid

    Category: Fellowship

    Posted: Jan 15, 2026

    The IAPS AI Policy Fellowship is a three-month program designed for professionals to strengthen practical skills for securing a positive future with powerful AI. Fellows conduct independent research projects, such as writing policy memos and briefing officials, while receiving mentorship and financial support.

    View Opportunity

  • Promoting AI Research

    Organization: Artificial Intelligence Journal (AIJ)

    Location: Global

    Region: Others

    Type: Remote

    Category: Funding

    Posted: Jan 13, 2026

    The Artificial Intelligence Journal provides substantial funds to support the promotion and dissemination of AI research through competitive open calls and sponsorships. Approximately 160,000 USD is allocated annually to support activities such as studentships and specialized AI research initiatives.

    View Opportunity

  • OpenAI's Cybersecurity Grant Program

    Organization: OpenAI

    Location: Global

    Region: Remote

    Type: Remote

    Category: Funding

    Posted: Jan 13, 2026

    This funding program supports thoughtful, focused ideas at the intersection of AI and security.

    View Opportunity

  • Carr-Ryan Center’s Technology and Human Rights Fellow

    Organization: Harvard Kennedy School Carr-Ryan Center for Human Rights

    Location: Global

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Jan 3, 2026

    This program focuses on exploring how technological developments impact human rights protections, specifically addressing challenges related to surveillance capitalism. Fellows participate in a multi-year effort to investigate the intersection of democracy and technology through research and academic collaboration.

    View Opportunity

  • 2026 Mozilla Fellow

    Organization: Mozilla Foundation

    Location: Global

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Jan 3, 2026

    The 2026 Mozilla Fellows program supports visionary leaders including technologists, researchers, and creators building a better tech future. Fellows receive financial backing, professional development, and access to a global network to lead impactful projects and share expertise.

    View Opportunity

  • Summer Fellowship 2026, Research Track

    Organization: Centre for the Governance of AI (GovAI)

    Location: London, UK

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Dec 24, 2025

    This three-month fellowship is designed to launch or accelerate impactful careers in AI governance and policy through independent research projects. Participants receive mentorship from leading experts, engage in expert seminars, and develop research outputs such as white papers or policy analysis.

    View Opportunity

  • MATS Summer Fellow

    Organization: MATS (ML Alignment & Theory Scholars)

    Location: Berkeley, CA

    Region: US/Canada

    Type: On-site

    Category: Fellowship

    Posted: Dec 20, 2025

    The MATS Program is a 12-week independent research fellowship that connects emerging researchers with top mentors in AI alignment, interpretability, governance, and security. Fellows conduct intensive research while participating in workshops, talks, and networking events to advance safe and reliable AI.

    View Opportunity

  • TARA Teaching Assistant

    Organization: TARA

    Location: Remote

    Region: Remote

    Type: Remote

    Category: Part-time

    Posted: Dec 19, 2025

    The Teaching Assistant supports the TARA educational program by assisting in the delivery of curriculum and student engagement. The role involves working closely with lead instructors to facilitate a productive learning environment for all participants.

    View Opportunity

  • SPAR Research Fellow

    Organization: SPAR

    Location: Remote

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Dec 19, 2025

    SPAR is a part-time research program that pairs aspiring AI safety and policy researchers with expert mentors to address risks from AI. Mentees work on impactful research projects for three months, culminating in a Demo Day and career fair with leading safety organizations.

    View Opportunity

  • Research Scientist – CBRN Risk Modeling

    Organization: SaferAI

    Location: Paris, France, London, UK and Remote

    Region: UK, EU, Remote

    Type: On-site

    Category: Full-time

    Posted: Dec 19, 2025

    SaferAI is seeking a Research Scientist to lead the development of CBRN risk models and monitoring systems for a European Commission tender. The role involves conducting technical research at the intersection of biosecurity and AI safety to inform regulatory enforcement for general-purpose AI systems.

    View Opportunity

  • Grantee - AI4PG Fast Grants

    Organization: Recerts Journal

    Location: Global

    Region: Remote

    Type: Remote

    Category: Funding

    Posted: Dec 19, 2025

    This program provides fast grants of up to $10,000 to support AI research and development that improves decision-making, allocation, and impact assessment in public goods. Selected projects aim to develop AI-powered tools such as grant allocation algorithms and predictive analytics while undergoing peer review through journal publication.

    View Opportunity

  • Grantee - AI for Safety and Science Nodes 2026

    Organization: Foresight Institute

    Location: San Francisco, CA and Berlin, Germany

    Region: US/Canada, EU

    Type: On-site

    Category: Funding

    Posted: Dec 19, 2025

    This initiative provides financial grants, office space, and dedicated compute resources to researchers and builders using AI to advance science and safety. The program aims to create a decentralized ecosystem that supports open and secure AI-driven progress across security, biotechnology, and nanotechnology.

    View Opportunity

  • AI and Society Researcher

    Organization: ELLIS Institute Tübingen and MPI-IS

    Location: Tübingen, Germany

    Region: EU

    Type: On-site

    Category: Full-time

    Posted: Dec 19, 2025

    The COMPASS research group is hiring researchers across all levels to focus on safe, aligned, and steerable AI agents. Research areas include AI security, multi-agent dynamics, and mitigating risks like prompt injection and deceptive alignment.

    View Opportunity

  • Accelerator Fellow

    Organization: Accelerating AI Ethics

    Location: Global

    Region: US/Canada, UK, EU, Asia, Australia, Others

    Type: On-site

    Category: Fellowship

    Posted: Dec 18, 2025

    The Accelerator Fellowship Programme is a global AI ethics hub dedicated to tackling the toughest ethical challenges posed by artificial intelligence. It brings together leading thinkers and experts to collaborate on impactful contributions to AI regulation, industry practices, and public awareness.

    View Opportunity

  • AI-for-Science Postdoctoral Fellow

    Organization: FutureHouse

    Location: San Francisco, CA

    Region: US/Canada

    Type: Hybrid

    Category: Fellowship

    Posted: Dec 18, 2025

    This fellowship offers early-career scientists the opportunity to pursue independent research at the intersection of AI and science with full access to computational and laboratory resources. Fellows divide their time between San Francisco and academic partner institutions to accelerate high-impact scientific discoveries.

    View Opportunity

  • Grantee - Engineering Ecosystem Resilience

    Organization: ARIA (Advanced Research and Invention Agency)

    Location: United Kingdom

    Region: UK

    Type: Remote

    Category: Funding

    Posted: Dec 18, 2025

    This opportunity provides seed funding for individuals or teams pursuing research focused on advanced monitoring and resilience-boosting interventions to prevent ecological collapse. High-potential proposals that align with or challenge core beliefs in ecosystem engineering can receive up to £500,000 to uncover new pathways for planetary prosperity.

    View Opportunity

  • Grantee - Sustained Viral Resilience

    Organization: Advanced Research and Invention Agency (ARIA)

    Location: Global

    Region: US/Canada, UK, EU, Asia, Australia, Others

    Type: Remote

    Category: Funding

    Posted: Dec 18, 2025

    This £46m programme seeks to create a new class of medicines called sustained innate immunoprophylactics to provide durable protection against respiratory viruses. ARIA is funding ambitious projects across synthetic biology, systems immunology, and AI to foster radical advances in viral resilience.

    View Opportunity

  • Grantee - Enduring Atmospheric Platforms

    Organization: Advanced Research and Invention Agency (ARIA)

    Location: United Kingdom

    Region: UK

    Type: Remote

    Category: Funding

    Posted: Dec 18, 2025

    This £50m programme aims to develop low-cost, persistent, and autonomous atmospheric platforms capable of keeping a 20 kg payload aloft and powered for seven days. It seeks interdisciplinary proposals for novel architectures that can provide a scalable alternative to orbital satellites for high-performance connectivity.

    View Opportunity

  • Grantee - Precision Mitochondria

    Organization: ARIA (Advanced Research and Invention Agency)

    Location: United Kingdom

    Region: UK

    Type: On-site

    Category: Funding

    Posted: Dec 18, 2025

    This programme provides at least £55m to support the creation of a foundational toolkit for engineering the mitochondrial genome in vivo. It funds ambitious interdisciplinary projects focused on delivering, expressing, and maintaining nucleic acids within the mitochondrial matrix to enable new therapeutic interventions.

    View Opportunity

  • Grantee - AI Futures Fund

    Organization: AI Futures Fund

    Location: Global

    Region: US/Canada, UK, EU, Asia, Australia, Remote, Others

    Type: Remote

    Category: Funding

    Posted: Dec 18, 2025

    The AI Futures Fund is a collaborative initiative designed to accelerate AI innovation by providing startups with equity funding and early access to advanced Google DeepMind models. Participants receive technical expertise from Google researchers and Cloud credits to support the scaling of AI-powered products.

    View Opportunity

  • Heron AI Security Fellow

    Organization: Apart Research and Heron AI Security

    Location: London, Tel Aviv, and San Francisco, CA

    Region: US/Canada, UK, Remote, Others

    Type: Hybrid

    Category: Fellowship

    Posted: Dec 18, 2025

    A part-time research program where cybersecurity professionals collaborate with field leaders to secure transformative AI systems through concrete technical projects. Research teams work for four months to produce publishable results, open-source prototypes, or technical reports under expert guidance.

    View Opportunity

  • Postdoctoral Fellow

    Organization: University of Toronto / Vector Institute

    Location: Toronto, Canada

    Region: US/Canada

    Type: On-site

    Category: Full-time

    Posted: Dec 18, 2025

    This role involves leading research on methodological and theoretical advances at the intersection of uncertainty quantification and reasoning in large language models. Successful candidates will have a PhD, strong programming skills, and a track record of publications at top machine learning venues like NeurIPS or ICML.

    View Opportunity

  • Grantee - Interpretability Challenge

    Organization: Martian

    Location: Global

    Region: Remote

    Type: Remote

    Category: Funding

    Posted: Dec 18, 2025

    The Martian Interpretability Challenge offers a $1 million prize to advance the field of interpretability with a specific focus on code generation. This initiative aims to transform AI development from 'alchemy' into 'chemistry' by developing principled ways to understand and control how models function.

    View Opportunity

  • Project Incubator

    Organization: Sentient Futures

    Location: Global

    Region: Remote

    Type: Remote

    Category: Fellowship

    Posted: Dec 18, 2025

    This eight-week incubator pairs fellows with expert mentors to execute projects aimed at improving the welfare of future sentient beings across various cause areas. Participants work at least five hours per week to deliver a finished output or a detailed funding proposal for long-term impact.

    View Opportunity

  • Fellow

    Organization: Tarbell Center for AI Journalism

    Location: San Francisco Bay Area and various newsroom locations

    Region: US/Canada, UK, Others

    Type: On-site

    Category: Fellowship

    Posted: Dec 18, 2025

    The Tarbell Fellowship is a one-year program for journalists to cover artificial intelligence through nine-month newsroom placements and specialized training. Fellows receive stipends ranging from $60,000 to $110,000 alongside mentorship from expert reporters and a weeklong summit in the San Francisco Bay Area.

    View Opportunity

  • Research Assistant - Science and Emerging Technology

    Organization: RAND Europe

    Location: Cambridge, UK

    Region: UK

    Type: Hybrid

    Category: Full-time

    Posted: Dec 18, 2025

    The Research Assistant will support policy-oriented research projects within the Science and Emerging Technology team at RAND Europe. This role involves conducting literature reviews, data analysis, and contributing to high-quality reports for various public and private sector clients.

    View Opportunity

  • Research Engineer, Cybersecurity RL

    Organization: Anthropic

    Location: San Francisco, CA; New York City, NY

    Region: US/Canada

    Type: Hybrid

    Category: Full-time

    Posted: Dec 18, 2025

    This role involves advancing AI capabilities in secure coding and vulnerability remediation through reinforcement learning research and engineering. Candidates will design RL environments and conduct experiments to enhance defensive cybersecurity workflows within Anthropic's Horizons team.

    View Opportunity

  • Visiting Fellows

    Organization: Constellation Research Center

    Location: Berkeley, CA

    Region: US/Canada

    Type: On-site

    Category: Fellowship

    Posted: Dec 18, 2025

    The Visiting Fellows program brings together professionals from diverse sectors to join Constellation's Berkeley-based workspace for three to six months to advance their research. Fellows receive comprehensive support including travel reimbursement, housing, meals, and 24/7 access to a collaborative environment with leading AI researchers.

    View Opportunity

  • Policy Advisor, UK

    Organization: Anthropic

    Location: London, UK

    Region: UK

    Type: On-site

    Category: Full-time

    Posted: Dec 18, 2025

    This role involves leading the development of UK legislative and regulatory positions while engaging with government and parliamentary stakeholders to advance AI safety. The advisor will translate technical research into policy recommendations and collaborate with global legal and technical teams to shape Anthropic's strategic outlook.

    View Opportunity

  • Anthropic AI Safety Fellow

    Organization: Anthropic

    Location: London, UK; Ontario, CA; San Francisco, CA; Berkeley, CA

    Region: US/Canada, UK, Remote

    Type: Hybrid

    Category: Fellowship

    Posted: Dec 18, 2025

    The Anthropic Fellows Program is a four-month initiative designed to accelerate AI safety research by providing funding, mentorship, and stipends to technical talent. Fellows work on empirical projects aligned with research priorities such as scalable oversight and mechanistic interpretability, aiming to produce public research papers.

    View Opportunity

  • Anthropic AI Security Fellow

    Organization: Anthropic

    Location: San Francisco, Berkeley, London, Ontario, or Remote

    Region: US/Canada, UK, Remote

    Type: Hybrid

    Category: Fellowship

    Posted: Dec 18, 2025

    The Anthropic Fellows Program provides funding, mentorship, and compute resources for technical talent to conduct empirical research on AI security and safety for four months. Fellows work with Anthropic researchers to produce public outputs, such as papers, focusing on defensive AI use and securing infrastructure.

    View Opportunity

  • Summer Fellowship 2026, Applied Track

    Organization: Centre for the Governance of AI (GovAI)

    Location: Oxford, UK

    Region: UK

    Type: On-site

    Category: Fellowship

    Posted: Dec 18, 2025

    The Summer Fellowship Applied Track is a three-month program designed to accelerate careers in AI governance through projects in fields like communications, policy, and operations. Fellows participate in expert seminars and receive mentorship to develop non-research skill sets for the AI safety ecosystem.

    View Opportunity

  • AI Biosecurity Manager

    Organization: Frontier Model Forum

    Location: U.S. (Select States)

    Region: US/Canada

    Type: On-site

    Category: Full-time

    Posted: Dec 18, 2025

    The AI Biosecurity Manager will drive consensus on threat models, evaluations, and mitigations for biological and chemical risks associated with frontier AI models. This role involves coordinating expert workshops, managing collaborative research projects, and documenting emerging industry practices for managing high-level biosecurity threats.

    View Opportunity