About us

Important Information

  • The AI Safety Initiative Groningen has been awarded funding from Open Philanthropy, a grantmaking organization that aims to use its resources to help others as much as it can. Its support enables us to pursue our mission of ensuring AI development aligns with safe and ethical standards. Open Philanthropy evaluates its own performance every year; its report for 2022 can be found here and gives useful insight into its priorities and way of operating.

  • We received our funding from Open Philanthropy after going through a written application and an online interview process. Open Philanthropy has stated that the purpose of the grant “...is to provide funding for individuals who want to direct or support university student groups focused on topics relevant to improving the long-term future… or specific cause areas [such as AI Safety]”. Examples of permitted activities include “organizing, publicizing, and providing material for reading and discussion groups, career advising workshops, research activities, and retreats, talks and events.” The only requirement for receiving this grant is that we “agree to submit a written report to Open Philanthropy within 30 days of the Termination Date [of the grant]. The report must be at least two paragraphs and must describe in reasonable detail what activities [we] conducted with the support of these grant funds.” Concretely, no conditions of the grant compromise our independence or objectivity in any of our operations. We are very fortunate to be able to receive this grant while maintaining a completely unencumbered focus on the ethical and safe development of AI.

  • We are well aware that political and commercial influences from, for example, Big Tech are a sensitive issue in academia. We therefore make sure to thoroughly investigate the kinds of influences our sources of funding might be subject to, or might themselves impose.

    We have noted below important details regarding Open Philanthropy, Big Tech, and power concentration that we have encountered so far, so that others may investigate further themselves. Taking all the points below into consideration, and especially considering that our grant is delivered without any “strings” attached, we currently find it appropriate and beneficial to receive funding from Open Philanthropy. If you have any thoughts or arguments that may give us reason to think otherwise, or have questions, please contact us at info@aisig.org.

    Important points:

    History of Open Philanthropy:

    Open Philanthropy’s main funders are Dustin Moskovitz, co-founder of Facebook, and his wife, Cari Tuna. Moskovitz left Facebook in 2008; some reports claim that most of his net worth lies in his estimated 2% stake there, while others dispute this. After leaving Facebook, Moskovitz co-founded Asana, a workflow software company now valued at around 4.4 billion USD.

    Tuna and Moskovitz were in their mid-20s in 2010 when they became the youngest couple ever to sign on to the Giving Pledge, the campaign started by Bill Gates and Warren E. Buffett to encourage the world’s billionaires to commit to giving away most of their wealth. They went on to be involved in the founding of Open Philanthropy.

    Holden Karnofsky and his role as CEO:

    For most of its existence, Open Philanthropy’s CEO has been Holden Karnofsky, whom Moskovitz and Tuna originally met through a mutual friend. Four years before the founding of Open Philanthropy, Karnofsky co-founded GiveWell, a charity evaluator that searches for interventions that save or improve lives the most per dollar.

    While running Open Philanthropy, Karnofsky went on to write extensively about his concerns about potential risks from advanced AI systems, speaking both on behalf of Open Philanthropy [1,2] and on his personal blog [3,4,5,6]. Karnofsky has clearly outlined in his writing that, over time, his thinking about where best to focus his philanthropic efforts moved towards preparing for the potential harms of speculative future technologies: “The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today—which I did for much of my career, and still think is one of the best things to work on) is that I think the stakes are just that high.” This reasoning is largely based on the concept of hits-based giving, which Open Philanthropy embraces: “Our overarching goal is to do as much good as we can, and as part of that, we’re open to supporting work that has a high risk of failing to accomplish its goals. We’re even open to supporting work that is more than 90% likely to fail, as long as the overall expected value is high enough.”

    Open Philanthropy’s influence, and their strong focus on future harms:

    With regard to their philanthropic work connected to AI, it is clear that Open Philanthropy prioritizes work addressing speculative future harms and places less focus on addressing existing harms from AI. Furthermore, there may be reason to believe that Open Philanthropy is using its significant financial resources to skew the global policy and research landscape in this direction too. Fortunately, although our funder has this preference, it does not mean we are beholden to it (please see the “Conditions/Expectations Tied to Funding” point on this same page for more detail). We, as AISIG, place equal importance on addressing both present and speculative harms from AI. One consequence of this is that at least half of our activities are of the kind that Open Philanthropy wants to support, which is why we are able to receive funding from them.

    Insider relationships/networks relating to Open Philanthropy:

    There are many ties between people at Open Philanthropy and people involved with frontier AI labs that are important to be aware of. For us, the most fitting explanation of these ties is simply that people with similar values, interests, and careers end up in contact and frequently enter personal relationships. However, we are well aware that malevolent strategic actions (such as altering policies in the US so that one’s company gains more profit or power) could arise as a result of these kinds of relationships, and it is therefore important to keep track of them.

    Open Philanthropy previously gave a grant to OpenAI and received a seat on the board as a result. In short, the justification given for this was that “We believe that close involvement with such an organization is likely one of the most effective avenues available to us for advancing our goal of increasing the amount of work done on reducing potential risks from advanced AI, both within the organization and outside it”.

    Holden Karnofsky’s wife is Daniela Amodei, an ex-employee of OpenAI who went on to found the major frontier AI lab Anthropic together with her brother, Dario Amodei (who was also an ex-housemate of both Holden and Daniela). Holden Karnofsky has repeatedly publicly disclosed this relationship and made clear that Daniela Amodei has significant equity in both Anthropic and OpenAI, with a large portion of it being pledged for donation.

    Dustin Moskovitz, the primary funder of Open Philanthropy, has invested in Anthropic. Moskovitz has made clear that OpenAI CEO Sam Altman previously invested in Moskovitz’ Asana, and both Altman and Moskovitz invested in the same nuclear fusion startup.

    Summary:

    Our understanding from our investigation thus far is that, with their work, Open Philanthropy greatly prioritises safeguarding against speculative harms from advanced AI systems and gives significantly less support to efforts addressing current harms from AI. Fortunately, however, as can be seen in the details of our agreement with them, we are not in any way forced to share exactly the same priorities. There are some additional observations that could be cause for concern, such as the level of influence Open Philanthropy currently exerts over the AI policy landscape, or the personal relationships between some people at Open Philanthropy and people at frontier AI labs. However, these observations also have potentially benign explanations, which we are currently most convinced by.

  • To ensure objectivity in all our endeavors, we actively engage with a wide array of academic stakeholders, including professors, university staff, and students, fostering regular consultations and open dialogues. This approach allows us to critically assess our strategies and keep them aligned with diverse academic standards. Additionally, we prioritize interactions with critics and skeptics, embracing their insights as a means to challenge our assumptions and improve the balance of our offerings. By maintaining this inclusive and transparent approach, we aim to address potential biases and uphold the integrity of our support to the research community and beyond. If you have any critiques or questions, please feel free to contact us at info@aisig.org.

  • Our initiative operates independently, without an advisory body, ensuring our freedom of expression in exploring diverse ideas. We place a strong emphasis on the quality of arguments, with critical thinking and rigorous debate at the heart of our work. This independence enables us to openly consider a broad spectrum of viewpoints, including those that challenge conventional thinking in the field. Our approach allows us to engage with a variety of perspectives, striving to contribute constructively to the ongoing discourse on AI safety.

  • The AI Safety courses we facilitate for free are developed by BlueDot Impact, an organization committed to responsibly steering the rapid advancement of transformative technologies for humanity's benefit. BlueDot's focus is on delivering high-quality courses, nurturing a collaborative community, and creating opportunities to tackle global challenges. The curriculum is the collective work of experts from various labs and institutions across the AI safety field, including the Alignment Research Center, OpenAI, the Center for the Governance of AI, the Center for Law and Technology, the Machine Learning Group at the University of Cambridge, and the Centre for the Study of Existential Risk. These experts are the principal architects of the courses, ensuring a balanced and thorough approach to AI safety education. Please note that these courses are not from the University of Groningen.

  • At AISIG, we are deeply engaged in understanding and addressing the complexities and challenges of artificial intelligence, both current and future. Initially more focused on speculative and future risks of AI, our scope has naturally broadened through engaging with diverse professionals and expanding our team. This has enhanced our commitment to a more encompassing view of AI-related challenges.

    Harms such as misinformation and discrimination from algorithms are already evident today; other harms show signs of emerging. We understand the importance of addressing both ongoing and potential future harms. This is not an either/or scenario. Present and emerging risks often share similar mechanisms, patterns, and solutions. Our investment in robust governance frameworks and AI safety is driven by this understanding, aiming to create meaningful impacts across various aspects of AI development and usage.

    Inclusivity remains a core principle in our approach. AI's impacts are wide-ranging, touching diverse groups in different ways. Hence, we actively engage with a spectrum of perspectives and solutions, from ensuring fairness and mitigating bias to addressing speculative existential risks. Our commitment is to recognize and respond to all concerns, whether immediate or speculative, with equal dedication and seriousness.

Our team

  • Photo of Joris Postmus

    Joris Postmus

    Co-Director

    Driven to develop robust, transparent, and reliable AI systems, I specialize in the interpretability of Large Language Models. Off-duty, I relish spending time with friends, reading, and kitesurfing. Connect with me at joris@aisig.org.

  • Photo of organiser Max Kearney

    Max Kearney

    Co-Director

    I like to examine the ‘big picture’ of AI and strategic responses. How quickly could AI develop, what impact will this have for society, what responses are appropriate? I love meditation and being with friends. Contact me at +31626709586 or max@aisig.org.

  • Photo of organizer Xenia

    Xenia Demetriou

    Operations and AI Governance Facilitator

    I love AI ethics and its societal relevance, and learning about the human brain. I also like video games and going on walks. You can reach me at +31629375012.

  • Photo of organizer Davide Zani

    Davide Zani

    Research Groups Coordinator

    I'm Davide, a student pursuing a Bachelor's degree in AI. My primary focus is on AI Safety, with a keen interest in the fields of Deceptive AI and Mechanistic Interpretability. You can reach me at +393442602237.

  • Photo of organiser Gabriele Sarti

    Gabriele Sarti

    Research and Upskilling

    I am a PhD student with a passion for languages, and my research mainly focuses on the interpretability of AI systems. You can reach me at gabriele@aisig.org. More about me: https://gsarti.com.

  • Photo of organiser Steve Abreu

    Steve Abreu

    Research and Upskilling

    I'm a PhD student working on brain-inspired hardware and compute paradigms for AI, aiming to make AI systems more private, efficient, adaptive, and sustainable. More about me: stevenabreu.com

  • Lhea Beumer

    Technical AI Safety Facilitator

    I’m an MSc student working at the intersection of human and machine intelligence, with a special interest in AI alignment. Among other things, I enjoy playing tennis, making music, and spending time with loved ones. You can reach me at +31629366961.

  • Picture of organizer Alex

    Alexandru Dimofte

    Hackathons & Research

    My scientific curiosity revolves around AI explainability and interpretability. I am also passionate about philosophy, travelling, discovering diverse cultures, and meeting fascinating people.

  • Photo of organiser Cornelia

    Cornelia Riefsa Narwastu Evangelica

    AI Governance Facilitator

    I'm an International and European Law student interested in exploring AI's multidimensional aspects and strategies for continuously aligning them with societal needs. In my free time, I love to connect with people and learn new things.

  • Pepijn Emmen

    Technical AI Safety Facilitator

    Motivated to harness AI as a positive force for individuals and society, I am driven by the intersection of technology and philosophy. My focus is on comprehending the technical intricacies of AI and translating them into meaningful insights that contribute to constructive philosophical discourse.

  • Tiwai Mhundwa

    AI Governance Courses

    As an LLB student specializing in Technology law, I have a passion for applying legal reasoning to the tech landscape and unraveling the ethical facets of AI and current technology. I thrive on exploring new social spheres and acquiring diverse skills and experiences beyond my current program. You can reach me at +31633867831.

  • Jeremias Ferrao

    Technical AI Safety Courses

    I believe that Artificial General Intelligence (AGI) will become the most influential invention in human history. However, it is crucial that we approach its use with caution, as unrestrained application may lead to catastrophic outcomes. Therefore, my research interests lie in the areas of mechanistic interpretability and alignment.

Some of the other AI Safety groups we are connected with: