Research & Discussion Groups

Due to high interest, admission to several of these groups now involves an interview/selection phase. This keeps the standard high by ensuring that admitted members are committed to attending and contributing consistently.

Why we’re launching these groups:

The AISIG Research and Discussion Groups are designed to bring together individuals passionate about AI safety, offering a platform for connection, collaboration, and mutual learning.

We've recognized a gap in local opportunities for individuals to engage deeply with specific AI safety topics. From the technical challenges of mechanistic interpretability to the societal implications of AI for labour and democracy, there's a need for focused discussion and research. These groups are our answer: a space offering practical experience, mentorship, and collaboration to students, academics, and AI safety enthusiasts.

For each group, we will have at least one expert or experienced student who can help guide the conversation, lead (research) projects, and answer questions about the field.

About the groups

(Mechanistic) Interpretability

Reverse-engineering ML systems to understand the inner workings and components of models, and designing systems whose components and decisions are more easily interpreted. The goal is to uncover the specific circuits, motifs, and algorithms that neural networks use to perform tasks by studying the computational mechanisms and representations they learn, rather than just analyzing inputs and outputs. Example

AI Governance

Dedicated to shaping policies for the ethical and responsible use of AI. Suitable for those passionate about guiding AI development through regulatory frameworks. The goal is to discuss guidelines, regulations, and accountability mechanisms that ensure AI systems are transparent, secure, fair, and unbiased. More information

AI and Labour

Focuses on how AI affects jobs and the broader concept of work. Perfect for exploring the implications of AI for employment and human roles. The goal is to empirically assess how exposure to AI technologies impacts employment and wages, with evidence indicating that AI can both replace and create jobs. Example

Neuralignment

Neuralignment explores the intersection of neuroscience and AI, studying how our knowledge of the brain can help solve the alignment problem and improve how AI systems align with human values. A great option for those interested in the confluence of brain science and AI. More information

Agent Foundations

Centers on the core mathematical principles behind AI agents. A good fit for those interested in the theoretical underpinnings of AI behaviors. The goal is to develop a formal, mathematically precise understanding of agency to ensure AI systems are reliably aligned with human interests. More information

Practical AI Safety

Works on making AI systems safer and fairer through hands-on testing and improvement. A good fit for anyone focused on addressing AI vulnerabilities in real-world applications. Topics include adversarial attacks, red-teaming, and model evaluation. Example

Who is this for?

The AISIG Research and Discussion Groups are designed for students at any level (BSc to PhD) who are passionate about AI safety. Whether you study AI, law, philosophy, mathematics, or any related field, if you're interested in the widespread impacts of AI, these groups are for you.

Why should you join?

1. Connect and Collaborate: There's currently a significant gap in local opportunities for individuals passionate about specific AI safety topics to meet, share ideas, and work together. Our groups are designed to fill this gap, creating a vibrant community of enthusiasts and experts.

2. Mentorship and Experience: Many students and newcomers to AI safety struggle to find practical experience and mentorship. Our groups offer the opportunity to learn from knowledgeable mentors, including PhDs and professionals, who are ready to share their insights and guide you through your journey in AI safety research.

3. Contribute to AI Safety: By joining our groups, you become part of a collective effort to shape the future of AI in a way that is safe, ethical, and beneficial for all.

4. Active Community: Meet regularly to discuss ideas, work on projects, and engage with challenges in your area of interest. It's not just about discussions; it's about building a community of active learners and contributors!

Practical Details

Timing: Meetings will start towards the end of April and take place every two weeks (~2 hrs per session).

Location: Sessions will take place in various settings, from cafes to offices in Groningen, providing a comfortable environment for discussions.

Sessions: The content and expectations of the sessions will vary per group. Some will focus more heavily on readings and group discussions, whereas others will be more practical and project-oriented.

Outputs: One of the groups' aims is to produce usable output. This could range from blog posts to collaborative research projects and publications.

Support: AISIG is committed to supporting these groups, ensuring resources are available for a productive and engaging experience.

Interested in any of these groups? We welcome participants of all experience levels.

*These groups are not affiliated with the University of Groningen