Working Groups

Trustworthy AI Working Groups
Owner: Temi Popo
Years Active: 2020-present
Updated: 2022-05-16
Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; by convening regularly online, they support ongoing work on trustworthy AI. All our work and organizing is done openly.[1]


2021 - 2022 Cohort


As we seek to support the MozFest community in building more tools and technology that promote Trustworthy Artificial Intelligence (AI), we would like expert AI Builders to work alongside highly engaged Civil Society Actors in envisioning a more equitable automated future.

Building Trustworthy AI Working Group


The Building Trustworthy AI Working Group for AI builders is an engaged community of over 400 global members who aim to help our technical community build more Trustworthy AI. We invite developers, funders, and policy-makers expressly concerned with technical products and standards to join this group.

Fair EVA: A Toolkit for Evaluating Voice Assistant Fairness. This project will build and deploy an evaluation toolkit for developers to test the fairness of the deep learning components that make up embedded voice assistants. We believe a fair voice assistant should work equally well for all users, irrespective of their demographic attributes. At the moment there is no way even to test whether this is the case, despite obvious discrepancies during regular use. Join us to create more equitable access to voice assistants.
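As a minimal sketch of the kind of check such a toolkit might run, the Python below compares a speech recognizer's word error rate (WER) across speaker groups. The group labels and sample transcripts are hypothetical placeholders, not part of Fair EVA itself.

    # Minimal sketch: compare word error rate (WER) across speaker groups.
    # The sample data below is a hypothetical placeholder, not Fair EVA output.
    from collections import defaultdict

    def wer(reference: str, hypothesis: str) -> float:
        """Word error rate: word-level edit distance divided by reference length."""
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edits needed to turn the first i reference words
        # into the first j hypothesis words.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + sub)  # substitution or match
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    # Each entry: (speaker group, reference transcript, recognizer output).
    samples = [
        ("group_a", "turn on the kitchen lights", "turn on the kitchen lights"),
        ("group_b", "turn on the kitchen lights", "turn of the kitten lights"),
    ]

    scores = defaultdict(list)
    for group, reference, hypothesis in samples:
        scores[group].append(wer(reference, hypothesis))

    for group, values in sorted(scores.items()):
        print(f"{group}: mean WER = {sum(values) / len(values):.2f}")

A large gap in mean WER between groups is exactly the kind of discrepancy the toolkit aims to make measurable.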

Visualizing Internet Subcultures on Social Media. This project will build an interactive visualization of the formation of and interactions between different communities on the internet, from die-hard fan bases to real-life social movements. Join us to help build this dynamic visualization.

Foundational Tech for Low-Resource Languages. This project will re-imagine the foundational technology infrastructure needed to kickstart the artificial intelligence / machine learning frameworks for some of the low-resourced languages of South Asia. Join us in building a more diverse, equitable, and community-run AI ecosystem for global languages, starting in South Asia.

Truth of Waste as a Public Good. This project is bringing transparency to waste recycling through standardized content authentication. Focusing on waste management, an AI-powered prototype will support the decision-making ecosystem, empowering better decisions around recycling or re-homing. Join us to streamline the cooperative, circular economy business model.

MOSafely: An AI Community Making the Internet a Safer Place for Youth. Modus Operandi Safely (MOSafely.org) is an open-source community that leverages evidence-based research, data, and artificial intelligence to help youth and young adults engage more safely online. Join us in building technology that detects risk behaviors of youth online and/or unsafe online interactions with others.

Atlanta University Center Data Science Initiative. As part of Mozilla’s HBCU Collaborative, the Atlanta University Center Data Science Initiative is working to develop a prototype of a tool or technology that promotes Trustworthy AI. Interested in supporting this open innovation project? Join the working group to learn more.

AI Unveiled: Discovering Patterns, Shaping Patterns. This project is building a serious game environment for users to discover the patterns driving the behavior of automated systems based on machine learning. Our goal is for users - even without technical knowledge - to gain insight into the workings of opaque systems based on so-called artificial intelligence. Join us in giving users a better understanding of how their data feeds algorithms.

Civil Society Actors for Trustworthy AI Working Group


The Civil Society Actors for Trustworthy AI Working Group is for civil society actors engaged with AI and the promotion of trustworthy solutions in their communities and work. We welcome global and local activists, artists, journalists, organizers, and researchers already working on Trustworthy AI and those who are interested in doing so to this group.

A feminist dictionary in AI will create a reference for designing gender-inclusive algorithms. The dictionary will help AI Builders understand feminist theory and gender equality, as well as describe the kinds of classification, analysis, and prediction-making necessary for developing a gender-inclusive algorithm. Join us to work towards a more equitable and inclusive era in AI.

Accountability Case Labs will build community and common strategic goals across the full range of technical and social experts, researchers, builders, and advocates who care about AI accountability and auditing. Our cross-disciplinary project will also develop a body of collaborative case studies examining real-world examples across groups, from policy-making and regulation to design, development, procurement, deployment, and security. Join us for a collaborative approach to increasing AI accountability.

AI Governance in Africa will create an open and inclusive space for discussion and research into the development, application, and governance of AI in Africa. We will draft a white paper that tackles issues of societal concern like bias in AI. Join us to help create this resource for more open and inclusive AI in Africa.

Audit of Delivery Platforms will design and perform a technological audit of the algorithms used by delivery services in Ecuador in order to reinforce labour rights defense activities. With help from several community organizations, this audit will help further workers’ labour rights and efforts to unionize. Join us to help model this audit in Ecuador and consider how it might support labour rights where you live, as well.

Black Communities - Data Cooperatives (BCDC) will build a community of impact that helps organizers and leaders work with their communities to identify the links between their data and the AI technology around them. The project will also share practices for combating the negative effects of AI in Black communities and spaces like educational technology. Join us to work with like-minded community members who want Black communities to have more control over their data and its use in AI.

Harnessing the civic voice in AI impact assessment will develop guidance for meaningful participation of external stakeholders, such as civil society organizations and affected communities, in Human Rights Impact Assessments (HRIAs) of AI systems. These assessments should be a requirement for any AI system deployed in communities. Join us to amplify civic voices in AI development and design, safeguard human rights, and promote rights-based AI through HRIAs that benefit affected communities.

The Trustworthy AI in Education Toolkit will help science educators introduce and contextualize AI in their classrooms and other learning environments. Educators and learners need a better understanding of AI to judge what makes it helpful and trustworthy - or not. Join us to gather, organize, and document AI education frameworks, curricula, and activities, and to put them in a Trustworthy AI context that helps educators and learners think critically about AI in their lives.

As part of Mozilla’s HBCU Collaborative, students at Spelman College will work with academics and industry experts on a white paper that centers Black women in the conversation on Trustworthy AI and approaches it through an intersectional lens. They wrote a Blackpaper on Black women's experiences with algorithmic microaggressions.

2020 - 2021 Cohort

The collaborative spirit and innovation felt at the Mozilla Festival should be experienced all year round. The MozFest team is testing a working group structure to support the technical community interested in building trustworthy AI, and we want your help to collaboratively make trustworthy AI a reality.

Click any link below to find out more about the Trustworthy AI projects that were built, the working group process and outputs, and how some teams used MozFest as a platform to expand their work.

  • Truth as a Public Good
    Member Feedback: "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."
  • Zen of ML
    Member Feedback: "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."
  • Nanny State
    Member Feedback: "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."
  • Narrative Future of AI
    Member Feedback: "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"