__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=AI v3 Square.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; by convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
<br /><br />
<big><br />
=== 2021 - 2022 Cohort ===<br />
</big><br />
<br /><br />
As we seek to support the MozFest community in building more tools and technology that promote Trustworthy Artificial Intelligence (AI), we would like expert AI Builders to work alongside highly engaged Civil Society Actors in envisioning a more equitable automated future.<br />
<br />
==== Building Trustworthy AI Working Group ==== <br />
<br /><br />
The Building Trustworthy AI Working Group for AI builders is an engaged community of over 400 global members who aim to help our technical community build more Trustworthy AI. We invite developers, funders, and policy-makers expressly concerned with technical products and standards to join this group. <br /><br />
<br />
'''[http://www.faireva.org Fair EVA: A Toolkit for Evaluating Voice Assistant Fairness.]''' This project will build and deploy an evaluation toolkit for developers to test the fairness of the deep learning components that make up embedded voice assistants. We believe a fair voice assistant should work equally well for all users, irrespective of their demographic attributes. At the moment it is not even possible to test whether this is the case, despite obvious discrepancies during regular use. Join us to create more equitable access to voice assistants.<br /><br />
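''For illustration only (not part of the Fair EVA toolkit): a minimal, hedged sketch of the kind of group-wise check such a toolkit could enable, comparing a recognizer's word error rate (WER) across demographic groups. The recognizer, dataset, group labels, and the use of the open-source <code>jiwer</code> package are assumptions made for this example.''<br />
<syntaxhighlight lang="python">
# Hypothetical sketch: compare a speech recognizer's word error rate (WER)
# across demographic groups. None of these names are Fair EVA APIs.
from collections import defaultdict
from jiwer import wer  # assumed: the open-source jiwer package for WER

def wer_by_group(samples, transcribe):
    """samples: iterable of (audio, reference_text, group); transcribe: audio -> text."""
    refs, hyps = defaultdict(list), defaultdict(list)
    for audio, reference, group in samples:
        refs[group].append(reference)
        hyps[group].append(transcribe(audio))
    # One WER per group; a voice assistant that works equally well for everyone
    # should show similar scores across groups.
    return {group: wer(refs[group], hyps[group]) for group in refs}

# Usage (hypothetical): scores = wer_by_group(test_samples, my_model.transcribe)
#                       gap = max(scores.values()) - min(scores.values())
</syntaxhighlight>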
<br />
'''[https://tai-online-communities.github.io/ Visualizing Internet Subcultures on Social Media.]''' This project will build an interactive visualization of the formation of and interactions between different communities on the internet, from die-hard fan bases to real-life social movements. Join us to help build this dynamic visualization.<br /><br />
<br />
'''Foundational Tech for Low-Resource Languages.''' This project will re-imagine the foundational technology infrastructure needed to kickstart the artificial intelligence / machine learning frameworks for some of the low-resourced languages of South Asia. Join us in building a more diverse, equitable, and community-run AI ecosystem for global languages starting in South Asia.<br /><br />
<br />
'''[https://truthasapublicgood.github.io Truth of Waste as a Public Good.]''' This project is bringing transparency to waste recycling through standardized content authentication. Focusing on waste management, an AI-powered prototype will interact with the decision-making ecosystem to empower better decisions around recycling or re-homing. Join us to streamline the cooperative, circular economy business model.<br /><br />
<br />
'''[https://mosafely.org/ MOSafely: An AI Community Making the Internet a Safer Place for Youth.]''' Modus Operandi Safely (MOSafely.org) is an open-source community that leverages evidence-based research, data, and artificial intelligence to help youth and young adults engage more safely online. Join us in building technology that detects risk behaviors of youth online and/or unsafe online interactions with others.<br /><br />
<br />
'''[https://foundation.mozilla.org/en/blog/exploring-trustworthy-ai-in-mozilla-data-powered-products/ Atlanta University Center Data Science Initiative].''' As part of Mozilla’s HBCU Collaborative, the Atlanta University Center Data Science Initiative is working to develop a prototype of a tool or technology that promotes Trustworthy AI. Interested in supporting this open innovation project? Join the working group to learn more.<br /><br />
<br />
'''AI Unveiled: Discovering Patterns, Shaping Patterns.''' This project is building a serious game environment for users to discover the patterns driving the behavior of automated systems based on machine learning. Our goal is for users - even without technical knowledge - to gain insight into the workings of opaque systems based on so-called artificial intelligence. Join us in giving users a better understanding of how their data feeds algorithms. <br /><br />
<br />
==== Civil Society Actors for Trustworthy AI Working Group ====<br />
<br /><br />
The Civil Society Actors for Trustworthy AI Working Group is for civil society actors engaged with AI and the promotion of trustworthy solutions in their communities and work. We welcome to this group global and local activists, artists, journalists, organizers, and researchers who are already working on Trustworthy AI, as well as those interested in doing so. <br /><br />
<br />
A '''feminist dictionary in AI''' will create a reference for designing gender-inclusive algorithms. The dictionary will help AI Builders understand feminist theory and gender equality, as well as describe the kinds of classification, analysis, and prediction-making necessary for developing a gender-inclusive algorithm. Join us to work towards a more equitable and inclusive era in AI. <br /><br />
<br />
'''[https://docs.google.com/document/d/1wi-OsM4l2HCn-F0L_PomqkpncT5y9DCQF6db2eMwsCY/edit Accountability Case Labs]''' will build community and common strategic goals across the full range of technical and social experts, researchers, builders, and advocates who care about AI accountability and auditing. Our cross-disciplinary project will also develop a body of collaborative case studies examining real world examples across groups — from policy-making and regulation to design, development, procurement, deployment and security. Join us for a collaborative approach to increasing AI accountability.<br /><br />
<br />
'''AI Governance in Africa''' will create an open and inclusive space for discussion and research into the development, application, and governance of AI in Africa. We will draft a white paper that tackles issues of societal concern like bias in AI. Join us to help create this resource for more open and inclusive AI in Africa.<br /><br />
<br />
'''Audit of Delivery Platforms''' will design and perform a technological audit of the algorithms used by delivery services in Ecuador in order to reinforce labour rights defense activities. With help from several community organizations, this audit will help further workers’ labour rights and efforts to unionize. Join us to help model this audit in Ecuador and consider how it might help labour rights where you live, as well.<br /><br />
<br />
'''Black Communities - Data Cooperatives (BCDC)''' will build a community of impact that helps organizers and leaders work with their communities to identify the links between their data and the AI technology around them. The project will also share practices for combating the negative effects of AI in Black communities and spaces like educational technology. Join us to work with like-minded community members who want Black communities to have more control over their data and its use in AI. <br /><br />
<br />
'''Harnessing the civic voice in AI impact assessment''' will develop guidance for meaningful participation of external stakeholders, such as civil society organizations and affected communities, engaged in Human Rights Impact Assessments (HRIAs) of AI systems. These assessments should be a requirement for any AI system deployed in communities. Join us to amplify civic voices in AI development and design, safeguard human rights, and promote rights-based AI through HRIAs that benefit affected communities. <br /><br />
<br />
'''The Trustworthy AI in Education Toolkit''' will help science educators introduce and contextualize AI in their classrooms and other learning environments. Educators and learners need a better understanding of what makes AI helpful and trustworthy - or not. Join us to gather, organize, and document AI education frameworks, curricula, and activities, and put them in a Trustworthy AI context to help educators and learners think critically about AI in their lives. <br /><br />
<br />
As part of Mozilla’s HBCU Collaborative, students at '''Spelman College''' will work with academics and industry experts on a white paper that centers Black women in the conversation on Trustworthy AI and approaches it from an intersectional lens. They wrote a [https://foundation.mozilla.org/en/insights/mozfest-spelman-college-blackpaper/ Blackpaper on Black women's experiences with algorithmic microaggressions].<br /><br />
<br />
<big><br />
<br />
=== 2020 - 2021 Cohort === <br />
</big><br />
The collaborative spirit and innovation felt at the Mozilla Festival should be experienced all year round. The MozFest team is testing out a working group structure to support the technical community interested in building trustworthy AI, and we want your help to collaboratively make trustworthy AI a reality.<br />
<br />
Click any link below to find out more about the Trustworthy AI projects that were built, the working group process, outputs, and how some teams used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> '''Member Feedback:''' "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> '''Member Feedback:''' "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> '''Member Feedback:''' "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> '''Member Feedback:''' "The group provided a forum for pitching/discussing the project to a wider audience."<br />
<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> '''Member Feedback:''' "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
<br />
* [[Working Groups/Privacy Preserving Browser|Privacy Preserving Browser]] <br /><br />
<br />
[[Category:Mozilla Festival]]
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=AI v3 Square.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
<br /><br />
<big><br />
=== 2021 - 2022 Cohort ===<br />
</big><br />
<br /><br />
As we seek to support the MozFest community in building more tools and technology that promote Trustworthy Artificial Intelligence (AI), we would like expert AI Builders to work alongside highly engaged Civil Society Actors in envisioning a more equitable automated future.<br />
<br />
==== Building Trustworthy AI Working Group ==== <br />
<br /><br />
The Building Trustworthy AI Working Group for AI builders is an engaged community of over 400 global members who aim to help our technical community build more Trustworthy AI. We invite developers, funders, and policy-makers expressly concerned with technical products and standards to this group. <br /><br />
<br />
'''[http://www.faireva.org Fair EVA: A Toolkit for Evaluating Voice Assistant Fairness.]''' This project will build and deploy an evaluation toolkit for developers to test the fairness of deep learning components that make up embedded voice assistants. We believe a fair voice assistant should work equally well for all users, irrespective of their demographic attributes. At the moment it is not possible to even test if this is the case, despite obvious discrepancies during regular use. Join us to create more equitable access to voice assistants.<br /><br />
<br />
'''[https://tai-online-communities.github.io/ Visualizing Internet Subcultures on Social Media.]''' This project will build an interactive visualization of the formation of and interactions between different communities on the internet, from die-hard fan bases to real-life social movements. Join us to help build this dynamic visualization.<br />
Foundational Tech for Low-Resource Languages. This project will re-imagine the foundational technology infrastructure needed to kickstart the artificial intelligence / machine learning frameworks for some of the low-resourced languages of South Asia. Join us in building a more diverse, equitable, and community-run AI ecosystem for global languages starting in South Asia.<br /><br />
<br />
'''[https://truthasapublicgood.github.io Truth of Waste as a Public Good.]''' This project is bringing transparency to waste recycling through standardized content authentication. Focusing on waste management, a prototype powered by AI will interact with the decision-making ecosystem that empowers better decisions around recycling or re-homing. Join us to streamline the cooperative, circular economy business model.<br /><br />
<br />
'''[https://mosafely.org/ MOSafely: An AI Community Making the Internet a Safer Place for Youth.]''' Modus Operandi Safely (MOSafely.org) is an open-source community that leverages evidence-based research, data, and artificial intelligence to help youth and young adults engage more safely online. Join us in building technology that detects risk behaviors of youth online and/or unsafe online interactions with others.<br /><br />
<br />
'''[https://foundation.mozilla.org/en/blog/exploring-trustworthy-ai-in-mozilla-data-powered-products/ Atlanta University Center Data Science Initiative].''' As part of Mozilla’s HBCU Collaborative, The Atlanta University Center Data Science Initiative is working to develop a prototype of a tool or technology that promotes Trustworthy AI. Interested in supporting this open innovation project? Join the working group to learn more.<br /><br />
<br />
'''AI Unveiled: Discovering Patterns, Shaping Patterns.''' This project is building a serious game environment for users to discover the patterns driving the behavior of automated systems based on machine learning. Our goal is for users - even without technical knowledge - to gain insight into the workings of opaque systems based on so-called artificial intelligence. Join us in giving users a better understanding of how their data feeds algorithms. <br /><br />
<br />
==== Civil Society Actors for Trustworthy AI Working Group ====<br />
<br /><br />
The Civil Society Actors for Trustworthy AI Working Group is for civil society actors engaged with AI and the promotion of trustworthy solutions in their communities and work. We welcome global and local activists, artists, journalists, organizers, and researchers already working on Trustworthy AI and those who are interested in doing so to this group. <br /><br />
<br />
A '''feminist dictionary in AI''' will create a reference for desinging gender-inclusive algorithms. The dictionary will help AI Builders understand feminist theory and gender equality, as well as describe the kinds of classification, analysis, and prediction-making necessary for developing a gender-inclusive algorithm. Join us to work towards a more equitable and inclusive era in AI. <br /><br />
<br />
'''Accountability Case Labs''' will build community and common strategic goals across the full range of technical and social experts, researchers, builders, and advocates who care about AI accountability and auditing. Our cross-disciplinary project will also develop a body of collaborative case studies examining real world examples across groups — from policy-making and regulation to design, development, procurement, deployment and security. Join us for a collaborative approach to increasing AI accountability.<br /><br />
<br />
'''AI Governance in Africa''' will create an open and inclusive space for discussion and research into the development, application, and governance of AI in Africa. We will draft a white paper that tackles issues of societal concern like bias in AI. Join us to help create this resource for more open and inclusive AI in Africa.<br /><br />
<br />
'''Audit of Delivery ''' Platforms, in order to reinforce labour rights defense activities, will design and perform a technological audit of the algorithms used by delivery services in Ecuador. With help from several community organizations, this audit will help further workers’ labour rights and efforts to unionize. Join us to help model this audit in Ecuador and consider how it might help labour rights where you live, as well.<br /><br />
<br />
'''Black Communities - Data Cooperatives (BCDC)''' will build a community of impact that helps organizers and leaders work with their communities to identify the links between their data and the AI technology around them. The project will also share practices for combating the negative effects of AI in Black communities and spaces like educational technology. Join us to work with like-minded community members who want Black communities to have more control over their data and its use in AI. <br /><br />
<br />
'''Harnessing the civic voice in AI''' impact assessment will develop guidance for meaningful participation of external stakeholders such as civil society organizations and affected communities engaged in Human Rights Impact Assessments (HRIA) of AI systems. These assessments should be a requirement for any AI system deployed in communities. Join us to amplify civic voices in AI development and design, safeguard human rights and to promote rights-based AI through HRIAs that benefit affected communities. <br /><br />
<br />
'''The Trustworthy AI in Education Toolkit''' will help science educators introduce and contextualize AI in their classrooms and other learning environments. Educators and learners need a better understanding of AI to understand what makes AI helpful and trustworthy - or not. Join us to gather, organize and document AI education frameworks, curriculum and activities and put them in a Trustworthy AI context to help educators and learners think critically about AI in their lives. <br /><br />
<br />
As part of Mozilla’s HBCU Collaborative, students at '''Spelman College''' will work with academics and industry experts on a white paper that centers Black women in the conversation on Trustworthy AI and approaches it from an intersectional lens. Interested in contributing to this research project? Join the working group to learn more.<br /><br />
<br />
<big><br />
=== 2020 - 2021 Cohort === <br />
</big><br />
The collaborative spirit and innovation felt at the Mozilla Festival should be experienced all year round. The MozFest team is testing out a working group structure to support the technical community interested in building trustworthy AI and we want your help to collaboratively make building trustworthy AI a reality.<br />
<br />
Click any link below to find out more about the Trustworthy AI projects that were built, the working group process, outputs, and how some teams used MozFest as platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> '''Member Feedback:''' "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> '''Member Feedback:''' "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> '''Member Feedback:''' "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> '''Member Feedback:''' "The group provided a forum for pitching/discussing the project to a wider audience."<br />
<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> '''Member Feedback:''' "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
<br />
* [[Working Groups/Privacy Preserving Browser|Privacy Preserving Browser]] <br /><br />
<br />
[[Category:Mozilla Festival]]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1237569Working Groups2021-08-25T12:31:17Z<p>Temipopo: /* Civil Society Actors for Trustworthy AI Working Group */</p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=AI v3 Square.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
<br /><br />
<big><br />
=== 2021 - 2022 Cohort ===<br />
</big><br />
<br /><br />
As we seek to support the MozFest community in building more tools and technology that promote Trustworthy Artificial Intelligence (AI), we would like expert AI Builders to work alongside highly engaged Civil Society Actors in envisioning a more equitable automated future.<br />
<br />
==== Building Trustworthy AI Working Group ==== <br />
<br /><br />
The Building Trustworthy AI Working Group for AI builders is an engaged community of over 250 global members who aim to help our technical community build more Trustworthy AI. We invite developers, funders, and policy-makers expressly concerned with technical products and standards to this group. <br /><br />
<br />
'''AI Unveiled: Discovering Patterns, Shaping Patterns.''' This project is building a serious game environment for users to discover the patterns driving the behavior of automated systems based on machine learning. Our goal is for users - even without technical knowledge - to gain insight into the workings of opaque systems based on so-called artificial intelligence. Join us in giving users a better understanding of how their data feeds algorithms. <br /><br />
<br />
'''Fair EVA: A Toolkit for Evaluating Voice Assistant Fairness.''' This project will build and deploy an evaluation toolkit for developers to test the fairness of deep learning components that make up embedded voice assistants. We believe a fair voice assistant should work equally well for all users, irrespective of their demographic attributes. At the moment it is not possible to even test if this is the case, despite obvious discrepancies during regular use. Join us to create more equitable access to voice assistants.<br /><br />
<br />
'''Visualizing Internet Subcultures on Social Media.''' This project will build an interactive visualization of the formation of and interactions between different communities on the internet, from die-hard fan bases to real-life social movements. Join us to help build this dynamic visualization.<br />
Foundational Tech for Low-Resource Languages. This project will re-imagine the foundational technology infrastructure needed to kickstart the artificial intelligence / machine learning frameworks for some of the low-resourced languages of South Asia. Join us in building a more diverse, equitable, and community-run AI ecosystem for global languages starting in South Asia.<br /><br />
<br />
'''Truth of Waste as a Public Good.''' This project is bringing transparency to waste recycling through standardized content authentication. Focusing on waste management, a prototype powered by AI will interact with the decision-making ecosystem that empowers better decisions around recycling or re-homing. Join us to streamline the cooperative, circular economy business model.<br /><br />
<br />
'''MOSafely: An AI Community Making the Internet a Safer Place for Youth.''' Modus Operandi Safely (MOSafely.org) is an open-source community that leverages evidence-based research, data, and artificial intelligence to help youth and young adults engage more safely online. Join us in building technology that detects risk behaviors of youth online and/or unsafe online interactions with others.<br /><br />
<br />
'''Atlanta University Center Data Science Initiative.''' As part of Mozilla’s HBCU Collaborative, The Atlanta University Center Data Science Initiative is working to develop a prototype of a tool or technology that promotes Trustworthy AI. Interested in supporting this open innovation project? Join the working group to learn more.<br /><br />
<br />
==== Civil Society Actors for Trustworthy AI Working Group ====<br />
<br /><br />
The Civil Society Actors for Trustworthy AI Working Group is for civil society actors engaged with AI and the promotion of trustworthy solutions in their communities and work. We welcome global and local activists, artists, journalists, organizers, and researchers already working on Trustworthy AI and those who are interested in doing so to this group. <br /><br />
<br />
A '''feminist dictionary in AI''' will create a reference for desinging gender-inclusive algorithms. The dictionary will help AI Builders understand feminist theory and gender equality, as well as describe the kinds of classification, analysis, and prediction-making necessary for developing a gender-inclusive algorithm. Join us to work towards a more equitable and inclusive era in AI. <br /><br />
<br />
'''Accountability Case Labs''' will build community and common strategic goals across the full range of technical and social experts, researchers, builders, and advocates who care about AI accountability and auditing. Our cross-disciplinary project will also develop a body of collaborative case studies examining real world examples across groups — from policy-making and regulation to design, development, procurement, deployment and security. Join us for a collaborative approach to increasing AI accountability.<br /><br />
<br />
'''AI Governance in Africa''' will create an open and inclusive space for discussion and research into the development, application, and governance of AI in Africa. We will draft a white paper that tackles issues of societal concern like bias in AI. Join us to help create this resource for more open and inclusive AI in Africa.<br /><br />
<br />
'''Audit of Delivery ''' Platforms, in order to reinforce labour rights defense activities, will design and perform a technological audit of the algorithms used by delivery services in Ecuador. With help from several community organizations, this audit will help further workers’ labour rights and efforts to unionize. Join us to help model this audit in Ecuador and consider how it might help labour rights where you live, as well.<br /><br />
<br />
'''Black Communities - Data Cooperatives (BCDC)''' will build a community of impact that helps organizers and leaders work with their communities to identify the links between their data and the AI technology around them. The project will also share practices for combating the negative effects of AI in Black communities and spaces like educational technology. Join us to work with like-minded community members who want Black communities to have more control over their data and its use in AI. <br /><br />
<br />
'''Harnessing the civic voice in AI''' impact assessment will develop guidance for meaningful participation of external stakeholders such as civil society organizations and affected communities engaged in Human Rights Impact Assessments (HRIA) of AI systems. These assessments should be a requirement for any AI system deployed in communities. Join us to amplify civic voices in AI development and design, safeguard human rights and to promote rights-based AI through HRIAs that benefit affected communities. <br /><br />
<br />
'''The Trustworthy AI in Education Toolkit''' will help science educators introduce and contextualize AI in their classrooms and other learning environments. Educators and learners need a better understanding of AI to understand what makes AI helpful and trustworthy - or not. Join us to gather, organize and document AI education frameworks, curriculum and activities and put them in a Trustworthy AI context to help educators and learners think critically about AI in their lives. <br /><br />
<br />
As part of Mozilla’s HBCU Collaborative, students at '''Spelman College''' will work with academics and industry experts on a white paper that centers Black women in the conversation on Trustworthy AI and approaches it from an intersectional lens. Interested in contributing to this research project? Join the working group to learn more.<br /><br />
<br />
<big><br />
=== 2020 - 2021 Cohort === <br />
</big><br />
The collaborative spirit and innovation felt at the Mozilla Festival should be experienced all year round. The MozFest team is testing out a working group structure to support the technical community interested in building trustworthy AI and we want your help to collaboratively make building trustworthy AI a reality.<br />
<br />
Click any link below to find out more about the Trustworthy AI projects that were built, the working group process, outputs, and how some teams used MozFest as platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> '''Member Feedback:''' "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> '''Member Feedback:''' "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> '''Member Feedback:''' "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> '''Member Feedback:''' "The group provided a forum for pitching/discussing the project to a wider audience."<br />
<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> '''Member Feedback:''' "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
<br />
* [[Working Groups/Privacy Preserving Browser|Privacy Preserving Browser]] <br /><br />
<br />
[[Category:Mozilla Festival]]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1237568Working Groups2021-08-25T12:29:59Z<p>Temipopo: /* Building Trustworthy AI Working Group */</p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=AI v3 Square.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
<br /><br />
<big><br />
=== 2021 - 2022 Cohort ===<br />
</big><br />
<br /><br />
As we seek to support the MozFest community in building more tools and technology that promote Trustworthy Artificial Intelligence (AI), we would like expert AI Builders to work alongside highly engaged Civil Society Actors in envisioning a more equitable automated future.<br />
<br />
==== Building Trustworthy AI Working Group ==== <br />
<br /><br />
The Building Trustworthy AI Working Group for AI builders is an engaged community of over 250 global members who aim to help our technical community build more Trustworthy AI. We invite developers, funders, and policy-makers expressly concerned with technical products and standards to this group. <br /><br />
<br />
'''AI Unveiled: Discovering Patterns, Shaping Patterns.''' This project is building a serious game environment for users to discover the patterns driving the behavior of automated systems based on machine learning. Our goal is for users - even without technical knowledge - to gain insight into the workings of opaque systems based on so-called artificial intelligence. Join us in giving users a better understanding of how their data feeds algorithms. <br /><br />
<br />
'''Fair EVA: A Toolkit for Evaluating Voice Assistant Fairness.''' This project will build and deploy an evaluation toolkit for developers to test the fairness of deep learning components that make up embedded voice assistants. We believe a fair voice assistant should work equally well for all users, irrespective of their demographic attributes. At the moment it is not possible to even test if this is the case, despite obvious discrepancies during regular use. Join us to create more equitable access to voice assistants.<br /><br />
<br />
'''Visualizing Internet Subcultures on Social Media.''' This project will build an interactive visualization of the formation of and interactions between different communities on the internet, from die-hard fan bases to real-life social movements. Join us to help build this dynamic visualization.<br />
Foundational Tech for Low-Resource Languages. This project will re-imagine the foundational technology infrastructure needed to kickstart the artificial intelligence / machine learning frameworks for some of the low-resourced languages of South Asia. Join us in building a more diverse, equitable, and community-run AI ecosystem for global languages starting in South Asia.<br /><br />
<br />
'''Truth of Waste as a Public Good.''' This project is bringing transparency to waste recycling through standardized content authentication. Focusing on waste management, a prototype powered by AI will interact with the decision-making ecosystem that empowers better decisions around recycling or re-homing. Join us to streamline the cooperative, circular economy business model.<br /><br />
<br />
'''MOSafely: An AI Community Making the Internet a Safer Place for Youth.''' Modus Operandi Safely (MOSafely.org) is an open-source community that leverages evidence-based research, data, and artificial intelligence to help youth and young adults engage more safely online. Join us in building technology that detects risk behaviors of youth online and/or unsafe online interactions with others.<br /><br />
<br />
'''Atlanta University Center Data Science Initiative.''' As part of Mozilla’s HBCU Collaborative, The Atlanta University Center Data Science Initiative is working to develop a prototype of a tool or technology that promotes Trustworthy AI. Interested in supporting this open innovation project? Join the working group to learn more.<br /><br />
<br />
==== Civil Society Actors for Trustworthy AI Working Group ====<br />
<br /><br />
The Civil Society Actors for Trustworthy AI Working Group is for civil society actors engaged with AI and the promotion of trustworthy solutions in their communities and work. We welcome global and local activists, artists, journalists, organizers, and researchers already working on Trustworthy AI and those who are interested in doing so to this group. <br /><br />
<br />
A '''feminist dictionary in AI''' will create a reference for desinging gender-inclusive algorithms. The dictionary will help AI Builders understand feminist theory and gender equality, as well as describe the kinds of classification, analysis, and prediction-making necessary for developing a gender-inclusive algorithm. Join us to work towards a more equitable and inclusive era in AI. <br /><br />
<br />
'''Accountability Case Labs''' will build community and common strategic goals across the full range of technical and social experts, researchers, builders, and advocates who care about AI accountability and auditing. Our cross-disciplinary project will also develop a body of collaborative case studies examining real world examples across groups — from policy-making and regulation to design, development, procurement, deployment and security. Join us for a collaborative approach to increasing AI accountability.<br /><br />
<br />
'''AI Governance in Africa''' will create an open and inclusive space for discussion and research into the development, application, and governance of AI in Africa. We will draft a white paper that tackles issues of societal concern like bias in AI. Join us to help create this resource for more open and inclusive AI in Africa.<br /><br />
<br />
'''Audit of Delivery ''' Platforms, in order to reinforce labour rights defense activities, will design and perform a technological audit of the algorithms used by delivery services in Ecuador. With help from several community organizations, this audit will help further workers’ labour rights and efforts to unionize. Join us to help model this audit in Ecuador and consider how it might help labour rights where you live, as well.<br /><br />
<br />
'''Black Communities - Data Cooperatives (BCDC)''' will build a community of impact that helps organizers and leaders work with their communities to identify the links between their data and the AI technology around them. The project will also share practices for combating the negative effects of AI in Black communities and spaces like educational technology. Join us to work with like-minded community members who want Black communities to have more control over their data and its use in AI. <br /><br />
<br />
'''Harnessing the civic voice in AI''' impact assessment will develop guidance for meaningful participation of external stakeholders such as civil society organizations and affected communities engaged in Human Rights Impact Assessments (HRIA) of AI systems. These assessments should be a requirement for any AI system deployed in communities. Join us to amplify civic voices in AI development and design, safeguard human rights and to promote rights-based AI through HRIAs that benefit affected communities. <br /><br />
<br />
'''The Trustworthy AI in Education Toolkit''' will help science educators introduce and contextualize AI in their classrooms and other learning environments. Educators and learners need a better understanding of AI to understand what makes AI helpful and trustworthy - or not. Join us to gather, organize and document AI education frameworks, curriculum and activities and put them in a Trustworthy AI context to help educators and learners think critically about AI in their lives. <br /><br />
<br />
As part of Mozilla’s HBCU Collaborative, students at '''Spelman College''' will work with academics and industry experts on a white paper that centers Black women in the conversation on Trustworthy AI and approaches it from an intersectional lens. Interested in contributing to this research project? Join the working group to learn more.<br /><br />
<br />
<big><br />
=== 2020 - 2021 Cohort === </big><br />
The collaborative spirit and innovation felt at the Mozilla Festival should be experienced all year round. The MozFest team is testing out a working group structure to support the technical community interested in building trustworthy AI and we want your help to collaboratively make building trustworthy AI a reality.<br />
<br />
Click any link below to find out more about the Trustworthy AI projects that were built, the working group process, outputs, and how some teams used MozFest as platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> '''Member Feedback:''' "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> '''Member Feedback:''' "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> '''Member Feedback:''' "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> '''Member Feedback:''' "The group provided a forum for pitching/discussing the project to a wider audience."<br />
<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> '''Member Feedback:''' "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
<br />
* [[Working Groups/Privacy Preserving Browser|Privacy Preserving Browser]] <br /><br />
<br />
[[Category:Mozilla Festival]]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1237567Working Groups2021-08-25T12:28:54Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=AI v3 Square.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
<br /><br />
<big><br />
=== 2021 - 2022 Cohort ===<br />
</big><br />
<br /><br />
As we seek to support the MozFest community in building more tools and technology that promote Trustworthy Artificial Intelligence (AI), we would like expert AI Builders to work alongside highly engaged Civil Society Actors in envisioning a more equitable automated future.<br />
<br />
==== Building Trustworthy AI Working Group ==== <br />
<br /><br />
The Building Trustworthy AI Working Group for AI builders is an engaged community of over 250 global members who aim to help our technical community build more Trustworthy AI. We invite developers, funders, and policy-makers expressly concerned with technical products and standards to this group. <br /><br />
<br />
'''AI Unveiled: Discovering Patterns, Shaping Patterns.''' This project is building a serious game environment for users to discover the patterns driving the behavior of automated systems based on machine learning. Our goal is for users - even without technical knowledge - to gain insight into the workings of opaque systems based on so-called artificial intelligence. Join us in giving users a better understanding of how their data feeds algorithms. <br /><br />
<br />
'''Fair EVA: A Toolkit for Evaluating Voice Assistant Fairness.''' This project will build and deploy an evaluation toolkit for developers to test the fairness of deep learning components that make up embedded voice assistants. We believe a fair voice assistant should work equally well for all users, irrespective of their demographic attributes. At the moment it is not possible to even test if this is the case, despite obvious discrepancies during regular use. Join us to create more equitable access to voice assistants.<br /><br />
<br />
'''Visualizing Internet Subcultures on Social Media.''' This project will build an interactive visualization of the formation of and interactions between different communities on the internet, from die-hard fan bases to real-life social movements. Join us to help build this dynamic visualization.<br />
Foundational Tech for Low-Resource Languages. This project will re-imagine the foundational technology infrastructure needed to kickstart the artificial intelligence / machine learning frameworks for some of the low-resourced languages of South Asia. Join us in building a more diverse, equitable, and community-run AI ecosystem for global languages starting in South Asia.<br /><br />
'''<br />
Truth of Waste as a Public Good.''' This project is bringing transparency to waste recycling through standardized content authentication. Focusing on waste management, a prototype powered by AI will interact with the decision-making ecosystem that empowers better decisions around recycling or re-homing. Join us to streamline the cooperative, circular economy business model.<br /><br />
<br />
'''MOSafely: An AI Community Making the Internet a Safer Place for Youth.''' Modus Operandi Safely (MOSafely.org) is an open-source community that leverages evidence-based research, data, and artificial intelligence to help youth and young adults engage more safely online. Join us in building technology that detects risk behaviors of youth online and/or unsafe online interactions with others.<br /><br />
<br />
'''Atlanta University Center Data Science Initiative.''' As part of Mozilla’s HBCU Collaborative, The Atlanta University Center Data Science Initiative is working to develop a prototype of a tool or technology that promotes Trustworthy AI. Interested in supporting this open innovation project? Join the working group to learn more.<br /><br />
<br />
==== Civil Society Actors for Trustworthy AI Working Group ====<br />
<br /><br />
The Civil Society Actors for Trustworthy AI Working Group is for civil society actors engaged with AI and the promotion of trustworthy solutions in their communities and work. We welcome global and local activists, artists, journalists, organizers, and researchers already working on Trustworthy AI and those who are interested in doing so to this group. <br /><br />
<br />
A '''feminist dictionary in AI''' will create a reference for desinging gender-inclusive algorithms. The dictionary will help AI Builders understand feminist theory and gender equality, as well as describe the kinds of classification, analysis, and prediction-making necessary for developing a gender-inclusive algorithm. Join us to work towards a more equitable and inclusive era in AI. <br /><br />
<br />
'''Accountability Case Labs''' will build community and common strategic goals across the full range of technical and social experts, researchers, builders, and advocates who care about AI accountability and auditing. Our cross-disciplinary project will also develop a body of collaborative case studies examining real world examples across groups — from policy-making and regulation to design, development, procurement, deployment and security. Join us for a collaborative approach to increasing AI accountability.<br /><br />
<br />
'''AI Governance in Africa''' will create an open and inclusive space for discussion and research into the development, application, and governance of AI in Africa. We will draft a white paper that tackles issues of societal concern like bias in AI. Join us to help create this resource for more open and inclusive AI in Africa.<br /><br />
<br />
'''Audit of Delivery ''' Platforms, in order to reinforce labour rights defense activities, will design and perform a technological audit of the algorithms used by delivery services in Ecuador. With help from several community organizations, this audit will help further workers’ labour rights and efforts to unionize. Join us to help model this audit in Ecuador and consider how it might help labour rights where you live, as well.<br /><br />
<br />
'''Black Communities - Data Cooperatives (BCDC)''' will build a community of impact that helps organizers and leaders work with their communities to identify the links between their data and the AI technology around them. The project will also share practices for combating the negative effects of AI in Black communities and spaces like educational technology. Join us to work with like-minded community members who want Black communities to have more control over their data and its use in AI. <br /><br />
<br />
'''Harnessing the civic voice in AI''' impact assessment will develop guidance for meaningful participation of external stakeholders such as civil society organizations and affected communities engaged in Human Rights Impact Assessments (HRIA) of AI systems. These assessments should be a requirement for any AI system deployed in communities. Join us to amplify civic voices in AI development and design, safeguard human rights and to promote rights-based AI through HRIAs that benefit affected communities. <br /><br />
<br />
'''The Trustworthy AI in Education Toolkit''' will help science educators introduce and contextualize AI in their classrooms and other learning environments. Educators and learners need a better understanding of AI to understand what makes AI helpful and trustworthy - or not. Join us to gather, organize and document AI education frameworks, curriculum and activities and put them in a Trustworthy AI context to help educators and learners think critically about AI in their lives. <br /><br />
<br />
As part of Mozilla’s HBCU Collaborative, students at '''Spelman College''' will work with academics and industry experts on a white paper that centers Black women in the conversation on Trustworthy AI and approaches it from an intersectional lens. Interested in contributing to this research project? Join the working group to learn more.<br /><br />
<br />
<big><br />
=== 2020 - 2021 Cohort === </big><br />
The collaborative spirit and innovation felt at the Mozilla Festival should be experienced all year round. The MozFest team is testing out a working group structure to support the technical community interested in building trustworthy AI and we want your help to collaboratively make building trustworthy AI a reality.<br />
<br />
Click any link below to find out more about the Trustworthy AI projects that were built, the working group process, outputs, and how some teams used MozFest as platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> '''Member Feedback:''' "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> '''Member Feedback:''' "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> '''Member Feedback:''' "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> '''Member Feedback:''' "The group provided a forum for pitching/discussing the project to a wider audience."<br />
<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> '''Member Feedback:''' "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
<br />
* [[Working Groups/Privacy Preserving Browser|Privacy Preserving Browser]] <br /><br />
<br />
[[Category:Mozilla Festival]]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1237566Working Groups2021-08-25T12:26:15Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=AI v3 Square.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
<br /><br />
<big><br />
=== 2021 - 2022 Cohort ===<br />
</big><br />
<br /><br />
As we seek to support the MozFest community in building more tools and technology that promote Trustworthy Artificial Intelligence (AI), we would like expert AI Builders to work alongside highly engaged Civil Society Actors in envisioning a more equitable automated future.<br />
<br />
==== Building Trustworthy AI Working Group ==== <br /><br />
The Building Trustworthy AI Working Group for AI builders is an engaged community of over 250 global members who aim to help our technical community build more Trustworthy AI. We invite developers, funders, and policy-makers expressly concerned with technical products and standards to this group. <br /><br />
<br />
'''AI Unveiled: Discovering Patterns, Shaping Patterns.''' This project is building a serious game environment for users to discover the patterns driving the behavior of automated systems based on machine learning. Our goal is for users - even without technical knowledge - to gain insight into the workings of opaque systems based on so-called artificial intelligence. Join us in giving users a better understanding of how their data feeds algorithms. <br /><br />
<br />
'''Fair EVA: A Toolkit for Evaluating Voice Assistant Fairness.''' This project will build and deploy an evaluation toolkit for developers to test the fairness of deep learning components that make up embedded voice assistants. We believe a fair voice assistant should work equally well for all users, irrespective of their demographic attributes. At the moment it is not possible to even test if this is the case, despite obvious discrepancies during regular use. Join us to create more equitable access to voice assistants.<br /><br />
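A minimal sketch of the kind of check such a toolkit could run: comparing a speech recognizer's word error rate across speaker groups. The group labels, reference transcripts, and recognizer outputs below are illustrative assumptions, not part of the actual Fair EVA toolkit.<br />
<pre>
# Sketch: compare word error rate (WER) across speaker groups.
# All data below is hypothetical; a real evaluation would use the
# recognizer's outputs on a demographically annotated test set.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# (speaker group, reference transcript, recognizer output) - hypothetical.
samples = [
    ("group_a", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("group_a", "set a timer for ten minutes", "set a timer for ten minutes"),
    ("group_b", "turn on the kitchen lights", "turn on the chicken lights"),
    ("group_b", "set a timer for ten minutes", "set a time for ten minute"),
]

by_group = {}
for group, ref, hyp in samples:
    by_group.setdefault(group, []).append(word_error_rate(ref, hyp))

for group, errors in sorted(by_group.items()):
    print(f"{group}: mean WER = {sum(errors) / len(errors):.2f}")
# A large gap between groups signals that the assistant does not work
# equally well for all users.
</pre>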
<br />
'''Visualizing Internet Subcultures on Social Media.''' This project will build an interactive visualization of the formation of and interactions between different communities on the internet, from die-hard fan bases to real-life social movements. Join us to help build this dynamic visualization.<br /><br />
<br />
'''Foundational Tech for Low-Resource Languages.''' This project will re-imagine the foundational technology infrastructure needed to kickstart the artificial intelligence / machine learning frameworks for some of the low-resourced languages of South Asia. Join us in building a more diverse, equitable, and community-run AI ecosystem for global languages starting in South Asia.<br /><br />
<br />
'''Truth of Waste as a Public Good.''' This project is bringing transparency to waste recycling through standardized content authentication. Focusing on waste management, a prototype powered by AI will interact with the decision-making ecosystem that empowers better decisions around recycling or re-homing. Join us to streamline the cooperative, circular economy business model.<br /><br />
<br />
'''MOSafely: An AI Community Making the Internet a Safer Place for Youth.''' Modus Operandi Safely (MOSafely.org) is an open-source community that leverages evidence-based research, data, and artificial intelligence to help youth and young adults engage more safely online. Join us in building technology that detects risk behaviors of youth online and/or unsafe online interactions with others.<br /><br />
<br />
'''Atlanta University Center Data Science Initiative.''' As part of Mozilla’s HBCU Collaborative, The Atlanta University Center Data Science Initiative is working to develop a prototype of a tool or technology that promotes Trustworthy AI. Interested in supporting this open innovation project? Join the working group to learn more.<br /><br />
<br />
==== Civil Society Actors for Trustworthy AI Working Group ====<br />
<br /><br />
The Civil Society Actors for Trustworthy AI Working Group is for civil society actors engaged with AI and the promotion of trustworthy solutions in their communities and work. We welcome global and local activists, artists, journalists, organizers, and researchers already working on Trustworthy AI and those who are interested in doing so to this group. <br /><br />
<br />
A '''feminist dictionary in AI''' will create a reference for designing gender-inclusive algorithms. The dictionary will help AI Builders understand feminist theory and gender equality, as well as describe the kinds of classification, analysis, and prediction-making necessary for developing a gender-inclusive algorithm. Join us to work towards a more equitable and inclusive era in AI. <br /><br />
<br />
'''Accountability Case Labs''' will build community and common strategic goals across the full range of technical and social experts, researchers, builders, and advocates who care about AI accountability and auditing. Our cross-disciplinary project will also develop a body of collaborative case studies examining real world examples across groups — from policy-making and regulation to design, development, procurement, deployment and security. Join us for a collaborative approach to increasing AI accountability.<br /><br />
<br />
'''AI Governance in Africa''' will create an open and inclusive space for discussion and research into the development, application, and governance of AI in Africa. We will draft a white paper that tackles issues of societal concern like bias in AI. Join us to help create this resource for more open and inclusive AI in Africa.<br /><br />
<br />
'''Audit of Delivery Platforms''' will design and perform a technological audit of the algorithms used by delivery services in Ecuador, in order to reinforce labour rights defense activities. With help from several community organizations, this audit will help further workers’ labour rights and efforts to unionize. Join us to help model this audit in Ecuador and consider how it might help labour rights where you live, as well.<br /><br />
<br />
'''Black Communities - Data Cooperatives (BCDC)''' will build a community of impact that helps organizers and leaders work with their communities to identify the links between their data and the AI technology around them. The project will also share practices for combating the negative effects of AI in Black communities and spaces like educational technology. Join us to work with like-minded community members who want Black communities to have more control over their data and its use in AI. <br /><br />
<br />
'''Harnessing the civic voice in AI impact assessment''' will develop guidance for meaningful participation of external stakeholders such as civil society organizations and affected communities engaged in Human Rights Impact Assessments (HRIA) of AI systems. These assessments should be a requirement for any AI system deployed in communities. Join us to amplify civic voices in AI development and design, safeguard human rights, and promote rights-based AI through HRIAs that benefit affected communities. <br /><br />
<br />
'''The Trustworthy AI in Education Toolkit''' will help science educators introduce and contextualize AI in their classrooms and other learning environments. Educators and learners need a better understanding of AI to understand what makes AI helpful and trustworthy - or not. Join us to gather, organize and document AI education frameworks, curriculum and activities and put them in a Trustworthy AI context to help educators and learners think critically about AI in their lives. <br /><br />
<br />
As part of Mozilla’s HBCU Collaborative, students at '''Spelman College''' will work with academics and industry experts on a white paper that centers Black women in the conversation on Trustworthy AI and approaches it from an intersectional lens. Interested in contributing to this research project? Join the working group to learn more.<br /><br />
<br />
<big><br />
=== 2020 - 2021 Cohort ===<br />
</big>Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how some teams used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> '''Member Feedback:''' "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> '''Member Feedback:''' "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> '''Member Feedback:''' "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> '''Member Feedback:''' "The group provided a forum for pitching/discussing the project to a wider audience."<br />
<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> '''Member Feedback:''' "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
<br />
* [[Working Groups/Privacy Preserving Browser|Privacy Preserving Browser]] <br /><br />
<br />
[[Category:Mozilla Festival]]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1237565Working Groups2021-08-25T12:23:34Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=AI v3 Square.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
<br /><br />
<big><br />
=== 2021 - 2022 Cohort ===<br />
</big><br />
<br /><br />
As we seek to support the MozFest community in building more tools and technology that promote Trustworthy Artificial Intelligence (AI), we would like expert AI Builders to work alongside highly engaged Civil Society Actors in envisioning a more equitable automated future.<br />
<br />
====Building Trustworthy AI Working Group==== <br /><br />
The Building Trustworthy AI Working Group for AI builders is an engaged community of over 250 global members who aim to help our technical community build more Trustworthy AI. We invite developers, funders, and policy-makers expressly concerned with technical products and standards to this group. <br /><br />
<br />
'''AI Unveiled: Discovering Patterns, Shaping Patterns.''' This project is building a serious game environment for users to discover the patterns driving the behavior of automated systems based on machine learning. Our goal is for users - even without technical knowledge - to gain insight into the workings of opaque systems based on so-called artificial intelligence. Join us in giving users a better understanding of how their data feeds algorithms. <br /><br />
<br />
'''Fair EVA: A Toolkit for Evaluating Voice Assistant Fairness.''' This project will build and deploy an evaluation toolkit for developers to test the fairness of deep learning components that make up embedded voice assistants. We believe a fair voice assistant should work equally well for all users, irrespective of their demographic attributes. At the moment it is not possible to even test if this is the case, despite obvious discrepancies during regular use. Join us to create more equitable access to voice assistants.<br /><br />
<br />
'''Visualizing Internet Subcultures on Social Media.''' This project will build an interactive visualization of the formation of and interactions between different communities on the internet, from die-hard fan bases to real-life social movements. Join us to help build this dynamic visualization.<br /><br />
<br />
'''Foundational Tech for Low-Resource Languages.''' This project will re-imagine the foundational technology infrastructure needed to kickstart the artificial intelligence / machine learning frameworks for some of the low-resourced languages of South Asia. Join us in building a more diverse, equitable, and community-run AI ecosystem for global languages starting in South Asia.<br /><br />
<br />
'''Truth of Waste as a Public Good.''' This project is bringing transparency to waste recycling through standardized content authentication. Focusing on waste management, a prototype powered by AI will interact with the decision-making ecosystem that empowers better decisions around recycling or re-homing. Join us to streamline the cooperative, circular economy business model.<br /><br />
<br />
'''MOSafely: An AI Community Making the Internet a Safer Place for Youth.''' Modus Operandi Safely (MOSafely.org) is an open-source community that leverages evidence-based research, data, and artificial intelligence to help youth and young adults engage more safely online. Join us in building technology that detects risk behaviors of youth online and/or unsafe online interactions with others.<br /><br />
<br />
'''Atlanta University Center Data Science Initiative.''' As part of Mozilla’s HBCU Collaborative, The Atlanta University Center Data Science Initiative is working to develop a prototype of a tool or technology that promotes Trustworthy AI. Interested in supporting this open innovation project? Join the working group to learn more.<br /><br />
<br />
==== Civil Society Actors for Trustworthy AI Working Group ====<br />
<br /><br />
The Civil Society Actors for Trustworthy AI Working Group is for civil society actors engaged with AI and the promotion of trustworthy solutions in their communities and work. We welcome global and local activists, artists, journalists, organizers, and researchers already working on Trustworthy AI and those who are interested in doing so to this group. <br /><br />
<br />
A '''feminist dictionary in AI''' will create a reference for designing gender-inclusive algorithms. The dictionary will help AI Builders understand feminist theory and gender equality, as well as describe the kinds of classification, analysis, and prediction-making necessary for developing a gender-inclusive algorithm. Join us to work towards a more equitable and inclusive era in AI. <br /><br />
<br />
'''Accountability Case Labs''' will build community and common strategic goals across the full range of technical and social experts, researchers, builders, and advocates who care about AI accountability and auditing. Our cross-disciplinary project will also develop a body of collaborative case studies examining real world examples across groups — from policy-making and regulation to design, development, procurement, deployment and security. Join us for a collaborative approach to increasing AI accountability.<br /><br />
<br />
'''AI Governance in Africa''' will create an open and inclusive space for discussion and research into the development, application, and governance of AI in Africa. We will draft a white paper that tackles issues of societal concern like bias in AI. Join us to help create this resource for more open and inclusive AI in Africa.<br /><br />
<br />
'''Audit of Delivery Platforms''' will design and perform a technological audit of the algorithms used by delivery services in Ecuador, in order to reinforce labour rights defense activities. With help from several community organizations, this audit will help further workers’ labour rights and efforts to unionize. Join us to help model this audit in Ecuador and consider how it might help labour rights where you live, as well.<br /><br />
<br />
'''Black Communities - Data Cooperatives (BCDC)''' will build a community of impact that helps organizers and leaders work with their communities to identify the links between their data and the AI technology around them. The project will also share practices for combating the negative effects of AI in Black communities and spaces like educational technology. Join us to work with like-minded community members who want Black communities to have more control over their data and its use in AI. <br /><br />
<br />
'''Harnessing the civic voice in AI impact assessment''' will develop guidance for meaningful participation of external stakeholders such as civil society organizations and affected communities engaged in Human Rights Impact Assessments (HRIA) of AI systems. These assessments should be a requirement for any AI system deployed in communities. Join us to amplify civic voices in AI development and design, safeguard human rights, and promote rights-based AI through HRIAs that benefit affected communities. <br /><br />
<br />
'''The Trustworthy AI in Education Toolkit''' will help science educators introduce and contextualize AI in their classrooms and other learning environments. Educators and learners need a better understanding of AI to understand what makes AI helpful and trustworthy - or not. Join us to gather, organize and document AI education frameworks, curriculum and activities and put them in a Trustworthy AI context to help educators and learners think critically about AI in their lives. <br /><br />
<br />
As part of Mozilla’s HBCU Collaborative, students at '''Spelman College''' will work with academics and industry experts on a white paper that centers Black women in the conversation on Trustworthy AI and approaches it from an intersectional lens. Interested in contributing to this research project? Join the working group to learn more.<br /><br />
<br />
<big><br />
=== 2020 - 2021 Cohort ===<br />
</big>Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how some teams used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> '''Member Feedback:''' "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> '''Member Feedback:''' "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> '''Member Feedback:''' "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> '''Member Feedback:''' "The group provided a forum for pitching/discussing the project to a wider audience."<br />
<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> '''Member Feedback:''' "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
<br />
* [[Working Groups/Privacy Preserving Browser|Privacy Preserving Browser]] <br /><br />
<br />
[[Category:Mozilla Festival]]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236877Working Groups2021-07-22T18:52:41Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=AI v3 Square.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how some teams used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> '''Member Feedback:''' "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> '''Member Feedback:''' "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> '''Member Feedback:''' "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> '''Member Feedback:''' "The group provided a forum for pitching/discussing the project to a wider audience."<br />
<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> '''Member Feedback:''' "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
<br />
* [[Working Groups/Privacy Preserving Browser|Privacy Preserving Browser]] <br /><br />
<br />
[[Category:Mozilla Festival]]</div>Temipopohttps://wiki.mozilla.org/index.php?title=File:AI_v3_Square.png&diff=1236876File:AI v3 Square.png2021-07-22T18:51:52Z<p>Temipopo: </p>
<hr />
<div>TAI icon</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236870Working Groups2021-07-22T12:56:11Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how some teams used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> '''Member Feedback:''' "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> '''Member Feedback:''' "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> '''Member Feedback:''' "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> '''Member Feedback:''' "The group provided a forum for pitching/discussing the project to a wider audience."<br />
<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> '''Member Feedback:''' "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
<br />
* [[Working Groups/Privacy Preserving Browser|Privacy Preserving Browser]] <br /><br />
<br />
[[Category:Mozilla Festival]]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236869Working Groups2021-07-22T12:46:12Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how each team used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> '''Member Feedback:''' "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> '''Member Feedback:''' "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> '''Member Feedback:''' "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> '''Member Feedback:''' "The group provided a forum for pitching/discussing the project to a wider audience."<br />
<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> '''Member Feedback:''' "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
<br />
* [[Working Groups/Privacy Preserving Browser|Privacy Preserving Browser]] <br /><br />
<br />
[[Category:Mozilla Festival]]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Privacy_Preserving_Browser&diff=1236845Working Groups/Privacy Preserving Browser2021-07-21T14:31:13Z<p>Temipopo: </p>
<hr />
<div><big>'''Privacy Preserving Browser'''</big><br /><br />
''Exploring alternatives to surveillance capitalism<br />
''<br /><br />
<br />
Privacy Preserving Browser was launched in the pilot cohort of Mozilla’s building trustworthy AI working group. The project came in with a set of privacy-preserving recommender-system (recsys) algorithms. However, the team needed to research and prove (or disprove) the real-world impact and effectiveness of these algorithms by putting them into the “wild” and observing how real users interact with them.<br />
<br />
'''Problem'''<br /><br />
"If you are not paying for a product, '''''you''''' are the product" - ''tech proverb''. We are not the owner nor the benefactor of our data. This creates a model that is exploitative and leads to negative outcomes as a community and society like over-consumption/environmental damage, extremism, entertainment without true fulfillment or enjoy that is robbing people of their most scarce asset: our time and futures. If we hand control of data back to the people via the browser, will that shift us towards a new model? We seek a future where data is rewarded to companies building better products for consumer and community well-being.<br />
<br />
'''Solution'''<br /><br />
<br />
Algorithms and AI are making decisions that affect all of us and are changing our behaviours. We would like to create a more positive and equitable future as an alternative to surveillance capitalism - starting with a differentially private browser.<br />
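As a rough illustration of the differential-privacy building block behind such a browser, the sketch below adds Laplace noise to per-topic browsing counts before they leave the device. The topics, counts, and epsilon value are generic textbook assumptions, not the project's actual recommender code.<br />
<pre>
# Sketch of the Laplace mechanism, the textbook building block of
# differential privacy. Counts, topics, and epsilon are illustrative only;
# this is not the project's recommender implementation.
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5      # uniform on [-0.5, 0.5)
    u = max(u, -0.499999)          # avoid log(0) at the boundary
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one visit
    changes it by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical local statistics: this user's topic visits for the week.
topic_visits = {"news": 42, "sports": 7, "cooking": 19}

# Instead of reporting raw counts, the browser could report noisy ones;
# a smaller epsilon means more noise and stronger privacy.
epsilon = 0.5
noisy = {topic: round(private_count(count, epsilon), 1)
         for topic, count in topic_visits.items()}
print(noisy)
</pre>
In a recommender setting the same idea would apply to model updates or aggregated interaction statistics rather than raw counts, but the privacy accounting follows the same pattern.<br />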
<br />
'''Contributors'''<br /><br />
Shashi Gharti, Maddie Shang<br />
<br />
'''Resources'''<br /><br />
[https://docs.google.com/document/d/10TKLFD4c3PoQ3UDZArraN-e4zU-l75ikrHx8PlZjSEg/edit#heading=h.jb6bs6rt4kv1 Preliminary Project Research]<br /><br />
[https://www.youtube.com/watch?v=F46lX5VIoas&t=8234s OpenMined Privacy Conference - Day 1 - Part 2 Livestream]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236844Working Groups2021-07-21T14:29:41Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how each team used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> "The group provided a forum for pitching/discussing the project to a wider audience."<br />
<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
<br />
* [[Working Groups/Privacy Preserving Browser|Privacy Preserving Browser]] <br /></div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Privacy_Preserving_Browser&diff=1236843Working Groups/Privacy Preserving Browser2021-07-21T14:25:35Z<p>Temipopo: </p>
<hr />
<div><big>'''Privacy Preserving Browser'''</big><br /><br />
''Exploring alternatives to surveillance capitalism<br />
''<br /><br />
<br />
Privacy Preserving Browser was launched in the pilot cohort of Mozilla’s building trustworthy AI working group. The project came in with a set of privacy-preserving recommender-system (recsys) algorithms. However, the team needed to research and prove (or disprove) the real-world impact and effectiveness of these algorithms by putting them into the “wild” and observing how real users interact with them.<br />
<br />
'''Problem'''<br /><br />
"If you are not paying for a product, '''''you''''' are the product" - ''tech proverb''. We are not the owner nor the benefactor of our data. This creates a model that is exploitative and leads to negative outcomes as a community and society like over-consumption/environmental damage, extremism, entertainment without true fulfillment or enjoy that is robbing people of their most scarce asset: our time and futures. If we hand control of data back to the people via the browser, will that shift us towards a new model? We seek a future where data is rewarded to companies building better products for consumer and community well-being.<br />
<br />
'''Solution'''<br /><br />
<br />
Algorithms and AI are making decisions that affect all of us and are changing our behaviours. We would like to create a more positive and equitable future as an alternative to surveillance capitalism - starting with a differentially private browser.<br />
<br />
'''Contributors'''<br /><br />
Shashi Gharti, Maddie Shang<br />
<br />
'''Resources'''<br /><br />
[https://docs.google.com/document/d/10TKLFD4c3PoQ3UDZArraN-e4zU-l75ikrHx8PlZjSEg/edit#heading=h.jb6bs6rt4kv1 Preliminary Project Research]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Privacy_Preserving_Browser&diff=1236842Working Groups/Privacy Preserving Browser2021-07-21T14:22:45Z<p>Temipopo: </p>
<hr />
<div><big>'''Privacy Preserving Browser'''</big><br /><br />
''Exploring alternatives to surveillance capitalism<br />
''<br /><br />
<br />
Privacy Preserving Browser was launched in the pilot cohort of Mozilla’s building trustworthy AI working group. The project came in with a set of Privacy Preserving recsys algorithms. However, they needed to research and validate/prove (or disprove) the real world impact and effectiveness of these algorithms by putting them into the “wild” and observing how real users interact with them.<br />
<br />
'''Problem'''<br /><br />
"If you are not paying for a product, '''''you''''' are the product"-tech proverbWe are not the owner nor the benefactor of data. This created a model that is exploitative and leads to negative outcomes as a community / society i.e. over-consumption/environmental damage, extremism, entertainment without true fulfillment/enjoy that is robbing people of their most scarce asset: our time and futures. If we hand control of data back to the ppl via the browser, will that shift us towards a new capital model? Where data is rewarded to companies building better products for consumer and community well-being?<br />
<br />
'''Solution'''<br /><br />
<br />
Algorithms and AI are making decisions that affect all of us and are changing our behaviours. We would like to create a more positive and equitable future as an alternative to Surveillance Capitalism - starting with a differentially private browser.<br />
<br />
'''Contributors'''<br /><br />
Shashi Gharti, Maddie Shang<br />
<br />
'''Resources'''<br /><br />
[https://docs.google.com/document/d/10TKLFD4c3PoQ3UDZArraN-e4zU-l75ikrHx8PlZjSEg/edit#heading=h.jb6bs6rt4kv1 Preliminary Project Research]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Privacy_Preserving_Browser&diff=1236841Working Groups/Privacy Preserving Browser2021-07-21T14:22:07Z<p>Temipopo: </p>
<hr />
<div><big>'''Privacy Preserving Browser'''</big><br /><br />
''Exploring alternatives to surveillance capitalism<br />
''<br /><br />
<br />
Privacy Preserving Browser was launched in the pilot cohort of Mozilla’s building trustworthy AI working group. The project came in with a set of Privacy Preserving recsys algorithms. However, they needed to research and validate/prove (or disprove) the real world impact and effectiveness of these algorithms by putting them into the “wild” and observing how real users interact with them.<br />
<br />
'''Problem'''<br /><br />
"If you are not paying for a product, '''YOU'RE''' the product"-tech proverbWe are not the owner nor the benefactor of data. This created a model that is exploitative and leads to negative outcomes as a community / society i.e. over-consumption/environmental damage, extremism, entertainment without true fulfillment/enjoy that is robbing people of their most scarce asset: our time and futures. If we hand control of data back to the ppl via the browser, will that shift us towards a new capital model? Where data is rewarded to companies building better products for consumer and community well-being?<br />
<br />
'''Solution'''<br /><br />
<br />
Algorithms and AI are making decisions that affect all of us and are changing our behaviours. We would like to create a more positive and equitable future as an alternative to Surveillance Capitalism - starting with a differentially private browser.<br />
<br />
'''Contributors'''<br /><br />
Shashi Gharti, Maddie Shang<br />
<br />
'''Resources'''<br /><br />
[https://docs.google.com/document/d/10TKLFD4c3PoQ3UDZArraN-e4zU-l75ikrHx8PlZjSEg/edit#heading=h.jb6bs6rt4kv1 Preliminary Project Research]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Privacy_Preserving_Browser&diff=1236840Working Groups/Privacy Preserving Browser2021-07-21T14:20:10Z<p>Temipopo: Created page with "<big>'''Privacy Preserving Browser'''</big><br /> ''Exploring alternatives to surveillance capitalism ''<br /> Privacy Preserving Browser was launched in the pilot cohort of..."</p>
<hr />
<div><big>'''Privacy Preserving Browser'''</big><br /><br />
''Exploring alternatives to surveillance capitalism<br />
''<br /><br />
<br />
Privacy Preserving Browser was launched in the pilot cohort of Mozilla’s building trustworthy AI working group. The project came in with a set of Privacy Preserving recsys algorithms. However, they needed to research and validate/prove (or disprove) the real world impact and effectiveness of these algorithms by putting them into the “wild” and observing how real users interact with them.<br />
<br />
'''Problem'''<br /><br />
"If you are not paying for a product, '''YOU'RE''' the product"-tech proverbWe are not the owner nor the benefactor of data.This created a model that is exploitative and leads to negative outcomes as a community / society i.e. over consumption/environmental damage, extremism, entertainment without true fulfillment/enjoy that is robbing people of their most scarce asset: our time and futures.If we hand control of data back to the ppl via the browser, will that shift us towards a new capital model? Where data is rewarded to companies building better products for consumer and community well-being?<br />
<br />
'''Solution'''<br /><br />
<br />
Algorithms and AI are making decisions that affect all of us and are changing our behaviours. We would like your help to create a more positive and equitable alternative to Surveillance Capitalism, starting with a differentially private browser.<br />
<br />
'''Contributors'''<br /><br />
Shashi Gharti, Maddie Shang<br />
<br />
'''Resources'''<br /><br />
[https://docs.google.com/document/d/10TKLFD4c3PoQ3UDZArraN-e4zU-l75ikrHx8PlZjSEg/edit#heading=h.jb6bs6rt4kv1 Preliminary Project Research]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236839Working Groups2021-07-21T14:07:42Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how each team used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> "The group provided a forum for pitching/discussing the project to a wider audience."<br />
* [[Working Groups/Narrative Future of AI|Narrative Future of AI]] <br /> "The working group's projects themselves are directly useful [to my organization's work]. Connecting to other people in the trustworthy AI field is also very valuable!"<br />
* Privacy Preserving Browser</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Narrative_Future_of_AI&diff=1236838Working Groups/Narrative Future of AI2021-07-21T14:03:11Z<p>Temipopo: </p>
<hr />
<div><big><br />
'''Narrative Future of AI'''</big><br /><br />
Narrative Future of AI was launched by [https://algowritten.org/ Algowritten] in the pilot cohort of Mozilla’s building trustworthy AI working group. This project aims to address problematic cultural biases of machine learning through the creation of a series of media works that challenge and explore bias in new algorithmic technologies, such as GPT-3. The project is led by researchers at The School of Digital Arts (opening in September, 2021), which is the founding member of an emerging consortium of North of England universities focusing on AI in digital storytelling, comprising The Centre for Immersive Technology (University of Leeds), XR Stories (University of York) and the Institute of Art and Technology (LJMU).<br />
<br />
'''Problem'''<br /><br />
The future of digital storytelling will involve the increasing use of algorithmic tools, both to develop new forms of narrative and to find efficiencies in creative production. However, unsupervised algorithms trained on massive amounts of web-based text come with issues of bias most harmfully pertaining to gender, race, and class. These issues are compounded by problems of transparency and explicability related to large complex algorithms. <br />
<br />
'''Solution'''<br /><br />
As creative applications for AI emerge and have authorial identities assigned to them, it must be clear where their authorial voice originates and who it deems worthy of inclusion in storytelling. This includes information on training data related to the authors, the stakeholders of the AI authors, as well as genre conventions and storytelling techniques that may continue to be damaging to the representation of marginalised groups. <br />
This project seeks to review typical biases that occur from writing with the GPT-3 API. The outcome will be a series of science fiction stories and feedback from working group members on their observations of bias and problematic AI behaviours. This analysis will form the basis of our first set of recommendations for creative writing with advanced machine learning tools.<br />
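For context on the generation step the review starts from, here is a minimal sketch using the 2021-era openai Python client; the prompt, engine choice, and output file name are illustrative assumptions, and the bias analysis itself is carried out by human readers rather than by this code.<br />
<pre>
# Sketch: generate a story opening with the GPT-3 Completion API
# (openai Python client as of 2021) and save it for human bias review.
# Prompt, engine, and file name are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # supplied by the reviewer

prompt = (
    "Write the opening paragraph of a science fiction story about a city "
    "where an algorithm assigns everyone their job."
)

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 engine available in 2021
    prompt=prompt,
    max_tokens=200,
    temperature=0.9,
)

story = response["choices"][0]["text"]

# Save the raw output so working-group readers can later annotate it for
# gendered, racialized, or class-based assumptions.
with open("algowritten_sample.txt", "w", encoding="utf-8") as handle:
    handle.write(story)
</pre>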
<br />
'''Contributors'''<br /><br />
Algowritten, which explores bias in algorithmic writing, is maintained and edited by Dr David Jackson and Marsha Courneya who are academics at the School of Digital Arts, Manchester Metropolitan University, UK.<br />
<br />
'''Resources'''<br /><br />
[http://www.algowritten.org Project site]<br />
[https://algowritten.org/algowritten-i-the-mozfest-short-story-collection/ Short story collection]</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Narrative_Future_of_AI&diff=1236835Working Groups/Narrative Future of AI2021-07-20T19:22:04Z<p>Temipopo: </p>
<hr />
<div><big><br />
'''Narrative Future of AI'''</big><br /><br />
'''Problem'''<br /><br />
'''Solution'''<br /><br />
'''Contributors'''<br /><br />
'''Resources'''<br /></div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Narrative_Future_of_AI&diff=1236834Working Groups/Narrative Future of AI2021-07-20T19:21:12Z<p>Temipopo: Created page with "'''Narrative Future of AI '''<big> Big text</big><br /> '''Problem'''<br /> '''Solution'''<br /> '''Contributors'''<br /> '''Resources'''<br />"</p>
<hr />
<div>'''Narrative Future of AI<br />
'''<big><br />
Big text</big><br /><br />
'''Problem'''<br /><br />
'''Solution'''<br /><br />
'''Contributors'''<br /><br />
'''Resources'''<br /></div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236832Working Groups2021-07-20T18:09:06Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how each team used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
* [[Working Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers]] <br /> "The group provided a forum for pitching/discussing the project to a wider audience."<br />
* Narrative Future of AI<br />
* Privacy Preserving Browser</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236831Working Groups2021-07-20T18:06:20Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how each team used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
* [[/Working_Groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers (PRESC)]] <br /> "The group provided a forum for pitching/discussing the project to a wider audience."<br />
* Narrative Future of AI<br />
* Privacy Preserving Browser</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236830Working Groups2021-07-20T18:05:19Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how each team used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
* [[/working_groups/PRESC|Performance Robustness Evaluation for Statistical Classifiers (PRESC)]] <br /> "The group provided a forum for pitching/discussing the project to a wider audience."<br />
* Narrative Future of AI<br />
* Privacy Preserving Browser</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236829Working Groups2021-07-20T18:03:31Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; By convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how each team used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
* [[/Working_Groups/PRESC | Performance Robustness Evaluation for Statistical Classifiers (PRESC)]] <br /> "The group provided a forum for pitching/discussing the project to a wider audience."<br />
* Narrative Future of AI<br />
* Privacy Preserving Browser</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236828Working Groups2021-07-20T17:57:38Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; by convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how each team used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."<br />
* [[/Working_Groups/PRESC | Performance Robustness Evaluation for Statistical Classifiers (PRESC)]] <br /><br />
*</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/PRESC&diff=1236827Working Groups/PRESC2021-07-20T17:55:26Z<p>Temipopo: </p>
<hr />
<div><big>'''Performance Robustness Evaluation for Statistical Classifiers (PRESC) <br />
</big>'''<br /><br />
<br />
PRESC was launched in the pilot cohort of Mozilla’s building trustworthy AI working group. It is a toolkit for evaluating machine learning classification models. Its goal is to provide insights into model performance that extend beyond standard scalar accuracy-based measures into areas that tend to be underexplored in practice, including:<br />
<br />
* Generalizability of the model to unseen data for which the training set may not be representative<br />
* Sensitivity to statistical error and methodological choices<br />
* Performance evaluation localized to meaningful subsets of the feature space<br />
* In-depth analysis of misclassifications and their distribution in the feature space<br />
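To make localized evaluation concrete, here is a minimal sketch of this kind of subset-level accuracy and misclassification breakdown. It does not use PRESC itself; scikit-learn, the breast cancer dataset, and binning by a single feature are illustrative choices only.<br />
<pre>
# Not PRESC itself: a scikit-learn sketch of the kind of analysis PRESC automates,
# i.e. reporting performance within subsets of the feature space rather than a
# single scalar accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print("overall accuracy:", (pred == y_test).mean())

# Localized evaluation: bin the test set by one feature (mean radius, column 0)
# and report accuracy and misclassification counts per bin.
feature = X_test[:, 0]
edges = np.quantile(feature, [0.0, 0.25, 0.5, 0.75, 1.0])
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (feature >= lo) & (feature <= hi)
    acc = (pred[mask] == y_test[mask]).mean()
    misses = int((pred[mask] != y_test[mask]).sum())
    print(f"mean radius in [{lo:.2f}, {hi:.2f}]: accuracy={acc:.3f}, misclassified={misses}")
</pre>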
<br />
'''Problem'''<br /><br />
Watch the video [https://youtu.be/Z92yAAm7cl8 here] to learn more about the problem PRESC solves.<br />
<br />
'''Solution'''<br /><br />
We believe that these evaluations are essential for developing confidence in the selection and tuning of machine learning models intended to address user needs, and are important prerequisites towards building trustworthy AI.<br />
<br />
As a tool, PRESC is intended for use by ML engineers to assist in developing and updating models; it also helps data scientists, developers, academics, and activists evaluate the performance of machine learning classification models, specifically in areas that tend to be under-explored, such as generalizability and bias. Our current focus on misclassifications, robustness, and stability will help bring bias and fairness analyses into the performance reports so that they can be taken into account when crafting or choosing between models.<br />
<br />
'''''Examples'''''<br /><br />
An example script demonstrating how to run a report is available [https://github.com/mozilla/PRESC/blob/master/examples/report/sample_report.py here].<br />
<br />
There are a number of notebooks and explorations in the examples/ dir, but they are not guaranteed to run or be up-to-date as the package has undergone major changes recently and we have not yet finished updating these.<br />
<br />
Some well-known datasets are provided in CSV format in the datasets/ dir for exploration purposes.<br /><br />
<br />
'''Contributors'''<br /><br />
All contributors can be found [https://github.com/mozilla/PRESC/graphs/contributors here]<br /><br />
<br />
'''Resources'''<br /><br />
https://github.com/mozilla/PRESC <br /><br />
https://mozilla.github.io/PRESC/index.html</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/PRESC&diff=1236826Working Groups/PRESC2021-07-20T17:54:28Z<p>Temipopo: </p>
<hr />
<div><big>'''Performance Robustness Evaluation for Statistical Classifiers (PRESC) <br />
</big>'''<br /><br />
<br />
PRESC was launched in the pilot cohort of Mozilla’s building trustworthy AI working group. It is a toolkit for evaluating machine learning classification models. Its goal is to provide insights into model performance that extend beyond standard scalar accuracy-based measures into areas that tend to be underexplored in practice, including:<br />
<br />
* Generalizability of the model to unseen data for which the training set may not be representative<br />
* Sensitivity to statistical error and methodological choices<br />
* Performance evaluation localized to meaningful subsets of the feature space<br />
* In-depth analysis of misclassifications and their distribution in the feature space<br />
<br />
'''Problem'''<br /><br />
Watch the video [https://youtu.be/Z92yAAm7cl8 here] to learn more about the problem PRESC solves.<br />
<br />
'''Solution'''<br /><br />
We believe that these evaluations are essential for developing confidence in the selection and tuning of machine learning models intended to address user needs, and are important prerequisites towards building trustworthy AI.<br />
<br />
As a tool, PRESC is intended for use by ML engineers to assist in developing and updating models; it also helps data scientists, developers, academics, and activists evaluate the performance of machine learning classification models, specifically in areas that tend to be under-explored, such as generalizability and bias. Our current focus on misclassifications, robustness, and stability will help bring bias and fairness analyses into the performance reports so that they can be taken into account when crafting or choosing between models.<br />
<br />
'''''Examples'''''<br /><br />
An example script demonstrating how to run a report is available [https://github.com/mozilla/PRESC/blob/master/examples/report/sample_report.py here].<br />
<br />
There are a number of notebooks and explorations in the examples/ dir, but they are not guaranteed to run or be up-to-date as the package has undergone major changes recently and we have not yet finished updating these.<br />
<br />
Some well-known datasets are provided in CSV format in the datasets/ dir for exploration purposes.<br /><br />
'''Contributors'''<br /><br />
All contributors can be found [https://github.com/mozilla/PRESC/graphs/contributors here]<br /><br />
'''Resources'''<br /><br />
https://github.com/mozilla/PRESC<br />
https://mozilla.github.io/PRESC/index.html</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/PRESC&diff=1236825Working Groups/PRESC2021-07-20T17:53:28Z<p>Temipopo: </p>
<hr />
<div><big>'''Performance Robustness Evaluation for Statistical Classifiers (PRESC) <br />
</big>'''<br /><br />
<br />
PRESC was launched in the pilot cohort of Mozilla’s building trustworthy AI working group. It is a toolkit for evaluating machine learning classification models. Its goal is to provide insights into model performance that extend beyond standard scalar accuracy-based measures into areas that tend to be underexplored in practice, including:<br />
<br />
* Generalizability of the model to unseen data for which the training set may not be representative<br />
* Sensitivity to statistical error and methodological choices<br />
* Performance evaluation localized to meaningful subsets of the feature space<br />
* In-depth analysis of misclassifications and their distribution in the feature space<br />
<br />
'''Problem'''<br /><br />
Watch the video [https://youtu.be/Z92yAAm7cl8 here] to learn more about the problem PRESC solves.<br />
<br />
'''Solution'''<br /><br />
We believe that these evaluations are essential for developing confidence in the selection and tuning of machine learning models intended to address user needs, and are important prerequisites towards building trustworthy AI.<br />
<br />
As a tool, PRESC is intended for use by ML engineers to assist in developing and updating models; it also helps data scientists, developers, academics, and activists evaluate the performance of machine learning classification models, specifically in areas that tend to be under-explored, such as generalizability and bias. Our current focus on misclassifications, robustness, and stability will help bring bias and fairness analyses into the performance reports so that they can be taken into account when crafting or choosing between models.<br />
<br />
'''''Examples'''''<br /><br />
An example script demonstrating how to run a report is available [https://github.com/mozilla/PRESC/blob/master/examples/report/sample_report.py here].<br />
<br />
There are a number of notebooks and explorations in the examples/ dir, but they are not guaranteed to run or be up-to-date as the package has undergone major changes recently and we have not yet finished updating these.<br />
<br />
Some well-known datasets are provided in CSV format in the datasets/ dir for exploration purposes.<br />
'''Contributors'''<br /><br />
All contributors can be found [https://github.com/mozilla/PRESC/graphs/contributors here]<br />
'''Resources'''<br /><br />
https://github.com/mozilla/PRESC<br />
https://mozilla.github.io/PRESC/index.html</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/PRESC&diff=1236824Working Groups/PRESC2021-07-20T17:51:07Z<p>Temipopo: </p>
<hr />
<div><big>'''Performance Robustness Evaluation for Statistical Classifiers (PRESC) <br />
</big>'''<br /><br />
<br />
PRESC was launched in the pilot cohort of Mozilla’s building trustworthy AI working group. It is a toolkit for evaluating machine learning classification models. Its goal is to provide insights into model performance that extend beyond standard scalar accuracy-based measures into areas that tend to be underexplored in practice, including:<br />
<br />
* Generalizability of the model to unseen data for which the training set may not be representative<br />
* Sensitivity to statistical error and methodological choices<br />
* Performance evaluation localized to meaningful subsets of the feature space<br />
* In-depth analysis of misclassifications and their distribution in the feature space<br />
<br />
'''Problem'''<br />
Watch the video [https://youtu.be/Z92yAAm7cl8 here] to learn more about the problem PRESC solves.<br />
<br />
'''Solution'''<br />
We believe that these evaluations are essential for developing confidence in the selection and tuning of machine learning models intended to address user needs, and are important prerequisites towards building trustworthy AI.<br />
<br />
As a tool, PRESC is intended for use by ML engineers to assist in developing and updating models; it also helps data scientists, developers, academics, and activists evaluate the performance of machine learning classification models, specifically in areas that tend to be under-explored, such as generalizability and bias. Our current focus on misclassifications, robustness, and stability will help bring bias and fairness analyses into the performance reports so that they can be taken into account when crafting or choosing between models.<br />
<br />
'''''Examples'''''<br /><br />
An example script demonstrating how to run a report is available [https://github.com/mozilla/PRESC/blob/master/examples/report/sample_report.py here].<br />
<br />
There are a number of notebooks and explorations in the examples/ dir, but they are not guaranteed to run or be up-to-date as the package has undergone major changes recently and we have not yet finished updating these.<br />
<br />
Some well-known datasets are provided in CSV format in the datasets/ dir for exploration purposes.<br />
'''Contributors'''<br /><br />
All contributors can be found [https://github.com/mozilla/PRESC/graphs/contributors here]<br />
'''Resources'''<br /><br />
https://github.com/mozilla/PRESC<br />
https://mozilla.github.io/PRESC/index.html</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/PRESC&diff=1236789Working Groups/PRESC2021-07-19T20:19:12Z<p>Temipopo: Created page with "'''Resources'''<br /> https://github.com/mozilla/PRESC"</p>
<hr />
<div>'''Resources'''<br /><br />
https://github.com/mozilla/PRESC</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Truth_as_a_Public_Good&diff=1236788Working Groups/Truth as a Public Good2021-07-19T20:17:31Z<p>Temipopo: </p>
<hr />
<div><big>'''Truth as a Public Good (2020)<br />
</big>'''<br /><br />
Truth as a Public Good was launched in the pilot cohort of '''Mozilla’s building trustworthy AI working group'''. It was initially conceptualized as a vague idea based on [https://medium.com/@tara_94961/lessons-learned-from-tech4good-between-marketing-and-substance-b57610b7e3b “Lessons Learned from #Tech4Good: Between Marketing and Substance”], intended to call attention to the for-profit, VC-funded authentication technologies and standards that are purported to be trustworthy solutions to deep fakes, disinformation, and manipulated online content. <br />
<br />
Most of the contributors hailed from the Health Tech space, so the project sought to answer the question: ‘how do we keep health data and ultimately Health in the public good?’ <br />
<br />
'''Problem<br />
'''<br /><br />
Content authentication, the evaluation of the integrity of shared multimedia content, is crucial in an era of increasing erosion of the public’s trust in media and information sources. The demand for and supply of content authentication were incubated by civil society actors. Now, with buzzwords such as deepfakes and fake news, the private sector is catching up to the potential monetary gain of content authentication.<br />
<br />
The internet and its content were created by all of us, and therefore should belong to all of us. By framing these digital platforms as public goods, we ask: ‘can we build a new, healthier relationship between the public and private digital companies in order to deliver real value to society?’<br />
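As a rough, hypothetical illustration of what even the simplest layer of content authentication involves (the group’s discussions focused on standards and stakeholders, not on any particular implementation), checking a file’s integrity can be as basic as comparing its cryptographic hash against a value published by the original source:<br />
<pre>
# Hypothetical sketch only: the working group examined authentication standards
# and stakeholders, not a specific tool. This shows the simplest form of content
# integrity checking -- comparing a file's cryptographic hash against a value
# published by the original source -- which richer provenance schemes build on.
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_hash(path: str, published_hash: str) -> bool:
    """True if the local copy is byte-identical to what the source published."""
    return sha256_of(path) == published_hash

# Example usage (the file name and hash below are placeholders):
# print(matches_published_hash("vaccine_factsheet.pdf", "3a7bd3e2...dd4f1b"))
</pre>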
<br />
'''Solution'''<br /><br />
<br />
The group met weekly to discuss this abstract concept and settled on the following description to summarize its purpose: The Truth as a Public Good (TPG) Working Group will explore the “dilemma” of standardized content authentication and the stakeholders involved in this decision-making ecosystem. <br />
<br />
During each of the weekly meetings, the first half hour or so would be a “check-in” where we would each share what was on our minds, how we’re feeling, what we’re looking forward to. We benefited from diverse representations of nationalities and given the time period, a big topic of conversation was Covid-19 and the vaccine, especially each region/country’s unique lived experience with something as unifying as a global pandemic. So when it came to applying our “authentication examination” framework to some kind of real-world use-case, the Covid-19 vaccine story emerged as a compelling anchor. <br />
<br />
During MozFest we designed the following workshop: Since October 2020, the Truth as a Public Good Working Group has been exploring the “dilemma” of standardized content and data authentication and the stakeholders involved in this decision-making ecosystem. Content authentication, evaluating the integrity of shared multimedia content, is crucial in the era of increasing erosion of the public’s trust in media and information sources. Truth as a Public Good will be examined in the context of the Covid-19 vaccine story and how it relates to the public’s right to authenticating public health data. Truth as a Public Good will anchor this concept in the public’s shared experience of observing development of the Covid-19 Vaccine. This working group will host a World Cafe style workshop to examine authentication layers.<br />
<br />
As a result of the working group, we found a Community of like-minded individuals who were able to conceptualize how to apply an abstract framework of transparency and responsible innovation to pretty much any issue set. <br />
<br />
'''Contributors'''<br /><br />
<br />
Ahnjili Zhuparris, Munib Mesinovic, James Littlejohn, Sarada Mahesh, Esther Mwema, Charles Ngounou, Tara Vassefi <br />
<br />
'''Resources'''<br /><br />
https://www.mozillapulse.org/entry/1895</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Nanny_State&diff=1236787Working Groups/Nanny State2021-07-19T20:15:48Z<p>Temipopo: </p>
<hr />
<div><big>'''Nanny State'''</big><br />
Nanny Surveillance State was launched in the pilot cohort of Mozilla’s building trustworthy AI working group. This project explores the impact of surveillance and artificial intelligence on the labor industry, particularly on domestic workers, e.g., nannies and housekeepers. The use of artificial intelligence (AI) in the labor sector is pervasive; there are examples of employers tracking labor productivity, monitoring health status, and replacing core job activities, among others. <br />
<br />
'''Problem''' <br /><br />
AI is capturing the employee’s digital footprint while simultaneously attempting to predict the employee’s next move. The underlying mechanisms of how AI is used by employers to collect and parse employee data are often not transparent and largely unregulated.<br />
<br />
'''Solution'''<br /><br />
The team created a Diverse Intelligence (DI) chatbot to demonstrate what goes on behind the scenes when creating an AI platform, and to provide context for the project. The DI chatbot is a storytelling tool that draws participants in to succinctly capture the experience of working as a nanny or housekeeper. <br />
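A minimal, hypothetical sketch of the scripted storytelling format described above (this is not the DI chatbot’s actual code; the prompts and structure are placeholders) might look like the following:<br />
<pre>
# Hypothetical sketch only -- not the working group's DI chatbot. It illustrates
# the scripted, storytelling-style exchange described above: the bot asks a fixed
# sequence of questions and records the worker's answers.
PROMPTS = [
    "What kind of work do you do (e.g. nanny, housekeeper)?",
    "Has an employer ever monitored you with an app, camera, or tracker?",
    "How did that monitoring change the way you work?",
]

def run_interview():
    """Ask each prompt in turn and return the collected story."""
    story = {}
    for prompt in PROMPTS:
        story[prompt] = input(prompt + "\n> ")
    return story

if __name__ == "__main__":
    answers = run_interview()
    print("\nThank you for sharing your experience:")
    for question, answer in answers.items():
        print(f"- {question} {answer}")
</pre>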
<br />
'''Contributors'''<br /><br />
Di Long, Anisha Fernando, Roya Pakzad, Marlena Wisniak<br />
<br />
'''Resources'''<br /><br />
https://www.mozillapulse.org/entry/1893<br />
https://github.com/fourthletter/nanny-surveillance</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236786Working Groups2021-07-19T20:09:21Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; by convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any link below to find out more about our Trustworthy AI projects, the working group process, outputs, and how each team used MozFest as a platform to expand their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236785Working Groups2021-07-19T20:07:36Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; by convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any of the links below to find out more about the Trustworthy AI projects, our way of working, outputs, and how each team used MozFest as a platform to showcase their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."<br />
* [[Working_Groups/Nanny_State|Nanny State]] <br /> "Mozfest introduced me to new people and exposure to a larger community. We aimed to engage with the broader MozFest community to design a more equitable labor platform and UX guide based on trustworthy AI principles for existing platforms. I met individuals from diverse sectors like engineering, legal, policy, and design backgrounds from all over the world. My project’s working group gave me new insights into my project."</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Nanny_State&diff=1236784Working Groups/Nanny State2021-07-19T20:07:00Z<p>Temipopo: Created page with "This project explores the impact of surveillance and artificial intelligence on the labor industry, particularly on domestic workers, e.g., nannies and housekeepers. The use o..."</p>
<hr />
<div>This project explores the impact of surveillance and artificial intelligence on the labor industry, particularly on domestic workers, e.g., nannies and housekeepers. The use of artificial intelligence (AI) in the labor sector is pervasive; there are examples of employers tracking labor productivity, monitoring health status, and replacing core job activities, among others. AI is capturing the employee’s digital footprint while simultaneously attempting to predict the employee’s next move. The underlying mechanisms of how AI is used by employers to collect and parse employee data are often not transparent and largely unregulated.<br />
<br />
Anisha Fernando, Roya Pakzad, Marlena Wisniak</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236783Working Groups2021-07-19T20:00:33Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; by convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
Click any of the links below to find out more about the Trustworthy AI projects, our way of working, outputs, and how each team used MozFest as a platform to showcase their work.<br /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /> "This project would not have come into fruition were it not for MozFest providing a safe space and the framework to ideate with like-minded individuals committed to Technology’s original promise."<br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /> "I found a team that worked consistently over several months to scope the Zen of ML concept, grapple with who its beneficiaries are, brainstorm learner journeys and experiment with approaches to crafting design principles. Mozfest provided a great milestone for us to work towards. It helped us consolidate our vision and get concrete about outputs."</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Zen_of_ML&diff=1236782Working Groups/Zen of ML2021-07-19T19:55:26Z<p>Temipopo: </p>
<hr />
<div><big>'''Zen of ML'''</big><br /><br />
<br />
<br />
The Zen of Machine Learning was launched in the pilot cohort of '''Mozilla’s building trustworthy AI working group'''. <br />
<br />
The Zen of ML is a set of design principles that helps ML educators and self-learners prioritise responsible machine learning practices. The principles consider the end-to-end machine learning development cycle, from data collection to model evaluation and continuous deployment. Inspired by the Zen of Python, the Zen of ML can be viewed as a culture code that promotes the responsible development of ML products and projects. It is not binding or enforceable, but is intended to shape industry norms and offer a practical guide to building trustworthy AI. <br />
<br />
This project is collecting, refining, publishing, and disseminating the Zen of ML design principles so that machine learning (ML) practitioners can develop and deploy ML code responsibly.<br />
<br />
'''Problem'''<br /><br />
Machine learning (ML) tools are freely and widely available, and can be accessed with simple API calls and standard development pipelines. This has made it possible for anybody who wants to use ML to learn the skills and access the tools to do so. When using ML as a code component, the data-dependent and probabilistic nature of its outputs is hidden and often overlooked. This can have undesirable and even harmful consequences. Thus, while democratising ML tools has increased the inclusiveness of ML, it has created a new challenge as the responsible use of ML tools cannot be guaranteed or controlled. This presents a particular risk for people that are adversely affected by the outcomes of decisions backed by ML predictions.<br />
<br />
'''Solution'''<br /><br />
A list of statements that can be disseminated to ML educators and self-learners to embed the responsible development of ML artifacts and products into the design cycle from the get-go.<br />
We draw inspiration from the Zen of Python, which can be accessed from any python shell with import this. Similarly, we would like to see the Zen of ML included as an import statement in scikit-learn, the entry point to machine learning for many people.<br />
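As a rough illustration of that idea (the module name zen_of_ml.py and the principle texts below are placeholders, not the working group’s published principles), a Python module that prints at import time behaves the same way the standard-library this module does behind import this:<br />
<pre>
# zen_of_ml.py -- hypothetical sketch; the real Zen of Python works the same way:
# running `import this` in any Python shell prints Tim Peters' aphorisms.

PRINCIPLES = """\
The Zen of ML (illustrative placeholder text)
  Know where your data came from before you model it.
  Prefer an evaluation you can explain to one you cannot.
  A single accuracy number hides more than it reveals.
"""

# Printing at import time mirrors the behaviour of `import this`.
print(PRINCIPLES)
</pre>
Saved alongside a project, import zen_of_ml would then display the principles in the same spirit as import this.<br />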
<br />
''The Zen of ML should:''<br /><br />
1. Focus on decision making and identify decisions that arise in building and maintaining machine learning pipelines (from data collection to evaluation)<br /><br />
2. Be a useful starting point for beginner pedagogy (both self-learning and teaching)<br /><br />
3. Be a useful critical reflection tool to revisit for practitioners<br /><br />
4. Language must connect to the technical domain, but remain accessible and comprehensible - i.e. make sense to human beings, minimal jargon<br /><br />
5. Be neither a checklist, nor a specification:<br /><br />
* Comprehensive but non-exhaustive<br />
* General, not custom built for specific ML project types.<br />
* What we love<br />
* What we prefer<br />
* If … then …<br />
* Shame on you! (i.e. what we don't like)<br />
6. Should not be specific to a particular framework (e.g. PyTorch, Scikit-learn)<br /><br />
7. Be as short as possible while being thorough; possible to process at a glance (or three). 18 - 20 short sentences are nice <br /><br />
8. Resonate with responsible ML best practice <br /><br />
<br />
The project held a workshop and hosted a hackathon at MozFest 2021.<br />
<br />
'''Contributors'''<br /><br />
Andy Forest, Bernease Herman, Borhane Blili-Hamelin, Dessalegn Yehuala, Gaurav Jain, Jenine Carron, Jessica Zhou, Kyle Smith, Kyle Meenehan, Vanja Skoric<br />
<br />
'''Resources'''<br /><br />
[https://www.zenofml.org/ Website] <br />
[https://miro.com/app/board/o9J_lSqB3rU=/ MozFest Hackathon Miro Board] password: zen202!ml</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Zen_of_ML&diff=1236781Working Groups/Zen of ML2021-07-19T19:54:35Z<p>Temipopo: </p>
<hr />
<div><big>'''Zen of ML'''</big><br /><br />
<br />
<br />
The Zen of Machine Learning was launched in the pilot cohort of '''Mozilla’s building trustworthy AI working group'''. <br />
<br />
The Zen of ML is a set of design principles that helps ML educators and self-learners prioritise responsible machine learning practices. The principles consider the end-to-end machine learning development cycle, from data collection to model evaluation and continuous deployment. Inspired by the Zen of Python, the Zen of ML can be viewed as a culture code that promotes the responsible development of ML products and projects. It is not binding or enforceable, but is intended to shape industry norms and offer a practical guide to building trustworthy AI. <br />
<br />
This project is collecting, refining, publishing, and disseminating the Zen of ML design principles so that machine learning (ML) practitioners can develop and deploy ML code responsibly.<br />
<br />
'''Problem'''<br />
Machine learning (ML) tools are freely and widely available, and can be accessed with simple API calls and standard development pipelines. This has made it possible for anybody who wants to use ML to learn the skills and access the tools to do so. When using ML as a code component, the data-dependent and probabilistic nature of its outputs is hidden and often overlooked. This can have undesirable and even harmful consequences. Thus, while democratising ML tools has increased the inclusiveness of ML, it has created a new challenge as the responsible use of ML tools cannot be guaranteed or controlled. This presents a particular risk for people that are adversely affected by the outcomes of decisions backed by ML predictions.<br />
<br />
'''Solution'''<br />
A list of statements that can be disseminated to ML educators and self-learners to embed the responsible development of ML artifacts and products into the design cycle from the get-go.<br />
We draw inspiration from the Zen of Python, which can be accessed from any python shell with import this. Similarly, we would like to see the Zen of ML included as an import statement in scikit-learn, the entry point to machine learning for many people.<br />
<br />
''The Zen of ML should:''<br /><br />
1. Focus on decision making and identify decisions that arise in building and maintaining machine learning pipelines (from data collection to evaluation)<br /><br />
2. Be a useful starting point for beginner pedagogy (both self-learning and teaching)<br /><br />
3. Be a useful critical reflection tool to revisit for practitioners<br /><br />
4. Language must connect to the technical domain, but remain accessible and comprehensible - i.e. make sense to human beings, minimal jargon<br /><br />
5. Be neither a checklist, nor a specification:<br /><br />
* Comprehensive but non-exhaustive<br />
* General, not custom built for specific ML project types.<br />
* What we love<br />
* What we prefer<br />
* If … then …<br />
* Shame on you! (i.e. what we don't like)<br />
6. Should not be specific to a particular framework (e.g. PyTorch, Scikit-learn)<br /><br />
7. Be as short as possible while being thorough; possible to process at a glance (or three). 18 - 20 short sentences are nice <br /><br />
8. Resonate with responsible ML best practice <br /><br />
<br />
The project held a workshop and hosted a hackathon at MozFest 2021.<br />
<br />
'''Contributors'''<br />
Andy Forest, Bernease Herman, Borhane Blili-Hamelin, Dessalegn Yehuala, Gaurav Jain, Jenine Carron, Jessica Zhou, Kyle Smith, Kyle Meenehan, Vanja Skoric<br />
<br />
'''Resources'''<br />
[https://www.zenofml.org/ Website] <br />
[https://miro.com/app/board/o9J_lSqB3rU=/ MozFest Hackathon Miro Board] password: zen202!ml</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Truth_as_a_Public_Good&diff=1236780Working Groups/Truth as a Public Good2021-07-19T19:54:12Z<p>Temipopo: </p>
<hr />
<div><big>'''Truth as a Public Good (2020)<br />
</big>'''<br /><br />
Truth as a Public Good was launched in the pilot cohort of '''Mozilla’s building trustworthy AI working group'''. It was initially conceptualized as a vague idea based on [https://medium.com/@tara_94961/lessons-learned-from-tech4good-between-marketing-and-substance-b57610b7e3b “Lessons Learned from #Tech4Good: Between Marketing and Substance”], intended to call attention to the for-profit, VC-funded authentication technologies and standards that are purported to be trustworthy solutions to deep fakes, disinformation, and manipulated online content. <br />
<br />
Most of the contributors hailed from the Health Tech space, so the project sought to answer the question: ‘how do we keep health data and ultimately Health in the public good?’ <br />
<br />
'''Problem<br />
'''<br /><br />
Content authentication, the evaluation of the integrity of shared multimedia content, is crucial in an era of increasing erosion of the public’s trust in media and information sources. The demand for and supply of content authentication were incubated by civil society actors. Now, with buzzwords such as deepfakes and fake news, the private sector is catching up to the potential monetary gain of content authentication.<br />
<br />
The internet and its content were created by all of us, and therefore should belong to all of us. By framing these digital platforms as public goods, we ask: ‘can we build a new, healthier relationship between the public and private digital companies in order to deliver real value to society?’<br />
<br />
'''Solution'''<br /><br />
<br />
The group met weekly to discuss this abstract concept and settled on the following description to summarize its purpose: The Truth as a Public Good (TPG) Working Group will explore the “dilemma” of standardized content authentication and the stakeholders involved in this decision-making ecosystem. <br />
<br />
During each of the weekly meetings, the first half hour or so would be a “check-in” where we would each share what was on our minds, how we’re feeling, what we’re looking forward to. We benefited from diverse representations of nationalities and given the time period, a big topic of conversation was Covid-19 and the vaccine, especially each region/country’s unique lived experience with something as unifying as a global pandemic. So when it came to applying our “authentication examination” framework to some kind of real-world use-case, the Covid-19 vaccine story emerged as a compelling anchor. <br />
<br />
During MozFest we designed the following workshop: Since October 2020, the Truth as a Public Good Working Group has been exploring the “dilemma” of standardized content and data authentication and the stakeholders involved in this decision-making ecosystem. Content authentication, evaluating the integrity of shared multimedia content, is crucial in the era of increasing erosion of the public’s trust in media and information sources. Truth as a Public Good will be examined in the context of the Covid-19 vaccine story and how it relates to the public’s right to authenticating public health data. Truth as a Public Good will anchor this concept in the public’s shared experience of observing development of the Covid-19 Vaccine. This working group will host a World Cafe style workshop to examine authentication layers.<br />
<br />
As a result of the working group, we found a Community of like-minded individuals who were able to conceptualize how to apply an abstract framework of transparency and responsible innovation to pretty much any issue set. <br />
<br />
'''Contributors'''<br /><br />
<br />
Ahnjili Zhuparris, Munib Mesinovic, James Littlejohn, Sarada Mahesh, Esther Mwema, Charles Ngounou, Tara Vassefi <br />
<br />
'''Resources'''<br /></div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Zen_of_ML&diff=1236779Working Groups/Zen of ML2021-07-19T19:53:26Z<p>Temipopo: </p>
<hr />
<div><big>Zen of ML </big><br /><br />
<br />
<br />
The Zen of Machine Learning was launched in the pilot cohort of '''Mozilla’s building trustworthy AI working group'''. <br />
<br />
The Zen of ML is a set of design principles that helps ML educators and self-learners prioritise responsible machine learning practices. The principles consider the end-to-end machine learning development cycle, from data collection to model evaluation and continuous deployment. Inspired by the Zen of Python, the Zen of ML can be viewed as a culture code that promotes the responsible development of ML products and projects. It is not binding or enforceable, but is intended to shape industry norms and offer a practical guide to building trustworthy AI. <br />
<br />
This project is collecting, refining, publishing, and disseminating the Zen of ML design principles so that machine learning (ML) practitioners can develop and deploy ML code responsibly.<br />
<br />
'''Problem'''<br />
Machine learning (ML) tools are freely and widely available, and can be accessed with simple API calls and standard development pipelines. This has made it possible for anybody who wants to use ML to learn the skills and access the tools to do so. When using ML as a code component, the data-dependent and probabilistic nature of its outputs is hidden and often overlooked. This can have undesirable and even harmful consequences. Thus, while democratising ML tools has increased the inclusiveness of ML, it has created a new challenge as the responsible use of ML tools cannot be guaranteed or controlled. This presents a particular risk for people that are adversely affected by the outcomes of decisions backed by ML predictions.<br />
<br />
'''Solution'''<br />
A list of statements that can be disseminated to ML educators and self-learners to embed the responsible development of ML artifacts and products into the design cycle from the get-go.<br />
We draw inspiration from the Zen of Python, which can be accessed from any python shell with import this. Similarly, we would like to see the Zen of ML included as an import statement in scikit-learn, the entry point to machine learning for many people.<br />
<br />
''The Zen of ML should:''<br /><br />
1. Focus on decision making and identify decisions that arise in building and maintaining machine learning pipelines (from data collection to evaluation)<br /><br />
2. Be a useful starting point for beginner pedagogy (both self-learning and teaching)<br /><br />
3. Be a useful critical reflection tool to revisit for practitioners<br /><br />
4. Language must connect to the technical domain, but remain accessible and comprehensible - i.e. make sense to human beings, minimal jargon<br /><br />
5. Be neither a checklist, nor a specification:<br /><br />
* Comprehensive but non-exhaustive<br />
* General, not custom built for specific ML project types.<br />
* What we love<br />
* What we prefer<br />
* If … then …<br />
* Shame on you! (i.e. what we don't like)<br />
6. Should not be specific to a particular framework (e.g. PyTorch, Scikit-learn)<br /><br />
7. Be as short as possible while being thorough; possible to process at a glance (or three). 18 - 20 short sentences are nice <br /><br />
8. Resonate with responsible ML best practice <br /><br />
<br />
The project held a workshop and hosted a hackathon at MozFest 2021.<br />
<br />
'''Contributors'''<br />
Andy Forest, Bernease Herman, Borhane Blili-Hamelin, Dessalegn Yehuala, Gaurav Jain, Jenine Carron, Jessica Zhou, Kyle Smith, Kyle Meenehan, Vanja Skoric<br />
<br />
'''Resources'''<br />
[https://www.zenofml.org/ Website] <br />
[https://miro.com/app/board/o9J_lSqB3rU=/ MozFest Hackathon Miro Board] password: zen202!ml</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236778Working Groups2021-07-19T19:52:13Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; by convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /><br />
<br />
* [[Working_Groups/Truth_as_a_Public_Good|Truth as a Public Good]]<br /><br />
* [[Working_Groups/Zen_of_ML|Zen of ML]]<br /></div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Truth_as_a_Public_Good&diff=1236777Working Groups/Truth as a Public Good2021-07-19T19:49:05Z<p>Temipopo: </p>
<hr />
<div><big>'''Truth as a Public Good (2020)<br />
</big>'''<br /><br />
Truth as a Public Good was initially conceptualized as a vague idea based on [https://medium.com/@tara_94961/lessons-learned-from-tech4good-between-marketing-and-substance-b57610b7e3b “Lessons Learned from #Tech4Good: Between Marketing and Substance”], intended to call attention to the for-profit, VC-funded authentication technologies and standards that are purported to be trustworthy solutions to deep fakes, disinformation, and manipulated online content. <br />
<br />
Most of the contributors hailed from the Health Tech space, so the project sought to answer the question: ‘how do we keep health data and ultimately Health in the public good?’ <br />
<br />
'''Problem<br />
'''<br /><br />
Content authentication, the evaluation of the integrity of shared multimedia content, is crucial in an era of increasing erosion of the public’s trust in media and information sources. The demand for and supply of content authentication were incubated by civil society actors. Now, with buzzwords such as deepfakes and fake news, the private sector is catching up to the potential monetary gain of content authentication.<br />
<br />
The internet and its content were created by all of us, and therefore should belong to all of us. By framing these digital platforms as public goods, we ask: ‘can we build a new, healthier relationship between the public and private digital companies in order to deliver real value to society?’<br />
<br />
'''Solution'''<br /><br />
<br />
The group met weekly to discuss this abstract concept and settled on the following description to summarize its purpose: The Truth as a Public Good (TPG) Working Group will explore the “dilemma” of standardized content authentication and the stakeholders involved in this decision-making ecosystem. <br />
<br />
During each of the weekly meetings, the first half hour or so would be a “check-in” where we would each share what was on our minds, how we’re feeling, what we’re looking forward to. We benefited from diverse representations of nationalities and given the time period, a big topic of conversation was Covid-19 and the vaccine, especially each region/country’s unique lived experience with something as unifying as a global pandemic. So when it came to applying our “authentication examination” framework to some kind of real-world use-case, the Covid-19 vaccine story emerged as a compelling anchor. <br />
<br />
During MozFest we designed the following workshop: Since October 2020, the Truth as a Public Good Working Group has been exploring the “dilemma” of standardized content and data authentication and the stakeholders involved in this decision-making ecosystem. Content authentication, evaluating the integrity of shared multimedia content, is crucial in the era of increasing erosion of the public’s trust in media and information sources. Truth as a Public Good will be examined in the context of the Covid-19 vaccine story and how it relates to the public’s right to authenticating public health data. Truth as a Public Good will anchor this concept in the public’s shared experience of observing development of the Covid-19 Vaccine. This working group will host a World Cafe style workshop to examine authentication layers.<br />
<br />
As a result of the working group, we found a Community of like-minded individuals who were able to conceptualize how to apply an abstract framework of transparency and responsible innovation to pretty much any issue set. <br />
<br />
'''Contributors'''<br /><br />
<br />
Ahnjili Zhuparris, Munib Mesinovic, James Littlejohn, Sarada Mahesh, Esther Mwema, Charles Ngounou, Tara Vassefi <br />
<br />
'''Resources'''<br /></div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Truth_as_a_Public_Good&diff=1236776Working Groups/Truth as a Public Good2021-07-19T19:48:13Z<p>Temipopo: </p>
<hr />
<div><big>'''Truth as a Public Good (2020)<br />
</big>'''<br /><br />
Truth as a Public Good was initially conceptualized as a vague idea based on “Lessons Learned from #Tech4Good: Between Marketing and Substance”, intended to call attention to the for-profit, VC-funded authentication technologies and standards that are purported to be trustworthy solutions to deep fakes, disinformation, and manipulated online content. <br />
<br />
Most of the contributors hailed from the Health Tech space, so the project sought to answer the question: ‘how do we keep health data and ultimately Health in the public good?’ <br />
<br />
'''Problem<br />
'''<br /><br />
Content authentication, the evaluation of the integrity of shared multimedia content, is crucial in an era of increasing erosion of the public’s trust in media and information sources. The demand for and supply of content authentication were incubated by civil society actors. Now, with buzzwords such as deepfakes and fake news, the private sector is catching up to the potential monetary gain of content authentication.<br />
<br />
The internet and its content were created by all of us, and therefore should belong to all of us. By framing these digital platforms as public goods, we ask: ‘can we build a new, healthier relationship between the public and private digital companies in order to deliver real value to society?’<br />
<br />
'''Solution'''<br /><br />
<br />
The group met weekly to discuss this abstract concept and settled on the following description to summarize its purpose: The Truth as a Public Good (TPG) Working Group will explore the “dilemma” of standardized content authentication and the stakeholders involved in this decision-making ecosystem. <br />
<br />
During each of the weekly meetings, the first half hour or so would be a “check-in” where we would each share what was on our minds, how we’re feeling, what we’re looking forward to. We benefited from diverse representations of nationalities and given the time period, a big topic of conversation was Covid-19 and the vaccine, especially each region/country’s unique lived experience with something as unifying as a global pandemic. So when it came to applying our “authentication examination” framework to some kind of real-world use-case, the Covid-19 vaccine story emerged as a compelling anchor. <br />
<br />
During MozFest we designed the following workshop: Since October 2020, the Truth as a Public Good Working Group has been exploring the “dilemma” of standardized content and data authentication and the stakeholders involved in this decision-making ecosystem. Content authentication, evaluating the integrity of shared multimedia content, is crucial in the era of increasing erosion of the public’s trust in media and information sources. Truth as a Public Good will be examined in the context of the Covid-19 vaccine story and how it relates to the public’s right to authenticating public health data. Truth as a Public Good will anchor this concept in the public’s shared experience of observing development of the Covid-19 Vaccine. This working group will host a World Cafe style workshop to examine authentication layers.<br />
<br />
As a result of the working group, we found a Community of like-minded individuals who were able to conceptualize how to apply an abstract framework of transparency and responsible innovation to pretty much any issue set. <br />
<br />
'''Contributors'''<br /><br />
<br />
Ahnjili Zhuparris, Munib Mesinovic, James Littlejohn, Sarada Mahesh, Esther Mwema, Charles Ngounou, Tara Vassefi <br />
<br />
'''Resources'''<br /></div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Truth_as_a_Public_Good&diff=1236775Working Groups/Truth as a Public Good2021-07-19T19:47:11Z<p>Temipopo: Created page with "<big>'''Truth as a Public Good (2020) </big>'''Truth as a Public Good was initially conceptualized as a vague idea based off of “Lessons Learned from #Tech4Good: Between Mar..."</p>
<hr />
<div><big>'''Truth as a Public Good (2020)<br />
</big>'''Truth as a Public Good was initially conceptualized as a vague idea based on “Lessons Learned from #Tech4Good: Between Marketing and Substance”, intended to call attention to the for-profit, VC-funded authentication technologies and standards that are purported to be trustworthy solutions to deep fakes, disinformation, and manipulated online content. <br />
<br />
Most of the contributors hailed from the Health Tech space, so the project sought to answer the question: ‘how do we keep health data and ultimately Health in the public good?’ <br />
<br />
'''Problem<br />
'''Content authentication, the evaluation of the integrity of shared multimedia content, is crucial in an era of increasing erosion of the public’s trust in media and information sources. The demand for and supply of content authentication were incubated by civil society actors. Now, with buzzwords such as deepfakes and fake news, the private sector is catching up to the potential monetary gain of content authentication.<br />
<br />
The internet and its content were created by all of us, and therefore should belong to all of us. By framing these digital platforms as public goods, we ask: ‘can we build a new, healthier relationship between the public and private digital companies in order to deliver real value to society?’<br />
<br />
'''Solution'''<br />
The group met weekly to discuss this abstract concept and settled on the following description to summarize its purpose: The Truth as a Public Good (TPG) Working Group will explore the “dilemma” of standardized content authentication and the stakeholders involved in this decision-making ecosystem. <br />
<br />
During each of the weekly meetings, the first half hour or so would be a “check-in” where we would each share what was on our minds, how we’re feeling, what we’re looking forward to. We benefited from diverse representations of nationalities and given the time period, a big topic of conversation was Covid-19 and the vaccine, especially each region/country’s unique lived experience with something as unifying as a global pandemic. So when it came to applying our “authentication examination” framework to some kind of real-world use-case, the Covid-19 vaccine story emerged as a compelling anchor. <br />
<br />
During MozFest we designed the following workshop: Since October 2020, the Truth as a Public Good Working Group has been exploring the “dilemma” of standardized content and data authentication and the stakeholders involved in this decision-making ecosystem. Content authentication, evaluating the integrity of shared multimedia content, is crucial in the era of increasing erosion of the public’s trust in media and information sources. Truth as a Public Good will be examined in the context of the Covid-19 vaccine story and how it relates to the public’s right to authenticating public health data. Truth as a Public Good will anchor this concept in the public’s shared experience of observing development of the Covid-19 Vaccine. This working group will host a World Cafe style workshop to examine authentication layers.<br />
<br />
As a result of the working group, we found a Community of like-minded individuals who were able to conceptualize how to apply an abstract framework of transparency and responsible innovation to pretty much any issue set. <br />
<br />
'''Contributors'''<br />
Ahnjili Zhuparris, Munib Mesinovic, James Littlejohn, Sarada Mahesh, Esther Mwema, Charles Ngounou, Tara Vassefi <br />
<br />
'''Resources'''</div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups&diff=1236772Working Groups2021-07-19T17:05:17Z<p>Temipopo: </p>
<hr />
<div>__NOTOC__ <br />
<section begin=summary />{{ProgramSummary<br />
|icon=Mozfest-header.png<br />
|pagelocation=Working Groups<br />
|pagetitle=Trustworthy AI Working Groups<br />
|updated=Jul 14, 2021<br />
|owner=Temi Popo<br />
|years=2020-present<br />
|description=Working groups are collections of MozFest community members coming together to focus on a specific topic around trustworthy AI. These working groups are an extension of the Mozilla Festival; by convening regularly online, these groups will support ongoing work around trustworthy AI. All our work and organizing is done openly. [https://www.mozillafestival.org/en/building-trustworthy-ai-working-group/].<br />
}}<section end=summary /></div>Temipopohttps://wiki.mozilla.org/index.php?title=Working_Groups/Zen_of_ML&diff=1236702Working Groups/Zen of ML2021-07-14T14:03:03Z<p>Temipopo: </p>
<hr />
<div>The Zen of Machine Learning was launched in the pilot cohort of '''Mozilla’s Building Trustworthy AI Working Group'''. <br />
<br />
The Zen of ML is a set of design principles that helps ML educators and self-learners prioritise responsible machine learning practices. The principles consider the end-to-end machine learning development cycle, from data collection to model evaluation and continuous deployment. Inspired by the Zen of Python, the Zen of ML can be viewed as a culture code that promotes the responsible development of ML products and projects. It is not binding or enforceable, but is intended to shape industry norms and offer a practical guide to building trustworthy AI. <br />
<br />
This project is collecting, refining, publishing, and disseminating the Zen of ML design principles so that machine learning (ML) practitioners can develop and deploy ML code responsibly.<br />
<br />
'''Problem'''<br />
Machine learning (ML) tools are freely and widely available, and can be accessed with simple API calls and standard development pipelines. This has made it possible for anybody who wants to use ML to learn the skills and access the tools to do so. When using ML as a code component, the data-dependent and probabilistic nature of its outputs is hidden and often overlooked. This can have undesirable and even harmful consequences. Thus, while democratising ML tools has increased the inclusiveness of ML, it has created a new challenge as the responsible use of ML tools cannot be guaranteed or controlled. This presents a particular risk for people that are adversely affected by the outcomes of decisions backed by ML predictions.<br />
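To make this concrete, here is a brief illustrative example assuming scikit-learn is installed (the dataset and model are arbitrary choices): a complete train-and-predict pipeline fits in a handful of lines, and nothing in it surfaces how representative the training data is or how uncertain each prediction is unless the developer explicitly asks.<br />
<pre>
# Illustrative example (assumes scikit-learn is installed). A complete
# train-and-predict pipeline in a handful of lines; note that nothing below
# reveals how representative the training data is, and the probabilistic
# view of the predictions is opt-in.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

labels = model.predict(X_test)               # hard labels hide uncertainty
probabilities = model.predict_proba(X_test)  # the probabilistic view is opt-in
print(labels[:5], probabilities[:5].round(2))
</pre>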
<br />
'''Solution'''<br />
A list of statements that can be disseminated to ML educators and self-learners to embed the responsible development of ML artifacts and products into the design cycle from the get-go.<br />
We draw inspiration from the Zen of Python, which can be accessed from any Python shell with <code>import this</code>. Similarly, we would like to see the Zen of ML included as an import statement in scikit-learn, the entry point to machine learning for many people.<br />
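A Zen of ML delivered the same way could look like the sketch below; the module name <code>zen_of_ml</code> and the principle wording are hypothetical placeholders rather than the working group’s published text.<br />
<pre>
# zen_of_ml.py -- hypothetical sketch of a module that mimics the standard
# library "this" module: importing it prints the principles once. The module
# name and the principles below are illustrative placeholders, not the
# working group's published Zen of ML.
PRINCIPLES = """\
The Zen of ML (illustrative placeholder)
Collect data with consent, not merely convenience.
Document your assumptions before you tune your hyperparameters.
Evaluate on the people the model will affect, not only on held-out rows.
Prefer an interpretable baseline to an unexplainable gain.
A deployed model is never finished; monitor it."""

# Printing at module level means the text appears the first time the module
# is imported, mirroring the behaviour of "import this".
print(PRINCIPLES)
</pre>
A learner or practitioner would then see the principles with a single statement, <code>import zen_of_ml</code>, just as <code>import this</code> prints the Zen of Python.<br />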
<br />
''The Zen of ML should:''<br /><br />
1. Focus on decision making and identify decisions that arise in building and maintaining machine learning pipelines (from data collection to evaluation)<br /><br />
2. Be a useful starting point for beginner pedagogy (both self-learning and teaching)<br /><br />
3. Be a useful critical reflection tool to revisit for practitioners<br /><br />
4. Use language that connects to the technical domain but remains accessible and comprehensible, i.e. makes sense to human beings with minimal jargon<br /><br />
5. Be neither a checklist nor a specification:<br /><br />
* Comprehensive but non-exhaustive<br />
* General, not custom built for specific ML project types.<br />
* What we love<br />
* What we prefer<br />
* If … then …<br />
* Shame on you! (i.e. what we don't like)<br />
6. Not be specific to a particular framework (e.g. PyTorch, scikit-learn)<br /><br />
7. Be as short as possible while remaining thorough, so it can be processed at a glance (or three); 18-20 short sentences is a good target<br /><br />
8. Resonate with responsible ML best practices<br /><br />
<br />
The project held a workshop and hosted a hackathon at MozFest 2021.<br />
<br />
'''Contributors'''<br />
Andy Forest, Bernease Herman, Borhane Blili-Hamelin, Dessalegn Yehuala, Gaurav Jain, Jenine Carron, Jessica Zhou, Kyle Smith, Kyle Meenehan, Vanja Skoric<br />
<br />
'''Resources'''<br />
[https://www.zenofml.org/ Website] <br />
[https://miro.com/app/board/o9J_lSqB3rU=/ MozFest Hackathon Miro Board] password: zen202!ml</div>Temipopo