P2PULearningChallenges/Metrics

Final Metrics for Challenges on P2PU

Evaluation Goals and Metrics for Challenges

  • The objective of these metrics is to assess whether or not Challenges on P2PU are a good way to begin creating educational content for Mozilla projects. Both quantitative and qualitative methods are used to determine value for Mozilla, P2PU, and the user.
  • The focus is on overall value - we have content and we are moving it into this challenge framework. How is that going? How many people are completing our content (within this framework)? Conversion funnel. Completion rates.

  • Conversion rate (see the sketch after this list)
    • Users who hit the site
      • N = not logged-in users (data from google analytics) accessing:
        • School of Webcraft home page
        • Challenge set home pages
        • Challenge home pages
        • Challenge Task pages
    • Share of users who register but never start a challenge 
      • Registered users only --> visited a challenge-related page but never started a challenge
    • Share of users who start a challenge
      • Share of users who complete a challenge
        • Only users who started a challenge at least 3 days prior to reporting
      • Share who abandon a challenge
        • Users who are inactive for 10 days
        • Users who manually "leave" a challenge
    • Share of users who complete more than one challenge (2, 3, 4, ...)
    • Share of users who complete a series (=set) of challenges
    • Number of Challenges completed per learner
      • N = Users who completed at least one challenge
  • Badges
    • Number of Badges issued over time (organized by badge type)
    • Number of badges issued per learner (badge categories: making, understanding)
  • Mentors
    • Number of Mentors over time (some measure of level of activity)
      • Todo -> Check with John
  • Qualitative Survey to ask people how they find the experience
    • Laura to prepare
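
As a concrete illustration, here is a minimal sketch (in Python, with hypothetical field names and data shapes - nothing below reflects an actual P2PU export or API) of how the funnel shares above could be computed, including the "started at least 3 days prior to reporting" and "inactive for 10 days" rules:

  from datetime import date, timedelta

  REPORT_DATE = date(2012, 1, 15)       # hypothetical reporting date
  ABANDON_AFTER = timedelta(days=10)    # inactive for 10 days counts as abandoned
  MIN_START_AGE = timedelta(days=3)     # only count completions for starts >= 3 days old

  def funnel(visitors, users):
      """Return the conversion-funnel shares listed above.

      visitors: count of not-logged-in visitors (e.g. from Google Analytics).
      users: list of dicts describing registered-user activity (hypothetical shape).
      """
      registered = [u for u in users if u["registered"]]
      starters = [u for u in registered if u["challenges_started"] > 0]
      never_started = [u for u in registered
                       if u["visited_challenge_page"] and u["challenges_started"] == 0]
      # Completion is only counted for users who started a challenge at least
      # 3 days before the reporting date.
      eligible = [u for u in starters
                  if REPORT_DATE - u["first_start"] >= MIN_START_AGE]
      completed = [u for u in eligible if u["challenges_completed"] > 0]
      # Abandonment: inactive for 10 days, or the user manually left a challenge.
      abandoned = [u for u in starters
                   if REPORT_DATE - u["last_activity"] > ABANDON_AFTER
                   or u["left_challenge"]]
      return {
          "registered / visitors": len(registered) / float(max(visitors, 1)),
          "registered but never started": len(never_started) / float(max(len(registered), 1)),
          "started a challenge": len(starters) / float(max(len(registered), 1)),
          "completed (of eligible starters)": len(completed) / float(max(len(eligible), 1)),
          "abandoned (of starters)": len(abandoned) / float(max(len(starters), 1)),
          "completed more than one": sum(u["challenges_completed"] > 1 for u in starters)
                                     / float(max(len(starters), 1)),
      }

The thresholds and field names are placeholders; the point is only that each funnel share needs an unambiguous numerator and denominator before it can be tracked week over week.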

Hackasaurus Metrics

Understanding

  • Number of Users who hit the site
  • Number of Badges issued over time
  • Number of people signing up for our email/group lists/pledges/campaigns/twitter/bookmarks
  • Number of Hacktivity Kit downloads

Making

  • Number of websites developed
  • People using our templates 
  • Number of hacked webpages
  • Number of Hackasaurus events 
  • Number of event participants
  • Goggle activations (bookmarklet uses) per day

Innovating


  • patches (or forks) from volunteer contributors (dev and curriculum)
  • x # of hackasaurus "experiments" by community members
  • localizations (Hacktivity Kit + web + goggles)
*****************************************************************

Project Components: 

Website:
  • Number of people visiting
  • XXX page views per day
  • Number of people signing up for our email/group lists/pledges/campaigns/twitter/bookmarks
  • page views per visit
  • return visitors
  • links clicked
  • # of backlinks

Goggles:
  • XXX activations (bookmarklet uses) per day
  • XXX patches (or forks) from volunteer contributors

Hacktivity Kit:
  • XXX # of downloads by the end of 2012/ Q1 - I'd collect this monthly (or weekly)
  • x# of Hacktivity Kit localizations 
    • or 2x the number of localizations from 2011
  • x # of feedback questionnaires completed

P2PU Challenges (this could also be for other things like parable or nav badge if not integrated into p2pu - "interactive challenges")
  • x # of webpages created as a result of challenges
  • Share of users who start a (Hackasaurus) challenge
    • Share who abandon a challenge
    • Share of users who complete a challenge
  • Share of users who complete more than one challenge (2, 3, 4, ...)
  • Share of users who complete a series of challenges
  • Number of Challenges completed per learner
  • Number of Badges issued over time (organized by badge type)
  • Number of badges issued per learner (making and understanding badges)
  • Number of Mentors over time (some measure of level of activity) 

Community
  • x # of countries running hack jams
  • x # of hackasaurus "experiments" by community members
  • x # of localizations
  • x # of participants in Hackasaurus events

BRAINSTORMING METRICS:

Overall goals

Evaluation Goals and Metrics for Challenges
  • The objective of these metrics is to assess whether or not Challenges on P2PU are a good way to begin creating educational content for Mozilla projects. Both quantitative and qualitative methods are used to determine value for Mozilla, P2PU, and the user.
  • Two sets of metrics:
    • NOT FOCUS -> Internal - Once within a challenge, what are the metrics that tell us how well someone is doing, and how to make the challenge features / UX better? Click patterns. Drop-off points. Etc.
    • FOCUS -> Overall value - We have content and we are moving it into this challenge framework. How is that going? How many people are completing our content (within this framework)? Conversion funnel. Completion rates.

Framing:

  • Focus metrics on state change - attitude, awareness, actions taken

3 Levels of Impact for Metrics for MoFo (framing for our larger goals - see if we can re-frame some of the metrics below into Participation and Making/Learning buckets for consistency)
  • Understanding - what increase in understanding/awareness of the open web and our programs (this is not as relevant for P2PU challenges specifically)
CV: challenges provide a model for deep understanding, which is measured primarily in the ability of learners to reflect their knowledge. A tangible example of that is to create a tutorial for someone else on a topic.
  • Participation - how many people are participating, in what ways, how deeply
CV: an easy way to measure that is the sequencing of comments: we do not necessarily care about how many comments have been made; what we care about as an indicator of participation is how many of these comments are an answer to a previous comment. Additionally, things like sharing with external social graphs provide an indicator > more on how to assess social assessment & participation here: http://bit.ly/vloJcH
  • Making - how are people improving their skills
CV: not clear on this, how is this different from understanding?
  • Innovation - how are people innovating (not as relevant?)
CV: brilliant point - something to consider is measuring how many times, for example, a unique solution, such as a piece of code, is being re-used in other ways.

Can we currently measure all these on P2PU? What do the metrics look like for Hackasaurus (what do we have available to us there)?
CV: probably not, no. But we can start considering ways we can measure participation (stealth assessment) as well as understanding (manual assessment).

PS: Main comment -> Need to reduce number of metrics / let's track general traffic and then pick a few important ones (max 3-5) to drill down


Goals

Participation (beyond Reach) - Determine success of functional improvement

How many people is Hackasaurus currently engaging? How many more might Challenges bring to the Hackasaurus Project? Historic, current, and predictive views using quantitative data.

Method: data analysis (Monitor for trends) – historical and current, weekly collection

Metrics (increasing depth of participation):
  • Access
    • Basic demographics (from Google Analytics)
    • Referrer stats (from Google Analytics)
    • Session Duration
    • Number of total page views by language and country
  • Conversion rate
    • Users who hit the site
    • Share of users who register but never start a challenge
    • Share of users who start a challenge
      • Share who abandon a challenge
      • Share of users who complete a challenge
    • Share of users who complete more than one challenge (2, 3, 4, ...)
    • Share of users who complete a series of challenges
    • Number of Challenges completed per learner
    • Nice to have: Usage pattern over time (levels of activity / completion speed)
      • Maybe hours / week over time
  • Learning
    • Number of Active Learners over time (ie those who complete a task once a week) 
    • Number of Challenge/Curriculum Completions over time (ie Learners who have completed all the Challenge/Curriculum tasks)
    • Share of users who complete at least one challenge
    • Ratio of users who started a challenge and completed it
    • Click Stream
  • Badges
    • Number of Badges issued over time (organized by badge type)
    • Number of badges issued per learner (making and understanding badges)
      • what about collaborating = learning behavior?
  • Mentoring
    • Number of Mentors over time (some measure of level of activity) 
      • This can probably only be done through a survey, because we let mentors structure their communication however they want - a lot of their interaction happens outside of p2pu.org
    • Conversion rates from participant to helper to mentor (deeper participation)
    • sequence of comments > that is something we could measure
    • Number of Gurus (e.g. users who have ability to issue valuable badges) 
      • (does this exist?) - Yes, during the first pilot of Badges on P2PU, they seeded the community with gurus (ie "Has-A-Badge" assessors). @Chloe @Philipp - Do you know how many "gurus" are active on the site?
  • Peer Assessment (as a form of participation) - see the sketch after this list
    • Share of total users who participated in peer assessment (e.g. 20%)
    • Share of peer to peer badges compared to overall badges (e.g. 15%)
    • Number of badges that lack sufficient reviews
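
Here is a minimal sketch (in Python, with hypothetical field names - not a real P2PU data structure or API) of how the peer-assessment shares above could be computed; the "sufficient reviews" threshold is an assumption:

  MIN_REVIEWS = 3  # assumed threshold for "sufficient reviews"

  def peer_assessment_metrics(users, badge_awards):
      """users: dicts with a 'reviews_written' count;
      badge_awards: dicts with a 'peer_issued' flag and a 'review_count'."""
      assessors = [u for u in users if u["reviews_written"] > 0]
      peer_badges = [b for b in badge_awards if b["peer_issued"]]
      under_reviewed = [b for b in badge_awards if b["review_count"] < MIN_REVIEWS]
      return {
          "share of users who peer-assessed": len(assessors) / float(max(len(users), 1)),
          "peer badges / all badges": len(peer_badges) / float(max(len(badge_awards), 1)),
          "badges lacking sufficient reviews": len(under_reviewed),
      }

The same denominators (all users, all badges issued) would need to be fixed per reporting period so the shares stay comparable across the 1-week, 4-week, and 3-month snapshots mentioned below.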

Qualitative Survey to ask people how they find the experience


Making

  • Number of Badges (that are tied to making) issued over time
  • Number of Links to participants' work (gathering external links will allow us to see what/if people are making)
  • Handle through survey questions



Innovating

  • Handle through survey questions

PS: I combined the following with Participation above


Community

Metrics



3: Up Peer Assessment / e.g. How well does the peer assessment work?
Method: data analysis (Monitor for trends) – historical and current, 1 week after launch, 4 weeks after launch, 3 months after launch

Metrics
  • Share of total users who participated in peer assessment (e.g. 20%)
  • Share of peer to peer badges compared to overall badges (e.g. 15%)
  • Number of badges that lack sufficient reviews



PS: Doesn't fit with Mozilla metrics framework:


4: User Survey - Determine perceived value to users
Method: survey 3 months after launch

Sample strata
  • Learners who abandoned a challenge (target problem areas)
  • Learners who never started a challenge
  • Learners who completed a challenge

Indicators / Variables
  • [future version] + contribution to getting a job / only for relevant challenges
  • Contribution to personal satisfaction
  • Contribution to social recognition
  • Focus on state changes - changes in attitude, awareness, actions taken
  • Did you tell your friends about this?
  • Would you recommend it to a friend?
  • .... need to do more work here

5: Ongoing gathering of empirical information to improve individual challenges
Method: Each learner is asked a single question after completing a challenge 

Metrics
  • + Learner Satisfaction
  • + Challenge structure success (target problem areas)

Possible Wording (need to wordsmith) - 
  • Help us make this challenge better! 
    • Did you think this challenge was a little silly – just right – too boring?
    • Did you think this challenge was too hard – just right – too easy / simple?



OTHER NOTES
ROI (Long Term)
How have donations increased or decreased? How has participation spread?

Method: data analysis (Quantitative from Mozilla and P2PU user data, donation data) – Ongoing after launch, long term.

Metrics
  • + donation stream to p2pu and Mozilla
  • + number of new signups
  • - Staff Hours
  • - Average Response Rate
  • + Total Likes, One ups or RT on social media messaging marked #challenges        

Notes
  • Suggestion is to postpone this and add to later phase (agree)
  • Risk that adding the option to donate into the challenge will change the user's learning experience (we really want to measure if the learning works at this point - adding donations may influence the outcomes)
    • - Average Challenge Completion Time (currently located within the Challenge metrics)
    • - Average Task Completion Times
    • + Number of new challenges by the community
                
Quantitative

Collecting these metrics will allow us to define meta metrics (ie number of learners vs non learners vs power learners or whatever)

Across the Board:
Metrics for Understanding and Awareness of Open Web and Mozilla Projects
  • Basic demographics
  • Number of total page views by language and country
  • Referrer stats
  • Click Stream

The Metrics Story: Collecting the demographics of users will allow Mozilla to further focus their programs on specific target groups. The total number of page views by language and country furthers this definition. This will allow Mozilla to spend its resources designing programs for people, programs that work. This will influence stats in participation depth, which will show progress towards the 10 Million Webmakers marker. Knowing which referrers are the most valuable will help streamline resources and eliminate wasteful spending on marketing and/or partnerships. Following the clickstream will help Mozilla understand what sorts of content inspire understanding, which will further Mozilla's ability to create content that pushes people toward becoming webmakers.

Participation Depth
  • IP
  • Session Duration & Clicks per Session
  • Think time
  • Conversion rate
    • Share of users who register but never do anything
                                   
The Metrics Story: Although IP logging is a raw metric, with thousands of users it will be valuable to see how deep into Mozilla programming users are going. By cross-comparing IP logs between programs, Mozilla will have a better view of influence and participation depth across the board. Collecting session durations and clicks per session and seeing an increase in these two metrics over time will further underline this viewpoint. Think time can be used to filter out users who simply browse. Decreasing negative conversion rates is important in showing strength of programming.
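
A minimal sketch (in Python; the log record shape with an 'ip' field is hypothetical) of the cross-program IP comparison described above:

  def program_overlap(logs_a, logs_b):
      """logs_a, logs_b: iterables of visit records for two Mozilla programs."""
      ips_a = set(rec["ip"] for rec in logs_a)
      ips_b = set(rec["ip"] for rec in logs_b)
      shared = ips_a & ips_b
      return {
          "share of A's visitors also seen in B": len(shared) / float(max(len(ips_a), 1)),
          "share of B's visitors also seen in A": len(shared) / float(max(len(ips_b), 1)),
      }

IP overlap is only a rough proxy (shared networks, dynamic addresses), so it would sit alongside the session-duration and conversion metrics above rather than replace them.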

Skill Improvement
  • Number of Badges issued over time (organized by badge type)
  • Number of Links to participants' work (gathering external links will allow us to see what/if people are making)

The Metrics Story: The more badges that are issued and the more quality links that are submitted to Mozilla sites, the clearer the influence of Mozilla on skill improvement. 

Platform Specific

P2PU
Participation Depth
  • Number of Mentors over time
  • Conversion rates
    • from participant to helper to mentor (deeper participation)
    • Number of Active Learners over time (ie those who complete a task once a week)          
    • Share of users who complete at least one challenge
    • Ratio of users who started a challenge and completed it
  • Share of total users who participated in peer assessment (e.g. 20%)
  • Share of peer to peer badges compared to overall badges (e.g. 15%)

Skill Improvement
  • Number of Gurus (e.g. users who have ability to issue valuable badges)          
  • Number of badges that lack sufficient reviews
  • Number of Challenge/Curriculum Completions over time (ie Learners who have completed all the Challenge/Curriculum tasks)

01.12.11 Metrics Meeting

Attendees
  • Steph
  • Laura
  • Philipp
  • Chloe

Dealing with Resources
  • Need to extend Arlton's contract to work on Challenges UX – involved 3.5 months ago. P2PU hired 2 people, 1 for Challenge content and 1 for UX: Arlton and Jamie Curle.
    • Was getting the UX person the most efficient way to use the funds? (Philipp says no.) Discuss in longer call (TBD).
  • Need to figure out the longer term strategy for people
  • SoW – funded early badges work, SoW Community Management, Webmaking 101 Challenges. 50% p2pu funding (Zuzel, Chloe, John paid with p2pu funds) and 50% from SoW funding.
  • By January, launching Hackasaurus on P2PU – get Challenges rocking the house.
  • Zuzel is superhuman -- really?

Metrics Discussion
https://etherpad.mozilla.org/challenges-evaluation-goals
Laura to set up metrics meeting with Jess. Consistent set of metrics! Invite Steph.
How do the P2PU metrics line up with the Hackasaurus metrics? Which ones are closely enough aligned that we can compare them? Comparison chart.
Parse metrics – send out

Agreements
Challenge Fixes Board: Released Dec. 14
Implementation of Partner Account that allows Hackasaurus to run their challenges

Success looks like:
The list of agreed-upon must-haves and launch by mid-January