# Learning Resources

[Learning objective project board](https://github.com/orgs/fairlearn/projects/3) (note that the [Google Sheet](https://docs.google.com/spreadsheets/d/1L9oIOUwGcC4nJUPwZqjiRERHfQFrbZwUJ-uQ9sfFe1Y/edit?usp=sharing) version has been deprecated, so the GitHub project board is the most up-to-date version)

### 2024-09-25

- Hilde and Michael finished reviewing learning objective priorities and feasibility
- Action items
  - Hilde to close the issue related to 3.6, update 2.4, and create a new issue
  - ~~Michael will open a new issue for 2.3 and link to two existing ones~~

### 2024-09-24

- Hilde and Michael reviewed learning objective priorities and feasibility
- Action items
  - Michael will open ~~issue for [LO 1.2](https://docs.google.com/document/d/10rnkVUSbk9hy1Aa-jkgNPtQj6O_8AwpiUkJnfhjhYfE/edit#heading=h.lqpo1rq545t9) and~~ [LO 2.3](https://docs.google.com/document/d/16maWYopOUcJ8j-EuADLB9LtTCB2H4iv6sm44uWds1mY/edit#heading=h.4mu8p9fl0ub6)
  - Hilde will link PR to 2.4 to close

### 2024-09-11

- Discussed Allie's plan to bundle a bunch of the LOs into a section of the user guide
  - Comparing tradeoffs and limitations of fairness metrics should go in the Assessment section, in a "Considerations" section
- Action items:
  - Allie will open an issue to create a tutorial/walkthrough section of the User Guide where a lot of these could live
  - Michael and Hilde will revisit priorities for LOs
    - e.g., 4.3 should be low priority (or included with other things around the EU AI Act)
    - Should we reduce priority of the mitigations, given the discussion from the UnWorkshop?
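The 2024-09-11 notes above mention comparing tradeoffs and limitations of fairness metrics in a "Considerations" section. As a rough illustration of the kind of content that could live there, here is a minimal pure-Python sketch of one such metric, the demographic parity difference (the gap in selection rates between groups). This deliberately avoids the Fairlearn API (Fairlearn ships `fairlearn.metrics.demographic_parity_difference` for this); the function and toy data below are illustrative only.

```python
# Illustrative sketch: demographic parity difference computed by hand.
# (Fairlearn provides fairlearn.metrics.demographic_parity_difference;
# this hand-rolled version just shows the underlying arithmetic.)

def demographic_parity_difference(y_pred, sensitive_features):
    """Max gap in selection rate (fraction of positive predictions) across groups."""
    groups = {}
    for pred, group in zip(y_pred, sensitive_features):
        groups.setdefault(group, []).append(pred)
    selection_rates = {g: sum(preds) / len(preds) for g, preds in groups.items()}
    return max(selection_rates.values()) - min(selection_rates.values())

# Toy example: group "a" is selected 3/4 of the time, group "b" 1/4.
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # prints 0.5
```

A tradeoff discussion could pair this with an equalized-odds-style metric on the same toy data to show how the two can disagree.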
### 2022-06-08

- Discussed user guide restructure [issue](https://github.com/fairlearn/fairlearn/issues/1095)
- Discussed LO 1.5 outline to scope it down more, and opened the [issue](https://github.com/fairlearn/fairlearn/issues/1102)
- Next steps:
  - Manojit will outline and open an issue for 3.3
  - Michael will outline and open an issue for 1.4 and/or 1.6
- Roadmap for future work:
  - Recruit new working group members (on community call and on Discord)
  - Working group members continue outlining and opening issues for high and medium priority learning objectives on the [project board](https://github.com/orgs/fairlearn/projects/3)
  - Community contributors create content (in the form of pull requests) for those issues, integrating them into the user guide (and restructuring the user guide as needed), with reviews from working group members

### 2022-05-25

- ~~Continue prioritizing remaining LOs in Google Sheet~~
- ~~Move the rest over to GitHub project board~~
- Open high priority issues
  - ~~Open issue to restructure user guide - Fairness in Machine Learning and Assessment [**Michael**]~~
  - [1.5. Restate metrics for operationalizing fairness](https://docs.google.com/document/d/1nF80lVElPAHyaxPp1Zft92_jDZaPVvDDdCdagazbAQI/edit?usp=sharing) [**Hilde**]
  - 3.3. Generate fairness metric frame output [**Manojit**]
    - Find existing content (e.g., for plotting, custom metrics, metrics for different task types)
    - Include something about doing uncertainty quantification ([Kristian Lum's recent paper](https://arxiv.org/abs/2205.05770))
    - Open an issue to integrate content with the MetricFrame user guide, with an outline
      - Base metrics (e.g., from scikit-learn)
      - Fairness metrics
      - Custom metrics
      - Metrics for different tasks
    - [Code example](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/cognitive-services-examples/speech-to-text/analyze_stt_fairness.ipynb) from the RAI widgets repo for custom metrics related to word error rate
  - 3.6.
Apply mitigation approaches [TODO later]
    - For pre-processing, post-processing, and mitigation
- And start outlining medium priority issues
  - 1.4. Identify relevant fairness-related harms for their use case
  - 1.6. Translate fairness harms into fairness metrics
    - https://github.com/fairlearn/fairlearn/issues/721
  - 2.1. Evaluate existing datasets for sources of fairness-related harms
  - 2.2. Create documentation of datasets
  - 5.3. Explain fairness harms, metrics, and mitigations to others in the company (across disciplines, leadership, etc)
  - 5.7. Understand relationship between fairness and other responsible AI goals
- Do we have a good sense for where each of these would go in the user guide?
- We've had in our outline a "Link to general guidelines for creating learning resources"
  - What do we think should go in that guide?
  - How necessary do we think that is to create now?

### Draft outline of user guide

- Overview [For later, when we have content]
- Fairness 101 / getting started
  - Overview of responsible AI and fairness
- Fairness in the ML development workflow
  - Problem Formulation
    - [Fairness of AI systems](https://fairlearn.org/main/user_guide/fairness_in_machine_learning.html#fairness-of-ai-systems)
    - [Types of fairness-related harms](https://fairlearn.org/main/user_guide/fairness_in_machine_learning.html#types-of-harms)
    - Conceptual explanation of fairness metrics [TODO: add [LO 1.5](https://docs.google.com/document/d/1nF80lVElPAHyaxPp1Zft92_jDZaPVvDDdCdagazbAQI/edit?usp=sharing)]
    - Sociotechnical fairness concepts [new header, but use the existing content from:]
      - [Construct validity](https://fairlearn.org/main/user_guide/fairness_in_machine_learning.html#construct-validity)
      - [Abstraction traps](https://fairlearn.org/main/user_guide/fairness_in_machine_learning.html#construct-validity)
  - Assessment
    - [Introduction](https://fairlearn.org/main/user_guide/assessment.html#introduction)
    - [Group fairness, sensitive
features](https://fairlearn.org/main/user_guide/fairness_in_machine_learning.html#group-fairness-sensitive-features) [previously in "Fairness in Machine Learning"]
    - [Parity constraints](https://fairlearn.org/main/user_guide/fairness_in_machine_learning.html#parity-constraints)
    - [Disparity metrics, group metrics](https://fairlearn.org/main/user_guide/fairness_in_machine_learning.html#disparity-metrics-group-metrics)
    - [Disaggregated metrics](https://fairlearn.org/main/user_guide/assessment.html#disaggregated-metrics)
    - [Disaggregated metrics using MetricFrame](https://fairlearn.org/main/user_guide/assessment.html#disaggregated-metrics-using-metricframe)
    - [Multiple metrics in a single MetricFrame](https://fairlearn.org/main/user_guide/assessment.html#multiple-metrics-in-a-single-metricframe)
    - [Plotting](https://fairlearn.org/main/user_guide/assessment.html#plotting)

### 2022-05-11

Continued discussion of prioritizing LOs

### 2022-04-26

Went through LOs to prioritize which ones we'd open issues for, based on ease of writing content and importance

### 2022-04-12

Moved Google spreadsheet over to [GitHub project board](https://github.com/orgs/fairlearn/projects/3/views/1?visibleFields=%5B3054170%2C%22Title%22%2C3054167%2C3054168%2C3450131%2C%22Assignees%22%2C%22Status%22%2C%22Labels%22%2C%22Linked+Pull+Requests%22%5D)

### 2022-03-02: Feedback on Michael's draft issue for [LO 1.2](https://docs.google.com/document/d/10rnkVUSbk9hy1Aa-jkgNPtQj6O_8AwpiUkJnfhjhYfE/edit)

### 2022-02-16: Feedback on Hilde's draft issue for [LO 1.5](https://docs.google.com/document/d/1nF80lVElPAHyaxPp1Zft92_jDZaPVvDDdCdagazbAQI/edit?usp=sharing)

### 2022-01-19: Outlining "Learn" pages on website structure

Issue [#986](https://github.com/fairlearn/fairlearn/issues/986)

- Synthesized outline
  - Learn
    - Overview
    - Fairness 101 / getting started
      - Overview of Responsible AI and fairness
    - Fairness in ML Development
      - Problem Formulation
        - (Including these from the User Guide)
        - Fairness of AI
          systems
        - Overview of sociotechnical fairness
        - Abstraction traps
        - Types of harms
        - Construct validity
      - Definitions
        - Group fairness, sensitive features
        - Parity constraints
        - Disparity metrics
        - Metrics (ungrouped, multiclass, non-
      - Data Collection
      - Data Pre-processing
        - Pre-processing algorithms
      - Modeling and Evaluation
        - Mitigation
          - Reductions
        - MetricFrame
      - Deployment and Monitoring
        - Collective action
    - Datasets
      - Boston Housing
      - UCI Adult
    - Case studies
    - Further resources
- Hilde's outline
  - Sees value in development process
  - Important to have a fairness 101 / getting started (e.g., measurement modeling, abstraction traps)
  - User Guide (including what metrics are and theory for how you should pick them)
    - Dataset should go here too
  - LOs can be divided into 3: theory, coding, practical things
    - Could add tags to them
- Manojit's outline
  - A mix of blog posts, hands-on exercises
  - Overview of Responsible ML
    - Fairness, interpretability, privacy, human flourishing
    - (blog posts on navigating the landscape and current challenges, intersection of other topics going)
  - Overview of sociotechnical fairness
    - Blog post on how social workers approach designing interventions
  - Assessing model fairness (mix of blog posts and exercises)
    - Fairness when you don't have access to sensitive features
    - Qualitative model evaluation
    - What goes into model audits
    - SciPy tutorial, EY notebooks

Note:
- We should have some indicator of the type of content (e.g., with a tag) for whether it's more theoretical, more hands-on, or more organizational action

### 2021-12-15

- Form of resources
  - .rst files?
  - Jupyter book
    - Table of contents is an advantage, but does this afford multiple structuring approaches?
    - What about downloading as a PDF?
- Website structure
  - Get more specific about where:
    - User guide
      - Which sections?
  - We should structure the Learn sub-categories better
    - Which are redundant?
(e.g., case studies and sociotechnical examples)
  - (How) do we integrate social content (e.g., problem formulation stuff) with technical content (e.g., how to use the code)?
    - Knowing that this might turn off some users who "just want the code"
    - But separating social and technical runs counter to our values
    - The Overview page might be a way to prime Fairlearn users for how to approach the rest of it
      - Maybe a video also?
- Assessments
  - Sphinx extensions for self-assessment?
  - Executing code?
    - e.g., ML Failures labs

#### Agendas and Notes

- For next time (1/12/22):
  - Try outlining the [Learn pages](https://github.com/fairlearn/fairlearn/issues/986)
    - Including what we do with the existing content (e.g., user guide, blog posts, etc)
  - Some relatively granular outline of the User Guide
    - In a way that incorporates our plan for learning resources

### 2021-12-01

- Example issues
  - [Learning objective 2.3](https://docs.google.com/document/d/16maWYopOUcJ8j-EuADLB9LtTCB2H4iv6sm44uWds1mY/edit?usp=sharing)

### 2021-11-17

- Template for issues for each learning objective
  - Learning objective (column C)
  - Details to include (column E)
  - Links to existing resources for this LO (columns G, H, I)
  - Suggested format for the resources for this LO (column J)
  - Link to general guidelines for creating learning resources
    - Motivation: why they should care about this topic
    - Writing style
      - May depend on format of resource (e.g., more conversational for blog posts, more formal for user guide)
    - Use a lot of examples
    - ...

Other notes:

* A glossary type of structure to be able to refer to different concepts across the website.
* A learning objective doesn't need to be part of a single learning resource; it can be spread out across multiple resources.
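Several of the notes above reference LO 3.3 ("Generate fairness metric frame output") and the MetricFrame user guide outline. For issue authors who want to convey the core idea without Fairlearn installed, the sketch below shows what a disaggregated (per-group) metric boils down to: the same metric computed separately for each sensitive-feature group. This is a hand-rolled stand-in for `fairlearn.metrics.MetricFrame`, not its actual API; the helper names and toy data are assumptions for illustration.

```python
# Illustrative sketch of disaggregated (per-group) metrics - the core idea
# behind fairlearn.metrics.MetricFrame, shown without the Fairlearn API.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def by_group(metric, y_true, y_pred, sensitive_features):
    """Compute `metric` separately for each sensitive-feature group."""
    buckets = {}
    for t, p, g in zip(y_true, y_pred, sensitive_features):
        buckets.setdefault(g, ([], []))
        buckets[g][0].append(t)
        buckets[g][1].append(p)
    return {g: metric(ts, ps) for g, (ts, ps) in buckets.items()}

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
sf     = ["a", "a", "a", "b", "b", "b"]
print(by_group(accuracy, y_true, y_pred, sf))  # accuracy per group
```

In Fairlearn itself, the equivalent result comes from `MetricFrame(...)`'s `by_group` attribute, with `overall` and `difference()` covering the aggregate views listed in the outline above.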
### 2021-10-20

#### Agenda

- Review of LOs, common threads
  - We got through sections 1 and 2
  - Next time, start with 3 and go through
- Discuss options for structuring the content
- Clean up LO spreadsheet to share with community to review (if they're interested) prior to 10/28

### 2021-10-06

#### Agenda

- Consider sequencing and format of resources
- Think about how to integrate existing website content
- Think about domain experts to involve (question: was this about involving them in developing learning objectives? or the learning resources?)

Learning objectives added to a spreadsheet [here](https://docs.google.com/spreadsheets/d/1L9oIOUwGcC4nJUPwZqjiRERHfQFrbZwUJ-uQ9sfFe1Y/edit?usp=sharing)

#### Questions to discuss:

- How do we structure the content?
  - Should we separate the user guide from other resources (i.e., those without existing Fairlearn capabilities)?
    - Or, if not, how should we integrate them?
  - Should we structure these sequentially (as in the development process below), conceptually (e.g., around stakeholders, or contexts, or measurement, etc), or both (as in the Google PAIR guidebook's "[patterns](https://pair.withgoogle.com/guidebook/patterns)" which link to related chapters)?
- Higher-level sociotechnical goals:
  - What are the basics or need-to-know?
- What is missing from the [Learning Objectives](https://docs.google.com/spreadsheets/d/1L9oIOUwGcC4nJUPwZqjiRERHfQFrbZwUJ-uQ9sfFe1Y/edit?usp=sharing)?
  - More at the monitoring stage?
  - Third party auditing?
  - Stakeholder feedback mechanisms (and opportunities for recourse, feature rollback)?
  - Public benchmarks?
  - Anticipating/monitoring/dealing with adversarial threats?
  - Plan to revisit anticipated harms (and metrics)
    - How users and/or deployment context may introduce new fairness harms or change existing harms (or harms for new people)
  - Engagement with organizational context
    - How to involve other stakeholders in decisions, including internal (e.g., UX researchers, product managers) or external stakeholders
  - Communication of harms, metrics, etc
    - We have datasheets, and could possibly add model cards, but wondering if we want more about *communication* than just creating artifacts, e.g., Kevin's "talk to your boss about fairness" example
  - Worker advocacy or organizing to help
- What is our Bloom's taxonomy distribution? Where do we want to be?
- What external resources can support these LOs?
  - How should we leverage those resources?
    - To include in the issues we write for these? (for contributors developing content)
    - To point users to, in a resources page?

#### Next steps

- Prior to 10/20:
  - Go through learning objectives again to:
    - Add anything missing
    - Synthesize any redundant LOs
    - Revise wording across them
  - Identify common threads/themes, phrased as questions that a user might ask
  - Identify fundamental key takeaways or goals
- On 10/20's working group meeting:
  - Final review of LOs, common threads, and fundamental goals
  - Prep LO spreadsheet to share with community
- On 10/28 community call:
  - Discuss learning objectives, structure (e.g., development process and common threads), and fundamental goals

#### Examples/Resources

- Berkeley's ML Failures [bootcamp](https://cltc.berkeley.edu/mlfailures/)
- Google's PAIR [guidebook](https://pair.withgoogle.com/guidebook/patterns)
- HAX [guidelines](https://www.microsoft.com/en-us/haxtoolkit/ai-guidelines/) - less clear how this might transfer
- Design ethically [toolkit](https://www.designethically.com/toolkit) - some great resources for worker organizing, org context
- Tech Worker [Handbook](https://techworkerhandbook.org/)

### Workshop 4-5: Revise Learning Objectives
(Sept 8th and 22nd, 2021)

*Select key learning objectives*

(Asterisks are ones with Fairlearn functionality)

All of these should be prefaced with "Data scientists will be able to..."

**Problem Formulation**

- Identify the people who may be harmed by their model
- [Placeholder] Something about mapping task to context? (e.g., understanding social context and regulations)
  - And something about construct validity here? (even though we have some existing resources here)
- Identify the type of fairness harm that may be relevant for their use case(s) (+)(+)(+)
  - May include qualitative approaches
- Understand how choices about performance metrics to optimize for may contribute to fairness-related harms
- Understand the different metrics for operationalizing fairness (e.g., demographic parity, equalized odds, etc.)*
- Translate fairness-related harms to explicit fairness metrics.* (+)
- Identify tradeoffs of using different fairness metrics for their use case
- Identify adversarial uses of the envisioned system (-)

**Data Collection**

- Evaluate existing datasets for balanced demographic groups* (+)
- Create documentation of existing datasets (e.g., datasheets)*
- Identify best practices for demographic data collection for their context (+)(+)
- Evaluate whether data are (in)appropriate and/or complete for task? Representative? Generalizable? (+)(-)
  - This may be captured by the first bullet on balanced datasets and the next one on construct validity
- Evaluate construct validity and reliability considerations for features and label (+)(+)

**Data Analysis and Pre-processing**

- Evaluate dataset for sources of fairness-related harms
- Consider how sample size may impact their fairness evaluations*
  - Do we want to connect this point to the data collection process?
    (is some group unrepresented or systematically excluded)
  - Selection biases, representational biases, etc
- Identify consequences of pre-processing decisions for sensitive features for use cases
  - As part of this resource, we should include recommending that they communicate these decisions (and their consequences)
  - Decisions may be whether to include a given feature or not, which ones to use for assessment, perhaps binning of some (continuous) features, recommendations on whether (or not) to infer sensitive features, etc.
- Identify implications of regulatory requirements for fairness (e.g., privacy and data collection)
- Evaluate the need for fairness pre-processing algorithms for the use case
- [once there are multiple pre-processing algorithms] Consider tradeoffs between fairness pre-processing algorithms

**Modeling and Evaluation**

- Integrate Fairlearn into existing modeling processes*
  - This seems quite large... is there a specific learning objective here?
- Identify fairness tradeoffs and implications of model choices
- Generate fairness metric frame output*
- Interpret fairness metrics in terms of fairness-related harms for their use case
- Understand risks of multiple hypothesis testing (if applicable)
- Evaluate the need for fairness mitigation algorithms for the use case
- Apply mitigation approaches
- Identify and explain tradeoffs in using mitigation approaches*

**Deployment and Monitoring**

- Monitor the model for new fairness-related harms (+)

### Workshop 3: Learning Objectives (cont.)
(August 4th, 2021)

*What do we want our target audience to learn?*

[Brainstorm mural](https://app.mural.co/t/fate3199/m/fate3199/1613663752472/37ea06b86781eff694c734a0195c742627cdcc2f?sender=uec3275c622441a94bd3e9416)

#### Process:

- Brief recap of previous workshops (see below)
- Choose learner personas
- (20 mins) Generate learning objectives based on a persona (using the ML process as a guide), using Bloom's taxonomy and the [overview](https://www.cmu.edu/teaching/designteach/design/learningobjectives.html) of learning objectives to inform how to write them
- Read each others' and start to synthesize them

# Notes

### Workshop 2: Learning Objectives (July 14th, 2021)

*What do we want our target audience to learn?*

[Brainstorm mural](https://app.mural.co/t/fate3199/m/fate3199/1613663752472/37ea06b86781eff694c734a0195c742627cdcc2f?sender=uec3275c622441a94bd3e9416)

#### Agenda

* Action items from the previous workshop
* Intro: what is a learning objective? (constructive alignment, specific and measurable, Bloom's taxonomy)
  * Short [overview](https://www.cmu.edu/teaching/designteach/design/learningobjectives.html) of learning objectives
  * See other resources on the Discord channel
* In groups (2/3 people): define a set of learning objectives
  * Choose 2 personas and identify relevant learning objectives
  * It might be helpful to structure this along the ML development process: what skills and knowledge are required at each stage? What might learning objectives at different levels of Bloom's taxonomy look like for each stage?
    1. Problem formulation
    2. Data collection
    3. Data preparation
    4. Modelling
    5. Evaluation
    6. Deployment
* Share learning objectives and synthesize (if time permits)

#### Personas

1. **Data Scientist at Large Corporation**
   * *Organization*: large organization where ML is used to improve processes/products but is not part of core business.
   * *Prior knowledge*:
     * ML theory/application: expert
     * domain knowledge: novice/intermediate
     * ML fairness: novice
   * *Motivation*: intrinsically motivated to make fairer models, avoid bad PR
   * *Challenges*:
     * Limited power: needs buy-in from upper management, may not have much say after deployment, may not be able to heavily influence data collection processes.
     * Limited resources: may not be able to directly access users/stakeholders impacted by model, no time to dive deep into understanding broader context, no resources to hire external experts.
2. **Data Scientist in Highly-Regulated Sector**
   * *Organization*: organization with strict domain-specific regulations (e.g., banking, healthcare)
   * *Prior knowledge*:
     * ML theory/application: expert
     * domain knowledge: intermediate
     * ML fairness: novice/intermediate
   * *Motivation*: comply with domain-specific regulations
   * *Challenges*: already knows a lot about non-discrimination in their domain but may not know how to adapt current ML fairness concepts to domain-specific regulations
3. **Data Scientist at an AI Startup**
   * *Organization*: startup with AI as core product
   * *Prior knowledge*:
     * ML theory/application: expert
     * domain knowledge: novice
     * ML fairness: novice
   * *Motivation*: wants to know more about fairness to implement in product, avoid bad PR
   * *Challenges*: limited resources (time, access to stakeholders or domain expertise), fairness may be at odds with business model
4. **Junior Data Scientist at a Government Agency**
   * *Organization*: government/public sector
   * *Prior knowledge*:
     * ML theory/application: intermediate
     * domain knowledge: intermediate
     * ML fairness: novice
   * *Motivation*: comply with regulations, improve public services
   * *Challenges*: large gap between domain knowledge and ML expertise within the organization makes communicating priorities difficult (i.e., the data science team is still very unfamiliar with the domain and the rest of the organization doesn't understand ML)
5.
**Analyst turned Data Scientist at a Non-profit**
   * *Organization*: non-profit
   * *Prior knowledge*:
     * ML theory/application: novice
     * domain knowledge: expert
     * ML fairness: novice
   * *Motivation*: intrinsically motivated, avoid bad PR, backed by management
   * *Challenges*: little experience with formulating machine learning tasks

#### Action Items

- [X] continue developing learning objectives until next meeting

### Workshop 1: Defining Our Audience (June 30, 2021, 8AM PT / 5PM CEST)

*Who are we designing materials for?*

#### Agenda

* Logistics (how much time each week? biweekly? how do people want to be involved?); room to put other things on the agenda (10 min)
  * Possibly biweekly - can try setting up calendar invites weekly
* In groups (2/3 people) we develop target audience 'persona' (25 min)
  * *Role*
  * *Company/Organization*
  * *Prior Knowledge*
  * *Learning habits*
  * *Goals*
  * *Challenges*
* Consolidate results & prioritize personas (20 min)
  * See brainstormed list of personas [here](https://app.mural.co/t/fate3199/m/fate3199/1613663752472/37ea06b86781eff694c734a0195c742627cdcc2f?sender=uec3275c622441a94bd3e9416)
  * Priorities:
    * Start with data scientists (later: PMs, others)
    * But many different levels of prior knowledge and goals
* Schedule next meeting (5 min)
  * 7/14, at 11am ET

#### Action Items

- [X] Work out data scientist personas (Hilde + Michael)
- [x] Define structure for defining learning goals for Workshop 2 based on persona prioritization and earlier discussions (Hilde + Michael)
- [x] Read about learning goals (Everyone)

### Community Call: Call for Action (June 24, 2021)

Michael M. and Hilde set out previous discussions and proposed a plan to move forward.

#### Action items

- [X] Join the #educational_resources channel on Discord
- [X] Let Hilde and Michael M. know if you want to join the working group
- [X] Find a time for the first (and recurring) meetings
- [ ] Later (TBD based on conversation during first meeting!)
  * Define learning goals, their sequencing, and ways to integrate them into existing content (e.g., steps 1-3 in "what's next" in the [HackPad](https://hackmd.io/@STU6DFvcRo6VVk1dPGbTMQ/B1i-SMzhd)).
  * Then, later, open issues to address those goals, in ways that help contributors understand how they fit in with the desired structure

## The Plan

1. Define learning goals: *what do we want practitioners to learn?*
2. Design sequence or structure of content ("syllabus"): *how will practitioners learn it?*
   * How will we sequence materials/modules?
     * Options:
       * Structuring this according to the AI lifecycle
   * How does the structure fit in existing resources: e.g., hooks from quickstart or user guide to other educational resources?
   * How does the structure fit in different user stories? e.g.,
     * the Fairlearn user who only starts at quickstart
     * the Fairlearn user who knows some parts of fairness (either technical or social), but wants to learn others for their work
     * the Fairlearn user who wants to dive deep into fairness concepts
   * Where or how to link to other resources
3. Design "assessments": *how will practitioners know that they've learned it?* (Note: this may involve developing guidance for how contributors might create these)
   * example notebooks
   * tutorials (e.g., [SciPy tutorial](https://github.com/fairlearn/talks/tree/main/2021_scipy_tutorial))
   * self-assessment quizzes that can be used by practitioners to check their own understanding
   * talking points to communicate concepts or findings with coworkers
4. Logistics: how can Fairlearn contributors most effectively and efficiently be supported in helping develop educational materials?
   * Maintainers and other interested contributors make progress on items 1 and 2 at least - so contributors don't need to figure out the structure of where their contributions fit in
   * Create specific, "bite-sized" issues with proposed solutions
     * Break apart separate steps into separate issues, e.g.,
       * identify relevant research or prior resources developed by others
       * define concepts
       * provide examples of concepts
       * create worked (code?) examples of how users might incorporate concepts into their work
     * Be clear in the issue about where the content would fit in the website structure
   * Clarify differences (if any) between the process for contributing code and contributing text
   * How will we prioritize learning objectives to create issues for?

## Who?

Organizers: Hilde and Michael M.

Volunteers: Ayodele, Lisa, Manojit, Roman

## Notes from Mural / GitHub Discussion

* Prior work generating ideas for educational materials
  * [Roadmap ideas discussion](https://github.com/fairlearn/fairlearn/discussions/696)
  * Brainstorming of educational ideas on whiteboard [here](https://drive.google.com/file/d/1miWodPejupJvHZtiAbk5-8Y_u-BPPvQl/view?usp=sharing)
* Definitions of:
  * ~~Abstraction traps~~: by Laura!
  * ~~Construct validity and reliability~~: by Michael A. (merged today!)

### Learning resource topics

* Problem formulation and Fairness
  * ~~Abstraction traps~~: by Laura!
  * ~~Construct validity and reliability~~: by Michael A. (merged today!)
  * Identify fairness harms
    * For different domains?
    * For different application areas?
  * Navigate aligning values of model with values of team / client
  * Users should understand that "is my model compliant with regulation?" is a different question from "does my model propagate fairness-related harms?"
* Measuring fairness
  * More resources around differences between different types of fairness-related harms (e.g., quality-of-service harms, allocation harms) and when to
  * Select appropriate fairness metric(s) (i.e., [when to use each fairness metric](https://github.com/fairlearn/fairlearn/issues/721))
  * Understand limitations of fairness metrics (when is Fairlearn not enough?)
  * Use Fairlearn together with other common Python libraries
  * Understand challenges and best practices in dealing with demographic data for fairness assessment
* Mitigating unfairness
  * Select appropriate mitigation strategy
  * Understand limitations of unfairness mitigation algorithms
  * Use Fairlearn together with other common Python libraries
  * Map strategies to different parts of the machine learning development pipeline
* Accountability
  * Document fairness assessment easily
  * Communicate needs to mitigate unfairness to business leaders / stakeholders
  * Involve stakeholders (e.g., participatory design)
* Datasets
  * create user guide entry for fetching datasets
  * create documentation for an existing dataset (e.g., datasheet?) [[example](https://github.com/fairlearn/fairlearn/issues/507)]
  * identify sources for information about the datasets (i.e., find links to sources which can then be added to the API reference)
  * illustrate known fairness issues with existing datasets based on the sources found (and/or point to or summarize blog posts or papers about that dataset)
* Practical challenges
  * How to communicate fairness issues to leadership?
  * ...

### Types of materials

* User guide documentation
* Use cases and/or example notebooks
* Code examples
* Toolkits (e.g., model cards, datasheets)
* Talk to your boss cheat sheets
* Business use cases / decision models for business
* Best practices blog posts
* Short videos
* ...
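The "Mitigating unfairness" bullets above ask contributors to explain how to select a mitigation strategy. One common family of strategies is post-processing, which Fairlearn implements as `fairlearn.postprocessing.ThresholdOptimizer`: pick a separate decision threshold per group so that selection rates line up. The sketch below is a deliberately simplified illustration of that idea under an equal-selection-rate goal; the helper name, data, and logic are assumptions for teaching purposes, not Fairlearn's actual algorithm.

```python
# Simplified illustration of threshold-based post-processing mitigation:
# choose a per-group score threshold so that each group's selection rate
# is (approximately) equal to a target rate. This is a hypothetical helper
# for explanation only, not Fairlearn's ThresholdOptimizer.

def per_group_thresholds(scores, sensitive_features, target_rate):
    """For each group, return the score threshold selecting ~target_rate of it."""
    groups = {}
    for s, g in zip(scores, sensitive_features):
        groups.setdefault(g, []).append(s)
    thresholds = {}
    for g, group_scores in groups.items():
        ranked = sorted(group_scores, reverse=True)
        k = max(1, round(target_rate * len(ranked)))  # how many to select
        thresholds[g] = ranked[k - 1]                 # k-th highest score
    return thresholds

scores = [0.9, 0.8, 0.3, 0.2, 0.6, 0.5, 0.4, 0.1]
sf     = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Selecting scores >= threshold picks the top half of each group.
print(per_group_thresholds(scores, sf, target_rate=0.5))  # prints {'a': 0.8, 'b': 0.5}
```

A learning resource built on this could then discuss the limitations noted above, for example that per-group thresholds require sensitive features at prediction time.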
### Attention Points

* *[From Roman's comment on Discord]* The original intention of example notebooks was to set out use cases ([see e.g., this comment](https://github.com/fairlearn/fairlearn/pull/615/files#r550377996)). The current examples under 'Example Notebooks' are more code snippets than example notebooks. We need to find a good place for both types of resources *(and probably develop more use cases)*.

### Sources for other resources

* Academic courses
* Textbooks (e.g., https://fairmlbook.org/)