---
robots: noindex, nofollow
tags: pitch, internship, build
---

[toc]

<!-- For instructions on shaping a project see here: [Shaping a Project](/kX02SXVbS6KzMOQd56i6Cg) -->

# Conformance tests/lint (Esteban's project #1)

## Problem

<!--
*From: [Problem guidance](https://basecamp.com/shapeup/1.5-chapter-06#ingredient-1-problem)*

*The best problem definition consists of a single specific story that shows why the status quo doesn’t work.*
-->

In a component library, it's important that the components follow consistent API patterns and coding style. This makes them both easier to use and easier to maintain. However, even with clear coding standards and careful reviewers, inconsistencies can easily slip by. For example, this might happen if someone copy-pastes a component and forgets to change the name, or if different component authors come up with different names/signatures for a prop that does the same thing.

This is where conformance tests (and lint rules) come in. They run by default against all components and provide automated verification that standards are being followed.

Fluent UI v0 has some conformance tests, but they're tightly coupled to the host project (not reusable in other projects or for partners) and may not quite match the standards we want for our new converged components.

Another consideration is that some conformance tests could also be implemented as ESLint rules, which may be more efficient in some cases.

## Appetite

<!--
*From: [Appetite guidance](https://basecamp.com/shapeup/1.5-chapter-06#ingredient-2-appetite)*

*Think of this as another part of the problem definition. We want to solve this problem, but we also need to do it in a way that will leave time to solve other problems. Here we depart from Shape Up, and allow for timeframes or appetites of 1-6 weeks.*
-->

TODO: Flesh out project, then determine timeline (probably 2-4 weeks)

## Solution

<!-- *The solution is what makes this Project "Shaped".
Without a specific solution defined, we leave too much ambiguity and risk for the Project team to figure out while coding. This is not, however, a detailed spec or list of tasks. It is good to leave details open for the team to decide on while building the solution* -->

TODO: As part of defining the solution to this problem, you'll need to answer these questions:

1. What is our ideal state with component conformance? (define the set of rules we want)
2. What tools/resources exist to help us get there?
   - Within our repo (mainly v0 conformance tests)
   - Are there existing ESLint rules for any of the practices we'd like to enforce?
   - Are there any existing tools for component conformance testing (with Jest)?
3. What should the dev experience of running the tests be?
4. What is the sequencing for bringing tests online? (We can't do it all at once, so what comes first?)

### More details/considerations

#### Defining rules

Defining the set of rules will require looking through the existing tests (see [Conformance tests](/g46wd9OtTzSAbq5d7QFwMA)) and determining which ones are relevant to converge, as well as talking to team members (David, Levi, others?) to determine what we want to test for new components. (v0 tests are a good starting point, but there are likely some other things we'll want to cover.)

#### Implementing and sharing rules

For rules which can be implemented with static code analysis, we should probably use ESLint. For the remaining rules/tests, we'll use Jest. In either case, the goal will be to set the rules up in a way that's reusable between packages (inside and maybe later outside our repo).

Sharing ESLint rules is straightforward: we can make a new package `@fluentui/eslint-plugin-conformance` for custom rules (and custom config of existing rules), and other packages can depend on it.

For Jest, we should first look online to see if someone else has written a tool for running component conformance tests.
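(Circling back to the ESLint half for a moment: as a rough, hedged sketch, a shareable plugin along the lines described above might look like this. The package name comes from above, but the specific rule shown, `no-default-export`, is just an illustrative example of the kind of conformance rule we might write — not a decision.)

```typescript
// Hypothetical sketch of @fluentui/eslint-plugin-conformance (rule and names are illustrative).
// Each rule follows the standard ESLint rule shape: `meta` plus `create(context)`,
// where `create` returns an object of AST visitor callbacks keyed by node type.

const noDefaultExport = {
  meta: {
    type: 'problem',
    docs: { description: 'Component files should use named exports, not default exports.' },
    messages: { noDefault: 'Use a named export instead of a default export.' },
  },
  create(context: { report: (descriptor: { node: unknown; messageId: string }) => void }) {
    return {
      // Called by ESLint for every `export default ...` in the file being linted
      ExportDefaultDeclaration(node: unknown) {
        context.report({ node, messageId: 'noDefault' });
      },
    };
  },
};

export const conformancePlugin = {
  rules: { 'no-default-export': noDefaultExport },
  // Shareable config, so consumers can `extends: ['plugin:@fluentui/conformance/recommended']`
  configs: {
    recommended: {
      plugins: ['@fluentui/conformance'],
      rules: { '@fluentui/conformance/no-default-export': 'error' },
    },
  },
};
```

Consuming packages would just depend on the plugin and extend its `recommended` config, so rule changes ship with a normal package version bump.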
This seems like a common problem, so you'd *think* someone would have made a tool for it by this point. When evaluating any tools, we should consider:

- Can it test the things we care about?
- Does it have any assumptions baked in which will conflict with our preferences/style?
- How hard is it to integrate?
- If we find a tool we're interested in integrating, we'll set a hard limit on how long we can spend trying to get it working (to avoid situations like with Monaco last summer)

If we do have to write our own conformance test runner, here's part of what I'd envision it looking like:

- Separate package `@fluentui/react-conformance` or similar
- Has a default set of conformance tests to run for a given component
- Exports a function accepting:
  - either a glob of component files, or the path to a single component file (see TBD below)
  - override options (turn off certain tests for certain components)
  - possibly custom tests to run (optional)
- TBD: how should this be run?
  - Call the function within each component's other tests, like v0 does today (downside: people could forget/decline to call it)
  - Call from a centralized conformance test file in each project (like `conformance.test.tsx`) with a glob of all component files + customizations
  - Run as a separate build step/tool

#### Using rules

Primary focus will probably be on getting the rules working against the converged components. If those aren't in a state where conformance tests are relevant yet, the focus will be v0 (since those tests are in a better starting state), ensuring that we split out anything specific to v0 from things that will be more generally relevant. Applying to v7 is a bonus.

## Risks (Rabbit holes)

<!--
*From: [Rabbit hole guidance](https://basecamp.com/shapeup/1.5-chapter-06#ingredient-4-rabbit-holes)*

*Another key aspect of shaping is de-risking. This involves identifying potential issues and complications in the solution. These may be non-obvious cases where the solution doesn't work.
These could be constraints from other parts of the system (dependencies or dependent code). This aspect of shaping is what typically requires the most experience and understanding of the domain. This is likely something we will all collectively get better at with practice.*
-->

We could easily spend a lot of time investigating and attempting to integrate 3rd-party tools. Set a specific timeline to mitigate this (if it's not done by then, stop).

There may be some philosophical differences between teams: for example, default exports; number of exports per file; interface/type location; type vs interface for props.

We can always make more conformance tests, so we have to prioritize and scope what to tackle now.

## Out of scope (No-gos)

<!--
*From: [No-gos guidance](https://basecamp.com/shapeup/1.5-chapter-06#ingredient-5-no-gos)*

*A key way to deal with complicated risks or issues with the solution is to decide a particular functionality is out of scope. If reducing the scope to remove a risk or issue does not prevent us from fulfilling the original problem, then it is fine. Reducing scope may require reaching out to the original customer that had the problem we are solving, or working with a representative of the customer on our team.*
-->

Fixing anything but the most trivial issues with components is out of scope. (Just disable the particular test for that component instead.)

We probably won't fully implement every conformance test idea we think of, or build a "perfect" conformance test suite.

## Appendix: Rules today + ideas

See [Conformance tests](/g46wd9OtTzSAbq5d7QFwMA)
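As one more idea for the pile: here's a minimal, hypothetical sketch of what the core of the test runner envisioned in the Solution section could look like. Everything here (`runConformanceTests`, `ComponentInfo`, the two sample checks) is an illustrative assumption, not an existing API; a real version would wire each check into its own Jest `it()` block and derive the component info from a file path or glob rather than taking it pre-computed.

```typescript
// Sketch only: a default set of conformance checks plus per-component overrides.

// Minimal info about a component; a real runner would extract this from source files.
interface ComponentInfo {
  displayName: string;
  filePath: string;
  exportsDefault: boolean;
}

// Each check returns a failure message, or undefined if the component passes.
type ConformanceTest = (info: ComponentInfo) => string | undefined;

interface ConformanceOptions {
  disabledTests?: string[]; // turn off certain tests for certain components
  extraTests?: Record<string, ConformanceTest>; // optional custom tests
}

const defaultTests: Record<string, ConformanceTest> = {
  'name-matches-filename': info => {
    const fileName = info.filePath.split('/').pop()!.replace(/\.tsx?$/, '');
    return fileName === info.displayName
      ? undefined
      : `displayName "${info.displayName}" does not match file "${fileName}"`;
  },
  'no-default-export': info =>
    info.exportsDefault ? 'component files should use named exports only' : undefined,
};

export function runConformanceTests(
  info: ComponentInfo,
  options: ConformanceOptions = {},
): string[] {
  const tests = { ...defaultTests, ...options.extraTests };
  const failures: string[] = [];
  for (const [name, test] of Object.entries(tests)) {
    if (options.disabledTests?.includes(name)) continue;
    const failure = test(info);
    if (failure) failures.push(`${name}: ${failure}`);
  }
  return failures;
}
```

Keeping the checks as plain functions that return failure messages (instead of calling `expect` directly) would also make the runner itself easy to unit-test and reusable outside Jest if we ever want that.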