# OSCAL Sprint Planning Meeting 20221212
- How do we manage and plan our engineering activities? Specifically, we need to think about:
- Robust strategy for maintaining OSCAL models
- Target dates for key milestones
- Tech transition plan
- What isn't working or needs improvements?
- Managing scope of different tracks and priorities (not comprehensive)
- Different tracks and dependencies need enumeration; they are not clear or well-defined to everyone (TBD today or soon)
- OSCAL models
- Documentation generation for OSCAL
- OSCAL public website(s)
- OSCAL specifications: profile resolution, etc.
- OSCAL toolchains for example production and rendering
- Metaschema superset
- Core Metaschema model
- Library implementations (XSLT, Java, JS/TS, etc.)
- Schema generators
- Documentation generators
- Content generators
- Parsers and object mapper libraries
- Supporting developer tooling
- hugo-uswds and website theme/template tools
- How do we manage scope of miscellaneous or peripheral OSCAL-related work
- Security and CI/CD templates across OSCAL projects
- usnistgov/blossom-case-study
- variety of XSLT-based demo applications, XSLT utility libs
- oscal-deep-diff
- How do we manage in-house dependencies (original/derivative XML/data format libraries, CommonMark-conformant libs, etc.)
- How do we manage datasets of OSCAL content that are semi-official (NIST SP 800-53, 800-53A, others TBD)
- Developers are heavily siloed.
- Visibility of work by individuals is limited
- Reasons:
- Scope of work is broad (see above)
    - hard to reason about
- hard to debug
- Fragmentation of expertise / too many technologies
- Existing code is not approachable by new developers
- Missing / incomplete documentation
- Work is not self-explanatory and even adding tests may be insufficient
- Progress is slow
- Reasons:
- Limited tests
- Limited documentation
    - **Few explanations of how/why things were designed in a specific way**
- Poor prioritization or not following priorities
    - Sometimes the high-priority issues are nebulous while lower-priority issues are more specific; work gravitates toward specificity
- Smaller, low priority tasks are often used to unwind or take a break
- Some aspects of the project are overly complex
  - We wanted to be extremely flexible and enable the community, but the community may not hear that message or understand why flexibility was prioritized.
  - We don't clearly define where complexity is necessary
- Much development has been organic, complexity has accreted
- Getting started with OSCAL modeling is difficult
- Need some kind of quickstart
- People can understand the domain and application, but not the modeling -> quickstarts, examples are necessary to engage newcomers
- Our target audience for engineering is front-line developers
- Use of GitHub here is consistent with other open source projects
- Other stakeholders need engagement through different channels
- Issues should clearly document the goals and acceptance criteria needed for completion
  - Team members leave goal and AC fields in the template as-is, do not customize them, and often do not check off relevant/all items when complete (so why are we even reviewing them?)
- Development goes beyond the essential needed to meet goals
- Developers are responsible for clarifying goals and ACs at work onset and afterwards
- Identify which issues are not complete and ready to start
- Not enough design at the front end to clarify development
- Developers are responsible for clarifying design at work onset and ongoing
- Primary developer for some issues is not clear
- We do not have a team-wide norm for asynchronously acknowledging issue changes, comments, or feedback
- Team wide norms
- "Thumbs up" acknowledgement?
- "eyes" reaction for reading
- Monitor mentions?
- PR review
  - More peer review of PRs is needed
- Potentially reduces siloing over time
  - Code owners reinforce siloing, but guarantee a review
- How do we create incentives for more reviewers? How do we measure?
- Too many topics for a given sprint
  - Community focus is spread across different aspects and areas of OSCAL
- Large number of developmental areas (i.e., OSCAL models, tools, CI/CD)
- No clear team understanding of what the critical path is
- What is a definition of *done* for OSCAL?
- Macro: What are we trying to accomplish with OSCAL at a broad scale?
- What is the portion of the ocean we are trying to boil? What is out of scope?
- What method do we use to define the scope?
- How do we keep OSCAL open (e.g., FISMA, ISO 27002), while scoping it?
    - What responsibility does the community have in pushing OSCAL within its scope?
- Micro: How do we address small feature requests and refinements requested by the community?
- Community responsiveness for early adopters is important to keep them engaged
- How do we prioritize both?
- Empower the community to help us raise issues and work problems on the small scale?
- What are the rules of the game for the community to play? How to get solutions vs problems?
    - How do we get community consensus on the problem before recommending a solution?
- How would the *thing* get done without OSCAL?
- What are the incentives, how do we make them clear and equitable, internally and externally?
- The team needs a shared understanding of what the most critical issues are that need to be worked.
- How do we raise and work out an action plan to address blocking concerns or significant development risks?
- Not enough examples to support development
- What is the pathway to developing schemas, examples? Is there one, many?
- As a new developer, how do I change a schema or provide an example? How do I demonstrate a use case?
- Accepting CI/CD runs for a new developer
- Is CI/CD a complication? Maybe oscal-cli is a solution?
- How do we balance the perspectives of suppliers vs operators?
- vendors (suppliers) -> front line developers
- agencies (operators) -> data and risk owners - How do we get more feedback from this group?
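
The quickstart gap called out above might be filled with a minimal Metaschema module. The sketch below uses element names from the Metaschema syntax (`METASCHEMA`, `define-assembly`, `define-field`, `define-flag`), but the "computer" model itself is a hypothetical, simplified illustration, not an OSCAL model:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical minimal Metaschema module for illustration only.
     A module like this drives the generators listed above:
     schema generation, documentation generation, and parser/object
     mapper generation all consume the same definitions. -->
<METASCHEMA xmlns="http://csrc.nist.gov/ns/oscal/metaschema/1.0">
  <schema-name>Computer Model</schema-name>
  <schema-version>0.0.1</schema-version>
  <short-name>computer</short-name>
  <namespace>http://example.com/ns/computer</namespace>
  <json-base-uri>http://example.com/ns/computer</json-base-uri>

  <!-- An assembly is a structured object; this one is a document root. -->
  <define-assembly name="computer">
    <formal-name>Computer</formal-name>
    <description>A container for information about a computer.</description>
    <root-name>computer</root-name>
    <!-- A flag becomes an XML attribute / JSON property. -->
    <define-flag name="id" as-type="string" required="yes">
      <formal-name>Computer Identifier</formal-name>
      <description>A unique identifier for this computer.</description>
    </define-flag>
    <model>
      <!-- A field holds a simple value. -->
      <define-field name="vendor" as-type="string">
        <formal-name>Vendor Name</formal-name>
        <description>The name of the computer's vendor.</description>
      </define-field>
    </model>
  </define-assembly>
</METASCHEMA>
```

A quickstart built around one small module like this would let a newcomer follow the full pipeline (module, generated schemas, generated docs, sample content) without first understanding the complete OSCAL model set.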
# Parking lot
- How do we engage community adopters who need deeper, directed help with specific examples or establishing initial OSCAL architecture?