@tonyfast response to @gabalafou: We are testing different representations of notebooks and cells with automated and manual testing. The notebook variants let us track the different versions of notebooks we have been testing for accessibility. Some of the variants were designed specifically for user testing; other experiments explore idealized representations of the notebooks and their accessibility object model.
@gabalafou to @tonyfast: Thanks! What I was really trying to find out when I put this on the agenda was not so much a walk-through of the architecture and how these variants are generated, but, since I don't have time to test every variation, which variants I should explore and test. Perhaps we can cover this in the next meeting.
Some of the variants are from a parametric study exploring how cells could be configured as ordered lists, unordered lists, or definition lists; we represent them as tables and feeds, too. Through the parametric study we could explore the space of possible semantics.
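As a rough sketch, the parameter space might look like the following. The container/cell pairings here are illustrative assumptions, not the actual test matrix:

```json
{
  "variants": [
    { "container": "ol", "cell": "li" },
    { "container": "ul", "cell": "li" },
    { "container": "dl", "cell": "dt + dd" },
    { "container": "table", "cell": "tr" },
    { "container": "feed", "cell": "article" }
  ]
}
```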
Notes
Discussion of work related to scrolling and virtual windowing.
Late edit: ordered lists might be preferable semantics compared to a feed, but we can address this when we test with a screen reader.
There's a separate push to make JupyterLab (Notebook?) completely usable by keyboard only.
top level: `main > feed`
We hope to modify the semantics of the Jupyter Notebook interface. There would be no visual changes; we will add roles and ARIA attributes to improve the primary navigation of the page with assistive technology.
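A minimal sketch of the intended accessibility tree, assuming the `main > feed` structure above. The labels and cell contents are illustrative; the ARIA feed pattern itself does require `article` children with `aria-posinset`/`aria-setsize`:

```json
{
  "role": "main",
  "children": [
    {
      "role": "feed",
      "aria-label": "Notebook cells",
      "children": [
        { "role": "article", "aria-posinset": 1, "aria-setsize": 2, "aria-label": "Code cell 1" },
        { "role": "article", "aria-posinset": 2, "aria-setsize": 2, "aria-label": "Markdown cell 2" }
      ]
    }
  ]
}
```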
Summary: We spent this session discussing what it would take to implement a more explicit accessibility object model for the new Jupyter Notebook. We reviewed the accessibility affordances of the notebooks-for-all project. Our goal is to capture a similar accessibility object model for the Jupyter Notebook release and live up to the accessible v7 promise. This effort would knock out some items in the @manfromjupyter audit: https://github.com/jupyter/notebook/issues/6800
In the near term, it would help to split up this issue like we did with #9399.
One must create a new metaschema that defines these vocabularies and copies the metaschema that it "inherits" from (or uses allOf?).
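A minimal sketch of such a metaschema, assuming JSON Schema draft 2020-12; the example.com `$id` and the units vocabulary URI are placeholders:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://example.com/meta/notebook-extended",
  "$vocabulary": {
    "https://json-schema.org/draft/2020-12/vocab/core": true,
    "https://json-schema.org/draft/2020-12/vocab/applicator": true,
    "https://json-schema.org/draft/2020-12/vocab/validation": true,
    "https://example.com/vocab/units": false
  },
  "$dynamicAnchor": "meta",
  "allOf": [{ "$ref": "https://json-schema.org/draft/2020-12/schema" }]
}
```

The allOf answers the "copy or inherit" question above: the new metaschema references the standard 2020-12 metaschema rather than duplicating its contents.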
The $vocabulary section of a metaschema lists the vocabularies in use, each with a boolean flag indicating whether an implementation must refuse to process the schema if it does not support that vocabulary. The units keyword above does not affect validation, so it can safely be ignored if the validator cannot find the URI (it is metadata). Other keyword schemas might not be so permissive:
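For example, a schema using a hypothetical even keyword, a stand-in for any validation-affecting keyword defined by the optional vocabulary above (reconstructed for illustration):

```json
{
  "$schema": "https://example.com/meta/notebook-extended",
  "type": "integer",
  "even": true
}
```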
This schema would incorrectly validate documents with odd integers, but the essence (an integer) is still upheld. A keyword that changed the "type" would not be ignorable if the validator is to be at all useful.
Modern JSON Schema introduces vocabularies, which let you define a group of keywords and identify them with a URI. Schema authors can then use that URI to tell implementations that they need to support the vocabulary in order to use the schema. If an implementation can't, instead of failing validation, it refuses to run the schema and indicates which vocabularies it doesn't understand.[2]
i.e., $vocabulary solves the problem of "is this failure an 'unrecoverable' error?".
We could use this to introduce a top-level extraSchemas field (?)
Crucially, it means that validators that don't understand what to do with extraSchemas won't try to validate the document.
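A hypothetical sketch of a notebook document carrying such a field; the extraSchemas name comes from this discussion, and the schema URL is a placeholder:

```json
{
  "nbformat": 4,
  "nbformat_minor": 5,
  "extraSchemas": ["https://example.com/schemas/accessibility-metadata.json"],
  "metadata": {},
  "cells": []
}
```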
Challenges
Extra schemas: Failure modes
How can our approaches fail?
two conflicting extra schemas (see the sketch after this list)
How can users save themselves if we break stuff? What happens when code/clients break?
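To make the first failure mode concrete: a hypothetical pair of extra schemas that constrain the same field incompatibly, so no document defining priority can satisfy both (names and URLs are illustrative):

```json
[
  {
    "$id": "https://example.com/schemas/a.json",
    "properties": { "priority": { "type": "integer" } }
  },
  {
    "$id": "https://example.com/schemas/b.json",
    "properties": { "priority": { "type": "string" } }
  }
]
```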