# Assembler Design
###### tags: `technical`
## Goals
- Obviously, create a working LC-3 assembler
- Provide novel features beyond those of lc3tools
  - meaningful feedback to the assembly writer
    - very **good error messages**
  - **syntax highlighting**
  - **linking**
    - i.e. allow labels to reference labels in other files
- Enable us to **assemble to different object file formats** for flexibility in our debugger design
- **Interoperability** with our other tools
## References
### The Book ([Reference](https://www.academia.edu/34254842/INTRODUCTION_TO_COMPUTING_SYSTEMS_FROM_BITS_AND_GATES_TO_C_AND_BEYOND_SECOND_EDITION_International_Edition_2005), see Ch. 7 and Appx. A)
- One instruction consists of four parts:
  - ```LABEL OPCODE OPERANDS ; COMMENTS```
    - where `LABEL` and `; COMMENTS` are optional
  - all of the above only ever shown on one line (i.e. line breaks not valid whitespace between and within these four parts)
  - operands may (must?) be comma-separated
    - optional whitespace after comma, but maybe not before
  - opcodes, etc. only shown in uppercase
- Registers
  - represented as `R0`, `R1`, ..., `R7`
- Literals
  - must have leading `#` for dec, `x` for hex, `b` for bin
  - leading zeroes (after above symbol) allowed for hex
  - negative decimal literals allowed with leading `#-`
- Labels
  - 1-20 chars
  - first char must be alphabetic
  - otherwise alphanumeric
  - lowercase allowed, presumably case-sensitive
- Pseudo-ops (Assembler Directives)
  - `.ORIG`
    - Usage: `.ORIG ADDRESS_LITERAL`
    - literal only shown to be hex
    - put first instruction there
  - `.FILL`
    - Usage: `.FILL LITERAL`
    - literal only shown to be hex
    - put `LITERAL` here
  - `.BLKW`
    - Usage: `.BLKW DECIMAL_NUMBER`
    - leave that many memory locations empty here
  - `.STRINGZ`
    - Usage: `.STRINGZ STRING_LITERAL`
    - put chars of string literal into next memory locations sequentially
    - null-terminated
    - one char per location (ASCII)
  - `.END`
    - Delimiter marking end of program
- Named `TRAP`s
  - `GETC`
  - `OUT`
  - `PUTS`
  - `IN`
  - `PUTSP`
  - `HALT`
- Two Passes
  1. create symbol table
  2. generate binary
- `.EXTERNAL` Pseudo-op
  - solely theoretical (for now :wink:)
  - Usage: `.EXTERNAL LABEL`
  - provides symbol table entry for `LABEL` without a location in current file
    - location filled into symbol table by a **linker**
  - (not exactly from book) potential design:
    1. create symbol table
    2. link (fill in empty locations across files)
    3. translate to binary
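The literal rules above are simple enough to sketch directly. A minimal, hypothetical parser (the `Literal` enum and `parse_literal` name are illustrative, not from any existing codebase):

```rust
// Hypothetical sketch of the book's literal rules: leading `#` for decimal
// (with optional `-`), `x` for hex (leading zeroes fine), `b` for binary.

#[derive(Debug, PartialEq)]
enum Literal {
    Dec(i32),
    Hex(u16),
    Bin(u16),
}

fn parse_literal(s: &str) -> Option<Literal> {
    if s.len() < 2 || !s.is_char_boundary(1) {
        return None;
    }
    let (prefix, rest) = s.split_at(1);
    match prefix {
        "#" => rest.parse::<i32>().ok().map(Literal::Dec), // `#-5` parses via `-5`
        "x" => u16::from_str_radix(rest, 16).ok().map(Literal::Hex),
        "b" => u16::from_str_radix(rest, 2).ok().map(Literal::Bin),
        _ => None,
    }
}
```

A real implementation would also enforce range limits per operand (e.g. 5-bit immediates), which this sketch leaves out.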
### "Prior Art"
- **LC3Edit**? (from Quick Reference found [here](https://www.cs.utexas.edu/users/fussell/courses/cs310/tools/simulator/LC3WinGuide.pdf) -- see Ch. 7)
  - Labels
    - case-sensitive
    - max 10 chars
    - begin with letter or underscore
    - otherwise alphanumeric
    - optionally terminated with colon (which is not included in resulting symbol)
    - max 1 per **memory location**
  - Instructions
    - operands optionally comma-separated
    - presumably whitespace-separated otherwise
    - *allow mixed separators?*
  - Pseudo-ops
    - 1 `.ORIG` and 1 `.END` per file
    - labels can be used with `.FILL`
      - this one is necessary, of course, but it implies that not all pseudo-ops need to support labels
    - may appear anywhere between `.ORIG` and `.END`
      - presumably not inside instructions
  - Literals
    - no binary
    - `-` allowed for decimal, but *not* hex
- **lc3tools**
  - Strict mode
  - Lenient mode
## Design Decisions
Here's how I see our LC-3 assembler at its core:
```rust
use lc3_isa::{Addr, Instruction};

struct Object {
    start: Addr,
    insns: Vec<Instruction>,
}

type Assembler = fn(&str) -> Result<Object, AssemblerError>;
```
It attempts to parse an input string into our LC-3 instructions, otherwise returning an error which makes it easier for the user to change their input to a valid one.
We could make it assemble to a simpler object format, like a memory dump, but I think this is simple enough to accomplish from the above `Object` that we don't need to go that far. This could (should?) definitely be an optional mode, though.
The above is very simple to reason about. However, one of the features we want is external symbols, and that doesn't make sense where the string only contains one LC-3 object.
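To back up the claim that a memory dump is easy to derive from `Object`: the whole step is a few lines, assuming instructions encode to 16-bit words (the `Addr`/`Word` aliases and `words` field here are stand-ins for the real `lc3_isa` types, not our actual definitions):

```rust
// Sketch only: `Addr`/`Word` stand in for lc3_isa types, and `words`
// stands in for already-encoded instructions.

type Addr = u16;
type Word = u16;

struct Object {
    start: Addr,
    words: Vec<Word>,
}

// Flatten an Object into a full 2^16-word memory image,
// zero-filled outside the object's range.
fn dump(obj: &Object) -> Vec<Word> {
    let mut mem = vec![0u16; 1 << 16];
    for (i, &w) in obj.words.iter().enumerate() {
        mem[obj.start as usize + i] = w;
    }
    mem
}
```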
- How should we allow users to assemble multiple files and load them into memory?
- How many `.asm` files?
  - To support linking user programs with the OS and drivers we supply, I think we should allow as many as the user wants to combine.
  - We should have a linker or some linking process that combines all of the resulting `Object`s into one, reporting problems with overlap in memory, duplicate external symbols, etc.
- How many `.ORIG`/`.END`s per `.asm` file?
  - I think each `.ORIG`/`.END` should correspond/assemble to an `Object` as defined above.
    - Though, if we allow external symbols/labels, we need to assemble to a less complete `Object` like the following:
    - ```rust
      struct UnlinkedObject {
          start: Addr,
          abstract_insns: Vec<ASTInstruction>,
      }
      ```
  - =1?
    - Similar to LC3Edit's approach
  - \>=1 (many)?
    - I think **many** is more convenient and I see no reason it would be much more difficult. In fact, without allowing **many**, `.END` serves (almost) no purpose (it's redundant with a file or string's terminator, but could be useful if assembling a stream?), and its existence makes it easy to identify different `Object` sources.
To provide good error messages, (Rahul and) I think it would help to build an AST from the input. This would also help for syntax highlighting, though identifying categories of tokens may be good enough for that.
- Should we build an AST?
  - Yes, for easier error handling.
- How many intermediate representations (AST or otherwise) are necessary?
  - Hard to say until we figure out what errors they could help cover or what object files they could support.
  - I imagine there's a time cost for copying the tree into a different form.
  - I want to say 1-3 for easier reasoning (and fast assembly?).
### Developing One Possible Process
The most flexible tool (IMO) for this sort of thing in Rust is `nom`, and I sort of settled on it in the Discord, so I'm going to assume we're using it from here on and I may use terms related to it. The main thing to know is that it is a parser combinator library which provides functions that can build large functions of roughly the form
```rust
type Parser = fn(&str) -> Result<(&str, Output), ParserError>;
```
out of other functions with that form. The `&str` in the result is the slice of the input `&str` remaining after the `Parser` parses a portion of it to produce its structured `Output`. The parsing thus usually (always?) occurs from the beginning to the end of the string.
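To make that shape concrete, here is a hand-rolled miniature of the idea. To be clear, this is *not* nom's actual API (nom returns `IResult`, is generic over input types, and has far richer error handling); it only shows how small parsers of that form compose into bigger ones:

```rust
// Hand-rolled illustration of the parser-combinator shape, NOT nom's API.
// The remaining input travels alongside the structured output.
type PResult<'a, T> = Result<(&'a str, T), String>;

// A tiny parser: consume an expected literal prefix.
fn tag<'a>(expected: &'a str) -> impl Fn(&'a str) -> PResult<'a, &'a str> {
    move |input| match input.strip_prefix(expected) {
        Some(rest) => Ok((rest, expected)),
        None => Err(format!("expected {:?}", expected)),
    }
}

// A combinator: run two parsers in sequence, pairing their outputs.
fn pair<'a, A, B>(
    p1: impl Fn(&'a str) -> PResult<'a, A>,
    p2: impl Fn(&'a str) -> PResult<'a, B>,
) -> impl Fn(&'a str) -> PResult<'a, (A, B)> {
    move |input| {
        let (rest, a) = p1(input)?;
        let (rest, b) = p2(rest)?;
        Ok((rest, (a, b)))
    }
}
```

nom provides a large library of such combinators (alternation, repetition, mapping) so we only write the leaf parsers ourselves.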
With that in mind, the simplest system would be to build an AST token by token.
The first token we expect is `.ORIG`, **which can't have a label**. I think we should read for that, then once it's found, begin parsing an `Object`.
- Should we allow anything before `.ORIG` and just ignore it?
  - I want to say yes, but maybe we should have a strict mode which disallows it.
After finding `.ORIG`, expect whitespace, then expect a hex address literal. After, ignore optional whitespace and expect either comment (including newline) or newline.
- Should the assembler concern itself with whether the program will lie in user space or not?
  - I want to say no for separation of concerns. Is there a case where it could help fix a student's mistake?
- Lenient idea: allow non-hex literals for `.ORIG`?
  - Probably not a good idea for the default assembler mode, and probably not useful, but I don't see why not...
We then expect a series of instructions. To at least match lc3tools, I want to allow labels on a separate line from the opcode and operands. So we should first read the next token. There is officially no rule (I found) saying that labels can't be the same as opcodes, but I'm not sure we can successfully disambiguate. If we allow labels to be on a different line **and** allow labels to be the same as opcodes, the following program is ambiguous:
```asm
.ORIG x3000
RET
HALT
.END
```
We can't determine if `RET` is a label for `HALT` or its own instruction.
Another case:
```asm
.ORIG x3000
ST HALT
HALT HALT
BRnzp ST ; we could maybe say that the second line's ST is a label with this line, ambiguous otherwise...?
.END
```
Again, we can't determine if `ST` is a label for `HALT` or if it's supposed to store at the next location. In some cases, we could infer that `ST` is a label based on whether there is an unambiguous `ST` label elsewhere. This sort of system seems like a lot of work.
I think the more feasible solution is to not allow labels to be the same as opcodes. Then, I don't think there are any problems with allowing them on previous lines. To work around the restriction, if users really want a label like `JMP`, they can just use `JMP:` instead.
With this in mind, parsing the language is painless and roughly O(n) (and context-free?).
- Read until `.ORIG`, ignoring anything else.
  - We may consider requiring whitespace before `.ORIG`? I think it's unnecessary, but if we do, we can do a lexing pass which turns opcodes, pseudo-ops, etc. into tokens. This could work better with a parser generator like `lalrpop` or `pest`.
- Read `.ORIG`; ignore following whitespace
- Read hex address value; ignore following whitespace, comment, line ending
- Repeat the following:
  - Ignore whitespace, read next substring until whitespace
  - match on substring:
    - `.END` => break/return;
    - `<OPCODE>` => read next X substrings as separated list of operands
    - `<VALID LABEL>` => ignore whitespace, comments, line endings; then read next substring as OPCODE, then do above branch
    - `<OTHER>` => **initially, fail parse. later, for syntax highlighting / good errors, absorb into syntax tree**
  - Ignore whitespace, then comment and/or line ending
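The steps above can be sketched as a plain loop over a whitespace-split token stream. This is a hypothetical skeleton, not our real parser: the opcode table is truncated, and comma/whitespace handling within operands is elided.

```rust
// Hypothetical sketch of the parse loop described above.
// Truncated opcode table: operand counts for a few opcodes only.
fn opcode_operand_count(word: &str) -> Option<usize> {
    match word {
        "ADD" | "AND" => Some(3),
        "NOT" => Some(2),
        "JMP" => Some(1),
        "RET" | "HALT" => Some(0),
        _ => None,
    }
}

// Returns (label, opcode, operands) triples for one .ORIG/.END object,
// or an error at the first unrecognized word.
fn parse_object(words: &[&str]) -> Result<Vec<(Option<String>, String, Vec<String>)>, String> {
    let mut out = Vec::new();
    let mut it = words.iter();
    // Read until .ORIG, ignoring anything else.
    while let Some(w) = it.next() {
        if *w == ".ORIG" {
            break;
        }
    }
    let _addr = it.next().ok_or("expected address after .ORIG")?;
    let mut pending_label: Option<String> = None;
    while let Some(w) = it.next() {
        match *w {
            ".END" => return Ok(out),
            op if opcode_operand_count(op).is_some() => {
                let n = opcode_operand_count(op).unwrap();
                let operands: Vec<String> =
                    (0..n).filter_map(|_| it.next().map(|s| s.to_string())).collect();
                out.push((pending_label.take(), op.to_string(), operands));
            }
            label if label.chars().next().map_or(false, |c| c.is_alphabetic()) => {
                pending_label = Some(label.to_string());
            }
            other => return Err(format!("unexpected token {:?}", other)),
        }
    }
    Err(".END not found".into())
}
```

Note how the label arm only works because the opcode arm is checked first, i.e. this bakes in the "labels can't be opcodes" decision from above.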
### The AST
I started working on the above system with `nom` and found that `nom` is very good if we want to fail fast. However, it's a bit tricky if we want it to try and finish parsing and provide feedback in multiple places. For the purposes of syntax highlighting, if we want to provide deeper feedback than lexical highlighting, like underlining multiple errors, I imagine we first need to be able to parse any arbitrary string into a Complete Syntax Tree (CST) containing all of the original input in a form as structured as possible, then analyze it.
## 2020/2/1
I think I've found a simpler method to gradually develop the CST we're looking for. We'll develop a sequence of complete syntax structures, which I'll refer to as intermediate representations (IRs). Each step of the process will result in an IR containing every token.
### IRs
1. Tokens
    - `Vec<Token>`
    - Produced by lexer
2. Simple Lines
    - ```rust
      Vec<SimpleLine>

      struct SimpleLine {
          tokens: Vec<Token>,
          comment: Option<Token>,
          newline: Option<Token>,
      }
      ```
    - Split by `Token::Newline`. IMO, the most obvious errors are those where newlines are in the wrong places. We can leave these up to the user and it makes it simple for us to analyze the syntax line-by-line.
    - Separate out the comments and newlines, since comments have no meaning and newlines have their meaning interpreted at this step. Comments are obviously optional, and only one can appear in a line according to the regexes in the lexer (a comment is a semicolon, then any number of non-newline characters). It shouldn't be a problem to find all the comments at the same time as the newlines.
3. Lines (Abstract Labels and Operations)
    - ```rust
      Vec<Line>

      struct Line {
          content: LineContent,
          whitespace: Vec<Token>,
          comment: Option<Token>,
          newline: Option<Token>,
      }

      enum LineContent {
          Valid(Option<Label>, Option<Operation>),
          Invalid(Vec<Token>),
      }
      ```
    - Interpret the `SimpleLine`s as either lone labels, operations, blank lines, or errors.
    - We can work to diagnose `LineContent::Invalid`.
    - Use `LineContent::Valid` if we interpret the tokens in the line as a valid label or operation based on the pattern of tokens. They won't be perfectly valid, as `Word` tokens still need to be disambiguated and validated (e.g. `ADD EGG, R1, R2` would be ok until we validate the reg values).
4. Objects
    - ```rust
      struct UnvalidatedFile {
          objects: Vec<UnvalidatedObject>,
          other: Vec<Line>,
      }

      struct UnvalidatedObject {
          operations: Vec<UnvalidatedLine>,
          empty_lines: Vec<Line>,
          hanging_labels: Vec<Line>,
          invalid_lines: Vec<Line>,
      }

      struct UnvalidatedLine {
          label: Option<Label>,
          operation: Operation,
          whitespace: Vec<Token>,
          comment: Option<Token>,
          newline: Option<Token>,
      }
      ```
    - Organize the lines into objects starting with `.ORIG` and ending with `.END`.
    - Simultaneously, combine operations with their labels, dumping any invalid lines or hanging labels into appropriate vectors.
    - An object won't assemble if there are any hanging labels or invalid lines.
5. Token Content Validation
    - Go in and check that all the `Token::Word`s are valid.
      - Labels (just length, character limitations)
      - Operands
    - After this step, if all the operations are valid in all objects, the file should be ready to try and assemble. The only possible errors left are linking errors (labels don't match, are too far away from each other) and placement errors (objects overlap or spill over into invalid addresses).
    - **The tree after this step will be the CST to use for analysis, highlighting, and assembly**. The rest will only retain the significant parts until it's finally assembled.
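As a concrete taste of these passes, the IR 1 → IR 2 step (splitting tokens into `SimpleLine`s) might look like the following. The `Token` enum here is a simplified stand-in for the real lexer output:

```rust
// Sketch of IR step 2: split a flat token stream into SimpleLines,
// pulling comments and newlines out into their own fields.
// This Token enum is a stand-in for the real lexer's output.

#[derive(Debug, Clone, PartialEq)]
enum Token {
    Word(String),
    Comment(String),
    Newline,
}

#[derive(Debug, Default, PartialEq)]
struct SimpleLine {
    tokens: Vec<Token>,
    comment: Option<Token>,
    newline: Option<Token>,
}

fn split_lines(tokens: Vec<Token>) -> Vec<SimpleLine> {
    let mut lines = Vec::new();
    let mut current = SimpleLine::default();
    for t in tokens {
        match t {
            // Only one comment per line, per the lexer's regexes.
            Token::Comment(_) => current.comment = Some(t),
            Token::Newline => {
                current.newline = Some(t);
                lines.push(std::mem::take(&mut current));
            }
            _ => current.tokens.push(t),
        }
    }
    // The final line may lack a trailing newline (end of file).
    if current != SimpleLine::default() {
        lines.push(current);
    }
    lines
}
```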
### Incomplete IRs (for assembly)
1. Expand Pseudo-ops (Assembly Pt. 1)
    - ```rust
      struct File {
          objects: Vec<Object>,
      }

      struct Object {
          orig: Addr,
          memory_locations: Vec<MemoryLocation>,
      }

      enum MemoryLocation {
          Instruction(Instruction),
          Value(Word),
      }

      struct Instruction {
          label: Option<Label>,
          operation: Operation,
          whitespace: Vec<Token>,
          comment: Option<Token>,
          newline: Option<Token>,
      }
      ```
    - Take a valid object and expand all the pseudo-ops into the appropriate values in memory.
    - After this step, each object holds the sequence of unassembled instructions and/or values for each value in memory starting at its origin, so linking is now trivial.
2. Placement
    - Calculate how all objects will fit in memory (just for linked files?).
3. Linking
4. Final Assembly
    - Create a full image of LC-3 memory with the program inside
The last three are still kind of up in the air. 2 and 3 might be swapped. How they work depends on how we want to allow multiple files and objects to be linked and loaded.
- How do we layer programs together (e.g. OS and user program(s))?
  - I think... Assemble to `.obj`/`.img`? files: memory images with start/end metadata, and have another program do layering into a final full `.mem` image, warning the user when overlap occurs.
  - I think that the process I described above would only work on unlinked objects (like OS and user program). If the objects need to be linked, the assembly process should handle their placement.
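The overlap warning mentioned above reduces to a range-intersection check over `(start, length)` pairs, one per object. A minimal sketch (the `overlaps` function and its return shape are hypothetical, not a committed interface):

```rust
// Sketch of the layering step's overlap check: each object occupies
// [start, start + len); report every pair of objects whose ranges intersect.

type Addr = u16;

fn overlaps(objects: &[(Addr, usize)]) -> Vec<(usize, usize)> {
    let mut found = Vec::new();
    for i in 0..objects.len() {
        for j in (i + 1)..objects.len() {
            let (a_start, a_len) = objects[i];
            let (b_start, b_len) = objects[j];
            let (a_end, b_end) = (a_start as usize + a_len, b_start as usize + b_len);
            // Two half-open ranges intersect iff each starts before the other ends.
            if (a_start as usize) < b_end && (b_start as usize) < a_end {
                found.push((i, j));
            }
        }
    }
    found
}
```

The layering tool would map the returned index pairs back to file/object names when printing warnings.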