This commit is contained in:
Mid 2025-01-22 17:04:52 +02:00
parent 87a07e29d6
commit 9dc5bddfef


# Nectar Reference Compiler Source Documentation
When writing a program, I usually make the most primitive and smallest code I can that does the job. If it turns out I miscalculated the complexity, or I must add some feature that isn't compatible with the codebase, I'll obviously have to refactor it. Still, I've been using this method of programming for probably my entire life.
That being said, if you know this compiler took since 2019 to get to its current state, you will correctly guess that I DO NOT KNOW WHAT I AM DOING. Compiler literature and online discussion are abstract to the point where they are not useful for real-world processors. As a result, much of what you see in the source is the result of a lot of experimentation. There are definitely better ways to do the things I show here, but I figured it's better to have at least some resource on how a "real" compiler works.
Basically, the compiler works by progressively iterating through the AST, turning it into a more primitive form step by step. This is necessary because machine code itself is primitive, and instructions typically have 0-3 operands. Thanks to both this, and Nectar itself being highly low-level, the need for an IR disappears. On the other hand, making sure the AST is in a correct state between steps is the prime source of bugs.
Currently the compiler is designed with only i386+ processors in mind. I intend to add support for i286- and other exotic processors, but I honestly don't see it happening ever, especially if this remains a solo project. More RISC architectures with regular register files will be easier to add support for, but they're also the kind for which the advantages of this programming language aren't worth the squeeze.
## AST structure
Starting with a Nectar source file, the compiler begins with the two common passes: lexing and parsing. Parsing exploits Nectar's syntax quirks, and may jump back and forth multiple times to fully parse a source file. This is necessary to avoid having to forward declare items. At the end, parsing returns what is called an AST in the source, although formally speaking the term is incorrectly used.
An AST node may not be shared by multiple parent nodes. Also, the internal Nectar AST does not have scaling for pointer arithmetic; all pointers behave as `u8*`. This is the first of many simplifications.
Each block of code is called a "chunk", likely a term I took from Lua. Chunks may contain one another; the least deep one within a function is called the top-level chunk (very important). Top-level chunks may contain other top-level chunks, because user-defined functions are within the "global scope", which is considered a function in itself. After all, nothing stops you from directly inserting instructions in the `.text` section of an executable, without attaching it to a label.
During parsing, a tree of maps is used to handle scopes and variable declarations, called `VarTable`. Its entries are of type `VarTableEntry` (VTE), which may be of kind `VAR`, `SYMBOL` (global variables) or `TYPE` (type-system entries). Shadowing in vartables is allowed, like in Nectar itself.
The top-level chunk keeps a list of variables within its `ASTChunk` structure. After a chunk is finished parsing, all local variables in the current `VarTable` are added to its top-level chunk's variable list. Names may conflict, but at this point they're no longer important. Also worth mentioning is that this flat list contains `VarTableEntry` structs, even though `VarTable`s are now irrelevant. Said VTEs are all of kind `VAR`; the rest are ignored because they're not subject to coloring.
There are enough kinds of passes to push us to have a generic way to invoke the visitor pattern on the AST. Because passes may do many different things to the AST, including modify it, the definition of a generic visitor is very broad. Most of its functionality is unused by any single pass, but all of it is needed.
Because the `neg` instruction on x86 is single-operand.
Another rule is to extract function arguments and place them into local variables, but *only* if they do not form an x86 operand (for example `5` is ok because `push 5` exists).
Dumbification must be repeated until there are no more changes. The dumbification part of the source is responsible for making sure the resulting AST is "trivially compilable" to the machine code. For example, `a = a + b` is trivially compilable, because we have the `add reg, reg` instruction. What is trivially compilable depends on which registers are used in the end (a variable colored as `edi`, `esi` or `ebp` cannot be used for 8-bit stores/loads). These details are not taken into account by dumbification.
Before dumbification is a single-use pass called pre-dumbification, which takes a top-level chunk, and inserts loads for the function arguments. Such unconditional instructions are not efficient, but they work.
Putting all of this together, here is an example of nctref's dumbification of the following Fibonacci implementation, as of writing. Here is the main Nectar source code:
fibonacci: u32(u32 n) -> {
	if(n <= 1) {
And the processed AST output by the compiler:
`@stack` is an internal variable that points to the beginning of the current stack frame.
NOTE: Later someone called this normalization, which is a much less stupid word than dumbification, and I'm shocked I never thought of it myself. There's also canonicalization...
## Use-def chain
I hate these things. Def-use chains are another variant, and both are horribly underdocumented. Their only use in most literature is so the author can immediately move on to SSA form.
That's one problem, but there's another:
Despite appearing later in the source, `x = x + 1` is a potential definition for `f(x)`! This means the UD-chain generator must go through loops twice -- once with the upper definitions, and once with definitions from within the loop. Additionally, the UD-chain is assumed to be ordered by appearance in the source, so insertion in the second pass must account for that.
Now, why did I choose UD chains? Why, simplicity, obviously.
## Coloring
At this point we have a very distorted kind of Nectar AST in our function. Sure we've got blocks and other familiar things, but all variables are in a flat list. These variables are essentially the "virtual registers" you hear a lot about. Because x86 only has six general-purpose registers, we must assign each of these variables (VTEs) to a physical machine register.
Using the same Fibonacci example as above, this is the result.
When adding a feature, first write it out in Nectar in the ideal dumbified form. Make sure this compiles correctly. Afterward, implement dumbification rules so that code can be written in any fashion. If specific colorings are required, then the pre-coloring and spill2var passes must be updated. The following is an example with multiplication, as this is what I'm adding as of writing.
Note the way `mul` works on x86. Firstly, one of the operands is the destination, because `mul` is a 2-op instruction. Secondly, the other operand cannot be an immediate, because it is defined as r/m (register or memory), so if the second operand is a constant, it must be split into a variable (`varify` in `dumberdowner.c`). Thirdly, the destination must be the A register, so one of the operands must be pre-colored to A. Fourthly, `mul` clobbers the D register with the high half of the product. In other words, we have an instruction with *two* output registers, which the Nectar AST does not support. But we can't have the register allocator assign anything to D here.
To account for this, we can have a second assignment statement right next to the multiplication. Because the main multiplication clobbers the source operand, the mulhi assignment must come before the mul. Putting all this together, this is the canonical way to do `z = x * y` with an x86 target:
z = x;
w = z *^ y;
z = z * y;
Now we must modify the pre-coloring pass to make sure `z` is marked as A and `w` as D. In case such pre-coloring is impossible, the `spill2var` pass must also be modified to spill whatever variables prevent this coloring into another variable. If `z` is chosen for spilling, then the pre-coloring pass must change both the mulhi and mul statements. If `w` is chosen for spilling, then... If this is the last use of `y`, then it is fine for `y` to be assigned to D, although this fact is ignored in the code.
Lastly, the codegen pass must recognize the sequence `w = z *^ y; z = z * y;` and emit a single `mul` instruction.
In `cg.c` is a function called `xop`, which returns an x86 operand string, given
Once all that is done and tested, now we can add the following dumbification rules: all binary operations with the operator `AST_BINOP_MUL` or `AST_BINOP_MULHI` must be the whole expression within an assignment statement. If not, extract into a separate assignment & new variable with `varify`. The destination of the assignment, and both operands of the binary operation, must be of type `AST_EXPR_VAR`, with their corresponding variables being of type `VARTABLEENTRY_VAR`, not `VARTABLEENTRY_SYMBOL` or `VARTABLEENTRY_TYPE`. If any of those don't apply, `varify` the offenders. Each such assignment must have a neighboring, symmetric assignment, so that both A and D are caught by the pre-coloring pass.
A common bug when writing a dumbification rule is ending up with one that is always successful. If this happens, the compiler will become stuck endlessly dumbifying, which is nonsense. It would be nice if you could formally prove that won't happen. Another common bug is not realizing the order in which dumbification rules are applied matters :).
You know, I really regret writing this in C.