diff options

Diffstat (limited to 'docs/dev')

-rw-r--r--  docs/dev/README.md         | 117
-rw-r--r--  docs/dev/architecture.md   | 503
-rw-r--r--  docs/dev/debugging.md      |  10
-rw-r--r--  docs/dev/guide.md          |  10
-rw-r--r--  docs/dev/lsp-extensions.md |  12
-rw-r--r--  docs/dev/style.md          | 154
-rw-r--r--  docs/dev/syntax.md         |  29

7 files changed, 593 insertions, 242 deletions

diff --git a/docs/dev/README.md b/docs/dev/README.md
index dd2bfc493..b91013f13 100644
--- a/docs/dev/README.md
+++ b/docs/dev/README.md
@@ -9,8 +9,9 @@ $ cargo test
 
 should be enough to get you started!
 
-To learn more about how rust-analyzer works, see
-[./architecture.md](./architecture.md) document.
+To learn more about how rust-analyzer works, see [./architecture.md](./architecture.md) document.
+It also explains the high-level layout of the source code.
+Do skim through that document.
 
 We also publish rustdoc docs to pages:
 
@@ -43,6 +44,10 @@ https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Fwg-rls-2.2E0
   while unactionable ones are effectively wont-fix. Each triaged issue should have one of these labels.
 * [fun](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3Afun)
   is for cool, but probably hard stuff.
+* [Design](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3ADesign)
+  is for moderate/large scale architecture discussion.
+  Also a kind of fun.
+  These issues should generally include a link to a Zulip discussion thread.
 
 # CI
 
@@ -53,8 +58,6 @@ Use `env RUN_SLOW_TESTS=1 cargo test` to run the full suite.
 
 We use bors-ng to enforce the [not rocket science](https://graydon2.dreamwidth.org/1597.html) rule.
 
-You can run `cargo xtask install-pre-commit-hook` to install git-hook to run rustfmt on commit.
-
 # Launching rust-analyzer
 
 Debugging the language server can be tricky.
@@ -95,25 +98,6 @@ I don't have a specific workflow for this case.
 Additionally, I use `cargo run --release -p rust-analyzer -- analysis-stats path/to/some/rust/crate` to run a batch analysis.
 This is primarily useful for performance optimizations, or for bug minimization.
 
-## Parser Tests
-
-Tests for the parser (`parser`) live in the `syntax` crate (see `test_data` directory).
-There are two kinds of tests:
-
-* Manually written test cases in `parser/ok` and `parser/err`
-* "Inline" tests in `parser/inline` (these are generated) from comments in the `parser` crate.
-
-The purpose of inline tests is not to achieve full coverage by test cases, but to explain to the reader of the code what each particular `if` and `match` is responsible for.
-If you are tempted to add a large inline test, it might be a good idea to leave only the simplest example in place, and move the test to a manual `parser/ok` test.
-
-To update test data, run with the `UPDATE_EXPECT` variable:
-
-```bash
-env UPDATE_EXPECT=1 cargo qt
-```
-
-After adding a new inline test you need to run `cargo xtask codegen` and also update the test data as described above.
-
 ## TypeScript Tests
 
 If you change files under `editors/code` and would like to run the tests and linter, install npm and run:
@@ -124,77 +108,18 @@ npm ci
 npm run lint
 ```
 
-# Code organization
-
-All Rust code lives in the `crates` top-level directory, and is organized as a single Cargo workspace.
-The `editors` top-level directory contains code for integrating with editors.
-Currently, it contains the plugin for VS Code (in TypeScript).
-The `docs` top-level directory contains both developer and user documentation.
-
-We have some automation infra in Rust in the `xtask` package.
-It contains stuff like formatting checking, code generation and powers `cargo xtask install`.
-The latter syntax is achieved with the help of cargo aliases (see `.cargo` directory).
-
-# Architecture Invariants
-
-This section tries to document high-level design constraints, which are not
-always obvious from the low-level code.
-
-## Incomplete syntax trees
-
-Syntax trees are by design incomplete and do not enforce well-formedness.
-If an AST method returns an `Option`, it *can* be `None` at runtime, even if this is forbidden by the grammar.
-
-## LSP independence
-
-rust-analyzer is independent from LSP.
-It provides features for a hypothetical perfect Rust-specific IDE client.
-Internal representations are lowered to LSP in the `rust-analyzer` crate (the only crate which is allowed to use LSP types).
-
-## IDE/Compiler split
-
-There's a semi-hard split between "compiler" and "IDE", at the `hir` crate.
-Compiler derives new facts about source code.
-It explicitly acknowledges that not all info is available (i.e. you can't look at types during name resolution).
-
-IDE assumes that all information is available at all times.
-
-IDE should use only types from `hir`, and should not depend on the underlying compiler types.
-`hir` is a facade.
-
-## IDE API
-
-The main IDE crate (`ide`) uses "Plain Old Data" for the API.
-Rather than talking in definitions and references, it talks in Strings and textual offsets.
-In general, API is centered around UI concerns -- the result of the call is what the user sees in the editor, and not what the compiler sees underneath.
-The results are 100% Rust specific though.
-Shout outs to LSP developers for popularizing the idea that "UI" is a good place to draw a boundary at.
-
-## LSP is stateless
-
-The protocol is implemented in a mostly stateless way.
-A good mental model is HTTP, which doesn't store per-client state, and instead relies on devices like cookies to maintain an illusion of state.
-If some action requires a multi-step protocol, each step should be self-contained.
-
-A good example here is the code action resolving process.
-To display the lightbulb, we compute the list of code actions without computing edits.
-Figuring out the edit is done in a separate `codeAction/resolve` call.
-Rather than storing some `lazy_edit: Box<dyn FnOnce() -> Edit>` somewhere, we use a string ID of the action to re-compute the list of actions during the resolve process.
-(See [this post](https://rust-analyzer.github.io/blog/2020/09/28/how-to-make-a-light-bulb.html) for more details.)
-The benefit here is that, generally speaking, the state of the world might change between `codeAction` and `codeAction/resolve` requests, so any closure we store might become invalid.
-
-While we don't currently implement any complicated refactors with complex GUI, I imagine we'd use the same techniques for refactors.
-After clicking each "Next" button during a refactor, the client would send all the info which the server needs to re-create the context from scratch.
-
-## CI
-
-CI does not test rust-analyzer, CI is a core part of rust-analyzer, and is maintained with above average standard of quality.
-CI is reproducible -- it can only be broken by changes to files in this repository, any dependence on externalities is a bug.
-
 # Code Style & Review Process
 
 Do see [./style.md](./style.md).
 
+# How to ...
+
+* ... add an assist? [#7535](https://github.com/rust-analyzer/rust-analyzer/pull/7535)
+* ... add a new protocol extension? [#4569](https://github.com/rust-analyzer/rust-analyzer/pull/4569)
+* ... add a new configuration option? [#7451](https://github.com/rust-analyzer/rust-analyzer/pull/7451)
+* ... add a new completion? [#6964](https://github.com/rust-analyzer/rust-analyzer/pull/6964)
+* ... allow new syntax in the parser? [#7338](https://github.com/rust-analyzer/rust-analyzer/pull/7338)
+
 # Logging
 
 Logging is done by both rust-analyzer and VS Code, so it might be tricky to
@@ -212,7 +137,7 @@ To log all communication between the server and the client, there are two choices:
 
 * you can log on the server side, by running something like
   ```
-  env RA_LOG=gen_lsp_server=trace code .
+  env RA_LOG=lsp_server=debug code .
   ```
 
 * you can log on the client side, by enabling `"rust-analyzer.trace.server":
@@ -251,6 +176,9 @@ RA_PROFILE=*@3>10 // dump everything, up to depth 3, if it takes more than 10 ms
 
 In particular, I have `export RA_PROFILE='*>10'` in my shell profile.
 
+We also have a "counting" profiler which counts number of instances of popular structs.
+It is enabled by `RA_COUNT=1`.
+
 To measure time for from-scratch analysis, use something like this:
 
 ```
@@ -288,13 +216,16 @@ Release steps:
 * makes a GitHub release
 * pushes VS Code extension to the marketplace
 * create new changelog in `rust-analyzer.github.io`
-* create `rust-analyzer.github.io/git.log` file with the log of merge commits since last release
-2. While the release is in progress, fill-in the changelog using `git.log`
+2. While the release is in progress, fill in the changelog
 3. Commit & push the changelog
 4. Tweet
 5. Inside `rust-analyzer`, run `cargo xtask promote` -- this will create a PR to rust-lang/rust updating rust-analyzer's submodule.
    Self-approve the PR.
 
+If the GitHub Actions release fails because of a transient problem like a timeout, you can re-run the job from the Actions console.
+If it fails because of something that needs to be fixed, remove the release tag (if needed), fix the problem, then start over.
+Make sure to remove the new changelog post created when running `cargo xtask release` a second time.
+
 # Permissions
 
 There are three sets of people with extra permissions:
diff --git a/docs/dev/architecture.md b/docs/dev/architecture.md
index b5831f47c..ead12616e 100644
--- a/docs/dev/architecture.md
+++ b/docs/dev/architecture.md
@@ -1,174 +1,449 @@
 # Architecture
 
 This document describes the high-level architecture of rust-analyzer.
-If you want to familiarize yourself with the code base, you are just
-in the right place!
+If you want to familiarize yourself with the code base, you are just in the right place!
 
-See also the [guide](./guide.md), which walks through a particular snapshot of
-rust-analyzer code base.
+See also the [guide](./guide.md), which walks through a particular snapshot of the rust-analyzer code base.
 
-Yet another resource is this playlist with videos about various parts of the
-analyzer:
+Yet another resource is this playlist with videos about various parts of the analyzer:
 
 https://www.youtube.com/playlist?list=PL85XCvVPmGQho7MZkdW-wtPtuJcFpzycE
 
-Note that the guide and videos are pretty dated, this document should be in
-generally fresher.
+Note that the guide and videos are pretty dated; this document should be, in general, fresher.
 
-## The Big Picture
+See also these implementation-related blog posts:
 
-
+* https://rust-analyzer.github.io/blog/2019/11/13/find-usages.html
+* https://rust-analyzer.github.io/blog/2020/07/20/three-architectures-for-responsive-ide.html
+* https://rust-analyzer.github.io/blog/2020/09/16/challeging-LR-parsing.html
+* https://rust-analyzer.github.io/blog/2020/09/28/how-to-make-a-light-bulb.html
+* https://rust-analyzer.github.io/blog/2020/10/24/introducing-ungrammar.html
 
-On the highest level, rust-analyzer is a thing which accepts input source code
-from the client and produces a structured semantic model of the code.
+## Bird's Eye View
 
-More specifically, input data consists of a set of test files (`(PathBuf,
-String)` pairs) and information about project structure, captured in the so
-called `CrateGraph`. The crate graph specifies which files are crate roots,
-which cfg flags are specified for each crate and what dependencies exist between
-the crates. The analyzer keeps all this input data in memory and never does any
-IO. Because the input data are source code, which typically measures in tens of
-megabytes at most, keeping everything in memory is OK.
+
 
-A "structured semantic model" is basically an object-oriented representation of
-modules, functions and types which appear in the source code. This representation
-is fully "resolved": all expressions have types, all references are bound to
-declarations, etc.
+On the highest level, rust-analyzer is a thing which accepts input source code from the client and produces a structured semantic model of the code.
 
-The client can submit a small delta of input data (typically, a change to a
-single file) and get a fresh code model which accounts for changes.
+More specifically, input data consists of a set of text files (`(PathBuf, String)` pairs) and information about project structure, captured in the so called `CrateGraph`.
+The crate graph specifies which files are crate roots, which cfg flags are specified for each crate and what dependencies exist between the crates.
+This is the input (ground) state.
+The analyzer keeps all this input data in memory and never does any IO.
+Because the input data is source code, which typically measures in tens of megabytes at most, keeping everything in memory is OK.
 
-The underlying engine makes sure that model is computed lazily (on-demand) and
-can be quickly updated for small modifications.
+A "structured semantic model" is basically an object-oriented representation of modules, functions and types which appear in the source code.
+This representation is fully "resolved": all expressions have types, all references are bound to declarations, etc.
+This is derived state.
 
+The client can submit a small delta of input data (typically, a change to a single file) and get a fresh code model which accounts for changes.
 
-## Code generation
+The underlying engine makes sure that model is computed lazily (on-demand) and can be quickly updated for small modifications.
 
-Some of the components of this repository are generated through automatic
-processes. `cargo xtask codegen` runs all generation tasks. Generated code is
-committed to the git repository.
+## Entry Points
 
-In particular, `cargo xtask codegen` generates:
+`crates/rust-analyzer/src/bin/main.rs` contains the main function which spawns LSP.
+This is *the* entry point, but it front-loads a lot of complexity, so it's fine to just skim through it.
 
-1. [`syntax_kind/generated`](https://github.com/rust-analyzer/rust-analyzer/blob/a0be39296d2925972cacd9fbf8b5fb258fad6947/crates/ra_parser/src/syntax_kind/generated.rs)
-   -- the set of terminals and non-terminals of rust grammar.
+`crates/rust-analyzer/src/handlers.rs` implements all LSP requests and is a great place to start if you are already familiar with LSP.
 
-2. [`ast/generated`](https://github.com/rust-analyzer/rust-analyzer/blob/a0be39296d2925972cacd9fbf8b5fb258fad6947/crates/ra_syntax/src/ast/generated.rs)
-   -- AST data structure.
+`Analysis` and `AnalysisHost` types define the main API.
 
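The `AnalysisHost`/`Analysis` split the new text mentions can be sketched in plain Rust. This is a toy illustration, not the real API: only the two type names come from the document, everything else (fields, `apply_change`, `file_len`) is invented to show the shape of a mutable host handing out cheap, immutable, consistent snapshots.

```rust
// Toy sketch of the AnalysisHost / Analysis split (illustrative only).
use std::collections::HashMap;
use std::sync::Arc;

type FileId = u32;

#[derive(Default)]
struct AnalysisHost {
    files: Arc<HashMap<FileId, String>>,
}

// An immutable, consistent view of the world at one point in time.
struct Analysis {
    files: Arc<HashMap<FileId, String>>,
}

impl AnalysisHost {
    // Absorb a change (here: replace a single file's text).
    fn apply_change(&mut self, file: FileId, text: String) {
        // Copy-on-write: snapshots taken earlier keep seeing the old state.
        let files = Arc::make_mut(&mut self.files);
        files.insert(file, text);
    }

    // Hand out a snapshot that later edits cannot invalidate.
    fn analysis(&self) -> Analysis {
        Analysis { files: Arc::clone(&self.files) }
    }
}

impl Analysis {
    fn file_len(&self, file: FileId) -> Option<usize> {
        self.files.get(&file).map(|t| t.len())
    }
}

fn main() {
    let mut host = AnalysisHost::default();
    host.apply_change(0, "fn main() {}".to_string());
    let snapshot = host.analysis();
    // A later edit does not affect the existing snapshot.
    host.apply_change(0, "fn main() { run(); }".to_string());
    assert_eq!(snapshot.file_len(0), Some(12));
    assert_eq!(host.analysis().file_len(0), Some(20));
}
```

The design choice mirrored here is the one the document describes: mutation happens in one place, analysis runs only against frozen snapshots.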
-3. [`doc_tests/generated`](https://github.com/rust-analyzer/rust-analyzer/blob/a0be39296d2925972cacd9fbf8b5fb258fad6947/crates/assists/src/doc_tests/generated.rs),
-   [`test_data/parser/inline`](https://github.com/rust-analyzer/rust-analyzer/tree/a0be39296d2925972cacd9fbf8b5fb258fad6947/crates/ra_syntax/test_data/parser/inline)
-   -- tests for assists and the parser.
+## Code Map
 
-The source for 1 and 2 is in [`ast_src.rs`](https://github.com/rust-analyzer/rust-analyzer/blob/a0be39296d2925972cacd9fbf8b5fb258fad6947/xtask/src/ast_src.rs).
+This section talks briefly about various important directories and data structures.
+Pay attention to the **Architecture Invariant** sections.
+They often talk about things which are deliberately absent in the source code.
 
-## Code Walk-Through
+Note also which crates are **API Boundaries**.
+Remember, [rules at the boundary are different](https://www.tedinski.com/2018/02/06/system-boundaries.html).
 
-### `crates/ra_syntax`, `crates/parser`
+### `xtask`
 
-Rust syntax tree structure and parser. See
-[RFC](https://github.com/rust-lang/rfcs/pull/2256) and [./syntax.md](./syntax.md) for some design notes.
+This is rust-analyzer's "build system".
+We use cargo to compile rust code, but there are also various other tasks, like release management or local installation.
+They are handled by Rust code in the xtask directory.
+
+### `editors/code`
+
+VS Code plugin.
+
+### `libs/`
+
+rust-analyzer independent libraries which we publish to crates.io.
+It's not heavily utilized at the moment.
+
+### `crates/parser`
+
+It is a hand-written recursive descent parser, which produces a sequence of events like "start node X", "finish node Y".
+It works similarly to
+[kotlin's parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java),
+which is a good source of inspiration for dealing with syntax errors and incomplete input.
+Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs) is what we use for the definition of the Rust language.
+`TreeSink` and `TokenSource` traits bridge the tree-agnostic parser from `grammar` with `rowan` trees.
+
+**Architecture Invariant:** the parser is independent of the particular tree structure and particular representation of the tokens.
+It transforms one flat stream of events into another flat stream of events.
+Token independence allows us to parse out both text-based source code and `tt`-based macro input.
+Tree independence allows us to more easily vary the syntax tree implementation.
+It should also unlock efficient light-parsing approaches.
+For example, you can extract the set of names defined in a file (for typo correction) without building a syntax tree.
+
+**Architecture Invariant:** parsing never fails, the parser produces `(T, Vec<Error>)` rather than `Result<T, Error>`.
+
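The event-based, infallible parsing that the added text describes can be sketched as a toy. The `Event` variants echo the "start node X" / "finish node Y" wording; the grammar (balanced parentheses) and all names are invented for exposition and are not the real `parser` crate API.

```rust
// Conceptual sketch of event-based parsing that never fails:
// the result is always (events, errors), never Result<_, _>.
#[derive(Debug, PartialEq)]
enum Event {
    StartNode,   // "start node X"
    Token(char), // leaf token
    FinishNode,  // "finish node Y"
}

fn parse(input: &str) -> (Vec<Event>, Vec<String>) {
    let mut events = vec![Event::StartNode]; // root node
    let mut errors = Vec::new();
    let mut depth = 0u32;
    for c in input.chars() {
        match c {
            '(' => {
                events.push(Event::StartNode);
                depth += 1;
            }
            ')' => {
                if depth == 0 {
                    // Broken input is reported on the side, parsing continues.
                    errors.push("unmatched `)`".to_string());
                } else {
                    events.push(Event::FinishNode);
                    depth -= 1;
                }
            }
            c => events.push(Event::Token(c)),
        }
    }
    for _ in 0..depth {
        // Error recovery: close unfinished nodes so the tree is still usable.
        errors.push("unclosed `(`".to_string());
        events.push(Event::FinishNode);
    }
    events.push(Event::FinishNode); // close the root
    (events, errors)
}

fn main() {
    let (_, errors) = parse("(a)");
    assert!(errors.is_empty());
    // Incomplete input still yields a full event stream plus errors.
    let (events, errors) = parse("(a");
    assert_eq!(errors, vec!["unclosed `(`".to_string()]);
    assert!(events.ends_with(&[Event::FinishNode, Event::FinishNode]));
}
```

The flat event stream is what makes the parser tree- and token-independent: any consumer can fold the same events into whatever tree representation it likes.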
+### `crates/syntax`
+
+Rust syntax tree structure and parser.
+See [RFC](https://github.com/rust-lang/rfcs/pull/2256) and [./syntax.md](./syntax.md) for some design notes.
 
 - [rowan](https://github.com/rust-analyzer/rowan) library is used for constructing syntax trees.
-- `grammar` module is the actual parser. It is a hand-written recursive descent parser, which
-  produces a sequence of events like "start node X", "finish node Y". It works similarly to [kotlin's parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java),
-  which is a good source of inspiration for dealing with syntax errors and incomplete input. Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs)
-  is what we use for the definition of the Rust language.
-- `TreeSink` and `TokenSource` traits bridge the tree-agnostic parser from `grammar` with `rowan` trees.
 - `ast` provides a type safe API on top of the raw `rowan` tree.
-- `ast_src` description of the grammar, which is used to generate `syntax_kinds`
-  and `ast` modules, using `cargo xtask codegen` command.
+- `ungrammar` description of the grammar, which is used to generate `syntax_kinds` and `ast` modules, using `cargo xtask codegen` command.
 
-Tests for ra_syntax are mostly data-driven: `test_data/parser` contains subdirectories with a bunch of `.rs`
-(test vectors) and `.txt` files with corresponding syntax trees. During testing, we check
-`.rs` against `.txt`. If the `.txt` file is missing, it is created (this is how you update
-tests). Additionally, running `cargo xtask codegen` will walk the grammar module and collect
-all `// test test_name` comments into files inside `test_data/parser/inline` directory.
+Tests for ra_syntax are mostly data-driven.
+`test_data/parser` contains subdirectories with a bunch of `.rs` (test vectors) and `.txt` files with corresponding syntax trees.
+During testing, we check `.rs` against `.txt`.
+If the `.txt` file is missing, it is created (this is how you update tests).
+Additionally, running `cargo xtask codegen` will walk the grammar module and collect all `// test test_name` comments into files inside `test_data/parser/inline` directory.
+
+To update test data, run with the `UPDATE_EXPECT` variable:
+
+```bash
+env UPDATE_EXPECT=1 cargo qt
+```
+
+After adding a new inline test you need to run `cargo xtask codegen` and also update the test data as described above.
 
-Note
-[`api_walkthrough`](https://github.com/rust-analyzer/rust-analyzer/blob/2fb6af89eb794f775de60b82afe56b6f986c2a40/crates/ra_syntax/src/lib.rs#L190-L348)
+Note [`api_walkthrough`](https://github.com/rust-analyzer/rust-analyzer/blob/2fb6af89eb794f775de60b82afe56b6f986c2a40/crates/ra_syntax/src/lib.rs#L190-L348)
 in particular: it shows off various methods of working with syntax tree.
 
-See [#93](https://github.com/rust-analyzer/rust-analyzer/pull/93) for an example PR which
-fixes a bug in the grammar.
+See [#93](https://github.com/rust-analyzer/rust-analyzer/pull/93) for an example PR which fixes a bug in the grammar.
+
+**Architecture Invariant:** the `syntax` crate is completely independent from the rest of rust-analyzer. It knows nothing about salsa or LSP.
+This is important because it is possible to make useful tooling using only the syntax tree.
+Without semantic information, you don't need to be able to _build_ code, which makes the tooling more robust.
+See also https://web.stanford.edu/~mlfbrown/paper.pdf.
+You can view the `syntax` crate as an entry point to rust-analyzer.
+The `syntax` crate is an **API Boundary**.
+
+**Architecture Invariant:** the syntax tree is a value type.
+The tree is fully determined by the contents of its syntax nodes, it doesn't need global context (like an interner) and doesn't store semantic info.
+Using the tree as a store for semantic info is convenient in traditional compilers, but doesn't work nicely in the IDE.
+Specifically, assists and refactors require transforming syntax trees, and that becomes awkward if you need to do something with the semantic info.
+
+**Architecture Invariant:** a syntax tree is built for a single file.
+This is to enable parallel parsing of all files.
+
+**Architecture Invariant:** syntax trees are by design incomplete and do not enforce well-formedness.
+If an AST method returns an `Option`, it *can* be `None` at runtime, even if this is forbidden by the grammar.
 
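The "incomplete trees" invariant above can be made concrete with a toy. `FnDef` and `describe` are illustrative stand-ins, not the generated `ast` API: the point is only that accessors return `Option` and callers tolerate `None`, because the user may be mid-edit.

```rust
// Sketch of the invariant: grammar-required pieces may still be missing
// at runtime, so accessors return Option and callers handle None.
struct FnDef {
    name: Option<String>, // grammar requires a name; the tree may lack one
    body: Option<String>,
}

fn describe(func: &FnDef) -> String {
    // IDE code degrades gracefully instead of panicking: the user may
    // have typed only `fn ` so far.
    let name = func.name.as_deref().unwrap_or("<missing name>");
    match &func.body {
        Some(_) => format!("fn {} with body", name),
        None => format!("fn {} without body", name),
    }
}

fn main() {
    let incomplete = FnDef { name: None, body: None };
    assert_eq!(describe(&incomplete), "fn <missing name> without body");
}
```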
 ### `crates/base_db`
 
-We use the [salsa](https://github.com/salsa-rs/salsa) crate for incremental and
-on-demand computation. Roughly, you can think of salsa as a key-value store, but
-it also can compute derived values using specified functions. The `base_db` crate
-provides basic infrastructure for interacting with salsa. Crucially, it
-defines most of the "input" queries: facts supplied by the client of the
-analyzer. Reading the docs of the `base_db::input` module should be useful:
-everything else is strictly derived from those inputs.
+We use the [salsa](https://github.com/salsa-rs/salsa) crate for incremental and on-demand computation.
+Roughly, you can think of salsa as a key-value store, but it can also compute derived values using specified functions.
+The `base_db` crate provides basic infrastructure for interacting with salsa.
+Crucially, it defines most of the "input" queries: facts supplied by the client of the analyzer.
+Reading the docs of the `base_db::input` module should be useful: everything else is strictly derived from those inputs.
+
+**Architecture Invariant:** particularities of the build system are *not* part of the ground state.
+In particular, `base_db` knows nothing about cargo.
+The `CrateGraph` structure is used to represent the dependencies between the crates abstractly.
+
+**Architecture Invariant:** `base_db` doesn't know about the file system and file paths.
+Files are represented with opaque `FileId`, there's no operation to get an `std::path::Path` out of the `FileId`.
 
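The salsa idea of inputs plus memoized derived values can be hand-rolled in a few lines. This is a conceptual sketch, not salsa's actual API: `Db`, `set_input`, and `total_len` are invented names, and real salsa tracks dependencies per query rather than with one global revision.

```rust
// Hand-rolled sketch of "inputs + cached derived queries".
use std::collections::HashMap;

#[derive(Default)]
struct Db {
    inputs: HashMap<&'static str, String>, // "input" queries: facts from the client
    revision: u64,
    derived_cache: Option<(u64, usize)>, // (revision computed at, value)
}

impl Db {
    fn set_input(&mut self, key: &'static str, value: String) {
        self.inputs.insert(key, value);
        // Any input change bumps the revision and invalidates derived data.
        self.revision += 1;
        self.derived_cache = None;
    }

    // A derived query: total length of all inputs, computed on demand.
    fn total_len(&mut self) -> usize {
        if let Some((rev, value)) = self.derived_cache {
            if rev == self.revision {
                return value; // cache hit: nothing changed since last compute
            }
        }
        let value = self.inputs.values().map(|s| s.len()).sum();
        self.derived_cache = Some((self.revision, value));
        value
    }
}

fn main() {
    let mut db = Db::default();
    db.set_input("main.rs", "fn main() {}".to_string());
    assert_eq!(db.total_len(), 12); // computed
    assert_eq!(db.total_len(), 12); // served from cache
    db.set_input("lib.rs", "pub mod foo;".to_string());
    assert_eq!(db.total_len(), 24); // recomputed after the input changed
}
```

Salsa refines this by recording which inputs each derived query actually read, so unrelated changes don't force recomputation.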
153 | ### `crates/hir_expand`, `crates/hir_def`, `crates/hir_ty` | ||
154 | |||
155 | These crates are the *brain* of rust-analyzer. | ||
156 | This is the compiler part of the IDE. | ||
157 | |||
158 | `hir_xxx` crates have a strong ECS flavor, in that they work with raw ids and directly query the database. | ||
159 | There's little abstraction here. | ||
160 | These crates integrate deeply with salsa and chalk. | ||
161 | |||
162 | Name resolution, macro expansion and type inference all happen here. | ||
163 | These crates also define various intermediate representations of the core. | ||
104 | 164 | ||
105 | ### `crates/hir*` crates | 165 | `ItemTree` condenses a single `SyntaxTree` into a "summary" data structure, which is stable over modifications to function bodies. |
106 | 166 | ||
107 | HIR provides high-level "object oriented" access to Rust code. | 167 | `DefMap` contains the module tree of a crate and stores module scopes. |
108 | 168 | ||
109 | The principal difference between HIR and syntax trees is that HIR is bound to a | 169 | `Body` stores information about expressions. |
110 | particular crate instance. That is, it has cfg flags and features applied. So, | ||
111 | the relation between syntax and HIR is many-to-one. The `source_binder` module | ||
112 | is responsible for guessing a HIR for a particular source position. | ||
113 | 170 | ||
114 | Underneath, HIR works on top of salsa, using a `HirDatabase` trait. | 171 | **Architecture Invariant:** these crates are not, and will never be, an api boundary. |
115 | 172 | ||
116 | `hir_xxx` crates have a strong ECS flavor, in that they work with raw ids and | 173 | **Architecture Invariant:** these crates explicitly care about being incremental. |
117 | directly query the database. | 174 | The core invariant we maintain is "typing inside a function's body never invalidates global derived data". |
175 | i.e., if you change the body of `foo`, all facts about `bar` should remain intact. | ||
118 | 176 | ||
119 | The top-level `hir` façade crate wraps ids into a more OO-flavored API. | 177 | **Architecture Invariant:** hir exists only in context of particular crate instance with specific CFG flags. |
178 | The same syntax may produce several instances of HIR if the crate participates in the crate graph more than once. | ||
179 | |||
180 | ### `crates/hir` | ||
181 | |||
182 | The top-level `hir` crate is an **API Boundary**. | ||
183 | If you think about "using rust-analyzer as a library", `hir` crate is most likely the façade you'll be talking to. | ||
184 | |||
185 | It wraps ECS-style internal API into a more OO-flavored API (with an extra `db` argument for each call). | ||
186 | |||
187 | **Architecture Invariant:** `hir` provides a static, fully resolved view of the code. | ||
188 | While internal `hir_*` crates _compute_ things, `hir`, from the outside, looks like an inert data structure. | ||
189 | |||
190 | `hir` also handles the delicate task of going from syntax to the corresponding `hir`. | ||
191 | Remember that the mapping here is one-to-many. | ||
192 | See `Semantics` type and `source_to_def` module. | ||
193 | |||
194 | Note in particular a curious recursive structure in `source_to_def`. | ||
195 | We first resolve the parent _syntax_ node to the parent _hir_ element. | ||
196 | Then we ask the _hir_ parent what _syntax_ children it has. | ||
197 | Then we look for our node in the set of children. | ||
198 | |||
199 | This is the heart of many IDE features, like goto definition, which start with figuring out the hir node at the cursor. | ||
200 | This is some kind of (yet unnamed) uber-IDE pattern, as it is present in Roslyn and Kotlin as well. | ||
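The recursive resolution described above can be sketched with toy types. This is an illustrative model only: the names `Semantics`, `SyntaxNode`, and `HirDef` are stand-ins, and the real types in the `hir` crate are far richer.

```rust
use std::collections::HashMap;

// Hypothetical, simplified ids; the real syntax and hir types carry much more.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct SyntaxNode(u32);
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct HirDef(u32);

struct Semantics {
    // Syntax child -> syntax parent.
    syntax_parent: HashMap<SyntaxNode, SyntaxNode>,
    // Base case: file roots map directly to their hir counterparts.
    file_root: HashMap<SyntaxNode, HirDef>,
    // A hir element knows which syntax children it has.
    hir_children: HashMap<HirDef, Vec<(SyntaxNode, HirDef)>>,
}

impl Semantics {
    fn source_to_def(&self, node: SyntaxNode) -> Option<HirDef> {
        // Base case: the node is a file root with a known hir counterpart.
        if let Some(&def) = self.file_root.get(&node) {
            return Some(def);
        }
        // 1. Resolve the parent *syntax* node to the parent *hir* element.
        let parent = *self.syntax_parent.get(&node)?;
        let parent_def = self.source_to_def(parent)?;
        // 2. Ask the hir parent for its syntax children, then find our node.
        self.hir_children
            .get(&parent_def)?
            .iter()
            .find(|&&(syntax, _)| syntax == node)
            .map(|&(_, def)| def)
    }
}
```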
120 | 201 | ||
121 | ### `crates/ide` | 202 | ### `crates/ide` |
122 | 203 | ||
123 | A stateful library for analyzing many Rust files as they change. `AnalysisHost` | 204 | The `ide` crate builds on top of `hir` semantic model to provide high-level IDE features like completion or goto definition. |
124 | is a mutable entity (clojure's atom) which holds the current state, incorporates | 205 | It is an **API Boundary**. |
125 | changes and hands out `Analysis` --- an immutable and consistent snapshot of | 206 | If you want to use IDE parts of rust-analyzer via LSP, custom flatbuffers-based protocol or just as a library in your text editor, this is the right API. |
126 | the world state at a point in time, which actually powers analysis. | 207 | |
208 | **Architecture Invariant:** the `ide` crate's API is built out of POD types with public fields. | ||
209 | The API uses the editor's terminology: it talks about offsets and string labels rather than definitions or types. | ||
210 | It is effectively the view in MVC and viewmodel in [MVVM](https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93viewmodel). | ||
211 | All arguments and return types are conceptually serializable. | ||
212 | In particular, syntax trees and hir types are generally absent from the API (but are used heavily in the implementation). | ||
213 | Shout outs to LSP developers for popularizing the idea that "UI" is a good place to draw a boundary at. | ||
214 | |||
215 | `ide` is also the first crate which has the notion of change over time. | ||
216 | `AnalysisHost` is a state to which you can transactionally `apply_change`. | ||
217 | `Analysis` is an immutable snapshot of the state. | ||
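The host/snapshot split can be sketched with a plain `Arc`. This is a toy model under simplifying assumptions: the real `AnalysisHost` is backed by salsa, and the state and method signatures below are invented for illustration.

```rust
use std::sync::Arc;

// Toy stand-in for the analysis database: just a list of file texts.
struct AnalysisHost {
    state: Arc<Vec<String>>,
}

// An immutable, consistent snapshot of the state at a point in time.
struct Analysis {
    state: Arc<Vec<String>>,
}

impl AnalysisHost {
    fn new() -> AnalysisHost {
        AnalysisHost { state: Arc::new(Vec::new()) }
    }

    // Transactionally apply a change; snapshots taken earlier keep
    // observing the old state (clone-on-write via `Arc::make_mut`).
    fn apply_change(&mut self, file_text: String) {
        Arc::make_mut(&mut self.state).push(file_text);
    }

    // Hand out a cheap immutable snapshot.
    fn analysis(&self) -> Analysis {
        Analysis { state: Arc::clone(&self.state) }
    }
}

impl Analysis {
    fn file_count(&self) -> usize {
        self.state.len()
    }
}
```

The key property is visible in usage: a snapshot taken before `apply_change` never sees the new state.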
127 | 218 | ||
128 | One interesting aspect of analysis is its support for cancellation. When a | 219 | Internally, `ide` is split across several crates. `ide_assists`, `ide_completion` and `ide_ssr` implement large isolated features. |
129 | change is applied to `AnalysisHost`, first all currently active snapshots are | 220 | `ide_db` implements common IDE functionality (notably, reference search is implemented here). |
130 | canceled. Only after all snapshots are dropped the change actually affects the | 221 | The `ide` contains a public API/façade, as well as implementation for a plethora of smaller features. |
131 | database. | ||
132 | 222 | ||
133 | APIs in this crate are IDE centric: they take text offsets as input and produce | 223 | **Architecture Invariant:** `ide` crate strives to provide a _perfect_ API. |
134 | offsets and strings as output. This works on top of rich code model powered by | 224 | Although at the moment it has only one consumer, the LSP server, LSP *does not* influence its API design. | ||
135 | `hir`. | 225 | Instead, we keep in mind a hypothetical _ideal_ client -- an IDE tailored specifically for Rust, every nook and cranny of which is packed with Rust-specific goodies. | ||
136 | 226 | ||
137 | ### `crates/rust-analyzer` | 227 | ### `crates/rust-analyzer` |
138 | 228 | ||
139 | An LSP implementation which wraps `ide` into a language server protocol. | 229 | This crate defines the `rust-analyzer` binary, so it is the **entry point**. |
230 | It implements the language server. | ||
231 | |||
232 | **Architecture Invariant:** `rust-analyzer` is the only crate that knows about LSP and JSON serialization. | ||
233 | If you want to expose a data structure `X` from ide to LSP, don't make it serializable. | ||
234 | Instead, create a serializable counterpart in `rust-analyzer` crate and manually convert between the two. | ||
235 | |||
236 | `GlobalState` is the state of the server. | ||
237 | The `main_loop` defines the server event loop which accepts requests and sends responses. | ||
238 | Requests that modify the state or might block user's typing are handled on the main thread. | ||
239 | All other requests are processed in background. | ||
240 | |||
241 | **Architecture Invariant:** the server is stateless, a-la HTTP. | ||
242 | Sometimes state needs to be preserved between requests. | ||
243 | For example, "what is the `edit` for the fifth completion item of the last completion edit?". | ||
244 | For this, the second request should include enough info to re-create the context from scratch. | ||
245 | This generally means including all the parameters of the original request. | ||
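The "re-create the context from scratch" idea can be sketched as follows. All types and names here are hypothetical, invented to illustrate the pattern; they do not match the real request handlers.

```rust
// The original request, small enough to embed in the follow-up.
#[derive(Clone)]
struct CompletionParams {
    file: String,
    offset: u32,
}

// The follow-up request carries the original request wholesale,
// plus an index identifying which item to resolve -- so the server
// needs no session state between the two requests.
struct ResolveParams {
    original: CompletionParams,
    item_index: usize,
}

// Stand-in for the real completion computation.
fn complete(params: &CompletionParams) -> Vec<String> {
    vec![format!("item_at_{}", params.offset), "other".to_string()]
}

// Resolution recomputes the completion list from scratch and picks the item.
fn resolve(params: &ResolveParams) -> Option<String> {
    complete(&params.original).into_iter().nth(params.item_index)
}
```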
246 | |||
247 | `reload` module contains the code that handles configuration and Cargo.toml changes. | ||
248 | This is a tricky business. | ||
249 | |||
250 | **Architecture Invariant:** `rust-analyzer` should be partially available even when the build is broken. | ||
251 | Reloading process should not prevent IDE features from working. | ||
252 | |||
253 | ### `crates/toolchain`, `crates/project_model`, `crates/flycheck` | ||
254 | |||
255 | These crates deal with invoking `cargo` to learn about project structure and get compiler errors for the "check on save" feature. | ||
256 | |||
257 | They use `crates/path` heavily instead of `std::path`. | ||
258 | A single `rust-analyzer` process can serve many projects, so it is important that the server's current directory does not leak. | ||
259 | |||
260 | ### `crates/mbe`, `crates/tt`, `crates/proc_macro_api`, `crates/proc_macro_srv` | ||
261 | |||
262 | These crates implement macros as token tree -> token tree transforms. | ||
263 | They are independent from the rest of the code. | ||
264 | |||
265 | The `tt` crate defines `TokenTree`, a single token or a delimited sequence of token trees. | ||
266 | The `mbe` crate contains tools for transforming between syntax trees and token trees. | ||
267 | It also handles the actual parsing and expansion of declarative macros (a-la "Macros By Example", hence mbe). | ||
268 | |||
269 | For proc macros, a client-server model is used. | ||
270 | We pass the `--proc-macro` argument to the `rust-analyzer` binary to start a separate process (`proc_macro_srv`). | ||
271 | The client (`proc_macro_api`) provides an interface for talking to that server. | ||
272 | |||
273 | The client sends token trees to the server, which loads the corresponding dynamic library (built by `cargo`) and performs the expansion. | ||
274 | Because the `rustc` API for getting results from proc macros is perpetually unstable, | ||
275 | we maintain our own copy (and paste) of that part of the code, which lets us build the whole thing on stable Rust. | ||
140 | 276 | ||
141 | ### `crates/vfs` | 277 | **Architecture Invariant:** |
278 | Proc macros may accidentally panic or segfault, so we run them in a separate process and recover from fatal errors. | ||
279 | They may also be non-deterministic, which conflicts with how `salsa` works, so special attention is required. | ||
142 | 280 | ||
143 | Although `hir` and `ide` don't do any IO, we need to be able to read | 281 | ### `crates/cfg` |
144 | files from disk at the end of the day. This is what `vfs` does. It also | ||
145 | manages overlays: "dirty" files in the editor, whose "true" contents is | ||
146 | different from data on disk. | ||
147 | 282 | ||
148 | ## Testing Infrastructure | 283 | This crate is responsible for parsing, evaluation and general definition of `cfg` attributes. |
149 | 284 | ||
150 | Rust Analyzer has three interesting [systems | 285 | ### `crates/vfs`, `crates/vfs-notify` |
151 | boundaries](https://www.tedinski.com/2018/04/10/making-tests-a-positive-influence-on-design.html) | ||
152 | to concentrate tests on. | ||
153 | 286 | ||
154 | The outermost boundary is the `rust-analyzer` crate, which defines an LSP | 287 | These crates implement a virtual file system. |
155 | interface in terms of stdio. We do integration testing of this component, by | 288 | They provide consistent snapshots of the underlying file system and insulate messy OS paths. |
156 | feeding it with a stream of LSP requests and checking responses. These tests are | ||
157 | known as "heavy", because they interact with Cargo and read real files from | ||
158 | disk. For this reason, we try to avoid writing too many tests on this boundary: | ||
159 | in a statically typed language, it's hard to make an error in the protocol | ||
160 | itself if messages are themselves typed. | ||
161 | 289 | ||
162 | The middle, and most important, boundary is `ide`. Unlike | 290 | **Architecture Invariant:** vfs doesn't assume a single unified file system. |
163 | `rust-analyzer`, which exposes API, `ide` uses Rust API and is intended to | 291 | i.e., a single rust-analyzer process can act as a remote server for two different machines, where the same `/tmp/foo.rs` path points to different files. |
164 | use by various tools. Typical test creates an `AnalysisHost`, calls some | 292 | For this reason, all path APIs generally take some existing path as a "file system witness". |
165 | `Analysis` functions and compares the results against expectation. | ||
166 | 293 | ||
167 | The innermost and most elaborate boundary is `hir`. It has a much richer | 294 | ### `crates/stdx` |
168 | vocabulary of types than `ide`, but the basic testing setup is the same: we | 295 | |
169 | create a database, run some queries, assert result. | 296 | This crate contains various non-rust-analyzer specific utils, which could have been in std, as well |
297 | as copies of unstable std items we would like to make use of already, like `std::str::split_once`. | ||
298 | |||
299 | ### `crates/profile` | ||
300 | |||
301 | This crate contains utilities for CPU and memory profiling. | ||
302 | |||
303 | |||
304 | ## Cross-Cutting Concerns | ||
305 | |||
306 | This section talks about things which are everywhere and nowhere in particular. | ||
307 | |||
308 | ### Code generation | ||
309 | |||
310 | Some of the components of this repository are generated through automatic processes. | ||
311 | `cargo xtask codegen` runs all generation tasks. | ||
312 | Generated code is generally committed to the git repository. | ||
313 | There are tests to check that the generated code is fresh. | ||
314 | |||
315 | In particular, we generate: | ||
316 | |||
317 | * API for working with syntax trees (`syntax::ast`, the [`ungrammar`](https://github.com/rust-analyzer/ungrammar) crate). | ||
318 | * Various sections of the manual: | ||
319 | |||
320 | * features | ||
321 | * assists | ||
322 | * config | ||
323 | |||
324 | * Documentation tests for assists | ||
325 | |||
326 | **Architecture Invariant:** we avoid bootstrapping. | ||
327 | For codegen we need to parse Rust code. | ||
328 | Using rust-analyzer for that would work and would be fun, but it would also complicate the build process a lot. | ||
329 | For that reason, we use syn and manual string parsing. | ||
330 | |||
331 | ### Cancellation | ||
332 | |||
333 | Let's say that the IDE is in the process of computing syntax highlighting, when the user types `foo`. | ||
334 | What should happen? | ||
335 | `rust-analyzer`'s answer is that the highlighting process should be cancelled -- its results are now stale, and it also blocks modification of the inputs. | ||
336 | |||
337 | The salsa database maintains a global revision counter. | ||
338 | When applying a change, salsa bumps this counter and waits until all other threads using salsa finish. | ||
339 | If a thread does salsa-based computation and notices that the counter is incremented, it panics with a special value (see `Canceled::throw`). | ||
340 | That is, rust-analyzer requires unwinding. | ||
341 | |||
342 | `ide` is the boundary where the panic is caught and transformed into a `Result<T, Cancelled>`. | ||
343 | |||
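The cancellation mechanism can be sketched with only the standard library. This is a simplified model under stated assumptions: the real revision counter and `Canceled::throw` live inside salsa, and the names below are invented.

```rust
use std::panic::{self, AssertUnwindSafe};
use std::sync::atomic::{AtomicU64, Ordering};

// Global revision counter, bumped on every applied change (toy model).
static REVISION: AtomicU64 = AtomicU64::new(0);

#[derive(Debug, PartialEq)]
struct Cancelled;

// Long computations call this periodically; if the revision moved,
// unwind with a special payload instead of returning stale results.
fn check_cancelled(started_at: u64) {
    if REVISION.load(Ordering::SeqCst) != started_at {
        panic::panic_any(Cancelled);
    }
}

// `ide` is the boundary that converts the unwind into a `Result`.
fn highlight(user_types_mid_computation: bool) -> Result<&'static str, Cancelled> {
    let started_at = REVISION.load(Ordering::SeqCst);
    panic::catch_unwind(AssertUnwindSafe(|| {
        if user_types_mid_computation {
            // Simulate a change being applied while we are still computing.
            REVISION.fetch_add(1, Ordering::SeqCst);
        }
        check_cancelled(started_at);
        "highlighting"
    }))
    .map_err(|payload| match payload.downcast::<Cancelled>() {
        Ok(cancelled) => *cancelled,
        Err(_) => Cancelled, // treat any other panic the same way here
    })
}
```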
344 | ### Testing | ||
345 | |||
346 | Rust Analyzer has three interesting [system boundaries](https://www.tedinski.com/2018/04/10/making-tests-a-positive-influence-on-design.html) to concentrate tests on. | ||
347 | |||
348 | The outermost boundary is the `rust-analyzer` crate, which defines an LSP interface in terms of stdio. | ||
349 | We do integration testing of this component, by feeding it with a stream of LSP requests and checking responses. | ||
350 | These tests are known as "heavy", because they interact with Cargo and read real files from disk. | ||
351 | For this reason, we try to avoid writing too many tests on this boundary: in a statically typed language, it's hard to make an error in the protocol itself if messages are themselves typed. | ||
352 | Heavy tests are only run when `RUN_SLOW_TESTS` env var is set. | ||
353 | |||
354 | The middle, and most important, boundary is `ide`. | ||
355 | Unlike `rust-analyzer`, which exposes API, `ide` uses Rust API and is intended for use by various tools. | ||
356 | A typical test creates an `AnalysisHost`, calls some `Analysis` functions and compares the results against expectation. | ||
357 | |||
358 | The innermost and most elaborate boundary is `hir`. | ||
359 | It has a much richer vocabulary of types than `ide`, but the basic testing setup is the same: we create a database, run some queries, assert result. | ||
170 | 360 | ||
171 | For comparisons, we use the `expect` crate for snapshot testing. | 361 | For comparisons, we use the `expect` crate for snapshot testing. |
172 | 362 | ||
173 | To test various analysis corner cases and avoid forgetting about old tests, we | 363 | To test various analysis corner cases and avoid forgetting about old tests, we use so-called marks. |
174 | use so-called marks. See the `marks` module in the `test_utils` crate for more. | 364 | See the `marks` module in the `test_utils` crate for more. |
365 | |||
366 | **Architecture Invariant:** rust-analyzer tests do not use libcore or libstd. | ||
367 | All required library code must be a part of the tests. | ||
368 | This ensures fast test execution. | ||
369 | |||
370 | **Architecture Invariant:** tests are data driven and do not test the API. | ||
371 | Tests which directly call various API functions are a liability, because they make refactoring the API significantly more complicated. | ||
372 | So most of the tests look like this: | ||
373 | |||
374 | ```rust | ||
375 | #[track_caller] | ||
376 | fn check(input: &str, expect: expect_test::Expect) { | ||
377 | // The single place that actually exercises a particular API | ||
378 | } | ||
379 | |||
380 | #[test] | ||
381 | fn foo() { | ||
382 | check("foo", expect![["bar"]]); | ||
383 | } | ||
384 | |||
385 | #[test] | ||
386 | fn spam() { | ||
387 | check("spam", expect![["eggs"]]); | ||
388 | } | ||
389 | // ...and a hundred more tests that don't care about the specific API at all. | ||
390 | ``` | ||
391 | |||
392 | To specify input data, we use a single string literal in a special format, which can describe a set of rust files. | ||
393 | See the `Fixture` type. | ||
394 | |||
395 | **Architecture Invariant:** all code invariants are tested by `#[test]` tests. | ||
396 | There are no additional checks in CI; formatting and tidy tests are run with `cargo test`. | ||
397 | |||
398 | **Architecture Invariant:** tests do not depend on any kind of external resources, they are perfectly reproducible. | ||
399 | |||
400 | |||
401 | ### Performance Testing | ||
402 | |||
403 | TBA, take a look at the `metrics` xtask and `#[test] fn benchmark_xxx()` functions. | ||
404 | |||
405 | ### Error Handling | ||
406 | |||
407 | **Architecture Invariant:** core parts of rust-analyzer (`ide`/`hir`) don't interact with the outside world and thus can't fail. | ||
408 | Only parts touching LSP are allowed to do IO. | ||
409 | |||
410 | Internals of rust-analyzer need to deal with broken code, but this is not an error condition. | ||
411 | rust-analyzer is robust: various analyses compute `(T, Vec<Error>)` rather than `Result<T, Error>`. | ||
412 | |||
413 | rust-analyzer is a complex long-running process. | ||
414 | It will always have bugs and panics. | ||
415 | But a panic in an isolated feature should not bring down the whole process. | ||
416 | Each LSP-request is protected by a `catch_unwind`. | ||
417 | We use `always` and `never` macros instead of `assert` to gracefully recover from impossible conditions. | ||
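The `always`/`never` idea can be sketched like this. This is a hypothetical simplification: the real macros live in `crates/stdx` and differ in details such as logging.

```rust
// Toy version of `always!`: in debug builds it asserts hard, in release
// builds it merely reports the violation and evaluates to the condition,
// so callers can degrade gracefully instead of crashing the server.
macro_rules! always {
    ($cond:expr) => {{
        let cond = $cond;
        if cfg!(debug_assertions) {
            assert!(cond, "always! failed: {}", stringify!($cond));
        } else if !cond {
            eprintln!("always! failed: {}", stringify!($cond));
        }
        cond
    }};
}

fn fifth_char(s: &str) -> Option<char> {
    // Guard an "impossible" condition; on violation we return None
    // rather than bringing the whole process down.
    if !always!(s.is_char_boundary(0)) {
        return None;
    }
    s.chars().nth(4)
}
```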
418 | |||
419 | ### Observability | ||
420 | |||
421 | rust-analyzer is a long-running process, so it is important to understand what's going on inside. | ||
422 | We have several instruments for that. | ||
423 | |||
424 | The event loop that runs rust-analyzer is very explicit. | ||
425 | Rather than spawning futures or scheduling callbacks (open), the event loop accepts an `enum` of possible events (closed). | ||
426 | It's easy to see all the things that trigger rust-analyzer processing, together with their performance. | ||
427 | |||
428 | rust-analyzer includes a simple hierarchical profiler (`hprof`). | ||
429 | It is enabled with the `RA_PROFILE='*>50'` env var (log all (`*`) actions which take more than `50` ms) and produces output like: | ||
430 | |||
431 | ``` | ||
432 | 85ms - handle_completion | ||
433 | 68ms - import_on_the_fly | ||
434 | 67ms - import_assets::search_for_relative_paths | ||
435 | 0ms - crate_def_map:wait (804 calls) | ||
436 | 0ms - find_path (16 calls) | ||
437 | 2ms - find_similar_imports (1 calls) | ||
438 | 0ms - generic_params_query (334 calls) | ||
439 | 59ms - trait_solve_query (186 calls) | ||
440 | 0ms - Semantics::analyze_impl (1 calls) | ||
441 | 1ms - render_resolution (8 calls) | ||
442 | 0ms - Semantics::analyze_impl (5 calls) | ||
443 | ``` | ||
444 | |||
445 | This is cheap enough to enable in production. | ||
446 | |||
447 | |||
448 | Similarly, we have live object counting (`RA_COUNT=1`). | ||
449 | It is not cheap enough to enable in prod, and this is a bug which should be fixed. | ||
diff --git a/docs/dev/debugging.md b/docs/dev/debugging.md index 8c48fd5a1..5876e71bc 100644 --- a/docs/dev/debugging.md +++ b/docs/dev/debugging.md | |||
@@ -10,7 +10,7 @@ | |||
10 | - Install all TypeScript dependencies | 10 | - Install all TypeScript dependencies |
11 | ```bash | 11 | ```bash |
12 | cd editors/code | 12 | cd editors/code |
13 | npm install | 13 | npm ci |
14 | ``` | 14 | ``` |
15 | 15 | ||
16 | ## Common knowledge | 16 | ## Common knowledge |
@@ -57,6 +57,14 @@ To apply changes to an already running debug process, press <kbd>Ctrl+Shift+P</k | |||
57 | 57 | ||
58 | - Go back to the `[Extension Development Host]` instance and hover over a Rust variable and your breakpoint should hit. | 58 | - Go back to the `[Extension Development Host]` instance and hover over a Rust variable and your breakpoint should hit. |
59 | 59 | ||
60 | If you need to debug the server from the very beginning, including its initialization code, you can use the `--wait-dbg` command line argument or `RA_WAIT_DBG` environment variable. The server will spin at the beginning of the `try_main` function (see `crates\rust-analyzer\src\bin\main.rs`): | ||
61 | ```rust | ||
62 | let mut d = 4; | ||
63 | while d == 4 { // set a breakpoint here and change the value | ||
64 | d = 4; | ||
65 | } | ||
66 | ``` | ||
67 | |||
60 | ## Demo | 68 | ## Demo |
61 | 69 | ||
62 | - [Debugging TypeScript VScode extension](https://www.youtube.com/watch?v=T-hvpK6s4wM). | 70 | - [Debugging TypeScript VScode extension](https://www.youtube.com/watch?v=T-hvpK6s4wM). |
diff --git a/docs/dev/guide.md b/docs/dev/guide.md index b5a5d7c93..c1a55c56c 100644 --- a/docs/dev/guide.md +++ b/docs/dev/guide.md | |||
@@ -65,11 +65,11 @@ Next, let's talk about what the inputs to the `Analysis` are, precisely. | |||
65 | 65 | ||
66 | Rust Analyzer never does any I/O itself, all inputs get passed explicitly via | 66 | Rust Analyzer never does any I/O itself, all inputs get passed explicitly via |
67 | the `AnalysisHost::apply_change` method, which accepts a single argument, a | 67 | the `AnalysisHost::apply_change` method, which accepts a single argument, a |
68 | `AnalysisChange`. [`AnalysisChange`] is a builder for a single change | 68 | `Change`. [`Change`] is a builder for a single change |
69 | "transaction", so it suffices to study its methods to understand all of the | 69 | "transaction", so it suffices to study its methods to understand all of the |
70 | input data. | 70 | input data. |
71 | 71 | ||
72 | [`AnalysisChange`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ide_api/src/lib.rs#L119-L167 | 72 | [`Change`]: https://github.com/rust-analyzer/rust-analyzer/blob/master/crates/base_db/src/change.rs#L14-L89 |
73 | 73 | ||
74 | The `(add|change|remove)_file` methods control the set of the input files, where | 74 | The `(add|change|remove)_file` methods control the set of the input files, where |
75 | each file has an integer id (`FileId`, picked by the client), text (`String`) | 75 | each file has an integer id (`FileId`, picked by the client), text (`String`) |
@@ -158,7 +158,7 @@ it should be possible to dynamically reconfigure it later without restart. | |||
158 | [main_loop.rs#L62-L70](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L62-L70) | 158 | [main_loop.rs#L62-L70](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L62-L70) |
159 | 159 | ||
160 | The [`ProjectModel`] we get after this step is very Cargo and sysroot specific, | 160 | The [`ProjectModel`] we get after this step is very Cargo and sysroot specific, |
161 | it needs to be lowered to get the input in the form of `AnalysisChange`. This | 161 | it needs to be lowered to get the input in the form of `Change`. This |
162 | happens in [`ServerWorldState::new`] method. Specifically | 162 | happens in [`ServerWorldState::new`] method. Specifically |
163 | 163 | ||
164 | * Create a `SourceRoot` for each Cargo package and sysroot. | 164 | * Create a `SourceRoot` for each Cargo package and sysroot. |
@@ -175,7 +175,7 @@ of the main loop, just like any other change. Here's where we handle: | |||
175 | * [File system changes](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L194) | 175 | * [File system changes](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L194) |
176 | * [Changes from the editor](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L377) | 176 | * [Changes from the editor](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L377) |
177 | 177 | ||
178 | After a single loop's turn, we group the changes into one `AnalysisChange` and | 178 | After a single loop's turn, we group the changes into one `Change` and |
179 | [apply] it. This always happens on the main thread and blocks the loop. | 179 | [apply] it. This always happens on the main thread and blocks the loop. |
180 | 180 | ||
181 | [apply]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/server_world.rs#L216 | 181 | [apply]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/server_world.rs#L216 |
@@ -256,7 +256,7 @@ database. | |||
256 | [`RootDatabase`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ide_api/src/db.rs#L88-L134 | 256 | [`RootDatabase`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ide_api/src/db.rs#L88-L134 |
257 | 257 | ||
258 | Salsa input queries are defined in [`FilesDatabase`] (which is a part of | 258 | Salsa input queries are defined in [`FilesDatabase`] (which is a part of |
259 | `RootDatabase`). They closely mirror the familiar `AnalysisChange` structure: | 259 | `RootDatabase`). They closely mirror the familiar `Change` structure: |
260 | indeed, what `apply_change` does is it sets the values of input queries. | 260 | indeed, what `apply_change` does is it sets the values of input queries. |
261 | 261 | ||
262 | [`FilesDatabase`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/base_db/src/input.rs#L150-L174 | 262 | [`FilesDatabase`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/base_db/src/input.rs#L150-L174 |
diff --git a/docs/dev/lsp-extensions.md b/docs/dev/lsp-extensions.md index 78d86f060..164c8482e 100644 --- a/docs/dev/lsp-extensions.md +++ b/docs/dev/lsp-extensions.md | |||
@@ -1,5 +1,5 @@ | |||
1 | <!--- | 1 | <!--- |
2 | lsp_ext.rs hash: 91f2c62457e0a20f | 2 | lsp_ext.rs hash: d279d971d4f62cd7 |
3 | 3 | ||
4 | If you need to change the above hash to make the test pass, please check if you | 4 | If you need to change the above hash to make the test pass, please check if you |
5 | need to adjust this doc as well and ping this issue: | 5 | need to adjust this doc as well and ping this issue: |
@@ -19,6 +19,12 @@ Requests, which are likely to always remain specific to `rust-analyzer` are unde | |||
19 | 19 | ||
20 | If you want to be notified about the changes to this document, subscribe to [#4604](https://github.com/rust-analyzer/rust-analyzer/issues/4604). | 20 | If you want to be notified about the changes to this document, subscribe to [#4604](https://github.com/rust-analyzer/rust-analyzer/issues/4604). |
21 | 21 | ||
22 | ## UTF-8 offsets | ||
23 | |||
24 | rust-analyzer supports clangd's extension for opting into UTF-8 as the coordinate space for offsets (by default, LSP uses UTF-16 offsets). | ||
25 | |||
26 | https://clangd.llvm.org/extensions.html#utf-8-offsets | ||
27 | |||
22 | ## `initializationOptions` | 28 | ## `initializationOptions` |
23 | 29 | ||
24 | For `initializationOptions`, `rust-analyzer` expects `"rust-analyzer"` section of the configuration. | 30 | For `initializationOptions`, `rust-analyzer` expects `"rust-analyzer"` section of the configuration. |
@@ -238,7 +244,7 @@ As proper cursor positioning is raison-d'etat for `onEnter`, it uses `SnippetTex | |||
238 | * How to deal with synchronicity of the request? | 244 | * How to deal with synchronicity of the request? |
239 | One option is to require the client to block until the server returns the response. | 245 | One option is to require the client to block until the server returns the response. |
240 | Another option is to do a OT-style merging of edits from client and server. | 246 | Another option is to do a OT-style merging of edits from client and server. |
241 | A third option is to do a record-replay: client applies heuristic on enter immediatelly, then applies all user's keypresses. | 247 | A third option is to do a record-replay: client applies heuristic on enter immediately, then applies all user's keypresses. |
242 | When the server is ready with the response, the client rollbacks all the changes and applies the recorded actions on top of the correct response. | 248 | When the server is ready with the response, the client rollbacks all the changes and applies the recorded actions on top of the correct response. |
243 | * How to deal with multiple carets? | 249 | * How to deal with multiple carets? |
244 | * Should we extend this to arbitrary typed events and not just `onEnter`? | 250 | * Should we extend this to arbitrary typed events and not just `onEnter`? |
@@ -423,7 +429,7 @@ Reloads project information (that is, re-executes `cargo metadata`). | |||
423 | 429 | ||
424 | ```typescript | 430 | ```typescript |
425 | interface StatusParams { | 431 | interface StatusParams { |
426 | status: "loading" | "ready" | "invalid" | "needsReload", | 432 | status: "loading" | "readyPartial" | "ready" | "invalid" | "needsReload", |
427 | } | 433 | } |
428 | ``` | 434 | ``` |
429 | 435 | ||
diff --git a/docs/dev/style.md b/docs/dev/style.md index 21330948b..dd71e3932 100644 --- a/docs/dev/style.md +++ b/docs/dev/style.md | |||
@@ -6,6 +6,9 @@ Our approach to "clean code" is two-fold: | |||
6 | It is explicitly OK for a reviewer to flag only some nits in the PR, and then send a follow-up cleanup PR for things which are easier to explain by example, cc-ing the original author. | 6 | It is explicitly OK for a reviewer to flag only some nits in the PR, and then send a follow-up cleanup PR for things which are easier to explain by example, cc-ing the original author. |
7 | Sending small cleanup PRs (like renaming a single local variable) is encouraged. | 7 | Sending small cleanup PRs (like renaming a single local variable) is encouraged. |
8 | 8 | ||
9 | When reviewing pull requests, prefer extending this document to leaving | ||
10 | non-reusable comments on the pull request itself. | ||
11 | |||
9 | # General | 12 | # General |
10 | 13 | ||
11 | ## Scale of Changes | 14 | ## Scale of Changes |
@@ -38,7 +41,7 @@ For the second group, the change would be subjected to quite a bit of scrutiny a | |||
38 | The new API needs to be right (or at least easy to change later). | 41 | The new API needs to be right (or at least easy to change later). |
39 | The actual implementation doesn't matter that much. | 42 | The actual implementation doesn't matter that much. |
40 | It's very important to minimize the amount of changed lines of code for changes of the second kind. | 43 | It's very important to minimize the amount of changed lines of code for changes of the second kind. |
41 | Often, you start doing a change of the first kind, only to realise that you need to elevate to a change of the second kind. | 44 | Often, you start doing a change of the first kind, only to realize that you need to elevate to a change of the second kind. |
42 | In this case, we'll probably ask you to split API changes into a separate PR. | 45 | In this case, we'll probably ask you to split API changes into a separate PR. |
43 | 46 | ||
44 | Changes of the third group should be pretty rare, so we don't specify any specific process for them. | 47 | Changes of the third group should be pretty rare, so we don't specify any specific process for them. |
@@ -99,7 +102,7 @@ Of course, applying Clippy suggestions is welcome as long as they indeed improve | |||
99 | ## Minimal Tests | 102 | ## Minimal Tests |
100 | 103 | ||
101 | Most tests in rust-analyzer start with a snippet of Rust code. | 104 | Most tests in rust-analyzer start with a snippet of Rust code. |
102 | This snippets should be minimal -- if you copy-paste a snippet of real code into the tests, make sure to remove everything which could be removed. | 105 | These snippets should be minimal -- if you copy-paste a snippet of real code into the tests, make sure to remove everything which could be removed. |
103 | 106 | ||
104 | It also makes sense to format snippets more compactly (for example, by placing enum definitions like `enum E { Foo, Bar }` on a single line), | 107 | It also makes sense to format snippets more compactly (for example, by placing enum definitions like `enum E { Foo, Bar }` on a single line), |
105 | as long as they are still readable. | 108 | as long as they are still readable. |
@@ -139,13 +142,24 @@ There are many benefits to this: | |||
139 | 142 | ||
140 | Formatting ensures that you can use your editor's "number of selected characters" feature to correlate offsets with test's source code. | 143 | Formatting ensures that you can use your editor's "number of selected characters" feature to correlate offsets with test's source code. |
141 | 144 | ||
145 | ## Marked Tests | ||
146 | |||
147 | Use | ||
148 | [`mark::hit! / mark::check!`](https://github.com/rust-analyzer/rust-analyzer/blob/71fe719dd5247ed8615641d9303d7ca1aa201c2f/crates/test_utils/src/mark.rs) | ||
149 | when testing specific conditions. | ||
150 | Do not place several marks into a single test or condition. | ||
151 | Do not reuse marks between several tests. | ||
152 | |||
153 | **Rationale:** marks provide an easy way to find the canonical test for each bit of code. | ||
154 | This makes it much easier to understand. | ||
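The real macros live in `crates/test_utils`; as a rough, self-contained sketch of the idea (the names below are stand-ins, not the real API), a mark is a counter that the interesting branch bumps and exactly one test checks:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for a mark: the code under test "hits" it when the interesting
// branch is taken, and the canonical test "checks" that it actually fired.
static EMPTY_INPUT_MARK: AtomicUsize = AtomicUsize::new(0);

fn process(input: &str) -> bool {
    if input.is_empty() {
        // In rust-analyzer this would be `mark::hit!(empty_input)`.
        EMPTY_INPUT_MARK.fetch_add(1, Ordering::Relaxed);
        return false;
    }
    true
}

// The one test that owns this mark; `mark::check!` would fail the test
// if the marked branch was never reached.
fn check_empty_input_is_rejected() {
    let before = EMPTY_INPUT_MARK.load(Ordering::Relaxed);
    assert!(!process(""));
    assert_eq!(EMPTY_INPUT_MARK.load(Ordering::Relaxed), before + 1);
}
```

Because the mark ties one condition to one test, jumping from the `hit!` site to its `check!` site answers "which test exercises this branch?" immediately.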
155 | |||
142 | ## Function Preconditions | 156 | ## Function Preconditions |
143 | 157 | ||
144 | Express function preconditions in types and force the caller to provide them (rather than checking in callee): | 158 | Express function preconditions in types and force the caller to provide them (rather than checking in callee): |
145 | 159 | ||
146 | ```rust | 160 | ```rust |
147 | // GOOD | 161 | // GOOD |
148 | fn frbonicate(walrus: Walrus) { | 162 | fn frobnicate(walrus: Walrus) { |
149 | ... | 163 | ... |
150 | } | 164 | } |
151 | 165 | ||
@@ -213,12 +227,12 @@ if idx >= len { | |||
213 | } | 227 | } |
214 | ``` | 228 | ``` |
215 | 229 | ||
216 | **Rationale:** its useful to see the invariant relied upon by the rest of the function clearly spelled out. | 230 | **Rationale:** it's useful to see the invariant relied upon by the rest of the function clearly spelled out. |
217 | 231 | ||
218 | ## Assertions | 232 | ## Assertions |
219 | 233 | ||
220 | Assert liberally. | 234 | Assert liberally. |
221 | Prefer `stdx::assert_never!` to standard `assert!`. | 235 | Prefer `stdx::never!` to standard `assert!`. |
222 | 236 | ||
223 | ## Getters & Setters | 237 | ## Getters & Setters |
224 | 238 | ||
@@ -253,6 +267,20 @@ Non-local code properties degrade under change, privacy makes invariant local. | |||
253 | Borrowed own data discloses irrelevant details about origin of data. | 267 | Borrowed own data discloses irrelevant details about origin of data. |
254 | Irrelevant (neither right nor wrong) things obscure correctness. | 268 | Irrelevant (neither right nor wrong) things obscure correctness. |
255 | 269 | ||
270 | ## Useless Types | ||
271 | |||
272 | More generally, always prefer the types on the left: | ||
273 | |||
274 | ```rust | ||
275 | // GOOD BAD | ||
276 | &[T] &Vec<T> | ||
277 | &str &String | ||
278 | Option<&T> &Option<T> | ||
279 | ``` | ||
280 | |||
281 | **Rationale:** types on the left are strictly more general. | ||
282 | Even when generality is not required, consistency is important. | ||
283 | |||
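For instance, the same table applied to function signatures (the functions are illustrative):

```rust
// GOOD: `&[String]` and `&str` accept borrows of `Vec<String>`, arrays,
// `String`s, string literals, and so on.
fn count_short_names(names: &[String]) -> usize {
    names.iter().filter(|name| name.len() < 4).count()
}

fn first_char(text: &str) -> Option<char> {
    text.chars().next()
}

// BAD: `&Vec<String>` forces the caller to have exactly a `Vec` on hand,
// while telling the callee nothing it can actually use.
fn count_short_names_bad(names: &Vec<String>) -> usize {
    names.iter().filter(|name| name.len() < 4).count()
}
```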
256 | ## Constructors | 284 | ## Constructors |
257 | 285 | ||
258 | Prefer `Default` to zero-argument `new` function | 286 | Prefer `Default` to zero-argument `new` function |
@@ -280,6 +308,10 @@ Prefer `Default` even it has to be implemented manually. | |||
280 | 308 | ||
281 | **Rationale:** less typing in the common case, uniformity. | 309 | **Rationale:** less typing in the common case, uniformity. |
282 | 310 | ||
311 | Use `Vec::new` rather than `vec![]`. | ||
312 | |||
313 | **Rationale:** uniformity, strength reduction. | ||
314 | |||
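A sketch of both points (the types here are illustrative):

```rust
// GOOD: `#[derive(Default)]` instead of an empty `new`.
#[derive(Default)]
struct SymbolIndex {
    symbols: Vec<String>,
}

// When the default is non-trivial, still spell it as `Default`,
// implemented manually.
struct SearchLimits {
    max_results: usize,
}

impl Default for SearchLimits {
    fn default() -> SearchLimits {
        SearchLimits { max_results: 128 }
    }
}

fn empty_symbols() -> Vec<String> {
    Vec::new() // rather than `vec![]`
}
```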
283 | ## Functions Over Objects | 315 | ## Functions Over Objects |
284 | 316 | ||
285 | Avoid creating "doer" objects. | 317 | Avoid creating "doer" objects. |
@@ -336,13 +368,73 @@ impl ThingDoer { | |||
336 | 368 | ||
337 | **Rationale:** not bothering the caller with irrelevant details, not mixing user API with implementor API. | 369 | **Rationale:** not bothering the caller with irrelevant details, not mixing user API with implementor API. |
338 | 370 | ||
371 | ## Functions With Many Parameters | ||
372 | |||
373 | Avoid creating functions with many optional or boolean parameters. | ||
374 | Introduce a `Config` struct instead. | ||
375 | |||
376 | ```rust | ||
377 | // GOOD | ||
378 | pub struct AnnotationConfig { | ||
379 | pub binary_target: bool, | ||
380 | pub annotate_runnables: bool, | ||
381 | pub annotate_impls: bool, | ||
382 | } | ||
383 | |||
384 | pub fn annotations( | ||
385 | db: &RootDatabase, | ||
386 | file_id: FileId, | ||
387 | config: AnnotationConfig | ||
388 | ) -> Vec<Annotation> { | ||
389 | ... | ||
390 | } | ||
391 | |||
392 | // BAD | ||
393 | pub fn annotations( | ||
394 | db: &RootDatabase, | ||
395 | file_id: FileId, | ||
396 | binary_target: bool, | ||
397 | annotate_runnables: bool, | ||
398 | annotate_impls: bool, | ||
399 | ) -> Vec<Annotation> { | ||
400 | ... | ||
401 | } | ||
402 | ``` | ||
403 | |||
404 | **Rationale:** reducing churn. | ||
405 | If the function has many parameters, they most likely change frequently. | ||
406 | By packing them into a struct we protect all intermediary functions from changes. | ||
407 | |||
408 | Do not implement `Default` for the `Config` struct: the caller has more context to determine better defaults. | ||
409 | Do not store `Config` as part of the state; pass it explicitly. | ||
410 | This gives more flexibility for the caller. | ||
411 | |||
412 | If there is variation not only in the input parameters, but in the return type as well, consider introducing a `Command` type. | ||
413 | |||
414 | ```rust | ||
415 | // MAYBE GOOD | ||
416 | pub struct Query { | ||
417 | pub name: String, | ||
418 | pub case_sensitive: bool, | ||
419 | } | ||
420 | |||
421 | impl Query { | ||
422 | pub fn all(self) -> Vec<Item> { ... } | ||
423 | pub fn first(self) -> Option<Item> { ... } | ||
424 | } | ||
425 | |||
426 | // MAYBE BAD | ||
427 | fn query_all(name: String, case_sensitive: bool) -> Vec<Item> { ... } | ||
428 | fn query_first(name: String, case_sensitive: bool) -> Option<Item> { ... } | ||
429 | ``` | ||
430 | |||
339 | ## Avoid Monomorphization | 431 | ## Avoid Monomorphization |
340 | 432 | ||
341 | Avoid making a lot of code type parametric, *especially* on the boundaries between crates. | 433 | Avoid making a lot of code type parametric, *especially* on the boundaries between crates. |
342 | 434 | ||
343 | ```rust | 435 | ```rust |
344 | // GOOD | 436 | // GOOD |
345 | fn frbonicate(f: impl FnMut()) { | 437 | fn frobnicate(f: impl FnMut()) { |
346 | frobnicate_impl(&mut f) | 438 | frobnicate_impl(&mut f) |
347 | } | 439 | } |
348 | fn frobnicate_impl(f: &mut dyn FnMut()) { | 440 | fn frobnicate_impl(f: &mut dyn FnMut()) { |
@@ -350,7 +442,7 @@ fn frobnicate_impl(f: &mut dyn FnMut()) { | |||
350 | } | 442 | } |
351 | 443 | ||
352 | // BAD | 444 | // BAD |
353 | fn frbonicate(f: impl FnMut()) { | 445 | fn frobnicate(f: impl FnMut()) { |
354 | // lots of code | 446 | // lots of code |
355 | } | 447 | } |
356 | ``` | 448 | ``` |
@@ -359,11 +451,11 @@ Avoid `AsRef` polymorphism, it pays back only for widely used libraries: | |||
359 | 451 | ||
360 | ```rust | 452 | ```rust |
361 | // GOOD | 453 | // GOOD |
362 | fn frbonicate(f: &Path) { | 454 | fn frobnicate(f: &Path) { |
363 | } | 455 | } |
364 | 456 | ||
365 | // BAD | 457 | // BAD |
366 | fn frbonicate(f: impl AsRef<Path>) { | 458 | fn frobnicate(f: impl AsRef<Path>) { |
367 | } | 459 | } |
368 | ``` | 460 | ``` |
369 | 461 | ||
@@ -372,6 +464,14 @@ This allows for exceptionally good performance, but leads to increased compile t | |||
372 | Runtime performance obeys 80%/20% rule -- only a small fraction of code is hot. | 464 | Runtime performance obeys 80%/20% rule -- only a small fraction of code is hot. |
373 | Compile time **does not** obey this rule -- all code has to be compiled. | 465 | Compile time **does not** obey this rule -- all code has to be compiled. |
374 | 466 | ||
467 | ## Appropriate String Types | ||
468 | |||
469 | When interfacing with OS APIs, use `OsString`, even if the original source of data is UTF-8 encoded. | ||
470 | **Rationale:** cleanly delineates the boundary where the data crosses into OS-land. | ||
471 | |||
472 | Use `AbsPathBuf` and `AbsPath` over `std::Path`. | ||
473 | **Rationale:** rust-analyzer is a long-lived process which handles several projects at the same time. | ||
474 | It is important not to leak cwd by accident. | ||
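A minimal sketch of the `OsString` point (the helper is hypothetical):

```rust
use std::ffi::OsString;
use std::path::{Path, PathBuf};

// `name` arrives as UTF-8 (say, from a config file), but the result is
// destined for an OS API, so it crosses into `OsString` at the boundary
// instead of travelling further as a `String`.
fn output_path(dir: &Path, name: &str) -> OsString {
    let mut path = PathBuf::from(dir);
    path.push(name);
    path.into_os_string()
}
```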
375 | 475 | ||
376 | # Premature Pessimization | 476 | # Premature Pessimization |
377 | 477 | ||
@@ -418,12 +518,44 @@ fn frobnicate(s: &str) { | |||
418 | **Rationale:** reveals the costs. | 518 | **Rationale:** reveals the costs. |
419 | It is also more efficient when the caller already owns the allocation. | 519 | It is also more efficient when the caller already owns the allocation. |
420 | 520 | ||
421 | ## Collection types | 521 | ## Collection Types |
422 | 522 | ||
423 | Prefer `rustc_hash::FxHashMap` and `rustc_hash::FxHashSet` instead of the ones in `std::collections`. | 523 | Prefer `rustc_hash::FxHashMap` and `rustc_hash::FxHashSet` instead of the ones in `std::collections`. |
424 | 524 | ||
425 | **Rationale:** they use a hasher that's significantly faster and using them consistently will reduce code size by some small amount. | 525 | **Rationale:** they use a hasher that's significantly faster and using them consistently will reduce code size by some small amount. |
426 | 526 | ||
527 | ## Avoid Intermediate Collections | ||
528 | |||
529 | When writing a recursive function to compute a set of things, use an accumulator parameter instead of returning a fresh collection. | ||
530 | The accumulator goes first in the list of arguments. | ||
531 | |||
532 | ```rust | ||
533 | // GOOD | ||
534 | pub fn reachable_nodes(node: Node) -> FxHashSet<Node> { | ||
535 | let mut res = FxHashSet::default(); | ||
536 | go(&mut res, node); | ||
537 | res | ||
538 | } | ||
539 | fn go(acc: &mut FxHashSet<Node>, node: Node) { | ||
540 | acc.insert(node); | ||
541 | for n in node.neighbors() { | ||
542 | go(acc, n); | ||
543 | } | ||
544 | } | ||
545 | |||
546 | // BAD | ||
547 | pub fn reachable_nodes(node: Node) -> FxHashSet<Node> { | ||
548 | let mut res = FxHashSet::default(); | ||
549 | res.insert(node); | ||
550 | for n in node.neighbors() { | ||
551 | res.extend(reachable_nodes(n)); | ||
552 | } | ||
553 | res | ||
554 | } | ||
555 | ``` | ||
556 | |||
557 | **Rationale:** re-uses allocations; the accumulator style is more concise for complex cases. | ||
558 | |||
427 | # Style | 559 | # Style |
428 | 560 | ||
429 | ## Order of Imports | 561 | ## Order of Imports |
@@ -633,7 +765,7 @@ fn foo() -> Option<Bar> { | |||
633 | } | 765 | } |
634 | ``` | 766 | ``` |
635 | 767 | ||
636 | **Rationale:** reduce congnitive stack usage. | 768 | **Rationale:** reduce cognitive stack usage. |
637 | 769 | ||
638 | ## Comparisons | 770 | ## Comparisons |
639 | 771 | ||
diff --git a/docs/dev/syntax.md b/docs/dev/syntax.md index 1edafab68..737cc7a72 100644 --- a/docs/dev/syntax.md +++ b/docs/dev/syntax.md | |||
@@ -92,19 +92,18 @@ [email protected] | |||
92 | [email protected] ")" | 92 | [email protected] ")" |
93 | [email protected] " " | 93 | [email protected] " " |
94 | [email protected] | 94 | [email protected] |
95 | [email protected] | 95 | [email protected] "{" |
96 | [email protected] "{" | 96 | [email protected] " " |
97 | [email protected] " " | 97 | [email protected] |
98 | [email protected] | 98 | [email protected] |
99 | [email protected] | 99 | [email protected] "90" |
100 | [email protected] "90" | 100 | [email protected] " " |
101 | [email protected] " " | 101 | [email protected] "+" |
102 | [email protected] "+" | 102 | [email protected] " " |
103 | [email protected] " " | 103 | [email protected] |
104 | [email protected] | 104 | [email protected] "2" |
105 | [email protected] "2" | 105 | [email protected] " " |
106 | [email protected] " " | 106 | [email protected] "}" |
107 | [email protected] "}" | ||
108 | ``` | 107 | ``` |
109 | 108 | ||
110 | #### Optimizations | 109 | #### Optimizations |
@@ -387,7 +386,7 @@ trait HasVisibility: AstNode { | |||
387 | fn visibility(&self) -> Option<Visibility>; | 386 | fn visibility(&self) -> Option<Visibility>; |
388 | } | 387 | } |
389 | 388 | ||
390 | impl HasVisbility for FnDef { | 389 | impl HasVisibility for FnDef { |
391 | fn visibility(&self) -> Option<Visibility> { | 390 | fn visibility(&self) -> Option<Visibility> { |
392 | self.syntax.children().find_map(Visibility::cast) | 391 | self.syntax.children().find_map(Visibility::cast) |
393 | } | 392 | } |
@@ -527,7 +526,7 @@ In practice, incremental reparsing doesn't actually matter much for IDE use-case | |||
527 | 526 | ||
528 | ### Parsing Algorithm | 527 | ### Parsing Algorithm |
529 | 528 | ||
530 | We use a boring hand-crafted recursive descent + pratt combination, with a special effort of continuting the parsing if an error is detected. | 529 | We use a boring hand-crafted recursive descent + pratt combination, with a special effort of continuing the parsing if an error is detected. |
531 | 530 | ||
532 | ### Parser Recap | 531 | ### Parser Recap |
533 | 532 | ||