-rw-r--r--  guide.md  142
1 file changed, 71 insertions(+), 71 deletions(-)
diff --git a/guide.md b/guide.md
index 2c641cd2a..621a76046 100644
--- a/guide.md
+++ b/guide.md
@@ -96,37 +96,37 @@ Soon we'll talk how we build an LSP server on top of `Analysis`, but first,
let's deal with that paths issue.


## Source roots (a.k.a. "Filesystems are horrible")

This is a non-essential section, feel free to skip.

The previous section said that the filesystem path is an attribute of a file,
but this is not the whole truth. Making it an absolute `PathBuf` would be bad for
several reasons. First, filesystems are full of (platform-dependent) edge cases:

* it's hard (requires a syscall) to decide if two paths are equivalent
* some filesystems are case-insensitive (e.g. on macOS)
* paths are not necessarily UTF-8
* symlinks can form cycles

Second, this might hurt the reproducibility and hermeticity of builds. In theory,
moving a project from `/foo/bar/my-project` to `/spam/eggs/my-project` should
not change a bit in the output. However, if the absolute path is a part of the
input, it is at least in theory observable, and *could* affect the output.

Yet another problem is that we really *really* want to avoid doing I/O, but with
Rust the set of "input" files is not necessarily known up-front. In theory, you
can have `#[path="/dev/random"] mod foo;`.

To solve (or explicitly refuse to solve) these problems rust-analyzer uses the
concept of a "source root". Roughly speaking, source roots are the contents of a
directory on a file system, like `/home/matklad/projects/rustraytracer/**.rs`.

More precisely, all files (`FileId`s) are partitioned into disjoint
`SourceRoot`s. Each file has a relative UTF-8 path within its `SourceRoot`.
A `SourceRoot` has an identity (an integer ID). Crucially, the root path of the
source root itself is unknown to the analyzer: A client is supposed to maintain a
mapping between `SourceRoot` IDs (which are assigned by the client) and actual
`PathBuf`s. `SourceRoot`s give a sane tree model of the file system to the
analyzer.

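This split of responsibilities can be sketched with hypothetical types (the `FileId`, `SourceRootId`, and both structs below are illustrative stand-ins, not rust-analyzer's real definitions):

```rust
use std::collections::HashMap;
use std::path::PathBuf;

// Hypothetical ids, mirroring the shape described above.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct FileId(u32);
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct SourceRootId(u32);

// The analyzer's view: files keyed by *relative*, UTF-8 paths only.
#[derive(Default)]
struct SourceRoot {
    files: HashMap<String, FileId>, // e.g. "src/main.rs" -> FileId
}

// The client's view: only the client knows where a root lives on disk.
struct Client {
    root_paths: HashMap<SourceRootId, PathBuf>,
}

impl Client {
    // Resolve a (root, relative path) pair back to an absolute path.
    fn absolute_path(&self, root: SourceRootId, rel: &str) -> Option<PathBuf> {
        self.root_paths.get(&root).map(|base| base.join(rel))
    }
}
```

The analyzer only ever sees root-relative paths; resolving them back to real disk locations stays the client's job.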
@@ -136,7 +136,7 @@ the source root, even `/dev/random`.

## Language Server Protocol

Now let's see how the `Analysis` API is exposed via the JSON-RPC-based Language Server Protocol. The
hard part here is managing changes (which can come either from the file system
or from the editor) and concurrency (we want to spawn background jobs for things
like syntax highlighting). We use the event loop pattern to manage the zoo, and
@@ -151,7 +151,7 @@ Let's walk through a typical analyzer session!

First, we need to figure out what to analyze. To do this, we run `cargo
metadata` to learn about the Cargo packages for the current workspace and its
dependencies, and we run `rustc --print sysroot` and scan the "sysroot" (the
directory containing the current Rust toolchain's files) to learn about crates
like `std`. Currently we load this configuration once at the start of the
server, but it should be possible to dynamically reconfigure it later without a
restart.

@@ -162,7 +162,7 @@ it needs to be lowered to get the input in the form of `AnalysisChange`. This
happens in the [`ServerWorldState::new`] method. Specifically:

* Create a `SourceRoot` for each Cargo package and the sysroot.
* Schedule a filesystem scan of the roots.
* Create an analyzer's `Crate` for each Cargo **target** and sysroot crate.
* Set up dependencies between the crates.

@@ -170,18 +170,18 @@ happens in [`ServerWorldState::new`] method. Specifically
[`ServerWorldState::new`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/server_world.rs#L38-L160

The results of the scan (which may take a while) will be processed in the body
of the main loop, just like any other change. Here's where we handle:

* [File system changes](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L194)
* [Changes from the editor](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L377)

After a single loop's turn, we group the changes into one `AnalysisChange` and
[apply] it. This always happens on the main thread and blocks the loop.

[apply]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/server_world.rs#L216
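The per-turn batching can be sketched like this (the type and the `batch` helper are hypothetical; the real `AnalysisChange` carries much more, such as added and removed files and roots):

```rust
// A sketch of per-turn batching: collect every event that arrived during
// this loop turn into one change, so "apply" runs once per turn rather
// than once per event.
#[derive(Default)]
struct AnalysisChange {
    changed_files: Vec<(u32, String)>, // (file id, new text)
}

fn batch(pending: &mut Vec<(u32, String)>) -> AnalysisChange {
    let mut change = AnalysisChange::default();
    // Move all pending edits into a single change, emptying the queue.
    change.changed_files.append(pending);
    change
}
```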
182 182
183To handle requests, like ["goto definition"], we create an instance of the 183To handle requests, like ["goto definition"], we create an instance of the
184`Analysis` and [`schedule`] the task (which consumes `Analysis`) onto 184`Analysis` and [`schedule`] the task (which consumes `Analysis`) on the
185threadpool. [The task] calls the corresponding `Analysis` method, while 185threadpool. [The task] calls the corresponding `Analysis` method, while
186massaging the types into the LSP representation. Keep in mind that if we are 186massaging the types into the LSP representation. Keep in mind that if we are
187executing "goto definition" on the threadpool and a new change comes in, the 187executing "goto definition" on the threadpool and a new change comes in, the
@@ -197,7 +197,7 @@ dig into the implementation!

## Salsa

The most straightforward way to implement an "apply change, get analysis, repeat"
API would be to maintain the input state and to compute all possible analysis
information from scratch after every change. This works, but scales poorly with
the size of the project. To make this fast, we need to take advantage of the
@@ -205,16 +205,16 @@ fact that most of the changes are small, and that analysis results are unlikely
to change significantly between invocations.

To do this we use [salsa]: a framework for incremental on-demand computation.
You can skip the rest of the section if you are familiar with rustc's red-green
algorithm (which is used for incremental compilation).

[salsa]: https://github.com/salsa-rs/salsa

It's better to refer to salsa's docs to learn about it. Here's a small excerpt:

The key idea of salsa is that you define your program as a set of queries. Every
query is used like a function `K -> V` that maps from some key of type `K` to a
value of type `V`. Queries come in two basic varieties:

* **Inputs**: the base inputs to your system. You can change these whenever you
  like.
@@ -254,23 +254,23 @@ indeed, what `apply_change` does is it sets the values of input queries.

## From text to semantic model

The bulk of rust-analyzer consists of transforming input text into a semantic model of
Rust code: a web of entities like modules, structs, functions and traits.

An important fact to realize is that (unlike in most other languages, such as C# or
Java) there isn't a one-to-one mapping between source code and the semantic model. A
single function definition in the source code might result in several semantic
functions: for example, the same source file might be included as a module in
several crates, or a single "crate" might be present in the compilation DAG
several times, with different sets of `cfg`s enabled. The IDE-specific task of
mapping a source code position into the semantic model is inherently imprecise for
this reason, and is handled by the [`source_binder`].

[`source_binder`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/source_binder.rs

The semantic interface is declared in the [`code_model_api`] module. Each entity is
identified by an integer ID and has a bunch of methods which take a salsa database
as an argument and return other entities (which are also IDs). Internally, these
methods invoke various queries on the database to build the model on demand.
Here's [the list of queries].

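That style of API can be sketched as follows (all the names here are illustrative, not the real `code_model_api` types):

```rust
// An entity is just an integer ID; its methods take the database and
// return other entities.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Module(u32);
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Function(u32);

// A stand-in for the salsa database.
struct ModelDb {
    // function id -> id of the module that contains it
    fn_container: Vec<u32>,
}

impl Function {
    // The handle carries no data; the answer is looked up (in the real
    // system: computed by a salsa query) on demand.
    fn module(self, db: &ModelDb) -> Module {
        Module(db.fn_container[self.0 as usize])
    }
}
```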
@@ -287,7 +287,7 @@ syntax. For this reason, rust-analyzer can build a syntax tree for each "source
file", which could then be reused by several semantic models if this file
happens to be a part of several crates.

The representation of syntax trees that rust-analyzer uses is similar to that of `Roslyn`
and Swift's new [libsyntax]. Swift's docs give an excellent overview of the
approach, so I'll skip this part here and instead outline the main characteristics
of the syntax trees:
@@ -328,9 +328,9 @@ The next step in constructing the semantic model is ...
## Building a Module Tree

The algorithm for building a tree of modules is to start with a crate root
(remember, each `Crate` from a `CrateGraph` has a `FileId`), collect all `mod`
declarations and recursively process child modules. This is handled by the
[`module_tree_query`], with two slight variations.

[`module_tree_query`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/module_tree.rs#L116-L123

@@ -358,41 +358,41 @@ declarations.
[`submodules_query`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/module_tree.rs#L41

We store the resulting modules in a `Vec`-based indexed arena. The indices in
the arena become module IDs. And this brings us to the next topic:
assigning IDs in the general case.
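A minimal version of such an arena might look like this (illustrative only; rust-analyzer uses its own arena type):

```rust
// A Vec-backed arena where the index doubles as the ID.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct ModuleId(u32);

struct Arena<T> {
    data: Vec<T>,
}

impl<T> Arena<T> {
    fn new() -> Arena<T> {
        Arena { data: Vec::new() }
    }
    // Pushing a value hands back its index as a cheap, `Copy` ID.
    fn alloc(&mut self, value: T) -> ModuleId {
        let id = ModuleId(self.data.len() as u32);
        self.data.push(value);
        id
    }
    fn get(&self, id: ModuleId) -> &T {
        &self.data[id.0 as usize]
    }
}
```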

## Location Interner pattern

One way to assign IDs is how we've dealt with modules: Collect all items into a
single array in some specific order and use the index in the array as an ID. The
main drawback of this approach is that these IDs are not stable: Adding a new item can
shift the IDs of all other items. This works for modules, because adding a module is
a comparatively rare operation, but it would be less convenient for, for example,
functions.

Another solution here is positional IDs: We can identify a function as "the
function with the name `foo` in the `ModuleId(92)` module". Such locations are stable:
adding a new function to the module (unless it is also named `foo`) does not
change the location. However, such an "ID" ceases to be a `Copy`able integer and in
general can become pretty large if we account for nesting (for example: "the
third parameter of the `foo` function of the `bar` `impl` in the `baz` module").

[`LocationInterner`] allows us to combine the benefits of positional and numeric
IDs. It is a bidirectional append-only map between locations and consecutive
integers which can "intern" a location and return an integer ID back. The salsa
database we use includes a couple of [interners]. How to "garbage collect"
unused locations is an open question.

[`LocationInterner`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_db/src/loc2id.rs#L65-L71
[interners]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/db.rs#L22-L23

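The core of the pattern can be sketched as follows (a simplified stand-in for the linked `LocationInterner`, not its real definition):

```rust
use std::collections::HashMap;
use std::hash::Hash;

// A bidirectional append-only map between locations and consecutive ids.
struct Interner<L: Clone + Eq + Hash> {
    loc_to_id: HashMap<L, u32>,
    id_to_loc: Vec<L>,
}

impl<L: Clone + Eq + Hash> Interner<L> {
    fn new() -> Self {
        Interner { loc_to_id: HashMap::new(), id_to_loc: Vec::new() }
    }
    // Interning the same location twice yields the same id.
    fn intern(&mut self, loc: L) -> u32 {
        if let Some(&id) = self.loc_to_id.get(&loc) {
            return id;
        }
        let id = self.id_to_loc.len() as u32;
        self.id_to_loc.push(loc.clone());
        self.loc_to_id.insert(loc, id);
        id
    }
    // The reverse direction: id back to location.
    fn lookup(&self, id: u32) -> &L {
        &self.id_to_loc[id as usize]
    }
}
```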
For example, we use the `LocationInterner` to assign IDs to definitions of functions,
structs, enums, etc. The location, [`DefLoc`], contains two bits of information:

* the ID of the module which contains the definition,
* the ID of the specific item in the module's source code.

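As a hypothetical struct (not the real `DefLoc`), such a location might look like:

```rust
// The two bullets above as a struct: a location is the containing
// module's ID plus the item's index in that module's source. Being small,
// `Eq` and `Hash`, it is exactly the kind of value a location interner
// can map to a single integer.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct DefLoc {
    module: u32,     // ID of the containing module
    item_index: u32, // index of the item within the module's items
}
```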
We *could* use a text offset for the location of a particular item, but that would play
badly with salsa: offsets change after edits. So, as a rule of thumb, we avoid
using offsets, text ranges or syntax trees as keys and values for queries. What
we do instead is store the "index" of the item among all of the items of a file
@@ -402,7 +402,7 @@ we do instead is we store "index" of the item among all of the items of a file

One thing we've glossed over for the time being is support for macros. We have
only proof-of-concept handling of macros at the moment, but they are extremely
interesting from an "assigning IDs" perspective.

## Macros and recursive locations

@@ -440,7 +440,7 @@ macro-generated file, we can discuss name resolution a bit.

Name resolution faces the same problem as the module tree: if we look at the
syntax tree directly, we'll have to recompute name resolution after every
modification. The solution to the problem is the same: We [lower] the source code of
each module into a position-independent representation which does not change if
we modify the bodies of the items. After that we [loop], resolving all imports until
we've reached a fixed point.
@@ -448,20 +448,20 @@ we've reached a fixed point.
[lower]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/lower.rs#L113-L117
[loop]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres.rs#L186-L196
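The lower-then-iterate idea can be sketched with a toy resolver (all types here are hypothetical; real name resolution also handles globs, visibility, macros, and much more):

```rust
use std::collections::HashMap;

// Each module's scope maps names to item ids; an import copies a name
// from one module's scope into another's.
struct Import {
    into: usize,        // module that declares the `use`
    from: usize,        // module the path starts in
    name: &'static str, // the imported name
}

fn resolve(
    mut scopes: Vec<HashMap<&'static str, u32>>,
    imports: &[Import],
) -> Vec<HashMap<&'static str, u32>> {
    loop {
        let mut changed = false;
        for imp in imports {
            let found = scopes[imp.from].get(imp.name).copied();
            if let Some(item) = found {
                // Record the import; note whether anything new appeared.
                if scopes[imp.into].insert(imp.name, item) != Some(item) {
                    changed = true;
                }
            }
        }
        if !changed {
            break; // fixed point: another pass would change nothing
        }
    }
    scopes
}
```

Import chains that a single pass would miss get picked up on later iterations, which is exactly why the loop runs until nothing changes.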

And, given all our preparation with IDs and a position-independent representation,
it is satisfying to [test] that typing inside a function body does not invalidate
name resolution results.

[test]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/tests.rs#L376

An interesting fact about name resolution is that it "erases" all of the
intermediate paths from the imports: in the end, we know which items are defined
and which items are imported in each module, but, if the import was `use
foo::bar::baz`, we deliberately forget what modules `foo` and `bar` resolve to.

To serve "goto definition" requests on intermediate segments we need this info
in the IDE, however. Luckily, we need it only for a tiny fraction of imports, so we just ask
the module explicitly: "What does the path `foo::bar` resolve to?". This is a
general pattern: we try to compute the minimal possible amount of information
during analysis while allowing the IDE to ask for additional specific bits.

@@ -476,9 +476,9 @@ store the syntax node as a part of name resolution: this will break
incrementality, due to the fact that the syntax changes after every file
modification.

We solve this problem during the lowering step of name resolution. The lowering
query actually produces a *pair* of outputs: a `LoweredModule` and a [`SourceMap`].
The `LoweredModule` contains [imports], but in a position-independent form.
The `SourceMap` contains a mapping from position-independent imports to
(position-dependent) syntax nodes.

@@ -500,20 +500,20 @@ by [@flodiebold]. [#327] was an awesome Christmas present, thank you, Florian!
Type inference runs at per-function granularity and uses the patterns we've
discussed previously.

First, we [lower the AST] of a function body into a position-independent
representation. In this representation, each expression is assigned a
[positional ID]. Alongside the lowered expression, [a source map] is produced,
which maps between expression IDs and the original syntax. This lowering step also
deals with "incomplete" source trees by replacing missing expressions with an
explicit `Missing` expression.

Given the lowered body of the function, we can now run [type inference] and
construct a mapping from `ExprId`s to types.

[@flodiebold]: https://github.com/flodiebold
[#327]: https://github.com/rust-analyzer/rust-analyzer/pull/327
[lower the AST]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/expr.rs
[positional ID]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/expr.rs#L13-L15
[a source map]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/expr.rs#L41-L44
[type inference]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/ty.rs#L1208-L1223

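The lowered body plus its source map can be sketched like this (hypothetical shapes; the real ones live in the linked `expr.rs`):

```rust
// Expressions get positional IDs; holes in the source become an explicit
// `Missing` expression.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct ExprId(u32);

#[derive(Debug, PartialEq)]
enum Expr {
    Literal(i64),
    Missing, // stands in for an expression the parser could not find
}

struct Body {
    exprs: Vec<Expr>,
    // ExprId -> byte offset of the original syntax node, if any;
    // `Missing` expressions have no corresponding syntax.
    source_map: Vec<Option<usize>>,
}

impl Body {
    fn new() -> Body {
        Body { exprs: Vec::new(), source_map: Vec::new() }
    }
    fn alloc(&mut self, expr: Expr, source_offset: Option<usize>) -> ExprId {
        let id = ExprId(self.exprs.len() as u32);
        self.exprs.push(expr);
        self.source_map.push(source_offset);
        id
    }
}
```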
@@ -524,15 +524,15 @@ To conclude the overview of the rust-analyzer, let's trace the request for

We start by [receiving a message] from the language client. We decode the
message as a request for completion and [schedule it on the threadpool]. This is
also the place where we [catch] the canceled error if, immediately after completion, the
client sends some modification.

In [the handler] we deserialize the LSP request into rust-analyzer-specific data
types (by converting a file URL into a numeric `FileId`), [ask analysis for
completion] and serialize the results back to LSP.

The [completion implementation] is finally the place where we start doing the actual
work. The first step is to collect the `CompletionContext` -- a struct which
describes the cursor position in terms of Rust syntax and semantics. For
example, `function_syntax: Option<&'a ast::FnDef>` stores a reference to the
enclosing function *syntax*, while `function: Option<hir::Function>` is the
@@ -541,14 +541,14 @@ enclosing function *syntax*, while `function: Option<hir::Function>` is the
To construct the context, we first do an ["IntelliJ Trick"]: we insert a dummy
identifier at the cursor's position and parse this modified file, to get a
reasonably-looking syntax tree. Then we do a bunch of "classification" routines
to figure out the context. For example, we [find an ancestor `fn` node] and we get a
[semantic model] for it (using the lossy `source_binder` infrastructure).
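The trick itself is essentially string surgery, roughly like this (a hypothetical helper with a made-up dummy identifier; the real code patches the file before reparsing):

```rust
// Splice an identifier-shaped token in at the cursor so the parser
// produces a reasonable tree even for input like `foo.`.
fn insert_dummy_ident(text: &str, cursor: usize) -> String {
    let mut patched = String::with_capacity(text.len() + 16);
    patched.push_str(&text[..cursor]);
    patched.push_str("raDummyIdent"); // any identifier-shaped token works
    patched.push_str(&text[cursor..]);
    patched
}
```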

The second step is to run a [series of independent completion routines]. Let's
take a closer look at [`complete_dot`], which completes fields and methods in
`foo.bar|`. First we extract a semantic function and a syntactic receiver
expression out of the `Context`. Then we run type inference for this single
function and map our syntactic expression to an `ExprId`. Using the ID, we figure
out the type of the receiver expression. Then we add all fields & methods from
the type to the completion list.

@@ -557,10 +557,10 @@ the type to completion.
[catch]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L436-L442
[the handler]: https://salsa.zulipchat.com/#narrow/stream/181542-rfcs.2Fsalsa-query-group/topic/design.20next.20steps
[ask analysis for completion]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/lib.rs#L439-L444
[completion implementation]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion.rs#L46-L62
[`CompletionContext`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L14-L37
["IntelliJ Trick"]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L72-L75
[find an ancestor `fn` node]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L116-L120
[semantic model]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L123
[series of independent completion routines]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion.rs#L52-L59
[`complete_dot`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/complete_dot.rs#L6-L22