From 91203699eccf63ee21fee236f493c361c64b5d86 Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 09:37:51 +0300 Subject: introduce docs dir --- ARCHITECTURE.md | 200 ----------------- CONTRIBUTING.md | 18 -- DEBUGGING.md | 62 ----- README.md | 72 ------ ROADMAP.md | 77 ------- docs/dev/ARCHITECTURE.md | 200 +++++++++++++++++ docs/dev/CONTRIBUTING.md | 18 ++ docs/dev/DEBUGGING.md | 62 +++++ docs/dev/ROADMAP.md | 77 +++++++ docs/dev/guide.md | 575 +++++++++++++++++++++++++++++++++++++++++++++++ docs/dev/lsp-features.md | 74 ++++++ docs/user/README.md | 241 ++++++++++++++++++++ editors/README.md | 241 -------------------- guide.md | 575 ----------------------------------------------- 14 files changed, 1247 insertions(+), 1245 deletions(-) delete mode 100644 ARCHITECTURE.md delete mode 100644 CONTRIBUTING.md delete mode 100644 DEBUGGING.md delete mode 100644 ROADMAP.md create mode 100644 docs/dev/ARCHITECTURE.md create mode 100644 docs/dev/CONTRIBUTING.md create mode 100644 docs/dev/DEBUGGING.md create mode 100644 docs/dev/ROADMAP.md create mode 100644 docs/dev/guide.md create mode 100644 docs/dev/lsp-features.md create mode 100644 docs/user/README.md delete mode 100644 editors/README.md delete mode 100644 guide.md diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md deleted file mode 100644 index 57f76ebae..000000000 --- a/ARCHITECTURE.md +++ /dev/null @@ -1,200 +0,0 @@ -# Architecture - -This document describes the high-level architecture of rust-analyzer. -If you want to familiarize yourself with the code base, you are just -in the right place! - -See also the [guide](./guide.md), which walks through a particular snapshot of -rust-analyzer code base. - -For syntax-trees specifically, there's a [video walk -through](https://youtu.be/DGAuLWdCCAI) as well. - -## The Big Picture - -![](https://user-images.githubusercontent.com/1711539/50114578-e8a34280-0255-11e9-902c-7cfc70747966.png) - -On the highest level, rust-analyzer is a thing which accepts input source code -from the client and produces a structured semantic model of the code. - -More specifically, input data consists of a set of test files (`(PathBuf, -String)` pairs) and information about project structure, captured in the so called -`CrateGraph`. The crate graph specifies which files are crate roots, which cfg -flags are specified for each crate (TODO: actually implement this) and what -dependencies exist between the crates. The analyzer keeps all this input data in -memory and never does any IO. Because the input data is source code, which -typically measures in tens of megabytes at most, keeping all input data in -memory is OK. - -A "structured semantic model" is basically an object-oriented representation of -modules, functions and types which appear in the source code. This representation -is fully "resolved": all expressions have types, all references are bound to -declarations, etc. - -The client can submit a small delta of input data (typically, a change to a -single file) and get a fresh code model which accounts for changes. - -The underlying engine makes sure that model is computed lazily (on-demand) and -can be quickly updated for small modifications. - - -## Code generation - -Some of the components of this repository are generated through automatic -processes. These are outlined below: - -- `gen-syntax`: The kinds of tokens that are reused in several places, so a generator - is used. 
We use tera templates to generate the files listed below, based on - the grammar described in [grammar.ron]: - - [ast/generated.rs][ast generated] in `ra_syntax` based on - [ast/generated.tera.rs][ast source] - - [syntax_kinds/generated.rs][syntax_kinds generated] in `ra_syntax` based on - [syntax_kinds/generated.tera.rs][syntax_kinds source] - -[tera]: https://tera.netlify.com/ -[grammar.ron]: ./crates/ra_syntax/src/grammar.ron -[ast generated]: ./crates/ra_syntax/src/ast/generated.rs -[ast source]: ./crates/ra_syntax/src/ast/generated.rs.tera -[syntax_kinds generated]: ./crates/ra_syntax/src/syntax_kinds/generated.rs -[syntax_kinds source]: ./crates/ra_syntax/src/syntax_kinds/generated.rs.tera - - -## Code Walk-Through - -### `crates/ra_syntax` - -Rust syntax tree structure and parser. See -[RFC](https://github.com/rust-lang/rfcs/pull/2256) for some design notes. - -- [rowan](https://github.com/rust-analyzer/rowan) library is used for constructing syntax trees. -- `grammar` module is the actual parser. It is a hand-written recursive descent parser, which - produces a sequence of events like "start node X", "finish not Y". It works similarly to [kotlin's parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java), - which is a good source of inspiration for dealing with syntax errors and incomplete input. Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs) - is what we use for the definition of the Rust language. -- `parser_api/parser_impl` bridges the tree-agnostic parser from `grammar` with `rowan` trees. - This is the thing that turns a flat list of events into a tree (see `EventProcessor`) -- `ast` provides a type safe API on top of the raw `rowan` tree. -- `grammar.ron` RON description of the grammar, which is used to - generate `syntax_kinds` and `ast` modules, using `cargo gen-syntax` command. -- `algo`: generic tree algorithms, including `walk` for O(1) stack - space tree traversal (this is cool) and `visit` for type-driven - visiting the nodes (this is double plus cool, if you understand how - `Visitor` works, you understand the design of syntax trees). - -Tests for ra_syntax are mostly data-driven: `tests/data/parser` contains a bunch of `.rs` -(test vectors) and `.txt` files with corresponding syntax trees. During testing, we check -`.rs` against `.txt`. If the `.txt` file is missing, it is created (this is how you update -tests). Additionally, running `cargo gen-tests` will walk the grammar module and collect -all `//test test_name` comments into files inside `tests/data` directory. - -See [#93](https://github.com/rust-analyzer/rust-analyzer/pull/93) for an example PR which -fixes a bug in the grammar. - -### `crates/ra_db` - -We use the [salsa](https://github.com/salsa-rs/salsa) crate for incremental and -on-demand computation. Roughly, you can think of salsa as a key-value store, but -it also can compute derived values using specified functions. The `ra_db` crate -provides basic infrastructure for interacting with salsa. Crucially, it -defines most of the "input" queries: facts supplied by the client of the -analyzer. Reading the docs of the `ra_db::input` module should be useful: -everything else is strictly derived from those inputs. - -### `crates/ra_hir` - -HIR provides high-level "object oriented" access to Rust code. 
- -The principal difference between HIR and syntax trees is that HIR is bound to a -particular crate instance. That is, it has cfg flags and features applied (in -theory, in practice this is to be implemented). So, the relation between -syntax and HIR is many-to-one. The `source_binder` module is responsible for -guessing a HIR for a particular source position. - -Underneath, HIR works on top of salsa, using a `HirDatabase` trait. - -### `crates/ra_ide_api` - -A stateful library for analyzing many Rust files as they change. `AnalysisHost` -is a mutable entity (clojure's atom) which holds the current state, incorporates -changes and hands out `Analysis` --- an immutable and consistent snapshot of -the world state at a point in time, which actually powers analysis. - -One interesting aspect of analysis is its support for cancellation. When a -change is applied to `AnalysisHost`, first all currently active snapshots are -canceled. Only after all snapshots are dropped the change actually affects the -database. - -APIs in this crate are IDE centric: they take text offsets as input and produce -offsets and strings as output. This works on top of rich code model powered by -`hir`. - -### `crates/ra_ide_api_light` - -All IDE features which can be implemented if you only have access to a single -file. `ra_ide_api_light` could be used to enhance editing of Rust code without -the need to fiddle with build-systems, file synchronization and such. - -In a sense, `ra_ide_api_light` is just a bunch of pure functions which take a -syntax tree as input. - -The tests for `ra_ide_api_light` are `#[cfg(test)] mod tests` unit-tests spread -throughout its modules. - - -### `crates/ra_lsp_server` - -An LSP implementation which wraps `ra_ide_api` into a langauge server protocol. - -### `crates/ra_vfs` - -Although `hir` and `ra_ide_api` don't do any IO, we need to be able to read -files from disk at the end of the day. This is what `ra_vfs` does. It also -manages overlays: "dirty" files in the editor, whose "true" contents is -different from data on disk. - -### `crates/gen_lsp_server` - -A language server scaffold, exposing a synchronous crossbeam-channel based API. -This crate handles protocol handshaking and parsing messages, while you -control the message dispatch loop yourself. - -Run with `RUST_LOG=sync_lsp_server=debug` to see all the messages. - -### `crates/ra_cli` - -A CLI interface to rust-analyzer. - -### `crate/tools` - -Custom Cargo tasks used to develop rust-analyzer: - -- `cargo gen-syntax` -- generate `ast` and `syntax_kinds` -- `cargo gen-tests` -- collect inline tests from grammar -- `cargo install-code` -- build and install VS Code extension and server - -### `editors/code` - -VS Code plugin - - -## Common workflows - -To try out VS Code extensions, run `cargo install-code`. This installs both the -`ra_lsp_server` binary and the VS Code extension. To install only the binary, use -`cargo install-lsp` (shorthand for `cargo install --path crates/ra_lsp_server --force`) - -To see logs from the language server, set `RUST_LOG=info` env variable. To see -all communication between the server and the client, use -`RUST_LOG=gen_lsp_server=debug` (this will print quite a bit of stuff). - -There's `rust-analyzer: status` command which prints common high-level debug -info. 
In particular, it prints info about memory usage of various data -structures, and, if compiled with jemalloc support (`cargo jinstall-lsp` or -`cargo install --path crates/ra_lsp_server --force --features jemalloc`), includes - statistic about the heap. - -To run tests, just `cargo test`. - -To work on the VS Code extension, launch code inside `editors/code` and use `F5` to -launch/debug. To automatically apply formatter and linter suggestions, use `npm -run fix`. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md deleted file mode 100644 index a2efc7afa..000000000 --- a/CONTRIBUTING.md +++ /dev/null @@ -1,18 +0,0 @@ -The project is in its early stages: contributions are welcome and would be -**very** helpful, but the project is not _yet_ optimized for contribution. -Moreover, it is doubly experimental, so there's no guarantee that any work here -would reach production. - -To get an idea of how rust-analyzer works, take a look at the [ARCHITECTURE.md](./ARCHITECTURE.md) -document. - -Useful labels on the issue tracker: - * [E-mentor](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-mentor) - issues have links to the code in question and tests, - * [E-easy](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-easy), - [E-medium](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-medium), - [E-hard](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-hard), - labels are *estimates* for how hard would be to write a fix. - -There's no formal PR check list: everything that passes CI (we use [bors](https://bors.tech/)) is valid, -but it's a good idea to write nice commit messages, test code thoroughly, maintain consistent style, etc. diff --git a/DEBUGGING.md b/DEBUGGING.md deleted file mode 100644 index f868e6998..000000000 --- a/DEBUGGING.md +++ /dev/null @@ -1,62 +0,0 @@ -# Debugging vs Code plugin and the Language Server - -Install [LLDB](https://lldb.llvm.org/) and the [LLDB Extension](https://marketplace.visualstudio.com/items?itemName=vadimcn.vscode-lldb). - -Checkout rust rust-analyzer and open it in vscode. - -``` -$ git clone https://github.com/rust-analyzer/rust-analyzer.git --depth 1 -$ cd rust-analyzer -$ code . -``` - -- To attach to the `lsp server` in linux you'll have to run: - - `echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope` - - This enables ptrace on non forked processes - -- Ensure the dependencies for the extension are installed, run the `npm: install - editors/code` task in vscode. - -- Launch the `Debug Extension`, this will build the extension and the `lsp server`. - -- A new instance of vscode with `[Extension Development Host]` in the title. - - Don't worry about disabling `rls` all other extensions will be disabled but this one. - -- In the new vscode instance open a rust project, and navigate to a rust file - -- In the original vscode start an additional debug session (the three periods in the launch) and select `Debug Lsp Server`. - -- A list of running processes should appear select the `ra_lsp_server` from this repo. - -- Navigate to `crates/ra_lsp_server/src/main_loop.rs` and add a breakpoint to the `on_task` function. - -- Go back to the `[Extension Development Host]` instance and hover over a rust variable and your breakpoint should hit. 
- -## Demo - -![demonstration of debugging](https://user-images.githubusercontent.com/1711539/51384036-254fab80-1b2c-11e9-824d-95f9a6e9cf4f.gif) - -## Troubleshooting - -### Can't find the `ra_lsp_server` process - -It could be a case of just jumping the gun. - -The `ra_lsp_server` is only started once the `onLanguage:rust` activation. - -Make sure you open a rust file in the `[Extension Development Host]` and try again. - -### Can't connect to `ra_lsp_server` - -Make sure you have run `echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope`. - -By default this should reset back to 1 everytime you log in. - -### Breakpoints are never being hit - -Check your version of `lldb` if it's version 6 and lower use the `classic` adapter type. -It's `lldb.adapterType` in settings file. - -If you're running `lldb` version 7 change the lldb adapter type to `bundled` or `native`. diff --git a/README.md b/README.md index 5bc90a3f0..acce7219e 100644 --- a/README.md +++ b/README.md @@ -50,78 +50,6 @@ https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frls-2.2E0 See [CONTRIBUTING.md](./CONTRIBUTING.md) and [ARCHITECTURE.md](./ARCHITECTURE.md) -## Supported LSP features - -### General -- [x] [initialize](https://microsoft.github.io/language-server-protocol/specification#initialize) -- [x] [initialized](https://microsoft.github.io/language-server-protocol/specification#initialized) -- [x] [shutdown](https://microsoft.github.io/language-server-protocol/specification#shutdown) -- [ ] [exit](https://microsoft.github.io/language-server-protocol/specification#exit) -- [x] [$/cancelRequest](https://microsoft.github.io/language-server-protocol/specification#cancelRequest) - -### Workspace -- [ ] [workspace/workspaceFolders](https://microsoft.github.io/language-server-protocol/specification#workspace_workspaceFolders) -- [ ] [workspace/didChangeWorkspaceFolders](https://microsoft.github.io/language-server-protocol/specification#workspace_didChangeWorkspaceFolders) -- [x] [workspace/didChangeConfiguration](https://microsoft.github.io/language-server-protocol/specification#workspace_didChangeConfiguration) -- [ ] [workspace/configuration](https://microsoft.github.io/language-server-protocol/specification#workspace_configuration) -- [x] [workspace/didChangeWatchedFiles](https://microsoft.github.io/language-server-protocol/specification#workspace_didChangeWatchedFiles) -- [x] [workspace/symbol](https://microsoft.github.io/language-server-protocol/specification#workspace_symbol) -- [x] [workspace/executeCommand](https://microsoft.github.io/language-server-protocol/specification#workspace_executeCommand) - - `apply_code_action` -- [ ] [workspace/applyEdit](https://microsoft.github.io/language-server-protocol/specification#workspace_applyEdit) - -### Text Synchronization -- [x] [textDocument/didOpen](https://microsoft.github.io/language-server-protocol/specification#textDocument_didOpen) -- [x] [textDocument/didChange](https://microsoft.github.io/language-server-protocol/specification#textDocument_didChange) -- [ ] [textDocument/willSave](https://microsoft.github.io/language-server-protocol/specification#textDocument_willSave) -- [ ] [textDocument/willSaveWaitUntil](https://microsoft.github.io/language-server-protocol/specification#textDocument_willSaveWaitUntil) -- [x] [textDocument/didSave](https://microsoft.github.io/language-server-protocol/specification#textDocument_didSave) -- [x] [textDocument/didClose](https://microsoft.github.io/language-server-protocol/specification#textDocument_didClose) - -### 
Diagnostics -- [x] [textDocument/publishDiagnostics](https://microsoft.github.io/language-server-protocol/specification#textDocument_publishDiagnostics) - -### Lanuguage Features -- [x] [textDocument/completion](https://microsoft.github.io/language-server-protocol/specification#textDocument_completion) - - open close: false - - change: Full - - will save: false - - will save wait until: false - - save: false -- [x] [completionItem/resolve](https://microsoft.github.io/language-server-protocol/specification#completionItem_resolve) - - resolve provider: none - - trigger characters: `:`, `.` -- [x] [textDocument/hover](https://microsoft.github.io/language-server-protocol/specification#textDocument_hover) -- [x] [textDocument/signatureHelp](https://microsoft.github.io/language-server-protocol/specification#textDocument_signatureHelp) - - trigger characters: `(`, `,`, `)` -- [ ] [textDocument/declaration](https://microsoft.github.io/language-server-protocol/specification#textDocument_declaration) -- [x] [textDocument/definition](https://microsoft.github.io/language-server-protocol/specification#textDocument_definition) -- [ ] [textDocument/typeDefinition](https://microsoft.github.io/language-server-protocol/specification#textDocument_typeDefinition) -- [x] [textDocument/implementation](https://microsoft.github.io/language-server-protocol/specification#textDocument_implementation) -- [x] [textDocument/references](https://microsoft.github.io/language-server-protocol/specification#textDocument_references) -- [x] [textDocument/documentHighlight](https://microsoft.github.io/language-server-protocol/specification#textDocument_documentHighlight) -- [x] [textDocument/documentSymbol](https://microsoft.github.io/language-server-protocol/specification#textDocument_documentSymbol) -- [x] [textDocument/codeAction](https://microsoft.github.io/language-server-protocol/specification#textDocument_codeAction) - - rust-analyzer.syntaxTree - - rust-analyzer.extendSelection - - rust-analyzer.matchingBrace - - rust-analyzer.parentModule - - rust-analyzer.joinLines - - rust-analyzer.run - - rust-analyzer.analyzerStatus -- [x] [textDocument/codeLens](https://microsoft.github.io/language-server-protocol/specification#textDocument_codeLens) -- [ ] [textDocument/documentLink](https://microsoft.github.io/language-server-protocol/specification#codeLens_resolve) -- [ ] [documentLink/resolve](https://microsoft.github.io/language-server-protocol/specification#documentLink_resolve) -- [ ] [textDocument/documentColor](https://microsoft.github.io/language-server-protocol/specification#textDocument_documentColor) -- [ ] [textDocument/colorPresentation](https://microsoft.github.io/language-server-protocol/specification#textDocument_colorPresentation) -- [x] [textDocument/formatting](https://microsoft.github.io/language-server-protocol/specification#textDocument_formatting) -- [ ] [textDocument/rangeFormatting](https://microsoft.github.io/language-server-protocol/specification#textDocument_rangeFormatting) -- [x] [textDocument/onTypeFormatting](https://microsoft.github.io/language-server-protocol/specification#textDocument_onTypeFormatting) - - first trigger character: `=` - - more trigger character `.` -- [x] [textDocument/rename](https://microsoft.github.io/language-server-protocol/specification#textDocument_rename) -- [x] [textDocument/prepareRename](https://microsoft.github.io/language-server-protocol/specification#textDocument_prepareRename) -- [x] 
[textDocument/foldingRange](https://microsoft.github.io/language-server-protocol/specification#textDocument_foldingRange) ## License diff --git a/ROADMAP.md b/ROADMAP.md deleted file mode 100644 index 3856ebc5b..000000000 --- a/ROADMAP.md +++ /dev/null @@ -1,77 +0,0 @@ -# Rust Analyzer Roadmap 01 - -Written on 2018-11-06, extends approximately to February 2019. -After that, we should coordinate with the compiler/rls developers to align goals and share code and experience. - - -# Overall Goals - -The mission is: - * Provide an excellent "code analyzed as you type" IDE experience for the Rust language, - * Implement the bulk of the features in Rust itself. - - -High-level architecture constraints: - * Long-term, replace the current rustc frontend. - It's *obvious* that the code should be shared, but OTOH, all great IDEs started as from-scratch rewrites. - * Don't hard-code a particular protocol or mode of operation. - Produce a library which could be used for implementing an LSP server, or for in-process embedding. - * As long as possible, stick with stable Rust. - - -# Current Goals - -Ideally, we would be coordinating with the compiler/rls teams, but they are busy working on making Rust 2018 at the moment. -The sync-up point will happen some time after the edition, probably early 2019. -In the meantime, the goal is to **experiment**, specifically, to figure out how a from-scratch written RLS might look like. - - -## Data Storage and Protocol implementation - -The fundamental part of any architecture is who owns which data, how the data is mutated and how the data is exposed to user. -For storage we use the [salsa](http://github.com/salsa-rs/salsa) library, which provides a solid model that seems to be the way to go. - -Modification to source files is mostly driven by the language client, but we also should support watching the file system. The current -file watching implementation is a stub. - -**Action Item:** implement reliable file watching service. - -We also should extract LSP bits as a reusable library. There's already `gen_lsp_server`, but it is pretty limited. - -**Action Item:** try using `gen_lsp_server` in more than one language server, for example for TOML and Nix. - -The ideal architecture for `gen_lsp_server` is still unclear. I'd rather avoid futures: they bring significant runtime complexity -(call stacks become insane) and the performance benefits are negligible for our use case (one thread per request is perfectly OK given -the low amount of requests a language server receives). The current interface is based on crossbeam-channel, but it's not clear -if that is the best choice. - - -## Low-effort, high payoff features - -Implementing 20% of type inference will give use 80% of completion. -Thus it makes sense to partially implement name resolution, type inference and trait matching, even though there is a chance that -this code is replaced later on when we integrate with the compiler - -Specifically, we need to: - -* **Action Item:** implement path resolution, so that we get completion in imports and such. -* **Action Item:** implement simple type inference, so that we get completion for inherent methods. -* **Action Item:** implement nicer completion infrastructure, so that we have icons, snippets, doc comments, after insert callbacks, ... - - -## Dragons to kill - -To make experiments most effective, we should try to prototype solutions for the hardest problems. -In the case of Rust, the two hardest problems are: - * Conditional compilation and source/model mismatch. 
- A single source file might correspond to several entities in the semantic model. - For example, different cfg flags produce effectively different crates from the same source. - * Macros are intertwined with name resolution in a single fix-point iteration algorithm. - This is just plain hard to implement, but also interacts poorly with on-demand. - - -For the first bullet point, we need to design descriptors infra and explicit mapping step between sources and semantic model, which is intentionally fuzzy in one direction. -The **action item** here is basically "write code, see what works, keep high-level picture in mind". - -For the second bullet point, there's hope that salsa with its deep memoization will result in a fast enough solution even without being fully on-demand. -Again, the **action item** is to write the code and see what works. Salsa itself uses macros heavily, so it should be a great test. diff --git a/docs/dev/ARCHITECTURE.md b/docs/dev/ARCHITECTURE.md new file mode 100644 index 000000000..57f76ebae --- /dev/null +++ b/docs/dev/ARCHITECTURE.md @@ -0,0 +1,200 @@ +# Architecture + +This document describes the high-level architecture of rust-analyzer. +If you want to familiarize yourself with the code base, you are just +in the right place! + +See also the [guide](./guide.md), which walks through a particular snapshot of +rust-analyzer code base. + +For syntax-trees specifically, there's a [video walk +through](https://youtu.be/DGAuLWdCCAI) as well. + +## The Big Picture + +![](https://user-images.githubusercontent.com/1711539/50114578-e8a34280-0255-11e9-902c-7cfc70747966.png) + +On the highest level, rust-analyzer is a thing which accepts input source code +from the client and produces a structured semantic model of the code. + +More specifically, input data consists of a set of test files (`(PathBuf, +String)` pairs) and information about project structure, captured in the so called +`CrateGraph`. The crate graph specifies which files are crate roots, which cfg +flags are specified for each crate (TODO: actually implement this) and what +dependencies exist between the crates. The analyzer keeps all this input data in +memory and never does any IO. Because the input data is source code, which +typically measures in tens of megabytes at most, keeping all input data in +memory is OK. + +A "structured semantic model" is basically an object-oriented representation of +modules, functions and types which appear in the source code. This representation +is fully "resolved": all expressions have types, all references are bound to +declarations, etc. + +The client can submit a small delta of input data (typically, a change to a +single file) and get a fresh code model which accounts for changes. + +The underlying engine makes sure that model is computed lazily (on-demand) and +can be quickly updated for small modifications. + + +## Code generation + +Some of the components of this repository are generated through automatic +processes. These are outlined below: + +- `gen-syntax`: The kinds of tokens that are reused in several places, so a generator + is used. 
We use tera templates to generate the files listed below, based on + the grammar described in [grammar.ron]: + - [ast/generated.rs][ast generated] in `ra_syntax` based on + [ast/generated.tera.rs][ast source] + - [syntax_kinds/generated.rs][syntax_kinds generated] in `ra_syntax` based on + [syntax_kinds/generated.tera.rs][syntax_kinds source] + +[tera]: https://tera.netlify.com/ +[grammar.ron]: ./crates/ra_syntax/src/grammar.ron +[ast generated]: ./crates/ra_syntax/src/ast/generated.rs +[ast source]: ./crates/ra_syntax/src/ast/generated.rs.tera +[syntax_kinds generated]: ./crates/ra_syntax/src/syntax_kinds/generated.rs +[syntax_kinds source]: ./crates/ra_syntax/src/syntax_kinds/generated.rs.tera + + +## Code Walk-Through + +### `crates/ra_syntax` + +Rust syntax tree structure and parser. See +[RFC](https://github.com/rust-lang/rfcs/pull/2256) for some design notes. + +- [rowan](https://github.com/rust-analyzer/rowan) library is used for constructing syntax trees. +- `grammar` module is the actual parser. It is a hand-written recursive descent parser, which + produces a sequence of events like "start node X", "finish not Y". It works similarly to [kotlin's parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java), + which is a good source of inspiration for dealing with syntax errors and incomplete input. Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs) + is what we use for the definition of the Rust language. +- `parser_api/parser_impl` bridges the tree-agnostic parser from `grammar` with `rowan` trees. + This is the thing that turns a flat list of events into a tree (see `EventProcessor`) +- `ast` provides a type safe API on top of the raw `rowan` tree. +- `grammar.ron` RON description of the grammar, which is used to + generate `syntax_kinds` and `ast` modules, using `cargo gen-syntax` command. +- `algo`: generic tree algorithms, including `walk` for O(1) stack + space tree traversal (this is cool) and `visit` for type-driven + visiting the nodes (this is double plus cool, if you understand how + `Visitor` works, you understand the design of syntax trees). + +Tests for ra_syntax are mostly data-driven: `tests/data/parser` contains a bunch of `.rs` +(test vectors) and `.txt` files with corresponding syntax trees. During testing, we check +`.rs` against `.txt`. If the `.txt` file is missing, it is created (this is how you update +tests). Additionally, running `cargo gen-tests` will walk the grammar module and collect +all `//test test_name` comments into files inside `tests/data` directory. + +See [#93](https://github.com/rust-analyzer/rust-analyzer/pull/93) for an example PR which +fixes a bug in the grammar. + +### `crates/ra_db` + +We use the [salsa](https://github.com/salsa-rs/salsa) crate for incremental and +on-demand computation. Roughly, you can think of salsa as a key-value store, but +it also can compute derived values using specified functions. The `ra_db` crate +provides basic infrastructure for interacting with salsa. Crucially, it +defines most of the "input" queries: facts supplied by the client of the +analyzer. Reading the docs of the `ra_db::input` module should be useful: +everything else is strictly derived from those inputs. + +### `crates/ra_hir` + +HIR provides high-level "object oriented" access to Rust code. 
+ +The principal difference between HIR and syntax trees is that HIR is bound to a +particular crate instance. That is, it has cfg flags and features applied (in +theory, in practice this is to be implemented). So, the relation between +syntax and HIR is many-to-one. The `source_binder` module is responsible for +guessing a HIR for a particular source position. + +Underneath, HIR works on top of salsa, using a `HirDatabase` trait. + +### `crates/ra_ide_api` + +A stateful library for analyzing many Rust files as they change. `AnalysisHost` +is a mutable entity (clojure's atom) which holds the current state, incorporates +changes and hands out `Analysis` --- an immutable and consistent snapshot of +the world state at a point in time, which actually powers analysis. + +One interesting aspect of analysis is its support for cancellation. When a +change is applied to `AnalysisHost`, first all currently active snapshots are +canceled. Only after all snapshots are dropped the change actually affects the +database. + +APIs in this crate are IDE centric: they take text offsets as input and produce +offsets and strings as output. This works on top of rich code model powered by +`hir`. + +### `crates/ra_ide_api_light` + +All IDE features which can be implemented if you only have access to a single +file. `ra_ide_api_light` could be used to enhance editing of Rust code without +the need to fiddle with build-systems, file synchronization and such. + +In a sense, `ra_ide_api_light` is just a bunch of pure functions which take a +syntax tree as input. + +The tests for `ra_ide_api_light` are `#[cfg(test)] mod tests` unit-tests spread +throughout its modules. + + +### `crates/ra_lsp_server` + +An LSP implementation which wraps `ra_ide_api` into a langauge server protocol. + +### `crates/ra_vfs` + +Although `hir` and `ra_ide_api` don't do any IO, we need to be able to read +files from disk at the end of the day. This is what `ra_vfs` does. It also +manages overlays: "dirty" files in the editor, whose "true" contents is +different from data on disk. + +### `crates/gen_lsp_server` + +A language server scaffold, exposing a synchronous crossbeam-channel based API. +This crate handles protocol handshaking and parsing messages, while you +control the message dispatch loop yourself. + +Run with `RUST_LOG=sync_lsp_server=debug` to see all the messages. + +### `crates/ra_cli` + +A CLI interface to rust-analyzer. + +### `crate/tools` + +Custom Cargo tasks used to develop rust-analyzer: + +- `cargo gen-syntax` -- generate `ast` and `syntax_kinds` +- `cargo gen-tests` -- collect inline tests from grammar +- `cargo install-code` -- build and install VS Code extension and server + +### `editors/code` + +VS Code plugin + + +## Common workflows + +To try out VS Code extensions, run `cargo install-code`. This installs both the +`ra_lsp_server` binary and the VS Code extension. To install only the binary, use +`cargo install-lsp` (shorthand for `cargo install --path crates/ra_lsp_server --force`) + +To see logs from the language server, set `RUST_LOG=info` env variable. To see +all communication between the server and the client, use +`RUST_LOG=gen_lsp_server=debug` (this will print quite a bit of stuff). + +There's `rust-analyzer: status` command which prints common high-level debug +info. 
In particular, it prints info about memory usage of various data +structures, and, if compiled with jemalloc support (`cargo jinstall-lsp` or +`cargo install --path crates/ra_lsp_server --force --features jemalloc`), includes + statistic about the heap. + +To run tests, just `cargo test`. + +To work on the VS Code extension, launch code inside `editors/code` and use `F5` to +launch/debug. To automatically apply formatter and linter suggestions, use `npm +run fix`. diff --git a/docs/dev/CONTRIBUTING.md b/docs/dev/CONTRIBUTING.md new file mode 100644 index 000000000..a2efc7afa --- /dev/null +++ b/docs/dev/CONTRIBUTING.md @@ -0,0 +1,18 @@ +The project is in its early stages: contributions are welcome and would be +**very** helpful, but the project is not _yet_ optimized for contribution. +Moreover, it is doubly experimental, so there's no guarantee that any work here +would reach production. + +To get an idea of how rust-analyzer works, take a look at the [ARCHITECTURE.md](./ARCHITECTURE.md) +document. + +Useful labels on the issue tracker: + * [E-mentor](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-mentor) + issues have links to the code in question and tests, + * [E-easy](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-easy), + [E-medium](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-medium), + [E-hard](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-hard), + labels are *estimates* for how hard would be to write a fix. + +There's no formal PR check list: everything that passes CI (we use [bors](https://bors.tech/)) is valid, +but it's a good idea to write nice commit messages, test code thoroughly, maintain consistent style, etc. diff --git a/docs/dev/DEBUGGING.md b/docs/dev/DEBUGGING.md new file mode 100644 index 000000000..f868e6998 --- /dev/null +++ b/docs/dev/DEBUGGING.md @@ -0,0 +1,62 @@ +# Debugging vs Code plugin and the Language Server + +Install [LLDB](https://lldb.llvm.org/) and the [LLDB Extension](https://marketplace.visualstudio.com/items?itemName=vadimcn.vscode-lldb). + +Checkout rust rust-analyzer and open it in vscode. + +``` +$ git clone https://github.com/rust-analyzer/rust-analyzer.git --depth 1 +$ cd rust-analyzer +$ code . +``` + +- To attach to the `lsp server` in linux you'll have to run: + + `echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope` + + This enables ptrace on non forked processes + +- Ensure the dependencies for the extension are installed, run the `npm: install - editors/code` task in vscode. + +- Launch the `Debug Extension`, this will build the extension and the `lsp server`. + +- A new instance of vscode with `[Extension Development Host]` in the title. + + Don't worry about disabling `rls` all other extensions will be disabled but this one. + +- In the new vscode instance open a rust project, and navigate to a rust file + +- In the original vscode start an additional debug session (the three periods in the launch) and select `Debug Lsp Server`. + +- A list of running processes should appear select the `ra_lsp_server` from this repo. + +- Navigate to `crates/ra_lsp_server/src/main_loop.rs` and add a breakpoint to the `on_task` function. + +- Go back to the `[Extension Development Host]` instance and hover over a rust variable and your breakpoint should hit. 
+ +## Demo + +![demonstration of debugging](https://user-images.githubusercontent.com/1711539/51384036-254fab80-1b2c-11e9-824d-95f9a6e9cf4f.gif) + +## Troubleshooting + +### Can't find the `ra_lsp_server` process + +It could be a case of just jumping the gun. + +The `ra_lsp_server` is only started once the `onLanguage:rust` activation. + +Make sure you open a rust file in the `[Extension Development Host]` and try again. + +### Can't connect to `ra_lsp_server` + +Make sure you have run `echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope`. + +By default this should reset back to 1 everytime you log in. + +### Breakpoints are never being hit + +Check your version of `lldb` if it's version 6 and lower use the `classic` adapter type. +It's `lldb.adapterType` in settings file. + +If you're running `lldb` version 7 change the lldb adapter type to `bundled` or `native`. diff --git a/docs/dev/ROADMAP.md b/docs/dev/ROADMAP.md new file mode 100644 index 000000000..3856ebc5b --- /dev/null +++ b/docs/dev/ROADMAP.md @@ -0,0 +1,77 @@ +# Rust Analyzer Roadmap 01 + +Written on 2018-11-06, extends approximately to February 2019. +After that, we should coordinate with the compiler/rls developers to align goals and share code and experience. + + +# Overall Goals + +The mission is: + * Provide an excellent "code analyzed as you type" IDE experience for the Rust language, + * Implement the bulk of the features in Rust itself. + + +High-level architecture constraints: + * Long-term, replace the current rustc frontend. + It's *obvious* that the code should be shared, but OTOH, all great IDEs started as from-scratch rewrites. + * Don't hard-code a particular protocol or mode of operation. + Produce a library which could be used for implementing an LSP server, or for in-process embedding. + * As long as possible, stick with stable Rust. + + +# Current Goals + +Ideally, we would be coordinating with the compiler/rls teams, but they are busy working on making Rust 2018 at the moment. +The sync-up point will happen some time after the edition, probably early 2019. +In the meantime, the goal is to **experiment**, specifically, to figure out how a from-scratch written RLS might look like. + + +## Data Storage and Protocol implementation + +The fundamental part of any architecture is who owns which data, how the data is mutated and how the data is exposed to user. +For storage we use the [salsa](http://github.com/salsa-rs/salsa) library, which provides a solid model that seems to be the way to go. + +Modification to source files is mostly driven by the language client, but we also should support watching the file system. The current +file watching implementation is a stub. + +**Action Item:** implement reliable file watching service. + +We also should extract LSP bits as a reusable library. There's already `gen_lsp_server`, but it is pretty limited. + +**Action Item:** try using `gen_lsp_server` in more than one language server, for example for TOML and Nix. + +The ideal architecture for `gen_lsp_server` is still unclear. I'd rather avoid futures: they bring significant runtime complexity +(call stacks become insane) and the performance benefits are negligible for our use case (one thread per request is perfectly OK given +the low amount of requests a language server receives). The current interface is based on crossbeam-channel, but it's not clear +if that is the best choice. + + +## Low-effort, high payoff features + +Implementing 20% of type inference will give use 80% of completion. 
+Thus it makes sense to partially implement name resolution, type inference and trait matching, even though there is a chance that +this code is replaced later on when we integrate with the compiler + +Specifically, we need to: + +* **Action Item:** implement path resolution, so that we get completion in imports and such. +* **Action Item:** implement simple type inference, so that we get completion for inherent methods. +* **Action Item:** implement nicer completion infrastructure, so that we have icons, snippets, doc comments, after insert callbacks, ... + + +## Dragons to kill + +To make experiments most effective, we should try to prototype solutions for the hardest problems. +In the case of Rust, the two hardest problems are: + * Conditional compilation and source/model mismatch. + A single source file might correspond to several entities in the semantic model. + For example, different cfg flags produce effectively different crates from the same source. + * Macros are intertwined with name resolution in a single fix-point iteration algorithm. + This is just plain hard to implement, but also interacts poorly with on-demand. + + +For the first bullet point, we need to design descriptors infra and explicit mapping step between sources and semantic model, which is intentionally fuzzy in one direction. +The **action item** here is basically "write code, see what works, keep high-level picture in mind". + +For the second bullet point, there's hope that salsa with its deep memoization will result in a fast enough solution even without being fully on-demand. +Again, the **action item** is to write the code and see what works. Salsa itself uses macros heavily, so it should be a great test. diff --git a/docs/dev/guide.md b/docs/dev/guide.md new file mode 100644 index 000000000..abbe4c154 --- /dev/null +++ b/docs/dev/guide.md @@ -0,0 +1,575 @@ +# Guide to rust-analyzer + +## About the guide + +This guide describes the current state of rust-analyzer as of 2019-01-20 (git +tag [guide-2019-01]). Its purpose is to document various problems and +architectural solutions related to the problem of building IDE-first compiler +for Rust. There is a video version of this guide as well: +https://youtu.be/ANKBNiSWyfc. + +[guide-2019-01]: https://github.com/rust-analyzer/rust-analyzer/tree/guide-2019-01 + +## The big picture + +On the highest possible level, rust-analyzer is a stateful component. A client may +apply changes to the analyzer (new contents of `foo.rs` file is "fn main() {}") +and it may ask semantic questions about the current state (what is the +definition of the identifier with offset 92 in file `bar.rs`?). Two important +properties hold: + +* Analyzer does not do any I/O. It starts in an empty state and all input data is + provided via `apply_change` API. + +* Only queries about the current state are supported. One can, of course, + simulate undo and redo by keeping a log of changes and inverse changes respectively. + +## IDE API + +To see the bigger picture of how the IDE features works, let's take a look at the [`AnalysisHost`] and +[`Analysis`] pair of types. `AnalysisHost` has three methods: + +* `default()` for creating an empty analysis instance +* `apply_change(&mut self)` to make changes (this is how you get from an empty + state to something interesting) +* `analysis(&self)` to get an instance of `Analysis` + +`Analysis` has a ton of methods for IDEs, like `goto_definition`, or +`completions`. 
Both inputs and outputs of `Analysis`' methods are formulated in
+terms of files and offsets, and **not** in terms of Rust concepts like structs,
+traits, etc. The "typed" API with Rust-specific types sits slightly lower in the
+stack; we'll talk about it later.
+
+[`AnalysisHost`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/lib.rs#L265-L284
+[`Analysis`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/lib.rs#L291-L478
+
+The reason for this separation of `Analysis` and `AnalysisHost` is that we want to apply
+changes "uniquely", but we might also want to fork an `Analysis` and send it to
+another thread for background processing. That is, there is only a single
+`AnalysisHost`, but there may be several (equivalent) `Analysis` instances.
+
+Note that all of the `Analysis` API methods return `Cancelable`. This is required to
+be responsive in an IDE setting. Sometimes a long-running query is being computed
+and the user types something in the editor and asks for completion. In this
+case, we cancel the long-running computation (so it returns `Err(Canceled)`),
+apply the change and execute the request for completion. We never use stale data to
+answer requests. Under the cover, `AnalysisHost` "remembers" all outstanding
+`Analysis` instances. The `AnalysisHost::apply_change` method cancels all
+`Analysis`es, blocks until all of them are dropped and then applies the changes
+in-place. This may be familiar to Rustaceans who use read-write locks for interior
+mutability.
+
+Next, let's talk about what the inputs to the `Analysis` are, precisely.
+
+## Inputs
+
+Rust Analyzer never does any I/O itself: all inputs get passed explicitly via
+the `AnalysisHost::apply_change` method, which accepts a single argument, an
+`AnalysisChange`. [`AnalysisChange`] is a builder for a single change
+"transaction", so it suffices to study its methods to understand all of the
+input data.
+
+[`AnalysisChange`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/lib.rs#L119-L167
+
+The `(add|change|remove)_file` methods control the set of the input files, where
+each file has an integer id (`FileId`, picked by the client), text (`String`)
+and a filesystem path. Paths are tricky; they'll be explained below, in the source roots
+section, together with the `add_root` method. The `add_library` method allows us to add a
+group of files which are assumed to rarely change. It's mostly an optimization
+and does not change the fundamental picture.
+
+The `set_crate_graph` method allows us to control how the input files are partitioned
+into compilation units -- crates. It also controls (in theory, not implemented
+yet) `cfg` flags. `CrateGraph` is a directed acyclic graph of crates. Each crate
+has a root `FileId`, a set of active `cfg` flags and a set of dependencies. Each
+dependency is a pair of a crate and a name. It is possible to have two crates
+with the same root `FileId` but different `cfg`-flags/dependencies. This model
+is lower-level than Cargo's model of packages: each Cargo package consists of several
+targets, each of which is a separate crate (or several crates, if you try
+different feature combinations).
+
+Procedural macros should become inputs as well, but currently they are not
+supported. A procedural macro will be a black box `Box<Fn(TokenStream) -> TokenStream>`
+function, and will be inserted into the crate graph just like dependencies.
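+
+As an illustration of this "change transaction" shape, here is a deliberately simplified, self-contained sketch. The `Change`/`HostState` names and signatures below are hypothetical stand-ins for this guide only, not the real `ra_ide_api` types:
+
+```rust
+use std::collections::HashMap;
+
+// Toy stand-ins; the real FileId/CrateGraph/AnalysisChange are richer.
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
+struct FileId(u32);
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
+struct CrateId(u32);
+
+#[derive(Default)]
+struct CrateGraph {
+    // crate -> (root file, (dependency name, dependency crate) pairs)
+    crates: HashMap<CrateId, (FileId, Vec<(String, CrateId)>)>,
+}
+
+#[derive(Default)]
+struct Change {
+    files_added: Vec<(FileId, String)>,
+    files_changed: Vec<(FileId, String)>,
+    crate_graph: Option<CrateGraph>,
+}
+
+#[derive(Default)]
+struct HostState {
+    file_text: HashMap<FileId, String>,
+    crate_graph: CrateGraph,
+}
+
+impl HostState {
+    // All input data enters through a single explicit transaction; there is no I/O.
+    fn apply_change(&mut self, change: Change) {
+        for (file, text) in change.files_added.into_iter().chain(change.files_changed) {
+            self.file_text.insert(file, text);
+        }
+        if let Some(graph) = change.crate_graph {
+            self.crate_graph = graph;
+        }
+    }
+}
+
+fn main() {
+    let mut host = HostState::default();
+    let mut change = Change::default();
+    change.files_added.push((FileId(0), "fn main() {}".to_string()));
+    let mut graph = CrateGraph::default();
+    graph.crates.insert(CrateId(0), (FileId(0), Vec::new()));
+    change.crate_graph = Some(graph);
+    host.apply_change(change);
+    assert_eq!(host.file_text[&FileId(0)], "fn main() {}");
+}
+```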
+ +Soon we'll talk how we build an LSP server on top of `Analysis`, but first, +let's deal with that paths issue. + + +## Source roots (a.k.a. "Filesystems are horrible") + +This is a non-essential section, feel free to skip. + +The previous section said that the filesystem path is an attribute of a file, +but this is not the whole truth. Making it an absolute `PathBuf` will be bad for +several reasons. First, filesystems are full of (platform-dependent) edge cases: + +* it's hard (requires a syscall) to decide if two paths are equivalent +* some filesystems are case-sensitive (e.g. on macOS) +* paths are not necessary UTF-8 +* symlinks can form cycles + +Second, this might hurt reproducibility and hermeticity of builds. In theory, +moving a project from `/foo/bar/my-project` to `/spam/eggs/my-project` should +not change a bit in the output. However, if the absolute path is a part of the +input, it is at least in theory observable, and *could* affect the output. + +Yet another problem is that we really *really* want to avoid doing I/O, but with +Rust the set of "input" files is not necessary known up-front. In theory, you +can have `#[path="/dev/random"] mod foo;`. + +To solve (or explicitly refuse to solve) these problems rust-analyzer uses the +concept of a "source root". Roughly speaking, source roots are the contents of a +directory on a file systems, like `/home/matklad/projects/rustraytracer/**.rs`. + +More precisely, all files (`FileId`s) are partitioned into disjoint +`SourceRoot`s. Each file has a relative UTF-8 path within the `SourceRoot`. +`SourceRoot` has an identity (integer ID). Crucially, the root path of the +source root itself is unknown to the analyzer: A client is supposed to maintain a +mapping between `SourceRoot` IDs (which are assigned by the client) and actual +`PathBuf`s. `SourceRoot`s give a sane tree model of the file system to the +analyzer. + +Note that `mod`, `#[path]` and `include!()` can only reference files from the +same source root. It is of course is possible to explicitly add extra files to +the source root, even `/dev/random`. + +## Language Server Protocol + +Now let's see how the `Analysis` API is exposed via the JSON RPC based language server protocol. The +hard part here is managing changes (which can come either from the file system +or from the editor) and concurrency (we want to spawn background jobs for things +like syntax highlighting). We use the event loop pattern to manage the zoo, and +the loop is the [`main_loop_inner`] function. The [`main_loop`] does a one-time +initialization and tearing down of the resources. + +[`main_loop`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L51-L110 +[`main_loop_inner`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L156-L258 + + +Let's walk through a typical analyzer session! + +First, we need to figure out what to analyze. To do this, we run `cargo +metadata` to learn about Cargo packages for current workspace and dependencies, +and we run `rustc --print sysroot` and scan the "sysroot" (the directory containing the current Rust toolchain's files) to learn about crates like +`std`. Currently we load this configuration once at the start of the server, but +it should be possible to dynamically reconfigure it later without restart. 
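+
+As a rough sketch of that discovery step (the real loading code is linked just below and is considerably more involved), the server effectively shells out to these two commands:
+
+```rust
+use std::process::Command;
+
+// Minimal illustration: run the same two commands the server relies on to
+// learn about the workspace and the toolchain. Error handling is simplified.
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+    // `cargo metadata` describes packages, targets and dependencies as JSON.
+    let metadata = Command::new("cargo")
+        .args(&["metadata", "--format-version", "1"])
+        .output()?;
+    // `rustc --print sysroot` locates the toolchain directory, which is then
+    // scanned for the sources of crates like `std`.
+    let sysroot = Command::new("rustc")
+        .args(&["--print", "sysroot"])
+        .output()?;
+    println!("cargo metadata returned {} bytes of JSON", metadata.stdout.len());
+    println!("sysroot: {}", String::from_utf8_lossy(&sysroot.stdout).trim());
+    Ok(())
+}
+```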
+ +[main_loop.rs#L62-L70](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L62-L70) + +The [`ProjectModel`] we get after this step is very Cargo and sysroot specific, +it needs to be lowered to get the input in the form of `AnalysisChange`. This +happens in [`ServerWorldState::new`] method. Specifically + +* Create a `SourceRoot` for each Cargo package and sysroot. +* Schedule a filesystem scan of the roots. +* Create an analyzer's `Crate` for each Cargo **target** and sysroot crate. +* Setup dependencies between the crates. + +[`ProjectModel`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/project_model.rs#L16-L20 +[`ServerWorldState::new`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/server_world.rs#L38-L160 + +The results of the scan (which may take a while) will be processed in the body +of the main loop, just like any other change. Here's where we handle: + +* [File system changes](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L194) +* [Changes from the editor](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L377) + +After a single loop's turn, we group the changes into one `AnalysisChange` and +[apply] it. This always happens on the main thread and blocks the loop. + +[apply]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/server_world.rs#L216 + +To handle requests, like ["goto definition"], we create an instance of the +`Analysis` and [`schedule`] the task (which consumes `Analysis`) on the +threadpool. [The task] calls the corresponding `Analysis` method, while +massaging the types into the LSP representation. Keep in mind that if we are +executing "goto definition" on the threadpool and a new change comes in, the +task will be canceled as soon as the main loop calls `apply_change` on the +`AnalysisHost`. + +["goto definition"]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/server_world.rs#L216 +[`schedule`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L426-L455 +[The task]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop/handlers.rs#L205-L223 + +This concludes the overview of the analyzer's programing *interface*. Next, lets +dig into the implementation! + +## Salsa + +The most straightforward way to implement an "apply change, get analysis, repeat" +API would be to maintain the input state and to compute all possible analysis +information from scratch after every change. This works, but scales poorly with +the size of the project. To make this fast, we need to take advantage of the +fact that most of the changes are small, and that analysis results are unlikely +to change significantly between invocations. + +To do this we use [salsa]: a framework for incremental on-demand computation. +You can skip the rest of the section if you are familiar with rustc's red-green +algorithm (which is used for incremental compilation). + +[salsa]: https://github.com/salsa-rs/salsa + +It's better to refer to salsa's docs to learn about it. Here's a small excerpt: + +The key idea of salsa is that you define your program as a set of queries. 
Every query is used like a function `K -> V` that maps from some key of type `K` to a value
+of type `V`. Queries come in two basic varieties:
+
+* **Inputs**: the base inputs to your system. You can change these whenever you
+  like.
+
+* **Functions**: pure functions (no side effects) that transform your inputs
+  into other values. The results of queries are memoized to avoid recomputing
+  them a lot. When you make changes to the inputs, we'll figure out (fairly
+  intelligently) when we can re-use these memoized values and when we have to
+  recompute them.
+
+
+For further discussion, it's important to understand one bit of "fairly
+intelligently". Suppose we have two functions, `f1` and `f2`, and one input,
+`i`. We call `f1(X)` which in turn calls `f2(Y)` which inspects `i(Z)`. `i(Z)`
+returns some value `V1`, `f2` uses that and returns `R1`, `f1` uses that and
+returns `O`. Now, let's change `i` at `Z` to `V2` from `V1` and try to compute
+`f1(X)` again. Because `f1(X)` (transitively) depends on `i(Z)`, we can't just
+reuse its value as is. However, if `f2(Y)` is *still* equal to `R1` (despite
+`i`'s change), we, in fact, *can* reuse `O` as the result of `f1(X)`. And that's how
+salsa works: it recomputes results in *reverse* order, starting from inputs and
+progressing towards outputs, stopping as soon as it sees an intermediate value
+that hasn't changed. If this sounds confusing to you, don't worry: it is
+confusing. This illustration by @killercup might help:
+
+*(four-panel illustration, "step 1" through "step 4", showing salsa walking from the changed input towards the output and stopping at the first unchanged intermediate value)*
+
+## Salsa Input Queries
+
+All analyzer information is stored in a salsa database. The `Analysis` and
+`AnalysisHost` types are newtype wrappers for [`RootDatabase`] -- a salsa
+database.
+
+[`RootDatabase`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/db.rs#L88-L134
+
+Salsa input queries are defined in [`FilesDatabase`] (which is a part of
+`RootDatabase`). They closely mirror the familiar `AnalysisChange` structure:
+indeed, what `apply_change` does is set the values of input queries.
+
+[`FilesDatabase`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_db/src/input.rs#L150-L174
+
+## From text to semantic model
+
+The bulk of rust-analyzer is transforming input text into a semantic model of
+Rust code: a web of entities like modules, structs, functions and traits.
+
+An important fact to realize is that (unlike most other languages like C# or
+Java) there isn't a one-to-one mapping between source code and the semantic model. A
+single function definition in the source code might result in several semantic
+functions: for example, the same source file might be included as a module into
+several crates, or a single "crate" might be present in the compilation DAG
+several times, with different sets of `cfg`s enabled. The IDE-specific task of
+mapping a source code position into the semantic model is inherently imprecise for
+this reason, and is handled by the [`source_binder`].
+
+[`source_binder`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/source_binder.rs
+
+The semantic interface is declared in the [`code_model_api`] module. Each entity is
+identified by an integer ID and has a bunch of methods which take a salsa database
+as an argument and return other entities (which are also IDs). Internally, these
+methods invoke various queries on the database to build the model on demand.
+Here's [the list of queries].
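+
+Schematically, this "an entity is just an ID, and every method takes the database" pattern looks like the sketch below. The names are illustrative only and are not the actual `ra_hir` items; in the real code the lookups are memoized salsa queries rather than plain hash maps:
+
+```rust
+use std::collections::HashMap;
+
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
+struct ModuleId(u32);
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
+struct FunctionId(u32);
+
+// Stand-in for the salsa database.
+struct Db {
+    module_functions: HashMap<ModuleId, Vec<FunctionId>>,
+    function_names: HashMap<FunctionId, String>,
+}
+
+// The entity is a cheap Copy handle; all data is computed from the db on demand.
+#[derive(Clone, Copy)]
+struct Module(ModuleId);
+#[derive(Clone, Copy)]
+struct Function(FunctionId);
+
+impl Module {
+    fn functions(self, db: &Db) -> Vec<Function> {
+        db.module_functions[&self.0].iter().copied().map(Function).collect()
+    }
+}
+
+impl Function {
+    fn name(self, db: &Db) -> &str {
+        db.function_names[&self.0].as_str()
+    }
+}
+
+fn main() {
+    let mut db = Db { module_functions: HashMap::new(), function_names: HashMap::new() };
+    db.module_functions.insert(ModuleId(0), vec![FunctionId(0)]);
+    db.function_names.insert(FunctionId(0), "main".to_string());
+
+    let module = Module(ModuleId(0));
+    for func in module.functions(&db) {
+        println!("fn {}", func.name(&db));
+    }
+}
+```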
+ +[`code_model_api`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/code_model_api.rs +[the list of queries]: https://github.com/rust-analyzer/rust-analyzer/blob/7e84440e25e19529e4ff8a66e521d1b06349c6ec/crates/ra_hir/src/db.rs#L20-L106 + +The first step of building the model is parsing the source code. + +## Syntax trees + +An important property of the Rust language is that each file can be parsed in +isolation. Unlike, say, `C++`, an `include` can't change the meaning of the +syntax. For this reason, rust-analyzer can build a syntax tree for each "source +file", which could then be reused by several semantic models if this file +happens to be a part of several crates. + +The representation of syntax trees that rust-analyzer uses is similar to that of `Roslyn` +and Swift's new [libsyntax]. Swift's docs give an excellent overview of the +approach, so I skip this part here and instead outline the main characteristics +of the syntax trees: + +* Syntax trees are fully lossless. Converting **any** text to a syntax tree and + back is a total identity function. All whitespace and comments are explicitly + represented in the tree. + +* Syntax nodes have generic `(next|previous)_sibling`, `parent`, + `(first|last)_child` functions. You can get from any one node to any other + node in the file using only these functions. + +* Syntax nodes know their range (start offset and length) in the file. + +* Syntax nodes share the ownership of their syntax tree: if you keep a reference + to a single function, the whole enclosing file is alive. + +* Syntax trees are immutable and the cost of replacing the subtree is + proportional to the depth of the subtree. Read Swift's docs to learn how + immutable + parent pointers + cheap modification is possible. + +* Syntax trees are build on best-effort basis. All accessor methods return + `Option`s. The tree for `fn foo` will contain a function declaration with + `None` for parameter list and body. + +* Syntax trees do not know the file they are built from, they only know about + the text. + +The implementation is based on the generic [rowan] crate on top of which a +[rust-specific] AST is generated. + +[libsyntax]: https://github.com/apple/swift/tree/5e2c815edfd758f9b1309ce07bfc01c4bc20ec23/lib/Syntax +[rowan]: https://github.com/rust-analyzer/rowan/tree/100a36dc820eb393b74abe0d20ddf99077b61f88 +[rust-specific]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_syntax/src/ast/generated.rs + +The next step in constructing the semantic model is ... + +## Building a Module Tree + +The algorithm for building a tree of modules is to start with a crate root +(remember, each `Crate` from a `CrateGraph` has a `FileId`), collect all `mod` +declarations and recursively process child modules. This is handled by the +[`module_tree_query`], with two slight variations. + +[`module_tree_query`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/module_tree.rs#L116-L123 + +First, rust-analyzer builds a module tree for all crates in a source root +simultaneously. The main reason for this is historical (`module_tree` predates +`CrateGraph`), but this approach also enables accounting for files which are not +part of any crate. That is, if you create a file but do not include it as a +submodule anywhere, you still get semantic completion, and you get a warning +about a free-floating module (the actual warning is not implemented yet). 
+
+The second difference is that `module_tree_query` does not *directly* depend on
+the "parse" query (which is confusingly called `source_file`). Why would calling
+the parse directly be bad? Suppose the user changes the file slightly, by adding
+insignificant whitespace. Adding whitespace changes the parse tree (because
+it includes whitespace), and that means recomputing the whole module tree.
+
+We deal with this problem by introducing an intermediate [`submodules_query`].
+This query processes the syntax tree and extracts a set of declared submodule
+names. Now, changing the whitespace results in `submodules_query` being
+re-executed for a *single* module, but because the result of this query stays
+the same, we don't have to re-execute [`module_tree_query`]. In fact, we only
+need to re-execute it when we add/remove new files or when we change `mod`
+declarations.
+
+[`submodules_query`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/module_tree.rs#L41
+
+We store the resulting modules in a `Vec`-based indexed arena. The indices in
+the arena become module IDs. And this brings us to the next topic:
+assigning IDs in the general case.
+
+## Location Interner pattern
+
+One way to assign IDs is how we've dealt with modules: Collect all items into a
+single array in some specific order and use the index in the array as an ID. The
+main drawback of this approach is that these IDs are not stable: Adding a new item can
+shift the IDs of all other items. This works for modules, because adding a module is
+a comparatively rare operation, but would be less convenient for, for example,
+functions.
+
+Another solution here is positional IDs: We can identify a function as "the
+function with name `foo` in a ModuleId(92) module". Such locations are stable:
+adding a new function to the module (unless it is also named `foo`) does not
+change the location. However, such "ID" types cease to be `Copy`able integers and in
+general can become pretty large if we account for nesting (for example: "third parameter of
+the `foo` function of the `bar` `impl` in the `baz` module").
+
+[`LocationInterner`] allows us to combine the benefits of positional and numeric
+IDs. It is a bidirectional append-only map between locations and consecutive
+integers which can "intern" a location and return an integer ID back. The salsa
+database we use includes a couple of [interners]. How to "garbage collect"
+unused locations is an open question.
+
+[`LocationInterner`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_db/src/loc2id.rs#L65-L71
+[interners]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/db.rs#L22-L23
+
+For example, we use `LocationInterner` to assign IDs to definitions of functions,
+structs, enums, etc. The location, [`DefLoc`], contains two bits of information:
+
+* the ID of the module which contains the definition,
+* the ID of the specific item in the module's source code.
+
+We "could" use a text offset for the location of a particular item, but that would play
+badly with salsa: offsets change after edits. So, as a rule of thumb, we avoid
+using offsets, text ranges or syntax trees as keys and values for queries. What
+we do instead is store the "index" of the item among all of the items of a file
+(so, a position-based ID, but localized to a single file).
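+
+The following is a minimal sketch of that idea, with simplified types; the real
+`LocationInterner` and `DefLoc` are more elaborate, but the shape is the same:
+
+```rust
+use std::collections::HashMap;
+
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
+struct ModuleId(u32);
+
+/// A stable, position-independent location: "the n-th item of module m".
+#[derive(Clone, PartialEq, Eq, Hash, Debug)]
+struct DefLoc {
+    module: ModuleId,
+    item_index: u32,
+}
+
+/// The numeric ID handed out for a location: cheap to copy and a good salsa key.
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
+struct DefId(u32);
+
+/// Bidirectional, append-only map between locations and consecutive integers.
+#[derive(Default)]
+struct LocationInterner {
+    loc_to_id: HashMap<DefLoc, DefId>,
+    id_to_loc: Vec<DefLoc>,
+}
+
+impl LocationInterner {
+    fn intern(&mut self, loc: DefLoc) -> DefId {
+        if let Some(&id) = self.loc_to_id.get(&loc) {
+            return id;
+        }
+        let id = DefId(self.id_to_loc.len() as u32);
+        self.id_to_loc.push(loc.clone());
+        self.loc_to_id.insert(loc, id);
+        id
+    }
+
+    fn lookup(&self, id: DefId) -> &DefLoc {
+        &self.id_to_loc[id.0 as usize]
+    }
+}
+
+fn main() {
+    let mut interner = LocationInterner::default();
+    let loc = DefLoc { module: ModuleId(0), item_index: 3 };
+    let id = interner.intern(loc.clone());
+    // Interning the same location again returns the same ID.
+    assert_eq!(id, interner.intern(loc));
+    assert_eq!(interner.lookup(id).item_index, 3);
+}
+```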
+ +[`DefLoc`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/ids.rs#L127-L139 + +One thing we've glossed over for the time being is support for macros. We have +only proof of concept handling of macros at the moment, but they are extremely +interesting from an "assigning IDs" perspective. + +## Macros and recursive locations + +The tricky bit about macros is that they effectively create new source files. +While we can use `FileId`s to refer to original files, we can't just assign them +willy-nilly to the pseudo files of macro expansion. Instead, we use a special +ID, [`HirFileId`] to refer to either a usual file or a macro-generated file: + +```rust +enum HirFileId { + FileId(FileId), + Macro(MacroCallId), +} +``` + +`MacroCallId` is an interned ID that specifies a particular macro invocation. +Its `MacroCallLoc` contains: + +* `ModuleId` of the containing module +* `HirFileId` of the containing file or pseudo file +* an index of this particular macro invocation in this file (positional id + again). + +Note how `HirFileId` is defined in terms of `MacroCallLoc` which is defined in +terms of `HirFileId`! This does not recur infinitely though: any chain of +`HirFileId`s bottoms out in `HirFileId::FileId`, that is, some source file +actually written by the user. + +[`HirFileId`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/ids.rs#L18-L125 + +Now that we understand how to identify a definition, in a source or in a +macro-generated file, we can discuss name resolution a bit. + +## Name resolution + +Name resolution faces the same problem as the module tree: if we look at the +syntax tree directly, we'll have to recompute name resolution after every +modification. The solution to the problem is the same: We [lower] the source code of +each module into a position-independent representation which does not change if +we modify bodies of the items. After that we [loop] resolving all imports until +we've reached a fixed point. + +[lower]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/lower.rs#L113-L117 +[loop]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres.rs#L186-L196 + +And, given all our preparation with IDs and a position-independent representation, +it is satisfying to [test] that typing inside function body does not invalidate +name resolution results. + +[test]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/tests.rs#L376 + +An interesting fact about name resolution is that it "erases" all of the +intermediate paths from the imports: in the end, we know which items are defined +and which items are imported in each module, but, if the import was `use +foo::bar::baz`, we deliberately forget what modules `foo` and `bar` resolve to. + +To serve "goto definition" requests on intermediate segments we need this info +in the IDE, however. Luckily, we need it only for a tiny fraction of imports, so we just ask +the module explicitly, "What does the path `foo::bar` resolve to?". This is a +general pattern: we try to compute the minimal possible amount of information +during analysis while allowing IDE to ask for additional specific bits. 
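+
+To make the lowering step above a bit more concrete, here is a rough sketch
+(with invented names) of a position-independent import plus the fixed-point
+driver; the real types and loop in `nameres` are richer than this:
+
+```rust
+/// `use foo::bar::baz;` lowers to the segments ["foo", "bar", "baz"]:
+/// no offsets and no syntax nodes, so edits inside function bodies leave it
+/// untouched.
+#[derive(Clone, PartialEq, Eq, Hash, Debug)]
+pub struct LoweredImport {
+    pub segments: Vec<String>,
+    pub is_glob: bool,
+}
+
+/// Schematic fixed-point driver: re-run a resolution pass over the whole
+/// state until a pass stops changing anything. The real loop tracks
+/// per-module scopes; the state is left abstract here.
+pub fn resolve_to_fixed_point<S: Clone + PartialEq>(
+    mut state: S,
+    mut pass: impl FnMut(&S) -> S,
+) -> S {
+    loop {
+        let next = pass(&state);
+        if next == state {
+            return state;
+        }
+        state = next;
+    }
+}
+```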
+
+Name resolution is also a good place to introduce another salsa pattern used
+throughout the analyzer:
+
+## Source Map pattern
+
+Due to an obscure edge case in completion, the IDE needs to know the syntax node of
+a use statement which imported the given completion candidate. We can't just
+store the syntax node as a part of name resolution: this will break
+incrementality, due to the fact that syntax changes after every file
+modification.
+
+We solve this problem during the lowering step of name resolution. The lowering
+query actually produces a *pair* of outputs: `LoweredModule` and [`SourceMap`].
+The `LoweredModule` module contains [imports], but in a position-independent form.
+The `SourceMap` contains a mapping from position-independent imports to
+(position-dependent) syntax nodes.
+
+The result of this basic lowering query changes after every modification. But
+there's an intermediate [projection query] which returns only the first,
+position-independent part of the lowering. The result of this query is stable.
+Naturally, name resolution [uses] this stable projection query.
+
+[imports]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/lower.rs#L52-L59
+[`SourceMap`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/lower.rs#L52-L59
+[projection query]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/lower.rs#L97-L103
+[uses]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/query_definitions.rs#L49
+
+## Type inference
+
+First of all, the implementation of type inference in rust-analyzer was spearheaded
+by [@flodiebold]. [#327] was an awesome Christmas present, thank you, Florian!
+
+Type inference runs at per-function granularity and uses the patterns we've
+discussed previously.
+
+First, we [lower the AST] of a function body into a position-independent
+representation. In this representation, each expression is assigned a
+[positional ID]. Alongside the lowered expression, [a source map] is produced,
+which maps between expression IDs and the original syntax. This lowering step also
+deals with "incomplete" source trees by replacing missing expressions with an
+explicit `Missing` expression.
+
+Given the lowered body of the function, we can now run [type inference] and
+construct a mapping from `ExprId`s to types.
+
+[@flodiebold]: https://github.com/flodiebold
+[#327]: https://github.com/rust-analyzer/rust-analyzer/pull/327
+[lower the AST]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/expr.rs
+[positional ID]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/expr.rs#L13-L15
+[a source map]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/expr.rs#L41-L44
+[type inference]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/ty.rs#L1208-L1223
+
+## Tying it all together: completion
+
+To conclude the overview of rust-analyzer, let's trace the request for
+(type-inference powered!) code completion!
+
+We start by [receiving a message] from the language client. We decode the
+message as a request for completion and [schedule it on the threadpool]. This is
+also the place where we [catch] canceled errors if, immediately after completion, the
+client sends some modification.
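+
+Schematically, the cancellation handling boils down to catching the `Canceled`
+error and answering with a "please retry" style response instead of a result.
+The types below are simplified stand-ins, not the actual `ra_lsp_server`
+definitions:
+
+```rust
+// Sketch only: `Canceled`, `Cancelable` and the response shape are invented
+// for illustration.
+#[derive(Debug)]
+struct Canceled;
+
+type Cancelable<T> = Result<T, Canceled>;
+
+struct CompletionItem {
+    label: String,
+}
+
+/// Stand-in for a long-running `Analysis` query; it bails out with
+/// `Err(Canceled)` if `apply_change` is called while it is still running.
+fn completions(_position: u32) -> Cancelable<Vec<CompletionItem>> {
+    Ok(vec![CompletionItem { label: "foo".to_string() }])
+}
+
+enum Response {
+    Completions(Vec<CompletionItem>),
+    /// The state changed under the request; the client should simply retry.
+    Canceled,
+}
+
+fn handle_completion(position: u32) -> Response {
+    match completions(position) {
+        Ok(items) => Response::Completions(items),
+        Err(Canceled) => Response::Canceled,
+    }
+}
+
+fn main() {
+    match handle_completion(92) {
+        Response::Completions(items) => println!("{} completion items", items.len()),
+        Response::Canceled => println!("canceled; the client should retry"),
+    }
+}
+```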
+
+In [the handler] we deserialize the LSP request into rust-analyzer-specific data
+types (by converting a file URL into a numeric `FileId`), [ask analysis for
+completion] and serialize the results back to LSP.
+
+The [completion implementation] is finally the place where we start doing the actual
+work. The first step is to collect the `CompletionContext` -- a struct which
+describes the cursor position in terms of Rust syntax and semantics. For
+example, `function_syntax: Option<&'a ast::FnDef>` stores a reference to the
+enclosing function *syntax*, while `function: Option` is the
+`Def` for this function.
+
+To construct the context, we first do an ["IntelliJ Trick"]: we insert a dummy
+identifier at the cursor's position and parse this modified file, to get a
+reasonable-looking syntax tree. Then we do a bunch of "classification" routines
+to figure out the context. For example, we [find an ancestor `fn` node] and we get a
+[semantic model] for it (using the lossy `source_binder` infrastructure).
+
+The second step is to run a [series of independent completion routines]. Let's
+take a closer look at [`complete_dot`], which completes fields and methods in
+`foo.bar|`. First we extract a semantic function and a syntactic receiver
+expression out of the `Context`. Then we run type-inference for this single
+function and map our syntactic expression to `ExprId`. Using the ID, we figure
+out the type of the receiver expression. Then we add all fields & methods from
+the type to completion.
+
+[receiving a message]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L203
+[schedule it on the threadpool]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L428
+[catch]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L436-L442
+[the handler]: https://salsa.zulipchat.com/#narrow/stream/181542-rfcs.2Fsalsa-query-group/topic/design.20next.20steps
+[ask analysis for completion]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/lib.rs#L439-L444
+[completion implementation]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion.rs#L46-L62
+[`CompletionContext`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L14-L37
+["IntelliJ Trick"]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L72-L75
+[find an ancestor `fn` node]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L116-L120
+[semantic model]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L123
+[series of independent completion routines]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion.rs#L52-L59
+[`complete_dot`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/complete_dot.rs#L6-L22
diff --git a/docs/dev/lsp-features.md b/docs/dev/lsp-features.md
new file mode 100644
index 000000000..212d132ee
--- /dev/null
+++ b/docs/dev/lsp-features.md
@@ -0,0 +1,74 @@
+# Supported LSP features
+
+This list documents the LSP features supported by rust-analyzer.
+
+## General
+- [x] [initialize](https://microsoft.github.io/language-server-protocol/specification#initialize)
+- [x] [initialized](https://microsoft.github.io/language-server-protocol/specification#initialized)
+- [x] [shutdown](https://microsoft.github.io/language-server-protocol/specification#shutdown)
+- [ ] [exit](https://microsoft.github.io/language-server-protocol/specification#exit)
+- [x] [$/cancelRequest](https://microsoft.github.io/language-server-protocol/specification#cancelRequest)
+
+## Workspace
+- [ ] [workspace/workspaceFolders](https://microsoft.github.io/language-server-protocol/specification#workspace_workspaceFolders)
+- [ ] [workspace/didChangeWorkspaceFolders](https://microsoft.github.io/language-server-protocol/specification#workspace_didChangeWorkspaceFolders)
+- [x] [workspace/didChangeConfiguration](https://microsoft.github.io/language-server-protocol/specification#workspace_didChangeConfiguration)
+- [ ] [workspace/configuration](https://microsoft.github.io/language-server-protocol/specification#workspace_configuration)
+- [x] [workspace/didChangeWatchedFiles](https://microsoft.github.io/language-server-protocol/specification#workspace_didChangeWatchedFiles)
+- [x] [workspace/symbol](https://microsoft.github.io/language-server-protocol/specification#workspace_symbol)
+- [x] [workspace/executeCommand](https://microsoft.github.io/language-server-protocol/specification#workspace_executeCommand)
+ - `apply_code_action`
+- [ ] [workspace/applyEdit](https://microsoft.github.io/language-server-protocol/specification#workspace_applyEdit)
+
+## Text Synchronization
+- [x] [textDocument/didOpen](https://microsoft.github.io/language-server-protocol/specification#textDocument_didOpen)
+- [x] [textDocument/didChange](https://microsoft.github.io/language-server-protocol/specification#textDocument_didChange)
+- [ ] [textDocument/willSave](https://microsoft.github.io/language-server-protocol/specification#textDocument_willSave)
+- [ ] [textDocument/willSaveWaitUntil](https://microsoft.github.io/language-server-protocol/specification#textDocument_willSaveWaitUntil)
+- [x] [textDocument/didSave](https://microsoft.github.io/language-server-protocol/specification#textDocument_didSave)
+- [x] [textDocument/didClose](https://microsoft.github.io/language-server-protocol/specification#textDocument_didClose)
+
+## Diagnostics
+- [x] [textDocument/publishDiagnostics](https://microsoft.github.io/language-server-protocol/specification#textDocument_publishDiagnostics)
+
+## Language Features
+- [x] [textDocument/completion](https://microsoft.github.io/language-server-protocol/specification#textDocument_completion)
+ - open close: false
+ - change: Full
+ - will save: false
+ - will save wait until: false
+ - save: false
+- [x] [completionItem/resolve](https://microsoft.github.io/language-server-protocol/specification#completionItem_resolve)
+ - resolve provider: none
+ - trigger characters: `:`, `.`
+- [x] [textDocument/hover](https://microsoft.github.io/language-server-protocol/specification#textDocument_hover)
+- [x] [textDocument/signatureHelp](https://microsoft.github.io/language-server-protocol/specification#textDocument_signatureHelp)
+ - trigger characters: `(`, `,`, `)`
+- [ ] [textDocument/declaration](https://microsoft.github.io/language-server-protocol/specification#textDocument_declaration)
+- [x] [textDocument/definition](https://microsoft.github.io/language-server-protocol/specification#textDocument_definition)
+- [ ]
[textDocument/typeDefinition](https://microsoft.github.io/language-server-protocol/specification#textDocument_typeDefinition) +- [x] [textDocument/implementation](https://microsoft.github.io/language-server-protocol/specification#textDocument_implementation) +- [x] [textDocument/references](https://microsoft.github.io/language-server-protocol/specification#textDocument_references) +- [x] [textDocument/documentHighlight](https://microsoft.github.io/language-server-protocol/specification#textDocument_documentHighlight) +- [x] [textDocument/documentSymbol](https://microsoft.github.io/language-server-protocol/specification#textDocument_documentSymbol) +- [x] [textDocument/codeAction](https://microsoft.github.io/language-server-protocol/specification#textDocument_codeAction) + - rust-analyzer.syntaxTree + - rust-analyzer.extendSelection + - rust-analyzer.matchingBrace + - rust-analyzer.parentModule + - rust-analyzer.joinLines + - rust-analyzer.run + - rust-analyzer.analyzerStatus +- [x] [textDocument/codeLens](https://microsoft.github.io/language-server-protocol/specification#textDocument_codeLens) +- [ ] [textDocument/documentLink](https://microsoft.github.io/language-server-protocol/specification#codeLens_resolve) +- [ ] [documentLink/resolve](https://microsoft.github.io/language-server-protocol/specification#documentLink_resolve) +- [ ] [textDocument/documentColor](https://microsoft.github.io/language-server-protocol/specification#textDocument_documentColor) +- [ ] [textDocument/colorPresentation](https://microsoft.github.io/language-server-protocol/specification#textDocument_colorPresentation) +- [x] [textDocument/formatting](https://microsoft.github.io/language-server-protocol/specification#textDocument_formatting) +- [ ] [textDocument/rangeFormatting](https://microsoft.github.io/language-server-protocol/specification#textDocument_rangeFormatting) +- [x] [textDocument/onTypeFormatting](https://microsoft.github.io/language-server-protocol/specification#textDocument_onTypeFormatting) + - first trigger character: `=` + - more trigger character `.` +- [x] [textDocument/rename](https://microsoft.github.io/language-server-protocol/specification#textDocument_rename) +- [x] [textDocument/prepareRename](https://microsoft.github.io/language-server-protocol/specification#textDocument_prepareRename) +- [x] [textDocument/foldingRange](https://microsoft.github.io/language-server-protocol/specification#textDocument_foldingRange) diff --git a/docs/user/README.md b/docs/user/README.md new file mode 100644 index 000000000..ddc6ee048 --- /dev/null +++ b/docs/user/README.md @@ -0,0 +1,241 @@ + +Prerequisites: + +In order to build the VS Code plugin, you need to have node.js and npm with +a minimum version of 10 installed. Please refer to +[node.js and npm documentation](https://nodejs.org) for installation instructions. + +You will also need the most recent version of VS Code: we don't try to +maintain compatibility with older versions yet. + +The experimental VS Code plugin can then be built and installed by executing the +following commands: + +``` +$ git clone https://github.com/rust-analyzer/rust-analyzer.git --depth 1 +$ cd rust-analyzer +$ cargo install-code + +# for stdlib support +$ rustup component add rust-src +``` + +This will run `cargo install --package ra_lsp_server` to install the server +binary into `~/.cargo/bin`, and then will build and install plugin from +`editors/code`. 
See +[this](https://github.com/rust-analyzer/rust-analyzer/blob/0199572a3d06ff66eeae85a2d2c9762996f0d2d8/crates/tools/src/main.rs#L150) +for details. The installation is expected to *just work*, if it doesn't, report +bugs! + +It's better to remove existing Rust plugins to avoid interference. + +## Rust Analyzer Specific Features + +These features are implemented as extensions to the language server protocol. +They are more experimental in nature and work only with VS Code. + +### Syntax highlighting + +It overrides built-in highlighting, and works only with a specific theme +(zenburn). `rust-analyzer.highlightingOn` setting can be used to disable it. + +### Go to symbol in workspace ctrl+t + +It mostly works on top of the built-in LSP functionality, however `#` and `*` +symbols can be used to narrow down the search. Specifically, + +- `#Foo` searches for `Foo` type in the current workspace +- `#foo#` searches for `foo` function in the current workspace +- `#Foo*` searches for `Foo` type among dependencies, excluding `stdlib` +- `#foo#*` searches for `foo` function among dependencies. + +That is, `#` switches from "types" to all symbols, `*` switches from the current +workspace to dependencies. + +### Commands ctrl+shift+p + +#### Show Rust Syntax Tree + +Shows the parse tree of the current file. It exists mostly for debugging +rust-analyzer itself. + +#### Extend Selection + +Extends the current selection to the encompassing syntactic construct +(expression, statement, item, module, etc). It works with multiple cursors. Do +bind this command to a key, its super-useful! Expected to be upstreamed to LSP soonish: +https://github.com/Microsoft/language-server-protocol/issues/613 + +#### Matching Brace + +If the cursor is on any brace (`<>(){}[]`) which is a part of a brace-pair, +moves cursor to the matching brace. It uses the actual parser to determine +braces, so it won't confuse generics with comparisons. + +#### Parent Module + +Navigates to the parent module of the current module. + +#### Join Lines + +Join selected lines into one, smartly fixing up whitespace and trailing commas. + +#### Run + +Shows popup suggesting to run a test/benchmark/binary **at the current cursor +location**. Super useful for repeatedly running just a single test. Do bind this +to a shortcut! + + +### On Typing Assists + +Some features trigger on typing certain characters: + +- typing `let =` tries to smartly add `;` if `=` is followed by an existing expression. +- Enter inside comments automatically inserts `///` +- typing `.` in a chain method call auto-indents + + +### Code Actions (Assists) + +These are triggered in a particular context via light bulb. We use custom code on +the VS Code side to be able to position cursor. 
+ + +- Flip `,` + +```rust +// before: +fn foo(x: usize,<|> dim: (usize, usize)) +// after: +fn foo(dim: (usize, usize), x: usize) +``` + +- Add `#[derive]` + +```rust +// before: +struct Foo { + <|>x: i32 +} +// after: +#[derive(<|>)] +struct Foo { + x: i32 +} +``` + +- Add `impl` + +```rust +// before: +struct Foo<'a, T: Debug> { + <|>t: T +} +// after: +struct Foo<'a, T: Debug> { + t: T +} + +impl<'a, T: Debug> Foo<'a, T> { + <|> +} +``` + +- Change visibility + +```rust +// before: +fn<|> foo() {} + +// after +pub(crate) fn foo() {} +``` + +- Introduce variable: + +```rust +// before: +fn foo() { + foo(<|>1 + 1<|>); +} + +// after: +fn foo() { + let var_name = 1 + 1; + foo(var_name); +} +``` + +- Replace if-let with match: + +```rust +// before: +impl VariantData { + pub fn is_struct(&self) -> bool { + if <|>let VariantData::Struct(..) = *self { + true + } else { + false + } + } +} + +// after: +impl VariantData { + pub fn is_struct(&self) -> bool { + <|>match *self { + VariantData::Struct(..) => true, + _ => false, + } + } +} +``` + +- Split import + +```rust +// before: +use algo:<|>:visitor::{Visitor, visit}; +//after: +use algo::{<|>visitor::{Visitor, visit}}; +``` + +## LSP features + +* **Go to definition**: works correctly for local variables and some paths, + falls back to heuristic name matching for other things for the time being. + +* **Completion**: completes paths, including dependencies and standard library. + Does not handle glob imports and macros. Completes fields and inherent + methods. + +* **Outline** alt+shift+o + +* **Signature Info** + +* **Format document**. Formats the current file with rustfmt. Rustfmt must be + installed separately with `rustup component add rustfmt`. + +* **Hover** shows types of expressions and docstings + +* **Rename** works for local variables + +* **Code Lens** for running tests + +* **Folding** + +* **Diagnostics** + - missing module for `mod foo;` with a fix to create `foo.rs`. + - struct field shorthand + - unnecessary braces in use item + + +## Performance + +Rust Analyzer is expected to be pretty fast. Specifically, the initial analysis +of the project (i.e, when you first invoke completion or symbols) typically +takes dozen of seconds at most. After that, everything is supposed to be more or +less instant. However currently all analysis results are kept in memory, so +memory usage is pretty high. Working with `rust-lang/rust` repo, for example, +needs about 5 gigabytes of ram. diff --git a/editors/README.md b/editors/README.md deleted file mode 100644 index ddc6ee048..000000000 --- a/editors/README.md +++ /dev/null @@ -1,241 +0,0 @@ - -Prerequisites: - -In order to build the VS Code plugin, you need to have node.js and npm with -a minimum version of 10 installed. Please refer to -[node.js and npm documentation](https://nodejs.org) for installation instructions. - -You will also need the most recent version of VS Code: we don't try to -maintain compatibility with older versions yet. - -The experimental VS Code plugin can then be built and installed by executing the -following commands: - -``` -$ git clone https://github.com/rust-analyzer/rust-analyzer.git --depth 1 -$ cd rust-analyzer -$ cargo install-code - -# for stdlib support -$ rustup component add rust-src -``` - -This will run `cargo install --package ra_lsp_server` to install the server -binary into `~/.cargo/bin`, and then will build and install plugin from -`editors/code`. 
See -[this](https://github.com/rust-analyzer/rust-analyzer/blob/0199572a3d06ff66eeae85a2d2c9762996f0d2d8/crates/tools/src/main.rs#L150) -for details. The installation is expected to *just work*, if it doesn't, report -bugs! - -It's better to remove existing Rust plugins to avoid interference. - -## Rust Analyzer Specific Features - -These features are implemented as extensions to the language server protocol. -They are more experimental in nature and work only with VS Code. - -### Syntax highlighting - -It overrides built-in highlighting, and works only with a specific theme -(zenburn). `rust-analyzer.highlightingOn` setting can be used to disable it. - -### Go to symbol in workspace ctrl+t - -It mostly works on top of the built-in LSP functionality, however `#` and `*` -symbols can be used to narrow down the search. Specifically, - -- `#Foo` searches for `Foo` type in the current workspace -- `#foo#` searches for `foo` function in the current workspace -- `#Foo*` searches for `Foo` type among dependencies, excluding `stdlib` -- `#foo#*` searches for `foo` function among dependencies. - -That is, `#` switches from "types" to all symbols, `*` switches from the current -workspace to dependencies. - -### Commands ctrl+shift+p - -#### Show Rust Syntax Tree - -Shows the parse tree of the current file. It exists mostly for debugging -rust-analyzer itself. - -#### Extend Selection - -Extends the current selection to the encompassing syntactic construct -(expression, statement, item, module, etc). It works with multiple cursors. Do -bind this command to a key, its super-useful! Expected to be upstreamed to LSP soonish: -https://github.com/Microsoft/language-server-protocol/issues/613 - -#### Matching Brace - -If the cursor is on any brace (`<>(){}[]`) which is a part of a brace-pair, -moves cursor to the matching brace. It uses the actual parser to determine -braces, so it won't confuse generics with comparisons. - -#### Parent Module - -Navigates to the parent module of the current module. - -#### Join Lines - -Join selected lines into one, smartly fixing up whitespace and trailing commas. - -#### Run - -Shows popup suggesting to run a test/benchmark/binary **at the current cursor -location**. Super useful for repeatedly running just a single test. Do bind this -to a shortcut! - - -### On Typing Assists - -Some features trigger on typing certain characters: - -- typing `let =` tries to smartly add `;` if `=` is followed by an existing expression. -- Enter inside comments automatically inserts `///` -- typing `.` in a chain method call auto-indents - - -### Code Actions (Assists) - -These are triggered in a particular context via light bulb. We use custom code on -the VS Code side to be able to position cursor. 
- - -- Flip `,` - -```rust -// before: -fn foo(x: usize,<|> dim: (usize, usize)) -// after: -fn foo(dim: (usize, usize), x: usize) -``` - -- Add `#[derive]` - -```rust -// before: -struct Foo { - <|>x: i32 -} -// after: -#[derive(<|>)] -struct Foo { - x: i32 -} -``` - -- Add `impl` - -```rust -// before: -struct Foo<'a, T: Debug> { - <|>t: T -} -// after: -struct Foo<'a, T: Debug> { - t: T -} - -impl<'a, T: Debug> Foo<'a, T> { - <|> -} -``` - -- Change visibility - -```rust -// before: -fn<|> foo() {} - -// after -pub(crate) fn foo() {} -``` - -- Introduce variable: - -```rust -// before: -fn foo() { - foo(<|>1 + 1<|>); -} - -// after: -fn foo() { - let var_name = 1 + 1; - foo(var_name); -} -``` - -- Replace if-let with match: - -```rust -// before: -impl VariantData { - pub fn is_struct(&self) -> bool { - if <|>let VariantData::Struct(..) = *self { - true - } else { - false - } - } -} - -// after: -impl VariantData { - pub fn is_struct(&self) -> bool { - <|>match *self { - VariantData::Struct(..) => true, - _ => false, - } - } -} -``` - -- Split import - -```rust -// before: -use algo:<|>:visitor::{Visitor, visit}; -//after: -use algo::{<|>visitor::{Visitor, visit}}; -``` - -## LSP features - -* **Go to definition**: works correctly for local variables and some paths, - falls back to heuristic name matching for other things for the time being. - -* **Completion**: completes paths, including dependencies and standard library. - Does not handle glob imports and macros. Completes fields and inherent - methods. - -* **Outline** alt+shift+o - -* **Signature Info** - -* **Format document**. Formats the current file with rustfmt. Rustfmt must be - installed separately with `rustup component add rustfmt`. - -* **Hover** shows types of expressions and docstings - -* **Rename** works for local variables - -* **Code Lens** for running tests - -* **Folding** - -* **Diagnostics** - - missing module for `mod foo;` with a fix to create `foo.rs`. - - struct field shorthand - - unnecessary braces in use item - - -## Performance - -Rust Analyzer is expected to be pretty fast. Specifically, the initial analysis -of the project (i.e, when you first invoke completion or symbols) typically -takes dozen of seconds at most. After that, everything is supposed to be more or -less instant. However currently all analysis results are kept in memory, so -memory usage is pretty high. Working with `rust-lang/rust` repo, for example, -needs about 5 gigabytes of ram. diff --git a/guide.md b/guide.md deleted file mode 100644 index abbe4c154..000000000 --- a/guide.md +++ /dev/null @@ -1,575 +0,0 @@ -# Guide to rust-analyzer - -## About the guide - -This guide describes the current state of rust-analyzer as of 2019-01-20 (git -tag [guide-2019-01]). Its purpose is to document various problems and -architectural solutions related to the problem of building IDE-first compiler -for Rust. There is a video version of this guide as well: -https://youtu.be/ANKBNiSWyfc. - -[guide-2019-01]: https://github.com/rust-analyzer/rust-analyzer/tree/guide-2019-01 - -## The big picture - -On the highest possible level, rust-analyzer is a stateful component. A client may -apply changes to the analyzer (new contents of `foo.rs` file is "fn main() {}") -and it may ask semantic questions about the current state (what is the -definition of the identifier with offset 92 in file `bar.rs`?). Two important -properties hold: - -* Analyzer does not do any I/O. It starts in an empty state and all input data is - provided via `apply_change` API. 
- -* Only queries about the current state are supported. One can, of course, - simulate undo and redo by keeping a log of changes and inverse changes respectively. - -## IDE API - -To see the bigger picture of how the IDE features works, let's take a look at the [`AnalysisHost`] and -[`Analysis`] pair of types. `AnalysisHost` has three methods: - -* `default()` for creating an empty analysis instance -* `apply_change(&mut self)` to make changes (this is how you get from an empty - state to something interesting) -* `analysis(&self)` to get an instance of `Analysis` - -`Analysis` has a ton of methods for IDEs, like `goto_definition`, or -`completions`. Both inputs and outputs of `Analysis`' methods are formulated in -terms of files and offsets, and **not** in terms of Rust concepts like structs, -traits, etc. The "typed" API with Rust specific types is slightly lower in the -stack, we'll talk about it later. - -[`AnalysisHost`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/lib.rs#L265-L284 -[`Analysis`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/lib.rs#L291-L478 - -The reason for this separation of `Analysis` and `AnalysisHost` is that we want to apply -changes "uniquely", but we might also want to fork an `Analysis` and send it to -another thread for background processing. That is, there is only a single -`AnalysisHost`, but there may be several (equivalent) `Analysis`. - -Note that all of the `Analysis` API return `Cancelable`. This is required to -be responsive in an IDE setting. Sometimes a long-running query is being computed -and the user types something in the editor and asks for completion. In this -case, we cancel the long-running computation (so it returns `Err(Canceled)`), -apply the change and execute request for completion. We never use stale data to -answer requests. Under the cover, `AnalysisHost` "remembers" all outstanding -`Analysis` instances. The `AnalysisHost::apply_change` method cancels all -`Analysis`es, blocks until all of them are `Dropped` and then applies changes -in-place. This may be familiar to Rustaceans who use read-write locks for interior -mutability. - -Next, let's talk about what the inputs to the `Analysis` are, precisely. - -## Inputs - -Rust Analyzer never does any I/O itself, all inputs get passed explicitly via -the `AnalysisHost::apply_change` method, which accepts a single argument, a -`AnalysisChange`. [`AnalysisChange`] is a builder for a single change -"transaction", so it suffices to study its methods to understand all of the -input data. - -[`AnalysisChange`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/lib.rs#L119-L167 - -The `(add|change|remove)_file` methods control the set of the input files, where -each file has an integer id (`FileId`, picked by the client), text (`String`) -and a filesystem path. Paths are tricky; they'll be explained below, in source roots -section, together with the `add_root` method. The `add_library` method allows us to add a -group of files which are assumed to rarely change. It's mostly an optimization -and does not change the fundamental picture. - -The `set_crate_graph` method allows us to control how the input files are partitioned -into compilation unites -- crates. It also controls (in theory, not implemented -yet) `cfg` flags. `CrateGraph` is a directed acyclic graph of crates. Each crate -has a root `FileId`, a set of active `cfg` flags and a set of dependencies. 
Each -dependency is a pair of a crate and a name. It is possible to have two crates -with the same root `FileId` but different `cfg`-flags/dependencies. This model -is lower than Cargo's model of packages: each Cargo package consists of several -targets, each of which is a separate crate (or several crates, if you try -different feature combinations). - -Procedural macros should become inputs as well, but currently they are not -supported. Procedural macro will be a black box `Box TokenStream>` -function, and will be inserted into the crate graph just like dependencies. - -Soon we'll talk how we build an LSP server on top of `Analysis`, but first, -let's deal with that paths issue. - - -## Source roots (a.k.a. "Filesystems are horrible") - -This is a non-essential section, feel free to skip. - -The previous section said that the filesystem path is an attribute of a file, -but this is not the whole truth. Making it an absolute `PathBuf` will be bad for -several reasons. First, filesystems are full of (platform-dependent) edge cases: - -* it's hard (requires a syscall) to decide if two paths are equivalent -* some filesystems are case-sensitive (e.g. on macOS) -* paths are not necessary UTF-8 -* symlinks can form cycles - -Second, this might hurt reproducibility and hermeticity of builds. In theory, -moving a project from `/foo/bar/my-project` to `/spam/eggs/my-project` should -not change a bit in the output. However, if the absolute path is a part of the -input, it is at least in theory observable, and *could* affect the output. - -Yet another problem is that we really *really* want to avoid doing I/O, but with -Rust the set of "input" files is not necessary known up-front. In theory, you -can have `#[path="/dev/random"] mod foo;`. - -To solve (or explicitly refuse to solve) these problems rust-analyzer uses the -concept of a "source root". Roughly speaking, source roots are the contents of a -directory on a file systems, like `/home/matklad/projects/rustraytracer/**.rs`. - -More precisely, all files (`FileId`s) are partitioned into disjoint -`SourceRoot`s. Each file has a relative UTF-8 path within the `SourceRoot`. -`SourceRoot` has an identity (integer ID). Crucially, the root path of the -source root itself is unknown to the analyzer: A client is supposed to maintain a -mapping between `SourceRoot` IDs (which are assigned by the client) and actual -`PathBuf`s. `SourceRoot`s give a sane tree model of the file system to the -analyzer. - -Note that `mod`, `#[path]` and `include!()` can only reference files from the -same source root. It is of course is possible to explicitly add extra files to -the source root, even `/dev/random`. - -## Language Server Protocol - -Now let's see how the `Analysis` API is exposed via the JSON RPC based language server protocol. The -hard part here is managing changes (which can come either from the file system -or from the editor) and concurrency (we want to spawn background jobs for things -like syntax highlighting). We use the event loop pattern to manage the zoo, and -the loop is the [`main_loop_inner`] function. The [`main_loop`] does a one-time -initialization and tearing down of the resources. - -[`main_loop`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L51-L110 -[`main_loop_inner`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L156-L258 - - -Let's walk through a typical analyzer session! 
- -First, we need to figure out what to analyze. To do this, we run `cargo -metadata` to learn about Cargo packages for current workspace and dependencies, -and we run `rustc --print sysroot` and scan the "sysroot" (the directory containing the current Rust toolchain's files) to learn about crates like -`std`. Currently we load this configuration once at the start of the server, but -it should be possible to dynamically reconfigure it later without restart. - -[main_loop.rs#L62-L70](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L62-L70) - -The [`ProjectModel`] we get after this step is very Cargo and sysroot specific, -it needs to be lowered to get the input in the form of `AnalysisChange`. This -happens in [`ServerWorldState::new`] method. Specifically - -* Create a `SourceRoot` for each Cargo package and sysroot. -* Schedule a filesystem scan of the roots. -* Create an analyzer's `Crate` for each Cargo **target** and sysroot crate. -* Setup dependencies between the crates. - -[`ProjectModel`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/project_model.rs#L16-L20 -[`ServerWorldState::new`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/server_world.rs#L38-L160 - -The results of the scan (which may take a while) will be processed in the body -of the main loop, just like any other change. Here's where we handle: - -* [File system changes](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L194) -* [Changes from the editor](https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L377) - -After a single loop's turn, we group the changes into one `AnalysisChange` and -[apply] it. This always happens on the main thread and blocks the loop. - -[apply]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/server_world.rs#L216 - -To handle requests, like ["goto definition"], we create an instance of the -`Analysis` and [`schedule`] the task (which consumes `Analysis`) on the -threadpool. [The task] calls the corresponding `Analysis` method, while -massaging the types into the LSP representation. Keep in mind that if we are -executing "goto definition" on the threadpool and a new change comes in, the -task will be canceled as soon as the main loop calls `apply_change` on the -`AnalysisHost`. - -["goto definition"]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/server_world.rs#L216 -[`schedule`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L426-L455 -[The task]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop/handlers.rs#L205-L223 - -This concludes the overview of the analyzer's programing *interface*. Next, lets -dig into the implementation! - -## Salsa - -The most straightforward way to implement an "apply change, get analysis, repeat" -API would be to maintain the input state and to compute all possible analysis -information from scratch after every change. This works, but scales poorly with -the size of the project. To make this fast, we need to take advantage of the -fact that most of the changes are small, and that analysis results are unlikely -to change significantly between invocations. 
- -To do this we use [salsa]: a framework for incremental on-demand computation. -You can skip the rest of the section if you are familiar with rustc's red-green -algorithm (which is used for incremental compilation). - -[salsa]: https://github.com/salsa-rs/salsa - -It's better to refer to salsa's docs to learn about it. Here's a small excerpt: - -The key idea of salsa is that you define your program as a set of queries. Every -query is used like a function `K -> V` that maps from some key of type `K` to a value -of type `V`. Queries come in two basic varieties: - -* **Inputs**: the base inputs to your system. You can change these whenever you - like. - -* **Functions**: pure functions (no side effects) that transform your inputs - into other values. The results of queries is memoized to avoid recomputing - them a lot. When you make changes to the inputs, we'll figure out (fairly - intelligently) when we can re-use these memoized values and when we have to - recompute them. - - -For further discussion, its important to understand one bit of "fairly -intelligently". Suppose we have two functions, `f1` and `f2`, and one input, -`z`. We call `f1(X)` which in turn calls `f2(Y)` which inspects `i(Z)`. `i(Z)` -returns some value `V1`, `f2` uses that and returns `R1`, `f1` uses that and -returns `O`. Now, let's change `i` at `Z` to `V2` from `V1` and try to compute -`f1(X)` again. Because `f1(X)` (transitively) depends on `i(Z)`, we can't just -reuse its value as is. However, if `f2(Y)` is *still* equal to `R1` (despite -`i`'s change), we, in fact, *can* reuse `O` as result of `f1(X)`. And that's how -salsa works: it recomputes results in *reverse* order, starting from inputs and -progressing towards outputs, stopping as soon as it sees an intermediate value -that hasn't changed. If this sounds confusing to you, don't worry: it is -confusing. This illustration by @killercup might help: - -step 1 - -step 2 - -step 3 - -step 4 - -## Salsa Input Queries - -All analyzer information is stored in a salsa database. `Analysis` and -`AnalysisHost` types are newtype wrappers for [`RootDatabase`] -- a salsa -database. - -[`RootDatabase`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/db.rs#L88-L134 - -Salsa input queries are defined in [`FilesDatabase`] (which is a part of -`RootDatabase`). They closely mirror the familiar `AnalysisChange` structure: -indeed, what `apply_change` does is it sets the values of input queries. - -[`FilesDatabase`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_db/src/input.rs#L150-L174 - -## From text to semantic model - -The bulk of the rust-analyzer is transforming input text into a semantic model of -Rust code: a web of entities like modules, structs, functions and traits. - -An important fact to realize is that (unlike most other languages like C# or -Java) there isn't a one-to-one mapping between source code and the semantic model. A -single function definition in the source code might result in several semantic -functions: for example, the same source file might be included as a module into -several crate, or a single "crate" might be present in the compilation DAG -several times, with different sets of `cfg`s enabled. The IDE-specific task of -mapping source code position into a semantic model is inherently imprecise for -this reason, and is handled by the [`source_binder`]. 
- -[`source_binder`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/source_binder.rs - -The semantic interface is declared in the [`code_model_api`] module. Each entity is -identified by an integer ID and has a bunch of methods which take a salsa database -as an argument and returns other entities (which are also IDs). Internally, these -methods invoke various queries on the database to build the model on demand. -Here's [the list of queries]. - -[`code_model_api`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/code_model_api.rs -[the list of queries]: https://github.com/rust-analyzer/rust-analyzer/blob/7e84440e25e19529e4ff8a66e521d1b06349c6ec/crates/ra_hir/src/db.rs#L20-L106 - -The first step of building the model is parsing the source code. - -## Syntax trees - -An important property of the Rust language is that each file can be parsed in -isolation. Unlike, say, `C++`, an `include` can't change the meaning of the -syntax. For this reason, rust-analyzer can build a syntax tree for each "source -file", which could then be reused by several semantic models if this file -happens to be a part of several crates. - -The representation of syntax trees that rust-analyzer uses is similar to that of `Roslyn` -and Swift's new [libsyntax]. Swift's docs give an excellent overview of the -approach, so I skip this part here and instead outline the main characteristics -of the syntax trees: - -* Syntax trees are fully lossless. Converting **any** text to a syntax tree and - back is a total identity function. All whitespace and comments are explicitly - represented in the tree. - -* Syntax nodes have generic `(next|previous)_sibling`, `parent`, - `(first|last)_child` functions. You can get from any one node to any other - node in the file using only these functions. - -* Syntax nodes know their range (start offset and length) in the file. - -* Syntax nodes share the ownership of their syntax tree: if you keep a reference - to a single function, the whole enclosing file is alive. - -* Syntax trees are immutable and the cost of replacing the subtree is - proportional to the depth of the subtree. Read Swift's docs to learn how - immutable + parent pointers + cheap modification is possible. - -* Syntax trees are build on best-effort basis. All accessor methods return - `Option`s. The tree for `fn foo` will contain a function declaration with - `None` for parameter list and body. - -* Syntax trees do not know the file they are built from, they only know about - the text. - -The implementation is based on the generic [rowan] crate on top of which a -[rust-specific] AST is generated. - -[libsyntax]: https://github.com/apple/swift/tree/5e2c815edfd758f9b1309ce07bfc01c4bc20ec23/lib/Syntax -[rowan]: https://github.com/rust-analyzer/rowan/tree/100a36dc820eb393b74abe0d20ddf99077b61f88 -[rust-specific]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_syntax/src/ast/generated.rs - -The next step in constructing the semantic model is ... - -## Building a Module Tree - -The algorithm for building a tree of modules is to start with a crate root -(remember, each `Crate` from a `CrateGraph` has a `FileId`), collect all `mod` -declarations and recursively process child modules. This is handled by the -[`module_tree_query`], with two slight variations. 
- -[`module_tree_query`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/module_tree.rs#L116-L123 - -First, rust-analyzer builds a module tree for all crates in a source root -simultaneously. The main reason for this is historical (`module_tree` predates -`CrateGraph`), but this approach also enables accounting for files which are not -part of any crate. That is, if you create a file but do not include it as a -submodule anywhere, you still get semantic completion, and you get a warning -about a free-floating module (the actual warning is not implemented yet). - -The second difference is that `module_tree_query` does not *directly* depend on -the "parse" query (which is confusingly called `source_file`). Why would calling -the parse directly be bad? Suppose the user changes the file slightly, by adding -an insignificant whitespace. Adding whitespace changes the parse tree (because -it includes whitespace), and that means recomputing the whole module tree. - -We deal with this problem by introducing an intermediate [`submodules_query`]. -This query processes the syntax tree and extracts a set of declared submodule -names. Now, changing the whitespace results in `submodules_query` being -re-executed for a *single* module, but because the result of this query stays -the same, we don't have to re-execute [`module_tree_query`]. In fact, we only -need to re-execute it when we add/remove new files or when we change mod -declarations. - -[`submodules_query`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/module_tree.rs#L41 - -We store the resulting modules in a `Vec`-based indexed arena. The indices in -the arena becomes module IDs. And this brings us to the next topic: -assigning IDs in the general case. - -## Location Interner pattern - -One way to assign IDs is how we've dealt with modules: Collect all items into a -single array in some specific order and use the index in the array as an ID. The -main drawback of this approach is that these IDs are not stable: Adding a new item can -shift the IDs of all other items. This works for modules, because adding a module is -a comparatively rare operation, but would be less convenient for, for example, -functions. - -Another solution here is positional IDs: We can identify a function as "the -function with name `foo` in a ModuleId(92) module". Such locations are stable: -adding a new function to the module (unless it is also named `foo`) does not -change the location. However, such "ID" types ceases to be a `Copy`able integer and in -general can become pretty large if we account for nesting (for example: "third parameter of -the `foo` function of the `bar` `impl` in the `baz` module"). - -[`LocationInterner`] allows us to combine the benefits of positional and numeric -IDs. It is a bidirectional append-only map between locations and consecutive -integers which can "intern" a location and return an integer ID back. The salsa -database we use includes a couple of [interners]. How to "garbage collect" -unused locations is an open question. - -[`LocationInterner`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_db/src/loc2id.rs#L65-L71 -[interners]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/db.rs#L22-L23 - -For example, we use `LocationInterner` to assign IDs to definitions of functions, -structs, enums, etc. 
The location, [`DefLoc`] contains two bits of information: - -* the ID of the module which contains the definition, -* the ID of the specific item in the modules source code. - -We "could" use a text offset for the location of a particular item, but that would play -badly with salsa: offsets change after edits. So, as a rule of thumb, we avoid -using offsets, text ranges or syntax trees as keys and values for queries. What -we do instead is we store "index" of the item among all of the items of a file -(so, a positional based ID, but localized to a single file). - -[`DefLoc`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/ids.rs#L127-L139 - -One thing we've glossed over for the time being is support for macros. We have -only proof of concept handling of macros at the moment, but they are extremely -interesting from an "assigning IDs" perspective. - -## Macros and recursive locations - -The tricky bit about macros is that they effectively create new source files. -While we can use `FileId`s to refer to original files, we can't just assign them -willy-nilly to the pseudo files of macro expansion. Instead, we use a special -ID, [`HirFileId`] to refer to either a usual file or a macro-generated file: - -```rust -enum HirFileId { - FileId(FileId), - Macro(MacroCallId), -} -``` - -`MacroCallId` is an interned ID that specifies a particular macro invocation. -Its `MacroCallLoc` contains: - -* `ModuleId` of the containing module -* `HirFileId` of the containing file or pseudo file -* an index of this particular macro invocation in this file (positional id - again). - -Note how `HirFileId` is defined in terms of `MacroCallLoc` which is defined in -terms of `HirFileId`! This does not recur infinitely though: any chain of -`HirFileId`s bottoms out in `HirFileId::FileId`, that is, some source file -actually written by the user. - -[`HirFileId`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/ids.rs#L18-L125 - -Now that we understand how to identify a definition, in a source or in a -macro-generated file, we can discuss name resolution a bit. - -## Name resolution - -Name resolution faces the same problem as the module tree: if we look at the -syntax tree directly, we'll have to recompute name resolution after every -modification. The solution to the problem is the same: We [lower] the source code of -each module into a position-independent representation which does not change if -we modify bodies of the items. After that we [loop] resolving all imports until -we've reached a fixed point. - -[lower]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/lower.rs#L113-L117 -[loop]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres.rs#L186-L196 - -And, given all our preparation with IDs and a position-independent representation, -it is satisfying to [test] that typing inside function body does not invalidate -name resolution results. - -[test]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/tests.rs#L376 - -An interesting fact about name resolution is that it "erases" all of the -intermediate paths from the imports: in the end, we know which items are defined -and which items are imported in each module, but, if the import was `use -foo::bar::baz`, we deliberately forget what modules `foo` and `bar` resolve to. 
- -To serve "goto definition" requests on intermediate segments we need this info -in the IDE, however. Luckily, we need it only for a tiny fraction of imports, so we just ask -the module explicitly, "What does the path `foo::bar` resolve to?". This is a -general pattern: we try to compute the minimal possible amount of information -during analysis while allowing IDE to ask for additional specific bits. - -Name resolution is also a good place to introduce another salsa pattern used -throughout the analyzer: - -## Source Map pattern - -Due to an obscure edge case in completion, IDE needs to know the syntax node of -an use statement which imported the given completion candidate. We can't just -store the syntax node as a part of name resolution: this will break -incrementality, due to the fact that syntax changes after every file -modification. - -We solve this problem during the lowering step of name resolution. The lowering -query actually produces a *pair* of outputs: `LoweredModule` and [`SourceMap`]. -The `LoweredModule` module contains [imports], but in a position-independent form. -The `SourceMap` contains a mapping from position-independent imports to -(position-dependent) syntax nodes. - -The result of this basic lowering query changes after every modification. But -there's an intermediate [projection query] which returns only the first -position-independent part of the lowering. The result of this query is stable. -Naturally, name resolution [uses] this stable projection query. - -[imports]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/lower.rs#L52-L59 -[`SourceMap`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/lower.rs#L52-L59 -[projection query]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/nameres/lower.rs#L97-L103 -[uses]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/query_definitions.rs#L49 - -## Type inference - -First of all, implementation of type inference in rust-analyzer was spearheaded -by [@flodiebold]. [#327] was an awesome Christmas present, thank you, Florian! - -Type inference runs on per-function granularity and uses the patterns we've -discussed previously. - -First, we [lower the AST] of a function body into a position-independent -representation. In this representation, each expression is assigned a -[positional ID]. Alongside the lowered expression, [a source map] is produced, -which maps between expression ids and original syntax. This lowering step also -deals with "incomplete" source trees by replacing missing expressions by an -explicit `Missing` expression. - -Given the lowered body of the function, we can now run [type inference] and -construct a mapping from `ExprId`s to types. 
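To make the shape of these pieces concrete, here is a deliberately tiny sketch -- every name below is invented for illustration and does not match the real types in `ra_hir`'s `expr` and `ty` modules -- of a lowered body with positional IDs, a source map back to the text, and an inference result keyed by `ExprId`:

```rust
use std::collections::HashMap;
use std::ops::Range;

// Positional ID of an expression inside one function body (invented stand-in).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ExprId(u32);

// A drastically simplified position-independent expression.
enum Expr {
    Literal(i64),
    BinOp(ExprId, ExprId),
    Missing, // stands in for an incomplete expression such as `1 +`
}

// The lowered body: an arena of expressions indexed by `ExprId`.
struct Body {
    exprs: Vec<Expr>,
}

// Maps expression IDs back to text ranges in the original file.
struct BodySourceMap {
    expr_range: HashMap<ExprId, Range<usize>>,
}

#[derive(Clone, Copy, PartialEq, Debug)]
enum Ty {
    Int,
    Unknown,
}

// Toy "type inference": every literal and binary operation is an integer here.
fn infer(body: &Body) -> HashMap<ExprId, Ty> {
    let mut result = HashMap::new();
    for (idx, expr) in body.exprs.iter().enumerate() {
        let ty = match expr {
            Expr::Literal(_) | Expr::BinOp(..) => Ty::Int,
            Expr::Missing => Ty::Unknown,
        };
        result.insert(ExprId(idx as u32), ty);
    }
    result
}

fn main() {
    // Lowered form of the text `1 + 2`, plus the source map back to its offsets.
    let body = Body {
        exprs: vec![Expr::Literal(1), Expr::Literal(2), Expr::BinOp(ExprId(0), ExprId(1))],
    };
    let source_map = BodySourceMap {
        expr_range: HashMap::from([(ExprId(0), 0..1), (ExprId(1), 4..5), (ExprId(2), 0..5)]),
    };

    let types = infer(&body);
    assert_eq!(types[&ExprId(2)], Ty::Int); // the whole `1 + 2` expression
    assert_eq!(source_map.expr_range[&ExprId(2)], 0..5); // and where it came from
}
```

The important property is that `Body` and the inference result never mention text offsets, so edits to whitespace or comments do not invalidate them; only the `BodySourceMap` changes.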
-
-[@flodiebold]: https://github.com/flodiebold
-[#327]: https://github.com/rust-analyzer/rust-analyzer/pull/327
-[lower the AST]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/expr.rs
-[positional ID]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/expr.rs#L13-L15
-[a source map]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/expr.rs#L41-L44
-[type inference]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_hir/src/ty.rs#L1208-L1223
-
-## Tying it all together: completion
-
-To conclude the overview of rust-analyzer, let's trace a request for
-(type-inference powered!) code completion!
-
-We start by [receiving a message] from the language client. We decode the
-message as a request for completion and [schedule it on the threadpool]. This is
-also the place where we [catch] canceled errors if, immediately after completion, the
-client sends some modification.
-
-In [the handler] we deserialize the LSP request into rust-analyzer-specific data
-types (by converting a file url into a numeric `FileId`), [ask analysis for
-completion] and serialize the results back to LSP.
-
-The [completion implementation] is finally the place where we start doing the actual
-work. The first step is to collect the `CompletionContext` -- a struct which
-describes the cursor position in terms of Rust syntax and semantics. For
-example, `function_syntax: Option<&'a ast::FnDef>` stores a reference to the
-enclosing function *syntax*, while `function: Option` is the
-`Def` for this function.
-
-To construct the context, we first do an ["IntelliJ Trick"]: we insert a dummy
-identifier at the cursor's position and parse this modified file, to get a
-reasonable-looking syntax tree. Then we do a bunch of "classification" routines
-to figure out the context. For example, we [find an ancestor `fn` node] and we get a
-[semantic model] for it (using the lossy `source_binder` infrastructure).
-
-The second step is to run a [series of independent completion routines]. Let's
-take a closer look at [`complete_dot`], which completes fields and methods in
-`foo.bar|`. First we extract a semantic function and a syntactic receiver
-expression out of the `Context`. Then we run type-inference for this single
-function and map our syntactic expression to an `ExprId`. Using the ID, we figure
-out the type of the receiver expression. Then we add all fields & methods from
-the type to completion.
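Schematically (a sketch only: every type and function below is a stand-in invented for this document, not the actual `complete_dot` code), the routine has roughly this shape:

```rust
// Invented stand-ins for the completion accumulator, context and HIR pieces.
struct Completions(Vec<String>);

#[derive(Clone, Copy)]
struct ExprId(u32);

struct StructInfo {
    fields: Vec<&'static str>,
    methods: Vec<&'static str>,
}

struct CompletionCtx {
    // the `foo` in `foo.bar|`, already mapped to a lowered expression, if any
    dot_receiver: Option<ExprId>,
}

// Toy stand-in for "run inference for this function and look up the receiver's type".
fn type_of(_expr: ExprId) -> StructInfo {
    StructInfo { fields: vec!["bar"], methods: vec!["baz"] }
}

/// Roughly what dot-completion does: find the receiver, ask inference for its
/// type, then offer that type's fields and methods.
fn complete_dot(acc: &mut Completions, ctx: &CompletionCtx) {
    let receiver = match ctx.dot_receiver {
        Some(it) => it,
        None => return, // the cursor is not in a `expr.<something>` position
    };
    let ty = type_of(receiver);
    for field in ty.fields {
        acc.0.push(format!("field {}", field));
    }
    for method in ty.methods {
        acc.0.push(format!("method {}()", method));
    }
}

fn main() {
    let mut acc = Completions(Vec::new());
    complete_dot(&mut acc, &CompletionCtx { dot_receiver: Some(ExprId(0)) });
    assert_eq!(acc.0, ["field bar", "method baz()"]);
}
```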
- -[receiving a message]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L203 -[schedule it on the threadpool]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L428 -[catch]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_lsp_server/src/main_loop.rs#L436-L442 -[the handler]: https://salsa.zulipchat.com/#narrow/stream/181542-rfcs.2Fsalsa-query-group/topic/design.20next.20steps -[ask analysis for completion]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/lib.rs#L439-L444 -[completion implementation]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion.rs#L46-L62 -[`CompletionContext`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L14-L37 -["IntelliJ Trick"]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L72-L75 -[find an ancestor `fn` node]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L116-L120 -[semantic model]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/completion_context.rs#L123 -[series of independent completion routines]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion.rs#L52-L59 -[`complete_dot`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/ra_ide_api/src/completion/complete_dot.rs#L6-L22 -- cgit v1.2.3 From 206bbe9c93f3fd33922c9e00cfb263b980a79ca2 Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 10:00:54 +0300 Subject: README is short and up to the point --- README.md | 44 ++++++++++++++++++++++---------------------- 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/README.md b/README.md index acce7219e..4cada764d 100644 --- a/README.md +++ b/README.md @@ -13,32 +13,37 @@ Work on the Rust Analyzer is sponsored by [![Ferrous Systems](https://ferrous-systems.com/images/ferrous-logo-text.svg)](https://ferrous-systems.com/) -## Quick Start +## Language Server Quick Start -Rust analyzer builds on Rust >= 1.31.0 and uses the 2018 edition. +Rust Analyzer is a work-in-progress, so you'll have to build it from source, and +you might encounter critical bugs. That said, it is complete enough to provide a +useful IDE experience and some people use it as a daily driver. -``` -# run tests -$ cargo test +To build rust-analyzer, you need: -# show syntax tree of a Rust file -$ cargo run --package ra_cli parse < crates/ra_syntax/src/lib.rs +* latest stable rust for language server itself +* latest stable npm and VS Code for VS Code extension (`code` should be a path) -# show symbols of a Rust file -$ cargo run --package ra_cli symbols < crates/ra_syntax/src/lib.rs +For setup for other languages, see [./docs/users]. -# install the language server -$ cargo install-lsp -or -$ cargo install --path crates/ra_lsp_server ``` +# clone the repo +$ git clone https://github.com/rust-analyzer/rust-analyzer && cd rust-analyzer -See [these instructions](./editors/README.md) for VS Code setup and the list of -features (some of which are VS Code specific). +# install both the language server and VS Code extension +$ cargo install-code + +# alternatively, install only the server. 
Binary name is `ra_lsp_server`. +$ cargo install-lsp +``` +## Documentation -## Debugging +If you want to **contribute** to rust-analyzer or just curious about how things work +under the hood, check the [./docs/dev] folder. -See [these instructions](./DEBUGGING.md) on how to debug the vscode extension and the lsp server. +If you want to **use** rust-analyzer's language server with your editor of +choice, check [./docs/users] folder. It also contains some tips & tricks to help +you be more productive when using rust-analyzer. ## Getting in touch @@ -46,11 +51,6 @@ We are on the rust-lang Zulip! https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frls-2.2E0 -## Contributing - -See [CONTRIBUTING.md](./CONTRIBUTING.md) and [ARCHITECTURE.md](./ARCHITECTURE.md) - - ## License Rust analyzer is primarily distributed under the terms of both the MIT -- cgit v1.2.3 From 192a5cd11d413fdbaeb8d2e5106d82ae3b4a05c1 Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 12:17:12 +0300 Subject: better user docs --- docs/user/README.md | 248 +++++++++----------------------------------------- docs/user/features.md | 168 ++++++++++++++++++++++++++++++++++ 2 files changed, 210 insertions(+), 206 deletions(-) create mode 100644 docs/user/features.md diff --git a/docs/user/README.md b/docs/user/README.md index ddc6ee048..b25e152d0 100644 --- a/docs/user/README.md +++ b/docs/user/README.md @@ -1,3 +1,22 @@ +The main interface to rust-analyzer is the +[LSP](https://microsoft.github.io/language-server-protocol/) implementation. To +install lsp server, use `cargo install-lsp`, which is a shorthand for `cargo +install --package ra_lsp_server`. The binary is named `ra_lsp_server`, you +should be able to use it with any LSP-compatible editor. We use custom +extensions to LSP, so special client-side support is required to take full +advantage of rust-analyzer. This repository contains support code for VS Code +and Emacs. + +Rust Analyzer needs sources of rust standard library to work, so you might need +to execute + +``` +$ rustup component add rust-src +``` + +See [./features.md] document for a list of features that are available. + +## VS Code Prerequisites: @@ -15,227 +34,44 @@ following commands: $ git clone https://github.com/rust-analyzer/rust-analyzer.git --depth 1 $ cd rust-analyzer $ cargo install-code - -# for stdlib support -$ rustup component add rust-src ``` This will run `cargo install --package ra_lsp_server` to install the server binary into `~/.cargo/bin`, and then will build and install plugin from `editors/code`. See -[this](https://github.com/rust-analyzer/rust-analyzer/blob/0199572a3d06ff66eeae85a2d2c9762996f0d2d8/crates/tools/src/main.rs#L150) +[this](https://github.com/rust-analyzer/rust-analyzer/blob/69ee5c9c5ef212f7911028c9ddf581559e6565c3/crates/tools/src/main.rs#L37-L56) for details. The installation is expected to *just work*, if it doesn't, report bugs! It's better to remove existing Rust plugins to avoid interference. -## Rust Analyzer Specific Features - -These features are implemented as extensions to the language server protocol. -They are more experimental in nature and work only with VS Code. - -### Syntax highlighting - -It overrides built-in highlighting, and works only with a specific theme -(zenburn). `rust-analyzer.highlightingOn` setting can be used to disable it. - -### Go to symbol in workspace ctrl+t - -It mostly works on top of the built-in LSP functionality, however `#` and `*` -symbols can be used to narrow down the search. 
Specifically, - -- `#Foo` searches for `Foo` type in the current workspace -- `#foo#` searches for `foo` function in the current workspace -- `#Foo*` searches for `Foo` type among dependencies, excluding `stdlib` -- `#foo#*` searches for `foo` function among dependencies. - -That is, `#` switches from "types" to all symbols, `*` switches from the current -workspace to dependencies. - -### Commands ctrl+shift+p - -#### Show Rust Syntax Tree - -Shows the parse tree of the current file. It exists mostly for debugging -rust-analyzer itself. - -#### Extend Selection - -Extends the current selection to the encompassing syntactic construct -(expression, statement, item, module, etc). It works with multiple cursors. Do -bind this command to a key, its super-useful! Expected to be upstreamed to LSP soonish: -https://github.com/Microsoft/language-server-protocol/issues/613 - -#### Matching Brace - -If the cursor is on any brace (`<>(){}[]`) which is a part of a brace-pair, -moves cursor to the matching brace. It uses the actual parser to determine -braces, so it won't confuse generics with comparisons. - -#### Parent Module - -Navigates to the parent module of the current module. - -#### Join Lines - -Join selected lines into one, smartly fixing up whitespace and trailing commas. - -#### Run - -Shows popup suggesting to run a test/benchmark/binary **at the current cursor -location**. Super useful for repeatedly running just a single test. Do bind this -to a shortcut! - - -### On Typing Assists - -Some features trigger on typing certain characters: - -- typing `let =` tries to smartly add `;` if `=` is followed by an existing expression. -- Enter inside comments automatically inserts `///` -- typing `.` in a chain method call auto-indents +Beyond basic LSP features, there are some extension commands which you can +invoke via Ctrl+Shift+P or bind to a shortcut. See [./features.md] +for details. +### Settings -### Code Actions (Assists) +* `rust-analyzer.highlightingOn`: enables experimental syntax highlighting +* `rust-analyzer.showWorkspaceLoadedNotification`: to ease troubleshooting, a + notification is shown by default when a workspace is loaded +* `rust-analyzer.enableEnhancedTyping`: by default, rust-analyzer intercepts + `Enter` key to make it easier to continue comments +* `rust-analyzer.raLspServerPath`: path to `ra_lsp_server` executable +* `rust-analyzer.enableCargoWatchOnStartup`: prompt to install & enable `cargo + watch` for live error highlighting (note, this **does not** use rust-analyzer) +* `rust-analyzer.trace.server`: enables internal logging -These are triggered in a particular context via light bulb. We use custom code on -the VS Code side to be able to position cursor. 
+## Emacs -- Flip `,` - -```rust -// before: -fn foo(x: usize,<|> dim: (usize, usize)) -// after: -fn foo(dim: (usize, usize), x: usize) -``` - -- Add `#[derive]` - -```rust -// before: -struct Foo { - <|>x: i32 -} -// after: -#[derive(<|>)] -struct Foo { - x: i32 -} -``` - -- Add `impl` - -```rust -// before: -struct Foo<'a, T: Debug> { - <|>t: T -} -// after: -struct Foo<'a, T: Debug> { - t: T -} - -impl<'a, T: Debug> Foo<'a, T> { - <|> -} -``` - -- Change visibility - -```rust -// before: -fn<|> foo() {} - -// after -pub(crate) fn foo() {} -``` - -- Introduce variable: - -```rust -// before: -fn foo() { - foo(<|>1 + 1<|>); -} - -// after: -fn foo() { - let var_name = 1 + 1; - foo(var_name); -} -``` - -- Replace if-let with match: - -```rust -// before: -impl VariantData { - pub fn is_struct(&self) -> bool { - if <|>let VariantData::Struct(..) = *self { - true - } else { - false - } - } -} - -// after: -impl VariantData { - pub fn is_struct(&self) -> bool { - <|>match *self { - VariantData::Struct(..) => true, - _ => false, - } - } -} -``` - -- Split import - -```rust -// before: -use algo:<|>:visitor::{Visitor, visit}; -//after: -use algo::{<|>visitor::{Visitor, visit}}; -``` - -## LSP features - -* **Go to definition**: works correctly for local variables and some paths, - falls back to heuristic name matching for other things for the time being. - -* **Completion**: completes paths, including dependencies and standard library. - Does not handle glob imports and macros. Completes fields and inherent - methods. - -* **Outline** alt+shift+o - -* **Signature Info** - -* **Format document**. Formats the current file with rustfmt. Rustfmt must be - installed separately with `rustup component add rustfmt`. - -* **Hover** shows types of expressions and docstings - -* **Rename** works for local variables - -* **Code Lens** for running tests - -* **Folding** - -* **Diagnostics** - - missing module for `mod foo;` with a fix to create `foo.rs`. - - struct field shorthand - - unnecessary braces in use item +Prerequisites: +`emacs-lsp`, `dash` and `ht` packages. -## Performance +Installation: -Rust Analyzer is expected to be pretty fast. Specifically, the initial analysis -of the project (i.e, when you first invoke completion or symbols) typically -takes dozen of seconds at most. After that, everything is supposed to be more or -less instant. However currently all analysis results are kept in memory, so -memory usage is pretty high. Working with `rust-lang/rust` repo, for example, -needs about 5 gigabytes of ram. +* add +[ra-emacs-lsp.el](https://github.com/rust-analyzer/rust-analyzer/blob/69ee5c9c5ef212f7911028c9ddf581559e6565c3/editors/emacs/ra-emacs-lsp.el) +to load path and require it in `init.el` +* run `lsp` in a rust buffer +* (Optionally) bind commands like `join-lines` or `extend-selection` to keys diff --git a/docs/user/features.md b/docs/user/features.md new file mode 100644 index 000000000..5df606aee --- /dev/null +++ b/docs/user/features.md @@ -0,0 +1,168 @@ +This documents is an index of features that rust-analyzer language server provides. + +### Go to symbol in workspace ctrl+t + +It mostly works on top of the built-in LSP functionality, however `#` and `*` +symbols can be used to narrow down the search. Specifically, + +- `#Foo` searches for `Foo` type in the current workspace +- `#foo#` searches for `foo` function in the current workspace +- `#Foo*` searches for `Foo` type among dependencies, excluding `stdlib` +- `#foo#*` searches for `foo` function among dependencies. 
+ +That is, `#` switches from "types" to all symbols, `*` switches from the current +workspace to dependencies. + +### Commands ctrl+shift+p + +#### Show Rust Syntax Tree + +Shows the parse tree of the current file. It exists mostly for debugging +rust-analyzer itself. + +#### Extend Selection + +Extends the current selection to the encompassing syntactic construct +(expression, statement, item, module, etc). It works with multiple cursors. Do +bind this command to a key, its super-useful! Expected to be upstreamed to LSP soonish: +https://github.com/Microsoft/language-server-protocol/issues/613 + +#### Matching Brace + +If the cursor is on any brace (`<>(){}[]`) which is a part of a brace-pair, +moves cursor to the matching brace. It uses the actual parser to determine +braces, so it won't confuse generics with comparisons. + +#### Parent Module + +Navigates to the parent module of the current module. + +#### Join Lines + +Join selected lines into one, smartly fixing up whitespace and trailing commas. + +#### Run + +Shows popup suggesting to run a test/benchmark/binary **at the current cursor +location**. Super useful for repeatedly running just a single test. Do bind this +to a shortcut! + + +### On Typing Assists + +Some features trigger on typing certain characters: + +- typing `let =` tries to smartly add `;` if `=` is followed by an existing expression. +- Enter inside comments automatically inserts `///` +- typing `.` in a chain method call auto-indents + + + + + +### Code Actions (Assists) + +These are triggered in a particular context via light bulb. We use custom code on +the VS Code side to be able to position cursor. + + +- Flip `,` + +```rust +// before: +fn foo(x: usize,<|> dim: (usize, usize)) +// after: +fn foo(dim: (usize, usize), x: usize) +``` + +- Add `#[derive]` + +```rust +// before: +struct Foo { + <|>x: i32 +} +// after: +#[derive(<|>)] +struct Foo { + x: i32 +} +``` + +- Add `impl` + +```rust +// before: +struct Foo<'a, T: Debug> { + <|>t: T +} +// after: +struct Foo<'a, T: Debug> { + t: T +} + +impl<'a, T: Debug> Foo<'a, T> { + <|> +} +``` + +- Change visibility + +```rust +// before: +fn<|> foo() {} + +// after +pub(crate) fn foo() {} +``` + +- Introduce variable: + +```rust +// before: +fn foo() { + foo(<|>1 + 1<|>); +} + +// after: +fn foo() { + let var_name = 1 + 1; + foo(var_name); +} +``` + +- Replace if-let with match: + +```rust +// before: +impl VariantData { + pub fn is_struct(&self) -> bool { + if <|>let VariantData::Struct(..) = *self { + true + } else { + false + } + } +} + +// after: +impl VariantData { + pub fn is_struct(&self) -> bool { + <|>match *self { + VariantData::Struct(..) => true, + _ => false, + } + } +} +``` + +- Split import + +```rust +// before: +use algo:<|>:visitor::{Visitor, visit}; +//after: +use algo::{<|>visitor::{Visitor, visit}}; +``` + + -- cgit v1.2.3 From 56ad19ef025ddf8461b56c5f42ca074e4f2ebe23 Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 12:21:55 +0300 Subject: fix links --- README.md | 6 +++--- docs/user/README.md | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index 4cada764d..7debf7c3b 100644 --- a/README.md +++ b/README.md @@ -24,7 +24,7 @@ To build rust-analyzer, you need: * latest stable rust for language server itself * latest stable npm and VS Code for VS Code extension (`code` should be a path) -For setup for other languages, see [./docs/users]. +For setup for other languages, see [./docs/users](./docs/users). 
``` # clone the repo @@ -39,10 +39,10 @@ $ cargo install-lsp ## Documentation If you want to **contribute** to rust-analyzer or just curious about how things work -under the hood, check the [./docs/dev] folder. +under the hood, check the [./docs/dev](./docs/dev) folder. If you want to **use** rust-analyzer's language server with your editor of -choice, check [./docs/users] folder. It also contains some tips & tricks to help +choice, check [./docs/users](./docs/users) folder. It also contains some tips & tricks to help you be more productive when using rust-analyzer. ## Getting in touch diff --git a/docs/user/README.md b/docs/user/README.md index b25e152d0..8de46981b 100644 --- a/docs/user/README.md +++ b/docs/user/README.md @@ -14,7 +14,7 @@ to execute $ rustup component add rust-src ``` -See [./features.md] document for a list of features that are available. +See [./features.md](./features.md) document for a list of features that are available. ## VS Code @@ -46,7 +46,7 @@ bugs! It's better to remove existing Rust plugins to avoid interference. Beyond basic LSP features, there are some extension commands which you can -invoke via Ctrl+Shift+P or bind to a shortcut. See [./features.md] +invoke via Ctrl+Shift+P or bind to a shortcut. See [./features.md](./features.md) for details. ### Settings -- cgit v1.2.3 From 505bd45bcc44ee58090e514f46cbe884924e3574 Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 12:24:12 +0300 Subject: fix links --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 7debf7c3b..638804632 100644 --- a/README.md +++ b/README.md @@ -24,7 +24,7 @@ To build rust-analyzer, you need: * latest stable rust for language server itself * latest stable npm and VS Code for VS Code extension (`code` should be a path) -For setup for other languages, see [./docs/users](./docs/users). +For setup for other languages, see [./docs/user](./docs/user). ``` # clone the repo @@ -42,7 +42,7 @@ If you want to **contribute** to rust-analyzer or just curious about how things under the hood, check the [./docs/dev](./docs/dev) folder. If you want to **use** rust-analyzer's language server with your editor of -choice, check [./docs/users](./docs/users) folder. It also contains some tips & tricks to help +choice, check [./docs/user](./docs/user) folder. It also contains some tips & tricks to help you be more productive when using rust-analyzer. ## Getting in touch -- cgit v1.2.3 From ab9fef1ee26c185cdf2b14c3d21ecfae7b0905ae Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 12:25:03 +0300 Subject: sadly, we support only a single language atm --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 638804632..ad3ad22c3 100644 --- a/README.md +++ b/README.md @@ -24,7 +24,7 @@ To build rust-analyzer, you need: * latest stable rust for language server itself * latest stable npm and VS Code for VS Code extension (`code` should be a path) -For setup for other languages, see [./docs/user](./docs/user). +For setup for other editors, see [./docs/user](./docs/user). 
``` # clone the repo -- cgit v1.2.3 From 07a9e5c0e1c20f66730f608647e96ce29359b91d Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 13:05:03 +0300 Subject: document assists --- docs/user/features.md | 251 ++++++++++++++++++++++++++++++++++++++++---------- 1 file changed, 203 insertions(+), 48 deletions(-) diff --git a/docs/user/features.md b/docs/user/features.md index 5df606aee..aa3bf5157 100644 --- a/docs/user/features.md +++ b/docs/user/features.md @@ -1,45 +1,47 @@ -This documents is an index of features that rust-analyzer language server provides. +This documents is an index of features that rust-analyzer language server +provides. Shortcuts are for the default VS Code layout. If there's no shortcut, +you can use Ctrl+Shift+P to search for the corresponding action. -### Go to symbol in workspace ctrl+t +### Workspace Symbol ctrl+t -It mostly works on top of the built-in LSP functionality, however `#` and `*` -symbols can be used to narrow down the search. Specifically, +Uses fuzzy-search to find types, modules and function by name across your +project and dependencies. This **the** most useful feature, which improves code +navigation tremendously. It mostly works on top of the built-in LSP +functionality, however `#` and `*` symbols can be used to narrow down the +search. Specifically, -- `#Foo` searches for `Foo` type in the current workspace -- `#foo#` searches for `foo` function in the current workspace -- `#Foo*` searches for `Foo` type among dependencies, excluding `stdlib` -- `#foo#*` searches for `foo` function among dependencies. +- `Foo` searches for `Foo` type in the current workspace +- `foo#` searches for `foo` function in the current workspace +- `Foo*` searches for `Foo` type among dependencies, excluding `stdlib` +- `foo#*` searches for `foo` function among dependencies. That is, `#` switches from "types" to all symbols, `*` switches from the current workspace to dependencies. -### Commands ctrl+shift+p - -#### Show Rust Syntax Tree - -Shows the parse tree of the current file. It exists mostly for debugging -rust-analyzer itself. +### Document Symbol ctrl+shift+o -#### Extend Selection +Provides a tree of the symbols defined in the file. Can be used to -Extends the current selection to the encompassing syntactic construct -(expression, statement, item, module, etc). It works with multiple cursors. Do -bind this command to a key, its super-useful! Expected to be upstreamed to LSP soonish: -https://github.com/Microsoft/language-server-protocol/issues/613 +* fuzzy search symbol in a file (super useful) +* draw breadcrumbs to describe the context around the cursor +* draw outline of the file -#### Matching Brace +### On Typing Assists -If the cursor is on any brace (`<>(){}[]`) which is a part of a brace-pair, -moves cursor to the matching brace. It uses the actual parser to determine -braces, so it won't confuse generics with comparisons. +Some features trigger on typing certain characters: -#### Parent Module +- typing `let =` tries to smartly add `;` if `=` is followed by an existing expression. +- Enter inside comments automatically inserts `///` +- typing `.` in a chain method call auto-indents -Navigates to the parent module of the current module. +### Commands ctrl+shift+p -#### Join Lines +#### Extend Selection -Join selected lines into one, smartly fixing up whitespace and trailing commas. +Extends the current selection to the encompassing syntactic construct +(expression, statement, item, module, etc). It works with multiple cursors. 
Do +bind this command to a key, it's super-useful! Expected to be upstreamed to LSP +soonish: https://github.com/Microsoft/language-server-protocol/issues/613 #### Run @@ -47,33 +49,37 @@ Shows popup suggesting to run a test/benchmark/binary **at the current cursor location**. Super useful for repeatedly running just a single test. Do bind this to a shortcut! +#### Parent Module -### On Typing Assists +Navigates to the parent module of the current module. -Some features trigger on typing certain characters: +#### Matching Brace -- typing `let =` tries to smartly add `;` if `=` is followed by an existing expression. -- Enter inside comments automatically inserts `///` -- typing `.` in a chain method call auto-indents +If the cursor is on any brace (`<>(){}[]`) which is a part of a brace-pair, +moves cursor to the matching brace. It uses the actual parser to determine +braces, so it won't confuse generics with comparisons. +#### Join Lines +Join selected lines into one, smartly fixing up whitespace and trailing commas. +#### Show Syntax Tree +Shows the parse tree of the current file. It exists mostly for debugging +rust-analyzer itself. -### Code Actions (Assists) +#### Status -These are triggered in a particular context via light bulb. We use custom code on -the VS Code side to be able to position cursor. +Shows internal statistic about memory usage of rust-analyzer +#### Run garbage collection -- Flip `,` +Manually triggers GC -```rust -// before: -fn foo(x: usize,<|> dim: (usize, usize)) -// after: -fn foo(dim: (usize, usize), x: usize) -``` +### Code Actions (Assists) + +These are triggered in a particular context via light bulb. We use custom code on +the VS Code side to be able to position cursor. `<|>` signifies cursor - Add `#[derive]` @@ -106,14 +112,147 @@ impl<'a, T: Debug> Foo<'a, T> { } ``` -- Change visibility +- Add missing `impl` members ```rust // before: -fn<|> foo() {} +trait Foo { + fn foo(&self); + fn bar(&self); + fn baz(&self); +} + +struct S; + +impl Foo for S { + fn bar(&self) {} + <|> +} + +// after: +trait Foo { + fn foo(&self); + fn bar(&self); + fn baz(&self); +} -// after -pub(crate) fn foo() {} +struct S; + +impl Foo for S { + fn bar(&self) {} + fn foo(&self) { unimplemented!() } + fn baz(&self) { unimplemented!() }<|> +} +``` + +- Import path + +```rust +// before: +impl std::fmt::Debug<|> for Foo { +} + +// after: +use std::fmt::Debug + +impl Debug<|> for Foo { +} +``` + +- Change Visibility + +```rust +// before: +<|>fn foo() {} + +// after: +<|>pub(crate) fn foo() {} + +// after: +<|>pub fn foo() {} +``` + +- Fill match arms + +```rust +// before: +enum A { + As, + Bs, + Cs(String), + Ds(String, String), + Es{x: usize, y: usize} +} + +fn main() { + let a = A::As; + match a<|> {} +} + +// after: +enum A { + As, + Bs, + Cs(String), + Ds(String, String), + Es{x: usize, y: usize} +} + +fn main() { + let a = A::As; + match <|>a { + A::As => (), + A::Bs => (), + A::Cs(_) => (), + A::Ds(_, _) => (), + A::Es{x, y} => (), + } +} +``` + +-- Fill struct fields + +```rust +// before: +struct S<'a, D> { + a: u32, + b: String, + c: (i32, i32), + d: D, + r: &'a str, +} + +fn main() { + let s = S<|> {} +} + +// after: +struct S<'a, D> { + a: u32, + b: String, + c: (i32, i32), + d: D, + r: &'a str, +} + +fn main() { + let s = <|>S { + a: (), + b: (), + c: (), + d: (), + r: (), + } +} +``` + +- Flip `,` + +```rust +// before: +fn foo(x: usize,<|> dim: (usize, usize)) {} +// after: +fn foo(dim: (usize, usize), x: usize) {} ``` - Introduce variable: @@ -131,6 +270,24 @@ fn 
foo() { } ``` +-- Remove `dbg!` + +```rust +// before: +fn foo(n: usize) { + if let Some(_) = dbg!(n.<|>checked_sub(4)) { + // ... + } +} + +// after: +fn foo(n: usize) { + if let Some(_) = n.<|>checked_sub(4) { + // ... + } +} +``` + - Replace if-let with match: ```rust @@ -164,5 +321,3 @@ use algo:<|>:visitor::{Visitor, visit}; //after: use algo::{<|>visitor::{Visitor, visit}}; ``` - - -- cgit v1.2.3 From dbed0f0e9960904193fd56327201b91bf585e016 Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 13:19:46 +0300 Subject: document some nice things --- docs/user/features.md | 36 ++++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+) diff --git a/docs/user/features.md b/docs/user/features.md index aa3bf5157..90f182f35 100644 --- a/docs/user/features.md +++ b/docs/user/features.md @@ -321,3 +321,39 @@ use algo:<|>:visitor::{Visitor, visit}; //after: use algo::{<|>visitor::{Visitor, visit}}; ``` + +### Magic Completions + +In addition to usual reference completion, rust-analyzer provides some ✨magic✨ +completions as well: + +Keywords like `if`, `else` `while`, `loop` are completed with braces, and cursor +is placed at the appropriate position. Even though `if` is easy to type, you +still want to complete it, to get ` { }` for free! `return` is inserted with a +space or `;` depending on the return type of the function. + +When completing a function call, `()` are automatically inserted. If function +takes arguments, cursor is positioned inside the parenthesis. + +There are postifx completions, which can be triggerd by typing something like +`foo().if`. The word after `.` determines postifx completion, possible variants are: + +- `expr.if` -> `if expr {}` +- `expr.match` -> `match expr {}` +- `expr.while` -> `while expr {}` +- `expr.ref` -> `&expr` +- `expr.refm` -> `&mut expr` +- `expr.not` -> `!expr` +- `expr.dbg` -> `dbg!(expr)` + +There also snippet completions: + +#### Inside Expressions + +- `pd` -> `println!("{:?}")` +- `ppd` -> `println!("{:#?}")` + +#### Inside Modules + +- `tfn` -> `#[test] fn f(){}` + -- cgit v1.2.3 From fbf35c839b706286307b673a3dd6231c81ad4661 Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 14:49:06 +0300 Subject: kill old roadmap: it is completed --- docs/dev/ROADMAP.md | 77 ----------------------------------------------------- 1 file changed, 77 deletions(-) delete mode 100644 docs/dev/ROADMAP.md diff --git a/docs/dev/ROADMAP.md b/docs/dev/ROADMAP.md deleted file mode 100644 index 3856ebc5b..000000000 --- a/docs/dev/ROADMAP.md +++ /dev/null @@ -1,77 +0,0 @@ -# Rust Analyzer Roadmap 01 - -Written on 2018-11-06, extends approximately to February 2019. -After that, we should coordinate with the compiler/rls developers to align goals and share code and experience. - - -# Overall Goals - -The mission is: - * Provide an excellent "code analyzed as you type" IDE experience for the Rust language, - * Implement the bulk of the features in Rust itself. - - -High-level architecture constraints: - * Long-term, replace the current rustc frontend. - It's *obvious* that the code should be shared, but OTOH, all great IDEs started as from-scratch rewrites. - * Don't hard-code a particular protocol or mode of operation. - Produce a library which could be used for implementing an LSP server, or for in-process embedding. - * As long as possible, stick with stable Rust. - - -# Current Goals - -Ideally, we would be coordinating with the compiler/rls teams, but they are busy working on making Rust 2018 at the moment. 
-The sync-up point will happen some time after the edition, probably early 2019. -In the meantime, the goal is to **experiment**, specifically, to figure out how a from-scratch written RLS might look like. - - -## Data Storage and Protocol implementation - -The fundamental part of any architecture is who owns which data, how the data is mutated and how the data is exposed to user. -For storage we use the [salsa](http://github.com/salsa-rs/salsa) library, which provides a solid model that seems to be the way to go. - -Modification to source files is mostly driven by the language client, but we also should support watching the file system. The current -file watching implementation is a stub. - -**Action Item:** implement reliable file watching service. - -We also should extract LSP bits as a reusable library. There's already `gen_lsp_server`, but it is pretty limited. - -**Action Item:** try using `gen_lsp_server` in more than one language server, for example for TOML and Nix. - -The ideal architecture for `gen_lsp_server` is still unclear. I'd rather avoid futures: they bring significant runtime complexity -(call stacks become insane) and the performance benefits are negligible for our use case (one thread per request is perfectly OK given -the low amount of requests a language server receives). The current interface is based on crossbeam-channel, but it's not clear -if that is the best choice. - - -## Low-effort, high payoff features - -Implementing 20% of type inference will give use 80% of completion. -Thus it makes sense to partially implement name resolution, type inference and trait matching, even though there is a chance that -this code is replaced later on when we integrate with the compiler - -Specifically, we need to: - -* **Action Item:** implement path resolution, so that we get completion in imports and such. -* **Action Item:** implement simple type inference, so that we get completion for inherent methods. -* **Action Item:** implement nicer completion infrastructure, so that we have icons, snippets, doc comments, after insert callbacks, ... - - -## Dragons to kill - -To make experiments most effective, we should try to prototype solutions for the hardest problems. -In the case of Rust, the two hardest problems are: - * Conditional compilation and source/model mismatch. - A single source file might correspond to several entities in the semantic model. - For example, different cfg flags produce effectively different crates from the same source. - * Macros are intertwined with name resolution in a single fix-point iteration algorithm. - This is just plain hard to implement, but also interacts poorly with on-demand. - - -For the first bullet point, we need to design descriptors infra and explicit mapping step between sources and semantic model, which is intentionally fuzzy in one direction. -The **action item** here is basically "write code, see what works, keep high-level picture in mind". - -For the second bullet point, there's hope that salsa with its deep memoization will result in a fast enough solution even without being fully on-demand. -Again, the **action item** is to write the code and see what works. Salsa itself uses macros heavily, so it should be a great test. 
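As a tiny, hypothetical illustration of the first dragon above (this snippet is not from the repository): the same file declares different items depending on crate-level `cfg` flags, so one source file can correspond to several distinct semantic models:

```rust
// One hypothetical source file, two different semantic models: with
// `--cfg small_buffers` the crate exports a fixed-size array, otherwise a Vec.
#[cfg(small_buffers)]
pub fn buffer() -> [u8; 64] {
    [0u8; 64]
}

#[cfg(not(small_buffers))]
pub fn buffer() -> Vec<u8> {
    Vec::with_capacity(4096)
}

fn main() {
    // Which `buffer` exists, and what it returns, depends on how the crate is
    // compiled -- which is why the analyzer needs per-crate cfg information.
    let _buf = buffer();
}
```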
-- cgit v1.2.3 From 728990a5807882276e34f1f581f70a59f61ba991 Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 15:22:05 +0300 Subject: start dev readme --- docs/dev/ARCHITECTURE.md | 200 ----------------------------------------------- docs/dev/README.md | 37 +++++++++ docs/dev/arhictecture.md | 200 +++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 237 insertions(+), 200 deletions(-) delete mode 100644 docs/dev/ARCHITECTURE.md create mode 100644 docs/dev/README.md create mode 100644 docs/dev/arhictecture.md diff --git a/docs/dev/ARCHITECTURE.md b/docs/dev/ARCHITECTURE.md deleted file mode 100644 index 57f76ebae..000000000 --- a/docs/dev/ARCHITECTURE.md +++ /dev/null @@ -1,200 +0,0 @@ -# Architecture - -This document describes the high-level architecture of rust-analyzer. -If you want to familiarize yourself with the code base, you are just -in the right place! - -See also the [guide](./guide.md), which walks through a particular snapshot of -rust-analyzer code base. - -For syntax-trees specifically, there's a [video walk -through](https://youtu.be/DGAuLWdCCAI) as well. - -## The Big Picture - -![](https://user-images.githubusercontent.com/1711539/50114578-e8a34280-0255-11e9-902c-7cfc70747966.png) - -On the highest level, rust-analyzer is a thing which accepts input source code -from the client and produces a structured semantic model of the code. - -More specifically, input data consists of a set of test files (`(PathBuf, -String)` pairs) and information about project structure, captured in the so called -`CrateGraph`. The crate graph specifies which files are crate roots, which cfg -flags are specified for each crate (TODO: actually implement this) and what -dependencies exist between the crates. The analyzer keeps all this input data in -memory and never does any IO. Because the input data is source code, which -typically measures in tens of megabytes at most, keeping all input data in -memory is OK. - -A "structured semantic model" is basically an object-oriented representation of -modules, functions and types which appear in the source code. This representation -is fully "resolved": all expressions have types, all references are bound to -declarations, etc. - -The client can submit a small delta of input data (typically, a change to a -single file) and get a fresh code model which accounts for changes. - -The underlying engine makes sure that model is computed lazily (on-demand) and -can be quickly updated for small modifications. - - -## Code generation - -Some of the components of this repository are generated through automatic -processes. These are outlined below: - -- `gen-syntax`: The kinds of tokens that are reused in several places, so a generator - is used. We use tera templates to generate the files listed below, based on - the grammar described in [grammar.ron]: - - [ast/generated.rs][ast generated] in `ra_syntax` based on - [ast/generated.tera.rs][ast source] - - [syntax_kinds/generated.rs][syntax_kinds generated] in `ra_syntax` based on - [syntax_kinds/generated.tera.rs][syntax_kinds source] - -[tera]: https://tera.netlify.com/ -[grammar.ron]: ./crates/ra_syntax/src/grammar.ron -[ast generated]: ./crates/ra_syntax/src/ast/generated.rs -[ast source]: ./crates/ra_syntax/src/ast/generated.rs.tera -[syntax_kinds generated]: ./crates/ra_syntax/src/syntax_kinds/generated.rs -[syntax_kinds source]: ./crates/ra_syntax/src/syntax_kinds/generated.rs.tera - - -## Code Walk-Through - -### `crates/ra_syntax` - -Rust syntax tree structure and parser. 
See -[RFC](https://github.com/rust-lang/rfcs/pull/2256) for some design notes. - -- [rowan](https://github.com/rust-analyzer/rowan) library is used for constructing syntax trees. -- `grammar` module is the actual parser. It is a hand-written recursive descent parser, which - produces a sequence of events like "start node X", "finish not Y". It works similarly to [kotlin's parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java), - which is a good source of inspiration for dealing with syntax errors and incomplete input. Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs) - is what we use for the definition of the Rust language. -- `parser_api/parser_impl` bridges the tree-agnostic parser from `grammar` with `rowan` trees. - This is the thing that turns a flat list of events into a tree (see `EventProcessor`) -- `ast` provides a type safe API on top of the raw `rowan` tree. -- `grammar.ron` RON description of the grammar, which is used to - generate `syntax_kinds` and `ast` modules, using `cargo gen-syntax` command. -- `algo`: generic tree algorithms, including `walk` for O(1) stack - space tree traversal (this is cool) and `visit` for type-driven - visiting the nodes (this is double plus cool, if you understand how - `Visitor` works, you understand the design of syntax trees). - -Tests for ra_syntax are mostly data-driven: `tests/data/parser` contains a bunch of `.rs` -(test vectors) and `.txt` files with corresponding syntax trees. During testing, we check -`.rs` against `.txt`. If the `.txt` file is missing, it is created (this is how you update -tests). Additionally, running `cargo gen-tests` will walk the grammar module and collect -all `//test test_name` comments into files inside `tests/data` directory. - -See [#93](https://github.com/rust-analyzer/rust-analyzer/pull/93) for an example PR which -fixes a bug in the grammar. - -### `crates/ra_db` - -We use the [salsa](https://github.com/salsa-rs/salsa) crate for incremental and -on-demand computation. Roughly, you can think of salsa as a key-value store, but -it also can compute derived values using specified functions. The `ra_db` crate -provides basic infrastructure for interacting with salsa. Crucially, it -defines most of the "input" queries: facts supplied by the client of the -analyzer. Reading the docs of the `ra_db::input` module should be useful: -everything else is strictly derived from those inputs. - -### `crates/ra_hir` - -HIR provides high-level "object oriented" access to Rust code. - -The principal difference between HIR and syntax trees is that HIR is bound to a -particular crate instance. That is, it has cfg flags and features applied (in -theory, in practice this is to be implemented). So, the relation between -syntax and HIR is many-to-one. The `source_binder` module is responsible for -guessing a HIR for a particular source position. - -Underneath, HIR works on top of salsa, using a `HirDatabase` trait. - -### `crates/ra_ide_api` - -A stateful library for analyzing many Rust files as they change. `AnalysisHost` -is a mutable entity (clojure's atom) which holds the current state, incorporates -changes and hands out `Analysis` --- an immutable and consistent snapshot of -the world state at a point in time, which actually powers analysis. - -One interesting aspect of analysis is its support for cancellation. 
When a -change is applied to `AnalysisHost`, first all currently active snapshots are -canceled. Only after all snapshots are dropped the change actually affects the -database. - -APIs in this crate are IDE centric: they take text offsets as input and produce -offsets and strings as output. This works on top of rich code model powered by -`hir`. - -### `crates/ra_ide_api_light` - -All IDE features which can be implemented if you only have access to a single -file. `ra_ide_api_light` could be used to enhance editing of Rust code without -the need to fiddle with build-systems, file synchronization and such. - -In a sense, `ra_ide_api_light` is just a bunch of pure functions which take a -syntax tree as input. - -The tests for `ra_ide_api_light` are `#[cfg(test)] mod tests` unit-tests spread -throughout its modules. - - -### `crates/ra_lsp_server` - -An LSP implementation which wraps `ra_ide_api` into a langauge server protocol. - -### `crates/ra_vfs` - -Although `hir` and `ra_ide_api` don't do any IO, we need to be able to read -files from disk at the end of the day. This is what `ra_vfs` does. It also -manages overlays: "dirty" files in the editor, whose "true" contents is -different from data on disk. - -### `crates/gen_lsp_server` - -A language server scaffold, exposing a synchronous crossbeam-channel based API. -This crate handles protocol handshaking and parsing messages, while you -control the message dispatch loop yourself. - -Run with `RUST_LOG=sync_lsp_server=debug` to see all the messages. - -### `crates/ra_cli` - -A CLI interface to rust-analyzer. - -### `crate/tools` - -Custom Cargo tasks used to develop rust-analyzer: - -- `cargo gen-syntax` -- generate `ast` and `syntax_kinds` -- `cargo gen-tests` -- collect inline tests from grammar -- `cargo install-code` -- build and install VS Code extension and server - -### `editors/code` - -VS Code plugin - - -## Common workflows - -To try out VS Code extensions, run `cargo install-code`. This installs both the -`ra_lsp_server` binary and the VS Code extension. To install only the binary, use -`cargo install-lsp` (shorthand for `cargo install --path crates/ra_lsp_server --force`) - -To see logs from the language server, set `RUST_LOG=info` env variable. To see -all communication between the server and the client, use -`RUST_LOG=gen_lsp_server=debug` (this will print quite a bit of stuff). - -There's `rust-analyzer: status` command which prints common high-level debug -info. In particular, it prints info about memory usage of various data -structures, and, if compiled with jemalloc support (`cargo jinstall-lsp` or -`cargo install --path crates/ra_lsp_server --force --features jemalloc`), includes - statistic about the heap. - -To run tests, just `cargo test`. - -To work on the VS Code extension, launch code inside `editors/code` and use `F5` to -launch/debug. To automatically apply formatter and linter suggestions, use `npm -run fix`. diff --git a/docs/dev/README.md b/docs/dev/README.md new file mode 100644 index 000000000..74bf86f68 --- /dev/null +++ b/docs/dev/README.md @@ -0,0 +1,37 @@ +# Contributing Quick Start + +Rust Analyzer is just a usual rust project, which is organized as a Cargo +workspace, builds on stable and doesn't depend on C libraries. So, just + +``` +$ cargo test +``` + +should be enough to get you started! + +To learn more about how rust-analyzer works, see +[./architecture.md](./architecture.md) document. + +Various organizational and process issues are discussed here. 
+ +# Getting in Touch + +Rust Analyzer is a part of [RLS-2.0 working +group](https://github.com/rust-lang/compiler-team/tree/6a769c13656c0a6959ebc09e7b1f7c09b86fb9c0/working-groups/rls-2.0). +Discussion happens in this Zulip stream: + +https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Fwg-rls-2.2E0 + +# Issue Labels + +* [good-first-issue](https://github.com/rust-analyzer/rust-analyzer/labels/good%20first%20issue) + are good issues to get into the project. +* [E-mentor](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-mentor) + issues have links to the code in question and tests. +* [E-easy](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-easy), + [E-medium](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-medium), + [E-hard](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-hard), + labels are *estimates* for how hard would be to write a fix. +* [E-fun](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-fun) + is for cool, but probably hard stuff. + diff --git a/docs/dev/arhictecture.md b/docs/dev/arhictecture.md new file mode 100644 index 000000000..57f76ebae --- /dev/null +++ b/docs/dev/arhictecture.md @@ -0,0 +1,200 @@ +# Architecture + +This document describes the high-level architecture of rust-analyzer. +If you want to familiarize yourself with the code base, you are just +in the right place! + +See also the [guide](./guide.md), which walks through a particular snapshot of +rust-analyzer code base. + +For syntax-trees specifically, there's a [video walk +through](https://youtu.be/DGAuLWdCCAI) as well. + +## The Big Picture + +![](https://user-images.githubusercontent.com/1711539/50114578-e8a34280-0255-11e9-902c-7cfc70747966.png) + +On the highest level, rust-analyzer is a thing which accepts input source code +from the client and produces a structured semantic model of the code. + +More specifically, input data consists of a set of test files (`(PathBuf, +String)` pairs) and information about project structure, captured in the so called +`CrateGraph`. The crate graph specifies which files are crate roots, which cfg +flags are specified for each crate (TODO: actually implement this) and what +dependencies exist between the crates. The analyzer keeps all this input data in +memory and never does any IO. Because the input data is source code, which +typically measures in tens of megabytes at most, keeping all input data in +memory is OK. + +A "structured semantic model" is basically an object-oriented representation of +modules, functions and types which appear in the source code. This representation +is fully "resolved": all expressions have types, all references are bound to +declarations, etc. + +The client can submit a small delta of input data (typically, a change to a +single file) and get a fresh code model which accounts for changes. + +The underlying engine makes sure that model is computed lazily (on-demand) and +can be quickly updated for small modifications. + + +## Code generation + +Some of the components of this repository are generated through automatic +processes. These are outlined below: + +- `gen-syntax`: The kinds of tokens that are reused in several places, so a generator + is used. 
We use tera templates to generate the files listed below, based on + the grammar described in [grammar.ron]: + - [ast/generated.rs][ast generated] in `ra_syntax` based on + [ast/generated.tera.rs][ast source] + - [syntax_kinds/generated.rs][syntax_kinds generated] in `ra_syntax` based on + [syntax_kinds/generated.tera.rs][syntax_kinds source] + +[tera]: https://tera.netlify.com/ +[grammar.ron]: ./crates/ra_syntax/src/grammar.ron +[ast generated]: ./crates/ra_syntax/src/ast/generated.rs +[ast source]: ./crates/ra_syntax/src/ast/generated.rs.tera +[syntax_kinds generated]: ./crates/ra_syntax/src/syntax_kinds/generated.rs +[syntax_kinds source]: ./crates/ra_syntax/src/syntax_kinds/generated.rs.tera + + +## Code Walk-Through + +### `crates/ra_syntax` + +Rust syntax tree structure and parser. See +[RFC](https://github.com/rust-lang/rfcs/pull/2256) for some design notes. + +- [rowan](https://github.com/rust-analyzer/rowan) library is used for constructing syntax trees. +- `grammar` module is the actual parser. It is a hand-written recursive descent parser, which + produces a sequence of events like "start node X", "finish not Y". It works similarly to [kotlin's parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java), + which is a good source of inspiration for dealing with syntax errors and incomplete input. Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs) + is what we use for the definition of the Rust language. +- `parser_api/parser_impl` bridges the tree-agnostic parser from `grammar` with `rowan` trees. + This is the thing that turns a flat list of events into a tree (see `EventProcessor`) +- `ast` provides a type safe API on top of the raw `rowan` tree. +- `grammar.ron` RON description of the grammar, which is used to + generate `syntax_kinds` and `ast` modules, using `cargo gen-syntax` command. +- `algo`: generic tree algorithms, including `walk` for O(1) stack + space tree traversal (this is cool) and `visit` for type-driven + visiting the nodes (this is double plus cool, if you understand how + `Visitor` works, you understand the design of syntax trees). + +Tests for ra_syntax are mostly data-driven: `tests/data/parser` contains a bunch of `.rs` +(test vectors) and `.txt` files with corresponding syntax trees. During testing, we check +`.rs` against `.txt`. If the `.txt` file is missing, it is created (this is how you update +tests). Additionally, running `cargo gen-tests` will walk the grammar module and collect +all `//test test_name` comments into files inside `tests/data` directory. + +See [#93](https://github.com/rust-analyzer/rust-analyzer/pull/93) for an example PR which +fixes a bug in the grammar. + +### `crates/ra_db` + +We use the [salsa](https://github.com/salsa-rs/salsa) crate for incremental and +on-demand computation. Roughly, you can think of salsa as a key-value store, but +it also can compute derived values using specified functions. The `ra_db` crate +provides basic infrastructure for interacting with salsa. Crucially, it +defines most of the "input" queries: facts supplied by the client of the +analyzer. Reading the docs of the `ra_db::input` module should be useful: +everything else is strictly derived from those inputs. + +### `crates/ra_hir` + +HIR provides high-level "object oriented" access to Rust code. 
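As a loose illustration of what "object oriented" access means here (the names below are invented stand-ins, not the real `ra_hir` API): entities are small `Copy` handles, and every accessor takes the database, so results can be computed on demand:

```rust
// Invented stand-ins: a "database" of facts plus cheap handle types that index into it.
struct Db {
    // (module id, function name) pairs -- the ground truth the handles point into
    functions: Vec<(u32, &'static str)>,
}

#[derive(Clone, Copy)]
struct Module(u32);

#[derive(Clone, Copy)]
struct Function(usize);

impl Module {
    fn functions(self, db: &Db) -> Vec<Function> {
        db.functions
            .iter()
            .enumerate()
            .filter(|(_, (module, _))| *module == self.0)
            .map(|(idx, _)| Function(idx))
            .collect()
    }
}

impl Function {
    fn name(self, db: &Db) -> &'static str {
        db.functions[self.0].1
    }
}

fn main() {
    let db = Db { functions: vec![(0, "main"), (1, "helper")] };
    let names: Vec<_> = Module(0).functions(&db).iter().map(|f| f.name(&db)).collect();
    assert_eq!(names, ["main"]);
}
```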
+ +The principal difference between HIR and syntax trees is that HIR is bound to a +particular crate instance. That is, it has cfg flags and features applied (in +theory, in practice this is to be implemented). So, the relation between +syntax and HIR is many-to-one. The `source_binder` module is responsible for +guessing a HIR for a particular source position. + +Underneath, HIR works on top of salsa, using a `HirDatabase` trait. + +### `crates/ra_ide_api` + +A stateful library for analyzing many Rust files as they change. `AnalysisHost` +is a mutable entity (clojure's atom) which holds the current state, incorporates +changes and hands out `Analysis` --- an immutable and consistent snapshot of +the world state at a point in time, which actually powers analysis. + +One interesting aspect of analysis is its support for cancellation. When a +change is applied to `AnalysisHost`, first all currently active snapshots are +canceled. Only after all snapshots are dropped the change actually affects the +database. + +APIs in this crate are IDE centric: they take text offsets as input and produce +offsets and strings as output. This works on top of rich code model powered by +`hir`. + +### `crates/ra_ide_api_light` + +All IDE features which can be implemented if you only have access to a single +file. `ra_ide_api_light` could be used to enhance editing of Rust code without +the need to fiddle with build-systems, file synchronization and such. + +In a sense, `ra_ide_api_light` is just a bunch of pure functions which take a +syntax tree as input. + +The tests for `ra_ide_api_light` are `#[cfg(test)] mod tests` unit-tests spread +throughout its modules. + + +### `crates/ra_lsp_server` + +An LSP implementation which wraps `ra_ide_api` into a langauge server protocol. + +### `crates/ra_vfs` + +Although `hir` and `ra_ide_api` don't do any IO, we need to be able to read +files from disk at the end of the day. This is what `ra_vfs` does. It also +manages overlays: "dirty" files in the editor, whose "true" contents is +different from data on disk. + +### `crates/gen_lsp_server` + +A language server scaffold, exposing a synchronous crossbeam-channel based API. +This crate handles protocol handshaking and parsing messages, while you +control the message dispatch loop yourself. + +Run with `RUST_LOG=sync_lsp_server=debug` to see all the messages. + +### `crates/ra_cli` + +A CLI interface to rust-analyzer. + +### `crate/tools` + +Custom Cargo tasks used to develop rust-analyzer: + +- `cargo gen-syntax` -- generate `ast` and `syntax_kinds` +- `cargo gen-tests` -- collect inline tests from grammar +- `cargo install-code` -- build and install VS Code extension and server + +### `editors/code` + +VS Code plugin + + +## Common workflows + +To try out VS Code extensions, run `cargo install-code`. This installs both the +`ra_lsp_server` binary and the VS Code extension. To install only the binary, use +`cargo install-lsp` (shorthand for `cargo install --path crates/ra_lsp_server --force`) + +To see logs from the language server, set `RUST_LOG=info` env variable. To see +all communication between the server and the client, use +`RUST_LOG=gen_lsp_server=debug` (this will print quite a bit of stuff). + +There's `rust-analyzer: status` command which prints common high-level debug +info. 
In particular, it prints info about memory usage of various data +structures, and, if compiled with jemalloc support (`cargo jinstall-lsp` or +`cargo install --path crates/ra_lsp_server --force --features jemalloc`), includes + statistic about the heap. + +To run tests, just `cargo test`. + +To work on the VS Code extension, launch code inside `editors/code` and use `F5` to +launch/debug. To automatically apply formatter and linter suggestions, use `npm +run fix`. -- cgit v1.2.3 From ac6749d18ca9f32eedbc88a68ec41dbb3342a1e1 Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 15:25:05 +0300 Subject: fixes --- docs/user/README.md | 2 +- docs/user/features.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/user/README.md b/docs/user/README.md index 8de46981b..439c4e6ae 100644 --- a/docs/user/README.md +++ b/docs/user/README.md @@ -74,4 +74,4 @@ Installation: [ra-emacs-lsp.el](https://github.com/rust-analyzer/rust-analyzer/blob/69ee5c9c5ef212f7911028c9ddf581559e6565c3/editors/emacs/ra-emacs-lsp.el) to load path and require it in `init.el` * run `lsp` in a rust buffer -* (Optionally) bind commands like `join-lines` or `extend-selection` to keys +* (Optionally) bind commands like `rust-analyzer-join-lines` or `rust-analyzer-extend-selection` to keys diff --git a/docs/user/features.md b/docs/user/features.md index 90f182f35..b9d2aa84f 100644 --- a/docs/user/features.md +++ b/docs/user/features.md @@ -153,7 +153,7 @@ impl std::fmt::Debug<|> for Foo { } // after: -use std::fmt::Debug +use std::fmt::Debug; impl Debug<|> for Foo { } -- cgit v1.2.3 From 1ad322236dbe54ada2c284bda4a2b72830b3ff3d Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 15:34:09 +0300 Subject: remove old contributing --- docs/dev/CONTRIBUTING.md | 18 ------------------ docs/dev/README.md | 6 ++++++ 2 files changed, 6 insertions(+), 18 deletions(-) delete mode 100644 docs/dev/CONTRIBUTING.md diff --git a/docs/dev/CONTRIBUTING.md b/docs/dev/CONTRIBUTING.md deleted file mode 100644 index a2efc7afa..000000000 --- a/docs/dev/CONTRIBUTING.md +++ /dev/null @@ -1,18 +0,0 @@ -The project is in its early stages: contributions are welcome and would be -**very** helpful, but the project is not _yet_ optimized for contribution. -Moreover, it is doubly experimental, so there's no guarantee that any work here -would reach production. - -To get an idea of how rust-analyzer works, take a look at the [ARCHITECTURE.md](./ARCHITECTURE.md) -document. - -Useful labels on the issue tracker: - * [E-mentor](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-mentor) - issues have links to the code in question and tests, - * [E-easy](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-easy), - [E-medium](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-medium), - [E-hard](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-hard), - labels are *estimates* for how hard would be to write a fix. - -There's no formal PR check list: everything that passes CI (we use [bors](https://bors.tech/)) is valid, -but it's a good idea to write nice commit messages, test code thoroughly, maintain consistent style, etc. 
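The `AnalysisHost`/`Analysis` split that the architecture notes above describe for `ra_ide_api` (a mutable host that hands out immutable, cancellable snapshots) can be made concrete with a small sketch. This is a self-contained toy model only: apart from the two type names, nothing below is claimed to match the real `ra_ide_api` API, and the cancellation here is a plain flag rather than the real mechanism.

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Toy model: the host owns mutable state; snapshots share it immutably and
// carry a cancellation flag which the host trips whenever a change arrives.
struct AnalysisHost {
    files: Arc<HashMap<String, String>>,
    cancelled: Arc<AtomicBool>,
}

struct Analysis {
    files: Arc<HashMap<String, String>>,
    cancelled: Arc<AtomicBool>,
}

impl AnalysisHost {
    fn new() -> AnalysisHost {
        AnalysisHost {
            files: Arc::new(HashMap::new()),
            cancelled: Arc::new(AtomicBool::new(false)),
        }
    }

    // Applying a change first cancels outstanding snapshots, then rebuilds state.
    fn apply_change(&mut self, path: &str, text: &str) {
        self.cancelled.store(true, Ordering::SeqCst);
        self.cancelled = Arc::new(AtomicBool::new(false));
        let mut files = (*self.files).clone();
        files.insert(path.to_string(), text.to_string());
        self.files = Arc::new(files);
    }

    fn analysis(&self) -> Analysis {
        Analysis {
            files: Arc::clone(&self.files),
            cancelled: Arc::clone(&self.cancelled),
        }
    }
}

impl Analysis {
    // A "query": long-running work checks the flag and bails out once a newer
    // change has invalidated this snapshot.
    fn line_count(&self, path: &str) -> Result<usize, &'static str> {
        if self.cancelled.load(Ordering::SeqCst) {
            return Err("cancelled");
        }
        Ok(self.files.get(path).map_or(0, |text| text.lines().count()))
    }
}

fn main() {
    let mut host = AnalysisHost::new();
    host.apply_change("main.rs", "fn main() {}\n");

    let snapshot = host.analysis();
    assert_eq!(snapshot.line_count("main.rs"), Ok(1));

    // A new change cancels the old snapshot; fresh snapshots see the new text.
    host.apply_change("main.rs", "fn main() {\n    println!(\"hi\");\n}\n");
    assert_eq!(snapshot.line_count("main.rs"), Err("cancelled"));
    assert_eq!(host.analysis().line_count("main.rs"), Ok(3));
}
```

The design point is the ownership structure: one writer, any number of read-only snapshots, and an incoming change first cancels outstanding snapshots so that they get dropped promptly.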
diff --git a/docs/dev/README.md b/docs/dev/README.md index 74bf86f68..0c09dddfc 100644 --- a/docs/dev/README.md +++ b/docs/dev/README.md @@ -35,3 +35,9 @@ https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Fwg-rls-2.2E0 * [E-fun](https://github.com/rust-analyzer/rust-analyzer/issues?q=is%3Aopen+is%3Aissue+label%3AE-fun) is for cool, but probably hard stuff. +# CI + +We use Travis for CI. Most of the things, including formatting, are checked by +`cargo test` so, if `cargo test` passes locally, that's a good sign that CI will +be green as well. We use bors-ng to enforce the [not rocket +science](https://graydon2.dreamwidth.org/1597.html) rule. -- cgit v1.2.3 From d56c2f24252d5d444267e33950825f0a7cb438ca Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 16:05:49 +0300 Subject: explain how to launch the thing --- docs/dev/DEBUGGING.md | 62 --------------- docs/dev/README.md | 81 +++++++++++++++++++ docs/dev/architecture.md | 174 +++++++++++++++++++++++++++++++++++++++++ docs/dev/arhictecture.md | 200 ----------------------------------------------- docs/dev/debugging.md | 62 +++++++++++++++ 5 files changed, 317 insertions(+), 262 deletions(-) delete mode 100644 docs/dev/DEBUGGING.md create mode 100644 docs/dev/architecture.md delete mode 100644 docs/dev/arhictecture.md create mode 100644 docs/dev/debugging.md diff --git a/docs/dev/DEBUGGING.md b/docs/dev/DEBUGGING.md deleted file mode 100644 index f868e6998..000000000 --- a/docs/dev/DEBUGGING.md +++ /dev/null @@ -1,62 +0,0 @@ -# Debugging vs Code plugin and the Language Server - -Install [LLDB](https://lldb.llvm.org/) and the [LLDB Extension](https://marketplace.visualstudio.com/items?itemName=vadimcn.vscode-lldb). - -Checkout rust rust-analyzer and open it in vscode. - -``` -$ git clone https://github.com/rust-analyzer/rust-analyzer.git --depth 1 -$ cd rust-analyzer -$ code . -``` - -- To attach to the `lsp server` in linux you'll have to run: - - `echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope` - - This enables ptrace on non forked processes - -- Ensure the dependencies for the extension are installed, run the `npm: install - editors/code` task in vscode. - -- Launch the `Debug Extension`, this will build the extension and the `lsp server`. - -- A new instance of vscode with `[Extension Development Host]` in the title. - - Don't worry about disabling `rls` all other extensions will be disabled but this one. - -- In the new vscode instance open a rust project, and navigate to a rust file - -- In the original vscode start an additional debug session (the three periods in the launch) and select `Debug Lsp Server`. - -- A list of running processes should appear select the `ra_lsp_server` from this repo. - -- Navigate to `crates/ra_lsp_server/src/main_loop.rs` and add a breakpoint to the `on_task` function. - -- Go back to the `[Extension Development Host]` instance and hover over a rust variable and your breakpoint should hit. - -## Demo - -![demonstration of debugging](https://user-images.githubusercontent.com/1711539/51384036-254fab80-1b2c-11e9-824d-95f9a6e9cf4f.gif) - -## Troubleshooting - -### Can't find the `ra_lsp_server` process - -It could be a case of just jumping the gun. - -The `ra_lsp_server` is only started once the `onLanguage:rust` activation. - -Make sure you open a rust file in the `[Extension Development Host]` and try again. - -### Can't connect to `ra_lsp_server` - -Make sure you have run `echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope`. 
- -By default this should reset back to 1 everytime you log in. - -### Breakpoints are never being hit - -Check your version of `lldb` if it's version 6 and lower use the `classic` adapter type. -It's `lldb.adapterType` in settings file. - -If you're running `lldb` version 7 change the lldb adapter type to `bundled` or `native`. diff --git a/docs/dev/README.md b/docs/dev/README.md index 0c09dddfc..ac7f4fd71 100644 --- a/docs/dev/README.md +++ b/docs/dev/README.md @@ -41,3 +41,84 @@ We use Travis for CI. Most of the things, including formatting, are checked by `cargo test` so, if `cargo test` passes locally, that's a good sign that CI will be green as well. We use bors-ng to enforce the [not rocket science](https://graydon2.dreamwidth.org/1597.html) rule. + +You can run `cargo format-hook` to install git-hook to run rustfmt on commit. + +# Code organization + +All Rust code lives in the `crates` top-level directory, and is organized as a +single Cargo workspace. The `editors` top-level directory contains code for +integrating with editors. Currently, it contains plugins for VS Code (in +typescript) and Emacs (in elisp). The `docs` top-level directory contains both +developer and user documentation. + +We have some automation infra in Rust in the `crates/tool` package. It contains +stuff like formatting checking, code generation and powers `cargo install-code`. +The latter syntax is achieved with the help of cargo aliases (see `.cargo` +directory). + +# Launching rust-analyzer + +Debugging language server can be tricky: LSP is rather chatty, so driving it +from the command line is not really feasible, driving it via VS Code requires +interacting with two processes. + +For this reason, the best way to see how rust-analyzer works is to find a +relevant test and execute it (VS Code includes an action for running a single +test). + +However, launching a VS Code instance with locally build language server is +possible. There's even a VS Code task for this, so just F5 should +work (thanks, [@andrew-w-ross](https://github.com/andrew-w-ross)!). + +I often just install development version with `cargo jinstall-lsp` and +restart the host VS Code. + +See [./debugging.md](./debugging.md) for how to attach to rust-analyzer with +debugger, and don't forget that rust-analyzer has useful `pd` snippet and `dbg` +postfix completion for printf debugging :-) + +# Working With VS Code Extension + +To work on the VS Code extension, launch code inside `editors/code` and use `F5` +to launch/debug. To automatically apply formatter and linter suggestions, use +`npm run fix`. + +# Logging + +Logging is done by both rust-analyzer and VS Code, so it might be tricky to +figure out where logs go. + +Inside rust-analyzer, we use the standard `log` crate for logging, and +`flexi_logger` for logging frotend. By default, log goes to stderr (the same as +with `env_logger`), but the stderr itself is processed by VS Code. To mirror +logs to a `./log` directory, set `RA_INTERNAL_MODE=1` environmental variable. + +To see stderr in the running VS Code instance, go to the "Output" tab of the +panel and select `rust-analyzer`. This shows `eprintln!` as well. Note that +`stdout` is used for the actual protocol, so `println!` will break things. + +To log all communication between the server and the client, there are two choices: + +* you can log on the server side, by running something like + ``` + env RUST_LOG=gen_lsp_server=trace code . 
+ ``` + +* you can log on the client side, by enabling `"rust-analyzer.trace.server": + "verbose"` workspace setting. These logs are shown in a separate tab in the + output and could be used with LSP inspector. Kudos to + [@DJMcNab](https://github.com/DJMcNab) for setting this awesome infra up! + + +There's also two VS Code commands which might be of interest: + +* `Rust Analyzer: Status` shows some memory-usage statistics. To take full + advantage of it, you need to compile rust-analyzer with jemalloc support: + ``` + $ cargo install --path crates/ra_lsp_server --force --features jemalloc + ``` + + There's an alias for this: `cargo jinstall-lsp`. + +* `Rust Analyzer: Syntax Tree` shows syntax tree of the current file/selection. diff --git a/docs/dev/architecture.md b/docs/dev/architecture.md new file mode 100644 index 000000000..3cd63bf73 --- /dev/null +++ b/docs/dev/architecture.md @@ -0,0 +1,174 @@ +# Architecture + +This document describes the high-level architecture of rust-analyzer. +If you want to familiarize yourself with the code base, you are just +in the right place! + +See also the [guide](./guide.md), which walks through a particular snapshot of +rust-analyzer code base. + +Yet another resource is this playlist with videos about various parts of the +analyzer: + +https://www.youtube.com/playlist?list=PL85XCvVPmGQho7MZkdW-wtPtuJcFpzycE + +## The Big Picture + +![](https://user-images.githubusercontent.com/1711539/50114578-e8a34280-0255-11e9-902c-7cfc70747966.png) + +On the highest level, rust-analyzer is a thing which accepts input source code +from the client and produces a structured semantic model of the code. + +More specifically, input data consists of a set of test files (`(PathBuf, +String)` pairs) and information about project structure, captured in the so called +`CrateGraph`. The crate graph specifies which files are crate roots, which cfg +flags are specified for each crate (TODO: actually implement this) and what +dependencies exist between the crates. The analyzer keeps all this input data in +memory and never does any IO. Because the input data is source code, which +typically measures in tens of megabytes at most, keeping all input data in +memory is OK. + +A "structured semantic model" is basically an object-oriented representation of +modules, functions and types which appear in the source code. This representation +is fully "resolved": all expressions have types, all references are bound to +declarations, etc. + +The client can submit a small delta of input data (typically, a change to a +single file) and get a fresh code model which accounts for changes. + +The underlying engine makes sure that model is computed lazily (on-demand) and +can be quickly updated for small modifications. + + +## Code generation + +Some of the components of this repository are generated through automatic +processes. These are outlined below: + +- `gen-syntax`: The kinds of tokens that are reused in several places, so a generator + is used. 
We use tera templates to generate the files listed below, based on + the grammar described in [grammar.ron]: + - [ast/generated.rs][ast generated] in `ra_syntax` based on + [ast/generated.tera.rs][ast source] + - [syntax_kinds/generated.rs][syntax_kinds generated] in `ra_syntax` based on + [syntax_kinds/generated.tera.rs][syntax_kinds source] + +[tera]: https://tera.netlify.com/ +[grammar.ron]: ./crates/ra_syntax/src/grammar.ron +[ast generated]: ./crates/ra_syntax/src/ast/generated.rs +[ast source]: ./crates/ra_syntax/src/ast/generated.rs.tera +[syntax_kinds generated]: ./crates/ra_syntax/src/syntax_kinds/generated.rs +[syntax_kinds source]: ./crates/ra_syntax/src/syntax_kinds/generated.rs.tera + + +## Code Walk-Through + +### `crates/ra_syntax`, `crates/ra_parser` + +Rust syntax tree structure and parser. See +[RFC](https://github.com/rust-lang/rfcs/pull/2256) for some design notes. + +- [rowan](https://github.com/rust-analyzer/rowan) library is used for constructing syntax trees. +- `grammar` module is the actual parser. It is a hand-written recursive descent parser, which + produces a sequence of events like "start node X", "finish not Y". It works similarly to [kotlin's parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java), + which is a good source of inspiration for dealing with syntax errors and incomplete input. Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs) + is what we use for the definition of the Rust language. +- `parser_api/parser_impl` bridges the tree-agnostic parser from `grammar` with `rowan` trees. + This is the thing that turns a flat list of events into a tree (see `EventProcessor`) +- `ast` provides a type safe API on top of the raw `rowan` tree. +- `grammar.ron` RON description of the grammar, which is used to + generate `syntax_kinds` and `ast` modules, using `cargo gen-syntax` command. +- `algo`: generic tree algorithms, including `walk` for O(1) stack + space tree traversal (this is cool) and `visit` for type-driven + visiting the nodes (this is double plus cool, if you understand how + `Visitor` works, you understand the design of syntax trees). + +Tests for ra_syntax are mostly data-driven: `tests/data/parser` contains a bunch of `.rs` +(test vectors) and `.txt` files with corresponding syntax trees. During testing, we check +`.rs` against `.txt`. If the `.txt` file is missing, it is created (this is how you update +tests). Additionally, running `cargo gen-tests` will walk the grammar module and collect +all `//test test_name` comments into files inside `tests/data` directory. + +See [#93](https://github.com/rust-analyzer/rust-analyzer/pull/93) for an example PR which +fixes a bug in the grammar. + +### `crates/ra_db` + +We use the [salsa](https://github.com/salsa-rs/salsa) crate for incremental and +on-demand computation. Roughly, you can think of salsa as a key-value store, but +it also can compute derived values using specified functions. The `ra_db` crate +provides basic infrastructure for interacting with salsa. Crucially, it +defines most of the "input" queries: facts supplied by the client of the +analyzer. Reading the docs of the `ra_db::input` module should be useful: +everything else is strictly derived from those inputs. + +### `crates/ra_hir` + +HIR provides high-level "object oriented" access to Rust code. 
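Before the HIR details, one aside on the `ra_db` paragraph above: "a key-value store that can also compute derived values using specified functions" is easy to picture as a memoization sketch. This is only the idea in miniature; the real salsa crate additionally tracks dependencies between queries and does fine-grained invalidation, and none of the names below come from its API.

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// Toy "database": input values are set directly; derived values are computed
// by a function on first request and memoized afterwards.
struct Db {
    file_text: HashMap<String, String>,           // input query
    line_counts: RefCell<HashMap<String, usize>>, // derived query (memo)
}

impl Db {
    fn new() -> Db {
        Db { file_text: HashMap::new(), line_counts: RefCell::new(HashMap::new()) }
    }

    // Inputs: facts supplied by the client of the analyzer.
    fn set_file_text(&mut self, path: &str, text: &str) {
        self.file_text.insert(path.to_string(), text.to_string());
        self.line_counts.borrow_mut().clear(); // crude whole-memo invalidation
    }

    // Derived: strictly a function of the inputs, cached after the first call.
    fn line_count(&self, path: &str) -> usize {
        if let Some(&n) = self.line_counts.borrow().get(path) {
            return n;
        }
        let n = self.file_text.get(path).map_or(0, |text| text.lines().count());
        self.line_counts.borrow_mut().insert(path.to_string(), n);
        n
    }
}

fn main() {
    let mut db = Db::new();
    db.set_file_text("lib.rs", "pub fn id(x: u32) -> u32 {\n    x\n}\n");
    assert_eq!(db.line_count("lib.rs"), 3); // computed from the input
    assert_eq!(db.line_count("lib.rs"), 3); // served from the memo
}
```

Everything above `ra_db` is, in this sense, either an input or a pure function of inputs.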
+ +The principal difference between HIR and syntax trees is that HIR is bound to a +particular crate instance. That is, it has cfg flags and features applied (in +theory, in practice this is to be implemented). So, the relation between +syntax and HIR is many-to-one. The `source_binder` module is responsible for +guessing a HIR for a particular source position. + +Underneath, HIR works on top of salsa, using a `HirDatabase` trait. + +### `crates/ra_ide_api` + +A stateful library for analyzing many Rust files as they change. `AnalysisHost` +is a mutable entity (clojure's atom) which holds the current state, incorporates +changes and hands out `Analysis` --- an immutable and consistent snapshot of +the world state at a point in time, which actually powers analysis. + +One interesting aspect of analysis is its support for cancellation. When a +change is applied to `AnalysisHost`, first all currently active snapshots are +canceled. Only after all snapshots are dropped the change actually affects the +database. + +APIs in this crate are IDE centric: they take text offsets as input and produce +offsets and strings as output. This works on top of rich code model powered by +`hir`. + +### `crates/ra_ide_api_light` + +All IDE features which can be implemented if you only have access to a single +file. `ra_ide_api_light` could be used to enhance editing of Rust code without +the need to fiddle with build-systems, file synchronization and such. + +In a sense, `ra_ide_api_light` is just a bunch of pure functions which take a +syntax tree as input. + +The tests for `ra_ide_api_light` are `#[cfg(test)] mod tests` unit-tests spread +throughout its modules. + + +### `crates/ra_lsp_server` + +An LSP implementation which wraps `ra_ide_api` into a langauge server protocol. + +### `ra_vfs` + +Although `hir` and `ra_ide_api` don't do any IO, we need to be able to read +files from disk at the end of the day. This is what `ra_vfs` does. It also +manages overlays: "dirty" files in the editor, whose "true" contents is +different from data on disk. This is more or less the single really +platform-dependent component, so it lives in a separate repository and has an +extensive cross-platform CI testing. + +### `crates/gen_lsp_server` + +A language server scaffold, exposing a synchronous crossbeam-channel based API. +This crate handles protocol handshaking and parsing messages, while you +control the message dispatch loop yourself. + +Run with `RUST_LOG=sync_lsp_server=debug` to see all the messages. + +### `crates/ra_cli` + +A CLI interface to rust-analyzer. + + +## Testing Infrastructure + + diff --git a/docs/dev/arhictecture.md b/docs/dev/arhictecture.md deleted file mode 100644 index 57f76ebae..000000000 --- a/docs/dev/arhictecture.md +++ /dev/null @@ -1,200 +0,0 @@ -# Architecture - -This document describes the high-level architecture of rust-analyzer. -If you want to familiarize yourself with the code base, you are just -in the right place! - -See also the [guide](./guide.md), which walks through a particular snapshot of -rust-analyzer code base. - -For syntax-trees specifically, there's a [video walk -through](https://youtu.be/DGAuLWdCCAI) as well. - -## The Big Picture - -![](https://user-images.githubusercontent.com/1711539/50114578-e8a34280-0255-11e9-902c-7cfc70747966.png) - -On the highest level, rust-analyzer is a thing which accepts input source code -from the client and produces a structured semantic model of the code. 
- -More specifically, input data consists of a set of test files (`(PathBuf, -String)` pairs) and information about project structure, captured in the so called -`CrateGraph`. The crate graph specifies which files are crate roots, which cfg -flags are specified for each crate (TODO: actually implement this) and what -dependencies exist between the crates. The analyzer keeps all this input data in -memory and never does any IO. Because the input data is source code, which -typically measures in tens of megabytes at most, keeping all input data in -memory is OK. - -A "structured semantic model" is basically an object-oriented representation of -modules, functions and types which appear in the source code. This representation -is fully "resolved": all expressions have types, all references are bound to -declarations, etc. - -The client can submit a small delta of input data (typically, a change to a -single file) and get a fresh code model which accounts for changes. - -The underlying engine makes sure that model is computed lazily (on-demand) and -can be quickly updated for small modifications. - - -## Code generation - -Some of the components of this repository are generated through automatic -processes. These are outlined below: - -- `gen-syntax`: The kinds of tokens that are reused in several places, so a generator - is used. We use tera templates to generate the files listed below, based on - the grammar described in [grammar.ron]: - - [ast/generated.rs][ast generated] in `ra_syntax` based on - [ast/generated.tera.rs][ast source] - - [syntax_kinds/generated.rs][syntax_kinds generated] in `ra_syntax` based on - [syntax_kinds/generated.tera.rs][syntax_kinds source] - -[tera]: https://tera.netlify.com/ -[grammar.ron]: ./crates/ra_syntax/src/grammar.ron -[ast generated]: ./crates/ra_syntax/src/ast/generated.rs -[ast source]: ./crates/ra_syntax/src/ast/generated.rs.tera -[syntax_kinds generated]: ./crates/ra_syntax/src/syntax_kinds/generated.rs -[syntax_kinds source]: ./crates/ra_syntax/src/syntax_kinds/generated.rs.tera - - -## Code Walk-Through - -### `crates/ra_syntax` - -Rust syntax tree structure and parser. See -[RFC](https://github.com/rust-lang/rfcs/pull/2256) for some design notes. - -- [rowan](https://github.com/rust-analyzer/rowan) library is used for constructing syntax trees. -- `grammar` module is the actual parser. It is a hand-written recursive descent parser, which - produces a sequence of events like "start node X", "finish not Y". It works similarly to [kotlin's parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java), - which is a good source of inspiration for dealing with syntax errors and incomplete input. Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs) - is what we use for the definition of the Rust language. -- `parser_api/parser_impl` bridges the tree-agnostic parser from `grammar` with `rowan` trees. - This is the thing that turns a flat list of events into a tree (see `EventProcessor`) -- `ast` provides a type safe API on top of the raw `rowan` tree. -- `grammar.ron` RON description of the grammar, which is used to - generate `syntax_kinds` and `ast` modules, using `cargo gen-syntax` command. 
-- `algo`: generic tree algorithms, including `walk` for O(1) stack - space tree traversal (this is cool) and `visit` for type-driven - visiting the nodes (this is double plus cool, if you understand how - `Visitor` works, you understand the design of syntax trees). - -Tests for ra_syntax are mostly data-driven: `tests/data/parser` contains a bunch of `.rs` -(test vectors) and `.txt` files with corresponding syntax trees. During testing, we check -`.rs` against `.txt`. If the `.txt` file is missing, it is created (this is how you update -tests). Additionally, running `cargo gen-tests` will walk the grammar module and collect -all `//test test_name` comments into files inside `tests/data` directory. - -See [#93](https://github.com/rust-analyzer/rust-analyzer/pull/93) for an example PR which -fixes a bug in the grammar. - -### `crates/ra_db` - -We use the [salsa](https://github.com/salsa-rs/salsa) crate for incremental and -on-demand computation. Roughly, you can think of salsa as a key-value store, but -it also can compute derived values using specified functions. The `ra_db` crate -provides basic infrastructure for interacting with salsa. Crucially, it -defines most of the "input" queries: facts supplied by the client of the -analyzer. Reading the docs of the `ra_db::input` module should be useful: -everything else is strictly derived from those inputs. - -### `crates/ra_hir` - -HIR provides high-level "object oriented" access to Rust code. - -The principal difference between HIR and syntax trees is that HIR is bound to a -particular crate instance. That is, it has cfg flags and features applied (in -theory, in practice this is to be implemented). So, the relation between -syntax and HIR is many-to-one. The `source_binder` module is responsible for -guessing a HIR for a particular source position. - -Underneath, HIR works on top of salsa, using a `HirDatabase` trait. - -### `crates/ra_ide_api` - -A stateful library for analyzing many Rust files as they change. `AnalysisHost` -is a mutable entity (clojure's atom) which holds the current state, incorporates -changes and hands out `Analysis` --- an immutable and consistent snapshot of -the world state at a point in time, which actually powers analysis. - -One interesting aspect of analysis is its support for cancellation. When a -change is applied to `AnalysisHost`, first all currently active snapshots are -canceled. Only after all snapshots are dropped the change actually affects the -database. - -APIs in this crate are IDE centric: they take text offsets as input and produce -offsets and strings as output. This works on top of rich code model powered by -`hir`. - -### `crates/ra_ide_api_light` - -All IDE features which can be implemented if you only have access to a single -file. `ra_ide_api_light` could be used to enhance editing of Rust code without -the need to fiddle with build-systems, file synchronization and such. - -In a sense, `ra_ide_api_light` is just a bunch of pure functions which take a -syntax tree as input. - -The tests for `ra_ide_api_light` are `#[cfg(test)] mod tests` unit-tests spread -throughout its modules. - - -### `crates/ra_lsp_server` - -An LSP implementation which wraps `ra_ide_api` into a langauge server protocol. - -### `crates/ra_vfs` - -Although `hir` and `ra_ide_api` don't do any IO, we need to be able to read -files from disk at the end of the day. This is what `ra_vfs` does. It also -manages overlays: "dirty" files in the editor, whose "true" contents is -different from data on disk. 
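The overlay behaviour described in the `ra_vfs` paragraph just above comes down to a lookup order: prefer the unsaved editor buffer, fall back to disk. A minimal sketch of that idea follows; it is not the `ra_vfs` API, the type and method names are invented for illustration, and the example assumes it is run from a directory that contains a `Cargo.toml`.

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// Toy overlay store: "dirty" in-editor contents shadow the on-disk file.
struct Vfs {
    overlays: HashMap<PathBuf, String>,
}

impl Vfs {
    fn new() -> Vfs {
        Vfs { overlays: HashMap::new() }
    }

    // The editor sends the current (unsaved) text of an open file.
    fn set_overlay(&mut self, path: &Path, text: String) {
        self.overlays.insert(path.to_path_buf(), text);
    }

    // The file was saved or closed; fall back to disk again.
    fn remove_overlay(&mut self, path: &Path) {
        self.overlays.remove(path);
    }

    // Reads prefer the overlay and only then touch the disk.
    fn read(&self, path: &Path) -> io::Result<String> {
        match self.overlays.get(path) {
            Some(text) => Ok(text.clone()),
            None => fs::read_to_string(path),
        }
    }
}

fn main() -> io::Result<()> {
    let mut vfs = Vfs::new();
    let path = Path::new("Cargo.toml");

    let on_disk = vfs.read(path)?; // no overlay yet: read from disk
    vfs.set_overlay(path, format!("{}# edited in memory\n", on_disk));
    assert!(vfs.read(path)?.ends_with("# edited in memory\n"));

    vfs.remove_overlay(path);
    assert_eq!(vfs.read(path)?, on_disk); // back to the on-disk contents
    Ok(())
}
```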
- -### `crates/gen_lsp_server` - -A language server scaffold, exposing a synchronous crossbeam-channel based API. -This crate handles protocol handshaking and parsing messages, while you -control the message dispatch loop yourself. - -Run with `RUST_LOG=sync_lsp_server=debug` to see all the messages. - -### `crates/ra_cli` - -A CLI interface to rust-analyzer. - -### `crate/tools` - -Custom Cargo tasks used to develop rust-analyzer: - -- `cargo gen-syntax` -- generate `ast` and `syntax_kinds` -- `cargo gen-tests` -- collect inline tests from grammar -- `cargo install-code` -- build and install VS Code extension and server - -### `editors/code` - -VS Code plugin - - -## Common workflows - -To try out VS Code extensions, run `cargo install-code`. This installs both the -`ra_lsp_server` binary and the VS Code extension. To install only the binary, use -`cargo install-lsp` (shorthand for `cargo install --path crates/ra_lsp_server --force`) - -To see logs from the language server, set `RUST_LOG=info` env variable. To see -all communication between the server and the client, use -`RUST_LOG=gen_lsp_server=debug` (this will print quite a bit of stuff). - -There's `rust-analyzer: status` command which prints common high-level debug -info. In particular, it prints info about memory usage of various data -structures, and, if compiled with jemalloc support (`cargo jinstall-lsp` or -`cargo install --path crates/ra_lsp_server --force --features jemalloc`), includes - statistic about the heap. - -To run tests, just `cargo test`. - -To work on the VS Code extension, launch code inside `editors/code` and use `F5` to -launch/debug. To automatically apply formatter and linter suggestions, use `npm -run fix`. diff --git a/docs/dev/debugging.md b/docs/dev/debugging.md new file mode 100644 index 000000000..f868e6998 --- /dev/null +++ b/docs/dev/debugging.md @@ -0,0 +1,62 @@ +# Debugging vs Code plugin and the Language Server + +Install [LLDB](https://lldb.llvm.org/) and the [LLDB Extension](https://marketplace.visualstudio.com/items?itemName=vadimcn.vscode-lldb). + +Checkout rust rust-analyzer and open it in vscode. + +``` +$ git clone https://github.com/rust-analyzer/rust-analyzer.git --depth 1 +$ cd rust-analyzer +$ code . +``` + +- To attach to the `lsp server` in linux you'll have to run: + + `echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope` + + This enables ptrace on non forked processes + +- Ensure the dependencies for the extension are installed, run the `npm: install - editors/code` task in vscode. + +- Launch the `Debug Extension`, this will build the extension and the `lsp server`. + +- A new instance of vscode with `[Extension Development Host]` in the title. + + Don't worry about disabling `rls` all other extensions will be disabled but this one. + +- In the new vscode instance open a rust project, and navigate to a rust file + +- In the original vscode start an additional debug session (the three periods in the launch) and select `Debug Lsp Server`. + +- A list of running processes should appear select the `ra_lsp_server` from this repo. + +- Navigate to `crates/ra_lsp_server/src/main_loop.rs` and add a breakpoint to the `on_task` function. + +- Go back to the `[Extension Development Host]` instance and hover over a rust variable and your breakpoint should hit. 
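For a rough mental model of the code those breakpoints land in: the server is organised around a main loop that pulls finished units of work off channels and turns them into outgoing messages, as the architecture notes describe for `gen_lsp_server` and `ra_lsp_server`. The sketch below is self-contained and only illustrative; the real code uses crossbeam-channel and actual LSP types, and the `Task` shape and the body of `on_task` here are invented.

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative stand-ins: the real server dispatches LSP requests and responses.
enum Task {
    Respond { id: u64, result: String },
    Notify(String),
}

// Roughly what a breakpoint inside `on_task` observes: one finished unit of
// work arriving on a channel, about to be turned into outgoing messages.
fn on_task(task: Task, out: &mpsc::Sender<String>) {
    match task {
        Task::Respond { id, result } => {
            out.send(format!("response #{}: {}", id, result)).unwrap();
        }
        Task::Notify(message) => {
            out.send(format!("notification: {}", message)).unwrap();
        }
    }
}

fn main() {
    let (task_tx, task_rx) = mpsc::channel::<Task>();
    let (out_tx, out_rx) = mpsc::channel::<String>();

    // A worker computes answers off the main thread and sends tasks back.
    let worker = {
        let task_tx = task_tx.clone();
        thread::spawn(move || {
            task_tx
                .send(Task::Respond { id: 1, result: "2 references found".to_string() })
                .unwrap();
            task_tx.send(Task::Notify("indexing finished".to_string())).unwrap();
        })
    };
    drop(task_tx); // the loop below ends once every sender is gone

    // The "main loop": drain completed tasks and forward the resulting messages.
    while let Ok(task) = task_rx.recv() {
        on_task(task, &out_tx);
    }
    worker.join().unwrap();
    drop(out_tx);

    for line in out_rx {
        println!("{}", line);
    }
}
```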
+ +## Demo + +![demonstration of debugging](https://user-images.githubusercontent.com/1711539/51384036-254fab80-1b2c-11e9-824d-95f9a6e9cf4f.gif) + +## Troubleshooting + +### Can't find the `ra_lsp_server` process + +It could be a case of just jumping the gun. + +The `ra_lsp_server` is only started once the `onLanguage:rust` activation. + +Make sure you open a rust file in the `[Extension Development Host]` and try again. + +### Can't connect to `ra_lsp_server` + +Make sure you have run `echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope`. + +By default this should reset back to 1 everytime you log in. + +### Breakpoints are never being hit + +Check your version of `lldb` if it's version 6 and lower use the `classic` adapter type. +It's `lldb.adapterType` in settings file. + +If you're running `lldb` version 7 change the lldb adapter type to `bundled` or `native`. -- cgit v1.2.3 From 86d5c32e4a96dc18ebc3f834ee74d403e4deceba Mon Sep 17 00:00:00 2001 From: Aleksey Kladov Date: Wed, 20 Mar 2019 17:22:22 +0300 Subject: describe how do we test things --- docs/dev/architecture.md | 27 ++++++++++++++++++++++++++- 1 file changed, 26 insertions(+), 1 deletion(-) diff --git a/docs/dev/architecture.md b/docs/dev/architecture.md index 3cd63bf73..f990d5bf0 100644 --- a/docs/dev/architecture.md +++ b/docs/dev/architecture.md @@ -171,4 +171,29 @@ A CLI interface to rust-analyzer. ## Testing Infrastructure - +Rust Analyzer has three interesting [systems +boundaries](https://www.tedinski.com/2018/04/10/making-tests-a-positive-influence-on-design.html) +to concentrate tests on. + +The outermost boundary is the `ra_lsp_server` crate, which defines an LSP +interface in terms of stdio. We do integration testing of this component, by +feeding it with a stream of LSP requests and checking responses. These tests are +known as "heavy", because they interact with Cargo and read real files from +disk. For this reason, we try to avoid writing too many tests on this boundary: +in a statically typed language, it's hard to make an error in the protocol +itself if messages are themselves typed. + +The middle, and most important, boundary is `ra_ide_api`. Unlike +`ra_lsp_server`, which exposes API, `ide_api` uses Rust API and is intended to +use by various tools. Typical test creates an `AnalysisHost`, calls some +`Analysis` functions and compares the results against expectation. + +The innermost and most elaborate boundary is `hir`. It has a much richer +vocabulary of types than `ide_api`, but the basic testing setup is the same: we +create a database, run some queries, assert result. + +For comparisons, we use [insta](https://github.com/mitsuhiko/insta/) library for +snapshot testing. + +To test various analysis corner cases and avoid forgetting about old tests, we +use so-called marks. See the `marks` module in the `test_utils` crate for more. -- cgit v1.2.3 From 290237d2eb472522191721397d0c4db905cdc565 Mon Sep 17 00:00:00 2001 From: bjorn3 Date: Wed, 20 Mar 2019 17:39:56 +0300 Subject: Update README.md Co-Authored-By: matklad --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index ad3ad22c3..3a0c9dee1 100644 --- a/README.md +++ b/README.md @@ -22,7 +22,7 @@ useful IDE experience and some people use it as a daily driver. 
To build rust-analyzer, you need: * latest stable rust for language server itself -* latest stable npm and VS Code for VS Code extension (`code` should be a path) +* latest stable npm and VS Code for VS Code extension (`code` should be in path) For setup for other editors, see [./docs/user](./docs/user). -- cgit v1.2.3
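On the "marks" idea from the testing-infrastructure section above: the point is to make a test fail unless the specific code path it targets was actually taken. Stripped of macros, it is just a named counter that the interesting branch bumps and the test checks. The sketch below is self-contained and only models the idea; the real helpers live in the `test_utils` crate and look different, and the function and mark names here are invented.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// A "mark": a named counter bumped by the code path a test claims to cover.
static GOTO_DEF_ON_KEYWORD: AtomicUsize = AtomicUsize::new(0);

fn hit(mark: &AtomicUsize) {
    mark.fetch_add(1, Ordering::SeqCst);
}

// The corner case we care about: goto-definition on a keyword should bail out
// early instead of producing nonsense results.
fn goto_definition(token: &str) -> Option<&'static str> {
    if token == "fn" || token == "let" {
        hit(&GOTO_DEF_ON_KEYWORD); // prove this branch actually ran
        return None;
    }
    Some("some_definition_location")
}

#[cfg(test)]
mod tests {
    use super::*;

    // The test checks the behaviour *and* asserts that the intended branch was
    // exercised, so a refactoring that routes keywords elsewhere makes the test
    // fail loudly instead of silently testing nothing.
    #[test]
    fn keyword_has_no_definition() {
        let before = GOTO_DEF_ON_KEYWORD.load(Ordering::SeqCst);
        assert_eq!(goto_definition("fn"), None);
        let after = GOTO_DEF_ON_KEYWORD.load(Ordering::SeqCst);
        assert_eq!(after, before + 1, "expected the keyword code path to be hit");
    }
}

fn main() {
    // Outside of tests the mark is just a cheap counter.
    assert_eq!(goto_definition("foo"), Some("some_definition_location"));
}
```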