path: root/crates/ra_lsp_server/src
Commit log (message · author · date · files changed · lines -/+)
...
* Resolve types on the server (Kirill Bulatov, 2019-07-21, 1 file changed, -25/+10)
* Refactor server api (Kirill Bulatov, 2019-07-20, 1 file changed, -28/+42)
* If possible, show type lenses for the let bindings (Kirill Bulatov, 2019-07-20, 1 file changed, -16/+26)
* Add "Run" lens for binary runnables (Kirill Bulatov, 2019-07-16, 1 file changed, -1/+1)
* Remove executeCommandProvider: apply_code_action (Michael Bolin, 2019-07-11, 2 files changed, -10/+8)
    This appears to have been introduced ages ago in
    https://github.com/rust-analyzer/rust-analyzer/commit/be742a587704f27f4e503c50f549aa9ec1527fcc
    but has since been removed. As it stands, it is problematic if multiple
    instances of the rust-analyzer LSP are launched during the same VS Code
    session, because VS Code complains about multiple LSP servers trying to
    register the same command. Most LSP servers work around this by
    parameterizing the command with the process id; for example, this is
    where `rls` does so:
    https://github.com/rust-lang/rls/blob/ff0b9057c8f62bc4f8113d741e96c9587ef1a817/rls/src/server/mod.rs#L413-L421
    Since `apply_code_action` does not seem to be used, it seems better to
    delete it than to parameterize it.
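    The workaround mentioned above can be sketched as follows (function and
    command names here are illustrative assumptions, not the actual rls or
    rust-analyzer code): each server instance appends its own process id to
    the command it registers, so two instances never collide.

    ```rust
    // Hypothetical sketch: parameterize an LSP command name by process id,
    // so that multiple server instances in one VS Code session each
    // register a distinct command.
    fn pid_qualified_command(base: &str) -> String {
        // e.g. "rust-analyzer.applySourceChange.12345"
        format!("{}.{}", base, std::process::id())
    }

    fn main() {
        let cmd = pid_qualified_command("rust-analyzer.applySourceChange");
        println!("{}", cmd);
    }
    ```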
* Ignore workspace/didChangeConfiguration notifications (Michael Bolin, 2019-07-11, 2 files changed, -4/+11)
* don't send LocationLink unless the client opts-in (Aleksey Kladov, 2019-07-08, 4 files changed, -9/+41)
    closes #1474
* simplify (Aleksey Kladov, 2019-07-08, 2 files changed, -23/+20)
* add try_conv_with_to_vec (Aleksey Kladov, 2019-07-08, 2 files changed, -40/+54)
* Simplify responses by using into() (Jeremy Kolb, 2019-07-07, 1 file changed, -11/+12)
* use flatten branch of lsp-types (Jeremy Kolb, 2019-07-07, 1 file changed, -23/+12)
* Formatting again (Jeremy Kolb, 2019-07-05, 1 file changed, -5/+5)
* Simplify by using into() (Jeremy Kolb, 2019-07-05, 1 file changed, -3/+3)
* Formatting (Jeremy Kolb, 2019-07-04, 1 file changed, -1/+3)
* Some clippy fixes for 1.36 (Jeremy Kolb, 2019-07-04, 2 files changed, -5/+4)
* Fix formatting (Jeremy Kolb, 2019-07-04, 1 file changed, -5/+5)
* Change default() (Jeremy Kolb, 2019-07-04, 1 file changed, -1/+1)
* Update to lsp-types 0.58.0 (Jeremy Kolb, 2019-07-04, 1 file changed, -5/+5)
* allow rustfmt to reorder imports (Aleksey Kladov, 2019-07-04, 11 files changed, -45/+44)
    This wasn't the right decision in the first place, the feature flag was
    broken in the last rustfmt release, and syntax highlighting of imports
    is more important anyway.
* Swallow expected `rustfmt` errors (Ryan Cumming, 2019-06-26, 1 file changed, -10/+25)
    My workflow in Visual Studio Code + Rust Analyzer has become:

    1. Make a change to Rust source code using all the analysis magic
    2. Save the file to trigger `cargo watch`. I have format-on-save enabled
       for all file types, so this also runs `rustfmt`
    3. Fix any diagnostics that `cargo watch` finds

    Unfortunately, if the Rust source has any syntax errors, the act of
    saving will pop up a scary "command has failed" message and will switch
    to the "Output" tab to show the `rustfmt` error and exit code.

    I did a quick survey of what other Language Servers do in this case.
    Both the JSON and TypeScript servers will swallow the error and return
    success. This is consistent with how I remember my workflow in those
    languages. The syntax error will show up as a diagnostic, so it should
    be clear why the file isn't formatting.

    I checked the `rustfmt` source code, and while it does distinguish
    "parse errors" from "operational errors" internally, they both result in
    an exit status of 1. However, more catastrophic errors (missing
    `rustfmt`, SIGSEGV, etc.) will return exit codes of 127+, which we can
    distinguish from a normal failure.

    This changes our handler to log an info message and feign success if
    `rustfmt` exits with status 1. Another option I considered was only
    swallowing the error if the formatting request came from format-on-save;
    however, the Language Server Protocol doesn't seem to distinguish those
    cases.
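    The exit-status policy described above can be sketched like this (the
    function name is mine, not the actual handler's): only status 1, which
    rustfmt uses for both parse and operational errors, is treated as an
    "expected" failure and swallowed; 127+ signals a missing or crashed
    binary and should surface as a real error.

    ```rust
    // Sketch of the swallowing policy: feign success on ordinary rustfmt
    // failures (e.g. syntax errors, which will show up as diagnostics
    // anyway), but not on catastrophic ones.
    fn should_swallow_rustfmt_error(exit_code: i32) -> bool {
        // 1  => parse/operational error: swallow, log an info message
        // 127+ => rustfmt missing, SIGSEGV, etc.: report as a hard failure
        exit_code == 1
    }

    fn main() {
        assert!(should_swallow_rustfmt_error(1));
        assert!(!should_swallow_rustfmt_error(127));
        println!("ok");
    }
    ```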
* reuse AnalysisHost in batch analysis (Aleksey Kladov, 2019-06-15, 1 file changed, -1/+1)
* re-enable backtraces on panic (Aleksey Kladov, 2019-06-15, 1 file changed, -2/+1)
* cargo format (Muhammad Mominul Huque, 2019-06-15, 1 file changed, -7/+2)
* Get rid of failure: ra_lsp_server & ra_project_model (Muhammad Mominul Huque, 2019-06-14, 4 files changed, -21/+25)
* Temp fix for slow onEnter issue (Aleksey Kladov, 2019-06-13, 1 file changed, -1/+2)
    The issue was Windows-specific: cancellation caused collection of
    backtraces at some point, and that was slow on Windows. The proper fix
    here is to make sure that we don't collect backtraces unnecessarily
    (which we currently do due to failure), but, as a temporary fix, let's
    just not force their collection in the first place!
* make LRU cache configurable (Aleksey Kladov, 2019-06-12, 3 files changed, -6/+18)
* make Docs handling more idiomatic (Aleksey Kladov, 2019-06-08, 2 files changed, -17/+8)
* Fix clippy::or_fun_call (Alan Du, 2019-06-04, 1 file changed, -1/+1)
* Fix clippy::identity_conversion (Alan Du, 2019-06-04, 3 files changed, -20/+15)
* rename (Aleksey Kladov, 2019-06-01, 6 files changed, -80/+86)
* move subs inside (Aleksey Kladov, 2019-06-01, 1 file changed, -4/+2)
* use sync queries for join lines and friends (Aleksey Kladov, 2019-05-31, 1 file changed, -5/+11)
* add sync requests (Aleksey Kladov, 2019-05-31, 2 files changed, -43/+56)
* cleanup (Aleksey Kladov, 2019-05-31, 1 file changed, -39/+42)
* cleanup (Aleksey Kladov, 2019-05-31, 1 file changed, -35/+48)
* simplify (Aleksey Kladov, 2019-05-31, 1 file changed, -51/+52)
* move completed requests to a separate file (Aleksey Kladov, 2019-05-31, 5 files changed, -80/+114)
* simplify (Aleksey Kladov, 2019-05-31, 1 file changed, -3/+3)
* introduce constant (Aleksey Kladov, 2019-05-31, 1 file changed, -7/+13)
* minor (Aleksey Kladov, 2019-05-31, 1 file changed, -1/+1)
* update ra_ide_api to use builtins (Aleksey Kladov, 2019-05-30, 1 file changed, -0/+1)
* less noisy status (Aleksey Kladov, 2019-05-29, 1 file changed, -1/+1)
* optimization: cancel backlog in onEnter (Aleksey Kladov, 2019-05-29, 2 files changed, -3/+16)
* add latest requests to status page (Aleksey Kladov, 2019-05-29, 3 files changed, -12/+67)
* log the actual time of requests (Aleksey Kladov, 2019-05-29, 1 file changed, -16/+31)
* trigger garbage collection *after* requests, not before (Aleksey Kladov, 2019-05-29, 1 file changed, -2/+5)
* more perf logging (Aleksey Kladov, 2019-05-29, 1 file changed, -3/+8)
* rename stray id field (Pascal Hertleif, 2019-05-27, 2 files changed, -2/+2)
* make it build again (Pascal Hertleif, 2019-05-27, 1 file changed, -1/+1)
* Semantic highlighting spike (Pascal Hertleif, 2019-05-27, 2 files changed, -1/+6)

    Very simple approach: for each identifier, set the hash of the range
    where it's defined as its 'id', and use it in the VSCode extension to
    generate unique colors. Thus, the generated colors are per-file. They
    are also quite fragile, and I'm not entirely sure why. Looks like we
    need to make sure the same ranges aren't overwritten by a later request?
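    The core idea of the spike can be sketched as follows (the function name
    and the raw `u32` offsets are illustrative assumptions, not the actual
    spike's types): hashing the definition range yields a deterministic id,
    so every usage of one identifier maps to the same color within a file.

    ```rust
    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Hypothetical sketch: derive a stable per-file color id for an
    // identifier by hashing the text range where it is defined.
    fn color_id(def_start: u32, def_end: u32) -> u64 {
        let mut hasher = DefaultHasher::new();
        (def_start, def_end).hash(&mut hasher);
        hasher.finish()
    }

    fn main() {
        // The same definition range always produces the same id, so all
        // usages of one identifier share a color.
        assert_eq!(color_id(10, 14), color_id(10, 14));
        println!("{}", color_id(10, 14));
    }
    ```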
| | | | | | | | | | Very simple approach: For each identifier, set the hash of the range where it's defined as its 'id' and use it in the VSCode extension to generate unique colors. Thus, the generated colors are per-file. They are also quite fragile, and I'm not entirely sure why. Looks like we need to make sure the same ranges aren't overwritten by a later request?