path: root/crates/ra_lsp_server
Commit message (Author, Date, Files changed, Lines changed)
...
* Swallow expected `rustfmt` errors (Ryan Cumming, 2019-06-26, 1 file changed, -10/+25)
|   My workflow in Visual Studio Code + Rust Analyzer has become:
|
|   1. Make a change to Rust source code using all the analysis magic.
|   2. Save the file to trigger `cargo watch`. I have format-on-save enabled for all file types, so this also runs `rustfmt`.
|   3. Fix any diagnostics that `cargo watch` finds.
|
|   Unfortunately, if the Rust source has any syntax errors, the act of saving pops up a scary "command has failed" message and switches to the "Output" tab to show the `rustfmt` error and exit code.
|
|   I did a quick survey of what other language servers do in this case. Both the JSON and TypeScript servers swallow the error and return success, which is consistent with how I remember my workflow in those languages. The syntax error will show up as a diagnostic, so it should be clear why the file isn't formatting.
|
|   I checked the `rustfmt` source code: while it does distinguish "parse errors" from "operational errors" internally, both result in an exit status of 1. More catastrophic errors (missing `rustfmt`, SIGSEGV, etc.) return exit codes of 127 and above, which we can distinguish from a normal failure.
|
|   This changes our handler to log an info message and feign success if `rustfmt` exits with status 1. Another option I considered was only swallowing the error if the formatting request came from format-on-save; however, the Language Server Protocol doesn't seem to distinguish those cases.
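A minimal sketch of the exit-status handling described above, assuming the handler shells out to `rustfmt` via `std::process::Command`; the function name, error type, and the use of `eprintln!` in place of the server's logger are illustrative, not the exact rust-analyzer API:

```rust
use std::io::Write;
use std::process::{Command, Stdio};

/// Illustrative error type; the real handler uses the server's own error type.
type FormatError = String;

/// Run `rustfmt` over `file_text`. Returns `Ok(None)` on a "normal" rustfmt
/// failure (exit status 1, e.g. a syntax error), so the client sees "no edits"
/// instead of a scary error popup.
fn run_rustfmt(file_text: &str) -> Result<Option<String>, FormatError> {
    let mut child = Command::new("rustfmt")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .map_err(|e| format!("failed to spawn rustfmt: {}", e))?;

    child
        .stdin
        .as_mut()
        .expect("stdin was piped")
        .write_all(file_text.as_bytes())
        .map_err(|e| format!("failed to write to rustfmt: {}", e))?;

    let output = child
        .wait_with_output()
        .map_err(|e| format!("failed to wait for rustfmt: {}", e))?;

    match output.status.code() {
        Some(0) => Ok(Some(String::from_utf8_lossy(&output.stdout).into_owned())),
        // Exit status 1 covers rustfmt's "parse errors" and "operational
        // errors"; log and pretend there is nothing to format.
        Some(1) => {
            eprintln!("rustfmt exited with status 1; the file probably has syntax errors");
            Ok(None)
        }
        // 127+ (missing binary), signals, etc. are real failures that should
        // surface to the user.
        _ => Err(format!("rustfmt failed: {:?}", output.status)),
    }
}
```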
* Bump cargo_metadata, ena, flexi_logger (kjeremy, 2019-06-20, 1 file changed, -1/+1)
|
* reuse AnalysisHost in batch analysis (Aleksey Kladov, 2019-06-15, 1 file changed, -1/+1)
|
* re-enable backtraces on panic (Aleksey Kladov, 2019-06-15, 1 file changed, -2/+1)
|
* cargo format (Muhammad Mominul Huque, 2019-06-15, 1 file changed, -7/+2)
|
* Get rid of failure: ra_lsp_server & ra_project_model (Muhammad Mominul Huque, 2019-06-14, 5 files changed, -23/+25)
|
* Temp fix for slow onEnter issue (Aleksey Kladov, 2019-06-13, 1 file changed, -1/+2)
|   The issue was Windows-specific: cancellation caused collection of backtraces at some point, and that was slow on Windows. The proper fix here is to make sure that we don't collect backtraces unnecessarily (which we currently do due to `failure`), but, as a temporary fix, let's just not force their collection in the first place!
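To illustrate the direction hinted at here (a sketch only; the actual one-line temporary fix is not shown in this log), the cheap way to represent cancellation is a plain value that captures no backtrace on construction, in contrast to a backtrace-carrying error type. The names below are illustrative, not rust-analyzer's actual types:

```rust
use std::fmt;

/// A cheap cancellation marker: just a unit-like value, so constructing and
/// propagating it never snapshots a backtrace.
#[derive(Debug)]
struct Canceled;

impl fmt::Display for Canceled {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "the request was canceled")
    }
}

impl std::error::Error for Canceled {}

fn compute_on_enter() -> Result<(), Canceled> {
    // Pretend a newer edit arrived, so this computation gives up early.
    Err(Canceled)
}

fn main() {
    if let Err(e) = compute_on_enter() {
        // Cancellation costs nothing beyond the value itself.
        eprintln!("onEnter aborted: {}", e);
    }
}
```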
* make LRU cache configurable (Aleksey Kladov, 2019-06-12, 3 files changed, -6/+18)
|
* make Docs handling more idiomatic (Aleksey Kladov, 2019-06-08, 2 files changed, -17/+8)
|
* Fix clippy::or_fun_call (Alan Du, 2019-06-04, 1 file changed, -1/+1)
|
* Fix clippy::identity_conversion (Alan Du, 2019-06-04, 3 files changed, -20/+15)
|
* Fix clippy::unused_mut (Alan Du, 2019-06-04, 1 file changed, -1/+1)
|
* Fix clippy::unnecessary_mut_passed (Alan Du, 2019-06-04, 1 file changed, -7/+2)
|
* Fix clippy::single_match (Alan Du, 2019-06-04, 1 file changed, -4/+3)
|
* rename (Aleksey Kladov, 2019-06-01, 6 files changed, -80/+86)
|
* move subs inside (Aleksey Kladov, 2019-06-01, 1 file changed, -4/+2)
|
* use sync queries for join lines and friends (Aleksey Kladov, 2019-05-31, 1 file changed, -5/+11)
|
* add sync requests (Aleksey Kladov, 2019-05-31, 2 files changed, -43/+56)
|
* cleanup (Aleksey Kladov, 2019-05-31, 1 file changed, -39/+42)
|
* cleanup (Aleksey Kladov, 2019-05-31, 1 file changed, -35/+48)
|
* simplify (Aleksey Kladov, 2019-05-31, 1 file changed, -51/+52)
|
* move completed requests to a separate file (Aleksey Kladov, 2019-05-31, 5 files changed, -80/+114)
|
* simplify (Aleksey Kladov, 2019-05-31, 1 file changed, -3/+3)
|
* introduce constant (Aleksey Kladov, 2019-05-31, 1 file changed, -7/+13)
|
* minor (Aleksey Kladov, 2019-05-31, 1 file changed, -1/+1)
|
* update ra_ide_api to use builtins (Aleksey Kladov, 2019-05-30, 1 file changed, -0/+1)
|
* :arrow_up: parking_lot (Aleksey Kladov, 2019-05-30, 1 file changed, -1/+1)
|
* bump timeout for CI (Aleksey Kladov, 2019-05-29, 1 file changed, -1/+1)
|
* less noisy status (Aleksey Kladov, 2019-05-29, 1 file changed, -1/+1)
|
* optimization: cancel backlog in onEnter (Aleksey Kladov, 2019-05-29, 2 files changed, -3/+16)
|
* add latest requests to status page (Aleksey Kladov, 2019-05-29, 3 files changed, -12/+67)
|
* log the actual time of requests (Aleksey Kladov, 2019-05-29, 1 file changed, -16/+31)
|
* trigger garbage collection *after* requests, not before (Aleksey Kladov, 2019-05-29, 1 file changed, -2/+5)
|
* more perf logging (Aleksey Kladov, 2019-05-29, 1 file changed, -3/+8)
|
* silence profiling in tests (Aleksey Kladov, 2019-05-29, 1 file changed, -1/+2)
|
* Merge #1334 (bors[bot], 2019-05-27, 2 files changed, -1/+82)
|\
| | 1334: check for cancellation during macro expansion r=matklad a=matklad
| |
| | closes #1331
| |
| | Co-authored-by: Aleksey Kladov <[email protected]>
| * check cancellation when expanding macros (Aleksey Kladov, 2019-05-27, 1 file changed, -3/+2)
| |
| * enable profiling in tests (Aleksey Kladov, 2019-05-27, 2 files changed, -1/+83)
| |
* | rename stray id field (Pascal Hertleif, 2019-05-27, 2 files changed, -2/+2)
| |
* | make it build again (Pascal Hertleif, 2019-05-27, 1 file changed, -1/+1)
| |
* | Semantic highlighting spike (Pascal Hertleif, 2019-05-27, 2 files changed, -1/+6)
|/
|   Very simple approach: for each identifier, set the hash of the range where it's defined as its 'id' and use it in the VSCode extension to generate unique colors. Thus, the generated colors are per-file. They are also quite fragile, and I'm not entirely sure why. Looks like we need to make sure the same ranges aren't overwritten by a later request?
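A minimal sketch of the hashing idea described above, assuming the id is computed from the file and the byte range of the defining occurrence; the type and function names are illustrative stand-ins, not rust-analyzer's actual API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative stand-in for a definition site: the file it lives in and the
/// byte range of the defining occurrence.
struct DefinitionSite {
    file: String,
    start: u32,
    end: u32,
}

/// Hash the range where an identifier is defined. All references to the same
/// binding share this id, and the client maps the id to a color.
fn binding_hash(def: &DefinitionSite) -> u64 {
    let mut hasher = DefaultHasher::new();
    (def.file.as_str(), def.start, def.end).hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let def = DefinitionSite { file: "main.rs".to_string(), start: 10, end: 13 };
    // Every occurrence of this identifier gets the same hash, so the
    // extension can render all of them in the same (per-file) color.
    println!("semantic highlight id: {}", binding_hash(&def));
}
```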
* add profile calls to real-time requests (Aleksey Kladov, 2019-05-27, 1 file changed, -0/+5)
|
* Added local macro goto (Lenard Pratt, 2019-05-04, 1 file changed, -0/+1)
|
* Basic resolution for ADT (kjeremy, 2019-04-23, 3 files changed, -2/+23)
|
* :arrow_up: lsp (Aleksey Kladov, 2019-04-21, 1 file changed, -1/+1)
|
* switch to official extend selection API (Aleksey Kladov, 2019-04-21, 5 files changed, -3/+72)
|
* cleanup cancellation (Aleksey Kladov, 2019-04-17, 1 file changed, -10/+5)
|   Now that we explicitly exit the reading loop on the exit notification, we can assume that the sender is always alive.
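A self-contained sketch of the invariant being described, assuming a reader thread that forwards messages over a channel and stops cleanly once it sees the LSP `exit` notification; the message and channel types below are simplified stand-ins for the server's actual ones:

```rust
use std::sync::mpsc::Sender;

/// Simplified stand-in for the server's raw LSP message type.
enum RawMessage {
    Notification { method: String },
}

/// Read messages and forward them to the main loop. Because this loop returns
/// right after forwarding the `exit` notification, the receiving side can
/// assume the sender stays alive for as long as messages are still flowing.
fn reader_loop(sender: Sender<RawMessage>, incoming: impl Iterator<Item = RawMessage>) {
    for msg in incoming {
        let is_exit =
            matches!(&msg, RawMessage::Notification { method } if method == "exit");
        if sender.send(msg).is_err() {
            break; // the receiver hung up first; nothing more to do
        }
        if is_exit {
            return; // explicit, orderly shutdown of the reading loop
        }
    }
}

fn main() {
    let (tx, rx) = std::sync::mpsc::channel();
    let messages = vec![
        RawMessage::Notification { method: "textDocument/didSave".to_string() },
        RawMessage::Notification { method: "exit".to_string() },
    ];
    reader_loop(tx, messages.into_iter());
    // The main loop drains whatever was forwarded before the reader exited.
    while let Ok(RawMessage::Notification { method }) = rx.recv() {
        println!("got notification: {}", method);
    }
}
```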
* add a couple of profiling points (Aleksey Kladov, 2019-04-14, 1 file changed, -0/+3)
|
* filter by time (Aleksey Kladov, 2019-04-14, 1 file changed, -21/+4)
|
* cleanup syntax (Aleksey Kladov, 2019-04-14, 1 file changed, -8/+20)
|