This wasn't the right decision in the first place: the feature flag was broken in the last rustfmt release, and syntax highlighting of imports is more important anyway.

My workflow in Visual Studio Code + Rust Analyzer has become:
1. Make a change to Rust source code using all the analysis magic
2. Save the file to trigger `cargo watch`. I have format-on-save enabled for all file types, so this also runs `rustfmt`
3. Fix any diagnostics that `cargo watch` finds

Unfortunately, if the Rust source has any syntax errors, the act of saving pops up a scary "command has failed" message and switches to the "Output" tab to show the `rustfmt` error and exit code.

I did a quick survey of what other Language Servers do in this case. Both the JSON and TypeScript servers swallow the error and return success. This is consistent with how I remember my workflow in those languages. The syntax error shows up as a diagnostic, so it should be clear why the file isn't formatting.

I checked the `rustfmt` source code, and while it does distinguish "parse errors" from "operational errors" internally, both result in an exit status of 1. However, more catastrophic errors (a missing `rustfmt`, SIGSEGV, etc.) return exit codes of 127 and above, which we can distinguish from a normal failure.

This changes our handler to log an info message and feign success if `rustfmt` exits with status 1.

Another option I considered was swallowing the error only when the formatting request came from format-on-save, but the Language Server Protocol doesn't seem to distinguish those cases.
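A minimal sketch of the exit-status handling described above, assuming a hypothetical `format_with_rustfmt` helper that pipes the document text through the `rustfmt` binary; only the split between exit status 1 and everything else mirrors this entry, the rest is illustrative plumbing.

```rust
use std::io::Write;
use std::process::{Command, Stdio};

/// Hypothetical helper: run `rustfmt` over `text`.
/// Ok(Some(_)) = formatted text, Ok(None) = rustfmt reported a normal failure
/// (exit status 1, e.g. a parse error) and we feign success with no edits,
/// Err(_) = something genuinely went wrong and the user should see it.
fn format_with_rustfmt(text: &str) -> Result<Option<String>, String> {
    let mut child = Command::new("rustfmt")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .map_err(|err| format!("failed to spawn rustfmt: {}", err))?;

    {
        // Write the document and drop the handle so rustfmt sees EOF.
        let mut stdin = child.stdin.take().expect("stdin was piped");
        stdin
            .write_all(text.as_bytes())
            .map_err(|err| format!("failed to write to rustfmt: {}", err))?;
    }

    let output = child
        .wait_with_output()
        .map_err(|err| format!("failed to wait for rustfmt: {}", err))?;

    match output.status.code() {
        Some(0) => Ok(Some(String::from_utf8_lossy(&output.stdout).into_owned())),
        // Parse errors and "operational" errors both exit with status 1; the
        // syntax error already shows up as a diagnostic, so just log and
        // pretend the formatting request succeeded with no edits.
        Some(1) => {
            eprintln!("info: rustfmt exited with status 1, skipping formatting");
            Ok(None)
        }
        // Anything else (127+ for a missing binary, SIGSEGV, ...) is a real
        // problem worth surfacing to the user.
        _ => Err(format!(
            "rustfmt failed ({:?}): {}",
            output.status,
            String::from_utf8_lossy(&output.stderr)
        )),
    }
}
```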
The issue was Windows-specific -- cancellation caused collection of backtraces at some point, and that was slow on Windows.
The proper fix here is to make sure that we don't collect backtraces unnecessarily (which we currently do because of `failure`), but, as a temporary fix, let's just not force their collection in the first place!
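The change itself isn't shown here, but a minimal sketch of the underlying idea, assuming a hypothetical `Canceled` type and `check_canceled` helper: a hand-rolled error type records no backtrace when it is constructed, so unwinding out of a cancelled request stays cheap even on platforms where backtrace collection is slow.

```rust
use std::fmt;
use std::sync::atomic::{AtomicBool, Ordering};

/// Hypothetical cancellation marker: a plain unit struct with a hand-written
/// `Error` impl allocates nothing and captures no backtrace on construction.
#[derive(Debug, Clone, Copy)]
pub struct Canceled;

impl fmt::Display for Canceled {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "the request was canceled")
    }
}

impl std::error::Error for Canceled {}

/// Hypothetical check point called from long-running analysis loops:
/// returning this lightweight error is all that's needed to abandon the work.
pub fn check_canceled(cancel_flag: &AtomicBool) -> Result<(), Canceled> {
    if cancel_flag.load(Ordering::SeqCst) {
        return Err(Canceled);
    }
    Ok(())
}
```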
Very simple approach: for each identifier, set the hash of the range where it's defined as its 'id', and use that id in the VSCode extension to generate unique colors.
Thus, the generated colors are per-file. They are also quite fragile, and I'm not entirely sure why -- it looks like we need to make sure the same ranges aren't overwritten by a later request?
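A minimal sketch of the hashing idea, with a hypothetical `TextRange` standing in for the real syntax-range type; `binding_id` and the hue mapping at the end are illustrative, showing just one way a client could turn the id into a color.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical stand-in for the syntax tree's text range: the byte offsets
/// of the identifier's definition inside the file.
#[derive(Hash)]
struct TextRange {
    start: u32,
    end: u32,
}

/// Hash the definition range to get a stable, per-file id for the binding.
/// Every reference that resolves to the same definition gets the same id,
/// so the client can color all of them identically.
fn binding_id(definition_range: &TextRange) -> u64 {
    let mut hasher = DefaultHasher::new();
    definition_range.hash(&mut hasher);
    hasher.finish()
}

/// One way a client (e.g. the VSCode extension) could derive a color:
/// map the id onto a hue in the 0-359 range.
fn id_to_hue(id: u64) -> u16 {
    (id % 360) as u16
}
```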
This is a quick, partial fix for #1104.

This is used by `CallInfo` to create a pretty-printed function signature that can be used for completions and in other places as well.
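A rough sketch of the kind of shared signature type the entry above describes; the struct name and fields here are illustrative, not the crate's actual API.

```rust
use std::fmt;

/// Hypothetical, simplified model of a pretty-printed function signature that
/// both call-info hovers and completion details could render.
struct FunctionSignature {
    name: String,
    parameters: Vec<String>,
    ret_type: Option<String>,
}

impl fmt::Display for FunctionSignature {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "fn {}({})", self.name, self.parameters.join(", "))?;
        if let Some(ret) = &self.ret_type {
            write!(f, " -> {}", ret)?;
        }
        Ok(())
    }
}

fn main() {
    let sig = FunctionSignature {
        name: "spawn".to_string(),
        parameters: vec!["f: F".to_string()],
        ret_type: Some("JoinHandle<T>".to_string()),
    };
    // The same rendered string works for call info and for completion detail.
    assert_eq!(sig.to_string(), "fn spawn(f: F) -> JoinHandle<T>");
}
```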
This way the two `IncludeRustFiles` implementations can simply call the `ProjectRoot` methods, so that the include logic lives in one place.
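A minimal sketch of the delegation this entry describes, assuming a hypothetical VFS-style `Filter` trait; the real trait and types live elsewhere, and the actual `ProjectRoot` rules are outlined under the entry below.

```rust
use std::path::Path;

/// Hypothetical VFS-style filter trait; the real one lives in the VFS crate.
trait Filter {
    fn include_dir(&self, dir: &Path) -> bool;
    fn include_file(&self, file: &Path) -> bool;
}

/// See the entry below for what `ProjectRoot` carries; here we only assume it
/// exposes the two include checks.
struct ProjectRoot;

impl ProjectRoot {
    fn include_dir(&self, _dir: &Path) -> bool {
        true // actual rules elided; the point is that they live here
    }
    fn include_file(&self, file: &Path) -> bool {
        file.extension().map_or(false, |ext| ext == "rs")
    }
}

/// Thin adapter usable from both the batch loader and the LSP server: all it
/// does is forward to the root, so the include logic stays in one place.
struct IncludeRustFiles {
    root: ProjectRoot,
}

impl Filter for IncludeRustFiles {
    fn include_dir(&self, dir: &Path) -> bool {
        self.root.include_dir(dir)
    }
    fn include_file(&self, file: &Path) -> bool {
        self.root.include_file(file)
    }
}
```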
`ProjectWorkspace::to_roots` now returns a new `ProjectRoot`, which contains information about whether the given path is part of the current workspace or an external dependency. This information can then be used in `ra_batch` and `ra_lsp_server` to implement more advanced filtering. This allows us to filter out some unnecessary folders from external dependencies, such as tests, examples and benches.
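A minimal sketch of the shape such a root could take and of the filtering it enables; the field and method names are illustrative, not the crate's exact API.

```rust
use std::path::{Path, PathBuf};

/// Illustrative version of a project root: a path plus a flag telling whether
/// it belongs to the current workspace or to an external dependency.
pub struct ProjectRoot {
    path: PathBuf,
    is_member: bool,
}

impl ProjectRoot {
    pub fn path(&self) -> &Path {
        &self.path
    }

    /// Workspace members keep everything; for external dependencies we can
    /// skip folders we never analyze, such as tests, examples and benches.
    pub fn include_dir(&self, dir: &Path) -> bool {
        const EXCLUDED_FOR_EXTERNAL: &[&str] = &["tests", "examples", "benches"];
        if self.is_member {
            return true;
        }
        dir.file_name()
            .and_then(|name| name.to_str())
            .map_or(true, |name| !EXCLUDED_FOR_EXTERNAL.contains(&name))
    }
}
```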
Currently this matches the previous filtering, meaning all roots are filtered
using the same rules.