| Commit message | Author | Age | Files | Lines |
The previous way of sending diagnostics from the thread pool suffered from stale
diagnostics, because the task could be canceled before we got a chance to clear
the old ones. The key change is to send diagnostics from the main loop thread,
while still doing all the hard work in the thread pool. This should give us the
best of both worlds, with few, if any, of the downsides.
This should hopefully fix a lot of issues, but we'll need testing in
each individual issue to be sure.
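
A minimal sketch of that split, using plain std threads and channels rather than rust-analyzer's actual thread pool and main loop; every name below is illustrative, not the real API:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for a batch of computed diagnostics.
struct Diagnostics {
    file: String,
    messages: Vec<String>,
}

// Stand-in for the expensive analysis done on the thread pool.
fn compute_diagnostics(file: &str) -> Diagnostics {
    Diagnostics { file: file.to_string(), messages: vec!["example".into()] }
}

fn main() {
    let (tx, rx) = mpsc::channel::<Diagnostics>();

    // Thread-pool side: do the hard work, then hand the result off cheaply.
    let worker = thread::spawn(move || {
        let diags = compute_diagnostics("src/lib.rs");
        tx.send(diags).ok();
    });

    // Main-loop side: the single place that talks to the client, so clearing
    // stale diagnostics and publishing fresh ones can no longer race.
    for diags in rx {
        println!("clearing stale diagnostics for {}", diags.file);
        println!("publishing {} fresh diagnostics", diags.messages.len());
    }

    worker.join().unwrap();
}
```

The design point is that the channel send is cheap, so the worker never blocks on client I/O, while the main loop remains the sole writer of client-visible state.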
By default, `spawn` inherits stdin/stdout/stderr of the parent process, and so,
if the child, for example, does fcntl(O_NONBLOCK) on them, weird stuff happens
to us.
Closes https://github.com/rust-analyzer/lsp-server/pull/10
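
For illustration only, a sketch of the general defensive pattern: give the child its own stdio handles instead of sharing the parent's. This uses std::process as-is; the command is a placeholder and this is not necessarily the exact change in the PR:

```rust
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Explicit Stdio settings give the child its own descriptors, so a child
    // that calls fcntl(O_NONBLOCK) mutates its own pipe, not our terminal.
    let child = Command::new("cargo") // placeholder command
        .arg("--version")
        .stdin(Stdio::null())   // nothing to feed the child
        .stdout(Stdio::piped()) // capture instead of inheriting
        .stderr(Stdio::piped())
        .spawn()?;

    let output = child.wait_with_output()?;
    print!("{}", String::from_utf8_lossy(&output.stdout));
    Ok(())
}
```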
We previously used serde's stream deserializer to read JSON blobs from
the cargo output. It has an issue though: if the deserializer encounters
invalid input, it gets stuck reporting the same error again and again,
because it is unable to skip forward over the invalid input to reach the
next valid object.
Reading a line at a time and manually deserializing fixes this issue,
because cargo makes sure to only output one JSON blob per line, so should
we encounter invalid input, we can just skip a line and continue.
The main reason this would happen is stray printf-debugging in
procedural macros, so we still report that an error occurred, but we
handle it gracefully now.
Fixes #2935
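
A sketch of the line-oriented approach, assuming the serde_json crate; the input string below stands in for the cargo child process's stdout:

```rust
use std::io::{BufRead, BufReader};

fn main() {
    // Stand-in for cargo's output: one JSON blob per line, with a stray
    // print from a procedural macro in the middle.
    let input = "{\"reason\":\"compiler-message\"}\nstray printf debugging\n{\"reason\":\"build-finished\"}\n";
    let reader = BufReader::new(input.as_bytes());

    for line in reader.lines() {
        let line = line.expect("read error");
        match serde_json::from_str::<serde_json::Value>(&line) {
            Ok(message) => println!("got message: {}", message["reason"]),
            // A stream deserializer would be stuck on this error forever;
            // line by line, we report it once and simply move on.
            Err(err) => eprintln!("skipping malformed line: {}", err),
        }
    }
}
```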
2924: Modify ordering of drops in check watcher to only ever have one cargo r=matklad a=kiljacken
Due to the way drops are ordered when assigning to a mutable variable, we
were launching a new cargo sub-process before letting the old one quit.
By explicitly replacing the original watcher with a dummy first, we
ensure it is dropped and its process has completed before we start the
new one.
Co-authored-by: Emil Lauridsen <[email protected]>
Due to the way drops are ordered when assigning to a mutable variable, we
were launching a new cargo sub-process before letting the old one quit.
By explicitly replacing the original watcher with a dummy first, we
ensure it is dropped and its process has completed before we start the
new one.
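
A standalone sketch of the pitfall and the dummy-replacement fix; `Watcher` here is an illustrative stand-in for the real check watcher, with prints in place of process management:

```rust
// Illustrative stand-in for the check watcher; Drop represents waiting for
// the cargo sub-process to quit.
struct Watcher(&'static str);

impl Watcher {
    fn spawn(name: &'static str) -> Watcher {
        println!("spawning {}", name);
        Watcher(name)
    }
    fn dummy() -> Watcher {
        Watcher("dummy")
    }
}

impl Drop for Watcher {
    fn drop(&mut self) {
        println!("dropping {} (cargo process reaped here)", self.0);
    }
}

fn main() {
    let mut watcher = Watcher::spawn("first cargo");

    // BAD: the right-hand side runs before the old value is dropped, so
    // "spawning second cargo" prints before "dropping first cargo" and two
    // cargo processes are briefly alive at once.
    watcher = Watcher::spawn("second cargo");

    // GOOD: assigning a dummy drops the old watcher (and reaps its process)
    // first, so the old cargo is gone before the new one starts.
    watcher = Watcher::dummy();
    watcher = Watcher::spawn("third cargo");

    println!("final watcher: {}", watcher.0);
}
```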
@matklad mentioned this might be a good idea.
So the general idea is that we don't really need the lock, as we can
just clone the check watcher state when creating a snapshot. We can then
use `Arc::get_mut` to get mutable access to the state from `WorldState`
when needed.
Running with this, responsiveness seems slightly better while cargo is
running, but I have no hard numbers to prove it. In any case, one
serialization point less is always a win when we're trying to be
responsive.
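
A sketch of that scheme with illustrative types (not rust-analyzer's actual `WorldState`); it uses `Arc::make_mut`, the clone-on-write cousin of `Arc::get_mut`, so the demo keeps working even while a snapshot is alive:

```rust
use std::sync::Arc;

// Illustrative types, not rust-analyzer's real state.
#[derive(Clone)]
struct CheckWatcherState {
    diagnostics: Vec<String>,
}

struct WorldState {
    check: Arc<CheckWatcherState>,
}

impl WorldState {
    // Snapshots share the state by bumping a refcount; no lock required.
    fn snapshot(&self) -> Arc<CheckWatcherState> {
        Arc::clone(&self.check)
    }

    // The main loop is the only writer. Arc::get_mut succeeds when no
    // snapshot is alive; make_mut falls back to clone-and-swap instead of
    // blocking, which is exactly the "one serialization point less" property.
    fn update(&mut self, diag: String) {
        Arc::make_mut(&mut self.check).diagnostics.push(diag);
    }
}

fn main() {
    let mut world = WorldState {
        check: Arc::new(CheckWatcherState { diagnostics: Vec::new() }),
    };

    let snap = world.snapshot();
    world.update("E0308: mismatched types".to_string());

    println!("snapshot sees {} diagnostics", snap.diagnostics.len()); // 0
    println!("world sees {} diagnostics", world.check.diagnostics.len()); // 1
}
```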
Stabilizes most proposed features
2681: cargo-watcher: Resolve macro call site in more cases r=matklad a=kiljacken
This resolves the actual macro call site in a few more cases, e.g. when a macro invokes `compile_error!` (I'm looking at you, `ra_hir_def::path::__path`).
Co-authored-by: Emil Lauridsen <[email protected]>
Even though this didn't error, it became clear to me that it was closing
the wrong channel, resulting in the child thread never finishing.
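
A distilled illustration of that failure mode with `std::sync::mpsc`: the child's receive loop ends only when the right sender handle is dropped, so closing the wrong channel would park the thread forever.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    let child = thread::spawn(move || {
        // This loop ends only once every Sender has been dropped.
        for msg in rx {
            println!("child got: {}", msg);
        }
        println!("child thread finishing");
    });

    tx.send("hello".to_string()).unwrap();

    // Dropping the sender is what closes the channel; dropping some other
    // handle (the "wrong channel") would leave the child parked in recv.
    drop(tx);

    child.join().unwrap();
}
```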
subprocess