commit fc57358efda7c028cbe8a438446cce5f540f48ca (patch)
author     bors[bot] <26634292+bors[bot]@users.noreply.github.com>  2020-04-25 15:30:10 +0100
committer  GitHub <[email protected]>                       2020-04-25 15:30:10 +0100
tree       402dd311ea4de95468c79eb8afa6722b7391dbb8 /crates
parent     05981823bac91ba338110902fd435c6e3166f1d6 (diff)
parent     0c12b7e8c8605e84873858da0c4aa170140c14bb (diff)
Merge #4133
4133: main: eagerly prime goto-definition caches r=matklad a=BurntSushi
This commit eagerly primes the caches used by goto-definition by
submitting a "phantom" goto-definition request. This is perhaps a bit
circuitous, but it does actually get the job done. The result of this
change is that once RA has finished its initial loading of a project,
goto-definition requests are instant. There don't appear to be any more
surprise latency spikes.
This _partially_ addresses #1650 in that it front-loads the latency of the
first goto-definition request, which in turn makes it more predictable and
less surprising. In particular, this addresses the use case where one opens
the text editor, starts reading code for a while, and only later issues the
first goto-definition request. Before this PR, that first goto-definition
request was guaranteed to have high latency in any reasonably sized project.
After this PR, there's a good chance it will be instant.
What this _doesn't_ address is the initial loading time itself. In fact, it
makes it longer by adding a phantom goto-definition request to the initial
startup sequence. However, I observed that while this did make initial loading
slower, the extra time was overall a somewhat small (but not insignificant)
fraction of total loading time.
-----
At least, the above is what I _want_ to do. The actual change in this PR is just a proof-of-concept that I came up with after an evening of printf-debugging. Once I found the spot where this cache priming should go, I was unsure of how to generate a phantom input. So I just took an input I knew worked from my printf-debugging and hacked it in. Obviously, what I'd like to do is make this more general so that it always works.
I don't know whether this is the "right" approach or not. My guess is that there is perhaps a cleaner solution that more directly primes whatever cache is being lazily populated rather than fudging the issue with a phantom goto-definition request.
I created this as a draft PR because I'd really like help making this general. I think whether y'all want to accept this patch is perhaps a separate question. IMO, it seems like a good idea, but to be honest, I'm happy to maintain this patch on my own since it's so trivial. But I would like to generalize it so that it will work in any project.
My thinking is that all I really need to do is find a file and a token somewhere in the loaded project, and then use that as input. But I don't quite know how to connect all the data structures to do that. Any help would be appreciated!
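The priming idea itself is simple to state in isolation. The toy sketch below (plain Rust with invented types, not rust-analyzer's actual `Analysis`/`Semantics` API) shows the lazy-cache pattern the PR works around: the first query per file is expensive and memoized, so walking every known file once right after load moves the latency spike out of the first user request.

```rust
use std::collections::HashMap;

/// Toy stand-in for a lazily populated analysis cache (NOT rust-analyzer's
/// real API): the first lookup for a file is expensive, later ones are cheap.
struct AnalysisCache {
    cache: HashMap<u32, String>,
}

impl AnalysisCache {
    fn new() -> Self {
        AnalysisCache { cache: HashMap::new() }
    }

    /// Simulates an expensive on-demand computation, memoized per file id.
    fn goto_definition(&mut self, file_id: u32) -> &str {
        self.cache.entry(file_id).or_insert_with(|| {
            // In the real server, this first computation is the latency spike.
            format!("defs-for-file-{file_id}")
        })
    }

    /// Eager priming: run the same computation for every known file up front,
    /// so the first real request hits a warm cache.
    fn prime_caches(&mut self, files: &[u32]) {
        for &file_id in files {
            let _ = self.goto_definition(file_id);
        }
    }
}

fn main() {
    let mut analysis = AnalysisCache::new();
    // Once the workspace is loaded, visit every file once...
    analysis.prime_caches(&[1, 2, 3]);
    // ...so a later user-issued request is a cache hit.
    assert!(analysis.cache.contains_key(&2));
    println!("primed {} files", analysis.cache.len());
}
```

The open question in the PR is exactly the `files: &[u32]` part: enumerating "every file in the loaded project" requires wiring through real data structures that the toy version hand-waves away.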
cc @matklad since I've been a worm in your ear about this problem. :-)
Co-authored-by: Andrew Gallant <[email protected]>
Diffstat (limited to 'crates')
 crates/ra_ide/src/prime_caches.rs     |  5
 crates/rust-analyzer/src/main_loop.rs | 16
 2 files changed, 10 insertions(+), 11 deletions(-)
diff --git a/crates/ra_ide/src/prime_caches.rs b/crates/ra_ide/src/prime_caches.rs
index 628c989bf..90bf7d25f 100644
--- a/crates/ra_ide/src/prime_caches.rs
+++ b/crates/ra_ide/src/prime_caches.rs
@@ -3,13 +3,10 @@
 //! request takes longer to compute. This modules implemented prepopulating of
 //! various caches, it's not really advanced at the moment.
 
-use hir::Semantics;
-
 use crate::{FileId, RootDatabase};
 
 pub(crate) fn prime_caches(db: &RootDatabase, files: Vec<FileId>) {
-    let sema = Semantics::new(db);
     for file in files {
-        let _ = sema.to_module_def(file);
+        let _ = crate::syntax_highlighting::highlight(db, file, None);
     }
 }
diff --git a/crates/rust-analyzer/src/main_loop.rs b/crates/rust-analyzer/src/main_loop.rs
index d993c4146..f3aef3f0f 100644
--- a/crates/rust-analyzer/src/main_loop.rs
+++ b/crates/rust-analyzer/src/main_loop.rs
@@ -222,6 +222,7 @@ pub fn main_loop(ws_roots: Vec<PathBuf>, config: Config, connection: Connection)
     libdata_receiver.into_iter().for_each(drop);
     log::info!("...tasks have finished");
     log::info!("joining threadpool...");
+    pool.join();
     drop(pool);
     log::info!("...threadpool has finished");
 
@@ -417,28 +418,29 @@ fn loop_turn(
         && loop_state.pending_libraries.is_empty()
         && loop_state.in_flight_libraries == 0
     {
+        state_changed = true;
         loop_state.workspace_loaded = true;
         if let Some(flycheck) = &world_state.flycheck {
             flycheck.update();
         }
-        pool.execute({
-            let subs = loop_state.subscriptions.subscriptions();
-            let snap = world_state.snapshot();
-            move || snap.analysis().prime_caches(subs).unwrap_or_else(|_: Canceled| ())
-        });
     }
 
     if show_progress {
         send_startup_progress(&connection.sender, loop_state);
     }
 
-    if state_changed {
+    if state_changed && loop_state.workspace_loaded {
         update_file_notifications_on_threadpool(
             pool,
             world_state.snapshot(),
             task_sender.clone(),
             loop_state.subscriptions.subscriptions(),
-        )
+        );
+        pool.execute({
+            let subs = loop_state.subscriptions.subscriptions();
+            let snap = world_state.snapshot();
+            move || snap.analysis().prime_caches(subs).unwrap_or_else(|_: Canceled| ())
+        });
     }
 
     let loop_duration = loop_start.elapsed();