| Commit message | Author | Age | Files | Lines |
4133: main: eagerly prime goto-definition caches r=matklad a=BurntSushi
This commit eagerly primes the caches used by goto-definition by
submitting a "phantom" goto-definition request. This is perhaps a bit
circuitous, but it does actually get the job done. The result of this
change is that once RA has finished its initial loading of a project,
goto-definition requests are instant. There don't appear to be any more
surprise latency spikes.
This _partially_ addresses #1650 in that it front-loads the latency of the
first goto-definition request, which in turn makes it more predictable and
less surprising. In particular, this addresses the use case where one opens
the text editor, starts reading code for a while, and only later issues the
first goto-definition request. Before this PR, that first goto-definition request
is guaranteed to have high latency in any reasonably sized project. But
after this PR, there's a good chance that it will now be instant.
What this _doesn't_ address is the initial loading time. In fact, it makes it
longer by adding a phantom goto-definition request to the initial startup
sequence. However, I observed that while this did make initial loading
slower, the slowdown was overall a somewhat small (but not insignificant)
fraction of the total loading time.
-----
At least, the above is what I _want_ to do. The actual change in this PR is just a proof-of-concept that I came up with after an evening of printf-debugging. Once I found the spot where this cache priming should go, I was unsure of how to generate a phantom input, so I just took an input I knew worked from my printf-debugging and hacked it in. Obviously, what I'd like to do is make this more general so that it always works.
I don't know whether this is the "right" approach or not. My guess is that there is perhaps a cleaner solution that more directly primes whatever cache is being lazily populated rather than fudging the issue with a phantom goto-definition request.
I created this as a draft PR because I'd really like help making this general. I think whether y'all want to accept this patch is perhaps a separate question. IMO, it seems like a good idea, but to be honest, I'm happy to maintain this patch on my own since it's so trivial. But I would like to generalize it so that it will work in any project.
My thinking is that all I really need to do is find a file and a token somewhere in the loaded project, and then use that as input. But I don't quite know how to connect all the data structures to do that. Any help would be appreciated!
cc @matklad since I've been a worm in your ear about this problem. :-)
Co-authored-by: Andrew Gallant <[email protected]>
This commit makes RA more aggressive about eagerly priming the caches.
In particular, this fixes an issue where even after RA was done priming
its caches, an initial goto-definition request would have very high
latency. This fixes that issue by requesting syntax highlighting for
everything. It is presumed that this is a tad wasteful, but not overly
so.
This commit also tweaks the logic that determines when the cache is
primed. Namely, instead of just priming it when the state is loaded
initially, we attempt to prime it whenever some state changes. This
fixes an issue where a modification notification arriving before cache
priming had finished would stop the priming early.
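For reference, here is a minimal, self-contained sketch of the priming idea described above. The `Analysis`, `FileId`, and `prime_caches` names are stand-ins chosen for illustration, not rust-analyzer's actual API.

```rust
// Sketch only: force the expensive per-file queries once for every loaded
// file, so the first real goto-definition request hits warm caches.
struct FileId(u32);

struct Analysis;

impl Analysis {
    /// Stand-in for a per-file query (e.g. syntax highlighting) whose side
    /// effect is to populate the lazily computed caches for that file.
    fn highlight(&self, _file: &FileId) {
        // ... run the expensive analysis for this file ...
    }
}

/// Prime the caches right after the initial load and again whenever the
/// state changes.
fn prime_caches(analysis: &Analysis, files: &[FileId]) {
    for file in files {
        analysis.highlight(file);
    }
}

fn main() {
    let analysis = Analysis;
    let files: Vec<FileId> = (0u32..3).map(FileId).collect();
    prime_caches(&analysis, &files);
}
```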
4128: Include correct item path for variant completions r=matklad a=jonas-schievink
The test would previously suggest `E::V`, which is not enough to name the variant, since the enum is inside a module. Now it correctly suggests the full path `m::E::V`.
Co-authored-by: Jonas Schievink <[email protected]>
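For illustration, a minimal reproduction of the scenario: the module and item names `m`, `E`, and `V` follow the PR text, while the surrounding code is made up.

```rust
// The enum lives in a module, so the completion must insert the full path
// `m::E::V`; a bare `E::V` would not resolve at the call site.
mod m {
    pub enum E {
        V,
    }
}

fn make() -> m::E {
    // Completing the variant at this position now inserts `m::E::V`.
    m::E::V
}

fn main() {
    let _value = make();
}
```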
3998: Make add_function generate functions in other modules via qualified path r=matklad a=TimoFreiberg
Additional feature for #3639
- [x] Add tests for paths with more segments
- [x] Make generating the function in another file work
- [x] Add `pub` or `pub(crate)` to the generated function if it's generated in a different module
- [x] Make the assist jump to the edited file
- [x] Enable file support in the `check_assist` helper
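A rough illustration of the "after" state the assist can now produce across modules. This is a hypothetical sketch: the `util` and `frobnicate` names are invented, and it is not the assist's literal output.

```rust
// Invoking "generate function" on the unresolved call `util::frobnicate(x)`
// creates a stub in the `util` module and marks it `pub(crate)` because it
// is generated in a different module.
mod util {
    pub(crate) fn frobnicate(_x: u32) {
        todo!() // generated placeholder body, to be filled in by the user
    }
}

fn caller() {
    let x = 42;
    util::frobnicate(x);
}

fn main() {
    caller(); // panics on the `todo!()` until the stub is implemented
}
```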
4006: Syntax highlighting for format strings r=matklad a=ltentrup
I have an implementation of syntax highlighting for the format string modifiers `{}`.
The first commit refactors the changes in #3826 into a separate struct.
The second commit implements the highlighting: first, for each macro call, we check whether the macro is a format macro from `std`; if so, we remember the format string node. If we encounter this node during syntax highlighting, we check for the format modifiers `{}` using regular expressions.
There are a few places I am not quite sure about:
- Is the way I extract the macro names correct?
- Is the `HighlightTag::Attribute` suitable for highlighting the `{}`?
Let me know what you think, any feedback is welcome!
Co-authored-by: Timo Freiberg <[email protected]>
Co-authored-by: Leander Tentrup <[email protected]>
identifiers
Co-Authored-By: bjorn3 <[email protected]>
Detailed changes:
1) Implement a lexer for string literals that splits the string into format specifiers `{}`, including the format specifier modifiers.
2) Adapt syntax highlighting to add ranges for the detected sequences.
3) Add a test case for the format string syntax highlighting.
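A minimal sketch of what such a lexer could look like, assuming a simple scan over the literal text. `format_specifier_ranges` is an illustrative name, not the function added in this change.

```rust
use std::ops::Range;

/// Sketch of step 1: scan a format string and collect the byte ranges of
/// `{...}` specifiers (including their modifiers), skipping `{{` escapes,
/// so the highlighter can emit extra ranges for them.
fn format_specifier_ranges(text: &str) -> Vec<Range<usize>> {
    let mut ranges = Vec::new();
    let bytes = text.as_bytes();
    let mut i = 0;
    while i < bytes.len() {
        match bytes[i] {
            b'{' if bytes.get(i + 1) == Some(&b'{') => i += 2, // escaped `{{`
            b'{' => {
                // find the `}` that closes this specifier
                if let Some(off) = text[i..].find('}') {
                    ranges.push(i..i + off + 1);
                    i += off + 1;
                } else {
                    break; // unterminated specifier, stop scanning
                }
            }
            _ => i += 1,
        }
    }
    ranges
}

fn main() {
    let ranges = format_specifier_ranges("x = {x:>8}, y = {{literal}}");
    assert_eq!(ranges, vec![4..10]); // only the real specifier is reported
    println!("{:?}", ranges);
}
```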
closes #2518
3954: Improve autocompletion by looking on the type and name r=matklad a=bnjjj
This tweet (https://twitter.com/tjholowaychuk/status/1248918374731714560) gave me the idea to implement this in rust-analyzer.
Basically, as a first example, I handled the case where we are in a function call: I look at the parameter list to prioritize completions with the same types, and if both the type and the name match, the entry is displayed first in the completion list.
So here is a draft, a first step to open a discussion and hear what you think about the implementation. It works (see the tests), but I can probably make a better implementation in some places. Be aware that the code still needs some refactoring to be cleaner and more concise.
PS: It was a lot of fun writing this haha
Co-authored-by: Benjamin Coenen <[email protected]>
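A minimal sketch of the ranking idea, using made-up types (`Relevance`, `Candidate`) rather than the actual `Completions` changes in this PR.

```rust
/// When completing an argument in a function call, score each candidate by
/// how well it matches the expected parameter, then sort exact
/// (type + name) matches first.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
enum Relevance {
    TypeAndNameMatch, // best: same type and same name as the parameter
    TypeMatch,        // same type only
    None,             // no particular relation
}

struct Candidate<'a> {
    name: &'a str,
    ty: &'a str,
}

fn relevance(candidate: &Candidate, param_name: &str, param_ty: &str) -> Relevance {
    match (candidate.ty == param_ty, candidate.name == param_name) {
        (true, true) => Relevance::TypeAndNameMatch,
        (true, false) => Relevance::TypeMatch,
        _ => Relevance::None,
    }
}

fn main() {
    // Completing the argument of `fn send(msg: String)` with three locals in scope.
    let mut candidates = vec![
        Candidate { name: "count", ty: "usize" },
        Candidate { name: "text", ty: "String" },
        Candidate { name: "msg", ty: "String" },
    ];
    candidates.sort_by_key(|c| relevance(c, "msg", "String"));
    assert_eq!(candidates[0].name, "msg"); // exact match is ranked first
}
```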
Signed-off-by: Benjamin Coenen <[email protected]>
Signed-off-by: Benjamin Coenen <[email protected]>
Signed-off-by: Benjamin Coenen <[email protected]>
Signed-off-by: Benjamin Coenen <[email protected]>
Signed-off-by: Benjamin Coenen <[email protected]>
Signed-off-by: Benjamin Coenen <[email protected]>
implementation, include sort in Completions struct
Signed-off-by: Benjamin Coenen <[email protected]>
Signed-off-by: Benjamin Coenen <[email protected]>
Signed-off-by: Benjamin Coenen <[email protected]>
4065: Complete unqualified enum names in patterns and expressions r=matklad a=nathanwhit
This PR implements the completion described in #4014.
The result looks like so for patterns:
<img width="542" alt="Screen Shot 2020-04-20 at 3 53 55 PM" src="https://user-images.githubusercontent.com/17734409/79794010-8f529400-831f-11ea-9673-f838aa9bc962.png">
and for `expr`s:
<img width="620" alt="Screen Shot 2020-04-21 at 3 51 24 PM" src="https://user-images.githubusercontent.com/17734409/79908784-d73ded80-83e9-11ea-991d-921f0cb27e6f.png">
I'm not confident that the completion text itself is very robust, as it will unconditionally add completions for enum variants in the form `Enum::Variant`. This means (I believe) it would still suggest `Enum::Variant` even if the local name is changed (e.g. `use Enum as Foo`) or the variants are brought into scope, such as through `use Enum::*`.
Co-authored-by: nathanwhit <[email protected]>
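A small made-up example of the positions this completion now covers; the `Direction` enum and the surrounding functions are invented for illustration.

```rust
// Typing a bare variant name in a pattern or an expression now offers the
// qualified `Direction::North` / `Direction::South` completions.
enum Direction {
    North,
    South,
}

fn describe(d: Direction) -> &'static str {
    match d {
        // completing `Nor…` in this match arm offers `Direction::North`
        Direction::North => "up",
        Direction::South => "down",
    }
}

fn main() {
    // in expression position, completing `Sou…` offers `Direction::South`
    println!("{}", describe(Direction::South));
}
```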
Adds tests for completion of enum variants in match arms, if-let statements, and basic expressions.