path: root/crates
Commit log — each entry: commit message (Author, Date; files changed, lines -deleted/+added)
...
| * store params in the graph  (Aleksey Kladov, 2018-09-15; 1 file, -5/+7)
| |
| * any-cache  (Aleksey Kladov, 2018-09-15; 2 files, -21/+18)
| |
| * fix dep tracking  (Aleksey Kladov, 2018-09-15; 2 files, -31/+18)
| |
| * add deps tracking  (Aleksey Kladov, 2018-09-15; 3 files, -47/+148)
| |
| * Add simplistic global modification caching  (Aleksey Kladov, 2018-09-15; 3 files, -31/+111)
| |
| * initial query tracing  (Aleksey Kladov, 2018-09-15; 2 files, -13/+55)
| |
| * start query-based modules  (Aleksey Kladov, 2018-09-15; 7 files, -41/+361)
| |
* | Merge #69  (bors[bot], 2018-09-15; 4 files, -198/+360)
|\ \
| |
      69: Incremental reparsing for single tokens r=matklad a=darksv

      Implement incremental reparsing for `WHITESPACE`, `COMMENT`,
      `DOC_COMMENT`, `IDENT`, `STRING`, and `RAW_STRING`. This allows us to
      avoid reparsing whole blocks when a change was made only within these
      tokens.

      Co-authored-by: darksv <[email protected]>
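A minimal sketch of the idea behind this PR, with invented names and a toy lexer (the real libsyntax2 code differs): if an edit falls entirely inside one "reparse-friendly" token, re-lex just that token instead of reparsing the enclosing block, bailing out when re-lexing changes the token kind or produces a contextual keyword.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind { Whitespace, Comment, Ident, Str }

#[derive(Debug, Clone)]
struct Token { kind: TokenKind, start: usize, len: usize }

// Only these token kinds are safe to reparse in isolation.
fn is_reparse_friendly(kind: TokenKind) -> bool {
    matches!(kind, TokenKind::Whitespace | TokenKind::Comment | TokenKind::Ident | TokenKind::Str)
}

// An IDENT that becomes a contextual keyword (e.g. `union`) can change the
// surrounding parse, so it must fall back to a full reparse.
fn is_contextual_keyword(text: &str) -> bool {
    matches!(text, "union" | "auto" | "default")
}

// Toy lexer: succeeds only if `text` is exactly one token.
fn lex_one(text: &str) -> Option<TokenKind> {
    let first = text.chars().next()?;
    if text.starts_with("//") {
        Some(TokenKind::Comment)
    } else if first == '"' && text.len() >= 2 && text.ends_with('"') {
        Some(TokenKind::Str)
    } else if text.chars().all(char::is_whitespace) {
        Some(TokenKind::Whitespace)
    } else if (first.is_alphabetic() || first == '_')
        && text.chars().all(|c| c.is_alphanumeric() || c == '_')
    {
        Some(TokenKind::Ident)
    } else {
        None
    }
}

// Returns the relexed token on success; None means "reparse a bigger node".
fn try_reparse_token(
    tokens: &[Token],
    new_text: &str, // full text *after* the edit
    edit_start: usize,
    old_len: usize, // length of the replaced range
    new_len: usize, // length of the inserted text
) -> Option<Token> {
    // The edit must be fully contained in a single reparse-friendly token.
    let tok = tokens.iter().find(|t| {
        t.start <= edit_start && edit_start + old_len <= t.start + t.len
    })?;
    if !is_reparse_friendly(tok.kind) {
        return None;
    }
    let len = tok.len + new_len - old_len;
    let text = &new_text[tok.start..tok.start + len];
    // Re-lexing must yield exactly one token of the same kind.
    if lex_one(text)? != tok.kind {
        return None;
    }
    if tok.kind == TokenKind::Ident && is_contextual_keyword(text) {
        return None;
    }
    Some(Token { kind: tok.kind, start: tok.start, len })
}

fn main() {
    // "foo bar": Ident(0..3), Whitespace(3..4), Ident(4..7)
    let tokens = vec![
        Token { kind: TokenKind::Ident, start: 0, len: 3 },
        Token { kind: TokenKind::Whitespace, start: 3, len: 1 },
        Token { kind: TokenKind::Ident, start: 4, len: 3 },
    ];
    // Renaming `bar` to `bazz` stays a single IDENT: fast path works.
    println!("{:?}", try_reparse_token(&tokens, "foo bazz", 4, 3, 4));
    // Renaming `bar` to `union` hits a contextual keyword: full reparse.
    println!("{:?}", try_reparse_token(&tokens, "foo union", 4, 3, 5));
}
```

The "changed to contextual keywords" commit below corresponds to the `is_contextual_keyword` bailout: without it, an IDENT edited into `union` would be relexed as a plain identifier and the tree would go stale.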
| * independent tests for incremental reparsing of blocks and leaves  (darksv, 2018-09-15; 1 file, -48/+68)
| |
| * move reparsing tests  (darksv, 2018-09-15; 3 files, -129/+140)
| |
| * commit missing file  (darksv, 2018-09-15; 1 file, -1/+1)
| |
| * create separate mod for reparsing functionality  (darksv, 2018-09-15; 3 files, -170/+200)
| |
| * correctly handle IDENTs when changed to contextual keywords  (darksv, 2018-09-14; 1 file, -1/+15)
| |
| * create leaf directly without calling the parser  (darksv, 2018-09-14; 1 file, -23/+2)
| |
| * Incremental reparsing for single tokens (WHITESPACE, COMMENT, DOC_COMMENT, IDENT, STRING, RAW_STRING)  (darksv, 2018-09-13; 2 files, -14/+122)
* | adjust trailing newline  (darksv, 2018-09-14; 2 files, -2/+4)
| |
* | add missing files with inline tests  (darksv, 2018-09-14; 2 files, -0/+13)
| |
* | Support for unions  (darksv, 2018-09-14; 4 files, -7/+161)
|/
* don't get stuck in slice patterns  (Aleksey Kladov, 2018-09-12; 2 files, -21/+122)
|
* correctly setup path-map for fs-changes  (Aleksey Kladov, 2018-09-12; 1 file, -23/+24)
|
* Merge #68  (bors[bot], 2018-09-11; 9 files, -28/+98)
|\

      68: Implement incremental reparsing for remaining braced blocks
      r=matklad a=darksv

      Fixes #66

      Co-authored-by: darksv <[email protected]>
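A hedged sketch of the bailout implemented by "Do not reparse token tree when it is not delimited by braces" below (function names invented): a block is only a candidate for local reparsing if its edited text is still a single, brace-balanced `{ ... }`; anything else must bubble up to a larger reparse.

```rust
// Returns true if every `}` closes a matching `{` and nothing stays open.
fn braces_balanced(text: &str) -> bool {
    let mut depth: i32 = 0;
    for c in text.chars() {
        match c {
            '{' => depth += 1,
            '}' => {
                depth -= 1;
                if depth < 0 {
                    return false; // closed a brace we never opened
                }
            }
            _ => {}
        }
    }
    depth == 0
}

// A block can be reparsed in isolation only if it is actually delimited by
// braces; an undelimited token tree is not a safe reparse boundary.
fn is_reparsable_block(new_block_text: &str) -> bool {
    new_block_text.starts_with('{')
        && new_block_text.ends_with('}')
        && braces_balanced(new_block_text)
}

fn main() {
    println!("{}", is_reparsable_block("{ let x = 1; { } }"));
    println!("{}", is_reparsable_block("let x = 1;")); // not delimited
    println!("{}", is_reparsable_block("{ {"));        // unbalanced
}
```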
| * Do not reparse token tree when it is not delimited by braces  (darksv, 2018-09-10; 2 files, -1/+6)
| |
| * Implement reparsing for remaining blocks  (darksv, 2018-09-10; 9 files, -28/+93)
| |
* | store file resolver  (Aleksey Kladov, 2018-09-10; 7 files, -116/+146)
|/
* Merge #65  (bors[bot], 2018-09-08; 1 file, -1/+1)
|\

      65: simplify r=matklad a=matklad

      Co-authored-by: Aleksey Kladov <[email protected]>

| * simplify  (Aleksey Kladov, 2018-09-08; 1 file, -1/+1)
| |
* | Fix yet another parser infinite loop  (Aleksey Kladov, 2018-09-08; 2 files, -8/+10)
| |
      This commit is an example of fixing a common parser error: an infinite
      loop due to error recovery. This error typically happens when we parse a
      list of items and fail to parse a specific item at the current position.

      One choice is to skip a token and try to parse a list item at the next
      position. This is a good, but not universal, default. When parsing a
      list of arguments in a function call, for example, you don't want to
      skip over `fn`, because it is most likely a function declaration and not
      a mistyped arg:

      ```
      fn foo() {
          quux(1, 2

      fn bar() {
      }
      ```

      Another choice is to bail out of the loop immediately, but that isn't
      perfect either: sometimes skipping over garbage helps:

      ```
      quux(1, foo:, 92) // should skip over `:`, b/c that's part of `foo::bar`
      ```

      In general, the parser tries to balance these two cases, though we don't
      have a definitive strategy yet. However, if the parser accidentally
      neither skips a token nor breaks out of the loop, it gets stuck in the
      loop forever (though there's an internal counter to self-check this
      situation and panic), and that's exactly what the test demonstrates.

      To fix such a situation, first add the test case to
      tests/data/parser/{err,fuzz-failures}. Then run

      ```
      RUST_BACKTRACE=short cargo test --package libsyntax2
      ```

      to verify that the parser indeed panics, and to get an idea which
      grammar production is the culprit (look for `_list` functions!). In this
      case, I see

      ```
      10: libsyntax2::grammar::expressions::atom::match_arm_list
                 at crates/libsyntax2/src/grammar/expressions/atom.rs:309
      ```

      and that looks like it might be the culprit.

      I verified it by adding `eprintln!("loopy {:?}", p.current());` and
      indeed saw that this line is printed repeatedly. Diagnosing a bit
      further shows that the problem is that the `pattern::pattern` function
      does not consume anything if the next token is `let`. That is a good
      default: it makes cases like

      ```
      let let foo = 92;
      ```

      where the user hasn't typed the pattern yet, parse reasonably. For match
      arms, though, a pattern is pretty much the only thing we expect, so, as
      a fix, I introduce a special variant of pattern parsing that does not do
      recovery.
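The two recovery choices and the progress invariant described above can be modeled in a toy list-parsing loop (all names invented; this is not libsyntax2's actual parser). Every iteration must either consume a token or break; a fuel counter stands in for the internal self-check that panics instead of spinning forever.

```rust
struct Parser {
    tokens: Vec<&'static str>,
    pos: usize,
}

impl Parser {
    fn current(&self) -> Option<&'static str> {
        self.tokens.get(self.pos).copied()
    }

    fn bump(&mut self) {
        self.pos += 1;
    }

    // Parse a comma-separated list of "patterns". `recover` picks between
    // the two strategies from the commit message: skip the garbage token, or
    // bail out of the loop. Either way, every iteration makes progress.
    fn pattern_list(&mut self, recover: bool) -> Vec<&'static str> {
        let mut out = Vec::new();
        let mut fuel = 100;
        while let Some(tok) = self.current() {
            fuel -= 1;
            assert!(fuel > 0, "parser is stuck in a loop");
            if tok == "," {
                self.bump();
            } else if starts_pattern(tok) {
                out.push(tok);
                self.bump();
            } else if recover {
                self.bump(); // choice 1: skip a token and retry
            } else {
                break; // choice 2: bail out immediately
            }
        }
        out
    }
}

// `let` must not start a pattern here; consuming nothing on `let` *without*
// breaking is exactly the kind of bug the commit describes.
fn starts_pattern(tok: &str) -> bool {
    tok != "let" && tok != "fn"
}

fn main() {
    let mut p = Parser { tokens: vec!["a", ",", "let", "b"], pos: 0 };
    println!("{:?}", p.pattern_list(false)); // bails at `let`
    let mut p = Parser { tokens: vec!["a", ",", "let", "b"], pos: 0 };
    println!("{:?}", p.pattern_list(true)); // skips `let`, keeps going
}
```

The buggy case is the one the loop above makes impossible: a branch that neither bumps nor breaks. If the `recover`/`break` arm were replaced by a bare `continue`, the fuel assertion would fire, mirroring the panic the commit message uses to localize the culprit.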
* | Add fuzz failures dir  (Aleksey Kladov, 2018-09-08; 2 files, -4/+213)
|/
* move fuzz-invariants to the library  (Aleksey Kladov, 2018-09-08; 3 files, -41/+44)
|
* Add trivial fuzzer for parser  (Pascal Hertleif, 2018-09-08; 3 files, -0/+38)
|
      As described in #61, fuzz testing some parts of this would be ~~fun~~
      helpful. So, I started with the most trivial fuzzer I could think of:
      put random stuff into `File::parse` and see what happens.

      To speed things up, I also did `cp src/**/*.rs fuzz/corpus/parser/` in
      the `crates/libsyntax2/` directory (running the fuzzer once will
      generate the necessary directories).
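A sketch of what such a fuzzer checks, kept self-contained with a stand-in `parse` function (the real target feeds the bytes to `File::parse`; the invariant here, lossless round-tripping without panics, matches the "fuzz-invariants" commit above, but the code is illustrative, not the project's):

```rust
// Stand-in parser: a real lossless syntax tree's text equals its input.
fn parse(text: &str) -> String {
    text.to_string()
}

// The property a fuzzer hammers on: parsing arbitrary bytes never panics,
// and for valid UTF-8 the resulting tree reproduces the input exactly.
fn check_parser_invariants(data: &[u8]) {
    if let Ok(text) = std::str::from_utf8(data) {
        let tree_text = parse(text);
        assert_eq!(tree_text, text, "syntax tree must reproduce the input");
    }
}

fn main() {
    for input in [&b"fn foo() {"[..], &b"\xff\xfe"[..], &b"let let x = 92;"[..]] {
        check_parser_invariants(input);
    }
    println!("invariants hold");
}
```

In a real cargo-fuzz setup this check would live inside a `fuzz_target!` body, with libFuzzer supplying the byte slices instead of the hard-coded samples above.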
* Don't overflow when limiting symbol search  (Aleksey Kladov, 2018-09-08; 1 file, -3/+2)
|
* Some abstraction around workers  (Aleksey Kladov, 2018-09-08; 5 files, -74/+92)
|
* Deal with deadlocks in a more principled way  (Aleksey Kladov, 2018-09-08; 6 files, -27/+39)
|
* fix deadlock  (Aleksey Kladov, 2018-09-08; 1 file, -14/+18)
|
* Fix block structure in enums  (Aleksey Kladov, 2018-09-08; 5 files, -25/+274)
|
* simplify  (Aleksey Kladov, 2018-09-08; 5 files, -9/+14)
|
* Don't get stuck in tuple exprs  (Aleksey Kladov, 2018-09-08; 4 files, -96/+484)
|
* Don't get stuck in macros  (Aleksey Kladov, 2018-09-08; 3 files, -1/+54)
|
* fix stuck parser  (Aleksey Kladov, 2018-09-08; 7 files, -2/+719)
|
* fix labeled expressions  (Aleksey Kladov, 2018-09-08; 9 files, -14/+313)
|
* nested mod completion  (Aleksey Kladov, 2018-09-07; 5 files, -46/+68)
|
* Remove dyn dispatch  (Aleksey Kladov, 2018-09-07; 3 files, -21/+65)
|
* Separate API from impl  (Aleksey Kladov, 2018-09-07; 1 file, -14/+14)
|
      Looks like there's a rule of thumb: don't call API functions from an
      implementation! In this case, following it saves us an Arc bump!
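A sketch of that rule of thumb with invented types (not the commit's actual code): the public API clones an `Arc` for the caller, so internal code that goes through the API pays a needless refcount bump, while calling the implementation directly just borrows.

```rust
use std::sync::Arc;

struct State {
    data: Arc<Vec<u32>>,
}

impl State {
    // Public API: hands out an owning handle, bumping the refcount.
    fn data(&self) -> Arc<Vec<u32>> {
        Arc::clone(&self.data)
    }

    // Implementation: just borrows, no Arc bump.
    fn data_impl(&self) -> &[u32] {
        &self.data
    }

    // Anti-pattern: calling the API from inside the implementation.
    fn sum_via_api(&self) -> u32 {
        self.data().iter().sum()
    }

    // Following the rule of thumb: use the impl function internally.
    fn sum(&self) -> u32 {
        self.data_impl().iter().sum()
    }
}

fn main() {
    let s = State { data: Arc::new(vec![1, 2, 3]) };
    // Same result either way, but `sum_via_api` pays two extra atomic
    // refcount operations per call for a clone it immediately drops.
    println!("{} {}", s.sum(), s.sum_via_api());
    println!("strong_count = {}", Arc::strong_count(&s.data));
}
```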
* Moved TokenSet into its own file.  (Zac Winter, 2018-09-06; 4 files, -37/+41)
|
* better introduce  (Aleksey Kladov, 2018-09-05; 1 file, -8/+26)
|
* introduce variable  (Aleksey Kladov, 2018-09-05; 6 files, -10/+72)
|
* use correct workdir for the server  (Aleksey Kladov, 2018-09-05; 2 files, -10/+19)
|
* even fewer hacks  (Aleksey Kladov, 2018-09-05; 1 file, -18/+16)
|
* less hacky paths  (Aleksey Kladov, 2018-09-05; 1 file, -12/+9)
|
* Merge #56  (bors[bot], 2018-09-05; 5 files, -45/+137)
|\

      56: Unify lookahead naming between parser and lexer. r=matklad
      a=zachlute

      Resolves issue #26.

      I wanted to play around with libsyntax2, and fixing a random issue
      seemed like a good way to mess around in the code. This PR mostly does
      what's suggested in that issue. I elected to go with `at` and `at_str`
      instead of trying to do any fancy overloading shenanigans, because...
      uh, well, frankly I don't really know how to do any fancy overloading
      shenanigans.

      The only really questionable bit is `nth_is_p`, which could also have
      been named `nth_at_p`, but `is` seemed more apropos.

      I also added simple tests for `Ptr` so I could be less terrified of
      breaking something. Comments and criticisms very welcome. I'm still
      pretty new to Rust.

      Co-authored-by: Zach Lute <[email protected]>
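The unified lookahead naming the PR describes might look roughly like this (the parser struct and token kinds here are invented for illustration, not libsyntax2's real types):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Kind { Ident, Comma, Eof }

struct Parser {
    kinds: Vec<Kind>,
    texts: Vec<&'static str>,
    pos: usize,
}

impl Parser {
    // Kind of the token `n` positions ahead of the current one.
    fn nth(&self, n: usize) -> Kind {
        *self.kinds.get(self.pos + n).unwrap_or(&Kind::Eof)
    }

    // Is the current token of this kind?
    fn at(&self, kind: Kind) -> bool {
        self.nth(0) == kind
    }

    // Does the current token have exactly this text? Useful for contextual
    // keywords like `union`, which lex as plain identifiers.
    fn at_str(&self, s: &str) -> bool {
        self.texts.get(self.pos).map_or(false, |t| *t == s)
    }

    // Does the token `n` positions ahead satisfy a predicate?
    fn nth_is_p(&self, n: usize, p: impl Fn(Kind) -> bool) -> bool {
        p(self.nth(n))
    }
}

fn main() {
    let p = Parser {
        kinds: vec![Kind::Ident, Kind::Comma],
        texts: vec!["union", ","],
        pos: 0,
    };
    println!("{} {} {:?}", p.at(Kind::Ident), p.at_str("union"), p.nth(1));
}
```

Splitting `at` (by kind) from `at_str` (by text) sidesteps the overloading question the author mentions: Rust has no ad-hoc overloading, so two plainly named methods beat a trait-based `at` that accepts either argument type.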
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 56: Unify lookahead naming between parser and lexer. r=matklad a=zachlute Resolves Issue #26. I wanted to play around with libsyntax2, and fixing a random issue seemed like a good way to mess around in the code. This PR mostly does what's suggested in that issue. I elected to go with `at` and `at_str` instead of trying to do any fancy overloading shenanigans, because...uh, well, frankly I don't really know how to do any fancy overloading shenanigans. The only really questionable bit is `nth_is_p`, which could also have potentially been named `nth_at_p`, but `is` seemed more apropos. I also added simple tests for `Ptr` so I could be less terrified I broke something. Comments and criticisms very welcome. I'm still pretty new to Rust. Co-authored-by: Zach Lute <[email protected]>