Diffstat (limited to 'docs')
 docs/dev/README.md | 345
 docs/dev/style.md  | 212
 docs/dev/syntax.md |   4
 3 files changed, 297 insertions(+), 264 deletions(-)
diff --git a/docs/dev/README.md b/docs/dev/README.md
index 417352c9d..67813a9c0 100644
--- a/docs/dev/README.md
+++ b/docs/dev/README.md
@@ -50,271 +50,85 @@ We use bors-ng to enforce the [not rocket science](https://graydon2.dreamwidth.o

 You can run `cargo xtask install-pre-commit-hook` to install a git hook that runs rustfmt on commit.

53# Code organization
54
55All Rust code lives in the `crates` top-level directory, and is organized as a
56single Cargo workspace. The `editors` top-level directory contains code for
57integrating with editors. Currently, it contains the plugin for VS Code (in
58TypeScript). The `docs` top-level directory contains both developer and user
59documentation.
60
61We have some automation infra in Rust in the `xtask` package. It contains
62stuff like formatting checking, code generation and powers `cargo xtask install`.
63The latter syntax is achieved with the help of cargo aliases (see `.cargo`
64directory).
65
-# Launching rust-analyzer
-
-Debugging the language server can be tricky: LSP is rather chatty, so driving it
-from the command line is not really feasible, driving it via VS Code requires
-interacting with two processes.
-
-For this reason, the best way to see how rust-analyzer works is to find a
-relevant test and execute it (VS Code includes an action for running a single
-test).
-
-However, launching a VS Code instance with a locally built language server is
-possible. There's **"Run Extension (Debug Build)"** launch configuration for this.
-
-In general, I use one of the following workflows for fixing bugs and
-implementing features.
-
-If the problem concerns only internal parts of rust-analyzer (i.e. I don't need
-to touch the `rust-analyzer` crate or TypeScript code), there is a unit-test for it.
-So, I use **Rust Analyzer: Run** action in VS Code to run this single test, and
-then just do printf-driven development/debugging. As a sanity check after I'm
-done, I use `cargo xtask install --server` and **Reload Window** action in VS
-Code to sanity check that the thing works as I expect.
-
-If the problem concerns only the VS Code extension, I use **Run Installed Extension**
-launch configuration from `launch.json`. Notably, this uses the usual
-`rust-analyzer` binary from `PATH`. For this, it is important to have the following
-in your `settings.json` file:
-```json
-{
-    "rust-analyzer.serverPath": "rust-analyzer"
-}
-```
-After I am done with the fix, I use `cargo
-xtask install --client-code` to try the new extension for real.
+# Launching rust-analyzer
+
+Debugging the language server can be tricky.
+LSP is rather chatty, so driving it from the command line is not really feasible, driving it via VS Code requires interacting with two processes.
+
+For this reason, the best way to see how rust-analyzer works is to find a relevant test and execute it.
+VS Code & Emacs include an action for running a single test.
+
+Launching a VS Code instance with a locally built language server is also possible.
+There's **"Run Extension (Debug Build)"** launch configuration for this in VS Code.
+
+In general, I use one of the following workflows for fixing bugs and implementing features:
+
+If the problem concerns only internal parts of rust-analyzer (i.e. I don't need to touch the `rust-analyzer` crate or TypeScript code), there is a unit-test for it.
+So, I use **Rust Analyzer: Run** action in VS Code to run this single test, and then just do printf-driven development/debugging.
+As a sanity check after I'm done, I use `cargo xtask install --server` and **Reload Window** action in VS Code to verify that the thing works as I expect.
+
+If the problem concerns only the VS Code extension, I use **Run Installed Extension** launch configuration from `launch.json`.
+Notably, this uses the usual `rust-analyzer` binary from `PATH`.
+For this, it is important to have the following in your `settings.json` file:
+```json
+{
+    "rust-analyzer.serverPath": "rust-analyzer"
+}
+```
+After I am done with the fix, I use `cargo xtask install --client-code` to try the new extension for real.
100
101If I need to fix something in the `rust-analyzer` crate, I feel sad because it's
102on the boundary between the two processes, and working there is slow. I usually
103just `cargo xtask install --server` and poke changes from my live environment.
104Note that this uses `--release`, which is usually faster overall, because
105loading stdlib into debug version of rust-analyzer takes a lot of time. To speed
106things up, sometimes I open a temporary hello-world project which has
107`"rust-analyzer.withSysroot": false` in `.code/settings.json`. This flag causes
108rust-analyzer to skip loading the sysroot, which greatly reduces the amount of
109things rust-analyzer needs to do, and makes printf's more useful. Note that you
110should only use the `eprint!` family of macros for debugging: stdout is used for LSP
111communication, and `print!` would break it.
112
113If I need to fix something simultaneously in the server and in the client, I
114feel even more sad. I don't have a specific workflow for this case.
115
116Additionally, I use `cargo run --release -p rust-analyzer -- analysis-stats
117path/to/some/rust/crate` to run a batch analysis. This is primarily useful for
118performance optimizations, or for bug minimization.
119
120# Code Style & Review Process
121
122Our approach to "clean code" is two-fold:
123
124* We generally don't block PRs on style changes.
125* At the same time, all code in rust-analyzer is constantly refactored.
126
127It is explicitly OK for a reviewer to flag only some nits in the PR, and then send a follow-up cleanup PR for things which are easier to explain by example, cc-ing the original author.
128Sending small cleanup PRs (like renaming a single local variable) is encouraged.
129
130## Scale of Changes
131
132Everyone knows that it's better to send small & focused pull requests.
133The problem is, sometimes you *have* to, eg, rewrite the whole compiler, and that just doesn't fit into a set of isolated PRs.
134
135The main things to keep an eye on are the boundaries between various components.
136There are three kinds of changes:
137
1381. Internals of a single component are changed.
139 Specifically, you don't change any `pub` items.
140 A good example here would be an addition of a new assist.
141
1422. API of a component is expanded.
143 Specifically, you add a new `pub` function which wasn't there before.
144 A good example here would be expansion of assist API, for example, to implement lazy assists or assists groups.
145
1463. A new dependency between components is introduced.
147 Specifically, you add a `pub use` reexport from another crate or you add a new line to the `[dependencies]` section of `Cargo.toml`.
148 A good example here would be adding reference search capability to the assists crates.
149 79
-For the first group, the change is generally merged as long as:
+If I need to fix something in the `rust-analyzer` crate, I feel sad because it's on the boundary between the two processes, and working there is slow.
81I usually just `cargo xtask install --server` and poke changes from my live environment.
82Note that this uses `--release`, which is usually faster overall, because loading stdlib into debug version of rust-analyzer takes a lot of time.
83To speed things up, sometimes I open a temporary hello-world project which has `"rust-analyzer.withSysroot": false` in `.vscode/settings.json`.
84This flag causes rust-analyzer to skip loading the sysroot, which greatly reduces the amount of things rust-analyzer needs to do, and makes printf's more useful.
85Note that you should only use the `eprint!` family of macros for debugging: stdout is used for LSP communication, and `print!` would break it.
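To make the `eprint!` point concrete, here is a minimal sketch; the function and message are made up, only the stdout/stderr distinction matters:

```rust
fn trace_request(method: &str) {
    // stderr is free for ad-hoc printf-debugging.
    eprintln!("got request: {}", method);

    // Do NOT use `print!`/`println!` here: stdout carries the LSP JSON-RPC stream,
    // and any stray output on it corrupts communication with the editor.
    // println!("got request: {}", method);
}
```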
+
+If I need to fix something simultaneously in the server and in the client, I feel even more sad.
+I don't have a specific workflow for this case.
+
+Additionally, I use `cargo run --release -p rust-analyzer -- analysis-stats path/to/some/rust/crate` to run a batch analysis.
+This is primarily useful for performance optimizations, or for bug minimization.
-
-* it works for the happy case,
-* it has tests,
-* it doesn't panic for the unhappy case.
-
-For the second group, the change would be subjected to quite a bit of scrutiny and iteration.
-The new API needs to be right (or at least easy to change later).
158The actual implementation doesn't matter that much.
159It's very important to minimize the amount of changed lines of code for changes of the second kind.
160Often, you start doing a change of the first kind, only to realise that you need to elevate to a change of the second kind.
161In this case, we'll probably ask you to split API changes into a separate PR.
162
163Changes of the third group should be pretty rare, so we don't specify any specific process for them.
164That said, adding an innocent-looking `pub use` is a very simple way to break encapsulation, keep an eye on it!
165
166Note: if you enjoyed this abstract hand-waving about boundaries, you might appreciate
167https://www.tedinski.com/2018/02/06/system-boundaries.html
168
169## Minimal Tests
170
171Most tests in rust-analyzer start with a snippet of Rust code.
172This snippets should be minimal -- if you copy-paste a snippet of real code into the tests, make sure to remove everything which could be removed.
173There are many benefits to this:
174
175* less to read or to scroll past
176* easier to understand what exactly is tested
177* less stuff printed during printf-debugging
178* less time to run test
179
180It also makes sense to format snippets more compactly (for example, by placing enum defitions like `enum E { Foo, Bar }` on a single line),
181as long as they are still readable.
182
183## Order of Imports
184
185We separate import groups with blank lines
186
187```rust
188mod x;
189mod y;
190
191use std::{ ... }
192
193use crate_foo::{ ... }
194use crate_bar::{ ... }
195
196use crate::{}
197
198use super::{} // but prefer `use crate::`
199```
-
-## Import Style
202
203Items from `hir` and `ast` should be used qualified:
204
205```rust
206// Good
207use ra_syntax::ast;
208
209fn frobnicate(func: hir::Function, strukt: ast::StructDef) {}
210
211// Not as good
212use hir::Function;
213use ra_syntax::ast::StructDef;
214
215fn frobnicate(func: Function, strukt: StructDef) {}
216```
217
218Avoid local `use MyEnum::*` imports.
219
220Prefer `use crate::foo::bar` to `use super::bar`.
221
222## Order of Items
223
224Optimize for the reader who sees the file for the first time, and wants to get the general idea about what's going on.
225People read things from top to bottom, so place most important things first.
226
227Specifically, if all items except one are private, always put the non-private item on top.
228
229Put `struct`s and `enum`s first, functions and impls last.
230
231Do
232
233```rust
234// Good
235struct Foo {
236 bars: Vec<Bar>
237}
238
239struct Bar;
240```
241
242rather than
243
244```rust
245// Not as good
246struct Bar;
247
248struct Foo {
249 bars: Vec<Bar>
250}
251```
252
253## Variable Naming
254
255We generally use boring and long names for local variables ([yay code completion](https://github.com/rust-analyzer/rust-analyzer/pull/4162#discussion_r417130973)).
256The default name is a lowercased name of the type: `global_state: GlobalState`.
257Avoid ad-hoc acronyms and contractions, but use the ones that exist consistently (`db`, `ctx`, `acc`).
258The default name for "result of the function" local variable is `res`.
259
260## Collection types
261 94
262We prefer `rustc_hash::FxHashMap` and `rustc_hash::FxHashSet` instead of the ones in `std::collections`. 95Tests for the parser (`ra_parser`) live in the `ra_syntax` crate (see `test_data` directory).
263They use a hasher that's slightly faster and using them consistently will reduce code size by some small amount. 96There are two kinds of tests:
264 97
265## Preconditions 98* Manually written test cases in `parser/ok` and `parser/err`
99* "Inline" tests in `parser/inline` (these are generated) from comments in `ra_parser` crate.
266 100
267Function preconditions should generally be expressed in types and provided by the caller (rather than checked by callee): 101The purpose of inline tests is not to achieve full coverage by test cases, but to explain to the reader of the code what each particular `if` and `match` is responsible for.
102If you are tempted to add a large inline test, it might be a good idea to leave only the simplest example in place, and move the test to a manual `parser/ok` test.
268 103
269```rust 104To update test data, run with `UPDATE_EXPECT` variable:
270// Good
271fn frbonicate(walrus: Walrus) {
272 ...
273}
274 105
275// Not as good 106```bash
276fn frobnicate(walrus: Option<Walrus>) { 107env UPDATE_EXPECT=1 cargo qt
277 let walrus = match walrus {
278 Some(it) => it,
279 None => return,
280 };
281 ...
282}
283``` 108```
284 109
285## Premature Pessimization 110After adding a new inline test you need to run `cargo xtest codegen` and also update the test data as described above.
286
287While we don't specifically optimize code yet, avoid writing code which is slower than it needs to be.
288Don't allocate a `Vec` where an iterator would do, don't allocate strings needlessly.
289 111
290```rust 112## TypeScript Tests
291// Good
292use itertools::Itertools;
293 113
294let (first_word, second_word) = match text.split_ascii_whitespace().collect_tuple() { 114If you change files under `editors/code` and would like to run the tests and linter, install npm and run:
295 Some(it) => it,
296 None => return,
297}
298 115
299// Not as good 116```bash
300let words = text.split_ascii_whitespace().collect::<Vec<_>>(); 117cd editors/code
301if words.len() != 2 { 118npm ci
302 return 119npm run lint
303}
304``` 120```
305 121
306## Documentation 122# Code organization
307
308For `.md` and `.adoc` files, prefer a sentence-per-line format, don't wrap lines.
309If the line is too long, you want to split the sentence in two :-)
310
311## Commit Style
312 123
313We don't have specific rules around git history hygiene. 124All Rust code lives in the `crates` top-level directory, and is organized as a single Cargo workspace.
314Maintaining clean git history is encouraged, but not enforced. 125The `editors` top-level directory contains code for integrating with editors.
315We use rebase workflow, it's OK to rewrite history during PR review process. 126Currently, it contains the plugin for VS Code (in TypeScript).
127The `docs` top-level directory contains both developer and user documentation.
316 128
317Avoid @mentioning people in commit messages and pull request descriptions (they are added to commit message by bors), as such messages create a lot of duplicate notification traffic during rebases. 129We have some automation infra in Rust in the `xtask` package.
130It contains stuff like formatting checking, code generation and powers `cargo xtask install`.
131The latter syntax is achieved with the help of cargo aliases (see `.cargo` directory).
318 132
319# Architecture Invariants 133# Architecture Invariants
320 134
@@ -349,35 +163,11 @@ The main IDE crate (`ra_ide`) uses "Plain Old Data" for the API.
 Rather than talking in definitions and references, it talks in Strings and textual offsets.
 In general, API is centered around UI concerns -- the result of the call is what the user sees in the editor, and not what the compiler sees underneath.
 The results are 100% Rust specific though.
+Shout outs to LSP developers for popularizing the idea that "UI" is a good place to draw a boundary at.

+# Code Style & Review Process
-## Parser Tests
354
355Tests for the parser (`ra_parser`) live in the `ra_syntax` crate (see `test_data` directory).
356There are two kinds of tests:
357
358* Manually written test cases in `parser/ok` and `parser/err`
359* "Inline" tests in `parser/inline` (these are generated) from comments in `ra_parser` crate.
360
361The purpose of inline tests is not to achieve full coverage by test cases, but to explain to the reader of the code what each particular `if` and `match` is responsible for.
362If you are tempted to add a large inline test, it might be a good idea to leave only the simplest example in place, and move the test to a manual `parser/ok` test.
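For illustration, an inline test is a `// test <name>` comment followed by the code to parse, placed next to the grammar function it exercises; `cargo xtask codegen` then extracts it into `test_data/parser/inline`. A rough sketch, with an invented grammar function and a stand-in `Parser` type (the exact comment format in `ra_parser` may differ in detail):

```rust
// Stand-in so the sketch is self-contained; the real `Parser` lives in `ra_parser`.
pub struct Parser;

// test record_struct_def
// struct S { a: u32, b: String }
pub fn record_struct_def(p: &mut Parser) {
    // The real function would consume tokens here; elided in this sketch.
    let _ = p;
}
```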
363
364To update test data, run with `UPDATE_EXPECT` variable:
365
366```bash
367env UPDATE_EXPECT=1 cargo qt
368```
369
370After adding a new inline test you need to run `cargo xtest codegen` and also update the test data as described above.
371
372## TypeScript Tests
373
374If you change files under `editors/code` and would like to run the tests and linter, install npm and run:
-
-```bash
-cd editors/code
-npm ci
-npm run lint
-```
+
+Do see [./style.md](./style.md).

 # Logging

@@ -445,3 +235,34 @@ For measuring time of incremental analysis, use either of these:
 $ cargo run --release -p rust-analyzer -- analysis-bench ../chalk/ --highlight ../chalk/chalk-engine/src/logic.rs
 $ cargo run --release -p rust-analyzer -- analysis-bench ../chalk/ --complete ../chalk/chalk-engine/src/logic.rs:94:0
 ```
238
239# Release Process
240
241The release process is handled by the `release`, `dist` and `promote` xtasks, `release` being the main one.
242
243`release` assumes that you have checkouts of `rust-analyzer`, `rust-analyzer.github.io`, and `rust-lang/rust` in the same directory:
244
245```
246./rust-analyzer
247./rust-analyzer.github.io
248./rust-rust-analyzer # Note the name!
249```
250
251Additionally, it assumes that the remote for `rust-analyzer` is called `upstream` (I use `origin` to point to my fork).
252
253Release steps:
254
2551. Inside rust-analyzer, run `cargo xtask release`. This will:
256 * checkout the `release` branch
257 * reset it to `upstream/nightly`
258 * push it to `upstream`. This triggers GitHub Actions which:
259    * runs `cargo xtask dist` to package binaries and VS Code extension
260    * makes a GitHub release
261    * pushes VS Code extension to the marketplace
262 * create new changelog in `rust-analyzer.github.io`
263 * create `rust-analyzer.github.io/git.log` file with the log of merge commits since last release
2642. While the release is in progress, fill in the changelog using `git.log`
2653. Commit & push the changelog
2664. Tweet
2675. Inside `rust-analyzer`, run `cargo xtask promote` -- this will create a PR to rust-lang/rust updating rust-analyzer's submodule.
268 Self-approve the PR.
diff --git a/docs/dev/style.md b/docs/dev/style.md
new file mode 100644
index 000000000..1c68f5702
--- /dev/null
+++ b/docs/dev/style.md
@@ -0,0 +1,212 @@
1Our approach to "clean code" is two-fold:
2
3* We generally don't block PRs on style changes.
4* At the same time, all code in rust-analyzer is constantly refactored.
5
6It is explicitly OK for a reviewer to flag only some nits in the PR, and then send a follow-up cleanup PR for things which are easier to explain by example, cc-ing the original author.
7Sending small cleanup PRs (like renaming a single local variable) is encouraged.
8
9# Scale of Changes
10
11Everyone knows that it's better to send small & focused pull requests.
12The problem is, sometimes you *have* to, e.g., rewrite the whole compiler, and that just doesn't fit into a set of isolated PRs.
13
14The main things to keep an eye on are the boundaries between various components.
15There are three kinds of changes:
16
171. Internals of a single component are changed.
18 Specifically, you don't change any `pub` items.
19 A good example here would be an addition of a new assist.
20
212. API of a component is expanded.
22 Specifically, you add a new `pub` function which wasn't there before.
23 A good example here would be expansion of assist API, for example, to implement lazy assists or assists groups.
24
253. A new dependency between components is introduced.
26 Specifically, you add a `pub use` reexport from another crate or you add a new line to the `[dependencies]` section of `Cargo.toml`.
27 A good example here would be adding reference search capability to the assists crates.
28
29For the first group, the change is generally merged as long as:
30
31* it works for the happy case,
32* it has tests,
33* it doesn't panic for the unhappy case.
34
35For the second group, the change would be subjected to quite a bit of scrutiny and iteration.
36The new API needs to be right (or at least easy to change later).
37The actual implementation doesn't matter that much.
38It's very important to minimize the amount of changed lines of code for changes of the second kind.
39Often, you start doing a change of the first kind, only to realise that you need to elevate to a change of the second kind.
40In this case, we'll probably ask you to split API changes into a separate PR.
41
42Changes of the third group should be pretty rare, so we don't specify any specific process for them.
43That said, adding an innocent-looking `pub use` is a very simple way to break encapsulation, keep an eye on it!
44
45Note: if you enjoyed this abstract hand-waving about boundaries, you might appreciate
46https://www.tedinski.com/2018/02/06/system-boundaries.html
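To make the second and third kinds of change a bit more concrete, here is a small invented sketch; the names are made up, and the reexport is shown commented out so the snippet stands alone:

```rust
// Kind 2: expanding a component's API -- a new `pub` function that wasn't there before.
pub fn assist_label(group: &str, name: &str) -> String {
    format!("{}: {}", group, name)
}

// Kind 3: a new dependency between components -- an innocent-looking reexport like the one
// below makes every downstream user depend on `other_crate`, which is easy to miss in review.
// pub use other_crate::ReferenceSearch;
```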
47
48# Crates.io Dependencies
49
50We try to be very conservative with usage of crates.io dependencies.
51Don't use small "helper" crates (exception: `itertools` is allowed).
52If there's some general reusable bit of code you need, consider adding it to the `stdx` crate.
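For instance, a tiny general-purpose helper like the invented one below belongs in `stdx` rather than in a new crates.io dependency (check what `stdx` already provides before adding duplicates):

```rust
// Hypothetical `stdx`-style helper; returns `None` when `s` does not start with `prefix`.
pub fn replace_prefix(s: &str, prefix: &str, replacement: &str) -> Option<String> {
    s.strip_prefix(prefix)
        .map(|rest| format!("{}{}", replacement, rest))
}
```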
53
54# Minimal Tests
55
56Most tests in rust-analyzer start with a snippet of Rust code.
57These snippets should be minimal -- if you copy-paste a snippet of real code into the tests, make sure to remove everything which could be removed.
58There are many benefits to this:
59
60* less to read or to scroll past
61* easier to understand what exactly is tested
62* less stuff printed during printf-debugging
63* less time to run test
64
65It also makes sense to format snippets more compactly (for example, by placing enum definitions like `enum E { Foo, Bar }` on a single line),
66as long as they are still readable.
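As a hedged sketch of what such a minimal, compact snippet might look like inside a test (the `check` helper and the test name are stand-ins, not a real API):

```rust
// Illustrative only: `check` stands in for whatever helper the real test uses.
fn check(ra_fixture: &str) {
    assert!(!ra_fixture.is_empty());
}

#[test]
fn goto_def_works_for_enum_variants() {
    // One line, compactly formatted, nothing left that could be removed.
    check(r"enum E { Foo, Bar } fn f() { E::Foo; }");
}
```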
67
68## Order of Imports
69
70Separate import groups with blank lines.
71Use one `use` per crate.
72
73```rust
74mod x;
75mod y;
76
77// First std.
78use std::{ ... }
79
80// Second, external crates (both crates.io crates and other rust-analyzer crates).
81use crate_foo::{ ... }
82use crate_bar::{ ... }
83
84// Then current crate.
85use crate::{}
86
87// Finally, parent and child modules, but prefer `use crate::`.
88use super::{}
89```
90
91Module declarations come before the imports.
92Order them in "suggested reading order" for a person new to the code base.
93
94## Import Style
95
96Qualify items from `hir` and `ast`.
97
98```rust
99// Good
100use ra_syntax::ast;
101
102fn frobnicate(func: hir::Function, strukt: ast::StructDef) {}
103
104// Not as good
105use hir::Function;
106use ra_syntax::ast::StructDef;
107
108fn frobnicate(func: Function, strukt: StructDef) {}
109```
110
111Avoid local `use MyEnum::*` imports.
112
113Prefer `use crate::foo::bar` to `use super::bar`.
114
115## Order of Items
116
117Optimize for the reader who sees the file for the first time, and wants to get a general idea about what's going on.
118People read things from top to bottom, so place most important things first.
119
120Specifically, if all items except one are private, always put the non-private item on top.
121
122Put `struct`s and `enum`s first, functions and impls last.
123
124Do
125
126```rust
127// Good
128struct Foo {
129 bars: Vec<Bar>
130}
131
132struct Bar;
133```
134
135rather than
136
137```rust
138// Not as good
139struct Bar;
140
141struct Foo {
142 bars: Vec<Bar>
143}
144```
145
146## Variable Naming
147
148Use boring and long names for local variables ([yay code completion](https://github.com/rust-analyzer/rust-analyzer/pull/4162#discussion_r417130973)).
149The default name is a lowercased name of the type: `global_state: GlobalState`.
150Avoid ad-hoc acronyms and contractions, but use the ones that exist consistently (`db`, `ctx`, `acc`).
151The default name for "result of the function" local variable is `res`.
152The default name for "I don't really care about the name" variable is `it`.
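A short sketch of these conventions in action; the type and data are invented for illustration:

```rust
struct GlobalState { files: Vec<String> }

fn loaded_files(global_state: &GlobalState) -> Vec<String> {
    // `global_state`, not `gs`: boring, long, completion-friendly.
    let mut res = Vec::new(); // `res`: the result of the function.
    for it in &global_state.files {
        // `it`: the name doesn't matter here.
        res.push(it.clone());
    }
    res
}
```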
153
154## Collection types
155
156Prefer `rustc_hash::FxHashMap` and `rustc_hash::FxHashSet` to the ones in `std::collections`.
157They use a hasher that's slightly faster and using them consistently will reduce code size by some small amount.
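A minimal sketch using the `rustc_hash` types mentioned above (note that the Fx types are constructed via `default()`, not `new()`):

```rust
use rustc_hash::FxHashMap;

fn count_words(words: &[&str]) -> FxHashMap<String, usize> {
    let mut res = FxHashMap::default();
    for word in words {
        *res.entry(word.to_string()).or_insert(0) += 1;
    }
    res
}
```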
158
159## Preconditions
160
161Express function preconditions in types and force the caller to provide them (rather than checking in callee):
162
163```rust
164// Good
165fn frobnicate(walrus: Walrus) {
166 ...
167}
168
169// Not as good
170fn frobnicate(walrus: Option<Walrus>) {
171 let walrus = match walrus {
172 Some(it) => it,
173 None => return,
174 };
175 ...
176}
177```
178
179## Premature Pessimization
180
181Avoid writing code which is slower than it needs to be.
182Don't allocate a `Vec` where an iterator would do, don't allocate strings needlessly.
183
184```rust
185// Good
186use itertools::Itertools;
187
188let (first_word, second_word) = match text.split_ascii_whitespace().collect_tuple() {
189 Some(it) => it,
190 None => return,
191};
192
193// Not as good
194let words = text.split_ascii_whitespace().collect::<Vec<_>>();
195if words.len() != 2 {
196 return
197}
198```
199
200## Documentation
201
202For `.md` and `.adoc` files, prefer a sentence-per-line format, don't wrap lines.
203If the line is too long, you want to split the sentence in two :-)
204
205## Commit Style
206
207We don't have specific rules around git history hygiene.
208Maintaining clean git history is encouraged, but not enforced.
209Use the rebase workflow; it's OK to rewrite history during the PR review process.
210
211Avoid @mentioning people in commit messages and pull request descriptions (they are added to the commit message by bors).
212Such messages create a lot of duplicate notification traffic during rebases.
diff --git a/docs/dev/syntax.md b/docs/dev/syntax.md
index c2864bbbc..d4bc4b07c 100644
--- a/docs/dev/syntax.md
+++ b/docs/dev/syntax.md
@@ -82,7 +82,7 @@ Points of note:
 An input like `fn f() { 90 + 2 }` might be parsed as

 ```
-FN_DEF@0..17
+FN@0..17
   FN_KW@0..2 "fn"
   WHITESPACE@2..3 " "
   NAME@3..4
@@ -342,7 +342,7 @@ pub struct FnDef {
 impl AstNode for FnDef {
     fn cast(syntax: SyntaxNode) -> Option<Self> {
         match kind {
-            FN_DEF => Some(FnDef { syntax }),
+            FN => Some(FnDef { syntax }),
             _ => None,
         }
     }
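A sketch of how such a generated `cast` is typically used to go from an untyped node to the typed AST layer; the helper function below is invented for illustration and assumes the `SyntaxNode` and `FnDef` types from the excerpt above:

```rust
fn is_fn_def(node: SyntaxNode) -> bool {
    // `cast` returns `None` for nodes of any other kind, so callers can probe without panicking.
    FnDef::cast(node).is_some()
}
```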