Autocomplete Component: add e2e tests #42674
Conversation
Question for reviewers: There's some basic functionality I didn't test here, because it's actually covered in the existing user mention testing suite. While that suite is currently focused on user mentions, it feels to me like it's more testing some basic Autocomplete functionality that isn't really specific to mentions. Does it make sense to:
Size Change: 0 B. Total Size: 1.26 MB
I would either create similar tests in this suite, or add a comment in the spec explaining the absence of some basic tests and pointing at them in the user-mention suite. Not sure what others would do, though (I don't have much experience with e2e tests in Gutenberg). Also wanted to point out that some of the CI tasks can't run because of a couple of errors/warnings in the PHP unit tests task.
Thanks! I'll think on these two options and see if others have any input.
Good catch, thanks for mentioning it. Should be fixed now, I'll monitor the tests 👍
ciampo
left a comment
Thank you for working on this, @chad1008 !
I'm not very experienced with working in the e2e-tests package, but from what I can gather the way these tests have been set up (i.e. creating an ad-hoc plugin to test the autocomplete functionality in the plugins/ directory) looks correct to me.
Apart from the inline comments, two more observations that come to mind:
- It would be interesting to see if some of these e2e tests would fail once we introduce the changes from #41382. Have you tried this, @chad1008?
- I realized that the `Autocomplete` component in the `@wordpress/components` package is still without unit tests. It would be great if, as a follow-up to this PR, we could add some unit tests to the component too.
mirka
left a comment
Just so I understand better, is this because these tests cannot be reliably done in jsdom, or because we actually do want to test it end-to-end in a WP editor context?
If it's the former, we've actually been thinking about setting up something to run Playwright tests in the Storybook environment. I see we don't even have a story yet for Autocomplete, but that would be a nice isolated environment if the "Autocomplete component" is what we actually want to test.
If it's the latter, it might be better to be a bit careful about context/verbiage, because at that level it is not really a test for the "Autocomplete component", but how everything in the editor interacts, e.g. filters, script enqueuing, surrounding keyboard shortcut handlers. In this case I think it makes sense to merge our tests into the user mentions suite, rather than create an entirely new suite. The fact that we're testing in a WP env means that we're far from testing "the Autocomplete component" itself, so we might as well be testing the most important autocomplete functionality in the editor (user mentions).
Both approaches are valid, but I want to know if ideally you would've wanted an isolated Playwright test in Storybook, rather than a full-blown WordPress E2E. It will inform our decision on if/when to invest in a Storybook Playwright setup.
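As a point of reference, a Storybook-targeted Playwright test could look roughly like the sketch below. Everything in it is hypothetical: the story ID (`components-autocomplete--default`), the local Storybook port, and the markup the story would render, since no Autocomplete story exists yet.

```js
// Hypothetical sketch only: assumes a future Autocomplete story served by a
// local Storybook instance. Story ID, port, and selectors are illustrative.
const { test, expect } = require( '@playwright/test' );

test( 'shows matching options for the trigger character', async ( { page } ) => {
	// Load the isolated story through Storybook's iframe endpoint.
	await page.goto(
		'http://localhost:6006/iframe.html?id=components-autocomplete--default'
	);

	// Focus the (assumed) rich-text field rendered by the story and type a
	// trigger character followed by part of an option label.
	await page.locator( '[contenteditable="true"]' ).click();
	await page.keyboard.type( '~ap' );

	// Expect the matching option to appear in the popover.
	await expect(
		page.getByRole( 'option', { name: /apple/i } )
	).toBeVisible();
} );
```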
@ciampo Thanks for all of the feedback (and the reminders to avoid hard-coded classnames)! They've now all been replaced with role-based selectors that look for relevant text content for the given test. @mirka Your questions are (as always) extremely thought provoking! My goal with these tests was honestly pretty narrowly scoped... I really only wanted to implement tests to protect against the two regressions I mentioned at the top. The e2e environment was just where I got them working first. (That sounds pretty feeble when I write it out like that, thanks for helping me think about it a little more critically!) The rest of the test suite was added out of a desire for thoroughness. When it comes to the two tests I started this PR for, I think e2e makes sense, because they deal with how the component interacts with its neighbors, including another instance of the same autocomplete component. On the other tests that deal just with option rendering and selection, though, I can see your point... it might make sense to take another pass at some unit tests and move some of these to that environment instead.
I don't (yet) have a strong preference here... it's definitely the kind of thing where I currently defer to, and learn from, the more experienced opinions of folks like yourself and @ciampo. I'd say if a more isolated Storybook setup is a better fit, I'm happy to work on adapting them to that kind of setup.
Have you already pushed those changes? I'm currently not seeing new commits since my last review.
Have you verified that it's effectively another instance, or that multiple "completers" can't be tested with the same
I subscribe to @mirka's words: as a rule of thumb, we should prioritize writing unit tests. Since the main intention was to prevent regressions related to #41724 and #41709, what you could do is:
As @mirka said, as we try to write these unit tests, let's keep an eye on any potential
I briefly looked into creating an isolated story for the Autocomplete component. Turns out, it is tightly coupled with the editor implementation. So, given that it's a pretty tightly coupled component to begin with, I think it makes sense to test it in the realm/scope of integration or E2E, rather than at the component unit level. I'd even prefer to move the component to the
This reverts commit 597528b.
Force-pushed ff0412f to aa8f5b5
Yikes, sorry about that. Changes are now actually pushed.
Poor choice of words on my part - you're right, it's one Autocompleter with multiple completers added to the array. The issue was with activating more than one of those completers within a single block, which is what we're now testing for.
Thank you for looking into that! I wasn't yet sure if my issues preparing tests at the unit-test level were due to my lack of experience or some limitation beyond my control. You saved me a bunch of experimentation here 🙇 @mirka, does it still feel like it makes sense to merge these into the existing user mention suite?
Sadly, no. I should have. I've tried it now, and for some reason they don't fail. I can test the behavior manually on the exact same build and my manual interactions fail, but the test sails right through without a hitch. Will require further investigation.
I trust your decision since you have more context. But from my limited context, yes, I think I would try to merge it into a single suite, just because in E2E we need to be prudent about not writing too many tests, so it's generally more efficient to test as many things at once as you can. But whichever way you decide (merge into the Autocomplete suite, merge into the User Mentions suite, or keep them separate), I wouldn't say any approach is clearly bad.
Yeah, we should definitely make sure that these new tests would be able to flag such regressions!
Update: This got a little more complex! While working to make sure the two regressions were actually caught by the new tests, I found they were more specific than I'd originally realized! They don't affect a really basic completer like the one I've created with our test plugin. I think a basic completer is still good to have tests for, so I went in a new direction. Since I've pivoted a bit, I spun up a new PR. That's very much a draft at the moment and not ready for review, but I wanted to keep everyone on this thread in the loop. PS: oh, and hopefully I didn't take @mirka's advice to test as many things at once as possible too seriously!
Just to understand, is #42905 a new take on this PR or are they both ideally supposed to be merged?
Yep, done. I mostly kept this one open in case you looked at the new one and said "I hate this." 😆
What?
Add some basic e2e testing for the Autocomplete component, as well as some tests specific to past regressions
Why?
The component currently lacks detailed testing, outside of some user mention tests. Increased testing will allow for safer/more confident updates to the component in the future.
How?
Six tests are implemented in this change:
- `Escape` keypress event

Implementing all of these tests meant introducing a new/additional completer, for a couple of reasons. First, I wanted to test that a newly-introduced completer functions as expected, and not just test the built-in items like user mentions and the block inserter. Second, one of the tests is specifically looking at handling multiple completers at once.
To accomplish this, a script that adds the completer to the editor via a hook was added, along with a small plugin to enqueue that script. The test then enables and disables the plugin as needed.
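For readers unfamiliar with the completer API, the registration script can look roughly like the sketch below. The completer shape and the `editor.Autocomplete.completers` filter follow the block-editor Autocomplete documentation; the option data and plugin namespace here are illustrative, not necessarily what this PR's test plugin uses.

```js
// Illustrative completer registration, not the exact script from this PR.
// The plugin also needs a small PHP file that enqueues this script in the
// editor (omitted here).
( function () {
	const fruitCompleter = {
		name: 'fruit',
		// Typing this character in a rich-text field triggers the completer.
		triggerPrefix: '~',
		options: [
			{ visual: '🍎', name: 'Apple' },
			{ visual: '🍊', name: 'Orange' },
			{ visual: '🍇', name: 'Grapes' },
		],
		getOptionLabel: ( option ) => `${ option.visual } ${ option.name }`,
		getOptionKeywords: ( option ) => [ option.name ],
		getOptionCompletion: ( option ) => option.visual,
	};

	// Append the new completer to the existing list for every block.
	wp.hooks.addFilter(
		'editor.Autocomplete.completers',
		'my-test-plugin/fruit-completer',
		( completers ) => [ ...completers, fruitCompleter ]
	);
} )();
```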
Testing Instructions
After applying this patch:
`npm run test-e2e:watch -- packages/e2e-tests/specs/editor/various/autocomplete.test.js`
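For illustration, one of the checks described above (dismissal via `Escape`) could be written roughly as follows using the Puppeteer-based `@wordpress/e2e-test-utils` helpers. The plugin slug, trigger character, and option labels are assumptions rather than the exact values in this PR's spec.

```js
// Sketch only: slug, trigger character, and option labels are assumptions.
import {
	activatePlugin,
	deactivatePlugin,
	createNewPost,
	clickBlockAppender,
} from '@wordpress/e2e-test-utils';

describe( 'Autocomplete (sketch)', () => {
	beforeAll( async () => {
		await activatePlugin( 'gutenberg-test-autocompleter' );
	} );

	afterAll( async () => {
		await deactivatePlugin( 'gutenberg-test-autocompleter' );
	} );

	it( 'dismisses the options popover on Escape', async () => {
		await createNewPost();
		await clickBlockAppender();

		// Typing the trigger prefix plus a partial match should open the
		// completer popover with the matching option.
		await page.keyboard.type( 'I would like an ~ap' );
		await page.waitForXPath( '//button[contains(., "Apple")]' );

		// Escape should close the popover without inserting a completion.
		await page.keyboard.press( 'Escape' );
		await page.waitForXPath( '//button[contains(., "Apple")]', {
			hidden: true,
		} );
	} );
} );
```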