Add live IK solutions to cbirrt #5385
Conversation
motionplan/armplanning/cBiRRT.go
Outdated
  }
  // constrainNear will ensure path between oldNear and newNear satisfies constraints along the way
- near = &node{inputs: newNear}
+ near = &node{name: int(nodeNameCounter.Add(1)), inputs: newNear}
Not intended to be part of the final solution. But I found giving nodes a "name" to be useful. To verify, for instance, whether the goal node we eventually reached was a pre-generated IK solution or a live one fed midway.
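The naming scheme in the diff is just an atomic counter. A minimal standalone sketch of the idea (the `node` fields here are a simplified stand-in for the planner's real node type):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// nodeNameCounter hands out a unique, monotonically increasing "name" per
// node, so we can later tell whether a goal node the planner reached came
// from the pre-generated IK batch (low names) or was fed in live (higher).
var nodeNameCounter atomic.Int64

// node is a simplified stand-in for the planner's node type.
type node struct {
	name   int
	inputs []float64
}

func newNode(inputs []float64) *node {
	return &node{name: int(nodeNameCounter.Add(1)), inputs: inputs}
}

func main() {
	a := newNode([]float64{0.1, 0.2})
	b := newNode([]float64{0.3, 0.4})
	// Names are unique and ordered by creation time.
	fmt.Println(a.name, b.name)
}
```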
scene9_9_request.json
Outdated
@@ -0,0 +1,2033 @@
+ {
Probably need to add some test somewhere. Will move this or whatever we land on to the armplanning/data directory if we keep this PR open.
scene9_9_request.json
Outdated
  "planner_options": {
    "goal_metric_type": "squared_norm",
    "arc_length_tolerance": 0,
    "max_ik_solutions": 10,
Playing with this was useful for observing cbirrt
motionplan/armplanning/cBiRRT.go
Outdated
  rrtMaps.goalMap[newGoal] = nil

  // Readjust the target to give the new solution a chance to succeed.
  target, err = mp.sample(newGoal, iterNum)
This was the "unexpected" part of adding live IK solutions to cbirrt. Without this step of re-assigning the target, I was never able to see a new solution succeed at getting picked.
But if we do this too often, we waste time not advancing existing solutions, some of which are probably perfectly fine. Hence the iterNum%20 at the top of the conditional.
Mostly just need feedback on whether this general idea is acceptable (for now) @erh or if I should be doing something substantially different here.
@erh Merged in main. Still needs cleanup (e.g. test file in the top-level directory, undo the node-name stuff). Also the example request I was using re: adding more IK solutions is no longer relevant. Let me know if you have another example in mind that I should run against/add as a test.
This reverts commit 111bba2.
… rather than at the end of each single goal.
  // Number of IK solutions that should be generated before stopping.
- defaultSolutionsToSeed = 100
+ defaultSolutionsToSeed = 10
should we just remove this option entirely?
i hate all of these.
motionplan/armplanning/node.go
Outdated
  }

  type node struct {
+   name int
motionplan/armplanning/node.go
Outdated
  }
  }

  return &node{name: int(nodeNameCounter.Add(1)), inputs: step, cost: sss.psc.pc.configurationDistanceFunc(stepArc)}
  // return bool is if we should stop because we're done.
  func (sss *solutionSolvingState) process(ctx context.Context, stepSolution *ik.Solution,
  ) bool {
i changed this api around a bit, do you like your version better?
I'm not tied to this breakdown. It's certainly necessary to have the part of process that accepts a solution without writing it to the internal array of solutions, because for live IK we need to shove the solution node onto a channel.
But I'm not sure both processCorrectness and processSimilarity need to exist; I'm not sure what I was thinking there. Something tells me that maybe stepArc was being used in processSimilarity in addition to some other API call/log line. Or, more likely, I originally wasn't using processSimilarity for the live solutions, for simplicity of managing the slice that getSolutions returns, and then revisited that decision.
But that's certainly not the case anymore: processCorrectness and processSimilarity are always called in pairs. Happy to condense these two "functional" processes into a single one.
Or definitely let me know if you were homing in on a different detail between your API and mine.
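One way to condense the pair, sketched with hypothetical names (`process`, `distance`, a toy rejection threshold): validate and de-duplicate in one call, and return the node rather than storing it, so the caller decides whether to append it to the solutions slice or push it onto the live-IK channel. This is an illustration of the shape being discussed, not the repo's actual API:

```go
package main

import (
	"fmt"
	"math"
)

type node struct{ inputs []float64 }

// distance stands in for the planner's configuration-distance function
// (Euclidean here, over equal-length configurations).
func distance(a, b []float64) float64 {
	var sum float64
	for i := range a {
		d := a[i] - b[i]
		sum += d * d
	}
	return math.Sqrt(sum)
}

// process condenses the "correctness" and "similarity" checks: it rejects
// candidates too similar to solutions already kept, and returns the new node
// (or nil) without writing to any internal slice, so the caller can either
// store it or push it onto a channel for live consumption.
func process(candidate []float64, kept []*node, minDist float64) *node {
	for _, k := range kept {
		if distance(candidate, k.inputs) < minDist {
			return nil // too similar to an existing solution
		}
	}
	return &node{inputs: candidate}
}

func main() {
	kept := []*node{{inputs: []float64{0, 0}}}
	if n := process([]float64{0.001, 0}, kept, 0.1); n == nil {
		fmt.Println("rejected: near-duplicate")
	}
	if n := process([]float64{1, 1}, kept, 0.1); n != nil {
		fmt.Println("accepted:", n.inputs)
	}
}
```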
  }
  }

  func (bgGen *backgroundGenerator) StopAndWait() {
  return
  }

  step := solvingState.toInputs(ctx, solution)
with the API change i had made, you can just call process()
I'm not sure I follow. The output here is pushing myNode onto a channel.
process's output (here and on main) is to add to the solution state slice.
Additionally, process will evaluate and optionally set node.checkPath, which I don't think has an impact for live solutions.
Availability
Quality
Performance
The above data was generated by running scenes defined in the
Not a final, ready-to-merge state, but I need agreement on the details of how to blend in new solutions. What I think I've found, more important than anything else, is that cbirrt can behave very erratically.
When running against main, scene 9 with max IK solutions set to 3 needs "only" 65 RRT iterations to get an answer. Change the number of solutions to 4 and it takes 249 RRT iterations. The only difference is that the newly generated node is the new "optimal" node. I think a better first step might be to add a "performance" test that isolates IK generation from cbirrt, where we can feed cbirrt different subsets of the same IK solutions, always including an actual "good" solution (which is not necessarily IK's "optimal" node), just to understand what the deviations are.
Edit: scene 9 is no longer relevant; it now solves in 1 RRT iteration.
What this patch functionally does now is let us start cbirrt with fewer IK solutions. We can have confidence that IK will continue to generate solutions, and if none of the original IK solutions are sufficient, we should eventually discover any necessary IK solution that the pre-patch code would find.
Timing results from wine crazy touch 1 and 2 are improved because we now return 10 solutions instead of waiting for 1 full second to return a few dozen.
New timings:
Old timings:
For optimized cases that don't go through cbirrt, we had to take care not to introduce a regression. Specifically, wine-adjust.json has 34 goals, none of which fall into cbirrt. There is an overhead to creating (and cleaning up/waiting on) goroutines that are producing IK results; each cleanup/wait takes ~2ms. It was important to batch up all the waiting at the top-level plan manager code. The batched code also saw an improvement with wine-adjust.json.
New:
Old:
For comparison, waiting at each planSingleGoal (rather than at PlanMotion), we get the following profile: