Edge effects #2

Hi Will, I'm seeing some strange edge effects depending on which side of the affinities volume is padded with zeros:

[screenshots omitted]

What do you think?

Comments
I expect this is because rotation is not a trivial operation with affinities. For example, given labels like:

[diagram omitted]

you would get affinities:

[diagram omitted]

Rotating the affinities would give:

[diagram omitted]

If we now try to convert these back into labels, we would get something like:

[diagram omitted]

You could either rotate the associated offsets:

[diagram omitted]
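A minimal numpy sketch of the point above, assuming 2D labels and the common convention that the affinity channel for offset o, stored at voxel v, encodes whether v and v + o share a label (the exact convention in this repo may differ):

```python
import numpy as np

def labels_to_affs(labels, offsets):
    """Affinity channel i at voxel v is 1 iff v and v + offsets[i] share a label."""
    affs = np.zeros((len(offsets),) + labels.shape)
    for i, (dy, dx) in enumerate(offsets):
        shifted = np.roll(labels, shift=(-dy, -dx), axis=(0, 1))
        affs[i] = (labels == shifted).astype(float)
        # edges that would wrap around the volume are invalid; zero them
        if dy > 0:
            affs[i, -dy:, :] = 0
        if dx > 0:
            affs[i, :, -dx:] = 0
    return affs

labels = np.array([[1, 1, 2],
                   [1, 1, 2],
                   [3, 3, 2]])
offsets = [(0, 1), (1, 0)]  # "right" and "down" neighbors

affs = labels_to_affs(labels, offsets)

# Rotating only the affinity arrays is not enough: after the rotation, the
# channel that encoded "right" edges now lies along edges that run "down",
# so the rotated affinities no longer describe the rotated labels.
rotated_affs = np.rot90(affs, axes=(1, 2))                   # inconsistent
consistent_affs = labels_to_affs(np.rot90(labels), offsets)  # what we'd need
print(np.allclose(rotated_affs, consistent_affs))            # False
```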
Ah, ok. Now this looks like it is due to splitting and merging with equal weight at the edges. Suppose you pad with zeros, use a bias of 0.5, and your affs are very confident at 1. Then you might be merging into the padded region along one axis with weight 0.5, but also splitting along the second axis with weight -0.5. Since the absolute values are the same, the processing order is random and you get annoying artifacts. If you pad with your bias term instead, the padded affs will have absolute weight near 0 and be processed last, avoiding any false splits they might cause.
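A small sketch of the suggested fix, assuming edge weights are computed as affinity minus bias (the array shapes and bias value are placeholders):

```python
import numpy as np

bias = 0.5                        # assumed bias; edge weight = aff - bias
affs = np.random.rand(2, 64, 64)  # stand-in for predicted affinities

pad = ((0, 0), (1, 1), (1, 1))    # pad the spatial axes, not the channels
padded_zero = np.pad(affs, pad, constant_values=0.0)
padded_bias = np.pad(affs, pad, constant_values=bias)

# Zero padding gives padded edges weight |0 - bias| = 0.5, the same rank
# as a fully confident real edge (|1 - bias| = 0.5), so ties are broken
# randomly. Bias padding gives them weight 0, so they are processed last,
# after every real edge has already been resolved.
```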
Unfortunately this is a recurring problem with mutex watershed. Because it is such a greedy algorithm, one that simply processes edges in order of greatest absolute weight and does not account for object size at all, it is very sensitive to tiny differences that can change the order of edge processing.
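A toy, unoptimized version of that greedy pass, to make the ordering sensitivity concrete (a sketch for intuition, not this repo's implementation):

```python
def toy_mutex_watershed(n_nodes, edges):
    """edges: (u, v, weight) with weight = affinity - bias; positive
    weights are attractive, negative weights are repulsive (mutex)."""
    parent = list(range(n_nodes))
    mutex = [set() for _ in range(n_nodes)]  # forbidden merges per root

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # the entire algorithm is one greedy pass: largest |weight| first;
    # ties are broken arbitrarily by the sort
    for u, v, w in sorted(edges, key=lambda e: -abs(e[2])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        mu = {find(m) for m in mutex[ru]}  # refresh stale roots
        mv = {find(m) for m in mutex[rv]}
        if w > 0:
            if rv not in mu:               # attractive: merge unless forbidden
                parent[rv] = ru
                mutex[ru] = mu | mv
        elif w < 0:                        # repulsive: forbid this merge forever
            mutex[ru] = mu | {rv}
            mutex[rv] = mv | {ru}
    return [find(i) for i in range(n_nodes)]

# With equal |weight|, whichever edge happens to be processed first wins:
print(toy_mutex_watershed(2, [(0, 1, 0.5), (0, 1, -0.5)]))  # merged: [0, 0]
print(toy_mutex_watershed(2, [(0, 1, -0.5), (0, 1, 0.5)]))  # split:  [0, 1]
```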
Ah thanks, I think I get it. I'm reading the original mutex watershed paper to better understand how it works. I was also wondering why these artifacts seem to appear only at the padded edges of the volume rather than at every background boundary (if I pad with 0, it's just the same as background). I can't really see any artifacts at the edges of those three blobs closer to the center of the image above, for example.
When these artifacts show up inside predictions, they normally don't propagate as far. At the boundary of predicted objects you may have inconsistent affinities leading to small artifacts, but they will normally not be saturated to 0 and 1; at the boundary you may have values of 0.3 and 0.7, etc. That means inconsistencies due to edge processing order cannot propagate far from the low-confidence boundary region into the high-confidence region in the center of an object.
Right, I also thought that a practical solution would be to make sure the padded edges are processed as late as possible (by having their weights be ~zero after accounting for the bias). I was thinking about your drawn examples and also trying to see why the artifacts only appear at the right and bottom boundaries of the volume (in fact, since it's a 3D volume, the defects appear at 3 out of 6 boundaries, though that's not clear from the 2D slices I attached above). I think this directional preference comes from how the offsets are defined. I also think we could figure out a "clever" padding for the bottom/right boundaries to restore consistency there. In the simplest case of a uniform boundary of a single large object, like the one around the bottom-right corner in my images above, we'd just have to pad with both 1s and 0s in a certain way to make the connectivity look like the "good" top-left corner.
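A quick way to see that directional asymmetry (the offsets below are an assumption matching the description, not necessarily the exact neighborhood used here):

```python
import numpy as np

shape = (4, 4, 4)
offsets = [(0, 0, 1), (0, 1, 0), (1, 0, 0)]  # all components non-negative

coords = np.indices(shape)  # (3, z, y, x) voxel coordinates
for off in offsets:
    # mark voxels whose edge for this offset reaches outside the volume
    oob = np.zeros(shape, dtype=bool)
    for axis, o in enumerate(off):
        if o:
            oob |= coords[axis] + o >= shape[axis]
    print(off, "-> edges into the padding on", oob.sum(), "voxels (one face)")

# Every offset points toward the high-index side, so exactly the
# back/bottom/right faces (3 of 6) have edges reaching into the padded
# region; a (-1, 0, 0)-style neighborhood would flip this to the
# front/top/left faces instead.
```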
Exactly, for the top-left corner you don't run into the inconsistent affinities due to the padding. As you noted, it's because the neighborhood consists of positive offsets; if your neighborhood was defined with (-1,0,0)..., you would have the same defects on the opposite side. The inconsistencies are essentially caused by predictions going "out of bounds". If you use a network to predict the affinities, it will use shape priors and some context, and it probably knows with high confidence whether or not objects continue past the edge of the image. This is why you get high-confidence 1s predicted between the true pixels and padded pixels in my little diagram, and this causes the inconsistency. Since we can't pad with the true context, I think there will almost always be errors. But why do you need to pad? You could predict a larger region and crop, to get the appropriate context without dealing with padding artifacts.
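A sketch of that grow-and-crop alternative; `predict` and `run_mutex_watershed` are hypothetical stand-ins for whatever model and solver are actually in use, and the ROI is assumed to sit at least `context` voxels inside the available raw data:

```python
def segment_with_context(raw, roi, context, predict, run_mutex_watershed):
    """roi: tuple of slices into `raw`, one per spatial axis."""
    # grow the requested region by `context` voxels on every side
    grown = tuple(slice(s.start - context, s.stop + context) for s in roi)
    affs = predict(raw[grown])        # affinities computed with real context
    seg = run_mutex_watershed(affs)   # no padded voxels ever enter the solve
    # crop the segmentation back to the originally requested region
    return seg[tuple(slice(context, -context) for _ in roi)]
```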
Right, I likely won't have to pad eventually. I think I will be predicting a proper external boundary instead, and the issue will disappear automatically. I just wanted to understand better where and why the algorithm might fail. By the way, what do the …