
Question about MAE #10

Closed
How-Will opened this issue Mar 1, 2023 · 5 comments
Comments


How-Will commented Mar 1, 2023

Hi, Wenjie

import torch

def masked_mae_cal(inputs, target, mask):
    """Calculate Mean Absolute Error over the observed (mask == 1) positions only."""
    # The small epsilon guards against division by zero when the mask is all zeros.
    return torch.sum(torch.abs(inputs - target) * mask) / (torch.sum(mask) + 1e-9)

I have a small doubt about the calculation of MAE.
I noticed you normalize the dataset with standard scaling, which means the target and input are standardized. So why not invert the scaling before calculating MAE?

WenjieDu (Owner) commented Mar 1, 2023

Hi there,

Thank you so much for your attention to SAITS! If you find SAITS helpful to your work, please star ⭐️ this repository. Your star is your recognition, which can let others notice SAITS. It matters and is definitely a kind of contribution.

I have received your message and will respond ASAP. Thank you again for your patience! 😃

Best,
Wenjie

How-Will (Author) commented Mar 1, 2023

[screenshot: plots of the imputed time series showing fluctuations]

I applied SAITS to my data and found some fluctuations in the imputed values. Is this normal?

WenjieDu (Owner) commented Mar 2, 2023

Hi William, thank you for raising this issue.

For your first question about the metric calculation: yes, you can invert the scaling before computing MAE, but that adds an unnecessary step. Remember that the purpose of an error metric here is to compare methods fairly; since every method is evaluated on the same standardized data, the comparison is impartial whether or not we invert the scaling.
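A minimal sketch of why the two choices agree (all names and numbers below are illustrative, not from SAITS): assuming a single global standard scaler, MAE on the standardized data differs from MAE on the original scale only by the constant std factor, so the ranking of methods is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
raw_target = rng.normal(loc=50.0, scale=5.0, size=1000)  # ground truth on the original scale
raw_pred = raw_target + rng.normal(scale=1.0, size=1000)  # an imperfect imputation

# Standard scaling with a single global mean/std (a simplifying assumption).
mean, std = raw_target.mean(), raw_target.std()
z_target = (raw_target - mean) / std
z_pred = (raw_pred - mean) / std

mae_scaled = np.abs(z_pred - z_target).mean()  # MAE on standardized data
mae_raw = np.abs(raw_pred - raw_target).mean()  # MAE after inverting the scaling

# The two differ only by the constant factor `std`:
# mae_raw == mae_scaled * std (up to floating-point error)
```

Because the factor is the same for every method evaluated on the same scaler, comparing `mae_scaled` values ranks methods exactly as comparing `mae_raw` values would.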

For your second question about the imputation results: I think this is normal, because there are fluctuations in your original data, e.g. from 2013/6/4 to 2013/7/24 in the bottom figure of your screenshot. If you want the imputation to look smoother, i.e. with fewer flips, I'd suggest you 1) tune the hyperparameters of SAITS so the model does not overfit the original data, or 2) add extra smoothing, e.g. additional loss constraints, or a post-processing step that manually removes the excessive flips.
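A post-processing sketch along the lines of option 2) (the function and variable names here are illustrative, not part of SAITS): smooth only the imputed positions with a centered moving average, leaving the observed values untouched.

```python
import numpy as np

def smooth_imputations(series, missing_mask, window=3):
    """Replace imputed values (missing_mask == True) with a centered
    moving average; observed values are returned unchanged."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(series, kernel, mode="same")
    out = series.copy()
    out[missing_mask] = smoothed[missing_mask]
    return out

# Toy example: index 2 holds a spiky imputed value (9.0) between observed 2.0s.
series = np.array([1.0, 2.0, 9.0, 2.0, 3.0, 2.5, 2.0])
missing_mask = np.array([False, False, True, False, False, True, False])
smoothed = smooth_imputations(series, missing_mask, window=3)
# The spike at index 2 is pulled toward its neighbors; observed points are kept as-is.
```

For multivariate data you would apply this per feature, and a learned smoothness penalty in the loss is usually preferable to hard post-processing when you can retrain.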

How-Will (Author) commented Mar 3, 2023

Thanks very much, your reply helped me a lot.

How-Will closed this as completed Mar 3, 2023
WenjieDu (Owner) commented Mar 3, 2023

My pleasure.
