fetch the latest commit #395

Open · wants to merge 84 commits into thuml:main from johnny890122:main

Changes from all commits · 84 commits
7a81831
chore: delete unrelated script
johnny890122 Jan 30, 2024
d5a4f02
test: remove torch version specification to match the Python version
johnny890122 Jan 30, 2024
0f8c0fd
test: prepare to test weather dataset with iTransformer
johnny890122 Jan 30, 2024
86593f3
chore: add dataset
johnny890122 Jan 30, 2024
515ab6b
test: testing if run on server ok
johnny890122 Jan 31, 2024
17bc826
test: update wind tpc data for test
johnny890122 Jan 31, 2024
bbd493a
fix: change tpc dataset header
johnny890122 Jan 31, 2024
c0ac0ea
Merge branch 'thuml:main' into main
johnny890122 Feb 20, 2024
c375dae
chore: remove dataset
johnny890122 Feb 20, 2024
5e3bc19
chore: modify ignore file
johnny890122 Feb 20, 2024
774904a
test: sync test
johnny890122 Feb 20, 2024
be30f43
test: conduct exp for timesnet
johnny890122 Feb 27, 2024
f082d59
test: add debug module
johnny890122 Feb 29, 2024
326510a
test: add gpu to debug extension
cyho999 Feb 29, 2024
24fa91b
chore: dependency
johnny890122 Feb 29, 2024
7e6858b
Merge pull request #1 from johnny890122/test/TimesNet
johnny890122 Mar 5, 2024
9d4a1fa
feat: change inception from shared to independent
johnny890122 Mar 5, 2024
c2f4c32
feat: add resnet18
johnny890122 Mar 5, 2024
8543d86
feat: modify resnet
johnny890122 Mar 5, 2024
a566cd1
feat: resnet to 4
johnny890122 Mar 5, 2024
0480f83
test: add wandb
johnny890122 Mar 6, 2024
1e2c13c
test: add wandb dependency
johnny890122 Mar 6, 2024
685694b
test: modify wandb loss saving
johnny890122 Mar 6, 2024
c8ae29c
test: modify wandb
johnny890122 Mar 6, 2024
98cc2ac
test: wandb test
johnny890122 Mar 6, 2024
6037903
test: wandb
johnny890122 Mar 6, 2024
9681d37
test: wandb
johnny890122 Mar 12, 2024
576b265
test: wandb plot
johnny890122 Mar 12, 2024
ac38ec7
test: plot
johnny890122 Mar 12, 2024
99a6feb
test: plot
johnny890122 Mar 12, 2024
777c636
Merge pull request #2 from johnny890122/test/wandb
johnny890122 Mar 12, 2024
1c4879b
feat: plot LSTM & TimesNet prediction with plotly
johnny890122 Mar 26, 2024
6e016a2
feat: add rmse of each lead time
johnny890122 Mar 26, 2024
62a07d7
Merge branch 'thuml:main' into main
johnny890122 Mar 26, 2024
f7a18e3
fix: fix rmse function
johnny890122 Mar 26, 2024
8f2d979
Merge branch 'main' of https://github.com/johnny890122/Time-Series-Li…
cyho999 Mar 26, 2024
5edcec0
test: run formosa
johnny890122 Apr 27, 2024
e854e8f
fix: script typo
johnny890122 Apr 27, 2024
5351ffc
feat: track rmse and mape under different leadtime
johnny890122 Apr 27, 2024
cd82373
fix: fix bug
johnny890122 Apr 27, 2024
db0634b
fix: fix bugs
johnny890122 Apr 27, 2024
3385514
test: TimesNet experiment
johnny890122 Apr 27, 2024
2caf597
test: iTransformer
johnny890122 Apr 27, 2024
ece0cc9
test: iTransformer
johnny890122 Apr 27, 2024
7748c4b
Merge branch 'main' of https://github.com/johnny890122/Time-Series-Li…
cyho999 Apr 30, 2024
4b70f8e
Merge remote-tracking branch 'upstream/main'
johnny890122 Apr 30, 2024
23eaa27
chore: update exp result
johnny890122 Apr 30, 2024
bca4207
test: try to run transformer
johnny890122 May 1, 2024
65a6061
test: run
johnny890122 May 1, 2024
19cdf6f
chore: plot time series curve
johnny890122 May 1, 2024
6eb1ccd
chore: remove legacy exp result
johnny890122 May 1, 2024
750b2f5
chore: collect transformer exp data
johnny890122 May 5, 2024
6b465a7
Merge pull request #4 from johnny890122/test/Transformer
johnny890122 May 5, 2024
0b2b51a
chore: turn off scale data option
johnny890122 May 5, 2024
0e9fe3e
chore: collect data exp
johnny890122 May 5, 2024
a642139
chore: collect exp data
johnny890122 May 5, 2024
3e3c6ef
feat: prepare for wrf
johnny890122 May 8, 2024
a382f97
chore: collect iTransformer exp data
johnny890122 May 8, 2024
41fd351
Merge branch 'main' of https://github.com/johnny890122/Time-Series-Li…
cyho999 May 8, 2024
ea805e5
chore: collect TimesNet exp data
johnny890122 May 8, 2024
4355a63
chore: collect exp data
johnny890122 May 13, 2024
5c36b6f
Merge pull request #5 from johnny890122/feat/formosa_wrf
johnny890122 May 13, 2024
bb96234
chore: collect exp/data
johnny890122 May 26, 2024
0c327d9
Merge branch 'main' of https://github.com/johnny890122/Time-Series-Li…
cyho999 May 29, 2024
56108fb
update
johnny890122 Jul 1, 2024
6b3f77a
test
johnny890122 Jul 2, 2024
3a3d8e6
test
johnny890122 Jul 2, 2024
1ae6c7e
update
johnny890122 Jul 2, 2024
0ef99e3
chore: short_wrf_iTransformer
johnny890122 Jul 2, 2024
dc312d8
chore: timesnet wrf
johnny890122 Jul 2, 2024
cfbd0be
timesnet
johnny890122 Jul 2, 2024
628e7ae
update
johnny890122 Jul 2, 2024
8f93dc7
timesnet
johnny890122 Jul 2, 2024
6f92279
iTransformer
johnny890122 Jul 2, 2024
ed64329
add plot
johnny890122 Jul 2, 2024
205d896
Merge branch 'feat/test'
johnny890122 Jul 5, 2024
b0bf8a1
fix: plot error
johnny890122 Jul 5, 2024
aa58275
feat: add plot.ipynb
johnny890122 Jul 5, 2024
6dd8a1a
update
johnny890122 Jul 5, 2024
2b46c59
Merge branch 'feat/test'
johnny890122 Jul 5, 2024
425b6be
update
johnny890122 Jul 5, 2024
4a96fbc
update
johnny890122 Jul 5, 2024
d89d756
update plot
johnny890122 Jul 8, 2024
58fdcef
update plot
johnny890122 Jul 12, 2024
Binary file added .DS_Store
Binary file not shown.
5 changes: 3 additions & 2 deletions .gitignore
@@ -157,9 +157,10 @@ data_loader_all.py
/scripts/imputation/tmp/
/utils/self_tools.py
/scripts/exp_scripts/

/wandb/
.DS_Store
dataset/.DS_Store
checkpoints/
results/
result_long_term_forecast.txt
result_anomaly_detection.txt
scripts/augmentation/
Expand Down
22 changes: 22 additions & 0 deletions .vscode/launch.json
@@ -0,0 +1,22 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Debug with Arguments",
"type": "python",
"request": "launch",
"program": "run.py",
// "stopOnEntry": true,
"args": ["--task_name=long_term_forecast", "--is_training=1",
"--root_path=./dataset/wind/", "--data_path=formosa.csv",
"--model_id=wind_12_12", "--model=TimesNet", "--data=custom",
"--features=MS", "--seq_len=12", "--label_len=12", "--pred_len=96",
"--e_layers=3", "--d_layers=1", "--factor=3", "--enc_in=13",
"--dec_in=7", "--c_out=1", "--des='Exp'", "--d_model=64",
"--d_ff=64", "--itr=1"],
"env": {"CUDA_VISIBLE_DEVICE": "1"},
"console": "integratedTerminal",
"pythonPath": "/Users/zhangxiangxian/anaconda3/envs/TSLibrary/bin/python"
}
]
}
10 changes: 9 additions & 1 deletion data_provider/data_loader.py
@@ -207,7 +207,7 @@ def inverse_transform(self, data):
class Dataset_Custom(Dataset):
def __init__(self, args, root_path, flag='train', size=None,
features='S', data_path='ETTh1.csv',
target='OT', scale=True, timeenc=0, freq='h', seasonal_patterns=None):
target='OT', scale=False, timeenc=0, freq='h', seasonal_patterns=None):
# size [seq_len, label_len, pred_len]
self.args = args
# info
@@ -249,12 +249,15 @@ def __read_data__(self):
num_train = int(len(df_raw) * 0.7)
num_test = int(len(df_raw) * 0.2)
num_vali = len(df_raw) - num_train - num_test

# define the data split boundaries
border1s = [0, num_train - self.seq_len, len(df_raw) - num_test - self.seq_len]
border2s = [num_train, num_train + num_vali, len(df_raw)]
border1 = border1s[self.set_type]
border2 = border2s[self.set_type]

if self.features == 'M' or self.features == 'MS':
# all columns except the date column
cols_data = df_raw.columns[1:]
df_data = df_raw[cols_data]
elif self.features == 'S':
@@ -269,6 +272,11 @@ def __read_data__(self):

df_stamp = df_raw[['date']][border1:border2]
df_stamp['date'] = pd.to_datetime(df_stamp.date)

if self.set_type == 2: # test
df_stamp = df_stamp[self.seq_len-1:]
df_stamp.to_csv('results/test_index.csv', index=False)

if self.timeenc == 0:
df_stamp['month'] = df_stamp.date.apply(lambda row: row.month, 1)
df_stamp['day'] = df_stamp.date.apply(lambda row: row.day, 1)
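Note on the added test-index export: it writes results/test_index.csv from inside the data loader, which assumes a results/ directory already exists and couples the loader to the experiment's output layout. A minimal defensive sketch with the same seq_len-offset semantics (the helper name and the out_dir argument are hypothetical, not part of this PR):

import os
import pandas as pd

def export_test_index(df_stamp: pd.DataFrame, seq_len: int, out_dir: str = "results") -> None:
    # Rows before position seq_len-1 cannot serve as forecast origins,
    # because they lack a full input window of length seq_len.
    usable = df_stamp[seq_len - 1:]
    os.makedirs(out_dir, exist_ok=True)  # don't assume the directory exists
    usable.to_csv(os.path.join(out_dir, "test_index.csv"), index=False)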
Binary file added dataset/.DS_Store
Binary file not shown.
Binary file added dataset/weather/.DS_Store
Binary file not shown.
8,735 changes: 8,735 additions & 0 deletions dataset/wind/tpc_RF.csv

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions exp/exp_basic.py
@@ -2,7 +2,7 @@
import torch
from models import Autoformer, Transformer, TimesNet, Nonstationary_Transformer, DLinear, FEDformer, \
Informer, LightTS, Reformer, ETSformer, Pyraformer, PatchTST, MICN, Crossformer, FiLM, iTransformer, \
Koopa, TiDE, FreTS, TimeMixer, TSMixer, SegRNN, MambaSimple, Mamba
Koopa, TiDE, FreTS, TimeMixer, TSMixer, SegRNN


class Exp_Basic(object):
@@ -28,8 +28,8 @@ def __init__(self, args):
'Koopa': Koopa,
'TiDE': TiDE,
'FreTS': FreTS,
'MambaSimple': MambaSimple,
'Mamba': Mamba,
# 'MambaSimple': MambaSimple,
# 'Mamba': Mamba,
'TimeMixer': TimeMixer,
'TSMixer': TSMixer,
'SegRNN': SegRNN
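Commenting the Mamba entries out removes them for every user, including those who do have the optional dependency installed. A guarded import is a softer alternative — a sketch assuming the Mamba modules raise ImportError when mamba_ssm is absent (the PR itself does not say why they were disabled):

# Guarded optional import: keep Mamba support when its dependency is present.
try:
    from models import Mamba, MambaSimple
    _HAS_MAMBA = True
except ImportError:
    _HAS_MAMBA = False

def optional_models() -> dict:
    # Merged into model_dict by the caller alongside the always-available models.
    return {'Mamba': Mamba, 'MambaSimple': MambaSimple} if _HAS_MAMBA else {}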
50 changes: 42 additions & 8 deletions exp/exp_long_term_forecasting.py
@@ -9,19 +9,20 @@
import time
import warnings
import numpy as np
import pandas
import wandb
from utils.dtw_metric import dtw,accelerated_dtw
from utils.augmentation import run_augmentation,run_augmentation_single

warnings.filterwarnings('ignore')

import matplotlib.pyplot as plt
import shutil

class Exp_Long_Term_Forecast(Exp_Basic):
def __init__(self, args):
super(Exp_Long_Term_Forecast, self).__init__(args)

def _build_model(self):
model = self.model_dict[self.args.model].Model(self.args).float()

if self.args.use_multi_gpu and self.args.use_gpu:
model = nn.DataParallel(model, device_ids=self.args.device_ids)
return model
@@ -98,6 +99,12 @@ def train(self, setting):
if self.args.use_amp:
scaler = torch.cuda.amp.GradScaler()

wandb.init(mode='disabled')  # 'disabled' turns every wandb call below into a no-op
# run = wandb.init(project="TimesNet", config=self.args)
wandb.watch(self.model)
wandb.config.model_architecture = self.model


for epoch in range(self.args.train_epochs):
iter_count = 0
train_loss = []
@@ -162,6 +169,12 @@
vali_loss = self.vali(vali_data, vali_loader, criterion)
test_loss = self.vali(test_data, test_loader, criterion)

wandb.log({
"train_loss": train_loss,
"vali_loss": vali_loss,
"test_loss": test_loss,
})

print("Epoch: {0}, Steps: {1} | Train Loss: {2:.7f} Vali Loss: {3:.7f} Test Loss: {4:.7f}".format(
epoch + 1, train_steps, train_loss, vali_loss, test_loss))
early_stopping(vali_loss, self.model, path)
@@ -173,7 +186,7 @@

best_model_path = path + '/' + 'checkpoint.pth'
self.model.load_state_dict(torch.load(best_model_path))

# run.log_model(path=best_model_path, name="best_model")
return self.model

def test(self, setting, test=0):
@@ -223,7 +236,7 @@ def test(self, setting, test=0):
shape = outputs.shape
outputs = test_data.inverse_transform(outputs.squeeze(0)).reshape(shape)
batch_y = test_data.inverse_transform(batch_y.squeeze(0)).reshape(shape)

outputs = outputs[:, :, f_dim:]
batch_y = batch_y[:, :, f_dim:]

@@ -244,15 +257,36 @@
preds = np.array(preds)
trues = np.array(trues)
print('test shape:', preds.shape, trues.shape)

preds = preds.reshape(-1, preds.shape[-2], preds.shape[-1])
trues = trues.reshape(-1, trues.shape[-2], trues.shape[-1])
print('test shape:', preds.shape, trues.shape)

# result save
idx_lst, rmse_lst, mape_lst = [], [], []
for i in range(self.args.pred_len):
pred = preds[:,i,:]
true = trues[:,i,:]
fig, ax = plt.subplots()
ax.plot(pred.flatten(), label=f'lead time: {i+1}')
ax.plot(true.flatten(), label='true')
ax.legend()
wandb.log({"plot": wandb.Image(fig)})
plt.close(fig)  # release the figure; one open figure per lead time accumulates quickly
mae, mse, rmse, mape, mspe = metric(pred, true)
idx_lst.append(i)
rmse_lst.append(rmse)
mape_lst.append(mape)

folder_path = './results/' + setting + '/'
# result save
if not os.path.exists(folder_path):
os.makedirs(folder_path)

pandas.DataFrame({
"leadtime": idx_lst,
'rmse': rmse_lst,
'mape': mape_lst}
).to_csv(folder_path + 'rmse_mape.csv', index=False)

shutil.move('./results/test_index.csv', folder_path + 'test_index.csv')  # assumes the data loader already wrote this file
# dtw calculation
if self.args.use_dtw:
dtw_list = []
@@ -267,7 +301,7 @@
dtw = np.array(dtw_list).mean()
else:
dtw = -999


mae, mse, rmse, mape, mspe = metric(preds, trues)
print('mse:{}, mae:{}, dtw:{}'.format(mse, mae, dtw))
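As merged, wandb.init(mode='disabled') means the logged losses and per-lead-time plots are silently dropped. A sketch of switching the run on; the project name "TimesNet" comes from the commented-out line in the diff and is otherwise an assumption:

import wandb
import torch.nn as nn

def init_tracking(model: nn.Module, args) -> "wandb.sdk.wandb_run.Run":
    run = wandb.init(project="TimesNet", config=vars(args), mode="online")
    wandb.watch(model, log="gradients", log_freq=100)  # sample gradients every 100 steps
    return run

# at the end of train():
# run.finish()  # flush buffered metrics and close the run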
26 changes: 26 additions & 0 deletions for_test.sh
@@ -0,0 +1,26 @@
export CUDA_VISIBLE_DEVICES=0

model_name=TimesNet

python -u run.py \
--task_name long_term_forecast \
--is_training 1 \
--root_path ./dataset/wind/ \
--data_path formosa_wrf_short.csv \
--model_id wind_12_12_short_wrf \
--model $model_name \
--data custom \
--features MS \
--seq_len 12 \
--label_len 12 \
--pred_len 12 \
--e_layers 3 \
--d_layers 1 \
--factor 3 \
--enc_in 19 \
--dec_in 19 \
--c_out 1 \
--des 'Exp' \
--d_model 64 \
--d_ff 64 \
--itr 1
144 changes: 144 additions & 0 deletions layers/Conv_Blocks.py
@@ -1,7 +1,151 @@
import torch
import torch.nn as nn
import torch.utils.model_zoo as model_zoo  # legacy API; torch.hub.load_state_dict_from_url supersedes it

__all__ = ['ResNet', 'resnet18']

model_urls = {
'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
}

def conv3x3(in_planes, out_planes, stride=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)

def conv1x1(in_planes, out_planes, stride=1):
"""1x1 convolution"""
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)

class BasicBlock(nn.Module):
expansion = 1

def __init__(self, inplanes, planes, stride=1, downsample=None):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride

def forward(self, x):
identity = x

out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)

out = self.conv2(out)
out = self.bn2(out)

if self.downsample is not None:
identity = self.downsample(x)

out += identity
out = self.relu(out)

return out

class ResNet(nn.Module):

def __init__(self, block, layers, in_channels=64, out_channels=64, zero_init_residual=False):
super(ResNet, self).__init__()
self.inplanes = 64
self.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(512 * block.expansion, out_channels)

for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)

# Zero-initialize the last BN in each residual branch,
# so that the residual branch starts with zeros, and each residual block behaves like an identity.
# This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
if zero_init_residual:
for m in self.modules():
if isinstance(m, BasicBlock):
nn.init.constant_(m.bn2.weight, 0)
# torchvision's original also zero-inits Bottleneck.bn3 here, but Bottleneck
# is not defined in this file, so that branch is dropped to avoid a NameError.

def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
nn.BatchNorm2d(planes * block.expansion),
)

layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for _ in range(1, blocks):
layers.append(block(self.inplanes, planes))

return nn.Sequential(*layers)

def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)

x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)

x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)

return x

def _resnet(arch, block, layers, pretrained, progress, **kwargs):
model = ResNet(block, layers, **kwargs)
if pretrained:
state_dict = model_zoo.load_url(model_urls[arch], progress=progress)
model.load_state_dict(state_dict)
return model

def resnet18(pretrained=False, progress=True, **kwargs):
"""Constructs a ResNet-18 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress, **kwargs)

class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size=3):
super(ResidualBlock, self).__init__()
self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, padding=kernel_size//2)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(out_channels, in_channels, kernel_size=kernel_size, padding=kernel_size//2)

def forward(self, x):
identity = x
out = self.conv1(x)
out = self.relu(out)
out = self.conv2(out)
out += identity
out = self.relu(out)
return out
class Inception_Block_V1(nn.Module):
def __init__(self, in_channels, out_channels, num_kernels=6, init_weight=True):
super(Inception_Block_V1, self).__init__()
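A caution on wiring this backbone into TimesNet: Inception_Block_V1 maps [B, C, H, W] to [B, C_out, H, W] and preserves the spatial grid that TimesNet folds back into a sequence, whereas this ResNet collapses the grid through avgpool and fc. A quick shape check with made-up sizes (note that pretrained=True cannot be combined with a non-3-channel in_channels or a custom out_channels, since the ImageNet weights expect 3 input channels and 1000 classes):

import torch

backbone = resnet18(pretrained=False, in_channels=64, out_channels=64)
x = torch.randn(2, 64, 24, 7)   # roughly the [batch, d_model, period, n_periods] layout TimesNet produces
y = backbone(x)
print(y.shape)                  # torch.Size([2, 64]); the spatial dimensions are gone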