
If the pre-training of UniTS includes a test dataset, can it still be called ZS? #28

Open
IkeYang opened this issue Jul 20, 2024 · 10 comments


IkeYang commented Jul 20, 2024

I have a question about whether the dataset used to pre-train UniTS contains the test dataset. If so, the training process for time series forecasting and imputation is essentially a self-supervised process. Can the UniTS proposed by the authors still be called a zero-shot model?

IkeYang (Author) commented Jul 20, 2024

Let me further clarify my question. If the dataset used to train UniTS contains the same type of data used at test time, can the proposed UniTS be regarded as zero-shot when it is tested on that same type of data, even though UniTS uses self-supervised learning? (Supervised training for time series imputation and forecasting can also be regarded as a form of self-supervised learning, because the labels are the series themselves.)

Sample-design-alt commented

I have the same question as you. The paper says 'It excels in zero-shot forecasting for out-of-domain data.' But how can the model get the prompt token? The prompt token is keyed by dataset_name in the code:

```python
prefix_prompt = self.prompt_tokens[dataset_name]
```

But how do you get the prompt token if UniTS hasn't been trained on this type of dataset?
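
To illustrate what I mean, here is a minimal sketch of a per-dataset prompt lookup (illustrative names only, not the actual UniTS code):

```python
import torch
import torch.nn as nn

class PromptedModel(nn.Module):
    """Toy model with one learned prompt per training dataset (illustrative)."""

    def __init__(self, dataset_names, num_prompts=10, dim=64):
        super().__init__()
        # One learnable prompt tensor per dataset seen during training.
        self.prompt_tokens = nn.ParameterDict({
            name: nn.Parameter(torch.zeros(1, num_prompts, dim))
            for name in dataset_names
        })

    def forward(self, x, dataset_name):
        # Raises KeyError for any dataset_name not seen during training,
        # which is exactly the zero-shot concern raised above.
        prefix_prompt = self.prompt_tokens[dataset_name]
        return torch.cat([prefix_prompt.expand(x.size(0), -1, -1), x], dim=1)
```

With such a dictionary keyed by dataset name, inference on an unseen dataset has no prompt entry to look up.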

IkeYang (Author) commented Jul 22, 2024

> I have the same question as you. The paper says 'It excels in zero-shot forecasting for out-of-domain data.' But how can the model get the prompt token? The prompt token is keyed by dataset_name in the code: prefix_prompt = self.prompt_tokens[dataset_name]. But how do you get the prompt token if UniTS hasn't been trained on this type of dataset?

Yes, the description in the paper seems to indicate that pre-training may include the training split of the same datasets used for testing. For example, the training split of the ETT data is used for pre-training, and its test split is used for zero-shot testing. I hope the authors can correct or clarify my impression.

gasvn (Member) commented Jul 28, 2024

> Let me further clarify my question. If the dataset used to train UniTS contains the same type of data used at test time, can the proposed UniTS be regarded as zero-shot when it is tested on that same type of data, even though UniTS uses self-supervised learning? (Supervised training for time series imputation and forecasting can also be regarded as a form of self-supervised learning, because the labels are the series themselves.)

  • The training/testing datasets are split following existing works, so there is no data leakage regarding the testing data. The model did not see the same samples during training.
  • In our setting, we have 1) zero-shot learning with a new forecasting length, which is to predict with a new forecasting length but still within the same data domain (the prompt is the same), and 2) some initial results on fully zero-shot learning, where the model did not see the data domain during training; in that case, we use the same prompt tokens for all datasets during pretraining, so we do not need to obtain domain-specific prompt tokens.

gasvn (Member) commented Jul 28, 2024

The model used for zero-shot pretraining is different from the standard UniTS, as it uses a shared prompt token for all tasks.
https://github.com/mims-harvard/UniTS/blob/main/models/UniTS_zeroshot.py
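
Roughly, the difference looks like this (a simplified sketch with illustrative names, not the actual UniTS_zeroshot code):

```python
import torch
import torch.nn as nn

class SharedPromptModel(nn.Module):
    """Toy zero-shot variant: a single prompt shared across all datasets."""

    def __init__(self, num_prompts=10, dim=64):
        super().__init__()
        # One learnable prompt, reused for every data domain and task.
        self.shared_prompt = nn.Parameter(torch.zeros(1, num_prompts, dim))

    def forward(self, x):
        # No dataset_name lookup, so unseen domains can be handled directly.
        prompt = self.shared_prompt.expand(x.size(0), -1, -1)
        return torch.cat([prompt, x], dim=1)
```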

IkeYang (Author) commented Jul 29, 2024

> Let me further clarify my question. If the dataset used to train UniTS contains the same type of data used at test time, can the proposed UniTS be regarded as zero-shot when it is tested on that same type of data, even though UniTS uses self-supervised learning? (Supervised training for time series imputation and forecasting can also be regarded as a form of self-supervised learning, because the labels are the series themselves.)
>
>   • The training/testing datasets are split following existing works, so there is no data leakage regarding the testing data. The model did not see the same samples during training.
>   • In our setting, we have 1) zero-shot learning with a new forecasting length, which is to predict with a new forecasting length but still within the same data domain (the prompt is the same), and 2) some initial results on fully zero-shot learning, where the model did not see the data domain during training; in that case, we use the same prompt tokens for all datasets during pretraining, so we do not need to obtain domain-specific prompt tokens.

Thanks for your reply. Your work is really amazing, but I still have a small question. Take the traffic dataset, which usually has 17,544 data points: people use the first 70% of the data for supervised training and the last 20% for testing. My question is whether, in the paper, the test set is the last 20% of the data in the testing phase, and whether the first 70% of the data is used during pre-training.

gasvn (Member) commented Jul 29, 2024

We use the dataloader from the Time-Series-Library repo, so the training/testing split follows common practice. The test set is only used for evaluation, not during pretraining or finetuning. For pre-training, the training sets from multiple datasets are used without task-specific labels.
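
The split convention in Time-Series-Library-style dataloaders looks roughly like this (a paraphrased sketch with illustrative names, not the exact repo code):

```python
# Sketch of the common 70/10/20 chronological split used by
# Time-Series-Library-style dataloaders (paraphrased, not the exact code).
def split_borders(n_points: int, seq_len: int):
    num_train = int(n_points * 0.7)
    num_test = int(n_points * 0.2)
    num_vali = n_points - num_train - num_test

    # Val/test windows start seq_len earlier so the first sample has
    # enough input history, but their targets never overlap training data.
    border1s = [0, num_train - seq_len, n_points - num_test - seq_len]
    border2s = [num_train, num_train + num_vali, n_points]
    return list(zip(border1s, border2s))  # [(train), (val), (test)]

train, val, test = split_borders(17544, seq_len=96)
print(train, val, test)  # (0, 12280) (12184, 14036) (13940, 17544)
```

So for the traffic dataset's 17,544 points, only the first ~70% is available during training, and the final ~20% is reserved for evaluation.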

IkeYang (Author) commented Jul 29, 2024

Thank you for your reply. Do the training sets used for pre-training contain the training split of the datasets used for evaluation? For example, if the test split of the ETT datasets is used for zero-shot performance testing, then the corresponding training split should not appear in the pre-training process; otherwise it does not meet the principle of zero-shot. My question is whether the training split of the datasets used for testing appeared during pre-training. Thanks for your patience.

gasvn (Member) commented Aug 1, 2024

For few-shot classification and forecasting, the datasets are not used during pretraining, so they are new datasets.

For few-shot imputation, the imputation task is not performed during pretraining, so it is a new task, and the ETTm1 dataset is not used during pretraining. (The reason is that people have been using these datasets for imputation, so we follow their settings.)

For anomaly detection, neither the task nor the datasets are used during pretraining, so it is a new task on new datasets.

gasvn (Member) commented Aug 1, 2024

For the zero-shot experiments: 1) Direct multi-step forecasting aims to show that our model can make predictions for a new forecasting length in one inference step, unlike existing works that need a predefined forecasting length. 2) We have initial zero-shot forecasting experiments in the appendix, where the datasets are not seen during pretraining.
