
Hong Kong Web Corpus Pipeline

Workflow: .github/workflows/scrape.yml

The goal of this project is to create a pipeline and tools for scraping and processing Hong Kong web data.

How does this project work?

This project uses GitHub Actions to run a pipeline that scrapes web data from various sources, processes it, and pushes the resulting dataset to a repository.
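
For concreteness, here is a minimal sketch of the scrape-and-process step, assuming a Python implementation; the source URLs and function names (fetch, to_plain_text) are illustrative placeholders, not this repository's actual code.

```python
# Illustrative sketch only; the real pipeline is driven by
# .github/workflows/scrape.yml and this repository's own scripts.
import re

import requests

# Hypothetical source list; the real pipeline covers many HK sources.
SOURCES = ["https://example.org/hk-article"]

def fetch(url: str) -> str:
    """Download the raw HTML for one page."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.text

def to_plain_text(html: str) -> str:
    """Crudely strip tags and collapse whitespace; a real pipeline
    would use a proper HTML extractor."""
    text = re.sub(r"<[^>]+>", " ", html)
    return " ".join(text.split())

records = [{"url": u, "text": to_plain_text(fetch(u))} for u in SOURCES]
```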

What kind of data are we looking for?

We're looking not only for text data in Cantonese, but also for any Hong Kong-related information in general. This includes news articles, social media posts, and other forms of text data.

What would this data be used for?

The data collected will be used to build a corpus of web text. This corpus can be used for various NLP tasks, such as training language models, sentiment analysis, and other text analysis work.

Data Sources

This is not a comprehensive list of data sources, and we are open to suggestions for other sources of Cantonese text data.

  • Encyclopedias
  • News
  • Blogs

How to Run

Fork this repository and add these secrets or environment variables to your repository:

  • HF_TOKEN - Hugging Face API token
  • HF_REPO_NAME - Name of the repository to push the dataset to (e.g. hon9kon9ize/yue-corpus)

Then the GitHub Actions workflow will automatically run the pipeline and push the dataset to your repository.
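
As a rough illustration of the final step, the pipeline can read those two values from the environment and upload with the Hugging Face `datasets` library; this is a sketch under that assumption, not the repository's exact upload code.

```python
# Sketch of the upload step; HF_TOKEN and HF_REPO_NAME are the secrets
# described above, exposed to the job as environment variables.
import os

from datasets import Dataset

# Placeholder records; in the real workflow these come from the scrape step.
records = [{"url": "https://example.org/hk-article", "text": "..."}]

ds = Dataset.from_list(records)
ds.push_to_hub(os.environ["HF_REPO_NAME"], token=os.environ["HF_TOKEN"])
```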
