---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)
pkgload::load_all(export_all = FALSE)
```
# gibasa
<!-- badges: start -->
[r-universe](https://paithiov909.r-universe.dev/gibasa)
[R-CMD-check](https://github.com/paithiov909/gibasa/actions)
[Codecov](https://app.codecov.io/gh/paithiov909/gibasa)
[CRAN](https://cran.r-project.org/package=gibasa)
<!-- badges: end -->
## Overview
gibasa is a plain 'Rcpp' wrapper for 'MeCab', a morphological analyzer for CJK text.
Part-of-speech tagging with morphological analyzers is useful for processing CJK text data. This is because most words in CJK text are not separated by whitespace, so `tokenizers::tokenize_words` may split them into wrong tokens (see the short sketch after the list below).
The main goal of the gibasa package is to provide an alternative to `tidytext::unnest_tokens` for CJK text data. To this end, gibasa provides three main functions: `gibasa::tokenize`, `gibasa::prettify`, and `gibasa::pack`.

- `gibasa::tokenize` takes a TIF-compliant data.frame of a corpus and returns tokens in the format known as 'tidy text data', so that users can replace `tidytext::unnest_tokens` with it when tokenizing CJK text.
- `gibasa::prettify` turns tagged features into columns.
- `gibasa::pack` takes 'tidy text data' and typically returns a space-separated corpus.
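As a minimal sketch of why this matters (the Japanese sentence is an arbitrary example): Japanese text contains no spaces between words, so a whitespace split recovers nothing, while gibasa segments the text with MeCab.
```r
## Japanese text has no whitespace between words,
## so splitting on spaces returns the sentence as a single "token":
strsplit("吾輩は猫である", " ")[[1]]
## gibasa segments the same sentence with MeCab
## (requires a MeCab dictionary to be installed; see below):
gibasa::tokenize("吾輩は猫である")
```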
## Installation
You can install the binary package via [CRAN](https://cran.r-project.org/package=gibasa) or [r-universe](https://paithiov909.r-universe.dev/gibasa).
```r
## Install gibasa from the r-universe repository
install.packages("gibasa", repos = c("https://paithiov909.r-universe.dev", "https://cloud.r-project.org"))
## Or build from the source package
Sys.setenv(MECAB_DEFAULT_RC = "/fullpath/to/your/mecabrc") # if necessary
remotes::install_github("paithiov909/gibasa")
```
Using the gibasa package requires the [MeCab](https://taku910.github.io/mecab/) library and a dictionary for it to be installed and available.
On Linux or macOS, you can install them with your package manager, or build and install them from source yourself.
On Windows, use the [64-bit installer](https://github.com/ikegami-yukino/mecab/releases/tag/v0.996.2). Note that gibasa requires a UTF-8 dictionary, not a Shift-JIS one.
As of v0.9.4, gibasa looks at the file specified by the environment variable `MECABRC`, or at `~/.mecabrc`. If the MeCab dictionary is somewhere other than the default location, create a mecabrc file and specify where the dictionary is located.
For example, to install and use the [ipadic](https://pypi.org/project/ipadic/) from PyPI, run:
```sh
$ python3 -m pip install ipadic
$ python3 -c "import ipadic; print('dicdir=' + ipadic.DICDIR);" > ~/.mecabrc
```
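If everything is wired up correctly, `gibasa::dictionary_info()` should report the dictionary gibasa finds; this is a quick sanity check after installation.
```r
## Quick sanity check: show the dictionary gibasa currently picks up
gibasa::dictionary_info()
```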
## Usage
### Tokenize sentences
```{r}
res <- gibasa::tokenize(
  data.frame(
    doc_id = seq_along(gibasa::ginga[5:8]),
    text = gibasa::ginga[5:8]
  ),
  text,
  doc_id
)
res
```
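Since the result is ordinary long-format ('tidy') data with one row per token, the usual dplyr verbs apply directly. For example, counting tokens per document:
```r
## Count tokens per document in the tidy output
dplyr::count(res, doc_id, sort = TRUE)
```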
### Prettify output
```{r}
gibasa::prettify(res)
gibasa::prettify(res, col_select = 1:3)
gibasa::prettify(res, col_select = c(1, 3, 5))
gibasa::prettify(res, col_select = c("POS1", "Original"))
```
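The column names that `col_select` accepts come from the dictionary scheme. `gibasa::get_dict_features` returns them; here assuming `"ipa"` is the identifier for the default IPA scheme, as `"unidic26"` and `"ko-dic"` are for theirs (see 'Change dictionary' below).
```r
## Feature (column) names for the IPA dictionary scheme
gibasa::get_dict_features("ipa")
```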
### Pack output
```{r}
res <- gibasa::prettify(res)
gibasa::pack(res)
dplyr::mutate(
  res,
  token = dplyr::if_else(is.na(Original), token, Original),
  token = paste(token, POS1, sep = "/")
) |>
  gibasa::pack() |>
  head(1L)
```
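Because the packed result is plain space-separated text, it can be handed to downstream tools that expect pre-tokenized input. A sketch, assuming the quanteda package is installed and that `pack` returns a data.frame with `doc_id` and `text` columns:
```r
## Feed the space-separated corpus to quanteda without re-tokenizing
packed <- gibasa::pack(res)
corp <- quanteda::corpus(packed$text, docnames = as.character(packed$doc_id))
quanteda::tokens(corp, what = "fastestword") # split on whitespace only
```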
### Change dictionary
IPA, UniDic, [CC-CEDICT-MeCab](https://github.com/ueda-keisuke/CC-CEDICT-MeCab), and [mecab-ko-dic](https://bitbucket.org/eunjeon/mecab-ko-dic/src/master/) schemes are supported.
```{r}
## UniDic 2.1.2
gibasa::tokenize("あのイーハトーヴォのすきとおった風", sys_dic = file.path("mecab/unidic-lite")) |>
gibasa::prettify(into = gibasa::get_dict_features("unidic26"))
## CC-CEDICT
gibasa::tokenize("它可以进行日语和汉语的语态分析", sys_dic = file.path("mecab/cc-cedict")) |>
gibasa::prettify(into = gibasa::get_dict_features("cc-cedict"))
## mecab-ko-dic
gibasa::tokenize("하네다공항한정토트백", sys_dic = file.path("mecab/mecab-ko-dic")) |>
gibasa::prettify(into = gibasa::get_dict_features("ko-dic"))
```
## Build dictionaries
### Build a system dictionary
```{r}
## build a new ipadic in a temporary directory
build_sys_dic(
  dic_dir = file.path("mecab/ipadic-eucjp"), # replace with the path to your source dictionary
  out_dir = tempdir(),
  encoding = "euc-jp" # encoding of the source csv files
)
## copy the 'dicrc' file
file.copy(file.path("mecab/ipadic-eucjp/dicrc"), tempdir())
dictionary_info(sys_dic = tempdir())
```
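The freshly built dictionary can then be used like any other system dictionary by pointing `sys_dic` at the output directory (the sentence below is an arbitrary example):
```r
## Tokenize with the dictionary just built in `tempdir()`
gibasa::tokenize("新しい辞書で解析する", sys_dic = tempdir())
```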
### Build a user dictionary
```{r}
## write a csv file and compile it into a user dictionary
writeLines(
  c(
    "月ノ,1290,1290,4579,名詞,固有名詞,人名,姓,*,*,月ノ,ツキノ,ツキノ",
    "美兎,1291,1291,8561,名詞,固有名詞,人名,名,*,*,美兎,ミト,ミト"
  ),
  con = (csv_file <- tempfile(fileext = ".csv"))
)
build_user_dic(
  dic_dir = file.path("mecab/ipadic-eucjp"),
  file = (user_dic <- tempfile(fileext = ".dic")),
  csv_file = csv_file,
  encoding = "utf8"
)
tokenize("月ノ美兎は箱の中", sys_dic = tempdir(), user_dic = user_dic)
```
## License
GPL (>=3).