
Commit b586772

Initial commit
0 parents  commit b586772

File tree: 88 files changed, +4155 −0 lines


.gitattributes

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
# Auto detect text files and perform LF normalization
* text=auto

LICENSE

Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
MIT License

Copyright (c) [year] [fullname]

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md

Lines changed: 50 additions & 0 deletions
@@ -0,0 +1,50 @@
## DADER: Domain Adaptation for Deep Entity Resolution

![python](https://img.shields.io/badge/python-3.6.5-blue)
![pytorch](https://img.shields.io/badge/pytorch-1.7.1-brightgreen)

Entity resolution (ER) is a core problem of data integration. The state-of-the-art (SOTA) results on ER are achieved by deep learning (DL) based methods trained on large numbers of labeled matching/non-matching entity pairs. This is rarely an issue for well-prepared benchmark datasets, but in many real-world ER applications collecting large-scale labeled data is painful. In this work we ask: if we have a well-labeled source ER dataset, can we train a DL-based ER model for a target dataset with few or no target labels? This is the problem of domain adaptation (DA), which has achieved great success in computer vision and natural language processing but has not been systematically studied for ER. Our goal is to systematically explore the benefits and limitations of a wide range of DA methods for ER. To this end, we develop DADER (Domain Adaptation for Deep Entity Resolution), a framework that advances ER by applying DA. We define a space of design solutions for the three modules of DADER, namely the Feature Extractor, the Matcher, and the Feature Aligner; we conduct a comprehensive experimental study that explores this design space and compares different DA choices for ER; and we provide guidance for selecting appropriate design solutions based on extensive experiments.

<!-- <img src="figure/architecture.png" width="820" /> -->

This repository contains the implementation of six representative DADER methods: MMD, K-order, GRL, InvGAN, InvGAN+KD, and ED.

<!-- <img src="figure/designspace.png" width="700" /> -->
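To make the three-module decomposition concrete, here is a purely illustrative, self-contained toy sketch of the idea (none of these names are part of the dader package; the real Feature Extractor is a pretrained language model and the real Feature Aligner is one of the methods listed above, the toy one below just matches feature means):

```python
# Illustrative sketch of the DADER decomposition -- NOT the dader API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyFeatureExtractor(nn.Module):      # stands in for a BERT-style encoder
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 4)
    def forward(self, x):
        return self.fc(x)

class ToyMatcher(nn.Module):               # match / non-match classifier
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)
    def forward(self, feats):
        return self.fc(feats)

def toy_aligner_loss(f_src, f_tgt):        # stand-in for MMD / GRL / InvGAN / ...
    return (f_src.mean(0) - f_tgt.mean(0)).pow(2).sum()

extractor, matcher = ToyFeatureExtractor(), ToyMatcher()
x_src, y_src = torch.randn(16, 8), torch.randint(0, 2, (16,))   # labeled source
x_tgt = torch.randn(16, 8)                                      # unlabeled target

f_src, f_tgt = extractor(x_src), extractor(x_tgt)
loss = F.cross_entropy(matcher(f_src), y_src) + toy_aligner_loss(f_src, f_tgt)
loss.backward()   # supervised matching loss + feature-alignment loss
```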
## DataSets

The dataset format is `<entity1, entity2, label>`. See [Hugging Face](https://huggingface.co/datasets/RUC-DataLab/ER-dataset) for details.

<!-- <img src="figure/dataset.png" width="700" /> -->
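Based on `dader/data/dataset.py` in this commit, `load_data` serializes each record pair into a single `left [SEP] right` string and returns 0/1 labels, with an optional stratified validation split. A short sketch (the file name and attribute names are illustrative):

```python
from dader import data

# Load all pairs: serialized strings and 0/1 labels.
pairs, labels = data.load_data(path='source.csv')
# pairs[0] -> "title_A brand_A ... [SEP] title_B brand_B ..."

# Keep only selected attributes and hold out 20% as a stratified validation set.
X_train, X_valid, y_train, y_valid = data.load_data(
    path='source.csv', use_attri=['title', 'brand'], valid_rate=0.2)
```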
## Quick Start

Step 1: Requirements

- Before running the code, please make sure your Python version is 3.6.5 and your CUDA version is 11.1 (you can verify this with the snippet after this list). Then install the necessary packages with:
- `pip install dader`

- If PyTorch is not installed automatically, you can install it with the following command:
- `pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html`
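A minimal environment check, assuming PyTorch is already installed:

```python
import sys
import torch

print(sys.version)                # expect 3.6.x
print(torch.__version__)          # expect 1.7.1
print(torch.version.cuda)         # expect 11.0 for the cu110 build
print(torch.cuda.is_available())  # True if a GPU is visible
```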
Step 2: Run Example

```python
#!/usr/bin/env python3
from dader import data, model

# load datasets
X_src, y_src = data.load_data(path='source.csv')
X_tgt, X_tgt_val, y_tgt, y_tgt_val = data.load_data(path='target.csv', valid_rate=0.1)

# load model
aligner = model.Model(method='invgankd', architecture='Bert')
# train & adapt
aligner.fit(X_src, y_src, X_tgt, X_tgt_val, y_tgt_val, batch_size=16, ada_max_epoch=20)
# predict
y_prd = aligner.predict(X_tgt)
# evaluate
eval_result = aligner.eval(X_tgt, y_prd, y_tgt)
```

build/lib/dader/__init__.py

Whitespace-only changes.

build/lib/dader/__version__.py

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
VERSION = (0, 0, 4)

__version__ = '.'.join(map(str, VERSION))

build/lib/dader/data/__init__.py

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
from .dataset import load_data
from .process import get_data_loader, get_data_loader_ED
from .process import convert_examples_to_features, convert_examples_to_features_ED

__all__ = [
    'load_data', 'get_data_loader', 'convert_examples_to_features',
    'get_data_loader_ED', 'convert_examples_to_features_ED'
]

build/lib/dader/data/_utils.py

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
import csv
import pandas as pd


def read_csv(input_file, quotechar='"'):
    """Reads a comma separated value file."""
    with open(input_file, "r") as f:
        reader = csv.reader(f, quotechar=quotechar)
        lines = []
        for line in reader:
            lines.append(line)
        return lines


def read_tsv(input_file, quotechar=None):
    """Reads a tab separated value file."""
    with open(input_file, "r") as f:
        reader = csv.reader(f, delimiter="\t", quotechar=quotechar)
        lines = []
        for line in reader:
            lines.append(line)
        return lines


def norm(s):
    """Strip commas and quotes so a serialized pair becomes one clean string."""
    s = s.replace(",", " ").replace("\'", "").replace("\"", "")
    return s

build/lib/dader/data/dataset.py

Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@
import numpy as np
from sklearn.model_selection import train_test_split
from ._utils import read_csv, read_tsv, norm


def file2list(path, use_attri):
    data = read_csv(path)
    pairs = []
    labels = [0]*(len(data)-1)
    length = len(data[0])
    mid = int(length/2)
    # an odd number of columns means the last column is the label
    if length % 2 == 1:
        labels = [int(x[length-1]) for x in data[1:]]
    attri = [x[2:] for x in data[0][:mid]]
    if use_attri:
        attri = [attri.index(x) for x in use_attri]
    else:
        attri = [i for i in range(mid)]

    mid = int(length/2)
    # serialize the selected attributes of both entities into "left [SEP] right"
    for x in data[1:]:
        str1 = ""
        str2 = ""
        for j in attri:
            str1 = str1 + x[j]
            str2 = str2 + x[mid+j]

        pair = str1 + " [SEP] " + str2
        pairs.append(norm(pair))

    print("****** Data Example ****** ")
    print("Entity pairs: ", pairs[:10])
    print("Label: ", labels[:10])
    return pairs, labels


def load_data(path, use_attri=None, valid_rate=None):
    # read data from path; each line: left.title, left.name, ..., right.title, right.name, ..., label
    pairs, labels = file2list(path, use_attri)

    # split into train/valid
    if valid_rate:
        train_x, valid_x, train_y, valid_y = train_test_split(pairs, labels,
                                                              test_size=valid_rate,
                                                              stratify=labels,
                                                              random_state=0)
        return train_x, valid_x, train_y, valid_y
    else:
        return pairs, labels

build/lib/dader/data/process.py

Lines changed: 154 additions & 0 deletions
@@ -0,0 +1,154 @@
import torch
from torch.utils.data import DataLoader, TensorDataset, RandomSampler, SequentialSampler
from tqdm.notebook import tqdm


class InputFeatures(object):
    """A single set of features of data."""
    def __init__(self, input_ids=None, input_mask=None, segment_ids=None, label_id=None, exm_id=None):
        self.input_ids = input_ids
        self.input_mask = input_mask
        self.segment_ids = segment_ids
        self.label_id = label_id
        self.exm_id = exm_id


class InputFeaturesED(object):
    """A single set of features of data for ED."""
    def __init__(self, input_ids, attention_mask, label_id):
        self.input_ids = input_ids
        self.attention_mask = attention_mask
        self.label_id = label_id


def convert_examples_to_features(pairs, labels, max_seq_length, tokenizer,
                                 cls_token='[CLS]', sep_token='[SEP]', pad_token=0):
    # print("convert %d examples to features" % len(pairs))
    features = []
    if labels is None:
        labels = [0] * len(pairs)
    datazip = list(zip(pairs, labels))
    for i in range(len(datazip)):
        (pair, label) = datazip[i]
        # if (ex_index + 1) % 200 == 0:
        #     print("writing example %d of %d" % (ex_index + 1, len(pairs)))
        # [CLS] seq1 [SEP] seq2 [SEP]
        if sep_token in pair:
            left = pair.split(sep_token)[0]
            right = pair.split(sep_token)[1]
            ltokens = tokenizer.tokenize(left)
            rtokens = tokenizer.tokenize(right)
            more = len(ltokens) + len(rtokens) - max_seq_length + 3
            if more > 0:
                if more < len(rtokens):  # remove the excessively long side
                    rtokens = rtokens[:(len(rtokens) - more)]
                elif more < len(ltokens):
                    ltokens = ltokens[:(len(ltokens) - more)]
                else:
                    print("The sequence is too long, please increase ``max_seq_length``!")
                    continue
            tokens = [cls_token] + ltokens + [sep_token] + rtokens + [sep_token]
            segment_ids = [0] * (len(ltokens) + 2) + [1] * (len(rtokens) + 1)
        # [CLS] seq1 [SEP]
        else:
            tokens = tokenizer.tokenize(pair)
            if len(tokens) > max_seq_length - 2:
                tokens = tokens[:(max_seq_length - 2)]
            tokens = [cls_token] + tokens + [sep_token]
            segment_ids = [0] * (len(tokens))
        input_ids = tokenizer.convert_tokens_to_ids(tokens)
        input_mask = [1] * len(input_ids)
        padding_length = max_seq_length - len(input_ids)
        input_ids = input_ids + ([pad_token] * padding_length)
        input_mask = input_mask + ([0] * padding_length)
        segment_ids = segment_ids + ([0] * padding_length)

        assert len(input_ids) == max_seq_length
        assert len(input_mask) == max_seq_length
        assert len(segment_ids) == max_seq_length

        features.append(
            InputFeatures(input_ids=input_ids,
                          input_mask=input_mask,
                          segment_ids=segment_ids,
                          label_id=label,
                          exm_id=i))
    return features


def convert_examples_to_features_ED(pairs, labels, max_seq_length, tokenizer,
                                    pad_token=0, cls_token='<s>', sep_token='</s>'):
    features = []
    if labels is None:
        labels = [0] * len(pairs)
    for ex_index, (pair, label) in enumerate(zip(pairs, labels)):
        if (ex_index + 1) % 200 == 0:
            print("writing example %d of %d" % (ex_index + 1, len(pairs)))
        if sep_token in pair:
            left = pair.split(sep_token)[0]
            right = pair.split(sep_token)[1]
            ltokens = tokenizer.tokenize(left)
            rtokens = tokenizer.tokenize(right)
            more = len(ltokens) + len(rtokens) - max_seq_length + 3
            if more > 0:
                if more < len(rtokens):
                    rtokens = rtokens[:(len(rtokens) - more)]
                elif more < len(ltokens):
                    ltokens = ltokens[:(len(ltokens) - more)]
                else:
                    print("The sequence is too long, please increase ``max_seq_length``!")
                    continue
            tokens = [cls_token] + ltokens + [sep_token] + rtokens + [sep_token]
            segment_ids = [0] * (len(ltokens) + 2) + [1] * (len(rtokens) + 1)
        else:
            tokens = tokenizer.tokenize(pair)
            if len(tokens) > max_seq_length - 2:
                tokens = tokens[:(max_seq_length - 2)]
            tokens = [cls_token] + tokens + [sep_token]
            segment_ids = [0] * (len(tokens))

        input_ids = tokenizer.convert_tokens_to_ids(tokens)
        input_mask = [1] * len(input_ids)
        padding_length = max_seq_length - len(input_ids)
        input_ids = input_ids + ([pad_token] * padding_length)
        input_mask = input_mask + ([0] * padding_length)
        features.append(InputFeaturesED(input_ids=input_ids,
                                        attention_mask=input_mask,
                                        label_id=label))
    return features


def get_data_loader(features, batch_size, is_train=0):
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
    all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
    all_segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)
    all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long)
    all_exm_ids = torch.tensor([f.exm_id for f in features], dtype=torch.long)
    dataset = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids, all_exm_ids)

    if is_train:
        """Drop the last incomplete batch"""
        # sampler = RandomSampler(dataset)
        sampler = SequentialSampler(dataset)
        dataloader = DataLoader(dataset, sampler=sampler, batch_size=batch_size, drop_last=True)
    else:
        """Read all data"""
        sampler = SequentialSampler(dataset)
        dataloader = DataLoader(dataset, sampler=sampler, batch_size=batch_size)
    return dataloader


def get_data_loader_ED(features, batch_size, is_train=0):
    """
    data_loader for the reconstruction-based (ED) method
    """
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
    all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
    all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long)
    dataset = TensorDataset(all_input_ids, all_attention_mask, all_label_ids)

    if is_train:
        """Drop the last incomplete batch"""
        sampler = RandomSampler(dataset)
        dataloader = DataLoader(dataset, sampler=sampler, batch_size=batch_size, drop_last=True)
    else:
        """Read all data"""
        sampler = SequentialSampler(dataset)
        dataloader = DataLoader(dataset, sampler=sampler, batch_size=batch_size)
    return dataloader
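A usage sketch for these helpers (not part of the commit): any HuggingFace `transformers`-style tokenizer exposes the `tokenize`/`convert_tokens_to_ids` methods used above; the model name, example pair, and sequence length below are illustrative.

```python
from transformers import BertTokenizer
from dader.data.process import convert_examples_to_features, get_data_loader

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

pairs = ["iphone 7 apple 32gb [SEP] apple iphone 7 32 gb"]  # one serialized entity pair
labels = [1]                                                # 1 = match, 0 = non-match

features = convert_examples_to_features(pairs, labels, max_seq_length=128, tokenizer=tokenizer)
loader = get_data_loader(features, batch_size=16, is_train=0)
for input_ids, input_mask, segment_ids, label_ids, exm_ids in loader:
    print(input_ids.shape)  # (batch, 128), padded to max_seq_length
    break
```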

build/lib/dader/metrics/__init__.py

Whitespace-only changes.

build/lib/dader/metrics/coral.py

Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
#!/usr/bin/env python
# encoding: utf-8

import torch


def cal_coral_loss(source, target):
    """CORAL loss: squared Frobenius distance between the source and target
    feature covariance matrices, scaled by 1/(4*dim^2)."""
    batch_size = int(source.size()[0])
    dim = int(source.size()[1])
    source_T = torch.transpose(source, 0, 1)
    target_T = torch.transpose(target, 0, 1)
    cov_s = (1/(batch_size-1))*torch.mm(source_T, source)
    cov_t = (1/(batch_size-1))*torch.mm(target_T, target)
    mean_s = torch.mm(torch.ones(1, batch_size).cuda(), source)
    mean_t = torch.mm(torch.ones(1, batch_size).cuda(), target)
    square_mean_s = (1/(batch_size*(batch_size-1)))*torch.mm(torch.transpose(mean_s, 0, 1), mean_s)
    square_mean_t = (1/(batch_size*(batch_size-1)))*torch.mm(torch.transpose(mean_t, 0, 1), mean_t)
    cov_s = cov_s - square_mean_s
    cov_t = cov_t - square_mean_t
    # print(cov_s.size())
    coral_loss = 1/(4*dim*dim)*(torch.sum((cov_s-cov_t)**2))
    # print(coral_loss.size())
    return coral_loss
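A minimal usage sketch for `cal_coral_loss` (not part of the commit); the feature dimensions are illustrative, and a CUDA device is required because the implementation above builds its mean vectors with `torch.ones(...).cuda()`:

```python
import torch
from dader.metrics.coral import cal_coral_loss

# Two batches of pooled features, e.g. 16 examples x 768-dim encoder outputs.
source_feat = torch.randn(16, 768, device='cuda', requires_grad=True)
target_feat = torch.randn(16, 768, device='cuda', requires_grad=True)

loss = cal_coral_loss(source_feat, target_feat)  # scalar tensor
loss.backward()  # gradients flow back into whatever produced the features
```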

build/lib/dader/metrics/js.py

Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
import torch


def KL_divergence(p, q):
    """KL(p || q) for two discrete probability distributions given as tensors."""
    d = p/q
    d = torch.log(d)
    d = p*d
    return torch.sum(d)


def JS_divergence(p, q):
    """Jensen-Shannon divergence: 0.5*KL(p||M) + 0.5*KL(q||M), with M = (p+q)/2."""
    M = (p+q)/2
    return 0.5*KL_divergence(p, M) + 0.5*KL_divergence(q, M)
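A small usage sketch (not part of the commit); the inputs are assumed to be probability vectors, e.g. softmax outputs, since `KL_divergence` divides and takes a log element-wise:

```python
import torch
from dader.metrics.js import JS_divergence

p = torch.softmax(torch.randn(10), dim=0)
q = torch.softmax(torch.randn(10), dim=0)
print(JS_divergence(p, q))  # non-negative scalar; 0 when p == q
```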
