Commit b586772 (0 parents): 88 changed files with 4,155 additions and 0 deletions.
@@ -0,0 +1,2 @@
# Auto detect text files and perform LF normalization
* text=auto
@@ -0,0 +1,21 @@
MIT License

Copyright (c) [year] [fullname]

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,50 @@
## DADER: Domain Adaptation for Deep Entity Resolution

![python](https://img.shields.io/badge/python-3.6.5-blue)
![Pytorch](https://img.shields.io/badge/Pytorch-1.7.1-red)

Entity resolution (ER) is a core problem of data integration. The state-of-the-art (SOTA) results on ER are achieved by deep learning (DL) based methods, which are trained on large numbers of labeled matching/non-matching entity pairs. This is not an issue with well-prepared benchmark datasets, but in many real-world ER applications collecting large-scale labeled data is painful. In this work we ask: given a well-labeled source ER dataset, can we train a DL-based ER model for a target dataset with few or even no target labels? This is the setting of domain adaptation (DA), which has achieved great success in computer vision and natural language processing but has not been systematically studied for ER. Our goal is to explore the benefits and limitations of a wide range of DA methods for ER. To this end, we develop DADER (Domain Adaptation for Deep Entity Resolution), a framework that organizes DA for ER into three modules: Feature Extractor, Matcher, and Feature Aligner. We define a space of design solutions for these modules, conduct the most comprehensive experimental study to date comparing DA choices for ER, and provide guidance for selecting appropriate designs based on extensive experiments.

<!-- <img src="figure/architecture.png" width="820" /> -->

This repository contains the implementation of six representative DA methods in DADER: MMD, K-order, GRL, InvGAN, InvGAN+KD, and ED.

<!-- <img src="figure/designspace.png" width="700" /> -->

## DataSets
The dataset format is `<entity1, entity2, label>`. See [Hugging Face](https://huggingface.co/datasets/RUC-DataLab/ER-dataset) for details. An illustrative example of the layout is sketched below.

<!-- <img src="figure/dataset.png" width="700" /> -->
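As a sketch of the expected layout (the attribute names and rows below are made up for illustration): the left entity's attributes come first, the right entity's attributes follow, and the label is the last column, with header names carrying a two-character left/right prefix such as `l_`/`r_`, mirroring how the header is parsed in `dataset.py`.

```csv
l_title,l_brand,r_title,r_brand,label
iphone 7 plus 128gb,apple,iphone 7+ 128 gb,apple,1
galaxy s8 64gb,samsung,iphone 7 plus 128gb,apple,0
```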

## Quick Start
Step 1: Requirements
- Before running the code, please make sure you are using Python 3.6.5 and CUDA 11.1. Then install the necessary packages with:
- `pip install dader`
- If PyTorch is not installed automatically, you can install it with the following command:
- `pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html`

Step 2: Run Example

```python
#!/usr/bin/env python3
from dader import data, model

# load the labeled source dataset and the (mostly unlabeled) target dataset
X_src, y_src = data.load_data(path='source.csv')
X_tgt, X_tgt_val, y_tgt, y_tgt_val = data.load_data(path='target.csv', valid_rate = 0.1)

# build the model: InvGAN+KD feature aligner on top of a BERT feature extractor
aligner = model.Model(method = 'invgankd', architecture = 'Bert')
# train on the source data and adapt to the target data
aligner.fit(X_src, y_src, X_tgt, X_tgt_val, y_tgt_val, batch_size = 16, ada_max_epoch=20)
# predict labels for the target pairs
y_prd = aligner.predict(X_tgt)
# evaluate the predictions
eval_result = aligner.eval(X_tgt, y_prd, y_tgt)
```
@@ -0,0 +1,3 @@
VERSION = (0, 0, 4)

__version__ = '.'.join(map(str, VERSION))
@@ -0,0 +1,9 @@
from .dataset import load_data
from .process import get_data_loader, get_data_loader_ED
from .process import convert_examples_to_features, convert_examples_to_features_ED

__all__ = [
    'load_data', 'get_data_loader', 'convert_examples_to_features',
    'get_data_loader_ED', 'convert_examples_to_features_ED'
]
@@ -0,0 +1,24 @@
import csv
import pandas as pd

def read_csv(input_file, quotechar='"'):
    """Reads a comma separated value file."""
    with open(input_file, "r") as f:
        reader = csv.reader(f, quotechar=quotechar)
        lines = []
        for line in reader:
            lines.append(line)
        return lines

def read_tsv(input_file, quotechar=None):
    """Reads a tab separated value file."""
    with open(input_file, "r") as f:
        reader = csv.reader(f, delimiter="\t", quotechar=quotechar)
        lines = []
        for line in reader:
            lines.append(line)
        return lines

def norm(s):
    """Strip commas and quotes so a record can be serialized as plain text."""
    s = s.replace(",", " ").replace("\'", "").replace("\"", "")
    return s
@@ -0,0 +1,49 @@
import numpy as np
from sklearn.model_selection import train_test_split
from ._utils import read_csv, read_tsv, norm


def file2list(path, use_attri):
    data = read_csv(path)
    pairs = []
    labels = [0] * (len(data) - 1)
    length = len(data[0])
    mid = int(length / 2)
    # if the number of columns is odd, the last column is the label
    if length % 2 == 1:
        labels = [int(x[length - 1]) for x in data[1:]]
    # strip the two-character left/right prefix from the header names
    attri = [x[2:] for x in data[0][:mid]]
    if use_attri:
        attri = [attri.index(x) for x in use_attri]
    else:
        attri = [i for i in range(mid)]

    mid = int(length / 2)
    for x in data[1:]:
        str1 = ""
        str2 = ""
        for j in attri:
            str1 = str1 + x[j]
            str2 = str2 + x[mid + j]

        # serialize the pair as "left entity [SEP] right entity"
        pair = str1 + " [SEP] " + str2
        pairs.append(norm(pair))

    print("****** Data Example ****** ")
    print("Entity pairs: ", pairs[:10])
    print("Label: ", labels[:10])
    return pairs, labels


def load_data(path, use_attri=None, valid_rate=None):
    # read data from path: each row is [left attributes ..., right attributes ..., label]
    pairs, labels = file2list(path, use_attri)

    # split into train/valid
    if valid_rate:
        train_x, valid_x, train_y, valid_y = train_test_split(pairs, labels,
                                                              test_size=valid_rate,
                                                              stratify=labels,
                                                              random_state=0)
        return train_x, valid_x, train_y, valid_y
    else:
        return pairs, labels
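A minimal usage sketch for `load_data` as defined above (the file name and attribute names are hypothetical; `use_attri` restricts serialization to the listed attributes, and `valid_rate` returns a stratified train/validation split):

```python
from dader import data

# hypothetical file and attribute names, for illustration only
pairs, labels = data.load_data(path='source.csv', use_attri=['title', 'brand'])

# with valid_rate set, a stratified split is returned instead
train_x, valid_x, train_y, valid_y = data.load_data(path='target.csv', valid_rate=0.1)
```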
@@ -0,0 +1,154 @@
import torch
from torch.utils.data import DataLoader, TensorDataset, RandomSampler, SequentialSampler
from tqdm.notebook import tqdm

class InputFeatures(object):
    """A single set of features of data."""
    def __init__(self, input_ids=None, input_mask=None, segment_ids=None, label_id=None, exm_id=None):
        self.input_ids = input_ids
        self.input_mask = input_mask
        self.segment_ids = segment_ids
        self.label_id = label_id
        self.exm_id = exm_id

class InputFeaturesED(object):
    """A single set of features of data for ED."""
    def __init__(self, input_ids, attention_mask, label_id):
        self.input_ids = input_ids
        self.attention_mask = attention_mask
        self.label_id = label_id

def convert_examples_to_features(pairs, labels, max_seq_length, tokenizer,
                                 cls_token='[CLS]', sep_token='[SEP]', pad_token=0):
    """Tokenize serialized entity pairs into fixed-length BERT-style features."""
    features = []
    if labels is None:
        labels = [0] * len(pairs)
    datazip = list(zip(pairs, labels))
    for i in range(len(datazip)):
        (pair, label) = datazip[i]
        # [CLS] seq1 [SEP] seq2 [SEP]
        if sep_token in pair:
            left = pair.split(sep_token)[0]
            right = pair.split(sep_token)[1]
            ltokens = tokenizer.tokenize(left)
            rtokens = tokenizer.tokenize(right)
            more = len(ltokens) + len(rtokens) - max_seq_length + 3
            if more > 0:
                if more < len(rtokens):  # remove the excess from the right side first
                    rtokens = rtokens[:(len(rtokens) - more)]
                elif more < len(ltokens):
                    ltokens = ltokens[:(len(ltokens) - more)]
                else:
                    print("The sequence is too long, please increase ``max_seq_length``!")
                    continue
            tokens = [cls_token] + ltokens + [sep_token] + rtokens + [sep_token]
            segment_ids = [0] * (len(ltokens) + 2) + [1] * (len(rtokens) + 1)
        # [CLS] seq1 [SEP]
        else:
            tokens = tokenizer.tokenize(pair)
            if len(tokens) > max_seq_length - 2:
                tokens = tokens[:(max_seq_length - 2)]
            tokens = [cls_token] + tokens + [sep_token]
            segment_ids = [0] * (len(tokens))
        input_ids = tokenizer.convert_tokens_to_ids(tokens)
        input_mask = [1] * len(input_ids)
        padding_length = max_seq_length - len(input_ids)
        input_ids = input_ids + ([pad_token] * padding_length)
        input_mask = input_mask + ([0] * padding_length)
        segment_ids = segment_ids + ([0] * padding_length)

        assert len(input_ids) == max_seq_length
        assert len(input_mask) == max_seq_length
        assert len(segment_ids) == max_seq_length

        features.append(
            InputFeatures(input_ids=input_ids,
                          input_mask=input_mask,
                          segment_ids=segment_ids,
                          label_id=label,
                          exm_id=i))
    return features

def convert_examples_to_features_ED(pairs, labels, max_seq_length, tokenizer,
                                    pad_token=0, cls_token='<s>', sep_token='</s>'):
    """Tokenize serialized entity pairs into features for the reconstruction-based (ED) method."""
    features = []
    if labels is None:
        labels = [0] * len(pairs)
    for ex_index, (pair, label) in enumerate(zip(pairs, labels)):
        if (ex_index + 1) % 200 == 0:
            print("writing example %d of %d" % (ex_index + 1, len(pairs)))
        if sep_token in pair:
            left = pair.split(sep_token)[0]
            right = pair.split(sep_token)[1]
            ltokens = tokenizer.tokenize(left)
            rtokens = tokenizer.tokenize(right)
            more = len(ltokens) + len(rtokens) - max_seq_length + 3
            if more > 0:
                if more < len(rtokens):  # remove the excess from the right side first
                    rtokens = rtokens[:(len(rtokens) - more)]
                elif more < len(ltokens):
                    ltokens = ltokens[:(len(ltokens) - more)]
                else:
                    print("The sequence is too long, please increase ``max_seq_length``!")
                    continue
            tokens = [cls_token] + ltokens + [sep_token] + rtokens + [sep_token]
            segment_ids = [0] * (len(ltokens) + 2) + [1] * (len(rtokens) + 1)
        else:
            tokens = tokenizer.tokenize(pair)
            if len(tokens) > max_seq_length - 2:
                tokens = tokens[:(max_seq_length - 2)]
            tokens = [cls_token] + tokens + [sep_token]
            segment_ids = [0] * (len(tokens))

        input_ids = tokenizer.convert_tokens_to_ids(tokens)
        input_mask = [1] * len(input_ids)
        padding_length = max_seq_length - len(input_ids)
        input_ids = input_ids + ([pad_token] * padding_length)
        input_mask = input_mask + ([0] * padding_length)
        features.append(InputFeaturesED(input_ids=input_ids,
                                        attention_mask=input_mask,
                                        label_id=label
                                        ))
    return features

def get_data_loader(features, batch_size, is_train=0):
    """Build a DataLoader over the padded feature tensors."""
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
    all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
    all_segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)
    all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long)
    all_exm_ids = torch.tensor([f.exm_id for f in features], dtype=torch.long)
    dataset = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids, all_exm_ids)

    if is_train:
        # drop the last incomplete batch during training
        # sampler = RandomSampler(dataset)
        sampler = SequentialSampler(dataset)
        dataloader = DataLoader(dataset, sampler=sampler, batch_size=batch_size, drop_last=True)
    else:
        # read all data
        sampler = SequentialSampler(dataset)
        dataloader = DataLoader(dataset, sampler=sampler, batch_size=batch_size)
    return dataloader

def get_data_loader_ED(features, batch_size, is_train=0):
    """Data loader for the reconstruction-based (ED) method."""
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
    all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
    all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long)
    dataset = TensorDataset(all_input_ids, all_attention_mask, all_label_ids)

    if is_train:
        # drop the last incomplete batch during training
        sampler = RandomSampler(dataset)
        dataloader = DataLoader(dataset, sampler=sampler, batch_size=batch_size, drop_last=True)
    else:
        # read all data
        sampler = SequentialSampler(dataset)
        dataloader = DataLoader(dataset, sampler=sampler, batch_size=batch_size)
    return dataloader
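A minimal sketch of wiring these helpers to a Hugging Face tokenizer (the model name, the example pair, and `max_seq_length` are illustrative assumptions, not values fixed by this commit):

```python
from transformers import BertTokenizer
from dader.data import convert_examples_to_features, get_data_loader

# assumption: a standard BERT tokenizer and 128-token sequences
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
pairs = ["iphone 7 plus 128gb apple [SEP] iphone 7+ 128 gb apple"]
labels = [1]

features = convert_examples_to_features(pairs, labels, max_seq_length=128, tokenizer=tokenizer)
loader = get_data_loader(features, batch_size=16, is_train=0)
for input_ids, input_mask, segment_ids, label_ids, exm_ids in loader:
    print(input_ids.shape)  # torch.Size([1, 128])
```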
@@ -0,0 +1,25 @@
#!/usr/bin/env python
# encoding: utf-8

import torch

def cal_coral_loss(source, target):
    """CORAL loss: squared Frobenius distance between source and target feature covariances."""
    batch_size = int(source.size()[0])
    dim = int(source.size()[1])
    source_T = torch.transpose(source, 0, 1)
    target_T = torch.transpose(target, 0, 1)
    # second moments of the two batches
    cov_s = (1 / (batch_size - 1)) * torch.mm(source_T, source)
    cov_t = (1 / (batch_size - 1)) * torch.mm(target_T, target)
    # column sums, used to subtract the mean contribution (note: assumes CUDA tensors)
    mean_s = torch.mm(torch.ones(1, batch_size).cuda(), source)
    mean_t = torch.mm(torch.ones(1, batch_size).cuda(), target)
    square_mean_s = (1 / (batch_size * (batch_size - 1))) * torch.mm(torch.transpose(mean_s, 0, 1), mean_s)
    square_mean_t = (1 / (batch_size * (batch_size - 1))) * torch.mm(torch.transpose(mean_t, 0, 1), mean_t)
    cov_s = cov_s - square_mean_s
    cov_t = cov_t - square_mean_t
    coral_loss = 1 / (4 * dim * dim) * (torch.sum((cov_s - cov_t) ** 2))
    return coral_loss
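A minimal usage sketch (the batch size and feature dimension are illustrative; as written, the helper expects two CUDA feature batches of the same size):

```python
import torch

# illustrative shapes: 16 source and 16 target feature vectors of dimension 768
src_feat = torch.randn(16, 768).cuda()
tgt_feat = torch.randn(16, 768).cuda()
loss = cal_coral_loss(src_feat, tgt_feat)  # scalar alignment loss
```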
@@ -0,0 +1,12 @@
import torch

def KL_divergence(p, q):
    """KL(p || q) for two already normalized, strictly positive distributions."""
    d = p / q
    d = torch.log(d)
    d = p * d
    return torch.sum(d)


def JS_divergence(p, q):
    """Jensen-Shannon divergence: symmetrized KL against the mixture M = (p + q) / 2."""
    M = (p + q) / 2
    return 0.5 * KL_divergence(p, M) + 0.5 * KL_divergence(q, M)
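A quick illustrative check (the distributions are arbitrary; both helpers assume the inputs are already normalized and strictly positive):

```python
import torch

p = torch.tensor([0.2, 0.5, 0.3])
q = torch.tensor([0.1, 0.6, 0.3])
print(KL_divergence(p, q))  # small positive scalar; note KL is asymmetric
print(JS_divergence(p, q))  # symmetric and bounded above by log(2)
```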