
PR for review on slim version. #4

Open · wants to merge 26 commits into base: master

Changes from all commits · 26 commits
56a9de5
Script with Regularization added.
cramraj8 Sep 13, 2017
1134025
delete reg.py
cramraj8 Sep 18, 2017
768952e
TF/Slim converted deep regression model
cramraj8 Sep 18, 2017
d850fd6
changed file name from default to .py
cramraj8 Sep 18, 2017
1bc7d43
Squeezed the DNN architecture code block
cramraj8 Sep 18, 2017
9d978f2
deleting initial main script
cramraj8 Sep 19, 2017
bc9bf14
New files added (deep_regression.py)
cramraj8 Sep 19, 2017
97612fb
Changed the data loading format to .CSV from .mat
cramraj8 Sep 19, 2017
5fc9c01
Data sets added
cramraj8 Sep 19, 2017
3c0849a
decompose this module
cramraj8 Sep 29, 2017
ffd45a4
Added decomposed modules
cramraj8 Sep 29, 2017
00c97b1
removed unnecessary module
cramraj8 Sep 29, 2017
f280ade
deleted .mat file because we have .csv files
cramraj8 Sep 29, 2017
7fc124f
Updated the fully-slim version.
cramraj8 Oct 14, 2017
c6cf378
visualize is not necessary in slim-version
cramraj8 Oct 14, 2017
98d6b99
Updated data_providers script
cramraj8 Oct 14, 2017
1910ed5
No need for plot image
cramraj8 Oct 14, 2017
df92e45
Added tf.batch.train function
cramraj8 Oct 15, 2017
4cd8ac6
created write_tfrecord_demo script
cramraj8 Oct 16, 2017
6f99fc7
created write_tfrecord script
cramraj8 Oct 16, 2017
e7c306e
created read_tf.record_demo script
cramraj8 Oct 16, 2017
70a40c3
created read_tf.record script
cramraj8 Oct 16, 2017
2b4731b
Made changes to slim_gflags
cramraj8 Oct 18, 2017
a8a233d
deleted the script with strange name
cramraj8 Oct 18, 2017
4f8230e
Made only one main script there
cramraj8 Oct 18, 2017
19dd16b
added model architecture
cramraj8 Oct 19, 2017
Binary file removed Brain_Integ.mat
1 change: 1 addition & 0 deletions Brain_Integ_X.csv


1 change: 1 addition & 0 deletions Brain_Integ_Y.csv
@@ -0,0 +1 @@
Censored,Survival0,1440,3931,4700,2110,6910,10240,3940,4240,3180,4690,4050,3840,7171,13221,14051,2180,12290,6320,8140,4140,4660,330,3800,1500,3821,9321,1810,5060,1331,9580,3570,1080,2540,1381,2681,2731,2600,830,1141,1871,1390,1591,2370,1640,450,36670,2241,2531,3911,1451,471,1451,1510,5480,1110,4630,3330,10620,4420,4540,620,410,880,8800,5430,1190,7721,2940,1480,1240,2130,2020,5230,2440,5750,1440,3680,540,4280,5110,4551,9531,2861,4521,1671,2411,710,1540,5050,3000,7370,12330,9140,7530,4800,771,370,5440,7130,4680,1420,3860,5150,1130,3830,5851,2720,2900,300,1350,3600,60,4600,3790,1051,6361,1321,4361,2280,1380,940,1110,2790,1000,820,5190,1650,2540,1470,1530,1140,7270,10480,5671,11010,491,35740,13350,11061,18281,12221,81,16310,1940,5380,8141,15681,14281,15191,14941,14261,37331,24931,22891,14581,22180,15471,27721,13541,11121,23811,14121,5731,9191,14211,12101,11391,10211,22191,9621,7360,7751,9161,1821,9551,8351,8461,8890,4381,5631,3681,3421,5441,9351,4920,6071,9681,19431,14011,7481,12791,8461,5231,5231,7061,5481,4431,5310,2051,4071,2571,2071,55461,5761,5321,4611,64230,15850,6820,14910,22860,14010,20000,370,26600,2140,3540,40680,6050,5120,28750,34701,2421,730,3490,19151,47520,46950,4560,32000,1990,2690,35711,27610,9331,8000,3150,2421,28690,5760,18860,7881,18340,7091,12771,840,1550,7220,4441,6291,6511,5821,5161,4711,4311,10121,9081,9641,7431,7961,7600,10331,7771,6561,6780,3470,8141,5220,6480,5471,25651,15670,2280,13510,11831,170,15250,20520,12510,24330,17620,40840,23790,29070,39780,28350,1110,5920,4920,6480,7270,15780,4660,9870,11200,44450,3980,22350,9841,12011,71,61,14530,7751,15880,11201,5711,4541,12941,12571,4671,5441,6861,6271,6111,8621,4161,4331,4871,6111,4551,6851,9921,11641,3260,5591,5691,7211,4491,6551,4571,6771,8681,3281,11151,6221,5671,1741,31,2030,3510,12201,41,11,5031,1141,5301,1991,7381,1941,12271,22871,29181,32531,52551,1221,1531,9081,7051,37251,1391,5260,961,6711,13991,17061,17520,71,4341,51,4941,231,13591,1841,13001,31,9641,4551,31,31,901,5331,2101,4421,12011,5851,11891,71,15401,151,7181,11301,101,41,1121,1621,10790,1131,3431,9561,501,4941,2860,19331,30,11521,10041,3721,1901,31,141,1691,4191,71,9001,10401,4781,7921,41,6151,21,31,821,5151,551,4911,5121,6511,5331,741,3201,3361,31,17211,13820,12621,13871,20781,21071,10780,12091,12171,12941,7580,9620,3881,6300,5781,13011,4381,01,01,71,2041,2431,721,701,2871,1791,2421,2311,391,71,3171,4031,3541,3331,2741,4141,2921,2301,3721,5841,1051,721,1341,771,3841,4821,4271,5881,4511,4421,4140,3721,4081,2791,4421,4971,5091,581,5021,5081,3371,3951,27021,28601,9931,9391,1661,3011,631,5960,18911,26501,5711,12451,10760,7421,7851,9921,10691,29181,22891,41131,26021,11731,8051,5671,4281,10990,2411,7151,6081,6091,2491,621,5641,8260,51661,9140,9611,15001,5661,4541,6501,5991,4861,13411,10261,7951,7871,-11,18680,8210,9541,13971,13141,4111,30,2401,11911,7141,11271,4871,7061,15531,13931,14261,1147
52 changes: 52 additions & 0 deletions data_providers.py
@@ -0,0 +1,52 @@
# -*- coding: utf-8 -*-

import numpy as np
import pandas as pd


DATA_FILE = ['./Brain_Integ_X.csv', './Brain_Integ_Y.csv']


def data_providers(data_file=['./Brain_Integ_X.csv', './Brain_Integ_Y.csv']):
    """Reads the feature and label datasets.

    This function reads the feature and label datasets separately and
    filters out only the dead patients' observations for further
    processing.

    Args:
        data_file: A list of two strings giving the paths of the input
            files; the input features and the ground-truth values are
            provided separately.

    Returns:
        A tuple of `numpy` arrays: the extracted feature matrix and the
        label column.

    Example:
        >>> data_providers()
        ([[2.3, 2.4, 6.5], [2.3, 5.4, 3.3]], [12, 82])
    """
    data_feed = pd.read_csv(data_file[0], skiprows=[0], header=None)
    labels_feed = pd.read_csv(data_file[1], skiprows=[1], header=0)
    survival = labels_feed['Survival']
    censored = labels_feed['Censored']

    survival = survival.values
    censored = censored.values
    data = data_feed.values
    data = np.float32(data)

    # Filter to only the dead patients for survival analysis.
    censored_survival = survival[censored == 1]
    censored_data = data[censored == 1]

    y = np.asarray(censored_survival)
    x = np.asarray(censored_data)

    print('Shape of X : ', x.shape)
    print('Shape of Y : ', y.shape)
    return (x, y)


if __name__ == '__main__':
    data_x, data_y = data_providers(DATA_FILE)
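Reviewer note (not part of this diff): a minimal sketch of how the arrays returned by data_providers() could feed the queue-based batching referenced by the "Added tf.batch.train function" commit. The batch size and shuffle flag below are illustrative assumptions.

# Hypothetical usage sketch: batching data_providers() output with
# TF 1.x queue runners. `batch_size` is an assumed hyperparameter.
import tensorflow as tf
from data_providers import data_providers

data_x, data_y = data_providers()

# Slice the full arrays into single examples and rebatch them.
x_single, y_single = tf.train.slice_input_producer(
    [tf.constant(data_x), tf.constant(data_y, dtype=tf.float32)],
    shuffle=True)
x_batch, y_batch = tf.train.batch([x_single, y_single], batch_size=32)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    xb, yb = sess.run([x_batch, y_batch])
    print(xb.shape, yb.shape)  # (32, n_features) and (32,)
    coord.request_stop()
    coord.join(threads)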
42 changes: 42 additions & 0 deletions dnn_model.py
@@ -0,0 +1,42 @@
# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function
import numpy as np
import tensorflow as tf


slim = tf.contrib.slim


def multilayer_nn_model(inputs, hidden_layers, n_classes, beta,
                        scope="deep_regression_model"):
    """Creates a deep regression model.

    This function takes the parameters required to build a deep neural
    network and constructs it layer by layer. When the resulting graph
    is fed with inputs, it performs the feed-forward pass and returns
    the output-layer responses.

    Args:
        inputs: A node that yields a `Tensor` of size
            [total_observations, input_features].
        hidden_layers: A list of ints giving the number of units in
            each hidden fully connected layer.
        n_classes: An int, the number of output units (1 for scalar
            regression).
        beta: A float, the L2 regularization strength applied to the
            layer weights.
        scope: An optional string, the variable scope of the model.

    Returns:
        predictions: A `Tensor` of shape [total_observations, n_classes]
            holding the regression responses.
        end_points: A dict of end points representing the hidden layers.
    """
    with tf.variable_scope(scope, 'deep_regression', [inputs]):
        end_points = {}
        with slim.arg_scope([slim.fully_connected],
                            activation_fn=tf.nn.relu,
                            weights_regularizer=slim.l2_regularizer(beta)):
            net = slim.stack(inputs,
                             slim.fully_connected,
                             hidden_layers,
                             scope='fc')
            end_points['fc'] = net
            predictions = slim.fully_connected(net, n_classes,
                                               activation_fn=None,
                                               scope='prediction')
            end_points['out'] = predictions
            return predictions, end_points
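Reviewer note (not part of this diff): a minimal sketch of wiring multilayer_nn_model into a training graph. The feature count, layer sizes, beta, and learning rate are assumptions, not values from this PR.

# Hypothetical usage sketch; shapes and hyperparameters are assumed.
import tensorflow as tf
from dnn_model import multilayer_nn_model

n_features = 399  # assumed input feature count
inputs = tf.placeholder(tf.float32, shape=[None, n_features])
labels = tf.placeholder(tf.float32, shape=[None, 1])

predictions, end_points = multilayer_nn_model(
    inputs, hidden_layers=[500, 500, 500], n_classes=1, beta=0.01)

# The L2 penalty created by slim.l2_regularizer(beta) lands in the
# REGULARIZATION_LOSSES collection and must be added to the data loss.
mse = tf.losses.mean_squared_error(labels, predictions)
total_loss = mse + tf.losses.get_regularization_loss()
train_op = tf.train.GradientDescentOptimizer(0.001).minimize(total_loss)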
57 changes: 57 additions & 0 deletions read_record.py
@@ -0,0 +1,57 @@
# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function
import tensorflow as tf
import numpy as np


TFRECORD_FILE = "data.tfrecords"


def read_tfrecord(tfrecords_filename="data.tfrecords"):
    """Reads a TFRecord file.

    This function takes the protobuf file name as input. Given the
    feature keys along with their data types, it retrieves the
    corresponding values from the byte file in a loop. Finally, it
    converts those strings back to the desired data type (here,
    `np.float32` arrays).

    Args:
        tfrecords_filename: The TFRecord file name, including its path.

    Returns:
        predictors: A list of `np.float32` numpy arrays forming the
            (M x N) feature matrix.
        gnd_truths: A list of `np.float32` numpy arrays forming the
            (M x 1) vector of observed survival values.
    """
    predictors = []
    gnd_truths = []
    record_iterator = tf.python_io.tf_record_iterator(path=tfrecords_filename)

    for element in record_iterator:
        example = tf.train.Example()
        example.ParseFromString(element)

        predictor_string = (example.features.feature['predictor_string']
                            .bytes_list
                            .value[0])
        gnd_truth_string = (example.features.feature['gnd_truth_string']
                            .bytes_list
                            .value[0])

        predictor = np.fromstring(predictor_string, dtype=np.float32)
        gnd_truth = np.fromstring(gnd_truth_string, dtype=np.float32)

        predictors.append(predictor)
        gnd_truths.append(gnd_truth)

    return predictors, gnd_truths


if __name__ == '__main__':
    predictors, gnd_truths = read_tfrecord(TFRECORD_FILE)
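Reviewer note (not part of this diff): the write_tfrecord script from the commit list is not shown here. Below is a minimal write-side sketch inferred from the feature keys read_tfrecord() expects; the raw-float32-bytes serialization mirrors the np.fromstring decoding above, but this is an assumption about the actual script, not a copy of it.

# Hypothetical write-side sketch inferred from read_record.py's
# feature keys ('predictor_string', 'gnd_truth_string').
import numpy as np
import tensorflow as tf


def write_tfrecord(x, y, tfrecords_filename="data.tfrecords"):
    writer = tf.python_io.TFRecordWriter(tfrecords_filename)
    for row, label in zip(x, y):
        # Store each float32 row/label as raw bytes so that
        # np.fromstring(..., dtype=np.float32) can decode them back.
        feature = {
            'predictor_string': tf.train.Feature(
                bytes_list=tf.train.BytesList(
                    value=[np.asarray(row, dtype=np.float32).tostring()])),
            'gnd_truth_string': tf.train.Feature(
                bytes_list=tf.train.BytesList(
                    value=[np.asarray(label, dtype=np.float32).tostring()])),
        }
        example = tf.train.Example(
            features=tf.train.Features(feature=feature))
        writer.write(example.SerializeToString())
    writer.close()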
63 changes: 63 additions & 0 deletions read_record_demo.py
@@ -0,0 +1,63 @@
# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function
import tensorflow as tf
import numpy as np


TFRECORD_FILE = "data.tfrecords"


def read_tfrecord(tfrecords_filename="data.tfrecords"):
    """Reads a TFRecord file, printing each record as it is decoded.

    This function takes the protobuf file name as input. Given the
    feature keys along with their data types, it retrieves the
    corresponding values from the byte file in a loop. Finally, it
    converts those strings back to the desired data type (here,
    `np.float32` arrays).

    Args:
        tfrecords_filename: The TFRecord file name, including its path.

    Returns:
        predictors: A list of `np.float32` numpy arrays forming the
            (M x N) feature matrix.
        gnd_truths: A list of `np.float32` numpy arrays forming the
            (M x 1) vector of observed survival values.
    """
    predictors = []
    gnd_truths = []
    record_iterator = tf.python_io.tf_record_iterator(path=tfrecords_filename)

    for element in record_iterator:
        example = tf.train.Example()
        example.ParseFromString(element)

        predictor_string = (example.features.feature['predictor_string']
                            .bytes_list
                            .value[0])
        gnd_truth_string = (example.features.feature['gnd_truth_string']
                            .bytes_list
                            .value[0])

        predictor = np.fromstring(predictor_string, dtype=np.float32)
        gnd_truth = np.fromstring(gnd_truth_string, dtype=np.float32)

        print('predictor : ', predictor)
        print('gnd_truth : ', gnd_truth)

        predictors.append(predictor)
        gnd_truths.append(gnd_truth)

    return predictors, gnd_truths


if __name__ == '__main__':
    predictors, gnd_truths = read_tfrecord(TFRECORD_FILE)

    # print('predictors np.array : ', predictors)
    # print('gnd_truth np.array : ', gnd_truths)
161 changes: 0 additions & 161 deletions reg.py

This file was deleted.
