#!/usr/bin/expect -f
#
# This Expect script was generated by autoexpect on Thu Jul 11 03:47:50 2019
# Expect and autoexpect were both written by Don Libes, NIST.
#
# Note that autoexpect does not guarantee a working script. It
# necessarily has to guess about certain things. Two reasons a script
# might fail are:
#
# 1) timing - A surprising number of programs (rn, ksh, zsh, telnet,
# etc.) and devices discard or ignore keystrokes that arrive "too
# quickly" after prompts. If you find your new script hanging up at
# one spot, try adding a short sleep just before the previous send.
# Setting "force_conservative" to 1 (see below) makes Expect do this
# automatically - pausing briefly before sending each character. This
# pacifies every program I know of. The -c flag makes the script do
# this in the first place. The -C flag allows you to define a
# character to toggle this mode off and on.
set force_conservative 0  ;# set to 1 to force conservative mode even if
			  ;# script wasn't run conservatively originally
if {$force_conservative} {
	set send_slow {1 .1}
	proc send {ignore arg} {
		sleep .1
		exp_send -s -- $arg
	}
}
#
# 2) differing output - Some programs produce different output each time
# they run. The "date" command is an obvious example. Another is
# ftp, if it produces throughput statistics at the end of a file
# transfer. If this causes a problem, delete these patterns or replace
# them with wildcards. An alternative is to use the -p flag (for
# "prompt") which makes Expect only look for the last line of output
# (i.e., the prompt). The -P flag allows you to define a character to
# toggle this mode off and on.
#
# Read the man page for more info.
#
# -Don
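#
# As a sketch of point 2 above (this comment is not part of the generated
# script): the exact-match on the Bazel banner below will break as soon as
# the installed Bazel version differs from 0.15.0. A looser glob pattern
# that anchors only on the prompt text, e.g.
#
#   expect "Please specify the location of python.*: "
#
# would match regardless of the version string printed beforehand.
#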
set timeout -1
spawn ./configure
match_max 100000
expect -exact "Extracting Bazel installation...\r
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command \"bazel shutdown\".\r\r
You have bazel 0.15.0- (@non-git) installed.\r
Please specify the location of python. \[Default is /usr/bin/python\]: "
send -- "/usr/bin/python3\r"
expect "Please input the desired Python library path to use."
send -- "\r"
expect -exact "\r
Do you wish to build TensorFlow with Apache Ignite support? \[Y/n\]: "
send -- "n\r"
expect -exact "n\r
No Apache Ignite support will be enabled for TensorFlow.\r
\r
Do you wish to build TensorFlow with XLA JIT support? \[Y/n\]: "
send -- "n\r"
expect -exact "n\r
No XLA JIT support will be enabled for TensorFlow.\r
\r
Do you wish to build TensorFlow with OpenCL SYCL support? \[y/N\]: "
send -- "\r"
expect -exact "\r
No OpenCL SYCL support will be enabled for TensorFlow.\r
\r
Do you wish to build TensorFlow with ROCm support? \[y/N\]: "
send -- "\r"
expect -exact "\r
No ROCm support will be enabled for TensorFlow.\r
\r
Do you wish to build TensorFlow with CUDA support? \[y/N\]: "
send -- "y\r"
expect -exact "y\r
CUDA support will be enabled for TensorFlow.\r
\r
Please specify the CUDA SDK version you want to use. \[Leave empty to default to CUDA 9.0\]: "
send -- "10.0\r"
expect -exact "10.0\r
\r
\r
Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. \[Default is /usr/local/cuda\]: "
send -- "\r"
expect -exact "\r
\r
\r
Please specify the cuDNN version you want to use. \[Leave empty to default to cuDNN 7\]: "
send -- "\r"
expect -exact "\r
\r
\r
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. \[Default is /usr/local/cuda\]: "
send -- "\r"
expect -exact "\r
\r
\r
Do you wish to build TensorFlow with TensorRT support? \[y/N\]: "
send -- "\r"
expect -exact "\r
No TensorRT support will be enabled for TensorFlow.\r
\r
Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. \[Default is 2.2\]: "
send -- "\r"
expect -exact "\r
\r
\r
NCCL libraries found in /usr/lib/powerpc64le-linux-gnu/libnccl.so\r
This looks like a system path.\r
Assuming NCCL header path is /usr/include\r
Please specify a list of comma-separated Cuda compute capabilities you want to build with.\r
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.\r
Please note that each additional compute capability significantly increases your build time and binary size. \[Default is: 3.5,7.0\]: "
send -- "3.5,7.0,7.5\r"
expect -exact "3.5,7.0,7.5\r
\r
\r
Do you want to use clang as CUDA compiler? \[y/N\]: "
send -- "\r"
expect -exact "\r
nvcc will be used as CUDA compiler.\r
\r
Please specify which gcc should be used by nvcc as the host compiler. \[Default is /usr/bin/gcc\]: "
send -- "\r"
expect -exact "\r
\r
\r
Do you wish to build TensorFlow with MPI support? \[y/N\]: "
send -- "\r"
expect -exact "\r
No MPI support will be enabled for TensorFlow.\r
\r
Please specify optimization flags to use during compilation when bazel option \"--config=opt\" is specified \[Default is -mcpu=native\]: "
send -- "\r"
expect -exact "\r
\r
\r
Would you like to interactively configure ./WORKSPACE for Android builds? \[y/N\]: "
send -- "\r"
expect eof
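#
# Usage (a sketch, assuming the script is saved as script.exp in the
# TensorFlow source tree, next to the ./configure it spawns):
#
#   chmod +x script.exp
#   ./script.exp
#
# or, without the executable bit:
#
#   expect -f script.exp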