This documentation describes how to extract, transform, and load (ETL) candidate files for the VLA.
We assume the backend is Elasticsearch; follow the **ES Setup Instructions** below.
- Clone the git repository into a directory:
git clone https://github.com/shakeh/vla-data-etl.git
- Once you have Elasticsearch set up with the proper indexes, you can run the Python code to parse candidate pickle files. Pass your own filename after `-f`:
python parse_cands.py -f *filename*
python parse_cands.py -f cands_14A-425_14sep03_stats_merge.pkl
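The details of `parse_cands.py` live in the repository; as a rough sketch, a parser like this loads the pickle and pushes one document per candidate into the `cand` index created in the setup steps below. The pickle layout and field handling here are assumptions, not the script's actual structure, and the snippet assumes a pre-7.x `elasticsearch` Python client to match the typed URLs used throughout this guide:

```python
import argparse
import pickle

from elasticsearch import Elasticsearch  # pip install elasticsearch

def main():
    # Mirror the CLI shown above: the pickle filename follows -f.
    ap = argparse.ArgumentParser(description="Index candidate pickles into Elasticsearch")
    ap.add_argument("-f", "--filename", required=True)
    args = ap.parse_args()

    # ASSUMPTION: the pickle unpacks to an iterable of per-candidate dicts.
    with open(args.filename, "rb") as fh:
        cands = pickle.load(fh)

    es = Elasticsearch(["localhost:9200"])
    for i, cand in enumerate(cands):
        # One document per candidate in the 'cand' index.
        es.index(index="cand", doc_type="cand", id=i, body=cand)

if __name__ == "__main__":
    main()
```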
**ES Setup Instructions**

After following the installation guide on the Elasticsearch site (https://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html), continue with the setup below. Add these settings to `config/elasticsearch.yml`:
http.cors.enabled: true
http.cors.allow-origin: "*"  # IMPORTANT: restrict this when in operations
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: X-Requested-With, X-Auth-Token, Content-Type, Content-Length
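These CORS settings let a browser-based front end query Elasticsearch directly; leaving `allow-origin` as `"*"` is convenient for development but should be restricted in operations. Once the node is started (next step), you can confirm the settings took effect with a quick check like this sketch, which uses the `requests` library:

```python
import requests

# Send a request with an Origin header, as a browser would.
resp = requests.get(
    "http://localhost:9200/",
    headers={"Origin": "http://example.com"},
)
# With http.cors.enabled, Elasticsearch echoes back an allow-origin header.
print(resp.headers.get("Access-Control-Allow-Origin"))
```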
# Start Elasticsearch with a custom cluster name and node name
./elasticsearch --cluster.name my_cluster_name --node.name my_node_name
# Custom setup for the realfast (RF) Elasticsearch instance
./elasticsearch --cluster.name realfast --node.name candidate_data
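Once the node is up, it is worth confirming the cluster is reachable before creating any indexes. The `_cat/health` endpoint is standard; the snippet itself is just a sketch using `requests`:

```python
import requests

# _cat/health returns one row per cluster; ask for JSON instead of a table.
health = requests.get("http://localhost:9200/_cat/health",
                      params={"format": "json"}).json()
print(health[0]["status"])  # 'green' or 'yellow' means the node is usable
```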
# Create the 'cand' index, then load candidates into it
curl -XPUT 'localhost:9200/cand?pretty'
python parse_cands.py -f cands_14A-425_14sep03_stats_merge.pkl
To display everything in the index, open http://localhost:9200/cand/_search?q=*&pretty in a browser.
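The same query can be issued from Python, which is handy when checking that `parse_cands.py` actually populated the index. A sketch with `requests` (in pre-7.x responses, `hits.total` is a plain count):

```python
import requests

# Match-all query against the 'cand' index; ES returns the first 10 hits.
resp = requests.get("http://localhost:9200/cand/_search", params={"q": "*"})
hits = resp.json()["hits"]
print(hits["total"], "documents indexed")
for hit in hits["hits"]:
    print(hit["_id"], hit["_source"])
```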
Requests to Elasticsearch follow the pattern `curl -X<REST Verb> <Node>:<Port>/<Index>/<Type>/<ID>`, for example:
curl 'localhost:9200/_cat/nodes?v'   # list the nodes in the cluster
curl 'localhost:9200/_cat/health?v'  # check cluster health
# Create a 'customer' index
curl -XPUT 'localhost:9200/customer'
# Index a document with ID 1 under type 'external'
curl -XPUT 'localhost:9200/customer/external/1' -d '
{
  "name": "John Doe"
}'
# Retrieve the document
curl 'localhost:9200/customer/external/1'
# Delete the index
curl -XDELETE 'localhost:9200/customer'
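For reference, the same create/index/get/delete cycle translated to Python, as a sketch with the `requests` library (the index and document are the throwaway examples from above):

```python
import requests

base = "http://localhost:9200"

# Create the 'customer' index.
requests.put(base + "/customer")

# Index a document with explicit ID 1 under type 'external'.
requests.put(base + "/customer/external/1",
             json={"name": "John Doe"})

# Retrieve the document.
print(requests.get(base + "/customer/external/1").json())

# Delete the whole index.
requests.delete(base + "/customer")
```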