Commit 52c4d5a (parent 843e1f9): 98 files changed, +3631 -0 lines
# REST API for Administering Oracle NoSQL Database

The REST API for Administering Oracle NoSQL Database is configured when executing the makebootconfig utility.

`-admin-web-port <admin web service port>`

> The TCP/IP port on which the admin web service should be started. If not specified, the default port value is -1.
> If a positive integer is not specified for -admin-web-port, the admin web service does not start up along with the admin service.
> See REST API for Administering Oracle NoSQL Database.
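For example, a boot configuration that enables the admin web service might look like the sketch below. The paths, host name, and port values are illustrative only (chosen to match the curl examples in this guide); adapt them to your deployment.

```shell
# Sketch only: adjust -root, -host, and the ports to your environment.
# -admin-web-port 5999 matches the port used in the curl examples here.
java -jar $KVHOME/lib/kvstore.jar makebootconfig \
  -root /home/opc/nosql/kvroot \
  -host node1-nosql \
  -port 5000 \
  -harange 5010,5030 \
  -admin-web-port 5999
```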
All the admin commands presented in the previous sections can be executed via REST API calls. Below are some examples.

Show topology
````
curl -i -X POST "http://node1-nosql:5999/V0/nosql/admin/topology" -d '{"command":"show"}'
curl -i -X POST "http://node2-nosql:5999/V0/nosql/admin/topology" -d '{"command":"show"}'
curl -i -X POST "http://node3-nosql:5999/V0/nosql/admin/topology" -d '{"command":"show"}'
````
Verify configuration
````
curl -i -X POST "http://node1-nosql:5999/V0/nosql/admin/configuration" -d '{"command":"verify"}'
````

Ping
````
curl -i -X POST "http://node1-nosql:5999/V0/nosql/admin" -d '{"command":"ping"}'
````
# Backing Up the Store

To make backups of your KVStore, use the CLI snapshot command to copy nodes in the store.
To maintain consistency, no topology changes should be in process when you create a snapshot.
Restoring a snapshot relies on the system configuration having exactly the same topology that was in effect when you created the snapshot.

Due to the distributed nature and scale of Oracle NoSQL Database, it is unlikely that a single machine has the resources to contain snapshots for the entire store.

## Managing Snapshots

When you create a snapshot, the utility collects data from every Replication Node in the system, including Masters and replicas.
If the operation does not succeed for any one node in a shard, the entire snapshot fails.

The “snapshot create” command provides the backup name when it runs successfully.
````
kv_admin snapshot create -name BACKUP
Created data snapshot named 210705-101307-BACKUP on all 11 components
Successfully backup configurations on sn1, sn2, sn3
````

The “snapshot create” command does not provide the backup name if something goes wrong.
````
kv_admin snapshot create -name BACKUP
Create data snapshot succeeded but not on all components
Successfully backup configurations on sn1, sn2, sn3
````

As you can see here, there is no warning or information if all the Replication Nodes of a replication group are unavailable.
````
kv_admin snapshot create -name BACKUP
Successfully backup configurations on sn1, sn2, sn3
````

Use the JSON output, which shows more information and lets you see exactly what happened (same tests as above).
````
kv_admin snapshot create -name BACKUP -json 2>/dev/null
{
  "operation" : "snapshot operation",
  "returnCode" : 5000,
  "description" : "Operation ends successfully",
  "returnValue" : {
    "snapshotName" : "210705-133631-BACKUP",
    "successSnapshots" : [ "admin1", "admin2", "rg1-rn1", "rg1-rn2", "rg1-rn3", "rg2-rn1", "rg2-rn2", "rg2-rn3", "rg3-rn1", "rg3-rn2", "rg3-rn3" ],
    "failureSnapshots" : [ ],
    "successSnapshotConfigs" : [ "sn1", "sn2", "sn3" ],
    "failureSnapshotConfigs" : [ ]
  }
}
````
````
kv_admin snapshot create -name BACKUP -json 2>/dev/null
{
  "operation" : "snapshot operation",
  "returnCode" : 5500,
  "description" : "Operation ends successfully",
  "returnValue" : {
    "snapshotName" : "210705-133737-BACKUP",
    "successSnapshots" : [ "admin1", "admin2", "rg1-rn1", "rg1-rn2", "rg1-rn3", "rg2-rn1", "rg2-rn2", "rg2-rn3", "rg3-rn1", "rg3-rn2" ],
    "failureSnapshots" : [ "rg3-rn3" ],
    "successSnapshotConfigs" : [ "sn1", "sn2", "sn3" ],
    "failureSnapshotConfigs" : [ ]
  }
}
````
````
kv_admin snapshot create -name BACKUP -json 2>/dev/null
{
  "operation" : "snapshot operation",
  "returnCode" : 5500,
  "description" : "Operation ends successfully",
  "returnValue" : {
    "snapshotName" : "210705-133846-BACKUP",
    "successSnapshots" : [ "admin1", "admin2", "rg1-rn1", "rg1-rn2", "rg1-rn3", "rg2-rn1", "rg2-rn2", "rg2-rn3" ],
    "failureSnapshots" : [ "rg3-rn1", "rg3-rn2", "rg3-rn3" ],
    "successSnapshotConfigs" : [ "sn1", "sn2", "sn3" ],
    "failureSnapshotConfigs" : [ ]
  }
}
````
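In scripts, the `returnCode` field is the simplest thing to test: 5000 indicates full success, while other values (such as the 5500 shown above) indicate a partial or failed snapshot. A minimal sketch, using a sample result copied from the output above instead of a live store:

```shell
# Sample JSON copied from the doc; in practice, capture the output of
# `kv_admin snapshot create -name BACKUP -json 2>/dev/null` instead.
result='{"operation":"snapshot operation","returnCode":5500,"returnValue":{"snapshotName":"210705-133737-BACKUP","failureSnapshots":["rg3-rn3"]}}'

# Extract the returnCode with sed (avoids a hard dependency on jq here).
code=$(printf '%s' "$result" | sed -n 's/.*"returnCode"[^0-9]*\([0-9][0-9]*\).*/\1/p')

if [ "$code" = "5000" ]; then
  echo "snapshot complete"
else
  echo "snapshot incomplete (returnCode=$code)"
fi
```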
You can use the command `show topology` to find the backup path at each Storage Node (sn):
* {rootDirPath}/snapshots/
* {storageDirEnvPath[]}/../snapshots
* {adminDirsPath}/*/snapshots

````
kv_admin show topology -verbose -json | jq -r '.returnValue.sns[] | select (.resourceId == "sn1")|[{name:.resourceId,host:.hostname,rootDir:.rootDirPath,rns:.rns[]}]'
[
  {
    "name": "sn1",
    "host": "node1-nosql",
    "rootDir": "/home/opc/nosql/kvroot",
    "rns": {
      "resourceId": "rg1-rn1",
      "storageDirPath": "/home/opc/nosql/data/disk1",
      "storageDirEnvPath": "/home/opc/nosql/data/disk1/rg1-rn1/env",
      "storageDirSize": 524288000
    }
  },
  {
    "name": "sn1",
    "host": "node1-nosql",
    "rootDir": "/home/opc/nosql/kvroot",
    "rns": {
      "resourceId": "rg2-rn1",
      "storageDirPath": "/home/opc/nosql/data/disk2",
      "storageDirEnvPath": "/home/opc/nosql/data/disk2/rg2-rn1/env",
      "storageDirSize": 524288000
    }
  },
  {
    "name": "sn1",
    "host": "node1-nosql",
    "rootDir": "/home/opc/nosql/kvroot",
    "rns": {
      "resourceId": "rg3-rn1",
      "storageDirPath": "/home/opc/nosql/data/disk3",
      "storageDirEnvPath": "/home/opc/nosql/data/disk3/rg3-rn1/env",
      "storageDirSize": 524288000
    }
  }
]
````
NB: Currently the adminDirsPath is not shown. An enhancement request was filed. In the meantime, please use the following command:

````
kv_admin show parameter -service sn1 -json | jq -r -c '.returnValue.adminDirs[].path'
/home/opc/nosql/admin
````
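Putting the three locations together, copying a finished snapshot off one Storage Node could look like the sketch below. The user, host, snapshot name, and the `disk*/rg*-rn*` directory layout are taken from the examples above; adjust them to your own topology before use.

```shell
SNAP=210705-133631-BACKUP
HOST=node1-nosql

# {rootDirPath}/snapshots/
scp -r opc@$HOST:/home/opc/nosql/kvroot/snapshots/$SNAP backup/kvroot/
# {storageDirEnvPath[]}/../snapshots
scp -r "opc@$HOST:/home/opc/nosql/data/disk*/rg*-rn*/snapshots/$SNAP" backup/data/
# {adminDirsPath}/*/snapshots
scp -r "opc@$HOST:/home/opc/nosql/admin/*/snapshots/$SNAP" backup/admin/
```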
# Export/Import

## Using the Import and Export Utilities

Oracle NoSQL Database contains an import/export utility to extract and load table-based data, raw key/value-based data, and large object data.
You can use the import/export utility, available through kvtool.jar, to:
* Export table data from Oracle NoSQL Database and store the data as JSON formatted files on a local (or network mounted) file system.
* Import ad-hoc JSON data generated from a relational database or other sources, and JSON data generated via MongoDB strict export.
* Export data and metadata from one or more existing Oracle NoSQL Database tables, raw key/value based data, and large object data to a compact binary format.
* Read data from, or write data to, files in the file system.
* Import one or more tables into an Oracle NoSQL Database.
* Restart from a checkpoint if an import or export fails before completion.

### Export the entire contents of an Oracle NoSQL Database data store

1. Export the entire contents of the Oracle NoSQL Database data store
````
cd ~/demo-simple-nosql-cluster/script
mkdir -p ~/kvstore_export
cat export_config
# modify the path if necessary
java -jar $KVHOME/lib/kvtool.jar export -export-all -store OUG -helper-hosts node1-nosql:5000 -config export_config -format JSON
````

2. Import all data from the export package created in step 1 into a different Oracle NoSQL Database data store.
For demo purposes, we will use the same data store, so we need to drop the tables before executing the import.

````
cd ~/demo-simple-nosql-cluster/script
cat import_config
# modify the path if necessary
java -jar $KVHOME/lib/kvtool.jar import -import-all -store OUG -helper-hosts node1-nosql:5000 -config import_config -status /home/opc/checkpoint_dir -format JSON
````
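Since the demo re-imports into the same store, the existing tables can be dropped first via the Admin CLI `execute` command. A minimal sketch, assuming `kv_admin` is the CLI alias used throughout this guide and `users` stands in for each of your actual table names:

```shell
# Drop each table that will be recreated by the import
# (repeat for every table contained in the export package).
kv_admin execute 'DROP TABLE users'
```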

## Oracle NoSQL Data Migrator Vs. Import/Export Utility

The Oracle NoSQL Data Migrator was created to replace and enhance the existing on-premise-only import/export utility.
It moves NoSQL table data and schema definitions between a source and a sink (target).
It supports multiple sources and sinks, as listed in Supported Sources and Sinks.
The import/export utility, however, lets you import into or export from the on-premise Oracle NoSQL Database only.
That is, using the import/export utility, you can either import data into Oracle NoSQL Database or export data from Oracle NoSQL Database.
When you export, the source type is always Oracle NoSQL Database (where you extract data from) and the sink is the recipient of that data.
When you import, the source type is currently limited to a file and the sink is always Oracle NoSQL Database.

See example here.
# Replacing a Failed Disk

You can replace a disk that is either in the process of failing, or has already failed.
Disk replacement procedures are necessary to keep the store running.
These are the steps required to replace a failed disk while preserving data availability.

The replication data itself is stored by each distinct Replication Node service on separate, physical media.
Storing data in this way provides failure isolation and will typically make disk replacement less complicated and time consuming.

To replace a failed disk:

1. Determine which disk has failed. To do this, you can use standard system monitoring and management mechanisms.
2. Given the directory structure, determine which Replication Node service to stop.
3. Use the plan stop-service command to stop the affected service (rg2-rn3) so that the system no longer attempts to communicate with it.
`kv-> plan stop-service -service rg2-rn3`
4. Remove the failed disk (disk2) using whatever procedure is dictated by the operating system, disk manufacturer, and/or hardware platform.
5. Install a new disk using any appropriate procedures.
6. Format the disk to have the same storage directory as before.
7. With the new disk in place, use the plan start-service command to restart the service.
`kv-> plan start-service -service rg2-rn3`
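For step 2, the `show topology` JSON output can map the failed device back to its Replication Node, using the same jq pattern as in the backup chapter. A sketch against a live store (replace `disk2` with the failed device):

```shell
# List which Replication Node uses the failed disk (disk2 here),
# based on the storageDirPath field of the topology output.
kv_admin show topology -verbose -json | \
  jq -r '.returnValue.sns[].rns[] | select(.storageDirPath | contains("disk2")) | .resourceId'
```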

# HDD Failure simulation

````
kv-> plan stop-service -service rg1-rn3
# from the OS shell, wipe the RN's data directory to simulate the disk loss
rm -rf ${KVDATA}/disk1/rg1-rn3
kv-> plan start-service -service rg1-rn3
````
1+
Copyright (c) 2018, 2022 Oracle and/or its affiliates. All rights reserved.
2+
3+
The Universal Permissive License (UPL), Version 1.0
4+
5+
Subject to the condition set forth below, permission is hereby granted to any
6+
person obtaining a copy of this software, associated documentation and/or data
7+
(collectively the "Software"), free of charge and under any and all copyright
8+
rights in the Software, and any and all patent rights owned or freely licensable
9+
by each licensor hereunder covering either (i) the unmodified Software as
10+
contributed to or provided by such licensor, or (ii) the Larger Works (as
11+
defined below), to deal in both
12+
13+
(a) the Software, and
14+
(b) any piece of software and/or hardware listed in the lrgrwrks.txt file if one
15+
is included with the Software (each a ¿Larger Work¿ to which the Software is
16+
contributed by such licensors),
17+
18+
without restriction, including without limitation the rights to copy, create
19+
derivative works of, display, perform, and distribute the Software and make,
20+
use, sell, offer for sale, import, export, have made, and have sold the Software
21+
and the Larger Work(s), and to sublicense the foregoing rights on either these
22+
or other terms.
23+
24+
This license is subject to the following condition:
25+
26+
The above copyright notice and either this complete permission notice or at a
27+
minimum a reference to the UPL must be included in all copies or substantial
28+
portions of the Software.
29+
30+
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
31+
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
32+
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
33+
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
34+
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
35+
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
