[pull] master from bregman-arie:master #93

Merged 17 commits on Feb 3, 2024
44 changes: 40 additions & 4 deletions README.md
@@ -622,6 +622,14 @@ Throughput. To have good throughput, the upload stream should be routed to an un

<details>
<summary>Explain Spine & Leaf</summary><br><b>
"Spine & Leaf" is a networking topology commonly used in data center environments to connect multiple switches and manage network traffic efficiently. It is also known as "spine-leaf" architecture or "leaf-spine" topology. This design provides high bandwidth, low latency, and scalability, making it ideal for modern data centers handling large volumes of data and traffic.

Within a Spine & Leaf network there are two main types of switches:

* Spine Switches: Spine switches are high-performance switches arranged in a spine layer. These switches act as the core of the network and are typically interconnected with each leaf switch. Each spine switch is connected to all the leaf switches in the data center.
* Leaf Switches: Leaf switches are connected to end devices like servers, storage arrays, and other networking equipment. Each leaf switch is connected to every spine switch in the data center. This creates a non-blocking, full-mesh connectivity between leaf and spine switches, ensuring any leaf switch can communicate with any other leaf switch with maximum throughput.

The Spine & Leaf architecture has become increasingly popular in data centers due to its ability to handle the demands of modern cloud computing, virtualization, and big data applications, providing a scalable, high-performance, and reliable network infrastructure.
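
A minimal Python sketch (hypothetical switch names, not part of the original answer) of the full-mesh wiring between the two layers: every leaf is cabled to every spine, so any leaf can reach any other leaf in exactly two hops.

```
from itertools import product

spines = ["spine-1", "spine-2"]
leaves = ["leaf-1", "leaf-2", "leaf-3", "leaf-4"]

# Full mesh between the two layers: every leaf connects to every spine.
links = set(product(leaves, spines))

def path(src_leaf, dst_leaf):
    """Any leaf-to-leaf path crosses exactly one spine: leaf -> spine -> leaf."""
    for spine in spines:
        if (src_leaf, spine) in links and (dst_leaf, spine) in links:
            return [src_leaf, spine, dst_leaf]
    raise RuntimeError("no spine connects both leaves")

print(path("leaf-1", "leaf-4"))  # ['leaf-1', 'spine-1', 'leaf-4']
```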
</b></details>

<details>
@@ -3307,7 +3315,9 @@ Bonus: extract the last word of each line
## System Design

<details>
<summary>Explain what is a "Single point of failure"?</summary><br><b>
<summary>Explain what a "single point of failure" is. </summary><br><b>
A "single point of failure", in a system or organization, if it were to fail would cause the entire system to fail or significantly disrupt it's operation. In other words, it is a vulnerability where there
is no backup in place to compensate for the failure.
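
As a rough illustration (a hypothetical Python sketch with made-up replica names): with a single replica, one outage breaks the whole operation; adding a standby removes the single point of failure because the caller can fail over.

```
import random

def call(replica):
    # Hypothetical flaky service call.
    if random.random() < 0.5:
        raise ConnectionError(f"{replica} is down")
    return f"response from {replica}"

def fetch(replicas):
    # With only ["db-primary"] in the list, one outage fails everything (a SPOF).
    # With a standby as well, the caller simply moves on to the next replica.
    for replica in replicas:
        try:
            return call(replica)
        except ConnectionError:
            continue
    raise RuntimeError("all replicas failed")

print(fetch(["db-primary", "db-standby"]))
```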
</b></details>

<details>
@@ -3334,10 +3344,34 @@ In multi-CDN, content is distributed across multiple different CDNs, each might

<details>
<summary>Explain "3-Tier Architecture" (including pros and cons)</summary><br><b>
A "3-Tier Architecture" is a pattern used in software development for designing and structuring applications. It divides the application into 3 interconnected layers: Presentation, Business logic and Data storage.
PROS:
* Scalability
* Security
* Reusability
CONS:
* Complexity
* Performance overhead
* Cost and development time
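
A minimal sketch of the three tiers in Python (hypothetical class and function names): each layer only talks to the one directly below it.

```
# Data tier: storage and retrieval only.
class UserRepository:
    def __init__(self):
        self._users = {1: "alice", 2: "bob"}

    def get(self, user_id):
        return self._users.get(user_id)

# Business-logic tier: rules and validation; knows nothing about HTML or SQL.
class UserService:
    def __init__(self, repo):
        self._repo = repo

    def display_name(self, user_id):
        name = self._repo.get(user_id)
        if name is None:
            raise KeyError(f"unknown user {user_id}")
        return name.title()

# Presentation tier: formatting for the client only.
def render_profile(service, user_id):
    return f"<h1>{service.display_name(user_id)}</h1>"

print(render_profile(UserService(UserRepository()), 1))  # <h1>Alice</h1>
```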
</b></details>

<details>
<summary>Explain Mono-repo vs. Multi-repo. What are the cons and pros of each approach?</summary><br><b>
<summary>Explain Mono-repo vs. Multi-repo. What are the cons and pros of each approach?</summary><br><b>
In a Mono-repo, all the code for an organization is stored in a single, centralized repository.

PROS (Mono-repo):
* Unified tooling
* Code sharing

CONS (Mono-repo):
* Increased complexity
* Slower cloning

In a Multi-repo setup, each component is stored in its own separate repository, and each repository has its own version control history.

PROS (Multi-repo):
* Simpler to manage
* Different teams and developers can work on different parts of the project independently, making parallel development easier.

CONS (Multi-repo):
* Code duplication
* Integration challenges
</b></details>

<details>
@@ -3346,6 +3380,7 @@ In multi-CDN, content is distributed across multiple different CDNs, each might
* Not suitable for frequent code changes and the ability to deploy new features
* Not designed for today's infrastructure (like public clouds)
* Scaling a team to work on a monolithic architecture is more challenging
* If a single component in this architecture fails, then the entire application fails.
</b></details>

<details>
@@ -3357,16 +3392,17 @@ In multi-CDN, content is distributed across multiple different CDNs, each might

<details>
<summary>What's a service mesh?</summary><br><b>

[This article](https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh) provides a great explanation.
It is a layer that facilitates communication management and control between microservices in a containerized application. It handles tasks such as load balancing, encryption, and monitoring.
</b></details>

<details>
<summary>Explain "Loose Coupling"</summary><br><b>
In "Loose Coupling", components of a system communicate with each other with a little understanding of each other's internal workings. This improves scalability and ease of modification in complex systems.
</b></details>

<details>
<summary>What is a message queue? When is it used?</summary><br><b>
It is a communication mechanism used in distributed systems to enable asynchronous communication between different components: producers put messages on the queue and keep working, while consumers process them at their own pace. It is generally used when the system follows a microservices approach, or whenever components need to be decoupled or work at different speeds.
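
A minimal in-process sketch in Python, with the standard-library `queue` module standing in for a real broker (RabbitMQ, SQS, and so on): the producer enqueues work and keeps going without waiting for the slower consumer.

```
import queue
import threading
import time

q = queue.Queue()  # stands in for a broker-managed queue

def producer():
    for i in range(5):
        q.put(f"order-{i}")      # enqueue and move on; no waiting for the consumer
        print(f"produced order-{i}")

def consumer():
    while True:
        item = q.get()
        time.sleep(0.1)          # simulate slow processing
        print(f"processed {item}")
        q.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
q.join()  # block until every enqueued message has been processed
```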
</b></details>

#### Scalability
17 changes: 13 additions & 4 deletions certificates/aws-cloud-practitioner.md
@@ -400,8 +400,8 @@ Learn more [here](https://aws.amazon.com/snowmobile)
<details>
<summary>What is IAM? What are some of its features?</summary><br><b>

IAM stands for Identity and Access Management, and is used for managing users, groups, access policies & roles
Full explanation is [here](https://aws.amazon.com/iam)
In short: it's used for managing users, groups, access policies & roles
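
As an illustration only (a hedged boto3 sketch; the user, group, and policy names here are made up and not part of the original answer), the same building blocks appear in the API: users, groups, and managed policies.

```
import boto3

iam = boto3.client("iam")

# Create a user and a group, and put the user in the group.
iam.create_user(UserName="example-user")
iam.create_group(GroupName="example-admins")
iam.add_user_to_group(GroupName="example-admins", UserName="example-user")

# Grant permissions to the whole group via an AWS managed policy.
iam.attach_group_policy(
    GroupName="example-admins",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```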
</b></details>

<details>
@@ -570,7 +570,7 @@ Read more about it [here](https://aws.amazon.com/sns)
<details>
<summary>What is the shared responsibility model? What is AWS responsible for, and what is the user responsible for, based on the shared responsibility model?</summary><br><b>

The shared responsibility model defines what the customer is responsible for and what AWS is responsible for.
The shared responsibility model defines what the customer is responsible for and what AWS is responsible for. For example, AWS is responsible for security "of" the cloud, while the customer is responsible for security "in" the cloud.

More on the shared responsibility model [here](https://aws.amazon.com/compliance/shared-responsibility-model)
</b></details>
@@ -611,6 +611,8 @@ Learn more [here](https://aws.amazon.com/inspector)

<details>
<summary>What is AWS GuardDuty?</summary><br><b>

GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads to help detect and mitigate malicious activity.
</b></details>

<details>
@@ -621,6 +623,8 @@ AWS definition: "AWS Shield is a managed Distributed Denial of Service (DDoS) pr

<details>
<summary>What is AWS WAF? Give an example of how it can be used and describe what resources or services you can use it with</summary><br><b>

AWS WAF is a Web Application Firewall that can filter out unwanted web traffic (such as bots) and protect against attacks like SQL injection and cross-site scripting. One service you could use it with is Amazon CloudFront, a CDN service, where WAF can block attacks before they reach your origin servers.
</b></details>

<details>
@@ -697,6 +701,11 @@ Learn more [here](https://aws.amazon.com/certificate-manager)

<details>
<summary>What is AWS RDS?</summary><br><b>

Amazon Relational Database Service (RDS) is a managed service for setting up, operating, and scaling resizable, cost-efficient relational databases in the cloud.

Learn more [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html)
</b></details>

<details>
@@ -730,7 +739,7 @@ Learn more [here](https://aws.amazon.com/dynamodb/dax)
<details>
<summary>What is AWS Redshift and how is it different than RDS?</summary><br><b>

cloud data warehouse
AWS Redshift is a cloud data warehousing service geared towards analytical workloads: handling massive amounts of data (think petabytes) and executing complex queries over it. In contrast, Amazon RDS is better suited for transactional workloads such as web applications, which issue simpler queries much more frequently and at a smaller scale.
</b></details>

<details>
@@ -815,7 +824,7 @@ CloudFormation
<details>
<summary>Which service would you use for building a website or web application?</summary><br><b>

Lightsail
Lightsail or Elastic Beanstalk
</b></details>

<details>
5 changes: 4 additions & 1 deletion topics/ansible/README.md
@@ -509,6 +509,9 @@ If your group has 8 hosts. It will run the whole play on 4 hosts and then the sa

<details>
<summary>What is Molecule? How does it work?</summary><br><b>

Molecule is used to rapidly develop and test Ansible roles. It can test a role against a variety of Linux distributions at the same time. This testing ability helps instill confidence in the automation, both today and over time as the role is maintained.

</b></details>

<details>
@@ -529,4 +532,4 @@ If your group has 8 hosts. It will run the whole play on 4 hosts and then the sa
<summary>What are collections in Ansible?</summary><br><b>
</b></details>

<!-- {% endraw %} -->
34 changes: 34 additions & 0 deletions topics/aws/exercises/create_user/solution.md
@@ -23,3 +23,37 @@ As you probably know at this point, it's not recommended to work with the root a
10. Click on "Next: Tags"
11. Add a tag with the key `Role` and the value `DevOps`
12. Click on "Review" and then click on "Create user"

### Solution using Terraform

```
# Add the new user to the admin group
resource "aws_iam_group_membership" "team" {
  name = "tf-testing-group-membership"

  users = [
    aws_iam_user.newuser.name,
  ]

  group = aws_iam_group.admin.name
}

# Attach the AdministratorAccess managed policy to the whole group
resource "aws_iam_group_policy_attachment" "test-attach" {
  group      = aws_iam_group.admin.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}

resource "aws_iam_group" "admin" {
  name = "admin"
}

# The user itself, tagged with Role = DevOps as the exercise asks
resource "aws_iam_user" "newuser" {
  name = "newuser"
  path = "/system/"

  tags = {
    Role = "DevOps"
  }
}
```

14 changes: 14 additions & 0 deletions topics/aws/exercises/password_policy_and_mfa/solution.md
@@ -30,3 +30,17 @@ MFA:
3. Expand "Multi-factor authentication (MFA)" and click on "Activate MFA"
4. Choose one of the devices
5. Follow the instructions to set it up and click on "Assign MFA"

### Solution using Terraform

```
resource "aws_iam_account_password_policy" "strict" {
  minimum_password_length        = 8
  require_numbers                = true
  allow_users_to_change_password = true
  password_reuse_prevention      = 1
}
```

**Note:** You cannot add MFA through Terraform; you have to do it in the GUI.

2 changes: 2 additions & 0 deletions topics/azure/README.md
@@ -152,6 +152,8 @@ An Azure region is a set of datacenters deployed within an interval-defined and

<details>
<summary>What is the N-tier architecture?</summary><br><b>

N-tier architecture divides an application into logical layers and physical tiers. Each layer has a specific responsibility, while tiers are physically separated and run on separate machines. An N-tier application can have a closed layer architecture or an open layer architecture. N-tier architectures are typically implemented as infrastructure-as-a-service (IaaS) applications, with each tier running on a separate set of VMs.
</b></details>

### Storage
4 changes: 2 additions & 2 deletions topics/databases/README.md
@@ -20,7 +20,7 @@

Relational (SQL)
NoSQL
Time serties
Time series
</b></details>

### SQL
@@ -188,4 +188,4 @@ A database designed specifically for time series based data.
It comes with multiple optimizations:

<TODO>: complete this :)
</b></details>
20 changes: 20 additions & 0 deletions topics/kafka/README.md
@@ -44,3 +44,23 @@ An application that publishes data to the Kafka cluster.

- Broker: a server with a Kafka process running on it. Such a server has local storage. A single Kafka cluster usually has multiple brokers.
</b></details>

<details>
<summary>What is the role of ZooKeeper in Kafka?</summary><br/><b>
In Kafka, ZooKeeper is a centralized service that stores and manages cluster metadata for producers, brokers, and consumers (a short inspection sketch follows the list below).
ZooKeeper also:
<ul>
<li>Tracks which brokers are part of the Kafka cluster</li>
<li>
Determines which broker is the leader of a given partition and topic
</li>
<li>
Performs leader elections
</li>
<li>
Manages cluster membership of brokers
</li>
</ul>
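
If the cluster still uses ZooKeeper (i.e. not KRaft mode), those broker registrations can be inspected directly. Below is a hedged Python sketch using the `kazoo` client and Kafka's conventional znode paths; the ZooKeeper address is an assumption.

```
import json
from kazoo.client import KazooClient  # pip install kazoo

zk = KazooClient(hosts="localhost:2181")  # assumed ZooKeeper address
zk.start()

# Each live broker registers an ephemeral znode under /brokers/ids.
for broker_id in zk.get_children("/brokers/ids"):
    data, _ = zk.get(f"/brokers/ids/{broker_id}")
    info = json.loads(data)
    print(broker_id, info.get("host"), info.get("port"))

# The current controller (the broker that coordinates leader elections) is recorded here.
data, _ = zk.get("/controller")
print("controller:", json.loads(data))

zk.stop()
```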

</b>
</details>
10 changes: 7 additions & 3 deletions topics/kubernetes/CKA.md
@@ -22,7 +22,7 @@

## Setup

* Set up Kubernetes cluster. Use on of the following
* Set up Kubernetes cluster. Use one of the following
1. Minikube for local free & simple cluster
2. Managed Cluster (EKS, GKE, AKS)

@@ -54,7 +54,7 @@ Note: create an alias (`alias k=kubectl`) and get used to `k get po`
</b></details>

<details>
<summary>Assuming you have a Pod called "nginx-test", how to remove it?</summary><br><b>
<summary>Assuming that you have a Pod called "nginx-test", how to remove it?</summary><br><b>

`k delete po nginx-test`
</b></details>
@@ -107,7 +107,7 @@ If you ask yourself how would I remember writing all of that? no worries, you ca
<details>
<summary>How to test a manifest is valid?</summary><br><b>

with `--dry-run` flag which will not actually create it, but it will test it and you can find this way any syntax issues.
with the `--dry-run` flag, which will not actually create the resource but will validate it, so this way you can find any syntax issues.

`k create -f YAML_FILE --dry-run=client`
</b></details>
@@ -158,7 +158,11 @@ To count them: `k get po -l env=prod --no-headers | wc -l`
First change to the directory tracked by kubelet for creating static pod: `cd /etc/kubernetes/manifests` (you can verify path by reading kubelet conf file)

Now create the definition/manifest in that directory

`k run some-pod --image=python --command sleep 2017 --restart=Never --dry-run=client -o yaml > static-pod.yaml`

</b></details>

<details>
23 changes: 16 additions & 7 deletions topics/kubernetes/README.md
@@ -314,6 +314,7 @@ Outputs the status of each of the control plane components.
<details>
<summary>What happens to running pods if you stop Kubelet on the worker nodes?</summary><br><b>

When you stop the kubelet service on a worker node, it will no longer be able to communicate with the Kubernetes API server. As a result, the node will be marked as NotReady and the pods running on that node will be marked as Unknown. The Kubernetes control plane will then attempt to reschedule the pods to other available nodes in the cluster.
</b></details>

#### Nodes Commands
@@ -736,21 +737,29 @@ A Deployment is a declarative statement for the desired state for Pods and Repli
<details>
<summary>How to create a deployment with the image "nginx:alpine"?</summary><br><b>

`kubectl create deployment my_first_deployment --image=nginx:alpine`
`kubectl create deployment my-first-deployment --image=nginx:alpine`

OR

```
cat << EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
EOF
```
</b></details>

3 changes: 2 additions & 1 deletion topics/kubernetes/solutions/killing_containers.md
@@ -9,4 +9,5 @@

## After you complete the exercise

* Why did the "RESTARTS" count raised? - `because we killed the process and Kubernetes identified the container isn't running properly so it performed restart to the Pod`

* Why did the "RESTARTS" count rise? - `Kubernetes identified that the container wasn't running properly after we killed the process, so it restarted the container.`
5 changes: 2 additions & 3 deletions topics/linux/exercises/copy/solution.md
@@ -17,9 +17,8 @@ touch /tmp/x
cp x ~/
cp x y
mkdir files
cp x files
cp y files
mv x files && mv y files
cp -r files copy_of_files
mv copy_of_files files2
rm -rf files files2
```
4 changes: 2 additions & 2 deletions topics/linux/exercises/create_remove/README.md
@@ -5,10 +5,10 @@
1. Create a file called `x`
2. Create a directory called `content`
3. Move `x` file to the `content` directory
4. Create a file insidethe `content` directory called `y`
4. Create a file inside the `content` directory called `y`
5. Create the following directory structure in `content` directory: `dir1/dir2/dir3`
6. Remove the content directory

## Solution

Click [here](solution.md) to view the solution.