Email: aaqaishtyaq at gmail.com
My other identities:
- Twitter: @aaqaishtyaq
- GitHub: aaqaishtyaq
- Linkedin: aaqaishtyaq
Note: I hate phone calls. Also, a polite request to please follow the NoHello rule in case you're reaching out for technical help.
My name is Aaqa Ishtyaq, I'm a Polyglot Software Engineer, with an interest in Distributed Systems and Observability.
I've been writing code professionally for 5 years now, and as a hobby, since I was 12. I'm a massive cyberpunk nerd. I like to read books, take photos, and hike in the mountains near my place.
I'm a Backend Engineer working remotely from India on the infrastructure engineering team at Deel. Previously, I was a Senior Software Engineer at HackerRank, focusing on their core remote code execution system.
The theme of my blog is software engineering, but I tend to derive broader life lessons from my retrospectives.
Here are some featured posts and pages:
Bubble sort is another commonly known sorting algorithm. The idea here is to scan a list of items (say integers) sequentially (from left to right) and compare consecutive pairs of elements starting at index 0.
Example:
my_numbers = [92, 11, 45, 2234, 0, 7, 65]

# 92 is index 0 and the consecutive pairs are
# (92,11), (11,45), (45,2234) and so on ...
At first we compare elements (list[0],list[1]) then (list[1],list[2]) then (list[2],list[3]) and so on until the end of the list is reached.
When comparing we check if element i is greater than element i + 1, if they are we just swap the two elements and move on to the next pair. If they are not this means that the pair is already sorted, so we also move on to the next pair.
Example:
my_numbers = [92, 11, 45, 2234, 0, 7, 65]

# Let's compare my_numbers[0] and my_numbers[1]
if my_numbers[0] > my_numbers[1]:
    my_numbers[0], my_numbers[1] = my_numbers[1], my_numbers[0]

print(my_numbers)
# [11, 92, 45, 2234, 0, 7, 65]
This process has to be repeated for however many items are on the list. So if the list holds 9 items, it means we need to loop through it 9 times at most. But what if our original list is partially sorted? We might not need 9 passes through the list.
One way for us to know that the list is fully sorted is if we have made no swaps during our pass. For that, we need a variable to keep track of how many swaps were made during a pass.
Example:
my_numbers = [92, 11, 45, 2234, 0, 7, 65]

# Elements (0,1) are compared and swapped. List is now 11,92,45,2234,0,7,65
# Elements (1,2) are compared and swapped. List is now 11,45,92,2234,0,7,65
# Elements (2,3) are compared and not swapped. List remains the same.
# Elements (3,4) are compared and swapped. List is now 11,45,92,0,2234,7,65
# Elements (4,5) are compared and swapped. List is now 11,45,92,0,7,2234,65
# Elements (5,6) are compared and swapped. List is now 11,45,92,0,7,65,2234

# This represents one complete pass through the list.
Notice how after each pass the highest value number is pushed at len(list) - 1.
Let's look at how to implement Bubble Sort using Python:
def bubble_sort(some_list):
    is_sorted = False

    while not is_sorted:
        # Assume the list is sorted until a swap proves otherwise
        is_sorted = True

        for i in range(len(some_list) - 1):
            if some_list[i] > some_list[i + 1]:
                some_list[i], some_list[i + 1] = some_list[i + 1], some_list[i]
                is_sorted = False
This works, and it will sort any list you throw at it. However, we can slightly optimise it: we know that after each pass the highest value element is guaranteed to be sorted and placed at the end of the list. Because of this, on each subsequent pass we can stop one position earlier, instead of comparing pairs that we know are already sorted. This is what it looks like:
def bubble_sort(some_list):
    is_sorted = False
    last_sorted_item = len(some_list) - 1

    while not is_sorted:
        is_sorted = True

        for i in range(last_sorted_item):
            if some_list[i] > some_list[i + 1]:
                some_list[i], some_list[i + 1] = some_list[i + 1], some_list[i]
                is_sorted = False

        last_sorted_item -= 1
After each pass through the loop, we know the right side of the list is sorted so we decrement the value of last_sorted_item. What this means is that the 1st pass will loop from 0 to len(some_list) -1, the second time, it will be from 0 to len(some_list) - 2 and so on ...
The rate of growth of this algorithm is quadratic. Expressed as O(n^2) in "big-oh" notation.
def bubble_sort(some_list):
    is_sorted = False                      # time here is constant
    last_sorted_item = len(some_list) - 1

    while not is_sorted:                   # we go through this first loop n times
        is_sorted = True

        for i in range(last_sorted_item):  # we go through this loop n - 1 times
            if some_list[i] > some_list[i + 1]:
                # execution here is constant
                some_list[i], some_list[i + 1] = some_list[i + 1], some_list[i]
                is_sorted = False

        last_sorted_item -= 1              # constant time
It's O(n^2) because, for each pass through the loop n times, we loop n times through the consecutive pairs. It's not a very efficient algorithm when used on large samples of data. It should only be used if you have a specific case on a small data set.
Next in the series is QuickSort, another interesting and more efficient sorting algorithm. As always, if you have questions, comments or if you spotted a typo or a mistake, please feel free to let me know on Twitter, I'm @aaqaishtyaq and always happy to help!
Quicksort is a "divide and conquer" type of algorithm. The good thing about it is that the worst case can almost always be avoided by using what is called a randomized version of quicksort (more on that later).
The idea of Quicksort is to take an unsorted list and select an element (on that list) called a "pivot". Then the list is rearranged such that all elements greater (in value) than the pivot are placed to its right, and all elements lesser (in value) are placed to its left.
This process is called partitioning. At this stage in the execution of the algorithm, the order of the elements doesn't matter so long as the lesser/bigger values are placed on the correct side of the pivot.
Partitioning will produce two sublists with the pivot as a separator ( this is because the pivot will be at its natural place after the first pass aka sorted). The problem then becomes sorting these two sublists.
Note: Partitioning does not require creating copies of the list, we work on it directly as long as we keep track of the start and end indices of each sublist.
To sort the two sublists, we can apply the same logic as above (choosing a pivot, and sorting the two resulting sublists) because QuickSort is a recursive algorithm.
When a sublist only contains a single element, it's already sorted so we can stop the recursion at this point, it's our exit condition.
Note on choosing a pivot
Some people use the last item of the list, some use the median of the first, middle, and last elements, but the most common way is to choose a random pivot to ensure O(n log n) execution.
def quicksort(array, start, end):
    # A sublist of one element (or none) is already sorted: our exit condition
    if start >= end:
        return

    pivot = array[end]  # use the last item as the pivot
    pivot_index = start

    # Partitioning: move everything smaller than the pivot to its left
    for i in range(start, end):
        if array[i] < pivot:
            array[i], array[pivot_index] = array[pivot_index], array[i]
            pivot_index += 1

    # Put the pivot at its natural (sorted) place
    array[pivot_index], array[end] = array[end], array[pivot_index]

    # Recursively sort the two sublists on either side of the pivot
    quicksort(array, start, pivot_index - 1)
    quicksort(array, pivot_index + 1, end)
A randomized version of Quicksort would look similar to what's above except that we must randomize the selection of our pivot.
import random

def quicksort(array, start, end):
    # ...
    random_index = random.randint(start, end)
    array[random_index], array[end] = array[end], array[random_index]
    pivot = array[end]
    # ...
Here, we set the pivot to a random integer in the range between start
and end
. Then, we swap the value at that index with the value at array[end]. If you run the code successively, you'll notice that the pivot is different every time. It's a nice optimization that can save some time.
It's one of the most efficient sorting algorithm. In fact, most sorting functions that come packaged in many language's standard libraries use an implementation of QuickSort.
The order of growth for QuickSort in the worst case is quadratic O(n^2). The average case, however, which is the most common scenario, has a complexity of O(n log n).
QuickSort works best when used on large sets of data because of its recursive nature.
In this short (spoiler: it's actually quite lengthy) post, I will be going through a list of very useful and handy methods in the os
module (which is part of the Python standard library) for handling files and directories.
This one is pretty straightforward. If you're comfortable with the linux shell, you know that mkdir
is the command to use to create directories. Unsurprisingly, Python uses the same naming convention.
Example:
import os

os.mkdir("my_awesome_directory")
The method takes a string as its argument and will create the directory under the calling file's parent folder. (For instance, if the path to the file calling os.mkdir() is /home/username/Documents/app.py, "my_awesome_directory" will be created under /home/username/Documents)
This is useful if you want to get the path to a file/folder that you know is under the working file's parent directory. If this was confusing, here's an example.
In the Linux shell (bash or otherwise), you can issue the command pwd
(which I believe stands for "print working directory") to quickly print your current location within a given session.
In Python you would achieve this like so:
import os

CURRENT_DIR = os.getcwd()
Notice I used all caps for the variable name. This is because it's usually a constant. This variable isn't meant to be changed. You can now use this variable to locate any file or folder within that directory. Keep reading and I'll show you how.
This is something you'll find yourself doing a lot. Especially on large projects that require configuration files and other such things. If you have experience working with Django for example, they have a settings.py
file littered with calls to the os module. There are many benefits to this approach. Perhaps the most obvious being is that if you ever decide to move your project to another location, you don't want to keep modifying the path every time. Remember, programming is all about being lazy.
So this is how you would do it:
import os

# Assuming this file is located at /home/username/myproject/app.py
# and that you want to operate on a file called config.cfg within the same
# directory:

BASE_DIR = os.getcwd()  # evaluates to /home/username/myproject
config_path = os.path.join(BASE_DIR, "config.cfg")

print(config_path)
# /home/username/myproject/config.cfg
An important note: os.path.join() merely concatenates the two paths together. It doesn't check whether the path is valid. So be careful when using this method. Also notice how the method call is to os.path.join() and not os.join()
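For instance, joining a path that doesn't even exist still happily returns a string (a tiny illustrative check):

import os

bogus = os.path.join("/does/not/exist", "config.cfg")
print(bogus)  # /does/not/exist/config.cfg -- no error, even though the path isn't real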
The other day I was working on a small web scraper for a side project of mine. After it was done fetching data, my script would save the results into a pickle file (don't worry if you don't know what that is) that would be read by my program, saving me the trouble of sitting there waiting to fetch the same info over and over again each time I ran the script.
The solution was to tell my script to check whether a specific file (let's call it results.pkl) exists at a given path. If it does, the program continues and if not, the program executes the crawler function.
This is clever because now I only have to fetch the data and if the file gets deleted I know I can rely on the program to go and crawl the sites as expected.
And now for the example:
import os

BASE_DIR = os.getcwd()
RESULTS_FILE = os.path.join(BASE_DIR, "results.pkl")


def crawl_websites():
    # scrapes a bunch of websites and saves the result in a file called
    # results.pkl under the current directory
    pass


if not os.path.exists(RESULTS_FILE):
    crawl_websites()

with open(RESULTS_FILE, "rb") as results:
    ## the file exists so we can open it and work with its content
    pass
The same thing can be done to check that the path exists AND that it's a directory:
import os

BASE_DIR = os.getcwd()
data_dir = os.path.join(BASE_DIR, "data")  # "data" is just an example folder name

if os.path.exists(data_dir) and os.path.isdir(data_dir):
    ## do something with the files inside the folder
    pass
Very useful when you want to read several files that are under the same directory. It can be done in two ways: conventional and pythonic. I'll show you both.
import os

for filename in os.listdir(os.getcwd()):
    # do something with the filename (open it, copy it, move it, rename it...)
    print(filename)

or

filenames = [filename for filename in os.listdir(os.getcwd())]
Guess which way is more pythonic!
These have been the most useful file/directory functions for me in Python. I really love the fact that the method names sound natural and are (for the most part) similar to linux commands. What are your favourite file operation methods? Any tip or trick you want to share with me? Something I've missed? Ping on twitter! I'm @aaqaishtyaq.
If you found this article useful, please share it with your nerd friends/coworkers and spread the word!
A couple of days ago (at the time of writing), I started my newest side project. It's a portfolio showcasing my (very very very) amateur photography. It's written in React.js with Sass and I have to say it was extremely enjoyable to work on. Unsurprisingly though, I ran into some issues while deploying to production, which after a lot of head banging against every possible flat surface I could find, I managed to sort. So this post will be about how to make React.js work with sass in production and how to serve the project using Nginx as a front end web server.
We'll be using the official starter kit / CLI tool provided by the facebook team called create-react-app
.
You can install it by running the following command:
npm install -g create-react-app
Including sass in a React app can be done in two ways: You can either eject the project and manually modify the webpack config files or you can follow the procedure in the official docs. I've personally chosen to go with the first approach. You can just follow the steps in this tutorial to get up and running.
Note: If you went with the second approach, you can just skip the following and directly jump to the next section
Once you've done that, open the config/webpack.config.prod.js file, locate the rules
section under module.exports
and add the following snippet to it.
// A minimal rule that pipes .scss files through sass-loader; adapt it to the
// loaders already present in your webpack config
{
  test: /\.scss$/,
  use: ['style-loader', 'css-loader', 'sass-loader'],
},
This config will be executed when you call npm run build
. If you don't do this, you'll end up with an empty css file.
Before building the project for the first time, we must first unregister the service worker. Doing this will ensure that the static assets aren't cached by the client's browsers. It's important because if you skip this step, any subsequent change / build you'll deploy won't necessarily be reflected right away client side. (I'm not knowledgable enough on service workers to provide details on this behaviour, but it's something to note. If you really need service workers in your project, you might want to explore other solutions to avoid asset caching).
Your index.js file should now look something like this:
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';

ReactDOM.render(<App />, document.getElementById('root'));
This is the part where we actually build our project. By building I mean compiling all React files and their related dependencies, transpiling them into vanilla Javascript, compiling sass files into css, minifying them etc ... To do this, we can simply run npm run build
and voila ! You should have a brand new build
folder in your project.
Your app is now compiled and ready to be served. Now is a good time to start thinking about deployment strategies. In this basic scenario (and quite frankly, most scenraios), you really just want to upload the build folder to a remote server and have it accessible as a static site.
You can use ftp to transfer the files and it would be perfectly acceptable, but it's not the most flexible solution. The alternative is to use a CLI utility called rsync
(which is available on mac and linux, not sure about windows). With rsync, you can synchronise files and folders within the same computer or across machines you have ssh access to.
Here's the command we would run to synchronise the build folder to a server on the internet:
# Assuming we're inside the project folder

rsync -avP build/ user@yourserver:/path/to/your/project
Let's break down this command:
-a
means archive, which is a shortcut for multiple switches. It recursively syncs all files and subfolders within build
to the destnation path, keeping the modification dates, permissions and other metadata unchanged.
-v
means verbose. It just outputs the steps to the screen so you can see what happens in real time.
-P
stands for progress. This is particularly useful in this case because you rely on a network connection to sync the files. Using this option will display a progress bar for each file in the queue.
But you don't want to keep doing all of that every time you want to push, now do you?
Thankfully, you can create a bash script to automate this process a little bit. Here's what mine looks like:
#!/bin/sh

# Build the project, push the latest commits, then sync the build folder
# to the remote server (same path and host as above)

npm run build && \
git push origin master && \
rsync -avP build/ user@yourserver:/path/to/your/project

echo "Deployed!"
Again, let's walk through that script section by section:
- npm run build, which, as previously explained, will create the build directory with our compiled, ready-to-be-deployed files.
- The latest commits are pushed to master.
- The build folder is synced to the remote server with rsync.

Obviously this is very basic and in a more complex project, you'd have to run unit tests and do other things your project requires.
Finally, you need to give executable permissions to the file by running:
# assuming you saved the script as deploy.sh
chmod +x deploy.sh
Now all you have to do when you want to deploy your project to production is run
./deploy.sh
Note: this assumes your server is running ubuntu or any other debian based distro
Ok, so if you've followed the steps correctly, you should have your project files uploaded to your remote server. Now we need to use Nginx to make the site accessible to the internet.
First, create a new config file inside /etc/nginx/sites-available
.
sudo touch /etc/nginx/sites-available/mywebsite
Next step is to edit the file using either vim or nano (sudo privileges may be required).
server {
    listen 80;
    server_name mywebsite.com www.mywebsite.com;

    location / {
        root /path/to/your/project;
        index index.html index.htm;
        default_type "text/html";
    }

    access_log /var/log/nginx/mywebsite_access.log;
    error_log /var/log/nginx/mywebsite_errors.log;
}
If you're not familiar with Nginx, here's what you just copied: the server block listens on port 80 for requests to mywebsite.com (and its www variant), serves the static files found under /path/to/your/project starting with index.html, and writes its access and error logs under /var/log/nginx/.
Close the file and exit nano or vim.
Nginx keeps its configuration files in two separate directories: /etc/nginx/sites-available
and /etc/nginx/sites-enabled
. It will serve any website whose configuration file is in the latter folder.
All you have to do now is create a symlink (think of it like a shortcut to an app in a desktop GUI) to your config, and store it in sites-enabled
. That way, if you ever decide to shut down the site, you'll simply need to delete the symlink and you're good.
Before creating the symlink, it's good measure to check if the configuration file has any errors in it. To check for errors just run the following command:
sudo nginx -t
It will scan all of your config files and check for errors (and return them to you if there are any).
Now you can create the symlink by executing:
sudo ln -s /etc/nginx/sites-available/mywebsite /etc/nginx/sites-enabled/mywebsite
Note: If you decide to use service workers with your project and you're still running into caching issues, you should know that Nginx can also be used as a static assets server. I haven't looked into this scenario yet. I might test that approach in the future and detail the process in another post.
Lastly, reload nginx and your website should now be accessible via its URL (provided you correctly setup the DNS settings with your domain name registrar)
sudo systemctl reload nginx
If you read this article all the way through, thanks for sticking with me ! You can send me questions, remarks, or comments on twitter, I'm @aaqaishtyaq on twitter.
I'm going to make a bold statement: Django replaced Ruby on Rails in the hearts of many developers. With this increase in popularity, we've seen tons of articles, videos and websites dedicated to setting up Django and creating apps using the framework.
Unfortunately, when it comes to deployment, many of these resources only mention heroku or pythonanywhere. While these are excellent solutions for quickly shipping your MVP or prototype, it lacks a bit in terms of flexibility if you want to create your custom deployment pipeline.
tl-dr: If you manage your own server infrastructure, we're going to setup a git deployment workflow with django
The typical workflow usually looks like this:
Usually when you commit and push work you do something like git push origin master, origin being the name of the remote server your code is being pushed to. What took me a while to realise is that you can have many remotes for your repo that point to different servers.
The idea here is to add a new remote to our repo. It will point to our production server such that when we run git push live master
, our code will be copied over to there.
To achieve this, we have some setup work to do on our live server. So go ahead and connect to it via ssh.
ssh user@yourserver.com

# If your server's ssh service listens on a port other than 22, you'll need
# to add the -p switch

ssh -p 2222 user@yourserver.com
Once we're in, we need to create a new directory for our application. This is where our deployed code will be copied to.
mkdir -p /home/username/sites/myawesomedjangoproject

# Some people prefer to use /var/www/<project_name>, it's really up to you. Just
# make sure you remember the path to your project
Now head over to /var
and create another directory called repos
cd /var && sudo mkdir repos

# Depending on your setup, you might need sudo privileges
Inside that directory, we need to create a folder named after our project (or domain name) and append it with .git
(not necessary but it's good practice)
sudo mkdir yourproject.git
Inside this folder we'll create what is called a bare
repository. To do this just run:
git init --bare
If you run ls
inside that folder you'll see a bunch of files and directories (the same ones found inside the .git
folder in normal repos). One of these directories is called hooks
.
Inside that folder, we'll need to create a file called post-receive.
# Assuming you are inside /var/repos/yourproject.git
cd hooks
touch post-receive
Now open it up with vi/vim
vim post-receive
Hit i
to switch to insert mode, and add the following to the file:
#!/bin/bash
DEPLOYDIR=/home/username/sites/myawesomedjangoproject # or whatever path you chose
GIT_WORK_TREE="$DEPLOYDIR" git checkout -f
Please note that the first shebang line is important, it instructs git to use bash instead of the default shell. Otherwise it won't activate our (soon to be created) virtual environment
exit vim by hitting :wq
(which in vim lingo means write and quit)
What we've done here is set two variables. DEPLOYDIR
is an alias for our project path on the server, and GIT_WORK_TREE
which is a special variable that tells git to copy the code it receives inside of our DEPLOYDIR
. This ensures that we're always running the latest version of our code.
As you've probably noticed, this post-receive file looks very much like a shell script. That's because it is (as explained above). It's executed every time you push code to the repo.
The last thing we need to do is make the script executable, so as soon as you're back in the shell run:
chmod +x post-receive
You can now exit the server and go back to your local machine.
Now that we've created our remote repository, we need to add it to our project (I like to call mine live
).
It takes one simple command:
git remote add live ssh://user@yourserver.com/var/repos/yourproject.git

# And if your server's ssh service listens on a different port:

git remote add live ssh://user@yourserver.com:2222/var/repos/yourproject.git
To make sure it was added, you can print the list of available remotes by running:
git remote -v
and that's it ! You can now make changes locally, commit and deploy them live (or staging if it's a staging server) and see your changes instantly.
You can obviously still push to github/lab or bitbucket with git push origin <branch>
like you normally would.
As I mentioned in the first part, the post-receive hook is a shell script. Which means you can use it to perform all kinds of tasks against your code, like running front-end builds, installing dependencies, etc ...
Here's an example for a basic Django App:
#!/bin/bash

DEPLOYDIR=/home/username/sites/myawesomedjangoproject

# Copy the received code into the deploy directory
GIT_WORK_TREE="$DEPLOYDIR" git checkout -f

cd "$DEPLOYDIR"

# Assumed layout: a virtualenv in ./venv and a requirements.txt at the project root
source venv/bin/activate
pip install -r requirements.txt
python manage.py migrate

# Restart the app if it runs as a systemd service (the service name is assumed)
if systemctl is-active --quiet myawesomedjangoproject; then
    sudo systemctl restart myawesomedjangoproject
fi
I run my Django Apps as systemd services, if you don't you can just call python manage.py runserver. If you want to know how to setup Django the way I do just follow this very comprehensive tutorial over on Digital Ocean
I am fully aware that there are more sophisticated methods of deployment through Docker, Travis (For continious integration) etc. But if you have a small app that you want to ship and you already have an infrastructure, I've found this method to be more than suitable.
Please report any missing info, mistake, error, typo. I'm on twitter if you wanna chat.
I'm currently sitting at the most boring meetup I've probably ever attended in Delhi. It's about chatbots. I don't care about chatbots, I care about free stickers and pizza. So I'll take this opportunity to open up about a subject that's dear to my heart: vim.
I used to believe vim was exclusive to this superior race of developers who gulp coffee like it's water and seem to only read HN and nothing else. (Hi, if you're coming from HN). Architecture and Software design comes naturally to them, they never run into bugs and they can recognize the most obscure of algorithms at a glance (Shout out to Shashank, one of my mentors).
Shanky is a good, productive developer. Shanky uses vim. I want to be like Shanky. I want to use vim.
There are a million reasons why you should jump ship and join the cult. In the next paragraphs, I will detail some of these reasons.
There's a learning curve to vim. But it's worth the trouble. And if you're on Linux or MacOS, there's a built-in tool called vimtutor
(just fire it up from a terminal, I am not sure about Windows though) and a wide variety of online tools to learn vim. Namely openvim, vim adventures, and vim genius.
Personally, The way I learned was by using it on small, fun side projects of mine during the weekends, mostly to become familiar with the new mental model. And just like everything in life, shit takes time, and practice makes perfect. So keep at it and you'll eventually come to your "aha" moment. As you get more and more comfortable using vim, it will become harder and harder to go back to a regular editor / IDE.
Because it runs on the terminal, you'll never have to wait 20 seconds to get on with your work. (Atom anyone ?)
And if you like pretty things, there's a large selection of colorschemes for you to choose from. On top of that, there's a plugin for just about anything you might need. And if there isn't, you can program your own.
Not really, but I wanted to place a complicated word to sound smart. Seriously though, it's everywhere. On Mac OS, Windows and of course Linux/Unix. If you work on remote servers you can quickly edit files on the fly without having to use nano. (Don't use nano)
Say for example a coworker/friend is running into a bug, you come to help and they're using an IDE you're not familiar with, well you can just access the files from their terminal and start debugging right away.
Or if you're like me, and you spill water on your Macbook keyboard and it becomes toast, you can spin up a VPS on Digital Ocean or AWS, and pick up where you left off (almost) right away.
My color scheme of choice (at the time of writing) is afterglow.
And here's a list of my favorite plugins:
I'll end this article with a quote from a Chamillionaire:
They see you vimmin', they hatin'. Patroling they tryna catch me coding dirty
VM on Google Compute Engine (GCE) supports nested virtualisation, unlike AWS, and allows to run KVM. We can create a VM on GCE to run and test Firecracker microVM.
Setup GCP Project and Zone
FC_PROJECT=your_name-firecracker
FC_REGION=us-east1
FC_ZONE=us-east1-b

gcloud config set project ${FC_PROJECT}
gcloud config set compute/region ${FC_REGION}
gcloud config set compute/zone ${FC_ZONE}
Create a VM image (Machine Image) able to run KVM
FC_VDISK=disk-ub22
FC_IMAGE=ub22-nested-kvm

gcloud compute disks create ${FC_VDISK} \
    --image-project ubuntu-os-cloud --image-family ubuntu-2204-lts

gcloud compute images create ${FC_IMAGE} --source-disk ${FC_VDISK} \
    --source-disk-zone ${FC_ZONE} \
    --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
Create the VM
FC_VM=firecracker-vm

gcloud compute instances create ${FC_VM} --zone=${FC_ZONE} --image=${FC_IMAGE}
Connect to the VM via SSH.
gcloud compute ssh ${FC_VM}
Verify that VMX is enabled, enable KVM
# Verify that VMX is available
grep -cw vmx /proc/cpuinfo

# Give the current user access to /dev/kvm and check it
sudo setfacl -m u:${USER}:rw /dev/kvm
[ -r /dev/kvm ] && [ -w /dev/kvm ] && echo "OK" || echo "FAIL"
Add the path(s) to your file(s) which you would like to ignore to your .gitignore
file (and commit them). These file entries will also apply to others checking out the repository.
Add the path(s) to your file(s) which you would like to ignore to your .git/info/exclude
file. These file entries will only apply to your local working copy.
In order to stop changed files from being listed as modified, you can use the following git command: git update-index --assume-unchanged <path to the file>
To revert that, use the following command: git update-index --no-assume-unchanged <path to the file>
Note: If you find errors, typos or would like to add new tips, feel free to reach out to me on twitter. I'm @aaqaishtyaq. Thank you for reading ! And if you find this useful, share it with your friends and coworkers !
I created a static website linkbin, Where I could save links in a markdown file that can be viewed later. It is serving its purpose and because it uses hugo, I don't have to think about VPS cost.
Recently, I ran into an issue where my website serves links but when you click on it, It will open the link in the same tab. Ugggh! I don't want to close my website just to open a link. Ideally, it should open the link in a new tab.
How can we do it? It would be easy Eh?
Since markdown is eventually rendered as HTML, The most naive way I found was to write HTML in markdown
example
<a href="https://example.com" target="_blank">Example link</a>
No way am I going to rewrite all my links as HTML. It adds cost later on as well: I would have to write HTML tags for every link instead of the usual [link text](https://example.com) syntax.
This could vary depending upon the markdown parser you are using. If you're using Jekyll, like this website, then you can use the following syntax to open links in a new tab: [link text](https://example.com){:target="_blank"}
But I am not using Jekyll for linkbin. It uses a fairly recent version of Hugo. Hugo, v0.62.0 or later, uses the Goldmark parser and supports Markdown render hooks. Markdown render hooks offer you several ways to extend the default markdown behaviour, e.g. resizing uploaded images, opening links in new tabs, or creating mermaid diagrams from code. You can do this by creating templates in the layouts/_default/_markup directory with the base names render-link, render-image, or render-codeblock. Your directory layout may look like this:
layouts/
└── _default/
    └── _markup/
        ├── render-image.html
        └── render-link.html
Add the following HTML template file (or render hook) at layouts/_default/_markup/render-link.html:

<a href="{{ .Destination | safeURL }}"{{ with .Title }} title="{{ . }}"{{ end }}{{ if strings.HasPrefix .Destination "http" }} target="_blank" rel="noopener"{{ end }}>{{ .Text | safeHTML }}</a>
You'll find that the links now open in a new tab! For internal blog links (which you would want to open in the same tab), you can use the relative link of the post, e.g. for an other-post.md file within the posts directory, you could use [other post](/posts/other-post/).
If I’ve missed something or made a horrible mistake if you have any questions regarding this article then feel free to ping me on Twitter. I’m @aaqaishtyaq.
In this article, we will be looking into setting up networking on a linux box from scratch.
We will be creating a fresh new VM using Lima on macOS. You can create a new VM using VirtualBox, Vagrant, or even create a VM on OCI for free.
The common things that constitute a network stack include the network devices, the routing table, and the iptables rules.
We can create a non-comprehensive inspect-net-stack
script.
#!/usr/bin/env bash

echo "> Network devices"
ip link list

echo "> Route table"
ip route list

echo "> Iptables rules"
iptables --list-rules
Executing the inspect script on my machine produces the following output:
> Network devices
(the host's interfaces: lo, eth0, ...)

> Route table
(the host's default and local routes)

> Iptables rules
(the default INPUT / FORWARD / OUTPUT chains)
We want to make sure that each of the containers we are going to create soon will get a separate network stack.
Linux namespaces used for container isolation are called network namespaces. We won't be creating fully isolated containers; rather, we'll restrict the scope to just the network stack. One of the ways to create a network namespace is with the ip tool, part of iproute2:

sudo ip netns add netns0
ip netns list
We have a new network namespace but to start using the network namespace, We can use a command called nsenter
. It enters one or more of the specified namespaces and then executes the given program:
# This creates a new bash process under the netns0 namespace
sudo nsenter --net=/run/netns/netns0 bash

# Re-run the inspect script from inside the namespace
./inspect-net-stack.sh

> Network devices
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000

> Route table
(no routes)

> Iptables rules
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
As you can see from the above output, the bash
process running inside the netns0
namespace sees a different network stack. There are no routing rules and no custom iptables chain. Only one loopback device (lo
).
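To see just how bare this stack is, you can poke at the loopback device from inside the namespace (an illustrative check; netns0 and lo come from the commands above):

sudo nsenter --net=/run/netns/netns0 bash
ping -c 1 127.0.0.1     # fails: the loopback device starts out DOWN in a new namespace
sudo ip link set lo up  # bring it up so loopback traffic works
exit                    # leave the nested shell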
Beware: The nsenter
command from above started a nested bash session in the netns0 network namespace. Don't forget to exit from it.
In the upcoming posts, I will discuss connecting the host with this namespace. Interconnecting various network switches, Just like patching the physical switch with a LAN cable.
As developers, we all (hopefully) use git. It's not extremely hard nor time-consuming to get started with it and you will surely thank your future self for taking the time to learn it. Ok so that was for the cringy intro.
Now let's get down to business: what I really want to share in this post is a list of tricks I've learned during the past 2 1/2 years of using git. Some of it might seem trivial to seasoned developers but if you're just getting started, stick with me because this might just blow your mind.
git fetch
only updates the tracking remote branches. If you actually want to update the local repo, you need to merge the local branch with the remote tracking branch using git merge
.
remote branches are prefixed with the name of the remote and a slash: origin/branchname
git pull on the other hand, will execute both commands for you so when you issue a git pull
in a branch it will fetch that branch from the remote repo and merge it with yours.
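In other words, assuming a remote called origin and a branch called master, these two snippets are roughly equivalent:

git fetch origin
git merge origin/master

# is more or less what happens when you run
git pull origin master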
To list all remote trackin branches, git branch -r
is your friend.
At some point you'll inevitably have to deal with merge conflicts. Essentially what this means is that git noticed a file was modified on both branches and it does not know which version is correct. It leaves it up to you to decide which one you want to push.
if you open the file you will see a HEAD
part prefixed with <<<<
signs and a second part containing the code on the branch you're trying to merge. Both sections are separated with an equal sign.
The top HEAD section contains the version of the file as it is on your current branch. The other shows you what the code looks like on the branch you're trying to merge from.
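As an illustration, a conflicted file might look something like this (the content and branch name are made up):

<<<<<<< HEAD
<h1>Welcome to my site</h1>
=======
<h1>Welcome to my awesome site</h1>
>>>>>>> feature-branch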
To resolve the conflict just delete the part you don't want (Including all the equal signs etc), save the file and commit it again.
Note that as a safety measure it's always good practice to do a git pull
to see if you don't have any remaining conflicts. If not you can just push your code and you're all set!
As for the tips:
git merge --abort
will clean up the current working directory and go back to the last version before the merge (it pretty much nullfies the merge).
git merge --squash
is an interesting one as it will bring the changes you made in the feature branch and create a new commit on your current branch without mixing the two histories.
Now this one's tricky !
To remove a file from both the working directory and version control, you can use git rm <filename>
but if you already deleted a file in the cli like I (and almost everyone) usually do then just run git rm <path to the file>
and it will stage it as deleted so you can commit.
But what if you ramrafed (rm -rf
) a bunch of files in your working directory? You don't really want to manually stage every file for deletion, do you? (DO YOU ?) So what do you do in that case? Well, you just run git add -u
(u for updating the working tree) and all of your deleted files will be staged so you can commit and push.
"What if a man accidentally tracked a file and man doesn't want it deleted from the index ?" I hear you ask with an exquisite South London accent.
Well, my G, just run:

git rm --cached <path to the file>

and you're done. Efficiency ting.
Something else that happens quite often is moving/renaming files (which is essentially the same thing on Linux systems, as the path/address of the file changed).
Here's an example: imagine you have a file called index.html that you want to rename to home.html. You can use git mv, which takes a source and a destination:

git mv index.html home.html
if you want to move and rename just run
git mv index.html src/home.html
However if you moved / renamed the file manually on the command line, running git status
will tell you that index.html was deleted and home.html is untracked.
To fix this you will have to run two commands:

git rm index.html
git add home.html
Running git status again will mark the file as renamed.
Note that this also works if you don't rename the file. For example, if you just want to move index.html to src/index.html. The same command will apply (git mv index.html src/index.html
).
The other way of achieving this is:
git add -A
which will pick up the changes and automatically stage them for commit.
So there you go, I hope you learned something useful in this article. If you know other mind-blowing tips and tricks for git, @ me on twitter.
Use range instead of xrange. In Python 3, the former produces the values lazily, one at a time, making it much more memory efficient and fast (xrange no longer exists).
colors = ['red', 'green', 'blue', 'yellow']

for i in range(len(colors)):
    print(colors[i])
Just use reversed.
colors = ['red', 'green', 'blue', 'yellow']

for color in reversed(colors):
    print(color)
To keep track of the index of each item in a collection, enumerate is your buddy.
colors = ['red', 'green', 'blue', 'yellow']

for i, color in enumerate(colors):
    print(i, color)
Yeah, you could index both lists manually, but zip is cleaner. (In Python 2, itertools.izip was the faster, lazy variant; in Python 3, zip is already lazy, so just use it.)
names = ['raymond', 'rachel', 'matthew']
colors = ['red', 'green', 'blue']

for name, color in zip(names, colors):
    print(name, color)
You can sort out the list first and then loop through it, or you could use sorted.
colors = ['red', 'green', 'blue', 'yellow']

for color in sorted(colors):
    print(color)
And BAM, you're ... sorted.
To call a function repeatedly until it returns a sentinel value, use iter().
Bad example:
Loop over a file containing a list of names until the loop returns an empty string, in which case we break out of it.
blocks = []
with open('names.txt') as f:
    while True:
        block = f.read(32)
        if block == '':
            break
        blocks.append(block)
Beautiful example:
In this case, we call a function (f.read) until it returns the sentinel value passed as a second argument to iter. That way we avoid having to make the unnecessary if check.
with open('names.txt') as f:
    for block in iter(lambda: f.read(32), ''):
        print(block)
The normal way to loop over a dictionary's keys:
d = {'matthew': 'blue', 'rachel': 'green', 'raymond': 'red'}

for k in d:
    print(k)
If you wish to mutate the data, prefer dict.keys()
.
d = {'matthew': 'blue', 'rachel': 'green', 'raymond': 'red'}

for k in list(d.keys()):
    if k.startswith('r'):
        del d[k]  # do the mutation
Don't do this:
d = {'matthew': 'blue', 'rachel': 'green', 'raymond': 'red'}

for k in d:
    print(k, '-->', d[k])
It's slow because we have to rehash the dictionary and do a lookup everytime.
Instead choose iteritems() (items() in Python 3):
d = {'matthew': 'blue', 'rachel': 'green', 'raymond': 'red'}

for k, v in d.items():
    print(k, '-->', v)
Just instantiate a new dict with two zipped lists. Real magic.
names = ['raymond', 'rachel', 'matthew']
colors = ['red', 'green', 'blue']

d = dict(zip(names, colors))
print(d)
# {'raymond': 'red', 'rachel': 'green', 'matthew': 'blue'}
This comes in handy when you need to build a dictionary on the fly, like in the case of an API response in Flask.
from flask import Flask, jsonify

app = Flask(__name__)

fields = ['id', 'username', 'email']


@app.route('/api/user')
def get_user():
    values = [2, 'aaqa', 'aaqaishtyaq@gmail.com']
    # build the response payload by zipping the two lists into a dict
    user = dict(zip(fields, values))
    return jsonify(user)
If you learned something from this article, share it with your co-workers and fellow hackers. If you notice any typo, error etc let me know on twitter.
I personally love writing APIs, it's my favourite thing about backend development. As developers we interact with APIs all the time, to process payments with stripe, for geolocation with google maps, to retrieve tweets etc ...
So what exactly makes a RESTful API great ? What can you do to ensure that it's easy to use for other developers ? Let's dive right in.
In your responses, you should separate the metadata from the body and put them in their own objects.
// Example: GET /api/user/2
{
  "meta": { "status": 200 },
  "data": { "id": 2, "username": "aaqa", "email": "aaqa@example.com" }
}
When creating a new resource, always return a 201 created
instead of a 200 OK
. It's a common mistake developers make. Another best practice is to return a url to the newly created resource in the location header of your response. You can also choose to send a serialised version of the resource in the response body. (I personally like to do that).
// Example: POST /api/users

// Response Headers

// Status: 201 Created
// Content-Type: application/json
// ...
// Location: https://yoursite.com/api/users/2

{
  "meta": { "status": 201 },
  "data": { "id": 2, "username": "aaqa" }
}
Properly dealing with errors is crucial to provide the best experience possible to your users. Just like with normal responses, always include the status code in the meta part.
You should also provide a nice descriptive message detailing the cause of the error as well as a potential solution to fix it.
Doing this will help the consumers of your API to gracefully handle these errors on the client side. They will thank you for it.
// Example GET /api/secretendpoint
{
  "meta": {
    "status": 403,
    "message": "You are not allowed to access this resource. Authenticate and try again."
  }
}
This part is subject to many passionate debates, apparently it's a very touchy topic.
Below I will list the most common status codes, what they mean and how I use them.
- 200 OK: the request succeeded.
- 201 Created: a new resource was successfully created.
- 204 No Content: the request succeeded but there is nothing to send back (useful for DELETE requests)
- 400 Bad Request: the request was malformed or failed validation.
- 401 Unauthorized: authentication is missing or invalid.
- 403 Forbidden: the user is authenticated but not allowed to access this resource.
- 404 Not Found: the requested resource doesn't exist.
- 500 Internal Server Error: something went wrong on the server.

Caching is great and is a must when deploying an API. It helps a huge amount with performance by reducing server load, as you are not required to perform the same action multiple times. You could choose to do it manually or use a reverse proxy like nginx or even something like varnish to handle that for you.
Caching will return a 304 Not Modified
if the resource hasn't changed. The great thing about this is that it does not affect your rate limiting, so there's literally no excuse not to implement it.
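A typical conditional request looks something like this (the header values are illustrative):

// First response
// 200 OK
// ETag: "33a64df551425fcc55e4d42a"

// Subsequent request from the client
// GET /api/user/2
// If-None-Match: "33a64df551425fcc55e4d42a"

// Response if the resource hasn't changed
// 304 Not Modified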
For security and performance reasons you want to limit the number of requests made against your API. You should return the maximum and remaining number of requests allowed for a given user in a particular time window (which you are free to determine).
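The limits are usually exposed through response headers; a common (though not standardized) naming scheme looks like this:

// X-RateLimit-Limit: 5000
// X-RateLimit-Remaining: 4987
// X-RateLimit-Reset: 1625097600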
You want to enable CORS to allow requests from browsers using AJAX. This can be done by setting the following headers:
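A minimal, permissive example (you'd normally restrict the origin, methods and headers to what your API actually needs):

// Access-Control-Allow-Origin: *
// Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
// Access-Control-Allow-Headers: Content-Type, Authorization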
If you have any comment, suggestions or remarks of any sort, you can ping me on twitter @aaqaishtyaq !
Scrapy is great for scraping static web pages with Python, but when it comes to dynamic web pages it can't do wonders on its own. That's where Selenium comes in, but as good as Selenium is, it gets beaten by Scrapy in terms of speed.
The web nowadays is all about dynamic, JS-based pages and AJAX. For this very scenario the folks over at scrapy-plugins created scrapy-splash. Scrapy-Splash is a plugin that connects Scrapy with Splash (a lightweight, scriptable browser as a service with an HTTP API). In a nutshell, Splash takes the response received from the server, renders it, and returns a render.html which is static and can be easily scraped.
A. Before we begin you need to install Docker first. You can follow the official instructions for your operating system.
B. After installing Docker, navigate to your project folder, activate your virtualenv, and install the scrapy-splash plugin:

pip install scrapy-splash
C. Pull the Splash Docker Image and run it
docker pull scrapinghub/splash
docker run -p 8050:8050 scrapinghub/splash
A. Add the Splash server address to the settings.py of your Scrapy project like this:

SPLASH_URL = 'http://localhost:8050'
If you are running docker on your local machine then you can simply use http://localhost:<port>
, but if you are running it on a remote machine you need to specify it's I.P. Address like this http://192.168.59.103:<port>
B. Enable the Splash middleware by adding it to DOWNLOADER_MIDDLEWARES in your settings.py file and changing the HttpCompressionMiddleware priority:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
C. Enable SplashDeduplicateArgsMiddleware by adding it to SPIDER_MIDDLEWARES in your settings.py:

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
D. Set a custom DUPEFILTER_CLASS:

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
Before you use scrapy-splash you need to import it in your spider. You can do that by adding this line:

from scrapy_splash import SplashRequest
From now on, instead of using scrapy.Request you can simply use SplashRequest to get the response from Splash instead of directly from the server.
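Here's a minimal sketch of what that looks like inside a spider (the spider name, URL and callback are made up; the arguments follow the scrapy-splash README):

import scrapy
from scrapy_splash import SplashRequest


class MySpider(scrapy.Spider):
    name = "my_spider"

    def start_requests(self):
        # Render the page through Splash before it reaches the parse callback
        yield SplashRequest(
            "http://example.com",
            self.parse,
            endpoint="render.html",
            args={"wait": 0.5, "timeout": 10},
        )

    def parse(self, response):
        # response.body now contains the JavaScript-rendered HTML
        self.log(response.url)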
It's all well and good, but actual spider building doesn't happen in vim or sublime; it takes place in the shell.
So how to use Splash in the shell?
Good Question.
Instead of invoking the shell with:

scrapy shell
>>> fetch('http://example.com')
or with this:
scrapy shell 'http://example.com'
You invoke shell with this:
scrapy shell 'http://localhost:8050/render.html?url=http://example.com&timeout=10&wait=0.5'
Let me explain:
- localhost:port is where your Splash service is running
- url is the url you want to crawl
- render.html is one of the possible HTTP API endpoints; it returns the rendered HTML page in this case
- timeout is the time in seconds before the request times out
- wait is the time in seconds to wait for JavaScript to execute before reading/saving the HTML

If I’ve missed something, made a horrible mistake, or if you have any questions regarding this article, feel free to ping me on Twitter. I’m @aaqaishtyaq.
So I had been working with Django, Flask and Express.js for a while now, and my database of choice for every single project has of course always been Postgres. What makes these frameworks great (regardless of language) is the ability to use an ORM (Object Relational Mapper) that sits between your code and the DB. It does all the heavy lifting for you and takes care of executing SQL queries on your behalf.
This is great but I couldn't stop thinking "what if I have to manually debug something directly into postgres ?" This is when reality slapped me in the face, I barely knew the SQL Language. So I thought It'd be fun to create a cheatsheet that I (and you) could keep as a reference for these times when you absolutely need to set this column to UNIQUE and you don't know how.
Enjoy ! (and yeah I went all in with the pokemon references)
/* create a user without privileges*/
+CREATE ROLE aaqa;
+
+/* create a user with privileges*/
+CREATE ROLE aaqa LOGIN CREATEDB CREATEROLE REPLICATION;
+
+/* Add privileges to existing user*/
+ALTER ROLE aaqa WITH LOGIN CREATEROLE CREATEDB REPLICATION;
+
ALTER ROLE aaqa WITH superuser;
+
ALTER ROLE psyduck RENAME TO brock;
+
ALTER TABLE pokemons RENAME TO charizard;

CREATE TABLE pokemons (
+ id INT PRIMARY KEY NOT NULL,
+ pokemon_type CHAR(50),
+ name CHAR(50) NOT NULL UNIQUE,
+ gender CHAR(50) NOT NULL UNIQUE
+);
+
DROP TABLE pokemons;
+
/* assuming there's a pikachu role on the system */
+DROP ROLE pikachu;
+
ALTER TABLE pokemons OWNER TO aaqa;
+
ALTER TABLE trainers RENAME TO gym_trainers;
+
ALTER TABLE gym_trainers ALTER COLUMN pokemon_type TYPE TEXT; /* it makes zero
sense to want to change this column type to TEXT but YOLO */
+
ALTER TABLE gym_trainers RENAME COLUMN name TO trainer_name;
+
ALTER TABLE gym_trainers ADD COLUMN bio TEXT;
+/*
+ALTER TABLE <table_name>
+ADD COLUMN <column_name> <data_type> <constraints if any>
+*/
+
ALTER TABLE gym_trainers ADD COLUMN age INT UNIQUE; /* cause why not */
ALTER TABLE gym_trainers ADD COLUMN main_pokemon CHAR(60) NOT NULL;
ALTER TABLE gym_trainers ALTER COLUMN main_pokemon DROP NOT NULL;
ALTER TABLE gym_trainers ADD COLUMN city CHAR(80) NOT NULL DEFAULT 'Indigo Plateau';
+
CREATE TABLE trainers (
+ id INT PRIMARY KEY,
+ trainer_name CHAR(50) NOT NULL,
+
+ /*
+ method 1
+ Add the check yolo style
+ */
+ age INT NOT NULL CHECK (age > 18)
+
+ /*
+ method 2
+ Add a named constraint for better error handling
+ */
+ age INT CONSTRAINT legal_age CHECK (age > 18) NOT NULL
+
+ /*
+ method 3
+ Add the constraint at the end
+ for more clarity
+ */
+ age INT NOT NULL,
+ CONSTRAINT legal_age CHECK (age > 18)
+);
+
CONSTRAINT unique_name UNIQUE (trainer_name);
+ trainers ADD
CONSTRAINT unique_name;
+ trainers DROP
INSERT INTO trainers VALUES (1, 23, 'brock');
+
+/* or */
+
+INSERT INTO trainers (age, trainer_name)
+VALUES (23, 'brock');
+
+/*
+ Note that in the second case we don't have to pass
+ the id. Postgres will automatically generate and autoincrement
+ it for us. To omit the id column we must use named inserts otherwise an
+ error is raised.
+*/
+
+INSERT INTO trainers (age, trainer_name) VALUES
+(19, 'misty'),
+(22, 'chen'),
+/* ... */
+/* ... */
+
TRUNCATE trainers;
+
/* Considering this table structure */

CREATE TABLE pokemon_list (
    id INT PRIMARY KEY,
    /* ... */
    /* ... */
    /* ... */
);


/*
  1.
  Create a sequence for the auto-generated primary key.
  It follows the tablename_columnname_seq naming convention
*/
CREATE SEQUENCE pokemon_list_id_seq;

/* 2. Set the id column to not null */
ALTER TABLE pokemon_list ALTER COLUMN id SET NOT NULL;

/* 3. Set the default value to the next value in the sequence */
ALTER TABLE pokemon_list ALTER COLUMN id SET DEFAULT nextval('pokemon_list_id_seq');

/* 4. Link the sequence to the correct table and column */
ALTER SEQUENCE pokemon_list_id_seq OWNED BY pokemon_list.id;
/*
+ Must use absolute path and the user must have appropriate permissions
+ Defaults to importing using TAB as the default parameter.
+ We'll use a csv file as an example
+*/
+
+COPY pokemon_list FROM '/path/to/yourfile.csv' DELIMITER ',';
+
+/*
+ This only works if a pk is specified for each row
+ The (my) preferred way to do it is the following
+*/
+
+COPY pokemon_list (name, level, type) FROM '/path/to/yourfile.csv' DELIMITER ',';
+
+/* It's much more flexible because you control what data you actually import*/
+
To be able to export a table to a file, we need to ensure that postgres has write permissions to the file.
We can now safely copy the table to the file.
COPY pokemon_list TO '/path/to/file.csv' DELIMITER ',';
+
SELECT name AS pokemon_name, type AS pokemon_type
+FROM pokemon_list;
+
SELECT *
+FROM pokemon_list
+WHERE id > 3;
+
SELECT *
+FROM pokemon_list
+WHERE type LIKE '%water%';
+
SELECT *
+FROM pokemon_list
+ORDER BY id DESC;
+
SELECT *
+FROM pokemon_list
+ORDER BY level;
+
+/* If the column you're ordering by is not of type INT then the ordering will be
+done alphabetically */
+
SELECT DISTINCT type AS pokemon_type
+FROM pokemon_list;
+
SELECT *
+FROM pokemon_list
+LIMIT 3;
+
SELECT *
+FROM pokemon_list
+ORDER BY id DESC
+LIMIT 3;
+
CREATE TABLE pokemon_types (
    id SERIAL PRIMARY KEY,
    type_name CHAR(120) NOT NULL
);

CREATE TABLE pokemon_list (
    id SERIAL PRIMARY KEY,
    pokemon_name CHAR(120) NOT NULL,
    pokemon_level INT NOT NULL,
    pokemon_type INT REFERENCES pokemon_types(id) NOT NULL,
    CONSTRAINT pokemon_level_not_zero CHECK (pokemon_level > 0)
);
+
SELECT pokemon_name, pokemon_level, type_name AS type
FROM pokemon_list
JOIN pokemon_types
ON pokemon_type = pokemon_types.id
WHERE pokemon_type = 1;

/* Will return the name, level and type name for all water pokemons */
+
CREATE VIEW pokemonswithtypes AS
SELECT pokemon_name, pokemon_level, type_name AS type
FROM pokemon_list
JOIN pokemon_types
ON pokemon_type = pokemon_types.id;

/* To see the data */

SELECT * FROM pokemonswithtypes;
+
SELECT pokemon_name, type_name AS type
FROM pokemon_list
JOIN pokemon_types
ON pokemon_type = pokemon_types.id
WHERE pokemon_type = 1;

/* only show the name and type for water pokemons */

DROP VIEW pokemonswithtypes;
+
/* MAX */
+SELECT MAX(pokemon_level)
+FROM pokemon_list;
+
+/* MIN */
+SELECT MIN(pokemon_level)
+FROM pokemon_list;
+
+/* AVG */
+SELECT AVG(pokemon_level)
+FROM pokemon_list;
+
+/* ROUND */
+SELECT ROUND(AVG(pokemon_level))
+FROM pokemon_list;
+
+/* COUNT */
+SELECT COUNT(*)
+FROM pokemon_list;
+
+/* SUM */
+SELECT SUM(pokemon_level)
+FROM pokemon_list;
+
/* Add a column is_legendary of type boolean to table pokemon_list */
ALTER TABLE pokemon_list ADD COLUMN is_legendary BOOL NOT NULL DEFAULT TRUE;
+/* BOOL_AND
+ returns a result if **ALL** records have that column set to true
+*/
+
+SELECT BOOL_AND(is_legendary) FROM pokemon_list;
+
+
+/*
+ BOOL_OR
+ returns a result if one or more records have that column set to true
+*/
+
+SELECT BOOL_OR(is_legendary) FROM pokemon_list;
+
UPDATE pokemon_list
+SET is_legendary = FALSE;
+
UPDATE pokemon_list
+SET is_legendary = TRUE
+WHERE id = 2;
+
DELETE FROM pokemon_list WHERE id = 4;
+
DELETE FROM pokemon_list
+WHERE id BETWEEN 1 AND 4;
+
DELETE FROM pokemon_list;
+
Note: The difference between DELETE and DROP or TRUNCATE is that the former can be undone (rolled back), while the latter can't.
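A quick illustration of that rollback behaviour (assuming the pokemon_list table from above):

BEGIN;
DELETE FROM pokemon_list WHERE id = 4;
/* changed your mind? */
ROLLBACK;
/* the row with id = 4 is still there */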
ALTER TABLE pokemon_types
DROP CONSTRAINT IF EXISTS unique_type_name;
+
COMMENT ON TABLE pokemon_types IS 'pokemon with types';
+
+/* To display the comment, in psql simply run \dt+. It will return a description
+column containing that comment. It's useful when working on a legacy database
+for example*/
+
+/* Please note that comments aren't exclusive to tables, they can be executed on
+schemas and multiple other objects.*/
+
Note: If you find errors, typos or would like to add new tips, feel free to reach out to me on twitter. I'm @aaqaishtyaq. Thank you for reading ! And if you find this useful, share it with your friends and coworkers !
I'm a programmer. Like most programmers, I use an Apple laptop. They're just the best on the market and the best purchase you can make as a developer. No fanboyism here, just stating facts. They're fast, well built, and durable.
I rely on this laptop for all my personal / client work. I run vagrant boxes, docker containers in it and all the usual stuff.
As a result, I've developed this irrational fear that it would get stolen or fall off my bedroom window (things like that happen, trust me). Interestingly enough, it coincides with my growing interest for remote work.
So I started to look for ways to create a development server that would allow me to remain productive even if I lose / break my laptop, or when I'm on the road.
Ok, so you've read all this and you're thinking "How is it going to benefit me ?".
The main selling points to creating a remote development environment are the following:
Because all of your work is now done on a remote machine you don't really have to care about what computer you're using. It can be a super expensive (notice I didn't say overpriced) Apple laptop or any of its really good windows competitors (the DELL xps 13/15 comes to mind) or even a super cheap, 35$ raspberry Pi. Some people even use chromebooks ! They stick ubuntu on them and use them as their primary machines (SSH is a bit tricky to setup on chrome OS) but hey! Official Linux support is coming for you ChromeOS folks.
This one's obvious. While git is a life saver and a great tool for collaboration in large teams (distributed or not), you can also use it as a backup system for your code when you're a solo developer.
I've used sublime text but felt bad for continuously extending the trial because I couldn't pay for it. Then I switched back to VSCode and got frustrated by how slow it runs after using Sublime. Then i tried vim, and I never looked back. It's the lightest, fastest and overall best text editor out there by a huge margin (in my opinion).
Vim is highly customizable and lets you save your settings inside a .vimrc
file, which makes it version control friendly. It also makes your vim environment 100% portable as it comes standard in most linux server distros.
The best companion to vim. Tmux is a terminal multiplexer. With Tmux you can essentially access multiple terminal sessions inside a single window. This gives you the ability to work on multiple projects at a time. You can even save sessions, attach and reattach to them. It's painless. Just like vim, it's extremely customizable. All of your settings can be stored inside of a .tmux.conf
file.
To start developing on a remote server, you'll need ... well, a remote server. If you don't know what a VPS is, it stands for Virtual Private Server. It's essentially a virtual machine that you pay for monthly, which runs a server distribution of linux (Ubuntu, CentOS ...), or Unix (FreeBSD). You can connect to it via SSH (it has a public ip address) and start playing around. You have complete control over the server, you can configure it however way you want.
There's a large range of VPS providers on the market, the most notable ones are Linode, Digital Ocean, and Amazon EC2.
As I previously mentioned above, you can save your settings for vim and tmux in what are called dotfiles (files that start with a period). However dotfiles are not exclusive to vim and tmux, you can for example save your shell configuration in a .zshrc or .bashrc config file, or your git settings in a .gitconfig file.
Doing this is extremely powerful because you can store these configuration files on github and always pull the latest version when you launch a new development server.
You can check out my own dotfiles on github to help you get started.
Manually setting up servers is fun for the first couple of times, then it just becomes repetitive. And what do you do with repetitive tasks ? You automate them.
Ansible is a provisioning tool written in python that will help you do just that. You give it the ip address (or addresses) of the server you want to configure and it will execute all the tasks you tell it to.
With Ansible you can install packages, create users, copy your dotfiles and configuration over, and set up services; everything you'd otherwise do by hand on each new box, as sketched below.
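As a tiny, hypothetical example (the host address, user and package list are made up), an inventory and playbook could look like this:

# inventory.ini
[devservers]
203.0.113.10 ansible_user=username

# playbook.yml
- hosts: devservers
  become: true
  tasks:
    - name: Install the basics
      apt:
        name: [git, vim, tmux, zsh]
        state: present
        update_cache: true

Running ansible-playbook -i inventory.ini playbook.yml then configures every host in the inventory the same way.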
If you don't like Ansible, there are other server provisioning tools like puppet, chef and salt that will work just as good.
As I sit down to pen my thoughts on George Orwell's seminal work, "1984," I find myself grappling with a myriad of emotions and contemplations. This dystopian masterpiece has left an indelible mark on my psyche, provoking deep introspection and stirring a range of reactions.
From the outset, Orwell thrusts readers into the bleak and oppressive world of Oceania, a totalitarian regime where every aspect of life is tightly controlled by the ominous figure of Big Brother. As I followed the protagonist, Winston Smith, through his harrowing journey of rebellion and defiance against the Party, I couldn't help but feel a sense of unease and resonance with the themes explored.
One of the most striking aspects of "1984" is its chillingly prophetic depiction of surveillance, propaganda, and the manipulation of truth. Orwell's portrayal of a society where individual freedom is subjugated to the will of the Party and where historical facts are distorted to suit its narrative feels hauntingly relevant in today's world of mass surveillance and information warfare.
As I delved deeper into the novel, I found myself captivated by Orwell's exploration of power dynamics and the fragility of truth. The Party's relentless pursuit of control and its perversion of language to stifle dissent served as a stark reminder of the dangers of unchecked authoritarianism and the erosion of fundamental freedoms.
Yet, amidst the darkness and despair, there are moments of profound humanity in "1984." Winston's quest for love and connection, his defiance in the face of oppression, and his yearning for truth and freedom resonated deeply with me. Despite the overwhelming odds stacked against him, Winston's unwavering spirit serves as a beacon of hope in a world consumed by fear and oppression.
Orwell's writing is both evocative and incisive, drawing readers into the bleak and desolate landscape of Oceania with its vivid imagery and haunting prose. His critique of totalitarianism and his exploration of the human condition linger long after the final page has been turned, prompting reflection on the nature of power, truth, and freedom.
In conclusion, "1984" stands as a timeless masterpiece that continues to captivate and provoke readers with its relevance and insight into the complexities of the human condition. It serves as a poignant reminder of the dangers of authoritarianism and the imperative to safeguard individual liberties and uphold the truth. For me, reading "1984" has been a transformative experience, one that has deepened my understanding of the world and reinforced my commitment to defending freedom and truth in all its forms
Should you have any feedback, recommendations, or thoughts of any nature, feel free to reach out to me on Twitter at @aaqaishtyaq!
After completing 1984, a friend suggested I read this book. As I ponder my recent exploration of George Orwell's "Animal Farm," I am engrossed by the depth of its messages and the profound wisdom nestled in its narrative. Crafted as an allegory, this timeless tale provides a poignant critique of authority, corruption, and the complexities of governance.
From the outset, Orwell draws readers into the world of Manor Farm, where the animals rise against their human oppressors under the rallying cry of "All animals are equal." Led by the pigs, notably the clever and manipulative Napoleon and the idealistic Snowball, the animals establish their society based on the principles of Animalism. At first, their revolution seems to promise a utopian existence, free from the tyranny of human exploitation.
However, as the story unfolds, we witness the gradual erosion of the animals' ideals as the pigs consolidate their power and become indistinguishable from the humans they once opposed. Through cunning manipulation and propaganda, Napoleon and his cohorts subvert the principles of Animalism, betraying the very creatures they claim to represent.
Orwell masterfully uses animal characters to represent different aspects of human nature and political ideologies. The pigs, with their intelligence and ability to organize, symbolize the ruling elite, while the other animals represent various segments of society, from the loyal but naive working class to the cynical skeptics who question authority.
One of the most chilling aspects of "Animal Farm" is its portrayal of how power can corrupt even the noblest of intentions. As Napoleon consolidates his control, he employs fear, manipulation, and violence to maintain his grip on power, ultimately betraying the revolutionary ideals upon which Animal Farm was founded.
Through its allegorical narrative, "Animal Farm" serves as a stark warning about the dangers of totalitarianism and the perversion of revolutionary ideals. Orwell's keen observations about human nature and politics resonate just as strongly today as they did when the novel was first published.
As I close the book, I am left with a profound sense of unease and reflection. "Animal Farm" is not just a story about animals; it is a powerful allegory that holds a mirror to our society, prompting us to question the nature of power, authority, and the pursuit of utopia. It is a timeless masterpiece that reminds us of the importance of vigilance in safeguarding our freedoms and ideals.
If you have any comments, suggestions or remarks of any sort, you can ping me on Twitter @aaqaishtyaq!
As I take a moment to contemplate my recent voyage through the intricacies of Fyodor Dostoevsky's masterpiece, "The Idiot", I find myself immersed in a whirlwind of emotions and reflections. This novel transcends mere storytelling; it delves deep into the essence of the human experience, brimming with layers of complexity and profound insights that linger long after the final chapter has been read.
At the core of "The Idiot" lies Prince Myshkin, a character who eludes simplistic characterization. He embodies an extraordinary blend of compassion and empathy, yet is burdened by a naïveté and innocence that sharply contrast with the cynicism of the world around him. Myshkin's journey captivated me, drawing me towards his pure-heartedness and unyielding faith in the inherent goodness of humanity.
Dostoevsky intricately intertwines the lives of a diverse cast of characters around Myshkin, each grappling with their own inner demons and ethical quandaries. From the enigmatic Nastasya Filippovna to the tormented Rogozhin, each figure serves as a mirror reflecting the intricate tapestry of human nature, capable of both immense kindness and profound cruelty.
A central theme of the novel revolves around the interplay between intellect and emotion, reason and passion. Myshkin, with his steadfast belief in the transformative power of love and forgiveness, starkly contrasts with the calculating and cynical ethos of St. Petersburg society. Through his interactions with other characters, we are compelled to confront our perceptions of what it truly means to be human.
Moreover, "The Idiot" serves as a profound exploration of identity and authenticity. Myshkin grapples with questions of selfhood and belonging amidst the turbulent waters of social hierarchy and moral ambiguity. His inability to conform to societal norms renders him both a subject of ridicule and a beacon of hope in a world overshadowed by greed and ambition. As I concluded "The Idiot," I found myself deeply moved by Dostoevsky's penetrating insight into the human psyche. This is not a novel that provides easy solutions or neatly tied endings. Instead, it challenges us to confront the complexities of our humanity and to reconcile the inherent contradictions that define us.
Ultimately, "The Idiot" stands as a testament to the enduring potency of compassion and empathy in a world often lacking in both. It serves as a poignant reminder that amidst our imperfections and shortcomings, goodness can still be discovered in the most unexpected of places. For this profound journey, I am immensely grateful.
If you read this article all the way through, thanks for sticking with me! You can send me questions, remarks, or comments on Twitter, I'm @aaqaishtyaq.
A collection of bite-sized notes on various topics that I’ve learned over time.
I had a lot of files in a git repository which I didn't want to add to git, so to move all of them out to a temporary directory I used a combination of Unix commands.
# One way to do it: list the untracked files and move each one out
# (the destination directory is just an example)
mkdir -p /tmp/untracked
git ls-files --others --exclude-standard | xargs -I {} mv {} /tmp/untracked/
In the heart of Himachal's Parvati Valley lie its hidden gems: Pulga and Kalga. Picture-perfect villages embraced by towering mountains, where every corner tells a story of nature's magnificence. Here, snow-capped peaks stand tall, kissing the sky, while icy waterfalls dance down the slopes, painting the landscape with their pure, untamed beauty. Despite the biting cold, every moment feels like a warm embrace from Mother Nature herself. It's not just a trip; it's a journey of awe, wonder, and connection with the raw essence of the mountains.
pty
needs.The dotfiles for all software mentioned, and more. This is my nix based dotfiles (home_ops).